How useful is a GPS position for theft prevention? IME cops are not interested in doing more than filing a report after a theft, even if you have a live GPS location of the item for them. Do you try and go get it yourself?
i can speak to this as i had my motorcycle stolen on NYE last year in Santa Monica with an airtag in it. the Santa Monica police said “smart, but it’s in LA so we can’t help you get it. tell the LAPD.” it took seven hours of calls to the LAPD, while i personally hunted down my bike in the shadiest areas of LA and got within a block of retrieving it myself, before they came. so yes, if you’re in LA, you basically get it yourself.
in my case, the damage was so extensive that i wish i had just left it stolen and taken the bigger insurance payout.
GPS won't prevent theft, but can help in recovery. Can.
But Apple does more as well, like encrypting your phone and making even harvesting a stolen phone for parts unattractive (every part has a serial number, so you can't just swap one out).
IPs owned by Iranian entities could be blocked straightforwardly by network operators at various levels. They could probably fudge the paperwork via Russian or Chinese entities and obfuscate the routes with cooperation from Russian/Chinese network operators, but that would take time.
If they mean a real proxy, that’s even more involved - you can’t just do that with BGP configurations and will need someone running what is basically a country-wide VPN in Russia or China (which will probably be very identifiable).
I’m really not sure what you’re going on about. Russia and China won’t cut off Iran and will be more than happy to have them pay to connect to VPN providers in country (eg the routes advertised will only let them connect to VPN providers). Or you can use botnets to proxy your traffic for you.
Okay, but now you’re running a country-scale VPN service in Russia or whatever, and somebody has to pay for it. Or you have to acquire a botnet, again to handle the internet of a whole country. These are non-trivial barriers to overcome, and simply having transit to Russia does not solve them.
You know they can run NAT at the border right? Iran -> Russia -> the rest of the Internet. Russians would never re-advertise Iranian routes, and because they're all such good pals they could let them use whatever segment, so all Iranian traffic would simply appear as Russian traffic to the rest of the Internet, using normal residential subnets of which there are plenty. No need to ever involve the commercial VPN providers, and the scale would be negligible considering we're only talking about "the elites" of which there aren't many.
But Wireguard VPNs are also trivial to spin up and operate at scale, especially since what’s being discussed here is Internet access just for the elites, which is a small fraction of the entire population. But yeah, lots and lots of ways to make this work if you don’t get everyone else to go along with your embargo.
Does the Iranian economy rely heavily on access to the global internet? They can’t trade with most of the world due to sanctions, so what in their internal economy grinds to a halt without global communications? I’m not saying I think that it wouldn’t, just that I don’t immediately grasp the mechanism.
Good points! I’m not an expert, so I’ll wait for people who know more to weigh in. But as far as I know: (1) they still need to import basic necessities like food and medicine, and (2) despite heavy investment, they haven’t managed to build an intranet that’s fully isolated from the internet.
Even if you don't have fiber all the way into your house, most cable internet terminates pretty close to the home these days. It kind of has to, since bandwidth has gone way up and as a result they can't put very many subscribers on the same termination system.
We didn't really understand this kind of thing when the Carrington event happened, so nobody knows for sure, but estimates for induced voltage on long conductors are usually something on the order of 20V/km. So for a 5 km long coaxial cable, you're only talking about ~100V of induced potential difference (comparable to the voltage at a US residential plug). When people are analyzing the potential damage from this kind of electromagnetic disturbance (E3 is the term you'll see, based on analysis of nuclear EMP, which has other components that you don't see in geomagnetic storms regardless of severity), it's mostly about really long conductors, on the order of 100km.
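To put numbers on that estimate (the ~20 V/km figure is the assumption here, not a measured value):

```rust
// Back-of-the-envelope check of the induced-voltage arithmetic,
// assuming the commonly cited ~20 V/km geomagnetic-storm figure.
fn induced_voltage_v(field_v_per_km: f64, length_km: f64) -> f64 {
    field_v_per_km * length_km
}

fn main() {
    // ~100 V across a 5 km coaxial run
    println!("5 km:   {} V", induced_voltage_v(20.0, 5.0));
    // ~2000 V across a 100 km transmission-scale conductor
    println!("100 km: {} V", induced_voltage_v(20.0, 100.0));
}
```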
Interesting - I found this quantitative historical study [0] showing that while a civil war does significantly increase the likelihood of inflation, only 36% of the countries analyzed that had a civil war between 1975 and 1999 ended up in an inflationary crisis. And with the USD having such a strong foundation, I would expect the risk to be significantly lower.
I am speculating wildly but I would expect the exact opposite due to different actors trying to destabilize the US to the point of no recovery in such an event.
They're threatening to take their ball and go home. If they move all of their operations out of Italy, under what principle does Italy demand they block content globally? Should Wikipedia remove their page on Tiananmen Square because the Chinese government demands it (which they would, if they thought it would work)?
They can read the statement, and the definitions that the statement references. If everything it references is in a well-tread part of the Lean library, you can have pretty high confidence in a few minutes of going over the syntax.
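As a toy illustration (not from the thread): verifying a Lean result like the one below only requires reading the statement line and knowing what `Nat`, `+`, and `=` mean; the proof term itself can be left to the kernel to check.

```lean
-- The statement is the only thing a reviewer must read carefully;
-- Lean's kernel verifies that the proof actually establishes it.
theorem my_add_comm (a b : Nat) : a + b = b + a :=
  Nat.add_comm a b
```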
It depends on what they mean by some of these: are the state machine race conditions logic races (which Rust won’t trivially solve) or data races? If they are data races, are they the kind Rust will catch (missing atomics/synchronization) or the kind it won’t (bad atomic orderings, etc.)?
It’s also worth noting that Rust doesn’t prevent integer overflow; by default in release builds it wraps rather than panicking. Instead, the safety model assumes you’ll catch the overflowed number when you use it to index something (a constant source of bugs in unsafe code).
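A small sketch of that behavior (using the explicit `wrapping_add`/`checked_add` methods so the result is deterministic across build profiles):

```rust
fn main() {
    let x: u8 = 250;

    // Plain `x + 10` would panic in debug builds and wrap in release
    // builds; wrapping_add makes the wrapping explicit either way.
    let wrapped = x.wrapping_add(10);
    assert_eq!(wrapped, 4);

    // checked_add is the opt-in way to detect overflow.
    assert_eq!(x.checked_add(10), None);

    // In safe code a wrapped value is only caught at the point of use,
    // e.g. by the bounds check when it's used as an index.
    let data = [0u8; 3];
    assert!(data.get(wrapped as usize).is_none()); // index 4 is out of bounds
}
```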
I’m bullish about Rust in the kernel, but it will not solve all of the kinds of race conditions you see in that kind of context.
> are the state machine race conditions logic races (which Rust won’t trivially solve) or data races? If they are data races, are they the kind of ones that Rust will catch (missing atomics/synchronization) or the ones it won’t (bad atomic orderings, etc.).
The example given looks like a generalized one:
    spin_lock(&lock);
    if (state == READY) {
        spin_unlock(&lock);
        // window here where another thread can change state
        do_operation(); // assumes state is still READY
    }
So I don't think you can draw strong conclusions from it.
> I’m bullish about Rust in the kernel, but it will not solve all of the kinds of race conditions you see in that kind of context.
Sure, all I'm trying to say is that "the class of bugs described here" covers more than what was listed in the parentheses.
The default Mutex struct in Rust makes it impossible to modify the data it protects without holding the lock.
"Each mutex has a type parameter which represents the data that it is protecting. The data can only be accessed through the RAII guards returned from lock and try_lock, which guarantees that the data is only ever accessed when the mutex is locked."
Even if used with more complex operations, the RAII approach means that the example you provided is much less likely to happen.
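A minimal sketch of what that looks like in practice with the standard library `Mutex` (the data is only reachable through the RAII guard):

```rust
use std::sync::Mutex;

fn main() {
    // The Mutex owns the data it protects; there is no way to reach
    // the inner value except through lock()/try_lock().
    let counter = Mutex::new(0u32);

    {
        // lock() returns an RAII guard; the lock is held for the
        // guard's lifetime and released when it goes out of scope.
        let mut guard = counter.lock().unwrap();
        *guard += 1;
    } // lock released here

    assert_eq!(*counter.lock().unwrap(), 1);
}
```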
I'd argue that while null refs and those classes of bugs may decrease, logic errors will increase. Rust is not an extraordinarily readable language in my opinion, especially in the kernel, which has its own data structures. IMHO Apple did it right in their kernel stack: they have a restricted subset of C++ that you can write drivers with.
Which is also why, in my opinion, Zig is much more suitable: it actually addresses the readability aspect without bringing huge complexity with it.
> I'd argue, that while null ref and those classes of bugs may decrease, logic errors will increase.
To some extent that argument has to be true: if you find a way to greatly reduce the incidence of non-logic bugs while not addressing other bugs, then of course logic bugs will make up a greater proportion of what remains.
I think it's also worth considering the fact that while Rust doesn't guarantee that it'll catch all logic bugs, it (like other languages with more "advanced" type systems) gives you tools to construct systems that can catch certain kinds of logic bugs. For example, you can write lock types in a way that guarantees at compile time that you'll take locks in the correct order, avoiding deadlocks [0]. Another example is the typestate pattern [1], which can encode state machine transitions in the type system to ensure that invalid transitions and/or operations on invalid states are caught at compile time.
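A toy sketch of the typestate idea (all names here are made up for illustration): each state is a distinct type, and transitions consume the old state, so "send on a closed connection" simply doesn't compile:

```rust
use std::marker::PhantomData;

// Typestate sketch: the state lives in the type, not in a runtime field.
struct Open;
struct Closed;

struct Connection<State> {
    _state: PhantomData<State>,
}

impl Connection<Closed> {
    fn new() -> Self {
        Connection { _state: PhantomData }
    }
    // Opening consumes the Closed connection and yields an Open one.
    fn open(self) -> Connection<Open> {
        Connection { _state: PhantomData }
    }
}

impl Connection<Open> {
    // send() only exists on Connection<Open>; calling it on a
    // Connection<Closed> is a compile error, not a runtime check.
    fn send(&self, msg: &str) -> usize {
        msg.len() // stand-in for actually transmitting
    }
    fn close(self) -> Connection<Closed> {
        Connection { _state: PhantomData }
    }
}

fn main() {
    let conn = Connection::new().open();
    assert_eq!(conn.send("hello"), 5);
    let _closed = conn.close();
    // _closed.send("again"); // would not compile: no send() on Closed
}
```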
These, in turn, can lead to higher-order benefits as offloading some checks to the compiler means you can devote more attention to things the compiler can't check (though to be fair this does seem to be more variable among different programmers).
> Rust is not an extraordinary readable language in my opinion, especially in the kernel where the kernel has its own data structures.
The above notwithstanding, I'd imagine it's possible to think up scenarios where Rust would make some logic bugs more visible and others less so; only time will tell which prevails in the Linux kernel, though based on what we know now I don't think there's strong support for the notion that logic bugs in Rust are substantially more common than they have been in C, let alone because of readability issues.
Of course there's the fact that readability is very much a personal thing and is a multidimensional metric to boot (e.g., a property that makes code readable in one context may simultaneously make code less readable in another). I don't think there would be a universal answer here.
Maybe they increase as a ratio, but not in absolute terms. There are various features of Rust that affect other classes of issues: fancy enums, better errors, the ability to control overflow behaviour, and others. But for actual experience, check out what a kernel developer has to say: https://xcancel.com/linaasahi/status/1577667445719912450
> Zig is much more suitable, because it actually addresses the readability aspect
How? It doesn't look very different from Rust. In terms of readability Swift does stand out among LLVM frontends, don't know if it is or can be used for systems programming though.
I think they are right in that claim, but in making it so, at least some of the code loses some of the readability of Swift. For truly low-level code, you’ll want to give up on classes, may not want copy-on-write collections, and may need to add quite a few annotations.
Swift is very slow relative to Rust or C though. You can also cause segfaults in Swift with a few lines. I don't find any of these languages particularly difficult to read, so I'm not sure why this is listed as a discriminator between them.
You can make a fully safe segfault the same way you can in Go: swapping a base reference between two child types. The data pointer and vtable pointer aren't updated atomically, so a thread-safety issue becomes a memory-safety one.
When did that happen? Or is it something I have to turn on? I had Claude write a swift version of the go version a few months ago and it segfaulted.
Edit: Ah, the global variable I used had a warning (which I didn't notice) that it isn't concurrency-safe. So you can compile it, but if you treat warnings as errors you'd be fine.
This is I think an under-appreciated aspect, both for detractors and boosters. I take a lot more “risks” with Rust, in terms of not thinking deeply about “normal” memory safety and prioritizing structuring my code to make the logic more obviously correct. In C++, modeling things so that the memory safety is super-straightforward is paramount - you’ll almost never see me store a std::string_view anywhere for example. In Rust I just put &str wherever I please, if I make a mistake I’ll know when I compile.
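As a toy illustration of that difference: storing a borrowed `&str` in a struct forces a lifetime annotation, and the borrow checker rejects any use after the owner is gone, where the equivalent stored `string_view` in C++ would dangle silently:

```rust
// A struct holding a borrowed string slice; the lifetime parameter
// ties the struct to the owner of the underlying String.
struct Label<'a> {
    text: &'a str,
}

fn main() {
    let owner = String::from("hello");
    let label = Label { text: &owner };
    assert_eq!(label.text, "hello");

    // drop(owner); // uncommenting this is a compile error:
    // `label` would outlive the String it borrows from.
    println!("{}", label.text);
}
```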
- mandatory byte-range locks enforced by the kernel
- explicit sharing modes
- guarantees around write ordering and durability
- safe delete-on-close
- first-class cache coherency contracts for networked access
POSIX aims for portability while NTFS/Win32 aims for explicit contracts and enforced behavior. For apps assuming POSIX semantics (e.g. git) NTFS feels rigid and weird. Coming the other way from NTFS, POSIX looks "optimistic" if not downright sloppy.
Of course ZFS et al. are theoretically more robust than ext4, but they are still limited by the lowest-common-denominator POSIX API. Maybe you can detect that you're dealing with a ZFS-backed volume and use extra ioctls to improve things, but it's still a messy business.
These are pretty much all about mandatory locking. Which giveth and taketh away in my experience. I’ve had substantially fewer weird file handling bugs in my Linux code than my Windows code. POSIX is very loosey-goosey in general, but Linux’s VFS system + ext4 has a much stronger model than the general POSIX guarantees.
`FILE_FLAG_DELETE_ON_CLOSE`’s equivalent on POSIX is just rm: unlink the file while your handle is still open. Windows doesn’t let you open new handles to `FILE_FLAG_DELETE_ON_CLOSE`ed files anyway, so it’s effectively the same. The inode will get deleted when the last open file description is closed.
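A quick sketch of the POSIX behavior being described (assumes a Unix-like filesystem; the temp-file name is arbitrary): the directory entry disappears immediately on unlink, but an already-open handle keeps working until it's closed.

```rust
use std::fs::{self, File};
use std::io::{Read, Seek, SeekFrom, Write};

fn main() -> std::io::Result<()> {
    let path = std::env::temp_dir().join("unlink_demo.txt");

    // Create the file and keep a handle open.
    let mut f = File::options()
        .read(true).write(true).create(true).truncate(true)
        .open(&path)?;
    f.write_all(b"still here")?;

    // Unlink the path: the name is gone immediately...
    fs::remove_file(&path)?;
    assert!(!path.exists());

    // ...but the open file description (and the inode) live on
    // until the last handle is closed.
    f.seek(SeekFrom::Start(0))?;
    let mut buf = String::new();
    f.read_to_string(&mut buf)?;
    assert_eq!(buf, "still here");
    Ok(())
}
```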
NFS is a disaster though, I’ll give you that one. Though mandatory locks on SMB shares hanging is also very aggravating in its own right.
Also worth pointing out that outside of UNIX (on surviving mainframes and micros), the filesystems are closer to NTFS than to the UNIX world in regards to what is enforced.
There is also the note that some of them are more database-like than classical filesystems.
Ah, and modern Windows also has Resilient File System (ReFS), which is used on Dev Drives.