> I’d like to know the memory profile of this. The bottleneck is obviously sort which buffers everything in memory.
That's not obvious to me. I checked the manuals for sort(1) in GNU and FreeBSD, and neither of them buffer everything in memory by default. Instead they read chunks to an in-memory buffer, sort each chunk, and (if there are multiple chunks) use the filesystem as temporary storage for an external mergesort.
This sorting program was originally developed with memory-starved computers in mind, and the legacy shows.
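For anyone curious, the chunk-and-spill pattern that GNU and BSD sort use is easy to sketch. Here's a minimal C illustration (my toy version, not sort(1)'s actual code - the real thing adds tuned buffer sizing, merge heaps, locale-aware comparison, and much more):

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    #define MAX_RUNS  64     /* cap on temp run files, for this sketch */
    #define RUN_LINES 1024   /* lines sorted per in-memory chunk       */

    static int cmp(const void *a, const void *b) {
        return strcmp(*(char *const *)a, *(char *const *)b);
    }

    int main(void) {
        FILE *runs[MAX_RUNS];
        char *lines[RUN_LINES], buf[4096];
        int nruns = 0;

        /* Phase 1: read a chunk, sort it in memory, spill it to a temp file. */
        for (;;) {
            int n = 0;
            while (n < RUN_LINES && fgets(buf, sizeof buf, stdin))
                lines[n++] = strdup(buf);
            if (n == 0)
                break;
            qsort(lines, n, sizeof *lines, cmp);
            FILE *run = tmpfile();
            if (!run || nruns == MAX_RUNS)
                return 1;    /* too big for this toy version */
            for (int i = 0; i < n; i++) {
                fputs(lines[i], run);
                free(lines[i]);
            }
            rewind(run);
            runs[nruns++] = run;
        }

        /* Phase 2: k-way merge of the runs -- repeatedly emit the smallest
         * head line. Peak memory is one line per run, not the whole input. */
        char *head[MAX_RUNS];
        for (int i = 0; i < nruns; i++)
            head[i] = fgets(buf, sizeof buf, runs[i]) ? strdup(buf) : NULL;
        for (;;) {
            int min = -1;
            for (int i = 0; i < nruns; i++)
                if (head[i] && (min < 0 || strcmp(head[i], head[min]) < 0))
                    min = i;
            if (min < 0)
                break;
            fputs(head[min], stdout);
            free(head[min]);
            head[min] = fgets(buf, sizeof buf, runs[min]) ? strdup(buf) : NULL;
        }
        return 0;
    }

So the memory profile is bounded by the chunk size (plus one buffered line per run during the merge), not by the input size.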
If money ever starts looking particularly illusory, try thinking in terms of the underlying resources that markets allocate.
That's 'resources' viewed as expansively as possible, everything from the specialized labor-hours of people who know how to do quality control on bulk-manufactured photovoltaics to the ore used to make ball bearings in the factory all the way to the guy in charge of managing a grain elevator that was involved in making the bread for the sandwich one of the janitors had for lunch. The web of collaboration between all these far-flung people who mostly don't know each other, too vast and intricate to fit in any living mind, is how we currently get most of our material stuff.
... And in a conventional market system, the core of how those people coordinate their efforts is money. The price that each person is willing to buy something for or sell it for sends a signal about how much they care about it relative to other things. And markets are one popular way of aggregating that information, helping guide society's cooperative efforts in the direction of what people care about.
There are various allocation systems that don't involve money, both theoretical and historical. Community-based mutual reciprocity with a reputation mechanism to discourage freeloading, for example, can be found all over the place in pre-modern history because it worked – as long as your community was small enough that you could realistically all know each other. Or, back in the 20th century, there were a number of efforts to scale up operations research toward the level of nations, since suddenly we had computers fast enough to handle e.g. non-trivial linear programming. (The successes and failures were both instructive.)
--
Coordination problems are hugely underrated in political discourse. So when I hear people say things like "The economic system is the ideology holding us back", I always have to wonder: how carefully has this person thought about what a viable alternative would look like?
"I dislike the current system" is only the first and most trivial part of a real reform agenda; the next part has to be "... and here is how to meaningfully change it in a way that doesn't result in disaster, with a detailed discussion of mechanism design and a look at relevant historical prior attempts. [Insert essay or hyperlink here.]"
Sadly, a lot of people look at our economic system through an ideological lens - to them, how it allocates resources is driven by political, cultural and social motivations. The fact that its most important purpose, by far, is resource allocation is often completely ignored.
Rising petrol prices here in Australia draw criticism of fossil fuel wholesalers - as if they are doing this solely to screw over Australians. The fact that these high prices are caused by an actual lack of resources and that the higher prices are driving a reallocation of resources to those who need them most (i.e. most willing to pay for them) is not on the radar for many.
> The fact that these high prices are caused by an actual lack of resources and that the higher prices are driving a reallocation of resources to those who need them most (i.e. most willing to pay for them) is not on the radar for many
Careful using words like "need". The resources are allocated to the most economically efficient sectors: if you are economically efficient, your profits are higher and you can afford to pay more than others.
In most cases these are congruent ideas, though. If I have no choice but to drive, but someone else can drive, take public transport, or work from home, high fuel prices incentivise them not to drive, leaving some fuel for me.
I'm sure there are plenty of people throughout an economy who just don't care, but on average the price signal has substantial impacts, and it's common now for people to totally dismiss that.
"It’s not only our reality which enslaves us. The tragedy of our predicament when we are within ideology is that when we think that we escape it into our dreams, at that point we are within ideology." - Slavoj Zizek
> The fact that these high prices are caused by an actual lack of resources and that the higher prices are driving a reallocation of resources to those who need them most (i.e. most willing to pay for them) is not on the radar for many.
This, for example, is a deeply ideological statement. Do I really need something most just because I can pay more for it? Does the billionaire need the mansion more than the homeless person needs some living space?
The other replying commenter made a good point that "need" is perhaps not the best description, but I'll stand by it as reasonably close to what I mean.
Yes, there are plenty of people with high incomes who continue commanding resources they may not strictly "need", but across the economy as a whole the effect of these prices is still to allocate resources in an efficient way. The point is that this avoids an acute shortage and rationing, which is the alternative to transmitting this information via prices and almost certainly far less economically productive.
If you're familiar with the technical specs, I'd be interested in hearing what size of objects the star trackers can sense and at what range. In theory the fancier star trackers can see objects around 10 cm diameter hundreds of kilometers away, without needing to worry about a pesky atmosphere [1], but I don't know how sensitive the sensors on Starlink's current generation satellites are, and this web site isn't saying.
They're mostly touting the improvement in latency over existing tracking, from delays measured in hours to ones measured in minutes. Which is very nice, of course, but the lack of other technical detail is mildly frustrating.
Note from the analysis in the paper (CST = commercial star tracker, for which they model several common ones flown on satellites):
> From Fig. 1, it is clear that many typical CSTs can be used to detect debris with characteristic length less than 10 cm at distances as far as roughly 50 km. These same sensors have the potential to detect debris as small as 1 cm in diameter as far as 5 km away. Even space-limited CubeSats using nanosatellite-class CSTs can detect 10-cm-class debris at roughly 25 km away or 1-cm-class debris at a distance of 2.5 km. Higher-performing imagers like the MOST telescope can further characterize orbital debris of 10 cm diameter as far as 400 km away or be used to characterize orbital debris smaller than 1 cm at ranges not exceeding 40 km.
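A quick sanity check on those numbers, assuming detection is limited by reflected sunlight: the received flux scales as (debris size)² / (range)², so for a fixed sensor sensitivity the maximum detection range grows linearly with debris size. Taking the paper's ~10 cm at ~50 km figure as the reference point:

    #include <stdio.h>

    int main(void) {
        /* Reference point from the paper: ~10 cm debris at ~50 km
         * for a typical commercial star tracker. */
        const double ref_size_cm = 10.0, ref_range_km = 50.0;
        const double sizes_cm[] = { 1.0, 10.0, 100.0 };
        for (int i = 0; i < 3; i++)
            printf("%6.0f cm debris -> detectable out to ~%.0f km\n",
                   sizes_cm[i], ref_range_km * sizes_cm[i] / ref_size_cm);
        return 0;
    }

That linear scaling reproduces the paper's 1 cm at 5 km figure, and it's consistent with their CubeSat numbers (25 km vs. 2.5 km) too.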
Honestly, these two paragraphs are one of the most compelling things they could possibly say in a press release:
> Stargaze already has a proven track record in its utility for space safety. In late 2025, a Starlink satellite encountered a conjunction with a third-party satellite that was performing maneuvers, but whose operator was not sharing ephemeris. Until five hours before the conjunction, the close approach was anticipated to be ~9,000 meters—considered a safe miss-distance with zero probability of collision. With just five hours to go, the third-party satellite performed a maneuver which changed its trajectory and collapsed the anticipated miss distance to just ~60 meters. Stargaze quickly detected this maneuver and published an updated trajectory to the screening platform, generating new CDMs which were immediately distributed to relevant satellites. Ultimately, the Starlink satellite was able to react within an hour of the maneuver being detected, planning an avoidance maneuver to reduce collision risk back down to zero.
> With so little time to react, this would not have been possible by relying on legacy radar systems or high-latency conjunction screening processes. If observations of the third-party satellite were less frequent, conjunction screening took longer, or the reaction required human approval, such an event might not have been successfully mitigated.
Looks like a non-trivial upgrade to previous systems, and they're making Stargaze's data available to other satellite operators free of charge. Nice!
> they're making Stargaze's data available to other satellite operators free of charge
With so many Starlink satellites, odds are that one false move on anyone's part ends up in an incident involving them. Sharing this data makes the field safer for everyone, and Starlink gets to steer clear of any bad headlines.
It will be interesting when multiple parties are using these systems and still failing to communicate out of band. Like trying to pass someone in a hallway who keeps trying to make the same course correction as you until you both make eye contact and come to a real agreement.
> In a statement posted on social media late Dec. 12, Michael Nicolls, vice president of Starlink engineering at SpaceX, said a satellite launched on a Kinetica-1 rocket from China two days earlier passed within 200 meters of a Starlink satellite.
> CAS Space, the Chinese company that operates the Kinetica-1 rocket, said in a response that it was looking into the incident and that its missions “select their launch windows using the ground-based space awareness system to avoid collisions with known satellites/debris.” The company later said the close approach occurred nearly 48 hours after payload separation, long after its responsibilities for the launch had ended.
> The satellite from the Chinese launch has yet to be identified and is listed only as “Object J” with the NORAD identification number 67001 in the Space-Track database. The launch included six satellites for Chinese companies and organizations, as well as science and educational satellites from Egypt, Nepal and the United Arab Emirates.
Alternative: the system exists, so people in the know may well have done a proper risk assessment and identified multiple scenarios that could result in a collision. Some of those scenarios are accidental, some are not.
If so, SpaceX's longer term response being "here's our SSA data for everyone and here's how we source it" is a good one for all parties involved (even more so for SpaceX and govt customers they share it with if they have other capabilities...)
Well we already know Starshield (the military version) has specialist space domain awareness capabilities that aren't being shared, and it's entirely plausible that data from regular Starlink sensors/receivers (other than the disclosed star trackers) can be fused into something useful by SpaceX and/or the Space Force.
Orbital mechanics can be somewhat counterintuitive.
If you want to change the altitude of your orbit at a certain point, the most efficient place to make that change is generally on the other side of the planet from that point.
In low earth orbit it takes about 90 minutes to go around the planet, so a small nudge 45 minutes before the potential intercept is going to be vastly more efficient than a big shove when the collision is 5 minutes away.
Starlink uses high-efficiency ion thrusters, so it has to do small nudges anyway.
So I would not be surprised if most of that hour is spent waiting for the right time to fire the thrusters.
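A back-of-the-envelope illustration of why the early nudge wins (my numbers, not SpaceX's): for a near-circular orbit, a small prograde burn of dv raises the altitude half an orbit later by roughly 4·dv/n, where n is the mean motion.

    #include <math.h>
    #include <stdio.h>

    int main(void) {
        const double mu = 3.986004418e14;  /* Earth's GM, m^3/s^2         */
        const double r  = 6921e3;          /* ~550 km altitude orbit, m   */
        const double v  = sqrt(mu / r);    /* circular orbital speed, m/s */
        const double n  = v / r;           /* mean motion, rad/s          */
        const double dv = 0.01;            /* a 1 cm/s prograde nudge     */
        printf("orbital period: %.1f min\n",
               2.0 * 3.14159265358979 / n / 60.0);
        /* Tangential burn: the point opposite the burn rises by ~4*dv/n. */
        printf("altitude change half an orbit later: %.1f m\n",
               4.0 * dv / n);
        return 0;
    }

At Starlink-like altitudes that works out to roughly 37 m of displacement per cm/s of delta-v, which is why a whisper from an ion thruster 45 minutes out can do what would take a serious burn at 5 minutes out.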
Maybe I misinterpreted the statement - I thought it was talking about the time from detection to sending the command to the satellite, not the time until the satellite actually took action.
Slowing the adoption of much-safer-than-humans robotaxis, for whatever reason, has a price measured in lives. If you think that the principle you've just stated is worth all those additional dead people, okay; but you should at least be aware of the price.
Failure to acknowledge the existence of tradeoffs tends to lead to people making really lousy trades, in the same way that running around with your eyes closed tends to result in running into walls and tripping over unseen furniture.
You may not have any way of knowing, but the rest of society has developed all sorts of systems of knowing: "scientific method", "Bayesian reasoning", etc., or start with the Greek philosophy classics.
It's a tale as old as schlock journalism: an article seems interesting... until it talks about something you actually know about personally, at which point it suddenly starts saying obvious nonsense.
My experience is that very few people understand what I am saying if I really explain things. It is usually better to say obvious nonsense that gets people working in the same direction. Most mass communication is meaningless until you find the meaning yourself, though there are some rather wonderful educators who prove me wrong. I can only think of one that I have met, and he spent 50% of his time talking about "unrelated" topics. He died, teaching to the end.
I'm not too familiar with the JVM so perhaps I'm missing something here: how would that help? The file is tiny, just a few bytes, so I'd expect the main slowdown to come from system call overhead. With non-mmap file I/O you've got the open/read/close trio, and only one read(2) should be needed, so that's three expensive trips into kernel space. With mmap, you've got open/stat/mmap/munmap/close.
Memory-mapped I/O can be great in some circumstances, but a one-time read of a small file is one of the canonical examples for when it isn't worth the hassle and setup/teardown overhead.
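To make the syscall comparison concrete, here's roughly what the two paths look like in C (a sketch with minimal error handling, not anyone's production code):

    #include <fcntl.h>
    #include <string.h>
    #include <sys/mman.h>
    #include <sys/stat.h>
    #include <unistd.h>

    /* Path A: open/read/close -- three trips into the kernel. */
    static long read_small(const char *path, char *buf, long cap) {
        int fd = open(path, O_RDONLY);
        if (fd < 0)
            return -1;
        long n = read(fd, buf, cap);   /* one read suffices for a tiny file */
        close(fd);
        return n;
    }

    /* Path B: open/fstat/mmap/munmap/close -- five trips, plus a page
     * fault on first touch. More machinery for no benefit at this size. */
    static long mmap_small(const char *path, char *buf, long cap) {
        int fd = open(path, O_RDONLY);
        if (fd < 0)
            return -1;
        struct stat st;
        if (fstat(fd, &st) < 0 || st.st_size == 0 || st.st_size > cap) {
            close(fd);
            return -1;
        }
        char *p = mmap(NULL, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
        if (p == MAP_FAILED) {
            close(fd);
            return -1;
        }
        memcpy(buf, p, st.st_size);    /* faults the page(s) in */
        munmap(p, st.st_size);
        close(fd);
        return st.st_size;
    }

Where mmap shines is large files accessed repeatedly or sparsely; for a one-shot read of a few bytes, path A wins on both simplicity and syscall count.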
As with all the ahead-of-time compiled languages that I checked, the answer is that it generates non-SIMD code for the hot loop. The assembly code I see in godbolt.org isn't bad at all; the compiler just didn't do anything super clever.
The common element is that they're written as the most obvious version of the code, while the ones in the faster bucket are either explicitly vectorized or written in non-obvious ways to help the compiler auto-vectorize. For example, consider the Objective-C version of the loop in leibniz.m:
    for (long i = 2; i <= rounds + 2; i++) {
        x *= -1.0;
        pi += x / (2.0 * i - 1.0);
    }
With my older version of Clang, the resulting assembly at -O3 isn't vectorized. Now look at the C version in leibniz.c:
    rounds += 2u;                           // do this outside the loop
    for (unsigned i = 2u; i < rounds; ++i)  // use ++i instead of i++
    {
        double x = -1.0 + 2.0 * (i & 0x1);  // allows vectorization
        pi += (x / (2u * i - 1u));          // double / unsigned = double
    }
This produces vectorized code when I compile it. When I replace the Objective-C loop with that code, the compiler also produces vectorized code.
You see something similar in the other kings-of-speed languages. Zig? It's the C code ported directly to a different syntax. D? Exact same. Fortran 90? Slightly different, but still obviously written with compiler vectorization in mind.
(For what it's worth, the trunk version of Clang is able to auto-vectorize either version of the loop without help.)
The delta there is because the Rust 1.92 version uses the straightforward iterative code and the 1.94-nightly version explicitly uses std::simd vectorization; compare the two source files and the difference is immediately apparent.