
Yes, Challenger. The O-ring failed, letting hot exhaust gas escape, which almost instantly destroyed the main propellant tank.

I believe what it destroyed was the strut holding the booster to the tank. When the strut burned through, the assembly came apart and aerodynamic forces did the rest of the destruction.

This is underselling the risks. On top of the many trajectories that push them into unrecoverable situations, leaving them stranded in orbit, there are trajectories where the moon gives a gravity assist strong enough to fling the spacecraft to escape velocity, fulfilling the OP's scenario.

In fact, the trajectory they chose for this mission exploited the opposite effect to yield a free return without propellant expense.

In the modern day, the chance of a math error being the root cause behind this failure mode is vanishingly small, but minor burn execution mistakes (ones that don't take hundreds of extra pounds of propellant to correct) are definitely plausible. They were extremely common in the early days of spaceflight and plagued most of the very first moon exploration attempts. Again, with modern RCS this is unlikely. But reentry is still incredibly tight and dangerous. Apollo famously had a ±1° safe entry corridor, and Orion is way heavier and coming in even faster. If their perigee was off they could've easily burned up or doubled their mission time, which they may not have been able to survive.


The number of things that would have to go wrong for the craft to get an accidental gravity boost and be ejected would be significant.

I feel like the original claim paints the whole thing as on a knife edge, barely achieved by virtue of not making a single mistake. In today's age, with so many moon landing deniers and worse, I feel like we should be specific about where the actual dangers, challenges, and unknowns were here. In reality, the orbital mechanics are one of the simplest parts of the entire problem, at least when we're talking about a moon flyby.


Yes, this is a fair point. I agree that orbital mechanics is trivially easy compared to everything else. The chances of a math mistake in particular are nil; these trajectories have all been calculated years in advance.

The lumpiness of the moon's gravity is not well mapped out.

It is now better mapped after the GRAIL mission: https://en.wikipedia.org/wiki/Gravity_Recovery_and_Interior_...

The moon's gravity turns out to be "lumpy" because its density is not constant. This was detected by the Apollo missions and caused them to make errors in orbit calculations. This source of error could have influenced the flyby.


Isn’t this what AGI is by design? People CAN learn to become good at videogames. Modern LLMs can’t, they have to be retrained from scratch (I consider pre-training to be a completely different process than learning). I also don’t necessarily agree that a grandma would fail. Give her enough motivation and a couple days and she’ll manage these.

My main criticism would be that it doesn’t seem like this test allows online learning, which is what humans do (over the scale of days to years). So in practice it may still collapse to what you point out, but not because the task is unsuited to showing AGI.


What I'm saying is that this test is just another "out-of-distribution task" for an LLM. And it will be solved using the exact same methods we always use: it will end up in the pre-training data, and LLMs will crush it.

This has absolutely nothing to do with AGI. Once they beat these tests, new ones will pop up. They'll beat those, and people will invent the next batch.

The way I see it, the true formula for AGI is: [Brain] + [External Sensors] (World Receptors) + [Internal State Sensors] + [Survival Function] + [Memory].

I won't dive too deep into how each of these components has its own distinct traits and is deeply intertwined with the others (especially the survival function and memory). But on a fundamental level, my point is that we are not going to squeeze AGI out of LLMs just by throwing more tests and training cycles at them.

These current benchmarks aren't bringing us any closer to AGI. They merely prove that we've found a new layer of tasks that we simply haven't figured out how to train LLMs on yet.

P.S. A 2-year-old child is already an AGI in terms of its functional makeup and internal interaction architecture, even though they are far less equipped for survival than a kitten. The path to AGI isn't just endless task training—it's a shift toward a fundamentally different decision-making architecture.


> Once they beat these tests, new ones will pop up. They'll beat those, and people will invent the next batch.

That's exactly the point! Once we can no longer invent the next batch (one that is easy for humans to solve), that will be AGI.


Good post, but I disagree that a Survival Function is needed for AGI. Why do you think it's needed?

The item I think you should add is a Mesolimbic System (Reward / Motivation). I think AGI needs motivation to direct its learning and tasks.

Also, I don't think the industry has just been training LLMs with more data to get advancement over the last 2 years. RAG / agent loops / skills / context management are all just early forms of a Memory system. An LLM with an updatable working-set memory is a lot more capable than just an LLM.


> Isn’t this what AGI is by design?

Well, the "G" in AGI is kinda important. These are specifically games/puzzles.

> they have to be retrained from scratch

Is that true? Didn't DeepMind already build plenty of agents that are generally good at most computer games without being retrained?


Kids develop video game skills, grandmothers do not. Hypothetically, grandmothers develop baking skills that kids do not (perfectly golden-brown cookies). A human intelligence is generally capable of developing video game skills or baking skills, given enough motivation and experience to hone those skills. One test of AGI is whether the same system can develop both video game skills and baking skills without having to rebuild the core models... this would demonstrate generalized intelligence.


Disagree on the last statement. Makie is tremendously superior to matplotlib. I love ggplot but it is slow, as all of R is. And my work isn’t so heavy on statistics anyway.

Makie has the best API I've seen (mostly Matlab/matplotlib inspired), the easiest layout engine, the best system for live interactive plots (Observables are amazing), and the best performance for large data and exploration. It's just a phenomenal visualization library for anything I do. I suggest everyone give it a try.
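
To give a flavor of the Observables workflow, here's a minimal sketch (assuming the GLMakie backend; the animated sine is just an illustrative example):

    using GLMakie

    # An Observable holds a value; any plot built from it updates automatically.
    phase = Observable(0.0)
    xs = range(0, 2pi, length = 200)
    ys = lift(p -> sin.(xs .+ p), phase)

    fig, ax, l = lines(xs, ys)
    display(fig)

    # Mutating the Observable redraws the line live, no replotting needed.
    for p in range(0, 2pi, length = 60)
        phase[] = p
        sleep(1/30)
    end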

Matlab is the only one that comes close, but it has its own pros and cons. I could write about the topic in detail, as I’ve spent a lot of time trying almost everything that exists across the major languages.


I love Makie, but for investigating our datasets Python is overall superior (I am not familiar enough with R), despite Julia having the superior array syntax and Makie having the better API. This is simply because of the brilliant library support available in scikit-learn, and the whole compilation overhead/TTFX issue. For these workflows it's a huge issue that restarting your interactive session takes minutes instead of seconds.


I recently used Makie to create an interactive tool for inspecting nodes of a search graph (dragging, hiding, expanding edges, custom graph layout), with floating windows of data and buttons. Yes, it's great for interactive plots (you can keep using the REPL to manipulate the plot, no freezing); yes, Observables and GridLayout are great; and I was very impressed with Makie's plotting abilities, from making the basics easy to the extremely advanced. But no, it was the wrong tool. Makie doesn't really do floating windows (subplots), and I had to jump through hoops to create my own float system, which uses GridLayout for the GUI widgets inside them. I did get it all to work nearly flawlessly in the end, but I should probably have used a Julia ImGui wrapper instead: near-instant start time!



Yes. And I did port my GUI layer to CImGui.jl. The rest of it is pretty intertwined with Makie, so I didn't do that yet. The Makie version does look better than ImGui, though.


I tried some Julia plotting libraries a few years ago and they had APIs that were bad for interactively creating plots, as well as often being buggy. I don't have performance problems with ggplot, so that's what I tend to lean toward. Matplotlib being bad isn't much of a problem anymore, as LLMs can translate from ggplot to matplotlib for you.


Some quick napkin math: AI energy usage for a chat like the one in the post (estimated ~100 Wh) is comparable to driving ~100 m in the average car, making 1 slice of toast, or bringing 1 liter of water to a boil.
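
A quick sanity check on the boiling-water comparison (the ~100 Wh chat figure itself is the post's estimate):

    # Energy to heat 1 L of water from 20 °C to 100 °C.
    mass_kg = 1.0       # 1 liter of water ≈ 1 kg
    c_water = 4186.0    # specific heat of water, J/(kg·K)
    dT      = 80.0      # 20 °C up to 100 °C
    E_J  = mass_kg * c_water * dT    # ≈ 335,000 J
    E_Wh = E_J / 3600                # ≈ 93 Wh, same ballpark as ~100 Wh
    println(round(E_Wh, digits = 1), " Wh")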

I'd wager the average American eats more than $20/month worth of meat overall, but let's say they spend as much as an OpenAI subscription on beef. If you truly believe in free markets, then they have the same environmental impact. But which one has more externalities? Many supply chain analyses have been done, which you can look up. As one might expect, the numbers don't look good for beef.


Exactly the same as pointing out that LLMs use energy. That whole conversation probably used as much energy as making a piece of toast.


There's an expected number of defects per wafer. If a chip has a defect, it is lost (a simplification). A wafer with 100 chips may lose 10 to defects, giving a yield of 90%. The same wafer with 1000 smaller chips would still lose only 10 of them, giving 99% yield.
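
The same toy model in code (assuming each point defect kills exactly one chip; real-world yield models, like Poisson defect-density models, are more involved):

    defects = 10                     # expected point defects per wafer (toy number)
    for chips in (100, 1000)
        wafer_yield = (chips - defects) / chips
        println("$chips chips/wafer -> $(round(100 * wafer_yield, digits = 1))% yield")
    end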


As another comment in this thread states, Cerebras seems to have solved this by making their big chip out of a lot of much smaller cores that can be disabled if they have defects.


Indeed, the original comment you replied to actually made no sense in this case. But there seemed to be some confusion in the thread, so I tried to clear that up. I hope I'll get to talk with one of the Cerebras engineers one day; that chip is really one of a kind.


Yes, amazing tech. You should join their Discord, it's pretty active these days!


If you let the computer run for long enough, it will compute any atomic spectrum to arbitrary accuracy. Only QFT has divergent series, so at least in theory we expect these calculations to converge.

There's an intrinsic physical limit to how finely you can resolve a spectrum, so arbitrarily many digits of precision aren't exactly a worthy pursuit anyway.
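
As a toy illustration of computing a spectrum, here are hydrogen's Balmer lines from the Rydberg formula (non-relativistic textbook physics; the hard, slowly converging part is the QED corrections on top):

    R = 1.097373e7                       # Rydberg constant, 1/m
    for n in 3:6
        wl = 1 / (R * (1/2^2 - 1/n^2))   # wavelength of the n -> 2 transition
        println("n = $n -> 2: ", round(wl * 1e9, digits = 1), " nm")
    end
    # n = 3 gives ~656.1 nm, the familiar H-alpha line.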


Feynman is indeed often cited as among the first people to propose the idea of a quantum computer! This talk he gave in '81 is among the earliest discussions of why a quantum universe requires a quantum computer to simulate it [1]:

> Can a quantum system be probabilistically simulated by a classical (probabilistic, I'd assume) universal computer? In other words, a computer which will give the same probabilities as the quantum system does. If you take the computer to be the classical kind I've described so far, (not the quantum kind described in the last section) and there're no changes in any laws, and there's no hocus-pocus, the answer is certainly, No! This is called the hidden-variable problem: it is impossible to represent the results of quantum mechanics with a classical universal device.

Another unique lecture is a 1959 one [2] about the potential of nanotechnology (not even a real thing back then). He speaks of directly manipulating atoms and building angstrom-scale engines and microscopes with a highly unusual perspective; extremely fascinating for anyone curious about these things and the historical context. Even by Feynman's standards, this was a unique mix of topics and terminology. For context, the structure of DNA had been discovered only about 5 years prior, and the first instruments capable of atomic imaging and manipulation didn't arrive until at least the 80s.

If you’re captivated by this last one as I was, I can also recommend Greg Bear’s novel “Blood Music”. It doesn’t explore the nanotechnology side much, but the main hook is biological cells as computers. Gets very crazy from there on.

1. https://s2.smu.edu/~mitch/class/5395/papers/feynman-quantum-...
2. https://www.zyvex.com/nanotech/feynman.html


If you're into atomic physics and getting a feel for the intricate structure of the basic processes, the best find I've had recently is this MIT course by Wolfgang Ketterle. The first lecture is an informal overview, and he gives vivid and detailed descriptions of the phenomena they can create and control now, like why we see different kinds of things happening at very low temperatures (the atoms are moving past each other so slowly that their wavefunctions have time to overlap and interact), or using intersecting lasers to create arrays of dimples in the electromagnetic field that draw in and hold single atoms, this kind of thing. It gives a more tangible insight into the quantum aspects of matter that can otherwise seem inscrutable.

https://www.youtube.com/watch?v=Agu68RGaoWM&list=PLUl4u3cNGP...

He also got the Nobel prize for making a Bose-Einstein condensate in the 90s (the prize itself came in 2001).


The quote is not suggesting that a quantum computer can't be simulated classically. It can be, in fact, just slowly: you keep track of the full quantum state, where n qubits means 2^n complex amplitudes.

It relates more to the Bell results: there doesn't exist a hidden-variable system that's equivalent to QM.
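
For a concrete picture, here's a minimal brute-force state-vector sketch (toy code; real simulators apply gates without materializing the full 2^n by 2^n operator):

    using LinearAlgebra

    n = 3
    state = zeros(ComplexF64, 2^n)
    state[1] = 1.0                       # start in |000>

    H = [1 1; 1 -1] / sqrt(2)            # Hadamard gate

    # Apply a single-qubit gate to qubit `target` by Kronecker-producting
    # identities on every other qubit. Memory and time grow as 2^n.
    function apply_gate(state, gate, target, n)
        ops = [i == target ? gate : Matrix{ComplexF64}(I, 2, 2) for i in 1:n]
        return kron(ops...) * state
    end

    state = apply_gate(state, H, 1, n)   # put qubit 1 into superposition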



Are there Feynmans today making predictions which we scoff at?


"There's Plenty of Room at the Bottom" only really took off in popularity decades later. Feynman's accomplishments are undeniable, Nobel prize and all, but his celebrity status comes from other aspects of his personality. No Feynman equivalent I can think of is alive today. Perhaps Geoffrey Hinton and his views on the risk of AGI? He's far from the only one, of course.


indeed there are.


Said by the man himself no less


Talk about a cliffhanger

