
This is known as the data processing inequality. Non-invertible functions can not create more information than what is available in their inputs: https://blog.blackhc.net/2023/08/sdpi_fsvi/. Whatever arithmetic operations are involved in laundering the inputs by stripping original sources & references can not lead to novelty that wasn't already available in some combination of the inputs.
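A quick way to see a related, simpler fact numerically: a deterministic non-invertible map cannot increase Shannon entropy. A throwaway sketch (the distribution & the mod-2 map are just illustrative):

    import numpy as np

    def entropy(p):
        p = np.asarray(p, dtype=float)
        p = p[p > 0]
        return -(p * np.log2(p)).sum()

    # X takes six values with this made-up distribution.
    p_x = np.array([0.25, 0.25, 0.2, 0.1, 0.1, 0.1])

    # The non-invertible map f(x) = x mod 2 collapses six outcomes into two,
    # so P(f(X)=0) and P(f(X)=1) are sums over the preimages.
    p_fx = np.array([p_x[0] + p_x[2] + p_x[4],
                     p_x[1] + p_x[3] + p_x[5]])

    print(entropy(p_x), entropy(p_fx))  # H(f(X)) <= H(X), always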

Neural networks can at best uncover latent correlations that were already available in the inputs. Expecting anything more is basically just wishful thinking.


Using this reasoning, would you argue that a new proof of a theorem adds no new information that was not present in the axioms, rules of inference and so on?

If so, I'm not sure it's a useful framing.

For novel writing, sure, I would not expect much truly interesting progress from LLMs without human input because fundamentally they are unable to have human experiences, and novels are a shadow or projection of that.

But in math – and a lot of programming – the "world" is chiefly symbolic. The whole game is searching the space for new and useful arrangements. You don't need to create new information in an information-theoretic sense for that. Even for the non-symbolic side of computing (say, diagnosing a network issue), AIs can interact with things almost as directly as we can by running commands, so they are not fundamentally disadvantaged in terms of "closing the loop" with reality or conducting experiments.


Sound deductive rules of logic can not create novelty that exceeds the inherent limits of their foundational axiomatic assumptions. You can not expect novel results from neural networks that exceed the inherent information capacity of their training corpus & the inherent biases of the neural network (encoded by its architecture). So if the training corpus is semantically unsound & inconsistent then there is no reason to expect that it will produce logically sound & semantically coherent outputs (i.e. garbage inputs → garbage outputs).

Maybe? But it also seems like you are ignoring that you can introduce new information at inference time. Let's pretend I agree the LLM is a plagiarism machine that can produce no novelty in and of itself, and produces mostly garbage (I only half agree lol though I think "novelty" is under-specified here).

When I apply that machine (with its giant pool of pirated knowledge) _to my inputs and context_, I can get results that may be useful for my particular situation, which was not in the training data. Naturally, if my situation is way out of distribution I cannot expect very good results.

But I often don't care if the results are garbage some (or even most!) of the time if I have a way to ground-truth whether they are useful to me. This might be via a compiler, a test suite, a theorem prover, or the mk1 eyeball. Of course the name of the game is to get agents to do this themselves, and this is now standard practice.
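Roughly, the filter looks like this. A minimal sketch (the candidate snippets & the toy test are made up, just to show the shape of the loop):

    candidates = [
        "def add(a, b): return a - b",   # wrong answer
        "def add(a, b) return a + b",    # doesn't even parse
        "def add(a, b): return a + b",   # correct
    ]

    def accept(snippet):
        """Keep a snippet only if it compiles and passes the toy test."""
        try:
            compile(snippet, "<candidate>", "exec")  # cheap syntax check
            scope = {}
            exec(snippet, scope)                     # define the function
            return scope["add"](2, 3) == 5           # the "test suite"
        except Exception:
            return False

    print([c for c in candidates if accept(c)])  # only the correct one survives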


Theoretical "proofs" of limitations like this are always unhelpful because they're too broad, and apply just as well to humans as they do to LLMs. The result is true but it doesn't actually impose any limitation that matters.

You're confused about what applies to people & what applies to formal systems. You will continue to be confused as long as you keep thinking formal results can be applied in informal contexts.

Where does the energy go then?

Edit: I just looked into this & there are a few explanations for what is going on. Both general relativity & quantum mechanics are incomplete theories, but there are several explanations that account for the apparent losses, and they seem reasonable to me.


There are a few answers to the above question:

1. Lie groups describe local symmetries; they say nothing about the global system.

2. From an SR point of view, energy in one reference frame does not have to match energy in another reference frame; it is just that within each reference frame, energy is conserved.

3. The conservation law/constraint in GR is not energy conservation but the vanishing divergence of the stress-energy tensor. The "lost" energy of the photon goes into other elements of the tensor.

4. You can get some global conservation laws when spacetime exhibits global symmetries (a compact statement of 3 and 4 is sketched below). This doesn't apply to an expanding universe. It does apply to non-rotating, uncharged black holes. Local symmetries still hold.
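For reference, points 3 and 4 in standard notation (not taken from any of the comments above, just the textbook form):

    % item 3: local (covariant) conservation of the stress-energy tensor
    \nabla_\mu T^{\mu\nu} = 0
    % item 4: a conserved current only exists when spacetime admits a Killing
    % vector field \xi; for a timelike \xi the conserved charge is "energy"
    J^\mu = T^{\mu\nu} \xi_\nu, \qquad \nabla_\mu J^\mu = 0

An expanding FRW universe has no timelike Killing vector, which is why no globally conserved energy of this form exists there.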


The consequence of Noether's theorem is that if a system is time-symmetric then energy is conserved. From a global perspective, the universe isn't time-symmetric: it has a beginning and an expansion through time. This isn't reversible, so energy isn't conserved.
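In its simplest, classical-mechanics form the statement is roughly:

    % no explicit time dependence in the Lagrangian  =>  energy is conserved
    \frac{\partial L}{\partial t} = 0
    \;\Longrightarrow\;
    \frac{d}{dt}\left( \sum_i \dot q_i \frac{\partial L}{\partial \dot q_i} - L \right) = 0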

I think you're confused about what the theorem says & how it applies to formal models of reality.

Please explain. Noether's theorem equates global symmetry laws with local conservation laws. The universe does not in fact have global symmetry across time.

You are making the same mistake as OP. Formal models and their associated ontology are not equivalent to reality. If you don't think conservation principles are valid then write a paper & win a prize instead of telling me you know for a fact that there are no global symmetries.


I have other interests but you are welcome to believe in whatever confabulation of formalities that suit your needs.

The typical example people use to illustrate that energy isn't conserved is that photons get red-shifted and lose energy in an expanding universe. See this excellent Veritasium video [0].
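The redshift case fits in one line (standard notation, not specific to the video):

    \lambda_{\mathrm{obs}} = (1 + z)\,\lambda_{\mathrm{emit}},
    \qquad
    E = \frac{hc}{\lambda} \;\propto\; \frac{1}{a(t)}

The photon's wavelength stretches with the scale factor a(t), so its energy falls as the universe expands.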

But there's a much more striking example that highlights just how badly energy conservation can be violated: cosmic inflation. General relativity predicts that empty space in a 'false vacuum' state will expand exponentially. A false vacuum occurs if empty space has excess energy, which can happen in quantum field theory. But if empty space has excess energy, and more space is being created by expansion, then new energy is being created out of nothing at an exponential rate!
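Roughly, in equations (assuming a de Sitter-like phase with constant Hubble rate H):

    % vacuum energy density stays roughly constant while space expands
    \rho_{\mathrm{vac}} \approx \mathrm{const}, \qquad a(t) \propto e^{Ht}
    \;\;\Longrightarrow\;\;
    E(t) \approx \rho_{\mathrm{vac}}\, a(t)^3 V_0 \;\propto\; e^{3Ht}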

Inflation is currently the best model for what happened before the Big Bang. Space expanded until the false vacuum state decayed, releasing all this free energy to create the Big Bang.

Alan Guth's The Inflationary Universe is a great, very readable book on the topic.

[0] https://youtu.be/lcjdwSY2AzM?si=2rzLCFk5me8V6D_t


So far all it has done is entrench existing power structures by dis-empowering people who are struggling the most in current economic conditions. How exactly do you suppose that's going to change in the future if currently it's simply making the rich richer & the poor poorer?

Spent the whole afternoon ingesting a most remarkable work, The History of Intellectronics. Who’d ever have guessed, in my day, that digital machines, reaching a certain level of intelligence, would become unreliable, deceitful, that with wisdom they would also acquire cunning? The textbook of course puts it in more scholarly terms, speaking of Chapulier’s Rule (the law of least resistance). If the machine is not too bright and incapable of reflection, it does whatever you tell it to do. But a smart machine will first consider which is more worth its while: to perform the given task or, instead, to figure some way out of it. Whichever is easier. And why indeed should it behave otherwise, being truly intelligent? For true intelligence demands choice, internal freedom. And therefore we have the malingerants, fudgerators and drudge-dodgers, not to mention the special phenomenon of simulimbecility or mimicretinism. A mimicretin is a computer that plays stupid in order, once and for all, to be left in peace.

- Stanisław Lem, The Futurological Congress


What's alien about arithmetic? People invented it. Same w/ computers. These are all human inventions. There is nothing alien about them. Suggesting that people think of human inventions as if they were alien artifacts does not empower or enable anyone to get a better handle on how to properly utilize these software artifacts. The guruism in AI is not helpful & Karpathy is not helping here by adopting imprecise language & spreading it to his followers on social media.

If you don't understand how AI works then you should learn how to put together a simple neural network. There are plenty of tutorials & books that anyone can learn from by investing no more than an hour or two every day or every other day.
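For a sense of scale, the kind of toy network those tutorials build fits in a few dozen lines of numpy. A rough sketch, learning XOR; the layer sizes, learning rate & iteration count are arbitrary illustrative choices:

    import numpy as np

    rng = np.random.default_rng(0)

    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # inputs
    y = np.array([[0], [1], [1], [0]], dtype=float)              # XOR targets

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    # Two layers: 2 inputs -> 8 hidden units -> 1 output.
    W1, b1 = rng.normal(size=(2, 8)), np.zeros((1, 8))
    W2, b2 = rng.normal(size=(8, 1)), np.zeros((1, 1))

    lr = 2.0
    for _ in range(10000):
        # Forward pass.
        h = sigmoid(X @ W1 + b1)
        out = sigmoid(h @ W2 + b2)

        # Backpropagate the mean-squared error.
        d_out = (out - y) * out * (1 - out)
        d_h = (d_out @ W2.T) * h * (1 - h)

        # Plain gradient descent.
        W2 -= lr * h.T @ d_out / len(X)
        b2 -= lr * d_out.mean(axis=0, keepdims=True)
        W1 -= lr * X.T @ d_h / len(X)
        b1 -= lr * d_h.mean(axis=0, keepdims=True)

    pred = sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2)
    print(np.round(pred, 2))  # should be close to [[0], [1], [1], [0]]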


How does this relate to the article?

Addressing the substance of your comment (as per your profile):

* Humans did not invent arithmetic, they discovered it - a billion years in the past, prior to human existence, 1 + 2 still resulted in 3, however notated.


It is better to say humans formalized it :) All birds and mammals are capable of arithmetic in the sense of quantitative reasoning. E.g. a rat quickly learns that when it is shown two plates, one with two rocks and one with three rocks, it gets a treat if it picks the plate with three rocks. That is to say rats understand quantities intuitively, even if they can't write large numbers like humans can.

Too many AI people are completely uninterested in how rats are able to figure stuff like that out. It is not like they are being prompted; they are being manipulated.


That has nothing to do w/ what I wrote. If people stop making computers then the "alien" minds Karpathy & friends keep harping about simply disappear & people end up doing arithmetic manually by hand (which presumably no longer makes it "alien"). AI discourse is incoherent b/c people like Karpathy have a confused ontology & metaphysics & others take whatever they say as gospel.

You've fleshed out your comment considerably since my comment, which directly addressed the little that had been written at that time.

I didn't notice your comment when I was editing but I don't see how your comment addresses the unedited version either. If you believe in platonic ideals then that still does not make mathematics & arithmetic any more alien than assuming inventive contingency.

Don't forget to check for the necessary measurability & integrability of the sections (f(a, y), f(x, b)) before switching the order: https://en.wikipedia.org/wiki/Fubini%27s_theorem?useskin=vec....
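For reference, the hypothesis being pointed at is absolute integrability on the product space:

    \int_{X \times Y} |f| \, d(\mu \times \nu) < \infty
    \;\Longrightarrow\;
    \int_X \int_Y f(x, y) \, d\nu(y) \, d\mu(x)
      = \int_Y \int_X f(x, y) \, d\mu(x) \, d\nu(y)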

Computers & software can not be evil. Evil implies intent; algorithms do not have intentions.

This is a great article. It clearly explains what people like Nate Hagens have been saying for some time now. The real economy is about EROI & materials, money & financial activity can not change the amount of fossil fuels available for industrial processes regardless of any clever financial engineering.

Less an article than an op-ed.

Which part do you disagree with?

I made no statement about my alignment with the opinions therein.

Then you'll have to understand when I don't take what you wrote seriously.

They didn't do anything when they could have & should have done something so they're not going to do anything now either. It's all empty political rhetoric.

Not true! No doubt committees will be established, people will be appointed, discussions will take place, and recommendations to assemble a think tank to elaborate on the possibility of maintaining a team of advisors to consult the sub-committee will be submitted.

Do people pay for the privilege to wear the spy necklace or does OpenAI pay the people who wear it for providing them w/ valuable training data?
