A first for physics: Fundamental quantum physics problem proved unsolvable (nature.com)
193 points by fluxic on Dec 10, 2015 | 62 comments


Companion article to original paper:

Paradox at the heart of mathematics makes physics problem unanswerable: Gödel’s incompleteness theorems are connected to unsolvable calculations in quantum physics.

http://www.nature.com/news/paradox-at-the-heart-of-mathemati...

Toby Cubitt, a "quantum-information theorist" at UCL, should win this year's relevant username contest ;)


Thanks! The companion article also provides a link to the full, non-paywalled arXiv version: http://arxiv.org/abs/1502.04573


A great example of nominative determinism.


To be fair, his name only collapsed into a determinative state once we looked at it in the context of this article. /pun


The companion article made clear something that the (abstract of the) original post didn't: this sort of system can't be used to physically perform "hypercomputation," since encoding an undecidable problem requires an infinite lattice. Ah well.


> The proof combines Hamiltonian complexity techniques with aperiodic tilings, to construct a Hamiltonian whose ground state encodes the evolution of a quantum phase-estimation algorithm followed by a universal Turing machine.

Just when I thought linking quantum mechanics with Turing completeness was cool, they went one step further and linked them using aperiodic tilings.


Tilings are a neat way to simulate Turing machines, so it's not super surprising that they show up in an undecidability proof:

https://en.wikipedia.org/wiki/Wang_tile
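
To make the connection concrete, here's a tiny sketch (my own toy example, not the construction from the paper): Wang tiles are squares with coloured edges, and the only constraint is that touching edges must match. A brute-force backtracking search can decide whether a given tile set covers an n x n square; the undecidable "domino problem" asks the same question for the infinite plane, where no such finite search settles every case.

    # Minimal sketch (not from the paper): Wang tiles as (top, right, bottom, left)
    # colour tuples. Backtracking checks whether the tile set can cover an
    # n x n square with all shared edges matching. Tiles may repeat, not rotate.

    def can_tile(tiles, n):
        grid = [[None] * n for _ in range(n)]

        def place(pos):
            if pos == n * n:
                return True
            r, c = divmod(pos, n)
            for t in tiles:  # t = (top, right, bottom, left)
                # left edge must match the right edge of the tile to the left
                if c > 0 and grid[r][c - 1][1] != t[3]:
                    continue
                # top edge must match the bottom edge of the tile above
                if r > 0 and grid[r - 1][c][2] != t[0]:
                    continue
                grid[r][c] = t
                if place(pos + 1):
                    return True
                grid[r][c] = None
            return False

        return place(0)

    # A toy tile set with two edge colours.
    tiles = [("a", "a", "b", "b"), ("b", "b", "a", "a"), ("a", "b", "a", "b")]
    print(can_tile(tiles, 4))  # True: the third tile alone already tiles any square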


So if I'm understanding this right, they proved that you can't calculate the bandgap of some class of systems of arbitrary size in any predetermined finite period of time, right? I guess when you really think about it, the opposite finding would be much more surprising. (That you could calculate the bandgap of any system of any size in some finite time period, however large).


It's not a constant period of time. The amount of time is allowed to increase with the size of the system. In fact it's allowed to increase quadratically, or exponentially, or factorially, or whatever - it will never be enough. Whatever computable bound you come up with for how the time grows with system size, you will always underestimate the amount of time needed (for a sufficiently large system).

https://en.wikipedia.org/wiki/Busy_beaver#Non-computability_...
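
For anyone curious what the busy beaver function in that link actually looks like, here's a small brute-force sketch (my own illustration, with an assumed step cap that happens to be safe for 2 states): enumerate every 2-state, 2-symbol Turing machine, run each on a blank tape, and record the longest halting run. For larger machines no computable cap works, which is exactly the sense in which any algorithmic estimate of the running time must eventually be an underestimate.

    from itertools import product

    # Enumerate all 2-state, 2-symbol Turing machines. Each (state, symbol)
    # entry maps to (symbol to write, head move, next state), where "H" halts.
    STATES = ["A", "B"]
    SYMBOLS = [0, 1]
    MOVES = [-1, 1]
    STEP_CAP = 100  # assumption: generous enough for 2-state machines

    def run(machine, cap):
        """Run from state A on a blank tape; return the step count if it halts."""
        tape, head, state = {}, 0, "A"
        for step in range(1, cap + 1):
            write, move, nxt = machine[(state, tape.get(head, 0))]
            tape[head] = write
            head += move
            if nxt == "H":       # the halting transition counts as a step
                return step
            state = nxt
        return None              # did not halt within the cap

    keys = [(s, sym) for s in STATES for sym in SYMBOLS]
    actions = list(product(SYMBOLS, MOVES, STATES + ["H"]))

    best = 0
    for choice in product(actions, repeat=len(keys)):
        steps = run(dict(zip(keys, choice)), STEP_CAP)
        if steps is not None:
            best = max(best, steps)

    print("Longest halting run over all 2-state machines:", best)  # known value: 6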


No, it's worse than that: From Wikipedia [1]: "In computability theory and computational complexity theory, an undecidable problem is a decision problem for which it is known to be impossible to construct a single algorithm that always leads to a correct yes-or-no answer."

The definition of an algorithm includes termination after a finite number of steps. That number isn't predefined.

[1] https://en.wikipedia.org/wiki/Undecidable_problem
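
The canonical undecidable problem is the halting problem. Here's a tiny sketch (my own simplified illustration) of the diagonal argument behind it: whatever candidate decider you plug in, the "diagonal" program below defeats it.

    # Sketch of the diagonal argument behind the halting problem (toy
    # illustration). Plug in any candidate decider `halts`; the program
    # `diagonal` is built to do the opposite of whatever the decider predicts.

    def halts(f):
        """A candidate decider: this naive one guesses that everything halts."""
        return True

    def diagonal():
        if halts(diagonal):
            while True:      # decider said "halts", so loop forever instead
                pass
        # decider said "loops forever", so just return

    # Running diagonal() would loop forever, contradicting halts(diagonal) == True.
    # The same construction defeats every candidate decider, so no single
    # algorithm answers correctly for all programs.
    print(halts(diagonal))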


The finite period of time isn't predetermined, and different systems could take different amounts of time. So it's not intuitive in the way you imply.


Next up: tie this into cryptography and information-theoretic complexity, so we can attempt to create meaningful algorithm security proofs.



I can't determine what the postulates are that this proof is based upon. If someone has a relatively succinct explanation, I'd appreciate it.


Mathematically, this is in the land of linear algebra (complex vector spaces). The particular question at hand concerns the spectrum of a (Hermitian) linear operator, known as "the Hamiltonian". The "gap" refers to a separation between the lowest two eigenvalues ("energy levels") of this operator.

It seems plausible that one could easily encode an undecidable problem into linear algebra (think of tensor networks, constraint satisfaction, etc.), but what these guys have done is place very strong restrictions on the form of the linear operator (in jargon: two-dimensional, nearest-neighbour interactions, translational invariance). This now comes close to a class of physical systems that one may hope to build, or at least understand theoretically, and the result of the paper puts a stark upper limit on what we may hope to achieve in this vein.
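
To make the linear-algebra picture concrete, here's a small numpy sketch (my own toy example, not the Hamiltonian from the paper): build a short transverse-field Ising chain as an explicit Hermitian matrix, diagonalise it, and read off the gap between the two lowest eigenvalues. For any finite matrix like this the gap is plainly computable; the undecidability concerns what happens to the gap in the infinite-lattice limit.

    import numpy as np

    # Toy Hamiltonian: H = -J * sum_i Z_i Z_{i+1} - h * sum_i X_i on N spins,
    # represented as a 2^N x 2^N Hermitian matrix. The spectral gap is the
    # difference between its two lowest eigenvalues ("energy levels").
    I = np.eye(2)
    X = np.array([[0.0, 1.0], [1.0, 0.0]])
    Z = np.array([[1.0, 0.0], [0.0, -1.0]])

    def op_on_site(op, site, n):
        """Tensor product placing `op` on `site` and identities elsewhere."""
        out = np.array([[1.0]])
        for i in range(n):
            out = np.kron(out, op if i == site else I)
        return out

    def ising_hamiltonian(n, J=1.0, h=0.5):
        H = np.zeros((2**n, 2**n))
        for i in range(n - 1):
            H -= J * op_on_site(Z, i, n) @ op_on_site(Z, i + 1, n)
        for i in range(n):
            H -= h * op_on_site(X, i, n)
        return H

    energies = np.linalg.eigvalsh(ising_hamiltonian(6))  # sorted ascending
    print("spectral gap:", energies[1] - energies[0])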


Thank you for this.


This might be an ignorant question, but can one still force a decidable solution by assuming a stronger set of axioms? And if so, does this undecidability have any physical significance?


Yes, you can add more axioms, but your new system with more axioms will have other undecidable statements.

I don't know much physics, but my interpretation is this: we have a mathematical model of physics in which we have found a "naturally occurring" undecidable statement (scare quotes because most famous undecidable statements were specifically constructed to be undecidable). This does not really say anything about the natural world, merely that our mathematical modelling of it is incomplete - and it must always be incomplete.


Is it really impossible to have some system of axioms where most statements you care about (i.e. ones you can make a physical experiment out of) can be decided? If you can't measure something (i.e. decide it some way from your theory, or leave it as a free parameter), it's not science, string theory notwithstanding.


> Is it really impossible to have some system of axioms where most statements you care about (i.e. ones you can make a physical experiment out of) can be decided?

Of course it's possible. Most statements we care about are decidable. Otherwise, mathematics would be pretty hopeless. :-) It's just that any system has undecidable statements, as long as it's strong enough to do arithmetic. That's Gödel's theorem.


I know, due to Gödel, that in principle a logical system can't be complete, but science is specified a little more tightly than just logical consistency: it has to be experimentally verifiable, which is a constraint that the mathematics used by science doesn't have. What I'm hinting at is that there is a subset of logical statements within the mathematics of science, defined by the set of experimentally testable statements, and that this subset is decidable given independent axioms and free parameters.

EDIT: to expand upon this further, things like the continuum hypothesis are interesting from a mathematical perspective. Let's say I'm Gödel and I assert that CH is false, so there is some set A "in between" N and the reals in cardinality. However, if my friend Enrico Fermi wants to do experiments and asks me to do a calculation, I won't use elements from A, because they wouldn't be useful for his experiments in particle physics. Worse, I can't even use all the real numbers, because he'll need numbers he can compare to his measurements, so I'm restricted to a countable subset of even the reals - and, of course, to statements from the decidable subset of mathematics.

There's a big difference between what mathematics is capable of saying and what we observe in the real world. Even if the real world had a set of undecidable facts, the fact that the only way science can understand it is by experiment restricts falsifiable scientific statements to a subset of mathematical statements which is incomplete (because of the need for axioms and free parameters like particle masses and charges, fundamental constants, etc.), but decidable within its axioms - because otherwise it's not testable and it's not science.


As Einstein said, "As far as the laws of mathematics refer to reality, they are not certain; and as far as they are certain, they do not refer to reality."[1]

Forgive me for not addressing directly what you said, as you seem to be using "countable" to mean something other than "in bijection with the natural numbers". I don't really understand exactly what you mean. But it appears to me that you're trying to perfectly reconcile mathematics with experimental reality. This is impossible. Also, a lot of mathematical physics is very distant from experimental reality.

--

[1] http://www-groups.dcs.st-and.ac.uk/history/Extras/Einstein_g...


I might be confusing this thread with another. The posters there claimed that when considering the infinite number of possible systems that you end up with undecidable statements. Probably countability has nothing to do with this.

Also, the popular article linked elsewhere seems to imply that this has very real implications on experiment, which is upsetting.


It probably is impossible because it's usually possible to encode mathematics into anything. Gödel's big result was essentially encoding proof theory into arithmetic, thus proving that any system powerful enough to contain arithmetic contains undecidable statements. Likewise this result - AIUI they've found a way to encode the same mathematics into the arrangement of atoms in a material.


Undecidability is a consequence of the vast range of askable questions.

For practically any system you can come up with questions that have an answer that can't be computed in the generic case. It isn't even a matter of the number of possible solutions - it can be a matter of how long it takes to parse the very question itself.


To expand on what jordigh said: all these sorts of undecidability problems depend on a formal infinity. One of the infinities in this model (not sure if it's the crucial one) is that the Hamiltonian system is infinitely large in space. Finite systems, like the ones in the real world, don't have this problem.

These kinds of results can sometimes help keep theorists who are deep in these abstractions from going down rabbit holes (e.g., by trying to construct an algorithm that always determines whether a system is gapped), but the physical implications of this result are slight at best.

Toby Cubitt is a really smart guy and I congratulate him on the Nature paper. However, the attention this gets from the public is mostly because the paper manages to call on a bunch of sexy ideas in one place without (I think) actually adding anything profound. It's possible I'm missing something.


Scott Aaronson clarifies my first paragraph: http://www.scottaaronson.com/blog/?p=2586


If you keep adding axioms, at some point the system flips from having undecidable statements to proving statements that contradict each other (i.e. it becomes inconsistent). So you have a "choice" there.


I can't tell how much of this is "quantum physics" and how much is "mathematical theory that also arises in quantum physics".

As in, I doubt their construction allows "building something" which then allows solving a conventionally unsolvable problem. With new physics you may be able to build machines that can compute more; however I don't think they are talking about such physics here.

But they do cite: Marian B. Pour-El and Jonathan I. Richards, Computability in Analysis and Physics, Springer, 1989.

M. B. Pour-El and J. Richards have earlier results:

http://www.sciencedirect.com/science/article/pii/00018708819...: The wave equation with computable initial data such that its unique solution is not computable (1981)

http://philpapers.org/rec/POUTWE: The wave equation with computable initial data whose unique solution is nowhere computable (1997)

The Spectral Gap authors seem to call these results "easier" (page 8 at http://arxiv.org/pdf/1502.04573v2.pdf)


It's nice that they showed this for the problem of finding the ground state of a Hamiltonian, but we already knew the axioms of quantum computing encode undecidable problems, because quantum computing includes all of classical computing.


Reminds me of an adage I once read and have taken to heart (but I can't recall the source and I'm paraphrasing): what is impossible in theory is often not nearly as interesting as what is possible in practice.


Wait,

Does that imply that it could be possible to construct a system which gives uncomputable results?

Does it assume an infinite plane or something? (Assume might be the wrong word)

Because, if there's something uncomputable, due to the halting problem, how does that fit with the Bekenstein bound?


No, that is incompatible with the definition of undecidable. Any fixed system either has a gap or doesn't, so for fixed systems there always exists a TM that produces the right answer (it's either the one that says yes, or the one that says no). To make something undecidable you necessarily need an infinite family of systems.

As the (observable) universe can only contain a finite number of things (AFAIK), the universe can't contain an undecidable family of systems. This result "merely" provides some evidence that the TM that decides the has-a-gap problem for all systems in the universe might be pretty large, which is kind of a bummer if you want some concise description from which you can make predictions about the universe.


I've always wondered about this. Whether or not the universe has an infinite number of things, human beings, and thus science, will be constrained to measuring and understanding a countable subset of phenomena in the universe. If that is the case, then it seems like the universe of discourse in science is really countable, up to an extent. Parametrizations of the theories may be continuous, like spacetime is continuous, but can the body of "scientifically tested statements" be continuous? You can only put the scientifically tested theories into one-to-one correspondence with the experiments done to test them, and that is a countable set.


Wouldn't an infinite number of things imply an infinite amount of energy which would render the conservation of energy laws meaningless?


> Does that imply that it could be possible to construct a system which gives uncomputable results?

They produce a family of systems for which it is not possible to construct an algorithm that can decide a certain property (being gapped) for each system.

> Does it assume an infinite plane or something? (Assume might be the wrong word)

Yes. The system is infinite. (Or, take a limit as the system size grows to infinity.)


But spectral gaps do occur in nature. Does that mean that even though it is impossible to compute whether a spectral gap will occur, we could just observe such a system in nature, and that way we could build a computer which computes uncomputable results?


In nature it does not matter if the spectral gap is really tiny: if it's small enough, the system is de facto gapless.

Whereas for this particular decision problem, the mere existence of a spectral gap is what matters.


Who says? Graphene gaps (it must gap), even though the gap is so small it's impossible to observe in practice. This had very important consequences for the theory of topological materials...


This is exactly what I meant.

Even if it has theoretical reasons to have a tiny gap, in practice it behaves as if the gap were zero.


Except that in our result, the spectrum is either continuous, or the spectral gap is guaranteed to be >= 1 (in natural units). It does not become arbitrarily small in the gapped case.

The reason you can't use this to compute the uncomputable is that real systems are finite, and the spectral gap is always computable in principle (maybe with a lot of effort) for any finite system. A real (possibly very large but still finite) system will either have a gap or not, and you'll be able to measure it. This definitely doesn't solve an undecidable problem.

However, the undecidability in the idealised infinite lattice limit "shows through" to the experimentally accessible finite-size case, in the form of some rather unusual finite-size physics. This is discussed in more detail in the paper itself (the relevant section is quoted verbatim here http://mathoverflow.net/a/225905) and in the comments on Scott Aaronson's blog.


Sounds like it, doesn't it? Could we build a big sheet of this material that encoded e.g. a search for a counterexample to the Riemann Hypothesis, and then just measure whether it had a gap or not? I can't believe that it would be that simple, but I can't immediately see anything wrong with that logic.


I think not, since these systems are "infinite", i.e. you would never know how big to build it before you got close enough to the correct result.


I've wondered if this can go the other way as well:

e.g. "If Golbach's conjecture is true, then FTL signalling is possible. Since physics says FTL is impossible, then GC must be false"

[If you could reduce GC to a scheme for moving information via quantum entanglement. I use GC as an example because it is famous and unproved, but maybe a better example is the Erdős discrepancy problem [0], which makes statements about arbitrarily long sequences in {1,-1}, just like sampling quantum states!]

[0]: http://arxiv.org/abs/1509.05363


I think most mathematicians would not accept something like that as a proof. Our understanding of physics is based on real-world observations and experiments in a way that math is not.


Since physical states exist in reality, it makes no sense to me that a physical problem is not mathematically decidable.

The only way a problem is mathematically undecidable is when it's malformed (i.e. nonsensical).


But surely any physical state would be available to you only as a representation, and not directly. Meaning that in order to make it decidable, you'll probably have to look at it differently.


Uncomputable or unsolvable?


Undecidable.

https://en.wikipedia.org/wiki/Undecidable_problem

See here for a concrete example of a similar problem (props to adrianN for the link in the first place):

https://en.wikipedia.org/wiki/Wang_tile#Domino_problem


These two are roughly the same thing. Formally, a proof is just a computation within an axiomatic system. A computation is just transforming one expression into another according to a set of rules. Whether this transformed expression is an algebraic or a logical formula is not a huge difference.

In terms that may be more familiar to programmers, Lisp's homoiconicity of data and code corresponds roughly to how Gödel was able to express logical formulae as arithmetic statements via Gödel numbering. Once you are able to encode logical formulae as natural numbers, you can then use laws of arithmetic (e.g. the Peano axioms) to compute/prove other statements/numbers.
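
As a toy illustration of the Gödel-numbering trick mentioned above (my own sketch, not Gödel's exact scheme): give each symbol a small code, pack a whole formula into one natural number as a product of prime powers, and decode it back. Because the encoding is reversible, statements about formulas can be re-expressed as statements about numbers, which arithmetic itself can reason about.

    # Symbols of a tiny formal language and their codes (all codes >= 1).
    SYMBOLS = {"0": 1, "S": 2, "=": 3, "+": 4, "(": 5, ")": 6, "x": 7}
    INVERSE = {v: k for k, v in SYMBOLS.items()}

    def primes(n):
        """First n primes by trial division (fine for short formulas)."""
        found, candidate = [], 2
        while len(found) < n:
            if all(candidate % p for p in found):
                found.append(candidate)
            candidate += 1
        return found

    def godel_number(formula):
        """Encode a formula as 2^c1 * 3^c2 * 5^c3 * ... for symbol codes ci."""
        codes = [SYMBOLS[ch] for ch in formula]
        g = 1
        for p, c in zip(primes(len(codes)), codes):
            g *= p ** c
        return g

    def decode(g, max_len=50):
        """Recover the formula by reading off prime exponents in order."""
        out = []
        for p in primes(max_len):
            if g == 1:
                break
            exp = 0
            while g % p == 0:
                g //= p
                exp += 1
            out.append(INVERSE[exp])
        return "".join(out)

    n = godel_number("S0=S0")   # "1 = 1" written with the successor symbol S
    print(n)                    # 2^2 * 3^1 * 5^3 * 7^2 * 11^1 = 808500
    print(decode(n))            # S0=S0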


The post and linked article titles don't match at all (??).


Such a statement just means they can implement computation in this formalism. Not a huge surprise as you can implement computation with many other structures.


This is being downvoted, but I'd like an explanation of what the alternative is. Can someone explain how this result differs from the claim "I can build a computer which turns a light on when a given program terminates, therefore 'light on-ness' is undecidable"?


Whether a light is on or off is contingent. This is more like proving the light would always be on as a fundamental fact of basic physics. Even if it were done by showing the existence of a computer connected to the light, proving that from first physical principles would be a big achievement.


You have no idea what you're talking about. That is not what undecidable means.

Also take a look at the Church-Turing thesis.


> Also take a look at the Church-Turing thesis

The Church-Turing thesis is an (unprovable) statement of faith, not a mathematical theorem.

Effectively, it states that "all algorithms can be computed on a Turing machine, and vice-versa".

Note that this thesis is not as radical as it first sounds; I'm assuming that if we ever do find a way of computing functions that cannot be computed on a Turing machine, then we'll just call it something else, not an 'algorithm'. This would make the Church-Turing thesis just a tautological circular definition.


> it states that "all algorithms can be computed on a Turing machine, and vice-versa".

It is about algorithms of a certain kind, the kind of computation that we do when we, say, factor polynomials or check a step in a Euclidean geometry proof. That is the kind of computation called for in the Entscheidungsproblem. Turing's analysis of that kind of computation was very persuasive, and consequently influential.

But as to whether there is more, whether we can go farther with some kind of analog computation involving lasers and interference, or some kind of quantum stuff ... well, maybe. It doesn't seem impossible that Turing, in the 1930s, lying on the grassy river bank after a hard run and reflecting on what a device can compute, did not imagine it all. To date (as I understand it) no analysis of what can be done with a real device under current physics has, for the community of experts, the same persuasiveness as Turing's work.

This paper seems to me to at least hint that there may be things that nature computes, but which no Turing machine can compute. It makes me wish to understand the physics.


Well, normally one makes the thesis concrete by saying that lambda calculus and Turing machines yield an equivalent notion of computability.


But, but, but - what if I use a D-Wave machine to solve it? ;-)


I know you jest, but is there a proof (or good hypothesis) that allows one to convert a formula into an appropriately sped-up, quantum-optimized algorithm?

Compare the Sieve of Atkin to Shor's algorithm, for what I am referring to.


My understanding from reading Scott Aaronson is that his belief is that most algorithms will not have significant speedups as quantum algorithms, and that the currently known algorithms with large (exponential) speedups aren't very useful, in the sense that they're just breaking cryptographic problems which were used solely because they were hard to break. In other words, don't expect everything to have a Shor's algorithm analogue.


Then you find it's undecidable a million times more quickly.



