Hacker News | s1dev's comments

Is there some good intuition why P-complete problems are difficult to parallelize? This is the first I've heard of it (but then again, I'm usually interested in more obscure complexity classes)


This is the first I've heard of it as well (and I have properly studied NP-complete problems in graph theory, so I don't know why this escaped my attention). It seems that P-complete problems are difficult to parallelize by definition, as they are the set of tractable problems (as opposed to NP) which don't parallelize well.

"The class P, typically taken to consist of all the "tractable" problems for a sequential computer, contains the class NC, which consists of those problems which can be efficiently solved on a parallel computer. This is because parallel computers can be simulated on a sequential machine. It is not known whether NC = P. In other words, it is not known whether there are any tractable problems that are inherently sequential." From: https://en.m.wikipedia.org/wiki/P-complete


It's not by definition, there's a bit of math behind it! :)

The intuition is that the hardest problems in P have long linear dependency chains. If you had a good parallel algorithm for a P-complete problem, you could take a problem with a dependency chain and solve parts of it in parallel without waiting for the results those parts depend on.
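For intuition, the canonical P-complete problem is the circuit value problem: given a Boolean circuit and its inputs, compute its output. Here is a minimal Python sketch (the helper is hypothetical, written just for illustration) of why a chain of gates, each reading the previous gate's output, forces sequential evaluation:

```python
# Circuit value problem (CVP) sketch: evaluate gates in topological
# order. When gate k reads gate k-1's output, no gate can start
# before the previous one finishes -- this is the dependency chain.
def eval_circuit(inputs, gates):
    """gates: list of (op, i, j) tuples referencing earlier wire indices."""
    wires = list(inputs)
    ops = {"AND": lambda a, b: a and b,
           "OR":  lambda a, b: a or b,
           "NOT": lambda a, _: not a}
    for op, i, j in gates:   # each gate may depend on the one before it
        wires.append(ops[op](wires[i], wires[j]))
    return wires[-1]

# A chain where every gate depends on the previous gate's output:
out = eval_circuit([True, False],
                   [("OR", 0, 1),    # wire 2
                    ("AND", 2, 0),   # wire 3 reads wire 2
                    ("NOT", 3, 3)])  # wire 4 reads wire 3
print(out)  # -> False
```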


Cerium is correct, we don't know if P is efficiently parallelizable.

Is there a formal proof of what you're talking about that we can read?


Are you perhaps confusing P with P complete?

https://www.researchgate.net/profile/Walter-Ruzzo/publicatio...


Why does that matter? P-complete is a subset of P.


What am I looking for in this 300 page document? Proof by intimidation, eh?


There is an easy everyday example: a PBKDF (password-based key derivation function), the whole point of which is to be expensive to compute and hard to parallelize efficiently.
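A simplified sketch of that sequential core (this is not full PBKDF2, which also XORs the intermediate digests together, but it shows the dependency chain: iteration i cannot begin until iteration i-1 has produced its digest):

```python
# Iterated-HMAC sketch of a PBKDF's sequential core: each round
# consumes the previous round's digest, so a chain of n rounds
# cannot be split across threads.
import hashlib
import hmac

def iterated_hmac(password: bytes, salt: bytes, iterations: int) -> bytes:
    u = hmac.new(password, salt, hashlib.sha256).digest()
    for _ in range(iterations - 1):
        u = hmac.new(password, u, hashlib.sha256).digest()  # depends on previous u
    return u

tag = iterated_hmac(b"hunter2", b"salt", 10_000)
print(tag.hex()[:16])
```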


Yes, linear(-ish) dependency chains so that your threads have to wait for one thread to provide a result (infinitely often).


Are you stretching "this specific strategy for parallelizing P that I came up with won't work" to "there's no way to parallelize P"?


It’s more like: you win a Turing Award by finding a strategy to parallelize this problem, since you’d be able to use that approach to parallelize all problems in P, proving NC = P.


This applies both ways. You'll win a Turing Award if you prove NC ≠ P, which is kind of what you said; at least, that's the best way I can see of reading your first message and a few of the following ones.


This is not quite true. You only need to keep the qubits at a fixed temperature as you scale the system, so the resources required to add additional qubits grow only polynomially with the system size. Once you have many qubits with a sufficiently low (but constant) error rate, you can do quantum error correction which also only has polynomial overhead.


No, you are missing the fact that if the qubits are coupled, the system's fundamental gap has shrunk, demanding a lower temperature for the same error rate.

You should think more about why your rosy scheme hasn't worked yet; if you can't explain that empirically, maybe you don't quite understand.


The qubits aren't all coupled all the time. The whole point of a 2-qubit gate is the coupling is controllable. Also -- why is the gap of the bare system important? The whole point of QEC is the creation of a decoherence free subspace. Your model is wrong and you don't understand quantum error correction.


The qubits aren't meant to be coupled, but in practice everything is coupled at some level. I find it plausible that having a system with tons of degrees of freedom on a limited energy scale might make it difficult to isolate all the degrees of freedom from each other.


Yeah but these are the most basic of basic problems in QC and people would be discarding approaches that didn’t have potential solutions to this issue. In superconducting QCs, neighboring qubits are not necessarily even resonant, and are in their own little metal boxes. With neutral atoms, you have extremely long lived internal states that barely couple to the environment, and the atoms are macroscopic distances from each other.


I hear good things about the book “How to Prove It”.

I also like Knuth’s “The Art of Computer Programming”, which is more about proving the correctness of algorithms.

In general, I find stuff like combinatorics and algorithms to have a low barrier to entry, so it’s a nice playground.

https://www.cambridge.org/highereducation/books/how-to-prove...


Yes. Breaking things in aviation frequently results in a body count


The correction card is legally required equipment, and there are a few maintenance items that trigger a requirement to update it.


What experimental evidence of quantum gravity is there? I like to argue for theoretical evidence of quantum gravity, but afaik there hasn’t been any experiments to rule out classical GR coupled to a quantum matter sector


The fact that there is gravity at all, which is not a part of the standard model.


It is worth noting that all known classical algorithms for this problem scale exponentially with the problem size, and fundamentally there is no reason why putting 60 or 70 qubits in the fridge is particularly more difficult than 53. The point is that while classical algorithm advances can take orders of magnitude off the runtime, this has to be done _every_ time there is a small increase in the size of the quantum computing device in order to kill a supremacy claim.
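For a sense of that scaling, assuming a brute-force state-vector simulation that stores all 2^n complex amplitudes (16 bytes each as double-precision pairs), memory doubles with every qubit added:

```python
# Back-of-envelope: a full state-vector simulation of n qubits stores
# 2**n complex amplitudes at 16 bytes each, so memory doubles per qubit.
def statevector_bytes(n_qubits: int) -> int:
    return (2 ** n_qubits) * 16

for n in (53, 60, 70):
    print(n, statevector_bytes(n) / 2**50, "PiB")
# 53 qubits is already 128 PiB; each extra qubit doubles it.
```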

(To clarify, I personally think optimization is not a remotely near-term application)


You're assuming that quantum algorithms scale better than classical ones. To the best of my knowledge (which is not much), the cases in which this is known to be true are extremely limited. We have a faster algorithm for factoring numbers, Shor's algorithm, and I'm not sure there's anything of note beyond that.


I once ran across an application whose heap footprint was fixed in size at runtime. The data-locality loss from the indirection killed the performance. In principle, a JIT could fairly transparently unbox this and have no heap footprint.
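As a rough illustration of the locality cost (shown in Python rather than a JIT'd language): a list of boxed floats adds a pointer dereference per element, while an unboxed contiguous array does not; the contiguous layout is the kind a JIT could recover by unboxing a fixed-size structure.

```python
# Boxed vs unboxed storage: a Python list holds pointers to
# heap-allocated float objects (one indirection per read), while
# array('d') stores the doubles contiguously with no per-item object.
from array import array

boxed = [float(i) for i in range(1000)]   # list of pointers to PyFloat objects
unboxed = array("d", range(1000))         # contiguous 8-byte doubles

print(unboxed.itemsize)                   # 8 bytes per element
print(sum(boxed) == sum(unboxed))         # same values, different layout
```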


Quantum error correction is a key ingredient for future realizations of fault-tolerant quantum computers.

Physics is local, so errors in widely separated regions of the device are only slightly correlated. In this setting, threshold theorems can be proved showing that as long as the device is below some constant amount of noise at the physical level, the noise can be suppressed arbitrarily (exponentially) at the logical level.
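As an illustration (the constants below are made up, not taken from any specific device or code), the threshold scaling is often written as p_logical ≈ A·(p/p_th)^((d+1)/2) for a distance-d code, so once the physical rate p is below the threshold p_th, each increase in the code distance multiplies the suppression:

```python
# Illustrative threshold scaling for a distance-d code, with
# hypothetical constants A and p_th: below threshold (p < p_th),
# the logical error rate shrinks exponentially in d.
def logical_error(p, p_th=0.01, d=3, A=0.1):
    return A * (p / p_th) ** ((d + 1) // 2)

for d in (3, 5, 7, 11):
    print(d, logical_error(p=0.005, d=d))  # halves with each unit of (d+1)//2
```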


Very interesting! How close are we to achieving said noise threshold?


More or less, we're just before the finish line. About 20% lower noise and we'll be there. For every little bit past that, the overhead gets reduced immensely

https://arxiv.org/abs/2207.06431


From a theoretical point of view, this is evidence that quantum computing is a more powerful model of computation. I think it would be hard to argue that a model of computation that can solve more problems is any less useful. Applications like quantum simulation do appear to be difficult on a classical computer yet efficiently computable on a quantum computer


Basketball is hard to simulate on a classical computer, but easy to simulate by playing a game of basketball.

