
Yes, this is an old idea (which I really like), but it hasn't really taken off yet. GridCoin was one example, where you earned coins by solving BOINC problems; RLC is another, aimed at more general computation.


The problem is that, currently, large ML models need to be trained on clusters of tightly-connected GPUs/accelerators. So it's kinda useless having a bunch of GPUs spread all over the world with huge latency and low bandwidth between them. That may change though - there are people working on it: https://github.com/learning-at-home/hivemind


It hasn't taken off because it doesn't work. PoW only works for things that are hard to calculate but easy to verify. Any meaningful result is equally hard to verify.


> Any meaningful result is equally hard to verify.

This is very much not true. A central class in complexity theory is NP, whose problems may be hard to solve but whose "yes" answers are easy to verify given a proposed solution.

E.g., is there a path visiting all nodes in this graph with total length less than 243000? Hard to answer, but any proposed path is easy to check.
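To make the asymmetry concrete, here's a minimal sketch of that check (the graph, path, and bound are made-up illustrative data): verifying a proposed path is a single linear scan, even though finding one may require exponential search.

```python
def verify_path(graph, path, bound):
    """Check that `path` visits every node exactly once and that its
    total edge weight is below `bound` -- O(n) work given the answer."""
    if sorted(path) != sorted(graph):
        return False  # must visit all nodes exactly once
    total = 0
    for a, b in zip(path, path[1:]):
        if b not in graph[a]:
            return False  # proposed path uses a nonexistent edge
        total += graph[a][b]
    return total < bound

# Tiny weighted graph as an adjacency dict (hypothetical data):
graph = {
    "A": {"B": 1, "C": 5},
    "B": {"A": 1, "C": 2},
    "C": {"A": 5, "B": 2},
}
print(verify_path(graph, ["A", "B", "C"], 4))  # 1 + 2 = 3 < 4 -> True
```

The verifier never searches; it only checks the certificate it was handed, which is exactly the property PoW needs.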


It's easy to verify ML training: run inference on a held-out test set and check that the error is lower than it was before.

Training a neural net is much slower than inference (1000x at least), because it has to compute gradients on top of every forward pass, repeated over many passes through the data.



