Pardon my naivety. The only reason individuals would willingly participate at a large scale would be for an open-source AI project. Does that mean that once the problem is solved, everybody will likely be able to run their own state-of-the-art AI, or are the real-time calculations too intensive for a single server?
As a reference point, the large version of Facebook's LLaMA was originally designed to run on 4 server-grade GPUs, but after it leaked people managed to get it running on a normal computer with just a CPU and roughly 40 GB of RAM, though it's fairly slow that way. GPT-3 is about 3 times larger than that.

So far, at least for LLMs, we've mostly explored two directions: really big models with the best capabilities (OpenAI etc.) and models that are cheaper to train for comparable capabilities (Facebook etc.). There's comparatively little work on models built to run with as few resources as possible, though there are indications that you can spend more compute at training time to get models that are much cheaper to run (training a small model for much longer to match the capabilities of a larger one). That could be a great route for an open-source project: pooling a huge combined training effort to produce something everyone can more easily run at home. For a sense of the scale involved, see the rough sketch below.
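To put ballpark numbers on that, here's a quick back-of-the-envelope estimate of how much memory you'd need just to hold the weights at different precisions. The parameter counts and bytes-per-weight figures are approximations I'm assuming for illustration, not exact specs for any particular release, and it ignores activations and the KV cache:

```python
# Rough memory estimate for holding an LLM's weights in RAM/VRAM.
# Numbers are assumptions for illustration, not official specs.

def weight_memory_gb(params_billion: float, bytes_per_weight: float) -> float:
    """GB needed just for the weights (ignores activations, KV cache, overhead)."""
    return params_billion * 1e9 * bytes_per_weight / 1e9

models = {
    "LLaMA 65B": 65,     # the large LLaMA variant
    "GPT-3 175B": 175,   # roughly 3x larger
}

for name, size in models.items():
    fp16 = weight_memory_gb(size, 2)    # 16-bit weights
    int4 = weight_memory_gb(size, 0.5)  # 4-bit quantized weights
    print(f"{name}: ~{fp16:.0f} GB at fp16, ~{int4:.0f} GB at 4-bit")
```

That works out to roughly 130 GB for LLaMA 65B at fp16 versus around 33 GB when quantized to 4 bits, which is about where the "runs on a CPU with ~40 GB RAM" reports land.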
Of course that's mostly about LLMs; for image generation you can already run Stable Diffusion on any reasonably decent GPU, and being able to collaboratively train better models could be a huge boon there too.
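For what it's worth, running it locally is only a few lines with the Hugging Face diffusers library. This is just a minimal sketch; the checkpoint name is one example and a card with around 6-8 GB of VRAM at fp16 is an assumption on my part, not a hard requirement:

```python
# Minimal sketch: generate an image locally with Stable Diffusion via diffusers.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # example checkpoint; swap in any SD model
    torch_dtype=torch.float16,         # half precision to fit on consumer GPUs
)
pipe = pipe.to("cuda")

image = pipe("a watercolor painting of a lighthouse at dusk").images[0]
image.save("lighthouse.png")
```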