This is something I think will break the mystique around Carmack. Back in the 1990s and early 2000s he was considered the whiz kid of rendering engines, but something as nuanced and still poorly understood as intelligence (especially AGI) is likely too far outside his expertise to develop. Why? Because the last 60 or so years of work in the field haven't yielded much in the way of results, in either pure research or practical application. Machine learning and other hat tricks aren't really AI proper; they're probability models that work well enough on a narrow set of problems to be commercialized (e.g., chat bots as help desk replacements). The idea that anyone, and I mean anyone, will develop artificial general intelligence before the 2050s, or even later, is laughable to me. We can barely understand how to talk to parts of our own brains, and we're still learning from so-called lower lifeforms about how they solve problems (fun fact: it seems bumblebees like to play, or at least do things roughly analogous to play, all without anything like a human or hominid brain; so much for our fancy hardware). So it just seems like hubris to imagine any corporation, research department, or lone scientist cracking this nut. With respect to computing, I think we're still in the technological Bronze Age. It would be nice if we finally got started on our computing Iron Age.