Is Nvidia really cooked? If this new RL tech does scale, couldn't someone just train a bigger model that would require more compute for training and inference?
I've read that DeepSeek's team managed to work around hardware limitations, which in theory undercuts the "gatekeeping" or "frontrunning" expectations baked into Nvidia's valuation. If a chunk of the investment in Nvidia is a bet on those expectations, that would explain part of the stock turbulence. I think their 25x inference price reduction vs. OpenAI is what really moved things, more than the (uncertain) training cost reduction.
We all use PCs, and heck, even phones, that have thousands of times the system memory of the first PCs.
Making something run really efficiently on older hardware doesn't necessarily imply less demand. If those lessons carry over to newer generations of hardware, they make the newer hardware all the more valuable.
Imagine an s-curve with capital expenditure on compute on the x-axis and "performance" on the y-axis. It's possible this doesn't change the upper bound of the s-curve at all, just shifts the curve way to the left: you reach roughly the same performance ceiling while spending far less on compute. That scenario would wipe out a huge amount of Nvidia's value.
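To make that concrete (a rough sketch with made-up notation, not from any actual scaling-law paper): model performance as a logistic function of log capex,

$$ P(c) = \frac{P_{\max}}{1 + e^{-k(\log c - \mu)}} $$

where $c$ is compute spend, $P_{\max}$ is the performance ceiling, $k$ is the steepness, and $\mu$ is the log-spend at the curve's midpoint. An efficiency breakthrough shrinks $\mu$ while leaving $P_{\max}$ untouched, so near-ceiling performance arrives at a fraction of the capital expenditure.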
I don't think it matters much to Nvidia so long as they're the market leader. If AI gets cheaper to compute, it just changes who buys: demand goes from hyperscalers to an AI chip in every phone, tablet, and laptop. Still lots and lots of money to be made.