To be fair, LLMs (especially Google's) aren't merely 24 months old. This model is part of a long line that draws its heritage from BERT and Flan-T5. Google has been at this longer than most, particularly in the field of edge-compute models. This isn't even close to a first-generation model family.

That's not to say this is an insignificant contribution. New models are great, especially when released for free, and it's important for big firms to keep the ball rolling so the tech keeps progressing. That said, there is also legitimate concern that LLMs aren't improving as fast as they used to, and that we may have hit the proverbial bathtub curve of AI progress.

I think there is valid criticism of Google for inventing a cool technology only to have the rest of the industry discover its usefulness before them. But saying Gemini 1.0 or OG Gemma aren't first-generation models because BERT and Flan existed before is like saying the iPad wasn't a first-generation device because Apple made the Newton. Sure, they're the same in that they're transformers trained on text, but these are new families of models: the training mechanisms are different, the architectures are different, the datasets are different, the intended purposes are completely different, and so on. At some point I suppose it's a semantic distinction.
