I agree that they provide that disclaimer on the homepage. I was talking more broadly: society (namely the news media and government) should be aware of the limitations of LLMs in general. Take this article from the NYT[1]: how you react to it depends on how well you understand the limitations of LLMs, so it reads as either alarming or "meh". All I'm saying is that society in general should understand that LLMs can generate fake information, and that this is just one of their core limitations, not a nefarious feature.
[1]: https://www.nytimes.com/2023/02/08/technology/ai-chatbots-di...