We know that Grok, like other LLMs, is trained on data that is not held to the rigorous standards of knowledge that exist in fields like journalism or academia.
There is no reason to think that such a system is even capable of determining truth. At best, we might be able to say that it reflects a consensus of opinion, but even that is a stretch given the nature of these systems.
And that ignores the fact that we know these products are designed to be sycophantic to their users/operators. That is not a recipe for objective knowledge, especially given what we know about Grok (meddling by Musk).
And to cap it all off, for all of Wikipedia's real or perceived flaws, all of the decision making is done in public view. This is very much not the case with Grok.
It's dystopian and obviously so. Your comment about "Luddite mentality" is farcical on its face. Of you, I might say something like... "techbro mentality", or maybe even something less flattering.
Your point about Wikipedia being out in the open is correct and fair. Grokipedia should do the same. Grokipedia is at version 0.1, and its developers have stated the intention to open-source it, so it seems likely that it will be similarly open.
Just because it's done in the open doesn't mean it can't have some pretty bad biases, though. Just looking at the set of allowable sources shows pretty extreme bias to begin with. The type of people who self-select to become Wikipedia editors (much like Reddit mods) skew heavily on many topics, and there isn't much effort to correct for it.
Again though, you are not pointing out anything wrong with the information presented itself, only the fact that you don’t like the person/technology compiling the information.
Wikipedia is known to have explicit biases, which is the entire point of the Grokipedia project.
>Fact-checking by an LLM is however not acceptable to many.
Sort of a Luddite mentality to dismiss information without even reading it just because a technology you don't like was involved in its production.