I am not an expert on LLMs, so I may be misunderstanding here. But doesn't this research basically imply one of two things?
1. LLMs are not really capable of "being controlled" in the sense of saying, "I want you to hold certain views about the world and logically extrapolate your viewpoints from there." Rather, they differ in political biases because the content they are trained on differs.
...or...
2. LLMs are capable of being controlled in that sense, but their owners are deliberately pushing the scales in one direction or another for their own aims.
You seem to believe that LLMs are a neutral engine with bias applied on top. That's not the case: the majority of the bias is in the training data itself.

Just like humans, actually. For example: grow up in a world where chopping off one of a person's fingers every decade is normal and happens to everyone, and most people will think it's fine, that it's how you keep the gods calm, and other strange things like that.

Right now, news, Reddit, Wikipedia, etc. have a strong authoritarian and progressive bias, so the models do too, as do a lot of the humans who consume daily news, TikTok, and Instagram.