"Right" feelings and tonal language? "Right" for what? For whom?
We've already seen how much damage dishonest actors can do by manipulating our text communications with words they don't mean, plans they don't intend to follow through on, and feelings they don't experience. The social media disinfo age has been bad enough.
Are you sure you want a machine which is able to manipulate our emotions on an even more granular and targeted level?
LLMs are still machines, designed and deployed by humans to perform a task. What will we miss if we anthropomorphize the product itself?
This gives me a lot of anxiety, but my only recourse is to stop paying attention to AI dev. It's not going to stop, downside be damned. The "We're working super hard to make these things safe" routine from tech companies, who have largely been content to make messes and then not be held accountable in any significant way, rings pretty hollow for me. I don't want to be a doomer, but I'm not convinced the upside is good enough to justify the downside.