Yeah, in that regard we should always treat it like a junior something. Very much like you can't expect your own kids to never do something dangerous, even if you tell them for years to be careful. I got used to picking my kid up from kindergarten with a new injury at least once a month.
I think it's very dangerous to use the term "junior" here because it implies growth potential, when in fact it's the opposite: you are using a finished product, and it won't get any better. AI is an intern, not a junior. All the effort you spend correcting it will leave the company, either as soon as you close your browser or whenever the manufacturer releases next year's model -- and that model will be better regardless of how much time you waste training this year's intern, so why even bother? Thinking of AI as a junior coworker is probably the least productive way of looking at it.
We should move well beyond human analogies.
I have never met a human who would straight up lie about something, or build up so many deceptive tests that it might as well be lying.
Granted, this is not super common in these tools, but it is essentially unheard of among junior devs.
> I have never met a human that would straight up lie about something
This doesn't match my experience. Consider high-profile things like the VW emissions scandal, where the control system was intentionally programmed to engage only during the emissions test. Dictators. People are prone to lie when it's in their self-interest, especially for self-preservation. We have entire structures of government, such as courts, that try to establish fact in the face of lying.
If we consider true-but-misleading, then politics, marketing, etc. come sharply into view.
I think the challenge is that we don't know when an LLM will generate untrue output, whereas we expect people to lie in certain circumstances. LLMs don't have clear self-interests, or the self-awareness to lie with intent. It's just useful noise.
There is an enormous difference between planned deception as part of a product and undermining your own product with deceptive reporting about its quality. The difference is collaboration and alignment. You might have evil goals, but if your developers are maliciously incompetent, no goal will be accomplished.