Which is why "prompt injection" is just the flip side of intelligence in this sense. We want LLMs to be able to do risk/benefit analysis and act on it; we cry "security vulnerability" when the model makes a different choice from the one we'd have liked. But you can't have the former without the possibility of the latter.
Show it to my boss and let them decide.