It's definitely been reported before that Claude was used for Iran attacks, at the beginning of March or earlier:

https://www.theguardian.com/technology/2026/mar/01/claude-an...

Edit: Also, https://www.washingtonpost.com/technology/2026/03/04/anthrop...



Amodei looks absolutely prescient for taking a stand against use of Claude in the kill chain. Not to mention how utterly foolish the DoD looks declaring Claude to be a national security threat while simultaneously using it to choose targets. No wonder they got humiliated in court.


Well, to people who don't believe in precognition, it sounds like Anthropic had quality control engineers dedicated to their military clients' usage: essentially running through the prompts, inspecting the answers, and digging deeper into how their chatbots produced those answers. Somebody must have pressed the high-alert button, resulting in Anthropic taking a stance.


Certainly possible, but I'd assume the DoD expressly forbade anyone from looking at their usage, and Anthropic had to agree to that to win the contract. They may have gotten wind of what was happening some other way.


"The U.S. used Anthropic's Claude to support Operation Epic Fury against Iran yesterday, sources familiar with the Pentagon's operations tell Axios."

OK. The US probably also used telephones and Diet Coke. Nothing cited said that Claude was selecting targets or informing target selection.
