Who ultimately governs the behavior of intelligent machines that influence military power?
Anthropic, until very recently, occupied a privileged position within the technological architecture of the United States national security apparatus. The company, founded by former OpenAI researchers and lavishly financed by Silicon Valley capital, developed an advanced family of large language models known as Claude. These systems were designed to ingest vast volumes of data, synthesize intelligence, assist engineers, support cyber operations, and accelerate the decision-making processes that increasingly define modern warfare. Because of these capabilities, Claude became the only frontier artificial intelligence (AI) system authorized for use in certain classified Pentagon environments, where it supported intelligence analysis, research inside national laboratories, cybersecurity tasks, and complex logistical modeling.
Read Full Article: https://armedforces.press/pentagon/2026/03/13/the-ideological-contamination-of-the-arsenal-why-the-pentagon-cast-anthropic-out-of-americas-military-ai-supply-chain/
