OpenAI Takes Pentagon Contract After Anthropic Exclusion – Criticism Over Mass Surveillance and Security Gaps

Widely Covered
The Trump administration has made a significant decision in the field of artificial intelligence: the U.S. military will no longer use AI models from Anthropic, which has been classified as a supply chain risk. The move followed growing concerns over data security and control after Anthropic refused the Pentagon's demand to remove certain safety guidelines from its models for military applications. In the wake of this decision, OpenAI secured a major Pentagon contract for handling sensitive data, drawing widespread criticism over the potential for mass surveillance and autonomous weapons. While OpenAI defended the agreement, stating that it involves neither domestic surveillance nor autonomous weapons, the ethical and political concerns remain unresolved.

The backlash was intensified by the resignation of Caitlin Kalinowski, head of OpenAI’s robotics division, who stepped down after publicly criticizing the company’s military ties, particularly regarding mass surveillance and autonomous weapons. Her departure highlighted the deep ethical tensions within the AI development community, where technological advancement often clashes with moral and societal responsibilities. Despite OpenAI’s reassurances, the incident underscored the ongoing debate about whether society is willing to accept AI technologies in military contexts when ethical boundaries remain ambiguous.

In a separate but equally concerning incident, a Mexican software startup incurred over $82,000 in cloud costs within two days after a Google Cloud API key for accessing Google's Gemini AI service was stolen. The incident revealed how vulnerable AI services are to unauthorized use once access credentials are compromised: the charges landed on the startup's Google Cloud bill even though the startup was not responsible for the misuse. The case highlights the urgent need for stronger security protocols and clearer accountability between technology providers and their customers, especially in the nascent AI industry, where startups often operate with limited resources.
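A common root cause in incidents like this is an API key hardcoded in source code or shipped in a client binary. A minimal defensive sketch in Python: read the credential from the environment (or a secret manager) and refuse to start without it. The variable name `GEMINI_API_KEY` and the helper name are illustrative assumptions, not names mandated by Google.

```python
import os


def load_api_key(var_name: str = "GEMINI_API_KEY") -> str:
    """Read an API key from the environment instead of hardcoding it.

    Keys committed to a repository or embedded in a distributed app
    are a frequent leak vector; keeping them in environment variables
    or a secret manager keeps them out of version control entirely.
    The variable name here is an example, not an official convention.
    """
    key = os.environ.get(var_name)
    if not key:
        # Failing fast is safer than silently running without a
        # credential or falling back to a baked-in default key.
        raise RuntimeError(f"{var_name} is not set; refusing to start")
    return key
```

Complementary measures on the provider side of the account, such as restricting a key to specific APIs and configuring billing budget alerts, can limit the damage if a key does leak.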