AI company Anthropic is embroiled in a complex conflict between national security, economic interests, and ethical principles. After the U.S. Department of Defense designated the company a supply chain risk, the Pentagon threatened expropriation unless Anthropic removed security filters from its AI model Claude for military use. The designation, which stems from an order by Donald Trump, has been questioned by legal experts as potentially unlawful because it could shut Anthropic out of government contracts. In response, the company has sued the U.S. government, arguing that the classification is legally invalid and infringes on its freedom of speech. Anthropic maintains that the designation does not affect its regular operations but could cause significant financial and reputational damage.
The conflict between Anthropic and the Pentagon is part of a broader geopolitical and technological struggle. While Anthropic emphasizes AI safety and refuses to permit military applications whose safety and reliability cannot be guaranteed, OpenAI has secured a Pentagon contract for military AI applications, highlighting how sharply AI companies diverge on the use of their technologies. Separately, Anthropic has accused competitors such as DeepSeek, Moonshot, and MiniMax of illegally using Claude models to train their own AI systems through distillation. The company views this as a geopolitical risk, fearing that its technology could fall into the hands of authoritarian regimes.
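The distillation technique named in these accusations can be illustrated with a minimal sketch: a "student" model is trained to match the softened output distribution of a "teacher" model, so the teacher's behavior transfers without access to its weights or training data. The function names and example logits below are hypothetical and chosen purely for illustration; they do not describe any party's actual implementation.

```python
import math

def softmax(logits, temperature=1.0):
    """Convert raw logits to probabilities, softened by a temperature."""
    scaled = [z / temperature for z in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    """Cross-entropy of the teacher's softened distribution against the
    student's -- the core objective minimized during distillation."""
    teacher_probs = softmax(teacher_logits, temperature)
    student_probs = softmax(student_logits, temperature)
    return -sum(t * math.log(s) for t, s in zip(teacher_probs, student_probs))

# A student whose logits mirror the teacher's incurs a lower loss than one
# that disagrees; minimizing this loss over many queries gradually copies
# the teacher's behavior into the student.
teacher = [4.0, 1.0, 0.2]
close_student = [3.8, 1.1, 0.3]
far_student = [0.2, 1.0, 4.0]
assert distillation_loss(teacher, close_student) < distillation_loss(teacher, far_student)
```

In the scenario Anthropic alleges, the "teacher" outputs would be harvested from a commercial API, which is why providers treat large-scale querying for training purposes as a contractual and geopolitical concern.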
Parallel to these developments, several studies suggest that expectations for AI's economic benefits often go unmet. A study by the National Bureau of Economic Research found that most companies have seen no measurable productivity or employment effects from AI despite heavy investment. Meanwhile, a report by Fastly and Sapio Research indicates that companies that integrate AI heavily are more vulnerable to cyberattacks: damage costs rise by 135 percent, and two-thirds of respondents reported a significant data breach within three months. These security risks underscore the importance of the stringent safety guidelines that Anthropic defends.
In everyday life, AI is also gaining increasing importance. A study by GFU shows that over two-thirds of Germans use digital AI assistants, particularly for purchase advice and price comparisons. Older users are especially open to the technology, and many expect AI to become a standard part of shopping support in the near future. At the same time, the authenticity of AI-generated content is increasingly questioned: A review of Resident Evil Requiem on the website Videogamer was removed by Metacritic due to suspicion of AI generation. The debate was fueled by unusual author profiles and restructuring at the media group ClickOut Media, with Metacritic stating that AI-generated reviews are not accepted.
Behind these technological and economic developments lies progress in hardware innovation. Researchers at Cornell University have developed a high-resolution 3D electron microscopy technique that can detect defects in gate-all-around transistors down to the atomic level. Based on electron ptychography, the method provides detailed 3D structural data that aids defect analysis in chip development. This innovation could accelerate the development of more powerful and more secure chips, which could form the foundation for future AI systems. Meanwhile, Anthropic and the Pentagon are resuming negotiations on an AI agreement, with contentious issues such as mass surveillance and autonomous weapons under discussion.