Concerns Over "Military Decisions Faster Than Human Thought"
Preference for Anthropic, Which Defied Trump, Grows as ChatGPT Usage Declines

Following the U.S. strikes on Iran, it has become clear that Artificial Intelligence (AI) has established itself as a core asymmetric capability and essential infrastructure on the battlefield, sparking growing concern over AI's military application. AI-based military technology played a pivotal role not only in the drone-centered war in Ukraine but also in the U.S. airstrikes on Venezuela last January. The U.S. military already uses AI widely for satellite imagery analysis, cyber threat response, and the operation of missile defense systems.
This time, however, the scope and intensity of AI use have reached a level described as "AI warfare." The Wall Street Journal reported on the 1st that U.S. Central Command used Anthropic's generative AI model Claude for intelligence assessment, target tracking, and battlefield scenario simulations. AI has been "internalized" into military decision-making: it identifies targets and runs strike simulations, and commanders approve operations based on those outputs. This is what enabled the U.S. to mount a "blitzkrieg," striking 90 precision targets inside Iran within just 12 hours of the operation's start.
Late last month, Anthropic became the first American company to be designated a "supply chain risk" by the U.S. government, after it insisted on ethical controls, such as ruling out mass surveillance and autonomous lethal weapons, against the Department of Defense's demand that Claude be available for "all lawful purposes." President Donald Trump subsequently banned federal agencies from using Claude. However, because Claude is already integrated into U.S. military classified networks via Palantir's platform, experts predict it will take at least three months to replace.
As AI drastically shortens the time required for operational planning and decision-making, ethical controversies are surfacing. Craig Jones, a lecturer at Newcastle University, told the Guardian that "AI weapons are, in some ways, recommending what to attack faster than the speed of thought." Research suggesting that AI models hold a weak taboo against nuclear use is also raising alarms. In a recent King's College London simulation that ran three AI models (GPT-5.2, Claude Sonnet 4, Gemini 3 Flash) through hypothetical geopolitical conflicts, the models chose the nuclear option in 95% of scenarios, preferring nuclear strikes to negotiations or sanctions.
Observers also expect U.S.-China AI competition to intensify. The battlefield effectiveness of cutting-edge U.S. AI models is likely to further spur China's push for technological self-reliance. William Wei, vice president of the cybersecurity firm Webray, told the South China Morning Post on the 2nd, "The militarization of AI is a wake-up call for the entire industry, showing the urgency for China to accelerate its self-reliance." The National Interest forecast that this will strengthen the case for the U.S. to maintain its lead in advanced computing, semiconductors, and AI integration.
The AI industry landscape is also shifting, with "latecomer" Anthropic gaining momentum despite its standoff with the Pentagon. Claude, which emphasizes "AI safety," has seen its user base surge, while the standing of OpenAI, which signed a "full-scale utilization" contract with the DoD, is declining. Claude has held the No. 1 spot on the U.S. App Store's free apps chart since February 28, overtaking OpenAI's ChatGPT. Meanwhile, roughly 900 employees of Google and OpenAI have launched a petition titled "We Are Not Divided," urging their management to refuse the DoD's use of AI for mass surveillance or autonomous lethal weapons.
[Copyright (c) Global Economic Times. All Rights Reserved.]