US Military Confirms AI Deployment in Iran War: 1,000 Targets Struck in 24 Hours

The US military has for the first time publicly confirmed using AI tools, including Anthropic's Claude and Palantir's Maven system, in operations against Iran. The campaign, dubbed "Epic Fury," demonstrates AI's deep integration into modern warfare while sparking debate about automation bias and the role of human judgment.

1,000 Precision Strikes in 24 Hours

According to The Washington Post, US forces struck approximately 1,000 targets in the first 24 hours of operations against Iran, an efficiency attributed in part to artificial intelligence. The commander of US Central Command told reporters that AI tools helped forces "fight faster and smarter" in combat missions.

In this operation, the US military used Anthropic's Claude alongside Palantir's Maven Smart System for real-time target identification and prioritization. Maven applies AI algorithms to flag potential targets in satellite imagery and other intelligence data, while Claude assists military planners in organizing information and making target selection and prioritization decisions.

AI's Role: Decision Support, Not Autonomous Weapons

Jon R. Lindsay, Associate Professor of Cybersecurity and Privacy and of International Affairs at Georgia Tech, argues that Claude is fundamentally a decision support system, not a weapon system. In an interview, he stated: "Modern combat organizations rely on countless digital applications for intelligence analysis, campaign planning, battle management, communications, logistics, administration, and cybersecurity."

Lindsay emphasized that military AI falls into two main categories: weapons systems, which can autonomously select or engage targets, and decision support systems, which provide intelligence and planning information to human personnel. He noted: "Current military AI applications, including those in ongoing and recent Middle East conflicts, primarily serve decision support systems rather than weapons."

This distinction is crucial. Israel's Lavender and Gospel systems, used in the Gaza war, are likewise decision support systems: the AI provides analysis and planning support, but humans ultimately make the decisions.

Decades of Technical Accumulation

Lindsay pointed out that the military's AI capability did not emerge overnight. "The effective use of automated systems depends on extensive infrastructure and skilled personnel. It is only thanks to many decades of investment and experience that the US can use AI in war today."

From Cold War-era command and control system precursors to modern network-centric warfare concepts, the US military has continuously developed and refined its AI infrastructure. The Semi-Automatic Ground Environment (SAGE) system of the 1950s and the Igloo White project during the Vietnam War were all precursors to modern decision support systems.

Ethical Debates and Human Judgment

However, the application of AI in military contexts has raised serious concerns. Automation bias—the tendency for humans to overly rely on automated decisions—has become a focal point of discussion. But Lindsay believes these concerns are not new.

He noted that the Igloo White system during the Vietnam War was often misled by Vietnamese decoys. In 1988, the US Aegis cruiser USS Vincennes mistakenly shot down an Iranian airliner. And in 1999, intelligence failures led a US stealth bomber to accidentally strike the Chinese embassy in Belgrade.

Lindsay also cited recent evidence that a Tomahawk cruise missile mistakenly struck a girls' school adjacent to an Iranian naval base, killing approximately 175 people, mostly students. That strike may likewise have resulted from US intelligence failures.

"The successes and failures of decision support systems in war are due more to organizational factors than technology," Lindsay wrote. "AI can help organizations improve their efficiency, but AI can also amplify organizational biases."

Humanity's Increasingly Important Role

Despite the expanding application of AI in military contexts, Lindsay believes this is actually making "humans more important in war, not less."

He explained: "In economic terms, AI improves prediction—generating new data based on existing data. But prediction is only one part of decision-making. People ultimately make the judgments that matter about what to predict and how to use predictions. People have preferences, values, and commitments regarding real-world outcomes, but AI systems intrinsically do not."

This dynamic has deep roots: defense funding originally enabled the rise of AI, and the US Defense Advanced Research Projects Agency's (DARPA) Strategic Computing Program in the 1980s spurred advances in semiconductors and expert systems.

The Pentagon has confirmed it will continue using AI tools for targeting while emphasizing that human decision-makers retain a key role in wartime decisions. That stance sits in uneasy balance with growing calls for stricter regulation of military AI.

Reference Sources: The Conversation, NBC News, DefenseScoop, Reuters