Maven Smart System, Claude in Classified Networks, and the First Fully AI-Integrated War
At dawn on February 28, 2026, the United States and Israel launched Operation Epic Fury — a massive coordinated strike campaign against Iranian military infrastructure.[1] Within the first 24 hours, AI systems had identified and prioritized over 1,000 potential targets, enabling approximately 900 strikes in just 12 hours.[2] What used to take the military days or weeks — the full cycle from target identification to strike authorization — was compressed to seconds.[3]
On March 11, CENTCOM commander Admiral Brad Cooper confirmed the use of AI in a video statement: "Advanced AI tools can turn processes that used to take hours and sometimes even days into seconds."[4] By that date, U.S. forces had struck more than 5,500 targets inside Iran, including drone and ballistic missile sites, command-and-control facilities, naval vessels, air defense systems, and military communications infrastructure.[4] Iranian drone attacks had decreased 83 percent and ballistic missile attacks 90 percent since the opening salvo.[4]
We've gone from identifying the target to now coming up with a course of action, to now actioning that target, all from one system. This is revolutionary.
The technological backbone of Epic Fury is Palantir's Maven Smart System — the classified platform through which CENTCOM operators access AI capabilities including Anthropic's Claude.[5] Maven consolidates what were previously eight or nine separate intelligence and targeting systems into a single visualization and workflow tool.[5] At Palantir's AIPCON conference on March 13, DoD CDAO Cameron Stanley detailed how the system has transformed the kill chain.
The term "kill chain" — military jargon for the sequence from target identification to engagement — is central to understanding what Maven does. Stanley explained that Maven allows operators to select data, move it into a decision workflow, and determine how best "to prosecute" the target — all within a single interface.[5] Palantir architect Chad Wahlquist quantified the human reduction: "Normally we would have 2,000 intelligence officers actually trying to do targeting and look at stuff. Now that's 20 and they're doing it in rapid succession."[5]
Maven's origins trace to 2017, when the Pentagon launched Project Maven as a program to use computer vision to analyze drone surveillance footage.[5] Google was the original partner but withdrew in 2018 after roughly 3,000 employees signed an open letter declaring "Google should not be in the business of war."[6] Palantir inherited the contract in 2019. Seven years later, the system that Google employees once protested as morally unconscionable is now the operating system of the largest U.S. military operation since Iraq.
Anthropic's Claude is the only major AI model currently deployed in the Pentagon's classified networks.[7] Access was established through a 2024 partnership with Palantir, which was "the first industry partner to bring Claude models to classified environments."[8] Military personnel access Claude through the Maven Smart System — the same platform now running the Iran targeting campaign.[9] This makes Claude the first frontier AI model known to operate inside a classified military kill chain.
Anthropic developed a specialized version called Claude Gov for defense and intelligence use. According to Anthropic's own lawsuit against the Pentagon, "Claude Gov is less prone to refuse requests that would be prohibited in the civilian context, such as using Claude for handling classified documents, military operations, or threat analysis."[6] The company spent years building specialized infrastructure and loosening its standard safety restrictions to accommodate national security workflows.[10]
The first confirmed operational use of Claude in combat was Operation Absolute Resolve — the January 2026 raid that captured Venezuelan President Nicolás Maduro.[11] The Wall Street Journal reported that the U.S. military used Claude during the classified operation, making Anthropic the first AI developer whose technology was employed in a classified Pentagon mission.[11] Anthropic declined to confirm or deny its use, saying it "cannot comment on whether Claude, or any other AI model, was used for any specific operation, classified or otherwise."[12] Venezuela was the proof of concept. Iran is full deployment.
Epic Fury is not a single AI system — it is an ecosystem of autonomous and AI-augmented platforms operating simultaneously. The Maven Smart System serves as the command layer, but beneath it, multiple AI-driven weapons systems saw their first combat deployment.
LUCAS one-way attack drones, produced by Arizona-based SpektreWorks, were deployed "for the first time in history" during the opening strikes on February 28.[13] LUCAS is a reverse-engineered clone of Iran's Shahed-136, the same drone Russia has used extensively against Ukraine.[14] The Pentagon turned Tehran's own weapon design against it, a technological irony lost on no one. Task Force Scorpion Strike operated LUCAS alongside Anduril's Lattice-networked drone swarms and Shield AI's Hivemind-piloted platforms.[15]
Days before Epic Fury launched, on February 24, Anduril demonstrated a capability with profound implications: its YFQ-44A Collaborative Combat Aircraft (CCA) switched between two different AI autonomy systems midflight, from Shield AI's Hivemind to Anduril's Lattice for Mission Autonomy, without stopping or landing.[16] The demonstration proved that combat drones can swap AI "brains" dynamically during a mission. The U.S. Air Force's CCA program envisions AI-piloted drones flying alongside manned fighters, with the AI making tactical decisions in real time: the exact architecture Anthropic's red lines were designed to constrain.
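What the demonstration implies architecturally is that the autonomy software is decoupled from the airframe behind a common interface, so one decision-making stack can be unloaded and another loaded while the mission continues. Below is a minimal sketch of that pattern; all class and method names are hypothetical, and the code is not Shield AI's or Anduril's actual API.

```python
# Minimal sketch of runtime-swappable autonomy as a strategy pattern.
# All names are hypothetical; this is not Shield AI's or Anduril's API.

from abc import ABC, abstractmethod

class AutonomyStack(ABC):
    """Common interface the airframe talks to, whichever 'brain' is loaded."""

    @abstractmethod
    def decide(self, sensor_state: dict) -> dict:
        """Return a tactical command for the current sensor picture."""

class StackA(AutonomyStack):          # stands in for the first autonomy system
    def decide(self, sensor_state: dict) -> dict:
        return {"maneuver": "intercept", "source": "stack_a"}

class StackB(AutonomyStack):          # stands in for the second autonomy system
    def decide(self, sensor_state: dict) -> dict:
        return {"maneuver": "escort", "source": "stack_b"}

class CombatDrone:
    def __init__(self, stack: AutonomyStack):
        self._stack = stack

    def swap_autonomy(self, new_stack: AutonomyStack) -> None:
        # Mid-mission handover: the vehicle keeps flying; only the
        # decision-making layer behind the interface is replaced.
        self._stack = new_stack

    def step(self, sensor_state: dict) -> dict:
        return self._stack.decide(sensor_state)

drone = CombatDrone(StackA())
drone.step({"contacts": 2})        # decisions come from the first stack
drone.swap_autonomy(StackB())      # swap "brains" without landing
drone.step({"contacts": 2})        # decisions now come from the second stack
```

The design choice that matters is the interface: as long as both stacks satisfy it, nothing upstream has to know, or approve, which one is currently deciding.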
Admiral Cooper's statement that "humans will always make final decisions on what to shoot" masks a spectrum of autonomy in which the gap between human oversight and machine initiative is rapidly narrowing.
The stated principle, that humans decide and machines assist, obscures the practical reality. When AI generates 1,000 targets in 24 hours for 20 analysts to review, the "decision" has already been made by the algorithm. The human role becomes ratification, not deliberation. As one researcher noted, Israel's Lavender targeting system in Gaza operated with a roughly 10 percent false-positive rate, yet its recommendations still produced authorized strikes, because the pace of AI-generated targeting overwhelmed the capacity for meaningful human review.[17]
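The arithmetic is worth making explicit. The sketch below is a back-of-the-envelope estimate built only from the figures cited in this article, under the generous assumption that every analyst reviews targets nonstop for the full 24 hours; the variable names are illustrative, not drawn from any reporting.

```python
# Back-of-the-envelope review-throughput estimate, using only figures cited
# in this article. Assumes analysts review continuously for 24 hours, which
# overstates the time actually available per target.

targets_per_day = 1_000       # AI-generated targets in the first 24 hours
analysts = 20                 # intelligence officers on targeting (per Palantir)
false_positive_rate = 0.10    # error rate reported for Israel's Lavender system

targets_per_analyst = targets_per_day / analysts              # 50 targets each
minutes_per_target = (24 * 60) / targets_per_analyst          # 28.8 minutes, at best
expected_bad_targets = targets_per_day * false_positive_rate  # ~100 per day

print(f"{targets_per_analyst:.0f} targets per analyst per day")
print(f"{minutes_per_target:.1f} minutes per target (theoretical ceiling)")
print(f"~{expected_bad_targets:.0f} erroneous targets per day at a Lavender-like error rate")
```

Under half an hour per target is a ceiling, not an estimate: subtract sleep, shift changes, and the work of coordinating the strikes themselves, and the window for scrutinizing any single recommendation shrinks to a fraction of that.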
During Palantir's AIPCON presentation on March 13, CDAO Cameron Stanley displayed a Maven map of the Middle East showing dozens of red target icons across Iran. Investigators noted that one icon was positioned directly on the area corresponding to Minab — where, on the first day of the campaign, a missile struck the Shajareh Tayyebeh girls' elementary school, killing more than 175 people, mostly schoolgirls.[5][18]
A preliminary Pentagon investigation has determined the United States was responsible for the strike.[19] The school sat adjacent to an Iranian navy base — a legitimate military target. But the question of whether AI systems generated the targeting recommendation, and whether the proximity of a school was weighed by an algorithm or a human, goes to the heart of the debate Anthropic tried to force: who is accountable when AI collapses the kill chain to seconds?
Operation Epic Fury represents a phase change in warfare — the first conflict where AI systems are confirmed to operate across every stage of the kill chain simultaneously. The implications extend far beyond Iran.
AI has collapsed the observe-orient-decide-act (OODA) loop from days to seconds. The adversary cannot react faster than the algorithm can retarget. This creates escalation dynamics no doctrine has accounted for — when machines set the tempo of war, human deliberation becomes a bottleneck to be optimized away.
When 20 analysts process 1,000 AI-generated targets in 24 hours, the meaningful decision is made by the algorithm, not the human who clicks "approve." The Minab school strike demonstrates the lethal consequences. No framework exists to assign responsibility when AI targeting systems produce civilian casualties at machine speed.
The CCA midflight AI-switching demonstration, four days before Epic Fury, signals the direction: combat platforms that can swap between autonomy systems dynamically, adapting their behavior without human intervention. The "human in the loop" is becoming the "human on the loop" — monitoring, not deciding.
Epic Fury is the most effective advertisement for AI-integrated warfare in history. Every nation watching will accelerate its own military AI programs. China's response will not be restraint — it will be competitive deployment. The window for international norms on AI in warfare is closing.
Operation Epic Fury is the first war fought at machine speed. The Maven Smart System, with Claude integrated into its classified workflows, has compressed the kill chain from a multi-day, multi-system process into a single-platform, seconds-fast targeting engine. The 100x reduction in human analysts — from 2,000 to 20 — is not an efficiency gain. It is a fundamental shift in who makes life-and-death decisions in warfare.[5]
The Venezuela operation was the beta test. Iran is production deployment. The technology stack (Maven for targeting, Claude for analysis, Lattice for command and control, LUCAS and CCA for autonomous engagement) represents a fully integrated AI warfare ecosystem that no prior conflict has matched. And it works. Iranian combat power has been systematically degraded in under two weeks, with drone attacks down 83 percent and ballistic missile capability down 90 percent.[4]
But the Minab school strike — 175 dead, mostly children, on a target the Maven system had flagged — exposes the fault line. When AI generates targets faster than humans can meaningfully evaluate them, the "human in the loop" becomes a legal fiction. The question is no longer whether AI will fight wars. It is whether anyone will be accountable when it gets them wrong.
No fair fights. If I can avoid it, let's not have fair fights. Our guys win and we come home.