Agentic Artificial Intelligence and its Impact on Modern Warfare: An Analysis
Published in Indian Defence Review, ISSN 0970-2512, Oct–Dec 2025, Vol. 40(4)
Agentic Artificial Intelligence (AI) represents a class of AI systems capable of autonomous decision-making and executing complex tasks independently. These systems can operate with or without human involvement and respond to situations by following procedures to achieve predetermined objectives. In contrast to traditional AI tools that adhere strictly to predefined instructions, agentic AI is dynamic and goal-oriented. It can comprehend its environment, devise plans, engage in reasoning, execute tasks, and adapt as necessary to fulfil its objectives. A fundamental aspect of this technology is its capacity for learning and improvement through environmental interaction and feedback utilisation, which facilitates enhanced decision-making over time.
The key features of this technology are as follows:
– Agentic AI can independently make decisions without requiring continuous human oversight, thereby expediting and enhancing the efficiency of decision-making processes.
– These systems are designed to continuously learn and self-improve, enabling them to adapt to novel tasks and environments without relying solely on pre-existing data.
– Unlike other AI systems that concentrate on singular tasks, Agentic AI pursues broader objectives, prioritising tasks and making decisions that best align with its overarching goals.
– Agentic AI can recognise its limitations or errors and adjust accordingly, thereby improving its performance and managing unforeseen circumstances.
Agentic AI comprises multiple capabilities that utilise the following key technologies and frameworks to function autonomously:
Reinforcement Learning (RL): This methodology enables agentic AI to learn through trial and error, deriving insights from mistakes. By interacting with dynamic environments, agentic AI learns to select actions that yield optimal outcomes, facilitating adaptation to new scenarios.
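For illustration only, the trial-and-error loop described above can be sketched as a minimal tabular Q-learning agent in a toy five-cell corridor. The environment, reward, and hyperparameters are all invented; real military RL systems are vastly more complex:

```python
import random

# Toy environment: an agent moves along a 5-cell corridor; reaching cell 4 ends
# the episode with reward 1. All values here are invented for illustration.
N_STATES = 5
ACTIONS = [+1, -1]                      # move right / move left
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1   # learning rate, discount, exploration rate

Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def step(state, action):
    """Apply an action; the episode ends with reward 1 at the rightmost cell."""
    nxt = max(0, min(N_STATES - 1, state + action))
    return nxt, (1.0 if nxt == N_STATES - 1 else 0.0), nxt == N_STATES - 1

random.seed(0)
for _ in range(500):                    # episodes of trial and error
    s, done = 0, False
    while not done:
        # Epsilon-greedy: usually exploit the best known action, sometimes explore.
        if random.random() < EPSILON:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda x: Q[(s, x)])
        s2, r, done = step(s, a)
        # Q-learning update: nudge the estimate towards reward + discounted future value.
        best_next = max(Q[(s2, x)] for x in ACTIONS)
        Q[(s, a)] += ALPHA * (r + GAMMA * best_next - Q[(s, a)])
        s = s2

# The learned greedy policy from every non-terminal cell is to move right.
policy = {s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_STATES - 1)}
print(policy)
```

The same mechanism, learning from interaction rather than labelled data, is what allows agentic systems to adapt to scenarios their designers never enumerated.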
Natural Language Processing (NLP): This component of agentic AI aids in the comprehension of human language by computers. In military applications, NLP is crucial for conveying high-level goals and strategies to agentic AI systems in a natural manner, allowing humans to provide broad directives rather than detailed ones.
Robotics and Simulation: The integration of agentic AI with robotics results in systems capable of autonomous navigation in the physical world. Simulations are employed to train these systems in virtual environments, mitigating risks prior to real-world deployment, and aiding in the resolution of complex problems.
These technologies are implemented using widely used open-source and proprietary frameworks, which serve as foundational elements for developing agentic AI applications. AutoGen is a system comprising multiple agents for complex tasks, featuring layers that facilitate scalability with additional agents. CrewAI organises agents like a team, each with specific roles and tasks, to collaboratively address complex assignments. LangChain and LangGraph assist in constructing and managing tasks using Large Language Models (LLMs), with LangGraph employing a graph-based approach suitable for intricate or non-linear tasks. Other systems, such as OpenAI’s Swarm, and standards such as FIPA’s agent-communication protocols, emphasise collaboration and task delegation between agents.
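The role-based orchestration pattern that frameworks such as CrewAI implement can be sketched without any framework at all. The roles, task pipeline, and data below are hypothetical and serve only to show the shape of the idea:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Agent:
    """A minimal agent: a named role plus a function that transforms shared task state."""
    role: str
    act: Callable[[dict], dict]

def run_crew(agents, task):
    """Pass the shared task state through each agent in turn, logging contributions."""
    for agent in agents:
        task = agent.act(task)
        task.setdefault("log", []).append(agent.role)
    return task

# Hypothetical pipeline: a researcher gathers facts, a planner orders them,
# and a writer summarises; none of this maps to any real framework's API.
crew = [
    Agent("researcher", lambda t: {**t, "facts": ["fact-a", "fact-b"]}),
    Agent("planner",    lambda t: {**t, "plan": sorted(t["facts"])}),
    Agent("writer",     lambda t: {**t, "report": " / ".join(t["plan"])}),
]

result = run_crew(crew, {"goal": "summarise findings"})
print(result["report"])
```

Production frameworks add LLM-backed reasoning, tool use, and error recovery around this same core loop of specialised agents passing work between one another.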
This shift to agentic AI signifies a movement towards “Human-Machine Teaming” and “trusted autonomy.” Systems such as CrewAI illustrate how agentic AI can function as an active team member rather than as a mere tool. The strategic advantage lies not only in individual systems but also in the arrangement of multiple systems. While a single drone serves as a tool, a coordinated group of drones constitutes an effective weapon system. The emphasis is on platforms capable of managing and deploying these agents on a large scale, indicating that future warfare will prioritise the most effective network of autonomous systems over the superiority of individual platforms. The large-scale management of these agents represents a novel military advantage.
In the field of intelligence, agentic AI facilitates the integration of information, tracking of targets, and recommendation of actions, thereby enhancing efficiency. These systems also reduce reliance on human soldiers while enhancing operational effectiveness. This concept is central to the United States Department of Defence’s “Replicator Initiative,” which seeks to develop cost-effective, expendable systems to counterbalance the numerical superiority of the People’s Liberation Army.
Weapon Design and Manufacturing
Agentic AI, coupled with predictive analytics, is transforming defence manufacturing by analysing extensive datasets, including material properties and supply chain logistics. This transformation has resulted in reduced costs, time, and defects, thereby enhancing efficiency and precision. According to a 2025 McKinsey report, AI in defence manufacturing could yield annual savings of $500 billion by 2030 through improved workflows.
Industry Examples
– Agentic AI-driven design and digital twins facilitate the rapid development of complex equipment. Northrop Grumman employs these technologies to expedite drone development for swift deployment. Lockheed Martin utilises agentic AI to accelerate F-35 jet production, reducing assembly time by 15%. General Dynamics employs agentic AI to hasten tank production and decrease assembly time by 22%. Raytheon Technologies achieved an 18% reduction in missile production costs, whereas Boeing decreased quality control defects by 20%. BAE Systems also experienced a 15% reduction in defects in naval equipment.
– Agentic AI anticipates equipment failures before they occur, preventing costly downtimes and extending machinery lifespan.
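A greatly simplified sketch of the failure-prediction idea above: flag sensor readings that deviate sharply from their recent baseline. The vibration trace, window size, and sigma threshold are invented for illustration; fielded systems use far richer models:

```python
from statistics import mean, stdev

def flag_anomalies(readings, window=10, k=3.0):
    """Flag indices where a reading deviates more than k sigma from the trailing window."""
    flags = []
    for i in range(window, len(readings)):
        base = readings[i - window:i]
        mu, sigma = mean(base), stdev(base)
        if sigma > 0 and abs(readings[i] - mu) > k * sigma:
            flags.append(i)
    return flags

# Hypothetical vibration trace: stable around 1.0, then a spike hinting at bearing wear.
trace = [1.0, 1.02, 0.98, 1.01, 0.99, 1.0, 1.03, 0.97, 1.0, 1.01, 1.0, 1.02, 2.5]
print(flag_anomalies(trace))   # the spike at index 12 is flagged
```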
These reductions in production time and cost extend beyond business gains and contribute to novel military strategies. The increased efficiency enables the mass production of cost-effective, expendable systems suitable for high-risk scenarios, thereby altering risk calculations and fortifying military strategies. Innovations in manufacturing have facilitated a shift in military tactics.
The Rise of Swarm and Multi-Domain Operations
Agentic AI is instrumental in the development of “integrated autonomous systems” and “swarms” that employ drones, robots, and agentic AI for complex operations. These swarms offer substantial advantages on the battlefield, as they can overwhelm defences, execute coordinated attacks, and disrupt critical infrastructure with minimal human intervention. Novel deployment strategies for these systems are being integrated into traditional military domains. For instance, drone motherships can transport vehicles across land, air, and sea, suggesting a future in which cross-domain asset mobility would be common. The proliferation of accessible and commercially available AI-powered technologies presents a significant challenge to conventional military forces. These technologies are accessible to non-state actors, levelling the battlefield and necessitating preparedness for both state and non-state threats to national security. Future conflicts may involve a blend of state and irregular warfare, all of which leverage agentic AI.
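The coordination at the heart of such swarms can be illustrated with a toy decentralised consensus rule, in which each agent steers towards the group average with no central controller. The positions and gain below are hypothetical:

```python
def consensus_step(positions, weight=0.5):
    """Each agent moves part-way towards the swarm centroid (fully connected swarm)."""
    n = len(positions)
    centroid = [sum(p[d] for p in positions) / n for d in (0, 1)]
    return [
        tuple(p[d] + weight * (centroid[d] - p[d]) for d in (0, 1))
        for p in positions
    ]

# Hypothetical drone positions converging on a rendezvous point, no leader required.
swarm = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0), (10.0, 10.0)]
for _ in range(20):
    swarm = consensus_step(swarm)

# Maximum distance of any coordinate from the rendezvous point (5, 5).
spread = max(abs(c - 5.0) for p in swarm for c in p)
print(spread)
```

Because the rule is local and identical for every agent, the behaviour degrades gracefully if individual drones are lost, which is precisely the resilience property that makes swarms hard to defeat.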
The Role of Agentic AI in Contemporary Naval Warfare
Maritime Domain Awareness and Anti-Submarine Warfare
Agentic AI facilitates the automation of monitoring and detection of maritime security threats, which is essential for enhancing Maritime Domain Awareness (MDA). Agentic AI systems can identify events and activities and detect anomalies, such as erratic course changes by ships, by analysing data from AIS paths and video feeds. Deep learning is also employed for the “semantic segmentation” of maritime scenes, aiding unmanned vessels in path planning and collision avoidance.
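The erratic-course-change detection described above can be sketched as a simple rule over successive AIS heading fixes. The track and the 45-degree threshold are invented for illustration; operational MDA systems fuse many more features:

```python
def course_changes(headings):
    """Absolute heading change between consecutive AIS fixes, wrapped to [0, 180]."""
    deltas = []
    for a, b in zip(headings, headings[1:]):
        d = abs(b - a) % 360
        deltas.append(min(d, 360 - d))
    return deltas

def erratic_fixes(headings, threshold=45.0):
    """Indices of fixes where the vessel turned more sharply than the threshold."""
    return [i + 1 for i, d in enumerate(course_changes(headings)) if d > threshold]

# Hypothetical AIS track: steady course, then two abrupt turns (e.g. loitering behaviour).
track = [90, 91, 90, 92, 180, 270, 271, 270]
print(erratic_fixes(track))
```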
In the challenging domain of Anti-Submarine Warfare (ASW), agentic AI is revolutionising the detection and tracking of submarines. By training on extensive datasets of submarine noise patterns, navies can discern subtle acoustic signals that may elude human detection. These systems enhance accuracy and reduce false alarms, enabling naval forces to respond swiftly and obtain a strategic advantage.
The Hybrid Fleet
The United States Navy is developing a “hybrid fleet” comprising both crewed and uncrewed systems to augment its naval capabilities and counter the larger number of the People’s Liberation Army Navy (PLAN). These unmanned surface vessels (USVs) and unmanned underwater vehicles (UUVs) extend the operational reach of a fleet, provide strategic advantages, and ensure personnel safety. Agentic AI-driven “swarm intelligence” enables multiple drones to collaborate in surveillance, reconnaissance, and mine-clearing operations. A swarm of small drones can overwhelm a target’s defences and radar systems, facilitating the success of other weaponry systems. A notable example is the deployment of a canister-encapsulated UAV from a submerged submarine, representing a significant advancement in covert intelligence gathering and transforming valuable assets into networked strike systems.
Traditionally, naval warfare has relied on costly ships, such as aircraft carriers and destroyers. However, the advent of inexpensive agentic AI-powered drones has altered this paradigm. These drones can inflict substantial damage on expensive fleets, as demonstrated in the Ukraine conflict. In the future, naval power will be contingent not only on fleet size but also on a nation’s ability to rapidly produce and control intelligent autonomous systems.
Enhancing Naval Logistics and Shipbuilding
AI contributes to the acceleration and efficiency of naval logistics and shipbuilding processes. The largest U.S. shipbuilder, Huntington Ingalls Industries (HII), employs agentic AI to identify production bottlenecks in submarine construction to meet the Navy’s demand for expedited building timelines. This involves using agentic AI to aggregate data from diverse sources, including spreadsheets and accounting systems.
In maritime logistics, agentic AI systems can enhance operational efficiency by optimising route planning, cargo allocation, and maintenance processes. Unlike conventional systems, agentic AI can adapt to unforeseen circumstances and learn from experiential data. These systems can autonomously modify routes in response to real-time conditions such as weather and traffic.
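The real-time re-routing idea can be illustrated with a standard shortest-path search over a graph whose edge costs change as conditions update. The sea lanes and transit costs below are hypothetical:

```python
import heapq

def shortest_route(graph, start, goal):
    """Dijkstra's algorithm over a dict-of-dicts graph: graph[u][v] = transit cost."""
    dist, prev = {start: 0.0}, {}
    pq = [(0.0, start)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == goal:
            break
        if d > dist.get(u, float("inf")):
            continue                      # stale queue entry
        for v, w in graph.get(u, {}).items():
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(pq, (nd, v))
    path, node = [], goal
    while node != start:                  # walk back from goal to start
        path.append(node)
        node = prev[node]
    return [start] + path[::-1]

# Hypothetical sea lanes with transit costs; a storm raises the cost of leg B->D.
lanes = {"A": {"B": 1, "C": 4}, "B": {"D": 1}, "C": {"D": 1}, "D": {}}
print(shortest_route(lanes, "A", "D"))    # fair weather: route via B

lanes["B"]["D"] = 10                      # real-time weather update
print(shortest_route(lanes, "A", "D"))    # re-planned: route via C
```

An agentic logistics system wraps this kind of re-planning in a continuous loop, ingesting weather and traffic feeds and recomputing routes as the cost model changes.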
National Strategies and Geopolitical Competition
The United States
The United States Department of Defence regards AI as a pivotal tool for securing advantages in future conflicts. The U.S. strategy employs a system that enables AI to operate with clear logic and adhere to commanders’ directives. It also incorporates a model that decentralises AI and deploys it to small devices on the front lines, even under challenging conditions. The key initiatives include:
– Replicator Initiative: A strategy to produce numerous low-cost, disposable autonomous systems to counter China’s extensive array of ships, missiles, and personnel.
– Chief Digital and Artificial Intelligence Office (CDAO): Established in 2022, this office accelerates the integration of AI into military missions.
– Global Information Dominance Experiments (GIDEs): These exercises evaluate how data and AI can enhance military command and control capabilities.
The People’s Republic of China
The People’s Liberation Army is preparing for “intelligentized” warfare by leveraging AI to develop unmanned combat systems, enhance battlefield awareness, and conduct operations across multiple domains. China’s strategy harnesses national resources and academic initiatives to cultivate talent and employs a military-civil fusion approach. This fusion facilitates the rapid transfer of AI from private enterprises to the military, with numerous non-traditional firms and universities securing defence contracts.
Key projects include:
– Developing unmanned intelligent combat systems, such as Caihong UAVs and HSU-001 underwater vehicles.
– Conducting research on “swarming” techniques to enable large groups of unmanned systems to collaborate effectively.
– Utilising AI models, such as DeepSeek AI, for non-combat applications, including generating intelligence reports.
The United Kingdom
In 2022, the UK Ministry of Defence unveiled its Defence AI Strategy, with the objective of becoming the most effective, efficient, trusted, and influential defence organisation of its size. This strategy emphasises the development of a robust AI system through collaboration with industry, academia, and international allies. The Ministry is committed to the safe and ethical deployment of AI to maintain public confidence.
The key initiatives include:
– Autonomous Logistics: Implementing predictive analytics to anticipate vehicle part failures and optimise supply chain management, thereby enhancing fleet readiness and reducing costs.
– Autonomous Mine Hunting: Deploying multiple vessels to detect and identify mines remotely.
– Swarming Drones: Utilising agentic AI for reconnaissance missions.
– Imagery Analysis: Employing machine learning to identify objects in images, enabling analysts to concentrate on critical elements.
– AI for Arctic Security: Leveraging agentic AI to manage maritime, aerial, and space resources to enhance situational awareness in challenging environments.
India
India allocates approximately $50 million annually to AI in defence, with the aim of enhancing combat capabilities and tactics. India is establishing specialised groups and collaborating with both the public and private sectors. The focus is on developing dual-use technologies and fostering international partnerships.
Significant efforts include the following:
– Defence AI Council (DAIC) and Defence AI Project Agency (DAIPA): Established in 2019 to spearhead AI advancements in the Indian military.
– Centre for AI and Robotics (CAIR): Founded by the Defence Research and Development Organisation (DRDO) to advance India’s defence technology.
– Project Army Integrated Decision Support System (AIDSS): Operational since 2011, this project integrates AI across all defence domains, from administration to operations.
– Indigenous Development: India is developing autonomous weapons, such as drones and unmanned vehicles.
Comparative Analysis of National AI Strategies
Various nations have distinct AI development strategies. The United States seeks to share its comprehensive AI technology with allied nations to maintain its leadership position. China integrates military and civilian technologies and disseminates open-source models. The UK and India are not only developing their own technologies but are also collaborating on sophisticated weaponry and engines. This competition extends beyond technological superiority; it involves shaping the global AI framework and establishing future standards and supply chains in the field.
An examination of these positions reveals that the fundamental distinction between powers lies not only in technological capabilities but also in military regulations, particularly concerning the permissible actions of their AI systems.
The UK is adopting an active approach to the ethical use of AI and ensuring human oversight. The fundamental principle highlighted in the UK’s Defence AI Playbook is “Meaningful human control”. It emphasises that AI systems, particularly those with considerable real-world implications (such as in military/defence), should not function independently without well-defined and accessible points for human intervention. The human operator must retain the ultimate authority, ensuring that AI serves as a tool rather than a substitute for human judgment and accountability. The UK is committed to aligning AI with existing moral, legal, and operational standards from the beginning.
The US strategy is marked by significant challenges concerning trust and the transparency of AI systems. Trust deficits imply a lack of confidence from users, regulators, or the public in AI systems. Such deficits often stem from poor performance, biases, or, crucially, a lack of understanding of how AI arrives at its conclusions. The aim of Explainable AI (XAI) is to devise methods that enable humans to comprehend the logic, factors, and data points an AI system used to make a decision, reducing its mystery and, ideally, enhancing trust.
Both of these ethical approaches are crucial for accountability and public trust but may delay deployment, resulting in the loss of tactical advantage in conflicts.
The Inherent Vulnerabilities of Agentic AI
Agentic AI systems are susceptible to several types of failures, ranging from technical malfunctions to strategic misjudgements. Such failures can result in errors, unintended conflicts, and severe repercussions.
Unpredictable and Emergent Behaviour
Complex AI systems, composed of numerous components, may exhibit unpredictable behaviour, known as “emergent behaviour.” This phenomenon occurs when system components interact in unforeseen ways, leading to complex and unexpected outcomes. In military operations, which are characterised by chaos and communication challenges, such behaviour can cause the system to deviate from the expected performance.
The primary concern associated with advanced AI is the “paradox of autonomy”: as these systems become more sophisticated, they present increased risk. Their ability to modify themselves based on new data can lead to errors, and the interactions among system components may give rise to unforeseen strategies. These systems cannot be tested exhaustively, rendering their behaviour in warfare unpredictable. An adversary could exploit this vulnerability by disrupting a force that is heavily reliant on AI.
Algorithmic Bias
Algorithmic bias refers to unfair discrimination by AI systems. These systems can mirror and amplify the biases present in their training data. This issue is particularly significant in military contexts, where bias can have severe repercussions, such as misidentifying individuals or targeting them based on their race, gender, or ethnicity.
Bias can be introduced at various stages of the AI lifecycle: for example, in datasets, in design and development, in use, and in post-use review.
The most significant threat is the formation of self-reinforcing feedback loops. When an AI system provides a biased recommendation, individuals may accept it without examining it. This reinforces the system’s flawed logic, creating a cycle in which the AI becomes increasingly biased over time.
Hallucinations and Fictitious Outputs
Hallucinations in large language models (LLMs) occur when artificial intelligence generates information that appears credible but is incorrect. This phenomenon arises because these models predict subsequent tokens based on probabilistic assessments rather than verified facts. In military contexts, such inaccuracies pose significant risks because of the critical nature of decision-making. If an AI system produces a plan or report containing fabricated data, it can lead to errors. For example, AI might identify patterns or connections that do not exist, potentially causing harm to non-combatants. The U.S. Army’s CamoGPT chatbot exemplifies the importance of caution regarding potential hallucinations, underscoring the necessity of verifying AI-generated information. The challenge of tracing the original source of AI-generated data complicates efforts to prevent false information dissemination.
Adversarial Attacks and Deception
Adversarial attacks pose a substantial threat to AI systems. These attacks involve manipulating data to deceive the AI, resulting in erroneous actions. These threats extend beyond the digital realm. An adversary may transmit false signals to radar systems, causing them to detect non-existent objects. A hypothetical example includes a 3D-printed turtle that misleads an AI-controlled weapon system into misidentifying it as a threat, thereby endangering civilians.
An adversary can exploit adversarial attacks to gain a strategic advantage by causing AI systems to misidentify military targets as civilians, thereby halting attacks. Such attacks can also undermine confidence in AI systems, compelling opponents to withdraw their assets. Another tactic, known as “data poisoning,” involves introducing false data into AI systems, leading to erroneous decisions. Pre-emptively compromising an opponent’s AI capabilities can potentially secure a victory without direct conflict.
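Data poisoning can be demonstrated on a toy nearest-centroid classifier: injecting mislabelled points into the training set shifts a class mean enough to flip a classification. All data, labels, and the classifier itself are invented for illustration:

```python
def centroid(points):
    """Component-wise mean of a list of 2-D points."""
    return [sum(c) / len(points) for c in zip(*points)]

def classify(x, centroids):
    """Nearest-centroid classifier: return the label of the closest class mean."""
    return min(centroids, key=lambda lbl: sum((a - b) ** 2 for a, b in zip(x, centroids[lbl])))

# Hypothetical training data for a toy 'friend' vs 'threat' sensor classifier.
clean = {"friend": [(0, 0), (1, 0), (0, 1)], "threat": [(9, 9), (10, 9), (9, 10)]}
cents = {lbl: centroid(pts) for lbl, pts in clean.items()}
probe = (8, 8)
print(classify(probe, cents))             # correctly near the 'threat' cluster

# Poisoning: the adversary injects mislabelled 'friend' points near the threat cluster,
# dragging the 'friend' centroid towards the probe and flipping its classification.
poisoned = {**clean, "friend": clean["friend"] + [(8, 8)] * 20}
cents_p = {lbl: centroid(pts) for lbl, pts in poisoned.items()}
print(classify(probe, cents_p))
```

Even this crude model shows why provenance tracking of training data (discussed later) matters: the poisoned points look individually unremarkable, yet collectively they invert the decision.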
The Black Box Problem and Lack of Transparency
Many advanced AI systems, particularly those employing deep learning, function as “black boxes,” which means that their decision-making processes are opaque. Although these models excel in identifying patterns within large datasets, they do not reveal the mechanisms underlying their decisions. The complexity of their decision-making processes renders them incomprehensible to humans. This lack of transparency poses a critical issue for military applications, as it complicates the attribution of responsibility for errors, whether to the operator, developer, or machine itself. This contravenes International Humanitarian Law, which emphasises human accountability. If a system cannot elucidate its decision-making process, it may violate legal principles concerning target distinction and attack assessments. Furthermore, autonomous AI systems may exhibit unpredictable behaviours, raising ethical concerns regarding their predictability and reliability.
Strategic and Technical Countermeasures
Addressing the challenges associated with agentic artificial intelligence necessitates a comprehensive strategy that integrates technical solutions and human-centred approaches. Optimal solutions are not isolated but are components of a holistic plan encompassing the entire AI system lifecycle from development to deployment.
Cultivating Synergetic Human-AI Teaming
Human-machine teaming (HMT) is a methodology for leveraging the capabilities of AI while preserving human judgment and ethical considerations. In HMT, humans and AI collaborate, each contributing their respective strengths. Humans provide context, intuition, and creativity, whereas AI excels in data processing and task execution with high precision. This collaboration ensures that humans retain decision-making authority in complex situations. For HMT to be effective, military organisations must foster an “AI-ready culture.” This involves training personnel to operate AI systems and equipping them with skills such as empathy and critical thinking to identify and rectify errors. The U.S. Army is actively preparing soldiers for this concept, emphasising the development of skills for “AI warfighters” who manage risks and utilise AI to enhance efficiency and accuracy.
Rigorous Verification, Validation, and Testing (VVT)
Comprehensive verification and testing are imperative to ensure the reliability and security of military AI systems. Given the inherent unpredictability of AI, exhaustive testing of all potential scenarios is prohibitively expensive and time-consuming. The U.S. Department of Defence (DoD) is developing advanced testing methodologies to enhance the robustness and reliability of AI systems. This includes adversarial testing, which involves simulating attacks on a system to identify vulnerabilities. The DoD also employs independent test data to verify system performance and detect issues before deployment.
Ensuring Data Integrity and Provenance
The efficacy of AI systems is contingent on the quality of the data they use. The DoD employs several strategies to ensure data reliability.
– Source Reliability and Provenance Tracking: AI systems should utilise data from credible sources. Tracking the origin of the data aids in identifying any compromised data.
– Verification and Maintenance: Techniques such as checksums and cryptographic hashes are employed to ensure data integrity during storage or transmission.
– Access Controls and Encryption: Data are classified based on sensitivity, and robust access controls and encryption are implemented to safeguard them at all times.
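The checksum idea in the bullets above can be sketched with a standard SHA-256 digest check: record a fingerprint at ingestion, and recompute it later to detect tampering. The sensor record is invented for illustration:

```python
import hashlib

def fingerprint(payload: bytes) -> str:
    """SHA-256 digest recorded when the data is first ingested."""
    return hashlib.sha256(payload).hexdigest()

def verify(payload: bytes, recorded_digest: str) -> bool:
    """Recompute the digest and compare; any change to the payload changes the hash."""
    return fingerprint(payload) == recorded_digest

# Hypothetical sensor record and its digest captured at ingestion time.
record = b"contact 42: bearing 095, speed 12kn"
digest = fingerprint(record)

print(verify(record, digest))                                   # untouched data passes
print(verify(b"contact 42: bearing 275, speed 12kn", digest))   # tampered data fails
```

In practice the digest itself must be stored or transmitted over a trusted channel (or signed), since an attacker who can rewrite both payload and digest defeats a bare checksum.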
Supply Chain Security
The AI supply chain presents significant risks owing to its reliance on numerous external vendors and interconnected systems. To mitigate these risks, military organisations employ AI to anticipate and manage potential threats to their operations. They scrutinise vendor data to identify suppliers who may provide counterfeit or defective components. For instance, the Pentagon utilised AI to identify over 19,000 high-risk vendors out of a total of 43,000, demonstrating an innovative approach to safeguarding the data and hardware on which it depends.
Implementing Robust Cybersecurity Measures
AI plays a crucial role in cyber warfare, serving both defensive and offensive functions. It can monitor network traffic, detect anomalous activities, and swiftly neutralise threats more efficiently than human operators. This capability is essential for combating sophisticated AI-driven cyberattacks that rapidly evolve to evade detection and cause extensive network breaches. However, the deployment of AI introduces cybersecurity vulnerabilities. If military AI systems are compromised, adversaries can alter defence strategies, access classified information or even redirect weapons to target friendly forces or civilians. To avert such scenarios, militaries must implement robust security measures that exceed conventional standards, including continuous monitoring, real-time threat detection, and advanced encryption. Additionally, employing AI to create deceptive networks to mislead adversaries is a critical component of the defence strategy.
Human-Centred Design for Transparency and Explainability
Addressing the “black box problem” is essential for developing trustworthy AI systems. The Department of Defence emphasises responsible AI (RAI), which entails creating comprehensible and transparent systems. The Defence Advanced Research Projects Agency (DARPA) initiated the Explainable Artificial Intelligence (XAI) program to develop AI systems that users can understand and trust. Developing XAI is challenging because of the trade-off between AI performance and explainability: the most accurate models are often the least interpretable ones. The solution lies in devising novel or hybrid machine learning methodologies that enhance model explainability without compromising performance.
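One family of XAI techniques can be illustrated with a minimal occlusion-style attribution: score the input, zero out one feature at a time, and report the resulting score drop. The threat-scoring weights and features below are entirely hypothetical, and a transparent linear model stands in for the black box:

```python
def explain(score, features, baseline=0.0):
    """Occlusion-style attribution: score drop when each feature is set to a baseline."""
    full = score(features)
    return {
        name: round(full - score({**features, name: baseline}), 3)
        for name in features
    }

# Hypothetical threat-score model (a transparent linear stand-in for a black box).
weights = {"speed": 0.2, "heading_change": 0.5, "transponder_off": 2.0}
def score(f):
    return sum(weights[k] * v for k, v in f.items())

contact = {"speed": 10.0, "heading_change": 3.0, "transponder_off": 1.0}
print(explain(score, contact))
```

The output tells an operator which inputs drove the score, here that the disabled transponder and the vessel’s speed contributed most, which is exactly the kind of rationale the black box problem otherwise withholds.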
Law, Ethics, and Policy
The integration of military artificial intelligence (AI) raises significant questions regarding international law, ethics, and governance. The prevailing consensus among nations is that existing international legal frameworks, including International Humanitarian Law (IHL), remain applicable to AI. However, as AI systems gain autonomy, adherence to fundamental principles, such as distinguishing combatants from civilians, employing proportional force, and exercising caution in attacks, becomes increasingly challenging. This situation results in a “responsibility gap,” as machines make critical life-or-death decisions, complicating the attribution of accountability for casualties. Current legal frameworks mandate that humans, such as commanders and engineers, bear responsibility, rather than machines. Consequently, efforts are underway to establish clear regulatory guidelines.
The United Nations Group of Governmental Experts (GGE) on Lethal Autonomous Weapons Systems (LAWS) is actively engaged in formulating new legal standards for LAWS. Their objectives include prohibiting systems incapable of target discrimination and regulating other systems. The Department of Defence (DoD) has developed its own AI guidelines, emphasising safety and trust in AI systems. The Rules of Engagement (ROE) facilitate the practical application of these principles by setting parameters for the deployment and operation of AI systems to ensure human oversight and collaboration. This approach enables military forces to comply with political and legal obligations while effectively utilising AI.
“We get to make that decision because, simply put, it’s just technology… being able to use and apply artificial intelligence in future military operations is going to be critical to the success of those operations.” – General John E. Hyten
