
From Code to Combat: AI Systems Shape Battlefield Strategy and Operations

AI warfare has evolved from laboratory curiosity to modern-day reality. Films and books often personified AI combat systems as large, humanoid machines (à la the Transformers); while evocative, real-world systems are software-driven, sensor-based platforms.

The military application of AI in combat operations can be traced back to the Gulf War of 1991. Even then, AI techniques supported threat recognition and assessment in defense systems such as the Patriot missile, and autonomous navigation and decision support in cruise missiles. The battlefield had become a theatre for deploying these early AI systems.

Militaries prize AI because it promises three decisive advantages: faster perception of the battlespace, accelerated decision-making, and precision-guided action. Machine-learning models ingest radar returns, satellite feeds, signals intercepts, and logistics data, turning raw information into actionable intelligence in near-real time. AI systems compress the famous OODA loop – observe, orient, decide, act – allowing commanders to outpace an adversary’s ability to react to threats on the battlefield.

Interest in AI for warfare is growing. The global defense AI market is projected to reach $19.29 billion by 2030, reflecting a compound annual growth rate of 13% from 2025 to 2030, according to Grand View Research. This investment underscores recognition among military leaders that AI is a revolutionary addition to the battlespace and military capabilities.
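A quick sanity check shows what the cited projection implies: growing at 13% a year and reaching $19.29 billion in 2030 works backward to a 2025 market of roughly $10.5 billion, so the figure amounts to nearly doubling in five years.

```python
# Back out the implied 2025 base from the figures cited above
# (projection and CAGR are Grand View Research's; the arithmetic is ours).
target_2030 = 19.29   # projected market size, in $ billions
cagr = 0.13           # 13% compound annual growth rate
years = 5             # 2025 -> 2030

base_2025 = target_2030 / (1 + cagr) ** years
print(f"Implied 2025 market size: ${base_2025:.2f}B")  # ≈ $10.47B
```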

Intelligence, Surveillance and Reconnaissance (ISR)

AI’s most mature military use is in intelligence gathering and analysis. Modern forces collect massive amounts of data from satellites, drones, signals, and human sources. AI excels at sifting through this flood, spotting patterns and anomalies that humans might miss.
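The core idea behind that sifting can be shown with a toy example. The sketch below flags statistical outliers in a stream of readings; it is purely illustrative (the data and function are hypothetical, and fielded ISR systems use far richer models such as convolutional and sequence networks), but it captures the principle of surfacing the few anomalies buried in a flood of normal data.

```python
import statistics

def flag_anomalies(readings, threshold=2.0):
    """Return indices of readings more than `threshold` standard
    deviations from the mean -- a crude stand-in for the pattern- and
    anomaly-spotting that ISR analysis pipelines automate."""
    mean = statistics.fmean(readings)
    stdev = statistics.stdev(readings)
    return [i for i, r in enumerate(readings)
            if abs(r - mean) > threshold * stdev]

# Hypothetical normalized signal strengths; one spike hides in the noise.
signal_strengths = [1.1, 0.9, 1.0, 1.2, 0.95, 9.8, 1.05]
print(flag_anomalies(signal_strengths))  # → [5], the 9.8 spike
```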

Project Maven, launched by the U.S. Department of Defense in 2017, uses computer vision to analyze drone footage, identifying vehicles, people, and targets in minutes instead of hours. Ukraine’s Palantir Gotham platform combines open-source and classified data to create real-time battlefield intelligence.

In the 2023-2024 Gaza conflict, Israel’s “Gospel” AI reportedly generated targeting recommendations at a pace far beyond what human analysts could match. This speed changes how forces find and engage threats.

Autonomous weapons systems

The use of autonomous weapons systems in combat is a heavily debated topic on the international scene. These range from missile defense systems to offensive weapons that hunt enemy forces.

Turkey’s Kargu-2 drone reportedly attacked militia fighters in Libya in 2020 without real-time human control. This may have been the first use of a fully autonomous weapon in combat, highlighting the tech’s rapid advancement, according to a 2021 U.N. Security Council report.

Ukraine’s SmartPilot ‘mother drone’ flies 300 kilometers, launches two AI-guided kamikaze drones, and returns without using GPS, all autonomously. At $10,000 per mission, it’s far cheaper than cruise missiles, making autonomous weapons an economic game-changer.

Command and control systems

AI is transforming command and control by fusing data in real time and automating decisions. The U.S. military’s Joint All-Domain Command and Control (JADC2) project aims to instantly link the “best sensor” to the “best shooter” across all battle domains. For example, a satellite spotting enemy radar could cue a submarine missile within seconds. As of mid-2025, JADC2 had completed large-scale exercises such as Project Convergence 21 but remained in the prototype stage.

NATO’s 2024 AI Strategy pushes for “AI-ready interoperability,” setting common data standards so allies can share AI insights across borders. This marks a shift from traditional command to networked, AI-driven decision-making.

Cyber warfare and information operations

AI changes both cyber defense and offense. Defensively, AI can spot and isolate cyber threats instantly. Companies like Darktrace develop systems that respond to attacks without requiring human intervention.
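The detect-and-isolate logic can be sketched in a few lines. This is a deliberately minimal illustration, assuming made-up host names and a simple bytes-per-minute baseline; commercial products such as Darktrace build far more elaborate behavioral models, but the automated-response loop is the same shape.

```python
BASELINE_BPM = 5_000    # assumed normal traffic per host, bytes/minute
DEVIATION_FACTOR = 10   # quarantine if traffic exceeds 10x baseline

def triage(traffic):
    """Return hosts to quarantine, based on a crude traffic threshold.

    `traffic` maps host name -> observed bytes/minute. A real system
    would act on this list (cut network access) without waiting for
    a human -- the "respond without intervention" idea noted above.
    """
    return sorted(host for host, bpm in traffic.items()
                  if bpm > BASELINE_BPM * DEVIATION_FACTOR)

observed = {"ws-17": 4_800, "db-02": 5_300, "ws-23": 620_000}  # hypothetical
print(triage(observed))  # → ['ws-23']
```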

Offensively, AI-driven malware can mutate faster than traditional signature-based defenses can adapt. Generative AI can also produce deepfake videos and fake news at massive scale, making disinformation campaigns more powerful than ever. Such information operations matter militarily because they can destabilize adversary forces and populations.

However, the military’s growing use of AI systems in operations raises serious concerns that demand immediate attention.

One major issue is accountability. When an AI system causes civilian casualties, it’s unclear who’s responsible. Responsibility can stretch from the programmers who created the algorithms to the commanders who deployed the systems. This spread makes it hard to enforce legal and moral standards.

The idea of ‘meaningful human control’ over AI weapons is still vague. Some say human oversight is enough, while others argue humans must be able to step in immediately to stop unlawful attacks. This discourse shapes how autonomous weapons are designed and used, but without clear agreement, the risk of misuse or mistakes keeps growing.

Bias in AI systems is another major concern. Since AI learns from data, any bias in that data can lead to wrongful targeting, such as mistaking civilians for combatants or missing protected sites. Many AI models are ‘black boxes’ whose decision-making is hard to interpret. This opacity makes it difficult to ensure compliance with the laws and conventions of war and to hold violators accountable.

The International Committee of the Red Cross (ICRC) position on autonomous weapons prioritizes the protection of civilians affected by operations in which these systems are deployed, a concern echoed in the U.N. Secretary-General’s call for a binding “meaningful human control” standard.

AI warfare might also lower the threshold for war. Autonomous weapons reduce risks to soldiers and cut costs, which could make leaders more willing to use force. Cheaper AI weapons could give smaller countries or groups powerful tools, upsetting regional stability and increasing the chance of conflict.

The Russo-Ukrainian War, ongoing since 2022, has seen AI warfare systems deployed at scale. Working with the AI company Palantir, Ukraine combined commercial satellite imagery with classified data to build near-real-time battlefield views. This enabled rapid artillery targeting that reportedly destroyed thousands of Russian vehicles.

From 2023 to 2024, Israel used the Gospel AI to speed up target identification. Meanwhile, Ukraine’s SmartPilot drones bypass GPS jamming using visual and LiDAR navigation, extending strike reach deep into enemy territory.

By compressing decision-making and targeting cycles, these warfare systems have changed the narrative and landscape of the battlefield.

The crucial question is this: to what extent does AI now shape military strategy? The deployment of these systems raises ethical and legal questions about the frontiers of warfare. Those frontiers, once fully explored and analyzed, hold the key to the responsible use of military AI.

Author

  • Divine-Favour Ukoh

    Divine-Favour Ukoh is the Nigeria-based editor, research and development, of A.L.L. Africa and a contributing writer at My ESQ Legal and The AI Innovator.


2 Comments

  1. Odudu Inyangudoh, September 3, 2025

    Responsibility is key, as long as humans control and analyze, there is hope for AI to be used as a tool for balance and not destruction.

    Interesting work, Divine!

    • Divine, September 5, 2025

      Beautiful comment, Odudu!
      I totally agree with you, human oversight remains the key to a balanced deployment of AI systems in military operations.

