Leadership: Artificial Intelligence in Decision-Making


Artificial Intelligence (AI) is rapidly advancing and its integration into various sectors, including military decision-making, is accelerating. The Department of Defense (DoD) recently announced the formation of the Artificial Intelligence Rapid Capabilities Cell (AI RCC) to speed up the implementation of AI technology, focusing particularly on generative AI. This new office is tasked with exploring AI's use across critical areas such as command and control, autonomous drones, intelligence, weapons testing, financial systems, and human resources.

However, despite this push for integration, a critical question arises: can AI truly replace the indispensable human factor in leadership decision-making? While AI holds immense promise for improving processes and providing advantages, the answer, especially in complex and high-stakes environments like military operations, is a resounding no. AI should serve as a powerful assistant, but it cannot supplant human judgment, experience, and adaptability.

Understanding AI in Context

To discuss AI effectively, we must define our terms. AI, in a general sense, refers to machines performing tasks that typically require human intelligence. This is distinct from the futuristic, often exaggerated portrayals seen in science fiction. A key subset of AI is Machine Learning (ML), which allows computers to learn from data and make predictions or decisions without being explicitly programmed. Generative AI, a sub-field of ML, focuses on creating new content, such as text, images, or code, often leveraging Large Language Models (LLMs).
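
To make "learning from data" concrete, the short sketch below trains a classifier from labeled examples rather than hand-coded rules. It uses the open-source scikit-learn library and its bundled iris dataset purely as an illustration:

```python
# Minimal illustration of machine learning: no classification rules are
# hand-coded; the model infers them from labeled examples.
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Fit on training examples, then measure accuracy on unseen data.
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("held-out accuracy:", clf.score(X_test, y_test))
```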

LLMs are foundation models with billions of parameters, trained on vast datasets. This training allows them to understand, process, and generate human-like text and perform diverse language-related tasks. Examples include well-known models like OpenAI's GPT series or Google's LaMDA and PaLM. In a military context, this technology is already being explored, with initiatives like CamoGPT integrating military doctrine and lessons learned into generative AI models.
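
As an illustration of what generating human-like text looks like in practice, here is a minimal sketch using the open-source Hugging Face transformers library and a small public model. It is not how CamoGPT or any DoD system is built; those implementations are not publicly documented:

```python
# Minimal sketch of prompting an LLM for text generation, using a small,
# publicly available model. Illustrative only.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = "A commander's intent statement should describe"
result = generator(prompt, max_new_tokens=40, num_return_sequences=1)
print(result[0]["generated_text"])
```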

AI's Role in Warfighting Functions: Promise and Peril

The AI RCC's focus areas include critical Warfighting Functions like Intelligence and Command and Control (C2). These functions are the bedrock of military operations, grouping the tasks and systems commanders use to accomplish missions. Human factors are deeply embedded in every step, from intelligence officers assessing enemy courses of action to leaders selecting the final plan.

Project Maven, initiated in 2017, offers a concrete example of AI's valuable assistance. Tasked with accelerating AI integration to turn vast amounts of data into actionable intelligence, Project Maven successfully employed algorithms to process full-motion video from unmanned aerial systems in near-real time. This allowed analysts to quickly identify objects and irregularities from data that was previously overwhelming to process manually, significantly improving the speed of intelligence analysis.
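
Project Maven's actual models and pipeline are not public, but the general pattern, running an object detector over video frames and surfacing only confident hits for analysts to review, can be sketched with open-source stand-ins (OpenCV plus a pretrained torchvision detector; the input filename is hypothetical):

```python
# Illustrative sketch of the general pattern: run an object detector over
# video frames in near-real time and flag confident detections for a
# human analyst. Uses open-source stand-ins, not Project Maven's models.
import cv2
import torch
from torchvision.models.detection import fasterrcnn_resnet50_fpn
from torchvision.transforms.functional import to_tensor

model = fasterrcnn_resnet50_fpn(weights="DEFAULT").eval()

cap = cv2.VideoCapture("uav_footage.mp4")  # hypothetical input file
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
    with torch.no_grad():
        detections = model([to_tensor(rgb)])[0]
    # Surface only confident detections; a human makes the final call.
    for box, score in zip(detections["boxes"], detections["scores"]):
        if score > 0.8:
            print("candidate object at", box.tolist(), "score", float(score))
cap.release()
```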

The Intelligence Warfighting Function is defined as the related tasks and systems that help commanders understand the operational environment – the enemy, terrain, weather, and civil considerations. It directly enables C2 and situational understanding, facilitating decisive action. Similarly, the C2 Warfighting Function involves tasks and systems enabling commanders to synchronize and converge all elements of combat power. It is the mechanism that drives operations across all military functions.

Deputy Secretary of Defense Kathleen Hicks rightly stated that a main reason for integrating AI is to improve decision advantage. AI can process data faster and potentially identify patterns invisible to the human eye, offering insights that can accelerate the decision-making cycle. However, the critical distinction lies between improving decision *advantage* and allowing AI to *make* the decisions themselves.

Why Humans Must Remain in the Loop

While AI can process data at incredible speed and scale, its limitations and vulnerabilities are significant, making it unsuitable for autonomous decision-making in critical military contexts. There must always be a human in the loop (a person who approves each consequential action) or, at minimum, a human on the loop (a person who monitors the system and can intervene).
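
The difference between the two arrangements is, at heart, a control-flow question: does the system wait for approval, or proceed unless stopped? The hypothetical sketch below illustrates the distinction; the function names and scenario are invented for this example, not drawn from any real system:

```python
# Hypothetical sketch contrasting human-in-the-loop and
# human-on-the-loop control flow. Names and scenario are illustrative.

def human_in_the_loop(recommendation):
    """No action is taken until a person explicitly approves it."""
    answer = input(f"AI recommends: {recommendation}. Approve? [y/N] ")
    return answer.strip().lower() == "y"

def human_on_the_loop(recommendation, veto_window_s=10):
    """The system proceeds by default; a supervisor can intervene."""
    print(f"AI will execute: {recommendation} in {veto_window_s}s "
          "unless a supervisor vetoes.")
    # ... a monitoring dashboard and veto channel would go here ...
    return True  # proceeds unless actively stopped

if human_in_the_loop("reposition sensor coverage to grid 4"):
    print("Action approved by a human decision-maker.")
else:
    print("Action held for further review.")
```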

One major concern is the potential for AI to produce false and misleading information, often referred to as "hallucinations." LLMs, despite their vast training data, can generate outputs that are plausible but factually incorrect. Furthermore, the immense datasets used for training can be biased, unreliable, or incomplete, leading to skewed or undesirable outputs. Relying solely on such potentially flawed inputs for critical intelligence assessments or C2 decisions is inherently risky.

Another fundamental vulnerability is security. Like any software, AI systems are built by humans and are susceptible to programming mistakes that create attack surfaces. Adversarial actors, particularly sophisticated nation-states, can exploit these vulnerabilities. Research has demonstrated that machine learning models are vulnerable to "adversarial examples" – malicious inputs designed to cause erroneous outputs while appearing normal to humans. Studies have shown success rates above 80% in causing models from major tech companies to misclassify inputs through such attacks. Imagine an adversary tampering with the parameters of an AI system used to analyze drone footage, causing it to misidentify friendly forces as hostile or, worse, to fail to detect critical enemy assets. The consequences in a combat scenario could be catastrophic.
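
The best-known technique of this kind is the fast gradient sign method (FGSM) of Goodfellow et al., which nudges each input pixel slightly in the direction that increases the model's loss. A minimal sketch follows, using a pretrained torchvision classifier and placeholder inputs (the random image and class index stand in for real data):

```python
# Sketch of the fast gradient sign method (FGSM): perturb each pixel
# slightly in the direction that increases the model's loss. The image
# and label are placeholders; any differentiable classifier works.
import torch
import torch.nn.functional as F
from torchvision.models import resnet18

model = resnet18(weights="DEFAULT").eval()
image = torch.rand(1, 3, 224, 224, requires_grad=True)  # stand-in input
true_label = torch.tensor([207])  # stand-in class index

loss = F.cross_entropy(model(image), true_label)
loss.backward()

epsilon = 0.01  # perturbation small enough that a human would not notice
adversarial = (image + epsilon * image.grad.sign()).clamp(0, 1)

# The perturbed image often flips the model's prediction even though it
# looks essentially unchanged to a person.
print("original:", model(image).argmax().item(),
      "adversarial:", model(adversarial).argmax().item())
```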

The human element brings qualities that AI simply cannot replicate: intuition, experience-based judgment, and adaptability. Human leaders develop these qualities over time through successes and failures. Leadership is not merely following a script; it is about understanding context, weighing intangible factors, caring for personnel, challenging assumptions, and adapting to unforeseen circumstances. Vince Lombardi's quote, "Leaders aren't born, they are made. And they are made just like anything else, through hard work," highlights this developmental nature.

Military doctrine, tactics, techniques, and procedures (TTPs) provide a framework, but they are guidelines, not rigid rules. U.S. military success has often stemmed from leaders at all levels exercising mission command – empowering subordinates to adapt and make the best decisions based on the situation, even if it means deviating from standard TTPs. This creative adaptability and initiative are beyond the capabilities of current AI, which is constrained by its programming and training data.

A compelling historical example underscores this point: in 1983, at the height of the Cold War, Soviet officer Stanislav Petrov prevented a potential nuclear war by refusing to trust an automated early-warning system that falsely indicated a U.S. missile attack. His human judgment, based on intuition and an understanding of the broader context, overrode the erroneous machine output, saving countless lives. This highlights the danger of allowing automation to override human decision-making in life-or-death scenarios.

The Path Forward: AI as a Tool, Not a Commander

AI and ML technologies offer powerful tools to enhance military capabilities, improve efficiency, and provide commanders with more and better-analyzed information at speed. Project Maven demonstrated AI's ability to handle data overload. Future AI applications can further refine logistics, predict maintenance needs, simulate scenarios, and assist in intelligence gathering and preliminary analysis.

However, in the critical Warfighting Functions of Intelligence and Command and Control, where understanding nuance, assessing complex human intentions, adapting to unpredictable situations, and making ethical judgments are paramount, human leadership remains irreplaceable. Military success, as DoD leaders have often acknowledged, hinges on the quality of leadership, the ingenuity of personnel, and the ability of humans to make decisive choices.

As the military continues to integrate AI, the focus must remain on leveraging it to augment human capabilities and provide decision *advantage*, not to replace the human decision-maker. The vulnerabilities of AI to error, bias, and malicious manipulation, combined with the unique, indispensable qualities of human judgment, experience, and adaptability, dictate that a human must always remain in control, particularly when the stakes are highest.

Written by:

The AI Report

Daily AI, ML, LLM and agents news
