
The Risks Facing a Transnational AI Strategy

In a May 5th interview with CNN’s Fareed Zakaria1, Admiral James Stavridis, NATO’s former Supreme Allied Commander Europe, outlined a vision for how Artificial Intelligence (AI) could affect the military preparedness of America and its allies. He noted that predictive maintenance models and strategy copilots could become indispensable tools for commanders. His most dramatic point about the military applications of AI, however, came in the form of a warning: America and its allies are vulnerable to AI-enabled drone swarms that could overwhelm traditional air defenses. He cautioned that aircraft carriers, the backbone of American naval dominance, might be rendered ineffective by AI-coordinated drones and cruise missiles.

America and its military allies should listen attentively to Admiral Stavridis’ prescriptions. Developing a transnational AI strategy to realize his vision requires more than talks between militaries, however. This article highlights the political risks that can derail effective AI policy. The lessons presented here are applicable to any democratic alliance, such as the North Atlantic Treaty Organization2 (NATO) or the Quadrilateral Security Dialogue3 (Quad) between Australia, India, Japan, and the U.S.

Non-Military Cooperation Cannot Be Neglected

In his essay Soft Power, diplomat Joseph Nye presents the post-Cold War world as an arrangement of three stacked chess boards. Traditional, interstate military conflicts represent one board, while economic competition and loosely defined transnational issues, such as terrorism, represent the other two boards.4 Nye’s model demonstrates that any international partnership is buoyed by more than just military agreements: Economic and political cooperation is essential, too. Indeed, growing economic fissures between the U.S. and Europe and political fissures within Europe threaten to derail a shared AI agenda. Economic nationalism is coming into vogue in American politics, to the chagrin of European industry.5 Political disagreements between the EU and its illiberal members, such as Hungary, scuttle bloc-wide action.6 The desire for a globally competitive and ethical military AI strategy is shared, but it will only become reality if alliances rest on solid economic and political foundations.

Policymakers Must Avoid Half-Baked AI Regulation

The performance of Generative AI models has improved remarkably over the past few years, heightening public concern about the technology. According to a Pew Research poll conducted in August 2023, 52% of American adults were more concerned about AI than excited.7 This concern may be misplaced: Generative AI models have incredible recall abilities but weak reasoning abilities8, necessitating further development. Politicians should enact sensible regulations pertaining to certain applications of AI—such as mandating human-in-the-loop decision making for cases where human life is at risk—without infringing upon ongoing research. In October 2023, the Biden Administration issued the Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence.9 Among other requirements, this Executive Order mandates that firms developing AI models requiring more than 10^26 floating-point operations to train (10^23 for models trained on biological sequence data) notify the federal government and disclose safety evaluation results. For now, this requirement isn’t burdensome: Researchers at UC Berkeley estimate that OpenAI’s GPT-3, which has 175 billion parameters, requires 3.1 x 10^23 floating-point operations to train, over 300 times below the government’s reporting threshold.10 What is concerning, though, is how arbitrary this requirement seems: What dangers do AI models pose to society specifically at this threshold? In any case, floating-point operation count is a poor metric of performance: A smaller model trained on high-quality data may outperform a larger model trained on poor-quality data.11 Regulatory confusion hamstrings nations’ AI capabilities and is a boon for adversaries with fewer ethical qualms.
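The threshold comparison above is a simple back-of-the-envelope calculation, sketched below. The figures come from the Executive Order and the UC Berkeley estimate cited in the text; the variable names are illustrative, not official terminology.

```python
# Back-of-the-envelope check: how far below the Executive Order's
# reporting threshold does GPT-3's estimated training compute sit?

EO_THRESHOLD_FLOPS = 1e26    # general reporting threshold in the Executive Order
BIO_THRESHOLD_FLOPS = 1e23   # lower threshold for biological-sequence models
GPT3_TRAIN_FLOPS = 3.1e23    # UC Berkeley estimate for GPT-3 (175B parameters)

headroom = EO_THRESHOLD_FLOPS / GPT3_TRAIN_FLOPS
print(f"GPT-3's training compute is ~{headroom:.0f}x below the threshold")
# Roughly 320x, consistent with the article's "over 300 times" figure.
# Note that GPT-3 would already exceed the lower biological-data threshold.
```

The same arithmetic shows why critics call the cutoff arbitrary: a model a few hundred times larger than GPT-3 triggers reporting, while nothing in the order explains what changes at exactly that scale.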

Export Controls Are Largely Ineffective

In October 2022, the U.S. government introduced extensive export controls prohibiting the sale of certain advanced semiconductors and manufacturing technologies to China, citing fears that the nation would use those technologies to bolster its military AI capabilities.12 In response, semiconductor firms, including market leader NVIDIA, which earned over 20% of its revenue from sales to China, introduced marginally slower chips to comply with the export controls.13 The federal government responded one year later with a more comprehensive set of controls, covering not only top-of-the-line enterprise products but also high-end consumer products and low-end datacenter products.12 According to a report from Reuters, the government tightened these regulations again earlier this year.14 Enforcing the export control regime has proven even harder than crafting it. The Economist writes that Chinese firms can obtain advanced semiconductor products through middlemen that fall outside the purview of American export controls.13 Moreover, Chinese firms, such as technology powerhouse Huawei and chip manufacturer SMIC, are rapidly closing the gap with their Western competitors.13 There is a diplomatic cost to export controls, too, as American allies begrudgingly comply with tough policies.13 Restricting the semiconductor trade may please constituents but does little to maintain strategic leadership and alliance unity.

Summary

America and its allies correctly identify AI as the next arena of global competition, and bloc-wide discussions regarding the technology’s potential are commendable. Political risks, however, threaten the effectiveness of a coordinated AI strategy. The free world must choose between two futures for its cherished military alliances: Rising to the occasion or fading into irrelevance.

Sources

1 https://www.cnn.com/videos/world/2024/05/05/gps-0505-ai-in-the-military.cnn

2 https://www.nato.int/

3 https://www.cfr.org/in-brief/quad-indo-pacific-what-know

4 https://www.beaconjournal.com/story/opinion/columns/2013/06/13/joseph-s-nye-jr-game/10618285007/

5 https://www.economist.com/europe/2022/12/01/americas-green-subsidies-are-causing-headaches-in-europe

6 https://www.europarl.europa.eu/news/en/press-room/20240112IPR16780/the-hungarian-government-threatens-eu-values-institutions-and-funds-meps-say

7 https://www.pewresearch.org/short-reads/2023/11/21/what-the-data-says-about-americans-views-of-artificial-intelligence/

8 https://www.linkedin.com/posts/yann-lecun_the-weak-reasoning-abilities-of-llms-are-activity-7017542356425928704-OV0m?utm_source=share&utm_medium=member_desktop

9 https://www.whitehouse.gov/briefing-room/presidential-actions/2023/10/30/executive-order-on-the-safe-secure-and-trustworthy-development-and-use-of-artificial-intelligence/

10 https://github.com/amirgholami/ai_and_memory_wall#nlp-models

11 https://arxiv.org/abs/2306.11644

12 https://cset.georgetown.edu/article/bis-2023-update-explainer/

13 https://www.economist.com/business/2024/01/21/why-americas-controls-on-sales-of-ai-tech-to-china-are-so-leaky

14 https://www.reuters.com/technology/us-commerce-updates-export-curbs-ai-chips-china-2024-03-29/