We Need to Understand What Large Language Models (LLMs) Mean For Soft Power | Opinion


Amid GPT-4o's launch and the emergence of new tech policies worldwide, artificial intelligence (AI) is now at the center of today's public discourse. Foreign policy is not immune to this trend, with the U.N. passing its first AI safety resolution in 2024, and an AI race brewing globally. While foreign policy thinking about AI has often focused on hard power, such as enhancing military capabilities, there has been comparatively less discussion on how AI, particularly large language models (LLMs), might influence soft power.

Soft power, a term coined by Harvard political scientist Joseph Nye, refers to a country's ability to achieve its goals through "attraction and persuasion" rather than military force. Soft power relies on the attractiveness of a country's culture, values, educational systems, and more to influence foreign public opinion. In turn, a country's soft power can place pressure on leaders abroad to strengthen ties with that country or at least discourage hostility toward it. The United States wields significant soft power through media, universities, and cultural exports that shape perceptions worldwide.

LLMs like ChatGPT may represent a new dimension of soft power. Although data remains limited, early evidence suggests LLMs bear on soft power in several ways. First, the development of frontier LLMs in a nation can enhance its prestige, attracting top AI researchers and strengthening the country's appeal as an innovation hub. Second, and more importantly, early-stage research suggests that LLMs may contribute to the spread of a country's values abroad.

A numerical code runs across a screen. Nicolas Armer/picture-alliance/dpa/AP Images

Researchers studying GPT-3, an LLM trained mostly on English-language data, observed that much of the model's output aligned more closely with American values than with the cultural values of other countries. As use of American LLMs grows worldwide, these tools could further spread American values, culture, and other sources of soft power. Foreign governments may try to counter this by promoting the development of native-language LLMs to preserve their own nations' cultural heritage and soft power.

Third, as LLMs improve machine translation between languages, they may eventually act as a "force multiplier" for a country's soft power. As LLMs make translation quicker and cheaper, documents that once existed only in a country's native language will be read by more people globally, allowing that country to enhance its soft power brand. This benefit may be particularly strong for countries whose cultural media is not widespread because of language barriers.

There is increasing evidence that governments worldwide recognize the soft power potential of LLMs. In Europe, several governments have crafted proposals to support the construction of native-language LLMs explicitly for soft power purposes. The government of France has supported the homegrown startup Mistral in its efforts to build French-language LLMs that preserve the continuity of French cultural and linguistic traditions against English-language LLMs. Meanwhile, in Spain, some have argued that Madrid's efforts to build a Spanish LLM could boost its influence across the Spanish-speaking world—a textbook case of soft power promotion. These efforts are not limited to Europe—India, the United Arab Emirates, and others have been supporting domestic LLM development as well.

As states race to develop homegrown LLMs for soft power purposes, a global competition is emerging over investment in, and influence over, language models native to specific languages and cultural contexts. Take, for example, Jais, one of the world's highest-quality Arabic-language chatbots, produced by the Emirati company G42. When first introduced in 2023, Jais was hailed as a major innovation due to the difficulty of training a chatbot in Arabic. Both American and Chinese firms expressed interest in partnering with and supplying G42, creating a competition for influence over the company. In the end, Microsoft triumphed and G42 agreed to stop using Huawei telecommunications equipment. This episode highlights how geopolitical concerns can motivate strategic competition over investment in LLMs and their associated soft power.

Above all, LLMs' soft power implications raise important questions for researchers and policymakers to consider. Can LLMs influence the proliferation of democratic values worldwide? Some have suggested that LLMs may spread democratic values because they are trained on predominantly Western data. Others have argued that LLMs might instead enhance censorship. Can LLMs' soft power influence consumer sentiment, national mood, and more?

It is too early to definitively answer most of these questions, especially as evidence on LLMs' impact remains limited. That is why we need more research on LLMs and soft power. We need more empirical analyses of how states support LLM development, how individuals' behavior is shaped by LLM use, and more. Ultimately, we must move beyond seeing LLMs, and AI more broadly, exclusively as hard power tools and instead recognize their immense soft power implications. Doing so is vital to mapping AI's transformative role worldwide.

Sergio Imparato is a lecturer on government at Harvard University and author of The Sovereign President (Pisa University Press, 2015). He has previously published in The Hill, The Diplomat, and STAT.

Sarosh Nagar is a Marshall Scholar and researcher at Harvard, where his work focuses on the economic and geopolitical impacts of frontier technologies like AI and synthetic biology. His work has previously been published by The United Nations, The Hill, JAMA and Nature Biotechnology.

The views expressed in this article are the writers' own.
