
Why the United States Is Winning the AI Race—for Now

Paul Scharre expands on his FP cover essay.

By Ravi Agrawal, the editor in chief of Foreign Policy.

On-demand recordings of FP Live conversations are available to FP subscribers.

Artificial intelligence has gone from being a relatively niche area a year ago to a topic that the broader public seems obsessed with. Some of this new interest, to be fair, is because of OpenAI’s ChatGPT. When it launched late last year, its performance resonated with non-experts because it started doing well at human tasks such as school tests or the bar exam. As a result, it spurred a bigger debate about AI’s role in society, its impact on jobs, the economy, and much else.

One area that has gotten less public discussion is how AI can impact geopolitics. That’s the focus of FP’s summer print issue: “The Scramble for AI” explores how this new technology has sparked a new arms race and is already having an impact on warfare and deterrence.

The magazine’s lead essay, by Paul Scharre, compares the AI race today to the nuclear race a generation ago. Just as it did then, this competition among countries and companies will create a world of haves and have-nots. Much of the scramble for AI is around controlling access to the computing hardware needed to train powerful AI models, which leads to new questions about alliances, sanctions, and regulation.

To delve deeper, I spoke with Scharre on FP Live, the magazine’s platform for live journalism. In addition to writing our cover essay, he is the author of Four Battlegrounds: Power in the Age of Artificial Intelligence. Scharre worked on emerging weapons technologies at the Pentagon and is currently the director of studies at the Center for a New American Security. Subscribers can watch the full interview in the video box atop this page. What follows is an edited and condensed transcript.

Ravi Agrawal: Paul, you compare today’s scramble for AI dominance to the nuclear race. Why?

Paul Scharre: One of the trends we’ve seen in AI is this need for massive amounts of computing hardware to train the most capable systems like ChatGPT. They’re trained on thousands of very specialized chips running for weeks or months at a time. That’s very costly and requires a lot of engineering talent. And that’s concentrating power in the hands of a very small number of actors.

RA: How much computing power does GPT-4 need? And if I wanted to try to match it, how much would it cost?

PS: The costs have been growing at an astronomical rate. The amount of computing power that’s been used to train the most powerful systems has increased by a factor of 10 billion from 2010 to 2022. In the last year, we’ve started to see the statistics go dark, because companies like OpenAI are not reporting how many chips they use anymore. The most advanced models are costing tens of millions of dollars just for their final training run. They cost more when you look at all of the experiments that are leading up to that point. Sam Altman, the CEO of OpenAI, has quoted a figure of $100 million for training GPT-4. That’s pretty pricey.

RA: Why is controlling access to computing hardware so crucial in the development of AI?

PS: In the last year or two we’ve seen companies scaling up massive amounts of computing hardware and huge data sets. One effect of this is that the models can do a wide range of tasks, which means they’re inherently dual use, similar to how nuclear power has beneficial civilian uses in reactors but can also be used for nuclear weapons. AI is in some ways even more than dual use. It could be used for a whole range of things, good and bad: fixing computer code but also finding cyber-vulnerabilities, writing marketing emails but also generating spear-phishing emails or disinformation. This makes proliferation a problem.

RA: When you talk about proliferation, which part of it are we the most worried about?

PS: Right now what’s happening is that a lot of trained models are being put up as open source on the web—and those proliferate very easily. But the hook for thinking about controlling proliferation is really at the hardware level.

Much like with nuclear weapons, where the world was able to manage proliferation and slow it down by controlling access to the underlying physical resources such as weapons-grade uranium and plutonium, there’s an opportunity to do something similar with AI by controlling hardware. That’s because the highly specialized, most advanced chips needed to train these models are built in only two countries—Taiwan and South Korea. They rely on equipment that only three countries in the world make—the Netherlands, Japan, and the United States. And so there’s an opportunity to start shaping who has access to the underlying physical resources needed to train these most capable models and to manage proliferation that way.

RA: Much of what we’re discussing is about a great global race between countries. But there’s a separate, parallel race between companies. How do companies and countries differ in this case?

PS: We’re in an interesting place where the leaders in this technology are in private industry, as opposed to the nuclear and space races, when governments developed those technologies.

As geopolitical tensions heat up between the United States and China, that’s obviously led to big moves in Washington and Beijing to recalibrate some of those U.S.-China tech ties. The export controls that the Biden administration levied in October on advanced chips and chip-manufacturing equipment going to China were a huge move from a geopolitical standpoint and also a big one for the industry. We will continue to see Washington step in from a national security standpoint to regulate some of these U.S.-China tech relationships, whether in business or academia. Sometimes those regulations create points of friction, and companies aren’t going to come out publicly and say that. But the interests are not always directly aligned.

RA: You mentioned the October export controls. We’re now several months into the start of their implementation. Bearing in mind that such controls take time to ripple through the industry, especially with contractors and subcontractors, how are they working to constrain China’s access to high-end semiconductors?

PS: Slowing Chinese industry will take some time to play out, in part because of where the export controls are set right now: just below the threshold of the most advanced chips. China has already stockpiled some of the most advanced chips that are now restricted, and there are others it can still purchase that are close to, but not quite at, that threshold. So the controls are not going to have an immediate effect. If they stay in place, their effect will likely build over time as the gap widens between the hardware China can access and what other countries can access.

RA: How much of a leader is the United States on AI right now vis-à-vis China or other countries? How do you characterize the difference?

PS: It depends on what you’re counting. If you’re looking at the most advanced models, sure, the companies that are building them are in the United States or the United Kingdom. But these models proliferate very quickly, so Chinese labs are maybe 18 months behind the most advanced models. That’s not that far behind. In many ways, what’s going to matter more is implementation and who is able to take these systems and adopt them in society, and that’s a place where it’s a mixed bag on both sides. There are some areas, such as facial recognition, where China is far ahead because it has adopted the technology as part of a very broad surveillance state. There are other areas, such as large language models, where Chinese censors are actually cracking down on the use of these generative models because they don’t want AI models saying something that’s offensive to the Chinese Communist Party. The political systems in the United States and China affect implementation in different ways. The key insight here is that the United States has core advantages over China in hardware and human talent in the long run, because the best AI scientists in the world want to come to the United States to study and work.

RA: Outside of the world’s top seven or eight economies, the majority of the world lives in countries that would fall in the have-nots category, to continue the nuclear analogy we began with. Where do they stand when it comes to AI in the next decade or 20 years? How do they compete?

PS: Over the last decade, the benefits of AI technology have been pretty widely distributed because these systems proliferate so quickly. The models that are trained are put online as open-source repositories where anyone can download them and use them. Once a model’s been trained, it can be modified pretty easily and cheaply. If an AI model that cost $10 million to train is put online, someone else might be able to download that model and use it as is, or even modify it for only a few hundred dollars. That’s a huge asymmetry in cost, and that’s been a reason why up until now the gains have been pretty widely distributed.

Going forward, we will likely see a bifurcation of the industry, where we still have a thriving and active open-source ecosystem that everyone can benefit from, but the most capable and advanced systems will end up being closed and proprietary. A lot of actors are going to get locked out of these most advanced systems.

RA: Gen. Stanley McChrystal and Anshu Roy wrote another essay in our print issue looking at how companies and countries are already mining immense amounts of data, putting it into AI models and systems that are able to predict with a surprising degree of efficiency and accuracy where certain trends and wars are headed. Can you give us a sketch of where you see countries competing over AI as it pertains to warfare?

PS: Data is an essential input into machine-learning systems because they’re trained on data, rather than a set of rules programmed in by people. Companies, countries, or militaries that can find ways of harnessing their data, and then use it for machine-learning systems, are going to have a huge advantage.

But there are weaknesses here, and it’s important to acknowledge that AI is not magic. What AI systems don’t do well is generalize to novel situations. If you end up in a circumstance that isn’t covered in the training data, something the system’s designers hadn’t foreseen, the AI system can fail catastrophically. Militaries are going to have to take that into account, because that’s a huge factor in warfare: you have an adaptive adversary who is going to try to find ways to exploit your AI systems and take advantage of them.

RA: There has been a spate of warnings about AI and its potential to lead to human extinction. Are these warnings overblown? Is some of this about sentience, or is it about more traditional risks, such as GPT-4 being able to simulate weapons construction?

PS: Whether or not these warnings are overblown is the subject of a lot of debate. It’s worth keeping a couple of things in mind. ChatGPT is not going to take over the world. There is no risk of that with the systems that exist today. But things have been moving forward fast. The last 18 months have seen faster progress than researchers anticipated, which is what’s causing top researchers to say, “We’re worried.”

The goal of the field of AI is to build machines that are intelligent like people. So if researchers succeed at what they’re trying to do, that could be a problem. We already have a hard time controlling the systems that exist today: we struggle to make sure facial recognition systems aren’t discriminatory or biased. As these systems scale up in intelligence, those problems remain unsolved. As systems become more capable, we want to make sure that they’re aligned with human values, that they act in a way that’s reliable and consistent, and that they’re not going to cause harm either intentionally or accidentally.

Ravi Agrawal is the editor in chief of Foreign Policy. Twitter: @RaviReports

