My Statement on the Biden Administration’s Executive Order on Artificial Intelligence
Seven years ago, I asked my Chief Science Advisor to study how artificial intelligence could play a growing role in the future of the United States. The report provided a framework for how to think about and adapt to emerging technological change. Since then, the world has seen how quickly AI can evolve — with the potential to change the way we work, learn, and create.
In the past, governments haven’t always adapted well to this kind of transformational change, but we have an opportunity to get this right. That’s why I’m glad to see the Biden administration acting with a sense of urgency — securing voluntary commitments from leading companies, and now signing an executive order designed to encourage innovation while avoiding some of the biggest risks. Congress should follow President Biden’s lead and look to his executive order for opportunities to fund this work.
It’s clear by now that AI will affect us all. It makes sense that much of the attention — both in government and the private sector — is focused on extreme risks and national security threats. We don’t want anyone with an internet connection to be able to create a new strain of smallpox, access nuclear codes, or attack our critical infrastructure. And we have to make sure this technology doesn’t fall into the hands of people who want to use it to turbocharge things like cybercrime and fraud.
But I’m glad the Biden administration is also thinking about other challenges that could end up being far more common. We’ve already seen, for example, what can happen when our shared basis of facts begins to erode, affecting everything from politics and the economy to public health. Generative AI tools shouldn’t accelerate this trend.
That’s why today’s executive order is an important step in the right direction. It calls for developing guidelines and tools to make sure AI systems are safe, and requires developers of the most powerful models to share key information with the government. It also directs government agencies to prepare for the rise of AI — protecting consumers and workers and addressing the potential for discrimination and bias while also making sure we harness the enormous potential of AI to make our lives better.
Because technology transcends borders, the executive order also creates a roadmap for the U.S. to engage in more direct diplomacy, help set international standards, and work more closely with our G7 partners, as Vice President Harris will do when she participates in a UK summit on AI later this week.
None of this will be easy, which is why we need to convince more talented people to work in government, not just the private sector. I’m happy to see this administration embrace programs to quickly hire more AI professionals and add new capabilities to the U.S. Digital Service, and I encourage anyone who wants to learn more about these opportunities to visit www.ai.gov.
We also need to create new ways for people who care about these issues — from governments and companies to advocacy groups and civil society — to come together and debate the best way forward. The good news is that many groups — from the Leadership Conference on Civil and Human Rights to Upturn to the Alignment Research Center — are already tackling these questions, and making sure more people feel like their concerns are being heard and addressed.
As we think more critically about these issues, we should heed the lessons of the past. When social media was on the rise, most decisions were made by a small group of people with almost no oversight. Those people created platforms that helped us connect in new and exciting ways, but they also failed to anticipate the harm their tools could do. By the time it became clear, much of the damage had already been done. We can’t make the same mistake again, and the industry leaders I talk to agree. The stakes are too high.
Finally, we need to recognize that democracy and innovation depend on each other. If we want America to continue to lead, we need to keep pushing new technology forward. But we also need democratic values — from freedom of speech to the rule of law — that make innovation possible. That’s why anyone working to harness the power of these new tools has to make a choice: ignore potential problems until it’s too late, or proactively address them in a way that unlocks the enormous benefits of breakthrough technology while also strengthening democracy.
If we want AI to be a force for good, we have to be able to stand for something bigger — not just because it’s the right thing to do, but because it’s the smart thing to do. I applaud the Biden administration for taking this important step, and hope it’s just the beginning.