Artificial intelligence (AI) is at the heart of pressing social, legal and ethical questions that governments, legislators, regulators, and the wider civil society are grappling with. Several notable efforts are now underway globally to determine how best to govern AI, although the structure and purpose of this oversight vary considerably across jurisdictions. The European Union (EU) and the U.S. are two of the most prominent.
The EU's AI Act is a landmark piece of legislation that lays out, for the first time, a comprehensive framework for regulating AI in the EU, covering its development, testing and use. The U.S. has not yet enacted comparable federal legislation, but the White House took a significant step toward establishing a governance regime for AI development and use by issuing an Executive Order in October 2023.
Given the substantial economic and geopolitical influence of the EU and the U.S., regulatory progress in these regions will significantly shape the global trajectory of AI, and with it the broader societal, legal and ethical consequences of the technology's adoption.
What Did We Find?
Our findings underline the importance of examining the EU AI Act further, particularly its potential implications for the U.S. They also highlight the need to deepen collaboration between the U.S. and the EU on AI governance, with each side learning from the other's experience as the EU puts the Act into practice.
Because both regions share the common objectives of making AI safer, more secure and more trustworthy, this collaboration is crucial. In a complex global AI ecosystem with a growing patchwork of regulatory frameworks, fostering transatlantic, and indeed broader international, cooperation on AI governance could benefit both Washington and Brussels.