In a landmark development, European Union policymakers have reached an agreement on the AI Act, a sweeping new law designed to regulate artificial intelligence (AI). The legislation stands as one of the world’s first comprehensive attempts to govern the use of rapidly evolving AI technology and to address its societal and economic implications.
The AI Act establishes a global benchmark for countries seeking to harness the potential benefits of the technology while mitigating risks such as job automation, the spread of misinformation, and threats to national security. The law needs to go through a few final steps for approval, but the political agreement means its key outlines have been set.
European policymakers targeted the riskiest applications of AI by companies and governments, including law enforcement and essential services like water and energy. Notably, creators of large general-purpose AI systems, including those powering popular chatbots like ChatGPT, would face new transparency requirements. Chatbots and software that creates manipulated images such as “deepfakes” would have to make clear that what people were seeing was generated by AI, according to EU officials and earlier drafts of the law.
The use of facial recognition software by police and governments would be restricted outside of certain safety and national security exemptions. Companies that violated the regulations could face fines of up to 7 percent of global sales.
Thierry Breton, the European commissioner involved in negotiations, emphasized Europe’s role as a pioneer in setting global standards. However, questions persist about the law’s effectiveness, with some provisions expected to take 12 to 24 months to come into effect.
The Brussels agreement, reached after three days of negotiations, reflects a balance between fostering innovation and safeguarding against potential harm. The complexity of the discussions highlights the challenges faced by policymakers in finding this equilibrium.
The urgency to regulate AI heightened following the global sensation of ChatGPT’s release the previous year. While the United States issued an executive order addressing the national security implications of AI, other nations like Britain and Japan adopted a more hands-off approach. China imposed some restrictions on data use and recommendation algorithms.
Europe has been one of the regions furthest ahead in regulating AI, having started working on what would become the AI Act in 2018. In recent years, EU leaders have tried to bring a new level of oversight to tech, akin to the regulation of the healthcare or banking industries. The bloc has already enacted far-reaching laws related to data privacy, competition, and content moderation.
The AI Act imposes disclosure requirements on makers of large AI models, requiring them to share information about how their systems work and to evaluate them for “systemic risk.” Enforcement across 27 nations remains a challenge, as the law necessitates hiring new experts at a time when government budgets are tight. Legal challenges are expected, echoing previous criticisms of uneven enforcement of EU legislation like the General Data Protection Regulation.
As global governments and businesses increasingly turn to AI in various sectors, the AI Act’s impact will be closely monitored. It not only affects major AI developers but also businesses in education, healthcare, and banking, signaling a significant step in shaping the future of AI regulation worldwide.