While October's Executive Order on Artificial Intelligence was largely voluntary and fact-finding, the European Union's parliament is set to become the world's AI police, having put forward its own regulations after lengthy and contentious negotiations.

The EU's new AI Act is a significant legislative step aimed at regulating the use of artificial intelligence, focusing on mitigating risks to fundamental rights in areas like healthcare, education, and public services. Here are the key points from the MIT Technology Review's analysis:

  • Binding Rules on Transparency and Ethics: The AI Act introduces legally binding rules for tech companies, requiring them to inform users when they are interacting with AI systems such as chatbots, biometric categorization, or emotion recognition tools. It also mandates labeling deepfakes and AI-generated content, and designing systems so that AI-generated media can be detected.
  • Wiggle Room for AI Companies: The Act covers foundation models, requiring better documentation, compliance with EU copyright law, and disclosure of what training data was used. However, stricter rules apply only to the most powerful AI models, and it is largely up to companies to assess their own compliance requirements. (These rules may change as the technology becomes better understood.)
  • EU as AI Police: A new European AI Office will enforce the Act, with fines for noncompliance ranging from 1.5% to 7% of a firm's global sales turnover. The EU aims to set a global standard in AI regulation, much as it did with GDPR.
  • National Security Exemptions: The Act bans certain AI uses, such as untargeted facial recognition scraping and emotion recognition in workplaces or schools. However, it exempts AI systems developed exclusively for military and defense uses. Police use of biometric systems in public places is allowed only with court approval and for specific crimes such as terrorism and human trafficking, and predictive policing is banned unless it is "used with clear human assessment and objective facts."
  • Implementation Timeline: The final wording of the bill is still pending, with technical adjustments and approvals required from European countries and the EU Parliament. Once the law is in force, tech companies will have two years to comply with the rules; bans on certain AI uses apply after six months, and foundation model developers have one year to comply.

In addition, citizens will be able to take legal action to find out how an algorithm made a decision about them, so look for an uptick in Explainable AI in the months to come.
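
To make that concrete, here is a minimal sketch of one common form of explainability: reporting which input features most influenced a model's decisions. The synthetic dataset, the hypothetical feature names, and the use of scikit-learn's permutation importance are illustrative assumptions, not anything the AI Act itself prescribes.

```python
# A minimal sketch of "explainable AI" in practice: train a model on a
# synthetic stand-in for a decision system (e.g., credit scoring) and
# report which features drove its predictions. Feature names are hypothetical.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic data standing in for a real decision-making dataset.
X, y = make_classification(n_samples=1000, n_features=5,
                           n_informative=3, random_state=0)
feature_names = ["income", "debt_ratio", "age", "tenure", "region_code"]

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Permutation importance estimates how much each feature contributed to
# the model's predictions -- one common building block of an explanation.
result = permutation_importance(model, X_test, y_test,
                                n_repeats=10, random_state=0)
for name, importance in sorted(zip(feature_names, result.importances_mean),
                               key=lambda pair: -pair[1]):
    print(f"{name}: {importance:.3f}")
```

Feature-level accounting like this is only one ingredient of a meaningful explanation, but it is the sort of output that explanation requests are likely to demand.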