The European Parliament has approved the Artificial Intelligence (AI) Act, the world’s first comprehensive law aimed at minimizing the risks associated with the technology, BBC reports. The draft law classifies AI systems according to the level of risk they pose and establishes corresponding control measures.
The field of AI is booming and becoming highly profitable, but it also raises concerns about bias, privacy, and even the future of humanity. The bill’s creators say it will make the technology more human-centered.
A leading position
- According to the BBC, the law puts the EU at the forefront of global efforts to tackle the dangers of AI.
- China has already introduced a number of AI laws. In October 2023, US President Joe Biden signed an executive order requiring AI developers to share safety findings with the US government. But the EU has gone even further.
- The EU’s AI law is the world’s first and only set of mandatory requirements to mitigate risks, according to Enza Iannopollo, principal analyst at Forrester. She added that this would make the EU the “de facto” world standard for reliable AI, leaving all other regions, including the UK, to “play catch-up”.
- The UK hosted an AI safety summit in November 2023, but is not planning legislation along the lines of the AI Act.
How the AI Act will work
- The main idea behind the law is to regulate AI based on its ability to harm society: the higher the risk, the stricter the rules.
- AI applications that pose a “clear risk to fundamental rights” will be banned, including some that involve the processing of biometric data.
- AI systems considered “high-risk”, such as those used in critical infrastructure, education, healthcare, law enforcement, border management or elections, must meet strict requirements.
- “Low-risk” services such as spam filters will face the lightest regulation – the EU expects most services to fall into this category.
- The law also creates provisions to combat the risks posed by the systems behind generative AI tools and chatbots, such as OpenAI’s ChatGPT.
- Makers of some so-called general-purpose AI systems, which can be applied to many different tasks, will be required to be transparent about the material used to train their models and to comply with EU copyright law.
Copyright laws
- The copyright provisions were one of the most lobbied parts of the bill.
- OpenAI, Stability AI, and GPU giant Nvidia are among a handful of AI companies facing lawsuits over their use of data to train generative models.
- Some artists, writers, and musicians argue that the process of “harvesting” vast amounts of data, potentially including their own work, from virtually all corners of the Internet violates copyright laws.
- The bill needs to go through several more stages before it officially becomes law.
- Lawyer-linguists, whose job is to check and translate legislation, will scour its text, and the European Council, composed of representatives of EU member states, will also need to endorse it, though that is expected to be just a formality.