Context: European Union officials have reached a provisional deal on the world's first comprehensive laws to regulate the use of artificial intelligence. The European Parliament will vote on the AI Act proposals in early 2024, and the legislation is expected to take effect in 2025.
Major Highlights:
- The European Parliament defines AI as software that can, "for a given set of human-defined objectives, generate outputs such as content, predictions, recommendations or decisions influencing the environments they interact with".
- The framework comprises safeguards on AI use within the European Union. These include a mechanism through which consumers can file complaints about violations, precise guardrails on AI adoption by law enforcement agencies, and fines for violations.

Classification of AI:
- The Act’s central approach is the classification of AI technologies based on the level of risk they pose to the “health and safety or fundamental rights” of a person. There are four risk categories in the Act — unacceptable risk, high risk, limited risk and minimal risk.
- The Act prohibits the use of technologies in the unacceptable risk category, with few exceptions. E.g., large-scale deployment of facial recognition technology is banned, with limited exemptions for law enforcement.
- The Act places substantial focus on AI in the high-risk category, prescribing many pre- and post-market requirements for developers and users of such systems. E.g., the use of AI tools in self-driving cars will be permitted, but subject to certification.
- AI systems in the limited (medium) and minimal risk categories are allowed to be used subject to a few requirements, such as transparency obligations. E.g., generative AI chatbots, video games.
Enforcement:
- The EU will be able to monitor and sanction those who violate the law through a new body called the EU AI Office.
- The EU AI Office will have the power to impose fines of up to seven percent of a company's turnover or 35 million euros, whichever is larger.
