Context: The new draft of the European Union’s Artificial Intelligence Act has been agreed upon by the European Parliament. The Act aims to regulate the development and use of AI in the EU and includes provisions for increased transparency, accountability, and obligations for high-risk AI systems.
Major Highlights:
- The AI legislation was drafted in 2021 to bring transparency, trust, and accountability to AI and create a framework to mitigate risks to the safety, health, fundamental rights, and democratic values of the EU.
- The draft of the AI Act broadly defines AI as “software that is developed with one or more of the techniques that can, for a given set of human-defined objectives, generate outputs such as content, predictions, recommendations, or decisions influencing the environments they interact with”. It covers AI tools based on machine learning (including deep learning), logic- and knowledge-based approaches, and statistical approaches.
- The legislation seeks to strike a balance between promoting “the uptake of AI while mitigating or preventing harms associated with certain uses of the technology” and addressing ethical questions and implementation challenges in various sectors.
Risk categories of AI in the Act:
- The Act’s central approach is the classification of AI technologies based on the level of risk they pose to the “health and safety or fundamental rights” of a person. The Act defines four risk categories: unacceptable, high, limited and minimal.
- The Act prohibits the use of technologies in the unacceptable risk category, with few exceptions. These include:
- the use of real-time facial and biometric identification systems in public spaces
- systems of social scoring of citizens by governments leading to unjustified and disproportionate detrimental treatment
- subliminal techniques to distort a person’s behaviour
- technologies which can exploit vulnerabilities of the young or elderly, or persons with disabilities.
- The Act lays substantial focus on AI in the high-risk category, prescribing a number of pre- and post-market requirements for developers and users of such systems. Systems falling under this category include:
- biometric identification and categorisation of natural persons
- AI used in healthcare, education, employment (recruitment), law enforcement, justice delivery systems
- tools that provide access to essential private and public services (including access to financial services such as loan approval systems).
- The Act envisages establishing an EU-wide database of high-risk AI systems and setting parameters so that future technologies can be included if they meet the high-risk criteria.
- AI systems in the limited and minimal risk categories, such as spam filters or video games, may be used subject to only a few requirements, such as transparency obligations.
Concerns:
- While some industry players have welcomed the legislation, others have warned that broad and strict rules could stifle innovation.
- Companies have also raised concerns about transparency requirements, fearing that these could force them to divulge trade secrets. Explainability requirements in the law have also caused unease, as even developers often cannot fully explain how their algorithms function.