Context: The Council of Europe (COE) took a significant step by adopting the Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law, also known as the 'AI Convention', on May 17, 2024.
Background:
As of now, many tools, guidelines, and governance principles exist at the national level, but none is binding or accepted globally. Against this background, the Council of Europe has adopted the Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law.
About Council of Europe:
The Council of Europe is Europe’s leading human rights organization. Since its foundation in 1949, the organization has created a common legal space, centered on the European Convention on Human Rights (ECHR), across its 46 member states. This represents a death penalty-free zone for more than 700 million people.
Need for Rules on Artificial Intelligence: The EU's AI Act aims to ensure that humans can trust what AI has to offer. While most AI systems pose limited to no risk and help solve many societal challenges, certain AI systems create risks that must be addressed to avoid undesirable outcomes such as biased decisions or unfair advantages to particular communities.
- The proposed act will:
- Address risks specifically created by AI applications.
- Prohibit AI practices that pose unacceptable risks.
- Determine a list of high-risk applications.
- Set clear requirements for AI systems for high-risk applications.
- Define specific obligations for deployers and providers of high-risk AI applications.
- Require a conformity assessment before a given AI system is put into service or placed on the market.
- Put enforcement in place after a given AI system is placed on the market.
- Establish a governance structure at the European and national levels.
Key features of Framework Convention on Artificial Intelligence and Human Rights, Democracy, and the Rule of Law (AI Convention):
- The Convention aims to create a balanced framework that encourages technological advancement while safeguarding fundamental freedoms and democratic values. By establishing clear guidelines, it seeks to prevent AI from undermining democratic institutions. It also ensures the ethical use and development of AI systems.
- Article 1: Aims to ensure that activities within the lifecycle of artificial intelligence systems are fully consistent with human rights, democracy and the rule of law.
- Article 3: Covers the activities within the lifecycle of artificial intelligence systems that have the potential to interfere with human rights, democracy, and the rule of law as follows:
- a. Each Party shall apply this Convention to the activities within the lifecycle of artificial intelligence systems undertaken by public authorities or private actors acting on their behalf.
- b. Each Party shall address risks and impacts arising from activities within the lifecycle of artificial intelligence systems by private actors... in a manner conforming with the object and purpose of this Convention.
- Military applications of AI are not covered under the Convention, owing to a lack of consensus among the parties.
- Article 4: Provides for ‘General Obligations’ in the convention pertaining to the protection of human rights and the integrity of democratic processes.
- Article 5: Provides for fundamental principles of governance, such as respect for the rule of law. Although issues like disinformation and deep fakes are not addressed specifically, parties to the Convention are expected to take steps against them.
- To ensure its effective implementation, the convention establishes a follow-up mechanism in the form of a Conference of the Parties.
- The Convention requires each party to establish an independent oversight mechanism to monitor compliance with the Convention, raise awareness, stimulate informed public debate, and carry out multi-stakeholder consultations on how AI technology should be used.

Risk Categorization: The EU Regulatory Framework defines four levels of risk for AI systems:
- Minimal risk: Most AI systems are expected to be low risk, such as content recommendation systems or spam filters. Companies can choose to follow voluntary requirements and codes of conduct.
- Limited risk: Refers to risks associated with a lack of transparency in AI usage. The AI Act introduces specific transparency obligations to ensure that humans are informed when necessary, fostering trust. For instance, when using AI systems such as chatbots, humans should be made aware that they are interacting with a machine so they can make an informed decision to continue or step back. AI-generated content must also be identifiable: AI-generated text published to inform the public on matters of public interest must be labeled as artificially generated, and the same applies to audio and video content constituting deep fakes.
- High risk: AI systems that negatively affect safety or fundamental rights are considered high risk. They face requirements such as using high-quality data and providing clear information to users, and fall into two categories:
  - AI systems used in products covered by EU product safety legislation, including toys, aviation, cars, medical devices, and lifts.
  - AI systems in specific areas that must be registered in an EU database: management and operation of critical infrastructure; education and vocational training; employment, worker management, and access to self-employment; access to and enjoyment of essential private services and public services and benefits; law enforcement; migration, asylum, and border control management; and assistance in legal interpretation and application of the law.
- Unacceptable risk: AI systems considered a threat to people; these will be banned. They include:
  - Cognitive behavioral manipulation of people or specific vulnerable groups, for example voice-activated toys that encourage dangerous behavior in children.
  - Social scoring: classifying people based on behavior, socio-economic status, or personal characteristics.
  - Biometric identification and categorization of people.
  - Real-time and remote biometric identification systems, such as facial recognition.
Way forward:
As AI technology evolves, the treaty may need to be reviewed and adapted to address new challenges and opportunities. This will ensure that the treaty remains relevant and effective in safeguarding human rights and democratic values. Further, efforts should be made to encourage more countries, including non-European states, to join the treaty. This would promote a global standard for the responsible use of AI, ensuring broader protection of human rights and democratic principles.

The AI Convention adopted by the Council of Europe is a crucial step in regulating AI's impact on human rights and democracy. By addressing risks and establishing clear governance, it balances technological progress with the protection of freedoms. The treaty's future revisions will ensure it adapts to emerging challenges in AI.