ICMR releases ethical guidelines for AI usage in healthcare

Context: The Indian Council of Medical Research (ICMR) has released the country’s first ‘Ethical Guidelines for Application of Artificial Intelligence in Biomedical Research and Healthcare’, aimed at creating “an ethics framework which can assist in the development, deployment, and adoption of AI-based solutions” in the fields specified.

Ethical Principles for AI Technology in Healthcare

The implementation and advancement of AI technology in healthcare should be guided by ethical values and principles followed by all relevant stakeholders.

AI technology employs diverse data sets and algorithms including supervised, semi-supervised, and unsupervised learning. Although AI holds promise in healthcare, its complex and machine-driven analytical processes warrant vigilance among healthcare professionals and researchers.

Unlike other AI fields, AI for Health has a direct impact on human life and may have significant implications on patients’ well-being.

Thus, an ethical and prudent approach is essential before integrating these algorithms into routine healthcare practices. Additionally, safety and confidentiality issues pertaining to patients’ health data must be cautiously addressed during all phases of AI for Health development and deployment.


The guidelines lay out ten ethical principles that address issues specific to AI for health. These principles are patient-centric and are expected to guide all stakeholders in the development and deployment of responsible and reliable AI for health. The principles are as follows –

  1. Autonomy – The use of AI in healthcare raises concerns about the potential for the system to operate independently and compromise human autonomy. Incorporating AI into healthcare may result in machines taking over the responsibility of decision-making. It is essential that humans maintain full control over AI-based healthcare systems and medical decision-making. Under no circumstances should AI technology interfere with patient autonomy.
  1. Safety and Risk Minimization – Before widespread implementation, it is necessary to ensure that any AI technology-based system will operate safely and reliably. All stakeholders involved in the development and deployment of the technology bear the responsibility of ensuring participant safety. Patient dignity, rights, safety, and well-being must be the highest priority. The risk level associated with deploying AI technology in clinical research or patient care depends on the use case and deployment methodology. For instance, models deployed without oversight risk being more hazardous than those supervised by AI researchers and healthcare professionals. Similarly, deploying AI-enabled tools in high-risk patient care areas is riskier than deploying them elsewhere.
  1. Trustworthiness – Trustworthiness is the most desirable quality of any diagnostic or prognostic tool used in AI for health. Clinicians need to build confidence in the tools they use, and the same applies to AI technologies. To use AI effectively, clinicians and healthcare providers need a simple, systematic, and trustworthy way to test the validity and reliability of AI technologies.
  1. Data Privacy – AI-based technology should ensure privacy and personal data protection at all stages of development and deployment. Maintaining the trust of all stakeholders, including the recipients of healthcare, over the safe and secure use of their data is of prime importance to the successful and widespread deployment of AI. Data privacy must aim to prevent unauthorized access, modification, and/or loss of personal data. The application of AI to personal data must not unreasonably curtail people’s real or perceived liberty.
  1. Accountability and Liability – The concept of accountability entails that an individual or organization is responsible for their actions and should be transparent about their activities. When it comes to AI technologies for healthcare, it is crucial that they are subject to scrutiny by relevant authorities at any given time. Regular internal and external audits must be conducted to ensure that the AI technologies are functioning effectively. The results of these audits should be made available to the public.
  1. Optimization of Data Quality – The performance of AI technology heavily relies on the data utilized for training and testing purposes, making it a data-driven technology. In the healthcare sector, the quality and size of the dataset are critical as a skewed or insufficient dataset can result in issues such as data bias, errors, and discrimination. Data bias is regarded as the most significant risk to data-driven technologies like AI in healthcare. It is crucial to exercise due diligence to ensure that the training data is unbiased and represents a substantial portion of the target population.
  1. Accessibility, Equity and Inclusiveness – Developing and deploying AI technologies in healthcare presupposes wide availability of digital infrastructure. The digital divide exists in almost all countries and is more prominent in low- and middle-income countries (LMICs). Heavy reliance on technology may therefore interfere with the wider application of promising tools in the very areas where they are expected to make the greatest difference.
  1. Collaboration – In the field of AI for health, a large and well-curated dataset is crucial for the effective utilization of AI, and this can only be achieved by fostering collaboration at all levels. Given the rapidly changing landscape of AI technology, collaboration among AI experts during research and development is essential to ensure that the most appropriate techniques and algorithms are used to address healthcare issues. Collaboration between AI researchers and healthcare professionals throughout the development and adoption of AI-based solutions is expected to enhance the benefits of this promising technology.
  1. Non-Discrimination and Fairness Principles – To avoid bias and inaccuracy in algorithms and to ensure quality, the following principles should be followed:
    • The dataset used to train the algorithm must be accurate and representative of the population in which the technology is used. The researcher has the responsibility to ensure data quality.
    • Inaccuracies and biases can cause AI technologies to perform suboptimally or malfunction; external independent algorithmic audits and continuous end-user feedback analysis should be performed to minimize them. AI developers/researchers must acknowledge any biases involved and take the necessary steps to rectify them.
    • AI should never be used as a tool for exclusion. Special attention must be given to under-represented and vulnerable groups like children, ethnic minorities, persons with disabilities, etc. The AI developers should promote the active inclusion of women and minority groups.
    • Developers should give special attention to promoting and protecting the equality of individuals; freedom, rights, and dignity should be upheld with equality and justice.
    • AI technologies should be designed for universal usage. Discrimination against individuals or groups on the grounds of race, age, caste, religion, or social status is unethical.
    • The reversibility of decisions made by AI technology should be considered in case harm occurs to any patient/participant. Before implementing the technology, the option to reverse a decision must be integrated into the AI design.
    • If any unfortunate event arises from the malfunctioning of the AI technology, there should be an appropriate redressal mechanism for the victim. The manufacturer must ensure that there is a provision for proper grievance redressal.
    • There must be a safe mechanism to raise concerns pertaining to the AI technology, whether technical, functional, ethical, or related to misuse. There should also be a proper mechanism for protecting whistleblowers.
  1. Validity – AI technology applied in healthcare must undergo rigorous validation in both clinical and field settings to ensure its safety and efficacy for patients and participants. Differences in the datasets used to train AI algorithms can cause AI-based solutions to diverge, and this discordance in diagnostic ability may confuse end-users, including health professionals and patients. An internal mechanism to monitor such issues and provide appropriate feedback to developers, with the clinical context in mind, is essential. An efficient feedback mechanism is also crucial for necessary updates when AI technology affects individuals or healthcare systems, since the application of AI-based decisions in clinical settings can lead to health hazards or mismanagement.
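The representativeness requirement in the data-quality and non-discrimination principles above can be sketched as a simple pre-training check that compares the demographic composition of a training dataset against reference population shares. This is only an illustrative sketch: the group names, reference proportions, and tolerance threshold below are assumptions, not values from the guidelines.

```python
from collections import Counter

# Hypothetical reference proportions for the target population --
# a real check would use census or health-registry data.
POPULATION = {"group_a": 0.45, "group_b": 0.35, "group_c": 0.20}

def representation_gaps(records, key="group", tolerance=0.10):
    """Return groups whose share in `records` deviates from the
    reference population share by more than `tolerance`."""
    counts = Counter(r[key] for r in records)
    total = sum(counts.values())
    gaps = {}
    for group, expected in POPULATION.items():
        observed = counts.get(group, 0) / total if total else 0.0
        if abs(observed - expected) > tolerance:
            gaps[group] = {"expected": expected, "observed": round(observed, 3)}
    return gaps

# Toy dataset in which group_c is badly under-represented.
training_data = (
    [{"group": "group_a"}] * 60
    + [{"group": "group_b"}] * 38
    + [{"group": "group_c"}] * 2
)
flagged = representation_gaps(training_data)
```

Flagged groups would then prompt the due-diligence step the guidelines call for, such as collecting more data before training.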
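Similarly, the algorithmic-audit and validation principles imply measuring a model's performance per subgroup rather than only in aggregate, since a model can look accurate overall while failing a minority group. A minimal sketch, with made-up toy labels and predictions:

```python
def per_group_accuracy(labels, preds, groups):
    """Compute accuracy overall and per subgroup -- large gaps between
    subgroups are a signal that an independent audit is needed."""
    assert len(labels) == len(preds) == len(groups)
    stats = {}
    for y, p, g in zip(labels, preds, groups):
        correct, total = stats.get(g, (0, 0))
        stats[g] = (correct + (y == p), total + 1)
    overall = sum(c for c, _ in stats.values()) / len(labels)
    per_group = {g: c / t for g, (c, t) in stats.items()}
    return overall, per_group

# Toy audit: this hypothetical model is much weaker on group "B".
labels = [1, 0, 1, 1, 0, 1, 0, 0]
preds  = [1, 0, 1, 0, 1, 0, 0, 0]
groups = ["A", "A", "A", "B", "B", "B", "A", "B"]
overall, by_group = per_group_accuracy(labels, preds, groups)
```

In an external audit, the same per-group breakdown would be computed on an independent validation dataset, matching the guidelines' call for validation in both clinical and field settings.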

Mains Practice Question: 

Q. What are the key ethical principles that should be considered when developing and deploying AI technology in healthcare, and how can these principles be effectively integrated into the development process to ensure the safe and responsible use of these technologies?
