How India's new AI framework targets risks, bias and misuse
The newly released guidelines detail how AI should be developed and deployed

Feb 16, 2026
09:48 am

What's the story

Ahead of the five-day AI Impact Summit 2026, the Indian government has unveiled its first comprehensive set of artificial intelligence (AI) governance guidelines. The framework rests on guiding principles and existing laws, supplemented by new oversight bodies, to balance innovation with safeguards. The move signals India's commitment to responsible AI governance without a standalone law, addressing issues such as bias, misuse, and lack of transparency in AI systems while ensuring technological adoption isn't hindered.

Framework details

Guidelines based on 7 principles or 'sutras'

The newly released guidelines detail how AI should be developed and deployed in sectors like healthcare, education, agriculture, finance, and governance. The framework is built on seven broad principles, or sutras: trust as the foundation; people first; innovation over restraint; fairness and equity; accountability; understandable by design; and safety, resilience, and sustainability. Together, they emphasize that AI systems should support human decision-making, remain transparent, avoid discrimination, and operate with clear safeguards in place.

Legal reliance

AI risks already covered under existing laws

A key aspect of the guidelines is their reliance on existing laws. Officials have said that many AI-related risks are already covered under current legal provisions such as IT rules, data protection laws, and criminal statutes. Instead of enacting a separate AI law right now, the government has opted for periodic reviews and targeted amendments as technology evolves.

Oversight bodies

National-level bodies proposed for AI governance

The framework proposes the establishment of national-level bodies to oversee AI governance. These include an AI governance group for policy coordination across ministries, a technology and policy expert committee for specialist advice, and an AI safety institute focusing on testing standards, safety research, and risk assessment. The guidelines also define responsibilities for AI developers and deployers, such as transparency reports, clear disclosures when AI-generated content is used, and grievance redressal mechanisms for those affected by these systems.

Risk management

India aims to be global leader in responsible AI governance

High-risk applications, especially those affecting safety, rights, or livelihoods, are expected to follow stronger safeguards with human oversight. The guidelines reflect India's belief that AI shouldn't be confined to a few companies or countries but should be widely deployed to solve real-world problems while remaining trustworthy. By balancing innovation with safeguards, the government hopes to position India not just as a major user of AI but also as a global leader in responsible and inclusive governance, aligned with the 'Viksit Bharat 2047' vision.