India wants to make AI safer with new rules
India just dropped a white paper outlining a fresh way to govern AI—called a techno-legal framework.
The idea? Build legal checks and tech safeguards right into AI from day one.
They're planning things like an AI Governance Group to keep ministries in sync, expert committees for advice on law and cybersecurity, and even an AI Safety Institute to test systems for bias or security issues.
How will this actually work?
To keep tabs on things, India's proposing a National AI Incident Database that logs any failures or biases in AI—kind of like what the OECD does globally, but tweaked for India.
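To make the incident database idea a bit more concrete, here's a minimal sketch of what a single logged record could look like. The field names (system_name, harm_type, affected_groups, and so on) and the example values are purely illustrative assumptions, not a schema specified in the white paper or by the OECD.

```python
from dataclasses import dataclass, field
from datetime import date
from typing import List

# Hypothetical record shape -- fields are illustrative, not from the white paper.
@dataclass
class AIIncident:
    incident_id: str
    reported_on: date
    system_name: str                      # the AI system involved
    sector: str                           # e.g. "healthcare", "finance"
    harm_type: str                        # e.g. "bias", "security", "safety failure"
    description: str
    affected_groups: List[str] = field(default_factory=list)
    resolved: bool = False

# Example entry a deployer or regulator might file with such a database.
incident = AIIncident(
    incident_id="IN-2026-0001",
    reported_on=date(2026, 3, 1),
    system_name="loan-approval-model",
    sector="finance",
    harm_type="bias",
    description="Higher rejection rates observed for one demographic group.",
    affected_groups=["applicants in region X"],
)
print(incident)
```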
They also want companies to step up with transparency reports and self-regulation, offering incentives for those who play it safe.
Why now?
The white paper landed in January 2026, and the goal is simple: help India lead in safe, trustworthy AI and promote responsible deployment across key sectors.