IndiaAI Mission: Centre asks AI developers to reduce bias
What's the story
The Indian government has asked developers working on large language models (LLMs) under the IndiaAI Mission to prioritize bias mitigation. The move comes as a part of efforts to ensure that these AI models do not produce insensitive results when faced with difficult prompts. The Ministry of Electronics and Information Technology (MeitY) emphasized the need for caution, given India's diverse society and culture.
Bias mitigation
MeitY stresses on handling sensitive connotations with care
A MeitY official stressed the need to handle sensitive connotations related to caste, gender, food habits, regional and linguistic stereotypes as well as ethnic and religious differences with utmost care. The official said they want Indian models to be inclusive and not discriminatory or based on historical biases. All under-construction LLMs have been directed to integrate stringent stress testing into their framework.
Testing tools
IndiaAI mission's call for stress testing tools
Back in October, the IndiaAI Mission had invited expressions of interest (EOI) for Stress Testing Tools (STT). These are projects to test AI systems in extreme conditions. The models will be tested under 'adversarial inputs, data drift, or distributional shifts' and not just generic IT load testing. The focus will also be on AI behaviors like fairness, privacy, and accountability.
Algorithmic bias
Historical biases in AI algorithms and their impact
When AI algorithms detect historical biases or systemic inequities in the data they are trained on, their conclusions can also reflect those biases and inequities. This became evident in 2018 when Amazon scrapped an experimental AI recruiting tool after finding it systematically discriminated against female candidates in the US. Bias mitigation is the process of systematically identifying and reducing such unfair prejudices in AI systems.
AI development
Start-ups and consortia race to develop indigenous foundational LLM
A number of start-ups and industry-academia consortia are racing to develop the first indigenous foundational LLM, with government support. This includes Param-1, a bilingual model with 25% Indic data, and other speech or vision models for Indian languages being developed by BharatGen and Tech Mahindra's Indus LLM. Sarvam AI's 24-billion-parameter open-source hybrid model excelling in math, programming, and Indian languages called Sarvam M is also in this race.
Ethical AI
Government's focus on bias mitigation and ethical AI
The government is also focusing on bias mitigation as part of its global agreement to create and deploy a set of open access AI tools. These tools will lead to responsible and ethical deployment across the world. Called 'AI Commons,' they will also feature ethical AI certification, anonymization, and stress testing features where the whole world can contribute.