Thursday, 6 November 2025

Open Letter to NITI Aayog: Urgent Need for a Disability-Inclusive AI Governance Framework in Light of Disability Rights

NITI Aayog
Government of India
New Delhi

Subject: Response to Times of India Article on AI Regulation – Disability Inclusion Cannot Be Left to Existing Laws Alone

Sir,

This letter is in response to the Times of India article dated 6 November 2025, titled “Don’t need separate law for AI: Panel”, which reports the conclusion of the high-level committee that existing sectoral laws are sufficient to govern artificial intelligence in India. With due respect, this position overlooks the disproportionate and often irreversible harms that AI systems are already inflicting on 27.4 million Indians with disabilities.

The panel’s stance—that “existing rules can address the majority of risks” at this stage—ignores ground realities. India’s legal and regulatory approach to accessibility has historically suffered from weak enforcement and voluntary compliance. The Supreme Court’s landmark judgment in Rajive Raturi v. Union of India (8 November 2024) must serve as a caution. The Court held that Rule 15 of the Rights of Persons with Disabilities Rules, 2017 created “a ceiling without a floor” by offering only aspirational guidelines rather than enforceable standards. It directed the Union Government to frame mandatory accessibility rules within three months in consultation with Disabled Persons’ Organisations (DPOs). The Court made it unequivocally clear that accessibility cannot be voluntary or discretionary.

If mandatory standards are now required even for physical infrastructure, it is untenable that AI systems—which increasingly mediate access to education, employment, healthcare, welfare and civic participation—remain governed by existing, non-specific laws. As the Court observed, accessibility is “a fundamental requirement for enabling individuals, particularly those with disabilities, to exercise their rights fully and equally.” This principle applies with even greater force to AI, whose decisions are automated, opaque and scaled.

Global best practice reinforces this. The European Union’s Artificial Intelligence Act (2024) embeds disability inclusion as a legal requirement.

  • Article 5(1)(b) prohibits AI systems that exploit disability-related vulnerabilities.
  • Article 16(l) mandates that all high-risk AI systems must comply with accessibility standards by design.
  • Developers are required to assess training data for disability bias and undertake Fundamental Rights Impact Assessments before deployment, backed by penalties.

None of these is an optional recommendation; they are binding law. India’s approach must be no less rights-based.

The risks of relying on existing laws are already visible:

  • AI proctoring tools misinterpret screen-reader use or stimming as cheating, denying disabled students fair access to education.
  • Hiring algorithms replicate historic discrimination, filtering out disabled candidates due to biased data sets.
  • Healthcare chatbots and telemedicine platforms routinely exclude blind, deaf, neurodivergent and cognitively disabled persons due to inaccessible design.
  • Welfare fraud detection systems flag disabled persons as “anomalies” due to atypical income or healthcare patterns, increasing wrongful denial of benefits.
  • Facial recognition and biometric systems routinely fail to recognise persons with disabilities, leading to denial of services, harassment or misidentification.

The NALSAR-CDS report “Finding Sizes for All” demonstrated how accessibility laws—without enforcement mechanisms—fail in practice. The same pattern will repeat with AI if we pretend that pre-existing laws are adequate. AI’s scale and opacity make post-facto redress ineffective; thousands will be harmed long before litigation can provide relief.

Therefore, disability inclusion in AI cannot be left to goodwill, post-hoc complaints or fragmented sectoral laws. India needs a non-negotiable baseline of mandatory safeguards, even if a separate AI law is not enacted immediately.

In this context, I urge NITI Aayog to:

  1. Notify mandatory accessibility and non-discrimination standards for all high-risk AI systems, especially in education, employment, healthcare, public services and welfare.
  2. Require Fundamental Rights Impact Assessments for all AI deployments by government and regulated entities.
  3. Mandate disability-bias testing in datasets and model outputs before deployment.
  4. Set up a permanent advisory mechanism with DPOs to co-create and monitor AI governance norms.
  5. Explicitly prohibit AI systems that exploit disability vulnerabilities or result in discriminatory exclusion.

Innovation must not come at the cost of constitutional rights. India’s commitments under the Rights of Persons with Disabilities Act, 2016 and the UNCRPD require accessible and equitable technology. The Supreme Court has shown the way in Raturi—create a floor of enforceable standards and allow progressive enhancement thereafter. AI governance must adopt the same logic.

India cannot lead the world in AI while systematically excluding its disabled citizens. Building AI that is inclusive, accessible and non-discriminatory is not optional—it is a constitutional, legal and ethical obligation.

Yours sincerely,