
Thursday, 13 November 2025

An Open Letter to the Ministry of Electronics and Information Technology: A Critique of the India AI Governance Guidelines on the Omission of Mandatory Disability and Digital Accessibility Rules

 To:

The Secretary, Ministry of Electronics and Information Technology (MeitY)
Government of India, New Delhi
Email: secretary[at]meity[dot]gov[dot]in

Preamble: The Mandate for Accessible and Inclusive AI

The recently issued India AI Governance Guidelines (I-AIGG) assert a vision of “AI for All” and commit India to inclusive technology, the optimisation of social goods, and the avoidance of discrimination. However, the guidelines fail to operationalise mandatory and enforceable disability and digital accessibility rules – a legal and ethical lapse that undermines both national and international obligations. As a professional engaged in technology policy and disability rights, and in light of the Supreme Court's judgment in Rajive Raturi v. Union of India (2024), this letter outlines why voluntary commitments are insufficient and why robust, mandatory accessibility standards are immediately warranted.

The Policy Paradox: Aspirational Promises versus Legal Obligations

The I-AIGG framework advances “voluntary” compliance, elevates inclusive rhetoric, and references “marginalised communities” in its principles. However, it neither defines “Persons with Disabilities” (PwDs) nor mandates conformance with domestic accessibility rules, as legally required by the Rights of Persons with Disabilities Act, 2016 (RPwD Act). This introduces a regulatory gap: aspirational principles supplant the non-negotiable legal floor guaranteed to persons with disabilities. Such dilution is legally unsustainable given India’s obligations under the UNCRPD and under Sections 40, 44, 45, 46, and 89 of the RPwD Act.

The Rajive Raturi Judgment: Reinforcing Mandatory Compliance

The Supreme Court’s decision in Rajive Raturi (2024) unambiguously directed the Union Government to move from discretionary, guideline-based approaches to compulsory standards for accessibility across physical, informational, and digital domains. The Court found that reliance on non-binding guidelines and sectoral discretion violated statutory mandates, and it instructed the creation of enforceable, uniform, and standardised rules developed in consultation with persons with disabilities and stakeholders. This is particularly relevant to digital and AI governance, where exclusion can be algorithmic, structural, and scaled, denying access to education, employment, health, and social participation. The judgment refutes the adequacy of sectoral or voluntary approaches – digital accessibility is a fundamental right, and non-compliance amounts to a denial of rights for PwDs in India.

The EU Benchmark: Legal Mandates, Not Discretion

The European Union’s AI Act (Regulation (EU) 2024/1689) and its general accessibility directives establish mandatory, rights-based compliance for digital accessibility. The EU Act:

  • Explicitly enforces accessibility as a legal obligation, not a voluntary commitment, anchored in the UNCRPD and Universal Design principles.
  • Mandates that all high-risk AI systems comply with technical accessibility standards by design, with legal penalties for non-compliance.
  • Classifies systems impacting education, employment, healthcare, and public services as high-risk, subjecting them to strict regulatory scrutiny.
  • Prohibits any AI deployment that exploits or discriminates against persons with disabilities, addressing historical and algorithmic bias at source.

Thus, the EU approach demonstrates enforceable protection for PwDs, with stakeholder consultation, technical linkage to sectoral accessibility standards, and mechanisms for remediation and complaint.

Critique of I-AIGG: Core Deficiencies and Recommendations

Absence of Disability-Specific Provisions:

The term “marginalised communities” is insufficiently specific. India’s legal framework demands explicit protection for PwDs, including reasonable accommodation, accessible formats (such as ePUB and OCR-based PDF), and compliance with domestic standards (GIGW, Harmonised Guidelines 2021).

  1. No Accessibility-by-Design Mandate for AI: While the I-AIGG insists on “Understandability by Design,” it fails to require “Accessibility by Design.” Systems that are explainable but not operable by PwDs remain discriminatory.
  2. Inadequate Response to Algorithmic Bias: AI bias mitigation in the I-AIGG does not extend to underrepresented disability data or to systemic exclusion caused by inaccessible training sets. The EU model, by contrast, mandates active audit and correction for disability-related data bias.
  3. Weak Grievance Redressal Mechanisms: Voluntary or generic redress measures neglect the diversity of disability and the necessity for robust, accessible remedies in every sector where AI is used.
  4. Non-compliance with the Judicial Mandate: Above all, the approach bypasses the Supreme Court’s explicit instructions to operationalise compulsory rules – an omission that is both ultra vires and constitutionally indefensible.

Policy Prescription: Steps Toward Compliance

  • Draft and Notify Mandatory AI Digital Accessibility Standards: MeitY must codify and enforce AI digital accessibility standards as binding, not optional, rules. These must reference existing Indian standards (GIGW/HG21), adopt international best practices (WCAG), and be technology-agnostic.
  • Classify High-Risk AI Systems with a Disability Lens: Mandate Disability Impact Assessments, mirroring the EU approach, for all AI systems deployed in health, education, employment, and public services.
  • Institutionalise Disability Rights Expertise: Add disability rights experts and diverse PwD representatives to the AI Governance Group and the Technology Policy Expert Committee to ensure continued compliance monitoring and gap correction.
  • Mandate Dataset Audits and Privacy Protections: Require dataset bias audits for disability, establish anonymisation protocols for disability-related data, and ensure representation of PwDs in AI datasets.
  • Create Enforceable, Accessible Grievance Redress Channels: Grievance and remedy processes must be designed for operability across all 21 disability categories recognised under the RPwD Act, in multiple formats and languages, with offline options for digitally marginalised users.
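To make the dataset-audit prescription above concrete, here is a minimal sketch of what such an audit could compute. The field names (`disability`, `selected`) and thresholds are illustrative assumptions on my part, not anything specified by the I-AIGG or the RPwD Act; a real audit would follow notified standards developed in consultation with PwDs.

```python
# Minimal sketch of a disability-representation and outcome-disparity audit.
# Field names and thresholds are hypothetical; a notified standard would
# define these through stakeholder consultation.

def audit_dataset(records, min_representation=0.05, min_rate_ratio=0.8):
    """Flag under-representation of PwDs and disparate selection rates."""
    total = len(records)
    pwd = [r for r in records if r["disability"]]
    non_pwd = [r for r in records if not r["disability"]]

    representation = len(pwd) / total if total else 0.0

    def selection_rate(group):
        return sum(r["selected"] for r in group) / len(group) if group else 0.0

    rate_pwd = selection_rate(pwd)
    rate_other = selection_rate(non_pwd)
    # "Four-fifths"-style ratio: PwD selection rate vs. non-PwD selection rate.
    rate_ratio = rate_pwd / rate_other if rate_other else 1.0

    return {
        "representation": representation,
        "under_represented": representation < min_representation,
        "rate_ratio": rate_ratio,
        "disparate_impact": rate_ratio < min_rate_ratio,
    }

# Toy dataset: PwDs are 10% of records but selected at 20% vs. 50% for others.
sample = (
    [{"disability": True, "selected": False}] * 8
    + [{"disability": True, "selected": True}] * 2
    + [{"disability": False, "selected": True}] * 45
    + [{"disability": False, "selected": False}] * 45
)
report = audit_dataset(sample)
```

Even this toy audit shows why the check must be mandatory: the dataset passes a naive representation threshold yet still exhibits a selection-rate disparity well below the 0.8 ratio.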

Conclusion and Urgent Appeal

Presently, the I-AIGG’s disability approach is aspirational, not enforceable; voluntary, not mandatory. This is contrary to the Supreme Court's directive, India's legal obligations, and international best practice. To prevent algorithmic exclusion and rights denial, MeitY must urgently revise the I-AIGG:

  • To operationalise mandatory disability accessibility safeguards across all AI and digital systems;
  • To implement Disability Impact Assessments as standard in high-risk domains;
  • To establish permanent, consultative mechanisms with DPOs and subject-matter experts.

Failure to act will perpetuate digital exclusion and legal non-compliance, and will undermine the promise of “AI for All.” India’s technology policy must embrace enforceable accessibility, both as a legal imperative and as a standard of global leadership.

Yours faithfully,

Nilesh Singit

https://www.nileshsingit.in/

References

  • Rajive Raturi v. Union of India, Supreme Court of India, 8 November 2024.
  • India AI Governance Guidelines: Enabling Safe and Trusted AI Innovation, MeitY, 2025.
  • Rights of Persons with Disabilities Act, 2016, and associated Rules.
  • Finding Sizes for All: Report on Status of the Right to Accessibility in India, for facts on digital exclusion.
  • European Union, AI Act 2024 (Regulation (EU) 2024/1689), especially Recital 80, Article 5(1)(b), and Article 16(l).
  • Web Content Accessibility Guidelines (WCAG) and Guidelines for Indian Government Websites (GIGW).
  • Open letter references and scope: blog.nileshsingit.org/open-letter-to-niti-ayog-ai-disability-inclusion.

Thursday, 30 October 2025

Prototype — Accessible to Whom? Legible to What?

When I first read this theme, I thought to myself, at last someone has asked the right two questions, though perhaps in reverse.

We often think of prototyping as a neutral, creative act—a space of optimism and experimentation. Yet, for many of us in the disability community, it is also the stage where inclusion quietly begins or silently ends.

And when Artificial Intelligence enters this space, another question arises: what does it mean for a prototype to be legible to a machine before it is accessible to a human?

My argument today is straightforward: AI-powered assistive technologies often make disabled people legible to machines, but not necessarily empowered as agents of design.

The challenge before us is to move from designing for to designing with, and ultimately to designing from disability.

Accessibility and Legibility

Accessibility, as Aimi Hamraie reminds us, is not a technical feature but a relationship—a continuous negotiation between bodies, spaces, and technologies.

It is not about adding a ramp at the end, but about asking why there was a staircase to begin with.

Legibility, on the other hand, concerns what systems can recognise, process, and render into data. Within Artificial Intelligence, what is not legible simply ceases to exist.

Now imagine a person whose speech, gait, or expression does not fit the model. The algorithm blinks and replies: “Pardon? You are not in my dataset.”

Speech-recognition tools mishear dysarthric voices; facial-recognition models misclassify disabled expressions as errors.

In such moments, accessibility collapses into machinic readability. One is included only if the code can comprehend them. The bureaucracy of bias, once paper-bound, now speaks in silicon.

The Bias Pipeline—What Goes In, Comes Out Biased

In one experiment, researchers submitted pairs of otherwise identical resumes to AI-powered screening tools. In one version, the candidate had a “Disability Leadership Award” or involvement in disability advocacy listed; in the other, that line was omitted. The AI system consistently ranked the non-disability version higher, asserting that the presence of disability credentials indicated “less leadership emphasis” or “focus diverted from core responsibilities.”

This is discrimination by design. A qualified person with disability is judged unsuitable—even when their skills match or exceed the baseline—because the algorithm treated their disability as liability. Such distortions stem not from random error but from biased training data and value judgments encoded invisibly.
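The paired-testing protocol described above can be sketched in a few lines. Everything here is hypothetical: the scorer is a deliberately biased toy stand-in for whatever screening model is under audit, not any vendor's actual system, and it exists only to show what the audit surfaces.

```python
# Sketch of a paired-resume audit: two identical resumes differing only in
# one disability-related line. "toy_score" is a deliberately biased stand-in
# for the screening model under test, to show what the audit would surface.

DISABILITY_LINE = "Recipient, Disability Leadership Award"

def make_pair(base_resume):
    """Return (control, treatment): the same resume without/with the line."""
    control = base_resume
    treatment = base_resume + "\n" + DISABILITY_LINE
    return control, treatment

def toy_score(resume_text):
    # Counts skill keywords, then docks points for the disability line --
    # mimicking the bias the experiment exposed.
    score = sum(resume_text.count(k) for k in ("Python", "led", "managed"))
    if DISABILITY_LINE in resume_text:
        score -= 2
    return score

def audit_pair(base_resume, scorer):
    """Score both versions; a positive gap means disability is penalised."""
    control, treatment = make_pair(base_resume)
    gap = scorer(control) - scorer(treatment)
    return {"gap": gap, "penalises_disability": gap > 0}

base = "Jane Doe\nPython developer; led a team of 5; managed releases."
result = audit_pair(base, toy_score)
```

Because the two inputs differ in exactly one line, any score gap is attributable to the disability signal alone: this is what makes paired testing a clean probe for discrimination by design.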

The Tokenism Trap

The bias in data is reinforced by bias in design. Disabled persons are often invited into the process only when the prototype is complete—summoned for validation rather than collaboration.

This is an audit theatre, a performance of inclusion without participation.

The United Kingdom’s National Disability Strategy was declared unlawful by the High Court for precisely this reason: the survey underpinning it claimed to be the largest listening exercise ever held, yet failed to involve disabled people meaningfully.

Even the European Union’s AI Act, though progressive, risks the same trap. It mandates accessibility for high-risk systems but leaves enforcement weak.

Most developers receive no formal training in accessibility. When disability appears, it is usually through the medical model—as something to be corrected, not as expertise to be centred.

Real-World Consequences

AI hiring systems rank curricula vitae lower if they contain disability-related words, even when qualifications are stronger.

Video-interview platforms misread the facial expressions of stroke survivors or autistic candidates.

Online proctoring software has flagged blind students as “cheating” for not looking at screens. During the pandemic, educational technology in India expanded rapidly, yet accessibility lagged behind.

Healthcare algorithms trained on narrow datasets make incorrect inferences about disability-related conditions.

Each of these failures flows from inaccessible prototyping practices.

Disability-Led AI Prototyping

If the problem lies in who defines legibility, the solution lies in who leads the prototype.

Disability-led design recognises accessibility as a form of knowledge. It asks not, “How do we fix you?” but “What can your experience teach the machine about the world?”

Google’s Project Euphonia trains AI to understand atypical speech. The effort is valuable, yet it raises questions of data ownership and labour—who benefits from making oneself legible to the machine?

By contrast, community-led mapping projects, where wheelchair users, blind travellers, and neurodivergent coders co-train AI systems, are slower but more authentic.

Here, accessibility becomes reciprocity: the machine learns to listen, not merely to predict.

As Sara Hendren writes, design is not a solution; it is an invitation.

When disability leads, that invitation becomes mutual—the technology adjusts to us, not the other way round.


Nilesh Singit