Tuesday, 18 November 2025

How Algorithmic Bias Shapes Accessibility: What Disabled People Ought to Know

Artificial intelligence (AI) is often described as a tool that will make life easier, faster and more efficient. Yet for many disabled people, AI brings both promise and risk. When algorithms are trained on limited or biased data, or when designers fail to consider diverse disabled experiences, these systems may quietly reproduce old forms of exclusion in new digital forms. Accessibility, therefore, is not simply a technical feature but a matter of rights, dignity and equal participation.

Under the UN Convention on the Rights of Persons with Disabilities (UNCRPD), States Parties shall ensure accessible information, communications and technologies (ICTs). The Rights of Persons with Disabilities Act, 2016 carries this obligation into Indian law. Meanwhile, the European Union’s AI Act offers a regulatory model that treats disability bias as a serious risk requiring oversight. Bringing these frameworks together helps us understand why disabled persons ought to be vigilant about the role AI plays in everyday life.

How algorithmic bias affects accessibility

Algorithmic bias occurs when an AI system consistently produces outcomes that disadvantage a particular group. In the disability context, this may happen when data lacks representation of disabled people, or when models assume “typical” bodies, voices or behaviour. Such bias affects accessibility in very practical ways.

Speech-recognition tools may fail to understand persons with atypical speech. Facial-recognition systems may misclassify persons with facial differences. Recruitment algorithms may penalise gaps in employment history or interpret assistive-technology use as “unusual”. Navigation apps may not consider wheelchair-friendly routes because the training data assumes a walking user. Each of these failures reduces accessibility and reinforces the barriers the UNCRPD seeks to dismantle.

Crucially, these problems are rarely intentional. AI systems do not “decide” to discriminate; rather, they reflect the gaps, stereotypes and exclusions already embedded in data and design. This makes bias more difficult to detect, but no less harmful.
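
In concrete terms, bias of this kind is usually surfaced by comparing error rates across user groups. The Python sketch below is illustrative only: every group label, transcript and model output is invented, and it simply computes word error rate (WER) for a hypothetical speech-recognition system to show the disparity between typical and atypical speech.

  # A minimal, illustrative bias check: compare word error rates (WER)
  # across speaker groups. All data below is invented for the example.
  from collections import defaultdict

  def word_error_rate(reference: str, hypothesis: str) -> float:
      """Word-level edit distance divided by reference length."""
      ref, hyp = reference.split(), hypothesis.split()
      d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
      for i in range(len(ref) + 1):
          d[i][0] = i
      for j in range(len(hyp) + 1):
          d[0][j] = j
      for i in range(1, len(ref) + 1):
          for j in range(1, len(hyp) + 1):
              cost = 0 if ref[i - 1] == hyp[j - 1] else 1
              d[i][j] = min(d[i - 1][j] + 1,          # deletion
                            d[i][j - 1] + 1,          # insertion
                            d[i - 1][j - 1] + cost)   # substitution
      return d[len(ref)][len(hyp)] / max(len(ref), 1)

  # Hypothetical records: (speaker group, reference transcript, model output).
  records = [
      ("typical_speech", "turn on the lights", "turn on the lights"),
      ("typical_speech", "call my doctor", "call my doctor"),
      ("atypical_speech", "turn on the lights", "turn on the flights"),
      ("atypical_speech", "call my doctor", "fall my locker"),
  ]

  by_group = defaultdict(list)
  for group, ref, hyp in records:
      by_group[group].append(word_error_rate(ref, hyp))

  for group, rates in by_group.items():
      print(f"{group}: mean WER = {sum(rates) / len(rates):.2f}")
  # A persistent gap between groups is exactly the disparity described above.

Real evaluations would use far larger samples, but the principle is the same: if one group’s error rate is systematically higher, the system is less accessible to that group.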

Why this matters for disabled people

Accessibility is not a favour. It is a right grounded in the principles of equality, non-discrimination and full participation. When AI systems shape access to employment, education, public services or communication, biased outcomes can have life-changing consequences.

For disabled people in India, the impact may be even greater. Digital public systems such as Aadhaar-linked services, online recruitment platforms, telemedicine and e-governance tools increasingly rely on automated processes. If these systems are inaccessible or biased, disabled persons may be excluded from essential services by design.

The UN Special Rapporteur on the rights of persons with disabilities has warned that AI can deepen inequality if disabled persons are not part of design, testing and oversight. Disability rights organisations must therefore engage proactively with AI governance, insisting on meaningful participation and accountability.

What rights and safeguards exist

The UNCRPD provides a clear rights-based framework: States shall ensure accessibility of ICTs, prohibit discrimination and guarantee equal opportunity. The RPwD Act mirrors these obligations within India. While neither document was written specifically with AI in mind, their principles apply directly to automated systems that determine or mediate access.

The EU AI Act, although external to India, demonstrates how regulation can address disability bias explicitly. It prohibits AI systems that exploit vulnerability due to disability and classifies several disability-related systems as high-risk, subject to strict obligations. Importantly, it permits the use of disability-related data for the purpose of detecting and mitigating bias, provided strong safeguards are in place.

Taken together, these instruments show that accessible AI is not merely a technical ideal; it is a regulatory and human-rights requirement.

What disabled persons and advocates ought to do

Disabled users and organisations should insist on the following:

1. Inclusive and representative data
Developers must ensure that disabled persons are represented in training datasets. Without such inclusion, AI systems will continue to misrecognise disabled bodies, voices and patterns of interaction (a minimal representation-audit sketch follows this list).

2. Accessibility-by-design
Accessibility must be built from the outset, not added as an afterthought. This includes compatibility with assistive technologies, multiple input modes and recognition of diverse communication styles.

3. Transparency and oversight
System owners must explain how AI models work, what data they use and how they address disability bias. Automated decisions affecting rights or access ought to have a human review mechanism.

4. Participation of disabled people
Persons with disabilities must participate directly in design, testing and policy-making processes. Without lived experience informing design, accessibility will remain superficial.

5. Accountability and redress
When AI systems harm or exclude disabled users, there must be clear pathways for complaint, rectification and accountability. Disability rights bodies in India should integrate AI-related harms into their oversight.
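
On point 1 above, a representation audit is one concrete first step. The sketch below is minimal and illustrative, assuming a hypothetical dataset whose records carry a consented, self-reported disability field; the field name, group labels and five-per-cent flagging threshold are assumptions for the example, not recommended values.

  # Minimal, illustrative representation audit over a hypothetical dataset.
  from collections import Counter

  def representation_audit(records, field="disability_status", floor=0.05):
      """Return each group's share of the dataset, flagging shares below `floor`."""
      counts = Counter(r.get(field, "unreported") for r in records)
      total = sum(counts.values())
      return {group: (n / total, n / total < floor) for group, n in counts.items()}

  # Invented records only; real audits need consented, representative data.
  dataset = ([{"disability_status": "none"}] * 95
             + [{"disability_status": "speech disability"}] * 3
             + [{"disability_status": "visual impairment"}] * 2)

  for group, (share, flagged) in representation_audit(dataset).items():
      print(f"{group}: {share:.1%}" + ("  <- under-represented" if flagged else ""))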

Moving towards accessible and fair AI

AI can expand accessibility when designed with care: speech-to-text tools, captioning systems, image-to-speech applications and digital navigation aids all hold transformative potential. However, potential alone is insufficient. Without deliberate attention to disability rights, AI may reinforce the very inequalities it claims to solve.

India stands at a critical point. With rapid digitisation and a strong disability-rights framework, it has the opportunity to lead in disability-inclusive AI. Policymakers, designers, researchers and civil-society actors must ensure that systems deployed in public and private sectors respect accessibility, transparency and fairness.

AI must not decide the terms of accessibility; human judgement, accountability and rights-based governance must guide its development.

Click here to read the longer article


Nilesh Singit

Thursday, 13 November 2025

An Open Letter to the Ministry of Electronics and Information Technology: A Critique of the India AI Governance Guidelines on the Omission of Mandatory Disability and Digital Accessibility Rules

To:

The Secretary, Ministry of Electronics and Information Technology (MeitY)
Government of India, New Delhi
Email: secretary[at]meity[dot]gov[dot]in

I. Preamble: The Mandate for Accessible and Inclusive AI

The recently issued India AI Governance Guidelines (I-AIGG) assert a vision of “AI for All” [Click here to view document] and commit India to inclusive technology, the optimisation of social goods, and the avoidance of discrimination. However, the guidelines fail to operationalise mandatory and enforceable disability and digital accessibility rules – a legal and ethical lapse that undermines both national and international obligations. As a professional engaged in technology policy and disability rights, and in light of the Supreme Court’s Rajive Raturi v. Union of India (2024) judgment, this letter outlines why voluntary commitments are insufficient and why robust, mandatory accessibility standards are immediately warranted.

II. The Policy Paradox: Aspirational Promises versus Legal Obligations

The I-AIGG framework advances “voluntary” compliance, elevates inclusive rhetoric, and references “marginalised communities” in its principles. However, it neither defines “Persons with Disabilities” (PwDs) nor mandates conformance with domestic accessibility rules, as legally required by the Rights of Persons with Disabilities Act, 2016 (RPwD Act). This introduces a regulatory gap: aspirational principles supplant the non-negotiable legal floor guaranteed to persons with disabilities. Such dilution is legally unsustainable given India’s obligations under the UNCRPD and under Sections 40, 44, 45, 46, and 89 of the RPwD Act.

III. The Rajive Raturi Judgment: Reinforcing Mandatory Compliance

The Supreme Court’s decision in Rajive Raturi (2024) unambiguously directed the Union Government to move from discretionary, guideline-based approaches to compulsory standards for accessibility across physical, informational, and digital domains. The Court found that reliance on non-binding guidelines and sectoral discretion violated statutory mandates, and it instructed the creation of enforceable, uniform, and standardised rules developed in consultation with persons with disabilities and stakeholders.

This is particularly relevant to digital and AI governance, where exclusion can be algorithmic, structural, and scaled, denying access to education, employment, health, and social participation. The judgment rejects the adequacy of sectoral or voluntary approaches – digital accessibility is a fundamental right, and non-compliance amounts to a denial of rights for PwDs in India.

IV. The EU Benchmark: Legal Mandates, Not Discretion

The European Union’s AI Act (Regulation (EU) 2024/1689), together with its accessibility legislation such as the European Accessibility Act (Directive (EU) 2019/882), establishes mandatory, rights-based compliance for digital accessibility. The AI Act:

  • Explicitly enforces accessibility as a legal obligation, not a voluntary commitment, anchored in the UNCRPD and Universal Design principles.
  • Mandates that all high-risk AI systems comply with technical accessibility standards by design, with legal penalties for non-compliance.
  • Classifies systems impacting education, employment, healthcare, and public services as high-risk, subjecting them to strict regulatory scrutiny.
  • Prohibits any AI deployment that exploits or discriminates against persons with disabilities, addressing historical and algorithmic bias at source.

Thus, the EU approach demonstrates enforceable protection for PwDs, with stakeholder consultation, technical linkage to sectoral accessibility standards, and mechanisms for remediation and complaint.

V. Critique of I-AIGG: Core Deficiencies and Recommendations

  1. Absence of Disability-Specific Provisions:
    The term “marginalised communities” is insufficiently specific. India’s legal framework demands explicit protection for PwDs, including reasonable accommodation, accessible formats (such as EPUB and OCR-readable PDF), and compliance with domestic standards (GIGW and the Harmonised Guidelines, 2021).

  2. No Accessibility-By-Design Mandate for AI:
    While the I-AIGG insists on “Understandability by Design,” it fails to require “Accessibility by Design.” Systems that are explainable but not operable by PwDs remain discriminatory.

  3. Inadequate Response to Algorithmic Bias:
    AI bias mitigation in the I-AIGG does not extend to underrepresented disability data or to systemic exclusion caused by inaccessible training sets. The EU model, by contrast, mandates active audit and correction for disability-related data bias.

  4. Weak Grievance Redressal Mechanisms:
    Voluntary or generic redress measures neglect the diversity of disability and the necessity for robust, accessible remedies in every sector where AI is used.

  5. Non-compliance with Judicial Mandate:
    Above all, the approach bypasses the Supreme Court’s explicit instructions to operationalise compulsory rules – an omission that is both ultra vires and constitutionally indefensible.

VI. Policy Prescription: Steps Toward Compliance

  • Draft and Notify Mandatory AI Digital Accessibility Standards:
    MeitY must codify and enforce AI digital accessibility standards as binding, not optional, rules. These must reference existing Indian standards (GIGW/HG21), adopt international best practices (WCAG), and be technology-agnostic.

  • Classify High-Risk AI Systems with Disability Lens:
    Mandate Disability Impact Assessments, mirroring the EU approach, for all AI systems deployed in health, education, employment, and public services.

  • Institutionalise Disability Rights Expertise:
    Add disability rights experts and diverse PwD representatives to the AI Governance Group and the Technology Policy Expert Committee, to ensure continued compliance monitoring and gap correction.

  • Mandate Dataset Audits and Privacy Protections:
    Require dataset bias audits for disability, establish anonymisation protocols for disability-related data, and ensure representation of disabled persons in AI datasets (a minimal anonymisation sketch follows this list).

  • Create Enforceable, Accessible Grievance Redress Channels:
    Grievance and remedy processes must be designed to be operable by persons across all 21 disability categories recognised under the RPwD Act, in multiple formats and languages, with offline options for digitally marginalised users.
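
On the dataset-audit and privacy prescription above, one way to reconcile bias auditing with anonymisation is to pseudonymise direct identifiers before audit records leave the source system. The following Python sketch is an illustration under stated assumptions (the field names and the salted-hash scheme are invented for the example), not a mandated protocol.

  # Illustrative pseudonymisation for disability-related audit records.
  import hashlib
  import os

  SALT = os.urandom(16)  # kept secret and rotated; never stored with the data

  def pseudonymise(identifier: str) -> str:
      """Replace a direct identifier with a salted one-way hash."""
      return hashlib.sha256(SALT + identifier.encode("utf-8")).hexdigest()[:12]

  def prepare_audit_record(record: dict) -> dict:
      """Keep only what the bias audit needs, with identifiers hashed."""
      return {
          "subject": pseudonymise(record["applicant_name"]),  # hypothetical field
          "disability_status": record["disability_status"],   # audited attribute
          "outcome": record["outcome"],                        # e.g. shortlisted/rejected
      }

  raw = {"applicant_name": "A. Applicant",
         "disability_status": "locomotor disability",
         "outcome": "rejected"}  # invented record
  print(prepare_audit_record(raw))  # no direct identifier survives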

VII. Conclusion and Urgent Appeal

Presently, the I-AIGG’s disability approach is aspirational, not enforceable; voluntary, not mandatory. This is contrary to the Supreme Court's directive, India's legal obligations, and international best practice. To prevent algorithmic exclusion and rights denial, MeitY must urgently revise the I-AIGG:

  • To operationalise mandatory disability accessibility safeguards across all AI and digital systems;
  • To implement Disability Impact Assessments as standard in high-risk domains; and
  • To establish permanent, consultative mechanisms with DPOs and subject-matter experts.

Failure to act will perpetuate digital exclusion and legal non-compliance, and will undermine the promise of “AI for All.” India’s technology policy must embrace enforceable accessibility, both as a legal imperative and as a standard of global leadership.

Yours faithfully,
Nilesh Singit


References

  • Rajive Raturi v. Union of India, Supreme Court of India, 8 November 2024.
  • India AI Governance Guidelines: Enabling Safe and Trusted AI Innovation, MeitY, 2025.
  • Rights of Persons with Disabilities Act, 2016, and associated Rules.
  • Finding Sizes for All: Report on Status of the Right to Accessibility in India (cited for facts on digital exclusion).
  • European Union, AI Act 2024 (Regulation (EU) 2024/1689), especially Recital 80, Article 5(1)(b), Article 16(l).
  • Web Content Accessibility Guidelines (WCAG) and Guidelines for Indian Government Websites (GIGW).
  • Open letter references and scope: blog.nileshsingit.org/open-letter-to-niti-ayog-ai-disability-inclusion.