Wednesday, 24 December 2025

A Rejoinder to "The Upskilling Gap" — The Invisible Intersection of Gender, AI, and Disability

 I have written a short rejoinder to @the_hindu’s article on women, AI, and the upskilling gap, to reflect on a question that often remains unaddressed: where do women with disabilities sit in this conversation?

As India debates skills, productivity, and the future of work, it may be worth pausing to examine how time, access, and design operate very differently for disabled women in an AI-mediated economy.

Read the full rejoinder here: https://hosturl.link/2wnYLY

Tuesday, 18 November 2025

How Algorithmic Bias Shapes Accessibility: What Disabled People Ought to Know

Artificial intelligence (AI) is often described as a tool that will make life easier, faster and more efficient. Yet for many disabled people, AI brings both promise and risk. When algorithms are trained on limited or biased data, or when designers fail to consider diverse disabled experiences, these systems may quietly reproduce old forms of exclusion in new digital forms. Accessibility, therefore, is not simply a technical feature but a matter of rights, dignity and equal participation.

Under the UN Convention on the Rights of Persons with Disabilities (UNCRPD), States Parties shall ensure accessible information, communication and technologies. The Rights of Persons with Disabilities Act, 2016 carries this obligation into Indian law. Meanwhile, the European Union’s AI Act offers a regulatory model that treats disability bias as a serious risk requiring oversight. Bringing these frameworks together helps us understand why disabled persons ought to be vigilant about the role AI plays in everyday life.

How algorithmic bias affects accessibility

Algorithmic bias occurs when an AI system consistently produces outcomes that disadvantage a particular group. In the disability context, this may happen when data lacks representation of disabled people, or when models assume “typical” bodies, voices or behaviour. Such bias affects accessibility in very practical ways.

Speech-recognition tools may fail to understand persons with atypical speech. Facial-recognition systems may misclassify persons with facial differences. Recruitment algorithms may penalise gaps in employment history or interpret assistive-technology use as “unusual”. Navigation apps may not consider wheelchair-friendly routes because the training data assumes a walking user. Each of these failures reduces accessibility and reinforces the barriers the UNCRPD seeks to dismantle. Crucially, these problems are rarely intentional. AI systems do not “decide” to discriminate; rather, they reflect the gaps, stereotypes and exclusions already embedded in data and design. This makes bias more difficult to detect, but no less harmful.
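
For readers who build or audit such systems, one concrete way to surface this failure mode is to disaggregate evaluation results by group rather than report a single headline number. The Python sketch below is a minimal illustration only: the data, the group labels and the idea of a speech-recognition benchmark are hypothetical, not drawn from any specific system.

```python
from collections import defaultdict

# Each record pairs a model outcome with the speaker group it came from.
# (True = the system understood the utterance; all data here is hypothetical.)
evaluations = [
    (True, "typical speech"), (True, "typical speech"),
    (True, "typical speech"), (False, "typical speech"),
    (True, "atypical speech"), (False, "atypical speech"),
    (False, "atypical speech"),
]

totals, correct = defaultdict(int), defaultdict(int)
for ok, group in evaluations:
    totals[group] += 1
    correct[group] += ok  # bool counts as 0 or 1

overall = sum(correct.values()) / sum(totals.values())
print(f"Overall accuracy: {overall:.0%}")          # 57% -- the aggregate hides the gap
for group, n in totals.items():
    print(f"  {group}: {correct[group] / n:.0%}")  # 75% vs 33%
```

A single aggregate score can look respectable while one group experiences a far higher failure rate; disaggregation is what makes the exclusion visible at all.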

Why this matters for disabled people

Accessibility is not a favour. It is a right grounded in the principles of equality, non-discrimination and full participation. When AI systems shape access to employment, education, public services or communication, biased outcomes can have life-changing consequences. For disabled people in India, the impact may be even greater. Digital public systems such as Aadhaar-linked services, online recruitment platforms, telemedicine and e-governance tools increasingly rely on automated processes. If these systems are inaccessible or biased, disabled persons may be excluded from essential services by design.

The UN Special Rapporteur on the rights of persons with disabilities has warned that AI can deepen inequality if disabled persons are not part of design, testing and oversight. Disability rights organisations must therefore engage proactively with AI governance, insisting on meaningful participation and accountability.

What rights and safeguards exist

The UNCRPD provides a clear rights-based framework: States shall ensure accessibility of ICTs, prohibit discrimination and guarantee equal opportunity. The RPwD Act mirrors these obligations within India. While neither document was written specifically with AI in mind, their principles apply directly to automated systems that determine or mediate access.

The EU AI Act, although external to India, demonstrates how regulation can address disability bias explicitly. It prohibits AI systems that exploit vulnerability due to disability and classifies several disability-related systems as high-risk, subject to strict obligations. Importantly, it permits the use of disability-related data for the purpose of detecting and mitigating bias, provided strong safeguards are in place. Taken together, these instruments show that accessible AI is not merely a technical ideal; it is a regulatory and human-rights requirement.

What disabled persons and advocates ought to do

Disabled users and organisations must insist on the following:

  1. Inclusive and representative data: Developers must ensure that disabled persons are represented in training datasets. Without such inclusion, AI systems will continue to misrecognise disabled bodies, voices and patterns of interaction.
  2. Accessibility-by-design: Accessibility must be built from the outset, not added as an afterthought. This includes compatibility with assistive technologies, multiple input modes and recognition of diverse communication styles.
  3. Transparency and oversight: System owners must explain how AI models work, what data they use and how they address disability bias. Automated decisions affecting rights or access ought to have a human review mechanism (a minimal sketch of such a gate follows this list).
  4. Participation of disabled people: Persons with disabilities must participate directly in design, testing and policy-making processes. Without lived experience informing design, accessibility will remain superficial.
  5. Accountability and redress: When AI systems harm or exclude disabled users, there must be clear pathways for complaint, rectification and accountability. Disability rights bodies in India must integrate AI harms into their oversight.
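
To make the third point concrete, here is a minimal sketch of what a human-review gate might look like in code. Everything in it (the Decision fields, the 0.9 confidence threshold, the routing rules) is a hypothetical illustration, not a prescription; real systems would need policies set with disabled users themselves.

```python
from dataclasses import dataclass

@dataclass
class Decision:
    applicant_id: str
    approved: bool
    model_confidence: float   # 0.0 to 1.0
    uses_assistive_tech: bool

def route(d: Decision) -> str:
    # Never allow an automated rejection to stand on its own, and always
    # escalate cases involving assistive-technology signals, since these
    # are exactly where training data tends to be thinnest.
    if (not d.approved) or d.model_confidence < 0.9 or d.uses_assistive_tech:
        return "human review"
    return "auto-approve"

print(route(Decision("A-101", approved=False, model_confidence=0.95,
                     uses_assistive_tech=False)))   # -> human review
print(route(Decision("A-102", approved=True, model_confidence=0.97,
                     uses_assistive_tech=True)))    # -> human review
```

The design choice matters more than the code: rejection is never fully automated, and the cases most likely to be misread by the model are the ones guaranteed a human decision.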

Moving towards accessible and fair AI

AI can expand accessibility when designed with care: speech-to-text tools, captioning systems, image-to-speech applications and digital navigation aids all hold transformative potential. However, potential alone is insufficient. Without deliberate attention to disability rights, AI may reinforce the very inequalities it claims to solve.

India stands at a critical point. With rapid digitisation and a strong disability-rights framework, it has the opportunity to lead in disability-inclusive AI. Policymakers, designers, researchers and civil-society actors must ensure that systems deployed in public and private sectors respect accessibility, transparency and fairness. AI must not decide the terms of accessibility; human judgement, accountability and rights-based governance must guide its development.


Nilesh Singit

Thursday, 13 November 2025

An Open Letter to the Ministry of Electronics and Information Technology: A Critique of the India AI Governance Guidelines on the Omission of Mandatory Disability and Digital Accessibility Rules

 To:

The Secretary, Ministry of Electronics and Information Technology (MeitY)
Government of India, New Delhi
Email: secretary[at]meity[dot]gov[dot]in

Preamble: The Mandate for Accessible and Inclusive AI

The recently issued India AI Governance Guidelines (I-AIGG) assert a vision of “AI for All” and commit India to inclusive technology, social goods optimisation, and the avoidance of discrimination. However, the guidelines have failed to operationalise mandatory and enforceable disability and digital accessibility rules – a legal and ethical lapse that undermines both national and international obligations. As a professional engaged in technology policy and disability rights, and in light of the Supreme Court's Rajive Raturi v. Union of India (2024) judgment, this letter outlines why voluntary commitments are insufficient and why robust, mandatory accessibility standards are immediately warranted.

The Policy Paradox: Aspirational Promises versus Legal Obligations

The I-AIGG framework advances “voluntary” compliance, elevates inclusive rhetoric, and references “marginalised communities” in its principles. However, it neither defines “Persons with Disabilities” (PwDs) nor mandates conformance with domestic accessibility rules, as legally required by the Rights of Persons with Disabilities Act, 2016 (RPwD Act). This introduces a regulatory gap: aspirational principles supplant the non-negotiable legal floor guaranteed to persons with disabilities. Such dilution is legally unsustainable given India’s obligations under the UNCRPD and under Sections 40, 44, 45, 46, and 89 of the RPwD Act.

The Rajive Raturi Judgment: Reinforcing Mandatory Compliance

The Supreme Court’s decision in Rajive Raturi (2024) unambiguously directed the Union Government to move from discretionary, guideline-based approaches to compulsory standards for accessibility across physical, informational, and digital domains. The Court found that reliance on non-binding guidelines and sectoral discretion violated statutory mandates, and it instructed the creation of enforceable, uniform, and standardised rules developed in consultation with persons with disabilities and stakeholders.

This is particularly relevant to digital and AI governance, where exclusion can be algorithmic, structural, and scaled, denying access to education, employment, health, and social participation. The judgment refutes the adequacy of sectoral or voluntary approaches – digital accessibility is a fundamental right and non-compliance amounts to denial of rights for PwDs in India.

The EU Benchmark: Legal Mandates, Not Discretion

The European Union’s AI Act (Regulation (EU) 2024/1689) and its general accessibility directives establish mandatory, rights-based compliance for digital accessibility. The EU Act:

  • Explicitly enforces accessibility as a legal obligation, not a voluntary commitment, anchored in the UNCRPD and Universal Design principles.
  • Mandates that all high-risk AI systems comply with technical accessibility standards by design, with legal penalties for non-compliance.
  • Classifies systems impacting education, employment, healthcare, and public services as high-risk, subjecting them to strict regulatory scrutiny.
  • Prohibits any AI deployment that exploits or discriminates against persons with disabilities, addressing historical and algorithmic bias at source.

Thus, the EU approach demonstrates enforceable protection for PwDs, with stakeholder consultation, technical linkage to sectoral accessibility standards, and mechanisms for remediation and complaint.

Critique of I-AIGG: Core Deficiencies and Recommendations

  1. Absence of Disability-Specific Provisions: The term “marginalised communities” is insufficiently specific. India’s legal framework demands explicit protection for PwDs, including reasonable accommodation, accessible formats (such as ePUB and OCR-based PDF), and compliance with domestic standards (GIGW, Harmonised Guidelines 2021).
  2. No Accessibility-by-Design Mandate for AI: While the I-AIGG insists on “Understandability by Design,” it fails to require “Accessibility by Design.” Systems that are explainable but not operable by PwDs remain discriminatory.
  3. Inadequate Response to Algorithmic Bias: AI bias mitigation in the I-AIGG does not extend to underrepresented disability data or to systemic exclusion caused by inaccessible training sets. The EU model, by contrast, mandates active audit and correction of disability-related data bias.
  4. Weak Grievance Redressal Mechanisms: Voluntary or generic redress measures neglect the diversity of disability and the necessity for robust, accessible remedies in every sector where AI is used.
  5. Non-Compliance with the Judicial Mandate: Above all, the approach bypasses the Supreme Court’s explicit instructions to operationalise compulsory rules – an omission that is both ultra vires and constitutionally indefensible.

Policy Prescription: Steps Toward Compliance

  • Draft and Notify Mandatory AI Digital Accessibility Standards: MeitY must codify and enforce AI digital accessibility standards as binding, not optional, rules. These must reference existing Indian standards (GIGW/HG21), adopt international best practices (WCAG), and be technology-agnostic.
  • Classify High-Risk AI Systems with a Disability Lens: Mandate Disability Impact Assessments, mirroring the EU approach, for all AI systems deployed in health, education, employment, and public services.
  • Institutionalise Disability Rights Expertise: Add disability rights experts and diverse PwD representatives to the AI Governance Group and the Technology Policy Expert Committee, to ensure continued compliance monitoring and gap correction.
  • Mandate Dataset Audits and Privacy Protections: Require dataset bias audits for disability, establish anonymisation protocols for disability-related data, and ensure representation in AI datasets.
  • Create Enforceable, Accessible Grievance Redress Channels: Grievance and remedy processes must be designed for operability by all 21 disability categories, in multiple formats and languages, with offline options for digitally marginalised users.

Conclusion and Urgent Appeal

Presently, the I-AIGG’s disability approach is aspirational, not enforceable; voluntary, not mandatory. This is contrary to the Supreme Court's directive, India's legal obligations, and international best practice. To prevent algorithmic exclusion and rights denial, MeitY must urgently revise the I-AIGG:

  • To operationalise mandatory disability accessibility safeguards across all AI and digital systems;
  • To implement Disability Impact Assessments as standard in high-risk domains;
  • To establish permanent, consultative mechanisms with DPOs and subject-matter experts.

Failure to act will perpetuate digital exclusion and legal non-compliance, and will undermine the promise of “AI for All.” India’s technology policy must embrace enforceable accessibility, both as a legal imperative and as a standard of global leadership.

Yours faithfully,

Nilesh Singit

https://www.nileshsingit.in/

References

  • Rajive Raturi v. Union of India, Supreme Court of India, 8 November 2024.
  • India AI Governance Guidelines: Enabling Safe and Trusted AI Innovation, MeitY, 2025.
  • Rights of Persons with Disabilities Act, 2016, and associated Rules.
  • Finding Sizes for All: Report on Status of the Right to Accessibility in India (cited for facts on digital exclusion).
  • European Union, Artificial Intelligence Act (Regulation (EU) 2024/1689), especially Recital 80, Article 5(1)(b), and Article 16(l).
  • Web Content Accessibility Guidelines (WCAG) and Guidelines for Indian Government Websites (GIGW).
  • Open letter references and scope: blog.nileshsingit.org/open-letter-to-niti-ayog-ai-disability-inclusion.

Thursday, 6 November 2025

Open Letter to NITI Aayog: Urgent Need for a Disability-Inclusive AI Governance Framework in Light of Disability Rights

NITI Aayog
Government of India
New Delhi

Subject: Response to Times of India Article on AI Regulation – Disability Inclusion Cannot Be Left to Existing Laws Alone

Sir,

This letter is in response to the Times of India article dated 6 November 2025 titled “Don’t need separate law for AI: Panel”, which reports the conclusion of the high-level committee that existing sectoral laws are sufficient to govern artificial intelligence in India. With due respect, this position overlooks the disproportionate and often irreversible harms that AI systems are already inflicting on 27.4 million Indians with disabilities. The panel’s stance—that “existing rules can address the majority of risks” at this stage—ignores ground realities.

India’s legal and regulatory approach to accessibility has historically suffered from weak enforcement and voluntary compliance. The Supreme Court’s landmark judgment in Rajive Raturi v. Union of India (8 November 2024) must serve as a caution. The Court held that Rule 15 of the Rights of Persons with Disabilities Rules, 2017 created “a ceiling without a floor” by offering only aspirational guidelines rather than enforceable standards. It directed the Union Government to frame mandatory accessibility rules within three months in consultation with Disabled Persons’ Organisations (DPOs). The Court made it unequivocally clear that accessibility cannot be voluntary or discretionary.

If mandatory standards are now required even for physical infrastructure, it is untenable that AI systems—which increasingly mediate access to education, employment, healthcare, welfare and civic participation—remain governed by existing, non-specific laws. As the Court observed, accessibility is “a fundamental requirement for enabling individuals, particularly those with disabilities, to exercise their rights fully and equally.” This principle applies with even greater force to AI, whose decisions are automated, opaque and scaled.

Global best practice reinforces this. The European Union’s Artificial Intelligence Act (2024) embeds disability inclusion as a legal requirement.

  • Article 5(1)(b) prohibits AI systems that exploit disability-related vulnerabilities.
  • Article 16(l) mandates that all high-risk AI systems must comply with accessibility standards by design.
  • Developers are required to assess training data for disability bias and undertake Fundamental Rights Impact Assessments before deployment, backed by penalties.

None of these is an optional recommendation; they are binding law. India’s approach must be no less rights-based. The risks of relying on existing laws are already visible:

  • AI proctoring tools misinterpret screen-reader use or stimming as cheating, denying disabled students fair access to education.
  • Hiring algorithms replicate historic discrimination, filtering out disabled candidates due to biased data sets.
  • Healthcare chatbots and telemedicine platforms routinely exclude blind, deaf and neurodivergent persons, as well as persons with cognitive disabilities, because of inaccessibility.
  • Welfare fraud detection systems flag disabled persons as “anomalies” due to atypical income or healthcare patterns, increasing wrongful denial of benefits.
  • Facial recognition and biometric systems routinely fail to detect persons with disabilities, leading to denial of services, harassment or misidentification.

The NALSAR-CDS report “Finding Sizes for All” demonstrated how accessibility laws—without enforcement mechanisms—fail in practice. The same pattern will repeat with AI if we pretend that pre-existing laws are adequate. AI’s scale and opacity make post-facto redress ineffective; thousands will be harmed long before litigation can provide relief. 

Therefore, disability inclusion in AI cannot be left to goodwill, post-hoc complaints or fragmented sectoral laws. India needs a non-negotiable baseline of mandatory safeguards, even if a separate AI law is not enacted immediately. In this context, I urge NITI Aayog to:

  1. Notify mandatory accessibility and non-discrimination standards for all high-risk AI systems, especially in education, employment, healthcare, public services and welfare.
  2. Require Fundamental Rights Impact Assessments for all AI deployments by government and regulated entities.
  3. Mandate disability-bias testing in datasets and model outputs before deployment.
  4. Set up a permanent advisory mechanism with DPOs to co-create and monitor AI governance norms.
  5. Explicitly prohibit AI systems that exploit disability vulnerabilities or result in discriminatory exclusion.

Innovation must not come at the cost of constitutional rights. India’s commitment under the Rights of Persons with Disabilities Act, 2016 and the UNCRPD requires accessible and equitable technology. The Supreme Court has shown the way in Raturi—create a floor of enforceable standards and allow progressive enhancement thereafter. AI governance must adopt the same logic.

India cannot lead the world in AI while systematically excluding its disabled citizens. Building AI that is inclusive, accessible and non-discriminatory is not optional—it is a constitutional, legal and ethical obligation.

Yours sincerely,
Nilesh Singit


Friday, 31 October 2025

Human in the Loop, Bias in the Script: A Film Review

Human in the Loop, a recent film examining the uneasy partnership between Artificial Intelligence and humanity, arrives when the technological imagination oscillates between utopian optimism and existential dread. Marketing narratives portray AI as either our benevolent assistant or our imminent overlord. The film attempts to resist these binaries, grounding its narrative in the messy, often uncomfortable, space where human judgement and machine logic are forced to coexist. It is this space that disability communities know too well: the zone where systems intended to “assist” end up surveilling, disciplining, or excluding.

This review engages with the film through the arguments I have articulated in my work on prompt bias, accessibility, and disability-led design. If AI systems are only as ethical as the assumptions fed into them, then “the human in the loop” is not a safeguard by default. A biased human keeping watch over a biased machine produces not balance, but compounding prejudice. The film gestures towards this tension, though at times without the depth that disability perspectives could have added.

The Premise: Humans as Moral Guardrails

The core narrative premise is straightforward: an AI system designed to support decision-making in public services begins exhibiting troubling patterns. Authorities insist that the system will remain safe because a “human in the loop” retains final authority. The question the film explores is whether this safeguard is meaningful or merely bureaucratic window dressing.

The film wisely avoids technical jargon. Instead, it frames the issue through a series of real-world scenarios: automated hiring, predictive policing, healthcare triage, and social welfare determinations. In each, the human reviewer is presented as the ethical backstop. But the film repeatedly reveals how rushed, under-trained, and system-dependent these humans are. Their “oversight” is often symbolic, not substantive.

This aligns with disability critiques of assistive and decision-making technologies. If the system itself is designed around a logic of normalcy, hierarchy, and suspicion of difference, then a human operator merely rubber-stamps the discrimination. A safeguard which never safeguards must be called by its proper name: ritual.

A Loop that Learns the Wrong Lessons

One of the film’s strongest structural choices is its metaphor of the loop. We see not only “human in the loop” but “loop in the human”. Over time, the human reviewers begin adopting the AI’s rationale. Instead of questioning outputs, they internalise them. Confidence in machine judgement breeds complacency in human judgement.

A particularly sharp moment occurs when a social welfare officer rejects an application, citing the AI risk score. When challenged, she responds: “The system has seen thousands of cases. How can I overrule that?” In that instant, the audience witnesses the reversal of power. The human is no longer the check on the system; the system becomes the check on the human.

For disabled audiences, this dynamic is painfully familiar. Tools meant to empower often become tools of compliance. Speech-to-text systems that refuse to recognise dysarthric voices, proctoring software that flags blind students for “not looking at the screen”, hiring algorithms that treat disability as “lack of culture fit” — all operate with the same logic: non-legibility equals non-credibility.

The film hints at this, though it does not explicitly name disability. This omission is a missed opportunity, because disability provides the clearest lens to examine what happens when “the loop” is built upon default assumptions of normality.

The Myth of the Neutral Observer

The film’s central critique is that a human in the loop is only meaningful if that human has both empathy and agency. However, the film does not fully unpack how social bias contaminates the loop. It largely presents “the human” as a neutral figure rendered passive by technology. But no human enters oversight without prejudice; their judgement is shaped by culture, training, and power.

In my recent writing, I have argued that prompts reveal user bias and that AI tends to follow the user’s framing. The same principle applies here: if the human reviewer is biased, the loop becomes a bias amplifier. The film gestures towards this through a brief subplot involving a hiring panel. The human reviewer rejects a disabled candidate not because of the AI’s output, but because “the system must be correct”. The tragedy is that the system’s training data reflected decades of hiring discrimination. The reviewer trusts the machine because the machine echoes societal prejudice.

Here, the film could have benefited from a deeper exploration of disability-led prompting, reframing, and language etiquette. Without anchoring oversight in rights-based frameworks, human judgement merely masks bias with a polite face.

When Oversight Becomes Theatre

A recurring motif in the film is the performative nature of oversight. We see checklists, audit meetings, and compliance reports — all signalling responsible governance. Yet none of these prevents harm. The film’s critique of “audit theatre” resonates strongly with disability experience, where accessibility audits often occur after design is complete, and disabled persons are called in merely to validate decisions already made.

The phrase “human in the loop” functions similarly. It reassures the public that humans retain power, while hiding the fact that decision-making has already been ceded to algorithmic systems. The supervision is decorative.

This mirrors the “tokenism trap” in disability inclusion. When disabled persons are consulted at the tail end of design, their role becomes symbolic. Their presence legitimises inaccessible systems rather than transforming them. The film understands this dynamic intuitively, even if it does not explicitly borrow from disability discourse.

Moments of Ethical Clarity

Despite its gaps, the film contains several moments of thoughtful clarity:

  • A data scientist resigns after realising the oversight team is incentivised to approve system decisions, not scrutinise them.
  • A scene where a reviewer is told: “Your job is not to judge the system, but to justify it.”
  • A powerful montage showing how tech developers, under pressure to scale, see oversight as an obstacle, not a safeguard.

The film effectively illustrates that governance cannot rely on individual moral courage. Systems that reward speed, efficiency, and conformity will always erode slow, reflective, ethical judgement.

This is precisely why disability advocacy insists on structural safeguards, not individual discretion. Access cannot depend on the goodwill of one sympathetic officer. Rights must be enforceable at design, deployment, and review.

Where the Film Hesitates

While Human in the Loop is compelling, it hesitates in two important areas:

1. Lack of Specific Marginalised Lenses

The film takes a universalist tone — “AI harms us all” — which is true at a philosophical level but shallow at a lived level. Harm is not evenly distributed. Disabled persons, along with other marginalised communities, bear disproportionate risk from AI mis-judgement. The absence of disability voices weakens the film’s moral authority. Had it engaged with disabled lived experience, the critique of oversight would have gained both nuance and urgency.

2. Limited Exploration of Alternatives

The film critiques existing oversight but does not explore disability-led or community-led models of design and evaluation. Without offering a counter-imaginary, the narrative risks fatalism: that human oversight is doomed. In reality, oversight becomes meaningful when the overseers are those most impacted by harm. Not diversity on paper, but power-sharing in practice.

Relevance to Disability-Smart Prompting and Design

My recent work argues that prompting AI with disability-inclusive language can reduce prejudice in responses. The film unintentionally reinforces this principle: the question shapes the system. If oversight questions are bureaucratic — “Has the system followed protocol?” — the loop measures compliance. If oversight questions are justice-centred — “Has the system caused inequity, exclusion, or rights violations?” — the loop measures fairness.

Imagine if the oversight process in the film had disability-smart prompts such as:

  • “Does the system assume normative behaviour or body standards?”
  • “How would this decision affect a disabled applicant exercising their legal right to reasonable accommodation?”
  • “Have persons with disabilities been involved in evaluating model outcomes?”

Suddenly, “human in the loop” ceases to be ritual and becomes accountability.
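
As a small illustration of the difference between compliance checklists and justice-centred ones, the sketch below hard-wires the three questions above into a sign-off step that cannot be completed by leaving them blank. The workflow itself is hypothetical, not an existing tool; the point is structural: fairness questions become gating rather than decorative.

```python
# The three questions come from the list above; the sign-off workflow
# around them is a hypothetical sketch.
OVERSIGHT_PROMPTS = [
    "Does the system assume normative behaviour or body standards?",
    "How would this decision affect a disabled applicant exercising their "
    "legal right to reasonable accommodation?",
    "Have persons with disabilities been involved in evaluating model outcomes?",
]

def sign_off_permitted(answers: list[str]) -> bool:
    """Block sign-off unless every justice-centred question has a substantive answer."""
    if len(answers) != len(OVERSIGHT_PROMPTS):
        return False
    return all(answer.strip() for answer in answers)

# An empty or missing answer blocks the decision instead of waving it through.
print(sign_off_permitted(["No.", "", "Yes, via a DPO panel."]))  # -> False
```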

Conclusion: The Loop Must Expand, Not Collapse

Human in the Loop leaves the viewer with a sobering realisation: a lone human reviewer is inadequate protection against systemic bias, particularly when that human has neither the mandate nor training to challenge the system. The film’s haunting closing image — a reviewer staring at a screen, accepting an AI-generated decision despite visible discomfort — encapsulates the danger of symbolic oversight.

For disability communities, the message is clear. We cannot trust systems to self-correct. We cannot assume human judgement will prevail over technological momentum. And we certainly cannot allow oversight to exclude those most impacted by discrimination.

A human in the loop is meaningful only if that human is:

  • empowered to challenge the system,
  • trained in bias literacy and rights-based frameworks, and
  • accountable to the communities the system affects.

Where disability is concerned, the safeguard is not a human in the loop, but disabled humans designing the loop.

The future of ethical AI will not be secured by adding a human after the fact. It will be built by placing disability, inclusion, and justice at the centre of design, prompting, and governance. The loop must not shrink into a rubber stamp. It must widen into a circle of shared power.


Nilesh Singit