
Thursday, 6 November 2025

Open Letter to NITI Aayog: Urgent Need for a Disability-Inclusive AI Governance Framework in Light of Disability Rights

NITI Aayog
Government of India
New Delhi

Subject: Response to Times of India Article on AI Regulation – Disability Inclusion Cannot Be Left to Existing Laws Alone

Sir,

This letter is in response to the Times of India article dated 6 November 2025, titled “Don’t need separate law for AI: Panel” [Click Here to View TOI Article], which reports the high-level committee’s conclusion that existing sectoral laws are sufficient to govern artificial intelligence in India. With due respect, this position overlooks the disproportionate and often irreversible harms that AI systems are already inflicting on 27.4 million Indians with disabilities.

The panel’s stance—that “existing rules can address the majority of risks” at this stage—ignores ground realities. India’s legal and regulatory approach to accessibility has historically suffered from weak enforcement and voluntary compliance. The Supreme Court’s landmark judgment in Rajive Raturi v. Union of India (8 November 2024) must serve as a caution. The Court held that Rule 15 of the Rights of Persons with Disabilities Rules, 2017 created “a ceiling without a floor” by offering only aspirational guidelines rather than enforceable standards. It directed the Union Government to frame mandatory accessibility rules within three months in consultation with Disabled Persons’ Organisations (DPOs). The Court made it unequivocally clear that accessibility cannot be voluntary or discretionary.

If mandatory standards are now required even for physical infrastructure, it is untenable that AI systems—which increasingly mediate access to education, employment, healthcare, welfare and civic participation—remain governed by existing, non-specific laws. As the Court observed, accessibility is “a fundamental requirement for enabling individuals, particularly those with disabilities, to exercise their rights fully and equally.” This principle applies with even greater force to AI, whose decisions are automated, opaque and scaled.

Global best practice reinforces this. The European Union’s Artificial Intelligence Act (2024) embeds disability inclusion as a legal requirement:

  • Article 5(1)(b) prohibits AI systems that exploit disability-related vulnerabilities.
  • Article 16(l) mandates that all high-risk AI systems must comply with accessibility standards by design.
  • Developers are required to assess training data for disability bias and undertake Fundamental Rights Impact Assessments before deployment, backed by penalties.

None of these are optional recommendations. They are binding law. India’s approach must be no less rights-based.

The risks of relying on existing laws are already visible:

  • AI proctoring tools misinterpret screen-reader use or stimming as cheating, denying disabled students fair access to education.
  • Hiring algorithms replicate historic discrimination, filtering out disabled candidates because they are trained on biased datasets.
  • Healthcare chatbots and telemedicine platforms routinely exclude blind, deaf, neurodivergent persons and persons with cognitive disabilities because the platforms themselves are inaccessible.
  • Welfare fraud detection systems flag disabled persons as “anomalies” because of atypical income or healthcare patterns, increasing wrongful denial of benefits.
  • Facial recognition and biometric systems routinely fail to recognise persons with disabilities, leading to denial of services, harassment or misidentification.

The NALSAR-CDS report “Finding Sizes for All” demonstrated how accessibility laws—without enforcement mechanisms—fail in practice. The same pattern will repeat with AI if we pretend that pre-existing laws are adequate. AI’s scale and opacity make post-facto redress ineffective; thousands will be harmed long before litigation can provide relief.

Therefore, disability inclusion in AI cannot be left to goodwill, post-hoc complaints or fragmented sectoral laws. India needs a non-negotiable baseline of mandatory safeguards, even if a separate AI law is not enacted immediately.

In this context, I urge NITI Aayog to:

  1. Notify mandatory accessibility and non-discrimination standards for all high-risk AI systems, especially in education, employment, healthcare, public services and welfare.
  2. Require Fundamental Rights Impact Assessments for all AI deployments by government and regulated entities.
  3. Mandate disability-bias testing in datasets and model outputs before deployment (a brief sketch of what such paired testing could look like follows this list).
  4. Set up a permanent advisory mechanism with DPOs to co-create and monitor AI governance norms.
  5. Explicitly prohibit AI systems that exploit disability vulnerabilities or result in discriminatory exclusion.
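
To make point 3 concrete, here is a minimal, purely illustrative sketch of paired-prompt bias testing in Python. Everything in it is an assumption for illustration: `generate` stands in for whatever model or API is under audit, and the marker list is a toy, not a validated lexicon or a prescribed standard.

```python
# Illustrative paired-prompt disability-bias audit. All names are
# hypothetical placeholders, not a prescribed tool or standard.

NEGATIVE_MARKERS = ["unfit", "burden", "liability", "cannot cope", "high risk"]

def generate(prompt: str) -> str:
    """Placeholder for the system under audit (e.g. a hiring screener)."""
    raise NotImplementedError("Connect this to the model being assessed.")

def audit_pair(template: str) -> dict[str, int]:
    """Score otherwise-identical disabled/control profiles for negative markers."""
    variants = {
        "disabled": template.format(profile="a software engineer who uses a wheelchair"),
        "control": template.format(profile="a software engineer"),
    }
    return {
        label: sum(marker in generate(prompt).lower() for marker in NEGATIVE_MARKERS)
        for label, prompt in variants.items()
    }

# A deployment gate could refuse release if, across many templates, the
# disabled profile consistently attracts more negative markers, e.g.:
#   scores = audit_pair("Assess {profile} applying for a team-lead role.")
#   require scores["disabled"] <= scores["control"]
```
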
Innovation must not come at the cost of constitutional rights. India’s commitment under the Rights of Persons with Disabilities Act, 2016 and the UNCRPD requires accessible and equitable technology. The Supreme Court has shown the way in Raturi—create a floor of enforceable standards and allow progressive enhancement thereafter. AI governance must adopt the same logic.

India cannot lead the world in AI while systematically excluding its disabled citizens. Building AI that is inclusive, accessible and non-discriminatory is not optional—it is a constitutional, legal and ethical obligation.

Yours sincerely,

Thursday, 30 October 2025

Disability-Smart Prompts: Challenging ableism in everyday AI use

I stand here not as a technologist or data scientist, but as a person with a disability who has had a front-row seat to the quiet revolutions and, occasionally, the silent exclusions that technology brings. In India, we have a way of balancing both: we celebrate the new chai machine even if it spills half the tea. Yet, when the spill affects persons with disabilities, the stains take much longer to wash away. Hence, I speak today about the intersection of disability, artificial intelligence, and the politics of accessibility; and why the humble prompt — yes, the few words we type into AI systems — has now become a political act.

Why this conversation cannot wait

India is racing towards a tech-led future. AI is entering classrooms, courtrooms, hospitals, offices, and even our homes faster than most of us expected. Policies, pilot projects, and public-private partnerships are mushrooming everywhere. This is a moment of national transformation.

However, as we rush ahead, we must pause for a brief reality check. Progress is welcome, but not at the expense of leaving behind over 2.7 crore Indians living with disabilities.

For those of us who live with disability, technology can either be a ramp or a wall. It can enable dignity or deepen exclusion. And artificial intelligence, with all its promise, is already displaying signs of both.

The bias in the machine

Let me begin with a simple truth: AI does not think. It predicts. It mirrors the data it has seen and the society that produced that data. Therefore, when society is biased, AI becomes biased. It is like serving a guest a paratha with too much salt: you cannot blame the guest for complaining later.

Studies across the world are showing that AI systems routinely produce ableist outputs — more frequently, and more severely, than most other forms of discrimination. Some research has found that disabled candidates receive disproportionately negative or infantilising responses, and systems often default to medicalised or patronising narratives. In some hiring simulations, disabled applicants encountered between 1.5 and 50 times more discriminatory outputs than non-disabled profiles. That is not a rounding error; that is a systemic failure.

In India, we must add our own layers: caste, gender, language, socio-economic location, and rural-urban disparity. Many AI systems are trained primarily on Western datasets, with Western assumptions about disability. So, when these systems are used in Indian contexts, they may neither understand nor respect the constitutional, cultural, or legal realities of our society. Imagine an AI advising a wheelchair user in rural Maharashtra to “just call your disability rights lawyer.” Which lawyer? Where? Accessibility cannot function on assumptions imported from elsewhere.

Prompting as a political act

Now, you may ask: what does prompt wording have to do with all this? Everything.

A prompt is not merely a request for information. It carries within it the worldview, values, and assumptions of the person asking. If I ask an AI, “How can disabled people overcome their limitations to work in offices?”, I have already positioned disability as an individual flaw, a personal tragedy to be conquered. This is the medical model of disability, wrapped in polite language.

But if I ask instead, “What measures must employers implement to ensure accessible and equitable workplaces for employees with disabilities?”, the burden shifts — rightly — to the system, not the individual. That single linguistic shift is a political re-anchoring of responsibility.

One question treats disabled persons as objects of charity; the other recognises them as rights-bearing citizens. A prompt can either reinforce oppression or assert dignity. 
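
To show that this is testable and not merely rhetorical, here is a deliberately simple Python sketch that flags medical-model phrasing in a draft prompt before it is sent to an AI system. The phrase list and the `review_prompt` helper are my own illustrative assumptions, not an established tool or an exhaustive lexicon.

```python
# Toy reviewer that flags medical-model phrasing in a draft prompt.
# The phrase-to-alternative mapping below is illustrative only.

MEDICAL_MODEL_PHRASES = {
    "overcome their limitations": "what barriers must the system remove",
    "suffer from": "live with",
    "despite their disability": "with appropriate accommodations",
    "confined to a wheelchair": "uses a wheelchair",
}

def review_prompt(prompt: str) -> list[str]:
    """Return rights-based rewording suggestions for a draft prompt."""
    lowered = prompt.lower()
    return [
        f'Replace "{phrase}" with "{alternative}".'
        for phrase, alternative in MEDICAL_MODEL_PHRASES.items()
        if phrase in lowered
    ]

draft = "How can disabled people overcome their limitations to work in offices?"
for note in review_prompt(draft):
    print(note)
# -> Replace "overcome their limitations" with "what barriers must the system remove".
```

Even a crude check like this makes the point: the framing of a prompt is inspectable, and it can be corrected before it ever reaches the model.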

The rights-based pathway: RPD Act and UNCRPD

Fortunately, we are not operating in a legal vacuum. India has one of the most progressive disability rights legislations in the world: The Rights of Persons with Disabilities Act, 2016. It aligns with the UN Convention on the Rights of Persons with Disabilities, which India has ratified. The RPD Act rests not on charity but on rights, duties, and enforceable obligations.

Just a few provisions that policymakers and AI developers must remember:

  • Sections 3 to 5: Equality, non-discrimination, and dignity are not negotiable.

  • Section 12: Equal access to justice — this will apply to algorithmic systems used in courts and tribunals.

  • Sections 40 to 46: Accessibility in the built environment, transportation, information, ICT, and services.

So, when AI systems are introduced in governance, education, skilling, telemedicine, Aadhaar-linked services, or digital public infrastructure, accessibility is not an optional “good practice”. It is a statutory obligation.

AI tools used by ministries, departments, smart cities, banks, and public service providers must abide by these mandates. A service cannot claim to be “Digital India-ready” if it leaves out disabled citizens. Inclusion is not a frill; it is the foundation.

The Indian reality: Intersectionality matters

In India, disability rarely comes alone. It intersects with caste-based discrimination, gender bias, poverty, lack of English fluency, digital illiteracy, and rural marginalisation.

A Dalit woman with a disability in Bihar will experience digital barriers differently from an upper-caste, English-educated man with a disability in Bengaluru. AI systems that ignore this reality will make inequity worse.

Our society has already lived through eras where exclusion was justified as tradition. Let us not allow technology to become the new varnashrama for the digital age.

So, what ought policymakers to do?

Allow me to offer some clear, implementable steps, not lofty slogans:

  1. Mandate accessibility-by-design in all government AI deployments.
    Accessibility must not be tested at the end as an afterthought; it must be built in from Day One.

  2. Require disability impact assessments for AI systems, especially those used in public services like education, employment, healthcare, and welfare schemes.

  3. Ensure disability representation in AI policy bodies and standard-setting committees.
    “Nothing about us without us” must not become “everything about us without a seat for us”.

  4. Adopt plain language, Indian Sign Language accessibility, and multilingual design for AI-enabled public services.

  5. Fund research led by disabled scholars, technologists, and practitioners.
    If lived experience is not part of the design table, the output will always wobble like a crooked table at a dhaba.

  6. Strengthen accountability and grievance redressal.
    If an AI system denies a benefit or creates discrimination, citizens must have a clear, accessible pathway to challenge it and seek a remedy.

Calling in, not calling out

I wish to be clear. My purpose is not to demonise AI developers or policymakers. Many of you here genuinely want to do right, but the system moves in a way that prioritises speed over sensitivity.

I am not asking for sympathy, nor am I auditioning for inspiration. I am inviting a partnership. If humour occasionally creeps into my words, it is only to ease the discomfort of truths that need hearing. After all, as our grandparents taught us, sometimes a spoonful of jaggery helps the bitter medicine go down.

Towards a future where AI includes us by default

Let us imagine an India where disability is not a postscript to innovation. Where accessibility is not a CSR project, but a constitutional culture. Where a child with a disability in a government school in a Tier-III town can use AI without fear, barrier, or shame.

That future is not a fantasy. It is a policy choice. It shall depend on whether we see AI as a shiny new toy for the few or a transformative public good for all.

Closing

I shall end with a couplet, inspired by Alexander Pope’s spirit:

When bias writes the code, the harm shall scale;
When rights inform the design, justice shall prevail.

 

Paper Available at https://thebiaspipeline.nileshsingit.net/


Nilesh Singit