I stand here not as a technologist or data scientist, but as a person with a disability who has had a front-row seat to the quiet revolutions and, occasionally, the silent exclusions that technology brings. In India, we have a way of balancing both: we celebrate the new chai machine even if it spills half the tea. Yet, when the spill affects persons with disabilities, the stains take much longer to wash away. Hence, I speak today about the intersection of disability, artificial intelligence, and the politics of accessibility; and why the humble prompt — yes, the few words we type into AI systems — has now become a political act.
Why this conversation cannot wait
India is racing towards a tech-led future. AI is entering classrooms, courtrooms, hospitals, offices, and even our homes faster than most of us expected. Policies, pilot projects, and public-private partnerships are mushrooming everywhere. This is a moment of national transformation.
However, as we rush ahead, we must pause for a brief reality check. Progress is welcome, but not at the expense of leaving behind 27 crore Indians living with disabilities.
For those of us who live with disability, technology can either be a ramp or a wall. It can enable dignity or deepen exclusion. And artificial intelligence, with all its promise, is already displaying signs of both.
The bias in the machine
Let me begin with a simple truth: AI does not think. It predicts. It mirrors the data it has seen and the society that produced that data. Therefore, when society is biased, AI becomes biased. It is like serving a guest a paratha with too much salt: you cannot blame the guest for complaining later.
Studies across the world are showing that AI systems routinely produce ableist outputs — more frequently, and more severely, than most other forms of discrimination. Some research has found that disabled candidates receive disproportionately negative or infantilising responses, and systems often default to medicalised or patronising narratives. In some hiring simulations, disabled applicants encountered between 1.5 and 50 times more discriminatory outputs than non-disabled profiles. That is not a rounding error; that is a systemic failure.
In India, we must add our own layers: caste, gender, language, socio-economic location, and rural-urban disparity. Many AI systems are trained primarily on Western datasets, with Western assumptions about disability. So, when these systems are used in Indian contexts, they may neither understand nor respect the constitutional, cultural, or legal realities of our society. Imagine an AI advising a wheelchair user in rural Maharashtra to “just call your disability rights lawyer.” Which lawyer? Where? Accessibility cannot function on assumptions imported from elsewhere.
Prompting as a political act
Now, you may ask: what does prompt wording have to do with all this? Everything.
A prompt is not merely a request for information. It carries within it the worldview, values, and assumptions of the person asking. If I ask an AI, “How can disabled people overcome their limitations to work in offices?”, I have already positioned disability as an individual flaw, a personal tragedy to be conquered. This is the medical model of disability, wrapped in polite language.
But if I ask instead, “What measures must employers implement to ensure accessible and equitable workplaces for employees with disabilities?”, the burden shifts — rightly — to the system, not the individual. That single linguistic shift is a political re-anchoring of responsibility.
One question treats disabled persons as objects of charity; the other recognises them as rights-bearing citizens. A prompt can either reinforce oppression or assert dignity.
The rights-based pathway: RPD Act and UNCRPD
Fortunately, we are not operating in a legal vacuum. India has one of the most progressive disability rights legislations in the world: The Rights of Persons with Disabilities Act, 2016. It aligns with the UN Convention on the Rights of Persons with Disabilities, which India has ratified. The RPD Act rests not on charity but on rights, duties, and enforceable obligations.
Here are a few provisions that policymakers and AI developers must remember:
- Sections 3 to 5: Equality, non-discrimination, and dignity are not negotiable.
- Section 12: Equal access to justice — this will apply to algorithmic systems used in courts and tribunals.
- Sections 40 to 46: Accessibility in the built environment, transportation, information, ICT, and services.
So, when AI systems are introduced in governance, education, skilling, telemedicine, Aadhaar-linked services, or digital public infrastructure, accessibility is not an optional “good practice”. It is a statutory obligation.
AI tools used by ministries, departments, smart cities, banks, and public service providers must abide by these mandates. A service cannot claim to be “digital India-ready” if it leaves out disabled citizens. Inclusion is not a frill; it is the foundation.
The Indian reality: Intersectionality matters
In India, disability rarely comes alone. It intersects with caste-based discrimination, gender bias, poverty, lack of English fluency, digital illiteracy, and rural marginalisation.
A Dalit woman with a disability in Bihar will experience digital barriers differently from an upper-caste, English-educated man with a disability in Bengaluru. AI systems that ignore this reality will make inequity worse.
Our society has already lived through eras where exclusion was justified as tradition. Let us not allow technology to become the new varnashrama for the digital age.
So, what should policymakers do?
Allow me to offer some clear, implementable steps, not lofty slogans:
- Mandate accessibility-by-design in all government AI deployments. Accessibility must not be an afterthought tested at the end; it must be built in from Day One.
- Require disability impact assessments for AI systems, especially those used in public services such as education, employment, healthcare, and welfare schemes.
- Ensure disability representation in AI policy bodies and standard-setting committees. "Nothing about us without us" must not become "Everything about us without a seat for us."
- Adopt plain language, Indian Sign Language accessibility, and multilingual design for AI-enabled public services.
- Fund research led by disabled scholars, technologists, and practitioners. If lived experience is not at the design table, the output will always wobble like a crooked table at a dhaba.
- Strengthen accountability and grievance redressal. If an AI system denies a benefit or discriminates, citizens must have a clear, accessible pathway to challenge it and seek a remedy.
Calling in, not calling out
I wish to be clear. My purpose is not to demonise AI developers or policymakers. Many of you here genuinely want to do right, but the system moves in a way that prioritises speed over sensitivity.
I am not asking for sympathy, nor am I auditioning for inspiration. I am inviting a partnership. If humour occasionally creeps into my words, it is only to ease the discomfort of truths that need hearing. After all, as our grandparents taught us, sometimes a spoonful of jaggery helps the bitter medicine go down.
Towards a future where AI includes us by default
Let us imagine an India where disability is not a postscript to innovation. Where accessibility is not a CSR project, but a constitutional culture. Where a child with a disability in a government school in a Tier-III town can use AI without fear, barrier, or shame.
That future is not a fantasy. It is a policy choice. It shall depend on whether we see AI as a shiny new toy for the few or a transformative public good for all.
Closing
I shall end with a couplet, inspired by Alexander Pope’s spirit:
When bias writes the code, the harm shall scale;
When rights inform the design, justice shall prevail.
Paper Available at https://thebiaspipeline.nileshsingit.net/