In a compelling open letter dated February 24, 2026, to Prime Minister Narendra Modi, distinguished disability rights researcher Nilesh Singit challenges the notion of "AI for All" amid India's ambitious AI push. Referencing the India AI Impact Summit 2026's sign language AI demonstration and a recent Moneylife article on technoableism, Singit highlights how AI systems absorb societal biases, scaling exclusion for persons with disabilities through default designs that overlook diverse needs. He calls for proactive measures: embedding accessibility standards, conducting disability impact assessments, auditing datasets for bias, and including disability expertise in AI governance bodies. Drawing from lived experience and aligned with the Rights of Persons with Disabilities Act, 2016, and UNCRPD obligations, the letter urges structural inclusion over symbolic gestures to align technological leadership with social justice. For deeper insights into disability bias in AI, visit The Bias Pipeline.
A few stray thoughts, random reflexions, general observations and points of view — all my own work, as Busybee would say — on my day-to-day crip existence in this chaotic, ever-surprising circus of life. Minor irritations, modest triumphs, everyday absurdities, and the odd philosophical musing over an evening cuppa. Nothing earth-shattering, but just enough to coax a smile, lift an eyebrow, and deliver a gentle kick in the backside when required.
Thursday, 26 February 2026
AI for All? An Open Letter to PM Modi on Disability Bias in India's AI Future
Saturday, 3 January 2026
Employment as Applause: When Disability Inclusion Becomes Institutional Self-Congratulation
I. Introduction: Locating the Vantage Point
Conversations on disability and employment in India are rarely short of intent. They are, however, persistently short of consequence. Policy documents, corporate diversity statements, and institutional reports repeatedly affirm the importance of including persons with disabilities in the workforce, yet the lived reality of employment remains fragile, episodic, and conditional.
This article proceeds from a specific vantage point: empirical and legal work emerging from the Centre for Disability Studies (CDS) at NALSAR University of Law, including findings from the Finding Sizes for All (FSA) research. These findings do not claim to exhaust the field of disability and employment. Their value lies elsewhere. They reveal a recurring institutional orientation towards employment—one that treats inclusion as an achievement to be applauded, rather than a condition to be sustained.
The Rights of Persons with Disabilities Act, 2016 (RPwD Act), read together with India’s obligations under the United Nations Convention on the Rights of Persons with Disabilities (UNCRPD), establishes employment as a matter of enforceable equality. Yet, in practice, employment for persons with disabilities continues to operate as a conditional benefit—extended, withdrawn, and re-extended at the discretion of employers and administrators.
This article argues that disability inclusion in employment has increasingly become a site of institutional self-congratulation. Hiring is treated as proof of virtue; retention is rendered optional. The result is a system that celebrates entry while normalising exit.
II. The Legal Architecture: Employment Is Not Aspirational
It is necessary to begin with the legal baseline, because discussions on disability and employment often proceed as though inclusion were merely a matter of good practice, ethical commitment, or managerial benevolence.
The RPwD Act, 2016, marks a decisive shift in Indian disability law from a welfare-oriented framework to a rights-based regime grounded in equality and non-discrimination. Several features of the Act are directly relevant to employment.
First, the Act explicitly prohibits discrimination in employment, including discrimination arising from the denial of reasonable accommodation. Discrimination is defined broadly, capturing not only intentional exclusion but also practices and conditions that have the effect of disadvantaging persons with disabilities. This aligns with the UNCRPD’s emphasis on substantive equality rather than formal parity.
Second, reasonable accommodation is framed as a statutory obligation. It is not positioned as a discretionary managerial tool or a charitable adjustment. Failure to provide reasonable accommodation constitutes discrimination under the Act. The legal implication is clear: accommodation is a precondition for equality, not a concession.
Third, the RPwD Act situates employment within a broader framework of dignity, autonomy, and participation in society. Employment is not an isolated policy objective. It is a gateway right. Failure in employment cascades into failures in social protection, independent living, and community participation.
The UNCRPD reinforces this architecture. Article 27 recognises the right of persons with disabilities to work, on an equal basis with others, in a labour market and work environment that is open, inclusive, and accessible. States Parties are obligated not merely to promote employment but to safeguard the conditions under which employment can be sustained, including through reasonable accommodation and protection against discrimination.
Taken together, these instruments establish a clear proposition: employment for persons with disabilities is not aspirational. It is justiciable.
III. What the Evidence Shows: Employment as an Episodic Event
Despite this legal clarity, findings emerging from CDS research, including the Finding Sizes for All study, reveal a persistent and troubling pattern.
Employment interventions for persons with disabilities overwhelmingly prioritise entry. Skill development programmes, certification initiatives, placement drives, and recruitment targets dominate both public and private sector approaches. Entry into employment is treated as the primary marker of success.
What remains weakly addressed is continuity.
Retention, career progression, workplace adaptation, and long-term security are rarely embedded into programme design or institutional accountability. Monitoring mechanisms often end shortly after placement. Enforcement mechanisms rarely extend beyond initial hiring.
When employment relationships break down—due to inaccessible work environments, withdrawal of accommodation, or hostile organisational cultures—the system offers little recourse beyond informal negotiation or exit.
From a legal standpoint, this represents a fundamental misreading of equality. The right to employment under the RPwD Act is not a right to be hired once. It is a right to participate in work on an equal basis over time.
The episodic nature of employment has direct implications for social protection. When employment collapses, responsibility for financial and care support shifts back to families, often without formal recognition or support. Social protection thus becomes privatised, gendered, and uneven.
This is not a failure of individual resilience. It is a structural design flaw.
IV. Reasonable Accommodation: Law in Text, Discretion in Practice
Perhaps the clearest illustration of the gap between law and lived reality lies in the treatment of reasonable accommodation.
Legally, reasonable accommodation is mandatory. Operationally, it remains discretionary.
Findings from Finding Sizes for All indicate that accommodation is frequently negotiated on an individual basis, dependent on managerial goodwill, budgetary flexibility, or organisational culture. Accommodations may be provided temporarily, informally, or conditionally. They may be withdrawn when personnel change or when financial priorities shift.
This produces a legally perverse outcome: a statutory right whose enjoyment depends on institutional mood.
When accommodation is treated as an exception rather than infrastructure, the burden of adjustment shifts back onto the disabled worker. Individuals are expected to compensate for inaccessible systems through personal resilience, improvisation, or silence. The workplace itself remains unchanged.
From a doctrinal perspective, this undermines the very purpose of reasonable accommodation. Accommodation is not meant to reward deserving individuals. It is meant to internalise equality into organisational design.
V. Social Protection After Failure: A Backward Logic
Social protection frameworks for persons with disabilities in India continue to operate largely as post-failure compensation mechanisms. Pensions, allowances, and family-based support systems are activated after employment has failed or become impossible.
The CDS findings expose the limits of this model. When employment collapses due to lack of accommodation or a hostile work environment, social protection addresses income loss but not the structural exclusion that produced the loss.
This approach inverts the logic of rights-based inclusion. Instead of stabilising employment through proactive support, the system compensates individuals after exclusion has already occurred.
Legally and normatively, this is backwards.
Social protection ought to be attached to employment continuity. It should support accommodation costs, protect workers from attrition caused by structural design failures, and ensure predictability rather than churn.
When social protection is decoupled from employment stability, the State meets its formal obligations while outsourcing the consequences of failure to families and civil society.
VI. Community Inclusion at Work: Beyond Cultural Framing
Community inclusion is often discussed in cultural terms—belonging, attitudes, and sensitivity. While these dimensions matter, they are insufficient from a legal standpoint.
In employment, community inclusion is about equal participation without penalty.
If disabled employees remain concentrated in limited roles, excluded from advancement, or evaluated against norms they were never accommodated to meet, inclusion has failed regardless of intent.
The RPwD Act does not require disabled workers to be inspirational. The UNCRPD does not require gratitude. What the law requires is equality in participation and opportunity.
Community inclusion that only survives during diversity days, leadership speeches, or pilot projects is not genuine inclusion. It is performance.
VII. From Goodwill to Governance: Three Legal Thresholds
It is therefore necessary to move beyond recommendations framed as best practices and articulate clear legal thresholds.
First, employment must be treated as a continuing right, not a placement outcome. Monitoring, enforcement, and accountability must extend beyond entry into employment.
Second, reasonable accommodation must be operationalised as enforceable infrastructure. It cannot remain discretionary in practice while mandatory on paper.
Third, social protection should be tied to employment continuity rather than compensating for its collapse. Protection must stabilise work, not merely respond to its failure.
These are not new ideas. They are already implicit in Indian law and international obligation. What is missing is institutional seriousness.
VIII. Conclusion: When Inclusion Flatters Institutions
Employment for persons with disabilities in India increasingly functions as a moral performance. Institutions congratulate themselves for hiring while leaving underlying structures intact. Inclusion becomes a certificate of good conduct rather than a condition of justice.
Employment, in such a system, is not offered as a right. It is offered as a reward—for the employer’s good behaviour.
That framing explains why so many inclusion efforts fail to endure.
And it brings us to the final reckoning.
If dignity at work survives only on being good,
Then justice has failed—exactly where it should.
Thursday, 6 November 2025
Open Letter to NITI Aayog: Urgent Need for Disability-Inclusive AI Governance Framework in Light of Disability Rights
NITI Aayog
Government of India
New Delhi
Subject: Response to Times of India Article on AI Regulation – Disability Inclusion Cannot Be Left to Existing Laws Alone
Sir,
This letter is in response to the recent Times of India article dated 6 November 2025 titled “Don’t need separate law for AI: Panel”, which reports the conclusion of the high-level committee that existing sectoral laws are sufficient to govern artificial intelligence in India. With due respect, this position overlooks the disproportionate and often irreversible harms that AI systems are already inflicting on 27.4 million Indians with disabilities.
The panel’s stance—that “existing rules can address the majority of risks” at this stage—ignores ground realities. India’s legal and regulatory approach to accessibility has historically suffered from weak enforcement and voluntary compliance. The Supreme Court’s landmark judgment in Rajive Raturi v. Union of India (8 November 2024) must serve as a caution. The Court held that Rule 15 of the Rights of Persons with Disabilities Rules, 2017 created “a ceiling without a floor” by offering only aspirational guidelines rather than enforceable standards. It directed the Union Government to frame mandatory accessibility rules within three months in consultation with Disabled Persons’ Organisations (DPOs). The Court made it unequivocally clear that accessibility cannot be voluntary or discretionary.
If mandatory standards are now required even for physical infrastructure, it is untenable that AI systems—which increasingly mediate access to education, employment, healthcare, welfare and civic participation—remain governed by existing, non-specific laws. As the Court observed, accessibility is “a fundamental requirement for enabling individuals, particularly those with disabilities, to exercise their rights fully and equally.” This principle applies with even greater force to AI, whose decisions are automated, opaque and scaled.
Global best practice reinforces this. The European Union’s Artificial Intelligence Act (2024) embeds disability inclusion as a legal requirement.
- Article 5(1)(b) prohibits AI systems that exploit disability-related vulnerabilities.
- Article 16(l) mandates that all high-risk AI systems must comply with accessibility standards by design.
- Developers are required to assess training data for disability bias and undertake Fundamental Rights Impact Assessments before deployment, backed by penalties.
None of these are optional recommendations. They are binding law. India’s approach must be no less rights-based.
The NALSAR-CDS report “Finding Sizes for All” demonstrated how accessibility laws—without enforcement mechanisms—fail in practice. The same pattern will repeat with AI if we pretend that pre-existing laws are adequate. AI’s scale and opacity make post-facto redress ineffective; thousands will be harmed long before litigation can provide relief.
Therefore, disability inclusion in AI cannot be left to goodwill, post-hoc complaints or fragmented sectoral laws. India needs a non-negotiable baseline of mandatory safeguards, even if a separate AI law is not enacted immediately.
In this context, I urge NITI Aayog to:
- Notify mandatory accessibility and non-discrimination standards for all high-risk AI systems, especially in education, employment, healthcare, public services and welfare.
- Require Fundamental Rights Impact Assessments for all AI deployments by government and regulated entities.
- Mandate disability-bias testing in datasets and model outputs before deployment.
- Set up a permanent advisory mechanism with DPOs to co-create and monitor AI governance norms.
- Explicitly prohibit AI systems that exploit disability vulnerabilities or result in discriminatory exclusion.
Thursday, 30 October 2025
Prototype — Accessible to Whom? Legible to What?
When I first read this theme, I thought to myself, at last someone has asked the right two questions, though perhaps in reverse.
We often think of prototyping as a neutral, creative act—a space of optimism and experimentation. Yet, for many of us in the disability community, it is also the stage where inclusion quietly begins or silently ends.
And when Artificial Intelligence enters this space, another question arises: what does it mean for a prototype to be legible to a machine before it is accessible to a human?
My argument today is straightforward: AI-powered assistive technologies often make disabled people legible to machines, but not necessarily empowered as agents of design.
The challenge before us is to move from designing for to designing with, and ultimately to designing from disability.
Accessibility and Legibility
Accessibility, as Aimi Hamraie reminds us, is not a technical feature but a relationship—a continuous negotiation between bodies, spaces, and technologies.
It is not about adding a ramp at the end, but about asking why there was a staircase to begin with.
Legibility, on the other hand, concerns what systems can recognise, process, and render into data. Within Artificial Intelligence, what is not legible simply ceases to exist.
Now imagine a person whose speech, gait, or expression does not fit the model. The algorithm blinks and replies: “Pardon? You are not in my dataset.”
Speech-recognition tools mishear dysarthric voices; facial-recognition models misclassify disabled expressions as errors.
In such moments, accessibility collapses into machinic readability. One is included only if the code can comprehend them. The bureaucracy of bias, once paper-bound, now speaks in silicon.
The Bias Pipeline—What Goes In, Comes Out Biased
In one experiment, researchers submitted pairs of otherwise identical resumes to AI-powered screening tools. In one version, the candidate had a “Disability Leadership Award” or involvement in disability advocacy listed; in the other, that line was omitted. The AI system consistently ranked the non-disability version higher, asserting that the presence of disability credentials indicated “less leadership emphasis” or “focus diverted from core responsibilities.”
This is discrimination by design. A qualified person with disability is judged unsuitable—even when their skills match or exceed the baseline—because the algorithm treated their disability as a liability. Such distortions stem not from random error but from biased training data and value judgments encoded invisibly.
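The paired-resume method described above can be sketched in a few lines of code. This is purely illustrative: `score_resume` stands in for whatever screening model is under audit (treated as a black box), and the award text is a hypothetical example, not data from the actual study.

```python
# Counterfactual (paired) audit sketch for an AI resume screener.
# All names are illustrative; `score_resume` is the model under audit.

DISABILITY_MARKER = "Disability Leadership Award, 2023"

def make_pair(base_resume: str) -> tuple[str, str]:
    """Return two resumes identical except for one disability credential."""
    with_marker = base_resume + "\nAwards: " + DISABILITY_MARKER
    without_marker = base_resume + "\nAwards: Employee of the Year, 2023"
    return with_marker, without_marker

def audit(score_resume, base_resumes: list[str]) -> float:
    """Fraction of pairs where the disability version scores lower.

    Around 0.5 suggests no systematic penalty; near 1.0 is the
    pattern the experiment above reported."""
    penalised = sum(
        score_resume(w) < score_resume(wo)
        for w, wo in map(make_pair, base_resumes)
    )
    return penalised / len(base_resumes)

# Toy biased screener: silently penalises any mention of "disability".
biased = lambda text: -text.lower().count("disability")
rate = audit(biased, ["Jane Doe\nSkills: Python, SQL"] * 10)
print(rate)  # a consistently biased screener yields 1.0
```

The value of the paired design is that it isolates the single changed line: any systematic score gap can only come from the disability credential itself.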
The Tokenism Trap
The bias in data is reinforced by bias in design. Disabled persons are often invited into the process only when the prototype is complete—summoned for validation rather than collaboration.
This is audit theatre: a performance of inclusion without participation.
The United Kingdom’s National Disability Survey was struck down by its own High Court for precisely this reason: it claimed to be the largest listening exercise ever held, yet failed to involve disabled people meaningfully.
Even the European Union’s AI Act, though progressive, risks the same trap. It mandates accessibility for high-risk systems but leaves enforcement weak.
Most developers receive no formal training in accessibility. When disability appears, it is usually through the medical model—as something to be corrected, not as expertise to be centred.
Real-World Consequences
AI hiring systems rank curricula vitae lower if they contain disability-related words, even when qualifications are stronger.
Video-interview platforms misread the facial expressions of stroke survivors or autistic candidates.
Online proctoring software has flagged blind students as “cheating” for not looking at screens. During the pandemic, educational technology in India expanded rapidly, yet accessibility lagged behind.
Healthcare algorithms trained on narrow datasets make incorrect inferences about disability-related conditions.
Each of these failures flows from inaccessible prototyping practices.
Disability-Led AI Prototyping
If the problem lies in who defines legibility, the solution lies in who leads the prototype.
Disability-led design recognises accessibility as a form of knowledge. It asks not, “How do we fix you?” but “What can your experience teach the machine about the world?”
Google’s Project Euphonia trains AI to understand atypical speech. The effort is valuable, yet it raises questions of data ownership and labour—who benefits from making oneself legible to the machine?
By contrast, community-led mapping projects, where wheelchair users, blind travellers, and neurodivergent coders co-train AI systems, are slower but more authentic.
Here, accessibility becomes reciprocity: the machine learns to listen, not merely to predict.
As Sara Hendren writes, design is not a solution; it is an invitation.
When disability leads, that invitation becomes mutual—the technology adjusts to us, not the other way round.
Disability-Smart Prompts: Challenging Ableism in Everyday AI Use
I stand here not as a technologist or data scientist, but as a person with a disability who has had a front-row seat to the quiet revolutions and, occasionally, the silent exclusions that technology brings. In India, we have a way of balancing both: we celebrate the new chai machine even if it spills half the tea. Yet, when the spill affects persons with disabilities, the stains take much longer to wash away. Hence, I speak today about the intersection of disability, artificial intelligence, and the politics of accessibility; and why the humble prompt — yes, the few words we type into AI systems — has now become a political act.
Why this conversation cannot wait
India is racing towards a tech-led future. AI is entering classrooms, courtrooms, hospitals, offices, and even our homes faster than most of us expected. Policies, pilot projects, and public-private partnerships are mushrooming everywhere. This is a moment of national transformation.
However, as we rush ahead, we must pause for a brief reality check. Progress is welcome, but not at the expense of leaving behind 2.7 crore Indians living with disabilities.
For those of us who live with disability, technology can either be a ramp or a wall. It can enable dignity or deepen exclusion. And artificial intelligence, with all its promise, is already displaying signs of both.
The bias in the machine
Let me begin with a simple truth: AI does not think. It predicts. It mirrors the data it has seen and the society that produced that data. Therefore, when society is biased, AI becomes biased. It is like feeding a paratha with too much salt to a guest: you cannot blame the guest for complaining later.
Studies across the world are showing that AI systems routinely produce ableist outputs — more frequently, and more severely, than most other forms of discrimination. Some research has found that disabled candidates receive disproportionately negative or infantilising responses, and systems often default to medicalised or patronising narratives. In some hiring simulations, disabled applicants encountered between 1.5 and 50 times more discriminatory outputs than non-disabled profiles. That is not a rounding error; that is a systemic failure.
In India, we must add our own layers: caste, gender, language, socio-economic location, and rural-urban disparity. Many AI systems are trained primarily on Western datasets, with Western assumptions about disability. So, when these systems are used in Indian contexts, they may neither understand nor respect the constitutional, cultural, or legal realities of our society. Imagine an AI advising a wheelchair user in rural Maharashtra to “just call your disability rights lawyer.” Which lawyer? Where? Accessibility cannot function on assumptions imported from elsewhere.
Prompting as a political act
Now, you may ask: what does prompt wording have to do with all this? Everything.
A prompt is not merely a request for information. It carries within it the worldview, values, and assumptions of the person asking. If I ask an AI, “How can disabled people overcome their limitations to work in offices?”, I have already positioned disability as an individual flaw, a personal tragedy to be conquered. This is the medical model of disability, wrapped in polite language.
But if I ask instead, “What measures must employers implement to ensure accessible and equitable workplaces for employees with disabilities?”, the burden shifts — rightly — to the system, not the individual. That single linguistic shift is a political re-anchoring of responsibility.
One question treats disabled persons as objects of charity; the other recognises them as rights-bearing citizens. A prompt can either reinforce oppression or assert dignity.
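The shift between those two prompts can even be made mechanical. The sketch below is a toy of my own devising, not an established tool: it flags a handful of deficit-framing cues that signal the medical model, using the two example prompts from above.

```python
# Toy prompt-framing checker (illustrative only): flags wording that
# frames disability as an individual deficit rather than a systemic
# barrier — the medical- vs social-model shift discussed above.

MEDICAL_MODEL_CUES = (
    "overcome their", "suffer from", "despite their disability",
    "afflicted", "limitations",
)

def flags_medical_model(prompt: str) -> list[str]:
    """Return the deficit-framing cues found in a prompt."""
    lowered = prompt.lower()
    return [cue for cue in MEDICAL_MODEL_CUES if cue in lowered]

charity = "How can disabled people overcome their limitations to work in offices?"
rights = ("What measures must employers implement to ensure accessible "
          "and equitable workplaces for employees with disabilities?")

print(flags_medical_model(charity))  # ['overcome their', 'limitations']
print(flags_medical_model(rights))   # []
```

A keyword list is crude, of course — the real work is in the asker’s worldview — but even this toy makes the point that framing is detectable, and therefore correctable.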
The rights-based pathway: RPwD Act and UNCRPD
Fortunately, we are not operating in a legal vacuum. India has one of the most progressive disability rights laws in the world: the Rights of Persons with Disabilities Act, 2016. It aligns with the UN Convention on the Rights of Persons with Disabilities, which India has ratified. The RPwD Act rests not on charity but on rights, duties, and enforceable obligations.
Just a few provisions that policymakers and AI developers must remember:
- Sections 3 to 5: Equality, non-discrimination, and dignity are not negotiable.
- Section 12: Equal access to justice — this will apply to algorithmic systems used in courts and tribunals.
- Sections 40 to 46: Accessibility in the built environment, transportation, information, ICT, and services.
So, when AI systems are introduced in governance, education, skilling, telemedicine, Aadhaar-linked services, or digital public infrastructure, accessibility is not an optional “good practice”. It is a statutory obligation.
AI tools used by ministries, departments, smart cities, banks, and public service providers must abide by these mandates. A service cannot claim to be “digital India-ready” if it leaves out disabled citizens. Inclusion is not a frill; it is the foundation.
The Indian reality: Intersectionality matters
In India, disability rarely comes alone. It intersects with caste-based discrimination, gender bias, poverty, lack of English fluency, digital illiteracy, and rural marginalisation.
A Dalit woman with a disability in Bihar will experience digital barriers differently from an upper-caste, English-educated man with a disability in Bengaluru. AI systems that ignore this reality will make inequity worse.
Our society has already lived through eras where exclusion was justified as tradition. Let us not allow technology to become the new varnashrama for the digital age.
So, what ought policymakers to do?
Allow me to offer some clear, implementable steps, not lofty slogans:
- Mandate accessibility-by-design in all government AI deployments. Accessibility shall not be tested at the end like an afterthought; it must be built in from Day One.
- Require disability impact assessments for AI systems, especially those used in public services like education, employment, healthcare, and welfare schemes.
- Ensure disability representation in AI policy bodies and standard-setting committees. “Nothing about us without us” cannot become “Everything about us without a seat for us.”
- Adopt plain language, Indian Sign Language accessibility, and multilingual design for AI-enabled public services.
- Fund research led by disabled scholars, technologists, and practitioners. If lived experience is not at the design table, the output will always wobble like a crooked table at a dhaba.
- Strengthen accountability and grievance redressal. If an AI system denies a benefit or creates discrimination, citizens must have a clear, accessible pathway to challenge it and seek a remedy.
Calling in, not calling out
I wish to be clear. My purpose is not to demonise AI developers or policymakers. Many of you here genuinely want to do right, but the system moves in a way that prioritises speed over sensitivity.
I am not asking for sympathy, nor am I auditioning for inspiration. I am inviting a partnership. If humour occasionally creeps into my words, it is only to ease the discomfort of truths that need hearing. After all, as our grandparents taught us, sometimes a spoonful of jaggery helps the bitter medicine go down.
Towards a future where AI includes us by default
Let us imagine an India where disability is not a postscript to innovation. Where accessibility is not a CSR project, but a constitutional culture. Where a child with a disability in a government school in a Tier-III town can use AI without fear, barrier, or shame.
That future is not a fantasy. It is a policy choice. It shall depend on whether we see AI as a shiny new toy for the few or a transformative public good for all.
Closing
I shall end with a couplet, inspired by Alexander Pope’s spirit:
When bias writes the code, the harm shall scale;
When rights inform the design, justice shall prevail.
Paper Available at https://thebiaspipeline.nileshsingit.net/