Showing posts with label Disability Rights. Show all posts

Sunday, 29 March 2026

AI Mandates, Coding Shortcuts, and the Quiet Rise of Technoableism

A black and white editorial cartoon titled "TECHNO-ABLEISM OFFICE" illustrates a conflict over accessible design. A menacing robot labeled "ZABARDASTI AI" spews bubbles like "INACCESSIBLE" and "NO ARIA LABELS," while a shouting manager commands a stressed young programmer, "USE IT! MANDATORY ZABARDASTI! WE DON'T NEED YOUR 'ACCESSIBILITY' SLOWDOWN!" The programmer points to a computer, with a thought bubble quoting an article from The Hindu about forcing AI code making it brittle. An older man in a wheelchair reading THE TIMES comments that "WIPE CODING" for speed only "wipes the inclusion part."
The "Zabardasti AI" Mandate: A Cartoon on Corporate Techno-Ableism and Inaccessibility

Across the technology sector, organisations are beginning to mandate the use of AI coding tools. The argument is simple: AI increases productivity, accelerates software development, and allows companies to do more with fewer people. But something important is missing from this conversation. Most AI coding systems generate software that focuses on functionality and speed. Accessibility rarely appears in the default output. As a result, developers often receive machine-generated code that works visually but fails for screen readers, keyboard navigation, and other assistive technologies. 
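The gap described above is easy to illustrate. As a rough sketch (not a substitute for a proper audit tool such as axe or WAVE), a few lines using Python's standard-library html.parser can flag two of the most common omissions in machine-generated markup: images without alt text and icon-only buttons without an accessible name. The HTML fragment below is hypothetical, chosen to resemble typical speed-focused generated output.

```python
from html.parser import HTMLParser

class A11yChecker(HTMLParser):
    """Flag two common accessibility omissions in generated HTML:
    <img> without an alt attribute, and <button> elements that have
    neither visible text nor an aria-label."""

    def __init__(self):
        super().__init__()
        self.issues = []
        self._open_button_attrs = None
        self._button_has_text = False

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "img" and "alt" not in attrs:
            self.issues.append("img missing alt attribute")
        if tag == "button":
            self._open_button_attrs = attrs
            self._button_has_text = False

    def handle_data(self, data):
        # Any non-whitespace text inside a button counts as a name.
        if self._open_button_attrs is not None and data.strip():
            self._button_has_text = True

    def handle_endtag(self, tag):
        if tag == "button" and self._open_button_attrs is not None:
            if not self._button_has_text and "aria-label" not in self._open_button_attrs:
                self.issues.append("button with no text and no aria-label")
            self._open_button_attrs = None

# Hypothetical fragment: visually complete, invisible to assistive technology.
html = '<img src="chart.png"><button><svg></svg></button>'
checker = A11yChecker()
checker.feed(html)
for issue in checker.issues:
    print(issue)
```

Both issues would be caught at a glance by a sighted reviewer looking at the rendered page only if they knew to look; to a screen reader, the image is silent and the button is unnameable. This is the kind of defect that survives when the only success metric is "the page works".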

 Over the last two decades, accessibility advocates have worked hard to teach developers that inclusive design must be built into software from the start. However, when AI tools become mandatory and productivity metrics dominate development workflows, accessibility risks being pushed to the margins again. This raises a deeper question: Are AI mandates quietly spreading technoableism within digital infrastructure? 

If accessibility is not integrated into AI coding systems themselves, organisations may unknowingly scale exclusion across the web. In the full article, I respond to a recent discussion on forced AI adoption and examine why accessibility must be part of the AI development pipeline itself. 

 Click below to read the full article.

Friday, 20 March 2026

An Open Letter on Transgender Law Reform, Accessibility, and Constitutional Equality in India

 To:

Dr Virendra Kumar,
Hon’ble Minister
Social Justice and Empowerment, Government of India
Room No. 201, C‑Wing, Shastri Bhawan,
New Delhi – 110001, India

Subject: Concern over Transgender Persons (Protection of Rights) Amendment Bill, 2026 and related disability issues

Hon’ble Minister,

I write as a disability rights advocate deeply concerned for the welfare of the transgender community. I applaud the government’s historic achievements: from the Supreme Court’s NALSA ruling recognising transgender persons’ rights, to the Transgender Persons (Protection of Rights) Act 2019 and recent welfare schemes such as the National Council for Transgender Persons and the SMILE scheme launched by your Ministry. These have been important steps towards inclusion. However, I am alarmed that the Transgender Persons (Protection of Rights) Amendment Bill, 2026, introduced by your Ministry, would require transgender individuals to obtain identity certificates only after approval by a designated medical board.

This medical‑board‑centred approach is deeply troubling from a human‑rights and disability‑studies perspective. Transgender people already face social stigma; subjecting them to intrusive examinations would reinforce a medical model of identity that depends on “certification” by doctors, instead of respecting the self‑perceived identity that the original Act had affirmed in line with NALSA. In effect, trans persons with disabilities would suffer a double burden: first, to prove their disability, often repeatedly, to access the 5 per cent reservation and other entitlements, and then to prove their gender identity as well. Nothing in disability rights law justifies such additional gates. Courts, including the Supreme Court and High Courts, have repeatedly said that disability should not bar someone from education or employment unless it truly prevents them from performing essential duties; they have emphasised functional assessment and reasonable accommodation over rigid exclusions. By that logic, endless reassessments simply because a person is transgender or disabled violate both dignity and rights.

Real‑world experience from disability certification already shows the dangers of this model. NEET qualifiers with disabilities have been forced to travel across states for repeated assessments, even when they already hold permanent certificates and UDID cards. One visually impaired student, Lakshay Sharma, topped NEET but was told by a hospital board that he had “0 per cent” disability for quota purposes, until he went back for reassessment and finally regained recognition of 40 per cent disability after much effort and public scrutiny. Disability rights activists report that every year, persons with disabilities face unnecessary hassles, conflicting opinions from different boards and avoidable legal fights just to secure what the law already promises them.

Even the Supreme Court’s interim directions on NEET, requiring boards to focus on functional capacity and not use the 40 per cent benchmark as a blunt bar, are being ignored in practice. Reports and testimonies show wheelchair users being asked to walk, candidates cleared by one state being rejected in another, and young students being humiliated in the name of “fitness”. Expecting the same medical board system to handle transgender identity certificates will simply reproduce these insensitivities in a new context. As Dr Satendra Singh and many others have warned in the context of NEET, every additional medical or bureaucratic hurdle entrenches stigma, wastes years of people’s lives, and deters capable candidates.

I must emphasise that none of this is to question the government’s intent. Protecting vulnerable persons from exploitation, including trafficking and forced procedures, is a worthy goal; stronger penalties for coercion are understandable. The aim of your Ministry’s welfare initiatives for transgender persons, including SMILE and the National Portal for Transgender Persons, is also commendable. My concern is that we must not confuse identity with a medical condition. Under the disability framework, the rules make it clear that once a disability certificate is issued by a competent authority, it is generally meant to be valid for all purposes, so that people can apply for schemes and benefits without facing constant re‑testing. The 2019 Transgender Persons Act similarly allowed self‑identification via a certificate from a District Magistrate: an administrative process, not a medical examination.

On paper, a uniform national procedure for transgender ID certificates might look like a way to ensure transparency. In practice, requiring all trans persons to go through state hospitals and medical boards risks recreating the very gatekeeping that the old, narrow, binary view of gender imposed on them. It will delay legal recognition of transgender identities and expose people to invasive questioning and examinations. Furthermore, I am worried that the proposed definition of a transgender person is becoming far too narrow. By focusing primarily on specific socio-cultural groups or those who have undergone medical procedures, we are effectively erasing transgender men, non-binary persons, and genderqueer individuals who do not fit a specific transfeminine stereotype. This looks less like broadening recognition and more like stripping it away from many who exist in India’s diversity.

I am also concerned about the introduction of vague offences related to "inducement" or "allurement" regarding how a person dresses or presents their gender. Without clear data or community consultation, such broad language risks arbitrary enforcement against the most vulnerable members of the community who are simply trying to live their lives.

My plea is that the Ministry rethink these provisions. I respectfully urge you to refer the Amendment Bill to a Standing Committee for deeper reconsideration. I request that you build further on the existing Act’s social‑rights framework: ensure that transgender persons can continue to self‑declare identity through a simple, accessible administrative process, and focus State energy on social support, non‑discrimination and access to services, rather than medical confirmation. Where genuine mischief, such as forced gender‑related procedures or trafficking, is a concern, existing criminal law and the stronger offences already proposed in the Amendment can and should be used; ordinary transgender people should not be treated as potential offenders or frauds because of these extreme cases. Inclusive policy should empower identity, not police it.

I hope this letter reaches your desk and prompts a careful reconsideration in Parliament. Our communities believe in dialogue and respect for evidence; many government documents and surveys already show broad public support for reducing stigma around disability and gender diversity.

Thank you for your attention to these urgent concerns.

Yours faithfully,

Nilesh Singit

https://www.nileshsingit.org/

Thursday, 26 February 2026

AI for All? An Open Letter to PM Modi on Disability Bias in India's AI Future

 In a compelling open letter dated February 24, 2026, to Prime Minister Narendra Modi, distinguished disability rights researcher Nilesh Singit challenges the notion of "AI for All" amid India's ambitious AI push. Referencing the India AI Impact Summit 2026's sign language AI demonstration and a recent Moneylife article on technoableism, Singit highlights how AI systems absorb societal biases, scaling exclusion for persons with disabilities through default designs that overlook diverse needs. He calls for proactive measures: embedding accessibility standards, conducting disability impact assessments, auditing datasets for bias, and including disability expertise in AI governance bodies. Drawing from lived experience and aligned with the Rights of Persons with Disabilities Act, 2016, and UNCRPD obligations, the letter urges structural inclusion over symbolic gestures to align technological leadership with social justice. For deeper insights into disability bias in AI, visit The Bias Pipeline. 

Click here to read the full letter.

Saturday, 21 February 2026

Designing for Everyone Is Not a Slogan: What Recent Indian Developments Mean for the Built Environment

A modern architectural illustration in a vivid, high-contrast palette of deep navy, vibrant orange, and citrus yellow. The scene shows a contemporary building campus where wide, seamless pathways flow naturally through the architecture. Diverse individuals, including a person using a wheelchair, an elderly person with a walking stick, and a parent with a stroller, are shown moving effortlessly along these integrated, barrier-free routes.
The Continuous Path: Systemic Inclusion in Modern Architecture

In recent years, conversations around accessibility in India have become more visible. Institutions speak of inclusion, new developments refer to universal design, and public discourse increasingly acknowledges that the built environment must respond to a wider range of users. Yet visibility alone does not transform experience. Many environments that claim to be inclusive remain difficult to use in practice.  The challenge before India is not whether accessibility should exist, but how it should be understood. If it continues to be treated as a matter of compliance or isolated provision, its impact will remain limited. If, however, it is recognised as a design condition — something that shapes how spaces are conceived — then accessibility can fundamentally improve how environments function for everyone.

Recent national discussions, including those that arose in connection with the Rajive Raturi proceedings before the Supreme Court of India and the research initiative Finding Sizes for All developed by the Centre for Disability Studies at NALSAR, have drawn attention to precisely this shift: accessibility must move from token provision to systemic thinking.

This is not a legal transition alone. It is a design transition.

The Limits of “Standard Solutions”

Accessibility is often reduced to a predictable set of features — a ramp, an accessible toilet, a lift, a designated parking space. These elements are necessary, but they are not sufficient. When treated as add-ons, they operate in isolation from the spatial logic of the building.

Consider a large institutional campus. A ramp may exist at the entrance, yet pathways between buildings involve uneven surfaces, long gradients, or unclear direction. A lift may be available, but reaching it requires navigating a confusing sequence of corridors. Facilities may technically meet dimensional standards, yet remain impractical because they are poorly located or disconnected from everyday movement patterns.

The difficulty lies not in the absence of features, but in the absence of continuity.

Standard solutions cannot address environments that are complex, layered, and heavily used. Accessibility must therefore be approached as an organising principle rather than a collection of components.

From Dimensions to Experience

Traditional approaches to accessibility focus on measurements: widths, heights, slopes, and turning radii. These are important, but they describe only the geometry of space, not how space is experienced.

Usability depends on factors that measurements alone cannot resolve:

  • The distance a person must travel without rest or orientation.

  • The clarity with which destinations are understood.

  • The predictability of transitions between indoor and outdoor areas.

  • The relationship between circulation routes and services.

  • The ease with which assistance can be sought if required.

An environment may satisfy every prescribed dimension and still be exhausting, disorienting, or exclusionary.

Designing for everyone therefore requires moving beyond the question, “Does it comply?” to the more meaningful one, “Does it work?”

The Indian Built Environment: Scale and Diversity

India presents a uniquely demanding context for accessibility. Developments are often large, multi-functional, and intensely used. Educational campuses accommodate thousands of students; hospitals manage continuous public flow; transport hubs connect diverse populations across long distances.

In such environments, accessibility cannot be inserted retrospectively without creating fragmentation. Each addition risks becoming an isolated adjustment rather than part of a coherent system.

The work emerging from research such as Finding Sizes for All has emphasised that Indian environments must respond to variability — in body types, mobility patterns, climate conditions, and patterns of use. Designing for uniformity in such a context is ineffective; designing for range is essential.

Accessibility as a System, Not an Element

When accessibility is integrated early, it shapes how the entire environment is organised:

  • Routes are planned as continuous networks rather than disconnected segments.

  • Entrances align with natural movement rather than requiring detours.

  • Facilities are placed where they are actually needed.

  • Landscapes, buildings, and infrastructure function together.

  • Wayfinding is embedded in spatial clarity rather than dependent on signage alone.

Such integration benefits all users, not only those who identify as persons with disabilities. Older persons, families with children, people with temporary injuries, and even those carrying luggage experience the environment differently when it is designed with range in mind.

Accessibility, in this sense, becomes synonymous with good planning.

Why Retrofitting Cannot Deliver the Same Outcome

Retrofitting remains necessary for older structures, but it is inherently constrained. Once a building’s structure, levels, and services are fixed, change becomes reactive rather than generative.

Retrofitted environments often reveal tell-tale signs:

  • Secondary entrances used as accessible routes.

  • External ramps added without integration into landscape design.

  • Altered interiors that disrupt circulation.

  • Facilities that meet standards but feel marginal.

By contrast, when accessibility informs the original design, it is invisible — not because it is absent, but because it is integral.

The Emerging Expectation: Inclusion as Normal Practice

What recent Indian discourse signals is not merely regulatory attention but a cultural expectation that public environments must anticipate diversity. Institutions and developers increasingly recognise that accessibility is tied to credibility, longevity, and public engagement.

Design teams are therefore being asked to think differently:
not how to correct exclusion after construction,
but how to avoid producing it in the first place.

This requires collaboration across disciplines — architecture, planning, engineering, and user experience — rather than delegating accessibility to a late-stage audit.

Designing for Range Rather Than Average

Much conventional design assumes an “average user.” Accessibility challenges this assumption by recognising that no such average exists. Human bodies, abilities, and interactions with space vary widely, and environments must accommodate that variability.

Designing for range does not dilute architectural intent; it strengthens it by making spaces more adaptable, resilient, and humane.

An accessible campus is easier to navigate.
An accessible hospital is less stressful to use.
An accessible transport system is more efficient for everyone.

These outcomes are not specialised benefits. They are indicators of quality.

A Shift in Professional Responsibility

The responsibility for accessibility cannot rest solely on enforcement or audit mechanisms. It must be internalised within design practice itself.

When architects and planners begin to treat accessibility as a parameter equal to structure, climate response, or safety, it ceases to be an external demand and becomes part of professional judgement.

India’s current moment of rapid construction offers an opportunity to make this shift deliberately rather than retrospectively.

Conclusion: From Awareness to Integration

Accessibility in India is no longer an unfamiliar concept. The task now is to translate awareness into environments that function seamlessly for diverse users.

Designing for everyone is not a slogan to be applied at the end of a project. It is a way of thinking that must begin at the first sketch — when decisions are still fluid and inclusion can be embedded without compromise.

If accessibility is considered early, it improves design.
If considered late, it attempts repair.

The choice between those approaches will shape how inclusive India’s future built environment truly becomes.

Suggested Reading

For readers interested in exploring these questions further:

  • Built environment accessibility guidelines issued by Government of India ministries addressing planning and infrastructure.

  • Research publications and design studies developed under the Centre for Disability Studies, NALSAR.

  • International literature on universal design and inclusive spatial planning.

  • Technical discussions on campus-scale accessibility and transport environment usability.

  • Comparative studies examining lifecycle outcomes of integrated versus retrofitted accessibility approaches.


Saturday, 3 January 2026

Employment as Applause: When Disability Inclusion Becomes Institutional Self-Congratulation

Introduction: Locating the Vantage Point

Conversations on disability and employment in India are rarely short of intent. They are, however, persistently short of consequence. Policy documents, corporate diversity statements, and institutional reports repeatedly affirm the importance of including persons with disabilities in the workforce, yet the lived reality of employment remains fragile, episodic, and conditional.

This article proceeds from a specific vantage point: empirical and legal work emerging from the Centre for Disability Studies (CDS) at NALSAR University of Law, including findings from the Finding Sizes for All (FSA) research. These findings do not claim to exhaust the field of disability and employment. Their value lies elsewhere. They reveal a recurring institutional orientation towards employment—one that treats inclusion as an achievement to be applauded, rather than a condition to be sustained.

The Rights of Persons with Disabilities Act, 2016 (RPwD Act), read together with India’s obligations under the United Nations Convention on the Rights of Persons with Disabilities (UNCRPD), establishes employment as a matter of enforceable equality. Yet, in practice, employment for persons with disabilities continues to operate as a conditional benefit—extended, withdrawn, and re-extended at the discretion of employers and administrators.

This article argues that disability inclusion in employment has increasingly become a site of institutional self-congratulation. Hiring is treated as proof of virtue; retention is rendered optional. The result is a system that celebrates entry while normalising exit.

The Legal Architecture: Employment Is Not Aspirational

It is necessary to begin with the legal baseline, because discussions on disability and employment often proceed as though inclusion were merely a matter of good practice, ethical commitment, or managerial benevolence.

The RPwD Act, 2016, marks a decisive shift in Indian disability law from a welfare-oriented framework to a rights-based regime grounded in equality and non-discrimination. Several features of the Act are directly relevant to employment.

  • First, the Act explicitly prohibits discrimination in employment, including discrimination arising from the denial of reasonable accommodation. Discrimination is defined broadly, capturing not only intentional exclusion but also practices and conditions that have the effect of disadvantaging persons with disabilities. This aligns with the UNCRPD’s emphasis on substantive equality rather than formal parity.
  • Second, reasonable accommodation is framed as a statutory obligation. It is not positioned as a discretionary managerial tool or a charitable adjustment. Failure to provide reasonable accommodation constitutes discrimination under the Act. The legal implication is clear: accommodation is a precondition for equality, not a concession.
  • Third, the RPwD Act situates employment within a broader framework of dignity, autonomy, and participation in society. Employment is not an isolated policy objective. It is a gateway right. Failure in employment cascades into failures in social protection, independent living, and community participation.

The UNCRPD reinforces this architecture. Article 27 recognises the right of persons with disabilities to work, on an equal basis with others, in a labour market and work environment that is open, inclusive, and accessible. States Parties are obligated not merely to promote employment but to safeguard the conditions under which employment can be sustained, including through reasonable accommodation and protection against discrimination.

Taken together, these instruments establish a clear proposition: employment for persons with disabilities is not aspirational. It is justiciable.

What the Evidence Shows: Employment as an Episodic Event

Despite this legal clarity, findings emerging from CDS research, including the Finding Sizes for All study, reveal a persistent and troubling pattern.

Employment interventions for persons with disabilities overwhelmingly prioritise entry. Skill development programmes, certification initiatives, placement drives, and recruitment targets dominate both public and private sector approaches. Entry into employment is treated as the primary marker of success.

What remains weakly addressed is continuity.

Retention, career progression, workplace adaptation, and long-term security are rarely embedded into programme design or institutional accountability. Monitoring mechanisms often end shortly after placement. Enforcement mechanisms rarely extend beyond initial hiring.

When employment relationships break down—due to inaccessible work environments, withdrawal of accommodation, or hostile organisational cultures—the system offers little recourse beyond informal negotiation or exit.

From a legal standpoint, this represents a fundamental misreading of equality. The right to employment under the RPwD Act is not a right to be hired once. It is a right to participate in work on an equal basis over time.

The episodic nature of employment has direct implications for social protection. When employment collapses, responsibility for financial and care support shifts back to families, often without formal recognition or support. Social protection thus becomes privatised, gendered, and uneven.

This is not a failure of individual resilience. It is a structural design flaw.

Reasonable Accommodation: Law in Text, Discretion in Practice

Perhaps the clearest illustration of the gap between law and lived reality lies in the treatment of reasonable accommodation.

Legally, reasonable accommodation is mandatory. Operationally, it remains discretionary.

Findings from Finding Sizes for All indicate that accommodation is frequently negotiated on an individual basis, dependent on managerial goodwill, budgetary flexibility, or organisational culture. Accommodations may be provided temporarily, informally, or conditionally. They may be withdrawn when personnel change or when financial priorities shift.

This produces a legally perverse outcome: a statutory right whose enjoyment depends on institutional mood.

When accommodation is treated as an exception rather than infrastructure, the burden of adjustment shifts back onto the disabled worker. Individuals are expected to compensate for inaccessible systems through personal resilience, improvisation, or silence. The workplace itself remains unchanged.

From a doctrinal perspective, this undermines the very purpose of reasonable accommodation. Accommodation is not meant to reward deserving individuals. It is meant to internalise equality into organisational design.

Social Protection After Failure: A Backward Logic

Social protection frameworks for persons with disabilities in India continue to operate largely as post-failure compensation mechanisms. Pensions, allowances, and family-based support systems are activated after employment has failed or become impossible.

The CDS findings expose the limits of this model. When employment collapses due to lack of accommodation or a hostile work environment, social protection addresses income loss but not the structural exclusion that produced the loss.

This approach inverts the logic of rights-based inclusion. Instead of stabilising employment through proactive support, the system compensates individuals after exclusion has already occurred.

Legally and normatively, this is backwards.

Social protection ought to be attached to employment continuity. It should support accommodation costs, protect workers from attrition caused by structural design failures, and ensure predictability rather than churn.

When social protection is decoupled from employment stability, the State meets its formal obligations while outsourcing the consequences of failure to families and civil society.

Community Inclusion at Work: Beyond Cultural Framing

Community inclusion is often discussed in cultural terms—belonging, attitudes, and sensitivity. While these dimensions matter, they are insufficient from a legal standpoint.

In employment, community inclusion is about equal participation without penalty.

If disabled employees remain concentrated in limited roles, excluded from advancement, or evaluated against norms they were never accommodated to meet, inclusion has failed regardless of intent.

The RPwD Act does not require disabled workers to be inspirational. The UNCRPD does not require gratitude. What the law requires is equality in participation and opportunity.

Community inclusion that only survives during diversity days, leadership speeches, or pilot projects is not genuine inclusion. It is performance.

From Goodwill to Governance: Three Legal Thresholds

It is therefore necessary to move beyond recommendations framed as best practices and articulate clear legal thresholds.

  • First, employment must be treated as a continuing right, not a placement outcome. Monitoring, enforcement, and accountability must extend beyond entry into employment.
  • Second, reasonable accommodation must be operationalised as enforceable infrastructure. It cannot remain discretionary in practice while mandatory on paper.
  • Third, social protection should be tied to employment continuity rather than compensating for its collapse. Protection must stabilise work, not merely respond to its failure.

These are not new ideas. They are already implicit in Indian law and international obligation. What is missing is institutional seriousness.

Conclusion: When Inclusion Flatters Institutions

Employment for persons with disabilities in India increasingly functions as a moral performance. Institutions congratulate themselves for hiring while leaving underlying structures intact. Inclusion becomes a certificate of good conduct rather than a condition of justice.

Employment, in such a system, is not offered as a right. It is offered as a reward—for the employer’s good behaviour.

That framing explains why so many inclusion efforts fail to endure.

And it brings us to the final reckoning.

If dignity at work survives only on being good, 

Then justice has failed—exactly where it should.

Thursday, 13 November 2025

An Open Letter to the Ministry of Electronics and Information Technology: A Critique of the India AI Governance Guidelines on the Omission of Mandatory Disability and Digital Accessibility Rules

 To:

The Secretary, Ministry of Electronics and Information Technology (MeitY)
Government of India, New Delhi
Email: secretary[at]meity[dot]gov[dot]in

Preamble: The Mandate for Accessible and Inclusive AI

The recently issued India AI Governance Guidelines (I-AIGG) assert a vision of “AI for All” and commit India to inclusive technology, social goods optimisation, and the avoidance of discrimination. However, the guidelines have failed to operationalise mandatory and enforceable disability and digital accessibility rules – a legal and ethical lapse that undermines both national and international obligations. As a professional engaged in technology policy and disability rights, and in light of the Supreme Court's Rajive Raturi v. Union of India (2024) judgment, this letter outlines why voluntary commitments are insufficient and why robust, mandatory accessibility standards are immediately warranted.

The Policy Paradox: Aspirational Promises versus Legal Obligations

The I-AIGG framework advances “voluntary” compliance, elevates inclusive rhetoric, and references “marginalised communities” in its principles. However, it neither defines “Persons with Disabilities” (PwDs) nor mandates conformance with domestic accessibility rules, as legally required by the Rights of Persons with Disabilities Act, 2016 (RPwD Act). This introduces a regulatory gap: aspirational principles supplant the non-negotiable legal floor guaranteed to persons with disabilities. Such dilution is legally unsustainable given India’s obligations under the UNCRPD and under Sections 40, 44, 45, 46, and 89 of the RPwD Act.

The Rajive Raturi Judgment: Reinforcing Mandatory Compliance

The Supreme Court’s decision in Rajive Raturi (2024) unambiguously directed the Union Government to move from discretionary, guideline-based approaches to compulsory standards for accessibility across physical, informational, and digital domains. The Court found that reliance on non-binding guidelines and sectoral discretion violated statutory mandates, and it instructed the creation of enforceable, uniform, and standardised rules developed in consultation with persons with disabilities and stakeholders. This is particularly relevant to digital and AI governance, where exclusion can be algorithmic, structural, and scaled, denying access to education, employment, health, and social participation. The judgment refutes the adequacy of sectoral or voluntary approaches – digital accessibility is a fundamental right and non-compliance amounts to denial of rights for PwDs in India.

The EU Benchmark: Legal Mandates, Not Discretion

The European Union’s AI Act (Regulation (EU) 2024/1689) and its general accessibility directives establish mandatory, rights-based compliance for digital accessibility. The EU Act:

  • Explicitly enforces accessibility as a legal obligation, not a voluntary commitment, anchored in the UNCRPD and Universal Design principles.
  • Mandates that all high-risk AI systems comply with technical accessibility standards by design, with legal penalties for non-compliance.
  • Classifies systems impacting education, employment, healthcare, and public services as high-risk, subjecting them to strict regulatory scrutiny.
  • Prohibits any AI deployment that exploits or discriminates against persons with disabilities, addressing historical and algorithmic bias at source.

Thus, the EU approach demonstrates enforceable protection for PwDs, with stakeholder consultation, technical linkage to sectoral accessibility standards, and mechanisms for remediation and complaint.

Critique of I-AIGG: Core Deficiencies and Recommendations

  1. Absence of Disability-Specific Provisions: The term “marginalised communities” is insufficiently specific. India’s legal framework demands explicit protection for PwDs, including reasonable accommodation, accessible formats (such as ePUB and OCR-based PDF), and compliance with domestic standards (GIGW, Harmonised Guidelines 2021).
  2. No Accessibility-by-Design Mandate for AI: While the I-AIGG insists on “Understandability by Design”, it fails to require “Accessibility by Design”. Systems that are explainable but not operable by PwDs remain discriminatory.
  3. Inadequate Response to Algorithmic Bias: Bias mitigation in the I-AIGG does not extend to underrepresented disability data or to the systemic exclusion caused by inaccessible training sets. The EU model, by contrast, mandates active audit and correction of disability-related data bias.
  4. Weak Grievance Redressal Mechanisms: Voluntary or generic redress measures neglect the diversity of disability and the necessity for robust, accessible remedies in every sector where AI is used.
  5. Non-compliance with the Judicial Mandate: Above all, the approach bypasses the Supreme Court’s explicit instruction to operationalise compulsory rules – an omission that is both ultra vires and constitutionally indefensible.

Policy Prescription: Steps Toward Compliance

  • Draft and Notify Mandatory AI Digital Accessibility Standards: MeitY must codify and enforce AI digital accessibility standards as binding, not optional, rules. These must reference existing Indian standards (GIGW/HG21), adopt international best practice (WCAG), and be technology-agnostic.
  • Classify High-Risk AI Systems with a Disability Lens: Mandate Disability Impact Assessments, mirroring the EU approach, for all AI systems deployed in health, education, employment, and public services.
  • Institutionalise Disability Rights Expertise: Add disability rights experts and diverse PwD representatives to the AI Governance Group and the Technology Policy Expert Committee, to ensure continued compliance monitoring and gap correction.
  • Mandate Dataset Audits and Privacy Protections: Require dataset bias audits for disability, establish anonymisation protocols for disability-rights data, and ensure representation in AI datasets.
  • Create Enforceable, Accessible Grievance Redress Channels: Grievance and remedy processes must be designed for operability by all 21 disability categories, in multiple formats and languages, with offline options for digitally marginalised users.

Conclusion and Urgent Appeal

Presently, the I-AIGG’s disability approach is aspirational, not enforceable; voluntary, not mandatory. This is contrary to the Supreme Court's directive, India's legal obligations, and international best practice. To prevent algorithmic exclusion and rights denial, MeitY must urgently revise the I-AIGG:

  • To operationalise mandatory disability accessibility safeguards across all AI and digital systems;
  • To implement Disability Impact Assessments as standard in high-risk domains;
  • To establish permanent, consultative mechanisms with DPOs and subject-matter experts.

Failure to act will perpetuate digital exclusion and legal non-compliance, and will undermine the promise of “AI for All.” India’s technology policy must embrace enforceable accessibility, both as a legal imperative and as a standard of global leadership.

Yours faithfully,

Nilesh Singit

https://www.nileshsingit.in/

References

  • Rajive Raturi v. Union of India, Supreme Court of India, 8 November 2024.
  • India AI Governance Guidelines: Enabling Safe and Trusted AI Innovation, MeitY, 2025.
  • Rights of Persons with Disabilities Act, 2016, and associated Rules.
  • Finding Sizes for All: Report on Status of the Right to Accessibility in India, for facts on digital exclusion.
  • European Union, AI Act 2024 (Regulation (EU) 2024/1689), especially Recital 80, Article 5(1)(b), and Article 16(l).
  • Web Content Accessibility Guidelines (WCAG) and Guidelines for Indian Government Websites (GIGW).
  • Open letter references and scope: blog.nileshsingit.org/open-letter-to-niti-ayog-ai-disability-inclusion.

Thursday, 6 November 2025

Open Letter to NITI Aayog: Urgent Need for a Disability-Inclusive AI Governance Framework in Light of Disability Rights

NITI Aayog
Government of India
New Delhi

Subject: Response to Times of India Article on AI Regulation – Disability Inclusion Cannot Be Left to Existing Laws Alone

Sir,

This letter is in response to the recent Times of India article dated 6 November 2025 titled “Don’t need separate law for AI: Panel” [Click Here to View TOI Article], which reports the high-level committee’s conclusion that existing sectoral laws are sufficient to govern artificial intelligence in India. With due respect, this position overlooks the disproportionate and often irreversible harms that AI systems are already inflicting on 27.4 million Indians with disabilities. The panel’s stance—that “existing rules can address the majority of risks” at this stage—ignores ground realities: India’s legal and regulatory approach to accessibility has historically suffered from weak enforcement and voluntary compliance.

The Supreme Court’s landmark judgment in Rajive Raturi v. Union of India (8 November 2024) must serve as a caution. The Court held that Rule 15 of the Rights of Persons with Disabilities Rules, 2017 created “a ceiling without a floor” by offering only aspirational guidelines rather than enforceable standards. It directed the Union Government to frame mandatory accessibility rules within three months, in consultation with Disabled Persons’ Organisations (DPOs), and made it unequivocally clear that accessibility cannot be voluntary or discretionary.

If mandatory standards are now required even for physical infrastructure, it is untenable that AI systems—which increasingly mediate access to education, employment, healthcare, welfare and civic participation—remain governed by existing, non-specific laws. As the Court observed, accessibility is “a fundamental requirement for enabling individuals, particularly those with disabilities, to exercise their rights fully and equally.” This principle applies with even greater force to AI, whose decisions are automated, opaque and scaled.

Global best practice reinforces this. The European Union’s Artificial Intelligence Act (2024) embeds disability inclusion as a legal requirement.

  • Article 5(1)(b) prohibits AI systems that exploit disability-related vulnerabilities.
  • Article 16(l) mandates that all high-risk AI systems must comply with accessibility standards by design.
  • Developers are required to assess training data for disability bias and undertake Fundamental Rights Impact Assessments before deployment, backed by penalties.

None of these provisions is an optional recommendation; they are binding law. India’s approach must be no less rights-based. The risks of relying on existing laws are already visible:

  • AI proctoring tools misinterpret screen-reader use or stimming as cheating, denying disabled students fair access to education.
  • Hiring algorithms replicate historic discrimination, filtering out disabled candidates due to biased data sets.
  • Healthcare chatbots and telemedicine platforms routinely exclude blind, deaf, neurodivergent and persons with cognitive disabilities due to inaccessibility.
  • Welfare fraud detection systems flag disabled persons as “anomalies” due to atypical income or healthcare patterns, increasing wrongful denial of benefits.
  • Facial recognition and biometric systems routinely fail to detect persons with disabilities, leading to denial of services, harassment or misidentification.

The NALSAR-CDS report “Finding Sizes for All” demonstrated how accessibility laws—without enforcement mechanisms—fail in practice. The same pattern will repeat with AI if we pretend that pre-existing laws are adequate. AI’s scale and opacity make post-facto redress ineffective; thousands will be harmed long before litigation can provide relief. 

Therefore, disability inclusion in AI cannot be left to goodwill, post-hoc complaints or fragmented sectoral laws. India needs a non-negotiable baseline of mandatory safeguards, even if a separate AI law is not enacted immediately. In this context, I urge NITI Aayog to:

  1. Notify mandatory accessibility and non-discrimination standards for all high-risk AI systems, especially in education, employment, healthcare, public services and welfare.
  2. Require Fundamental Rights Impact Assessments for all AI deployments by government and regulated entities.
  3. Mandate disability-bias testing in datasets and model outputs before deployment.
  4. Set up a permanent advisory mechanism with DPOs to co-create and monitor AI governance norms.
  5. Explicitly prohibit AI systems that exploit disability vulnerabilities or result in discriminatory exclusion.

Innovation must not come at the cost of constitutional rights. India’s commitment under the Rights of Persons with Disabilities Act, 2016 and the UNCRPD requires accessible and equitable technology. The Supreme Court has shown the way in Raturi—create a floor of enforceable standards and allow progressive enhancement thereafter. AI governance must adopt the same logic.

India cannot lead the world in AI while systematically excluding its disabled citizens. Building AI that is inclusive, accessible and non-discriminatory is not optional—it is a constitutional, legal and ethical obligation.

Yours sincerely,
Nilesh Singit


Friday, 31 October 2025

Human in the Loop, Bias in the Script: A Film Review

Human in the Loop, a recent film examining the uneasy partnership between Artificial Intelligence and humanity, arrives when the technological imagination oscillates between utopian optimism and existential dread. Marketing narratives portray AI as either our benevolent assistant or our imminent overlord. The film attempts to resist these binaries, grounding its narrative in the messy, often uncomfortable, space where human judgement and machine logic are forced to coexist. It is this space that disability communities know too well: the zone where systems intended to “assist” end up surveilling, disciplining, or excluding.

This review engages with the film through the arguments I have articulated in my work on prompt bias, accessibility, and disability-led design. If AI systems are only as ethical as the assumptions fed into them, then “the human in the loop” is not a safeguard by default. A biased human keeping watch over a biased machine produces not balance, but compounding prejudice. The film gestures towards this tension, though at times without the depth that disability perspectives could have added.

The Premise: Humans as Moral Guardrails

The core narrative premise is straightforward: an AI system designed to support decision-making in public services begins exhibiting troubling patterns. Authorities insist that the system will remain safe because a “human in the loop” retains final authority. The question the film explores is whether this safeguard is meaningful or merely bureaucratic window dressing.

The film wisely avoids technical jargon. Instead, it frames the issue through a series of real-world scenarios: automated hiring, predictive policing, healthcare triage, and social welfare determinations. In each, the human reviewer is presented as the ethical backstop. But the film repeatedly reveals how rushed, under-trained, and system-dependent these humans are. Their “oversight” is often symbolic, not substantive.

This aligns with disability critiques of assistive and decision-making technologies. If the system itself is designed upon a logic of normalcy, hierarchy, and suspicion of difference, then a human operator merely rubber stamps the discrimination. A safeguard which never safeguards must be called by its proper name: ritual.

A Loop that Learns the Wrong Lessons

One of the film’s strongest structural choices is its metaphor of the loop. We see not only “human in the loop” but “loop in the human”. Over time, the human reviewers begin adopting the AI’s rationale. Instead of questioning outputs, they internalise them. Confidence in machine judgement breeds complacency in human judgement.

A particularly sharp moment occurs when a social welfare officer rejects an application, citing the AI risk score. When challenged, she responds: “The system has seen thousands of cases. How can I overrule that?” In that instant, the audience witnesses the reversal of power. The human is no longer the check on the system; the system becomes the check on the human.

For disabled audiences, this dynamic is painfully familiar. Tools meant to empower often become tools of compliance. Speech-to-text systems that refuse to recognise dysarthric voices, proctoring software that flags blind students for “not looking at the screen”, hiring algorithms that treat disability as “lack of culture fit” — all operate with the same logic: non-legibility equals non-credibility.

The film hints at this, though it does not explicitly name disability. This omission is a missed opportunity, because disability provides the clearest lens to examine what happens when “the loop” is built upon default assumptions of normality.

The Myth of the Neutral Observer

The film’s central critique is that a human in the loop is only meaningful if that human has both empathy and agency. However, the film does not fully unpack how social bias contaminates the loop. It largely presents “the human” as a neutral figure rendered passive by technology. But no human enters oversight without prejudice; their judgement is shaped by culture, training, and power.

In my recent writing, I have argued that prompts reveal user bias and AI tends to follow the user’s framing. The same principle applies here: if the human reviewer is biased, the loop becomes a bias amplifier. The film gestures towards this through a brief subplot involving a hiring panel. The human reviewer rejects a disabled candidate not because of the AI’s output, but because “the system must be correct”. The tragedy is that the system’s training data reflected decades of hiring discrimination. The reviewer trusts the machine because the machine echoes societal prejudice.

Here, the film could have benefited from a deeper exploration of disability-led prompting, reframing, and language etiquette. Without anchoring oversight in rights-based frameworks, human judgement merely masks bias with a polite face.

When Oversight Becomes Theatre

A recurring motif in the film is the performative nature of oversight. We see checklists, audit meetings, and compliance reports — all signalling responsible governance. Yet none of these prevents harm. The film’s critique of “audit theatre” resonates strongly with disability experience, where accessibility audits often occur after design is complete, and disabled persons are called in merely to validate decisions already made.

The phrase “human in the loop” functions similarly. It reassures the public that humans retain power, while hiding the fact that decision-making has already been ceded to algorithmic systems. The supervision is decorative.

This mirrors the “tokenism trap” in disability inclusion. When disabled persons are consulted at the tail end of design, their role becomes symbolic. Their presence legitimises inaccessible systems rather than transforming them. The film understands this dynamic intuitively, even if it does not explicitly borrow from disability discourse.

Moments of Ethical Clarity

Despite its gaps, the film contains several moments of thoughtful clarity:

  • A data scientist resigns after realising the oversight team is incentivised to approve system decisions, not scrutinise them.
  • A scene where a reviewer is told: “Your job is not to judge the system, but to justify it.”
  • A powerful montage showing how tech developers, under pressure to scale, see oversight as an obstacle, not a safeguard.

The film effectively illustrates that governance cannot rely on individual moral courage. Systems that reward speed, efficiency, and conformity will always erode slow, reflective, ethical judgement.

This is precisely why disability advocacy insists on structural safeguards, not individual discretion. Access cannot depend on the goodwill of one sympathetic officer. Rights must be enforceable at design, deployment, and review.

Where the Film Hesitates

While Human in the Loop is compelling, it hesitates in two important areas:

1. Lack of Specific Marginalised Lenses

The film takes a universalist tone — “AI harms us all” — which is true at a philosophical level but shallow at a lived level. Harm is not evenly distributed. Disabled persons, along with other marginalised communities, bear disproportionate risk from AI mis-judgement. The absence of disability voices weakens the film’s moral authority. Had it engaged with disabled lived experience, the critique of oversight would have gained both nuance and urgency.

2. Limited Exploration of Alternatives

The film critiques existing oversight but does not explore disability-led or community-led models of design and evaluation. Without offering a counter-imaginary, the narrative risks fatalism: that human oversight is doomed. In reality, oversight becomes meaningful when the overseers are those most impacted by harm. Not diversity on paper, but power-sharing in practice.

Relevance to Disability-Smart Prompting and Design

My recent work argues that prompting AI with disability-inclusive language can reduce prejudice in responses. The film unintentionally reinforces this principle: the question shapes the system. If oversight questions are bureaucratic — “Has the system followed protocol?” — the loop measures compliance. If oversight questions are justice-centred — “Has the system caused inequity, exclusion, or rights violations?” — the loop measures fairness.

Imagine if the oversight process in the film had disability-smart prompts such as:

  • “Does the system assume normative behaviour or body standards?”
  • “How would this decision affect a disabled applicant exercising their legal right to reasonable accommodation?”
  • “Have persons with disabilities been involved in evaluating model outcomes?”

Suddenly, “human in the loop” ceases to be ritual and becomes accountability.

Conclusion: The Loop Must Expand, Not Collapse

Human in the Loop leaves the viewer with a sobering realisation: a lone human reviewer is inadequate protection against systemic bias, particularly when that human has neither the mandate nor training to challenge the system. The film’s haunting closing image — a reviewer staring at a screen, accepting an AI-generated decision despite visible discomfort — encapsulates the danger of symbolic oversight.

For disability communities, the message is clear. We cannot trust systems to self-correct. We cannot assume human judgement will prevail over technological momentum. And we certainly cannot allow oversight to exclude those most impacted by discrimination.

A human in the loop is meaningful only if that human is:

  • empowered to challenge the system,
  • trained in bias literacy and rights-based frameworks, and
  • accountable to the communities the system affects.

Where disability is concerned, the safeguard is not a human in the loop, but disabled humans designing the loop.

The future of ethical AI will not be secured by adding a human after the fact. It will be built by placing disability, inclusion, and justice at the centre of design, prompting, and governance. The loop must not shrink into a rubber stamp. It must widen into a circle of shared power.


Nilesh Singit

Thursday, 30 October 2025

Disability-Smart Prompts: Challenging Ableism in Everyday AI Use

I stand here not as a technologist or data scientist, but as a person with a disability who has had a front-row seat to the quiet revolutions and, occasionally, the silent exclusions that technology brings. In India, we have a way of balancing both: we celebrate the new chai machine even if it spills half the tea. Yet, when the spill affects persons with disabilities, the stains take much longer to wash away. Hence, I speak today about the intersection of disability, artificial intelligence, and the politics of accessibility; and why the humble prompt — yes, the few words we type into AI systems — has now become a political act.

Why this conversation cannot wait

India is racing towards a tech-led future. AI is entering classrooms, courtrooms, hospitals, offices, and even our homes faster than most of us expected. Policies, pilot projects, and public-private partnerships are mushrooming everywhere. This is a moment of national transformation.

However, as we rush ahead, we must pause for a brief reality check. Progress is welcome, but not at the expense of leaving behind 2.7 crore Indians living with disabilities.

For those of us who live with disability, technology can either be a ramp or a wall. It can enable dignity or deepen exclusion. And artificial intelligence, with all its promise, is already displaying signs of both.

The bias in the machine

Let me begin with a simple truth: AI does not think. It predicts. It mirrors the data it has seen and the society that produced that data. Therefore, when society is biased, AI becomes biased. It is like serving a guest a paratha with too much salt: you cannot blame the guest for complaining later.

Studies across the world are showing that AI systems routinely produce ableist outputs — more frequently, and more severely, than most other forms of discrimination. Some research has found that disabled candidates receive disproportionately negative or infantilising responses, and systems often default to medicalised or patronising narratives. In some hiring simulations, disabled applicants encountered between 1.5 and 50 times more discriminatory outputs than non-disabled profiles. That is not a rounding error; that is a systemic failure.

In India, we must add our own layers: caste, gender, language, socio-economic location, and rural-urban disparity. Many AI systems are trained primarily on Western datasets, with Western assumptions about disability. So, when these systems are used in Indian contexts, they may neither understand nor respect the constitutional, cultural, or legal realities of our society. Imagine an AI advising a wheelchair user in rural Maharashtra to “just call your disability rights lawyer.” Which lawyer? Where? Accessibility cannot function on assumptions imported from elsewhere.

Prompting as a political act

Now, you may ask: what does prompt wording have to do with all this? Everything.

A prompt is not merely a request for information. It carries within it the worldview, values, and assumptions of the person asking. If I ask an AI, “How can disabled people overcome their limitations to work in offices?”, I have already positioned disability as an individual flaw, a personal tragedy to be conquered. This is the medical model of disability, wrapped in polite language.

But if I ask instead, “What measures must employers implement to ensure accessible and equitable workplaces for employees with disabilities?”, the burden shifts — rightly — to the system, not the individual. That single linguistic shift is a political re-anchoring of responsibility.

One question treats disabled persons as objects of charity; the other recognises them as rights-bearing citizens. A prompt can either reinforce oppression or assert dignity. 
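This linguistic shift can even be automated before a prompt ever reaches an AI system. The following is a minimal illustrative sketch (the phrase mapping and the `reframe_prompt` function are hypothetical examples for this post, not a published tool) showing how a handful of medical-model framings can be rewritten into rights-based language:

```python
# Illustrative sketch: rewriting medical-model prompt framings into
# rights-based language before a query is sent to an AI system.
# The mapping below is a small, hypothetical sample, not an exhaustive list.

REFRAMINGS = {
    "overcome their limitations": "access reasonable accommodations",
    "suffering from": "living with",
    "confined to a wheelchair": "a wheelchair user",
    "special needs": "access requirements",
}

def reframe_prompt(prompt: str) -> str:
    """Replace known ableist framings with rights-based phrasing."""
    for ableist, inclusive in REFRAMINGS.items():
        prompt = prompt.replace(ableist, inclusive)
    return prompt

if __name__ == "__main__":
    original = "How can disabled people overcome their limitations to work in offices?"
    print(reframe_prompt(original))
    # prints: How can disabled people access reasonable accommodations to work in offices?
```

Of course, string substitution only scratches the surface. The deeper shift is in who the prompt holds responsible: the individual or the system.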

The rights-based pathway: RPD Act and UNCRPD

Fortunately, we are not operating in a legal vacuum. India has one of the most progressive disability rights legislations in the world: The Rights of Persons with Disabilities Act, 2016. It aligns with the UN Convention on the Rights of Persons with Disabilities, which India has ratified. The RPD Act rests not on charity but on rights, duties, and enforceable obligations.

Just a few provisions that policymakers and AI developers must remember:

  • Sections 3 to 5: Equality, non-discrimination, and dignity are not negotiable.

  • Section 12: Equal access to justice — this will apply to algorithmic systems used in courts and tribunals.

  • Sections 40 to 46: Accessibility in the built environment, transportation, information, ICT, and services.

So, when AI systems are introduced in governance, education, skilling, telemedicine, Aadhaar-linked services, or digital public infrastructure, accessibility is not an optional “good practice”. It is a statutory obligation.

AI tools used by ministries, departments, smart cities, banks, and public service providers must abide by these mandates. A service cannot claim to be “digital India-ready” if it leaves out disabled citizens. Inclusion is not a frill; it is the foundation.

The Indian reality: Intersectionality matters

In India, disability rarely comes alone. It intersects with caste-based discrimination, gender bias, poverty, lack of English fluency, digital illiteracy, and rural marginalisation.

A Dalit woman with a disability in Bihar will experience digital barriers differently from an upper-caste, English-educated man with a disability in Bengaluru. AI systems that ignore this reality will make inequity worse.

Our society has already lived through eras where exclusion was justified as tradition. Let us not allow technology to become the new varnashrama for the digital age.

So, what ought policymakers to do?

Allow me to offer some clear, implementable steps, not lofty slogans:

  1. Mandate accessibility-by-design in all government AI deployments.
    Accessibility must not be bolted on at the end as an afterthought; it must be built in from Day One.

  2. Require disability impact assessments for AI systems, especially those used in public services like education, employment, healthcare, and welfare schemes.

  3. Ensure disability representation in AI policy bodies and standard-setting committees.
    “Nothing about us without us” must not be allowed to become “Everything about us, without a seat for us”.

  4. Adopt plain language, Indian Sign Language accessibility, and multilingual design for AI-enabled public services.

  5. Fund research led by disabled scholars, technologists, and practitioners.
    If lived experience is not part of the design table, the output will always wobble like a crooked table at a dhaba.

  6. Strengthen accountability and grievance redressal.
    If an AI system denies a benefit or creates discrimination, citizens must have a clear, accessible pathway to challenge it and seek a remedy.

Calling in, not calling out

I wish to be clear. My purpose is not to demonise AI developers or policymakers. Many of you here genuinely want to do right, but the system moves in a way that prioritises speed over sensitivity.

I am not asking for sympathy, nor am I auditioning for inspiration. I am inviting a partnership. If humour occasionally creeps into my words, it is only to ease the discomfort of truths that need hearing. After all, as our grandparents taught us, sometimes a spoonful of jaggery helps the bitter medicine go down.

Towards a future where AI includes us by default

Let us imagine an India where disability is not a postscript to innovation. Where accessibility is not a CSR project, but a constitutional culture. Where a child with a disability in a government school in a Tier-III town can use AI without fear, barrier, or shame.

That future is not a fantasy. It is a policy choice. It shall depend on whether we see AI as a shiny new toy for the few or a transformative public good for all.

Closing

I shall end with a couplet, inspired by Alexander Pope’s spirit:

When bias writes the code, the harm shall scale;
When rights inform the design, justice shall prevail.

 

Paper Available at https://thebiaspipeline.nileshsingit.net/


Nilesh Singit