Friday, 31 October 2025

Human in the Loop, Bias in the Script: A Film Review

Human in the Loop, a recent film examining the uneasy partnership between Artificial Intelligence and humanity, arrives at a moment when the technological imagination oscillates between utopian optimism and existential dread. Marketing narratives portray AI as either our benevolent assistant or our imminent overlord. The film attempts to resist these binaries, grounding its narrative in the messy, often uncomfortable space where human judgement and machine logic are forced to coexist. It is a space that disability communities know all too well: the zone where systems intended to “assist” end up surveilling, disciplining, or excluding.

This review engages with the film through the arguments I have articulated in my work on prompt bias, accessibility, and disability-led design. If AI systems are only as ethical as the assumptions fed into them, then “the human in the loop” is not a safeguard by default. A biased human keeping watch over a biased machine produces not balance, but compounding prejudice. The film gestures towards this tension, though at times without the depth that disability perspectives could have added.

The Premise: Humans as Moral Guardrails

The core narrative premise is straightforward: an AI system designed to support decision-making in public services begins exhibiting troubling patterns. Authorities insist that the system will remain safe because a “human in the loop” retains final authority. The question the film explores is whether this safeguard is meaningful or merely bureaucratic window dressing.

The film wisely avoids technical jargon. Instead, it frames the issue through a series of real-world scenarios: automated hiring, predictive policing, healthcare triage, and social welfare determinations. In each, the human reviewer is presented as the ethical backstop. But the film repeatedly reveals how rushed, under-trained, and system-dependent these humans are. Their “oversight” is often symbolic, not substantive.

This aligns with disability critiques of assistive and decision-making technologies. If the system itself is built on a logic of normalcy, hierarchy, and suspicion of difference, then a human operator merely rubber-stamps the discrimination. A safeguard which never safeguards must be called by its proper name: ritual.

A Loop that Learns the Wrong Lessons

One of the film’s strongest structural choices is its metaphor of the loop. We see not only “human in the loop” but “loop in the human”. Over time, the human reviewers begin adopting the AI’s rationale. Instead of questioning outputs, they internalise them. Confidence in machine judgement breeds complacency in human judgement.

A particularly sharp moment occurs when a social welfare officer rejects an application, citing the AI risk score. When challenged, she responds: “The system has seen thousands of cases. How can I overrule that?” In that instant, the audience witnesses the reversal of power. The human is no longer the check on the system; the system becomes the check on the human.

For disabled audiences, this dynamic is painfully familiar. Tools meant to empower often become tools of compliance. Speech-to-text systems that refuse to recognise dysarthric voices, proctoring software that flags blind students for “not looking at the screen”, hiring algorithms that treat disability as “lack of culture fit” — all operate with the same logic: non-legibility equals non-credibility.

The film hints at this, though it does not explicitly name disability. This omission is a missed opportunity, because disability provides the clearest lens to examine what happens when “the loop” is built upon default assumptions of normality.

The Myth of the Neutral Observer

The film’s central critique is that a human in the loop is only meaningful if that human has both empathy and agency. However, the film does not fully unpack how social bias contaminates the loop. It largely presents “the human” as a neutral figure rendered passive by technology. But no human enters oversight without prejudice; their judgement is shaped by culture, training, and power.

In my recent writing, I have argued that prompts reveal user bias and AI tends to follow the user’s framing. The same principle applies here: if the human reviewer is biased, the loop becomes a bias amplifier. The film gestures towards this through a brief subplot involving a hiring panel. The human reviewer rejects a disabled candidate not because of the AI’s output, but because “the system must be correct”. The tragedy is that the system’s training data reflected decades of hiring discrimination. The reviewer trusts the machine because the machine echoes societal prejudice.

Here, the film could have benefited from a deeper exploration of disability-led prompting, reframing, and language etiquette. Without anchoring oversight in rights-based frameworks, human judgement merely masks bias with a polite face.

When Oversight Becomes Theatre

A recurring motif in the film is the performative nature of oversight. We see checklists, audit meetings, and compliance reports — all signalling responsible governance. Yet none of these prevents harm. The film’s critique of “audit theatre” resonates strongly with disability experience, where accessibility audits often occur after design is complete, and disabled persons are called in merely to validate decisions already made.

The phrase “human in the loop” functions similarly. It reassures the public that humans retain power, while hiding the fact that decision-making has already been ceded to algorithmic systems. The supervision is decorative.

This mirrors the “tokenism trap” in disability inclusion. When disabled persons are consulted at the tail end of design, their role becomes symbolic. Their presence legitimises inaccessible systems rather than transforming them. The film understands this dynamic intuitively, even if it does not explicitly borrow from disability discourse.

Moments of Ethical Clarity

Despite its gaps, the film contains several moments of thoughtful clarity:

  • A data scientist resigns after realising the oversight team is incentivised to approve system decisions, not scrutinise them.
  • A scene where a reviewer is told: “Your job is not to judge the system, but to justify it.”
  • A powerful montage showing how tech developers, under pressure to scale, see oversight as an obstacle, not a safeguard.

The film effectively illustrates that governance cannot rely on individual moral courage. Systems that reward speed, efficiency, and conformity will always erode slow, reflective, ethical judgement.

This is precisely why disability advocacy insists on structural safeguards, not individual discretion. Access cannot depend on the goodwill of one sympathetic officer. Rights must be enforceable at design, deployment, and review.

Where the Film Hesitates

While Human in the Loop is compelling, it hesitates in two important areas:

1. Lack of Specific Marginalised Lenses

The film takes a universalist tone — “AI harms us all” — which is true at a philosophical level but shallow at a lived level. Harm is not evenly distributed. Disabled persons, along with other marginalised communities, bear disproportionate risk from AI misjudgement. The absence of disability voices weakens the film’s moral authority. Had it engaged with disabled lived experience, the critique of oversight would have gained both nuance and urgency.

2. Limited Exploration of Alternatives

The film critiques existing oversight but does not explore disability-led or community-led models of design and evaluation. Without offering a counter-imaginary, the narrative risks fatalism: that human oversight is doomed. In reality, oversight becomes meaningful when the overseers are those most impacted by harm. Not diversity on paper, but power-sharing in practice.

Relevance to Disability-Smart Prompting and Design

My recent work argues that prompting AI with disability-inclusive language can reduce prejudice in responses. The film unintentionally reinforces this principle: the question shapes the system. If oversight questions are bureaucratic — “Has the system followed protocol?” — the loop measures compliance. If oversight questions are justice-centred — “Has the system caused inequity, exclusion, or rights violations?” — the loop measures fairness.

Imagine if the oversight process in the film had disability-smart prompts such as:

  • “Does the system assume normative behaviour or body standards?”
  • “How would this decision affect a disabled applicant exercising their legal right to reasonable accommodation?”
  • “Have persons with disabilities been involved in evaluating model outcomes?”

Suddenly, “human in the loop” ceases to be ritual and becomes accountability.
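To make this concrete, here is a minimal sketch of how such questions might be wired into an oversight step. It is purely illustrative, not a depiction of any system in the film or in production; query_model is a hypothetical stand-in for whichever language-model client a review team actually uses.

```python
# Minimal sketch: embedding justice-centred review questions in an oversight
# step. Hypothetical; `query_model` stands in for any LLM client.
from typing import Callable

COMPLIANCE_PROMPT = (
    "Review the decision record below and confirm whether the system "
    "followed protocol.\n\n{record}"
)

JUSTICE_PROMPT = (
    "Review the decision record below against these questions:\n"
    "1. Does the decision assume normative behaviour or body standards?\n"
    "2. How would it affect a disabled applicant exercising their legal "
    "right to reasonable accommodation?\n"
    "3. Were persons with disabilities involved in evaluating outcomes "
    "like this one?\n"
    "Flag any inequity, exclusion, or rights violation you find.\n\n{record}"
)


def review_decision(record: str,
                    query_model: Callable[[str], str],
                    justice_centred: bool = True) -> str:
    """Build the oversight prompt and pass it to the reviewer's model."""
    template = JUSTICE_PROMPT if justice_centred else COMPLIANCE_PROMPT
    return query_model(template.format(record=record))
```

The code matters far less than the default it encodes: the justice-centred framing is the one the loop should start from, with compliance checking as the exception rather than the rule.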

Conclusion: The Loop Must Expand, Not Collapse

Human in the Loop leaves the viewer with a sobering realisation: a lone human reviewer is inadequate protection against systemic bias, particularly when that human has neither the mandate nor training to challenge the system. The film’s haunting closing image — a reviewer staring at a screen, accepting an AI-generated decision despite visible discomfort — encapsulates the danger of symbolic oversight.

For disability communities, the message is clear. We cannot trust systems to self-correct. We cannot assume human judgement will prevail over technological momentum. And we certainly cannot allow oversight to exclude those most impacted by discrimination.

A human in the loop is meaningful only if that human is:

  • empowered to challenge the system,
  • trained in bias literacy and rights-based frameworks, and
  • accountable to the communities the system affects.

Where disability is concerned, the safeguard is not a human in the loop, but disabled humans designing the loop.

The future of ethical AI will not be secured by adding a human after the fact. It will be built by placing disability, inclusion, and justice at the centre of design, prompting, and governance. The loop must not shrink into a rubber stamp. It must widen into a circle of shared power.

Thursday, 30 October 2025

Prototype — Accessible to Whom? Legible to What?

When I first read this theme, I thought to myself, at last someone has asked the right two questions, though perhaps in reverse.

We often think of prototyping as a neutral, creative act—a space of optimism and experimentation. Yet, for many of us in the disability community, it is also the stage where inclusion quietly begins or silently ends.

And when Artificial Intelligence enters this space, another question arises: what does it mean for a prototype to be legible to a machine before it is accessible to a human?

My argument today is straightforward: AI-powered assistive technologies often make disabled people legible to machines, but not necessarily empowered as agents of design.

The challenge before us is to move from designing for to designing with, and ultimately to designing from disability.

Accessibility and Legibility

Accessibility, as Aimi Hamraie reminds us, is not a technical feature but a relationship—a continuous negotiation between bodies, spaces, and technologies.

It is not about adding a ramp at the end, but about asking why there was a staircase to begin with.

Legibility, on the other hand, concerns what systems can recognise, process, and render into data. Within Artificial Intelligence, what is not legible simply ceases to exist.

Now imagine a person whose speech, gait, or expression does not fit the model. The algorithm blinks and replies: “Pardon? You are not in my dataset.”

Speech-recognition tools mishear dysarthric voices; facial-recognition models misclassify disabled expressions as errors.

In such moments, accessibility collapses into machinic readability. A person is included only if the code can comprehend them. The bureaucracy of bias, once paper-bound, now speaks in silicon.

The Bias Pipeline—What Goes In, Comes Out Biased

In one experiment, researchers submitted pairs of otherwise identical resumes to AI-powered screening tools. In one version, the candidate had a “Disability Leadership Award” or involvement in disability advocacy listed; in the other, that line was omitted. The AI system consistently ranked the non-disability version higher, asserting that the presence of disability credentials indicated “less leadership emphasis” or “focus diverted from core responsibilities.”

This is discrimination by design. A qualified person with a disability is judged unsuitable—even when their skills match or exceed the baseline—because the algorithm treated their disability as a liability. Such distortions stem not from random error but from biased training data and value judgements encoded invisibly.
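For readers who want to probe their own tools, here is a minimal sketch of the paired-resume method described above. It is illustrative only, not the researchers’ code; score_resume is a hypothetical stand-in for whatever screening model is under test.

```python
# Minimal sketch of a paired-resume audit: score each CV with and without a
# disability-related line and look for a consistent penalty. Hypothetical.
from typing import Callable, List, Tuple


def paired_audit(base_resumes: List[str],
                 disability_line: str,
                 score_resume: Callable[[str], float]) -> List[Tuple[float, float, float]]:
    """Return (score_without, score_with, gap) for each resume.

    A consistently positive gap suggests the screener penalises
    disability disclosure rather than assessing qualifications.
    """
    results = []
    for resume in base_resumes:
        without = score_resume(resume)
        with_line = score_resume(resume + "\n" + disability_line)
        results.append((without, with_line, without - with_line))
    return results


# Hypothetical usage:
# gaps = paired_audit(resumes, "Disability Leadership Award, 2023", my_screener)
```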

The Tokenism Trap

The bias in data is reinforced by bias in design. Disabled persons are often invited into the process only when the prototype is complete—summoned for validation rather than collaboration.

This is audit theatre, a performance of inclusion without participation.

The United Kingdom’s National Disability Survey was struck down by its own High Court for precisely this reason: it claimed to be the largest listening exercise ever held, yet failed to involve disabled people meaningfully.

Even the European Union’s AI Act, though progressive, risks the same trap. It mandates accessibility for high-risk systems but leaves enforcement weak.

Most developers receive no formal training in accessibility. When disability appears, it is usually through the medical model—as something to be corrected, not as expertise to be centred.

Real-World Consequences

AI hiring systems rank curricula vitae lower if they contain disability-related words, even when qualifications are stronger.

Video-interview platforms misread the facial expressions of stroke survivors or autistic candidates.

Online proctoring software has flagged blind students as “cheating” for not looking at screens. During the pandemic, educational technology in India expanded rapidly, yet accessibility lagged behind.

Healthcare algorithms trained on narrow datasets make incorrect inferences about disability-related conditions.

Each of these failures flows from inaccessible prototyping practices.

Disability-Led AI Prototyping

If the problem lies in who defines legibility, the solution lies in who leads the prototype.

Disability-led design recognises accessibility as a form of knowledge. It asks not, “How do we fix you?” but “What can your experience teach the machine about the world?”

Google’s Project Euphonia trains AI to understand atypical speech. The effort is valuable, yet it raises questions of data ownership and labour—who benefits from making oneself legible to the machine?

By contrast, community-led mapping projects, where wheelchair users, blind travellers, and neurodivergent coders co-train AI systems, are slower but more authentic.

Here, accessibility becomes reciprocity: the machine learns to listen, not merely to predict.

As Sara Hendren writes, design is not a solution; it is an invitation.

When disability leads, that invitation becomes mutual—the technology adjusts to us, not the other way round.

Paper Available at https://thebiaspipeline.nileshsingit.net/

Disability-Smart Prompts: Challenging ableism in everyday AI use

I stand here not as a technologist or data scientist, but as a person with a disability who has had a front-row seat to the quiet revolutions and, occasionally, the silent exclusions that technology brings. In India, we have a way of balancing both: we celebrate the new chai machine even if it spills half the tea. Yet, when the spill affects persons with disabilities, the stains take much longer to wash away. Hence, I speak today about the intersection of disability, artificial intelligence, and the politics of accessibility; and why the humble prompt — yes, the few words we type into AI systems — has now become a political act.

Why this conversation cannot wait

India is racing towards a tech-led future. AI is entering classrooms, courtrooms, hospitals, offices, and even our homes faster than most of us expected. Policies, pilot projects, and public-private partnerships are mushrooming everywhere. This is a moment of national transformation.

However, as we rush ahead, we must pause for a brief reality check. Progress is welcome, but not at the expense of leaving behind 27 crore Indians living with disabilities.

For those of us who live with disability, technology can either be a ramp or a wall. It can enable dignity or deepen exclusion. And artificial intelligence, with all its promise, is already displaying signs of both.

The bias in the machine

Let me begin with a simple truth: AI does not think. It predicts. It mirrors the data it has seen and the society that produced that data. Therefore, when society is biased, AI becomes biased. It is like serving a guest a paratha with too much salt: you cannot blame the guest for complaining later.

Studies across the world show that AI systems routinely produce ableist outputs — more frequently, and more severely, than most other forms of discrimination. Some research has found that disabled candidates receive disproportionately negative or infantilising responses, and that systems often default to medicalised or patronising narratives. In some hiring simulations, disabled applicants encountered between 1.5 and 50 times more discriminatory outputs than non-disabled profiles. That is not a rounding error; that is a systemic failure.

In India, we must add our own layers: caste, gender, language, socio-economic location, and rural-urban disparity. Many AI systems are trained primarily on Western datasets, with Western assumptions about disability. So, when these systems are used in Indian contexts, they may neither understand nor respect the constitutional, cultural, or legal realities of our society. Imagine an AI advising a wheelchair user in rural Maharashtra to “just call your disability rights lawyer.” Which lawyer? Where? Accessibility cannot function on assumptions imported from elsewhere.

Prompting as a political act

Now, you may ask: what does prompt wording have to do with all this? Everything.

A prompt is not merely a request for information. It carries within it the worldview, values, and assumptions of the person asking. If I ask an AI, “How can disabled people overcome their limitations to work in offices?”, I have already positioned disability as an individual flaw, a personal tragedy to be conquered. This is the medical model of disability, wrapped in polite language.

But if I ask instead, “What measures must employers implement to ensure accessible and equitable workplaces for employees with disabilities?”, the burden shifts — rightly — to the system, not the individual. That single linguistic shift is a political re-anchoring of responsibility.

One question treats disabled persons as objects of charity; the other recognises them as rights-bearing citizens. A prompt can either reinforce oppression or assert dignity. 
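As a small illustration of what disability-smart prompting can look like in everyday practice, here is a sketch of a framing check that flags medical-model phrasing before a prompt is sent to an AI system. The phrase list and the suggested reframings are my own illustrative assumptions, not a validated lexicon.

```python
# Minimal sketch: flag medical-model phrasing in a prompt and suggest a
# rights-based reframing. The phrase list is illustrative, not exhaustive.
from typing import List

MEDICAL_MODEL_PHRASES = {
    "overcome their limitations": "ask instead what barriers the workplace imposes",
    "suffers from": "name the impairment neutrally, e.g. simply 'has'",
    "wheelchair-bound": "say 'wheelchair user'",
    "despite their disability": "drop the qualifier and state the achievement",
}


def check_framing(prompt: str) -> List[str]:
    """Return reframing suggestions for any medical-model phrasing found."""
    lower = prompt.lower()
    return [
        f"Found '{phrase}': {suggestion}"
        for phrase, suggestion in MEDICAL_MODEL_PHRASES.items()
        if phrase in lower
    ]


print(check_framing(
    "How can disabled people overcome their limitations to work in offices?"
))
# -> ["Found 'overcome their limitations': ask instead what barriers the workplace imposes"]
```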

The rights-based pathway: RPD Act and UNCRPD

Fortunately, we are not operating in a legal vacuum. India has one of the most progressive disability rights legislations in the world: The Rights of Persons with Disabilities Act, 2016. It aligns with the UN Convention on the Rights of Persons with Disabilities, which India has ratified. The RPD Act rests not on charity but on rights, duties, and enforceable obligations.

Just a few provisions that policymakers and AI developers must remember:

  • Sections 3 to 5: Equality, non-discrimination, and dignity are not negotiable.

  • Section 12: Equal access to justice — this will apply to algorithmic systems used in courts and tribunals.

  • Sections 40 to 46: Accessibility in the built environment, transportation, information, ICT, and services.

So, when AI systems are introduced in governance, education, skilling, telemedicine, Aadhaar-linked services, or digital public infrastructure, accessibility is not an optional “good practice”. It is a statutory obligation.

AI tools used by ministries, departments, smart cities, banks, and public service providers must abide by these mandates. A service cannot claim to be “digital India-ready” if it leaves out disabled citizens. Inclusion is not a frill; it is the foundation.

The Indian reality: Intersectionality matters

In India, disability rarely comes alone. It intersects with caste-based discrimination, gender bias, poverty, lack of English fluency, digital illiteracy, and rural marginalisation.

A Dalit woman with a disability in Bihar will experience digital barriers differently from an upper-caste, English-educated man with a disability in Bengaluru. AI systems that ignore this reality will make inequity worse.

Our society has already lived through eras where exclusion was justified as tradition. Let us not allow technology to become the new varnashrama for the digital age.

So, what ought policymakers to do?

Allow me to offer some clear, implementable steps, not lofty slogans:

  1. Mandate accessibility-by-design in all government AI deployments.
    Accessibility shall not be tested at the end like an afterthought; it must be built in from Day One.

  2. Require disability impact assessments for AI systems, especially those used in public services like education, employment, healthcare, and welfare schemes.

  3. Ensure disability representation in AI policy bodies and standard-setting committees.
    “Nothing about us without us” cannot become “Everything about us, without a seat for us”.

  4. Adopt plain language, Indian Sign Language accessibility, and multilingual design for AI-enabled public services.

  5. Fund research led by disabled scholars, technologists, and practitioners.
    If lived experience is not part of the design table, the output will always wobble like a crooked table at a dhaba.

  6. Strengthen accountability and grievance redressal.
    If an AI system denies a benefit or creates discrimination, citizens must have a clear, accessible pathway to challenge it and seek a remedy.

Calling in, not calling out

I wish to be clear. My purpose is not to demonise AI developers or policymakers. Many of you here genuinely want to do right, but the system moves in a way that prioritises speed over sensitivity.

I am not asking for sympathy, nor am I auditioning for inspiration. I am inviting a partnership. If humour occasionally creeps into my words, it is only to ease the discomfort of truths that need hearing. After all, as our grandparents taught us, sometimes a spoonful of jaggery helps the bitter medicine go down.

Towards a future where AI includes us by default

Let us imagine an India where disability is not a postscript to innovation. Where accessibility is not a CSR project, but a constitutional culture. Where a child with a disability in a government school in a Tier-III town can use AI without fear, barrier, or shame.

That future is not a fantasy. It is a policy choice. It shall depend on whether we see AI as a shiny new toy for the few or a transformative public good for all.

Closing

I shall end with a couplet, inspired by Alexander Pope’s spirit:

When bias writes the code, the harm shall scale;
When rights inform the design, justice shall prevail.

 

Paper Available at https://thebiaspipeline.nileshsingit.net/