Human in the Loop, a recent film examining the uneasy partnership between Artificial Intelligence and humanity, arrives at a moment when the technological imagination oscillates between utopian optimism and existential dread. Marketing narratives portray AI as either our benevolent assistant or our imminent overlord. The film attempts to resist these binaries, grounding its narrative in the messy, often uncomfortable space where human judgement and machine logic are forced to coexist. It is this space that disability communities know all too well: the zone where systems intended to “assist” end up surveilling, disciplining, or excluding.
This review engages with the film through the arguments I have articulated in my work on prompt bias, accessibility, and disability-led design. If AI systems are only as ethical as the assumptions fed into them, then “the human in the loop” is not a safeguard by default. A biased human keeping watch over a biased machine produces not balance, but compounding prejudice. The film gestures towards this tension, though at times without the depth that disability perspectives could have added.
The Premise: Humans as Moral Guardrails
The core narrative premise is straightforward: an AI system designed to support decision-making in public services begins exhibiting troubling patterns. Authorities insist that the system will remain safe because a “human in the loop” retains final authority. The question the film explores is whether this safeguard is meaningful or merely bureaucratic window dressing.
The film wisely avoids technical jargon. Instead, it frames the issue through a series of real-world scenarios: automated hiring, predictive policing, healthcare triage, and social welfare determinations. In each, the human reviewer is presented as the ethical backstop. But the film repeatedly reveals how rushed, under-trained, and system-dependent these humans are. Their “oversight” is often symbolic, not substantive.
This aligns with disability critiques of assistive and decision-making technologies. If the system itself is built on a logic of normalcy, hierarchy, and suspicion of difference, then a human operator merely rubber-stamps the discrimination. A safeguard which never safeguards must be called by its proper name: ritual.
A Loop that Learns the Wrong Lessons
One of the film’s strongest structural choices is its metaphor of the loop. We see not only “human in the loop” but “loop in the human”. Over time, the human reviewers begin adopting the AI’s rationale. Instead of questioning outputs, they internalise them. Confidence in machine judgement breeds complacency in human judgement.
A particularly sharp moment occurs when a social welfare officer rejects an application, citing the AI risk score. When challenged, she responds: “The system has seen thousands of cases. How can I overrule that?” In that instant, the audience witnesses the reversal of power. The human is no longer the check on the system; the system becomes the check on the human.
For disabled audiences, this dynamic is painfully familiar. Tools meant to empower often become tools of compliance. Speech-to-text systems that refuse to recognise dysarthric voices, proctoring software that flags blind students for “not looking at the screen”, hiring algorithms that treat disability as “lack of culture fit” — all operate with the same logic: non-legibility equals non-credibility.
The film hints at this, though it does not explicitly name disability. This omission is a missed opportunity, because disability provides the clearest lens to examine what happens when “the loop” is built upon default assumptions of normality.
The Myth of the Neutral Observer
The film’s central critique is that a human in the loop is only meaningful if that human has both empathy and agency. However, the film does not fully unpack how social bias contaminates the loop. It largely presents “the human” as a neutral figure rendered passive by technology. But no human enters oversight without prejudice; their judgement is shaped by culture, training, and power.
In my recent writing, I have argued that prompts reveal user bias and that AI tends to follow the user’s framing. The same principle applies here: if the human reviewer is biased, the loop becomes a bias amplifier. The film gestures towards this through a brief subplot involving a hiring panel. The human reviewer rejects a disabled candidate not after scrutinising the AI’s output, but because “the system must be correct”. The tragedy is that the system’s training data reflected decades of hiring discrimination. The reviewer trusts the machine because the machine echoes societal prejudice.
Here, the film could have benefited from a deeper exploration of disability-led prompting, reframing, and language etiquette. Without anchoring oversight in rights-based frameworks, human judgement merely masks bias with a polite face.
When Oversight Becomes Theatre
A recurring motif in the film is the performative nature of oversight. We see checklists, audit meetings, and compliance reports — all signalling responsible governance. Yet none of these prevents harm. The film’s critique of “audit theatre” resonates strongly with disability experience, where accessibility audits often occur after design is complete, and disabled persons are called in merely to validate decisions already made.
The phrase “human in the loop” functions similarly. It reassures the public that humans retain power, while hiding the fact that decision-making has already been ceded to algorithmic systems. The supervision is decorative.
This mirrors the “tokenism trap” in disability inclusion. When disabled persons are consulted at the tail end of design, their role becomes symbolic. Their presence legitimises inaccessible systems rather than transforming them. The film understands this dynamic intuitively, even if it does not explicitly borrow from disability discourse.
Moments of Ethical Clarity
Despite its gaps, the film contains several moments of thoughtful clarity:
- A data scientist resigns after realising the oversight team is incentivised to approve system decisions, not scrutinise them.
- A scene where a reviewer is told: “Your job is not to judge the system, but to justify it.”
- A powerful montage showing how tech developers, under pressure to scale, see oversight as an obstacle, not a safeguard.
The film effectively illustrates that governance cannot rely on individual moral courage. Systems that reward speed, efficiency, and conformity will always erode slow, reflective, ethical judgement.
This is precisely why disability advocacy insists on structural safeguards, not individual discretion. Access cannot depend on the goodwill of one sympathetic officer. Rights must be enforceable at design, deployment, and review.
Where the Film Hesitates
While Human in the Loop is compelling, it hesitates in two important areas:
1. Lack of Specific Marginalised Lenses
The film takes a universalist tone — “AI harms us all” — which is true at a philosophical level but shallow at a lived level. Harm is not evenly distributed. Disabled persons, along with other marginalised communities, bear disproportionate risk from AI mis-judgement. The absence of disability voices weakens the film’s moral authority. Had it engaged with disabled lived experience, the critique of oversight would have gained both nuance and urgency.
2. Limited Exploration of Alternatives
The film critiques existing oversight but does not explore disability-led or community-led models of design and evaluation. Without offering a counter-imaginary, the narrative risks fatalism: that human oversight is doomed. In reality, oversight becomes meaningful when the overseers are those most impacted by harm. Not diversity on paper, but power-sharing in practice.
Relevance to Disability-Smart Prompting and Design
My recent work argues that prompting AI with disability-inclusive language can reduce prejudice in responses. The film unintentionally reinforces this principle: the question shapes the system. If oversight questions are bureaucratic — “Has the system followed protocol?” — the loop measures compliance. If oversight questions are justice-centred — “Has the system caused inequity, exclusion, or rights violations?” — the loop measures fairness.
Imagine if the oversight process in the film had included disability-smart prompts such as:
- “Does the system assume normative behaviour or body standards?”
- “How would this decision affect a disabled applicant exercising their legal right to reasonable accommodation?”
- “Have persons with disabilities been involved in evaluating model outcomes?”
Suddenly, “human in the loop” ceases to be ritual and becomes accountability.
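To make this concrete, the sketch below shows one way such prompts could be operationalised as a blocking checklist rather than a box-ticking exercise. It is a minimal illustration under my own assumptions: the structure and names (OversightQuestion, decision_may_proceed, and so on) are hypothetical, not anything depicted in the film or drawn from a real system.

```python
# A minimal sketch of a justice-centred oversight checklist.
# All names here (OversightQuestion, decision_may_proceed, etc.) are
# hypothetical; they illustrate the idea, not any real system.
from dataclasses import dataclass


@dataclass
class OversightQuestion:
    text: str
    answered: bool = False
    notes: str = ""


# Justice-centred prompts from the list above, as opposed to
# compliance-only prompts such as "Has the system followed protocol?"
JUSTICE_CENTRED_QUESTIONS = [
    "Does the system assume normative behaviour or body standards?",
    "How would this decision affect a disabled applicant exercising "
    "their legal right to reasonable accommodation?",
    "Have persons with disabilities been involved in evaluating model outcomes?",
]


def build_review_checklist() -> list[OversightQuestion]:
    """Create a fresh checklist for a single algorithmic decision."""
    return [OversightQuestion(text=q) for q in JUSTICE_CENTRED_QUESTIONS]


def decision_may_proceed(checklist: list[OversightQuestion]) -> bool:
    """Release a decision only once every question has been answered.

    This inverts audit theatre: the reviewer cannot simply rubber-stamp
    the output, because unanswered questions block it from going out.
    """
    return all(q.answered for q in checklist)


if __name__ == "__main__":
    checklist = build_review_checklist()
    checklist[0].answered = True
    checklist[0].notes = "Model penalises atypical speech patterns; flagged."
    # Still blocked: two questions remain unanswered.
    print("Decision may proceed:", decision_may_proceed(checklist))
```

Read this way, the checklist is not extra bureaucracy; it simply makes refusal a legible, available action for the reviewer rather than an act of individual courage.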
Conclusion: The Loop Must Expand, Not Collapse
Human in the Loop leaves the viewer with a sobering realisation: a lone human reviewer is inadequate protection against systemic bias, particularly when that human has neither the mandate nor training to challenge the system. The film’s haunting closing image — a reviewer staring at a screen, accepting an AI-generated decision despite visible discomfort — encapsulates the danger of symbolic oversight.
For disability communities, the message is clear. We cannot trust systems to self-correct. We cannot assume human judgement will prevail over technological momentum. And we certainly cannot allow oversight to exclude those most impacted by discrimination.
A human in the loop is meaningful only if that human is:
- empowered to challenge the system,
- trained in bias literacy and rights-based frameworks, and
- accountable to the communities the system affects.
Where disability is concerned, the safeguard is not a human in the loop, but disabled humans designing the loop.
The future of ethical AI will not be secured by adding a human after the fact. It will be built by placing disability, inclusion, and justice at the centre of design, prompting, and governance. The loop must not shrink into a rubber stamp. It must widen into a circle of shared power.