When I first read this theme, I thought to myself: at last, someone has asked the right two questions, though perhaps in reverse.
We often think of prototyping as a neutral, creative act—a space of optimism and experimentation. Yet, for many of us in the disability community, it is also the stage where inclusion quietly begins or silently ends.
And when Artificial Intelligence enters this space, another question arises: what does it mean for a prototype to be legible to a machine before it is accessible to a human?
My argument today is straightforward: AI-powered assistive technologies often make disabled people legible to machines, but they do not necessarily empower them as agents of design.
The challenge before us is to move from designing for to designing with, and ultimately to designing from disability.
Accessibility and Legibility
Accessibility, as Aimi Hamraie reminds us, is not a technical feature but a relationship—a continuous negotiation between bodies, spaces, and technologies.
It is not about adding a ramp at the end, but about asking why there was a staircase to begin with.
Legibility, on the other hand, concerns what systems can recognise, process, and render into data. Within Artificial Intelligence, what is not legible simply ceases to exist.
Now imagine a person whose speech, gait, or expression does not fit the model. The algorithm blinks and replies: “Pardon? You are not in my dataset.”
Speech-recognition tools mishear dysarthric voices; facial-recognition models misclassify disabled expressions as errors.
In such moments, accessibility collapses into machinic readability. You are included only if the code can comprehend you. The bureaucracy of bias, once paper-bound, now speaks in silicon.
The Bias Pipeline—What Goes In, Comes Out Biased
In one experiment, researchers submitted pairs of otherwise identical resumes to AI-powered screening tools. In one version, the candidate listed a “Disability Leadership Award” or involvement in disability advocacy; in the other, that line was omitted. The AI system consistently ranked the non-disability version higher, asserting that the presence of disability credentials indicated “less leadership emphasis” or “focus diverted from core responsibilities.”
This is discrimination by design. A qualified person with disability is judged unsuitable—even when their skills match or exceed the baseline—because the algorithm treated their disability as a liability. Such distortions stem not from random error but from biased training data and value judgments encoded invisibly.
The Tokenism Trap
The bias in data is reinforced by bias in design. Disabled persons are often invited into the process only when the prototype is complete—summoned for validation rather than collaboration.
This is audit theatre: a performance of inclusion without participation.
The United Kingdom’s National Disability Strategy was declared unlawful by the High Court for precisely this reason: the survey behind it claimed to be the largest listening exercise ever held, yet failed to involve disabled people meaningfully.
Even the European Union’s AI Act, though progressive, risks the same trap. It mandates accessibility for high-risk systems but leaves enforcement weak.
Most developers receive no formal training in accessibility. When disability appears, it is usually through the medical model—as something to be corrected, not as expertise to be centred.
Real-World Consequences
AI hiring systems rank curricula vitae lower if they contain disability-related words, even when qualifications are stronger.
Video-interview platforms misread the facial expressions of stroke survivors or autistic candidates.
Online proctoring software has flagged blind students as “cheating” for not looking at screens. During the pandemic, educational technology in India expanded rapidly, yet accessibility lagged behind.
Healthcare algorithms trained on narrow datasets make incorrect inferences about disability-related conditions.
Each of these failures flows from inaccessible prototyping practices.
Disability-Led AI Prototyping
If the problem lies in who defines legibility, the solution lies in who leads the prototype.
Disability-led design recognises accessibility as a form of knowledge. It asks not, “How do we fix you?” but “What can your experience teach the machine about the world?”
Google’s Project Euphonia trains AI to understand atypical speech. The effort is valuable, yet it raises questions of data ownership and labour—who benefits from making oneself legible to the machine?
By contrast, community-led mapping projects, where wheelchair users, blind travellers, and neurodivergent coders co-train AI systems, are slower but more authentic.
Here, accessibility becomes reciprocity: the machine learns to listen, not merely to predict.
As Sara Hendren writes, design is not a solution; it is an invitation.
When disability leads, that invitation becomes mutual—the technology adjusts to us, not the other way round.