“AI is inevitable” is bullshit
I really wish I could sit back and let the whole “AI” frenzy play out without speaking up or getting involved. Usually these hype cycles don’t affect me. I never blogged about crypto or NFTs because it was far too obvious that they were fads. The hype cycle around “AI” feels the same to me, with one difference: the drive toward “AI” has reached even the spaces I work in, meaning this fad has penetrated deep into the accessibility discourse.
And yes, if you are searching for the “kids are wrong” meme from the Simpsons, let me save you a click and acknowledge my bias: I’m a middle-aged man, and there are probably about 67 things I don’t understand about the modern world. But I’m not against technology and advancements. Where it makes a tangible positive impact, I’m happy for technology to improve our lives.
Noise-canceling headphones, larger screens, lighter phones, locally emission-free electric cars, solar and wind energy, and 3D printing: these are all technologies that I have embraced and am happy to use any day. In addition, I use some lightweight LLMs (Large Language Models) where they make sense as a transitional measure. Creating captions and transcripts for videos that would otherwise not have them is something I do often. Are the results comparatively bad? Yes; they would not meet my standards for publication. I also use a grammar-correction program that is based on an LLM, and a translation tool that also uses one.¹
LLMs are useful when you need a compromise between fast and good. You will never get a good outcome fast.
“Look, they all are doing it!”
I recently attended a talk where the number of lines committed by bots to GitHub repositories was used as evidence of the inevitability of “AI”, and as a signal that we must train those “AI” bots to make stuff accessible. But quantity is not quality. The number of committed lines says nothing about whether those lines actually work, or whether they were ever merged into the code. Many open-source projects are fighting a flood of pull requests that add no value to their codebase.
The companies promoting “AI”, which are often held up as indicators that this is an important technology, are also financially very invested in it. Similar to how Germany cannot envision a future without the combustion engine because so much of its economy depends on it, “AI” investors can’t acknowledge the limitations of the technology because they are all-in on it.
Sunk cost, a fallacy.
The “AI” companies all run on venture capital money. OpenAI charges $200 a month for its highest plan, and even that does not cover its costs; the company lost more money than it made in revenue in 2024. I could not run my business this way. And sure, this is a big bet on some kind of nebulous future, I get it. But the fact that none of these companies has a path to profitability is concerning. And it seems to concern them, too. Otherwise, why would they start to build feeds² into ChatGPT and their video platform Sora, which allows people to create depictions of harm against disabled people?
But don’t worry: if all that doesn’t work out, there is always sexting with ChatGPT.
How much will a request or a monthly subscription have to cost for these companies to turn a profit? Ten, fifty, a hundred times what it costs now?
Human review.
Of course, “AI” is not perfect, but that’s where human review comes in. That’s a common sentiment, and a deflection of criticism. For now, it works, because today’s reviewers are steeped in best practices and know when an LLM produces valid information and when it doesn’t. But those humans would previously have reviewed and trained other humans, humans who would eventually replace them and teach the next generation how to review code and locate issues.
When “AI” replaces that next generation, you get people who cannot gain the experience they need to succeed in their jobs, and once the current reviewers retire, you lose the expertise for review outright. One of the most rewarding experiences of my career is training people to think critically about their work so that they can make decisions on their own. Arguing with a chatbot for hours is not what I would call fulfilling.
The mediocrity “AI” creates.
The advancement of “AI” has led us to accept more and more mediocre content and work. Creating a workshop outline with ChatGPT? Congratulations: every other consultant who does your job can generate the same outline. There is nothing unique about your workshop. We now accept that captions are sometimes wrong “because oops, AI”. Shitty translations are OK as long as “AI” did them. Bad code is good enough as long as some mediocre “AI” “fixes” all the issues.
I’m afraid we are settling into a state of “good enough” when using “AI”, which is especially harmful for accessibility. I have stopped pointing it out, but when people boasted about the great image descriptions “AI” produced, I almost always found important details missing. Recently, I have seen shops using “AI” descriptions for clothes, and the same garment is described differently from one image to the next.
Technology over people.
This is technoableism in its clearest form: reducing accessibility and access to a technical problem that a magic script can make go away. Finally, one can breathe freely and never think about disabled people and how they use the web again. Instead of fixing the (admittedly more difficult) societal problems, we use mediocre tools that simulate access for our convenience.
I have seen the argument that “we need to make the AI better” before, but I don’t think that will help in the end. There will always be more inaccessible content out there than accessible content. And accidentally accessible content is not sustainable either. What if one “AI” puts labels on form elements, and the next, for some reason, changes the accessible name with an ARIA attribute? You can easily end up with an inaccessible product despite both LLMs being trained to be “accessible”. The sketch below shows how such a conflict plays out.
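To make that concrete, here is a minimal, hypothetical sketch of two independent “accessibility fix-up” passes touching the same form field (the element ID and both functions are invented for illustration). The second pass does not undo the first pass’s work; it merely adds an `aria-label`, and in the accessible name computation `aria-label` takes precedence over an associated `<label>`:

```ts
// Hypothetical sketch: two bot-generated "fixes" fighting over one field.

// Pass 1: one tool adds a visible, correctly associated <label>.
// The input's accessible name is now "Email address".
function firstBotPass(doc: Document): void {
  const input = doc.querySelector<HTMLInputElement>("#email");
  if (!input) return;
  const label = doc.createElement("label");
  label.htmlFor = "email";
  label.textContent = "Email address";
  input.before(label);
}

// Pass 2: a second tool, trained separately, adds an aria-label it
// guessed from context. Because aria-label outranks the <label>
// element in the accessible name computation, screen readers now
// announce the field as "Search" — the visible label and the spoken
// name no longer match.
function secondBotPass(doc: Document): void {
  const input = doc.querySelector<HTMLInputElement>("#email");
  if (!input) return;
  input.setAttribute("aria-label", "Search");
}
```

Each pass looks “accessible” in isolation. Run one after the other, they produce a field whose visible label contradicts what assistive technology announces, which is exactly the kind of mismatch a trained human reviewer would catch.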
What to do?
Humans, humans, humans, humans. Train them in accessibility, guide them through issues that are difficult to identify, show them tools and techniques to reliably find accessibility bugs, and ensure that they think of accessibility from the beginning, side-stepping the issues.
1. Funnily enough, since switching to the new “AI-powered” model, the translation tool has taken a hit in accuracy and understanding. I would not trust an automatic translation without review by a translator if I were to publish it, and at that point you can just hire the translator straight away.
2. Automatically generated feeds of content based on previous interactions, like social media feeds, not RSS feeds.