Automated testing won’t solve web accessibility

Over the past few years, accessibility companies have started to develop tools that claim to find accessibility problems automatically. Often the idea is that “automated testing is not quite there yet, but in a few years there will be a revolution”. I don't believe that.

Human problems need human solutions

At its core, accessibility is about people interacting with computers. Users take different paths with their computers to get to your content. Sometimes that route is through a screen reader, zoom software, voice interface, or switch controls. In other use cases, there are no specific accessibility features involved. In addition, there are many written and unwritten user interface rules that apply: Is user interaction from one operating system even understandable on another operating system? Can different accessibility APIs even represent a specific UI element?

This has happened before, with the Treeview component, which is rarely used outside its natural Windows 98 habitat. There is no rule against using it, but making it truly accessible instead of just conforming to some soulless specification is almost impossible.

Automated findings need human interpretation

Sure, finding issues can be a challenge. But the real challenge is how to address an issue. An automated tool can easily say that a <div> with a link role cannot be reached with the keyboard. But the correct course of action might not be clear to the person reading the finding.

Usually, you want to replace the <div> with an <a> element, but in other instances, removing the role="link" altogether is the correct advice. Might some AI/ML/InsertBuzzwordTechnologyHere be able to figure this out and make a decent guess? Maybe at some point. But the reality is that the automation might give you wrong advice, and then the tool will report the issue as fixed when in reality it isn’t.
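To make that ambiguity concrete, here is a minimal sketch in plain JavaScript of why the right fix depends on context the tool may not have. The `suggestFix` helper, its heuristic, and the element description are all invented for illustration; no real checker works this way:

```javascript
// Hypothetical sketch: deciding how to remediate a flagged <div role="link">.
// A tool can flag the element, but the right fix depends on what the element
// actually does in the page.
function suggestFix(el) {
  // el is a plain object describing the flagged element, e.g.
  // { tag: "div", role: "link", hasClickHandler: true, navigatesTo: "/about" }
  if (el.navigatesTo) {
    // It really behaves like a link: a native <a href> is keyboard-focusable
    // and exposed correctly to accessibility APIs by default.
    return `replace with <a href="${el.navigatesTo}">`;
  }
  if (!el.hasClickHandler) {
    // The role is just noise: removing role="link" is the correct advice.
    return 'remove role="link"';
  }
  // Everything else needs a human to look at the actual UI.
  return "needs manual review";
}

console.log(suggestFix({ tag: "div", role: "link", navigatesTo: "/about" }));
console.log(suggestFix({ tag: "div", role: "link", hasClickHandler: false }));
```

Even this toy heuristic needs information (where the element navigates to, whether a click handler exists) that static scanning often cannot reliably determine, which is exactly where the wrong-advice problem comes from.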

Automated testing is pretty basic

WCAG 2.0 is 15 years old this year. And while tools to aid exploratory testing¹ have matured and now do a better job of guiding testers through what needs to be done, most automated testing is still very basic. Adrian Roselli recently compared his own exploratory testing to free automated tools in a blog post.

The results are not good. While Adrian found 37 failures against WCAG, the most failures any single tool found was five. No tool found any AA violations; Adrian found 11. The tools can also provide warnings, which can be useful. But in my experience, the warnings can create more confusion, especially when the tool users are inexperienced.

While Level A success criteria (generally) have the most impact on disabled people, not finding any AA violations is obviously a weakness. It clearly shows that you cannot rely on automated testing alone to meet even the absolute minimum of WCAG level AA.

Efficient automated testing works best for preventing regressions

While I think automated testing tools are not good at finding accessibility issues and helping to remediate them, the tests can help with keeping a website or app accessible. Significant issues will crop up earlier when you have regular linting and automated accessibility testing. This gets even better if you can include best-practice rules: comparing against a known best practice will always be more useful than running a general-purpose tool.
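As a minimal sketch of the regression idea, assuming a deliberately naive check (a real setup would run an HTML-aware tool such as axe in CI rather than a regex), a build step could fail whenever a known rule regresses:

```javascript
// Naive illustrative regression check, not a real tool: scan an HTML string
// for <img> tags that lack an alt attribute entirely. The regex is crude
// (it would also match attributes like data-alt), but it shows the CI idea:
// once a rule passes, fail the build if it ever regresses.
function findImgsMissingAlt(html) {
  const imgs = html.match(/<img\b[^>]*>/gi) || [];
  return imgs.filter((tag) => !/\balt\s*=/.test(tag));
}

const page = `
  <img src="logo.png" alt="ACME Corp">
  <img src="decorative.png" alt="">
  <img src="chart.png">
`;

const failures = findImgsMissingAlt(page);
console.log(failures.length); // 1 — only chart.png lacks an alt attribute
```

Note what such a check cannot do: it can verify that an alt attribute exists, but not that its text is meaningful — which is precisely the gap between preventing regressions and actually finding accessibility issues.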

Might this get better in the future?

Maybe. It is hard to gauge the future of our community at the moment. On the one hand, it looks like some aspects will get easier to test; on the other hand, new requirements are introduced occasionally. Then there is room for interpretation of the rules and how to apply them. And none of that takes into account that the web is a magical, flexible medium that you can adapt to a plethora of uses.

But manual testing is hard and expensive

That’s true. Deal with it. 😎

There is no way around doing a manual/exploratory test eventually. That is just the nature of the task at hand. And just as you would expect a developer to know enough to check their code for performance and security bugs, we need to teach developers to do the same for accessibility. That would not completely remove the need for exploratory testing, but it would certainly prevent shipping the easy-to-recognize accessibility bugs.

Support Eric’s independent work

I'm a web accessibility professional who cares deeply about inclusion and an open web for everyone. I work with Axess Lab as an accessibility specialist. Previously, I worked with Knowbility, the World Wide Web Consortium, and Aktion Mensch. In this blog I publish my own thoughts and research about the web industry.


Automated tools should run continuously, and issues should be remediated immediately. That’s where these tools are useful. In all other cases, automated tools can be too hard to understand for users who are not accessibility experts. And once those users have acquired enough knowledge to understand the tools, they don’t need automated testing anymore.

Update February 15, 2023

Karl Groves published a blog post called “Automation is not the enemy”, where he specifically calls out the title of this article without linking to it (which is fair). As an automation nerd, I enthusiastically agree that automation is our friend. I automate the heck out of my life on an everyday basis. I have Stream Deck buttons that switch on and off different email accounts, depending on the work I do.

But, especially for accessibility testing, users of those tools must be aware that the tools are severely limited, which Karl also agrees with:

It is true that there are a lot of shortcomings when it comes to automation of something like accessibility testing.
Karl Groves
Sadly, it bears mentioning that no automated accessibility testing tool on the market today can fully test for every possible accessibility challenge. In fact, some WCAG Success Criterion cannot be tested for using any automated means available now.
Karl Groves

I’m not sure where the leap of understanding comes from when he writes in the following sentence:

That said, it is still irresponsible to act as though automated testing is bad and should be avoided.
Karl Groves

Because I don’t think anyone has said that. At least I did not write it in my article. A fair summary of my article is: “automated testing tools have limits (hence they don’t solve web accessibility on their own) and people need to be aware of those limits”.

A while ago, a competing vendor started making the claim that their tool could find over 70% of accessibility issues, then made hand-wavy explanations of how they came to that conclusion. I decided to do my own research. What I found, based on Tenon’s data, is that my own findings from my blog post “What can be tested and how” from 2012 is still accurate – but that there’s a pretty big difference between “possible” and “likely”, and Tenon.io was able to find 59% of likely accessibility issues.
Karl Groves

I don’t think any of this contradicts my post above. I explicitly say, “While Level A success criteria (generally) have the most impact on disabled people, not finding any AA violations is obviously a weakness.” And yes, Adrian’s test is limited to one site only, but it matches my experience: Level A issues are generally more likely to occur than AA issues. As Karl points out, the lens through which you view results is essential. Finding 59% of likely accessibility issues is good, but we can’t call this “accessibility solved” – and we all agree on that.

Automated tools have become better in recent years in ways other than how many errors they detect. Interoperability is one aspect: there is a W3C working group that makes sure tools can use the same tests and interpret things similarly. That’s good. I don’t claim that there has been no development in these tools, just that they are limited. And those limits come from the way accessibility as a discipline needs to be viewed (a human problem with human solutions) and from how the standards are written.

Finally, Karl makes the good point that automated tools that are regularly used by trained accessibility professionals can speed up, help, and supplement manual/exploratory testing. And I totally agree. Professionals use tools and know their limits and how to interpret results. But that doesn’t mean that the same tools can be given to non-professionals with the expectation of similar success.

Final note: I have recommended Tenon.io in the past to clients where I thought it was the right fit. I regularly tell people to use axe to verify their work. But I also say that they might need a professional who interprets the results or recommends best practices to solve issues, and I advise them to reach out if they are unsure about the results or how to address errors.

  1. In accessibility, we usually call this “manual testing”, but I learned a few years ago that QA testers use the term “exploratory testing” instead. As web accessibility testing includes many instances of exploring the content with different tools and (assistive) technologies, I have adopted that nomenclature.

Comments & Webmentions

Replies

  • 💬
    Ricky Onsman replied:
    2023-02-12 03:55

    One product I see coming is an automated testing tool that is configured to a single user's requirements: a specific combination of settings for device, platform, operating system, user agent, bandwidth, etc all configured to the accessibility needs of one user. Much of this configuration can be done now but this tool will do it all and, where possible, address identified issues for, say, a user session at a specific website. Maybe.

  • 💬
    Ankit replied:
    2023-02-15 06:15

    Great blog post! I appreciate the thoughtful and nuanced perspective on the relationship between automated testing and web accessibility. It's important to recognize that while automated testing can be a valuable tool in identifying certain accessibility issues, it's not a silver bullet solution that can guarantee an accessible website.

    I particularly appreciated the emphasis on the importance of manual testing and user testing in ensuring accessibility. These methods allow for a more holistic understanding of the user experience and can uncover issues that may not be caught by automated testing.

    Overall, this post offers valuable insights for anyone involved in web development and accessibility, and highlights the need for a multifaceted approach to creating truly accessible websites. Thanks for sharing!

  • 💬
    Scott M. Stolz replied:
    2023-02-15 12:10

    Thank you for the update. I'll plug my toaster back in then. Manually toasting it with a match was getting troublesome. ;)

  • 💬
    2023-02-21 07:30

    Oh! What cool typeface is that 😉?

  • 💬
    Eric Eggert replied:
    2023-02-21 13:15

    😜

Comments were disabled on this page.
