Websites and apps should be designed and developed in such a way that people with disabilities (whether permanent, temporary or situational) find them easy to use (if you’re unsure what digital accessibility is, take a look at our blog post, What is digital accessibility?).

But how do we measure accessibility? How can we tell whether a site or an app is accessible? The answer is to check it against the international accessibility standard: the Web Content Accessibility Guidelines (WCAG) 2.1. This can be done in two ways: by using automated testing tools or by carrying out a manual audit. In this article we’ll compare the two methods, look at their advantages and disadvantages, and consider when and how each should be used.

Automated accessibility testing

Automated evaluation can be carried out by anyone with access to an accessibility testing tool. There are a number of free and paid tools available.

Accessibility testing tools run a set of scripts that check web content against certain criteria based on WCAG 2.1. For example, according to WCAG, every <img> element (an image) has to have an “alt” attribute, which provides a text description of that image for the benefit of users who cannot see it:

<img alt="black cat sleeping on a sofa" src="/images/123.jpg" />

The relevant script would consist of the following steps:

  1. Check whether there are any <img> elements

    1. If no, the result is N/A

    2. If yes, check whether each of those elements has an “alt” attribute

      1. If yes, the result is Pass

      2. If no, the result is Fail
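
To illustrate, here’s a minimal sketch of what such a script might look like, assuming it runs in a browser-like environment with access to the DOM (the function and result names are ours, not taken from any particular tool):

type CheckResult = "N/A" | "Pass" | "Fail";

function checkImgAlt(doc: Document): CheckResult {
  const images = Array.from(doc.querySelectorAll("img"));
  // Step 1: are there any <img> elements at all?
  if (images.length === 0) {
    return "N/A";
  }
  // Step 2: does every <img> element have an "alt" attribute?
  // Note that an empty alt="" still passes - it's valid for decorative images.
  return images.every((img) => img.hasAttribute("alt")) ? "Pass" : "Fail";
}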

Automated testing pros:

  • It’s easy - it can be done by anyone with access to a testing tool

  • It’s fast - it can check hundreds of pages and provide results in a matter of hours

  • It’s cheap - some of the automated tools are free to use

  • It can detect problems early on - some tools can be integrated into the development process and run tests whenever new code is added or perform regular scans to ensure that no new issues have been introduced since the last check. 
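
As a sketch of that last point, here’s one way such a check could be wired into an automated test suite using the open-source axe-core library (the helper name and failure behaviour below are our own illustrative choices, not a prescribed setup):

import axe from "axe-core";

// Run axe-core against a document and fail loudly if any violations are
// found, so the check can run whenever new code is added (e.g. in CI).
// Assumes a test environment that provides a DOM, such as a real browser
// or jsdom.
async function assertNoAxeViolations(doc: Document): Promise<void> {
  const results = await axe.run(doc);
  if (results.violations.length > 0) {
    const summary = results.violations
      .map((v) => `${v.id}: ${v.help} (${v.nodes.length} instance(s))`)
      .join("\n");
    throw new Error(`Accessibility violations found:\n${summary}`);
  }
}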

Automated testing cons:

  • Automated testing tools are not able to test websites or apps against all success criteria listed in WCAG. Many guidelines are subjective and therefore can’t be tested using a script, as they require human judgement. For example, while automated tools can check whether an image has an “alt” attribute, they cannot evaluate whether it’s used correctly, i.e. whether its value conveys the same information as the image. A picture of a cat described as “red car” would pass automated testing.

This also applies to links. Accessibility guidelines require link text to be descriptive, and yet the link below would pass automated accessibility testing:

<a href="/contact-us">go somewhere else</a>

Automated tools can check whether link text is provided, but they cannot determine whether that text accurately describes the link’s purpose.
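
By way of illustration, a link check analogous to the image check sketched earlier might look like this (again, the names are ours):

function checkLinkText(doc: Document): "N/A" | "Pass" | "Fail" {
  const links = Array.from(doc.querySelectorAll("a[href]"));
  if (links.length === 0) {
    return "N/A";
  }
  // The script can only verify that some text is present: "go somewhere else"
  // passes exactly as "Contact us" would.
  return links.every((link) => (link.textContent ?? "").trim().length > 0)
    ? "Pass"
    : "Fail";
}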

  • Automated testing can generate false or misleading results. Even when checking against guidelines that can be tested with an automated tool, the results may be inaccurate or wrong, and someone with little or no knowledge of web accessibility may not be able to interpret them correctly.

  • The advice automated tools give on how to fix issues is generic and often vague, so some developers may find it difficult to understand and implement the required changes.

Someone who is not aware of the limitations of automated tools may not realise that even though a site has passed automated testing, it may still contain many barriers preventing people with disabilities from accessing its content. Passing an automated test only means that no errors were found in the checks an automated tool is able to perform, not that the site is accessible. Most automated tools can only check a small number of criteria reliably (e.g. 6 of the 50 Level A and AA success criteria), so automated accessibility testing alone cannot determine whether a website is WCAG 2.1 compliant.


Manual testing

Manual testing is carried out by an accessibility expert who checks a subset of pages or app screens (usually 5 to 20) against the WCAG 2.1 criteria. Every page element and its underlying code is manually evaluated for issues relating to non-text content (images, audio, video), use of colour, keyboard accessibility, descriptive links, labels and instructions, correct HTML structure and many others.

Manual testing pros:

  • A manual audit is much more thorough and, unlike automated testing, includes checking the content against all WCAG 2.1 criteria at the required level

  • The results of manual accessibility testing are considered much more reliable and trustworthy

  • While results generated by automated testing tools are generic, a report written by an accessibility expert is specific to your site and includes realistic and tailor-made solutions.

Manual testing cons:

  • Manual testing is more time-consuming, and therefore more expensive, than automated testing. The cost depends on coverage: the more pages included in the audit, the more expensive it is. In most cases it’s not possible to manually check all pages (unless the site is really small), but an accessibility expert who is also an experienced Drupal developer can select a representative sample of pages that covers as many content types and components as possible to maximise test coverage

  • The value of the audit depends largely on the knowledge and experience of the accessibility expert who carries it out. While any accessibility expert can identify issues on a Drupal site, one with good working knowledge of Drupal can provide much more detailed advice on how to fix the issues found during the audit. For example, rather than just saying, “these images need an alternative description”, they can specify where and how in the code or CMS the change needs to be made. This makes implementing the suggested fixes much easier and more cost-effective for the client.

Recommended approach

Automated and manual testing complement each other and should both be used to assess the accessibility of a website or app. Generally speaking, automated testing can identify some of the issues on pages across the whole site, while manual testing can find all of the issues on a subset of pages.

As mentioned above, automated testing has many limitations, but it will give you a rough idea of the level of accessibility of your site. Automated testing can quickly check a large number of pages and find some obvious issues, such as missing alternative descriptions or form fields without labels.

We’d then suggest embarking upon a manual accessibility audit. An experienced accessibility expert can help you select a good sample of pages which will cover all (or key) content types and components. This is important to make the manual audit cost-effective and ensure you get the best value for money. Following the audit you’ll receive a report which will clearly explain all issues found on your site and provide tailor-made and specific suggestions on how to fix them.

Any issue described in the report should then be resolved, remembering to apply the suggested fixes not only to the pages included in the audit, but also to other areas of your site.

Once all identified issues are fixed, a re-test can be carried out to confirm that they’ve been correctly resolved and no new issues have been accidentally introduced in the process. 

Following an automated and manual audit, you could look at usability testing with people with disabilities so that further improvements can be made to your site. Regular automated testing and training sessions for content authors and developers contributing to the site will help maintain its high level of accessibility and prevent new issues.

 

Want to know more about how we can help you on your accessibility journey? Get in touch