In the previous chapter, we discussed testing and quality assurance: checking for issues before the product goes out to customers. One way to do this is manually, where real people go through an application and report the issues they come across. Another is through automated tests that run at certain points during the code writing and build processes. In this chapter, we’ll review how manual and automated testing close the loop on sustainable accessible design.
Automated Testing
Automated testing means programmatically running parts of an application to make sure they work as expected under different scenarios, and to prevent errors (regressions) if the underlying code changes at a later date.
Such a test fails as soon as the developer makes a change locally (on their own machine) or in the pull request phase (when their changes are compared against the latest version of the main application). Without the test, the feedback loop would be much longer (manual tester or user ➤ feedback channel ➤ project manager ➤ engineering team ➤ tester), and the problem would go unfixed until a user complained or a manual audit surfaced it.
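As a minimal sketch of what such an automated check looks like, consider a label-presence test. The dictionary-based view tree and field names below are invented for illustration; a real project would use a framework such as ATF on Android or GTXiLib on iOS instead.

```python
# Sketch of an automated label-presence check.
# The view-tree model (dicts with "clickable", "accessibility_label",
# "children") is hypothetical, not a real mobile framework API.

def find_unlabeled_elements(node, path="root"):
    """Recursively collect paths of interactive elements that lack an accessible label."""
    violations = []
    if node.get("clickable") and not node.get("accessibility_label"):
        violations.append(path)
    for i, child in enumerate(node.get("children", [])):
        violations.extend(find_unlabeled_elements(child, f"{path}/{i}"))
    return violations

# Example screen: a labeled button and an unlabeled icon-only button.
screen = {
    "children": [
        {"clickable": True, "accessibility_label": "Submit application"},
        {"clickable": True},  # icon-only button with no label -> violation
    ]
}

assert find_unlabeled_elements(screen) == ["root/1"]
```

Run as part of a test suite, a check like this fails the build the moment an unlabeled interactive element is introduced, which is exactly the short feedback loop described above.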
Automated testing has several advantages:
1. Automated tests are especially effective when application development is decentralized, that is, when several different teams are making changes to the same codebase.
2. If different versions of the application are available due to experimentation, automated testing can cover the different combinations of the holistic experience, all of which would be time-consuming to check manually.
3. While some level of manual oversight is clearly necessary, automated testing surfaces low-hanging fruit, freeing up time for more complicated error discovery. It can also be integrated with the continuous delivery process. By highlighting the error in a developer’s workflow, the feedback loop is much shorter than with manual testing practices.
4. In large distributed teams, automated testing results can also help identify teams that might need more education or accessibility training, as well as opportunities to apply standard design components/practices that have accessibility built in.
Automated testing also has limitations:
1. Currently, the subset of WCAG guidelines automated tests can check for is limited to a handful, especially on mobile. These include labels, color contrast, punctuation, clickable span focus, and others listed later in this section. It is, of course, possible for developers to write their own automated tests to cover other guidelines. This is time-consuming, although often worth it in the long run.
2. These checks cannot verify the semantic accuracy of accessible content. For example, a label check can ensure that a label is present, but it cannot check whether the label is a good or accurate one.
3. If the views to be checked are not already covered by tests, some initial investment into adding test coverage may be required.
4. Test reports are only effective if they are followed up on. Disabling or deleting a test are two quick ways to get a pull request merged if it fails an accessibility check.
Automated test frameworks for mobile are available as open source libraries, for example, GTXiLib (iOS)1 and ATF (Accessibility Test Framework for Android).2 Other free and paid tools offer bespoke functionality for automated checks. Currently, the open source libraries on Android and iOS provide the following checks:
ATF (Android)
1. Speakable Text Present
2. Redundant Description
3. Touch Target Size
4. Text Contrast
5. Editable Content Description
6. Duplicate Speakable Text
7. Clickable Span
8. Duplicate Clickable Bounds
9. Image Contrast Check
10. Class Name Check
11. Traversal Order Check
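To illustrate what a rule like Touch Target Size verifies, here is a sketch in Python. The 48×48dp minimum follows Android’s accessibility guidance; the element model is invented for the example and is not ATF’s actual API.

```python
# Sketch of a touch target size check, modeled on ATF's rule.
# Android accessibility guidance recommends targets of at least 48x48dp.
MIN_TARGET_DP = 48

def touch_target_violations(elements):
    """Return ids of clickable elements smaller than the minimum target size."""
    return [
        e["id"]
        for e in elements
        if e.get("clickable")
        and (e["width_dp"] < MIN_TARGET_DP or e["height_dp"] < MIN_TARGET_DP)
    ]

elements = [
    {"id": "save_button", "clickable": True, "width_dp": 96, "height_dp": 48},
    {"id": "close_icon", "clickable": True, "width_dp": 24, "height_dp": 24},
]
assert touch_target_violations(elements) == ["close_icon"]
```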
GTXiLib
1. Link Purpose Unclear Check
2. Accessibility Label Present
3. Accessibility Label-Trait Overlap
4. Accessibility Label Not Punctuated
5. Accessibility Traits: Element fails if it has conflicting accessibility traits
6. Touch Target Size
7. Contrast Ratio (Label)
8. Option for Custom Checks
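The contrast checks in both libraries are grounded in WCAG’s contrast ratio formula, which is simple enough to compute directly. The sketch below implements the WCAG 2.x relative luminance and contrast ratio definitions for sRGB colors:

```python
# WCAG 2.x contrast ratio between two sRGB colors given as 0-255 channel tuples.

def relative_luminance(rgb):
    """Relative luminance per the WCAG 2.x definition."""
    def channel(c):
        c = c / 255.0
        return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4
    r, g, b = (channel(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(fg, bg):
    """Contrast ratio (lighter + 0.05) / (darker + 0.05); ranges from 1:1 to 21:1."""
    lighter, darker = sorted(
        [relative_luminance(fg), relative_luminance(bg)], reverse=True
    )
    return (lighter + 0.05) / (darker + 0.05)

# Black text on a white background yields the maximum ratio of 21:1.
assert round(contrast_ratio((0, 0, 0), (255, 255, 255)), 1) == 21.0
# WCAG AA requires at least 4.5:1 for normal-size text; #767676 on white passes.
assert contrast_ratio((118, 118, 118), (255, 255, 255)) >= 4.5
```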
Scanners
Halfway between automated and manual testing are scanners, which scan each screen you manually visit and report accessibility violations. Examples are the Accessibility Scanner on Android and the Accessibility Inspector in Xcode on iOS. On the web, tools such as Lighthouse scan websites for a subset of accessibility errors.
On iOS, the Accessibility Inspector is part of the Xcode development environment and analyzes screens on the simulator (virtual device). Scanners are particularly useful for teams that do not have automated testing practices in place. By highlighting basic errors, scanners can save a lot of time in the manual testing phase and leave time for checks such as semantic verification.
Manual Testing
The most comprehensive checklist for accessibility compliance is the one published by the W3C WAI (Web Accessibility Initiative), which can be found at www.w3.org/WAI/WCAG21/quickref/.
The list contains over 70 items and can at first be overwhelming. Only a subset of these items applies to any given product. If usability (not just compliance) is the goal, checklists provide a great jumping-off point, but much like the tiered approach to prioritizing audit reports discussed in Chapter 3, teams will need to weave accessibility testing into their manual testing routines, including bug bashes (where members of a product team dedicate a specific amount of time together to test their features and discover software regressions), sign-off processes, and occasional audits.
Questions that help shape a manual testing routine include:
- Which are the most important user flows?
- What is already tested in the automated process?
- How often do we manually test our application?
  - Regular testing (daily, weekly)
  - Release testing
  - Bug bash sessions
  - Audits
- What should be tested at each of these stages?
The preceding processes are in decreasing order of frequency of testing. Core user flows must be tested against all applicable criteria with as many of these steps as possible, followed by secondary and less critical user flows.
One practice that has worked extremely well in my experience is creating user personas, picking at least one per bug bash, and taking turns using the feature under test with the related assistive technology. Bug bashing sessions are particularly powerful because they typically involve the entire development team, including designers, engineers, and product managers. First, this helps the team learn how people with disabilities use the product and empathize with them. Second, it uncovers opportunities, not just bugs, in the application. Following is a list of sample personas for a job application site:
Persona 1: A blind writer who uses TalkBack or VoiceOver and is looking for a copywriting role
Persona 2: An engineering manager with a hearing impairment who is hiring software developers
Persona 3: A partially sighted keyboard user with a motor impairment looking for a store manager position
Persona 4: A college student with short-term memory loss looking for art internships and career advice
One neat trick for manually testing a mobile application is to test with an external keyboard: in the process, this exercises criteria including labels, focus order, the grouping of elements, hierarchy, links, and focus traps.
Evaluating Third-Party Testing Vendors
Is it more feasible to build the infrastructure in-house?
If not, which vendor do we pick?
Criteria for evaluating automated testing vendors include:
1. Latency
2. Accuracy
3. Scalability
4. Price (integration cost)
5. Licensing and vendor fee
Manual testing: Is it more feasible to hire accessibility testing experts in-house?
The benefit of doing this is that internal testers will become dedicated product experts over time and build relationships with development teams. The downside is individual testers take time to ramp up, and may not scale with fluctuating needs. If the demand for manual testing varies over time, it might make more economic sense to hire a third-party vendor with that flexibility. A hybrid model with a few in-house testers and additional testers when needed could be the best option for some companies.
Vendor fees may be structured in several ways:
- A lump sum
- Per hour
- Per run
- Per screen
- Per issue found
- Per build
- Per developer
6. Coverage
   a. How much of the application is covered?
   b. Of all the criteria you want tested, how many are covered? You can use the WCAG guidelines as a baseline for this calculation, but a better baseline would be only the guidelines that apply to your product. You could write your own checklist using WCAG as a framework to get the best of both.
7. SLAs
8. Actionability of reports
   Any reports generated by the third party, either manually or automatically, should have screenshots, detailed explanations of errors, the WCAG or other guidelines the issue pertains to, as well as suggested solutions including code samples.
9. Parseability of reports
10. Customizability
Related to actionability is how customizable the reports and the workflows surrounding them are. Is it possible to extract information from the reports into an API that powers internal workflows and dashboards? If not, can the reports be exported into Excel or other document formats for further analysis and assignment?
API stands for Application Programming Interface: a way for systems or computers to share information using a set of predetermined rules. In this case, it could mean an internal reporting application requesting specific information from the report to display and notifying teams in their preferred format.
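As an illustration of report parseability, the sketch below condenses a machine-readable report into per-guideline counts that a dashboard could display. The JSON schema here is invented for the example; real report formats vary by vendor.

```python
import json
from collections import Counter

# Hypothetical vendor report format; actual schemas differ per vendor.
report_json = """
[
  {"screen": "login", "wcag": "1.1.1", "severity": "high"},
  {"screen": "login", "wcag": "1.4.3", "severity": "medium"},
  {"screen": "search", "wcag": "1.1.1", "severity": "high"}
]
"""

def summarize(report):
    """Aggregate a raw issue list into dashboard-friendly counts."""
    issues = json.loads(report)
    return {
        "total": len(issues),
        "by_guideline": dict(Counter(i["wcag"] for i in issues)),
        "high_severity": sum(1 for i in issues if i["severity"] == "high"),
    }

summary = summarize(report_json)
assert summary["total"] == 3
assert summary["by_guideline"]["1.1.1"] == 2
```

A summary like this can feed internal workflows directly, for example assigning all issues for a given guideline to the team that owns the affected screens.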
Customer Service: How to Help Customers and Escalate Critical Issues
If you have the inclusive, comprehensive product development strategy and testing practices discussed earlier in place, this part becomes much easier because, ideally, very few people will need to contact support.
Guidelines and best practices help avoid known issues. However, each user has their own unique way of engaging with products. In the accessibility context, there might be unforeseen interactions, and opportunities for improvement that only users with certain limitations, use cases, or intersection of disabilities will understand.
The motivation is no different than support pages, FAQs,4 and feedback channels for all users. Making sure users with disabilities are able to reach relevant teams with requests and feedback through accessible channels is important to ensure they don’t get stuck or feel unheard. For example, if phone is the only support channel offered, it might make sense to evaluate email or chat alternatives for people with hearing impairments or communication anxiety.
The second step in the process is to train agents and customer experience teams in identifying accessibility issues, determining their criticality, and escalating them to the right teams. It also involves using inclusive language in their responses, as well as help pages or FAQ documentation. A customer service guide published by the Ontario Education Service Corporation5 covers basic principles as well as inclusive language guidelines for specific disabilities. The ADA website also has a document with quick tips.6
Reports and suggestions from real users will also serve as a data source for measuring the success of a team’s accessibility efforts. This data source is reliable only when the product team has an effective relationship with users, and they feel invested enough to share feedback.
Useful metrics for tracking this channel include:
- Number of issues reported relative to application size
- Percentage of issues resolved
- Average time to resolve issues
- Average and total agent time spent on resolution
- Customer satisfaction ratings
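These metrics are straightforward to compute from an issue tracker export. A sketch, using an invented record format that you would adapt to your own tracker:

```python
# Sketch: computing customer-service accessibility metrics from issue records.
# The record fields ("resolved", "days_to_resolve") are hypothetical.
issues = [
    {"resolved": True, "days_to_resolve": 2},
    {"resolved": True, "days_to_resolve": 10},
    {"resolved": False, "days_to_resolve": None},
]

resolved = [i for i in issues if i["resolved"]]
percent_resolved = 100.0 * len(resolved) / len(issues)
avg_days_to_resolve = sum(i["days_to_resolve"] for i in resolved) / len(resolved)

assert round(percent_resolved, 1) == 66.7
assert avg_days_to_resolve == 6.0
```

Tracked over releases, trends in these numbers show whether the testing practices earlier in the chapter are actually reducing what reaches support.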
Summary
Automated testing means programmatically running parts of an application to make sure they work as expected under different scenarios, and to prevent errors (regressions) if the underlying code changes at a later date.
Advantages of automated testing include shorter feedback cycles between discovery and remediation of bugs and identification of systemic issues in large distributed teams, thereby freeing teams to focus on innovation.
Halfway between automated and manual testing are scanners, which scan each screen you manually visit and find accessibility violations. This is useful for teams that do not have automated testing practices in place. By highlighting basic errors, scanners can save time in the manual testing phase and leave time for checks including semantic verification.
The most comprehensive checklist for accessibility compliance is the one published by the W3C WAI (Web Accessibility Initiative), which can be found at www.w3.org/WAI/WCAG21/quickref/.
One practice that works extremely well in the absence of direct user feedback is creating user personas and taking turns using features with related assistive technology as those users.
In-house testing experts become dedicated product experts over time and build relationships with development teams. The downside is individual testers take time to ramp up, and may not scale with fluctuating needs. If the demand for manual testing varies over time, it might make more economic sense to hire a third-party vendor with that flexibility.
Making sure users with disabilities are able to reach relevant teams with requests and feedback through accessible channels is important to ensure they don’t get stuck or feel unheard.