© The Author(s), under exclusive license to APress Media, LLC, part of Springer Nature 2023
S. Chadha, Beyond Accessibility Compliance, https://doi.org/10.1007/978-1-4842-7948-9_6

6. We Built It, Now What?

Sukriti Chadha1  
(1)
Lookout Mountain, TN, USA
 

When we discussed testing and quality assurance in the previous chapter, it meant checking for issues before the product goes out to customers. One way to do this is manually, where real people go through an application and report issues they come across. Another way is through automated tests that run at certain times during the code writing and building processes. In this chapter, we’ll review how manual and automated testing close the loop on sustainable accessible design.

Automated Testing

Automated testing means programmatically running parts of an application to make sure they work as expected under different scenarios, and to prevent errors (regressions) if the underlying code changes at a later date.

For example, suppose a test checks that the text on a button (Figure 6-1) has sufficient color contrast with its background, so that users with visual impairments such as color blindness can read it more easily.
Figure 6-1

Figure of a start button with a light blue background and black text

At a later date, the design changes to have a darker background color (Figure 6-2), which fails the contrast check.
Figure 6-2

Figure of a start button with a dark blue background and black text

This test will fail as soon as the developer makes the change locally (on their own machine) or in the pull request phase (when their changes are compared to the latest version of the main application). In the absence of this test, the feedback loop would have been much longer (manual tester or user ➤ feedback channel ➤ project manager ➤ engineering team ➤ tester), and the problem would go unfixed until a user complained or a manual audit caught it.
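To make the check concrete, the following is a minimal, self-contained Kotlin sketch of the arithmetic behind such a contrast test. It implements the WCAG relative luminance and contrast ratio formulas and compares black text against the two background colors from Figures 6-1 and 6-2; the specific RGB values are illustrative stand-ins, not the actual design colors.

import kotlin.math.pow

// Relative luminance of an sRGB color, per the WCAG 2.x definition.
fun relativeLuminance(r: Int, g: Int, b: Int): Double {
    fun channel(c: Int): Double {
        val s = c / 255.0
        return if (s <= 0.03928) s / 12.92 else ((s + 0.055) / 1.055).pow(2.4)
    }
    return 0.2126 * channel(r) + 0.7152 * channel(g) + 0.0722 * channel(b)
}

// Contrast ratio between two colors: (lighter + 0.05) / (darker + 0.05).
fun contrastRatio(a: Triple<Int, Int, Int>, b: Triple<Int, Int, Int>): Double {
    val la = relativeLuminance(a.first, a.second, a.third)
    val lb = relativeLuminance(b.first, b.second, b.third)
    return (maxOf(la, lb) + 0.05) / (minOf(la, lb) + 0.05)
}

fun main() {
    val blackText = Triple(0, 0, 0)
    val lightBlue = Triple(173, 216, 230) // Figure 6-1 style background (illustrative)
    val darkBlue = Triple(0, 0, 139)      // Figure 6-2 style background (illustrative)

    for ((name, background) in listOf("light blue" to lightBlue, "dark blue" to darkBlue)) {
        val ratio = contrastRatio(blackText, background)
        val verdict = if (ratio >= 4.5) "passes" else "fails"
        println("Black text on $name: %.2f:1 ($verdict the 4.5:1 WCAG AA threshold)".format(ratio))
    }
}

Run as a plain JVM program, this prints a passing ratio for the light blue background and a failing one for the dark blue background, which is exactly the regression the automated test would catch.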

Advantages of automated testing:
  1. Automated tests are especially effective when application development is decentralized, that is, when several different teams are making changes to the same codebase.

  2. If different versions of the application are available due to experimentation, automated testing can cover the different combinations of the holistic experience, all of which would be time-consuming to test manually.

  3. While some level of manual oversight is still necessary, automated testing surfaces low-hanging fruit and frees up time for more complicated error discovery. It can also be integrated with the continuous delivery process: by highlighting an error directly in the developer's workflow, the feedback loop is much shorter than with manual testing.

  4. In large distributed teams, automated testing results can also help identify teams that might need more accessibility education or training, as well as opportunities to apply standard design components and practices that have accessibility built in.
Limitations of automated testing:
  1. Currently, the subset of WCAG guidelines that automated tests can check for is limited to a handful, especially on mobile. These include labels, color contrast, punctuation, clickable span focus, and others listed later in this section. It is, of course, possible for developers to write their own automated tests to cover other guidelines; this is time-consuming, although often worth it in the long run.

  2. These checks cannot verify the semantic accuracy of accessible content. For example, a label check can ensure that a label is present, but it cannot check whether the label is a good or accurate one.

  3. If the views to be checked are not already covered by tests, some initial investment into adding test coverage may be required.

  4. Test reports are only effective if they are followed up on. Disabling or deleting a test are two quick ways to get a pull request merged when it fails an accessibility check.

Automated test frameworks for mobile are available as open source libraries, for example, GTXiLib (iOS)1 and ATF (Accessibility Test Framework for Android).2 There are other free and paid tools that offer bespoke functionality for automated checks. Currently, the open source libraries on Android and iOS support the following checks (a minimal sketch of enabling the Android checks from an Espresso test follows the ATF list):

ATF
  1. Speakable Text Present

  2. Redundant Description

  3. Touch Target Size

  4. Text Contrast

  5. Editable Content Description

  6. Duplicate Speakable Text

  7. Clickable Span

  8. Duplicate Clickable Bounds

  9. Image Contrast Check

  10. Class Name Check

  11. Traversal Order Check

GTXiLib

The GTXiLib toolkit integrates with existing test frameworks and can run accessibility checks on all child elements from a given root element. This means that accessibility or platform teams can enable these checks for developers who write integration tests, with little to no extra effort on their part. Currently, GTXiLib can check for:
  1. Link Purpose Unclear Check

  2. Accessibility Label Present

  3. Accessibility Label-Trait Overlap

  4. Accessibility Label Not Punctuated

  5. Accessibility Traits: Element fails if it has conflicting accessibility traits

  6. Touch Target Size

  7. Contrast Ratio (Label)

  8. Option for custom checks

Scanners

Halfway between automated and manual testing are scanners, which scan each screen you manually visit and find accessibility violations. Examples of these are the Accessibility Scanner on Android and the Accessibility Inspector in Xcode on iOS. On the web, tools such as Lighthouse scan websites for a subset of accessibility errors.

The Accessibility Scanner on Android is a free application available from the Google Play Store,3 which can analyze a screen or a series of screens and generate shareable reports. Figure 6-3 is an example of a scan from a food delivery app, followed by a report (Figure 6-4).
Figure 6-3

Two screenshots from an accessibility scan of a food delivery app. The first one shows all detected issues highlighted with an orange outline. The second one is a selected issue, which brings up details of the element’s accessibility issue. In this case, it is a missing label

Figure 6-4

Screenshot of a shareable report generated by the Android Accessibility Scanner that details accessibility issues found on the screen, remedies, and links to more information

On iOS, the Accessibility Inspector is part of the Xcode development environment and analyzes screens on the simulator (a virtual device). Scanners are particularly useful for teams that do not have automated testing practices in place. By highlighting basic errors, scanners can save a lot of time in the manual testing phase and leave time for checks such as semantic verification.

Manual Testing

The most comprehensive checklist for accessibility compliance is the one published by the W3C WAI (Web Accessibility Initiative), which can be found at www.w3.org/WAI/WCAG21/quickref/.

The list contains over 70 items and can at first be overwhelming. Only a subset of these items applies to any given product. If usability (not just compliance) is the goal, checklists provide a great jumping-off point, but much like the tiered approach to prioritizing audit reports discussed in Chapter 3, teams will need to weave accessibility testing into their manual testing routine, including bug bashes (where members of a product team dedicate a specific amount of time together to test their features and to discover software regressions), sign-off processes, and occasional audits.

An effective manual test strategy will answer the following questions:
  • Which are the most important user flows?

  • What is already tested in the automated process?

  • How often do we manually test our application?
    • Regular testing (daily, weekly)

    • Release testing

    • Bug bash sessions

    • Audits

  • What should be tested at each of these stages?

The preceding processes are listed in decreasing order of testing frequency. Core user flows must be tested against all applicable criteria at as many of these stages as possible, followed by secondary and less critical user flows.

One practice that has worked extremely well in my experience is creating user personas, picking at least one per bug bash, and taking turns using the feature under test with the related assistive technology. Bug bash sessions are particularly powerful because they are typically done with the entire development team, including designers, engineers, and product managers. First, they help the team learn how people with disabilities use the product and empathize with them. Second, they uncover opportunities and not just bugs in the application. Following is a list of sample personas for a job application site:

Persona 1

A blind TalkBack or VoiceOver user who is looking for a copywriting role

Persona 2

Engineering manager with a hearing impairment hiring for software developers

Persona 3

Partially sighted keyboard user with a motor impairment looking for a store manager position

Persona 4

College student with short-term memory loss looking for art internships and career advice

Tip

One neat trick when manually testing a mobile application is to test with an external keyboard: in the process, you cover criteria including labels, focus order, grouping of elements, hierarchy, links, and focus traps.

Evaluating Third-Party Testing Vendors

Sometimes teams may choose to hire external consultants or vendors for automated or manual testing support. You are basically trying to answer two questions here:
  • Is it more feasible to build the infrastructure in-house?

  • If not, which vendor do we pick?

The following evaluation criteria will help benchmark internal solutions against these service providers during a trial run:
  1. Latency

For CI integration, you want the checks to run in parallel with the regular test suite to keep the feedback loop and pull request merge times short. This also factors in the turnaround time of the generated report. For manual audits, this is less of a concern.
  2. Accuracy

Pick 2-3 user flows that span multiple screens and make a list of known issues (you can hire a consultant to do this as well). In the evaluation, make sure all known issues are flagged (no false negatives) and that issues are not reported where none exist (no false positives).
  3. Scalability

Make sure the system can scale to handle simultaneous builds and large test suites, especially in teams with hundreds of developers working on the same repository.
  4. Price (integration cost)

How much upfront effort (time and resources) will it take for the team to integrate and maintain the third-party software with existing build systems? How does this compare with building the same infrastructure on top of open source frameworks?
  5. Licensing and vendor fee

Manual testing: Is it more feasible to hire accessibility testing experts in-house?

The benefit of doing this is that internal testers become dedicated product experts over time and build relationships with development teams. The downside is that individual testers take time to ramp up and may not scale with fluctuating needs. If the demand for manual testing varies over time, it might make more economic sense to hire a third-party vendor with that flexibility. A hybrid model, with a few in-house testers and additional testers brought in when needed, could be the best option for some companies.

Third-party vendors tend to have varying pricing models. They can charge in a number of different ways:
  • a lump sum

  • per hour

  • per run

  • per screen

  • per issue found

  • per build

  • per developer

For example, if a vendor’s pricing model is based on the number of developers working on a repository, you would only want to count the developers who work on user-facing features that impact accessibility.
  6. Coverage
    a. How much of the application is covered?

    b. Of all the criteria you want tested, how many are covered? You can use the WCAG guidelines as a baseline for this calculation, but a better baseline is the subset of guidelines that applies to your product. You could write your own checklist using WCAG as a framework to get the best of both.
  7. SLAs

Does the vendor have a reasonable service level agreement, that is, how much time will they take to address issues once reported? How will reporting work, and is it in line with current workflows? For example, if there is a known issue with their system, will the team be notified? If the team notices an issue, is there an easy way to get in touch with the vendor’s team? The level of support needs to be good enough to justify not building a testing team internally.
  8. Actionability of reports

Any reports generated by the third party, either manually or automatically, should include screenshots, detailed explanations of errors, the WCAG (or other) guidelines each issue pertains to, and suggested solutions including code samples.

One metric that would be incredibly useful in these reports, and that no vendor in my experience currently displays, is the number of users (globally or in specific countries) who live with the relevant disability and would be affected by the issue.
  9. Parseability of reports

The ability to filter and sort issues by type, screen, severity, and other parameters is key to making reports easily digestible, and hence actionable, for whoever is responsible for triage and assignment. A thorough report loses much of its power if it takes a lot of work to feed it into your workflow.
  10. Customizability

Related to actionability is how customizable the reports, and the workflows surrounding them, are. Is it possible to extract information from the reports through an API that powers internal workflows and dashboards? If not, can the reports be exported into Excel or other document formats for further analysis and assignment? (A minimal sketch of filtering and sorting such a report programmatically follows the note below.)

Note

API stands for Application Programming Interface, a way for systems or computers to share information using a set of predetermined rules. In this case, it could mean an internal reporting application requesting specific information from the report to display it and notify teams in their preferred format.
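As an illustration of the kind of parseability and customizability worth asking for, here is a minimal Kotlin sketch that filters, sorts, and groups a vendor-style issue report. The Issue class, the severity scale, and the sample data are all hypothetical and do not correspond to any particular vendor's API or export format.

// Hypothetical shape of one finding in a vendor report; real vendors
// will have their own schemas (JSON exports, CSV files, API responses).
data class Issue(
    val screen: String,
    val type: String,        // e.g., "Missing label", "Low contrast"
    val severity: Int,       // 1 = critical ... 4 = minor (hypothetical scale)
    val wcagCriterion: String
)

fun main() {
    // Sample data standing in for a parsed report.
    val report = listOf(
        Issue("Checkout", "Low contrast", 2, "1.4.3"),
        Issue("Checkout", "Missing label", 1, "1.1.1"),
        Issue("Search", "Touch target too small", 3, "2.5.5"),
        Issue("Search", "Missing label", 1, "1.1.1")
    )

    // Filter to critical issues and sort by screen for triage and assignment.
    val critical = report.filter { it.severity == 1 }.sortedBy { it.screen }
    critical.forEach { println("[${it.screen}] ${it.type} (WCAG ${it.wcagCriterion})") }

    // Group by screen to see which team owns the most open issues.
    val countsPerScreen = report.groupingBy { it.screen }.eachCount()
    println("Issue counts per screen: $countsPerScreen")
}

If a vendor exposes its reports only as PDFs or static dashboards, even this level of slicing becomes manual work, which is exactly the friction the customizability criterion is meant to surface.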

Customer Service: How to Help Customers and Escalate Critical Issues

If you have the inclusive, comprehensive product development strategy and testing practices discussed earlier in place, this part becomes much easier because, ideally, very few people will need to contact support.

Guidelines and best practices help avoid known issues. However, each user has their own unique way of engaging with products. In the accessibility context, there might be unforeseen interactions and opportunities for improvement that only users with certain limitations, use cases, or intersections of disabilities will understand.

The motivation is no different from that behind support pages, FAQs,4 and feedback channels for all users. Making sure users with disabilities are able to reach relevant teams with requests and feedback through accessible channels is important to ensure they don’t get stuck or feel unheard. For example, if phone support is the only channel offered, it might make sense to evaluate email or chat alternatives for people with hearing impairments or communication anxiety.

The second step in the process is to train agents and customer experience teams in identifying accessibility issues, determining their criticality, and escalating them to the right teams. It also involves using inclusive language in responses, as well as in help pages and FAQ documentation. A customer service guide published by the Ontario Education Service Corporation5 covers basic principles as well as inclusive language guidelines for specific disabilities. The ADA website also has a document with quick tips.6

Reports and suggestions from real users will also serve as a data source for measuring the success of a team’s accessibility efforts. This data source is reliable only when the product team has an effective relationship with users, and users feel invested enough to share feedback.

Similar to holistic audits, these channels can also highlight systemic issues in big distributed teams that can be either centrally resolved or folded into support documentation. A few metrics to track success on this side of the equation are:
  • Number of issues reported relative to application size

  • Percentage of issues resolved

  • Average time to resolve issues

  • Average and total agent time spent on resolution

  • Customer satisfaction ratings

Summary

  • Automated testing means programmatically running parts of an application to make sure they work as expected under different scenarios, and to prevent errors (regressions) if the underlying code changes at a later date.

  • Advantages of automated testing include shorter feedback cycles between discovery and remediation of bugs and identification of systemic issues in large distributed teams, thereby freeing teams to focus on innovation.

  • Halfway between automated and manual testing are scanners, which scan each screen you manually visit and find accessibility violations. This is useful for teams that do not have automated testing practices in place. By highlighting basic errors, scanners can save time in the manual testing phase and leave time for checks including semantic verification.

  • The most comprehensive checklist for accessibility compliance is the one published by the W3C WAI (Web Accessibility Initiative), which can be found at www.w3.org/WAI/WCAG21/quickref/.

  • One practice that works extremely well in the absence of direct user feedback is creating user personas and taking turns using features with related assistive technology as those users.

  • In-house testing experts become dedicated product experts over time and build relationships with development teams. The downside is individual testers take time to ramp up, and may not scale with fluctuating needs. If the demand for manual testing varies over time, it might make more economic sense to hire a third-party vendor with that flexibility.

  • Making sure users with disabilities are able to reach relevant teams with requests and feedback through accessible channels is important to ensure they don’t get stuck or feel unheard.
