One of the most amazing things about the Apple Watch is how useful it is for people with disabilities. As with the iPhone, you might think that a person with a visual impairment would have serious difficulty using it; even someone who merely has trouble reading small text might struggle with a small piece of glass (or sapphire) with no buttons on its surface and tiny text. However, watchOS includes some amazing accessibility features to help these users and others. From simple things, like making the onscreen text bigger or zooming in on the entire interface, to larger affordances like VoiceOver, which speaks the items onscreen using text-to-speech, the accessibility features of watchOS make the device great for a much wider range of users. The best part is that for us developers, supporting these features in our apps is incredibly simple and straightforward. Really, the only feature we need to do anything to support is VoiceOver.
Setting Up VoiceOver
There are a few ways to set up VoiceOver on an Apple Watch. You can open the watchOS Settings app, navigate to General, then Accessibility, then VoiceOver, and turn it on manually. Or you can do the same in the Watch app on your iPhone. For development purposes, you’re probably going to want to turn it on and off a lot. The easiest way to do that is to open the Watch app on your iPhone, navigate to Accessibility under General, tap Accessibility Shortcut, and select VoiceOver. This enables triple-clicking the Digital Crown on your watch to toggle VoiceOver on and off. For debugging your app’s accessibility support, this is by far the most convenient option.
The rest of this chapter will focus on VoiceOver support, but while you’re enabling VoiceOver to test your app, check out the other accessibility features that are available. Even better—enable them; then start your app. Start some other apps on your watch. You’ll quickly realize a difference between apps that support accessibility features and apps that don’t, and while you may not need to support them to be financially successful in the App Store, users who rely on these features will thank you.
To start improving accessibility in TapALap, let’s begin with the Go Running screen. This is the first screen new users will see, so it’s important that it work well. The Selected Track and Start Run buttons both sound right, but the groups at the bottom could use some work. Each label is spoken individually by VoiceOver. Not ideal. Let’s start with the Average Pace label. While this label is not actually filled in with real run data yet, we know that the “9:12” represents an average pace of 9 minutes and 12 seconds. You need to tell VoiceOver to say that instead, and to do so, you’ll specify an accessibility label for the label.
Accessibility labels, as well as the other accessibility-related properties you’re going to use, are defined on WKInterfaceObject, from which all interface objects inherit. To set an accessibility label on an interface object, simply call setAccessibilityLabel(_:) with the text you’d like VoiceOver to read. In this case, you can also set the text in the storyboard. Open up Interface.storyboard and select the 9:12 label in the Go Running interface controller. Unlike most other interface object properties, which you set using the Attributes Inspector, accessibility information is set using the Identity Inspector. Open it with ⌘⌥3 and you’ll see the accessibility information at the bottom. Enter “9 minutes, 12 seconds” for the accessibility label; then rerun the app using VoiceOver. Much better!
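If you’d rather do it in code, here’s a minimal sketch, assuming averagePaceLabel is the outlet connected to the 9:12 label:

// Equivalent to entering the text in the Identity Inspector.
averagePaceLabel.setAccessibilityLabel("9 minutes, 12 seconds")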
What makes a good accessibility label? The label should speak the content of the element to the user. Be sure not to include identifying words like “button”; watchOS will add them when necessary. Most of the time the onscreen text itself is sufficient, but anytime text in an element is abbreviated, incomplete, or cropped to fit the small size of the watch face, use the full text for the accessibility label. If you’re using an NSFormatter, it’s a good idea to have two formatters: one for the interface text and one for the accessibility label. Actually implementing the Average Pace label is an exercise left to the reader, but it would look something like this:
func updateAveragePaceLabel(averagePace: NSTimeInterval) {
    let labelTextFormatter = NSDateComponentsFormatter()
    let labelAccessibilityLabelFormatter = NSDateComponentsFormatter()

    labelTextFormatter.allowedUnits = [.Minute, .Second]
    labelAccessibilityLabelFormatter.allowedUnits = [.Minute, .Second]

    // "9:12" for the onscreen text…
    labelTextFormatter.unitsStyle = .Positional
    // …and "nine minutes, twelve seconds" for VoiceOver.
    labelAccessibilityLabelFormatter.unitsStyle = .SpellOut

    averagePaceLabel.setText(
        labelTextFormatter.stringFromTimeInterval(averagePace))

    averagePaceLabel.setAccessibilityLabel(
        labelAccessibilityLabelFormatter.stringFromTimeInterval(averagePace))
}
In this example, we create two NSDateComponentsFormatter instances: one using the .Positional units style, which formats our pace as “9:12,” and one using the .SpellOut units style, which formats our pace as “nine minutes, twelve seconds.” While this method of handling the average pace requires us to duplicate some work, it’s pretty simple once the code is there. The only problem with this part of the screen now is that the Runs and Average Pace labels below their respective values don’t add much when spoken on their own. It would be better if the two sets of labels could be read together: instead of “5,” “runs,” “nine minutes, twelve seconds,” and finally “average pace,” it would be nice if our users heard just the two data points, “5 runs” and “nine minutes, twelve seconds average pace.” Let’s look at how to accomplish that next.
Both of the labels for statistics on this screen are in a group with a caption label underneath. Conceptually, to a VoiceOver user, the group is one thing: on the left, the number of runs, and on the right, their average pace. The fact that we’ve split out the data and the captions inside groups really doesn’t matter to VoiceOver users. To achieve this, you’ll set the groups as accessibility elements, which tells watchOS to treat them as a single unit of content. Let’s set this up to see how it works.
Open the watch app storyboard and select the groups. In the Identity Inspector, you’ll notice that the accessibility options we were using before don’t appear. By default, groups are not accessibility elements; they’re invisible to the VoiceOver system, so the inspector shows their accessibility as Disabled. Enable accessibility for each of the two groups.
Build and run again, and move your finger over the groups. Now, instead of speaking the content of the individual labels, VoiceOver speaks the content of the groups all at once. Marking an interface object as an accessibility element causes VoiceOver to speak its contents; just as you can enable accessibility for a group to read the group’s contents, you can disable it for any other element to cause VoiceOver to skip it. This is perfect for images and text content that are for decoration only. You can do this in code, too; simply call setIsAccessibilityElement(_:) on an interface object to change the value.
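The code route is a one-liner per element. A minimal sketch, using hypothetical outlets named runsGroup and decorativeImage for illustration:

// Treat the whole group as a single accessibility element so
// VoiceOver reads its labels together.
runsGroup.setIsAccessibilityElement(true)

// Hide a purely decorative image from VoiceOver entirely.
decorativeImage.setIsAccessibilityElement(false)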
Accessibility labels and elements are great for reading your content, but what about the Selected Track button? For a VoiceOver user, it might not be obvious that this is the button they need to press in order to select a track. Let’s help them out and give them some more information.
When you select a button with VoiceOver, you’ll hear the system speak the word “button” after it reads the label. Your Go Running interface controller has two buttons: Selected Track and Start Run. While the Start Run button is fairly self-explanatory, it may not be immediately obvious to the user that they should tap Selected Track to select a new track. You use the chevron graphic to try to indicate this for sighted users, but for VoiceOver users, you can do even better.
In the storyboard, select the Selected Track button and open the Identity Inspector to change accessibility settings. The item you’re interested in is Hint. The hint should describe to the user what happens when she selects the button. For this button, set the hint to “Tap to select a track.” Build and run, and then navigate to the button in VoiceOver. You’ll hear something like this: “Selected track. None. Button,” and then, after a short delay, “Tap to select a track.” The hint allows you to give some extra contextual information to the button that can really help users understand the behavior of your app. Of course, you can also set the hint in code, using setAccessibilityHint(_:) on any WKInterfaceObject subclass.
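In code, that’s a single call, assuming a selectedTrackButton outlet for the button:

// Spoken after a short pause, following the label and "Button."
selectedTrackButton.setAccessibilityHint("Tap to select a track.")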
While all of these examples are great for built-in controls, sometimes a complicated layout in your interface needs a little more help. If you’re using images with multiple pieces of data in them, having VoiceOver read the entire image’s accessibility label might not be what you want. For more complicated image-based layouts, WatchKit has one more API to help VoiceOver users get the most out of your apps: image regions. Image regions allow you to break an image into multiple pieces, with each piece having its own accessibility information. If you have, say, an image of a pizza where each half has different toppings, image regions are a great choice to break that up. If you did any web development in the dark days before CSS, you may remember the technique of image maps[13] to split an image into multiple clickable regions. Image regions in WatchKit are similar.
Image regions are defined in the context of the source image. If you have an image of a pizza that’s 200 pixels square, you’d define the regions of the image as shown here:
let firstHalfFrame = CGRect(x: 0, y: 0, width: 100, height: 200)
let secondHalfFrame = CGRect(x: 100, y: 0, width: 100, height: 200)
The frames of the accessibility regions are in the coordinate system of the image you’re labeling. Once you have the frame, you create WKAccessibilityImageRegion instances and set two values: their frame and the label to speak when they’re active:
let firstRegion = WKAccessibilityImageRegion()
firstRegion.frame = firstHalfFrame
firstRegion.label = "First Half: Pepperoni, Onions, and Mushroom"

let secondRegion = WKAccessibilityImageRegion()
secondRegion.frame = secondHalfFrame
secondRegion.label = "Second Half: Pepperoni"
This code sets up two regions: one for the pizza half with pepperoni, onions, and mushroom, and one for the half with just pepperoni. The last step is to add the accessibility regions to the image:
pizzaImage.setAccessibilityImageRegions([firstRegion, secondRegion])
Just like that, an image is better for a VoiceOver user: she can move her finger across it to “read” the regions. While not every image needs accessibility regions, they’re a great way to enable VoiceOver users to get complicated information from your UI.
The enhancements you’ve made to TapALap in the name of accessibility don’t just help VoiceOver users; they also push you to think about the structure of the app in new ways. An app that’s clear, simple, and easy to use for one user should be the same for all users. Continually testing your apps with VoiceOver and the other accessibility features enabled is a great way to keep that perspective.