When Apple first announced the iPad, I started to wonder if we could leverage it in our Disability Services department (DDS). DDS scans a lot of books every year for students. The process is pretty labor intensive, it takes time and storage space on our SAN, and it's not very elegant. The iPad seemed to hold potential as an e-book reader for students with disabilities. I got the chance to take one home last night and put it through some paces. Here are my initial thoughts on the iPad and accessibility. Bear in mind that I do not have a visual impairment, so while I'm looking at this as best I can, I may be compensating at times. I did try several tasks with my eyes closed. There are a lot of aspects to device accessibility, and I focused most heavily on the screen reader for this post.
First, I give Apple a lot of credit for even including voice over, white on black, and zoom in the device. These are solid foundations to work with. Setting up the features would be tough for someone with limited vision, as they are not on by default, but they are easy to get to, and we could pre-configure a device for a student.
I spent most of my time with voice over, as zoom and white on black work pretty flawlessly. Turning on voice over changes the swipe gestures (you can't use voice over and zoom at the same time), and it's not obvious on the device what the new gestures are, but I did find the documentation online.
Interface and voice over set-up
Like iPhone and iPod touch, iPad includes VoiceOver, the world’s first gesture-based screen reader for the blind. Instead of memorizing keyboard commands or pressing tiny arrow keys, you simply touch the screen to hear a description of the item under your finger, then double-tap, drag, or flick to control iPad. VoiceOver speaks 21 languages and works with all of the applications built into iPad. Apple also enables software developers to create applications for iPad that work with VoiceOver.
The navigation of the interface worked well with voice over. Each icon was announced and clearly spoken with instructions for how to access it. Flicking left and right moves through the icons and double tapping launches them. You can double tap anywhere on the screen to launch an app once it has been selected. Although you don’t have to memorize keyboard commands or press tiny arrows, you do have to learn a vocabulary of touch gestures. They’re not complicated, but knowing all of them is key to successful navigation. There are audio cues to let you know when you have reached the end of the things you can navigate through. Some initial training will be needed for students in order to get them up to speed, but I think they will pick it up quickly and learn where the various elements are on the device. Having a regular grid pattern should help folks learn the location of icons and speed up navigation.
You can set the speed of the voice, and voices are included for a variety of languages (changing the language for voice over changes the language for the iPad as well), but you can't choose a different voice within a language.
One nice feature that jumped out right away was that the iPad announced changes in orientation and said where the home button was when it went into landscape mode – "landscape, home button to the right". Nice touch.
The rotor gesture (two fingers turned in a circle, like spinning a dial) switches between reading by word and by letter; flicking up and down then reads in the chosen unit, while swiping left and right reads the whole word.
I did notice a minor hassle when unlocking the iPad. It defaults to announcing the time, and then you have to swipe right twice to get to the unlock button and double tap to unlock. It seems like focusing on the unlock button would be a better default, or maybe folks will appreciate a talking clock.
The rotor gesture comes into play more strongly in Safari, where it lets you pick the type of element you want to move through – links, visited links, headers, form elements – and navigate among just those instead of having to work through the whole page. You select the element type with the rotor, then flick up and down to move between matching elements. It works quite well once you stop flicking left and right, which was my initial instinct. It pays to read the documentation.
Flicking left and right navigated through all the page elements. There doesn't seem to be a way to have it auto-start and run through all the elements; you have to flick one by one. It seems like a simple software option, or a new gesture, could add that. All in all, web navigation was quite good, although it does depend, as always, on web authors structuring their pages in ways that are accessible.
I was excited to try the iBooks app with voice over. Double tapping the iBooks app got me the bookshelf and a spoken "Store Button", which is the left-most navigation element in the app. Switching to list view added the search box to the top of the display, but didn't really change the navigation. Being a good consumer, I went to buy a book (being a state employee, I opted for a free one). The modal dialog for the iTunes user login was a bit touchy to navigate, but I got through it and soon had a copy of Heart of Darkness, although I wouldn't have known it if I couldn't see the screen, as voice over did not announce that the download was complete.
Double tapping the book opened it, and this is where some issues appeared. I swiped right to work my way down to the text, but got stuck navigating only the header elements – Library, Table of Contents, Author, Title, Brightness, etc. The only way I could initially get reading to start was to tap somewhere on the screen to have it read that line, then swipe left to start continuous reading. Then I discovered I could two finger tap to start and stop the reading and switch between navigating and reading. It wasn't highly intuitive, and I had some inconsistent experiences; sometimes tapping in the text area wouldn't get me back to navigation, though it did work consistently when the reading reached the end of the page. It looks like there is a bug in the voice over cursor, as it stopped following the elements when I swiped after two finger tapping.
I imagine your mileage is going to vary with iBooks unless Apple enforces some sort of accessibility standard for the platform.
Once I got the reading started, it worked quite well. It does not automatically advance the page, so you have to three finger swipe on the right edge to turn the page. Swiping left when the page ends will start the reading again, so the user has good control over the interface; it doesn't run off and advance until you tell it to. The rotor gesture is active and let me select characters or words, but flicking up and down did not speak them, except on the navigation elements. If voice over is reading when you change pages, it continues reading on the next page. If you interrupt the reading and then change pages, you have to restart the reading by swiping. Additionally, you have to have the voice over cursor focused on the page text in order to advance the page, which you get to by swiping. This happens naturally if you are moving from page to page, but if you interrupt the flow it takes a few swipes to get back on track.
I was able to get the "copy, dictionary, etc." dialog to come up, but there was no way to access it using gestures – at least none I could figure out. This cuts voice over users off from the dictionary. It would be nice to see Apple enable that.
Unfortunately, the table of contents was completely cut off. I could find no way to navigate it by swiping, and worse, when I two finger tapped, the focus seemed to leave the iBooks app and I got stuck navigating the status bar at the top of the screen. The only way I could recover from this was to tap around until the table of contents was read, and then I could navigate it.
We converted a book to ePub format, synced it to the iPad and voice over worked just as it did with a book from the iBooks store.
All in all, I think with some time I could get pretty efficient with voice over and iBooks (as long as I didn't need the table of contents). Like any screen reader, it has a learning curve, but it is pretty small and the gestures are easy to remember. If Apple can clean up the navigation and the table of contents issues, this could really have some potential for us.
The keyboard really shines on the iPad. Letters are clearly announced, and you can turn on phonetics (touching A gets you "A" "Alpha"). Dragging your finger across the surface reads the key labels to you. Double tapping inserts the character, and insertions can be announced with a pitch change. You can have words or letters read back after they are entered, and these settings can be changed to suit your needs. Having to double tap a key to enter it does slow down input, but it enables accuracy. You're not going to write the great American novel on your iPad with voice over enabled, but it gets you web pages and the other basics. It would be nice to see some more fine-grained control – like the ability to have it speak the letters and input them when you release your finger. That's how the non-voice-over keyboard works, and once you learn to visualize the layout, I would think a voice over user might crave that speed.
I haven't had a chance yet to try external keyboards.
I tried voice over with a few downloaded apps, with little success. My biggest disappointment was the New York Times Editors' Choice app. I could get the story headlines to work, but could never access the story content itself. Given that this is a text-heavy app that would seem ideal for voice over, I wish the Times had put a little extra effort into fully integrating their app with the voice over API.
NPR gets a bit more credit, as their app is at least somewhat navigable and you can tap to access content in news stories. The odd voice over cursor focus issue appeared again in this app, so I wonder if there is something on Apple's end that needs fixing.
I tried a public radio app that was written for the iPhone, and when I switched it to full screen mode I was able to navigate it with gestures and voice over worked. The regular size version also worked if I tapped on it, but finding that smaller window in the middle of the screen when you can't see the screen would be a challenge.
Poor third-party app integration with voice over isn't Apple's fault; they provide the framework for developers to leverage these features. It would be nice to see more of them doing so, particularly on the iPad.
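To give a sense of what that integration involves, here is a minimal sketch of UIKit's accessibility hooks, written in modern Swift for readability. StoryView, headline, and body are hypothetical stand-ins for something like a news app's article view; the accessibility properties themselves are the real UIKit API that lets VoiceOver see custom-drawn content.

```swift
import UIKit

// Hypothetical custom-drawn article view. Standard controls like UILabel
// are accessible automatically; custom views have to opt in like this.
final class StoryView: UIView {
    var headline: String = ""
    var body: String = ""

    func configureAccessibility() {
        // Make this view a single element the VoiceOver cursor can land on.
        isAccessibilityElement = true
        // Spoken first when the element gains VoiceOver focus.
        accessibilityLabel = headline
        // Spoken after the label; here, the article text itself.
        accessibilityValue = body
        // Tell VoiceOver to treat the content as readable static text.
        accessibilityTraits = .staticText
    }
}
```

Nothing here changes how the view draws; it only describes the content to the screen reader, which is presumably why text in apps that skip this step stays invisible to voice over even when it is plainly visible on screen.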
All in all, I was impressed with the out-of-the-box accessibility features on the iPad. Apple deserves a lot of credit for including these on the device, and the overall implementation is good. There are a few quirks here and there, but most of those seem solvable in software updates. A direct deal with textbook publishers might save our DDS folks a fair amount of work. I could envision a scenario where we lend iPads to students instead of scanning their books, if all the pieces fall into place. Given the additional things students could do with the device (web browsing, dictation via Dragon), there might be a compelling argument for that.
Voice over did have an impact on battery life. In about an hour or so of testing the battery drained from 100% to 80%, which was faster than when not using voice over. This is certainly expected and the battery life is still quite good.
We're going to keep experimenting, get the device into the hands of some folks who can really put it through its paces, and post more updates as we learn more.
Update: Thanks to Twitter, I found this article describing the experience of a person who is actually blind. Nice to see that his initial experience is positive.