The m.unc.edu site that we run has been sitting on the old MIT code for some time now. I’ve been working in my spare time (misnomer if there ever was one) to update the site to the new Kurogo framework. The folks that developed the MIT code have now gone and started their own company called ModoLabs and have released Kurogo as the latest version of the framework. It’s quite the difference from the MIT code, but it’s still open source. So, kudos to ModoLabs for keeping the code out there.
The new framework is much nicer, has actual documentation, and is a pleasure to work with. I’ve got a variety of RSS and iCal data sources hooked in, have been able to consume YouTube videos, and have even subclassed a couple of modules to make my own variations (and I’m not a PHP programmer by any stretch of the imagination). It’s nice stuff.
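For the curious, the subclassing pattern looks roughly like this (a sketch only, with illustrative class and variable names rather than our actual code, assuming Kurogo’s convention of site-level module overrides):

```php
<?php
// A rough sketch of the Kurogo subclassing pattern. Custom modules
// live under SITE_DIR/app/modules and extend a framework module.
// The names here are illustrative, not our actual code.
class SiteNewsWebModule extends NewsWebModule
{
    protected function initializeForPage()
    {
        // Let the parent module handle feed retrieval and pagination...
        parent::initializeForPage();

        // ...then hand an extra variable to our customized template.
        $this->assign('customBanner', 'Carolina News');
    }
}
```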
I’m close to being able to release the new version, which will include a tablet UI as well. That’s something I’ve wanted for some time, as giving a tablet user the iPhone UI experience just isn’t the right thing to do. People have expectations of how content is going to look on a particular device, and when we fail to meet those expectations, the user experience dictates how the whole thing is perceived, no matter how good the back-end technology is.
m.unc.edu is the web app part of the mobile experience. Native apps are also available from the Kurogo project and we’ll be looking at those sometime soon. It’s a whole different ball game to do native app work and we’re not currently staffed to handle that. Maybe in the Fall we can find a student who wants to delve into that world and using that as a starting point would be an excellent way to speed up someone’s learning.
I want native apps that do things the web can’t. If we’re going to invest time and energy in a native app, it can’t just be something that recreates the web experience. People can already get the web experience on their phone and we handle that quite gracefully with the web app side. Native apps need to be taking advantage of the hardware. The cool thing is because Kurogo is a framework, we’ll be able to leverage the work we’ve already done for the web app side of the project and allow native apps to access the same data the web apps can without having to rewrite everything.
We recently had several sites that had spam code injected into them due to misconfigured file permissions. It was a very targeted attack and only revealed itself to the Google search bot when it visited the site. Normal site visitors simply saw the regular web page, but when the Google bot came, the script gave it a version of the page that included text about on-line pharmaceuticals. So, the only place you could see the spam was in the Google cache. Since the cache is what is used to show the descriptive text under a link in the search results, we had spam text under our links, which tend to be at the top of the search results.
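If you ever need to see what the bot sees, you can fetch a page while presenting Googlebot’s user agent string. A quick diagnostic sketch (the URL is a placeholder, and this only helps if the script keys off the user agent rather than the crawler’s IP):

```php
<?php
// Fetch a page the way Googlebot would, by sending its User-Agent.
// Handy for spotting cloaked spam that hides from normal visitors.
// The URL below is a placeholder.
$context = stream_context_create(array(
    'http' => array(
        'header' => "User-Agent: Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)\r\n",
    ),
));
echo file_get_contents('http://www.example.edu/', false, $context);
```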
Once we were notified, we quickly addressed the problem, removed the offending script and secured the file. Then the fun began. I went looking for ways to have the Google cache flushed to remove the offending text. I registered the sites with Google’s webmaster tools. This process supposedly connects me to the sites and establishes me as a legitimate owner of them. I then went to the remove URL option (buried under site configuration, crawler access). No luck there: that only lets you remove a particular path from the cache, you can’t use it for your base URL. If you try to put the base URL in there, it wants to remove the entire site from Google’s search index, not just the cache. Eventually I found the remove option for the top-level URL, put in my requests and waited. About 2 hours later almost all the requests were denied because, according to Google, it was a live site and could not be removed:
The content you submitted for cache removal appears on a live page.
As you may know, information in our search results is actually located on publicly available webpages. Even if we removed this page from our index, the content in question would still be available on the web.
To remove this information from our search results and from the web, you’ll need to contact the webmaster of the site in question. Once the webmaster makes the change, you can submit a request to remove the cached copy or simply wait for our search results to reflect this change the next time we crawl the page.
Yeah, that’s what I did, Google.
So, I e-mailed firstname.lastname@example.org and email@example.com and got an automated response from security which basically told me that unless I had identified a breach in a Google product I could forget about ever hearing from them. The abuse team didn’t even bother with an auto-reply.
So we were now in a really interesting position. The sites themselves were fine; in fact, they had always appeared fine to site visitors. Google had used cached versions of our content to drive traffic and ads on its site, and its cache was the only place where our on-line identity was compromised in any way, yet there seemed to be no way to get Google to stop using that content. In essence, we no longer owned our site content, the Google cache did, and the cache was now misrepresenting our sites with no way to have it changed.
24 hours later, one of the sites, which was submitted exactly like the sites that were denied, was actually updated and the cache removed. So much for consistency in the Google tools, apparently. Thinking I could capitalize on that small victory, I resubmitted all the sites again using the same format as the successful one (which was the same thing I did the first time). No luck; within an hour all the requests were denied. Why the one succeeded is still beyond me.
In the interim, I added:
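```html
<meta name="googlebot" content="noarchive">
```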
in the header of the sites. That’s an instruction telling Googlebot not to cache our pages. But with no way to force a re-crawl, we have to wait for the bot to decide when it’s time to update the cache. That tag is now going into all our sites going forward (and probably should have been there all along).
I know Google’s motto is supposedly “don’t be evil,” but they sure can bring out some evil feelings when you try to control your own content that has been co-opted by their tools.
After some waiting, it does appear that many of the sites have now cleared from the Google cache, and the meta tag has prevented re-caching. It took a visit from the Googlebot for this to happen. However, I have one site that Google refuses to remove from the cache when I submit a request because, according to Google, it has already been removed from the cache. A Google search reveals that it is, in fact, still being cached.
Google gets credit for providing a suite of tools for webmasters and site owners to interact with the giant faceless entity that is a search index. However, I remain unimpressed with the consistency of the tools and the ability for site owners to actually effect change in cached site info on Google. I think that if Google is going to cache our content, it should do a much better job of respecting our ability to have that content modified or removed. The current process has the appearance of transparency, but the effectiveness is rather murky.
Now that we have spent a little more time with the iPad, it’s slightly more obvious how to do a few things. The voice over options have a learning page that is quite useful. You have to get the focus on the test area by right-flicking or tapping it, and then it walks you through all the gestures.
Here’s a list:
- Touch – select the item under your finger
- Flick one finger up – move to previous item using rotor setting
- Flick one finger down – move to next item using rotor setting
- Flick one finger right – move to next item
- Flick one finger left – move to previous item
- Two finger flick up – read page starting at the top
- Two finger flick down – read page starting at selected item
- Two finger tap – start or stop an action (like voice over reading); also used to pause the ordered reading in two finger actions so you can select an item
- Four finger flick left – move to previous container
- Four finger flick right – move to next container
- Four finger flick up – move to first element
- Four finger flick down – move to last element
- Three finger flick up – scroll down one page
- Three finger flick down – scroll up one page
- Three finger flick left – scroll right one page
- Three finger flick right – scroll left one page
Turns out you can have iBooks auto-turn the pages (I had gotten this working once, could not reproduce it, and it was driving me nuts). A four finger swipe to the right reads the page but stops at the end. A two finger swipe down starts the reading and it continues page to page. It was hard to get this to behave consistently, or so I thought at first.
For example, when trying a two finger flick up, it started reading from the top – Library, Table of Contents, book title, etc. – but on the Winnie the Pooh book it never jumped to the text and started reading. A four finger swipe did get it started; a two finger tap to pause and then the two finger swipe up started the voice over and auto pagination. So, navigable, but you had to know what to do, and the navigation isn’t consistently delivering what it promises. At first I thought this was just that book, but Heart of Darkness behaved the same way (the horror, the horror) at first. Then I flicked to change pages, two finger flicked up, and it started working, but it didn’t auto-paginate and stopped at the last element (which is technically correct).
A second try and it worked as it should, and I was able to get auto pagination to start by tapping with two fingers to pause the reading, then tapping to start it again. I first thought this might be a bug, but on further thought it seems to be a UI decision. The two finger swipe needs to consistently navigate through all the page elements, regardless of what they are. You need to know that the two finger swipe will get you to the bottom of the screen, reading all elements along the way. In the books app, this ends up being a little counterintuitive, since you feel like once you start the reading it should continue. So, the compromise seems to be the tap to pause and then tap to resume to get the pagination going. Seems like a reasonable decision.
The two finger swipe did solve my table of contents problem: a two finger swipe up correctly worked through the page elements and started reading them. Two finger tap to pause, double tap to select, and you’re off to that chapter.
I did have a more serious issue with Safari in my second round of testing. With no WiFi connected, a four finger swipe down put the iPad into a lost navigation state, where I couldn’t get any voice over feedback or navigate at all. The cursor hung in the status bar and I could not get focus back on Safari. This remained true even after a reboot of the iPad. I couldn’t discover that the iPad was not connected to the Internet, so it was hard to tell what was going on. Seems like there should be a hook in there so that if Safari loads with no Internet connection, there is a way to alert the user. It does do this if you have a page loaded and try to do something, but if you start with a blank page, you can get lost. With an Internet connection, everything behaved correctly, and the combination of the rotor and the multi-finger swipe gestures made for some fairly efficient page navigation.
The Google search box in Safari popped up auto-suggestions for a search, but there was no notification from voice over that this had happened. A similar thing happened in the Google Maps application: the search worked and a two finger swipe started reading my map results, but tapping a more info button popped up a dialog box with no notification and no way to focus on that box and hear its contents. These seem like solvable problems.
So, round two of testing and I’m still impressed. For a just-released device, the iPad has a lot of features that should enhance accessibility. Like many accessible devices, it would probably be helpful for a non-sighted person to have a sighted person walk them through the initial set-up, describe the interface, etc. I do think there is a learning curve, but it seems short, and mastering the gestures isn’t all that hard.
When Apple first announced the iPad, I started to wonder if we could leverage it in our Disability Services department (DDS). DDS scans a lot of books every year for students. This takes time, as the process is pretty labor intensive; it also takes storage space on our SAN, and it’s not a very elegant process. The iPad seemed to hold potential as an e-book reader for students with disabilities. I got the chance to take one home last night and put it through some paces. Here are my initial thoughts on the iPad and accessibility. Bear in mind that I do not have a visual impairment, so I’m looking at this as best as I can, but I may be compensating at times. I did try several tasks with my eyes closed. There are a lot of aspects to device accessibility, and I focused most heavily on the screen reader aspects for this post.
First, I give Apple a lot of credit for even including voice over, white on black, and zoom in the device. These are solid foundations to work with. Setting up the features would be tough for someone with limited vision, as they are not on by default, but they are easy to get to and we could pre-configure a device for a student.
I spent most of my time with voice over, as zoom and white on black work pretty flawlessly. Turning on voice over changes the swipe gestures (you can’t use voice over and zoom at the same time), and it’s not obvious on the device what the new gestures are, but I did find the documentation on-line.
Interface and voice over set-up
Like iPhone and iPod touch, iPad includes VoiceOver, the world’s first gesture-based screen reader for the blind. Instead of memorizing keyboard commands or pressing tiny arrow keys, you simply touch the screen to hear a description of the item under your finger, then double-tap, drag, or flick to control iPad. VoiceOver speaks 21 languages and works with all of the applications built into iPad. Apple also enables software developers to create applications for iPad that work with VoiceOver.
The navigation of the interface worked well with voice over. Each icon was announced and clearly spoken with instructions for how to access it. Flicking left and right moves through the icons and double tapping launches them. You can double tap anywhere on the screen to launch an app once it has been selected. Although you don’t have to memorize keyboard commands or press tiny arrows, you do have to learn a vocabulary of touch gestures. They’re not complicated, but knowing all of them is key to successful navigation. There are audio cues to let you know when you have reached the end of the things you can navigate through. Some initial training will be needed for students in order to get them up to speed, but I think they will pick it up quickly and learn where the various elements are on the device. Having a regular grid pattern should help folks learn the location of icons and speed up navigation.
You can set the speed of the voice and there are included voices for a variety of languages (changing the language for voice over changes the language for the iPad as well), but you can’t choose different voices.
One nice feature that jumped out right away was the iPad announced changes in orientation and said where the home button was when it went into landscape mode – “landscape, home button to the right”. Nice touch.
The rotor gesture (two fingers spun in a circle, like spinning a dial) switches between words and letters; flicking up and down activates the reading in the chosen format, while swiping left and right reads the whole word.
I did notice a minor hassle when unlocking the iPad. It defaults to announcing the time, and then you have to swipe right twice to get to the unlock button and double tap to unlock. Seems like focusing on the unlock button would be a better default, or maybe folks will appreciate a talking clock.
The rotor gesture comes into play more strongly when using Safari, where it allows you to select the navigation element you want to move through – links, visited links, headers, form elements – and move through those instead of having to navigate the whole page. To do that, you select the element type, then flick up and down to navigate between instances of it. It works quite well once you stop flicking left and right, which was my initial action. Pays to read the documentation.
Flicking left and right navigated through all the page elements. There doesn’t seem to be a way to have it auto-start and run through all the elements; you have to flick one by one. Seems like a simple software option, or a new gesture, could add that. All in all, web navigation was quite good, although it does depend, as always, on web authors structuring their pages in ways that are accessible.
I was excited to try the iBooks app with voice over. Double tapping the iBooks app got me the bookshelf and a voice saying “Store Button”, which is the left-most navigation element in the app. Switching to list view added the search box to the top of the display, but didn’t really change the navigation. Being a good consumer, I went to buy a book (being a state employee, I opted for a free one). The modal dialog for the iTunes user login was a bit touchy to navigate, but I got through it and soon had a copy of Heart of Darkness, although I wouldn’t have known it if I couldn’t see it, as it did not announce that the download was complete.
Double tapping the book opened it, and this is where some issues appeared. I swiped right to work my way down to the text, but got stuck navigating only the header elements – Library, Table of Contents, Author, Title, Brightness, etc. The only way I could initially get reading to start was to tap somewhere on the screen to get it to read that line, then swipe left to get it to start reading. Then I discovered I could two finger tap to start and stop the reading and switch between navigating and reading. It wasn’t highly intuitive, and I had some inconsistent experiences; sometimes tapping in the text area wouldn’t get me back to navigation. It did work consistently when it reached the end of the page. It looks like there is a bug in the voice over cursor, as it stopped following the elements when I swiped after two finger tapping.
I imagine your mileage is going to vary with iBooks unless Apple enforces some sort of accessibility standard for the platform.
Once I got the reading started, it worked quite well. It does not automatically advance the page, so you have to three finger swipe on the right edge to get the pages to advance. Swiping left when the page ends will start the reading again, so the user has good control over the interface; it doesn’t run off and advance until you tell it to. The rotor gesture is active and allowed me to select characters or words, but it did not speak them when I flicked up and down, except for the navigation elements. If voice over is reading when you change pages, it continues reading on the next page. If you had interrupted the reading and then changed pages, you have to restart the reading again by swiping. Additionally, you have to have the voice over cursor focused on the page text in order to advance the page, which you get to by swiping. This happens naturally if you are moving from page to page, but if you interrupt the flow it takes a few swipes to get back on track.
I was able to get the “copy, dictionary, etc.” dialog to come up, but there was no way to access it using gestures. At least no way I could figure out. This cuts off the dictionary for users using voice over. It would be nice to see Apple enable that.
Unfortunately, the table of contents was completely cut off. I could find no way to swipe navigate, and worse, when I two finger tapped it seemed to lose focus on the iBooks app and I got stuck navigating the status bar at the top of the screen. The only way I could recover from this was to tap somewhere until the table of contents was read, and then I could navigate it.
We converted a book to ePub format, synced it to the iPad and voice over worked just as it did with a book from the iBooks store.
All in all, I think with some time I could get pretty efficient with voice over and iBooks (as long as I didn’t need the table of contents). Like any screen reader, it has a learning curve, but it is pretty small and the gestures are easy to remember. If Apple can clean up the navigation and the table of contents issue, this could really have some potential for us.
The keyboard really shines on the iPad. Letters are clearly announced and you can turn on phonetics (touching A gets you “A” “Alpha”). Dragging your finger across the surface reads the labels to you. Double tapping inserts the character, and insertions can be announced with a pitch change. You can have words or letters read after they are entered, and these settings can be changed to suit your needs. Having to double tap a key to enter it does slow down the input speed, but it enables accuracy. You’re not going to write the great American novel on your iPad with voice over enabled, but it gets you web pages and the other basics. It would be nice to see some more fine-grained control – like the ability to have it speak the letters and input them when you release your finger. That’s how the non voice-over UI works, and once you learn to visualize the keyboard I would think a voice over user might crave some speed.
Haven’t had a chance yet to try external keyboards.
I tried voice over with a few downloaded apps, with little success. My biggest disappointment was with the New York Times Editor’s Choice app. I could get the story headlines to work, but could never access the story content itself. Given that this is a text-heavy app that would seem ideal for voice over, I wish the Times had put a little extra effort into getting their app to fully integrate with the voice over API.
NPR gets a bit more credit, as their app is at least somewhat navigable and you can tap to access content in news stories. The odd voice over cursor focus issue appeared again in this app, so I wonder if there is something that needs fixing from Apple.
I tried a public radio app that was written for the iPhone and when I switched it to full screen mode I was able to navigate the app with gestures and voice over worked. If I tapped on the regular size version it would work, but finding that app in the middle of the screen if you can’t see the screen would be a challenge.
Third-party app integration with voice over isn’t Apple’s fault, and they do provide the framework for developers to leverage the features. It would be nice to see more of them doing so, particularly on the iPad.
All in all, I was impressed with the out of the box accessibility features on the iPad. Apple deserves a lot of credit for including these on the device and the overall implementation is good. There are a few quirks here and there, but most of those seem solvable in software updates. A direct deal with text book publishers might save our DDS folks a fair amount of work. I could envision a scenario where we could lend out iPads to students instead of scanning their books if all the pieces could fall into place. Given the additional things they could do with it (web browsing, dictation via Dragon) there might be a compelling argument for that.
Voice over did have an impact on battery life. In about an hour or so of testing the battery drained from 100% to 80%, which was faster than when not using voice over. This is certainly expected and the battery life is still quite good.
We’re going to keep experimenting and get the device into the hands of some folks who can really put it through its paces and post more updates as we learn more.
Update: Thanks to Twitter, I found this article describing the experience of a person who is actually blind. Nice to see that the initial experience for him is positive.
I’ve gotten asked a few times lately to document how we integrated Joomla with Shibboleth authentication. It turned out to be fairly straightforward, primarily due to the awesome Joomla Auth plugin from Sam Moffatt.
The first step is getting your Apache server configured to use Shibboleth. The main Shibboleth site https://spaces.internet2.edu/display/SHIB2/Home is your best friend when it comes to this. Pick the install guide for your platform; we are running on OS X, which turns out to be one of the more involved installs. Linux set-ups are pretty straightforward. We already had an Identity Provider up and running on campus, so all I had to do was install a Service Provider.
Once Shib is running, you need to enable it for the host where your Joomla site lives. I just turned on Shib for the entire server using something like this in httpd.conf:
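```apache
<Location />
    AuthType shibboleth
    ShibRequestSetting requireSession 1
    Require valid-user
</Location>
```

These are the standard mod_shib directives, so your exact settings may vary; requireSession 1 forces a Shibboleth login for everything, while the lazy-session variant (requireSession false with Require shibboleth) exposes attributes only after a user has authenticated.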
Next is to install the Joomla Auth plugins. You can find instructions for that in the Quickstart_for_1.5 guide.
I installed the libauthtool package from the file repository, and the plgSSOHTTP plugin from the same spot, since really what we’re doing is using HTTP header authentication.
Configuring these plugins is pretty straightforward. Here’s a screenshot of one of our configured sites. The key is setting the User Key in the SSO HTTP Plugin to match where the username lives in the Shibboleth headers.
In our case, and in most cases, that is REMOTE_USER. The “Username Replacement” option is handy for stripping off the @ portion of the REMOTE_USER data. That allows you to use a regular username in Joomla. For example, firstname.lastname@example.org (my Shibboleth ID) can simply be payst in Joomla, and I can log in as payst. This makes it easier on the users. Your config may vary depending on your Shibboleth set-up or identity management for your area.
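Conceptually, the replacement is doing something like this (a sketch of the idea, not the plugin’s actual code):

```php
<?php
// REMOTE_USER arrives scoped, e.g. "payst@unc.edu". Stripping
// everything from the @ onward yields the plain Joomla username.
$remoteUser = $_SERVER['REMOTE_USER'];
$username   = preg_replace('/@.*$/', '', $remoteUser); // "payst"
```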
Config for the SSO-HTTP Plugin:
Config for the System SSO Plugin:
The hardest part of this was getting the Shibboleth Service Provider set up in Apache. Make sure that works before you start trying to get Joomla integrated. I beat my head into the wall a few times before I realized some of the Shib stuff wasn’t quite right. You can test your Shibboleth authentication by setting up a folder on your web server called something like /test and adding an entry like this to your Apache config:
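```apache
<Location /test>
    AuthType shibboleth
    ShibRequestSetting requireSession 1
    Require valid-user
</Location>
```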
Then drop an index.php in that directory with something like:
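```php
<?php
// Dump everything the web server passed to PHP. With Shibboleth in
// front, this includes the attributes from your Identity Provider,
// such as REMOTE_USER.
echo '<pre>';
print_r($_SERVER);
echo '</pre>';
```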
Visit the /test URL in your favorite web browser and, assuming all is working right, you should get directed to your Shibboleth login page. Once successfully logged in, you should see a page with the full headers from your Shibboleth Identity Provider. This is also a handy way to figure out where your usernames live in the headers. You should see yours in REMOTE_USER, and you can use that info in configuring the plug-ins as I described above.
I hope this helps (and I hope I haven’t forgotten anything)!
The new UNC mobile site has made huge progress in the last month. Live now at m.unc.edu, the site works with iPhone, iPod touch, Blackberry, Android, Palm Pre and many other phones. We have added a growing list of news items and event feeds, links to the UNC YouTube channel (and initial plans to make that look nicer), and for iPod / iPhone users a link to the UNC iTunes U presence that drops you right into the store if you are running the 3.0 OS. The campus directory app is quite popular and I’ve used it myself many times when I was walking to a meeting and needed to call someone who wasn’t in my address book. All in all I am really pleased with the progress and really thankful to the MIT Mobile team for releasing the code.
Goals for go-live in the Fall include a campus map and a library catalog search. Both are tough to implement and may get postponed past an official launch, but we’re working on them. We are also planning a Family Weekend addition so that folks attending that event in October will have schedules and other information available to them on their mobile devices. We should have that out in the September time frame.
Reaction from campus has been really positive. Everyone who sees this likes it and many folks have contributed content or want to. There’s even a link to it on the main UNC home page if you visit there from a mobile device. Could it be that after 3+ years of plugging away on this I finally have a foothold established in the mobile world for UNC Chapel Hill?
The fine, fine folks at MIT have released the first version of their Mobile Web project as open source code. I’ve been messing around with it for a bit today and it is quite impressive. We’ve been up against budget crunch issues for continuing to push into mobile on campus and this platform may be the answer to our problems. It’s not complete at the moment, and I haven’t gotten the mobile device detection to work yet, but the pieces are certainly all there. Many thanks to the MIT team that developed this and to the folks at MIT who got it released to the public as open source code!
There is a strong culture of service on the UNC Chapel Hill campus. Students on their own and student organizations all engage in an incredible array of service activities locally, across NC, across the country and even across the world. This is something we are rightly proud of about our students.
One of the challenges is capturing that activity in some form. There are isolated pockets of scholars who submit reports, class projects for grades, etc. but there is not a simple way for the average student to reflect on what they have done. I think there is a lot of value in these reflections, even if they aren’t highly personal they can still show others how easy service can be and the benefits they can get from it.
In an effort to provide a platform for this, we have launched a new blog platform at serviceblog.unc.edu. Based on WordPress Multi-User, we can host blogs for specific groups, or allow anyone associated with campus to post to the main blog. Since it’s multi-user, the constituent blog posts are rolled up into the main blog and available there as well. This is a recurring theme for me – it’s similar to the way events get into the slice.unc.edu site, where we pull them via an iCal feed from the constituent web sites.
Over the summer I hope to hook the serviceblog to the student org Joomla sites via RSS, letting students post their service activities on their own sites and pulling that info in for display and discovery on the main serviceblog page. Same idea again.
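The aggregation side could be as simple as something like this (a rough sketch; the feed URL is a placeholder for a student org site, using Joomla 1.5’s built-in RSS feed format):

```php
<?php
// Pull a student org's RSS feed and list its recent posts.
// The feed URL below is a placeholder, not a real org site.
$feed = simplexml_load_file('http://studentorg.example.unc.edu/index.php?format=feed&type=rss');
foreach ($feed->channel->item as $item) {
    echo $item->pubDate . ' - ' . $item->title . "\n";
}
```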
I am working on raising awareness of the serviceblog site and hope to see some students sharing their experiences with us soon.