World Usability Day 2010 – Part Two


If you haven’t read Part One of this post, find it here.

I apologize. I failed at my promise not to take as long getting this post up. It’s been 90% done for several weeks, just waiting for me to go back and finalize it – taunting me the whole while.

So, picking up where we left off, there are three presentations remaining, starting with a surprisingly interesting one from a government employee.

Government on the Move

Seeing someone from our state government there was not necessarily surprising, considering how close we were to the Capitol Building and my knowledge of the state site’s existence, but I wasn’t sure if I’d be sleeping by the end.

As it turns out, Chuck Baird was an entertaining speaker and even started out by joking about the bad rap government speakers get. He shared how usability came to his attention and grew from there, as well as the ways the state had started to look at usability in its development.

While anyone can tell you that there is a lot of work to be done on the state’s site, Baird was able to share the Mi Drive site, MDOT’s Twitter account and the mobile version of the state’s site.

Best Practices for Mobile Application Design

This presentation, by Jason Withrow of Washtenaw Community College and Usable Development, LLC, mostly discussed the basics of how to think about and work in the mobile space.

Withrow started with the range of mobile devices – screen sizes, software, supported devices, storage and microbrowsers. All of these elements can vary across a wide range. For example, the average cellphone has a screen width of 100-320 pixels, while the iPhone is 480 pixels wide and PDAs can range from 320-640 pixels.

From there he moved on to specific design constraints focusing mostly on hardware like the keyboards on mobile devices, touchscreens and potential bandwidth (both connection speed & data use).

For the sake of space I’m going to break down the main best-practice points into an outline.

Low-end device implementation
– best done in (x)html + css with either a handheld stylesheet, a .mobi address or an m.domain address
– functionality should be focused on the key task the app is meant to do
– the same amount of bandwidth is consumed when elements are hidden but not removed from mobile visibility/functionality
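A rough sketch of the handheld-stylesheet approach (the filenames here are placeholders, not from the talk):

```html
<!-- Desktop browsers use the screen styles; many low-end
     microbrowsers honor the handheld media type instead. -->
<link rel="stylesheet" media="screen" href="desktop.css" />
<link rel="stylesheet" media="handheld" href="mobile.css" />
```

Keep in mind that anything mobile.css merely hides with `display: none` is still downloaded, which is the bandwidth caveat above.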

High-end device implementation
– Option 1: (x)html + css combined with JavaScript, using a css media query based on device characteristics
– Option 2: write the app in a programming language supported by the device
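Option 1 might look something like this (the breakpoint values are illustrative, not from the talk):

```css
/* Default styles serve low-end devices; the media query layers
   richer styles onto devices that report a wider screen. */
#content { width: 100%; float: none; }

@media only screen and (min-device-width: 481px) {
  #content { width: 60%; float: left; }
  .sidebar { display: block; }
}
```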

– Streamline task completion: keep a consistent theme in mobile design and focus on doing 1-2 things well
– Design for preferences and patterns: remember the user’s frequent selections & most recent actions
– Design for single-hand use: can the user operate it with one finger? Limit the need for keyboard & multi-touch.
– Keep Fitts’s law in mind: size is critical – targets need to be big enough to acquire easily, without clutter around them to hit accidentally
– Use mobility to your advantage: take the user’s geographic location into consideration
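To make the Fitts’s-law point concrete, here’s a hedged CSS sketch (the 44-pixel figure follows Apple’s touch-target guidance; the class name is made up):

```css
/* Big, well-spaced targets are easy to acquire with one thumb
   and hard to hit by accident. */
.toolbar-button {
  min-width: 44px;
  min-height: 44px;
  margin: 8px; /* breathing room so neighbors aren't tapped */
}
```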

– transitions between portrait/landscape modes require bigger adjustments than on PDAs/smartphones
– drag & drop is possible

Creating Beautiful Android & Apple Mobile Applications Easily

When we first saw this topic on the docket, we really weren’t sure if we’d get anything out of it because when it comes to coding… well, that’s what the devs are for (and we appreciate it greatly). I am so glad we decided to stay anyway.

Nick Kwiatkowski (@quetwo) started out with a few stats to keep in mind –

– 4.6 billion cellular subscriptions worldwide
– 3.5 billion are capable of text messaging
– 257 million data plan holders within the USA (over 1 billion world wide)

Not everything is HTML5, and people are expecting an app, but the fragmentation of mobile devices makes very few things compatible across the board. Because of this, creating apps for multiple platforms can be very expensive in development hours.

Adobe has published an application framework, AIR, that allows developers to create applications that run across Mac, Windows and Linux.

AIR is either compatible or soon to be compatible with iPhone, Android, Windows & BlackBerry, and allows you to re-use code to create an app over multiple platforms. This is not a miracle tool. The toolsets will never be as powerful as an app targeted at a specific device, and the context (phone vs desktop vs other) does need to be considered before sharing the code blindly.

I’m including my notes from the talk, but if you really want to get a good grasp of how this works I suggest going to Kwiatkowski’s blog for his notes, PowerPoint, etc.

Start by selecting your chosen device in Device Central

Create a new document in Photoshop
– since it is exported from Device Central, it is already set to the correct specifications
– make sure each element is on its own layer, because the prototype will lose its taxonomy
– elements should also be divided into folders
– do not rasterize labels and fonts; this breaks accessibility

Adobe Flash Catalyst
– create a new project from a .psd
– it will import with an options screen (choose a color other than white so people don’t take it as something being wrong)
– Ctrl+Enter will run the project in Internet Explorer
– use Shift to select multiple elements
– with a click, change elements to other types of functionality (Photoshop box -> text box)
– specify as needed (parts, repeated elements, etc.)
– create multiple views in the pages/states sections
– create interactions from the data list

Now into Adobe Flash Builder (you can also use Flash Professional)
– right-click in the project explorer, then Flex mobile AIR project
– list target platforms; leave blank to import a project in progress
– use multiple Photoshop files for real-life re-orientation
– the app ID is usually your domain name backwards
– it automatically imports code with the Catalyst final design file
– debug the file for errors
– can export to code or the internet
– Flash Professional is a little more advanced with iPhone

There you go. I hope you’ve enjoyed this overview even if it has taken me far too long to post it.

World Usability Day 2010 – Part One


It has taken a couple of weeks to get to writing this, but back on Nov. 11th @caitlinpotts and I had the opportunity to head out to Lansing for the day and soak in some mobile device information at World Usability Day (#wud2010). I’ve realized in pulling together my notes that this would be awfully long as one post, so this’ll be a two-parter.

We started off our day with a quick stop at Panera Bread for breakfast and coffee before heading over to the Kellogg Center on Michigan State’s campus. Once there we met up with our friend Mike (@UXmikebeasley) and settled in for the day.

On to the presentations…

Talking Points

While our first speaker’s topic had absolutely nothing to do with the mobile apps we’re currently considering at work, Mark Newman of the University of Michigan gave a fascinating presentation.

“I can get where I need to be and if I get lost I can find my way out.”
– Talking Points Test Participant

Talking Points is a research project they’re working on to use smartphone technology with gesture-based interactions to help visually impaired people learn about their surroundings.

Basically, the project uses GPS outside and, eventually, wi-fi trilateration inside to provide visually impaired people with information on their surroundings via community-gathered GPS/wi-fi tags. These tags would be for paths, areas, points of interest, path intersections/decision points and functional elements like entrances or restrooms.

The goal of this project is to foster not only spatial awareness in navigation, but also comfort, security, exploration, improvisation and the discovery of new resources (aka independence). You can find more information on the project’s site.

The Art of Mobile User Experience Research

The second presentation was by Kris Mihalic (@suikris) from Nokia. He started out talking about the division in understanding humans that occurs in the technological workplace between market research and UX research.

Market research tends to focus on opinions and quantitative data gathered from focus groups, surveys and ideation sessions, while UX research is usually more interested in behaviors – qualitatively analyzing behaviors and anthropological studies for insights into why people do what they do. UX is also usually closer to product & design than market research is.

There are three pillars that Mihalic focused on for Mobile UX – speed, flexibility and context.

Speed refers to all of the things involved in having a high speed of execution including fast prototyping, recruiting and delivering results, which includes involving stakeholders early.

One quick prototyping method that I’ve not used but sounded fun was sharpie movies – sketching in Sharpie, then compiling the sketches into a movie. Since he is embedded with a development team, his suggestion to become BFFs with the developers to gain their investment in prototyping was amusing. Cookies are generally a great form of bribery ;)

Flexibility referred in large part to platforms (focus on the important ones), interaction and hardware. He specifically mentioned being prepared and informed about the situation you’re working in and making sure that your stakeholders understand the constraints of what you are and are not able to do.

The last pillar, context, stressed the timing within the development cycle, use vs. research, support/vendors and being contextually aware of your team, business, users and research partners. One suggestion he made was using video diaries from testers, which put the device into a real-life context and are also powerful with stakeholders.

Mihalic’s last point was that the greatest return on investment (ROI) of UX research comes from taking the information gathered and optimizing your product’s ‘sweet spot’. You can’t do everything, but optimize that which you do well.

Building Mobile Experiences

The next presentation was by Frank Bentley of Motorola Mobility. He had some great visual examples that I unfortunately do not have to share with you, but there are some great points even without them.

The first and very important point is that while there is a good bit of similarity between phones (especially smartphones) and computers, they are used very differently.

This point is so important because it goes back to the context that Mihalic mentioned and that testing in a lab alone won’t give you the whole story. You get the basic data from lab tests, but it doesn’t answer the question of how it’s really used and misses all of the creative ways that people use their devices in real life.

Because of the shortfalls of lab testing, their goal is to have a functional prototype that they can put out for field evaluations with beta testers within 1-2 weeks.

With these prototypes he emphasized building only what you need (!!!!!), focusing on the experience instead of the technology, and making sure that your prototype is sturdy enough to survive testing.

He also emphasized the importance of recruiting according to your goals for testing, and that the test device needs to be the participant’s primary device to truly understand the context and how it is used. It is also best to recruit a diverse testing population so that similarities are less likely to be coincidental.

Lastly, the first prototype doesn’t need to be complete or fancy. The faster that a prototype exists – even if it doesn’t have all the polish – the sooner you can get real feedback to see if the project is going in the right direction.

That’s it for World Usability Day – Part One. Keep watch for Part Two. I promise it won’t take as long to get out as this first one did. I failed at that one… but Part Two is now up!

Accessibility & the Web


Last week Caitlin and I traversed down to the University of Michigan in Ann Arbor to participate in Environments for Humans (@E4H) Web Accessibility Conference. Our fearless leader has started teaching us in the ways of accessibility, but it’s been snippets here and there so this was a great opportunity to expand our knowledge even further (Mwa ha ha ha ha!).

There were eight speakers and topics. Here’s an overview –

HTML5 w/ Christopher Schmitt (@teleject)
Progressive Enhancement with ARIA w/ Aaron Gustafson (@AaronGustafson)
Accessibility and Compatibility w/ Jared Smith (@jared_w_smith)
Accessible CSS w/ Marla Erwin (@marlaerwin)
Practical Accessibility Testing w/ Glenda Sims (@goodwitch)
Future Trends in Accessibility w/ Daniel Hubbell (@rollyo11)
Mobile Accessibility w/ Derek Featherstone (@feather)
Is Universal Design Still Possible? w/ Matt May (@mattmay)

Obviously I can’t cover everything that we learned but here are some of the takeaways I got.

People are different. One size doesn’t fit all.

We should all be aware of this, but when we approach a subject like accessibility there tends to be this one-size-fits-all idea, which simply isn’t true. The needs of someone with a sight impairment are different from those of someone with a hearing or physical impairment. If we don’t know our customers and their needs, how can we think that we are really building anything for them?

It’s not are we dealing with disabled people. It’s how many people are we reaching & how many are we leaving behind. – Matt May

Accessibility impacts more people than we realize.

When the subject of accessibility comes up, it often conjures images of someone who is blind using a screen reader or voice commands to navigate, but the audience is much larger than we initially realize.

According to DiversityInc, more than 1 in 5 Americans has a disability, but not every accessibility issue is due to a handicap. In fact, Hubbell shared that 54% of the population ages 18-64 would benefit from some form of assistive technology.

This number should not be so surprising. I myself will many times turn captions on on my TV even though my hearing is fine. It started when my exchange daughter was living with me. She speaks English very well, but it was easier for her to follow screen conversations with subtitles/captions. Just this past week I was trying to figure out how to turn on captions on DirecTV just because the conversations around me made it difficult to catch onscreen dialog.

Consider accessibility at every step in the project.

This shouldn’t come as any surprise, but it is a whole lot easier to plan in accessibility from the beginning than to try and add it in at the end. As Marla Erwin said, an accessible website starts at the design stage, not the coding stage. Things like zoom, adjustable fonts, alt text, link clarity, clickable space, screen readers and background colors all take a lot less development time to include from the beginning than to come back and address later.

If your boss doesn’t see the value in thinking about accessibility, feel free to point out to them that they’re missing out on a trillion dollar market, including $220 billion in discretionary income. It shouldn’t be the reason we build accessibility, but sometimes mentioning the potential market helps.

Compliance is not enough (even if it’s more than a lot of people are doing).

Compliance with accessibility standards doesn’t equal usability. Accessibility is about more than just checking off the compliance boxes. Just play around with a color-contrast checker for a bit and you can see how some extremely compliant color combinations make your eyes cry out for mercy (it can’t just be mine).
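To see how a color pair can pass the checkboxes and still hurt, compare these two combinations (both clear WCAG’s 4.5:1 contrast minimum for normal text; the hex values and class names are just illustrative):

```css
/* Pure blue on pure yellow: roughly 8:1 contrast, which passes even
   the stricter AAA level, yet the combination practically vibrates. */
.compliant-but-harsh {
  color: #0000ff;
  background-color: #ffff00;
}

/* Dark gray on off-white: roughly 12:1 contrast, and easy on the eyes. */
.compliant-and-comfortable {
  color: #333333;
  background-color: #fafafa;
}
```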

The same can be said for alt text. Just throwing some random words in there isn’t going to make your site accessible. Thought needs to be put into the choice of text and it should convey the functional content of the image it belongs to.
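For example, take a hypothetical search button – both of these pass an automated “alt text is present” check, but only one conveys what the image does:

```html
<img src="magnifier.png" alt="image" />  <!-- compliant, but useless -->
<img src="magnifier.png" alt="Search" /> <!-- conveys the function -->
```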

Accessibility is about real people who live real lives. They are our neighbors, siblings, parents, grandparents, friends and co-workers. Their needs are as important as any other customer’s, and as professionals we need to care about more than whether we meet the minimum requirements.

It is not enough to just hack around on these things. We need to get involved. – Matt May