A method for quantifying user experience
Back in January 2009, I published my definition of user experience. UX, as user experience is popularly called, is a difficult subject to discuss with business clients. To them, “UX” is just more expensive hot air from the folks who brought us the dot bomb.
The basic problem is that discussing an experience – any experience – is highly subjective. And although others have attempted to set up metrics (notably Robert Rubinoff’s User Experience Audit and Livia Labate’s User Experience Health Check), we don’t always end up with particularly useful data. Here at FatDUX, we were looking for a simple tool that could help us turn observations and subjective conclusions into useful dialog with our clients.
Our UX quantification model will undoubtedly be criticized by the scientific hardliners. But it does help us uncover many problems and communicate these to the client. And it works better than beating them over the head with statistics.
Please note, we take a very broad view of “user experience,” incorporating both online and offline interactions of three types. Please refer to the original user-experience blogpost for details regarding these types of encounter.
Avoiding complicated algorithms
There are lots of complicated ways to work numbers, particularly when dealing with the subjective data that invariably lies at the heart of any discussion of user experience. But rather than putting together confusing formulae to present our research, we work directly with our clients to quantify empirical observations in a very simple model.
The model in brief
We start by consolidating our research findings in a single first-person narrative – an X-log (experience log). This is somewhat related to phenomenology. Once we’ve assembled this story, we work together with the client to:
1. mark each individual interaction – we call these “snapshots”
2. assign a value from 1 to 3 to each snapshot in relation to its contribution to the overall experience
3. grade the experience on a scale from -3 to +3
4. multiply the value by the grade to get a score (this is the really useful number)
5. note any events that are recurring, unique, or may be influenced by chronology (cause and effect relationships).
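The steps above can be sketched as a small data model. This is a minimal illustration, not part of the original method; the class name, the validation, and the sample snapshot values are all invented here:

```python
from dataclasses import dataclass

@dataclass
class Snapshot:
    """One marked interaction from the X-log narrative."""
    label: str
    value: int   # step 2: 1-3, the snapshot's contribution to the overall experience
    grade: int   # step 3: -3 to +3, how the interaction was experienced

    def __post_init__(self):
        if not 1 <= self.value <= 3:
            raise ValueError("value must be between 1 and 3")
        if not -3 <= self.grade <= 3:
            raise ValueError("grade must be between -3 and +3")

    @property
    def score(self) -> int:
        # step 4: multiply the value by the grade
        return self.value * self.grade

# A hypothetical snapshot: a high-importance interaction (value 3)
# that went very badly (grade -3) yields the worst possible score.
s = Snapshot("Reaction to lack of purchasing options", value=3, grade=-3)
print(s.score)  # -9
```

Note that the score range runs from −9 to +9, which is what makes a single bad but important interaction stand out so sharply.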
Plugging in the numbers
We mark each interaction, but some may later be thrown out if they are sufficiently trivial or so unique in character that they are deemed irrelevant in the broader, generic sense of the project. No individual snapshot can be assigned a value of 0; if you feel one truly deserves a 0, it’s probably an interaction you’ll want to ignore.
When we grade the individual snapshots, we use the following scale:
+3 = fantastic
+2 = good
+1 = better than expected
0 = no effect on the ultimate user experience
-1 = poor
-2 = awful
-3 = mission critical
Unique or chronological events won’t always influence the score, but a repeating event is one that clearly needs to be looked at carefully.
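To see how the scores single out problem areas, here is a hypothetical scoring pass over a handful of snapshots. The values and grades below are invented for illustration, not taken from a real client session:

```python
# Hypothetical (label, value, grade) triples:
# value = importance (1-3), grade = experience (-3 to +3), score = value * grade.
snapshots = [
    ("Find website on internet", 2, 1),
    ("Reaction to lack of purchasing options", 3, -3),
    ("Reaction to rude ticketseller", 2, -2),
    ("Reaction to sound", 3, 3),
]

scored = [(label, value * grade) for label, value, grade in snapshots]

# Sort ascending so the biggest problems surface at the top of the list.
for label, score in sorted(scored, key=lambda pair: pair[1]):
    print(f"{score:+3d}  {label}")
```

Sorting by score is what turns the X-log into a rough priority list: the most damaging interactions float to the top of the conversation with the client.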
A sample narrative
Here’s a simple story based on a trip to the movies. It represents an amalgam of several user interviews, onsite research, review of user-satisfaction surveys, etc.
My family (my wife, myself, and our two kids) decided to go to the movies. We checked the internet and found the website for our local cinema complex after a quick search on Google. But we had to click three times to get to the program page and wait through a silly animated ad for a movie that hadn’t even been released yet. Worse still, we were forced to download a PDF to find out the specific movie names and playing times. And after all that, we couldn’t even order tickets online, much less purchase them, so we couldn’t avoid waiting in line when we arrived. You’d think a big four-screen complex would have a more sophisticated website. But we did find out what was showing, decided to see the latest Harry Potter movie, and piled into the car.
Finding a parking place was easy. The theater has a big lot, which is important since driving to this particular theater is really our only option. Just as we were leaving the car, it really started to rain, but happily, the entrance wasn’t far away.
There were three ticket windows open, so the lines were short. The girl behind the counter was noisily chewing gum and barely looked up during the entire transaction. In fact, she didn’t say a single word to me except to ask for the money. Wow, prices have really increased this past year. I was surprised at how expensive it was.
The lobby was inviting and quite clean. We bought popcorn and soda at the concession and found our way to our particular auditorium. It was easy to spot the signs pointing the way. As we approached, we noticed overflowing trashcans with popcorn and other garbage from previous audiences.
The seats were well-marked and easy to find. The seating was comfortable but there was old popcorn underfoot. The temperature in the room was pleasant, although all of the wet people made it get a little steamy. The sound was great and really enhanced the special effects, so we really enjoyed the movie. When we left, there was a nice usher, who opened the exits and wished us a pleasant evening as we went out. And it had stopped raining. A nice end to a nice family outing.
Defining the interactions
Reading through the narrative, we mark the individual interactive events – the snapshots. This gives us the following list:
1. Find website on internet
2. Click three times to find relevant page on site
3. Reaction to irrelevant animation
4. Find schedule (download PDF)
5. Reaction to lack of purchasing options
5a. Opinion of website
6. Park car
7. Reaction to parking lot
8. Reaction to rain
9. Reaction to proximity of parking to entrance
10. Reaction to short line
11. Reaction to rude ticketseller
12. Buy tickets
13. Reaction to ticket prices
14. Reaction to lobby
15. Buy popcorn and soda
16. Find auditorium
17. React to overfilled trashcans
18. Find seats
19. Reaction to seats
20. Reaction to popcorn on floor
21. Reaction to temperature
22. Reaction to steaminess
23. Reaction to sound
24. Reaction to movie
25. Reaction to nice usher
26. Reaction to dry weather
26a. Opinion of evening
Note that opinions are not really interactions, hence we have 5a and 26a.
Assigning values and grades
Ask your clients to help you fill out the values and grades. This is a great way to get clients emotionally involved in the design project without having to show them pretty layouts.
Once the chart is filled out, several things become painfully apparent. First, the lack of purchasing options is really a major problem. The need to watch irrelevant animations and resort to PDFs for information was also pretty bad. Snapshots 11, 15, and 25 suggest that additional emphasis should be placed on customer-service training for front-line personnel. Snapshots 17 and 20 illustrate that cleaning is a problem. Snapshot 22 revealed that the climate-control system was out of whack, which proved to be an easy repair.
The most important point of the exercise, though, was that the client suddenly understood how all of these events ultimately contributed to the total perception of the movie-going experience. The X-log narrative started a productive dialog about user experience and not about the color of the links.
We hope you’ll find it useful.