Ostmodern love user testing. We do lots of it, and it’s never as much as we’d like. We test for different reasons, from early concept testing to usability testing on live sites. Here is an introduction to the way we approach user testing at Ostmodern.
We don’t always have an opportunity to interview users at the beginning of projects, but we do always interview participants when we test prototypes. We do this to learn more about our users and feed the findings back into our design process. It helps to enrich what we have already learned from data such as analytics or existing personas. We want to know about real behaviours and habits, but more importantly, why people behave as they do.
We split time with participants between research interviews and prototype testing. The interview also helps to get the participant thinking about the domain we will be testing, be it watching sports videos or choosing a broadband supplier.
We tend to outsource testing moderation, as it’s easier to recruit participants and we have limited suitable space in our current open studio. It also introduces an unbiased presence: someone who did not build the prototype, will not lead participants, and won’t take offence when something ‘obvious’ isn’t so obvious after all. We have tested with individuals and families in London, outside London and even in Moscow. Testing in Russia was a particular challenge, requiring lots of planning, including language translation services. We are moving towards moderating tests ourselves, and will be guerrilla user testing if we can’t get a testing room sorted soon.
We prepare extensive testing scripts to give to the test moderator. This sets out the goals of the product we are designing, a little background on the project and what we are trying to achieve from testing. We then explain a little about the prototype scope, the level of fidelity, what type of data it contains (placeholder content or live data), when to expect work-in-progress and ‘final’ prototype versions (these usually arrive late at night before the first day), and how often we expect to change the prototype during the course of testing. This last point is key to the way we approach testing: it’s very much about rapid iteration.
Sometimes we prototype using software such as Axure; then we try to learn from each test and make changes between participants. This can range from copy and style tweaks to larger changes once we see patterns emerging across participants. If our prototype is more complicated, such as HTML or an installed Smart TV app, we will try to make changes between days of testing. In that case, we try to leave a day or two between testing days so that we have time to learn from each day’s participants and make changes to the prototype.
On testing days, we arrive early and run through the test script to make sure everything works on the test machine under the conditions on the day. We have had to contend with poor internet connections, missing fonts and freezing cold rooms. Once tests start, we begin furiously taking notes. We used to take notes in shared Google Docs, but this tended to result in only the Ostmodern team taking notes, not the client. Now we put up large sheets of paper all over the walls, hand out Post-its and Sharpies (of course) to everyone in the room and encourage everyone to write down their thoughts. These can be notable quotes, or descriptions of what participants are doing, or not doing. We also keep a log of prototype changes, which we then prioritise between test sessions. Important, quick changes are made right away; important changes that require more effort are made as soon as possible; less important changes are made when we get the chance, or after testing.
We always cram lots of people into the observation room. Ostmodern UX and design teams are always represented, and we push for as many people from the client as possible. We don’t only want client UX, design and product managers in the room. We want C-level people to witness how products are taking shape, and how learning directly from users can make the product better. It is much more compelling to see participants use prototypes than to hear reports after the event. Dropping in to watch just one participant isn’t ideal, but it’s better than none. Staying for a whole day of participants is great, especially seeing which prototype changes we make, and the participant behaviours that led us to make those changes. We workshop changes with the client and test things out immediately. This leads to long and tiring days, but we move fast and learn a huge amount.
During tests, we try to make things as naturalistic as possible. We set a real-world task that is quite specific, but framed in each participant’s experiences. Some prototypes have limited functionality, so after a period of free exploration we will guide participants along the paths we need them to take. More advanced prototypes mean we can leave participants to explore and use the product more as they would at home.
Sometimes we experiment wildly with prototypes; this is our time to test our hypotheses. It’s when we can get rid of those egotistical designer assumptions and get down to what will actually work. Over the course of testing, our prototypes become more stable as we learn from more participants. By the end we are happy that we have learned from what participants told us and from what we put in front of them. We usually have a few things that need workshopping further.
At the end of each day of testing we try to make time to summarise the findings using the KJ technique. Everyone who has been observing is invited to join, as well as the test moderator, who has been sitting with participants in another room all day. We take a few minutes to brainstorm what we learnt from the day and write each point on a Post-it. We each stick our points up on the wall and briefly state what each one is. Then everyone groups the Post-its, and we give each group an agreed title. Next we use dot voting to identify everyone’s priorities, the groups are reshuffled in priority order, and we have a clearer understanding of what everyone thought about the testing day. This is a great way to ensure everyone’s voice is heard and to understand individual and group priorities. It helps inform subsequent testing and focus the ideas to be included in any documentation.
Some projects have multiple testing phases. Between phases we may increase the fidelity of the prototype, start using live data instead of placeholder content, or focus in on a particular page or function that we feel needs exploring further. We also test live sites in order to learn from current usage and evolve what we have already built.
Tips for really useful user testing
- Start with a test script - what are the goals of the product, project and testing? What questions will you ask participants in order to learn what you need to fulfil these goals? Then build a prototype with just enough functionality that will let you answer these questions with participants.
- Use the introduction chat, when you are helping the participant relax, to learn about existing behaviours and frame the upcoming test activities.
- Have enough people observing that someone can take notes, workshop ideas with clients and make prototype changes throughout the day. Switching roles means you don’t get too tired.
- Keep clients active, taking public notes or helping with design ideas. They may have a different reaction to what is observed, or offer different solutions to perceived problems.
- Test product hypotheses early on and aim for prototype stability at the end of a testing phase.
- Capture everyone’s thoughts at the end of each day and avoid weighty reports; no one reads them.
That’s a snapshot of how we test at Ostmodern. It’s quick and it’s dirty and it gets results. We are always learning from our testing, not just about our products but also about how we run the testing itself.