It's not U, it's me

When you're researching UX and improving design, there are two completely different types of finding. Both are useful, but sometimes the line gets blurred and this can lead to issues.

It's all about me

First of all, there's opinion.

As a UX specialist I will often look at a design and have an opinion on what's working and what is not. I can perform an expert review which identifies the positives and negatives, based on thousands of user tests and a solid understanding of good design practice. So my opinion is certainly valid - not always correct, but valid.

Expert opinion is often a guess. It's an educated guess, hopefully, and it can be based on years of experience and insight into behaviour. But it's still a prediction of what might go wrong.

It's all about U

Then there's the U in UX - the users.

When I observe user testing or run user research, I'll hear things and observe behaviour. This is observed/learned research, and is quite different from my opinion as a UX specialist. I might see someone struggle to achieve a goal, or might hear them state that something was confusing. This knowledge is coming from the research, not from my personal opinion as to whether something works. I may have to interpret why they are going wrong, but the fact that they are going wrong is just that - a fact.

Observed findings are real. They are tangible, measurable records of what happens when people interact with technology. Like any scientific findings, they should be based on research that is repeatable, measurable and unbiased.

Getting mixed messages

Obvious? Sure. Both have their merits and their uses. Most projects will employ both forms of research.

But you'd be surprised how often this line is crossed.

Not so long ago I was invited to observe some user testing - which is absolutely about observed findings. Good user testing should have at its core the three tenets (repeatable, measurable, unbiased), which this test did. An unbiased test script took independently recruited participants through a realistic and repeatable set of tasks on a fairly balanced prototype. The facilitator was an unbiased professional who wasn't invested in the design. Sessions were recorded and findings were analysed after each one.

But during the tests, both the client and the UX resource recording findings would comment on the design as it appeared - what was wrong with it, what needed to improve. One of them would point out a flaw, the other would agree it needed to change, and the UX resource would add it to the mix of issues found.

Once the testing was complete all of these findings were added to a sheet and then prioritised for solutions and changes.

Only, by this point it was almost impossible to identify where any one issue had come from. And herein lies the problem: personal opinion on what was wrong with the design was given equal weight in the list of priorities with real issues that had been observed to be a problem.

And now it's time to break up

Still not sure why that's an issue? Let me give you an example.

A few years back I worked with a client who wanted to improve an online web service, something with some pretty complex functionality. We were short on funds so user testing wasn't an option, but they had budget for an expert review (my professional opinion and best-practice comparison), followed by their own ad hoc user testing with friends and family.

Once this was completed I was able to point them in the direction of a few changes that would deliver some great benefits to the design. And remember, this was based on my expert opinion but also on their own ad hoc testing, which had confirmed our concerns.

They were okay with this - but the product owner really wanted to improve one particular part of the design that he hated. He felt it was old fashioned and slow, and had seen some pretty cool new ways for it to work. Crucially, this area hadn't been highlighted as an issue during the expert review, or during their ad hoc testing. I pointed this out, but could see this was still a priority for the owner.

I'd delivered my work so I stepped out. Several months later I bumped into the client, and we chatted about progress. At this stage he told me he was pretty disappointed, though not with my work or the UX stage overall. As you probably guessed he'd decided to prioritise that area that he didn't like and he'd reworked it at great expense and over quite some time. He'd tweaked some of the other issues we'd found, but not many - since he'd focused on his own opinion and the area he most wanted to fix, convinced it was going to add real value.

Only it hadn't, of course. Despite the effort poured into it, the new design slipped into a sea of indifference and sank without a trace, without delivering a single happy customer.

At the end of the day he'd been unable to separate the opinion from the fact and he'd paid the price for it - money down the drain and no positive impact on the product or his customers.

Staying afloat and healthy

As I said at the start, both forms of finding are valid and both are valuable. Expert opinion is faster and cheaper than direct UX research. It can lead design (rather than review what's there) and it's more freely available in many cases - though it's still just a prediction. UX research is more accurate as it's real rather than predictive, though it is more costly and slower and is often less useful when trying to lead rather than review (think of the iPod). 

Horses for courses. But the key is to know which one you're working with. 

When you're running UX research, make sure the findings are observed, not opinion-based. I try to avoid carrying expectations and bias into the room when observing user tests. I may have my opinion as to whether the design will work well, but I've been surprised on many occasions, positively and negatively, so I've learned to leave those opinions outside the room. And try to stop others in the room from offering too much opinion, as that can easily sway you into recording their opinion as a finding.

As with most important topics in the world today, it's important to separate opinion from fact, and fact from 'alternative fact'. Otherwise the relationship - and your product - can quickly end up on the rocks.

Gary Bunker

the Fore