Hear what your customers aren’t saying

Some things never change: the biggest technology issues that businesses face come from not listening to customers at all. Blindly moving forward with ideas and products without the right input is akin to a midnight stroll through a minefield – it may go well, but more than likely you’ll be picking up your own legs at some point.

The second biggest problem we see is the exact opposite – focusing too strongly on what customers say. Let me explain.

Once upon a time the inmates ran the asylum and programmers made all the decisions about how systems worked. The lowly user (read: customer) had to like it, since there was rarely an option to lump it. Then along came the field of Human-Computer Interaction and we all learned the power of usability testing. A process came in, and things got better.

But somewhere along the way the whole process started to fall by the wayside, and some people figured that user testing – and guerrilla testing in particular – was the whole answer. We often get involved in projects where the sum of all UX input has been one or two guerrilla tests at the start and perhaps mid-point of a project.

So, people are listening to what customers are saying, and that’s good. But here’s where it’s also wrong.

Polite equals failure

You’re walking down the street and someone stops you and asks for your help. They offer to pay for a few moments of your time, and then they offer you coffee and a biscuit whilst they explain just how useful your time will be to them.

If you’re like most people, then right about now you’re feeling relatively positive towards this person. They’ve been polite, they are paying you money, they’ve offered you food and drink. They value your opinion, which is nice. So when they bring out their baby and ask you to tell them whether their child is beautiful or not, what are you going to say?

Now imagine this is an ugly, ugly baby. Internally you’re thinking this child is going to have an uphill struggle being accepted in the world and should probably consider a career in bell-ringing. What do you say?

Again, if you are like most people, you’re going to pull your punches big-time. Given how positively you feel about the person who’s pulled you into this, what are the chances that you’re going to shoot from the hip and devastate them with the truth? You’re way too polite for that.

And that’s the first problem encountered with too strong a focus on listening to customers: customers can be polite, and polite equals failure. If you don’t get the truth then your business proceeds on the basis of a falsehood. And, horror of horrors, customers are often polite.

Yes means no – sometimes

One of the key goals of user testing is to find out a basic truth – will people use this thing? If the answer is yes then you’re good to move on.

But you only need to consider the last point to realize that you don’t always get the right answer.

A wonderful case in point is the Sony boombox research – an apocryphal tale, but an instructive one. Sony wanted to know whether the red or black model of its new boombox was more appealing to customers, so they ran a focus group. A group of consumers was brought in, shown the two models and asked to talk about them. The consumers stated very clearly that the red boombox was the winner – it was seen as fresh, exciting, more interesting, whereas the black model was too conservative and boring. All of them expressed a clear preference for the red model.

So with a clear answer to their question the session was ended, and the participants were told to take one of the boomboxes from the rear of the room as a gift. When they’d left, all the black boomboxes were gone; the red ones were untouched.

What does that tell us? It tells us that people will say one thing, and do another – something that really shouldn’t surprise anyone who’s watched reality TV or has even the vaguest understanding of human nature. So when your customers see your product or website and say they like it and they’d use it, can you trust them?

The answer is a resounding no. It is certainly indicative, to some degree at least – but it is by no means a guarantee. Yes will often mean no, when heard in this context.

Order up, table 2!

The next problem to consider can be a real killer, but first an explanation. For years now I’ve used an analogy for UX teams: UX is the waitress. Think of a business as a restaurant, with customers coming and going. This restaurant has a world-class chef who can make almost anything your heart desires. But when a customer walks in, how do you ensure they leave satisfied? The answer, of course, is that you need to know what the customer wants, and that’s where UX comes in. UX is the waitress who works with the customer to identify and perfect the order, then brings that back to the business for delivery. That way everyone wins. Without UX you get a perfect meal delivered to the table – but if it’s steak and this customer is a vegetarian, then it becomes an epic fail.

Relating this to guerrilla testing, you’ll find that many guerrilla tests involve using surrogates. Rather than spend time and money recruiting actual or potential customers, you take a ‘general’ profile of ‘web consumer’ (i.e. pretty much any random person) and test with them. Or worse, you test the doorman, the woman next door, your sister or your girlfriend. One project I was involved with last year pivoted an entire e-commerce design based on feedback from a designer’s girlfriend and her two mates.

Now let’s get back to that restaurant. You’re the customer; you’ve just walked in and sat down. The waitress looks at you – but then rings her boyfriend and asks what he thinks you might like. You’ve never met him, but he eats in restaurants and he has a day job, just like you. And there you go, your order of celery soup is on the way. Hope you like it.

You need to talk to and test the right people in order to get anything useful at all, over and above the basic mechanics of design.

The anti-handyman and the drill

I’ve always thought that skills exist on a sliding scale, with ‘none’ right in the middle. This allows for the concept of anti-skill: not only not being any good, but being actively bad/dangerous/scary to others. I recently proved this with a shelving unit I had to put up in my son’s bedroom. I am definitely in negative territory when it comes to DIY skills, but I picked up a drill and a spirit level (the bandaids would come later) and did my best to make that shelf stay up.

Stay up it did, and more or less horizontally too – an outcome I would have been extremely pleased with, were it not for the fact that on exiting the room I found four drill holes on the other side of the wall where I’d gone right through.

My point – beyond the warning not to come to my house when I’m trying to fix shelves – is that people have different skill sets. Consumers/users/customers are rarely expert designers, and in fact make terrible test participants when they are. This might seem self-evident, but it’s something that seems to get lost in the rush to talk to them via testing.

When design goes wrong – and every design you ever test goes wrong somewhere, somehow, for at least some of the people – test participants will often have no idea why. They aren’t experts in design, information architecture or the UX process. They don’t necessarily know why they dislike something, or why they went the wrong way.

But ask them, and they’ll quickly come up with a justification. They will invent or justify their gut response, and sometimes that may be close to the mark. Often though, it will not.

Worse is the fact that they’ll happily recommend ways to fix the problems they’ve just highlighted – problems that may not technically even exist. Just like my original plan to use No More Nails on that shelf, they will happily recommend putting more things on the home page, making that ‘bit’ stand out, and a hundred other fixes that may have no relevance to the solutions you should be considering.

So what’s the answer?

The answer is simple, and it means taking a little bit of a step back. Rewinding to the beginning of this (already too long) blog post, there used to be a more thorough process in place. Perhaps not for every project every time, but in general there was a simple enough way to get it right. And it goes like this:

Understand the need

First you understand the who and the why. Who is the audience (by profile), and what is their need? Research, personas and audience profiles all help with this. At the thinner end of the wedge, when budgets are tight, there are still ways to get this information beyond simply measuring what people do: analytics data and surveys help here, as do basic interviews.

Understanding the need helps you to map out what customers want and need from your business, and that in turn helps you predict what should happen next.
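As a tiny, hypothetical sketch of what capturing this might look like – the profile fields and the example persona below are invented for illustration, not a prescribed format:

```python
from dataclasses import dataclass, field

@dataclass
class AudienceProfile:
    # A lightweight persona: who this slice of the audience is, and what they need.
    name: str
    description: str
    needs: list[str] = field(default_factory=list)

# Example values are invented for illustration.
weeknight_shopper = AudienceProfile(
    name="Weeknight shopper",
    description="Time-poor, shops online after work on a phone",
    needs=["find staples fast", "reorder last week's basket"],
)
```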

Understand the journey

Once you can predict what people want from you, you can map out the journeys they should take. User journeys are excellent visual tools for this, and you can utilize analytics data and behavioral info to refine them over time.

Journeys generally match up intent (“I want to buy an orange”) with action (“I’ll navigate into the Fruit area”) and outcome (“Fruit page is displayed”). They can also include meta-design: an initial chunking of what the user will encounter as they go.
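To make that concrete, here is a minimal sketch of how a journey step could be recorded – the JourneyStep structure and the second step are assumptions for illustration, not part of any standard:

```python
from dataclasses import dataclass

@dataclass
class JourneyStep:
    intent: str   # what the user wants: "I want to buy an orange"
    action: str   # what they should do: "navigate into the Fruit area"
    outcome: str  # what should then happen: "Fruit page is displayed"

# A journey is simply an ordered list of steps.
buy_an_orange = [
    JourneyStep("I want to buy an orange",
                "Navigate into the Fruit area",
                "Fruit page is displayed"),
    JourneyStep("I want this orange",
                "Add it to the basket",
                "Basket shows one orange"),
]
```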

Observe the variance

Knowing what the customer wants, and what they should be doing to get it, it’s now simple to observe what they actually do. And this is where testing absolutely wins out as an excellent step – watching what actually happens, as opposed to what you expected to happen, is where all the gold nuggets can be found. You see where they get confused, you watch them select the wrong option, you observe as they think they are on the right track but aren’t.
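As a toy illustration of spotting that variance – assuming sessions are logged as ordered page names, with hypothetical pages throughout – a sketch like this finds the first point where an observed path leaves the expected journey:

```python
def journey_variance(expected, observed):
    # Return (step, expected_page, actual_page) at the first departure
    # from the expected journey, or None if the user stayed on the path.
    for step, (want, got) in enumerate(zip(expected, observed)):
        if want != got:
            return step, want, got
    return None

expected = ["Home", "Fruit", "Orange", "Basket", "Checkout"]
observed = ["Home", "Fruit", "Search", "Orange", "Basket"]

print(journey_variance(expected, observed))
# (2, 'Orange', 'Search') -> the user searched instead of browsing; now
# you have context for whatever reason they give you afterwards.
```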

And most importantly, that clarity helps you massively in interpreting what they then say. Now when users are confused or invent reasons why they went off the path you have the ideal context within which to interpret their words.

Remember, we haven’t ‘asked’ customers whether they like anything, yet – we’ve tried to understand them, we’ve predicted their behavior, and we’ve observed what they actually do. Entirely different process.

Measure the maze

Finally, once you have responded to what you’ve learned, it is a simple process to keep improving. Measuring performance at different points within the maze allows you to tweak and improve, to poke and prod the walls until people are moving smoothly through it. Testing helps here, but this is where analytics and conversion optimization really come to the fore.
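A rough sketch of what that measurement might look like – the step names and visitor counts below are invented for illustration – is simply computing step-to-step conversion so the worst drop-off stands out:

```python
def funnel_conversion(step_counts):
    # Given (step name, visitor count) pairs in journey order, report
    # the conversion rate across each transition.
    return [
        (f"{a_name} -> {b_name}", b_n / a_n)
        for (a_name, a_n), (b_name, b_n) in zip(step_counts, step_counts[1:])
    ]

counts = [("Fruit page", 1000), ("Orange page", 620),
          ("Basket", 310), ("Checkout", 45)]
for transition, rate in funnel_conversion(counts):
    print(f"{transition}: {rate:.0%}")
# Basket -> Checkout converts at 15% -- that's the wall to poke and prod first.
```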

 

So there you have it. The moral of the tale is simple. There is no problem with listening to what your customers tell you, but it is much more useful to listen to what they don’t tell you. Sony learned that lesson well, and went with the black for their new model, which went on to sell strongly.

And if you want more information on that process – just call us.

Gary Bunker

the Fore