Relax, take a PUE - in five easy steps

New Scientist recently reported on efforts by the UK police to use AI (Artificial Intelligence) to stop crime.

It’s an interesting subject that has been reported on before - mostly to the effect of how ineffective such efforts have been so far. Part of the problem is knowing when it does work. If you head to a predicted crime spot and no crime occurs, was the prediction wrong - or did your presence stop the crime from happening? And all the places you weren’t may still see crime: does that mean you should get more resources, or that the algorithms need work? And that’s before you even touch on the ethics of using and acting on such predictions.

We’re not exactly likely to see the Department of Pre-Crime open up any time soon - sorry, Mr Cruise.

But it’s certainly an interesting idea and one that has direct relevance to our field. Predictive User Experience (or PUE, as nasty as that sounds) has been discussed for many years. We’ve been using predictive reviews, audits, sampling and heuristic tracking since the early noughties. Even user testing is a form of Predictive User Experience measurement, predicting the behaviours of thousands from the behaviours of a few.

All these methods are meant to help us see into the future and to reduce costs and risks. And they work.

So, for those who aren’t aware, here are the five levels of Predictive User Experience, and their pros and cons - in reverse order of trustworthiness.

5. Trust me - I know design

Coming in at number 5, my personal favourite. So you know design, do you? You know what works for people?

Trouble is, we all know design to some degree. We all know what works, and what doesn’t. And we’re all wrong, at least some of the time.

Yes, there are design principles to follow and knowing them certainly helps to predict to a certain level how well a design might work. Knowing what makes designs break is a good start and certainly helps.

My favourite ever quote was from a Creative Director. We were having a heated argument about one of his designs looking good but not being easy to use. His response was that I just ‘didn’t understand design’ and that his designs were simply ‘ahead of their time’ when it came to usability. He felt not only that users would persevere in using them but would come to love them - and his quote was “Picasso’s work wasn’t understood in his day but now they’re priceless”.

Which may well have been true for his designs - though I personally doubt that they’ll be worth millions a few hundred years from now. More importantly however, even if they are - the business they were intended for would have gone bust many decades before.


Pros:

  • It’s easy and fast to learn (relatively speaking)

  • It doesn’t change too frequently

  • It gets you to a certain level of usability, without having to know the audience

Cons:

  • It’ll be wrong at least 20% of the time

  • Every rule has an exception (this one included)

4. Trust me - I know people

At number 4, predicting the interactions with the audience because you ‘know people’ - or better yet, you know these people.

The worst user research I ever ran was on a service that had been completely redesigned twice already, due to horrendous reception by the audience. And the kicker was, the team designing it were all from that audience, originally. All of them were convinced they knew what this audience wanted, what their pain points were and how they wanted to work.

What they had forgotten was that the more you learn about design, the less like an everyday user you become. They had, in effect, poisoned what they knew about the audience by learning too much. It’s hard to set aside insights and understanding and see what the world looks like without them.

On top of this, past behaviour is not always a great predictor of future behaviour. New technology, new services, a changing world and changing expectations mean that what made sense a year ago might not make sense today. We regularly see expectations that would have made sense mere months ago get dashed against the rocks in research - our own included. Knowing people only gets you so far.


Pros:

  • Knowing the audience takes time but is immensely rewarding and certainly helps

  • Over short periods of time (less than a year) past behaviours will often repeat - and for many core needs they will transfer even further

Cons:

  • It can take years to fully learn the needs of people, even in a wider sense. I’ve been researching for 21+ years and people still amaze me on a daily basis.

  • For some fields, knowledge of past behaviour is old within months, certainly within years.

  • Context is king - change even a few elements of the mix (people, tech, content, platform, time, geography, culture) and that past knowledge can be worthless

3. Trust me - I’ll ask my brother

Next in the list is looking at current behaviour, for a few.

For example if I want to know how nurses will use and react to a health app I can’t research every nurse - but I can research a few. I’ll see what they do and say, and then use that to predict how the others will respond.

This is user testing/UX research as we most often perform it today.

And it works. It’s current knowledge about current behaviours in context, and in general it works pretty well. However it’s not perfect, and relies on getting the context and the mix right.
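The best-known version of this kind of prediction is Nielsen and Landauer’s problem-discovery model: the proportion of usability problems found with n participants is roughly 1 - (1 - p)^n, where p is the chance that a single participant hits a given problem (around 0.31 in their data - your own mix may differ). A quick sketch:

```python
# Problem-discovery model (Nielsen & Landauer): predicted share of
# usability problems found after testing n participants. The default
# p = 0.31 comes from their published data and is an assumption here.
def problems_found(n, p=0.31):
    return 1 - (1 - p) ** n

for n in (1, 5, 15):
    print(n, round(problems_found(n), 2))
```

This is where the classic “5 users find about 85% of the problems” rule of thumb comes from - and why going much beyond a handful of participants per round gives diminishing returns.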

A great example of this was a recent project we ran for a financial system. Due to budget constraints we were asked to speak to just 5 people, representing 8 different segments and personas. That meant seeing at most one person per segment, and missing some segments entirely. At that scale, it’s no surprise we missed some pretty huge gaps. Because we didn’t predict these issues, the app went ahead without knowledge of them and had to be pulled back for major rework.


Pros:

  • It’s fast, relatively cost-effective and relatively simple to ask the few and predict the many

  • You learn a lot in a short time

  • You see behaviour but also uncover softer knowledge - likes, dislikes, preferences, content responses, etc.

Cons:

  • It’s not always possible (e.g. getting pilots or top C-suite people in a room can be a nightmare)

  • Get the mix or the context wrong and you’re rolling the dice

2. Trust me - I’ve looked at the numbers

Facts beat opinion every time. And much of UX can come down to opinion, unfortunately.

So number 2 is any prediction of behaviour based on numbers.

One example of this is a good heuristic scorecard. For example, we use a scorecard that encompasses several hundred checkpoints of data measured across hundreds of websites and thousands of tests. From these numbers we derive the problems we see most often, and the rulesets that follow from them, and predict that those problems will most likely recur in new designs used in similar contexts. So the fact that “up to 50% of general consumers don’t know a logo links to the site home page” has been tracked across so many projects that we can confidently predict it’ll happen again. And if the site doesn’t have a dedicated Home link, it’ll cause a problem.
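At its core, a scorecard of this kind amounts to checking a design against checkpoints with known historical failure rates. A loose illustration - the checkpoint names, rates and threshold below are invented for the sketch, not our actual ruleset:

```python
# Hypothetical checkpoints mapped to historical failure rates, i.e. the
# share of users who hit a problem when the checkpoint is not met.
CHECKPOINTS = {
    "logo_links_home": 0.50,   # users who don't realise the logo goes home
    "dedicated_home_link": 0.10,
    "visible_search": 0.25,
}

def predict_issues(design, threshold=0.2):
    """Return the checkpoints this design fails whose historical failure
    rate exceeds the threshold - the problems we predict will recur."""
    return sorted(
        (name, rate)
        for name, rate in CHECKPOINTS.items()
        if not design.get(name, False) and rate >= threshold
    )

# A design that passes the logo check but lacks a visible search box.
site = {"logo_links_home": True, "dedicated_home_link": False,
        "visible_search": False}
print(predict_issues(site))
```

The prediction is only as good as the historical data behind each rate - which is why the scorecard needs to be re-measured against real tests over time.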

Another great example is using machine learning on data from call centres and customer contacts. Knowing what people are doing at a larger scale not only allows you to predict what will happen on site A, but also what’s likely to happen on site B, if they have similar contexts.

Machine learning is often applied more in an optimisation framework than a predictive one, but it is certainly a valuable tool.
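A minimal sketch of the idea, with hypothetical contact logs and a simple frequency ranking standing in for a real machine-learning model: if site B serves a similar audience in a similar context, site A’s dominant contact reasons are a reasonable prediction of B’s pain points before B has any data of its own.

```python
from collections import Counter

# Hypothetical contact-centre logs for site A: the topic each caller raised.
site_a_contacts = [
    "password reset", "password reset", "delivery status",
    "password reset", "refund", "delivery status",
]

def top_predicted_issues(contacts, n=2):
    """Rank contact topics by frequency; for a similar site B we predict
    the same topics will dominate, before B has data of its own."""
    return [topic for topic, _ in Counter(contacts).most_common(n)]

print(top_predicted_issues(site_a_contacts))
# → ['password reset', 'delivery status']
```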


Pros:

  • Analytics (in all forms) is far superior to our gut feeling and (potentially) old knowledge from knowing people

  • Facts are facts (sorry, Mr Trump)

  • Numbers can help us predict the numbers and the benefits, as well as the high-level impact

Cons:

  • Machine learning doesn’t come cheap

  • It requires a lot of data to work - and that’s not always possible

  • It takes time to get that data - during which your business can be going down the tubes

1. Trust me - I’ll ask my program

Which brings me to number 1, AI.

And there are two sides of this coin.

Firstly, there is AI that can help us gather data and research customers. For example, some market research companies are starting to use bots (automated programs) to act as researchers and talk with people directly. In a rougher form we’ve had robotic surveys for years, but bots are now smart enough, and becoming flexible enough, to run full interviews - anyone who’s listened to Google’s Duplex booking a hair salon appointment can see how that technology could one day conduct phone interviews with participants all around the world.

And secondly there is the analysis of past data, to make predictions about the future. Which leads us back to the humble British Copper and their predictive policing.

Just as data models can lead to predictions on where crime is likely to happen, CX predictive models could potentially be used to predict where in a service or a design issues are likely to creep in.

It may simply be a case of placing a design before an AI agent, selecting the audience we think will use it, and then seeing where the AI tells us those people will find roadblocks and snags.

This may not ever preclude the need to talk to our customers, but it could supplement and reduce the amount and scale of research we’d need to run.

And who knows, one day those AI engines may be so accurate we’re all out of a job. Which won’t be so bad - as long as the world is an awesomely simple and easy-to-use place.


Pros:

  • Once it works, it’ll be fast, cheap and available to businesses large and small

  • AI can keep track of all the moving parts (technology, trends, knowledge, people) and adjust course over time

  • We’ll be able to retire!

Cons:

  • Just like those policing predictions - we’ll never know how right or wrong we are without validation

Gary Bunker