5 tips to a good dash (board)


Over the past year or two we’ve helped design a dozen or so dashboards, for applications ranging from expert systems (internal staff managing hundreds of interactions per day) to consumer apps (one person managing their fitness each week).

But across all of those projects, a number of core lessons kept coming up. So we’ve gathered our key findings and top tips below.


1. Choose the right clothes

We all know that one size does not fit all. And we all equally know that what looks good on a model will often look somewhat different when we buy it, take it home and try it on ourselves. Oh, the shame.

Which is my way of saying, ‘know the customer’. Whilst there are many elements of data design and visualisation that will carry across most audiences, this is not true for all. Let me give you one simple example.

One dashboard we redesigned was for a fitness app (specifically around eating and calories). The initial data views were far too complex, and we worked with the client to simplify them. The end result had perhaps half the data of the initial design, but performed far better in testing.

Compare this to a second dashboard we redesigned, built for financial agents managing transactions and applications. At first glance the dashboard was far too crowded, with tiny fonts used to display huge and complex tables. But on reviewing the requirements and talking with those agents, this was exactly what they needed - no amount of stripping down and simplifying the data was going to work, as they needed all of it on display at different (and somewhat unpredictable) times. So the dashboard remained somewhat cramped, and the agents were absolutely happy with the outcome.

The right clothes for the right person.


2. There’s a reason Police sketch artists exist

Take a look at the following choice. Both describe the man who made off with the bank notes in a swag bag - which do you think would be easier to remember after a day or two?


Police sketch artists exist because some information is far easier to consume in visual form than in data form - and the difference between tables and graphs is a great example of this. The bumps and dips on a graph are far easier to see and understand at a glance than rows of larger and smaller numbers.

But this is a generalisation, and it isn’t always true. For example, we worked on a dashboard where we removed a number of visualisations because they simply weren’t helpful - they showed pies and lines, but these meant nothing to the audience at all.

So a sketch is better than a description most of the time, but sometimes you only need the headline.

3. Ogres are like onions - and it’s not about tears

In Shrek, when trying to explain that ogres are complex beings, Shrek tells Donkey that ogres are like onions: they have layers.

The same is true for a good data dashboard. Try to show everything in one layer, and you’ll almost always get it wrong.

The rule of threes is a great place to start:

  1. Top level - overview of state of play

  2. Mid level - checklist of items that make up the state of play

  3. Low level - detail for the checklist items

Let’s take the example of a banking app. The CommBank app does this extremely well. The top level is visible as soon as you open the app: it gives you your state of play (how much money you have today), along with access to other features and the drill-down.

You can then select View accounts, and see the mid level. You see your accounts and cards listed with their own state of play information, showing how that makes up the overall picture. But it’s light at this stage.

When you select an account you then hit the low level, and you’re looking at the details that make up the state of play for one account - daily transactions and items. Three tiers, and the user has the ability to stop at any level. They aren’t forced to be a donkey, they can peel their financial data like an onion - or an Ogre. Not that you peel Ogres, but you get what I mean.
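If it helps to picture that layering in data terms, here is a rough TypeScript sketch of a three-tier dashboard model. Every name and field below is invented for illustration - it isn’t taken from CommBank or any real banking API:

```typescript
// Illustrative three-tier dashboard model: overview -> checklist -> detail.
// All names and fields are hypothetical, not taken from any real banking API.

// Top level: a single "state of play" figure, visible the moment the app opens.
interface Overview {
  totalBalance: number;
  accounts: AccountSummary[];
}

// Mid level: the checklist of items that make up the state of play.
interface AccountSummary {
  id: string;
  name: string;
  balance: number;
}

// Low level: the detail behind one checklist item, loaded only on drill-down.
interface AccountDetail {
  id: string;
  transactions: { date: string; description: string; amount: number }[];
}

// The detail tier is fetched lazily, so users who stop at level one or two
// never pay the cost (visual or performance) of the data below it.
declare function fetchAccountDetail(id: string): Promise<AccountDetail>;
```

The point is simply that each tier carries only what its layer of the onion needs, and the user chooses how deep to go.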


4. Look for the bubbles

There is a story about the first iPod that goes like this. Steve Jobs wanted the engineers to make it smaller, but they complained there was no way to reduce the size any further - they simply couldn’t deliver on that requirement. So Steve threw the iPod into an aquarium, and when bubbles rose to the surface he said, ‘Those are bubbles. That means there’s space in there. Make it smaller.’

That may or may not be true, but it’s a great analogy for good dashboard design. Only with dashboard design we’re not making it smaller, we’re making it leaner.

You’re looking for what you can remove whilst still keeping the dashboard useful. Most dashboards we see that are failing are doing so not from a lack of information, but from a glut of it. The more information there is, the harder it is to find the insights you need.

So, look for the bubbles - remove everything you can, pare it back, and simplify. Keep it clean and simple, lean and mean.


5. Walk the tightrope

As with so much in life, good design is about balance. Visual balance, goal balance, and a balance between simplicity and functionality.

In designing a great dashboard you’ll be finding the balance between:

  1. Power users (who want everything up front and quickly accessible), and other users (who want simplicity and layered structure)

  2. Generalists (who need a little of everything the system has to offer), and specialists (who only need a few things but more of them)

  3. Task-based users (who want to only see what they need to know right now) and holistic users (who want to see the wider picture all the time)

And that’s just the start.

It’s not easy, and it often involves compromise - but there are routes towards happiness for everyone, too. For example:

  • Customisation - allowing the audience to customise what they see to their own needs (as long as you provide elements they can use to do this)

  • Personalisation - personalising the experience people encounter based on their behaviour, path, archetype or other information.

  • Mode-setting - switching the UI based on activity, to better suit the audience’s mode at the time.


And finally…

None of this works without contextual user testing - or if it does, you’ll never know it. So, test, test and test again. Good dashboard designs will usually require several rounds of iterative testing and enhancing before you’ll know for sure that they are doing the job.

And if you want to know more - contact us.

Say Goodbye to Blu and Jewel

Blu and Jewel, from the 2011 film Rio.

Rio is a movie for kids that many of us have seen (or been forced to watch multiple times). The movie follows the heartwarming story of Blu, a Spix’s Macaw who is thought to be the last of his kind. When it’s discovered that there is a female, named Jewel, Blu is taken to meet her, and the two are captured by smugglers. The rest of the movie follows Blu and Jewel as they attempt to make their way home - and, hopefully, some little macaws.


Recently, it was revealed that the Spix’s Macaw is now extinct in the wild. It is the eighth bird species (that we have been able to record) to become extinct during this century, and based on climate predictions and political blindness, this rate of extinction is more than likely to continue or accelerate, and not just for birds.


Scientists predict we may be entering, or already experiencing, the next Great Extinction Event. We’re losing species at an alarmingly rapid rate, and advanced conservation techniques and technology have never been so important. These include everything from genetic technologies that allow population monitoring to more advanced forms of camera trapping, to bioacoustic monitoring devices.


But enough of the bad news - today we’re here to explore seven of the most promising new developments that are being used to help fight extinction.


1 - Advanced cameras for trapping and monitoring

Camera trapping has been used in research for a long time, but new technology is allowing researchers to view animals in different settings and ways. For example, researchers in the US adapted thermal imaging technology to study the effect of a common disease on bat hibernation behaviour and activity throughout the winter. They were able to leave the cameras in place, meaning the bats weren’t disturbed and the researchers captured large amounts of valuable footage.

While camera traps are usually used for larger or hibernating animals due to their slow frame rate, some researchers found a way to use camera traps on hummingbirds. Camera traps work by being activated when a warm-blooded animal passes through cooler surroundings: the difference in infrared radiation triggers the camera to record. This works really well for larger animals like big cats or bears, but hummingbirds are a different game altogether. The researchers were able to develop a new camera trap in which the sensor was separated from the camera, which allowed them to set up a system with multiple sensors attached to one camera. This gave the camera advance warning of when a hummingbird was approaching.
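To make the idea concrete, here is a purely hypothetical sketch of that ‘many sensors, one camera’ arrangement. The sensor names, timing window and camera calls are all assumptions for illustration, not details from the research:

```typescript
// Hypothetical sketch of a multi-sensor camera trap controller.
// Sensor names, timings and the camera API are invented for illustration.

type SensorId = "approach-east" | "approach-west" | "feeder";

interface TriggerEvent {
  sensor: SensorId;
  timestamp: number; // milliseconds since epoch
}

class CameraTrapController {
  private recording = false;
  private stopTimer?: ReturnType<typeof setTimeout>;

  constructor(
    private startRecording: () => void,
    private stopRecording: () => void,
  ) {}

  // Any outlying sensor firing gives the camera advance warning, so it is
  // already rolling by the time the hummingbird reaches the feeder.
  onTrigger(event: TriggerEvent): void {
    console.log(`trigger from ${event.sensor} at ${event.timestamp}`);
    if (!this.recording) {
      this.recording = true;
      this.startRecording();
    }
    // Keep recording for a short window after the last detection, then stop.
    clearTimeout(this.stopTimer);
    this.stopTimer = setTimeout(() => {
      this.recording = false;
      this.stopRecording();
    }, 10_000);
  }
}

// Usage: wire every sensor's trigger line to the same controller.
const trap = new CameraTrapController(
  () => console.log("camera: start recording"),
  () => console.log("camera: stop recording"),
);
trap.onTrigger({ sensor: "approach-east", timestamp: Date.now() });
```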


2 - DNA sequencing technology

Collecting DNA from the organisms you are studying can grant a huge amount of insight into population dynamics, breeding dynamics, levels of health, disease and migration, along with many other things. However, it can be incredibly time-consuming. Say you need to collect genetic samples from a field of plants. You need to go around and collect the samples, store them properly, and return them to the lab, where you can then begin to study them. Traditional DNA extraction and sequencing protocols can take a long time, but researchers from the UK have recently developed a handheld, real-time device for sequencing plant DNA, meaning they can now potentially sequence an entire field of plants on-site and in real time. This kind of technology has awesome implications for monitoring plant populations and their dynamics and health - for example when selecting potential areas for Koala habitat based on the Eucalyptus species available.


3 - Using drones and AI to fight poachers

The Lindbergh Foundation is an environmental non-profit that is researching and improving drones using AI technology, in partnership with other research foundations. While drones have been used to fight poachers before, there is a high chance of missing the poachers simply by being in the wrong area at the wrong time. However, the researchers were able to train the AI to identify herds of elephants, herds of rhinos, and poachers. This means the drones can accurately identify where the herds are, and therefore where the poachers are likely to be. Additionally, the presence of drones near the herds may stop poachers from ever reaching the herd in the first place.


4 - Protecting bees with circuit boards

Bees are incredibly important to our environment, and recently their numbers have been dwindling. The exact reasons are not 100% certain, but one large issue they face is the Varroa destructor mite. These mites lay eggs in beehives and grow by attaching themselves to bees and feeding on their fat bodies.

In order to combat this without using pesticides that are harmful to both the bees and the environment, researchers developed a circuit board complete with 32 sensors that can be placed inside a beehive. The sensors monitor the temperature in different areas of the hive, and if temperatures are detected which correspond to the presence of Varroa mites, a notification is sent to an app that controls the temperature in the hive. The temperature in that area is then altered to kill the mites without harming the bees.
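As a rough sketch of what that monitoring loop might look like in code - the sensor and app integrations, the ‘mite signature’ check and the treatment temperature below are all assumptions for illustration, not details from the research:

```typescript
// Hypothetical sketch of the hive-monitoring loop described above.
// The hardware API, the mite "signature" check and the treatment temperature
// are assumptions for illustration, not details from the research.

interface ZoneReading {
  zone: number;        // which of the 32 sensor positions in the hive
  temperature: number; // degrees Celsius
}

// Assumed integrations with the circuit board and the companion app.
declare function readZones(): ZoneReading[];
declare function matchesVarroaSignature(zone: number, temperature: number): boolean;
declare function setZoneTemperature(zone: number, targetCelsius: number): void;
declare function notifyApp(message: string): void;

const TREATMENT_TEMPERATURE = 42; // illustrative: hot enough for mites, tolerable for bees

// One pass of the monitor: flag suspicious zones and heat them locally.
function monitorHive(): void {
  for (const { zone, temperature } of readZones()) {
    if (matchesVarroaSignature(zone, temperature)) {
      notifyApp(`Possible Varroa activity in zone ${zone}`);
      setZoneTemperature(zone, TREATMENT_TEMPERATURE);
    }
  }
}
```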


5 - Tracking endangered animals using Smart Collars

When monitoring endangered animals, sometimes just getting a sighting isn’t enough to accurately assess the animal or its movements and behaviours. Researchers are developing new smart collars that not only let us monitor the animal’s location, but can also be fitted with accelerometers and cameras, monitor body temperature and vital signs, and provide real-time information on when the animal is hunting or travelling. Human interactions with animals, particularly in rural or farming areas, can be dangerous to both the human and the animal (wolves, for example) - and these collars may soon give us the potential to reduce the chances of those interactions.


6 - A new way to monitor and understand bird calls

Monitoring bird populations is often difficult, as birds are often discreet and can travel long distances. Additionally, many bird species call at different times throughout the day, making it difficult for one researcher to collect information on all the species present. On top of this, it is very easy to attribute a bird call to the wrong species when trying to listen and identify without seeing the bird. Researchers have managed to develop a tool which can record multiple bird songs at once and analyse them with greater efficiency than was previously possible. While this technology still has a higher failure rate than an experienced researcher, there is no denying how useful the tool could be in monitoring population responses to climate change or environmental disturbance, among many other topics.


7 - Predictive software to help focus conservation efforts

IBM has recently created predictive analytics software which can be used to collect and analyse a huge amount of information about a species. The information collected includes what people think about the animals, where they are located, why they are hunted, and access to medicines, among many other factors. The software can then work out the best areas in which to focus conservation efforts.


While we’re living in an increasingly scary time environmentally, facing the potential loss of thousands of species, researchers all over the world are working on technology and techniques that will hopefully allow us to protect as many species as possible.

And if you’re wondering why we’re writing about this on a UX blog, the answer is simple. UX is about improving the experience - but for us, at least, it’s also about making the world a better place. More usable, more pleasant, but generally ‘better’. And that counts for the planet too. If we don’t get our combined acts together there’ll be no birds, no bees, and no ‘us’ to build experiences for.

So let’s hope we can collectively start making a difference.


You've got an ugly baby

Imagine your best friend has just had a baby, something they’ve been hoping for for years. You’re so excited, dying to meet your new God-child. Your friend asks you to sit down while she grabs the baby, and places the little love-bug in your arms. You move the blanket to take a peek at that gorgeous little face and… is it a gremlin? A hobbit? Some sort of Benjamin Button situation? But of course, you coo and gasp and tell your friend how beautiful and angelic their spawn is. How could you tell her the truth?

 

Who in their right mind would have the strength to say they’ve got an ugly baby?

 

This is how our users can feel when they come in to test our prototypes. When they first meet us, they are walking into our territory (or home) to look at our interface (or baby). We’ve paid for them to be there, so they start from an ‘us and them’ perspective - and the facilitator and the interface are the ‘them’.

There is a separation, however unintentional, between us - and that separation can easily lead to false coos and reports of beauty. We are asking them to review something, we are paying for their time, and because of this they want to be on our side. They want to be ‘right’ and to be similar to us (a basic human trait), and - most importantly - to do well.

It’s entirely human to feel worried about being embarrassed, or to worry about admitting we are wrong. The act of failing shows us where we aren’t living up to societal expectations and standards, and this can be amplified in the presence of other people (Lamia, 2011). This is thanks to the Spotlight Effect - our tendency to think that many more people are watching us and paying attention to us than actually are (Gordon, 2013). We are left feeling vulnerable and exposed, and desperately trying to please and do it right. So how does this affect us in the UX world?

User testing is something we’ve written about before and continue to research. It’s a crucial step in understanding user needs and thought processes, and from there we can design useful, intuitive and innovative products that make users happy. And the first step in that is setting the right psychological environment to get the right responses from the participants.

 

We take steps to make our participants feel as relaxed as possible - we indicate that we are not testing them, that we are looking for ways to improve the design. We meet them and greet them and shake hands, forming minor (but important) bonds. We explain that we didn’t design the interface (creating space between us as facilitators and the interface itself), and we sit close to them but looking at the screen, to physically place us ‘on their side’. For most people, this does relax them and they openly discuss any flaws or issues with the design.

For others we need to expand on this behaviour and continually reinforce it. We need to confirm that their feedback (positive or negative) is good and is helpful - showing them that we are on their team and the interface is the thing we’re looking at. We have to be careful not to reinforce the wrong thing though - showing agreement with a positive comment can quickly lead to people thinking we want to hear positive feedback, which takes us back to psychological square one.

But some people will still struggle with a task or a question during testing, and we as testers will observe this happening. When we then ask them about the task and look for feedback, they might answer something along the lines of ‘That was easy, once I knew where to look’, or ‘I didn’t think of looking there, but now that you’ve pointed it out it makes sense’. So how do we know whether to record this as a problem, or to take their word for it? Well, one indication could be if others struggle with the same task, but this isn’t always reliable - when one out of five or six people struggles, that can still represent a fair percentage of your audience.

 

The problem with ignoring the one-person problem is that we can miss issues that may be minor, but are still causing confusion and trouble for our users. We can try to reassure them as much as possible, but some people will still insist that it was their fault and there is no flaw in the design.

To help us help our users, we need to understand how trust works - how it can be formed, maintained, or broken with people we have just met. We need to get them on our team, to help them understand that it’s us and them versus the prototype. Some of this can be done verbally - reassuring them of the points mentioned above: that we did not design the prototype, that we are not testing their abilities, that we are independent and looking for flaws. We can also build trust with our users through body language. Being warm and welcoming, and greeting the person with a handshake and a big smile, can help.

But in user experience, one of our best weapons is truly not being invested. We have to project that we have no investment in the current design, and that we are there for them - to help them and to learn from them. We don’t want to lead our users into an answer by influencing them one way or another.

So what can we do? Well, we focus on a three-step approach - the Promise, the Price and the Guarantee. In terms of user testing, the Promise is making clear what they are getting in return - an incentive, for example. The Price is what you are asking for in return - for them to be in the test for a certain amount of time, going through the prototype with you. The Guarantee is what happens if something goes wrong - and this is where a large amount of trust can be built. We explain that we can stop the session, we can stop recording, we can solve whatever problems might occur. This might not fully relax every participant, but covering these factors with them helps make them feel as comfortable as possible.

 

We have to remember that at the end of the day, even if it isn’t, our users view the prototype as our baby. And if it’s ugly, they’re not always going to want to share that with us.

Attack of the Furbies


It’s more and more common to see children and toddlers with smart toys, phones, tablets or iPads. This isn’t a new phenomenon - children are always attached to something; before there were phones and tablets, there were board games and Pokemon cards. The difference this time is the level of security - and the potential threat if that security is breached.

 

Phones and tablets are, however, usually well-monitored by parents who can restrict internet access and monitor what their child has been doing. But what about more sinister forms of data collection? More and more toys are being released that have features that allow them to record voices, speak back to children and even ‘see’ some aspects of what’s going on. They tend to connect to the internet or bluetooth and store data in the cloud, which can then be accessed later.

 

CloudPets are a brand of children’s toys - plush animals fitted with a microphone and speaker, allowing kids to record their own messages, or messages from family members, and play them back later. But all of this happens via a Bluetooth-connected app. It was eventually revealed that all of the recordings were being stored in an unprotected location, and in fact had already been accessed and even held to ransom.

 

And this isn’t the only occasion. The Teksta Toucan, a device from Genesis Toys, was tested for security by independent researchers, who found that because the speaker and microphone were accessible via an app, anyone nearby with Bluetooth could find and connect to the Teksta Toucan. Genesis Toys also developed My Friend Cayla - a doll which recorded what children said to it - and it could also be listened to remotely by anyone with a Bluetooth connection. My Friend Cayla has actually been banned in Germany due to this flaw.

 

This is potentially a parents’ worst nightmare - someone outside the house or near your child, able to listen to their plans and activities through their toy - what time they go to school, what time they get picked up, when they’re alone. As well as the potential danger, you have a lot of valuable information being collected - what your child likes and doesn’t like, what attracts them, how they buy things, and so on. All without anyone's permission.

 

So the big question becomes, how do you keep your children and yourself safe from smart toys? Well, a secure internet connection is the first step. Turning the toy off when not in use, not connecting to unsecured or public networks, and not using the toy alone can all help. Research new updates or patches and stay up to date with posts and activity online. However, until there are better regulations in place for these kinds of toys, it may be best to just stick to the board games and Pokemon cards.

AI design needs some 'deep thought'

We all know the answer. 42.

But to quote Deep Thought, "the problem, to be quite honest with you is that you've never actually known what the question was."

When Douglas Adams wrote about Deep Thought in the Hitchhiker's Guide to the Galaxy, he wrote about a machine that humanoids built to do something they couldn't do - work out the answer to 'life, the Universe and everything'. Unfortunately for all (but possibly fortunately for Arthur Dent), it failed.

I was thinking about this today, whilst reading about Google's recent AI efforts with Google Duplex.

For those of you who may not have seen it, Google Duplex was showcased in their recent I/O keynote. They showed how Duplex can make phone calls on your behalf, mimicking a human so closely that the person on the other end of the call doesn't realise they are speaking with a program.

Technical or unethical?

Technically that's a huge achievement, and something we all knew was coming. Hell, we've been begging for it for years - smart assistants that really understand us? Who doesn't want one?

As a UX/CX consultant, it's easy to see this as the first wave of the disappearance of my job. One day there won't be much of an interface to design - you'll just ask, and get. There won't be human researchers, programs will crunch the data and ask what they don't know, and act accordingly. People like me will mumble about 'the way things used to be', curl up in crusty armchairs and knock back another Margarita. 

Only we still have a few problems to solve, don't we.

The ethical bridge

I received an email recently from someone who said they wanted to talk about UX, relatively urgently. They asked me when I was free for a chat, and since I'm a nice chap (and always interested in growing business, it must be said) I called them back later that day to see where I could help.

It turns out, after several minutes of small talk and intros, that the person was, in fact, a recruiter. And the 'UX' they wanted to talk about was, in fact, the provision of his (non)-unique recruitment skills to any recruitment shortage I might have.

Now whilst this person didn't explicitly lie and tell me that he was looking for business, his email was carefully worded to leave me thinking that was the case - and hence to act as if it were. Once the deception was clear I exited the call as politely and quickly as possible, and vowed to lose his details. We do sometimes talk with recruiters, but being deceived as to the nature of the caller as the opening gambit pretty much tops the list of 'ways to lose trust instantly'.

Unfortunately, Duplex was asked to do the same thing. 

By calling people and pretending to be human, it's deceiving them into acting accordingly. Any choices they might have made, or responses they might have considered had they been in possession of the full facts, were denied to them - even if they never became aware of that fact.

In Duplex's case the tool was designed to include noise terms, pauses and sounds ('ur', 'umm') so that the person it was talking to felt comfortable, and thought they were dealing with a real person. If you want to see the call taking place, watch it here.

As I've long discussed there is an ethical dilemma to much of AI that needs to be carefully built in. Any AI platform must act ethically - but if we, as designers, don't build in the right ethics it will never understand what is right and what is wrong.

First, do no harm

Okay, so those words don't appear within the Hippocratic Oath, but they are close enough. Do no harm. Help the person, don't hurt them.

So, could an AI phone assistant cause harm? 

Sure it could - perhaps only emotional harm, but the risk is there. Let's say the person it calls is having a horrendous day. Their partner has been diagnosed with a terminal illness, and they are struggling to cope but gamely trying to keep working. They take a call - and struggle.

Most humans would understand and ask the person if they were okay. Would an AI?

What if it just kept trying to book that appointment, seemingly not caring in the least as the person explains they're having the day from hell? How much might that hurt?

But if the person knew it was an AI, they wouldn't expect buckets of empathy and an offer to come round and help, they'd probably just hang up. No harm done.

Designing for AI - four core elements

When you're designing for an AI interface in any form (text, voice or interface-based) there are four key elements you need to consider:

  1. Context-setting
  2. Engagement points & options
  3. Learning paths
  4. Ethical risk analysis

1. Context-setting

AI needs to set context. It needs to let the person know that this is a machine, not a human. Full disclosure isn't a pathway to failure; it's the first step to building trust, understanding, and sometimes even engagement - our user research shows that many people absolutely love to play with and experience smart tech, so you're losing an opportunity to engage if you don't tell them the truth.

You're also potentially setting the AI up to fail - if people think they are talking to a human then they may deal with it in a way the AI hasn't been trained to handle, and kill the engagement right off the bat.

So, firstly, set the context.

2. Engagement points & options

Next comes engagement.

If a lifetime of CX research has taught me one thing, it is this: humans surprise you.

They do the unexpected. They think illogical thoughts. They act in ways that don't always make sense.

And they don't always understand the rules.

So with any smart system or AI interface, it's paramount that the engagement points are as wide as possible. You only need to go to YouTube and search for fails relating to any device (I won't call out Google or Siri particularly) and you'll see what happens when human and AI interpretation goes wrong.

Humans interrupt each other, leave long pauses, and take huge cues from the minutest of details. One tiny example of this happens every day in my family. My own upbringing (English) means I speak with pauses at the end of sentences, so I'll often pause a little between sentences when getting something out. My wife comes from a South American family where people speak fast and with few breaks - so to her, a slight pause is a signal that I've stopped speaking. The outcome is one frustrated husband who only gets half his words out before the conversation moves on - and there's no AI in sight, yet.

But it's not just voice systems where this is important. I'm currently working on an AI engine that reviews video for you, identifying the content it finds. But if it does that for you, how does the user engage and ask for something slightly different? How does the user tell the AI that it's misunderstood their need?

Identifying the engagement points and the options available at those points is a key path to success - and making those points as flexible as possible (to deal with tricky human illogicality) is paramount.

3. Learning paths

All AI requires learning to be successful.

Whilst much of that happens in the background and from ongoing interactions, learning from the user within the context of a single interaction has the potential to teach far more.

It's like the difference between user testing and analysing your Google Analytics reports. GA reports give you data en masse about how people are reacting to your website: they tell you what people are doing, but often not why. User testing, on the other hand, lets you talk one-on-one with a single customer; whilst it doesn't provide the 50,000-foot view of what everyone is doing, it does let you see exactly what's going wrong, and gives you a chance to correct it. The depth of feedback and learning at that engagement point is staggering in comparison to any analytical, data-based approach.

And the effect it can have on the customer - to be heard, and to have their gripe confirmed and even fixed in that small window - just cannot be overstated. I've seen customers converted into fanboys in moments.

So building one or more learning paths into the tool is a gold-mine, both for longer term improvements within the AI itself and for engagement with and retention of the customer.

4. Ethical risk analysis

My final element to consider is back to my favourite subject: ethics.

When AI gets it wrong it can be funny - YouTube is stuffed with examples where assistants have misunderstood and returned something hilarious. The web is full of autocorrect funnies.

But when a self-driving car piles into a pedestrian, it's not quite as funny.

So it's important to perform an ethical risk analysis, and understand how the risk can be mitigated.

The Duplex example I gave above is one. What happens if:

  1. The person receiving the call gets upset and needs someone to listen and/or care?
  2. The person receiving the call has a heart attack mid call and needs someone to call an ambulance?
  3. The AI misunderstands the call and books something that incurs a cancellation fee the owner is not even aware of?
  4. And what happens to the store if agents repeatedly book appointments that nobody shows up to? Could that extra pressure push them out of business?

Or in the case of the video analysis tool, let's say it analyses a video of children in a park, a fast moving car chase and a dog getting run over - and mixes up the cut so it looks like a child was crushed to death. Could that cause trauma to a viewer? Could they be sued for it?

Ethical risks can be covered by terms and conditions and legalese - but only to a degree. Our responsibility as designers and innovators should never stop at the 'get out of jail (not-quite) free' card.

 

 

Overall Google is doing amazing work, and I love where we're going.

But next time, let's hope the conversation starts with a little more honesty.

Facebook knows all your secrets

Facebook has recently been caught out for collecting and storing huge amounts of user data, and that’s just one app. Now imagine the potential for data-mining if someone had access to your entire phone.

 

Think about it - what have you used your smartphone for this morning? Tracking your diet, your steps, your location? Navigating somewhere? Google recently booted 20 apps from the Play Store after finding that they could record with the microphone, take photos, monitor the phone’s location and send that data back to the developers - all without the user knowing. While the privacy threats posed here seem obvious, it raises the question: how common are these apps? How many more are coming?

 

While the thought of those kinds of privacy invasion is bad enough, it’s also theoretically possible to use the sensors in your phone to track what you are typing - including the PIN code for your banking app. Additionally, the data from the phone’s motion sensors doesn’t require a permission to access, so apps don’t have to ask. And even when they do, people tend to just grant access - for example, when you open an app and it asks for permission to use the microphone. It doesn’t say what for, but you allow it anyway. That’s it - it never needs to ask again.

 

On top of that, recent work has found that 7 out of 10 smartphone apps share your information with third parties. For example, your map app might be sharing your GPS data with whoever else the developer wants to share it with. This can be used for targeted advertising, but there is massive potential for far worse. Perhaps even more disturbingly, the same study found that out of 111 children’s apps tested, 11 leaked the MAC address of the Wi-Fi router they were connected to. It’s possible to search online for information associated with a MAC address, potentially including the physical address.

 

At the moment there’s no real way around this - at least not entirely. And the apps and businesses mining your data don’t make it easy to figure out what’s going on. In light of Facebook being the latest company under scrutiny, I decided to find out exactly what data I could get from my own Facebook account, and how (if possible) to stop that happening. What I found was pretty shocking - it’s incredibly easy, from a user perspective, to download a zip file of the data they have stored on you, and much harder to find out how to protect your information. Click Settings, and one of the options is ‘Download a copy of your Facebook data’. First of all, this means that anyone who happened to gain access to your Facebook account could easily download the same information. But I wanted to see what was actually in the file.

 

What I found was an index of every message I have ever sent or received, deleted information, photos (both posted and sent to or from me in Messenger) and videos. While that doesn’t sound too invasive at first, the content of the messages wasn’t hidden. So I have conversations with my partner where we share bank account details, information about where we live and our house, our habits. More than enough, theoretically, for someone to do anything from stealing my bank details to stealing my identity.

From my perspective, after looking around and playing with the privacy settings on just Google and Facebook, there’s a big hole in the user experience regarding privacy - settings seem to be becoming more and more hidden, making it harder to find them and to see what you’re sharing. More often than not, the request for your information doesn’t tell you the full extent of what is going on, and it can take quite a while of messing around to find the right settings to hide your information.

 

So, is there a way to stop this kind of storage? Apps in particular tend to be very vague in the language used when asking for permission - they won’t tell you straight up how much information is going to be collected. I’ll focus on Facebook here, as that has been the main consideration above. There are some simple ways you can reduce the amount of information Facebook has access to:

 

  • Have a look at your Ad Preferences. You can see the companies or pages that have your contact information, change which information Facebook is allowed to use to advertise to you, and stop Facebook tracking your activity on other websites.

  • Don’t let apps sign in with Facebook. This is another example of hiding how much information is actually being harvested: when you allow an app to log in with Facebook, at a minimum that allows the app to view your public information, and it may go as far as viewing your email address or phone number.

  • If you use apps inside Facebook that require you to accept Terms of Service from a third party, it’s worth the effort to find out who the third party is and what they will be doing with your information.

  • Ensure your own profile is set to Private and your posts and information are set to either ‘Friends Only’ or ‘Only Me’.

 

But most importantly - be aware. Take the time to read data sharing policies if they’re available, or to go through all your settings regularly.

5 ways you're biasing your research

You’re making incorrect assumptions. And that’s hard wired in your brain.

Our brains have evolved to make assumptions - to analyse vast amounts of data and make judgements that reduce billions of data points down to a few key facts. Psychologists believe that these shortcuts are adaptive and help us make smart choices. At one time they helped us connect a moving shadow in the grass to a predator, triggering a fight-or-flight response.

You’re also influencing your participants. As science tells us, by observing something we are already interacting with it and influencing it, whether that’s particles or the people we are researching.

So you’re not only biased, but you’re probably influencing your research without realising it. 

But don’t worry - we all are. Here are 5 ways your research is being impacted by bias:

 

1. You make assumptions about what’s happening

You can’t avoid making assumptions - it happens. But the effect of incorrect assumptions can be huge. Here are a couple of quick examples.

Stevenage & Bennett recently performed a study in which forensic students were asked to match fingerprints after being given an initial report. Whilst their matching effort should have been the same regardless of whether that initial report suggested the fingerprints matched, in reality a much higher number of students stated the fingerprints matched when their initial test report suggested so too - they were influenced by the suggestion of a match, and followed that suggestion through.

Another study found that judges in domestic violence protection order cases were influenced by the behaviour of the applicant. If the applicant had waited for a period of time after the alleged incident before making an application, or if they did not appear ‘visibly afraid’ during the court appearance, then the judge's statements suggested a bias against them, based on their perception that the person “must be less afraid”.

This obviously has huge and potentially deadly consequences. And that’s a judge, who is trained to be impartial. How many researchers have similar levels of training and experience to a judge?

The fix

I have seen a lot of UX ‘insight’ come out of assumptions. For example, I once worked with a UX researcher who insisted that she was an ‘intuitive’ who could tell what people were thinking. She would often insist that people hadn’t understood something because she “could tell they didn’t understand it, even when they didn’t mention it”.

Try taking that one to the bank.

At heart we are a scientific community, and we need to ensure that research is based on fact, not assumption. That means insights need to come from observed behaviour that is repeated and understood, and/or from recorded comments that were made without bias (see below). Good findings are:

  • Objective and unbiased (read this article)

  • Based on direct observation of behaviour or user comment

  • Clear and repeatable - running the test again repeats the finding

  • Actionable (findings you can’t act on are rarely worth reporting on)

 

2. Prior knowledge is a blessing and a curse

Knowing nothing makes you a terrible facilitator. Knowing too much can mean the same, however. And that goes double for participants.

Raita and Oulasvirta (PDF link, sorry) performed an interesting study into bias. 36 participants were put through a website test, having viewed a positive, negative or neutral review of the platform first.

The standard System Usability Scale (SUS) was used to rate the tasks they performed. There were two key findings from this:

  1. There was no significant impact on success rate. No matter which review they read, all participants succeeded or failed at the same rates.

  2. Their perceived rating of the site was impacted significantly: they rated the platform 30% easier to use if they had first been influenced by a positive review.
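For anyone unfamiliar with how SUS scores are calculated: each of the ten statements is answered on a 1-5 scale, odd-numbered (positively worded) items contribute their answer minus one, even-numbered (negatively worded) items contribute five minus their answer, and the total is multiplied by 2.5 to give a score out of 100. A minimal sketch, with invented example responses:

```typescript
// Standard SUS scoring: ten statements, each answered on a 1-5 scale
// (1 = strongly disagree, 5 = strongly agree).
function susScore(responses: number[]): number {
  if (responses.length !== 10) {
    throw new Error("SUS requires exactly 10 responses");
  }
  const raw = responses.reduce((sum, answer, i) => {
    // Odd-numbered items (index 0, 2, ...) are positively worded,
    // even-numbered items negatively worded.
    const contribution = i % 2 === 0 ? answer - 1 : 5 - answer;
    return sum + contribution;
  }, 0);
  return raw * 2.5; // scales the 0-40 raw total up to 0-100
}

// Invented example: a fairly positive participant scores 85.
console.log(susScore([4, 2, 5, 1, 4, 2, 4, 2, 5, 1]));
```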

As a facilitator if you know exactly how the system works then you’ll start making assumptions about choices and behaviour.

The fix

When it comes to facilitating the research it’s best to know a lot about people, about good research, and about the goals of the research itself.

But it’s best to know very little about the subject interface you’re testing. The less you know, the more in tune you are with a participant who is seeing it for the first time. Knowing too much places a gulf of knowledge and understanding between you, and that makes it harder to empathise and connect with their comments and actions.

For participants knowing too much can also sink the research. There are two key actions here to help with this:

  1. Ensure participants match the profile
    Make sure they are the right people with the right level of expertise you are looking for. When you’re testing for existing audiences then ensure you find people who match that audience profile well - but also explicitly tell them not to bone up or study the interface prior to the session.

  2. Ensure new user tests are blind
    When you’re testing for acquisition or new audiences, ensure the participants match the audience profile but also ensure the test is blind, right up to the last minute. Don’t let the participants know what it is they are testing - otherwise human nature is for them to get nosy and start looking at that site or app or software on their own.

 

3. You see what you expect to see (confirmation and hindsight)

Confirmation bias is the bias we all know and love.

You think that you’re late so you’re going to get all the red lights - and of course you do. Only you didn’t, you just noticed the red ones more because you were late and you were looking for them. Your bias is confirmed.

In research confirmation bias is a risky bias to have. You think people are going to find sign-up difficult, so you start looking out for problems in sign-up. Did they hesitate just then? That must be because it’s not clear to them. Did the mouse move a little to the left? That must be because they are thinking they want to go back to the previous page and choose again. You think it’s going to cause problems and like a magic eight ball all signs point to Yes, yes it is.

What’s harder to pull apart at times is the fact that we’re facing at least four different levels of confirmation bias, each and every time we test:

  1. The client
    The client thinks their checkout process is too hard to use, and wants to redesign it. This test is just to prove the problem exists so they can justify redesigning and selecting new technology. Confirmation bias is baked in, from the start.

  2. The team
    The UX team are provided with the brief and set up the research. But given other websites they’ve looked at, this sign up and checkout process really does look a little old fashioned, and it’s surely going to have some issues they’ve seen elsewhere. A second layer of confirmation bias begins to build.

  3. The facilitator
    The facilitator has played with the design and found it non-standard, they struggled to understand how to complete some of the goals - and they’re a UX specialist. How on earth would a standard user fare, especially an older one (because we all know older people struggle with technology and websites, right?). Layer three confirmation bias, calling layer three.

  4. The participant
    The participant has heard of Biggend shopping before, their friends told them it has bargains but can be painful to use. As soon as they see the name, they wince ever so slightly. They’re expecting to struggle, whilst the people behind the glass watch on. Layer four pulls into the station.

Confirmation bias can - like scary clowns at a travelling circus - be an unpleasant surprise around every corner.

We also need to guard against hindsight findings. These are when a participant justifies a decision in hindsight. They used control B instead of control A, and then we ask them why. Often this will lead to the participant judging their own actions in hindsight, and coming up with a logical reason why - when there may not have been one, or when in fact their decision was unconsciously driven by various design aspects their conscious mind doesn’t process.

And if those justifications happen to match our preconceived ideas of what’s wrong with the design, then we hit the jackpot and confirm our own bias again.

The fix

There are three key steps to avoiding an overload of scary-clown-confirmation-bias.

  1. An independent facilitator
    The facilitator for any test should be as independent as possible. Using an outsider or independent is best, but using a professional facilitator who practices avoidance of bias can be just as good. Just avoid using one of your team who’s heavily invested in the project and/or the design - or their bias cup may runneth over.
     

  2. A bias-free script
    The test script (or questionnaire or task list or whatever you like to call it) is the crux of any test and the key to avoiding confirmation bias, as far as possible. The goals of the project will probably be biased to some degree, and the client and the team may be bringing to the table their own biases, but the script can avoid much of this. Direction, notes, questions and tasks should be carefully reviewed to ensure they are not leading or biasing the participant. A good facilitator will follow the script and avoid talking too much outside of it, as this will ensure bias is kept to a minimum.
     

  3. Blind testing
    As outlined in a previous point, blind testing will help to avoid the participant carrying too much bias or pre-learning into the room. You can’t completely control for their opinions based on others, but you can ask them to objectively approach this test from scratch. And you can ensure you take observed performance over voiced opinion, a key difference.

 

4. Participants just aren’t normal

Well, okay, they are - but their behaviour often isn’t, when in the room. Take a look at these known biases:

  1. The Hawthorne effect: people change their behaviour when they know others are watching. And of course cameras, mics and a huge one-way mirror won’t add to that at all. Participants may be more careful, go slower, read more, and take more considered action than they would in the real world. Or, as we sometimes see, they will play up to the attention and act more rashly and with more bravado (Generation Z, I’m looking at you here).

  2. Social desirability: people tend to look for confirmation, and act in ways that the current social group would approve of. This can lead to participants playing down the issues they see (“oh that’s fine, I found it easily enough - it’s just me being a bit slow!”) or under-reporting their dislikes and grumbles. It can also lead to more positive scoring when subjectively rating interfaces - for example, on a 1-5 scoring scale where 5 is best, we rarely see a site score less than 3, no matter how painful the experience.

  3. Task selection bias: people will try longer and use more likely success paths (even if they are slower and more painful) when being observed at a task. You may see people sticking at tasks far longer than they would in the real world - or using search and trawling through pages of results, to avoid failing in the navigation.

  4. The bandwagon: people tend to jump on the bandwagon. That means if the facilitator or the script start to talk positively or negatively about the test subject, they may well be biased by this and join in. Equally when running group research, they may begin to emulate the comments and behaviours of the others in the group, just to fit in.

That’s not an exhaustive list, but it gives you an idea of the problem we face.

The fix

At the risk of repeating myself, the solutions here are much the same as those mentioned earlier - an independent facilitator, a bias-free script, and a blind test. When asking questions, open and non-leading questions are best, for example “How do you feel about the level of information on this page?”

Avoid building a personal relationship with the participant - research has shown that you can build rapport from the simple act of asking someone to hold your cup of coffee for you. Imagine how much rapport you can build up over an hour of chummy chat, if you’re not careful.

Scripts should sometimes be explicit with task paths, to avoid task selection bias - tell the participant that this time you are searching, and this time browsing. Use the paths that your research shows you the majority of people use. Mix them up, and have some open tasks where the participant can choose for themselves.

 

5. You’re leading not following

This is a common one, and I see it regularly when reviewing research. It’s so easy to slip into that I find myself falling foul of this one every now and then.

You see a problem, and you think you know what’s going wrong. So you start looking for it, and begin to subtly hint at or probe the issue with other participants.

For example, I once had a facilitator who would see someone struggle with a label or an icon - and would from that point on ask every other participant whether they understood it or not, whether it was completely clear to them. Lo and behold, many people did indeed agree with him that it wasn’t clear. He felt supremely justified for having spotted this problem and rooted it out into the light of day.

Only he didn’t, really. He was biasing the participants to focus on something they would not normally focus on, and look at it with a critical eye to see if they felt it could be improved. And since he was showing them that he felt it could be improved, they were happy to agree.

We can also fall foul of this easily, both in the script and in our language.

The fix

As I’ve pointed out before, the script should be non-leading. Some great examples of leading script questions I’ve seen include:

  • Which content on this page is most important to you?

  • What do you love most about this site?

  • What function would you remove from this site?

  • How hard was it to get to this point?

If you want to avoid priming the participant, then make them non-leading. For example:

  • Is there anything on this page that you particularly like or dislike?

  • Is there anything on this site that you particularly like or dislike?

  • Is there anything you would add or remove from this site?

  • How easy or hard was it to get to this point?

When I used to run training on facilitation I had a simple example of the Yes/No game. For those that don’t know it, the Yes/No game is where you have to talk for a period of time or answer questions without using the words yes or no. It’s incredibly hard, especially under stress (e.g. at high speed), as those words are integral to the language we use. Removing them is like trying to walk without bending your ankles - technically possible, but fairly tricky at speed.

Being a good facilitator is much the same. You have to remove words that will bias or lead the participant. The simple example I used was to imagine you were user testing Microsoft Word. If you asked someone how they would save a file, you would have led them in three key ways:

  1. You explained that there is a system concept of keeping information for later

  2. You told them that the concept might be called Save

  3. You told them that the thing they were saving might be called a File.

That’s a pretty big leading statement. So, you have to remove the words that lead, and change it to something like “If you wanted to come back to this information later, but had to turn off the PC now, is there a way we can do that?”

Unwieldy, but far less leading. Just like the Yes/No game, a good facilitator has to constantly parse their language to ensure they aren’t leading the participant.

 

A fully non-biased test is not possible - even with an impartial machine-based facilitator and a science based peer-reviewed script, you’d still have to contend with the biases and assumptions made by the human who used the system.

But we can reduce bias in our research. We’re not aiming for perfect.

Better is good enough for today.

Excuse me sir - can I probe your brain?


We are at an interesting crossroads in terms of the rights of the customer / consumer / user and the ability of a designer or engineer to assist or even shape their path - and their mind with it.

True, we've been manipulating people's brains for centuries now, with everything from drugs (think caffeine and marijuana) to meditation, totem poles to Internet kittens. And then along came marketing, and not long after arrived the UX/CX world. Sometimes we are bent on delivering what's needed, wanted or desired, an altruistic goal. But sometimes we are attempting to persuade, cajole or influence - e-commerce and political UX in particular but in many other areas too.

So, my question is simple - when is it okay to mess with someone's brain? I'll break that into three questions: should we, can we, and (if we can) how do we do it ethically?

 

1. Should we mess with a customer's brain?

When it comes to bodies, we're pretty much in the clear. We all know it's not okay to touch someone without their permission. We all know (or should know) that we don't have the right to invade someone's personal space or to physically redirect someone trying to walk a certain path. The laws are there, the cultural understanding is there - for most except the odd president, that is. But when it comes to the brain, not so much.

On one hand this can be desirable. Take a simple example: designing an interface for a self-driving car. Now it's quite possible that the driver will be concerned that the car is driving itself - if you're anything like me, you're often stamping on the missing brake pedal when you're a passenger in the front, and that's with a human driving. Remove the human and I might need a herbal relaxative (or a change of underwear, but then again I am getting older).

Knowing that, it makes sense to design the interface to put the passenger at ease, to change their mental state from anxious to calm or even blissful. That would seem to be a desirable outcome.

But it would also impinge on the mental integrity of the driver, and would be altering their mental state without permission - unless we asked first, of course. Is that okay?

This week I was reading a New Scientist article posing the question, 'do brains need rights?' The article points out that we are increasingly able to observe, collect and even influence what is happening within the brain, and asks whether those invasive abilities should be met with rights. Bioethicists Marcello Ienca and Roberto Andorno have proposed that we should. More on that later.

So, if we should be giving our brains rights, what could those rights be?

Returning to the story from New Scientist, Ienca and Andorno posit that a simple set of four rights could cover it:

  1. The right to alter your own mental state
  2. The right to mental privacy
  3. The right to mental integrity
  4. The right to forbid involuntary alteration of mental state/personality

Not bad as a start point. In the UX world, we are probably going to find it simple to honour ethical boundaries (or laws) one and two: good researchers always ask permission and wouldn't invade a participant's privacy without informing them first, and what participants get up to in their own time is up to them.

But when you reach points three and four, I can already see us crossing the line.

If designs / products / interfaces are meant to persuade, then we are beginning to impinge on a person's right to avoid involuntary alteration of their mental state. But is that okay?

 

2. Can we mess with a customer's brain?

In short - yes. Yes we can.

For a start we've been messing with people's brains ever since Ogg hid behind a bush and made T-rex noises whilst Trogg tried to pee. We've used advertising and marketing for hundreds of years now, and we're getting better and better at the ever-increasing arms race that is the consumer market. Parents know that a good portion of parenting is the art of messing with your kids and pulling the psychological strings to a sufficient degree that they turn out to be reasonable and competent human beings, and not serial killers - before they turn the tables on you somewhere in the teens.

But the arms race is getting out of hand. The weapons of choice are becoming far more sophisticated than the simple cut and thrust of the media world. A few simple examples:

  • Facial recognition software can already interpret emotional states and feed them to whoever is observing - whether the person wants their emotional state known or not.
  • fMRIs can already reconstruct movies and images you've seen, by comparing brain activity against known images, and will soon be able to reconstruct images that weren't known in advance - for example, visualising your dreams.
  • Neuroscientists are able to study your decision-making process, and some research has shown they can spot your decision in your brain up to 7 seconds before you even realise you have made it.
  • Neuromarketing is a thing: using fMRI to measure direct brain responses to marketing, and using those responses to make us buy more.

On top of this we have personalised content delivery that is absolutely shaping the way we think and act. We'll be writing an article shortly on bias which discusses this in more detail, but you only have to look at Facebook's filtering and any of the articles discussing it to see how much the world can be shaped - take a look around the world today and judge for yourself how much Facebook might have to answer for.

So how long will it be before we're able to understand exactly what someone wants, or is thinking about, and then respond to that? At what point will we be able to amend their wants, whether they want them amended or not?

The answer is going to be 'soon'. Very soon.

 

3. If we can brain-tinker - then is that a problem?

Let me answer that question by asking another. Would you be happy if you found out that your local water company was slipping anti-depressants into the water supply?

Would you be happy if you found out that your car radio was filtering the songs it played and the news it streamed to make you feel 'warmer and more positively' towards the car manufacturer?

Would you accept a new iPhone that removed contacts automatically if you argued with them more than twice in a week? 

All of these are choices designed to fix problems we face (depression, brand loyalty and stress levels), but I would hazard a guess that the average person wouldn't be happy with any of these decisions, especially if they were unaware of them.

So what's the solution?

I'm personally recommending a set of rules to abide by. And I'm taking a leaf out of Isaac Asimov's book - his Three Laws of Robotics - and cross-breeding it with the Hippocratic oath. My laws would look something like this:

  1. Inform before researching.
    Ensure that the user is clearly informed before information about their emotive state, cognitive state or personality is researched for further use.

  2. Approve before changing.
    Ensure that tacit approval is provided before attempts to change those states are made.

  3. Do no harm.
    Ensure that all such changes are designed to improve the health-state of the (fully informed) user, or work to their benefit in some way.

 

We aren't doctors but we are changing people's lives. We are designing medical systems that can save lives or cost them. We are designing interfaces that will control airplanes, trains, cars, trucks. We are designing apps that can save a life or convince someone not to end one. And we need to be responsible when it comes to how we research, design and test those products into the future.

If that means abiding by a simple set of ethical laws, then that works for us all.

How many users does it take to screw up a project?

Years ago I read a very interesting science fiction story by the great Isaac Asimov, called Franchise.

The story was set in a future where computers were increasingly able to predict human behaviour, and voting behaviour in particular - so much so that the computer was able to reduce the sample size of the voting public further and further, to the point where it could identify a single voter who supposedly represented everyone. That poor slob was forced to vote, and he was held responsible for what happened from there. The story was inspired by the UNIVAC computer’s ability to predict the result of the 1952 US presidential election.

Recently I was thinking about that story and the relevance it has to user research sample sizes. We UXers, like UNIVAC, aim to deduce from a few what relates to the masses.

Since the early days of UX research there has been a tension between the classic market research/consumer research fields (where sample sizes are huge and statistically robust) and the UX world (where sample sizes are often far smaller). The logic was simple: since people often think in similar ways, a few people will encounter most of the problems that a far larger number of people would see.

So - how many is enough?

In user testing and research, there has been an increasing trend towards smaller sample sizes. In the 90s, most user testing involved 10-12 people at a minimum, and often up to 20. Since then the average has fallen to just 5 or 6 people per test.

But how slim can the numbers be without losing quality? Will we end up, like UNIVAC, identifying a single user who is forced at gunpoint to identify all the problems in the system?

It’s quite possible we’ve already hit a risky level of sample size coverage. We may already be taking more risks than we're aware of.

 

So what is sample size?

Sample size is a critical element in any scientific research. It is the number of objects, people, animals or plants you are examining in order to test the outcome of some variable - in this case, the usability level of an interface, product or thing.  

Too many subjects and you’re wasting time and money - whilst that’s not a problem from the science perspective, in the business world that’s no small matter. Too few and your sample will most likely not uncover all of the issues there are to see. We all know that you might not uncover everything, but how big can that gap get - are you seeing 95% of the issues, or just 25%? 

Let’s take a look at the research.

For years, the five-user assumption has been debated in usability. We’ve heard that the law of diminishing returns means that a user test involving five-to-eight users will reveal around 80% of the usability problems in a test (Nielsen, 1993; Virzi, 1992). That’s an assumption that many UX companies and researchers around the world have used as a start point.
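As a rough guide to where that figure comes from, the cumulative-discovery model usually attributed to Nielsen and Landauer assumes each participant has a fixed probability L of hitting any given issue, so n participants find roughly 1 - (1 - L)^n of them. Here's a minimal sketch of that model in Python; the 0.31 value for L is the commonly quoted average from that research, not something we've measured ourselves, and real products can sit well above or below it.

```python
# A minimal sketch of the classic cumulative-discovery model:
#   proportion_found(n) = 1 - (1 - L)^n
# where L is the average chance that a single participant hits any given issue.
# L = 0.31 is the frequently quoted average from Nielsen & Landauer's work;
# treat it as an illustrative assumption, not a property of your product.

def proportion_found(n_participants: int, discovery_rate: float = 0.31) -> float:
    """Expected share of usability issues uncovered by n participants."""
    return 1 - (1 - discovery_rate) ** n_participants

for n in (1, 3, 5, 8, 10, 15, 20):
    print(f"{n:>2} participants -> ~{proportion_found(n):.1%} of issues")

# With L = 0.31 this prints roughly 31%, 67%, 84%, 95%, 98%, 99.6% and 99.9% -
# which is where the 'five-to-eight users finds ~80%' rule of thumb comes from.
```

The catch, as the studies below show, is that this is only an average - the spread around it is what bites you.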

The first problem with that ‘five-to-eight’ line is the wriggle room it leaves. Fitting eight user tests into a day (at one hour each) is quite difficult, so many research companies started aiming for seven, or six. Over time, that fell to six or just five. Now, it’s commonplace to see five users tested in a research day, with the assumption that we’re still getting the same coverage we would have seen at eight people. For practical and financial reasons, we took short cuts.

The main problem with this assumption remains: humans vary in their emotional reactions, associations and thought processes. Whilst a small sample can be representative of the wider audience, there is a lower limit here, and we have been quietly ignoring it.

Spool and Schroeder (2001) aimed to test the reliability of this five-user rule. Using 49 participants, they performed usability tests requiring each participant to visit an online store and purchase a product. On two of the websites used, the first five users found only 35% of the total problems present. This lack of coverage was found again in a study by Perfetti and Landesman (2002), who ran tests with 18 participants and found that each of participants 6 to 18 uncovered at least five issues missed by the first five. While the validity and severity of those additional issues may vary, this hints strongly at the need for a larger sample size.

A study by Faulkner (2003) aimed to test this further, by having 60 participants perform a usability test (after being categorised by skill level). A program then drew random samples from the user data, in sample sizes of 5, 10, 20, 30, 40, 50 and 60. Perhaps not unexpectedly, the groups of 5 showed high variation, with the proportion of issues uncovered ranging from 55% to nearly 100% - and that variation shrank as more users were added. In fact, none of the size-20 sample groups found less than 95% of the problems.

So all of this research indicates that sample size is a risk game - something we always knew.

In theory, a sample size of five may uncover 100% of the issues to be found - but as the graph below shows, your chances of hitting those five people are pretty small.

The image above shows the variance across sample sizes, with each dot representing a sample group, as defined by the label below the graph. What it shows is quite interesting; I've tried to summarise it in three key points, with a rough simulation sketched after the list:

  1. If you test with 5 people, you are most likely to find around 85% of the issues - but you may find as few as 55% of them.
  2. If you test with 10 people, you are most likely to find around 95% of the issues - but you may find just 82% of them.
  3. If you test with 15-20 people, you will most likely find around 90-97% of the issues - although you're now spending up to twice as much to find those last few percentage points.
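To get a feel for the spread behind those three points, here's a rough Monte Carlo sketch in the spirit of Faulkner's resampling approach. Everything in it is an assumption made for illustration - a hypothetical product with 20 issues, each given a made-up per-user discovery probability - so read the output for its shape, not its exact figures.

```python
# A rough Monte Carlo sketch of sample-size risk, loosely inspired by
# Faulkner's resampling study. All numbers are illustrative assumptions:
# a hypothetical product with 20 issues, each with a random per-user
# discovery probability somewhere between 5% and 60%.
import random

random.seed(1)
N_ISSUES = 20
issue_probs = [random.uniform(0.05, 0.60) for _ in range(N_ISSUES)]

def issues_found(panel_size: int) -> float:
    """Simulate one test panel; return the share of issues at least one user hits."""
    found = sum(1 for p in issue_probs
                if any(random.random() < p for _ in range(panel_size)))
    return found / N_ISSUES

for panel in (5, 10, 20):
    runs = sorted(issues_found(panel) for _ in range(2000))
    worst, median = runs[0], runs[len(runs) // 2]
    print(f"{panel:>2} users: median ~{median:.0%} of issues, worst run ~{worst:.0%}")
```

The exact percentages will move around with the assumed probabilities, but the pattern should echo the list above: as the panel grows, the median creeps up while the worst case improves much more sharply.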

 

So how many should you test?

As always, life is about balance. Five people (or six) can be tested in a single day, whereas ten or twelve needs two days of research. If you are engaging with a research company like us then that means almost twice the cost. 

But it also means the minimum proportion of issues you're likely to find rises from 55% to 82%.

So how many to test is, as always, a function of how much risk you can take, how much you can spend on the research, and how often you are testing.

But just to be safe - start with ten.
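If you prefer to work backwards from a coverage target, the same hypothetical model from earlier can be inverted to suggest a panel size. As before, the 0.31 discovery rate is the commonly quoted average rather than anything measured on your product, so treat this as a rule-of-thumb sanity check, not a guarantee.

```python
# Invert the assumed discovery model to ask: how many users do we need
# to expect at least `target_coverage` of the issues to be uncovered?
import math

def users_needed(target_coverage: float, discovery_rate: float = 0.31) -> int:
    """Smallest n with 1 - (1 - L)^n >= target, under the assumed rate L."""
    return math.ceil(math.log(1 - target_coverage) / math.log(1 - discovery_rate))

for target in (0.80, 0.90, 0.95, 0.99):
    print(f"~{target:.0%} coverage -> about {users_needed(target)} users")

# With L = 0.31 this suggests roughly 5, 7, 9 and 13 users respectively -
# broadly consistent with 'start with ten' if you want a reasonable safety margin.
```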

The Dark matter of UX

The universe, it turns out, is far stranger than we first thought.

Thanks to Einstein, decades of study and a lot of people so smart I look like an amoeba by comparison, we know that the stars, planets, moons and every bit of matter that we see (otherwise known as baryonic matter) make up around 4% of everything in the universe. Just 4%.

Bizarre, no? One day we think we know what the universe is made of, the next we realise we're only seeing tiny pieces of the puzzle - dark matter and dark energy make up the whopping 96% that's missing - and we know almost nothing about them.

It's bizarre to think of it but what we see is a tiny slice of the pie - everything else is hidden from us.

UX knowledge can sometimes be very similar.

We used to think that UX knowledge fell into a few core buckets: Needs, desires, requirements, issues, emotions. We knew that there was more to know but we mostly believed we were closing in on a full map of what was important. Various models flew about, all purporting to be full and complete models of UX knowledge. I can distinctly remember a conversation with a UXer who had decided to leave the field, because in his words "We know it all now - what's to learn? Why would anyone pay for a consultant five years from now?". That was in 1997. Oh, how the world has changed since then; mobile-first was an instruction to cot-building new parents, the iPad was a sci-fi device we never thought we'd see, only Dick Tracy had a smart watch, Google was a baby startup and we all bought our music in physical music stores. 

In 2017 we have a healthy global UX field with tools, methodologies and an ever-increasing army of advocates and experts. In 1997 user testing was only for leading-edge projects; today it's a routine step. Back then we had our opinions of what made a good site and a few insights; today we have standardised scorecards, rating tools and expert review templates.

But just as with dark matter, we miss so much.

In the universe, the suns and planets are fireflies against a darkened sky, 4% fragments of the whole, supported by an invisible network we know little about.

In UX research we have the same; we ask questions and we receive an audible response with a single data point - but it's supported by a vastly larger network of thoughts, perceptions, misconceptions and processing that we never see or know anything about. When we observe users at work we can see issues or behaviour, we have visibility on single data points but again they are supported by the same invisible 'dark knowledge' of decisional processing.

We know that dark knowledge exists. We know it makes up a vast chunk of what's going on - and just like dark matter and dark energy, we may never get a full and complete understanding of what it is and how it works. 

I see this every day. People say and do things that seem, on our level of understanding, to make little or no sense. People will say one thing and then do something different. I see people confidently talk of how they prefer to search for everything, and then choose to navigate instead. I see people spot a control and talk about it, before they completely fail to see it when they need to use it. I hear people make statements that seem to be completely at odds with the way they act within the interface.

They aren't crazy, as it can be tempting to think. They aren't possessed and they aren't connected to an alternative universe. They are simply acting logically and consistently with the web of dark knowledge within them.

Knowing how dark matter and dark energy work, what they are and how they interrelate with baryonic matter is going to be the key for us to understand how the universe works. Once we know how the pieces fit together we'll be able to predict the movement of elements within it and understand our own future.

In much the same way, understanding the dark knowledge of UX will help us to predict decisions and actions and understand the future of any design or interface.

With dark matter we're on the case; China's DAMPE satellite and the upcoming Euclid spacecraft project will both help us look further and understand more. One day we will understand the complete picture, even if we have to reshape our thinking to do it. 

When it comes to UX knowledge, we are reliant on models of understanding of how the brain works. These help us to understand how we make decisions and how we react to inputs - but they don't help us to understand the complex context of decisions made whilst the user interacts with any set environment. We can understand how someone reacts in general to a problem, but we can't yet see how they react to this specific problem in this specific interface.

One day, perhaps we will. Until then, I think we should remember that everything we do, touch and work on as UXers is just a firefly against the darkened sky, a 4% fragment of what we'll one day know.