Excuse me sir - can I probe your brain?


We are at an interesting crossroads in terms of the rights of the customer / consumer / user, and the ability of a designer or engineer to assist or even shape their path - and their mind along with it.

True, we've been manipulating people's brains for centuries now, with everything from drugs (think caffeine and marijuana) to meditation, totem poles to Internet kittens. And then along came marketing, and not long after arrived the UX/CX world. Sometimes we are intent on delivering what's needed, wanted or desired - an altruistic goal. But sometimes we are attempting to persuade, cajole or influence - e-commerce and political UX in particular, but many other areas too.

So, my question is simple - when is it okay to mess with someone's brain? I'll break that into three questions: should we, can we, and (if we can) how do we do it ethically?

 

1. Should we mess with a customer's brain?

When it comes to bodies, we're pretty much in the clear. We all know it's not okay to touch someone without their permission. We all know (or should know) that we don't have the right to invade someone's personal space or to physically redirect someone trying to walk a certain path. The laws are there, the cultural understanding is there - for most except the odd president, that is. But when it comes to the brain, not so much.

On one hand this can be desirable. Take a simple example: designing an interface for a self-driving car. It's quite possible that the driver will be anxious about the car driving itself - if you're anything like me, you're often stamping on the missing brake pedal as a front-seat passenger, and that's with a human driving. Remove the human and I might need a herbal relaxant (or a change of underwear, but then again I am getting older).

Knowing that, it makes sense to design the interface to put the passenger at ease, to change their mental state from anxious to calm or even blissful. That would seem to be a desirable outcome.

But it would also impinge on that person's mental integrity, altering their mental state without permission - unless we asked first, of course. Is that okay?

This week I was reading a New Scientist article posing the question, 'Do brains need rights?' The article points out that we are increasingly able to observe, collect and even influence what is happening within the brain, and asks whether that growing ability to intrude should be met with rights. Bioethicists Marcello Ienca and Roberto Andorno have proposed that it should. More on that later.

So, if we should be giving our brains rights, what could those rights be?

Returning to the story from New Scientist, Ienca and Andorno posit that a simple set of four rights could cover it:

  1. The right to alter your own mental state
  2. The right to mental privacy
  3. The right to mental integrity
  4. The right to forbid involuntary alteration of mental state/personality

Not bad as a starting point. In the UX world, we will probably find it simple to honour ethical boundaries (or laws) one and two. Good researchers always ask permission and wouldn't invade a participant's privacy without informing them first, and what participants get up to in their own time is up to them.

But when you reach points three and four, I can already see us crossing the line.

If designs / products / interfaces are meant to persuade, then we are beginning to impinge on a person's right to avoid involuntary alteration of their mental state. But is that okay?

 

2. Can we mess with a customer's brain?

In short - yes. Yes we can.

For a start we've been messing with people's brains ever since Ogg hid behind a bush and made T-rex noises whilst Trogg tried to pee. We've used advertising and marketing for hundreds of years now, and we're getting better and better at the ever-escalating arms race that is the consumer market. Parents know that a good portion of parenting is the art of messing with your kids and pulling the psychological strings to a sufficient degree that they turn out to be reasonable and competent human beings, and not serial killers - before they turn the tables on you somewhere in their teens.

But the arms race is getting out of hand. The weapons of choice are becoming increasingly sophisticated, far beyond the simple cut and thrust of the media world. A few examples:

  • Facial recognition software can already interpret emotional states and feed them to whoever is observing - whether or not the person wants their emotional state known.
  • fMRIs can already reconstruct movies and images you've seen (by comparing your brain activity against known images), and will soon be able to reconstruct images that weren't known in advance - visualising your dreams, for example.
  • Neuroscientists are able to study your decision-making process, and some research has shown they can spot a decision in your brain up to 7 seconds before you're even aware you've made it.
  • Neuromarketing is a thing: using fMRI to measure the brain's direct responses to marketing, then using those responses to make us buy more.

On top of this we have personalised content delivery that is absolutely shaping the way we think and act. We'll shortly be writing an article on bias that discusses this in more detail, but you only have to look at Facebook's filtering, and any of the articles discussing it, to see how much the world can be shaped - take a look around the world today and judge for yourself how much Facebook might have to answer for.

So how long will it be before we're able to understand exactly what someone wants, or is thinking about, and then respond to that? At what point will we be able to amend their wants, whether they want them amended or not?

The answer is going to be 'soon'. Very soon.

 

3. If we can brain-tinker - then is that a problem?

Let me answer that question by asking another. Would you be happy if you found out that your local water company was slipping anti-depressants into the water supply?

Would you be happy if you found out that your car radio was filtering the songs it played and the news it streamed to make you feel 'warmer and more positive' towards the car manufacturer?

Would you accept a new iPhone that removed contacts automatically if you argued with them more than twice in a week? 

All of these are choices designed to fix problems we face (depression, brand loyalty and stress levels), but I would hazard a guess that the average person wouldn't be happy with any of these decisions, especially if they were unaware of them.

So what's the solution?

I'm personally recommending a set of rules to abide by, taking a leaf out of Isaac Asimov's book - his Three Laws of Robotics - and cross-breeding it with the Hippocratic oath. My laws would look something like this:

  1. Inform before researching.
    Ensure that the user is clearly informed before information about their emotional state, cognitive state or personality is gathered for further use.

  2. Approve before changing.
    Ensure that tacit approval is provided before attempts to change those states are made.

  3. Do no harm.
    Ensure that all such changes are designed to improve the health of the (fully informed) user, or work to their benefit in some way.
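
For the product teams who would have to live by these laws, here is a minimal, purely hypothetical sketch (in TypeScript) of how a consent gate might encode them. Every type, field and function name below is invented for illustration; none of it comes from a real system or standard.

  // Law 1: the user must be informed about each kind of state data we collect.
  // Law 2: the user must approve before we attempt to change those states.
  // Law 3: any such change must be intended to benefit the user.

  type MentalStateData = "emotional_state" | "cognitive_state" | "personality";

  interface ConsentRecord {
    informedAbout: MentalStateData[]; // what the user was clearly told we collect
    approvedChanges: boolean;         // whether they approved state-changing features
  }

  interface Intervention {
    description: string;
    collects: MentalStateData[];
    changesState: boolean;
    intendedBenefitToUser: string | null; // null = no articulated benefit to the user
  }

  function isPermitted(consent: ConsentRecord, plan: Intervention): boolean {
    const informed = plan.collects.every(d => consent.informedAbout.includes(d));
    const approved = !plan.changesState || consent.approvedChanges;
    const benefits = !plan.changesState || plan.intendedBenefitToUser !== null;
    return informed && approved && benefits;
  }

  // Example: the calming self-driving-car interface from earlier.
  const consent: ConsentRecord = {
    informedAbout: ["emotional_state"],
    approvedChanges: true,
  };

  const calmThePassenger: Intervention = {
    description: "Soften interface colours and audio when anxiety is detected",
    collects: ["emotional_state"],
    changesState: true,
    intendedBenefitToUser: "Reduce passenger anxiety during autonomous driving",
  };

  console.log(isPermitted(consent, calmThePassenger)); // true only when all three laws hold

The point isn't the code itself - it's that each law becomes a concrete check that has to pass before any emotion-sensing or state-changing feature ships.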

 

We aren't doctors but we are changing people's lives. We are designing medical systems that can save lives or cost them. We are designing interfaces that will control airplanes, trains, cars, trucks. We are designing apps that can save a life or convince someone not to end one. And we need to be responsible when it comes to how we research, design and test those products into the future.

If that means abiding by a simple set of ethical laws, then that works for us all.

Gary Bunker

the Fore