Excuse me sir - can I probe your brain?


We are at an interesting crossroads in terms of the rights of the customer / consumer / user and the ability of a designer or engineer to assist or even shape their path - and their mind with it.

True, we've been manipulating people's brains for centuries now, with everything from drugs (think caffeine and marijuana) to meditation, totem poles to Internet kittens. And then along came marketing, and not long after arrived the UX/CX world. Sometimes we are bent on delivering what's needed, wanted or desired, an altruistic goal. But sometimes we are attempting to persuade, cajole or influence - e-commerce and political UX in particular but in many other areas too.

So, my question is simple - when is it okay to mess with someone's brain? I'll break that into three questions: should we, can we, and (if we can) how do we do it ethically?


1. Should we mess with a customer's brain?

When it comes to bodies, we're pretty much in the clear. We all know it's not okay to touch someone without their permission. We all know (or should know) that we don't have the right to invade someone's personal space or to physically redirect someone trying to walk a certain path. The laws are there, the cultural understanding is there - for most except the odd president, that is. But when it comes to the brain, not so much.

On one hand this can be desirable. Take a simple example: designing an interface for a self-driving car. It's quite possible that the driver will be concerned that the car is driving itself - if you're anything like me, you're often stamping on a missing brake pedal as a front-seat passenger, and that's with a human driving. Remove the human and I might need a herbal relaxant (or a change of underwear, but then again I am getting older).

Knowing that, it makes sense to design the interface to put the passenger at ease, to change their mental state from anxious to calm or even blissful. That would seem to be a desirable outcome.

But it would also impinge on the mental integrity of the driver, and would be altering their mental state without permission - unless we asked first, of course. Is that okay?

This week I was reading a NewScientist article posing the question, 'do brains need rights?' The article points out that we are increasingly able to observe, collect and even influence what is happening within the brain, and asks whether those abilities to invade should be addressed with rights. Bioethicists Marcello Ienca and Roberto Andorno have proposed that we should. More on that later.

So, if we should be giving our brains rights, what could those rights be?

Returning to the story from NewScientist, Ienca and Andorno posit that a simple set of four rights could cover it:

  1. The right to alter your own mental state
  2. The right to mental privacy
  3. The right to mental integrity
  4. The right to forbid involuntary alteration of mental state/personality

Not a bad starting point. In the UX world, we are probably going to find it simple to honour rights one and two. Good researchers always ask permission, and wouldn't invade the privacy of a participant without informing them first - and what participants get up to in their own time is up to them.

But when you reach points three and four, I can already see us crossing the line.

If designs / products / interfaces are meant to persuade, then we are beginning to impinge on a person's right to avoid involuntary alteration of their mental state. But is that okay?


2. Can we mess with a customer's brain?

In short - yes. Yes we can.

For a start we've been messing with people's brains ever since Ogg hid behind a bush and made T-rex noises whilst Trogg tried to pee. We've used advertising and marketing for hundreds of years now, and we're getting better and better at the ever-increasing arms race that is the consumer market. Parents know that a good portion of parenting is the art of messing with your kids and pulling the psychological strings to a sufficient degree that they turn out to be reasonable and competent human beings, and not serial killers - before they turn the tables on you somewhere in the teens.

But the arms race is getting out of hand. The weapons of choice are becoming increasingly sophisticated, far beyond the simple cut and thrust of the media world. A few simple examples:

  • Facial recognition software can already interpret emotional states and feed them to those who are observing - whether or not the person wants their emotional state known.
  • fMRIs can already reconstruct movies and images in your head that you've already seen (by comparing them against known material), and will soon be able to construct images that weren't known in advance - for example, visualising your dreams.
  • Neuroscientists are able to study your decision-making process, and some research has shown they can spot your decision in your brain up to 7 seconds before you even understand you have made a decision yourself.
  • Neuromarketing is a thing: using fMRI technology to measure direct brain responses to marketing, and using that to make us buy more.

On top of this we have personalised content delivery that is absolutely shaping the way we think and act. We'll be writing an article shortly on bias which discusses this in more detail, but you only have to look at Facebook's filtering and any of the articles discussing it to see how much the world can be shaped - take a look around the world today and judge for yourself how much Facebook might have to answer for.

So how long will it be before we're able to understand exactly what someone wants, or is thinking about, and then respond to that? At what point will we be able to amend their wants, whether they want them amended or not?

The answer is going to be 'soon'. Very soon.


3. If we can brain-tinker - then is that a problem?

Let me answer that question by asking another. Would you be happy if you found out that your local water company was slipping anti-depressants into the water supply?

Would you be happy if you found out that your car radio was filtering the songs it played and the news it streamed to make you feel 'warmer and more positively' towards the car manufacturer?

Would you accept a new iPhone that removed contacts automatically if you argued with them more than twice in a week? 

All of these are choices that are designed to fix problems we face (depression, brand loyalty and stress levels), but I would hazard a guess that the average person wouldn't be happy with any of these decisions, especially if they were unaware of them.

So what's the solution?

I'm personally recommending a set of rules to abide by. I'm taking a leaf out of Isaac Asimov's book, when he coined the three laws of robotics, and cross-breeding it with the Hippocratic Oath. My laws would look something like this:

  1. Inform before researching.
    Ensure that the user is clearly informed before information about their emotional state, cognitive state or personality is researched for further use.

  2. Approve before changing.
    Ensure that explicit approval is provided before attempts to change those states are made.

  3. Do no harm.
    Ensure that all such changes are designed to improve the health of the (fully informed) user, or work to their benefit in some way.
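For what it's worth, the three laws above can be sketched as a simple gate in a research or design pipeline. This is a minimal, hypothetical illustration (the class and function names are mine, not any real framework): nothing proceeds unless the user is informed, has approved, and the intended change works to their benefit.

```python
# Hypothetical sketch: the three laws as a gate in a research/design pipeline.
from dataclasses import dataclass

@dataclass
class Consent:
    informed: bool = False         # Law 1: inform before researching
    approved_change: bool = False  # Law 2: approve before changing

def may_research(c: Consent) -> bool:
    # Researching a user's emotional/cognitive state requires informing them first.
    return c.informed

def may_alter_state(c: Consent, benefits_user: bool) -> bool:
    # Law 3: do no harm - the change must also work to the user's benefit.
    return c.informed and c.approved_change and benefits_user

c = Consent(informed=True)
print(may_research(c))                         # True - they were informed
print(may_alter_state(c, benefits_user=True))  # False - no approval given yet
```

The point of the ordering is that each law is a precondition for the next: informed consent enables research, approval enables change, and benefit justifies it.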


We aren't doctors but we are changing people's lives. We are designing medical systems that can save lives or cost them. We are designing interfaces that will control airplanes, trains, cars, trucks. We are designing apps that can save a life or convince someone not to end one. And we need to be responsible when it comes to how we research, design and test those products into the future.

If that means abiding by a simple ethical set of laws then it works for us all.

CSA and the invisible ink

Most design is about communication. Equally, good communication takes some design, or at least a little thought.

And if it goes wrong, then communication can cease.

This last week or so I’ve had some fun hitting my head against a certain brick wall; in this case the Child Support Agency (or CSA) website.

I have a certain need to utilize this website to manage Child Support payments, and as anyone who has a similar need can attest, they can at times be a difficult organisation to work with. Part of the problem can be large numbers of letters with horrendously confusing tables of data which often seem to contradict one another – but for now I’m going to put the usability of their letters aside, since I’m talking about the method more than the content.

Some long time ago – from memory I’d guess maybe three or four years back – I was asked if I wanted to receive electronic letters rather than the printed ones. Thinking of the planet and of the huge amounts of paper I was receiving, I agreed. And for a while, that seemed to work. I’d receive an email telling me I had communication from CSA, I’d log on to the website, and there was an Inbox. You’d see that you had a letter, you’d click on it, and there it was. Relatively simple.

But a few months in, some technical problem occurred and the website started reporting that some people were unable to open the letters. That included me so I stopped trying, expecting the problem to be resolved.

Skip forward, and years later here I am. I still, regularly, receive emails telling me I have communication from the CSA. I still, regularly, log in to the site and see the letters right there in the Inbox. The option to view them is now removed, with a clear note that this is a ‘temporary problem they are trying to resolve’. And there is an option to instead download it as a PDF.

But just as when this problem first occurred, there is a slight snag. The PDF doesn’t work either. You click on the link, a PDF begins to open, and then freezes. You can’t directly download the PDF due to the way it’s coded. All you can do is click to open, and it gets stuck halfway.

Not only that, but the system happily registers the click – and then marks the letter as read. You have then been recorded as having read and understood the letter, despite the fact you didn’t get to see a word.

And those ‘read’ letters are then removed. This happens on Firefox, Chrome and IE, so it’s not a browser issue.
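The underlying design flaw is treating a click as proof of reading. A minimal sketch of the fix (names and statuses are hypothetical, not the CSA's actual system): the letter only moves to 'read' after the client confirms the document actually rendered, not merely on the download attempt.

```python
# Hypothetical sketch: separate "user tried to open" from "user actually saw it".
from dataclasses import dataclass

@dataclass
class Letter:
    letter_id: str
    status: str = "unread"  # unread -> delivered -> read

class Inbox:
    def __init__(self):
        self.letters: dict[str, Letter] = {}

    def add(self, letter_id: str):
        self.letters[letter_id] = Letter(letter_id)

    def on_download_click(self, letter_id: str):
        # The click only proves the user *attempted* to open the letter.
        self.letters[letter_id].status = "delivered"

    def on_render_confirmed(self, letter_id: str):
        # Only a client-side "document displayed" acknowledgement marks it read.
        self.letters[letter_id].status = "read"

inbox = Inbox()
inbox.add("L-001")
inbox.on_download_click("L-001")           # the PDF freezes halfway...
print(inbox.letters["L-001"].status)       # delivered - NOT read, so not removed
```

With that distinction in place, a frozen PDF leaves the letter visible and unread, instead of silently recording it as read and understood.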

Realising that I was potentially missing really key information I rang CSA a while back, and asked them if they had fixed the problem. I was told that they were ‘working on a solution’. That’s a no, then.

I also explained that I wasn’t receiving any of the letters they were sending me. I asked if there was any way they could send them to me in printed form. I was told no, those letters couldn’t be sent – and that since I’d chosen to go paperless, that was the end of it.


So there you go – I am now receiving a set of blank pages, and I’m unable to see anything I’ve missed. What a great user experience! Hopefully none of that invisible ink is telling me I’m about to go to prison for non-payment of a CSA amount. Let’s hope.

A SIC website?

I’ve recently had the dubious honour of using the ASIC website to purchase a business name.

Now, before I say anything negative, let me be clear and say that I’ve worked on numerous Government websites in the past, and I do understand some of the issues they have to face – to name just a few:

  • Tremendous amounts of data that need to be available
  • Massively complex tasks that need supporting
  • Numerous and disparate user profiles
  • Legal restrictions around process/assistance and usage
  • Internal organisational resistance to change

The list goes on and on – I’m just making it clear that I understand the limitations.

That said, using the ASIC website was something akin to spending a week with mittens superglued to your hands; you know it’s going to be frustrating, you know things are going to be slow and painful, but nothing can quite prepare you for the world of frustration you’ve just stumbled in to.

I won’t document the hoops I jumped through, but time and time again I ended up at the same point – seeing that the business name was available, but having absolutely no clue how to purchase it from there! Like a dog watching a cat on TV I uselessly pawed and clicked on things that were never going to work, whilst what I wanted (‘This name is available!’) paraded in front of me with nary a button or link to make it mine.

Trap doors left and right launched me to new websites or opened massively complex help documentation that confused and confounded and invariably led me – eventually – back to where I started.

I did get there in the end, although to be honest I really am not quite sure how.

So, it’s over to you – has anyone else had a similar experience? ASIC, if you’re listening, I’d love to help…!

Usability 101: Don’t trap the user

Imagine walking down the street. You see a shop, you’re interested, and you walk in the door.

Only you find another door just inside, and it’s locked. You try the door in frustration – I mean, why would they have an open front door, but be locked inside? Then you turn to leave, and find the door behind you won’t open, either.


Not a nice feeling, really. And that’s why “Don’t trap the user” is one of the basic rules of usability. Given that, it’s surprising how often you find users trapped in software cycles of misery.

I encountered one of those today, and it was a doozy.

My son decided he wanted to play DC Universe Online, as he’d discovered it was now free to play. Fine, no issue, I found the site, downloaded the software, and installed it. Apart from a weird error that keeps telling me I’ve got out of date Flash software when I don’t, all works well.

The software starts up, and offers me the choice of logging in, or creating a new account. I create a new account, and it asks me to enter my date of birth.

Now, I’ll digress slightly, and state that one of the most obvious signs you’re getting old is when you have to scroll WAY down the list of years to select your date of birth. So there I am, scrolling down, when the mouse falls off the scrollbar – and for some reason, the form processes automatically, with 2002 selected. So now the software thinks I’m only 9 years old, and tells me politely that I can’t sign up, as I’m under age. It also tells me if I’m seeing this error by mistake, I can contact Support.

Nice. Remember, I didn’t process the form with the wrong date, I simply clicked off the listbox by mistake and the smart software did the rest. And now I’m stuck. There are no options to go back and re-enter the date of birth, you can only cancel and quit.

I close the software, and re-open it, but the smart software has remembered the incorrect data – again, I’m told that I’m under age, so I can’t sign up.

In frustration I quit, uninstall the software, then reinstall it (ignoring the ‘you’ve got out of date Flash’ error messages), and discover that the game makers have been a little smarter than that. Again, I’m too young, and I cannot sign up.

So, now I can choose to do one of two things – give up, or contact support and wait several days to see if they bother to respond. You can probably guess which option I chose. And given that the game is supposedly suffering from a lack of players and looking to expand, that makes it a double tragedy.

I can see why decisions like this are made – somebody somewhere decided that they would close the door on kids creating false accounts, stating they are older than they are. Kid tries to sign up, sees he is too young, so falsifies his date of birth. I can understand that.

Equally, I can see that kids are way smarter than that, and would mostly choose to enter an older D.O.B. to start with. And the cost of putting this child-trap into the software is losing customers who are completely valid. You can put bear-traps all around your building to keep away burglars, but don’t be surprised if you take out a fair few paying customers, too.
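A user-centred version of that age gate is easy to sketch. This is my own hypothetical illustration (the function names and minimum age are assumptions, not the game's actual code): the date of birth is never submitted implicitly, and a failed check leaves the user free to correct the entry rather than remembering it and locking them out.

```python
# Hypothetical sketch of an age gate that can't trap the user.
from datetime import date

MIN_AGE = 13  # assumed minimum age for illustration

def is_old_enough(dob: date, today: date) -> bool:
    # Compute age accounting for whether the birthday has passed this year.
    age = today.year - dob.year - ((today.month, today.day) < (dob.month, dob.day))
    return age >= MIN_AGE

def sign_up(dob_entries, today=date(2011, 6, 1)):
    """dob_entries simulates successive attempts; nothing is remembered
    between failures, so a slip of the mouse is recoverable."""
    for dob in dob_entries:
        if is_old_enough(dob, today):
            return "account created"
        # Otherwise: explain the problem and let the user try again - no dead end.
    return "cancelled by user"

# The mouse slips and selects 2002, then the user corrects to 1970.
print(sign_up([date(2002, 1, 1), date(1970, 1, 1)]))  # account created
```

The deliberate design choice here is that failing the check is a prompt to re-enter, not a permanent verdict - which removes the bear-trap without making it any easier for a determined kid to lie the first time.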

Vehicle overload – the warning signs

It’s not until you watch someone learn to drive a car that you realise just how complex the task really is.

I have a 17-year-old daughter who recently passed her test – after many hours of nervous invisible pedal-stamping from me – and a slightly younger son who will be starting out on the same path in the not-so-distant future. Seeing the complexity of driving through their eyes is a bit of a – well, an eye-opener.

And it’s soon going to become even more complex.

Technology is available today that adds layers of augmented reality to the driving experience – HUD overlays being just one example. Forward-looking radar, backward-looking cameras and augmented night-vision cameras can all help us to become more aware of what’s around us. Automated controls can automatically apply brakes to avoid a collision.

This article on Cnet.com.au explains another angle being applied by Ford: inter-car communication to avoid collisions. In the demo that Ford gave, cars communicated with each other about dangers on the road – stalled vehicles in your lane, sudden braking, cars approaching from the side at a junction, etc. These communications of danger alerted the driver with sounds and warning lights.

Teaching a 17-year-old to drive is complicated, because they have to learn:

  • How to move the vehicle
  • How to stop the vehicle
  • How to extend their awareness of ‘me’ to include a huge piece of equipment far wider and longer than they are
  • How to communicate with other vehicles (indicators, etc.) and how to predict the movement of other vehicles in return
  • How to obey road rules and control a vehicle within them
  • How (and when) to use in-car control systems

Now add to that list:

  • How to use, interpret and respond to augmented reality layers
  • How to respond to and manage automated behaviours (braking or steering)
  • How to respond to and manage inter-car warnings and communication

I had an argument with someone recently on LinkedIn, about the future of cars and safety. My argument was that one day accidents should be pretty much avoidable entirely, he argued not.

I still believe I was right; car technology is approaching a level where vehicles can automatically avoid other vehicles and objects by steering, braking and accelerating. Google has played with driverless cars to assist with mapping, and other car companies have tested similar technology. It is not going to be too long before commercially available cars have the capability to drive in a fashion safer than we as humans could manage.

But until that time – and I think it’s at least a few years, possibly even a few decades away – we will have to manage an increasingly complicated set of feedback and safety mechanisms embedded in our family vehicle.

For that reason, usability and UX are going to become increasingly crucial in the applications of these technologies.

Software running rings around us

I've had two less than fun experiences this week in terrible, horrendous user experience - mostly related to software, but also very closely tied in with horrible customer support. The first was with Sony, in regards to a problem I've had on a brand new (and top of the line) laptop.

Case 1: Sony and the never-ending update

The Sony Vaio Z I've been playing with the last few weeks is turning out to be a wonderful machine, and I'm loving it. But this week, I hit a small snag.

A piece of software called Vaio Update ran, and told me that there were several pieces of software needing updating. I hate bloatware along with the best of us, but for my sins I let it run, and they all updated. The dreaded 'you must now reboot' message came up, I killed my apps and rebooted, and the world was fine.

For a minute or two, that is, until the update software ran again - and promptly told me that two of the updates needed to run again.

To cut a long story short, this ran a number of times before I twigged that it was updating the same two versions of the same two programs continuously, in a little vicious circle. It would download them, attempt to install them, give me errors that they were already installed, force a reboot, run, and then tell me they still needed updating.

From a software user experience point of view, there were two killer problems. First, the program automatically ran on reboot, beginning the cycle over again; and second (and more importantly), it forced a reboot with no choice after it failed - despite the fact that nothing had even been installed. No buttons to cancel, no X to close the dialog - even force-closing the popup causes Windows problems.
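The loop itself comes down to a broken invariant: the updater never re-checks what is actually installed before offering the same update again. A minimal sketch of how that cycle is normally broken (version numbers and package names here are hypothetical, not Sony's actual data):

```python
# Hypothetical sketch: an updater that re-checks installed versions,
# so an already-current package can never be offered in a loop.

def pending_updates(available: dict[str, str], installed: dict[str, str]) -> list[str]:
    """Only offer packages whose recorded installed version differs
    from the available one."""
    return [name for name, ver in available.items()
            if installed.get(name) != ver]

installed = {"Vaio Care": "8.0", "Vaio Update": "5.0"}
available = {"Vaio Care": "8.1", "Vaio Update": "5.0"}

print(pending_updates(available, installed))  # ['Vaio Care'] - the current one isn't re-offered

# After a successful install, the recorded version changes,
# so the next scan finds nothing to do - the cycle terminates.
installed["Vaio Care"] = "8.1"
print(pending_updates(available, installed))  # []
```

The failure I saw behaves as if the "installed" side of that comparison is never updated (or never consulted), so the scan keeps producing the same result forever - and the forced reboot turns a data bug into a user-hostile loop.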

I contacted Sony about this, and received the standard first line of support response - basically an automated email telling me to run the update - completely missing the point that it was the update itself that was going wrong.

It took several other emails and even a PDF of screen shots to get across that this wasn't a user error - and now several days have disappeared without a further word. Nice customer service.

Case 2: NewScientist and the unusable user name

I love reading NewScientist, and recently decided to subscribe. I did that, and then once it was paid for went onto their site to register, so I could read the online content.

The site asked me for my subscriber number and surname, then asked me to enter a username, password and email address.

When I entered a username I use for sites such as this, the site gave me an error, telling me the username was already in use - at which point I remembered I'd registered it previously. However when I went to log in, it told me the username had been cancelled. Ah.

So I created a new username, entered my email address, and tried again. This time, it told me that the email address I'd entered was connected to 'another' username, so I couldn't use it. Ah, indeed.

So, I tried using a different email address. This time I received a dire warning that this was a different email address to the one registered against my subscription, and that I should not proceed.

Catch 22 again - I couldn't use the username I wanted to, I couldn't revive or use the one I already had, I couldn't use the email address I normally use, and I couldn't use a new one without risking 'something' going wrong... And all I wanted to do was to read some content online...

I contacted NewScientist support, and explained what was going wrong. I told them that the original registration was still there, and could they maybe just attach it to my subscription, or remove it so I could re-register from scratch.

Again, the human element extended the terrible user experience. Again, I received an email that was insulting in its response and its mismatch to my request for help. It simply told me how to go online and register, with no attempt to even acknowledge the problems I'd listed.

To their credit within 24 hours of my response to this I had a second more personal response - although the words "We will contact the UK and see if we can get it set up at our end" were not exactly brimming with confidence...

Case 3: Google Adsense and Schrodinger's Cat

Ding ding, round three.

This week I finally got around to playing with the Google settings for my site, and needed to create an Adsense account to get a particular function working. I have Adwords and Gmail and several other Google functions, so loaded up the Adsense page and logged in.

It told me I didn't currently have an Adsense account, and asked me if I wanted to create one. I said yes, and away we went filling out forms for a page or two. All good so far.

Near the end of the process it asked if I had a Google account, and when I said yes it asked if I wanted to use that account for Adsense. Sure, I say, and enter my login details. It's at this point that the wheels well and truly fall off the cart.

This email address already has an Adsense account, the page tells me, and therefore I can't use it.

So, yet again we have a nice software-led vicious circle - I don't have an Adsense account and therefore need to create one, but can't create one because I already have one. Like Schrodinger's cat, the account somehow manages to exist and not exist at the same time.

User Experience is so often written off as a nice-to-have, or as an almost irrelevant layer on top of the 'key' technology and content. But the true cost - in terms of lost business and reputation alone - can be huge. If only these (and other) companies measured that cost, they might do more to pick up and respond to their emails.

Self-serving: time for pain

This weekend I tried - for the fifth time now - to use the self-service checkouts at a well-known store. I've used them at Woolworths, Kmart and Target previously; this time I was in a Big W store. If you've used them before, then you probably know the drill.

You come up to the checkouts, and spot a huge queue for one of the few hapless operators floating amidst a sea of un-manned aisles. But off to one side there's a set of self-service checkouts, with a sprinkling of customers working their way through. You figure 'what the hell', and you give it a shot.

Then it goes something like this:

  1. Press Start.
  2. Scan first item. Smile smugly as the price comes up, and place item in a bag, as directed.
  3. Scan second item. Think to self 'This is a piece of cake!' as you place it beside number 1.
  4. Frown, as 'Computer says no'. Apparently you've removed an item from the bag. Look down at two items, behaving themselves within said bag.
  5. Scan item 3, then growl as the computer tells you to fetch help.
  6. Wave at the overworked assistant trying to help the old lady who's started throttling her own self-service checkout in an explosive fit of undisguised frustration. Wait till she can break free, amusing yourself by counting the people who queued for a human as they march through one by one.
  7. When assistant arrives, complain that you didn't remove anything. The assistant knows this of course and doesn't even bother to check (she's used to the computers and their continuous paranoid delusion on the contents of bags). She scans a card, presses some buttons, and you're back to item 3.
  8. Scan item 3, 4, and 5. Convince yourself that apart from one small hiccup, this isn't going too bad after all.
  9. Scan item 6, remove full bag and place item 6 in the new bag. Feel that creeping sense of dread as the computer, finally proved correct in its theory that you're trying to steal, gleefully shouts again that you've removed items from the bagging area and demands staff come immediately.
  10. Swear never to repeat this exercise again.

It happened to me, and it was happening to just about everyone around me - the hapless assistant was dancing from one machine to another, desperately trying to guard both the machinery and the customers from a complete nervous breakdown.

It's not rocket science, and this is exactly what User Centred Design is there to protect against. Design this around customers and it would work in a completely different way. So just in case someone is listening, here are my top five tips for improving self-service checkouts:

  1. Don't assume the customer is a thief. The only reason I can see for measuring the bags is to validate that they haven't slipped something extra in there to steal it. Let's be honest, I could slip something into my pocket at any point whilst walking around the store. I don't, because firstly store security might well be observing me, and secondly because I'm honest. Both of those are just as true at the checkout, and there are extra cameras and staff to observe clearly. Why treat all honest customers like crooks?
  2. Explain the process up-front. Most of these systems don't tell you key bits of information, such as the fact that you have to place items in a bag so they can be measured, and that you can't remove a bag till you press a certain button.
  3. Provide more space - I know they want to cram in as many of these devices as possible, but giving people a tiny space to try and juggle armfuls of goods whilst balancing bags for weighing and removing at the right time is insane. Three times now I've had the 'item removed from bagging area' message just because something shifted slightly in the bag and some weight shifted off the scales.
  4. Provide more assistance - when something goes wrong, there's very little help here. Basically it's like a car that is driving fine one moment, then explodes into flames the next with no warning or chance to pull over. Give the user some hint of what to do, and some ability to avoid the embarrassing "Miss, miss, I've broken it again..." plaintive call for help.
  5. Make me feel good about doing it. And for me, this is key. When I check in at an airport it knows my name, it welcomes me, and makes the choices simple, it makes me feel valued as a customer, and then it sends me on my merry way. When I self-service in a store, it assumes I am a criminal, it doesn't welcome me or know me from Adam (even though I might have a store card and shop there regularly), it regularly spits the dummy and demands I go get help, and does absolutely nothing to make me feel valued.

Right now, I'm only using these systems at all because firstly I'm a geek and I want to see how they work, and secondly because these stores are starting to reduce staffing levels on the operated checkouts in order to force customers through these channels. And already I'm beginning to consider switching to stores that don't force me into a terrible user experience, just because it saves them some bucks.

I went to complain at Big W, and was told that they received very few complaints. When you see the red-faced, annoyed customers skulking off without a word, I can't imagine why. But it's time to start.