It's not U, it's me

When you're researching UX and improving design, there are two completely different types of finding. Both are useful, but sometimes the line gets blurred and this can lead to issues.

It's all about me

First of all, there's opinion.

As a UX specialist I will often look at a design and have an opinion on what's working and what is not. I can perform an expert review which identifies the positives and negatives, based on thousands of user tests and a solid understanding of good design practice. So my opinion is certainly valid - not always correct, but valid.

Expert opinion is often a guess. It's an educated guess, hopefully, based on years of experience and insight into behaviour. But it's still a prediction of what might go wrong.

It's all about U

Then there's the U in UX - the users.

When I observe user testing or run user research, I'll hear things and observe behaviour. This is observed/learned research, and is quite different from my opinion as a UX specialist. I might see someone struggle to achieve a goal, or might hear them state that something was confusing. This knowledge is coming from the research, not from my personal opinion as to whether something works. I may have to interpret why they are going wrong, but the fact that they are going wrong is just that - a fact.

Observed findings are real. They are tangible, scientific measurements of what happens when people interact with technology. Like any scientific findings, they should be based on research that is repeatable, measurable and unbiased.

Getting mixed messages

Obvious? Sure. Both have their merits and their uses. Most projects will employ both forms of research.

But you'd be surprised how often this line is crossed.

Not so long ago I was invited to observe some user testing - which is absolutely about observed findings. Good user testing should have the three tenets (repeatable, measurable, unbiased) at its core, and this test did. An unbiased test script took independently recruited participants through a realistic and repeatable set of tasks on a fairly balanced prototype. The facilitator was an unbiased professional who wasn't invested in the design. Tests were recorded and findings were analysed after each session.

But during the tests, both the client and the UX resource recording findings would comment, as each part of the design appeared, on what was wrong with it and what needed to improve. One of them would point out a flaw, the other would agree it needed to change, and the UX resource would add that to the mix of issues found.

Once the testing was complete all of these findings were added to a sheet and then prioritised for solutions and changes.

Only, by this point it was almost impossible to identify where any one issue had come from. And herein lies the problem: personal opinion on what was wrong with the design was taking up equal space in the list of priorities with real issues that had been observed to be a problem.

And now it's time to break up

Still not sure why that's an issue? Let me give you an example.

A few years back I worked with a client who wanted to improve an online web service, something with some pretty complex functionality. We were short on funds so user testing wasn't an option, but they had budget for an expert review (my professional opinion and best-practice comparison), followed by their own ad hoc user testing with friends and family.

Once this was completed I was able to point them in the direction of a few changes that would deliver some great benefits to the design. And remember, this was based on my expert opinion but also on their own ad hoc testing, which had confirmed our concerns.

They were okay with this - but the product owner really wanted to improve one particular part of the design that he hated. He felt it was old fashioned and slow, and had seen some pretty cool new ways for it to work. Crucially, this area hadn't been highlighted as an issue during the expert review, or during their ad hoc testing. I pointed this out, but could see this was still a priority for the owner.

I'd delivered my work so I stepped out. Several months later I bumped into the client, and we chatted about progress. At this stage he told me he was pretty disappointed, though not with my work or the UX stage overall. As you probably guessed he'd decided to prioritise that area that he didn't like and he'd reworked it at great expense and over quite some time. He'd tweaked some of the other issues we'd found, but not many - since he'd focused on his own opinion and the area he most wanted to fix, convinced it was going to add real value.

Only it hadn't, of course. Despite the effort poured into it, the new design slipped into a sea of indifference and sank without a trace, without delivering a single happy customer.

At the end of the day he'd been unable to separate the opinion from the fact and he'd paid the price for it - money down the drain and no positive impact on the product or his customers.

Staying afloat and healthy

As I said at the start, both forms of finding are valid and both are valuable. Expert opinion is faster and cheaper than direct UX research. It can lead design (rather than review what's there) and it's more freely available in many cases - though it's still just a prediction. UX research is more accurate as it's real rather than predictive, though it is more costly and slower and is often less useful when trying to lead rather than review (think of the iPod). 

Horses for courses. But the key is to know which one you're working with. 

When you're running UX research, make sure that the findings are observed and not opinion-based. I try to avoid carrying expectations and bias into the room when observing user tests. I may have an opinion as to whether the design will work well, but I've been surprised on many occasions, positively and negatively, so I've learned to leave my opinions outside the room. And try to avoid having others in the room offer their opinions too freely, as that can easily begin to sway you towards treating their opinion as a finding.

As with most important topics in the world today, it's important to separate opinion from fact, and fact from 'alternative fact'. Otherwise the relationship - and your product - can quickly end up on the rocks.

Play becomes accessible - finally

Over the past few years I've had the pleasure of working on a number of projects that have related to people with needs and others who help them - children, people with a disability, older people needing care and those who are suffering illness. They are some of the best people you'll meet. I've been involved in projects to provide software to people needing care to make it easier for them to find the carers they need. I've helped providers to build more usable websites to bring services closer to those who need them. I've listened to awesome people such as Northcott Innovation and their project to bring to market a stair climbing wheelchair. I've helped the NDIS to research their market and define the shape of their upcoming services.

So when I read this week about Adaptoys (adaptoys.org), supported by the Christopher & Dana Reeve Foundation, I was really intrigued.

I've lived with a daughter being temporarily paralysed (fortunately only for a few weeks), and I've had the pleasure of interviewing several people who have learned to live with full paralysis. I can't begin to imagine the challenges that brings, but I can imagine the pain of not being able to play with your own children. Having five kids in the family I can imagine how painful life would be if you could watch your kids play, but never join in. That's the problem that Adaptoys has set out to solve.

Image courtesy of adaptoys.org

Their approach so far includes a remote control car, controlled by a breathing straw. Move the straw left and right and the car shifts direction. Breathe out and it accelerates, breathe in and it slows, then reverses.

Another prototype uses a simple voice control unit to control a baseball pitch machine for kids.

In their own words:

“Technology has been such a powerful force for individuals with disabilities. However, there is a void when it comes to technology and accessible toys,” said Peter Wilderotter, President and CEO, Christopher & Dana Reeve Foundation. “Adaptoys will help eliminate inequality by reimagining playtime for parents, grandparents, siblings, uncles or aunts who are living with paralysis. We are excited to partner with 360i on this innovative campaign to ignite a global conversation and share these life-changing toys with more families.”
To extend this idea, the Reeve Foundation and its partners in this initiative have launched a crowdfunding campaign at Adaptoys.org to raise funds to support the research and development, and to cover production costs, for at least 100 adapted remote control cars, which will be distributed to individuals living with paralysis through a random lottery selection.
“As a grandmother, you dream about playing with your grandchildren. But for people living with disabilities, playtime can be isolating and inaccessible. My granddaughter lit up when I was able to race cars with her,” said Donna Lowich. “Adaptoys will allow me to be part of her childhood in a more meaningful way and my only hope is that we can bring these accessible toys to many more families. Everyone deserves to play with their loved ones.” 

I got such huge pleasure from reading about this project, and imagining how much further it can go. I made a small donation from the Fore, and I'd encourage you all to do the same. Play is key to us as human beings, to both our mental and physical health. Let's give play back to some people who need it, and deserve it.

Oh, and just to bring this well and truly back into the realm of UX, I hit the biggest problem of all when attempting to make a donation - the site wouldn't let me. When I got to the credit card details there was a huge bug in the coding that stopped data entry in the month and year of expiry for the card, blocking any attempt at payment. It appears to affect just Chrome at this stage, and I got round it using Firefox. I've let them know, but just be warned.
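For the curious, one common class of cause for this kind of browser-specific blocking is a key filter written against the deprecated keyCode property, whose values historically varied between browsers and differ between the top digit row (48-57) and the numeric keypad (96-105). This is purely a hypothetical sketch of that failure mode - I never saw the site's actual code:

```javascript
// Hypothetical sketch of a browser-specific expiry-field bug - NOT the
// donation site's actual code.

// Buggy filter: only accepts numeric-keypad keyCodes (96-105), so
// typing "12" on the top row of the keyboard is silently blocked.
function buggyExpiryKeyFilter(keyCode) {
  return keyCode >= 96 && keyCode <= 105;
}

// Safer filter: test the logical character via event.key, which is
// consistent across browsers and keyboard layouts.
function safeExpiryKeyFilter(key) {
  return /^[0-9]$/.test(key);
}
```

The top-row digit '1' arrives as keyCode 49, which the buggy filter rejects outright, while the event.key version accepts it in any browser.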

And just this once, I'm being forgiving of the poor experience!

We'll follow a robot - to our doom

This article on NewScientist makes interesting reading - in a recent experiment at Georgia Institute of Technology, 26 out of 30 students chose to follow an 'Emergency Guide Robot' off to an obscure doorway in a simulated emergency, rather than take the clearly marked emergency exit door. That's 87% of the audience, potentially led to their doom in a burning building. It's a sobering thought.

But it's not unique.

Whilst scientists in this case are trying to get to the bottom of how we build and comprehend trust in robots, cases of people trusting their technology to their doom are relatively common. This article lists 8 drivers who blindly followed their GPS into disaster, from driving into a park to (almost) driving over a cliff, driving into the sea and driving hours and hours across International borders to reach the local train station.

Airplane crashes frequently involve pilots trusting what their instruments tell them over what they can plainly see - such as reported here. Years ago, when I worked for the military in the UK, there were reports of fighter aircraft crashing in the Gulf because their 'ground-hugging' radars would sometimes fail to read soft sand as ground and would bury the plane in the desert floor. I heard numerous other horror stories of military tech gone wrong and how it killed or almost killed the people it was supposed to protect - including one very 'not-smart' torpedo that blew the front off the vessel that fired it.

And all this comes to mind this week, as I'm involved in designing an app to help people avoid and survive bush fires. 

Software that tells someone to turn left instead of right in an emergency is no different from a robot that leads someone away from a fire in a simulated experiment. And if it leads someone into the fire rather than away from it, the effect is exactly the same. We are rapidly moving away from the age where software simply provided information; it now makes decisions for us - and that means we have a responsibility to ensure those decisions are right.

As designers, coders, product managers and UXers, we are involved in designing software that can increasingly be thought of as an agent. Those agents go out into the world, and - hopefully - people trust them. 

And if our agents lead people to their doom - that's on us.

I've always advocated for a code of ethics around the software, robotics, IoT and UX fields, never more so than now.

Great customer service - Accor Hotels

"I have to do what to talk to you..?"

It's unfortunate that what often drives me to write is the bad experience; good design (and customer experience) is relatively invisible, bad experiences leap out.

So it is today for Accor Hotels.

A brief background. I've been a member of their rewards program for a couple of years now. You pay an amount to join each year, and in return you get one free night's stay to use throughout the year. Last year I was a bit busy and didn't use it. With Christmas approaching I'd planned on a night away with my wife, somewhere mid December.

Unfortunately my father-in-law fell ill early in December and was extremely unwell in hospital. As the month progressed he seemed to get better, but unfortunately passed away. In the middle of this I received a call from the Accor Hotels rewards program to speak about renewing.

The customer service rep who called me was obviously understanding, and quickly hung up after offering condolences. No problem.

However I'm now trying to sort out renewal, and I have a question. Had I renewed instantly, my free night could have been carried over into this year; since my membership lapsed for a few weeks, it is technically lost. But given the circumstances I just wanted to ask someone there if there was any chance they could roll it over anyway.

And that's where the awesome user experience kicks in.

And it's the classic: an organisation that wants to reduce the load on customer support, but ends up making it near impossible to actually contact them.

Logging into my account I found a new design that was almost entirely focussed on sales. Offers, options and booking links are everywhere. But there were links to manage my account, so I started there first. Unfortunately, to no avail - my account listed my personal details, gave me options to sign up for different cards, but had nothing to help. 

There is no phone number visible anywhere I can see.

Okay, by this point I'm getting a little frustrated, but surely there's contact information in there somewhere.

So I head to the Support space - which is dominated with my favourite content, FAQs. I can search the FAQs, I can click into categories of FAQs, I can scroll through FAQs as much as I want. Great.

Having completely failed to find the FAQ titled "A family member just died and I missed my payment - can I roll over my rewards night anyway?" I did find a Contact us link at the bottom of the page. This offered me a list of choices - Contact us by Email. Great list. I would much rather talk to someone, but since email is the only show in town, fine.

This option asks me to select a main reason to contact them. I select the rewards program. It then offers me 11 sub-reasons to choose from, none of which have any relevance to me, and none of which relate to membership - which is bizarre given that I've just selected the rewards program. But since one of them is 'Other', I try that.

This now asks me to enter my membership number (my personal details are thankfully defaulted in) and to write my message, which I now do - foolishly thinking I've hit the jackpot.

I explain the reason for my contact and - having learned from past such experiences - copy the text, in case the form fails. I hit the button - and the page scrolls up slightly. What???

I hit the button again, and again the page scrolls up slightly. 

I then proceed to try various combinations of information, just in case I've hit some bizarre error. Is there a mandatory field I've missed? Nope, everything's filled out. Is my text maybe too long? Nope, shorter text doesn't work. I try various other contact reasons, I try different subjects - but every time, I get a small scroll instead of a form submission.
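As an aside, the 'button just scrolls the page' symptom often comes from HTML5 constraint validation: a required field that has been hidden by script or CSS still fails validation, so the browser scrolls towards it and aborts the submit without any visible error. A minimal sketch of that logic - hypothetical, since I never saw inside Accor's form:

```javascript
// Hypothetical sketch of a silent form-validation failure - not
// Accor's actual code. A required field hidden inside a collapsed
// section still blocks submission; the user just sees the page
// scroll towards an error they can't see.

// Each field: { name, required, value, visible }
// Returns the first field that blocks submission, or null if the
// form is free to submit.
function findBlockingField(fields) {
  for (const field of fields) {
    if (field.required && field.value.trim() === '') {
      return field; // blocks submit even when field.visible is false
    }
  }
  return null;
}
```

A form whose only empty required field is invisible will block every submit attempt with no feedback at all - which looks exactly like a button that does nothing but nudge the scroll position.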

By this stage I'm seriously peeved and thinking why the hell am I chasing these guys to give them my membership money and my business. In the end I have to settle for a shot across the Twitter bows, as the twitter account is the only contact information other than Facebook I can find.

Let me be very clear. I am not complaining about losing the hotel night. That was my bad, and if Accor told me they couldn't keep it I would have continued this year anyway, I just wanted to ask before I signed. It's not the night that counts - it's the fact that they made it practically impossible to ask them about it.

If you want to upset your customers, ignore them. Make them jump through hoops needlessly and repeatedly to ask you basic questions, make them sweat if they want to buy from you. It's a winning strategy Accor Hotels, and I wish you every bit of luck with it.

Welcome to 2016

I don't mind saying that 2015 was a strange year.

We saw some great new customers and some awesome projects, but we also said goodbye to some friends and some team members. We saw customers have huge success, and others struggle with digital disruption and transformation. On a company level we welcomed new life into the world, but also lost a close family member. The circle of life turned, as it always does.

So what does 2016 have in store for us?

As always it's hard to tell, so rather than a list of predictions, here are our company wide New Years Resolutions. At the end of the year we'll come back and see how we've done.

1. Launch our platform

We've been planning our new UX platform for years now, and it's finally the year we get it launched. No info just yet, but stay tuned. In future all of our output will be available to our customers on this platform.

2. Simplify, simplify, simplify

We've been doing this for the past year and I fully intend to keep doing it. Simple approaches. Simple pricing. Simple ways of working, simple deliverables. More simplicity I say, MORE!

3. More work life balance

Speaking of more; I launched the Fore with the full intention of keeping a great work-life balance. Unfortunately that's been hard to keep on track as we've grown in popularity, and this year was a personal low for me. I missed birthdays, I worked weekends and many nights. Despite best intentions the life grew smaller as the work grew bigger. This year we're going to ensure that this comes back into balance - because, simply, life is too short. We'll always do our best to make sure your project is delivered super fast, but this year we'll also make sure that weekends and evenings aren't involved. 

4. New format deliverables

We listen to our customers and as part of the new platform we're going to be radically redesigning how we deliver insight and intelligence from research and tests. Result formats are going to change - stay tuned.

5. Grow a little

This is also the year that we grow to a new location. Again no spoilers just yet, but stay tuned.

So, what are your plans for 2016?

The Matrix, here we come

NewScientist has reported here on an amazing breakthrough that's been years in the making.

A memory prosthesis is soon to be trialled that will allow the restoration of long-term memory, and potentially the uploading of new memories and skills. Whilst the sci-fi fans amongst us will rejoice at the potential to mimic Neo and upload combat skills in a blink, there are some amazing real-world potential uses.

Firstly, this could potentially help those suffering from dementia and related conditions that rob us of short term and then longer term memories and skills. Having seen first hand how this can reduce a strong and confident person to a lost child, I'm personally very excited about the potential of this kind of device.

Secondly, this could be a huge boon to those who are disadvantaged by today's technology and world. For example, older people or people with reduced cognitive capacity could use this form of technology to help them integrate more freely. Instead of feeling isolated by an increasingly complex world, these people could have the necessary skills and memories directly available to allow them to integrate. Equally it could help refugees to quickly learn the language and culture of the new countries they end up in. It could also help those with unique psychological conditions such as prosopagnosia, where people are no longer able to recognise familiar faces. The potential wins are limitless.

Even more importantly, this could open the way for increasingly complex systems. Right now our ability to use and understand complex systems has a relatively hard limit in current human perception and cognition. Technology that helps to shift that processing out of our heads and into assistive elements such as this could one day allow us to handle vastly more complex interfaces and computing tasks, without needing to have super brains ourselves. Not that I'm arguing for complexity, that's absolutely not the case - but extending our natural abilities and memories, and reducing the workload on new skill acquisition, can all help to increase the reach of the average user.

And that's something we should all be excited about. 

Great Customer Experience? Take this (placebo) pill

By now everyone and their dog knows what the placebo effect is; tell someone they are taking a pill that will make them feel better, and they'll generally report an increase in their happiness or a reduction in their pain, despite no efficacious medication being involved.

What's slightly more surprising is that this rule applies to the digital world too.

This week NewScientist reported that the placebo effect also applies to the digital gaming world. Gamers were taken through a game and told that their environment was being controlled by an adaptive AI that would customise their world uniquely for them. When asked to rate their experience with the game against a 'standard' version not using this non-existent adaptive AI, gamers generally rated the game as more immersive and more entertaining.

Which raises a very interesting question: can the placebo effect apply to other digital experiences, such as the design of overall Customer Experience strategies and approaches?

It's an interesting question, which also potentially crosses ethical boundaries. For example, if you told visitors to a site that an AI was customising their experience for them and would help to make their visit the best possible (even when no such AI exists), you'd potentially see an uplift in perceived positive customer experience - though you'd be lying in order to see it.

But placebos aren't necessarily about untruth and obfuscation; they can have a very relevant place in treatment. My daughter broke her arm at an early age - a very nasty and painful break of several bones that needed major work. During her stay at the hospital she would sometimes hit her medication limit but still be in pain - at which point placebos (in this case soft toys with 'magical pain-removing properties') were utilised. Whilst they didn't fix the problem they certainly helped.

There are instances where placebos can help in an ethical way. Take for example this app developed with medical specialists, to give people a knowing placebo on their phone to improve their life - in effect a self-delivered placebo. Think that won't work? Think again - read this research showing that deception is not in fact required for a placebo to work. Anyone who's ever used a Magic 8-ball knows it can't really predict anything, yet we ask it questions anyway.

So back to the digital world. Could a digital placebo work, in an ethical way? Could you improve your Customer Experience with a placebo pill clearly marked..?

I don't have the answer to this but I'm betting the answer is yes. Let me paint a frame around two potential pictures.

Landscape number 1: there is a measurable happiness benefit gained from giving to charity, so you could leverage a placebo around that. For example, a site might ask people: if you were to give to charity, which of these would you prefer? If the placebo effect holds as expected, you would see a certain amount of happiness transferred from the faux giving. That doesn't need to detract from actual giving either - you'd start with the placebo and offer the chance to go further.

Landscape number 2: there's an equally important psychological benefit to be had from telling other people about both the positive and negative customer experiences you've had. So what if a site were to offer a placebo version of that conversation - tell Paulo, your placebo mate, just how great or terrible our customer experience was. As long as Paulo was clearly a placebo and responded with a placating response, you should see a placebo impact on overall satisfaction.

Okay, it's not perfect and it's not science, or at least not yet - but I'll bet a pack of placebos that it will be, soon. 

Samsung Gear VR meets people

Three weeks ago I managed to purchase a Samsung Gear VR headset (based on the Oculus Rift model) and I've had the time since to play with it.

I've now had the chance to sit in the tail-gunner seat of a bomber and shoot down attacking fighters, to see what it looks like to be eaten by a shark; I've flown across waterfalls in Iceland and I've stood on a dance floor surrounded by a teeny-bopper band and jiggled about a bit. But mostly I've been interested in how the technology engages other people, and where it might be a few years from now.

The highlights

I've been extremely pleased by how most people have engaged with this technology. For the most part it is still in the 'wow' stage - showing someone a virtual cinema with a movie playing across a massive screen, whilst the light from the screen reflects realistically off the walls and ceiling, often leads to dropping jaws.

Children in particular have been a joy to watch as they engage with the Samsung Gear VR. One experience available shows you in a forest as a large dinosaur slumbers. He then wakes, walks over, sniffs you and eyes you closely, before standing on his rear legs to eat leaves that shower down around you. Watching the awe and engagement on children's faces as they experience this for the first time is nothing but pure magic. Another experience takes the viewer on a guided tour around an animated solar system, naming and talking about all the planets and moons - as the father of a son who loves science, I loved seeing his face as he toured the planets.

What seems to work best at this stage are animated experiences and games. Flying through a brain shooting at defective neurons and shooting balloons by looking at them are pretty fun, if basic. 

Overall, the experience is definitely more fun than I expected and more engaging than I'd hoped.

The lowlights

Whilst video experiences are good, they aren't perfect. Video quality is just a little too low, and everything feels quite blurry in many experiences. 360 static images are clear and crisp and you can look all around you, but the static nature fails to grab the imagination.

Battery life was an issue too, with two or three hours of play wiping out my phone's battery completely. The Samsung Gear VR has a port that can accept a powering cable (if it's a Samsung one), but as those cables are short this is a relatively useless feature, in my mind - especially given that much of my use of the headset was either standing up, or twisting and moving about on a chair.

Strangely, the one experience I thought would succeed - a rollercoaster type ride - just made me feel ill and I quickly had to remove the headset.

Mostly though, it's the lack of content that I found to be an issue - understandable, given where the technology sits today, but still frustrating. Once you've gone through the few games and experiences available you're left hungry for more.

The UX issues

A virtual headset is an interesting interface to control. You can look to select/activate, and there is a small trackpad (to scroll and click) and a Back button (press back, press and hold for menu options). Technically that's enough to do the job, and it works - mostly. But the experience is not always ideal.

What are you seeing?

The first and most repetitive issue I found was in trying to assist others in using the headset. I would load or prepare an experience, take off the headset and pass it to someone. Then I'd press the trackpad to start it. Usually it would work, but sometimes the person would look confused. I'd ask what they were seeing, and they'd describe something that made no sense - "It's all black - and there's a thing over there..."

Invariably I'd end up taking off the headset, figuring out what had happened and then resetting it, sometimes two or three times in a row. Since the trackpad is on the side of the headset it's incredibly easy to accidentally touch it whilst taking it off or putting it on someone else, or for them to hit buttons whilst adjusting the headset for comfort. Suddenly they're somewhere else entirely. 

My neck hurts

I also found an issue with viewing angles and centring of view. Sometimes the device would remember that I was looking north, for example. I'd hand it to someone who was sitting and they'd take over - only to have to sit sideways on their chair to get a 'front' view. Or, the view would shift as they used the experience, and they'd end up twisting uncomfortably to see everything.

Often, even just using the experiences on my own, I found it easiest to stand. This allowed me to swivel and turn without twisting my neck too far and causing neck-ache.

Not as inclusive as it should be

One area that really disappointed me was in the inclusive nature of the device. My current focus is on assistive technology for people suffering from dementia and related conditions, and I held the hope that a device like this could open up new vistas for them. I had the chance to try the headset with a dementia sufferer in his 70's, and was hoping the experience would be as impactful as it was for the children.

Unfortunately it fell relatively flat. Confusion at the interface and options was a big issue, but even when the experiences were loaded and the headset placed on his head, there were issues with clarity (older eyes with poorer vision and focus) and also with swivel - whereas young kids (and, to a lesser extent, the rest of us) will turn their heads and swivel to view all around them, older people tend to simply look front and centre, and therefore miss much of the immersive nature of the experiences.

It's not a knockout failure - there were still smiles and exclamations, but it was not the hit I was hoping it would be. 

The future

Whilst the Samsung Gear VR is not perfect, it is for the most part a pleasurable device to use. The experiences it offers are not exactly the kind of thing you'd use for hours every day, but they definitely add something to your life and they absolutely have a lot of possibility.

Right now the content on offer is limited to games, media consumption (watching movies) and experiences (travelling down the Grand Canyon). But the future is bright with possibility.

I can easily see real estate business opportunities - for example showing you interactive 360 views and video from inside homes, offices, buildings, both real ones and modelled ones yet to be built.

I can see options for surgeons and trainees of all kinds, exploring interfaces and the world from within. For example climbing inside an engine, or pulling apart an airplane to see how it works. I can see trainers and experts providing their insight and expertise via immersive experiences and walkthroughs.

I can see options for psychology - using a headset to guide patients through experiences that might have emotional impact.

More than this though, I can see options for education and schooling. Having a headset lab at school would allow children in all parts of the world to experience what they might not get to see in their lifetime. Taking a walk on Titan, clambering over the International Space Station - or just learning about the deep oceans by actually going there.

The future is bright for VR - so bright, you've gotta wear the (VR) shades.


Experience is everything: stand clear of the gap

Businesses today live or die by the experiences they offer to their customers.

It's that simple. It's even relatively easy to calculate how much poor experiences are costing you. But what many businesses don't do is measure the gap.

Great customer experiences are worth their weight in gold - just read these 10 unforgettable customer experience stories. In today's socially connected digital world these stories spread quickly and bring huge benefits. The opposite is also true: hardly a day goes by without some customer experience fail story appearing in the news somewhere.

It almost doesn't matter what you do, it's how you do it that counts. And the best example of this is to ask your sulky teenager to do the washing up. If your experience is anything like mine it'll probably go like this:

Asking a teenager to do the washing up - my 10 step guide:

  1. Ask them nicely to do the washing up.
  2. Ask them again.
  3. Then again.
  4. Now order, point, gesticulate, rant a little; some spit coming out generally helps. Ensure your face goes a nice shade of purple, if you want them to pay attention.
  5. Wait 15 minutes whilst they shuffle and moan towards the kitchen, mouth going at ten times the speed of their feet as they tell you how they already did the washing up last week, so it shouldn't be their turn. Congratulations, you're at the half-way mark!
  6. Watch from the sidelines as they overfill the sink, forget to put in dishwashing liquid and then gently waft a dish cloth at some dirty plates before placing the (still dirty) plates in the rack.
  7. Point out that they've missed all the pots.
  8. Point out that they've missed all the glass cups, and now they're going to have to replace the greasy water before washing them up.
  9. Nod politely as they walk past you, glaring and mumbling about enforced labour camps and Stalinist dictatorships.
  10. Spend 30 minutes cleaning up the sink area, getting bubbles off the roof and re-washing everything since it's still dirty.

So you got the job done - but how was the experience? The service provider did provide the service and completed the tasks. How likely are you to return?


The problem is the gap. Ask that teenager and they'll probably tell you that their parent must be happy - after all, they did the washing up, right? Ask the parent and they're much more likely to be grinding teeth. The issue here is the gap between what the provider thought would be a good experience, and what the end customer wanted it to be.

Mind the gap: doors closing...

This week I had a great example of this. My car broke down and I've had to return it for repairs. And there are two ways to look at how it all went.

From a purely factual perspective the event went something like this: a failure occurred somewhere in the computing system, and the doors and windows (apart from the driver's door) went into deadlock. The car could be driven but couldn't carry anyone safely, and it then locked itself whilst the keys were inside. A break-in was needed to retrieve the keys, the car was returned to the dealer, the part(s) were replaced and the vehicle was returned.

From the dealer's perspective they no doubt see this as a win: the customer reported the fault, they offered a replacement vehicle (albeit an older trade-in), they checked the vehicle and found the problem, they fixed the problem, and they returned the car in working order at no charge. From an organisational perspective this wasn't a cheap exercise for them either - they've had to make numerous phone calls, they've had to order rush parts from overseas, and it's all covered by warranty so it's all cost. They've also had to hand over a vehicle they could have on-sold for a number of days.

Technically they've done a good job. Organisationally they've supported their customer in the field. The washing up was done, yes? Happy customer?

And now, for the parent

From a customer perspective, this was far from a win. I bought that vehicle less than 8 weeks ago and had an expectation of a relatively speedy and supportive response from the dealer. 

I'm not going to list everything I went through, but some of the emotional highlights of this event for me were:

  1. Driving my car with my son in the back (since we were out when the fault occurred) and thinking about what would happen if we had a crash (since his doors and windows were deadlocked shut)
  2. Worrying about the fact that I'd only just dropped off two older people who could never have clambered out over the driver's door if this fault had gone wrong just a little sooner
  3. Finding out that the breakdown cover I thought I had, had actually lapsed.
  4. Watching with horror as the car decided to lock the driver's side door - whilst the car was switched on and the keys were sitting in the console.
  5. Paying hundreds of dollars for the NRMA guy to come out and help me retrieve said keys.
  6. Paying for a cab to take my family home whilst I waited on the roadside in shame.
  7. Waiting hours for the NRMA guy to come - in the pitch black, on a scary street, listening to a lovely couple argue about who was the bigger a-hole and scream at each other just across the road.
  8. Finding out he'd gone to the wrong suburb, and having to wait more.
  9. Watching as he drove off - only to realise with a sinking heart that the headlights had also failed and that I couldn't drive the car anyway.
  10. Paying for yet another cab to get me home.
  11. Paying for a third cab to take me back the next morning and then driving the car - again worrying what would happen if we got side-swiped - to the dealer's.
  12. Picking up my replacement vehicle, a tank of a Commodore with sheep-skin seat covers and a lovely lived-in aroma.

The gap between the dealer's expectations of satisfaction and mine is large enough to drive that Commodore through. Whilst they did respond and fix the car, the staff I spoke with had no emotional connection to what had gone wrong - and completely failed to ask or care. At no point did they seem to realise how the fault had impacted my day or life, and they brushed off any comments I made about taxi and NRMA costs. Their estimates of how long the problem might take to fix ran from an initial 'few days' to 'a week or more' to 'three or four weeks' and back to 'tomorrow', to my utter confusion.

Whilst they did all the right things, they did many of them in a way that offered a poor customer experience. Despite having purchased three vehicles from them in the past four years, and despite having recently planned to trade in another in the next year, we're now very unlikely to ever return. Poor customer experience over one incident has likely cost them tens of thousands of dollars in sales, and could so easily have been avoided.
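That "tens of thousands of dollars" claim is easy to sanity-check with a back-of-envelope calculation. Every figure below is a hypothetical assumption for illustration, not a number from the dealer or the article:

```python
# Rough lifetime-value estimate for one lost repeat car buyer.
# All figures are illustrative assumptions.
margin_per_vehicle = 3_000       # assumed dealer profit per sale (AUD)
future_purchases = 5             # assumed repeat purchases over a decade
servicing_margin_per_year = 400  # assumed annual servicing profit (AUD)
years_retained = 10              # assumed remaining customer lifetime

lost_value = (margin_per_vehicle * future_purchases
              + servicing_margin_per_year * years_retained)
print(f"Estimated lifetime value lost: ${lost_value:,}")  # $19,000
```

Even with conservative numbers, one mishandled incident dwarfs the cost of a few taxi fares and a sympathetic phone call.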

The shame is that it's all down to not being aware that the gap is even there. A little focus on expectations, and on where they fell short, could easily have fixed that.

Poor customer experience costs business. And getting it right really isn't that hard, if you place Customer Experience Strategy at the centre of what you do. When will they learn?

Now excuse me, I'm off to clean up the kitchen...


Does an app have the right to save your life?

It's an interesting question, and one we'll soon have to address: where exactly is the ethical line in terms of technology and life-saving?

Not so long ago I sat in a coffee shop and dreamed up (with two others) the concept of a robotic pool life-saver. The idea was that the device would - when in guard mode - look for movement in the water, move towards it, sound an alarm and attempt to keep the person afloat, the assumption being that it was a child who'd fallen in. That project never made it off the ground, but there is no doubt in my mind that it would be a worthy cause. But where, exactly, is the line? Where does a device, or indeed an app, have the right or even the obligation to save us?
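The guard-mode behaviour could be sketched as a simple detect-alarm-approach-float loop. The sensor and actuator objects below are simulated stand-ins I've invented for illustration, not a real device API:

```python
# Hypothetical sketch of a pool life-saver's guard-mode logic.
class SimulatedSensor:
    """Returns an (x, y) position of detected motion, or None for calm water."""
    def __init__(self, events):
        self.events = list(events)

    def detect_motion(self):
        return self.events.pop(0) if self.events else None

def guard_step(sensor, log):
    """One pass of the guard loop: detect, then alarm, approach and float."""
    target = sensor.detect_motion()
    if target is None:
        return False                      # nothing in the water, keep watching
    log.append(f"alarm at {target}")      # alert anyone nearby
    log.append(f"moving toward {target}") # close on the disturbance
    log.append("deploying float")         # try to keep the person afloat
    return True

sensor = SimulatedSensor([None, (3.0, 4.0)])
log = []
guard_step(sensor, log)  # first pass: calm water, nothing happens
guard_step(sensor, log)  # second pass: movement detected, robot responds
print(log)
```

Even this toy loop quietly embeds an assumption - that any movement is a person in trouble - which is exactly where the ethical questions below begin.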

The past two months have been interesting ones for me, working in the aged care, youth care and disability sectors. I've researched in the field and spoken to numerous people about their lives, their issues and the needs that are out there.

Last night I watched Humans on SBS, a very interesting show. In it a health-service robot is provided to an older single man to care for him. This robot, matronly and authoritative, immediately started to curtail his life: salt-reduced broth for lunch, healthy walks and early to bed, despite his demands to be treated better.

And it raises an interesting question that we'll soon have to deal with. Apps already exist that can help to save lives: calling for help, providing life-saving guidance to restart a heart, helping us to live healthier (and therefore longer) lives. But right now all of this is self-guided.

What happens when the app or the device is in charge? Where's the ethical line here?

This might sound a little far-fetched, but trust me, it's already happening. We are living in a world where self-driving cars are a reality and will soon hit the roads (pun intended). Those cars will have lives in their hands every day, both passengers' and others' on the streets. They'll also, most likely, be gaining a layer of smarts to react in emergencies - for example if a passenger has a medical emergency whilst en route.

Automated drones already have the capacity to carry weapons, at least for military purposes. Home automation options are available right now that will monitor movement, telling us if a person has a fall or forgets to take their pills. Others are in development that will (very soon) be able to monitor mental and physical health, and react accordingly.

So, from a UX designer's perspective, we come to that question: does an app have the right to save your life?

On the surface the answer seems simple. Of course it does! Friendly robot sees person have a heart attack, and friendly robot assists. Technology has the right - is indeed obligated - to save lives.

Or is that right?

What about people who make a choice to end their own lives? Should that be overridden by their app, their technology? Should the self-driving car refuse to allow its passenger/driver to take dangerous actions behind the wheel? Should it stop the crash? Should the pill app lock the lid if someone takes too many?

Again, some people may say yes - that the app and the technology are just as obligated, even if the intent is to self-harm.

But where does that end? I had chips for lunch today and, yes, they tasted great. But they've equally added to the protective layer that I'd love to lose, and have increased my risk (though marginally, it's true) of serious health issues and death. Should my technology stop me from eating chips ever again?

Should my car refuse to drive me anywhere except to the doctor's? Should my app refuse to show me web pages for fast food joints? Should my recipe finder recommend only vegetarian meals?

And this hasn't even touched on the issue of choice. What happens if an app can reduce life risk or save the life of one person, when two people are at risk? How does it make a choice? What if my pool life-saver robot registered two bodies in the water, not one? How does it choose? And does it have the right to make that call?

I once talked with a team who tested the Phalanx anti-missile system in the field. This was an automated gun that could sense incoming missiles and shells threatening a ship, and could fire pellets to explode them before they hit. When testing this with incoming shells from another ship it worked perfectly - right up to the point where they tested it with two incoming shells. The Phalanx gun started firing at the first, then ignored it and focused on the second, then the third... Luckily this was a field test with no live rounds, and no lives were lost. It turned out that the logic built into the gun was to lock on to the fastest incoming target and fire. Since the first shell had slowed down marginally, the gun ignored it, even though it was clearly the bigger threat.

Somewhere back down the track from that field test, someone had built logic into a device to save lives. And it turned out that their logic was faulty. If that hadn't been caught, lives could easily have been lost due to poor logic - poor UX.
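The flawed rule described above can be sketched in a few lines. This is my illustrative reconstruction of the story, not the actual Phalanx software; the shell names and numbers are invented, and the "saner" alternative shown alongside is just one obvious fix (prioritise by time to impact):

```python
# Sketch of a "lock on to the fastest target" rule and why it fails.
from dataclasses import dataclass

@dataclass
class Shell:
    name: str
    speed: float     # metres per second
    distance: float  # metres from the ship

def select_target_flawed(shells):
    """The reported rule: always engage the fastest incoming shell."""
    return max(shells, key=lambda s: s.speed)

def select_target_by_threat(shells):
    """An alternative: engage whichever shell will arrive first."""
    return min(shells, key=lambda s: s.distance / s.speed)

# The first shell has slowed marginally but is far closer - the real threat.
incoming = [
    Shell("shell-1", speed=290.0, distance=500.0),    # ~1.7 s to impact
    Shell("shell-2", speed=300.0, distance=4000.0),   # ~13.3 s to impact
]

print(select_target_flawed(incoming).name)     # locks on to the distant shell
print(select_target_by_threat(incoming).name)  # engages the imminent one
```

The point isn't the arithmetic - it's that a single badly chosen selection criterion, invisible until the edge case arrives, is the difference between a working safety system and a lethal one.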

Think that's not relevant? What happens if a self-driving car registers an incoming threat (a boulder rolling down a hill, a child walking in front of the car) and has to take evasive action? What happens when the only evasive action it can take is to swerve into a line of pedestrians? Which choice should it make?


I know this sounds sci-fi, but there are some pretty weighty questions here and we're not far from needing the answers. This last month or two has taught me that there is a burning need for technology to help save and protect people, to support those who are older, younger or who have a disability or special need. And solutions for these are already rolling off the line.

But the ethics and the model for how these devices might behave can't afford to follow too far behind...