AI design needs some 'deep thought'

We all know the answer. 42.

But to quote Deep Thought, "the problem, to be quite honest with you, is that you've never actually known what the question was."

When Douglas Adams wrote about Deep Thought in The Hitchhiker's Guide to the Galaxy, he wrote about a machine that humanoids built to do something they couldn't do - work out the meaning of 'life, the Universe and everything'. Unfortunately for all (but possibly fortunately for Arthur Dent), it failed.

I was thinking about this today, whilst reading about Google's recent AI efforts with Google Duplex.

For those of you who may not have seen it, Google Duplex was showcased in their recent I/O keynote. They showed how Duplex can make phone calls on your behalf, mimicking a human so closely that the person on the other end of the call doesn't realise they are speaking with a program.

Technical or unethical?

Technically that's a huge achievement, and something we all knew was coming. Hell, we've been begging for it for years. Smart assistants that really understand us? Who doesn't want one?

As a UX/CX consultant, it's easy to see this as the first wave of the disappearance of my job. One day there won't be much of an interface to design - you'll just ask, and get. There won't be human researchers, programs will crunch the data and ask what they don't know, and act accordingly. People like me will mumble about 'the way things used to be', curl up in crusty armchairs and knock back another Margarita. 

Only we still have a few problems to solve, don't we?

The ethical bridge

I received an email recently from someone who said they wanted to talk about UX, relatively urgently. They asked me when I was free for a chat, and since I'm a nice chap (and always interested in growing business, it must be said) I called them back later that day to see where I could help.

It turns out, after several minutes of small talk and intros, that the person was, in fact, a recruiter. And the 'UX' they wanted to talk about was, in fact, the provision of his (non-)unique recruitment skills to any recruitment shortage I might have.

Now whilst this person didn't explicitly lie and tell me he was looking for business, his email was carefully worded to leave me thinking that was the case - and hence to act as if it were. Once the deception was clear I exited the call as politely and quickly as possible, and vowed to lose his details. We do sometimes talk with recruiters, but being deceived as to the nature of the call as an opening gambit pretty much tops the list of 'ways to lose trust instantly'.

Unfortunately, Duplex was asked to do the same thing. 

By calling people and pretending to be human, it's deceiving them to act accordingly. Any choices they might have made or responses they might have considered had they been in possession of the full facts were denied to them, even if they never became aware of that fact.

In Duplex's case the tool was designed to include noise terms, pauses and sounds ('ur', 'umm') so that the person it was talking to felt comfortable, and thought they were dealing with a real person. If you want to see the call taking place, watch it here.

As I've long discussed, much of AI comes with an ethical dilemma, and the ethics need to be carefully built in. Any AI platform must act ethically - but if we, as designers, don't build in the right ethics, it will never understand what is right and what is wrong.

First, do no harm

Okay, so those words don't appear within the Hippocratic Oath, but they are close enough. Do no harm. Help the person, don't hurt them.

So, could an AI phone assistant cause harm? 

Sure it could - perhaps only emotional harm, but the risk is there. Let's say the person it calls is having a horrendous day. Their partner has been diagnosed with a terminal illness, they are struggling to cope but gamely trying to keep working. They take a call - and struggle.

Most humans would understand and ask the person if they were okay. Would an AI?

What if it just kept trying to book that appointment, seemingly not caring in the least as the person explains they're having the day from hell? How much might that hurt?

But if the person knew it was an AI, they wouldn't expect buckets of empathy and an offer to come round and help, they'd probably just hang up. No harm done.

Designing for AI - four core elements

When you're designing for an AI interface in any form (text, voice or interface-based) there are four key elements you need to consider:

  1. Context-setting
  2. Engagement points & options
  3. Learning paths
  4. Ethical risk analysis

1. Context-setting

AI needs to set context. It needs to let the person know that this is a machine, not a human. Full disclosure isn't a pathway to failure; it's the first step to building trust, understanding, and sometimes even engagement - our user research shows that many people absolutely love to play with and experience smart tech, so you're losing an opportunity to engage if you don't tell them the truth.

You're also potentially setting the AI up to fail - if people think they are talking to a human then they may deal with it in a way the AI hasn't been trained to handle, and kill the engagement right off the bat.

So, firstly, set the context.

2. Engagement points & options

Next comes engagement.

If a lifetime of CX research has taught me one thing, it is this: humans surprise you.

They do the unexpected. They think illogical thoughts. They act in ways that don't always make sense.

And they don't always understand the rules.

So with any smart system or AI interface, it's paramount that the engagement points are as wide as possible. You only need to go to YouTube and search for fails relating to any device (I won't call out Google or Siri particularly) and you'll see what happens when human and AI interpretation goes wrong.

Humans interrupt each other, leave long pauses, and take huge cues from the minutest of details. One tiny example of this happens every day in my family. My own upbringing (English) means I speak with pauses at the end of sentences, so I'll often pause a little between sentences when getting something out. My wife comes from a South American family where people speak fast and with few breaks - so to her, a slight pause is a signal that I've stopped speaking. The outcome is one frustrated husband who only gets half his words out before the conversation moves on - and there's no AI in sight, yet.

But it's not just voice systems where this is important. I'm currently working on an AI engine that reviews video for you, identifying the content it finds. But if it does that for you, how does the user engage and ask for something slightly different? How does the user tell the AI that it's misunderstood their need?

Identifying the engagement points and the options available at those points is a key path to success - and making those points as flexible as possible (to deal with tricky human illogicality) is paramount.

3. Learning paths

All AI requires learning to be successful.

Whilst much of that happens in the background and from ongoing interactions, learning from the user within the context of a single interaction has the potential to teach far more.

It's like the difference between user testing and analysing your Google Analytics reports. GA reports give you en masse data about how people are reacting to your website. It tells you what people are doing, but often not why. User testing on the other hand lets you talk one-on-one with a single customer; whilst it doesn't provide the 50,000-foot view of what everyone is doing, it does let you see exactly what's going wrong, and have a chance to correct it. The depth of feedback and learning at that engagement point is staggering, in comparison to any analytical data-based approach.

And the effect it can have on the customer, to be heard and to have their gripe confirmed and even fixed in that small window, just cannot be overstated. I've seen customers converted into fanboys in moments.

So building one or more learning paths into the tool is a gold-mine, both for longer term improvements within the AI itself and for engagement with and retention of the customer.

4. Ethical risk analysis

My final element to consider brings us back to my favourite subject: ethics.

When AI gets it wrong it can be funny - YouTube is stuffed with examples where assistants have misunderstood and returned something hilarious. The web is full of autocorrect funnies.

But when a self-driving car piles into a pedestrian, it's not quite as funny.

So it's important to perform an ethical risk analysis, and understand how the risk can be mitigated.

The Duplex example I gave above is one such case. What happens if:

  1. The person receiving the call gets upset and needs someone to listen and/or care?
  2. The person receiving the call has a heart attack mid-call and needs someone to call an ambulance?
  3. The AI misunderstands the call and books something that incurs a cancellation fee that the owner is not even aware of?
  4. And what happens to the store if agents repeatedly book appointments that nobody shows up to? Could that extra pressure push them out of business?

Or in the case of the video analysis tool, let's say it analyses a video of children in a park, a fast moving car chase and a dog getting run over - and mixes up the cut so it looks like a child was crushed to death. Could that cause trauma to a viewer? Could they be sued for it?

Ethical risks can be covered by terms and conditions and legalese - to a degree, only. But our responsibility as designers and innovators should never stop at the 'get out of jail (not-quite) free' card.

Overall Google is doing amazing work, and I love where we're going.

But next time, let's hope the conversation starts with a little more honesty.

Facebook knows all your secrets

Facebook has recently been caught out for collecting and storing huge amounts of user data, and that’s just one app. Now imagine the potential for data-mining if someone had access to your entire phone.

 

Think about it - what have you used your smartphone for this morning? Track your diet, your steps, your locations? Helped you navigate somewhere? Google recently booted 20 apps from the Play Store, as it found that the apps could record with the microphone, take photos, monitor the phone’s location and send that data back to the developers - all without the user knowing. While the privacy threats posed here seem obvious, it raises the question: how common are these apps? How many more are coming?

 

While the thought of those kinds of privacy invasion is bad enough, it’s also theoretically possible to use the sensors in the phone to track what you are typing - including your pin code for your banking app. Additionally, apps don’t need permission to access the information from the phone’s sensors - so they don’t have to ask. And even when they do ask, people tend to just grant access - for example, when you open an app and it asks for permission to use the microphone. It doesn’t say what for, but you allow it anyway. That’s it - it never needs to ask again.

 

On top of that, recent work has found that 7 out of 10 smartphone apps share your information with third-party developers. For example, your map app might be sharing your GPS data with whoever else the developer wants to share it with. This can be used for targeted advertising as well, but there is massive potential for bad situations here. Perhaps even more disturbingly, the same study found that, of 111 children’s apps tested, 11 leaked the MAC address of the Wi-Fi router the phone was connected to. It’s possible to search online for information associated with the MAC address, potentially including the physical address.

 

At the moment, there’s no real way around much of this. And the apps and businesses mining your data don’t make it easy to figure out what’s going on. In light of Facebook being the latest company under scrutiny, I decided to find out exactly what data I could get from my own Facebook account, and how (if possible) to stop that happening. What I found was pretty shocking - it’s incredibly easy from a user perspective to download a zip file of the data they have stored on you, and much harder to find out how to protect your information. Click Settings, and one of the options is ‘Download a copy of your Facebook data’. First of all, this means that anyone who happened to gain access to your Facebook account could easily download this same information. But I wanted to see what was actually in the file.

 

What I found was an index of every message I have ever sent or received, deleted information, photos (both posted and sent to / from me in Messenger) and videos. While that doesn’t sound too invasive at first, the content within the messages wasn’t hidden. So I have conversations with my partner where we share bank account details, information about where we live and our house, our habits. More than enough for someone to do anything from stealing my bank account details to stealing my identity, theoretically.

From my perspective, after looking around and playing with the privacy settings on just Google and Facebook, there’s a big hole in the user experience regarding privacy - the settings seem to be becoming more and more hidden, making it harder to find what you’re sharing and how to control it. More often than not, the request for your information doesn’t tell you the full extent of what is going on, and it can take quite a while of messing around to find the right settings to hide your information.

 

So, is there a way to stop this kind of storage? Apps in particular tend to be very vague in the language used when asking for permission - they won’t tell you straight up how much information is going to be collected. I’ll focus on Facebook here, as that has been the main consideration above. There are some simple ways you can reduce the amount of information Facebook has access to:

 

  • Have a look at your Ad Preferences. You can see the companies or pages that have your contact information, change which information Facebook is allowed to use to advertise to you, and stop Facebook tracking your activity on other websites.

  • Don’t let apps sign in with Facebook. This is another example of hiding how much information is actually being harvested: when you allow an app to log in with Facebook, that allows the app, at a minimum, to view your public information, and may go as far as allowing it to view your email address or phone number.

  • If you use apps inside Facebook that require you to accept Terms of Service from a third party, it’s worth the effort to find out who the third party is and what they will be doing with your information.

  • Ensure your own profile is set to Private and your posts and information are set to either ‘Friends Only’ or ‘Only Me’.

 

But most importantly - be aware. Take the time to read data sharing policies if they’re available, or to go through all your settings regularly.

5 ways you're biasing your research

You’re making incorrect assumptions. And that’s hard wired in your brain.

Our brains have evolved to make assumptions - to analyse vast amounts of data and make judgements that reduce billions of data points down to a few key facts. Psychologists believe that these shortcuts are adaptive and help us make smart choices. At one time they helped us connect a moving shadow in the grass to a predator, triggering a fight or flight response.

You’re also influencing your participants. As science tells us, by observing something we are already interacting with it and influencing it, whether that’s particles or the people we are researching.

So you’re not only biased, but you’re probably influencing your research without realising it. 

But don’t worry - we all are. Here are 5 ways your research is being impacted by bias:

 

1. You make assumptions about what’s happening

You can’t avoid making assumptions, it happens. But the effect of incorrect assumptions can be huge. Here are a couple of quick examples.

Stevenage & Bennett recently performed a study in which forensic students were asked to match fingerprints when given an initial report. Whilst their matching effort should have been the same regardless of whether that initial report suggested the fingerprints matched, in reality a much higher number of students stated the fingerprints matched when their initial test report suggested so too - they were influenced by the suggestion of a match, and followed that suggestion through.

Another study found that judges in domestic violence protection order cases were influenced by the behaviour of the applicant. If the applicant had waited for a period of time after the alleged incident before making an application, or if they did not appear ‘visibly afraid’ during the court appearance, then the judge's statements suggested a bias against them, based on their perception that the person “must be less afraid”.

This obviously has huge and potentially deadly consequences. And that’s a judge, who is trained to be impartial. How many researchers have similar levels of training and experience to a judge?

The fix

I have seen a lot of UX ‘insight’ come out of assumptions. For example, I once worked with a UX researcher who insisted that she was an ‘intuitive’ who could tell what people were thinking. She would often insist that people hadn’t understood something because she “could tell they didn’t understand it, even when they didn’t mention it”.

Try taking that one to the bank.

At heart we are a scientific community and we need to ensure that research is based on fact, not assumption. That means insights need to come from observed behaviour that is repeated and understood, and/or from recorded comments that were made without bias (see below). Findings should be:

  • Objective and unbiased (read this article)

  • Based on direct observation of behaviour or user comment

  • Clear and repeatable - running the test again repeats the finding

  • Actionable (findings you can’t act on are rarely worth reporting on)

 

2. Prior knowledge is a blessing and a curse

Knowing nothing makes you a terrible facilitator. Knowing too much can mean the same, however. And that goes double for participants.

Raita and Oulasvirta (PDF link, sorry) performed an interesting study into bias. 36 participants were put through a website test, having viewed a positive, negative or neutral review of the platform first.

The standard System Usability Scale (SUS) was used to rate the tasks they performed. There were two key findings from this:

  1. There was no significant impact on success rate. No matter which review they read, all participants succeeded or failed at the same rates.

  2. Their perceived rating of the site was impacted significantly. They rated the platform 30% easier to use if they had been influenced first by a positive review. See the table below for more detail.
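
For readers who haven’t met SUS before, here’s a minimal sketch of how the standard scoring works: ten responses on a 1-5 scale are converted into a single 0-100 score. The responses below are invented purely for illustration - they aren’t data from the study above.

  # Standard SUS scoring: odd-numbered items contribute (response - 1),
  # even-numbered items contribute (5 - response), and the total is
  # multiplied by 2.5 to give a 0-100 score.
  def sus_score(responses):
      """Convert ten 1-5 SUS responses into a 0-100 usability score."""
      if len(responses) != 10:
          raise ValueError("SUS requires exactly ten responses")
      total = 0
      for i, r in enumerate(responses, start=1):
          total += (r - 1) if i % 2 == 1 else (5 - r)
      return total * 2.5

  print(sus_score([4, 2, 5, 1, 4, 2, 5, 2, 4, 1]))  # 85.0 for this made-up participant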

As a facilitator, if you know exactly how the system works, then you’ll start making assumptions about choices and behaviour.

The fix

When it comes to facilitating the research it’s best to know a lot about people, about good research, and about the goals of the research itself.

But it’s best to know very little about the subject interface you’re testing. The less you know, the more in tune you are with a participant who is seeing it for the first time. Knowing too much places a gulf of knowledge and understanding between you, and that makes it harder to empathise and connect with their comments and actions.

For participants knowing too much can also sink the research. There are two key actions here to help with this:

  1. Ensure participants match the profile
    Make sure they are the right people with the right level of expertise you are looking for. When you’re testing for existing audiences then ensure you find people who match that audience profile well - but also explicitly tell them not to bone up or study the interface prior to the session.

  2. Ensure new user tests are blind
    When you’re testing for acquisition or new audiences, ensure the participants match the audience profile but also ensure the test is blind, right up to the last minute. Don’t let the participants know what it is they are testing - otherwise human nature is for them to get nosy and start looking at that site or app or software on their own.

 

3. You see what you expect to see (confirmation and hindsight bias)

Confirmation bias is the bias we all know and love.

You think that you’re late so you’re going to get all the red lights - and of course you do. Only you didn’t, you just noticed the red ones more because you were late and you were looking for them. Your bias is confirmed.

In research confirmation bias is a risky bias to have. You think people are going to find sign-up difficult, so you start looking out for problems in sign-up. Did they hesitate just then? That must be because it’s not clear to them. Did the mouse move a little to the left? That must be because they are thinking they want to go back to the previous page and choose again. You think it’s going to cause problems and like a magic eight ball all signs point to Yes, yes it is.

What’s harder to pull apart at times is the fact that we’re facing at least four different levels of confirmation bias, each and every time we test:

  1. The client
    The client thinks their checkout process is too hard to use, and wants to redesign it. This test is just to prove the problem exists so they can justify redesigning and selecting new technology. Confirmation bias is baked in, from the start.

  2. The team
    The UX team are provided with the brief and set up the research. But given other websites they’ve looked at, this sign up and checkout process really does look a little old fashioned, and it’s surely going to have some issues they’ve seen elsewhere. A second layer of confirmation bias begins to build.

  3. The facilitator
    The facilitator has played with the design and found it non-standard, they struggled to understand how to complete some of the goals - and they’re a UX specialist. How on earth would a standard user fare, especially an older one (because we all know older people struggle with technology and websites, right?). Layer three confirmation bias, calling layer three.

  4. The participant
    The participant has heard of Biggend shopping before, their friends told them it has bargains but can be painful to use. As soon as they see the name, they wince ever so slightly. They’re expecting to struggle, whilst the people behind the glass watch on. Layer four pulls into the station.

Confirmation bias can - like scary clowns at a travelling circus - be an unpleasant surprise around every corner.

We also need to guard against hindsight findings. These are when a participant justifies a decision in hindsight. They used control B instead of control A, and then we ask them why. Often this will lead to the participant judging their own actions in hindsight, and coming up with a logical reason why - when there may not have been one, or when in fact their decision was unconsciously driven by various design aspects their conscious mind doesn’t process.

And if those justifications happen to match our preconceived ideas of what’s wrong with the design, then we hit the jackpot and confirm our own bias again.

The fix

There are three key steps to avoiding an overload of scary-clown-confirmation-bias.

  1. An independent facilitator
    The facilitator for any test should be as independent as possible. Using an outsider or independent is best, but using a professional facilitator who practices avoidance of bias can be just as good. Just avoid using one of your team who’s heavily invested in the project and/or the design - or their bias cup may runneth over.
     

  2. A bias-free script
    The test script (or questionnaire or task list or whatever you like to call it) is the crux of any test and the key to avoiding confirmation bias, as far as possible. The goals of the project will probably be biased to some degree, and the client and the team may be bringing to the table their own biases, but the script can avoid much of this. Direction, notes, questions and tasks should be carefully reviewed to ensure they are not leading or biasing the participant. A good facilitator will follow the script and avoid talking too much outside of it, as this will ensure bias is kept to a minimum.
     

  3. Blind testing
    As outlined in a previous point, blind testing will help to avoid the participant carrying too much bias or pre-learning into the room. You can’t completely control for their opinions based on others, but you can ask them to objectively approach this test from scratch. And you can ensure you take observed performance over voiced opinion, a key difference.

 

4. Participants just aren’t normal

Well, okay, they are - but their behaviour often isn’t, when in the room. Take a look at these known biases:

  1. The Hawthorne effect: people change their behaviours when they know others are watching. And of course cameras, mics and a huge one-way mirror won’t add to that at all. Participants may be more careful, go slower, read more, take more considered action than they would in the real world. Or, as we sometimes see, they will play up to the attention, act more rashly and with more bravado (Generation Z, I’m looking at you here).

  2. Social desirability: people tend to look for confirmation, and act in ways that the current social group would approve of. This can lead to participants playing down the issues they see (“oh that’s fine, I found it easily enough - it’s just me being a bit slow!”) or under-reporting their dislikes and grumbles. It can also lead to more positive scoring when subjectively scoring interfaces - for example, on a 1-5 scoring scale where 5 is best, we rarely see a site score less than 3, no matter how painful the experience.

  3. Task selection bias: people will try longer and use more likely success paths (even if they are slower and more painful) when being observed at a task. You may see people sticking at tasks far longer than they would in the real world - or using search and trawling through pages of results, to avoid failing in the navigation.

  4. The bandwagon: people tend to jump on the bandwagon. That means if the facilitator or the script start to talk positively or negatively about the test subject, they may well be biased by this and join in. Equally when running group research, they may begin to emulate the comments and behaviours of the others in the group, just to fit in.

That’s not an exhaustive list, but it gives you an idea of the problem we face.

The fix

At the risk of repeating myself, the solutions here are much the same as those mentioned earlier - independent and bias-free facilitator, script and a blind test. When asking questions, open and non-leading questions are best, for example “How do you feel about the level of information on this page?”

Avoid building a personal relationship with the participant - research has shown that you can build rapport from the simple act of asking someone to hold your cup of coffee for you - imagine how much rapport you can build up over an hour of chummy chat, if you’re not careful.

Scripts should sometimes be explicit with task paths, to avoid task selection bias - tell the participant that this time you are searching, and this time browsing. Use the paths that your research shows you the majority of people use. Mix them up, and have some open tasks where the participant can choose for themselves.

 

5. You’re leading not following

This is a common one, and I see it regularly when reviewing research. It’s so easy to slip into that I find myself falling foul of this one every now and then.

You see a problem, and you think you know what’s going wrong. So you start looking for it, and begin to subtly hint at or probe the issue with other participants.

For example, I once had a facilitator who would see someone struggle with a label or an icon - and would from that point on ask every other participant whether they understood it or not, whether it was completely clear to them. Lo and behold, many people did indeed agree with him that it wasn’t clear. He felt supremely justified for having spotted this problem and rooted it out into the light of day.

Only he didn’t, really. He was biasing the participants to focus on something they would not normally focus on, and look at it with a critical eye to see if they felt it could be improved. And since he was showing them that he felt it could be improved, they were happy to agree.

We can also fall foul of this easily, both in the script and in our language.

The fix

As I’ve pointed out before, the script should be non-leading. Some great examples of leading script questions I’ve seen include:

  • Which content on this page is most important to you?

  • What do you love most about this site?

  • What function would you remove from this site?

  • How hard was it to get to this point?

If you want to avoid priming the participant, then make them non-leading. For example:

  • Is there anything on this page that you particularly like or dislike?

  • Is there anything on this site that you particularly like or dislike?

  • Is there anything you would add or remove from this site?

  • How easy or hard was it to get to this point?

When I used to run training on facilitation I had a simple example of the Yes/No game. For those that don’t know it, the Yes/No game is where you have to talk for a period of time or answer questions without using the words yes or no. It’s incredibly hard, especially under stress (e.g. at high speed), as those words are integral to the language we use. Removing them is like trying to walk without bending your ankles - technically possible, but fairly tricky at speed.

Being a good facilitator is much the same. You have to remove words that will bias or lead the participant. The simple example I used was to imagine you were user testing Microsoft Word. If you asked someone how they would save a file, you would have led them in three key ways:

  1. You explained that there is a system concept of keeping information for later

  2. You told them that the concept might be called Save

  3. You told them that the thing they were saving might be called a File.

That’s a pretty big leading statement. So, you have to remove the words that lead, and change it to something like “If you wanted to come back to this information later, but had to turn off the PC now, is there a way we can do that?”

Unwieldy, but far less leading. Just like the Yes/No game, a good facilitator has to constantly parse their language to ensure they aren’t leading the participant.

 

A fully non-biased test is not possible - even with an impartial machine-based facilitator and a science-based, peer-reviewed script, you’d still have to contend with the biases and assumptions made by the human who used the system.

But we can reduce bias in our research. We’re not aiming for perfect.

Better is good enough for today.

Excuse me sir - can I probe your brain?

We are at an interesting cross-roads in terms of the rights of the customer / consumer / user and the ability of a designer or engineer to assist or even shape their path - and their mind with it.

True, we've been manipulating people's brains for centuries now, with everything from drugs (think caffeine and marijuana) to meditation, totem poles to Internet kittens. And then along came marketing, and not long after arrived the UX/CX world. Sometimes we are bent on delivering what's needed, wanted or desired, an altruistic goal. But sometimes we are attempting to persuade, cajole or influence - e-commerce and political UX in particular but in many other areas too.

So, my question is simple - when is it okay to mess with someone's brain? I'll break that into three questions: should we, can we, and (if we can) how do we do it ethically?

 

1. Should we mess with a customer's brain?

When it comes to bodies, we're pretty much in the clear. We all know it's not okay to touch someone without their permission. We all know (or should know) that we don't have the right to invade someone's personal space or to physically redirect someone trying to walk a certain path. The laws are there, the cultural understanding is there - for most except the odd president, that is. But when it comes to the brain, not so much.

On one hand this can be desirable. Take a simple example: designing an interface for a self-driving car. Now it's quite possible that the driver will be concerned that the car is driving itself - if you are anything like me, you'll know the feeling; I'm often stamping on the missing brake pedal when I'm a passenger in the front, and that's with a human driving. Remove the human and I might need a herbal relaxant (or a change of underwear, but then again I am getting older).

Knowing that, it makes sense to design the interface to put the passenger at ease, to change their mental state from anxious to calm or even blissful. That would seem to be a desirable outcome.

But it would also impinge on the mental integrity of the driver, and would be altering their mental state without permission - unless we asked first, of course. Is that okay?

This week I was reading a New Scientist article posing the question, 'do brains need rights?' The article points out that we are increasingly able to observe, collect and even influence what is happening within the brain, and asks whether those abilities to invade should be addressed with rights. Bioethicists Marcello Ienca and Roberto Andorno have proposed that we should. More on that later.

So, if we should be giving our brains rights, what could those rights be?

Returning to the story from New Scientist, Ienca and Andorno posit that a simple set of four rights could cover it:

  1. The right to alter your own mental state
  2. The right to mental privacy
  3. The right to mental integrity
  4. The right to forbid involuntary alteration of mental state/personality

Not bad as a start point. In the UX world, we are probably going to find it simple to honour ethical boundaries (or laws) one and two. Good researchers always ask permission, and wouldn't invade the privacy of a participant without informing them first, and what participants get up to in their own time is up to them.  

But when you reach points three and four, I can already see us crossing the line.

If designs / products / interfaces are meant to persuade then we are beginning to impinge on a person's right to avoid involuntary alteration of their mental state. But is that okay?

 

2. Can we mess with a customer's brain?

In short - yes. Yes we can.

For a start we've been messing with people's brains ever since Ogg hid behind a bush and made T-rex noises whilst Trogg tried to pee. We've used advertising and marketing for hundreds of years now, and we're getting better and better at the ever-increasing arms race that is the consumer market. Parents know that a good portion of parenting is the art of messing with your kids and pulling the psychological strings to a sufficient degree that they turn out to be reasonable and competent human beings, and not serial killers - before they turn the tables on you somewhere in the teens.

But the arms race is getting out of hand. The weapons of choice are becoming increasingly sophisticated beyond the simple cut and thrust of the media world. Just a couple of simple examples:

  • Facial recognition software can already interpret emotional states and feed them to those who are observing - whether the person wants you to know their emotional state or not.
  • fMRIs can already reconstruct movies and images in your head that you've already seen (by comparing them against known images), and will soon be able to construct images that weren't known in advance - for example, being able to visualise your dreams.
  • Neuroscientists are able to study your decision making process and some research has shown they can spot your decision in your brain up to 7 seconds before you even understand you have made a decision yourself. 
  • Neuromarketing is a thing: using fMRI technology to measure direct brain responses to marketing, and using that to make us buy more.

On top of this we have personalised content delivery that is absolutely shaping the way we think and act. We'll be writing an article shortly on bias which discusses this in more detail, but you only have to look at Facebook's filtering and any of the articles discussing it to see how much the world can be shaped - take a look around the world today and judge for yourself how much Facebook might have to answer for.

So how long will it be before we're able to understand exactly what someone wants, or is thinking about, and then respond to that? At what point will we be able to amend their wants, whether they want them amended or not?

The answer is going to be 'soon'. Very soon.

 

3. If we can brain-tinker - then is that a problem?

Let me answer that question by asking another. Would you be happy if you found out that your local water company was slipping anti-depressants into the water supply?

Would you be happy if you found out that your car radio was filtering the songs it played and the news it streamed to make you feel 'warmer and more positively' towards the car manufacturer?

Would you accept a new iPhone that removed contacts automatically if you argued with them more than twice in a week? 

All of these are choices that are designed to fix problems we face (depression, brand loyalty and stress levels), but I would hazard a guess that the average person wouldn't be happy with any of these decisions, especially if they were unaware of them.

So what's the solution?

I'm personally recommending a set of rules to abide by. And I'm taking a leaf out of Isaac Asimov's book, when he coined the Three Laws of Robotics, and cross-breeding them with the Hippocratic Oath. My laws would look something like this:

  1. Inform before researching.
    Ensure that the user is clearly informed before information about their emotional state, cognitive state or personality is researched for further use.

  2. Approve before changing.
    Ensure that tacit approval is provided before attempts to change those states are made.

  3. Do no harm.
    Ensure that all such changes are designed to improve the health-state of the (fully informed) user or work to their benefit in some way.

 

We aren't doctors but we are changing people's lives. We are designing medical systems that can save lives or cost them. We are designing interfaces that will control airplanes, trains, cars, trucks. We are designing apps that can save a life or convince someone not to end one. And we need to be responsible when it comes to how we research, design and test those products into the future.

If that means abiding by a simple set of ethical laws, then that works for us all.

How many users does it take to screw up a project?

Years ago I read a very interesting science fiction story from the great Isaac Asimov, called Franchise. 

The story was set in a future where computers were increasingly able to predict human behaviour - and voting behaviour in particular - so much so that the computer was able to reduce the sample size of the voting public further and further, to the point where it could identify a single voter who supposedly represented everyone. That poor slob was forced to vote, and he was held responsible for what happened from there. The story was inspired by the UNIVAC computer’s ability to predict the result of the 1952 US presidential election.

Recently I was thinking about that story and the relevance it has to user research sample sizes. We UXers, like UNIVAC, aim to deduce from a few what relates to the masses.

Since the early days of UX research there has been a tension between the classic market research/consumer research fields (where sample sizes are huge and statistically relevant) and the UX world (where sample sizes are often far smaller).  The logic was simple; since people often think in similar ways, a few people will encounter most of the problems that a far larger number of people will see.

So - how many is enough?

In user testing and research, there has been an increasing trend towards smaller sample sizes. In the ’90s most user testing involved 10-12 people at a minimum, and often up to 20. Since then the average has fallen to just 5 or 6 people per test.

But how slim can the numbers be without losing quality? Will we end up, like UNIVAC, identifying a single user who is forced at gunpoint to identify all the problems in the system?

It’s quite possible we’ve already hit a risky level of sample size coverage. We may already be taking more risks than we're aware of.

 

So what is sample size?

Sample size is a critical element in any scientific research. It is the number of objects, people, animals or plants you are examining in order to test the outcome of some variable - in this case, the usability level of an interface, product or thing.  

Too many subjects and you’re wasting time and money - whilst that’s not a problem from the science perspective, in the business world that’s no small matter. Too few and your sample will most likely not uncover all of the issues there are to see. We all know that you might not uncover everything, but how big can that gap get - are you seeing 95% of the issues, or just 25%? 

Let’s take a look at the research.

For years, the five-user assumption has been debated in usability. We’ve heard that the law of diminishing returns means that a user test involving five-to-eight users will reveal around 80% of the usability problems in a test (Nielsen, 1993; Virzi, 1992). That’s an assumption that many UX companies and researchers around the world have used as a start point.
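
To make the diminishing-returns logic concrete, here's a minimal sketch of the model usually quoted alongside that assumption: the proportion of problems found is 1 - (1 - p)^n, where p is the average chance that a single user hits any given problem. The value of p below (0.31) is an illustrative assumption commonly quoted in the literature, not a figure taken from the studies discussed here.

  # Diminishing returns in problem discovery: proportion found = 1 - (1 - p)**n,
  # where p is the average probability that one user encounters a given problem.
  def proportion_found(n_users, p=0.31):
      """Expected share of usability problems uncovered by n_users."""
      return 1 - (1 - p) ** n_users

  for n in (1, 3, 5, 8, 10, 15, 20):
      print(f"{n:2d} users -> ~{proportion_found(n):.0%} of problems")

With p at 0.31, five users land at roughly 84% and eight at roughly 95% - broadly in line with the 'five-to-eight' claim above.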

The first problem with that ‘five-to-eight’ line is the wriggle room it leaves. Fitting eight user tests into a day (at one hour each) is quite difficult, so many research companies started aiming for seven, or six. Over time, that fell to six or just five. Now, it’s commonplace to see five users tested in a research day, with the assumption that we’re still getting the same coverage we would have seen at eight people. For practical and financial reasons, we took short cuts.

The main problem with this assumption remains - there is variability in human emotional reactions, associations and thought processes. Whilst a smaller number can be representative of the wider audience, there is a lower limit here that we are ignoring somewhat.

Spool and Schroeder (2001) aimed to test the reliability of this five-user rule. Using 49 participants, they performed usability tests requiring each participant to visit an online store and purchase a product. On two of the websites used, the first five users only found 35% of the total problems presented. This lack of coverage was found again in a study by Perfetti and Landesman (2002), who performed tests with 18 participants and found that each of participants 6 through 18 uncovered at least five issues that were not uncovered by the first five participants. While the validity and severity of the additional issues uncovered may vary, this hints strongly at the need for a larger sample size.

A study by Faulkner (2003) aimed to test this further, by having 60 participants perform a usability test (after being categorised by their skill level). Afterwards a program drew random samples of user data, in sample sizes of 5, 10, 20, 30, 40, 50 and 60. Perhaps not unexpectedly, the results showed high variation when only 5 people were involved, with the proportion of issues uncovered ranging from 55% to nearly 100% - and this variation decreased as more users were added. In fact, none of the samples of size 20 found less than 95% of the problems.
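
To illustrate the kind of resampling Faulkner used, here's a rough sketch. The participant data below is randomly generated for illustration only - it is not Faulkner's dataset - but it shows the same pattern: small samples vary a lot in how much of the total problem set they uncover, and that variation shrinks as the sample grows.

  import random

  def coverage_range(pool, sample_size, draws=1000):
      """Min, mean and max share of all known problems found across random samples."""
      all_problems = set().union(*pool)
      shares = []
      for _ in range(draws):
          sample = random.sample(pool, sample_size)
          found = set().union(*sample)
          shares.append(len(found) / len(all_problems))
      return min(shares), sum(shares) / len(shares), max(shares)

  random.seed(1)
  # 60 imaginary participants; each hits any given one of 40 problems with probability 0.3
  pool = [{p for p in range(40) if random.random() < 0.3} for _ in range(60)]
  for size in (5, 10, 20, 30, 60):
      low, mean, high = coverage_range(pool, size)
      print(f"n={size:2d}: min {low:.0%}  mean {mean:.0%}  max {high:.0%}")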

So all of this research indicates that sample size is a risk game, something we always knew.

In theory, a sample size of five may uncover 100% of the issues to be found - but as the graph below shows, your chances of hitting those five people are pretty small.

The image above shows the variance in sample sizes, with each dot representing a sample group, as defined by the label below the graph. What this shows is quite interesting; I've tried to summarise it into three key points:

  1. If you test with 5 people, then you are most likely to find around 85% of the issues - but you may find as little as 55% of them.
  2. If you test with 10 people, then you are most likely to find around 95% of the issues - but you may find just 82% of them.
  3. If you test with 15-20 people, you will most likely find around 90-97% of the issues - although you're now spending up to twice as much to find those last few percentage points.

 

So how many should you test?

As always, life is about balance. Five people (or six) can be tested in a single day, whereas ten or twelve needs two days of research. If you are engaging with a research company like us then that means almost twice the cost. 

But it also means the potential minimum number of issues found increases from 55% to 82%.

So how many to test is, as always, a factor of how much risk you can take, how much you can spend on the research, and how often you are testing.

But just to be safe - start with ten.

The Dark matter of UX

The universe, it turns out, is far stranger than we first thought.

Thanks to Einstein, decades of study and a lot of people so smart I look like an amoeba by comparison, we know that the stars, planets, moons and every bit of matter that we see (otherwise known as baryonic matter) make up around 4% of everything in the universe. Just 4%.

Bizarre, no? One day we think we know what the universe is made of, the next we realise we're only seeing tiny pieces of the puzzle - dark matter and dark energy make up the whopping 96% that's missing - and we know almost nothing about them.

It's bizarre to think of it but what we see is a tiny slice of the pie - everything else is hidden from us.

UX knowledge can sometimes be very similar.

We used to think that UX knowledge fell into a few core buckets: Needs, desires, requirements, issues, emotions. We knew that there was more to know but we mostly believed we were closing in on a full map of what was important. Various models flew about, all purporting to be full and complete models of UX knowledge. I can distinctly remember a conversation with a UXer who had decided to leave the field, because in his words "We know it all now - what's to learn? Why would anyone pay for a consultant five years from now?". That was in 1997. Oh, how the world has changed since then; mobile-first was an instruction to cot-building new parents, the iPad was a sci-fi device we never thought we'd see, only Dick Tracy had a smart watch, Google was a baby startup and we all bought our music in physical music stores. 

In 2017 we have a healthy global UX field with tools and methodologies and an ever increasing army of advocates and experts. In 1997 user testing was only for leading-edge projects; today it's a routine step. Back then we had our opinions of what made a good site and a few insights; today we have standardised scorecards, rating tools and expert review templates.

But just as with dark matter, we miss so much.

In the universe, the suns and planets are fireflies against a darkened sky, 4% fragments of the whole, supported by an invisible network we know little about.

In UX research we have the same; we ask questions and we receive an audible response with a single data point - but it's supported by a vastly larger network of thoughts, perceptions, misconceptions and processing that we never see or know anything about. When we observe users at work we can see issues or behaviour, we have visibility on single data points but again they are supported by the same invisible 'dark knowledge' of decisional processing.

We know that dark knowledge exists. We know it makes up a vast chunk of what's going on - and just like dark matter and dark energy, we may never get a full and complete understanding of what it is and how it works. 

I see this every day. People say and do things that seem, on our level of understanding, to make little or no sense. People will say one thing and then do something different. I see people confidently talk of how they prefer to search for everything, and then choose to navigate instead. I see people spot a control and talk about it, before they completely fail to see it when they need to use it. I hear people make statements that seem to be completely at odds with the way they act within the interface.

They aren't crazy, as it can be tempting to think. They aren't possessed and they aren't connected to an alternative universe. They are simply acting logically and consistently with the web of dark knowledge within them.

Knowing how dark matter and dark energy work, what they are and how they interrelate with baryonic matter is going to be the key for us to understand how the universe works. Once we know how the pieces fit together we'll be able to predict the movement of elements within it and understand our own future.

In much the same way, understanding the dark knowledge of UX will help us to predict decisions and actions and understand the future of any design or interface.

With dark matter we're on the case; China's DAMPE satellite and the upcoming Euclid spacecraft project will both help us look further and understand more. One day we will understand the complete picture, even if we have to reshape our thinking to do it. 

When it comes to UX knowledge, we are reliant on models of understanding of how the brain works. These help us to understand how we make decisions and how we react to inputs - but they don't help us to understand the complex context of decisions made whilst the user interacts with any set environment. We can understand how someone reacts in general to a problem, but we can't yet see how they react to this specific problem in this specific interface.

One day, perhaps we will. Until then, I think we should remember that everything we do, touch and work on as UXers is just a firefly against the darkened sky, a 4% fragment of what we'll one day know.

 

It's not U, it's me

When you're researching UX and improving design, there are two completely different types of finding. Both are useful, but sometimes the line gets blurred and this can lead to issues.

It's all about me

First of all, there's opinion.

As a UX specialist I will often look at a design and have an opinion on what's working and what is not. I can perform an expert review which identifies the positives and negatives, based on thousands of user tests and a solid understanding of good design practice. So my opinion is certainly valid - not always correct, but valid.

Expert opinion is, often, a guess. It's an educated guess, hopefully, and it can be based on years of experience and insight into behaviour. But it's still a prediction of what might go wrong.

It's all about U

Then there's the U in UX - the users.

When I observe user testing or run user research, I'll hear things and observe behaviour. This is observed/learned research, and is quite different from my opinion as a UX specialist. I might see someone struggle to achieve a goal, or might hear them state that something was confusing. This knowledge is coming from the research, not from my personal opinion as to whether something works. I may have to interpret why they are going wrong, but the fact that they are going wrong is just that - a fact.

Observed findings are real. They are tangible, measurable, scientific measurements of what happens when people interact with technology. Like any scientific findings they should be based on research that is repeatable, measurable and unbiased.

Getting mixed messages

Obvious? Sure. Both have their merits and their uses. Most projects will employ both forms of research.

But you'd be surprised how often this line is crossed.

Not so long ago I was invited to observe some user testing - which is absolutely about observed findings. Good user testing should have at its core the three tenets (repeatable, measurable, unbiased), which this test did. An unbiased test script took independently recruited participants through a realistic and repeatable set of tasks on a fairly balanced prototype. The facilitator was an unbiased professional who wasn't invested in the design. Tests were recorded and findings were analysed after each session.

But during the tests, both the client and the UX resource recording findings would comment, as each design appeared, on what was wrong with it and what needed to improve. One of them would point out a flaw, the other would agree it needed to change, and the UX resource would add that into the mix of issues found.

Once the testing was complete all of these findings were added to a sheet and then prioritised for solutions and changes.

Only, by this point it was almost impossible to identify where any one issue had come from. And herein lies the problem; personal opinion on what was wrong with the design was taking up equal space in the list of priorities with real issues that were observed to be a problem.

And now it's time to break up

Still not sure why that's an issue? Let me give you an example.

A few years back I worked with a client who wanted to improve an online web service, something with some pretty complex functionality. We were short on funds so user testing wasn't an option, but they had budget for an expert review (my professional opinion and best-practice comparison), followed by their own ad hoc user testing with friends and family.

Once this was completed I was able to point them in the direction of a few changes that would deliver some great benefits to the design. And remember, this was based on my expert opinion but also on their own ad hoc testing, which had confirmed our concerns.

They were okay with this - but the product owner really wanted to improve one particular part of the design that he hated. He felt it was old fashioned and slow, and had seen some pretty cool new ways for it to work. Crucially, this area hadn't been highlighted as an issue during the expert review, or during their ad hoc testing. I pointed this out, but could see this was still a priority for the owner.

I'd delivered my work so I stepped out. Several months later I bumped into the client, and we chatted about progress. At this stage he told me he was pretty disappointed, though not with my work or the UX stage overall. As you probably guessed he'd decided to prioritise that area that he didn't like and he'd reworked it at great expense and over quite some time. He'd tweaked some of the other issues we'd found, but not many - since he'd focused on his own opinion and the area he most wanted to fix, convinced it was going to add real value.

Only it hadn't, of course. Despite the effort poured into it, the new design effort slipped into a sea of indifference and sank without a trace, without a single happy customer delivered.

At the end of the day he'd been unable to separate the opinion from the fact and he'd paid the price for it - money down the drain and no positive impact on the product or his customers.

Staying afloat and healthy

As I said at the start, both forms of finding are valid and both are valuable. Expert opinion is faster and cheaper than direct UX research. It can lead design (rather than review what's there) and it's more freely available in many cases - though it's still just a prediction. UX research is more accurate as it's real rather than predictive, though it is more costly and slower and is often less useful when trying to lead rather than review (think of the iPod). 

Horses for courses. But the key is to know which one you're working with. 

When you're running UX research, make sure that the findings are observed and not opinion-based. I try to avoid carrying expectations and bias into the room when observing user tests. I may have my opinion as to whether the design will work well, but I've been surprised on many occasions, positively and negatively, so I've learned to leave those opinions outside the room. And try to avoid having others in the room give their opinion too much, as that can easily begin to sway you towards focusing on/adding in their opinion as a finding.

As with most important topics in the world today, it's important to separate opinion from fact, and fact from 'alternative fact'. Otherwise the relationship - and your product - can quickly end up on the rocks.

Play becomes accessible - finally


Over the past few years I've had the pleasure of working on a number of projects relating to people with needs and those who help them - children, people with a disability, older people needing care and those suffering illness. They are some of the best people you'll meet. I've been involved in projects to provide software that makes it easier for people needing care to find the carers they need. I've helped providers to build more usable websites to bring services closer to those who need them. I've listened to awesome people such as Northcott Innovation and their project to bring a stair-climbing wheelchair to market. I've helped the NDIS to research their market and define the shape of their upcoming services.

So when I read this week about Adaptoys (adaptoys.org), supported by the Christopher & Dana Reeve Foundation, I was really intrigued.

I've lived with a daughter being temporarily paralysed (fortunately only for a few weeks), and I've had the pleasure of interviewing several people who have learned to live with full paralysis. I can't begin to imagine the challenges that brings, but I can imagine the pain of not being able to play with your own children. With five kids in the family, I know how hard it would be to watch your kids play but never be able to join in. That's the problem that Adaptoys has set out to solve.

 Image courtesy of adaptoys.org

Their approach so far includes a remote-control car driven by a breathing straw. Move the straw left and right and the car shifts direction; breathe out and it accelerates, breathe in and it slows, then reverses.
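Adaptoys haven't published how their controller works, but if you're curious what a sip-and-puff mapping like that might look like in code, here's a minimal sketch. The sensor values, ranges and thresholds are all my own assumptions for illustration, not their implementation.

```typescript
// Hypothetical sketch of a sip-and-puff control mapping, in the spirit of the
// Adaptoys car described above. All names and numbers here are assumptions.

interface ControlInput {
  pressure: number;   // breath pressure: negative = sip (breathe in), positive = puff (breathe out)
  strawAngle: number; // straw deflection in degrees: negative = left, positive = right
}

interface CarCommand {
  throttle: number;   // -1 (full reverse) .. 1 (full forward)
  steering: number;   // -1 (full left) .. 1 (full right)
}

const DEAD_ZONE = 0.05; // ignore tiny pressure changes from normal breathing

function mapInputToCommand(input: ControlInput): CarCommand {
  // Puffing accelerates, sipping slows then reverses. A real controller would
  // brake to a stop before switching into reverse; this sketch skips that state.
  const throttle =
    Math.abs(input.pressure) < DEAD_ZONE
      ? 0
      : Math.max(-1, Math.min(1, input.pressure));

  // Straw deflection maps directly onto steering, clamped to a usable range.
  const steering = Math.max(-1, Math.min(1, input.strawAngle / 45));

  return { throttle, steering };
}

// Example: a gentle puff with the straw nudged to the left
console.log(mapInputToCommand({ pressure: 0.4, strawAngle: -15 }));
// -> { throttle: 0.4, steering: -0.333... }
```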

Another prototype uses a simple voice-control unit to drive a baseball pitching machine for kids.

In their own words:

“Technology has been such a powerful force for individuals with disabilities. However, there is a void when it comes to technology and accessible toys,” said Peter Wilderotter, President and CEO, Christopher & Dana Reeve Foundation. “Adaptoys will help eliminate inequality by reimagining playtime for parents, grandparents, siblings, uncles or aunts who are living with paralysis. We are excited to partner with 360i on this innovative campaign to ignite a global conversation and share these life-changing toys with more families.”

To extend this idea, the Reeve Foundation and its partners in this initiative, have launched a crowdfunding campaign at Adaptoys.org to raise funds to support the research, development and cover production costs for at least 100 adapted remote control cars, which will be distributed to individuals living with paralysis through a random lottery selection.

“As a grandmother, you dream about playing with your grandchildren. But for people living with disabilities, playtime can be isolating and inaccessible. My granddaughter lit up when I was able to race cars with her,” said Donna Lowich. “Adaptoys will allow me to be part of her childhood in a more meaningful way and my only hope is that we can bring these accessible toys to many more families. Everyone deserves to play with their loved ones.”

I got such huge pleasure from reading about this project, and imagining how much further it can go. I made a small donation from the Fore, and I'd encourage you all to do the same. Play is key to us as human beings, to both our mental and physical health. Let's give play back to some people who need it, and deserve it.

Oh, and just to bring this well and truly back into the realm of UX, I hit the biggest problem of all when attempting to make a donation - the site wouldn't let me. When I got to the credit card details there was a huge bug that blocked data entry in the card's expiry month and year fields, stopping any attempt at payment. The bug appears to affect Chrome only at this stage; I got round it using Firefox. I've let them know, but just be warned.
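I have no visibility into the actual code on the donation page, so the following is purely a hypothetical sketch of the kind of over-eager keystroke filter that can produce exactly this symptom - typing silently blocked in one browser or on one keyboard, fine elsewhere. The field id and handlers are invented for illustration.

```typescript
// Hypothetical sketch only - not the adaptoys.org code. An expiry field with a
// keydown filter that relies on deprecated key codes can silently swallow
// keystrokes for some browser/keyboard combinations.
const expiryField = document.querySelector<HTMLInputElement>('#card-expiry');

expiryField?.addEventListener('keydown', (e: KeyboardEvent) => {
  // Intended to allow digits only, but keyCode is deprecated and numpad digits
  // report 96-105 rather than 48-57 - so legitimate input (and Backspace, which
  // is keyCode 8) gets blocked.
  if (e.keyCode < 48 || e.keyCode > 57) {
    e.preventDefault(); // keystroke silently discarded
  }
});

// A more robust approach cleans the value after input instead of trying to
// second-guess individual keystrokes:
expiryField?.addEventListener('input', () => {
  if (expiryField) {
    expiryField.value = expiryField.value.replace(/[^0-9/]/g, '');
  }
});
```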

And just this once, I'm being forgiving of the poor experience!


We'll follow a robot - to our doom

This article on New Scientist makes interesting reading - in a recent experiment at the Georgia Institute of Technology, 26 out of 30 students chose to follow an 'Emergency Guide Robot' towards an obscure doorway in a simulated emergency, rather than take the clearly marked emergency exit. That's roughly 87% of the participants, potentially led to their doom in a burning building. It's a sobering thought.

But it's not unique.

Whilst the scientists in this case are trying to get to the bottom of how we build and extend trust in robots, cases of people trusting their technology all the way to disaster are relatively common. This article lists 8 drivers who blindly followed their GPS into trouble, from driving into a park to (almost) driving over a cliff, driving into the sea, and driving for hours across international borders to reach the local train station.

Airplane crashes frequently involve pilots trusting what their instruments tell them over what they can plainly see - such as the case reported here. Years ago, when I worked for the military in the UK, there were reports of fighter aircraft crashing in the Gulf because their 'ground-hugging' radars would sometimes fail to read soft sand as ground and would bury the plane in the desert floor. I heard numerous other horror stories of military tech gone wrong, and how it killed or almost killed the people it was supposed to protect - including one very 'not-smart' torpedo that blew the front off the vessel that fired it.

And all this comes to mind this week, as I'm involved in designing an app to help people avoid and survive bush fires. 

Software that tells someone to turn left instead of right in an emergency is no different from a robot that leads someone away from a fire in a simulated experiment. And if it leads someone into the fire rather than away from it, the effect is exactly the same. We are rapidly moving away from the age where software simply provided information; it now makes decisions for us - and that means we have a responsibility to ensure those decisions are right.

As designers, coders, product managers and UXers, we are involved in designing software that can increasingly be thought of as an agent. Those agents go out into the world, and - hopefully - people trust them. 

And if our agents lead people to their doom - that's on us.

I've always advocated for a code of ethics around the software, robotics, IoT and UX fields, never more so than now.

Great customer service - Accor Hotels

 "I have to do  what  to talk to you..?"

"I have to do what to talk to you..?"

It's unfortunate that what often drives me to write is the bad experience; good design (and customer experience) is relatively invisible, while bad experiences leap out.

So it is today for Accor Hotels.

A brief background. I've been a member of their rewards program for a couple of years now. You pay an amount to join each year, and in return you get one free night's stay to use at any point during the year. Last year I was a bit busy and didn't use it. With Christmas approaching, I'd planned a night away with my wife somewhere in mid-December.

Unfortunately my father-in-law fell ill early in December and was extremely unwell in hospital. As the month progressed he seemed to get better, but then passed away. In the middle of all this I received a call from the Accor Hotels rewards program to talk about renewing.

The customer service rep who called me was obviously understanding, and quickly hung up after offering condolences. No problem.

However, I'm now trying to sort out the renewal, and I have a question. Had I renewed instantly, my free night could have been carried over into this year; since my membership lapsed for a few weeks, it is technically lost. But given the circumstances I just wanted to ask someone there if there was any chance they could roll it over anyway.

And that's where the awesome user experience kicks in.

And it's the classic: an organisation that wants to reduce the load on customer support, and in doing so makes it near impossible to actually contact them.

Logging into my account, I found a new design almost entirely focussed on sales. Offers, options and booking links were everywhere. But there were links to manage my account, so I started there. Unfortunately, to no avail - my account listed my personal details and gave me options to sign up for different cards, but had nothing to help.

And there is no phone number anywhere that I can see.

Okay, by this point I'm getting a little frustrated, but surely there's contact information in there somewhere.

So I head to the Support space - which is dominated by my favourite content, FAQs. I can search the FAQs, I can click into categories of FAQs, I can scroll through FAQs as much as I want. Great.

Having completely failed to find the FAQ titled "A family member just died and I missed my payment - can I roll over my rewards night anyway?", I did find a Contact us link at the bottom of the page. This offered me a list of choices - a list of exactly one: Contact us by Email. Great list. I would much rather talk to someone, but since email is the only show in town, fine.

This option asks me to select a main reason to contact them. I select the rewards program. It then offers me 11 sub-reasons to choose from, none of which have any relevance to me, and none of which relate to membership - which is bizarre given that I've just selected the rewards program. But since one of them is 'Other', I try that.

This now asks me to enter my membership number (my personal details are thankfully defaulted in) and to write my message, which I now do - foolishly thinking I've hit the jackpot.

I explain the reason for my contact and - having learned from past experiences like this - copy the text, in case the form fails. I hit the button - and the page just scrolls up slightly. What???

I hit the button again, and again the page scrolls up slightly. 

I then proceed to try various combinations of information, just in case I've hit some bizarre error. Is there a mandatory field I've missed? Nope, everything's filled out. Is my text maybe too long? Nope, shorter text doesn't work either. I try various other contact reasons, I try different subjects - but every time, I get a small scroll instead of a form submission.
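I never did find out what was broken under the hood, but that small scroll-instead-of-submit is the classic symptom of a 'button' that's really an anchor whose JavaScript handler never attached or threw an error. A purely hypothetical sketch - the markup and ids here are mine, not Accor's:

```typescript
// Hypothetical sketch only - not Accor's code. Markup like this is common:
//
//   <a href="#" id="submit-contact" class="button">Send</a>
//
// If the click handler below never attaches (script error, wrong id, a race
// with the framework bootstrapping), clicking just follows href="#" - the page
// scrolls a little and nothing is submitted. Exactly the symptom above.
const submitLink = document.getElementById('submit-contact');

submitLink?.addEventListener('click', (e: Event) => {
  e.preventDefault(); // stop the default jump-to-anchor scroll
  const form = document.querySelector<HTMLFormElement>('#contact-form');
  form?.submit(); // hand the data to the server
});

// A plain <button type="submit"> inside the form avoids this failure mode
// entirely: with no JavaScript at all, the browser still submits the form.
```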

By this stage I'm seriously peeved, and wondering why the hell I'm chasing these guys to give them my membership money and my business. In the end I have to settle for a shot across the Twitter bows, as their Twitter account is the only contact channel I can find other than Facebook.

Let me be very clear: I am not complaining about losing the hotel night. That was my bad, and if Accor had told me they couldn't keep it I would have continued this year anyway; I just wanted to ask before I signed. It's not the night that counts - it's the fact that they made it practically impossible to ask them about it.


If you want to upset your customers, ignore them. Make them jump through hoops, needlessly and repeatedly, just to ask you basic questions; make them sweat if they want to buy from you. It's a winning strategy, Accor Hotels, and I wish you every bit of luck with it.