Mr Robot to the rescue

Robear - image courtesy of JBinfo.com

It won’t be long until there are more people who need help than people who can offer it.

The world population is ageing, and our continuing development of new health advances is amplifying the trend. We’re even starting to unravel the mysteries of ageing itself, which may push the population average older still.

As the mirror tells me every morning - we’re gettin’ old and cranky.

All of which means we’re going to be leaning on robots more and more to provide the care we’ll need. Robear, pictured above, is one example of what might soon be possible: a robot designed to gently lift patients between beds and chairs, so that nurses and support staff don’t damage their backs or hurt the patient. A great idea.

But there are pretty big hills to climb before we arrive at the utopian future where a skilled robotic carer does all the work we need to stay fit and healthy. We’ve broken some of those challenges down into the core areas we see as yet unsolved (though smart people around the world are working on them right now).



1. Don’t sue

We live in a world where hot water spigots need warnings that they produce ‘hot water’ and carry a ‘risk of burning’. Child strollers carry warnings to remove the child before folding. Packets of nuts have to warn that they contain - well, you get the idea.

Mostly, this is because we live in a world where people will sue at the drop of a hat. And so manufacturers design safety limits that exclude any risk of litigation, and warnings beyond count to make sure we know not to eat the colourful beads instead of wearing them.

Imagine just how much this may impact the design of a robot in a health or aged-care environment.

Conflicts occur every day in such environments: choices that trade safety against efficacy, against privacy, and against a dozen other factors.

One example of this was told to me by a carer working with dementia patients. She worked with a woman who had advanced Alzheimer’s, whom she first encountered in a confused state in a hallway, asking what time her husband was coming to visit. The carer went to find out, and discovered the husband had passed away several years earlier. She was told to tell the woman the truth, as it was deemed unethical to lie and too risky in terms of litigation from her children.

The carer confided in me that she did this for several days - and helped the weeping woman cope with what was, to her, the tragic and fresh news of her husband’s death. But since the woman’s short-term memory was no longer working, the next day she’d be looking for her husband again, and the cycle would repeat. After several days of providing the shattering news, the carer decided to lie and tell her the husband was coming tomorrow, not today. The patient was then quite happy to move on and take part in other activities, without the heart-breaking news.

There are extremely complex ethical questions tied up in this example, but it raises an important issue for AI and robotics. Individual human carers and medical staff face real-world scenarios every day in which they are forced to make decisions, balancing their own safety (physical, emotional, financial and legal) against that of the patient and their family. As humans, they can decide when a greater good outweighs a smaller risk of litigation.

As a designer of AI or robotic systems you’re not going to be able to build that flexibility in without facing potentially huge consequences down the road. A designer who lets the robot lie to a patient to spare that patient emotional trauma may be sued out of existence at some point. And that means legal protections may well trump the best levels of care.

Or, we’re going to have to wrestle with this issue and find a way to build ‘wriggle room’ into robots and systems that care for us, whilst protecting the makers who deliver them.

2. Know thyself

What we know about ourselves is often not true.

Many people, especially those who’ve felt the icy tentacles of depression or anxiety, will know how their picture of themselves can be the inverse of what others see. We see flaws where others see a confident human being.

The same is true for illness, where emotion often overrules memory and rewrites history at will. Carers tell stories of patients who confidently provide completely incorrect information - not from any intent to mislead, but simply from being a poor judge of their own condition.

So when that robot asks us how we’re feeling, or how bad the pain was, we’re unlikely to be providing clear, concise and accurate information.

This is something the health industry understands quite well, and works around. Perception is not truth, as the placebo effect clearly shows - what we think is true can often alter what is true, rather than simply mirroring it. And just as often the two have as little in common as a bear and a bullfrog.

So robots are going to have to come to terms with the fact that they are reliant on our self-reporting - and that this self-reporting can be as untrustworthy as the average politician.

Robot and Frank - image courtesy of theverge.com

3. Lies, damned lies, and patients

In the recent sci-fi action movie Upgrade, a quadriplegic patient attempts suicide by telling an injecting machine that there was a failure and his dose of drugs wasn’t properly delivered. The machine responds by injecting a second time, then a third and a fourth, and so on.
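The failure mode here is a classic one in safety engineering: a single self-reported signal driving an unbounded action. Here’s a minimal sketch, in Python, of the kind of hard interlock that could prevent it - every name, cap and time window below is hypothetical, invented purely for illustration:

```python
import time

MAX_DOSES_PER_WINDOW = 1        # hypothetical hard cap
WINDOW_SECONDS = 4 * 60 * 60    # e.g. at most one dose per four hours


class DoseInterlock:
    """Never lets a self-reported 'failure' trigger an unbounded re-dose."""

    def __init__(self, max_doses=MAX_DOSES_PER_WINDOW, window=WINDOW_SECONDS):
        self.max_doses = max_doses
        self.window = window
        self.dose_log = []  # timestamps of doses actually dispensed

    def request_dose(self, patient_reports_failure=False):
        now = time.monotonic()
        recent = [t for t in self.dose_log if now - t < self.window]
        if len(recent) >= self.max_doses:
            # Deliberately, the patient's claim that the last dose
            # failed does not change this branch - the cap holds and
            # the request escalates to a human instead.
            return "refused: dose cap reached, alerting staff"
        self.dose_log.append(now)
        return "dispensed"


machine = DoseInterlock()
print(machine.request_dose())                              # dispensed
print(machine.request_dose(patient_reports_failure=True))  # refused
```

The particular numbers don’t matter. What matters is that the machine’s trust in what the patient tells it is bounded by something the patient can’t talk it out of.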

Whilst this is a movie scenario, it’s an interesting reminder that, as humans, we are notoriously deceptive. We lie all the time, to ourselves and to those around us. We under-report, over-report, obfuscate and lie by omission on a regular basis. As a parent, one of the most fascinating ages to observe in my own kids was the 3-5 range, when they first discover and then try to master the art of deception - most parents have a story of a child covered in chocolate stains while strenuously denying they’ve eaten the missing cake.

When you talk to doctors, nurses and carers, you’ll find this is par for the course they run every day. They know what to trust and what to question, what to ignore and which chocolatey crumbs to follow.

Robots will have to learn this too.

It’s notoriously difficult to get robots and AI to understand us even when we’re telling the truth and asking for simple actions - Siri seems to take great delight in misunderstanding the simplest navigation instruction. How much harder will it be to train these devices to look for the lie?

And it’s not as simple as a lie, either. People often believe what is not true; a patient with dementia or a mental illness may truly believe they have already taken their medication, or that they have symptoms that don’t exist. They may confidently report and misdirect while fully believing they are telling the truth.



4. The sea monkey factor

If you, like me, ever received a sea monkey kingdom under the sea, you’ll know the joy of seeing your pocket universe come to life.

And if your experience was anything like mine - you’ll also know what it felt like to wake one morning and find that tiny world consigned to the history books, full of little watery corpses.

Despite our best attempts, it’s quite hard to care for something so small and fragile in such a finely balanced micro-environment.

Robots are going to have a similar problem. And yes, we’re the sea monkeys.

Robots currently succeed best in environments where humans aren’t about. Manufacturing floors are rife with machines that build cars at high speed and with great accuracy. But put them in a space with humans and it generally doesn’t end well - in 2016, for example, a toddler was knocked down and run over by a shopping-mall security robot that failed to register him. It just kept on trucking.

Robots are generally made of hard materials and live in fixed, predictable worlds. We humans are softer and more breakable, especially as we get older - robots are going to have to learn this, and be built from gentler materials.

Even a gentle touch can bruise some patients, and this is something carers have to learn to deal with. I once spoke with a nurse who had just dealt with an elderly patient with horrendous bruising to their ribs - the result of a loving hug from a (now devastated) grandchild who hadn’t realised how fragile they had become. What works for one patient may break bones in another.
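One way designers might handle that variability is to treat force limits as per-patient data rather than as a property of the robot. A minimal sketch, again in Python, with every name and number hypothetical:

```python
from dataclasses import dataclass


@dataclass
class PatientProfile:
    name: str
    max_contact_force_n: float  # clinician-assessed ceiling, in newtons


SAFETY_MARGIN = 0.8  # never plan to use more than 80% of the ceiling


def lift_is_safe(profile: PatientProfile, planned_force_n: float) -> bool:
    """Reject any motion plan whose contact force exceeds this
    patient's ceiling, with margin to spare."""
    return planned_force_n <= profile.max_contact_force_n * SAFETY_MARGIN


robust = PatientProfile("patient A", max_contact_force_n=120.0)
frail = PatientProfile("patient B", max_contact_force_n=40.0)

# The same 50 N lift is fine for one patient and refused for another.
print(lift_is_safe(robust, 50.0))  # True
print(lift_is_safe(frail, 50.0))   # False
```

A real system would need compliant hardware and live force sensing on top, but the principle stands: the limit belongs to the patient, not the machine.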

We’re fragile little sea monkeys, to a robot with strength. Learning how to not break us will take time.



5. Killing us with kindness

And even when robots have figured all this out, they still may do us harm, without meaning to.

Imagine you are old and frail but lucky enough to have George, the carer robot, delivered to your home. He’ll cook for you, dispense the medicines you need, tend your wounds, lift you from your bed to your chair and do your shopping. He’ll even talk to you and give you a foot rub before tucking you into bed every night. He’s awesome.

So awesome, in fact, that now your family don’t really need to come see you every day. And when they do come, you find it hard to talk to them - after all, you’re used to talking to George now, and he doesn’t care if you tell the same stories over and over.


For those who are truly alone, a robot carer could be the biggest blessing there is. But it might also help to isolate us, making it harder to engage with other humans and to get out of the house.

The online gaming world has exploded, and as someone who has partaken in more than one or two online battles I can attest that it’s hugely empowering to play with others when you live alone.

But as many parents will attest, it can also isolate and devastate.

Children become locked into their world, where they feel validated, empowered and important. Exit that world and they feel lost, devalued and worthless, until they retreat to where they feel valued.

Despite the benefits robotic care may offer, there’s also the potential for that care to kill us with kindness by removing us from others. If you knew your elderly family member was not only well looked after but very happy at home, would you visit as often? And if your turning up felt like an inconvenience to them, would you visit less still?


Kicking all the goals


Robot carers need to get the basics right, something that’s currently under way. They need to understand messy humans and our complex ways; they need to operate safely in the chaos of human homes, with a million different needs and contexts; and they need to provide care at the level required, across medical, emotional and physical fields.

But once they’ve learned to do that, they’re also going to have to kick more goals to ensure they don’t end up - like my eight-year-old self - staring at a tank of dead sea monkeys.