Does an app have the right to save your life?

It's an interesting question, and one we'll soon have to address: where exactly does the ethical line sit when it comes to technology and life-saving?

Not so long ago I sat in a coffee shop and dreamed up (with two others) the concept of a robotic pool life-saver. The idea was that the device would - when in guard mode - look for movement in the water, move towards it, sound an alarm and attempt to float whoever was there, the assumption being that a child had fallen in. That project never made it off the ground, but there's no doubt in my mind that it would be a worthy cause. But where, exactly, is the line? At what point does a device, or indeed an app, gain the right, or even the obligation, to save us?

The past two months have been interesting ones for me, working in the aged care, youth care and disability sectors. I've researched in the field and spoken to numerous people about their lives, their issues and the needs that are out there.

Last night I watched Humans on SBS, a very interesting show. In it, a health-service robot is provided to an older man living alone, to care for him. The robot, matronly and authoritative, immediately starts to curtail his life: salt-reduced broth for lunch, healthy walks and early to bed, despite his demands to be treated better.

And it raises an interesting question that we'll soon have to deal with. Apps already exist that can help to save lives: calling for help, providing life-saving guidance to restart a heart, helping us to live healthier (and therefore longer). But right now all of this is self-guided.

What happens when the app or the device is in charge? Where's the ethical line here?

This might sound a little far-fetched, but trust me, it's already happening. We are living in a world where self-driving cars are a reality and will soon hit the roads (pun intended). Those cars will have lives in their hands every day, both passengers and others on the streets. They'll also, most likely, be gaining a layer of smarts to react in emergencies - for example, if a passenger has a medical emergency whilst en route.

Automated drones already have the capacity to carry weapons, at least for military purposes. Home automation options are available right now that will monitor movement, telling us if a person has a fall or forgets to take their pills. Others are in development that will (very soon) be able to monitor mental and physical health, and react accordingly.

So, from a UX designer's perspective, we come back to that question: does an app have the right to save your life?

On the surface the answer seems simple. Of course it does! Friendly robot sees person have a heart attack and friendly robot assists. Technology has the right, indeed the obligation, to save lives.

Or is that right?

What about people who make a choice to end their own lives? Should that be overridden by their app, their technology? Should the self-driving car refuse to allow its passenger/driver to take dangerous actions behind the wheel? Should it stop the crash? Should the pill app lock the lid if someone takes too many?

Again, some people may say yes - that the app and the technology are just as obligated, even if the intent is to self-harm.

But where does that end? I had chips for lunch today, and yes, they tasted great. But they've also added to the protective layer I'd love to lose, and have increased my risk (marginally, it's true) of serious health issues and death. Should my technology stop me from eating chips ever again?

Should my car refuse to drive me anywhere except to the doctor's? Should my app refuse to show me web pages for fast-food joints? Should my recipe finder recommend only vegetarian meals?

And this hasn't even touched on the issue of choice. What happens if an app can reduce the risk to, or save the life of, only one person when two people are in danger? How does it make a choice? What if my pool life-saver robot registered two bodies in the water, not one? How does it choose? And does it have the right to make that call?

I once talked with a team who field-tested the Phalanx anti-missile system. This was an automated gun that could sense incoming missiles and shells threatening a ship and fire pellets to destroy them before they hit. When tested against a single incoming shell from another ship it worked perfectly - right up to the point where they tested it with two incoming shells. The Phalanx gun started firing at the first, then abandoned it and focused on the second, then the third... Luckily this was a field test with no live rounds, and no lives were lost. It turned out that the logic built into the gun was to lock on to the fastest incoming target and fire. Since the first shell had slowed marginally, the gun ignored it, even though it was clearly the bigger threat.

Somewhere back down the track from that field test, someone had built logic into a device to save lives. And it turned out that their logic was faulty. If that hadn't been caught, lives could easily have been lost to poor logic, poor UX.
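
To make that concrete, here's a rough sketch of how that kind of prioritisation bug can creep in. This is hypothetical code, not the real Phalanx logic: one rule locks onto whatever is fastest (the flaw described above), the other onto whatever will arrive first.

```python
# Hypothetical sketch only - not the actual Phalanx targeting logic.
from dataclasses import dataclass

@dataclass
class Target:
    name: str
    speed: float     # metres per second
    distance: float  # metres from the ship

def pick_target_by_speed(targets):
    """The faulty rule: lock onto whatever is fastest."""
    return max(targets, key=lambda t: t.speed)

def pick_target_by_time_to_impact(targets):
    """A more sensible rule: lock onto whatever will arrive first."""
    return min(targets, key=lambda t: t.distance / t.speed)

shells = [
    Target("first shell", speed=290.0, distance=800.0),    # slowed slightly, but close
    Target("second shell", speed=310.0, distance=4000.0),  # faster, but much further out
]

print(pick_target_by_speed(shells).name)           # second shell - the wrong call
print(pick_target_by_time_to_impact(shells).name)  # first shell - the real threat
```

The difference is a single line of selection logic, which is exactly why these decisions need to be designed, and tested, deliberately.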

Think that's not relevant? What happens if a self-driving car registers an incoming threat (a boulder rolling down a hill, a child stepping in front of the car) and has to take evasive action? What happens when the only evasive action it can take is to swerve into a line of pedestrians? Which choice should it make?

I know this sounds like sci-fi, but there are some pretty weighty questions here and we're not far from needing the answers. The last month or two has taught me that there is a burning need for technology to help save and protect people - to support those who are older, younger, or living with a disability or special need. And solutions for these needs are already rolling off the line.

But the ethics and the model for how these devices might behave can't afford to follow too far behind...

Gary Bunker

the Fore