We'll follow a robot - to our doom

This article on New Scientist makes interesting reading - in a recent experiment at the Georgia Institute of Technology, 26 out of 30 students chose to follow an 'Emergency Guide Robot' to an obscure doorway in a simulated emergency rather than take the clearly marked emergency exit. That's 87% of the participants, potentially led to their doom in a burning building. It's a sobering thought.

But it's not unique.

Whilst the scientists in this case are trying to get to the bottom of how we build and place trust in robots, cases of people trusting their technology to the point of disaster are relatively common. This article lists eight drivers who blindly followed their GPS into trouble: driving into a park, (almost) driving over a cliff, driving into the sea, and driving for hours across international borders to reach the local train station.

Airplane crashes frequently involve pilots trusting what the instruments are telling them over what they can plainly see - as reported here. Years ago, when I worked for the military in the UK, there were reports of fighter aircraft crashing in the Gulf because their 'ground-hugging' radar would sometimes fail to read soft sand as ground, burying the plane in the desert floor. I heard numerous other horror stories of military tech gone wrong, killing or almost killing the people it was supposed to protect - including one very 'not-smart' torpedo that blew the front off the vessel that fired it.

And all this comes to mind this week, as I'm involved in designing an app to help people avoid and survive bushfires.

Software that tells someone to turn left instead of right in an emergency is no different from a robot that leads someone away from a fire in a simulated experiment. And if it leads someone into the fire rather than away from it, the effect is exactly the same. We are rapidly moving past the age where software simply provided information; it now makes decisions for us - and that means we have a responsibility to ensure those decisions are right.

As designers, coders, product managers and UXers, we are involved in designing software that can increasingly be thought of as an agent. These agents go out into the world, and - hopefully - people trust them.

And if our agents lead people to their doom - that's on us.

I've always advocated for a code of ethics across the software, robotics, IoT and UX fields - never more so than now.

Gary Bunker

the Fore