Nearly 6,000 pedestrians were killed by automobiles in 2017, but it's doubtful that a comparable number of national or global headlines covered those tragedies. More likely, most of us read just one: the fatality caused by a self-driving car. Apparently, it's far more distressing to watch an intelligent machine commit an error than to see a human make the same mistake.
When intelligent autonomous machines go wrong, it rattles us. This is particularly true when the machine is performing a very human physical task, like driving a car or preparing food. Maybe it's the potential for devastation if a non-sentient being, with no notion of tragedy or regard for human life, were to repeat an error over and over despite its horrific consequences, from economic upheaval to widespread illness or death. Or perhaps it's the cognitive dissonance of seeing an intelligent machine, one we assume was created to perform a task better than we can, perhaps even perfectly, act in a decidedly unintelligent way.
The same fears can surface when intelligent software delivers unintended consequences invisibly. For instance, how do we know that a machine-learning algorithm is behaving ethically? It's frightening to feel helpless in the face of problems we can't even see, let alone control.
When we lose trust in the machine, we naturally think of the human who made it. Didn't he or she consider the pathways and possibilities that led to this error and devise a Plan B? Losing trust in the machine's creator, or, worse, sensing the complete absence of human intelligence or moral principles behind the machine's actions, is a little like peering into the void.
Building Human-Machine Trust
As intelligent machines increasingly become a part of our world, it's up to us, and their makers, to establish a comfort level with these new beings and figure out how they fit into a world that is, and will remain, an ultimately human domain. We already know from experience what doesn't work when it comes to building human-machine trust: making machines look or act more human. One look at the comments on robot videos released by SoftBank's Boston Dynamics, and the "uncanny valley" becomes acutely clear: the point at which humans' emotional response to a robot turns sharply negative when the robot appears "almost" human. The problem is that human values and behavior can't be programmed as a fixed set of rules, at least not yet.
What we need to sense is the presence of human norms, and human morality, behind the machine's design. If human considerations are overlooked in the design of new machines, the effect on companies' brands, reputations and finances could be catastrophic.
Mechanisms of Trust
One way to convey that presence is to design in an obvious "off switch" or panic alarm that can quickly halt undesired actions or unintended consequences. Think of the early "driverless" passenger elevators; in addition to a soothing recorded voice instructing riders what to do, these moving vestibules featured what elevator historian Lee Gray calls "the biggest calming device ever invented, a big red button that said 'stop,'" not to mention a phone in case of emergency. By exposing this small bit of technology, designers could pass control back to the human and give humans the final say.
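The elevator's big red button has a direct software analogue: a supervisory loop that checks a human-controlled stop flag before every autonomous action. Here is a minimal sketch of that pattern; the class and method names are invented for illustration, not drawn from any real robotics API.

```python
import threading

class KillSwitch:
    """A human-controlled stop flag, checked before every autonomous action."""
    def __init__(self):
        self._stopped = threading.Event()

    def press(self):
        # The "big red button": safe to call from any thread.
        self._stopped.set()

    def engaged(self):
        return self._stopped.is_set()

class Controller:
    """Runs queued actions only while the kill switch is not engaged."""
    def __init__(self, switch):
        self.switch = switch
        self.log = []

    def run(self, actions):
        for name in actions:
            if self.switch.engaged():
                self.log.append("HALTED")
                break  # control passes back to the human
            self.log.append(name)

switch = KillSwitch()
bot = Controller(switch)
bot.run(["steer", "accelerate"])
switch.press()           # the human intervenes
bot.run(["steer"])       # no further actions execute
```

The key design choice is that the machine checks the flag itself before each step, so stopping never depends on interrupting the machine mid-action.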
At a more advanced level, humans could be given another button, one that asks machines to explain the reasons behind their actions. This would help us better understand machines' inner workings and catch potential missteps.
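One lightweight way to support such an "explain" button is to have the machine record, alongside each action, the input and the rule that triggered it, so the rationale can be replayed on demand. A toy sketch follows; the braking rule and the 10-meter limit are invented purely for illustration.

```python
class ExplainableAgent:
    """Records a human-readable reason for every action it takes."""
    def __init__(self):
        self.history = []

    def act(self, obstacle_distance_m: float) -> str:
        # Invented rule: brake when an obstacle is closer than 10 m.
        if obstacle_distance_m < 10.0:
            action = "brake"
            reason = f"obstacle at {obstacle_distance_m} m < 10 m limit"
        else:
            action = "cruise"
            reason = f"obstacle at {obstacle_distance_m} m is clear"
        self.history.append((action, reason))
        return action

    def explain(self) -> str:
        """The 'explain button': replay the reason for the last action."""
        action, reason = self.history[-1]
        return f"{action}: {reason}"

agent = ExplainableAgent()
agent.act(25.0)
agent.act(4.5)
print(agent.explain())  # brake: obstacle at 4.5 m < 10 m limit
```

Real learned systems are harder to explain than a hand-written rule, of course; the sketch only shows the interface a human-facing "why" button implies.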
There are more subtle ways to give humans the reins, or at least the sense that we can override machine actions, particularly in our "there's an app for that" world. Like Shazam does for entertainment, identifying music, movies and TV shows from just a small sample, a robotic companion app could be developed to detect when a machine is veering into questionable activity and provide options for avoiding the consequences. Food prepped and served by robots, for example, could be scanned for impurities or anything else that's unfit for consumption. If we're going to rely more on machines, after all, why not use one to monitor other machines?
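A machine-monitoring-machine app of this kind could, at its simplest, compare another machine's observed actions against an approved repertoire and flag anything outside it. The sketch below is hypothetical; the action names and allowlist are made up to illustrate the idea.

```python
# Hypothetical allowlist for a food-prep robot.
APPROVED_ACTIONS = {"chop", "stir", "plate"}

def monitor(action_stream):
    """A second machine watching the first: flag any action
    outside the approved set so a human can intervene."""
    flagged = [a for a in action_stream if a not in APPROVED_ACTIONS]
    return {"safe": not flagged, "flagged": flagged}

report = monitor(["chop", "stir", "unplug_smoke_detector", "plate"])
```

An allowlist is deliberately conservative: anything not explicitly approved is treated as questionable, which suits a watchdog whose job is to err on the side of alerting a human.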
Human control could also be enhanced by enabling faster and more transparent forensics into who's to blame for a machine error and what the consequences should be. Right now, machine capabilities are far ahead of industry mores and the law. In the self-driving car world, for example, the question of liability is still being worked out among hardware makers, auto manufacturers, software designers, insurers and car owners.
With reliance not on one monolithic system but on a web of interlinked microservices, however, new processes would need to be instituted to trace any one piece of code back to its original developer. Perhaps this is where blockchain could come in, to provide an indelible and immutable record of who last touched any part of the chain. Or maybe augmented or virtual reality tools could help investigators recreate the events leading to the error. If the makers of a machine knew they could be held culpable, and publicly identified, for unforeseen consequences, it would only be natural for them to place more guardrails around what they actually intend the machine to do.
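The "indelible record" idea can be sketched without a full blockchain: a hash chain in which each entry commits to the previous entry's hash, so tampering with any record of who touched the code invalidates everything after it. This is a toy illustration, not a production audit system, and the developer and artifact names are invented.

```python
import hashlib
import json

def add_entry(chain, developer, artifact):
    """Append a record linking a developer to a code artifact,
    chained to the hash of the previous entry."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    record = {"developer": developer, "artifact": artifact, "prev": prev_hash}
    # Hash the record's contents before attaching the hash itself.
    record["hash"] = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()
    ).hexdigest()
    chain.append(record)

def verify(chain):
    """Recompute every hash; any tampering breaks the chain."""
    prev = "0" * 64
    for rec in chain:
        body = {k: rec[k] for k in ("developer", "artifact", "prev")}
        expected = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()
        ).hexdigest()
        if rec["prev"] != prev or rec["hash"] != expected:
            return False
        prev = rec["hash"]
    return True

chain = []
add_entry(chain, "alice", "braking_module v1.2")
add_entry(chain, "bob", "sensor_fusion v0.9")
```

Editing any earlier record, say, swapping in a different developer name, changes its recomputed hash, so `verify` fails for that entry and every one after it.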
Machine designers could also build in clear mechanisms that pass control back to humans when baselines and thresholds are exceeded, and humans need to be ready for the hand-off. This is the future of human-machine collaboration, in which humans develop the skills to remain in control of machine operations and to know what to do in a multitude of situations.
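At its core, such a hand-off mechanism can be a guard that compares the machine's self-reported confidence against a preset baseline and defers to a human operator when the baseline is breached. The sketch below is hypothetical; the threshold value is made up, and real systems would tune it per task.

```python
HANDOFF_THRESHOLD = 0.90  # illustrative baseline, not from any real system

def choose_controller(model_confidence: float) -> str:
    """Keep the machine in control only while its confidence
    stays at or above the baseline; otherwise hand off to a human."""
    if model_confidence >= HANDOFF_THRESHOLD:
        return "machine"
    return "human"

decisions = [choose_controller(c) for c in (0.98, 0.95, 0.62)]
print(decisions)  # ['machine', 'machine', 'human']
```

The hard part, as the text notes, is the human side: a hand-off only works if the operator is attentive and trained enough to take over at the moment the guard fires.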
An AI Ethics Code Built on Transparency
At the heart of the issue, however, is the need for a commonly agreed-upon code of ethics for artificially intelligent devices and beings. So far, a small number of companies, such as Microsoft, have begun forming internal ethics boards, and a few industry organizations have been formed by the likes of Amazon, Facebook, Google, IBM and Tesla to study the ramifications of AI and guide their future forays into the technology.
Another way to address the ethics issue is to crowdsource possible solutions. MIT's Moral Machine project demonstrates how this approach can be used to teach machines to make decisions that better reflect human moral judgments in the context of autonomous cars.
Others argue that external governance controls will be needed, in the form of regulations and industry-enforced standards. The IEEE has introduced a global initiative to establish ethical and social principles for autonomous systems that prioritize human well-being. French president Emmanuel Macron has called for more accountability for the decisions made by "black-box" algorithms and the need to expose their inner workings. Germany, for its part, drafted a set of ethical guidelines for autonomous cars, including 20 rules for software designers to prioritize "safety, human dignity, personal freedom of choice and data autonomy." And the UK has vowed to "forge a distinctive role for itself as a pioneer in ethical AI," with the drafting of an AI Code for adoption by national and even international governing bodies.
There should also be greater transparency in how developers, engineers and designers encode ethical values in intelligent machines, as well as in the anticipated outcomes of those choices. A design-centric approach to ethics could help build that transparency into machines from the start.
When it comes to intelligent robots, the world seems divided into two camps: avid enthusiasts and those who fear the coming of a dystopian future. But the engineers, designers, developers, investors and innovators creating these machines can bridge that divide. It's not enough to expect people to ultimately learn to trust the machine. The trust that needs to be developed is in the human behind the machine.