Although nearly 6,000 pedestrians were killed by vehicles in 2017, hardly any of these tragedies made national or global headlines. Yet when a self-driving car killed a pedestrian earlier this year, anxiety spread quickly, perhaps because it was the first death involving a fully autonomous test vehicle. That anxiety demonstrates the potentially catastrophic effect the failure, or unexpected actions, of an intelligent machine can have on its company’s brand, reputation, and finances.
As artificial intelligence (AI) tightens its grip on every part of our lives, the unknown challenges it brings will only become more pressing and surprising. But is the risk of these unknown consequences worth the business benefits? An estimated 94 percent of car crashes are caused by human driver error. Will autonomous vehicles, which make fewer mistakes, help reduce accidents? And is a death at the hands of an autonomous machine less tolerable to us than one caused by human error? If so, we face the questions of who will decide what constitutes acceptable risk with machines and how that risk will be determined.
Indeed, the potential failure of intelligent machines raises some important social, legal, and ethical questions:
- Would you react to a mistake made by a machine differently than one made by a human?
- Who bears the responsibility for wrongdoing or harmful conduct by a machine?
- Should the governance of machines be fundamentally different from the governance of humans?
These are questions to which we currently have no clear answers. However, one problem we can identify is that our current physical infrastructure and road rules are designed for cars with a human behind the wheel. For autonomous vehicles to become ubiquitous, we need to redesign our infrastructure and create new rules to accommodate machines on our roads, a process that will not happen anytime soon.
AI ethics and morals are highly nebulous areas. While it is clear that AI needs governance, there is currently no central body to conduct such a task. Hopefully, one day we will have legal and technical regulations, as well as audit mechanisms, in place to address the unknown consequences of machines. Until then, businesses must focus on self-regulation based on openness and accountability, while keeping an ever-vigilant eye on the maintenance of human values. Luckily, many companies are looking into this issue proactively. For instance, OpenAI was founded on the idea that AI needs to be built with safety in mind, and Google’s AI research arm, DeepMind, has launched a unit focused on ethics and society.
In fact, machines might one day prove to be more moral than humans. The book “Moral Machines: Teaching Robots Right from Wrong” proposes a Moral Turing Test to examine the challenge of building artificial moral agents. Per the book, the quest to build machines capable of telling right from wrong has already begun. Institutes like MIT have even set up an online ‘who to save’ scenario, using the results as a form of ‘moral crowdsourcing.’ What we need is to inject human norms (and human morality) into a machine’s design. However, this comes with its own issues, such as the fact that human values and behavior cannot be reduced to a fixed set of programmable rules, at least not yet.
Ultimately, on the timeline of the AI revolution, we have only just fired the opening shot. We still need a human-controlled on/off switch for intelligent machines, so that we can shut them down in an emergency or before they do something they shouldn’t. Machines are becoming ever smarter, but they are not perfect, and there is no way of knowing what consequences might arise if or when the machines around us fail.