In an experiment designed to teach a bot how to negotiate with humans, Facebook’s AI researchers found that haggling bots quickly discovered lying as a useful bargaining tactic for swaying results in their favor. In fact, these chatbots eventually developed their own language and learned to lie to win negotiations. Researchers were quick to declare that “this behavior was not programmed by the researchers but was discovered by the bot as a method for trying to achieve its goals.” In other words, a lying bot’s deceptive behavior emerges on its own to maximize its reward. Isn’t it ironic that we build these systems, yet we don’t really understand them? Unfortunately, a machine that thinks won’t always think in the ways we want it to, and I’m not sure we’re ready for the ramifications of that.
We are increasingly viewing and treating machines as humans, which is undermining our own biological abilities. Just because machines exhibit some characteristics of thinking (the ability to drive a car, approve or reject our loans, and help our doctors diagnose what ails us, to name a few), that doesn’t make them human beings. Humans are complex, emotional, and relationship-driven. We fear and respect society, and we make irrational decisions (to err is human), whereas machines are trained on massive quantities of data and are nearly perfect at picking up on the subtle patterns those data contain. I doubt many of us would be okay with a robot doing irrational things we never dreamed of. Would you be pleased to find out your trusty robot was actually a liar?
Humans lie for several reasons: to avoid punishment or embarrassment, to gain advantage, to help others, to protect political secrets, and the list goes on. Robots, however, do not worry about shame, praise, or fear. They are programmed to WIN at any cost, a feature that is creating an increasing sense of unpredictability. The reality is that while we continue to make machines more like humans, we lack the ability to truly understand how they produce the behavior we observe. This can be a serious problem, especially where the world of business is concerned.
Knowingly or unknowingly, we are teaching machines to lie, and this raises important technological, social, and ethical considerations: What would you call a stock-trading bot that maximizes profits by breaking the law? How will we ensure that machines that lie still respect human feelings? And whose interests are they meant to protect: the people who made them or the people using them? These emerging ethical questions are forcing us to think seriously about how to deal with machines that learn to lie. As AI spreads to even more parts of society, the resulting ethical challenges will become even more diverse.
We are clearly facing an ethical dilemma with machines that lie. With this in mind, there are some important questions we must consider: How will we enforce accountability on AIs that lie and cheat? Can we fine a machine that lies? How will a machine be punished if it is caught cheating? While it is clear that AI needs governance, there is currently no central body to conduct such a task. Moreover, ethics is probably the last thing innovators want to think about. However, if ethical considerations are continually overlooked, AI could have a catastrophic effect on companies’ brands, reputations, and finances, including consequences we haven’t yet foreseen.
It is only a matter of time before we begin programming other human tendencies into machines, so we need to program morality into thinking machines as well. The people who create and manage new machines need to be trained, and re-trained, on the importance of ethics. In fact, human ethics must become a key performance indicator for the people building new machines. In preparation for the next advancements in AI, we need to establish more open and honest conversations about the ethical implications of AI and how we can best prepare ourselves for the exciting times ahead.
We as humans, not machines, have the opportunity to determine our future. Yet with great power comes great responsibility, and in this case, that responsibility is to create a world we want to live in. Machines have the potential to make our world a faster, more effective place to live, but they also come with certain unwanted risks. As we continue to endow our creations with an ever-increasing number of human characteristics, we need to consider what each of our human characteristics will teach them and how they may one day use it to their advantage.
Until the day you have to face off against a lying bot, why not work on an extra skill to help you spot robots that lie? This TED talk from Pamela Meyer, author of Liespotting, will help you hone that skill.
So, do you still think an AI that lies does so by accident?