The world of work is changing faster than ever, and machine intelligence is at the center of this change. There are two key dimensions of trust when it comes to artificial intelligence (AI): trust between workers and employers, and trust that the actions and decisions of intelligent machines won’t result in unknown or negative consequences.
Machines altering human-trust mechanisms: AI and automation will increasingly take over many routine, repetitive and low-end tasks, making some people’s skills and capabilities irrelevant, and leaving behind those unable to keep up and compete. The threat to workers’ current jobs is an inescapable topic when it comes to AI. In fact, nearly 60% of leaders cited fear of job loss as a significant challenge to the adoption of AI-driven intelligent machines in their organization. While 38% of leaders predicted a decrease in trust between employees and employers, 34% said the same was true between employees themselves, perhaps precipitated by massive job transition. This is all the more alarming given industry reports claiming that 70% of HR executives expect AI to result in significant job losses in Asia over the next five years. The fear of becoming irrelevant is a key concern for employees today.
The transition to the intelligent machine age won’t happen without an acute focus on the relationship between humans and machines, how the two will collaborate, and how the current workforce and the business itself will adapt to AI. To grow trust, leaders should proceed sensitively and gradually when introducing AI systems, and focus on the human-machine collaboration issue. No matter how well AI systems are designed, if people don’t have confidence and trust in them, businesses won’t adopt them successfully. Employees will gain confidence in machines when they see positive results from the decisions they make. Companies must focus on user acceptance as much as the technology itself. Prioritizing people will require changes in management culture, which still tends to be hierarchical and authoritarian in many Asia Pacific organizations.
Instilling trust in machines: From unexpected or biased results to the perpetuation of dangerous errors, many people fear “what can go wrong” with intelligent machines. My latest research shows that 65% of leaders are somewhat to very concerned about the unknown consequences of intelligent machine failures, and the same number agreed that as intelligent machine use increases, we will witness new and unknown consequences that may surprise us. What if an AI-powered medical system makes a recommendation that leads to serious injury or even death? This could have a catastrophic impact on a company’s brand and finances.
So who should be held responsible for any side effects of intelligent machines? We need to train AI engineers, designers, developers, investors and innovators, and hold them accountable not only for defining specific tasks for AI machines but also for recognizing their side effects. As machine intelligence continues to get smarter, human intelligence will be needed to ensure it is deployed sensibly and safely. After all, humans were needed to train and develop the personalities of Apple’s Siri and Amazon’s Alexa to ensure they accurately reflected their companies’ brands. We need to put humans in the driver’s seat to bring greater transparency.
Trust is the new gold of the machine age. I believe those who will win the battle for trust are those who learn from human-machine collaboration and implement the lessons learnt.