A wise man once said, "A lifelong learner is a lifelong winner." As artificial intelligence-driven machines begin to take over not just routine, low-end tasks but also skilled white-collar work, the need for humans to become lifelong learners is clearly growing. With many skills and abilities becoming irrelevant, we need to continually enhance our expertise and knowledge in order to survive and thrive in the future workplace. But why restrict this concept of lifelong learning to humans? What about machines? Should we expect them to become 'lifelong' learners too?
While there are certainly some ambitious machines in the making today, in actuality we are not even close to the technological marvels promised in our favorite sci-fi movies. There's no denying that many of our machines are smart, though. But any machine operating in a human environment, be it a chatbot or a self-driving car, has no option but to learn new capabilities continuously if it is to qualify as 'smart.' These machines, by definition, need to keep learning throughout their lifespan to fuel their intelligence.
Lifelong learning is not a like-for-like comparison between humans and machines, though. Humans learn through curiosity, encouragement, and open-mindedness. Machines learn because they are programmed to, and lifelong learning must be programmed in the same manner: machines should learn continuously from the execution of specific tasks, add or tweak capabilities based on outcomes, and ultimately learn from the actions of their human operators. With continuous learning, machines become better at performing prescribed tasks, discovering new tasks, and solving problems. Engineers, designers, and creators of intelligent machines need to develop, implement, and optimize the underlying algorithms with lifelong learning in mind.
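The idea of a machine that tweaks its capabilities from each outcome it observes can be sketched in code. The following is a minimal illustration, not any specific product's implementation: a linear model that updates its weights after every single observation instead of being trained once and frozen. All names here (`OnlineLearner`, `update`) are hypothetical.

```python
class OnlineLearner:
    """A toy 'lifelong learning' machine: it refines its model
    after every observed outcome rather than training once."""

    def __init__(self, n_features, lr=0.01):
        self.w = [0.0] * n_features  # weights, adjusted over the machine's lifetime
        self.lr = lr                 # learning rate: how strongly each outcome corrects the model

    def predict(self, x):
        # Current belief: a weighted sum of the input features.
        return sum(wi * xi for wi, xi in zip(self.w, x))

    def update(self, x, y):
        # One lifelong-learning step: compare prediction with the
        # actual outcome y, then nudge the weights to reduce the error.
        error = self.predict(x) - y
        self.w = [wi - self.lr * error * xi for wi, xi in zip(self.w, x)]
        return error


# The machine keeps learning as experience streams in.
# Here the true (unknown) relationship is y = 2*x0 + 1.
learner = OnlineLearner(n_features=2, lr=0.01)
for step in range(2000):
    x = [float(step % 10), 1.0]  # one feature plus a constant bias term
    y = 2.0 * x[0] + 1.0         # the outcome the machine observes
    learner.update(x, y)

print(learner.w)  # weights approach [2.0, 1.0] as learning continues
```

The point of the sketch is the loop: there is no separate "training phase," only a stream of task executions, each of which slightly improves the machine. Real continual-learning systems add much more (avoiding forgetting old skills, discovering new tasks), but the feedback loop is the common core.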
As machines learn more, they deliver more. And as a result, we may be confronted with unknown consequences of machines' actions in the future. How do we know that today's lifelong learning machine is not delivering a negative or biased result, or is not on its way to becoming tomorrow's Terminator? Equally, it's possible machines could prove to be more moral than humans. The book "Moral Machines: Teaching Robots Right from Wrong" proposes a Moral Turing Test to examine the challenge of building artificial moral agents. The quest to build machines that are capable of telling right from wrong has begun. The difficulty is that human values and behavior cannot be captured in a fixed set of programming rules, at least not yet. Still, institutes like MIT have set up an online experiment based on a 'who to save' scenario, using the results as 'moral crowdsourcing.' We need to be able to sense human norms and human morality behind the machine's decisions.
With continuous learning, machines become more intelligent, more capable of transforming our society, and better placed to become major players in future cities, industries, and work environments. They will impact several industry sectors by providing care for aging populations, creating safer transport, enabling more efficient healthcare, and increasing productivity levels. In short, machines will be able to address the societal issues that humans haven't addressed adequately for decades. It's in our interest, therefore, to make machines lifelong learners. Without the capability for lifelong learning, machines will probably never be truly smart and intelligent.