Fans of Netflix’s Black Mirror will remember the main character in the episode “Playtest” being terrorized by an algorithmic AR + AI feedback loop. In a nutshell, it metastasized his innermost fears – with a massive, augmented reality-generated spider (!), leaving his mind literally blown (its coda/denouement is that “Mom Called” – enough said).
For people hard at work on all manner of augmented reality technologies and artificial intelligence innovations, Black Mirror presents a possible future no one has any interest in seeing come to pass. It’s the “anti-demo”, instructive in extremis as an example of what NOT to do (“I like the future of work, but I DON’T like that... at all... how do we stop that from happening?”). It teaches us by showcasing – vividly – what could happen if technology innovation were to slip wildly off the ethical guardrails of what’s decent, proper, good, and uplifting for humanity. (To be sure, there are a few Black Mirror episodes – like “San Junipero” and the recent “Striking Vipers” – that open up a broad exploration of the spectrum of possibilities for the human experience too.)
Meanwhile, all this is playing out concurrently with the daily societal, ethical and philosophical debates happening under the broad penumbra of “The Techlash”. But behind all the “lash” is... tech. And augmented reality is a major wave on the tech horizon, promising to be a fundamental game changer for the future of work.
That’s why getting into “the real reality” of the adoption of AR is important. And that’s why our forthcoming report from the Center for the Future of Work, “The Real Reality of Augmented Reality”, will showcase the current state of affairs for this breakthrough technology whose time has come.
But AR won’t amount to anything if it isn’t ethical, fair, private, comfortable, and safe – and if it doesn’t dispel the notion that it’s a “scary” technology.
At its essence, AR “works” by having real humans (with thoughts, feelings, desires, and reactions) looking around inside the medium, as fast as their mind can go. As the saying goes, the eyes are the windows into the soul. And as the industry begins its inevitable rocket-ride, how can it be shepherded, stewarded, and fostered in a way that doesn’t resemble the worst elements of a Black Mirror episode?
Black Mirror’s “Playtest” anti-demo notwithstanding, the last thing this nascent paradigm needs is a prevailing sentiment that it’s exacerbating problems of dopamine-driven technology addiction, or conditions that lead to isolation. (Consider: we already have a type of “augmented reality” – social media – that is steadily eroding our connection to the real world.) Addiction is a risk; in-built “stop signs” will be needed, and may take various forms. At a recent MIT event, Microsoft’s Harry Shum imagined what some of the “stop signs” for emerging tech might look like (e.g., when should AutoPlay features “AutoQuit” if the user is sleep-deprived? 1 AM? 2 AM? 3 AM?). Given its prominence in the AR market with HoloLens, this is an apt question for Microsoft to be asking – and other players should be asking it too.
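To make the “stop sign” idea concrete, here is a minimal, purely illustrative sketch in Python of what an “AutoQuit” guardrail could look like. Every name, threshold, and cutoff time below is a hypothetical assumption for illustration – no real AR platform API is being described, and the specific hours are placeholders for the kind of question Shum raised, not product guidance:

```python
from datetime import datetime, time, timedelta

# Hypothetical guardrail thresholds -- illustrative values only.
LATE_NIGHT_CUTOFF = time(1, 0)        # e.g., sessions flagged after 1 AM
MORNING_RESUME = time(6, 0)           # guardrail lifts at 6 AM
MAX_CONTINUOUS_USE = timedelta(hours=3)  # cap on one unbroken session

def should_auto_quit(now: datetime, session_start: datetime) -> bool:
    """Return True if a hypothetical 'stop sign' should end the session,
    either because it's the middle of the night or because the session
    has run too long without a break."""
    in_late_night = LATE_NIGHT_CUTOFF <= now.time() < MORNING_RESUME
    too_long = (now - session_start) > MAX_CONTINUOUS_USE
    return in_late_night or too_long
```

For example, a session checked at 2:30 AM would be flagged by the late-night rule, and a four-hour afternoon session would be flagged by the duration rule, while a one-hour afternoon session would be left alone. The real design debate – which Shum’s question points at – is where those thresholds should sit and who gets to set them.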
Perhaps some of these fears are overblown; the Experts in our forthcoming survey were almost as concerned about the damage caused by over-hyping AR as they were about privacy. But it’s essential that these fears be given the utmost consideration, given the serious consequences at stake. What could those be? Well, how about the disappearance of customers, or government regulators like the European Commission for Competition or the US FTC cracking down hard? In the current climate of the Techlash, any perception of AR malfeasance would be like pouring kerosene on an already-raging debate about the future of digital ethics.
Take heed -- the days of cavalier technology “debutantes” experimenting on a vast population of consumers without regard for the consequences are over. Apology tours about “we didn’t know that X would happen because of THAT” have grown more tedious by the month.
Though I’m certain that many in Silicon Valley are already avid acolytes, “Black Mirror” should be required viewing as we wend our way through digital breakthroughs, standing as a 21st-century “beware!” totem, a techno-anti-demo, a vivid example of “what not to do”. The future of augmented reality (and AI... and automation... and algorithms...) depends on getting the right ethics baked into these new models early – while there is still time.