What’s in a name? Well, in the case of Artificial Intelligence there’s a lot of misconception. Some believe that AI will take over the world in a super-human form with intelligence that will make the power of Einstein’s mind akin to that of frogspawn. Let’s call them AI-worshippers. (Bear in mind that people tend to worship not just because of adoration but often fear.) Others treat yet another article about AI with disdain and an eye-roll, fed up with every bit of automation being referred to as AI.
There’s a clear need to find a middle ground in the public perception of AI. As it continues to creep into our lives, consumers, business leaders and policy-makers alike need to be on the same page, conveying the same message about the realistic application of AI. A number of governments and businesses are publishing ethical guidelines for the use of AI, attempting to define boundaries for the widespread usage of such data-driven technologies (e.g. Google, Microsoft, IBM and the Partnership on AI). One of the principles from the UK government’s AI committee states: ‘Artificial intelligence should operate on principles of intelligibility and fairness’.
Are we at risk of those ‘in the know’ disregarding this principle and overhyping AI, taking advantage of the AI-worshippers?
The name Artificial Intelligence lends an air of authority that it simply does not deserve (yet). Recently I had the pleasure of hearing Hannah Fry speak – a mathematician by day, but by night a woman who is on a mission to set us straight when it comes to Artificial Intelligence. To paraphrase Fry’s words, really what we’ve seen is a massive advance in computational statistics, but unfortunately that terminology isn’t sexy enough to get anyone any funding.
We need to be wary of services claiming to use AI when in fact ‘AI’ is just an inflationary label that makes things sound more impressive than they really are… For example, Google funds a system called Perspective to identify toxic comments online – but it turns out that simple typos can fool it. Facebook uses an AI system to detect suicidal thoughts posted on the site, but the “AI detection” in question is little more than a pattern-matching filter that flags posts for human community managers to deal with.
With a widely adopted, realistic definition of AI (and regulation to ensure that any service claiming to deploy ‘AI’ actually meets that definition), users can be confident in the products and services they purchase or engage with.
I’m not here to belittle the massive advancements that intelligent machines are making. Just as the AI-worshippers need a healthy dose of scepticism, so too do the eye-rollers need a bout of perspective…
Consider what we should be hoping for AI to accomplish. In areas such as healthcare, the justice system and crime prevention, all we need is for AI to be a bit better at the job than humans. In the tasks that machines are suited for – pattern recognition, for example – we should partner with AI because it can offer more data-driven insight than a human can, which will help us get one step closer to an ethical ruling or an effective diagnosis. AI diagnostic tools for cancer get a lot of good press, and for good reason: one AI system can diagnose skin cancer more accurately than dermatologists (AI: 95% vs human: 87%). In these instances, it’s clearly worth having this second opinion.
Too often we expect perfection from machines because they are devoid of human flaws, but that’s entirely illogical. We should turn to the human benchmark to judge whether AI adds value, instead of dismissing it because it hasn’t yet accomplished, say, world peace.
Until we find a way to regulate the use of AI-based products, consumers need to practise scepticism. There’s a powerful parallel between the sale of medicine before regulation and the sale of AI systems today. Both have the power to really help people, but before medicines were regulated, anyone could put any coloured liquid in a bottle and sell it as a cure-all.
It’s vital to smash this perception paradox and find a realistic middle ground where we treat advances in AI with a healthy balance of scepticism and excitement. The potential of AI to support us in life-changing ways is real, but it is not happening – yet. We need regulation. We need education. We need systems and processes in place so that man and machine can effectively work together. Machine will not replace man. We must learn to work alongside it instead of bowing down to it or rolling our eyes at it.