Invisible Man: Blinding Bias in AI

Artificial Intelligence is the great story of our time, growing in importance as we increasingly shift decision-making from human insight and intuition to automated algorithms. In many instances this shift is a convenience, freeing us up for work beyond number crunching and data analysis. But AI has blind spots. Algorithms see all the data points and correlations related to a “subject” without the nuanced, contextual understanding of a human being, and the decisions we offload to them can have dire consequences. Compounding the issue is the air of infallibility surrounding these algorithms, which strips accountability from entire systems. Artificial Intelligence suffers from the same drawback as any other human-programmed code: garbage in, garbage out. The algorithms that shape medical care, policing, and self-driving vehicles have biases baked in by homogeneous teams that design and test with themselves in mind. Thus, large swaths of the population, namely minorities and women, are rendered invisible.

“I am invisible, understand, simply because people refuse to see me. When they approach me they see only my surroundings, themselves or figments of their imagination, indeed, everything and anything except me.”

I have known this precise feeling expressed by the nameless narrator of Ralph Ellison’s Invisible Man. One spring day in my teenage years, a classmate dropping me off at home committed the cardinal sin of driving with a busted tail light. The blue lights flickered and we were instructed to pull over. Thirty minutes earlier we had been wrapping up a game of tennis, contemplating the college adventures to come. Three minutes into the traffic stop we were suspects, on the curb, with our hands behind our backs. The impetus for this interaction with law enforcement? Our very presence in a “high drug trafficking” area conjured up an image of us as criminals for the officers. I was embarrassed, crestfallen that I could wind up in such a demeaning position and then be released without so much as an apology. That lingering distrust flares up in particular at the thought of the same bias being programmed into law enforcement AI across the nation. Companies like Palantir and PredPol promise municipalities that their algorithms can predict which individuals are likely to commit crimes and which locales are hotbeds for such activity. Study after study reveals that these systems are inaccurate and disproportionately police communities of color. The models these companies provide don’t measure which communities participate in the most illicit behavior; they measure which communities have had their illicit behavior recorded by law enforcement. The AI is only as good as the data that feeds it. If officers already over-police certain neighborhoods, a constant feedback loop aided by policing algorithms pushes them to keep ramping up patrols in those same neighborhoods.
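
To make that feedback loop concrete, here is a minimal sketch in Python. Everything in it is hypothetical: the neighborhood names, offense rates, and allocation rule are invented for illustration and are not drawn from Palantir’s or PredPol’s actual models. It simply shows how allocating patrols by recorded incidents, rather than by actual offense rates, preserves and compounds whatever disparity the historical data started with.

    import random

    random.seed(0)

    # Two hypothetical neighborhoods with the SAME underlying offense rate.
    TRUE_OFFENSE_RATE = {"A": 0.05, "B": 0.05}

    # "A" starts with more recorded incidents only because it was patrolled
    # more heavily in the past; this is the historical bias the model inherits.
    recorded = {"A": 120, "B": 60}

    PATROLS_PER_DAY = 100

    for day in range(30):
        total_records = sum(recorded.values())
        # "Predictive" allocation: send patrols where past records are densest.
        patrols = {n: round(PATROLS_PER_DAY * recorded[n] / total_records)
                   for n in recorded}
        for n, count in patrols.items():
            for _ in range(count):
                # An offense becomes a record only if an officer is there to see it.
                if random.random() < TRUE_OFFENSE_RATE[n]:
                    recorded[n] += 1

    print(recorded)
    # The gap in recorded incidents keeps widening, so the lopsided patrol split
    # never corrects itself, even though both areas offend at the same rate.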

AI poses a particular danger to women and minorities in policing and surveillance scenarios. These products use machine learning to identify individuals in photos or video, giving police or government entities the ability to track their movements or match them against databases of persons of interest. This technology has a 99% success rate for white males, but fails to properly identify darker-skinned women 35% of the time. Changes in hair, makeup, or accessories are easy for humans to decipher but continue to flummox AI-based facial recognition systems, and the databases simply lack enough examples of minority populations to identify individuals accurately. This contributes to the invisibility of these populations as unique human beings; instead they are seen only as generic members of a group. One might initially think that a system’s failure to recognize someone is a boon to those communities. Unfortunately, such failures cause even more problems. The poor accuracy of facial recognition on minorities can produce mistaken identities that lead to unnecessary police encounters and force people to prove their innocence of crimes that have nothing to do with them.
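
One practical countermeasure is to stop reporting a single headline accuracy number and instead break error rates out by demographic subgroup. The sketch below is a hypothetical Python example; the group labels, records, and helper function are assumptions rather than any vendor’s real benchmark. It shows how a model that looks excellent on average can still fail one subgroup a third of the time, and how a per-group report surfaces that.

    from collections import defaultdict

    def accuracy_by_group(records):
        # Each record is a dict with "group", "predicted_id", and "true_id" keys.
        hits = defaultdict(int)
        totals = defaultdict(int)
        for r in records:
            totals[r["group"]] += 1
            hits[r["group"]] += int(r["predicted_id"] == r["true_id"])
        return {g: hits[g] / totals[g] for g in totals}

    # Hypothetical evaluation set mirroring the disparity cited above:
    # 99 of 100 lighter-skinned men matched correctly, 65 of 100 darker-skinned women.
    results = (
          [{"group": "lighter-skinned men", "predicted_id": 1, "true_id": 1}] * 99
        + [{"group": "lighter-skinned men", "predicted_id": 2, "true_id": 1}] * 1
        + [{"group": "darker-skinned women", "predicted_id": 1, "true_id": 1}] * 65
        + [{"group": "darker-skinned women", "predicted_id": 2, "true_id": 1}] * 35
    )

    for group, acc in accuracy_by_group(results).items():
        print(f"{group}: {acc:.0%}")
    # A release gate can then require every subgroup, not just the overall average,
    # to clear the same accuracy bar before the system ships.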

AI failure has dire consequences even outside the penal system. When Google returned images of gorillas for users who searched “black people,” it was the result of programmed bias: the search engine’s machine learning system picked up on the racist connection that users expressed and took it to be fact. Such easily gamed systems give me pause when I consider how this same company is developing its autonomous vehicle technology. Google has since rectified the gorilla image problem, but an AI system that sees minorities as “animals” because of racist elements in crowdsourced input could have deadly results when applied to a vehicle deciding which objects require more care in its decision-making.

How can businesses alleviate the biases in their AI products? To start, they must inject some emotional intelligence into their artificial intelligence. AI has immense power that can be harnessed to the detriment of society just as easily as to its benefit. As such, teams building innovations and products around this technology must weigh “should we do this?” against “can we do this?” R&D teams that consider this question would second-guess turning over powerful facial recognition to police units for real-time surveillance at protests or other high-traffic areas. A sort of Hippocratic Oath seems to be in order.

As in the rest of the tech industry, women and Black and Hispanic people are woefully underrepresented at both the engineering and executive levels of businesses creating AI products. Programmers tend to design products based on their own worldview and test them on people in their social circles. This creates a homogeneous pool of thought, contributing to products with ramifications far beyond the group at the helm. How “intelligent” can software be if the vast majority of its testing ignores women and minorities? By assembling diverse teams with varying opinions and experiences, companies are less likely to release products that negatively impact marginalized demographics. The lived experiences of diverse populations should be taken into account for systems that cede judgment from humans to machines. Before AI advances to the point of ubiquitous algorithms reducing our workloads, with robots at our beck and call, we have the opportunity, an obligation even, to rid these systems of racist and sexist biases.
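
On the testing point, a first step costs almost nothing: count who is actually represented in the test data before trusting any quality metric computed from it. The short Python sketch below is hypothetical; the counts, groups, and benchmark shares are invented for illustration. It flags any demographic group whose share of the test set falls well below its share of the population the product will serve.

    # Hypothetical counts of test subjects by group, versus the share of the
    # target population each group represents (both sets of numbers are invented).
    test_counts = {"men": 820, "women": 180}
    population_share = {"men": 0.49, "women": 0.51}

    total = sum(test_counts.values())
    for group, count in test_counts.items():
        test_share = count / total
        # Flag groups represented at less than half their population share.
        if test_share < 0.5 * population_share[group]:
            print(f"Under-tested: {group} is {test_share:.0%} of the test set "
                  f"but {population_share[group]:.0%} of the population.")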

