In these weeks where decades are happening, it might have been easy to miss the following two, genuinely stunning headline announcements from IBM and Amazon.
After a decade-long rush into algorithms, automation, and AI seemed to churn, unbridled and roiling, over the top of societal norms everywhere (looking at you over there in China, SenseTime), the fact that two giants of American tech took a self-timeout is remarkable.
And the timing of these announcements, amid the cri de coeur on American streets over the killing of George Floyd and the future of American law enforcement, suggests that freedom of speech and freedom of (peaceable) assembly still mean something. To effect change. To nudge the arc of strategic developments.
Here at the Center for the Future of Work, we've long advocated that an ethical pause on facial recognition software is the right thing to do. Our own Ben Pring wrote this apt analogy of a race car behind a pace car on a racetrack, which I immediately thought of when the news about IBM and Amazon broke:
Fixing inequality will make few demands of AI... It will stop us short with the thought that a society that worships money is a society that rots from the head. While all of this is being worked through (2 or 3 years tops, don't you think?) AI can continue to develop in the laboratory; kinks can be ironed out, racist algorithms can be taught to not be so like Roseanne. If we're smart more and more people of goodwill can quietly learn the new tools of the trade in online sandboxes and be ready for when the safety car of the race against the machine leaves the track.
The health of our democracy demands trust, and the tech trust deficit has barely closed, if at all. Simply put, there's near-unanimous agreement that facial recognition error rates, especially for persons of color, are unacceptably high. Too often, there are retroactive mea culpas that "we didn't do enough to prevent X from happening." As part of developing further safeguards around facial recognition technology, one role we expect to emerge is the 'algorithm bias auditor', whose job it will be to ensure that any bias is eliminated from a company's technology before it goes live.
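To make the auditor's job concrete, a minimal sketch of its most basic check might compare error rates across demographic groups before a system goes live. Everything here is hypothetical: the group names, the toy data, and the 1% tolerance are illustrative assumptions, not a real audit standard.

```python
def error_rate(predictions, labels):
    """Fraction of predictions that disagree with ground-truth labels."""
    errors = sum(1 for p, y in zip(predictions, labels) if p != y)
    return errors / len(labels)

def audit_by_group(results, max_gap=0.01):
    """results maps group name -> (predictions, labels).

    Flags the system if any two groups' error rates differ by more
    than max_gap (a hypothetical tolerance, here 1 percentage point).
    """
    rates = {g: error_rate(p, y) for g, (p, y) in results.items()}
    gap = max(rates.values()) - min(rates.values())
    return rates, gap, gap <= max_gap

# Toy data: the model is perfect on group A but errs half the time on group B.
results = {
    "group_A": ([1, 0, 1, 1], [1, 0, 1, 1]),  # 0% error
    "group_B": ([1, 0, 0, 1], [1, 1, 1, 1]),  # 50% error
}
rates, gap, passed = audit_by_group(results)
# A 50-point gap far exceeds the tolerance, so the audit fails.
```

A real audit would of course go much further (false-match vs. false-non-match rates, confidence thresholds, representative test sets), but even this simple disparity check is the kind of gate such a role would enforce.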
One of the ironies at play, statutorily, is that the City of San Francisco (crucible of tech innovation as the capital of Silicon Valley) banned facial recognition over a year ago, a move that addressed the thorny reality of the technology through direct action by democratically elected representatives. Other cities followed suit. Now a critical mass of major tech suppliers is accelerating the momentum.
Perhaps we also owe special credit to Microsoft, which, well before the present-day exigencies of 2020, was shouting loudly in the wilderness (with actions) back in the mists of time of 2019, refusing to sell its facial recognition system to law enforcement. For its part, Amazon's shareholders held a vote in 2019 on whether the company should stop offering its system to government agencies. Even though the vote didn't end in a ban, actions like these laid down markers: tech businesses needed to be aware of the negative perception of such surveillance and weigh the risk of losing customers' confidence.
And then 2020 happened.
As Frank Norris wrote of railroad monopolies at the close of the Gilded Age: "[The] fact remains, nevertheless, simple, fundamental, everlasting. The People have but to say 'No' and not the strongest tyranny, political, religious, or financial, that was ever organized, could survive one week." It's an apt comment on today's events: as in the past, the consent of citizens, not the dictates of digital-political overlords (e.g., the Chinese model of compulsory "Social Credit Scores"), must remain the enduring virtue of our digital civilization, democracy, and economic survival.
Ultimately, it's 'We the People' who get to control who "watches the watchers".