Discover The Future of Work

MIT's annual EmTech Digital conference in San Francisco is good. Really good. In fact it's so good, it's mark-your-calendar-and-prioritize-it-over-all-other-conference-invitations-you'll-get-this-year good. Ranging from meaty discussions of the latest technologies, through to truly mind-blowing, "holy-s**t moments" (HSMs), it gives scientists, venture capitalists, governments, academics and analysts (like yours truly) enough grist to see over the near-term horizon and into the future.

But something happened this year. Something different. A topic that couldn't, shouldn't, mustn't be avoided: the malaise we as an industry are going through right now. It's no surprise to anyone that Silicon Valley, and tech in general, is squarely in what Gartner's Hype Cycle would call The Trough of Disillusionment. Peter Diamandis aptly describes The Trough as being "... when people lose faith in the technology, even while the underlying technology continues its exponential growth."

So at MIT's big show earlier this month, talking about "The Trough" was essential. The mood in the room ranged from frank to delusional to hopeful. What follows is a sample.

  • We need ethical rules-of-the-road for AI. Consider: New York mandated seat belt use in 1984, but imagine the lives saved if autos had had them from the get-go. Microsoft's Harry Shum says it's time to imagine what the "stop signs" for AI look like (e.g., when should AutoPlay "AutoQuit"? 1 AM? 2 AM? 3 AM?)
  • If AI's as powerful as nuclear fission, we'll need some graphite rods. So can AI save us from AI, defending against attacks such as DDoS, confidentiality breaches, and data poisoning? Dawn Song of UC Berkeley says GANs (Generative Adversarial Networks, whose "generator" and "discriminator" are the AI version of the Muppets' Statler and Waldorf) will help.
  • Deep-fakes vs. shallow-fakes? Refugee or terrorist? Who can you turn to? With growing attention-deficit and disbelief-by-default, more collaboration for verification tools is urgently needed, lest too many false positives impact free speech.
  • Did you know that 55% of emotional meaning is read from facial expression, 38% from how words are said, and only 7% from the actual words? But collecting facial emotion by AI is creepy. At least most of EmTech's audience thought so. (My colleague Caroline Styr offers her recommendations here.)
  • One reason productivity isn't growing is that AI hasn't yet been distributed into huge sectors like construction (which ranks just below hunting, seriously, in terms of tech improvement). When 40% of landfill waste comes from construction, that's a (building) problem.
  • Speaking of materials (and science... and AI... and HSMs...), Jill Becker, CEO of Kebotix in Cambridge, MA, is synthesizing molecules that scientists might use to fight global warming. The key is to give an outcome, like "create a weed killer that does the job without toxicity to the soil, other plants, or water," and then let AI run the tesseract of all permutations of the periodic table of elements to get to the outcome.
  • Just a little over a week after announcing it with fanfare at EmTech, Google walked back its ethics committee. And, as of this writing, it continues to struggle with backlash, especially among its own employees. At EmTech, Google's head of Global Affairs used the phrase "as appropriate" regularly: so who gets to determine what's appropriate, and who gets to deem things "appropriate"?
  • Rashida Richardson of NYU's AI Now Institute emphatically underscored the question of "who gets to do the thinking through?" as a call to action for diversity and inclusion, and openly challenged tech companies on their responsibility for bringing about ethical regulation faster (editorial comment: public policy will help, and give rise to new jobs of the future like Algorithm Bias Auditors...)
  • Diversity, inclusion, and human-centered tech are critical to AI. Why? Because bias, security, and the health of the labor market are now disciplines that extend beyond electrical engineering and computer science departments. At a time when women are 2x more likely to drop out of EECS, human centricity becomes paramount. Speakers like Rediet Abebe (as well as Rashida Richardson) were standouts on this subject.
  • Sometimes sectors you think are data-rich are data deserts. Think delivery drivers spend most of their time stuck in traffic? Logjam-breaker Chazz Sims, CEO of Wise Systems, found that 75% of service time is actually spent in the last 100 meters: parking, waiting for the loading dock, and getting paperwork signed.
  • Universities doing EECS research, fear not: the hardware manufacturers building the guts of AI still tremendously value the signals they get from universities: what are the faculty excited about, what's near and dear to them, etc.
  • How will humanity "win" using AI? You have to define "winning"... Consider insects: they can't learn, or think abstractly, but they constitute more biomass in the world. We have YouTube...
  • Dave Budden of DeepMind (not to be confused with Edwin Budding, inventor of the lawnmower) is pushing the frontier of ML systems to massive online strategy games like StarCraft and using the learnings to solve real-world problems. (StarCraft, to those in the know, is in its way an even harder game than Go or chess; it's real-time, not turn-based. Its difficulty lies not in figuring out what action to take, but when to take it.)
  • Brendan McCord, who until recently was part of the US DoD's Project Maven, made one of the most coherent arguments on ethics - and AI-assisted warfare - I've ever heard. The ethics of the second-oldest job in humanity are critical, as is defense of our values (see everything above, and extra points for invoking Cicero and Thucydides).
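For readers curious about the generator-vs.-discriminator dynamic behind the GANs mentioned above, here is a deliberately tiny sketch: nothing presented at EmTech, just an illustrative toy in which a linear "generator" learns to fool a logistic-regression "discriminator" into accepting its samples as draws from a real 1-D distribution. All numbers and parameter names are my own assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# "Real" data the generator must learn to mimic: samples from N(3.0, 0.5).
def real_batch(n):
    return rng.normal(3.0, 0.5, n)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Generator: affine map of noise, x_fake = a*z + b   (params a, b)
# Discriminator: logistic regression on a scalar, D(x) = sigmoid(w*x + c)
a, b = 1.0, 0.0
w, c = 0.0, 0.0
lr = 0.05

for step in range(2000):
    x_real = real_batch(64)
    z = rng.normal(0.0, 1.0, 64)
    x_fake = a * z + b

    # Discriminator step: push D(real) toward 1 and D(fake) toward 0.
    d_real = sigmoid(w * x_real + c)
    d_fake = sigmoid(w * x_fake + c)
    grad_w = np.mean(-(1 - d_real) * x_real + d_fake * x_fake)
    grad_c = np.mean(-(1 - d_real) + d_fake)
    w -= lr * grad_w
    c -= lr * grad_c

    # Generator step: push D(fake) toward 1 (non-saturating GAN loss).
    d_fake = sigmoid(w * x_fake + c)
    upstream = -(1 - d_fake) * w          # dLoss_G / d x_fake
    a -= lr * np.mean(upstream * z)
    b -= lr * np.mean(upstream)

samples = a * rng.normal(0.0, 1.0, 1000) + b
print(f"generated sample mean: {samples.mean():.2f} (real data mean: 3.0)")
```

The adversarial loop is the whole point: neither network sees a "correct answer," yet the generator's samples drift toward the real distribution purely because the discriminator keeps heckling it (Statler and Waldorf, indeed).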

To my mind, I equate the feelings of "The Trough" with the last 18-24 months, signposted by things like Cambridge Analytica, #deletefacebook, the better-but-still-Sisyphean struggle for diversity and inclusion, the continued state of the streets of San Francisco as Act II of Hogarth's A Rake's Progress, Orwellian "citizen scores" in China, and the pendulum swing of techlash in general.

While the actual hype cycle may, in sum, state otherwise, it was clear to all that the Wild West days have come to an end, and after 25 years of experimentation with the "information superhighway," ethicists, activists, policy makers, and companies are now beginning to lay down the rules of the road.

How we - as an industry, a society, an economy, an era, a species - make it to the glimmerings of the Slope of Enlightenment on the cycle will depend entirely upon decisions we take now, while there is still time. And, near as I can tell, that path runs right through the confluence of ethics, philosophy, the Golden Rule - and the future of work.

