Back in February 2019, researchers at OpenAI, a non-profit exploring ways to make artificial intelligence (AI) safe, tested a machine learning (ML) program that could generate text based on written prompts. The result was so close to mimicking human writing that the researchers decided to withhold it.
Fast forward to September. The Wall Street Journal reported that thieves had defrauded the CEO of a UK-based energy company by imitating the voice of his boss at the firm's German parent. What makes this fraud unique is that the voice used to deceive the CEO did not come from a human mimic; it was, in all likelihood, created using state-of-the-art AI. Reportedly, the software replicated the German executive’s voice so accurately that his British colleague could not tell the difference between man and machine.
AI is the technology at the heart of these two incidents, and of several other high-profile ones. Its rapid evolution is generating unintended consequences; at the same time, it is emerging as a possible remedy.
Regardless of how quickly we assimilate the accelerating changes wrought by the unrelenting advance of information technology, the age of deepfakes is upon us. It arrives at a time when researchers, businesses and governments worldwide are already seeking ways to contain fabricated content, primarily fake news proliferating on social media platforms. Moreover, fake articles and audio are only the latest forms of deepfakes. Videos and images of people who don’t exist are multiplying even faster.
Deepfakes pose a serious risk to individual and organizational reputations. Merely monitoring social media chatter may not be enough to mitigate this risk. We believe this menace will get worse before it gets better, as the technology used to create deepfakes becomes increasingly easy and cheap to use and as social media remains entrenched as a simple and quick high-volume amplifier.
Research by Aon has found that businesses risk losing up to 30% of their shareholder value after an adverse event, double what they would have lost in the pre-social media era. Existing cybersecurity tools are clearly not enough to prevent, let alone predict, such crimes. That leaves organizations with few ready-made defenses, but several actions can still be taken to blunt the adverse impact of deepfakes.
Deepfake diligence
For businesses, deepfakes present a new kind of threat. A targeted attack on a business in the form of a fake video could be created using little more than publicly available content in the form of pictures, voice and video recordings.
A CEO could be targeted with videos “showing” them announcing news unfavorable to the company, leading to chaos as stakeholders react to the bogus news. The stock price could plummet and might take up to a week to recover. Moreover, the damage to the company’s reputation would take time and money to repair. Ratings agency Moody’s warned about this recently, stating that companies’ credit quality is under threat from deepfakes because AI disinformation campaigns will take longer to combat as the technology keeps advancing.
Meanwhile, researchers investigating the impact of fake news have found that if such information is repeated enough times, people become inclined to believe it even if it doesn’t align with their political leanings or fundamental beliefs.
Mitigating the risk posed by deepfakes requires a multi-pronged approach that combines technology with supporting legislation. Businesses, meanwhile, will need to put in place teams and processes that can quickly identify a deepfake attack so that damage control can begin without delay.
Resolutions on the horizon
Understandably, most efforts to mitigate deepfake threats start with technology. Options explored include using AI to tackle AI-generated deepfakes, watermarking and even blockchain. The common aim of these methods is to maintain trust in public discourse. But how well do they hold up?
AI vs. AI and other approaches
Both startups and tech giants are working on AI-powered countermeasures. The tech giants’ approach is to use deepfake video clips to train software to identify future deepfakes as early as possible. Facebook and Microsoft announced the Deepfake Detection Challenge, in which researchers will use Facebook’s video dataset (created using real actors) to build detection tools.
Google has taken a similar approach, filming actors and then creating deepfakes from that footage for researchers to train against. The Canadian AI company Dessa ran Google’s videos through its deepfake detector, which initially proved highly successful at identifying them. However, accuracy plummeted when the detector was tested against deepfake videos found in the wild on the web, indicating there’s much more work to be done.
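For illustration, here is a minimal sketch of the kind of frame-level detector such efforts aim to produce: a standard image classifier fine-tuned to label extracted video frames as real or fake. The folder layout, model choice and hyperparameters are assumptions for the example, not details of Dessa’s or the challenge participants’ actual systems.

```python
# Minimal sketch of a frame-level deepfake classifier, assuming frames have
# already been extracted from labeled videos into frames/real and frames/fake
# (hypothetical layout). Illustrative only, not a production detector.
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

dataset = datasets.ImageFolder("frames", transform=transform)
loader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)

# Start from a pretrained backbone and replace the head with a 2-class output.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 2)  # real vs. fake

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

model.train()
for frames, labels in loader:
    optimizer.zero_grad()
    loss = criterion(model(frames), labels)
    loss.backward()
    optimizer.step()
```

As the Dessa result suggests, a classifier trained this way tends to fit the artifacts of one dataset and degrade on deepfakes generated by unseen methods.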
Blockchain, the decentralized ledger technology, in combination with AI, is also seen as a possible solution. These technologies are being applied to improve everything from supply chains to healthcare outcomes. At least one company is combining blockchain’s immutable records with ML’s ability to detect patterns to combat deepfakes.
The startup Truepic has built an app that watermarks pictures, then links a given image to a copy stored on its servers that is accessible via a blockchain network. The hope is that this will instill trust in still and motion pictures by enabling individuals, as well as news and media outlets, to verify the authenticity of images viewed online.
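Conceptually, that verification step rests on fingerprinting content at capture time and comparing it later. The sketch below illustrates the idea with a simple hash check; the in-memory dictionary merely stands in for an immutable store such as a blockchain, and nothing here reflects Truepic’s actual implementation.

```python
# A minimal sketch of image provenance: fingerprint the file at capture time,
# record the fingerprint, and later verify a copy against that record.
import hashlib

def fingerprint(path: str) -> str:
    """Return a SHA-256 digest of the raw image bytes."""
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

# image_id -> digest recorded at capture time (stand-in for an immutable ledger).
ledger = {}

def register(image_id: str, path: str) -> None:
    ledger[image_id] = fingerprint(path)

def verify(image_id: str, path: str) -> bool:
    """True only if the file is byte-identical to the registered original."""
    return ledger.get(image_id) == fingerprint(path)
```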
However, the optimism around such solutions is counterbalanced by skepticism. Deepfakes are created using generative adversarial networks (GANs), a class of ML models. A GAN pits two neural networks against each other:
- A generator that creates fake video clips.
- A discriminator (for detection) that determines whether a clip is genuine or fake.
The discriminator does exactly what a deepfake detector is expected to do. That opens up the possibility of deepfake creators training a GAN against an existing detector until the generator learns to beat it.
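To make the adversarial dynamic concrete, the toy sketch below trains a generator and discriminator against each other on random 1-D data. It is illustrative only; real deepfake GANs operate on images or video and are far larger.

```python
# Toy GAN training loop illustrating the generator/discriminator tug-of-war.
import torch
import torch.nn as nn

latent_dim, data_dim = 16, 64

# Generator maps random noise to a fake sample; discriminator scores real vs. fake.
G = nn.Sequential(nn.Linear(latent_dim, 128), nn.ReLU(), nn.Linear(128, data_dim))
D = nn.Sequential(nn.Linear(data_dim, 128), nn.ReLU(), nn.Linear(128, 1))

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

for step in range(1000):
    real = torch.randn(32, data_dim)   # stand-in for real training data
    noise = torch.randn(32, latent_dim)
    fake = G(noise)

    # Discriminator: learn to score real samples high and generated ones low.
    d_loss = bce(D(real), torch.ones(32, 1)) + bce(D(fake.detach()), torch.zeros(32, 1))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # Generator: learn to produce samples the discriminator scores as real.
    g_loss = bce(D(fake), torch.ones(32, 1))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()
```

The same loop explains the skeptics’ concern: swap a published detector in for the discriminator, and the generator is being trained specifically to fool it.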
Additionally, some in the industry believe the search for a solution is ultimately a losing battle, as neither watermarks nor detection tools will solve the problem created by fake videos and pictures as long as there is an audience for such content. In fact, the fake news industry is thriving despite several fact-checking initiatives. Moreover, as seen with computer viruses, deepfakes will inevitably evolve to circumvent such tools.
What seems certain is that while technology holds promise, a solution to the deepfake problem will require more than that.
Enter legislative action
When deepfakes emerged, some pundits argued that existing laws were enough; others disagreed. While the debate is ongoing, the search for legislative remedies continues. The U.S. House of Representatives introduced the DEEP FAKES Accountability Act, which includes provisions requiring deepfakes to carry digital watermarks and disclosures so they can be detected more easily. The bill also introduces heavy fines and punishment for offenders.
Meanwhile, the state of California has made it illegal to create or distribute certain deepfake videos. AB730, for instance, targets deceptively altered media of political candidates and is aimed at stopping misinformation ahead of the 2020 U.S. presidential election. However, outright bans may be hard to enforce for one good reason: the technology behind deepfakes can have positive applications, such as helping people with speaking disabilities express themselves using voice samples.
Elsewhere, the European Union (EU) has published a strategy for tackling disinformation that also addresses deepfakes. The EU approach emphasizes informing the public where news originates, including who created it and whether the source is trustworthy, similar to the way Facebook works with third-party fact checkers to weed out fake news.
Contending with deepfakes
Given that legislative and technological remedies remain works in progress, businesses should focus on risk management and on mitigating damage however and wherever possible. To accomplish this, they must acknowledge the reputational risk posed by deepfakes and think hard about what that risk means to the organization. This must be supported by an ecosystem of trust that acts as an authoritative source of facts on key issues.
Monitoring social and traditional media chatter is a normal business practice. But this approach is inherently reactive and of little use when it comes to damage control. To get ahead of the problem, organizations need to expand their monitoring of the chatter about them in the internet's many nooks and corners. The following approaches are worth considering; a minimal monitoring sketch follows the list.
- Understand what others are saying: This covers two types of chatter. General chatter includes news media, business domain publications and websites. Specific chatter covers every instance of the company’s name being mentioned by an analyst, competitor, government body, industry regulator and so forth.
- Listen to the chatter beyond the usual social sites: This might sound obvious, but there are more social media platforms than Facebook and Twitter; depending on whom you ask, the number could be anywhere from seven to 50. The chatter on these sites is often time-sensitive, making it critical to monitor them continuously.
- Look beyond social: The murkiest corners of the internet, known as the Dark Web, are often considered too hot to touch. However, these sites could be critical in identifying suspicious activity that mentions a company’s name. Companies can access Dark Web chatter using platforms such as BitSight or Blueliv.
- Tap a broad array of intelligence sources: Companies can also subscribe to feeds from industry consortiums such as the Financial Services Information Sharing and Analysis Center (FS-ISAC), which provides a given industry with threat data. In the absence of such a consortium, providers such as Anomali can fill the gap.
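As referenced above, a bare-bones version of such monitoring might look like the sketch below: pull text items from whatever feeds are available and flag any that mention the company. The company names, sample items and alert step are placeholders for illustration, not any specific product’s API.

```python
# Minimal sketch of cross-source mention monitoring: scan text pulled from
# several feeds for the company name and flag hits for review.
import re
from typing import Dict, Iterable, List

# Hypothetical company name, ticker and variants to watch for.
COMPANY_TERMS = re.compile(r"\b(Acme Corp|AcmeCorp|\$ACME)\b", re.IGNORECASE)

def find_mentions(items: Iterable[Dict[str, str]]) -> List[Dict[str, str]]:
    """Return feed items whose text mentions the company."""
    return [item for item in items if COMPANY_TERMS.search(item.get("text", ""))]

# In practice these items would come from news APIs, social feeds or
# dark-web monitoring services; here they are hard-coded for illustration.
sample_items = [
    {"source": "news", "text": "Acme Corp announces quarterly results."},
    {"source": "forum", "text": "Unrelated discussion about the weather."},
]

for hit in find_mentions(sample_items):
    print(f"[{hit['source']}] mention flagged for review: {hit['text']}")
```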
Early warning and ecosystem measures
Employees can act as the first line of defense in a deepfake incident. By training them to spot deepfake attacks, businesses can make sure red flags are raised as soon as a potential incident surfaces. This, however, is just the front end of such a system. Its backbone would be a team with the relevant technology (as discussed above) and the authority to take immediate corrective measures, including alerting company leadership (through a real-time dashboard) and determining where the attack originated.
Businesses can choose to support an ecosystem of fact finders that includes journalists, researchers, cybersecurity experts and technology companies. An example of this is the Deepfake Detection Challenge noted above. This could act as a support system that businesses can access not just to fact-check news stories but also for up-to-date knowledge on deepfakes and other fake news-related threats.
This is crucial given that a typical deepfake video on YouTube, for example, exposes only the metadata that the uploader chose to share. An ecosystem could address this by providing tools that help uncover more descriptive information about the video. Annotating videos with tools such as VATIC (the Video Annotation Tool from Irvine, California, developed at the University of California, Irvine) can help draft metadata that can be combined with the source video to create a unique signature. That signature can in turn potentially be used to detect an altered image or video. There is also industry research into bottom-up approaches that counter the GAN principle noted above using recurrent neural networks (RNNs).
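One hedged sketch of that signature idea: hash the video bytes together with canonicalized annotation metadata, so that a later change to either produces a different digest. The field names and metadata layout here are illustrative assumptions, not a published standard.

```python
# Minimal sketch of a content signature combining a video file and its
# annotation metadata; any edit to either changes the resulting digest.
import hashlib
import json

def content_signature(video_path: str, metadata: dict) -> str:
    """Digest over the raw video bytes plus canonicalized metadata."""
    h = hashlib.sha256()
    with open(video_path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    # Sort keys so the same metadata always serializes identically.
    h.update(json.dumps(metadata, sort_keys=True).encode("utf-8"))
    return h.hexdigest()

# Re-deriving the signature later and comparing it with the published one
# reveals whether the clip or its metadata has been altered.
```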
The challenge presented by AI deepfakes is complex, requiring businesses to use a variety of technologies to protect themselves and respond quickly. Businesses that feel exposed to a reputational risk from deepfakes may not have the necessary expertise to do this on their own. It therefore makes sense to work with a technology partner that can apply the right tools and connect technologies such as AI and blockchain, as well as social data, to detect, alert and respond to the potential impact of deepfakes before they cause reputational damage or worse.