
Reality Defender raises $15M to detect text, video and image deepfakes

Reality Defender, one of several startups developing tools to identify deepfakes and other AI-generated content, today announced that it has raised $15 million in a Series A funding round. The round was led by DCVC, with participation from Comcast, Ex/ante, Parameter Ventures, and Nat Friedman’s AI Grant.

According to co-founder and CEO Ben Colman, the new funds will be used to double Reality Defender’s current 23-person team over the coming year and to improve the company’s AI content detection models.

Colman stressed the need for vigilance given the steady emergence of new deepfake and content generation techniques. He said that by taking a research-driven approach, Reality Defender can stay ahead of evolving methods and models, detecting them proactively before they become public threats rather than reacting after the fact.

Reality Defender was initially launched as a nonprofit initiative but shifted to external financing as the team recognized the extent of the deepfake problem and the increasing demand for deepfake-detection technologies in the commercial sector.

The rise in deepfake production has been largely attributed to the accessibility of generative AI tools. In the past, creating a deepfake, whether through voice or video manipulation, required substantial financial resources and specialized knowledge. However, platforms like ElevenLabs for voice synthesis and open-source models like Stable Diffusion for image generation have made it cost-effective for malicious actors to launch deepfake campaigns.

This accessibility has led to an increase in malicious use cases, such as the dissemination of racist content and the imitation of celebrity voices, among others. Some state actors, like those aligned with the Chinese Communist Party, have even used AI to create lifelike avatars for spreading disinformation.

Despite some generative AI platforms implementing filters and restrictions, the battle against abuse remains challenging. Social media platforms often lack the motivation to scan for deepfakes, as there is no legislation mandating their removal, unlike the laws governing the removal of illegal content like child sexual abuse materials.

Reality Defender aims to address this issue by offering an API and web app that analyze videos, audio, text, and images for signs of AI-driven alterations. Colman says the company’s “proprietary models” are trained on real-world datasets, which he claims lets them achieve higher deepfake detection accuracy than competitors.
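
For illustration, here is a minimal sketch of how a client might submit a media file to a deepfake-detection service of this kind over HTTP. The endpoint URL, field names, and response format below are assumptions made for the example, not Reality Defender’s documented API.

```python
import requests

# Hypothetical endpoint and API key -- placeholders for illustration only,
# not Reality Defender's documented API.
API_URL = "https://api.example-detector.com/v1/analyze"
API_KEY = "YOUR_API_KEY"

def analyze_media(path: str) -> dict:
    """Upload a media file (image, audio, or video) and return the
    service's verdict on whether it appears AI-generated."""
    with open(path, "rb") as f:
        response = requests.post(
            API_URL,
            headers={"Authorization": f"Bearer {API_KEY}"},
            files={"file": f},
            timeout=60,
        )
    response.raise_for_status()
    # Assumed response shape, e.g. {"label": "manipulated", "score": 0.97}
    return response.json()

if __name__ == "__main__":
    result = analyze_media("suspect_clip.mp4")
    print(result)
```

In practice, a detection service of this kind would return a probability or label per file, which downstream systems (moderation queues, fraud checks) could then act on.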

Nevertheless, the reliability of deepfake detection tools remains a subject of debate. OpenAI, for example, withdrew its AI-generated text detection tool due to low accuracy rates. Some studies suggest that deepfake video detectors can be fooled if the deepfakes are edited in specific ways.

Furthermore, there is a risk that deepfake detection models can amplify biases present in their training data. Colman maintains that Reality Defender actively works to mitigate biases in its algorithms, incorporating diverse accents, skin colors, and other variables into their detector training datasets.

Despite skepticism, Reality Defender has a robust business with a broad customer base spanning governments across several continents, top-tier financial institutions, media corporations, and multinational companies. It faces competition from startups like Truepic, Sentinel, and Effectiv, as well as established players like Microsoft.

To maintain its position in the deepfake detection software market, which was valued at $3.86 billion in 2020, Reality Defender plans to introduce an “explainable AI” tool for scanning documents and is working on real-time voice and video deepfake detection tools.

In essence, Reality Defender leverages AI to combat AI, aiding large entities, platforms, and governments in discerning the authenticity of media, thus protecting against financial fraud, disinformation, and the spread of harmful materials at various levels.
