Vera wants to use AI to cull generative models’ worst behaviors

Liz O’Sullivan is on a mission, as she puts it, to make AI safer. A member of the National AI Advisory Committee, which develops recommendations for the White House and Congress on AI adoption and regulation, she brings extensive experience to the role: 12 years in AI startup leadership positions overseeing data labeling, operations, and customer success. In 2019, she moved to the Surveillance Technology Oversight Project, which works to safeguard civil liberties in New York, and she also co-founded Arthur AI, a startup collaborating with civil society and academia to shed light on the inner workings of AI’s “black box.”

Now, O’Sullivan is gearing up for her next venture, Vera, a startup building a toolkit that lets companies establish “acceptable use policies” for generative AI, the kind of AI that produces text, images, music, and more. Vera aims to enforce those policies across both open-source and custom models.

Vera recently closed a $2.7 million funding round led by Differential Venture Partners, with participation from Essence VC, Everywhere VC, Betaworks, Greylock, and ATP Ventures. The round brings Vera’s total raised to $3.3 million, which will go toward expanding the five-person team, research and development, and scaling enterprise deployments.

O’Sullivan explained the motivation behind Vera: “Vera was founded because we’ve seen, firsthand, the power of AI to address real problems, just as we’ve seen the wild and wacky ways it can cause damage to companies, the public, and the world. We need to responsibly shepherd this technology into the world, and as companies race to define their generative AI strategies, we’re entering an age where it’s critical that we move beyond AI principles and into practice. Vera is a team that can actually help.”

Vera was co-founded in 2021 by O’Sullivan and Justin Norman, formerly a research scientist at Cisco. Its platform identifies risks in model inputs, such as potentially sensitive information or malicious prompts, and then blocks, redacts, or transforms those requests to keep them in line with established policies. Vera also places constraints on the responses models generate, giving companies greater control over how their models behave in real-world applications.
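
That block-redact-or-allow workflow is, at its core, a filtering step applied to every prompt before it reaches a model. The Python sketch below is a hypothetical illustration of such a policy check, not Vera’s actual implementation; the rule names, regex patterns, and the `enforce_policy` function are assumptions made for the sake of the example.

```python
import re
from dataclasses import dataclass
from enum import Enum, auto


class Action(Enum):
    ALLOW = auto()
    REDACT = auto()
    BLOCK = auto()


@dataclass
class PolicyDecision:
    action: Action
    text: str        # prompt after any redaction ("" when blocked)
    reason: str = ""


# Hypothetical acceptable-use rules. A production system would rely on
# learned classifiers rather than keyword and regex matching.
SENSITIVE_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}
BLOCKED_TOPICS = ("how to build a weapon", "ways to self-harm")


def enforce_policy(prompt: str) -> PolicyDecision:
    """Decide whether a prompt is allowed, needs redaction, or is blocked."""
    lowered = prompt.lower()
    for topic in BLOCKED_TOPICS:
        if topic in lowered:
            return PolicyDecision(Action.BLOCK, "", reason=f"disallowed topic: {topic}")

    redacted = prompt
    for label, pattern in SENSITIVE_PATTERNS.items():
        redacted = pattern.sub(f"[{label} removed]", redacted)

    if redacted != prompt:
        return PolicyDecision(Action.REDACT, redacted, reason="sensitive data redacted")
    return PolicyDecision(Action.ALLOW, prompt)
```

Under these assumed rules, a prompt containing an email address would come back as a redact decision with the address masked, while a prompt touching a blocked topic would be refused outright.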

Vera achieves this by utilizing proprietary language and vision models positioned between users and internal or third-party models, like OpenAI’s GPT-4. This technology detects problematic content and can block inappropriate prompts or answers in various forms, including text, code, images, or videos. O’Sullivan stated, “Our deep tech approach to enforcing policies goes beyond passive forms of documentation and checklists to address the direct points at which these risks occur. Our solution…prevents riskier responses that may include criminal material or encourage users to self-harm.”
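
In other words, the moderation models sit in the request path as a gateway around whatever model a company actually uses. The sketch below illustrates that proxy pattern under assumed names: `screen_content` stands in for the proprietary safety classifiers, and `call_model` for any internal or third-party model such as GPT-4.

```python
from typing import Callable


def screen_content(text: str) -> bool:
    """Stand-in for a learned safety classifier.

    Returns True when the text is acceptable. A real gateway would call
    language and vision models here instead of a keyword check.
    """
    return "stolen credit card" not in text.lower()


def moderated_completion(
    prompt: str,
    call_model: Callable[[str], str],
    refusal: str = "This request was blocked by policy.",
) -> str:
    """Gateway sitting between the user and the underlying model."""
    if not screen_content(prompt):      # inbound check on the user's prompt
        return refusal
    answer = call_model(prompt)         # internal or third-party model call
    if not screen_content(answer):      # outbound check on the model's answer
        return refusal
    return answer


# Example wiring with a dummy model; a real integration would pass a client
# for GPT-4 or an in-house model instead.
if __name__ == "__main__":
    echo_model = lambda p: f"echo: {p}"
    print(moderated_completion("Summarize this meeting note.", echo_model))
```

Screening both directions matters because a benign prompt can still produce a response that violates policy, which is why the answer is checked again before it reaches the user.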

Companies face compliance-related challenges in adopting generative AI models, including concerns about confidential data security and offensive model behavior. Major corporations like Apple, Walmart, and Verizon have recently prohibited employees from using tools like OpenAI’s ChatGPT due to such concerns.

However, there are questions about the reliability of Vera’s approach. No AI model, including Vera’s, is flawless, and content moderation models are known to exhibit biases. Some AI models designed to detect toxicity in text have been found to disproportionately flag phrases in African-American Vernacular English as “toxic.” Similarly, computer vision algorithms have been shown to mislabel objects based on race.

O’Sullivan acknowledges that Vera’s models are not infallible, but maintains that they can rein in the most problematic behaviors of generative AI, depending on the model in question and how far Vera has refined its own.

Imperfect technology aside, Vera has competition in the emerging market for model moderation. Nvidia and Salesforce also offer tools aimed at preventing text-generating models from mishandling sensitive data, Microsoft sells AI services for moderating text and image content, and startups such as HiddenLayer, DynamoFL, and Protect AI are building tools to defend generative AI models against prompt engineering attacks.

Vera’s unique selling point appears to be its comprehensive approach, tackling a range of generative AI threats at once. Assuming the technology lives up to its promises, that breadth is likely to appeal to companies seeking a one-stop solution for content moderation and defense against attacks on AI models.

O’Sullivan mentioned that Vera already has several customers, with a waitlist for additional users. She emphasized, “CTOs, CISOs, and CIOs all over the world are struggling to strike the ideal balance between AI-enhanced productivity and the risks these models present. Vera unlocks generative AI capabilities with policy enforcement that can be transferred not just to today’s models, but to future models without the vendor lock-in that occurs when you choose a one-model or one-size-fits-all approach to generative AI.”
