Social Media

Light
Dark

Lakera launches to protect large language models from malicious prompts

Large language models (LLMs) are the driving force behind the burgeoning generative AI movement. They have the capability to interpret and generate human-language text based on simple prompts, which can range from summarizing documents and writing poetry to answering questions using data from various sources. However, these prompts can be manipulated by malicious actors, using techniques like “prompt injection” to deceive LLM-powered chatbots into granting unauthorized access to systems or bypassing stringent security measures.

In response to these challenges, Swiss startup Lakera has officially launched to the world, with a commitment to safeguard enterprises from LLM security vulnerabilities, including prompt injections and data leaks. Alongside its launch, the company disclosed that it raised $10 million in funding earlier this year.

Lakera has developed a comprehensive database that incorporates insights from various sources, including publicly available open source datasets, in-house research, and data collected from an interactive game called Gandalf, which they launched recently. In this game, users attempt to “hack” an LLM through linguistic tricks to uncover a secret password, with increasing difficulty levels as the game progresses. These insights from Gandalf contribute to Lakera’s flagship product, Lakera Guard, which companies can integrate into their applications through an API.

Lakera’s CEO, David Haber, explained that Gandalf is played by a diverse audience, from six-year-olds to adults, with a significant portion of the cybersecurity community participating in the game. The company has recorded around 30 million interactions from 1 million users over the past six months, allowing them to create a “prompt injection taxonomy” categorizing types of attacks into 10 different categories.

Beyond prompt injections, Lakera is addressing other cybersecurity concerns, such as the inadvertent leakage of private or confidential data and ensuring that LLMs do not generate unsuitable content for children. They are also tackling LLM-enabled misinformation or factual inaccuracies by ensuring that the model adheres to prescribed bounds.

As the EU AI Act introduces regulations for AI models, including generative AI models, Lakera’s launch comes at an opportune moment. The company’s founders have been involved in advisory roles for the Act, helping to establish the technical foundations ahead of its expected introduction in the next year or two.

Despite the widespread adoption of generative AI models, security remains a concern for enterprises looking to incorporate these technologies into their applications. Lakera aims to address these security challenges and enable companies to deploy generative AI apps with confidence.

Founded in 2021 in Zurich, Lakera already serves major paying customers, including LLM developer Cohere, a leading enterprise cloud platform, and one of the world’s largest cloud storage services. With $10 million in funding, the company is well-positioned to further develop its platform and adapt to the evolving threat landscape. The investment was led by Swiss VC Redalpine, with additional capital from Fly Ventures, Inovia Capital, and several angel investors.

Leave a Reply

Your email address will not be published. Required fields are marked *