In an ideal scenario, companies would meticulously assess the security and compliance of all third-party vendors before engaging in any business transactions, and sales would proceed only after these reviews were successfully completed. The challenge is that thorough security evaluations demand significant time and human effort.
Companies primarily vet vendors through questionnaires, which can run to hundreds of questions covering everything from privacy policies to physical data center security. These questionnaires can take vendors anywhere from several days to weeks to complete.
In an effort to streamline this process, Chas Ballew established Conveyor, a startup that is developing a platform employing large language models (LLMs), akin to OpenAI’s ChatGPT, to generate responses in the original questionnaire format for security questions.
Conveyor recently announced the successful raising of $12.5 million in a Series A funding round, with Cervin Ventures leading the investment. This capital infusion brings Conveyor’s total funding to $19 million, and it will be directed towards expanding the company’s sales and marketing efforts, research and development, and its team of 15 employees.
Ballew highlighted the outdated nature of security reviews and the prevalent manual approach to answering security questionnaires. He explained that Conveyor automates the entire process, providing efficient and “human-like” responses to security-related questions.
Conveyor offers two core products: a self-service portal for sharing security documents and compliance information with clients and prospects, and an AI-driven question-answering tool powered by LLMs that can comprehend the structure of security questionnaires and automatically fill them in. Conveyor leverages vendor-specific knowledge databases to craft comprehensive answers to natural language questions within these questionnaires.
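The retrieval pattern described above — matching each questionnaire item against a vendor-specific knowledge base to draft an answer — can be sketched in a few lines. This is an illustrative toy, not Conveyor's actual implementation: the function names, the token-overlap scoring, and the sample knowledge base are all assumptions (a production system would use an LLM with embedding-based retrieval rather than keyword matching).

```python
# Illustrative sketch (NOT Conveyor's implementation): answer a security
# questionnaire item by retrieving the best-matching entry from a
# vendor-specific knowledge base. Scoring here is simple token overlap,
# standing in for the embedding/LLM retrieval a real system would use.

def tokenize(text):
    """Lowercase a string and split it into a set of word tokens."""
    return set(text.lower().replace("?", "").replace(".", "").split())

def retrieve_answer(question, knowledge_base):
    """Return the knowledge-base answer whose topic best overlaps the question."""
    q_tokens = tokenize(question)
    best_answer, best_score = None, 0
    for topic, answer in knowledge_base.items():
        score = len(q_tokens & tokenize(topic))
        if score > best_score:
            best_answer, best_score = answer, score
    return best_answer  # None means no relevant entry was found

# Toy knowledge base mapping security topics to pre-approved answers.
kb = {
    "data encryption at rest": "All customer data is encrypted at rest with AES-256.",
    "employee security training": "Employees complete security awareness training annually.",
}

print(retrieve_answer("Is customer data encrypted at rest?", kb))
# → All customer data is encrypted at rest with AES-256.
```

The key design point the article implies is that answers come from stakeholder-contributed source material, not from the model's general knowledge — the knowledge base, not the LLM, is the source of truth.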
Other companies, such as Vendict, Purilock, Scrut, and Inventive, are also attempting to automate security reviews using LLMs.
One open question is whether AI-powered solutions like Conveyor compromise the purpose of security reviews, which are traditionally designed to draw information from employees across a vendor's IT and security teams. Ballew contends that Conveyor doesn't cut corners; it reorganizes data points contributed by those stakeholders into a questionnaire-friendly format.
Reliability is another concern, given the high stakes of security reviews. Ballew said that when Conveyor is uncertain about a response, the answer is flagged for human review, though the precise method Conveyor uses to distinguish high-confidence from low-confidence answers was not disclosed.
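The flagging behavior Ballew describes amounts to a triage step: confident answers pass through, uncertain ones go to a person. Since the actual mechanism is undisclosed, the sketch below is purely hypothetical — the threshold value and the idea of a numeric confidence score are assumptions for illustration.

```python
# Hypothetical sketch of confidence-based triage: answers below an assumed
# threshold are routed to a human reviewer. Conveyor's real mechanism for
# scoring confidence is not public; this only illustrates the workflow.

REVIEW_THRESHOLD = 0.8  # assumed cutoff for auto-approval

def triage(draft_answer, confidence):
    """Auto-approve high-confidence answers; flag the rest for human review."""
    status = "auto-approved" if confidence >= REVIEW_THRESHOLD else "needs human review"
    return {"answer": draft_answer, "confidence": confidence, "status": status}

print(triage("Data in transit is encrypted via TLS 1.2+.", 0.95)["status"])
# → auto-approved
print(triage("We support on-premises deployments.", 0.45)["status"])
# → needs human review
```

However the score is computed in practice, the workflow keeps a human in the loop exactly where the model is least trustworthy, which is what makes the approach defensible for high-stakes reviews.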
In conclusion, Conveyor envisions a future where evaluating a vendor's security measures becomes as straightforward as using a mobile phone for grocery checkout. The company points to the accuracy and quality of its AI as key differentiators, promising to reduce the time spent editing and correcting questionnaire responses. Given the open questions about the capabilities and limitations of LLMs, it remains to be seen whether they can effectively handle the nuances of security questionnaires.