Elicit is building a tool to automate scientific literature review

For researchers, combing through the scientific literature is enormously time-consuming. One survey found that scientists spend about seven hours per week just searching for information. Another found that systematic literature reviews, comprehensive syntheses of the scholarly evidence on a given topic, take five-person research teams an average of 41 weeks to complete.

Fortunately, there is an alternative approach available.

This alternative is championed by Andreas Stuhlmüller, co-founder of Elicit, an AI startup building tools for scientists and R&D labs. Elicit has developed an AI-powered “research assistant” that aims to automate the more laborious parts of literature reviews.

“Elicit is a research assistant that leverages language models to automate scientific research,” explained Stuhlmüller in an email interview with TechCrunch. “Specifically, it automates literature reviews by identifying pertinent papers, extracting crucial study information, and organizing this data into meaningful concepts.”

Elicit is a for-profit venture spun off from Ought, a nonprofit research foundation founded in 2017 by Stuhlmüller, a former researcher in Stanford’s computation and cognition lab. Jungwon Byun, Elicit’s other co-founder, joined the startup in 2019 after leading growth at the online lending firm Upstart.

Elicit employs a mix of first-party and third-party models to surface and extract key concepts across academic papers, letting users pose questions like “What are all the effects of creatine?” or “What are all the datasets used in studying logical reasoning?” and receive lists of answers drawn from the academic literature.
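To make that pattern concrete, here is a minimal sketch of such a literature query loop in Python: one question is posed against each paper in a corpus, and every answer stays tied to its source. The `Paper` class and `query_model` stub are illustrative stand-ins, not Elicit’s actual code or API.

```python
# Illustrative sketch (not Elicit's implementation): ask one question of many
# papers and collect per-paper answers, each linked back to its source.

from dataclasses import dataclass

@dataclass
class Paper:
    title: str
    abstract: str

def query_model(prompt: str) -> str:
    """Stand-in for a call to a language model; hypothetical."""
    return "(model answer based on the prompt)"

def answers_from_literature(question: str, papers: list[Paper]) -> list[dict]:
    results = []
    for paper in papers:
        prompt = (
            f"Question: {question}\n"
            f"Paper: {paper.title}\n"
            f"Abstract: {paper.abstract}\n"
            "Answer using only information from this abstract."
        )
        # Keep the source alongside the answer so it can be verified later.
        results.append({"source": paper.title, "answer": query_model(prompt)})
    return results

corpus = [Paper("Creatine and cognition", "A randomized trial of ...")]
for row in answers_from_literature("What are all the effects of creatine?", corpus):
    print(row["source"], "->", row["answer"])
```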

Stuhlmüller emphasizes that Elicit has taken precautions to make its AI more reliable, a response to language models’ well-documented tendency to generate erroneous information. Elicit breaks the complex tasks its models perform into “human-understandable” components, which lets it track how often different models produce inaccuracies in summaries and helps users spot which answers require verification.
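A rough sketch of what that kind of task decomposition can look like, assuming an invented three-step extraction pipeline; the step names and the `run_step` stub are hypothetical illustrations, not Elicit’s actual components:

```python
# Hedged sketch of task decomposition: split "summarize a paper" into small,
# human-checkable steps so errors can be attributed to a specific step.

def run_step(name: str, text: str) -> str:
    """Stand-in for one model call; hypothetical."""
    return f"[{name} output for: {text[:30]}...]"

PIPELINE = ["extract_population", "extract_intervention", "extract_outcome"]

def decomposed_summary(abstract: str) -> dict[str, str]:
    # Every intermediate result is kept, so a reviewer can verify each step in
    # isolation and error rates can be tallied per step, not per whole summary.
    return {step: run_step(step, abstract) for step in PIPELINE}

summary = decomposed_summary("We randomized 120 adults to creatine or placebo ...")
for step, output in summary.items():
    print(step, "->", output)
```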

Additionally, Elicit attempts to evaluate the overall “trustworthiness” of scientific papers, taking into account factors such as the rigor of research trials, funding sources, potential conflicts of interest, and trial sizes.
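As a toy illustration of how such factors might combine into a single score, here is a simple weighted heuristic; the weights and field names are invented for the example and say nothing about how Elicit actually scores papers:

```python
# Invented "trustworthiness" heuristic combining the factors named above.

def trust_score(randomized: bool, n_participants: int,
                industry_funded: bool, has_conflicts: bool) -> float:
    score = 0.0
    score += 0.4 if randomized else 0.0        # rigor of the trial design
    score += min(n_participants / 1000, 0.3)   # larger trials score higher
    score -= 0.2 if industry_funded else 0.0   # funding source
    score -= 0.1 if has_conflicts else 0.0     # conflicts of interest
    return max(0.0, min(1.0, score))           # clamp to [0, 1]

print(trust_score(randomized=True, n_participants=500,
                  industry_funded=False, has_conflicts=False))
```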

Unlike many language model applications, Elicit does not use chat interfaces. Instead, users run language models as batch jobs, and every generated answer links back to the scientific literature, which minimizes errors and makes verification straightforward.
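A minimal sketch of that batch-job pattern, with a citation attached to every extracted value; the `extract_field` stub and the DOI-based links are assumptions for illustration only:

```python
# Sketch of the batch-job pattern: instead of a chat loop, run one extraction
# over a whole list of papers and attach a citation to every answer.

def extract_field(paper_id: str, field: str) -> str:
    """Stand-in for a model extraction call; hypothetical."""
    return f"(value of '{field}' in {paper_id})"

def batch_extract(paper_ids: list[str], fields: list[str]) -> list[dict]:
    rows = []
    for pid in paper_ids:
        row = {"citation": f"https://doi.org/{pid}"}  # every answer links back
        for field in fields:
            row[field] = extract_field(pid, field)
        rows.append(row)
    return rows

table = batch_extract(["10.1000/xyz123"], ["sample_size", "dataset"])
print(table)
```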

Whether Elicit has fully solved the challenges facing language models today remains to be seen, but its efforts have evidently piqued the interest, and possibly the trust, of the research community. Stuhlmüller reports that over 200,000 people use Elicit every month, threefold year-over-year growth, with organizations such as The World Bank, Genentech, and Stanford among its users.

To fuel further development, Elicit has secured its first funding round, totaling $9 million, led by Fifty Years. The bulk of this funding will be directed toward enhancing Elicit’s product and expanding its team of product managers and software engineers.

As for Elicit’s revenue strategy, the company has introduced a paid tier that lets users run more extensive searches, extract more data, and summarize concepts at a larger scale than the free tier allows. In the long run, the plan is to turn Elicit into a comprehensive research and reasoning tool that entire enterprises will pay for.

One potential challenge to Elicit’s commercial success could come from open-source initiatives like the Allen Institute for AI’s Open Language Model, which aims to develop a free-to-use large language model tailored for scientific purposes. However, Stuhlmüller sees open source as more complementary than competitive, emphasizing that the primary competition at present is human labor—research assistants who meticulously extract data from papers. The scientific research market is vast, and there are currently no major incumbents in research workflow tools, opening the door for entirely new AI-driven workflows to emerge.
