OpenEvidence tackles the engineering challenge of real-time data for large language models
OpenEvidence, a San Francisco-based artificial intelligence (AI) startup valued at $425 million, is aiming to solve one of the major limitations facing today's large language models: the inability to access real-time data beyond their original training period.
Large language models like OpenAI's ChatGPT have garnered significant buzz for their conversational abilities. However, their knowledge is frozen in time, reflecting only the data they were trained on. For example, if asked about COVID-19 vaccines, ChatGPT responds with information as of September 2021, unable to account for new developments over the past year and a half.
This severely limits the utility of such models, especially in fields like healthcare where practitioners need access to the latest medical research and information. As Daniel Nadler, OpenEvidence's founder, puts it, chatbots require "access to a real-time firehose of clinical documents" to be useful in medical settings.
OpenEvidence's Solution: Retrieval Augmentation
OpenEvidence aims to solve this problem through a technique called retrieval augmentation. Here's how it works:
- The language model is paired with a pool of constantly updated data - in OpenEvidence's case, over 35 million medical journal articles.
- When asked a question, the system searches this pool to find relevant, up-to-date information.
- It then uses this retrieved information, along with its original training, to generate an informed, cited response.
This allows the model to "answer with an open book," tapping into the latest evidence rather than relying solely on its closed training set.
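The retrieve-then-generate loop described above can be sketched in a few lines. This is a minimal illustration, not OpenEvidence's actual system: the corpus, the term-overlap scoring, and the prompt format are all invented placeholders, and a real deployment would use a proper search index and an LLM call where this sketch stops.

```python
from collections import Counter

# Toy stand-in for a constantly updated pool of journal articles
# (ids and text are invented for illustration).
CORPUS = [
    {"id": "jama-2023-001", "text": "bivalent covid vaccine boosters reduce hospitalization"},
    {"id": "nejm-2023-017", "text": "statin therapy lowers cardiovascular event risk"},
    {"id": "lancet-2023-042", "text": "updated covid vaccine guidance for immunocompromised patients"},
]

def retrieve(query: str, k: int = 2) -> list[dict]:
    """Rank documents by naive term overlap with the query (a stand-in
    for a real search index) and return the top k."""
    q_terms = Counter(query.lower().split())
    def score(doc: dict) -> int:
        return sum(q_terms[t] for t in doc["text"].split() if t in q_terms)
    return sorted(CORPUS, key=score, reverse=True)[:k]

def build_prompt(query: str, docs: list[dict]) -> str:
    """Assemble an 'open book' prompt: the retrieved passages are
    labeled by source id so the model can cite them in its answer."""
    context = "\n".join(f"[{d['id']}] {d['text']}" for d in docs)
    return f"Sources:\n{context}\n\nQuestion: {query}\nAnswer with citations:"

docs = retrieve("latest covid vaccine evidence")
prompt = build_prompt("latest covid vaccine evidence", docs)
```

In a full pipeline, `prompt` would be sent to the language model, which generates a response grounded in (and citing) the retrieved documents rather than only its frozen training data.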
Engineering Challenges of Live Data Integration
Integrating real-time data streams poses major engineering hurdles:
- Scale: Processing over 35 million documents from thousands of journals requires substantial computing power. OpenEvidence uses a supercomputer in Nevada to keep pace with this volume of incoming data.
- Relevance: Not all documents are equally credible. OpenEvidence's algorithms weigh evidence quality in results.
- Speed: Queries must scan updated documents and generate answers within seconds. This requires optimization to avoid computational bottlenecks.
- Consistency: Answers must be logically coherent, without strange hallucinations that can stem from patchy evidence. Special techniques help ensure contextual consistency.
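The "Relevance" point above, that not all sources are equally credible, suggests weighting each document's retrieval score by a source-quality prior. The sketch below illustrates that idea only; the weights and scores are invented, and nothing here reflects OpenEvidence's actual weighting algorithm.

```python
# Hypothetical per-source credibility weights (invented for illustration);
# a real system might derive these from peer-review tier or citation metrics.
SOURCE_WEIGHT = {"nejm": 1.0, "lancet": 0.95, "preprint": 0.4}

def evidence_weighted_score(relevance: float, source: str) -> float:
    """Scale a raw relevance score by source credibility, so a
    moderately relevant top-tier study can outrank a highly
    relevant but unvetted preprint. Unknown sources get 0.5."""
    return relevance * SOURCE_WEIGHT.get(source, 0.5)

# (source, raw relevance score) pairs, also invented.
results = [("preprint", 0.9), ("nejm", 0.6), ("lancet", 0.5)]
ranked = sorted(results, key=lambda r: evidence_weighted_score(r[1], r[0]), reverse=True)
```

Here the preprint has the highest raw relevance (0.9) but ranks last after weighting, which is the kind of "evidence-weighted answer" behavior Nadler describes below.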
Funding and Early Traction
OpenEvidence has raised $32 million to date, including a $27 million Series B round in 2022. Investors include many backers of Nadler's previous startup, Kensho Technologies.
The platform has seen strong early interest, with over 10,000 medical professionals signing up for early access following the company's participation in a Mayo Clinic accelerator program. Users highlight interactivity and speed as advantages over manual resources like UpToDate.
Monetization and Ethics
OpenEvidence plans a hybrid ad-based and subscription model. The company avoids direct-to-consumer chatbots to mitigate potential harm; Nadler believes professionals can benefit from AI assistance while retaining ultimate responsibility. As surgeon advisor Antonio Forte notes, the tool provides helpful efficiency gains, but the doctor retains judgment.
Real-time data integration stands as a pivotal challenge in advancing large language models. By tackling this problem, OpenEvidence aims to enhance AI assistance for medical professionals. Its traction and funding suggest promise, but continued research will refine techniques and measure benefits versus human-driven systems. Beyond healthcare, the approach could potentially expand AI capabilities across sectors dependent on timely, high-quality information.
Details on OpenEvidence's Founding and Investors
- Founded in November 2021 by Daniel Nadler, founder of previous AI startup Kensho Technologies (acquired for $700M in 2018)
- Seed funding came from Nadler's personal investment of $5M
- $27M Series B round closed July 2022, led by Kensho investors including:
- Jim Breyer (billionaire venture capitalist)
- Brian Sheth (Vista Equity Partners co-founder)
- Ken Moelis (investment banker)
Quotes on the OpenEvidence System
- "The fundamental construct of the problem was identical. An information overload and a need to triage that information and a need to use computers to do so." - Daniel Nadler, OpenEvidence founder
- "You have evidence-weighted answers. The quality of the input source [is] taken into account." - Nadler on weighting journal credibility
- "If you gave a human a bunch of documents or paragraphs, let the human read it and then answer questions, and also ask the human to tell you where their answer came from in those documents, even humans would make mistakes." - Uri Alon, Carnegie Mellon University researcher
- "The biggest difference has been the time savings. Rather than having to read through the equivalent of a book chapter, I can get an answer within 30 seconds, not within 10 minutes." - Dr. Antonio Forte, Mayo Clinic surgeon and OpenEvidence advisor