Google unveils MedLM, a family of healthcare-focused generative AI models

Google believes generative AI models can take on more healthcare responsibilities or, at the very least, help healthcare professionals with their duties. Today, the company introduced MedLM, a suite of models tailored specifically for the medical field. MedLM is derived from Med-PaLM 2, a Google-developed model that performs at an “expert level” on numerous medical exam questions. It is available to Google Cloud customers in the U.S. (with a preview in certain other markets) who have been whitelisted through Vertex AI, Google’s fully managed AI development platform.

There are currently two MedLM models: a larger model intended for what Google characterizes as “complex tasks,” and a smaller, fine-tunable model best suited for “scaling across tasks.”

“Through piloting our tools with different organizations, we’ve learned that the most effective model for a given task varies depending on the use case,” Yossi Matias, VP of engineering and research at Google, wrote in a blog post shared with TechCrunch ahead of the official announcement. “For example, summarizing conversations might be best handled by one model, and searching through medications might be better handled by another.”
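For allowlisted Cloud customers, invoking either MedLM model would presumably follow the standard Vertex AI text-generation flow. Below is a minimal sketch using the Vertex AI Python SDK; the model ID, project name, and prompt are illustrative placeholders, not details confirmed in this article.

# Minimal sketch: calling a MedLM model via the Vertex AI SDK
# (google-cloud-aiplatform). Model ID and project are assumptions.
import vertexai
from vertexai.language_models import TextGenerationModel

# Authenticate against an allowlisted Google Cloud project.
vertexai.init(project="my-allowlisted-project", location="us-central1")

# Hypothetical ID for the smaller, fine-tunable MedLM variant; the
# larger model would be selected the same way under a different ID.
model = TextGenerationModel.from_pretrained("medlm-medium")

response = model.predict(
    "Summarize the key complaints in the following clinician-patient "
    "conversation:\n\n[transcript here]",
    temperature=0.2,  # conservative sampling for clinical text
    max_output_tokens=512,
)
print(response.text)

In line with Matias’ point, the call pattern would be the same for both variants; only the model ID (and whether the smaller model is fine-tuned for the task) would change between, say, summarization and medication search.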

Google reports that one early MedLM user, for-profit facility operator HCA Healthcare, has been testing the models with physicians to help draft patient notes in emergency department settings. Another tester, BenchSci, has built MedLM into its “evidence engine” for identifying, classifying, and ranking novel biomarkers.

Google, alongside chief rivals Microsoft and Amazon, is racing to corner the potentially lucrative healthcare AI market, projected to be worth tens of billions of dollars by 2032. Amazon recently launched AWS HealthScribe, which uses generative AI to transcribe, summarize, and analyze notes from patient-doctor conversations. Microsoft, meanwhile, is piloting various AI-powered healthcare products, including medical “assistant” apps underpinned by large language models.

However, caution is warranted with such technology. AI in healthcare has a historically mixed track record. Babylon Health, an AI startup backed by the U.K.’s National Health Service, came under scrutiny for claiming that its disease-diagnosing tech could outperform doctors. And IBM was forced to sell its AI-focused Watson Health division at a loss after technical problems soured customer partnerships.

While generative models like those in Google’s MedLM family may appear more sophisticated, research indicates that they are not especially accurate at answering healthcare questions, even basic ones. One study involving ophthalmologists found that ChatGPT and Google’s Bard chatbot frequently gave inaccurate responses.

In October, the World Health Organization (WHO) warned of the risks associated with using generative AI in healthcare, emphasizing the potential for models to produce harmful responses, disseminate health-related disinformation, and disclose health data or other sensitive information.

Google asserts that it remains exceptionally cautious in releasing generative AI healthcare tools, emphasizing its commitment to enabling professionals to use this technology safely and responsibly. The company aims not only to advance healthcare but also to ensure that the benefits are accessible to everyone.
