Ghost, now OpenAI-backed, claims LLMs will overcome self-driving setbacks — but experts are skeptical

The self-driving car industry is facing a difficult moment: Cruise recently recalled its entire fleet following a serious accident, and that setback, coupled with public protests in San Francisco against autonomous vehicle testing, underscores how much pressure the sector is under.

In response to these challenges, Ghost Autonomy, a startup specializing in autonomous driving software, aims to enhance self-driving technology by incorporating multimodal large language models (LLMs). This involves utilizing AI models capable of interpreting both text and images. Ghost has partnered with OpenAI through the OpenAI Startup Fund, gaining early access to OpenAI systems, Azure resources from Microsoft, and a $5 million investment.

Ghost’s CEO, John Hayes, believes that LLMs provide a novel approach to understanding complex scenarios, offering improved reasoning in situations where current models fall short. The company plans to employ multimodal models to analyze scenes and make nuanced decisions, such as suggesting lane changes based on images captured by car-mounted cameras.
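To make that idea concrete, the sketch below shows roughly what querying a multimodal model with a single camera frame might look like. It uses the public OpenAI Python SDK, and the model name, prompt, and helper function are illustrative assumptions for this article, not Ghost's actual pipeline or a safety-qualified system.

# Hypothetical sketch: asking a multimodal model for a lane-change suggestion.
# Model name, prompt, and function are illustrative, not Ghost Autonomy's code.
import base64
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def suggest_maneuver(frame_path: str) -> str:
    """Send one front-camera frame to a multimodal model, return a suggestion."""
    with open(frame_path, "rb") as f:
        image_b64 = base64.b64encode(f.read()).decode("utf-8")

    response = client.chat.completions.create(
        model="gpt-4o",  # any vision-capable model would do for this sketch
        messages=[{
            "role": "user",
            "content": [
                {"type": "text",
                 "text": ("You are assisting a driving planner. Describe the scene, "
                          "then say whether changing lanes left, changing lanes right, "
                          "or staying in lane is safer, with a one-line reason.")},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/jpeg;base64,{image_b64}"}},
            ],
        }],
        max_tokens=150,
    )
    return response.choices[0].message.content

print(suggest_maneuver("front_camera_frame.jpg"))

In practice, any such suggestion would sit behind a conventional planning and safety layer; the open question experts raise is whether a language model's reasoning is reliable and fast enough to inform driving decisions at all.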

However, skepticism exists among experts, including Os Keyes, a Ph.D. candidate at the University of Washington, who views the use of LLMs in self-driving as a potentially misguided marketing strategy. Keyes argues that LLMs may not be the most efficient solution for the challenges in autonomous driving.

Mike Cook, a senior lecturer at King’s College London, echoes this skepticism, emphasizing that LLMs are not a one-size-fits-all solution in computer science. He questions the application of this technology to such a complex and safety-critical task as driving.

Despite skepticism, Ghost and OpenAI remain optimistic, with Brad Lightcap, OpenAI’s COO, suggesting that multimodal models have the potential to broaden the applicability of LLMs, including in autonomy and automotive applications.

In response to critics, Hayes defends Ghost’s approach, asserting that LLMs could enable autonomous systems to reason about driving scenes comprehensively and navigate diverse situations. Ghost is actively testing multimodal model-driven decision-making and collaborating with automakers to integrate new models into its autonomy stack.

However, doubts persist about whether such technology is ready for commercial use in vehicles, especially given the setbacks experienced by well-established players in the autonomous vehicle space. The question remains: can Ghost deliver on its promise with unproven technology in an industry facing increasing scrutiny and challenges?
