Good old-fashioned AI remains viable in spite of the rise of LLMs

Think back to November of last year, before any of us had heard of ChatGPT. Back then, machine learning mostly meant building models to solve a single task, such as loan approvals or fraud detection. That approach seemed destined to fade with the rise of generalized large language models (LLMs), but generalized models are not the right fit for every problem, and task-specific models are still alive and well in the enterprise.

These task-specific models, which made up the bulk of enterprise AI before the rise of LLMs, still have a pivotal role to play. Amazon CTO Werner Vogels recently called them “good old-fashioned AI” and emphasized that they continue to solve real-world problems. Atul Deo, general manager of Amazon Bedrock, the product Amazon introduced this year for interfacing with a variety of large language models through APIs, agrees: task models will not disappear; they have simply become another tool in the AI toolkit.
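For a sense of what “interfacing with large language models through APIs” looks like in practice, here is a minimal sketch of a Bedrock call using boto3. The region, model ID, and request shape are assumptions for illustration; each model family on Bedrock defines its own JSON format, and the model must be enabled for the account.

```python
import json

import boto3

# Bedrock exposes hosted LLMs through a runtime API; each model family
# defines its own request/response JSON shape.
client = boto3.client("bedrock-runtime", region_name="us-east-1")

body = json.dumps({
    "prompt": "\n\nHuman: Summarize why task-specific models still matter.\n\nAssistant:",
    "max_tokens_to_sample": 300,
})

response = client.invoke_model(
    modelId="anthropic.claude-v2",  # assumes this model is enabled in the account
    contentType="application/json",
    accept="application/json",
    body=body,
)

# The response body is a stream; decode it to read the model's completion.
result = json.loads(response["body"].read())
print(result["completion"])
```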

Before the era of large language models, the dominant paradigm was task-specific: you trained a model from the ground up for one purpose. According to Deo, that is the fundamental distinction between task models and LLMs: one is tailored to a particular task, while the other can flex beyond predefined boundaries.
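To make the distinction concrete, here is a minimal sketch of the task-specific approach: a small classifier trained from scratch for a single job, with synthetic data standing in for a labeled fraud dataset (the data and parameters are illustrative only).

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

# Synthetic stand-in for a labeled fraud dataset: each row is a
# transaction, the label marks it as fraudulent (1) or legitimate (0).
X, y = make_classification(
    n_samples=5_000, n_features=20, weights=[0.97, 0.03], random_state=0
)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0
)

# The model is trained from scratch for this one task; it cannot be
# repurposed for anything else, but it is small, fast, and cheap to run.
model = LogisticRegression(max_iter=1_000, class_weight="balanced")
model.fit(X_train, y_train)

print(classification_report(y_test, model.predict(X_test)))
```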

Jon Turow, a partner at the investment firm Madrona and a former AWS executive, notes that large language models bring capabilities like reasoning and out-of-domain robustness, though how far those capabilities extend is still up for debate. Task models, he argues, remain relevant precisely because they are built for a specific job: they can be smaller, faster, and potentially cheaper to run.

The allure of an all-encompassing model is undeniable: a single model can be reused across many use cases. But Turow stresses that task models remain relevant, especially where specificity, speed, and cost are paramount.

That is why Amazon’s SageMaker, a machine learning operations platform aimed at data scientists, remains a crucial product even as Bedrock, aimed at developers, gets the attention. SageMaker has tens of thousands of customers building millions of models, and the upgrades Amazon has introduced for it, including tooling for managing large language models, underscore its continued importance.
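As a rough illustration of the SageMaker workflow, here is a sketch that launches a managed training job with the SageMaker Python SDK. The entry-point script, instance type, and S3 location are placeholders, and the call assumes an environment where SageMaker permissions are already configured.

```python
import sagemaker
from sagemaker.sklearn.estimator import SKLearn

# A SageMaker training job wraps the task-model workflow: the training
# script runs on managed infrastructure and the fitted model lands in S3.
session = sagemaker.Session()
role = sagemaker.get_execution_role()  # assumes a role with SageMaker permissions

estimator = SKLearn(
    entry_point="train.py",   # hypothetical script, e.g. the sklearn code above
    role=role,
    instance_type="ml.m5.large",
    instance_count=1,
    framework_version="1.2-1",
    py_version="py3",
)

# Launch the managed training job against data staged in S3
# (the bucket and prefix here are placeholders).
estimator.fit({"train": f"s3://{session.default_bucket()}/fraud/train"})
```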

In enterprise software, adopting a new technology does not mean discarding prior investments outright. Large language models may be the trend of the moment, but the technology that preceded them remains pertinent.

And in an age when LLM tools are increasingly aimed at developers, the role of the data scientist remains crucial. Someone has to critically evaluate the data and understand the intricate relationship between AI and data inside large organizations, whether the model being built is a generalized LLM or a task-specific one. The two approaches will operate side by side for the foreseeable future, with the better choice depending on the requirements of each use case.
