AI isn’t and won’t soon be evil or even smart, but it’s also irreversibly pervasive

Artificial intelligence, particularly the kind centered on large language models that currently captivates our attention, has entered the later stages of its hype cycle. Unlike cryptocurrency, it won't simply retreat to obscure corners of the internet once the trendiness fades. Instead, it is settling into everyday use, even in scenarios it is not well suited for. The pessimistic view holds that AI will evolve to the point of enslaving or overshadowing humanity, but the real threat lies in its pervasiveness: it is quietly seeding errors and hallucinations throughout our collective intellectual landscape.

The debate between doomerism and e/acc rages on among Silicon Valley elites, rarely with grounded, fact-based arguments. It's worth remembering that these figures routinely predict either the extreme success or the extreme failure of the technologies they champion or oppose. In practice, technologies rarely reach either the utopian or the catastrophic endpoint these debates imagine; self-driving cars, virtual reality, and the metaverse all illustrate the point.

In tech, utopian-versus-dystopian arguments serve their intended purpose: distracting from genuine conversations about what the technology is doing in practical use today. AI, particularly since ChatGPT's introduction a year ago, has undeniably left a significant mark. But its impact is not about accidentally creating a virtual deity. It's that ChatGPT far exceeded its creators' expectations for popularity, virality, and stickiness, even as the underlying technology performed about as modestly as they predicted.

Recent studies show that generative AI use is widespread and growing, especially among younger users. Contrary to expectations, its primary applications are not novelty or entertainment; they are overwhelmingly the automation of work-related tasks and communications. Any single piece of AI-generated content in these settings is usually inconsequential, but together they seed the digital landscape with subtle factual errors and minor inaccuracies.

Humans were never especially good at disseminating error-free information, as the misinformation economy on social networks makes plain. But LLM-based models introduce errors casually and constantly, and they arrive with an air of authority that users learned from years of stable, largely factual Google search results. That long habit of trusting whatever an internet search returns has eroded our critical skepticism.

The impact of ChatGPT and similar models producing content of questionable accuracy for everyday communication may be subtle, but it is worth investigating and potentially mitigating. Understanding why people entrust these tasks to AI matters, and when a task is automated at scale, the task itself deserves scrutiny. The significant changes AI brings are already here; they look nothing like the dystopian visions, and they warrant study that goes beyond techno-optimistic dreams.
