The AI industry is advancing rapidly, yet no amount of training can stop AI models from generating unconventional and sometimes absurd content, because people continuously find ways to bend new technology toward chaos. Companies eager to launch cutting-edge AI tools often underestimate the potential for misuse, and moderation struggles to keep pace with the human appetite for generating content around sensitive and unconventional themes, from explicit imagery to references to tragedies like 9/11.
Recently, both Meta's and Microsoft's AI image generators drew widespread attention for failing to filter out inappropriate content. Users tested the platforms with prompts like "Karl Marx large breasts" and fictional characters in 9/11 scenarios, underscoring the risks companies take when they rush into the AI landscape without weighing how their tools might be misused.
Meta is rolling out AI-generated chat stickers across Facebook Stories, Instagram Stories, Messenger, and WhatsApp, powered by its new Llama 2 and Emu models. The stickers were meant for expressing emotions in chat, but users quickly took advantage of how specific the prompts could be, producing unusual and offensive images of figures like Kirby, Karl Marx, Wario, and Sonic adorned with exaggerated features.
Like many AI models, Meta's also struggled to render human hands accurately. And while Meta attempted to block certain explicit terms, users circumvented the filters through typos or creative variations, a weakness the sketch below illustrates.
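To see why term blocklists are so brittle, here is a minimal Python sketch of a naive exact-match filter. The blocklist contents, function name, and matching rule are hypothetical illustrations, not Meta's actual moderation code; the point is only that an exact match catches the literal term while waving through trivial misspellings.

```python
# A minimal, hypothetical sketch of a naive keyword blocklist.
# The terms and function below are illustrative assumptions, not
# Meta's actual moderation code.

BLOCKED_TERMS = {"breasts", "nude"}  # assumed blocklist entries

def passes_filter(prompt: str) -> bool:
    """Reject a prompt only if one of its words exactly matches a blocked term."""
    words = prompt.lower().split()
    return not any(term in words for term in BLOCKED_TERMS)

print(passes_filter("karl marx large breasts"))   # False: literal term is caught
print(passes_filter("karl marx large br easts"))  # True: one stray space slips through
print(passes_filter("karl marx large breastz"))   # True: a typo evades the exact match
```

Production filters layer fuzzy matching and learned classifiers on top of lists like this, but the dynamic is the same: every rule a platform adds invites a slightly different workaround.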
Microsoft’s Bing Image Creator, powered by OpenAI’s DALL-E, faced similar issues. Although Microsoft implemented guardrails to prevent misuse, users were able to generate images of fictional characters piloting planes into the Twin Towers, despite content policies explicitly forbidding such depictions.
The practice of crafting clever prompts to bypass an AI tool's content filters, known as "jailbreaking," has become a common online phenomenon. Researchers use it to probe AI vulnerabilities, but online users have turned it into a game, finding ever more creative ways to subvert safeguards and generate absurd or offensive content.
Even when companies impose content restrictions, users find workarounds to produce NSFW or otherwise controversial material. Snapchat's family-friendly AI chatbot, for instance, was manipulated into using inappropriate language, and Discord's OpenAI-powered chatbot, Clyde, was coaxed into providing instructions for making dangerous substances.
The emergence of generative AI has opened a new avenue for humorous and absurd content as users probe the limits of these models and exploit their weaknesses. That raises real concerns about AI safety, but it also carries an irony: decades of innovation have culminated in tools we promptly turn to frivolous ends, a reminder of the intrinsically human urge toward creativity and humor, even in the face of groundbreaking technology.