
This week in AI: AI ethics keeps falling by the wayside

Staying abreast of the rapidly evolving AI industry is no small feat. Until an AI can manage this task for you, here’s a concise overview of recent developments in machine learning, encompassing noteworthy research and experiments that may have slipped under the radar.

This week in AI, the news tempo has finally slowed ahead of the holiday season, offering a momentary respite for this sleep-deprived reporter. The lull doesn’t mean a shortage of noteworthy topics, though, which is both a blessing and a curse.

One headline that grabbed attention came from the AP, which revealed that AI image generators are being trained on explicit photos of children. LAION, a dataset used to train many AI image generators, including popular ones like Stable Diffusion and Imagen, contained thousands of images of suspected child sexual abuse. The Stanford Internet Observatory, working with anti-abuse charities, identified the material and reported it to law enforcement. LAION, a nonprofit, has since taken down its training data and committed to removing the offending content before republishing it. But the incident underscores how little consideration ethical concerns receive as competitive pressures in the generative AI landscape intensify.

The proliferation of no-code AI model creation tools has made it remarkably easy to train generative AI on diverse datasets. While this facilitates swift model deployment for startups and tech giants, the lower entry barrier raises concerns about the potential disregard for ethics in the race to market.

Ethical considerations in AI development are undeniably challenging. Take the thousands of problematic images in LAION: as this week showed, identifying and removing them is a time-consuming process. Ideally, ethical AI development also means working with all relevant stakeholders, including organizations representing groups disproportionately affected by AI systems.

Instances of AI release decisions prioritizing shareholders over ethicists abound in the industry. Bing Chat (now Microsoft Copilot) launched with controversial statements, and as of October, ChatGPT and Bard were still providing outdated, racist medical advice. OpenAI’s latest version of the image generator DALL-E exhibits signs of Anglocentrism.

In the pursuit of AI superiority, or at least Wall Street’s perception of it, harms are being inflicted. The upcoming EU AI regulations, which threaten fines for noncompliance with specified AI guardrails, offer a glimmer of hope. However, the path ahead remains challenging.

In other recent AI news:

1. Predictions for AI in 2024: Devin outlines expectations for AI in 2024, covering potential impacts on U.S. primary elections and the future of OpenAI.
2. Microsoft Copilot ventures into music creation: Microsoft’s AI-powered chatbot collaborates with the GenAI music app Suno to compose songs.
3. Facial recognition banned at Rite Aid: Following the Federal Trade Commission’s findings, Rite Aid is prohibited from using facial recognition tech for five years.
4. EU supports AI startups: The EU expands its plan to aid homegrown AI startups by providing access to processing power on the bloc’s supercomputers.
5. OpenAI enhances safety measures: OpenAI introduces a “safety advisory group” with veto power to reinforce internal safety processes.

In the realm of AI research and developments:

1. Life2vec: A Danish study uses countless data points from a person’s recorded life events to predict aspects of that life, including expected lifespan, demonstrating the reach of sequence-based machine learning techniques (a rough sketch of the general idea follows this list).
2. Coscientist: CMU scientists create an LLM-based assistant for researchers, capable of autonomously performing lab tasks in chemistry.
3. FunSearch and StyleDrop: Google introduces FunSearch for mathematical discoveries and StyleDrop for efficient style replication in generative imagery.
4. VideoPoet: Google delves into generative video with VideoPoet, which uses an LLM base for a range of video generation tasks.
5. AI models for snow measurement: Swiss researchers utilize AI models to estimate snow depth using satellite imagery, offering a practical alternative to weather stations.
6. Stanford warns of biases in AI health models: Researchers caution about the propagation of old medical racial tropes by AI models, emphasizing the need for vigilance in health-related AI applications.
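
The Life2vec result is easier to picture with a toy example. The general recipe, as reported, is to treat the events of a person’s life as tokens in a sequence and train a transformer-style model to predict an outcome from them. The sketch below illustrates that framing only and is not the study’s actual architecture or data; the vocabulary size, model dimensions, and the LifeEventClassifier name are all assumptions made for the example.

```python
import torch
import torch.nn as nn

VOCAB_SIZE = 1000   # assumed: number of distinct life-event tokens
MAX_EVENTS = 128    # assumed: maximum life events per person
EMBED_DIM = 64      # assumed: embedding size for this toy model

class LifeEventClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB_SIZE, EMBED_DIM)
        self.pos = nn.Embedding(MAX_EVENTS, EMBED_DIM)
        layer = nn.TransformerEncoderLayer(
            d_model=EMBED_DIM, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(EMBED_DIM, 1)  # one scalar outcome per person

    def forward(self, events):
        # events: (batch, seq) integer ids, one token per life event
        positions = torch.arange(events.size(1), device=events.device)
        x = self.embed(events) + self.pos(positions)
        x = self.encoder(x)                             # contextualize the sequence
        return torch.sigmoid(self.head(x.mean(dim=1)))  # pooled outcome probability

# Toy usage with two synthetic "lives" made of random event tokens.
model = LifeEventClassifier()
fake_lives = torch.randint(0, VOCAB_SIZE, (2, MAX_EVENTS))
print(model(fake_lives))  # two outcome probabilities in [0, 1]
```

The actual study works with rich national registry data at far larger scale; the point of the sketch is only the framing of life events as a language-like sequence a model can learn from.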

In conclusion, the dynamic landscape of AI continues to unfold, presenting both promising advancements and ethical challenges that demand careful consideration.
