President Biden issues executive order to set standards for AI safety and security

U.S. President Joe Biden has issued an executive order (EO) aimed at establishing “new standards” for the safety and security of artificial intelligence (AI). These standards include requirements for companies developing foundational AI models to inform the federal government and share the results of safety tests before deploying them to the public.

The rapid progress in generative AI, exemplified by systems like ChatGPT and foundational AI models created by OpenAI, has triggered a global debate on the need for safeguards against potential risks associated with ceding excessive control to algorithms. In May, G7 leaders identified key themes within the so-called Hiroshima AI Process, leading to an agreement among the seven countries on guiding principles and a “voluntary” code of conduct for AI developers.

Recently, the United Nations (UN) introduced a new board to explore AI governance, while the United Kingdom is hosting a global summit on AI governance at Bletchley Park, with U.S. Vice President Kamala Harris scheduled to speak at the event.

The Biden-Harris Administration has been emphasizing AI safety through “voluntary commitments” from major AI developers, including OpenAI, Google, Microsoft, Meta, and Amazon, as a prelude to the executive order being announced today.

The executive order focuses on ensuring that developers of the “most powerful AI systems” share their safety test results and related data with the U.S. government. It aims to protect Americans from potential risks associated with AI systems as their capabilities expand.

Invoking the Defense Production Act of 1950, the order specifically targets foundational AI models that could pose risks to national security, economic security, or public health. This approach is intended to ensure that AI systems are safe, secure, and trustworthy before they are made available to the public.

The executive order also outlines plans to create new tools and systems to enhance the safety and trustworthiness of AI. The National Institute of Standards and Technology (NIST) is tasked with developing new standards for comprehensive red-team testing before release. These tests will cover various domains, with the Departments of Energy and Homeland Security addressing AI-related risks in critical infrastructure.

Additionally, the order introduces directives and standards to address issues such as using AI to engineer dangerous biological materials, AI-powered fraud and deception, and establishing a cybersecurity program to develop AI tools for addressing vulnerabilities in critical software.

The order acknowledges concerns related to equity and civil rights, emphasizing how AI can exacerbate discrimination and bias in healthcare, justice, and housing, alongside risks around workplace surveillance and job displacement. However, some may find the order lacking concrete enforcement measures, as many of its provisions amount to recommendations and guidelines.

Although the executive order outlines guidelines for integrating safety and security into AI systems, enforcing them may require further legislation. For example, it raises concerns about data privacy but mainly calls on Congress to pass “bipartisan data privacy legislation” to protect Americans’ data, while seeking federal support for developing privacy-preserving AI techniques.

As Europe prepares to pass extensive AI regulations, it remains uncertain how influential President Biden’s executive order will be in regulating entities like OpenAI, Google, Microsoft, and Meta in the evolving landscape of AI technology.
