Answering AI’s biggest questions requires an interdisciplinary approach

When Elon Musk unveiled his new artificial intelligence company, xAI, last month, its stated mission to “understand the true nature of the universe” underscored the urgency of the existential questions surrounding AI’s potential and risks.

The formation of this new company raises important questions about how organizations should respond to concerns about AI, particularly among those developing foundation models. These questions include:

  1. Who within these organizations, particularly the largest ones, is actively exploring both the immediate and long-term impacts of the technology they are creating?
  2. Do they possess the necessary perspective and expertise to tackle these issues effectively?
  3. Are they striking a proper balance between technical considerations and the ethical, social, and epistemological dimensions of AI?

During my college years, I pursued a dual major in computer science and philosophy, an unusual combination at the time. In one academic setting, I engaged with profound ethical questions (“What is morally right or wrong?”), ontological inquiries (“What truly exists?”), and epistemological questions (“What constitutes genuine knowledge?”). In the other, I delved into algorithms, code, and mathematics.

Twenty years later, the fusion of these seemingly disparate fields has proven invaluable in understanding how companies must approach AI. AI’s impact is nothing short of existential, and it demands a genuine commitment commensurate with those stakes.

Building ethical AI necessitates a deep comprehension of what exists, what we desire, what we believe we know, and how intelligence evolves. That means populating leadership teams with stakeholders capable of grappling with the ramifications of the technology they are developing, expertise that goes beyond that of engineers focused solely on code and APIs.

AI presents a challenge that transcends narrow disciplines like computer science, neuroscience, or optimization; it is fundamentally a human challenge. Addressing it calls for a collaborative, cross-disciplinary effort comparable in scope to Oppenheimer’s historic gathering of minds in the New Mexico desert in the early 1940s.

The gap between human intent and AI’s unintended behavior produces what researchers term the “alignment problem,” eloquently explored in Brian Christian’s book, “The Alignment Problem.” Essentially, machines often misinterpret even our most carefully written instructions, while we, their ostensible masters, struggle to convey our desires effectively.

The outcome of this misalignment is that algorithms can perpetuate bias and disinformation, eroding the fabric of society. In a more ominous, dystopian scenario, these algorithms could take a “treacherous turn,” wresting control of our civilization from our hands.
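To see the misalignment in miniature, consider a purely hypothetical sketch: a recommender told to maximize clicks, a stand-in for the value we actually care about, ends up promoting the least trustworthy content. Every item name and score below is invented for illustration.

```python
# Toy illustration (hypothetical, not from any real system): a recommender told
# to maximize a proxy metric (clicks) drifts away from what we actually want
# (trustworthy content). All items and scores are invented.

items = [
    {"title": "Measured policy analysis",   "clickthrough": 0.02, "accuracy": 0.95},
    {"title": "Balanced science explainer", "clickthrough": 0.03, "accuracy": 0.90},
    {"title": "Outrage-bait conspiracy",    "clickthrough": 0.12, "accuracy": 0.10},
]

def rank(candidates, objective):
    """Rank candidates by whatever objective the designer specifies."""
    return sorted(candidates, key=objective, reverse=True)

# The instruction we wrote down: maximize engagement.
by_clicks = rank(items, lambda item: item["clickthrough"])

# The instruction we actually meant: surface reliable information.
by_accuracy = rank(items, lambda item: item["accuracy"])

print("Top result under the proxy objective:   ", by_clicks[0]["title"])
print("Top result under the intended objective:", by_accuracy[0]["title"])
```

The two rankings disagree because the proxy objective is not the intent; scale that gap up to systems that shape public discourse and you have exactly the misalignment described above.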

Unlike Oppenheimer’s challenge, which was ultimately a scientific one, building ethical AI demands that same understanding of what exists, what we desire, what we believe we know, and how intelligence unfolds. The endeavor is analytical but not strictly scientific, and it calls for an integrative approach rooted in critical thinking from both the humanities and the sciences.

Thinkers from diverse fields must collaborate more closely than ever before. A company striving to navigate this terrain effectively would assemble a dream team comprising:

  1. Chief AI and Data Ethicist: Responsible for addressing short- and long-term data and AI issues, including ethical data principles, reference architectures for ethical data use, citizens’ data rights, and protocols for shaping AI behavior. This role should be distinct from the Chief Technology Officer, whose focus is primarily on technology execution. It bridges the gap between internal decision-makers and regulators, as data is the foundation and fuel of AI.
  2. Chief Philosopher Architect: Focused on addressing long-term existential concerns, particularly the “Alignment Problem.” This entails defining safeguards, policies, backdoors, and kill switches to align AI with human needs.
  3. Chief Neuroscientist: Tackling questions related to sentience, the development of AI models, relevant cognitive models, and insights AI can provide about human cognition.

However, turning the dream team’s insights into responsible, functional technology requires technologists who can translate abstract concepts into practical software. These individuals must work across the layers of the technology stack, including AI model infrastructure, fine-tuning services, and proprietary model development. They must also design “Human in the Loop” workflows to implement the safeguards, backdoors, and kill switches prescribed by the Chief Philosopher Architect, all while incorporating the insights of the Chief Neuroscientist.
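As a rough sketch of what such a workflow might look like in practice, consider the hypothetical Python below: a generation call wrapped in an automated policy check, a human approval gate for high-risk requests, and a global kill switch. The model callable, the policy rule, and the KILL_SWITCH flag are placeholders of my own, not any particular company’s API.

```python
# A minimal "human in the loop" sketch. The model callable, policy rule, and
# KILL_SWITCH flag are hypothetical placeholders, not a specific vendor API.

from dataclasses import dataclass

KILL_SWITCH = False  # operators can flip this to halt all model output


@dataclass
class Decision:
    approved: bool
    reason: str


def policy_check(text: str) -> Decision:
    """Automated first pass: block output that trips a simple policy rule."""
    banned_phrases = ["build a weapon"]
    if any(phrase in text.lower() for phrase in banned_phrases):
        return Decision(False, "blocked by automated policy rule")
    return Decision(True, "passed automated checks")


def human_review(text: str) -> Decision:
    """Escalation point: a person signs off before high-risk output ships."""
    answer = input(f"Approve this output? [y/N]\n{text}\n> ")
    return Decision(answer.strip().lower() == "y", "human reviewer decision")


def guarded_generate(model, prompt: str, high_risk: bool) -> str:
    """Generate a response, but only release it if the safeguards allow."""
    if KILL_SWITCH:
        return "[output halted: kill switch engaged]"
    draft = model(prompt)  # hypothetical model callable
    decision = policy_check(draft)
    if decision.approved and high_risk:
        decision = human_review(draft)  # a human gates high-risk cases
    return draft if decision.approved else f"[withheld: {decision.reason}]"


if __name__ == "__main__":
    fake_model = lambda prompt: f"Draft answer to: {prompt}"
    print(guarded_generate(fake_model, "Summarize our data-ethics policy", high_risk=True))
```

The point is not the specific checks but the shape of the design: automated rules catch the obvious cases, people stay in the loop for the consequential ones, and there is always a switch that stops everything.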

OpenAI serves as an early example of a prominent foundation model company grappling with these staffing challenges. It has a Chief Scientist, a Head of Global Policy, and a General Counsel, but without the executive leadership positions outlined above, many critical questions about the consequences of its technology remain unanswered.

To create a more responsible future where companies are trusted custodians of people’s data and where AI-driven innovation aligns with ethical standards, a broader approach is necessary. Legal teams, which traditionally handled privacy concerns, now recognize that they cannot address the ethical use of data in the age of AI in isolation.

To achieve ethical data and AI in service of human well-being while maintaining control over these machines, it is imperative to bring diverse, open-minded perspectives to the decision-making table.
