OpenAI’s head of trust and safety Dave Willner steps down

A significant personnel change is afoot at OpenAI, the artificial intelligence juggernaut that has nearly single-handedly inserted the concept of generative AI into global public discourse with the launch of ChatGPT. Dave Willner, an industry veteran who was the startup’s head of trust and safety, announced in a post on LinkedIn last night (first spotted by Reuters) that he has left the job and transitioned to an advisory role. He plans to spend more time with his young family, he said. He’d been in the role for a year and a half.

OpenAI said in a statement that it’s seeking a replacement and that CTO Mira Murati will manage the team on an interim basis. “We thank Dave for his valuable contributions to OpenAI,” it said. The full statement is below.

His departure comes at a critical time for the world of AI.

Alongside all the excitement about the capabilities of generative AI platforms — which are built on large language and other foundation models, and are lightning-fast at producing freely generated text, images, music and more from simple user prompts — there has been a growing list of questions. How best to regulate activity and companies in this brave new world? How best to mitigate any harmful impacts across a whole spectrum of issues? Trust and safety are foundational parts of those conversations.

Just today, OpenAI’s president Greg Brockman is due to appear at the White House alongside execs from Anthropic, Google, Inflection, Microsoft, Meta and Amazon to endorse voluntary commitments to pursue shared safety and transparency goals ahead of an AI executive order that’s in the works. That comes in the wake of a lot of noise in Europe related to AI regulation, as well as shifting sentiment elsewhere.

The importance of all this is not lost on OpenAI, which has sought to position itself as an aware and responsible player in the field.

Willner makes no specific reference to any of that in his LinkedIn post. Instead, he keeps it high-level, noting that the demands of his OpenAI job shifted into a “high-intensity phase” after the launch of ChatGPT.

“I’m proud of everything our team has accomplished in my time at OpenAI, and while my job there was one of the coolest and most interesting jobs it’s possible to have today, it had also grown dramatically in its scope and scale since I first joined,” he wrote. While he and his wife — Charlotte Willner, who is also a trust and safety specialist — both made commitments to always put family first, he said, “in the months following the launch of ChatGPT, I’ve found it more and more difficult to keep up my end of the bargain.”

Willner was in his OpenAI post for just a year and a half, but he arrived with a long career in the field, one that includes leading trust and safety teams at Facebook and Airbnb.

The Facebook work is especially interesting. There, he was an early employee who helped spell out the company’s first community standards position, which is still used as the basis of the company’s approach today.

That was a very formative period for the company, and arguably — given the influence Facebook has had on how social media has developed globally — for the internet and society overall. Some of those years were marked by very outspoken positions on freedom of speech, and on how Facebook needed to resist calls to rein in controversial groups and controversial posts.

One case in point was a very big dispute in 2009, played out in public, over how Facebook was handling accounts and posts from Holocaust deniers. Some employees and outside observers felt that Facebook had a duty to take a stand and ban those posts. Others believed that doing so was akin to censorship and sent the wrong message around free discourse.

Willner was in the latter camp, believing that “hate speech” was not the same as “direct harm” and should therefore not be moderated in the same way. “I do not believe that Holocaust Denial, as an idea on it’s [sic] own, inherently represents a threat to the safety of others,” he wrote at the time. (For a blast from the TechCrunch past, see the full post on this here.)

In retrospect, given how so much else has played out, it was a pretty short-sighted, naïve position. But it seems that at least some of those ideas did evolve. By 2019, no longer employed by the social network, he was speaking out against the company’s plans to hold politicians and public figures to weaker content moderation standards.

But if the need for laying the right groundwork at Facebook was bigger than people anticipated at the time, that is arguably even more the case now for the new wave of tech. According to this New York Times story from less than a month ago, Willner had been brought on to OpenAI initially to help it figure out how to keep DALL-E, the startup’s image generator, from being misused for things like the creation of generative AI child pornography.

But as the saying goes, OpenAI (and the industry) needs that policy yesterday. “Within a year, we’re going to be reaching very much a problem state in this area,” David Thiel, the chief technologist of the Stanford Internet Observatory, told the NYT.

Now, without Willner, who will lead OpenAI’s charge to address that?

Update: After publishing, OpenAI provided the following statement:

“We thank Dave for his valuable contributions to OpenAI. His work has been foundational in operationalizing our commitment to the safe and responsible use of our technology, and has paved the way for future progress in this field. Mira Murati will directly manage the team on an interim basis, and Dave will continue to advise through the end of the year. We are seeking a technically-skilled lead to advance our mission, focusing on the design, development, and implementation of systems that ensure the safe use and scalable growth of our technology.”
