An important personnel change is brewing at OpenAI, the juggernaut in artificial intelligence that nearly single-handedly injected the idea of generative AI into the global public lexicon with the release of ChatGPT. Dave Willner, an industry veteran and former head of trust and safety at the startup, posted on LinkedIn last night (first spotted by Reuters) that he is no longer in the role, moving into an advisory capacity instead. He said he's planning to spend more time with his young family. He had been on the job for a year and a half.
OpenAI said in a statement that it is looking for a successor, and CTO Mira Murati will manage the team on an interim basis. "We thank Dave for his valuable contributions to OpenAI," it said. (The full statement is below.)
His exit comes at a crucial moment in the AI world.
There's excitement, and even panic, building around platforms based on large language and other foundation models, which are lightning-fast at generating text, images, music, and much else in response to simple user prompts. And with it all, a growing list of questions: How best to regulate activity and companies in this brave new world? How best to mitigate harmful impacts across a whole spectrum of issues? Trust and safety are foundational parts of those conversations.
Just this morning, OpenAI president Greg Brockman was scheduled to appear at the White House alongside executives from Anthropic, Google, Inflection, Microsoft, Meta, and Amazon to endorse a set of shared commitments to safety and transparency, ahead of an expected executive order on AI. It comes amid a great deal of regulatory noise in Europe around AI, along with shifting sentiments elsewhere.
Willner doesn't explicitly reference any of that in his LinkedIn post. Instead, he keeps it high-level, noting that the demands of his OpenAI job shifted into a "high-intensity phase" after the launch of ChatGPT.
"I'm really proud of everything our team has accomplished in my time at OpenAI, and even though my job there is one of the coolest and most interesting jobs it is possible to have today, it had also grown dramatically in its scope and scale since I first joined," he wrote. While he and his wife, Charlotte Willner — also a trust and safety specialist — had both vowed always to put family first, he said, "in the months following the launch of ChatGPT, I've found it more and more difficult to keep up my end of the bargain.
He joined OpenAI roughly a year and a half ago, but he has a long career in the field, including earlier stints leading trust and safety teams at Facebook and Airbnb.
It is his work at Facebook that is of particular interest here. There, he was one of the company's very early employees, helping to spell out its first community standards position, which still serves as the basis of its approach today.
That was a formative period for the company, and arguably, given the influence Facebook has had on how social media has developed globally, for the internet and society overall. Some of those years were marked by outspoken positions on free speech, and how Facebook needed to resist calls to rein in controversial groups and controversial posts.
One case in point was a big controversy in 2009, played out publicly, over how Facebook was handling accounts and posts from Holocaust deniers. Some employees and outside observers felt Facebook had a duty and obligation to take a stand and ban such posts. Others argued that doing so would be a form of censorship and would compromise Facebook's stance on free speech.
Willner was in the latter camp. He argued that "hate speech" was not the same as "direct harm," and should therefore not be moderated on the same basis. "I do not believe that Holocaust Denial, as an idea on it's [sic] own, inherently represents a threat to the safety of others," he wrote at the time. (For a trip down TechCrunch memory lane, see the original post on this here.)
In retrospect, given everything that has unfolded, it was a rather short-sighted and naive position. But at least some of those ideas did evolve. By 2019, no longer employed by the social network, he was speaking out against how the company wanted to grant politicians and public figures weaker content moderation exceptions.
But if the need for the right groundwork at Facebook turned out to be more momentous than people realized at the time, that is arguably even more the case for the next wave of technology. According to this New York Times story from less than a month ago, Willner had originally been brought on at OpenAI to help it figure out how to keep DALL-E, the startup's image generator, from being misused for things like the creation of generative AI child pornography.
But as the saying goes, OpenAI (and the industry) needs that policy yesterday. "Within a year, we're going to be reaching very much a problem state in this area," David Thiel, the chief technologist of the Stanford Internet Observatory, told the NYT.
Now, without Willner, who will lead OpenAI's charge to address that?
Update: After publishing, OpenAI provided the following statement:
"We thank Dave for his valuable contributions to OpenAI.". His work has been foundational in operationalizing our commitment to the safe and responsible use of our technology and paved the way for future progress in this field. Mira Murati will manage the team directly on an interim basis, and Dave will continue to advise through the end of the year. We are looking for a technically-skilled leader who will help us advance the mission of designing, developing, and implementing systems that assure safe usage and scalable growth in our technology.