Lilian Weng, another lead safety researcher at OpenAI, says she is leaving the startup. Weng became VP of research and safety in August; before that, she led OpenAI's safety systems team.
In a post on X, Weng wrote that "after 7 years at OpenAI, I feel ready to reset and explore something new." Weng said her last day will be November 15th, but did not say where she is headed next.
"I made the extremely difficult decision to leave OpenAI," said Weng in the post. "Looking at what we have achieved, I'm so proud of everyone on the Safety Systems team and I have extremely high confidence that the team will continue thriving."
Weng's resignation is the latest in a long string of AI safety researchers, policy researchers, and other executives who have exited the company over the last year, several of whom have publicly accused OpenAI of prioritizing commercial products over AI safety. Weng joins Ilya Sutskever and Jan Leike, the leaders of OpenAI's now-dissolved Superalignment team, which aimed to develop methods to steer superintelligent AI systems, who also left the startup this year to work on AI safety elsewhere.
According to her LinkedIn profile, Weng joined OpenAI in 2018 and worked on the startup's robotics team, building a robot hand that could solve a Rubik's Cube, a feat that, according to her post, took two years to accomplish.
As OpenAI shifted its focus to the GPT paradigm, Weng transitioned in 2021 to help build the startup's applied AI research team. Following the release of GPT-4 in 2023, Weng was tasked with creating a dedicated team to build safety systems for the startup. Today, OpenAI's safety systems unit counts more than 80 scientists, researchers, and policy experts, according to Weng's post.
Many in the AI safety community have grown uneasy about OpenAI's commitment to safety as it seeks to build dramatically more powerful AI systems. Longtime policy researcher Miles Brundage departed the startup in October, announcing that OpenAI was disbanding its AGI readiness team, which he had advised. The same day, The New York Times published an interview with former OpenAI researcher Suchir Balaji, who said he left OpenAI because he believed the startup's technology would bring more harm than good to society.
OpenAI tells TechCrunch that executives and safety researchers are working on a transition to replace Weng.
"We are deeply grateful for the contributions Lilian made toward breakthrough safety research and the building of rigorous technical safeguards," an OpenAI spokesperson said via email. "We are confident that the Safety Systems team will continue playing a key role in ensuring our systems are safe and reliable, serving hundreds of millions of people all over the world."
Other recent departures include CTO Mira Murati, chief research officer Bob McGrew, and VP of research Barret Zoph. In August, Andrej Karpathy, one of the company's most high-profile researchers, announced he'd be leaving the startup, as did co-founder John Schulman. Several of those who left, including Leike and Schulman, have since joined competitor Anthropic, while others have launched their own startups.