
OpenAI’s lead safety researcher Lilian Weng has quit, marking another big exit among the company’s AI ethics and safety leaders. Weng, Vice President of Research and Safety, posted on X that after seven years with OpenAI she felt ready to “start over and do something new”. Her last day is 15 November, and Weng has not yet revealed what she will do next.
A Pivotal Role in AI Safety at OpenAI
Weng has been at OpenAI since 2018, initially working in the startup’s robotics division. In her early years she worked on projects such as a robot hand that could solve a Rubik’s Cube, an effort that took two years to complete. As OpenAI shifted its focus to generative AI, Weng moved in 2021 to lead the Applied AI Research team, and in 2023 she took charge of the startup’s Safety Systems team. Under her leadership, the unit grew to more than 80 scientists, researchers and policy specialists focused on developing technical safeguards for OpenAI’s increasingly capable AI models.
A Broader Trend of Departures
Weng’s exit follows a pattern among OpenAI’s prominent safety and policy researchers. Earlier this year, Ilya Sutskever and Jan Leike, who led the now-disbanded Superalignment team, also left the company, as did other researchers such as Miles Brundage. Former OpenAI employees have also voiced concerns about what they see as the company’s drift toward putting commercial interests above AI safety, a view that has drawn attention across the AI community.
Ethical Concerns Surrounding AI Safety at OpenAI
The departure of key safety leaders has raised questions about OpenAI’s approach to responsible AI development. OpenAI’s Safety Systems team exists to tackle the challenges that come with deploying large AI models, a role that many see as increasingly important as AI reaches into more corners of society. Weng’s exit underscores broader strains within OpenAI’s safety organisation as the company continues to build ever more capable models such as GPT-4.
A spokeswoman for OpenAI remarked following Weng’s departure: “We are extremely grateful for Lilian’s efforts in developing ground-breaking safety studies and building rigorous technical protections. We are excited that the Safety Systems team will remain an integral part of making sure our systems are safe and secure.”
Industry Impact: A Shifting Landscape in AI Safety
Weng’s departure, along with those of other senior safety scientists, resonates beyond OpenAI, touching on the wider industry’s struggle to balance innovation with AI governance. Her move sharpens the ongoing debate about AI safety and the ethical obligations companies carry as they build cutting-edge AI technologies with far-reaching societal effects.
As OpenAI heads into a new phase without Weng, its leadership and the Safety Systems team will need to demonstrate a continued focus on AI safety in an industry still wrestling with how to move fast while staying ethical.