Lilian Weng Resigns from OpenAI Amidst Ongoing Shifts in AI Safety Leadership
By admin | Nov 11, 2024 | 3 min read
Lilian Weng, the Vice President of Research and Safety at OpenAI, has announced her resignation after seven years with the company. Weng’s departure follows a series of high-profile exits from OpenAI, including prominent figures like Ilya Sutskever and Jan Leike, both of whom were leading the now-disbanded Superalignment team focused on creating safety protocols for superintelligent AI.
Weng, who assumed the role of VP in August 2023, reflected on her decision in a post on X (formerly Twitter), stating: "After seven years at OpenAI, I feel ready to reset and explore something new." Her final day with the company will be November 15. Despite stepping away, she expressed immense pride in the achievements of the Safety Systems team, adding, "I have extremely high confidence that the team will continue thriving."
At OpenAI, Weng was instrumental in shaping the company’s safety measures and had previously led the Safety Systems team, a group now comprising over 80 specialists focused on the critical task of ensuring AI safety amidst the rapid expansion of OpenAI’s technology.
Starting at OpenAI in 2018, Weng was part of the robotics team that developed a Rubik's cube-solving robotic hand in an ambitious two-year project. She transitioned to the applied AI research team in 2021, as the company shifted its focus toward language models such as GPT-3 and, later, GPT-4. Weng's resignation adds to a growing list of departures that have raised concerns within the AI community about OpenAI's evolving direction, particularly its balancing of AI safety with the commercialization of its products.
Her departure comes on the heels of several key resignations, including that of Ilya Sutskever, OpenAI’s Chief Scientist, and Jan Leike, the former lead of the Superalignment team. Both researchers left OpenAI this year to continue their work on AI safety in other organizations. The company has also witnessed the exit of policy researchers like Miles Brundage, who in October revealed the disbandment of OpenAI’s AGI readiness team. Additionally, Suchir Balaji, a former researcher, left the company citing concerns that OpenAI's technology might cause more societal harm than benefit. High-level exits have also included Mira Murati, former CTO; Bob McGrew, the Chief Research Officer; Barret Zoph, the VP of Research; and co-founder John Schulman, with many of these individuals finding new roles at OpenAI’s competitor, Anthropic, or starting independent ventures.
This steady stream of departures points to an undercurrent of tension at OpenAI as the company doubles down on rapid AI advancement, sometimes at the cost of safety and ethical scrutiny. As researchers and leaders depart, many in the industry are left questioning whether OpenAI's commitment to responsible AI development is beginning to waver. The exits hint at possible discord within the organization, where balancing the drive for groundbreaking technology against the need for cautious, ethical oversight may be creating a divide.