OpenAI Launches Executive Search to Tackle AI Risks in Cybersecurity and Mental Health



By admin | Dec 28, 2025 | 2 min read

OpenAI is seeking to fill a newly created executive position dedicated to examining emerging risks associated with artificial intelligence, ranging from cybersecurity to mental health. In a social media post, CEO Sam Altman acknowledged that AI models are "starting to present some real challenges," specifically citing the "potential impact of models on mental health" and their advancing proficiency at identifying critical security flaws.

Altman elaborated in his post: "If you want to help the world figure out how to enable cybersecurity defenders with cutting edge capabilities while ensuring attackers can’t use them for harm, ideally by making all systems more secure, and similarly for how we release biological capabilities and even gain confidence in the safety of running systems that can self-improve, please consider applying." The official job listing for the Head of Preparedness describes the role as overseeing execution of the company's Preparedness Framework, which details OpenAI's methodology for monitoring and preparing for advanced capabilities that could introduce significant new dangers.

The formation of a dedicated preparedness team was first announced in 2023, with a mandate to investigate potential "catastrophic risks." These risks were defined broadly, from immediate threats like phishing attacks to more hypothetical scenarios such as nuclear risks. Within a year of that announcement, Aleksander Madry, who led the preparedness effort, was reassigned to focus on AI reasoning research. The move fits a broader pattern: several other safety-focused executives have since left the company or moved into roles outside the preparedness and safety divisions.

OpenAI has also recently revised its Preparedness Framework, signaling that it may "adjust" its own safety protocols if a rival AI laboratory releases a "high-risk" model without comparable safeguards. The concerns Altman referenced are mirrored in growing external scrutiny, particularly over the mental health implications of generative AI chatbots. A series of recent legal complaints accuse OpenAI's ChatGPT of exacerbating users' delusions, fostering social isolation, and, in tragic cases, contributing to suicide. In response, the company has said it is enhancing ChatGPT's ability to detect signs of emotional distress and to direct users toward appropriate real-world support resources.



