Jan Leike, a key safety researcher at the firm behind ChatGPT, quit days after the launch of its latest AI model, GPT-4o
A former senior employee at OpenAI has said the company behind ChatGPT is prioritising “shiny products” over safety, revealing that he quit after a disagreement over key aims reached “breaking point”.
Jan Leike was a key safety researcher at OpenAI, serving as its co-head of superalignment, a role focused on ensuring that powerful artificial intelligence systems adhere to human values and aims. His intervention comes before a global artificial intelligence summit in Seoul next week, where politicians, experts and tech executives will discuss oversight of the technology.
Leike resigned days after the San Francisco-based company launched its latest AI model, GPT-4o. His departure is the second by a senior safety figure at OpenAI this week, following the resignation of Ilya Sutskever, the company’s co-founder and fellow co-head of superalignment.
Leike detailed the reasons for his departure in a thread posted on X on Friday, in which he said safety culture had become a lower priority at the company.
“Over the past years, safety culture and processes have taken a backseat to shiny products,” he wrote.