The man who was supposed to keep ChatGPT safe is working on AI superintelligence instead

After leaving work on ChatGPT, Ilya Sutskever announces his next AI company: Safe Superintelligence. But don’t expect a product anytime soon.
About a month ago, Ilya Sutskever announced his departure from OpenAI. Considering what had happened in November when OpenAI CEO Sam Altman was fired and then promptly rehired following all the outrage, this wasn’t a surprise.
Sutskever was a cofounder and chief scientist at OpenAI. He was also the man who was supposed to make ChatGPT and other AI products safe for us, and was instrumental in both Altman’s firing and the CEO’s return. That’s what the public was told, at least.
Sutskever chose to depart OpenAI at a time when the newest ChatGPT developments were turning lots of heads. I’m talking about the GPT-4o model OpenAI unveiled a day before Google’s big I/O 2024 event, which was focused almost entirely on AI.
I said at the time that I couldn’t wait to see what Sutskever would work on next, as he teased “a project that is very personally meaningful to me about which I will share details in due time.” Fast-forward to mid-June, and we now have the name and purpose of that project.
The new AI company is called Safe Superintelligence Inc. (SSI), and we probably won’t hear any specifics about its products anytime soon.
Safe Superintelligence has two other cofounders: former Apple AI lead Daniel Gross and former OpenAI engineer Daniel Levy. Sutskever announced the new venture on X, where he outlined the purpose of Safe Superintelligence. Here’s part of the announcement:
SSI is our mission, our name, and our entire product roadmap, because it is our sole focus.