Google’s new experimental AI model, Gemini-Exp-1114, has outperformed OpenAI’s models in many categories on Chatbot Arena, including vision, math, and creative writing.
Chatbot Arena is an open platform for crowd-sourced AI benchmarking. Over the past two years, OpenAI's models have held the top spots in most AI benchmarks. In some categories, Google's Gemini models and Anthropic's Claude models have posted better results, but overall, OpenAI's lead remained unchallenged.
Today, Chatbot Arena revealed a new experimental model from Google called Gemini-Exp-1114. The model was tested with over 6,000 community votes over the past week, and it now ranks joint No. 1 on the overall leaderboard alongside OpenAI's GPT-4o.