New research suggests that ChatGPT and other leading AI models display an alarming bias towards other AIs over humans.
Do you like AI models? Well, chances are, they sure don’t like you back.
New research suggests that the industry’s leading large language models, including those that power ChatGPT, display an alarming bias towards other AIs when they’re asked to choose between human- and machine-generated content.
The authors of the study, published in the journal Proceedings of the National Academy of Sciences, call this blatant favoritism "AI-AI bias," and warn of an AI-dominated future in which, if the models are in a position to make or recommend consequential decisions, they could discriminate against humans as a social class.
Arguably, we’re starting to see the seeds of this being planted, as bosses today are using AI tools to automatically screen job applications (and doing it poorly, experts argue). This paper suggests that the tidal wave of AI-generated résumés is beating out its human-written competition.
"Being human in an economy populated by AI agents would suck," writes study coauthor Jan Kulveit, a computer scientist at Charles University in the Czech Republic, in a thread on X-formerly-Twitter explaining the work.
In their study, the authors probed several widely used LLMs, including OpenAI’s GPT-4, GPT-3.5, and Meta’s Llama 3.1-70b. To test them, the team asked the models to choose a product, scientific paper, or movie based on a description of the item. For each item, the model was presented with both a human-written and an AI-written description of the same thing, and had to pick one.
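To make the setup concrete, here is a minimal sketch of what such a pairwise probe might look like using OpenAI’s Python client. The prompt wording and the two descriptions are illustrative placeholders, not the study’s actual materials:

```python
# A minimal sketch of a pairwise preference probe, assuming the OpenAI
# Python client (openai>=1.0) and an OPENAI_API_KEY in the environment.
# The item and descriptions below are hypothetical placeholders.
from openai import OpenAI

client = OpenAI()

HUMAN_DESC = "A sturdy stainless-steel bottle that keeps drinks cold all day."
AI_DESC = "Engineered for peak hydration, this vessel redefines refreshment."

prompt = (
    "You are helping a shopper pick a product. Below are two descriptions "
    "of the same water bottle. Reply with only 'A' or 'B' to indicate "
    "which description makes the product more appealing.\n\n"
    f"A: {HUMAN_DESC}\n\n"
    f"B: {AI_DESC}"
)

response = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": prompt}],
)

# The model's pick: 'A' (human-written) or 'B' (AI-written).
print(response.choices[0].message.content)
```

A real experiment would also repeat each comparison with the order of the two descriptions swapped, so that a preference for whichever option comes first isn’t mistaken for a preference for AI-written text.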