
OpenAI's DALL·E 2 doesn't understand some secret language


Plus: Who is responsible for libel by ML models, and more
In brief AI text-to-image generation models are all the rage right now. You give them a short description of a scene, such as "a vulture typing on a laptop," and they come up with an illustration resembling that description. That's the theory, anyway. But developers with special access to OpenAI's text-to-image engine DALL·E 2 have found all sorts of weird behaviors, including what may be a hidden, made-up language.

Giannis Daras, a PhD student at the University of Texas at Austin, shared artwork produced by DALL·E 2 given the input "Apoploe vesrreaitais eating Contarra ccetnxniams luryca tanniounons", a phrase that means nothing to humans. Yet the model consistently returned images of birds eating bugs.
Daras claimed the examples showed DALL·E 2 has an understanding of some mysterious language, in which "Apoploe vesrreaitais" apparently means birds and "Contarra ccetnxniams luryca tanniounons" means bugs or pests. But another researcher, Benjamin Hilton, tested Daras's claims by adding the words "3D render" to the same prompt. Instead of birds eating bugs, Hilton got pictures of "sea-related things." The prompt "Contarra ccetnxniams luryca tanniounons" on its own also generated images of random animals rather than bugs.
Tweaking the input by adding random words to the original "Apoploe vesrreaitais eating Contarra ccetnxniams luryca tanniounons" prompt makes DALL·E 2 produce strange images of grandmas, beetles, or vegetables. Hilton argues this shows the model has no secret understanding of an unknown language; instead, it demonstrates the random nature of these systems' associations. Why exactly DALL·E 2 links those images with the gibberish inputs remains unclear.

Reports of a racially biased algorithm used to help social workers decide whether to investigate families for child neglect have prompted officials in Oregon to drop a similar tool they had used since 2018.
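For readers curious how this kind of prompt probing might be automated, here is a minimal sketch assuming the legacy openai Python client and image-generation API access (DALL·E 2 access was invitation-only at the time, so the endpoint, image sizes, and API-key handling shown are assumptions for illustration, not a description of how Daras or Hilton worked):

    # Hypothetical sketch: generate images for several variants of a prompt
    # and print their URLs for manual side-by-side inspection.
    # Assumes the pre-1.0 openai Python package (openai.Image.create).
    import openai

    openai.api_key = "YOUR_API_KEY"  # placeholder

    base = "Apoploe vesrreaitais eating Contarra ccetnxniams luryca tanniounons"
    variants = [
        base,                                        # original gibberish prompt
        base + ", 3D render",                        # Hilton's added modifier
        "Contarra ccetnxniams luryca tanniounons",   # second phrase on its own
    ]

    for prompt in variants:
        response = openai.Image.create(prompt=prompt, n=4, size="512x512")
        print(prompt)
        for item in response["data"]:
            print("  ", item["url"])

Comparing the returned images across such variants, as Hilton did by hand, is what exposed how loosely the gibberish phrases are tied to any single concept.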
