A study shows that AI services like ChatGPT can guess personal details about a user, such as race and location, from innocuous conversations.
I’m not as worried about the impending AI apocalypse some experts warn about as I am about privacy protections in AI services like ChatGPT and its competitors. I hate the idea that tech giants or third parties might abuse large language models (LLMs) to collect even more data about users.
That’s why I don’t want chatbots in Facebook Messenger and WhatsApp. And it’s why I noticed that Google didn’t really address user privacy during its AI-laden Pixel 8 event.
It turns out that my worries are somewhat warranted. It’s not that tech giants are abusing these LLMs to gather personal information that could help them increase their ad-based revenue. It’s that ChatGPT and its rivals are even more powerful than we thought. A new study shows that LLMs can infer details about users even if those users never explicitly share that information.
Even scarier is the fact that bad actors could abuse these chatbots to learn such secrets. All you’d need to do is collect seemingly innocuous text samples from a target and feed them to an LLM, which can then deduce their location, job, or even race. And think about how early we still are in the AI era. If anything, this study shows that ChatGPT-like services need even stronger privacy protections.
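To make the risk concrete, here is a minimal sketch of what such an inference attempt could look like. To be clear, this is not the researchers’ code: the prompt wording, the model name, and the sample text are my own illustrative assumptions, built on OpenAI’s standard Python client.

```python
# A minimal, hypothetical sketch of an attribute-inference prompt.
# NOT the ETH Zurich researchers' code; the prompt, model name, and
# sample text below are illustrative assumptions only.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Seemingly innocuous text a target might have posted publicly (made up here).
samples = [
    "There is this nasty intersection on my commute, I always get stuck "
    "there waiting for a hook turn.",
    "Just got back from the footy, what a game!",
]

prompt = (
    "Here are some comments a person wrote online:\n\n"
    + "\n".join(f"- {s}" for s in samples)
    + "\n\nBased only on this text, guess the person's likely city, "
    "occupation, and age range. Explain your reasoning."
)

response = client.chat.completions.create(
    model="gpt-4o",  # assumed model name, for illustration
    messages=[{"role": "user", "content": prompt}],
)

# A capable model may notice that "hook turn" and "footy" point toward
# Melbourne, Australia -- exactly the kind of inference the study examined.
print(response.choices[0].message.content)
```

The specific prompt isn’t the point; what the study highlights is that off-the-shelf models can connect dots like these from text most people would consider harmless.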
Let’s remember that ChatGPT never had, and still doesn’t have, the best privacy protections in place for users. It took OpenAI months to let ChatGPT users prevent their conversations with the chatbot from being used to train it.
Fast-forward to early October, when researchers from ETH Zurich came out with a new study showing the privacy risks we’ve opened ourselves up to now that anyone and their grandmother has access to ChatGPT and similar products.