
5 Things You Should Never Tell ChatGPT


It’s always dangerous to reveal private information on the internet, and your ChatGPT conversations shouldn’t be thought of as private or secure.
ChatGPT responds to about 2.5 billion prompts each day, with the US accounting for 330 million of these. Unlike a search engine, which returns a list of websites that may or may not contain the answer to our query, an AI chatbot responds more like a friend would. People are using AI tools like ChatGPT in some very weird ways, but caution is essential when sharing information.
Chatbots like ChatGPT are not one-way tools that fire out responses based on a static database of information. They are continually evolving systems that learn from the data they are fed, and that information doesn't exist in a vacuum. While systems like ChatGPT are designed with safeguards in place, there are significant and warranted concerns about how effective those safeguards really are.
For instance, in January 2025, Meta AI fixed a bug that had allowed users to access private prompts from other users. With ChatGPT specifically, earlier versions were susceptible to prompt injection attacks that allowed attackers to intercept personal data. Another security flaw was Google's (and other search engines') unfortunate tendency to index shared ChatGPT chats and make them publicly available in search results.
This means that the basic rules of digital hygiene that we apply to other aspects of our online presence should equally apply to ChatGPT. Indeed, given the controversy surrounding the technology's security and its relative immaturity, it could be argued that even more prudence is required when dealing with AI chatbots. Bearing this in mind, let's look at five things you should never share with ChatGPT.

Personally Identifiable Information
Perhaps the most obvious starting point is the sharing (or, preferably, not) of personally identifiable information (PII). As an example, the Cyber Security Intelligence website recently published an article based on research by Safety Detectives, a group of cybersecurity experts. The research examined 1,000 publicly available ChatGPT conversations, and the findings were eye-opening: users frequently shared details like full names and addresses, ID numbers, phone numbers, email addresses, and usernames and passwords. The last of these is especially concerning given the rise of agentic AI browsers like Atlas, OpenAI's ChatGPT-based browser.
There is no doubt that ChatGPT is genuinely helpful for tasks such as resumes and cover letters. However, it can do the job perfectly well without your personal details. Placeholders work just as well, as long as you remember to swap the real details back in before that critical letter goes out under the name of John Doe, Nowhere Street, Middletown. Ultimately, this one simple step keeps sensitive data such as names, addresses, and ID numbers, all of which can be misused, from falling into the wrong hands.
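For those who like to do the swap methodically, here is a minimal sketch of the placeholder approach in Python. The names, contact details, and the redact/restore helpers are purely illustrative (they are not part of any ChatGPT feature), and everything runs locally on your own machine before anything is pasted into a chat.

```python
# A minimal sketch of the placeholder technique. The personal details
# below are invented, and redact()/restore() are illustrative helpers,
# not part of any ChatGPT tool. All substitution happens locally.
PLACEHOLDERS = {
    "Jane Smith": "[FULL_NAME]",
    "jane.smith@example.com": "[EMAIL]",
    "555-867-5309": "[PHONE]",
    "12 Elm Street, Middletown": "[ADDRESS]",
}

def redact(text: str) -> str:
    """Swap real details for placeholders before pasting into ChatGPT."""
    for real, placeholder in PLACEHOLDERS.items():
        text = text.replace(real, placeholder)
    return text

def restore(text: str) -> str:
    """Swap the placeholders back for the real details afterwards."""
    for real, placeholder in PLACEHOLDERS.items():
        text = text.replace(placeholder, real)
    return text

draft = "Dear hiring manager, I am Jane Smith (jane.smith@example.com)."
safe = redact(draft)    # this version is safe to share with the chatbot
final = restore(safe)   # run on the chatbot's reply before sending it out
print(safe)
```

The same idea works perfectly well by hand, of course; the point is simply that the real details never leave your device.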
Another option is to opt out of having your chats used for model training. This can be done from within the ChatGPT settings, and full instructions can be found on the OpenAI website.
