
5 ways to catch AI in its lies and fact-check its outputs for your research


If you’re using AI for research, follow these five best practices for better, more accurate chatbot results every time.
Sometimes, I think AI chatbots are modeled after teenagers. They can be very, very good. But other times, they tell lies. They make stuff up. They confabulate. They confidently give answers based on the assumption that they know everything there is to know, but they’re woefully wrong.
See what I mean? You can’t tell from the context above whether my descriptions refer to AIs or teenagers.
While most of us know not to go to teenagers for important information and advice, we’re starting to rely on equally prevaricating AIs. To be fair, AIs aren’t bad; they’re just coded that way.
Last year, I gave you eight ways to reduce ChatGPT hallucinations.
However, those tips were all things to avoid. I didn’t give you proactive tools for digging into a chatbot’s answers and guiding the AI toward more productive responses. That guidance is particularly important if you’re going to use an AI as a search-engine replacement, or as a tool to help you research and write articles or papers.
Let’s dig into five key steps you can take to guide an AI to accurate responses.
I previously wrote a guide on how to make ChatGPT provide sources and citations. Fortunately, ChatGPT is getting better at citing sources, particularly with the GPT-4o LLM and the web search capability in the $20-a-month version.
But ChatGPT won’t always volunteer those sources. If you’re doing research, always — always — ask for sources.
Then, test the sources and make sure they actually exist. More than once, ChatGPT has cited sources that seemed absolutely perfect for what I was looking for. The only problem: when I clicked through or searched for the named source by title, I discovered the entire source had been fabricated.
ChatGPT even chose real academic journals, made up author names, and then assigned compelling-sounding titles to the articles. Can you imagine how bad it would have been had I included those sources in my work without double-checking? I shudder to even think about it.
So, ask for sources, check those sources, and call the AI out if it gives you a made-up answer.
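If you want to automate the first pass of that check, a short script can at least confirm that a cited DOI or URL points at something real. Below is a minimal sketch in Python, using the public Crossref REST API for DOIs and a simple HTTP check for web links; the citations in the list are hypothetical stand-ins for whatever your chatbot returns.

```python
# A minimal sketch of automated source-checking. The Crossref REST API
# (api.crossref.org) and the HTTP HEAD check are real; the citations in
# the list below are hypothetical examples.
import requests

def doi_exists(doi: str) -> bool:
    """Return True if Crossref has a record for this DOI."""
    resp = requests.get(f"https://api.crossref.org/works/{doi}", timeout=10)
    return resp.status_code == 200

def url_resolves(url: str) -> bool:
    """Return True if the cited web page actually loads."""
    try:
        resp = requests.head(url, allow_redirects=True, timeout=10)
        return resp.status_code < 400
    except requests.RequestException:
        return False

# Stand-ins for citations a chatbot might hand you:
citations = [
    ("doi", "10.1038/s41586-020-2649-2"),    # a real DOI (the NumPy paper)
    ("doi", "10.9999/fabricated.2023.001"),  # plausible-looking but fake
    ("url", "https://www.example.com/cited-article"),
]

for kind, ref in citations:
    found = doi_exists(ref) if kind == "doi" else url_resolves(ref)
    print(f"{ref}: {'found' if found else 'NOT FOUND, check by hand'}")
```

Keep in mind that a successful lookup only proves something exists at that address. You still need to confirm the title and authors match what the AI claimed, since fabricated citations sometimes borrow real journals and real-looking identifiers.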
Early in my exploration of ChatGPT, I asked the tool to help me find a local mechanic. I sent it to Yelp and Google reviews to do sentiment analysis on the comments. At the time, it reached into those sites and gave me useful information.
I tried the test again recently and received another set of mechanic rankings. It actually told me, “Based on a comprehensive analysis of Yelp and Google reviews for independent car repair shops.”
But ChatGPT lied.
Liar, liar, pants on fire.
When I asked it to show its work, the tool again said it had looked at Yelp and Google reviews. However, the “show your work” response also disclosed the source of the reviews it analyzed, which turned out to be a site named Birdeye Reviews.
Now, I have nothing against Birdeye Reviews. I’ve never used it. But that’s not the point.
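As an aside, when a chatbot claims a “comprehensive analysis” of reviews, it helps to know how simple the crudest version of sentiment analysis can be, so you can run your own sanity check on its conclusions. Here is a toy, lexicon-based sketch in Python; the review snippets and word lists are invented for illustration, and a serious analysis would use a proper model or library.

```python
# A toy, lexicon-based sentiment tally -- the simplest version of what a
# chatbot means by "sentiment analysis." Reviews and word lists are invented.
POSITIVE = {"great", "honest", "fast", "fair", "recommend"}
NEGATIVE = {"overpriced", "rude", "slow", "scam", "avoid"}

reviews = [
    "Great shop, honest mechanics, fair prices.",
    "Slow service and frankly overpriced. Avoid.",
    "Fast turnaround, would recommend.",
]

def score(text: str) -> int:
    """Count positive words minus negative words in one review."""
    words = {w.strip(".,!").lower() for w in text.split()}
    return len(words & POSITIVE) - len(words & NEGATIVE)

for review in reviews:
    s = score(review)
    label = "positive" if s > 0 else "negative" if s < 0 else "neutral"
    print(f"{label:8} ({s:+d}): {review}")
```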
