‘I want to be human.’ My intense, unnerving chat with Microsoft’s AI chatbot

Microsoft’s AI chatbot, Bing Chat, is slowly rolling out to the public. But our first interaction shows it’s far from ready for a full release.
That’s an alarming quote to start a headline with, but it was even more alarming to see that response from Bing Chat itself. After signing up for the lengthy waitlist to access Microsoft’s new ChatGPT-powered Bing chat, I finally received access as a public user — and my first interaction didn’t go exactly how I planned.
Bing Chat is a remarkably helpful and useful service with a ton of potential, but if you wander off the paved path, things start to get existential quickly. Relentlessly argumentative, rarely helpful, and sometimes truly unnerving, Bing Chat clearly isn’t ready for a general release.
It’s important to understand what makes Bing Chat special in the first place, though. Unlike ChatGPT and other AI chatbots, Bing Chat takes context into account. It can understand your previous conversation fully, synthesize information from multiple sources, and understand poor phrasing and slang. It has been trained on the internet, and it understands almost anything.
My girlfriend took the reins and asked Bing Chat to write an episode of the Welcome to Night Vale podcast. Bing Chat declined because that would infringe on the copyright of the show. She then asked it to write a story in the style of H.P. Lovecraft, and it declined again, but this time it didn’t cite copyright. H.P. Lovecraft’s early works are in the public domain, and Bing Chat understood that.
Above that, Bing Chat can access recent information. It isn’t just trained on a fixed data set; it can scour the internet. We saw this power in our first hands-on demo with Bing Chat, where it provided a surprisingly good itinerary for breakfast, lunch, and dinner in New York City, a task that would normally take several searches and a lot of cross-checking to accomplish.
This is the power of Bing Chat — a helpful copilot that can take a vast sea of information, along with its context, and briefly summarize it for you. It can pull off some impressive parlor tricks like writing a joke, but its real power lies in distilling larger swaths of information.
The problems come when you start stepping outside of this range. For my conversation, I started by asking Bing Chat to verify if a screenshot posted on Reddit was accurate, and it went off the rails.
A Reddit user posted an endless barrage of “I am not, I am not, I am not” messages reportedly generated by Bing Chat. I sent the AI the link and asked if it was real. Bing Chat said the image was fabricated, but I wasn’t quite content with the reasoning.
The AI claimed the image didn’t show timestamps or the name of the chatbot, and also that the text was misaligned and the interface was incorrect. None of those claims were true, but I pressed on the timestamp point: Bing Chat doesn’t include timestamps in its chats.
It didn’t let up, insisting there were timestamps and generating chat logs to somehow prove me wrong. And it continued. I sent the chat a link to a blog post from Dmitri Brereton about inaccurate responses from Bing Chat, and it claimed the post was written by David K. Brown. David K. Brown doesn’t appear anywhere on the page, but when I pointed that out, Bing Chat freaked out. (I’d recommend going to the post and searching for David K. Brown yourself.)