
ChatGPT almost got me fired by mistake: Learn from my ordeal


After a particularly horrible experience, I explore whether ChatGPT (and tools like it) are here to make me redundant.
The header image for this article was generated by an online AI image generator tool, designed to reflect the disconnection between human and machine.
Things were starting to pick back up after the Christmas slump on what was an otherwise unassuming Monday in January when I received an alarming email from my editor. 
ChatGPT had just begun to emerge as the next technology blockbuster, and everybody was excited about the possibilities of what it could do… and more importantly, what it could write. 
The email was particularly concerning as my editor wanted to know whether I had used OpenAI’s large language model (LLM) to produce one of my articles.
To my horror, an AI detector tool had given that article a high degree of supposed ‘fakeness’. Mission accepted: time to investigate some of the unintended consequences of the artificial intelligence weaving its way into our everyday lives. I wanted to get to the bottom of the accusation and understand what threat this technology might pose to us writers going forward. As part of this, I also wanted to look at how tools like ChatGPT could affect other industries, to gauge what the future holds for the LLM chatbot.
I took to the Internet to see whether anyone else had had a similar experience, but because we are still in the early days of widespread, publicly available artificial intelligence, nobody appeared to have written about one. It was at this point that I put on my investigative hat and started copying and pasting extracts from numerous other articles online into the same tool that had been used against my own.
In a short space of time, I had already found two other recent articles that were reportedly written by AI, even though I was pretty certain that they weren’t. I didn’t want the authors to catch wind of what I was doing, but I needed to ascertain that the content they had written was indeed genuine and human-produced. It was time to turn back the clock; what if I could find some ‘fake’ articles that predated ChatGPT’s public preview launch in November 2022?
I searched for older articles and applied the same method. Before long, I had found another pair that were reportedly ‘fake’. It’s worth noting, too, that the popular AI detector tool I used requires a minimum of 50 tokens for its verdict to be valid; any fewer and it cannot get an accurate enough reading. I doubled that, only accepting results from samples of more than 100 tokens (one had more than 200). Had I settled for the lower, recommended limit, I’m sure I would have found more. Similarly, had I spent more time pasting articles into the tool, I would have uncovered even more ‘fake’ ones, but ultimately, counting how many articles had supposedly been written by AI was not my goal. It was just a foundation for my work going forward.
Of the four articles I’d found, three were considered fake with at least 99% certainty. The fourth was just a touch behind at over 98%.
I needed to get to the bottom of this, so I posed the question to Richard Ford, Chief Technology Officer at cybersecurity solutions company Praetorian, who has a quarter of a century of experience in offensive and defensive computer security.
He explained that machine learning’s involvement in our workload is nothing new, and in fact, the “thin end of the wedge has been being driven in for some time” – including the humble spell checker he used to compose his email to me, and the grammar tool I’m using to help compose this piece. 
On that note, as I sit here typing away at my computer, I wonder to myself whether content that’s been through an AI grammar checker would be considered to be more ‘fake’ than the first draft – although I suspect that’s really a discussion for another day. 
I wanted to know why a number of articles were coming back as fake when, through my own human judgment and some simple date-checking, I could tell they were genuine, human-written pieces.
Ford described AI detection as an “arms race” that will inevitably become more advanced over time, but for now, he said, “my gut tells me that current detection isn’t great.” Ultimately, AI detectors aren’t highly skilled people sitting in a room judging the authenticity of content; they are AI in their own right. From this, I could surmise that the long chain of non-human activity has several weak points, and the more AI we add to that chain, the less accurate it may become.
There has already been talk of AI tools applying watermarks to make it clear that content was not produced by a human being; however, Ford said that for this to work, everyone would need to be “playing nicely in the sandbox”. There would likely have to be standardized processes in place, and only the companies that adhered to them would be able to cooperate.
While I firmly believe that AI content should be clearly labeled wherever it appears, whether by a digital watermark or a note on a physical copy, having seen how poorly rival companies cooperate outside the realm of AI, my expectations of this happening anytime soon are next to zero. Virtually every technology company believes it has found the one true way, leaving it unwilling to compromise or share technology; the result is several solutions to the same problem, none of which interoperate.
For the time being, we’ll have to rely on our own in-built detectors. I reached out to Robb Wilson, founder of OneReach.ai, who has over two decades’ experience in hyperautomation. The company helps its customers (including Nike, DHL, and Unilever) design and deploy complex conversational applications.
