In brief: As much of the world starts using AI chatbots, concerns about their security implications are being voiced. One of these warnings comes from the UK’s National Cyber Security Centre (NCSC), which has highlighted some potential issues stemming from the likes of ChatGPT.
The NCSC, part of the UK’s GCHQ intelligence agency, published a post on Tuesday delving into the mechanics of generative AIs. It states that while large language models (LLMs) are undoubtedly impressive, they’re not magic, they’re not artificial general intelligence, and they contain some serious flaws.
The NCSC writes that LLMs can get things wrong and ‘hallucinate’ incorrect facts, something we saw with Google’s Bard during the chatbot’s first demo.