
Singapore identifies six generative AI risks, sets up foundation to guide adoption


AI Verify Foundation will develop test toolkits that mitigate the risks of AI.
Singapore has identified six top risks associated with generative artificial intelligence (AI) and proposed a framework for how these issues can be addressed. It has also established a foundation that aims to tap the open-source community to develop test toolkits that mitigate the risks of adopting AI.
Hallucinations, accelerated disinformation, copyright challenges, and embedded biases are among the key risks of generative AI outlined in a report released by Singapore’s Infocomm Media Development Authority (IMDA). The discussion paper details the country’s framework for “trusted and responsible” adoption of the emerging technology, including disclosure standards and global interoperability. The report was jointly developed with Aicadium, an AI tech company founded by state-owned investment firm Temasek Holdings.
The framework offers a look at how policymakers can strengthen existing AI governance to address the "unique characteristics" and immediate concerns of generative AI. It also discusses the investment needed to ensure governance outcomes over the longer term, IMDA said.
In identifying hallucinations as a key risk, the report noted that, like all AI models, generative AI models make mistakes, and these mistakes are often vivid and easily anthropomorphized.
“Current and past versions of ChatGPT are known to make factual errors. Such models also have a more challenging time doing tasks like logic, mathematics, and common sense,” the discussion paper noted.
"This is because ChatGPT is a model of how people use language. While language often mirrors the world, these systems do not yet have a deep understanding of how the world works."
These false responses can also be deceptively convincing or authentic, the report added, pointing to how language models have produced seemingly legitimate but erroneous responses to medical questions, as well as generated software code containing security vulnerabilities.
In addition, dissemination of false content is increasingly difficult to identify due to convincing but misleading text, images, and videos, which can potentially be generated at scale using generative AI. 
Impersonation and reputation attacks have become easier, including social-engineering attacks that use deepfakes to gain access to privileged individuals. 
Generative AI also makes it possible to cause other types of harm, where threat actors with little to no technical skills can potentially generate malicious code. 
These emerging risks might require new approaches to the governance of generative AI, according to the discussion paper. 
Singapore's Minister for Communications and Information Josephine Teo noted that global leaders are still exploring alternative AI architectures and approaches, with many cautioning about the dangers of AI.
AI delivers “human-like intelligence” at a potentially high level and at significantly reduced cost, which is especially valuable for countries such as Singapore where human capital is a key differentiator, said Teo, who was speaking at this week’s Asia Tech x Singapore summit.
The improper use of AI, though, can do great harm, she noted. “Guardrails are, therefore, necessary to guide people to use it responsibly and for AI products to be ‘safe for all of us’ by design,” she said. 
“We hope [the discussion paper] will spark many conversations and build awareness on the guardrails needed,” she added.
