Ten ways you can blow a hole in your software by misusing AI tech
The Open Worldwide Application Security Project (OWASP) has released a top 10 list of the most common security issues in large language model (LLM) applications, to help developers implement this technology safely.
LLMs are foundational machine learning models – such as OpenAI's GPT-3 and GPT-4, Google's BERT and LaMDA 2, and Meta/Facebook's RoBERTa – that have been trained on massive amounts of data (text, images, and so on) and are deployed in applications like ChatGPT.
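For context, a typical LLM application is often little more than a thin layer of application code around a hosted model's API: user input gets folded into a prompt, sent off to the model, and the reply is wired into the rest of the software. The minimal Python sketch below illustrates that pattern; it assumes the `openai` client library (v1+), an `OPENAI_API_KEY` environment variable, and an illustrative model name, and is not taken from the OWASP material itself.

```python
# Minimal sketch of an LLM-backed application, assuming the `openai`
# Python package (v1+) and an OPENAI_API_KEY environment variable.
# Model name and prompts are illustrative only.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def summarize(user_text: str) -> str:
    """Send user-supplied text to a hosted foundation model and return its reply."""
    response = client.chat.completions.create(
        model="gpt-4",  # illustrative model name
        messages=[
            {"role": "system", "content": "Summarize the user's text in one sentence."},
            # Untrusted user input flows straight into the prompt – the sort of
            # integration point the OWASP list is concerned with.
            {"role": "user", "content": user_text},
        ],
    )
    return response.choices[0].message.content


if __name__ == "__main__":
    print(summarize("OWASP has published a top 10 list of LLM security risks."))
```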
The OWASP Top 10 for Large Language Model Applications is a project that catalogs the most common security pitfalls so that developers, data scientists, and security experts can better understand the complexities of dealing with LLMs in their code.
Steve Wilson, chief product officer at Contrast Security and lead for the OWASP project, said more than 130 security specialists, AI experts, industry leaders, and academics contributed to the compendium of potential problems. OWASP offers other software security compilations, eg its top 10 lists of web app flaws and API blunders, if you're not aware.
“The OWASP Top 10 for LLM Applications version 1.0 offers practical, actionable guidance to help developers, data scientists and security teams to identify and address vulnerabilities specific to LLMs,” Wilson wrote on LinkedIn.