
Enterprises are not prepared for a world of malicious AI agents


The current model of managing corporate identities is unprepared for a wave of AI agents gaining access to privileged resources.
Identity management is broken when it comes to AI agents.
AI agents expand the threat surface of organizations.
Part of the solution will be AI agents automating security.
As enterprises begin implementing artificial intelligence agents, senior executives are on alert about the technology’s risks but also unprepared, according to Nikesh Arora, chief executive of cybersecurity giant Palo Alto Networks.
“There is beginning to be a realization that as we start to deploy AI, we’re going to need security,” said Arora during a media briefing in which I participated.
“And I think the most amount of consternation is around the agent part,” he said, “because customers are concerned that if they don’t have visibility to the agents, if they don’t understand what credentials agents have, it’s going to be the Wild West in their enterprise platforms.”
AI agents are commonly defined as artificial intelligence programs that have been granted access to resources external to the large language model itself, enabling the program to carry out a broader variety of actions. The approach can be as simple as a chatbot, such as ChatGPT, that has access to a corporate database via a technique like retrieval-augmented generation (RAG).
An agent can also involve a more complex arrangement, such as a bot invoking a wide array of function calls to various programs simultaneously via, for example, the Model Context Protocol (MCP) standard. The AI model can then invoke non-AI programs and orchestrate their operation in concert. Commercial software packages across the industry are adding agentic functions that automate some of the work a person would traditionally perform manually.
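The tool-invocation pattern described above, and the credential question Arora raises, can be sketched in a few lines. This is a hypothetical illustration, not the actual MCP wire protocol; the tool names, scope strings, and `Agent` class are all invented for the example.

```python
# Illustrative sketch of agent-style tool dispatch with credential scopes.
# Not the real Model Context Protocol; all names here are hypothetical.
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Tool:
    name: str
    func: Callable[..., str]
    required_scope: str  # credential scope the agent must hold to call it

@dataclass
class Agent:
    granted_scopes: set = field(default_factory=set)
    tools: dict = field(default_factory=dict)

    def register(self, tool: Tool) -> None:
        self.tools[tool.name] = tool

    def invoke(self, name: str, **kwargs) -> str:
        tool = self.tools[name]
        # The visibility problem from the article: does this agent
        # actually hold the credential this tool requires?
        if tool.required_scope not in self.granted_scopes:
            raise PermissionError(f"agent lacks scope {tool.required_scope!r}")
        return tool.func(**kwargs)

agent = Agent(granted_scopes={"crm:read"})
agent.register(Tool("lookup_customer", lambda cid: f"record for {cid}", "crm:read"))
agent.register(Tool("delete_customer", lambda cid: f"deleted {cid}", "crm:write"))

print(agent.invoke("lookup_customer", cid="42"))  # allowed: agent holds crm:read
try:
    agent.invoke("delete_customer", cid="42")     # blocked: agent lacks crm:write
except PermissionError as err:
    print(err)
```

The point of the sketch is that each tool call is a privilege boundary: without an explicit scope check like this, an agent wired to many tools can do anything any of them can do.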
Arora: “Ideally, I want to know all of my non-human identities, and be able to find them in one place and trace them.”
The thrust of the problem is that AI agents will have access to corporate systems and sensitive information in many of the same ways as human workers, but the technology to manage that access, including verifying an AI agent's identity and auditing the resources it has privileged access to, is poorly prepared for the rapid expansion of the workforce via agents.
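What Arora describes, finding every non-human identity in one place and tracing it, amounts to keeping an inventory of agents, their credentials, and their access history. The following is a minimal hypothetical sketch of such an inventory; the class, field names, and scope strings are invented for illustration and do not come from any real product.

```python
# Hypothetical non-human identity inventory: one place to list every
# agent, the credentials it holds, and the resources it has touched.
from collections import defaultdict
from datetime import datetime, timezone

class AgentInventory:
    def __init__(self):
        self.credentials = defaultdict(set)   # agent_id -> credential names
        self.access_log = defaultdict(list)   # agent_id -> (timestamp, resource)

    def grant(self, agent_id: str, credential: str) -> None:
        self.credentials[agent_id].add(credential)

    def record_access(self, agent_id: str, resource: str) -> None:
        self.access_log[agent_id].append(
            (datetime.now(timezone.utc), resource))

    def trace(self, agent_id: str) -> dict:
        # Answers two questions: what CAN this agent do (credentials),
        # and what DID it do (logged accesses)?
        return {
            "credentials": sorted(self.credentials[agent_id]),
            "accesses": [res for _, res in self.access_log[agent_id]],
        }

inv = AgentInventory()
inv.grant("billing-bot", "erp:invoices:read")
inv.record_access("billing-bot", "invoice/2024-0001")
print(inv.trace("billing-bot"))
```

In practice this is the role of identity-governance tooling; the sketch only shows why an inventory must capture both granted credentials and actual usage to make an agent traceable.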
Despite that consternation, organizations don’t yet fully grasp the scale of the task of securing agents, said Arora.
“It requires tons of infrastructure investment, it requires tons of planning. And that’s what worries me, is that our enterprises are still under the illusion that they are extremely secure.”
The problem is made more acute, said Arora, by the fact that bad actors are ramping up efforts to use agents to infiltrate systems and exfiltrate data, increasing the number of entities that must be verified or rejected for access.