
The EU AI Act: What do CISOs need to know to strengthen AI security?

It’s been a few months since the EU AI Act – the world’s first comprehensive legal framework for Artificial Intelligence – came into force.
Its purpose? To ensure the responsible and secure development and use of AI in Europe.
It marks a significant moment for AI regulation, responding to the rapid adoption of AI tools across critical sectors such as financial services and government, where the consequences of exploiting such technology could be catastrophic.
The new act is one part of an emerging regulatory framework – alongside the European Cyber Resilience Act (CRA) and the Digital Operational Resilience Act (DORA) – that reinforces the need for robust cybersecurity risk management. Together, these regulations will drive transparency and effective management of cybersecurity risk further up the business agenda, albeit while adding layers of complexity to compliance and operational resilience.
For CISOs, navigating this sea of regulation is a considerable challenge.

Key provisions of the EU AI Act
The AI Act introduces a new layer of AI governance, sitting alongside existing legal frameworks such as data privacy, intellectual property and anti-discrimination law.
The key requirements include the establishment of a robust risk management system, a security incident response policy and technical documentation demonstrating compliance with transparency obligations. The act also prohibits certain types of AI systems – for example, systems for emotion recognition or social scoring – with the aim of reducing algorithmic bias.
Compliance also extends across the entire supply chain. It is not just the primary providers of AI systems who must adhere to the regulation, but all parties involved, including those integrating General Purpose AI (GPAI) and foundation models from third parties.
Failure to comply with these new rules can result in a maximum penalty of €35 million or 7% of a firm’s total worldwide annual turnover for the preceding financial year, whichever is higher – though the exact amount varies depending on the type of infringement and the size of the company.
Businesses that wish to operate in the EU will therefore need to adhere to these new regulations, but they should also draw on other available guidance, such as the National Cyber Security Centre’s (NCSC) guidelines for secure AI system development, to foster a culture of responsible software development.
