
The EU AI Act: Ensuring Cybersecurity and Trustworthiness in High-Risk AI Systems

Artificial Intelligence (AI) has come a long way since John McCarthy first coined the term in 1955. Today, as AI technologies become deeply embedded in our daily lives, the potential they hold is immense – but so are the risks to safety, privacy, and fundamental human rights. Recognizing these concerns, the European Union (EU) took a proactive step in 2021 by proposing a regulatory framework aimed at governing AI.

This initiative culminated in the European Artificial Intelligence Act (AI Act), published on July 12, 2024, in the Official Journal of the European Union. The Act came into force on August 1, 2024, and will be implemented in phases over the next two to three years.

As the world’s first comprehensive AI regulatory framework, the AI Act is designed to ensure that AI systems deployed within the EU are safe, ethical, and aligned with the protection of individual rights. The primary goals of this legislation are to safeguard health, safety, and fundamental rights while also encouraging innovation and the adoption of reliable AI technologies.

 

Who does this apply to?

The AI Act applies to all providers, deployers, importers, distributors, and manufacturers of AI systems operating within the EU or offering their services to the EU market, regardless of their location, with specific provisions for high-risk AI systems and certain exemptions for national security, military, and research purposes.

 

What is an ‘Artificial Intelligence System’?

Aligning with the OECD definition, Article 3(1) of the AI Act defines an ‘AI system’ as a machine-based system designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment. For explicit or implicit objectives, it infers from the input it receives how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.

 

Risk-Based Classification

The AI Act adopts a risk-based approach, classifying AI systems into four tiers according to the level of risk they present: unacceptable risk, high risk, specific transparency risk, and minimal risk. The guiding principle is straightforward: the higher the risk, the stricter the regulations.
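
As a loose illustration only (not an official classification tool), the sketch below represents the four tiers as a simple data structure and pairs each with the kind of regulatory treatment it attracts; the tier names and one-line summaries are simplified from the Act.

    from enum import Enum

    class RiskTier(Enum):
        UNACCEPTABLE = "unacceptable risk"           # Article 5: prohibited outright
        HIGH = "high risk"                           # Articles 6-15: strict requirements
        TRANSPARENCY = "specific transparency risk"  # Article 50: disclosure duties
        MINIMAL = "minimal risk"                     # Article 95: voluntary codes of conduct

    # Simplified, illustrative mapping of tier to regulatory treatment
    TREATMENT = {
        RiskTier.UNACCEPTABLE: "Prohibited from the EU market",
        RiskTier.HIGH: "Permitted only with risk management, documentation, "
                       "human oversight, and conformity assessment",
        RiskTier.TRANSPARENCY: "Permitted, subject to disclosure and content-marking duties",
        RiskTier.MINIMAL: "Permitted; voluntary codes of conduct encouraged",
    }

    for tier in RiskTier:
        print(f"{tier.value}: {TREATMENT[tier]}")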

 

Prohibited AI Systems

Article 5 of the AI Act prohibits AI systems deemed to carry unacceptable risks, including those that manipulate individuals or exploit vulnerabilities like age, disability, or economic status. The ban also covers ‘real-time’ remote biometric identification systems, social scoring systems, emotional recognition in workplaces or education, and indiscriminate facial recognition data collection.

 

High-Risk AI Systems 

High-risk AI systems, as outlined in Article 6 and Annex III of the AI Act, are permitted but must meet strict requirements because of their potential impact on health, safety, and fundamental rights. These high-risk systems include those used in critical infrastructure such as transport, as well as in education, healthcare, law enforcement, migration, and the administration of justice.

To address these risks, the AI Act requires a robust risk management framework for these AI systems before they can enter the market. Key requirements include:

  • Risk Management System: Establishing a risk management process covering risk identification, evaluation, and appropriate mitigation measures for residual risks (Article 9).
  • Data Governance: Adherence to stringent data and data governance practices during AI model training, testing, and validation (Article 10).
  • Technical Documentation: Comprehensive documentation detailing the AI system’s design and functionality (Article 11).
  • Record-Keeping: Maintaining detailed records of the AI system’s operations (Article 12); a brief illustrative sketch follows this list.
  • Transparency: Ensuring transparency throughout the AI system’s design and development stages (Article 13).
  • Human Oversight: Integrating human oversight to monitor AI decisions and actions (Article 14).
  • Accuracy, Robustness, and Cybersecurity: Demonstrating high levels of accuracy, robustness, and cybersecurity (Article 15).
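
To make the record-keeping requirement above more concrete, here is a minimal, hypothetical sketch of the kind of structured, time-stamped event logging that Article 12 points toward. The field names and the log_event helper are illustrative assumptions, not a format prescribed by the Act.

    import json
    import logging
    from datetime import datetime, timezone

    # Structured, machine-readable logs make it possible to reconstruct what the
    # system did and when, which is the traceability goal behind Article 12.
    logging.basicConfig(level=logging.INFO, format="%(message)s")
    logger = logging.getLogger("ai_system_audit")

    def log_event(system_id: str, event_type: str, details: dict) -> None:
        """Emit one time-stamped audit record (illustrative only)."""
        record = {
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "system_id": system_id,
            "event_type": event_type,  # e.g. "inference", "model_update", "override"
            "details": details,
        }
        logger.info(json.dumps(record))

    # Example: recording a single automated decision and a human-oversight override
    log_event("credit-scoring-v2", "inference", {"input_ref": "case-1042", "decision": "refer"})
    log_event("credit-scoring-v2", "override", {"operator": "analyst-7", "reason": "manual review"})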

 

Transparency 

Transparency is a fundamental principle of the AI Act, especially for AI systems that interact directly with people, such as chatbots or content-generating tools. Providers and deployers must clearly disclose when individuals are interacting with an AI system and must mark AI-generated or manipulated content accordingly (Article 50).
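
By way of illustration only, a provider might attach a machine-readable disclosure to generated content. Article 50 does not prescribe a particular marking scheme, so the envelope format and field names below are assumptions.

    import json
    from datetime import datetime, timezone

    def mark_ai_generated(content: str, model_name: str) -> str:
        """Wrap generated text with a machine-readable AI-disclosure label (illustrative)."""
        envelope = {
            "content": content,
            "ai_generated": True,     # explicit disclosure that the content is AI-generated
            "generator": model_name,  # hypothetical field identifying the generating model
            "generated_at": datetime.now(timezone.utc).isoformat(),
        }
        return json.dumps(envelope)

    print(mark_ai_generated("Quarterly summary drafted by the assistant.", "example-model-1"))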

 

Minimal Risk

Article 95 of the Act addresses minimal risk AI systems, such as spam filters and video games, by promoting the voluntary adoption of codes of conduct. Although these systems are not subject to mandatory regulations, they are encouraged to adhere to ethical standards and best practices to foster responsible AI development. The report by AI4People Institute (2024) highlights strategies for designing, developing, and maintaining AI systems that respect fundamental rights and uphold ethical and moral principles in line with European Union values.

 

Obligations for High-Risk AI Systems

The AI Act establishes specific obligations for all stakeholders in the AI lifecycle, including providers, importers, and distributors. Providers are required to ensure their AI systems meet the Act’s standards before market introduction, which includes clear labelling of high-risk systems (Article 16), implementing a quality management system (Article 17), maintaining documentation (Article 18), managing system logs (Article 19), conducting conformity assessments (Article 43), affixing CE markings (Article 48), and issuing an EU declaration of conformity (Article 47). Importers and distributors must also verify compliance and maintain proper records (Articles 23-24). These regulations ensure that every participant in the AI supply chain upholds uniform safety and compliance standards.

 

Regulation of General-Purpose AI Models

The AI Act also outlines regulations for General-Purpose AI (GPAI) models, which are versatile models capable of performing a wide range of distinct tasks and of being integrated into a variety of downstream AI systems.

Providers of GPAI models must maintain comprehensive technical documentation, including training and testing details, and provide extensive information for integration with other AI systems. They must also adhere to copyright laws as specified in Article 4(3) of Directive (EU) 2019/790 and publicly disclose a summary of their training data. 

A GPAI model is presumed to pose systemic risk when the cumulative compute used for its training exceeds 10^25 floating-point operations (FLOPs). Providers of such models must fulfil additional requirements, including systemic-risk evaluation, serious-incident reporting, and robust cybersecurity measures (Article 55).
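
As a rough back-of-the-envelope check (not a method prescribed by the Act), training compute for a dense transformer is often approximated as 6 × parameters × training tokens. The sketch below applies that rule of thumb to a hypothetical training run and compares it against the 10^25 FLOP presumption threshold.

    SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25  # presumption threshold set out in the AI Act

    def estimated_training_flops(n_parameters: float, n_training_tokens: float) -> float:
        """Common rule-of-thumb estimate: roughly 6 FLOPs per parameter per training token."""
        return 6 * n_parameters * n_training_tokens

    # Hypothetical example: a 200-billion-parameter model trained on 15 trillion tokens
    flops = estimated_training_flops(200e9, 15e12)
    print(f"Estimated training compute: {flops:.2e} FLOPs")
    if flops > SYSTEMIC_RISK_THRESHOLD_FLOPS:
        print("Presumed to pose systemic risk (Article 55 obligations apply)")
    else:
        print("Below the 10^25 FLOP presumption threshold")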

 

Enforcement and Penalties

The AI Act establishes a European Artificial Intelligence Board and AI Office to oversee its implementation and drive the development of AI standards. National Competent Authorities are tasked with applying and enforcing the Act, with significant fines for non-compliance:

  • Up to €35 million or 7% of global annual turnover, whichever is higher, for the most serious infringements, such as the use of prohibited AI practices.
  • Up to €15 million or 3% of global annual turnover, whichever is higher, for non-compliance with specific obligations, including those related to high-risk AI systems.
  • Up to €7.5 million or 1.5% of global annual turnover, whichever is higher, for supplying incorrect information to authorities.
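
For undertakings, each cap is therefore the higher of the fixed amount and the turnover percentage. A small worked example, using an assumed €2 billion global annual turnover:

    def max_fine(fixed_cap_eur: float, turnover_share: float, annual_turnover_eur: float) -> float:
        """AI Act fines for undertakings: the higher of the fixed cap and the turnover share."""
        return max(fixed_cap_eur, turnover_share * annual_turnover_eur)

    turnover = 2e9  # hypothetical €2 billion global annual turnover

    print(f"Prohibited practices:   €{max_fine(35e6, 0.07, turnover):,.0f}")    # €140,000,000
    print(f"High-risk obligations:  €{max_fine(15e6, 0.03, turnover):,.0f}")    # €60,000,000
    print(f"Incorrect information:  €{max_fine(7.5e6, 0.015, turnover):,.0f}")  # €30,000,000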

 

Support for Innovation

To foster innovation while ensuring compliance, the AI Act introduces regulatory sandboxes that enable controlled experimentation with AI technologies (Article 57). These sandboxes offer a supervised environment where businesses, especially SMEs and startups, can develop, test, and refine new AI solutions before bringing them to market.

Cybersecurity for High-Risk AI Systems: Protecting Against Threats with Centripetal’s CleanINTERNET® Solution

The AI Act will affect a broad spectrum of stakeholders involved with artificial intelligence, from developers to end-users. Compliance with the AI Act’s comprehensive requirements is crucial to ensuring the safety, effectiveness, and trustworthiness of AI technologies.

 

Ensuring Cybersecurity for High-Risk AI Systems

Ensuring robust cybersecurity for high-risk AI systems is crucial to protecting them from malicious attacks that could compromise their performance or data integrity. According to the AI Act, specifically Article 15(5), providers must fortify these systems against unauthorized alterations and cyberattacks, including threats like data poisoning, adversarial attacks, and breaches targeting AI-specific components. Implementing comprehensive cybersecurity measures is essential for compliance and maintaining the integrity and reliability of AI solutions. By prioritizing these protections, organizations can effectively mitigate risks and enhance the resilience of their AI systems against evolving cyber threats.
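
One small building block among many possible measures, sketched here under the assumption that training data is stored as versioned files with a trusted digest manifest, is verifying dataset integrity before each training run so that unnoticed tampering (one vector for data poisoning) can be caught early. The file layout and manifest format are hypothetical.

    import hashlib
    import json
    from pathlib import Path

    def sha256_of(path: Path) -> str:
        """Compute the SHA-256 digest of a file, reading it in chunks."""
        digest = hashlib.sha256()
        with path.open("rb") as f:
            for chunk in iter(lambda: f.read(8192), b""):
                digest.update(chunk)
        return digest.hexdigest()

    def verify_dataset(manifest_path: Path) -> bool:
        """Compare each training file against the digest recorded in a trusted manifest."""
        manifest = json.loads(manifest_path.read_text())  # e.g. {"train.csv": "<expected sha256>"}
        ok = True
        for filename, expected in manifest.items():
            actual = sha256_of(manifest_path.parent / filename)
            if actual != expected:
                print(f"INTEGRITY FAILURE: {filename} changed since the manifest was created")
                ok = False
        return ok

    # Hypothetical usage before a training run:
    # if not verify_dataset(Path("data/manifest.json")):
    #     raise SystemExit("Aborting training: possible data tampering detected")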

 

How can Centripetal help?

Centripetal’s CleanINTERNET® solution offers a powerful way to comply with the AI Act’s stringent cybersecurity requirements for high-risk AI systems. By leveraging augmented intelligence analysis, advanced threat detection, and real-time intelligence application, CleanINTERNET® provides proactive and robust protection against evolving cyber threats. Its advanced threat intelligence integration and deep packet inspection ensure that AI systems are shielded from sophisticated attacks like data poisoning and adversarial threats. With seamless deployment and a proactive defense strategy, Centripetal helps organizations not only meet regulatory standards but also enhance their overall cybersecurity posture, ensuring the integrity and reliability of their AI solutions.

For more information on how your organization can effectively manage AI Act compliance and enhance cybersecurity for high-risk AI systems, contact us.
