The AI Act entered into force – now what?

Carolin Hindersson

Privacy & Security Lawyer

November 8, 2024 at 10:00

Artificial intelligence is a topic on which almost everyone has an opinion. Some see it as a solution to fundamental societal challenges, one that drives human advancement and unlocks new potential in fields like medicine, where it aids in groundbreaking research, early disease detection, and personalised treatments. Others perceive AI more as a toy, focusing on its application in entertainment and everyday convenience, and often overlook its potential to address complex challenges. Regardless of what AI may or may not bring us in the future, it is an excellent tool for enhancing efficiency, decision-making, and innovation. That is why organisations should make use of AI's benefits, but do so in a legally sustainable manner.


First EU-level law regulating AI

What does AI then mean from a regulatory compliance perspective? While AI has several legal and compliance dimensions, the most topical piece of legislation relating to it is the EU Artificial Intelligence Act[1] (AI Act). Having entered into force on 1 August 2024, it is the first EU-level law specifically regulating artificial intelligence, and it affects a large number of organisations. Indeed, we have noticed a rising interest in the AI Act among our customers, who are keen to understand how it will impact their level of legal compliance and their operational practices.


Part of EU product regulation

So, how should one start to unpack the AI Act? Like many other recent pieces of EU law (e.g., NIS2), the AI Act emphasises risk management, possible heavy sanctions, and management responsibility. At the same time, understanding the AI Act requires understanding its context.

The AI Act is product safety regulation that aims to protect the fundamental rights, health, and safety of humans. Here, it differs from, for example, the NIS2 and CER directives. The AI Act follows the logic of other, already established product safety laws. This affects the interpretation of some of its terms, whose definitions are found not in the AI Act itself but in other product safety regulation.


Many organisations will be affected by the AI Act

It should also be emphasised that the AI Act primarily applies to organisations that develop and offer AI-based solutions, imposing requirements on those responsible for bringing these systems to market. To a limited extent, it also applies to companies using AI tools, such as Microsoft Copilot or ChatGPT, to support their work. In the AI Act, these organisations are called deployers.


Introducing a risk-based approach to AI systems

In addition, the AI Act tackles AI with a risk-based approach. AI systems will be classified into different risk categories based on their use: unacceptable risk, high risk, transparency risk, and minimal or no risk. Certain risks, such as cyber security risks specific to AI systems and biases that may lead to discrimination, are specifically mentioned in the Act. We will discuss these in more detail in another blog post.


Penalties can be severe depending on the infringement

As mentioned earlier, the AI Act introduces heavy sanctions. Although the satisfaction of knowing one's organisation complies with applicable legislation is a good motivator, the fear of sanctions is often an even better one. While penalties vary between member states, the maximum fines are the same in every country. Depending on which article the organisation infringes, fines may reach as high as €35 million or 7% of the organisation's total annual worldwide turnover, whichever is higher. The AI Act's provisions on administrative fines will apply from August 2025, or from August 2026 in the case of general-purpose AI models. These will be complemented by national rules on penalties and other enforcement measures.
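As a back-of-the-envelope illustration of the "whichever is higher" rule (not legal advice, and the function name is ours, purely for illustration), the maximum fine for the most serious infringements can be sketched as:

```python
def max_fine_eur(annual_worldwide_turnover_eur: int) -> int:
    """Illustrative ceiling for the most serious AI Act infringements:
    EUR 35 million or 7% of total annual worldwide turnover,
    whichever is HIGHER."""
    return max(35_000_000, annual_worldwide_turnover_eur * 7 // 100)

# For a company with EUR 200 million turnover, 7% is EUR 14 million,
# so the EUR 35 million figure applies:
print(max_fine_eur(200_000_000))    # 35000000
# For a company with EUR 1 billion turnover, 7% (EUR 70 million) is higher:
print(max_fine_eur(1_000_000_000))  # 70000000
```

In other words, for large organisations the percentage-based cap, not the fixed amount, determines the ceiling.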


Learn more about the AI Act in our upcoming blog posts

Artificial intelligence and its governance will require both time and financial investment from organisations that are within its scope. 

In our upcoming blog posts, we will cover the AI Act's key aspects from a security and privacy perspective that may be of use to your organisation, including key requirements and the relationship between the AI Act and available AI standards and data protection legislation. In addition, we will cover the AI Act's awareness requirements, which apply as early as February 2025. Follow our upcoming posts as we explore these topics further and offer insights to help your organisation stay informed and prepared!

You can find the EU AI Act here: https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:32024R1689 


The blog has been co-authored by Carolin Hindersson, Privacy & Security Lawyer and Jouko Juhola, Lead Security Consultant at Nixu, a DNV company. 



[1] Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonised rules on artificial intelligence and amending Regulations (EC) No 300/2008, (EU) No 167/2013, (EU) No 168/2013, (EU) 2018/858, (EU) 2018/1139 and (EU) 2019/2144 and Directives 2014/90/EU, (EU) 2016/797 and (EU) 2020/1828 (Artificial Intelligence Act)
