AI has surprised everyone in recent years, including policymakers. Because AI is already having such a large impact on society, the European Union created a law to govern it: the AI Act. The AI Act is a European law that regulates the use of AI, algorithms and machine learning and requires that risks to people and society be minimized.
What is the AI Act and from when must it be complied with?
The AI Act is a European regulation that sets strict rules in the EU for autonomously operating computer systems that make decisions, generate content or assist people. With the AI Act, the EU wants people and businesses to be able to rely on AI systems that are safe, transparent, traceable, non-discriminatory and environmentally friendly, and that remain under human oversight. Most organizations working with AI are expected to have to comply with the requirements by 2026.
Violations of the AI Act carry high fines. The maximum fine is €35 million or 7% of your total worldwide annual turnover for the preceding financial year, whichever is higher.
Who does the AI Act apply to?
The AI Act applies to all industries, such as health care, government, finance, education and entertainment; it is not limited to any specific sector. The new rules apply to anyone who develops, deploys or imports AI into the EU and sells it there. This includes:
- Producers of AI systems;
- Integrators incorporating AI into their own software services;
- Importers getting AI from outside the EU;
- Users deploying such services.
If you use AI, you must be able to demonstrate compliance yourself, so you will need to obtain clarity from your supplier.
The risk levels
The AI Act imposes certain obligations on providers and users. These obligations depend on the risk of the AI application. For many AI systems, the risk is minimal, but all AI systems must be assessed. There are three levels of risk for this:
- Unacceptable risks: if an AI does something that violates European fundamental norms and values. In that case, it should not be used in the European market. For example: facial recognition in public places and predicting criminal behavior.
- High risks: AI systems that pose a high risk to health, safety, fundamental rights or the environment. These are allowed under strict conditions. It must be clear where the data comes from, human oversight is required, and technical documentation must be in order.
- Low risk: AI systems that do not fall into the aforementioned categories may enter the European market without too much trouble. However, the AI must be transparent so that no one thinks they are dealing with a human being. In addition, the AI must not make decisions independently.
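The triage above can be sketched as a toy helper. This is purely illustrative: the two yes/no questions and the function name are this article's simplification, not the legal test from the AI Act itself, which requires a much more detailed assessment.

```python
def classify_risk(violates_fundamental_values: bool,
                  high_risk_to_health_safety_rights_or_environment: bool) -> str:
    """Map two screening questions onto the three risk levels described above."""
    if violates_fundamental_values:
        return "unacceptable"  # may not be placed on the European market
    if high_risk_to_health_safety_rights_or_environment:
        return "high"          # allowed only under strict conditions
    return "low"               # transparency requirements still apply


# Example: a system predicting criminal behavior is screened out immediately.
print(classify_risk(violates_fundamental_values=True,
                    high_risk_to_health_safety_rights_or_environment=False))
# -> unacceptable
```

A real classification must follow the regulation's annexes and definitions; a sketch like this is at most a first conversation starter within an organization.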

What do you need to comply with under the AI Act?
The AI Act places most obligations on AI systems in the high category. For these systems, the following must be in place:
- Risk Management System;
- System of data management;
- Technical Documentation;
- Transparency;
- Human supervision;
- Specific conformity assessment tailored to the characteristics of the AI.
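The obligations listed above lend themselves to a simple internal tracker. A minimal sketch, assuming you keep one status flag per obligation (the shorthand keys below are our own, not terms from the regulation):

```python
# Shorthand labels for the six high-risk obligations named in this article.
HIGH_RISK_OBLIGATIONS = [
    "risk_management_system",
    "data_governance",
    "technical_documentation",
    "transparency",
    "human_oversight",
    "conformity_assessment",
]


def missing_obligations(status: dict) -> list:
    """Return the obligations not yet marked as fulfilled."""
    return [ob for ob in HIGH_RISK_OBLIGATIONS if not status.get(ob, False)]


# Example: only two items in place so far.
status = {"risk_management_system": True, "transparency": True}
print(missing_obligations(status))
# -> ['data_governance', 'technical_documentation', 'human_oversight', 'conformity_assessment']
```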
Low-risk AI systems must meet minimum transparency requirements so that users can make informed decisions. It must be clear to the user that they are dealing with an AI system. This also covers AI systems that generate image, audio or video material, such as deepfakes.
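The transparency idea can be illustrated with a trivial example: label automated output before it reaches the user. The wording below is our own illustration, not mandated text from the regulation.

```python
def with_ai_disclosure(reply: str) -> str:
    """Prefix an automated reply with a notice that it was generated by AI."""
    return "[Automated response generated by an AI system]\n" + reply


print(with_ai_disclosure("Your order has shipped."))
# -> [Automated response generated by an AI system]
#    Your order has shipped.
```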
What does the AI Act mean for you?
If you or your organization works with AI or algorithms, or is considering doing so, take the following steps in advance:
- Inventory: Inventory which AI is involved, which vendor the AI is from and what information you need from them. When doing this, don't forget AI services that employees have purchased on their own initiative.
- Investigate: Research what your role is and what responsibilities go with it. Then assess whether your contracts fit that. Consider liability, notice periods and opportunities to demand documentation of operation and the like.
- Evaluate: Evaluate the output of the AI system. Provide protocols for evaluating output and make policies regarding periodic audits.
- Determine the risk level of the AI system.
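The inventory step above can be supported with a simple record per AI service in use. The field names here are illustrative assumptions, not prescribed by the AI Act:

```python
from dataclasses import dataclass


@dataclass
class AIInventoryEntry:
    """One record per AI service, including tools employees adopted on their own."""
    name: str
    vendor: str
    role: str                       # e.g. "producer", "integrator", "importer", "user"
    risk_level: str = "unassessed"  # filled in after the risk determination step
    documentation_requested: bool = False


entries = [
    AIInventoryEntry("CV screening tool", "ExampleVendor BV", role="user"),
    AIInventoryEntry("Chat assistant", "AnotherVendor GmbH", role="user",
                     risk_level="low", documentation_requested=True),
]

# Which systems still need a risk assessment?
unassessed = [e.name for e in entries if e.risk_level == "unassessed"]
print(unassessed)
# -> ['CV screening tool']
```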
The relationship between the AI Act and the GDPR
The premise of the GDPR (known in Dutch as the AVG) is the fundamental right to data protection: as a citizen, you can exercise your rights against organizations that process your data. The AI Act, by contrast, views AI systems as products and regulates them from that perspective, which means the safety of an AI system is checked before it reaches the market. Citizens are thus indirectly protected against faulty AI systems, but they have no direct role in the AI Act and cannot exercise rights under it directly, as they can under the GDPR. That is not necessarily a problem, as long as faulty AI systems are kept off the market and individuals retain their rights under the GDPR. In other words, if you suspect that your personal data is being processed by a faulty AI system, you can still exercise your rights under the GDPR. Note that this only applies to AI systems that process personal data. The relationship between the two regulations must therefore be kept clear.
Why the AI Act is so important right now
Some AI technologies have existed for more than 50 years, but computers keep getting more powerful, and with the vast amounts of data available today, AI can do more and more. As a result, AI is having a major impact on society. That is precisely why it is reassuring that artificial intelligence will be regulated: the goal is that people and businesses in the EU can count on safe AI systems under human oversight.