Artificial Intelligence (AI) is transforming industries worldwide, yet there is still much debate about what exactly qualifies as “AI” – and how to regulate it. The European Union’s AI Act aims to clarify this emerging field, but many critical details remain pending. While the Act sets out a broad framework to regulate AI systems, the definition of AI remains somewhat vague, covering a wide range of technologies from machine learning to expert systems.
This lack of specificity has led to concerns about how the Act will be implemented in practice. Many organizations are still unsure which systems will fall under the regulation’s scope. As discussions continue, it is clear that AI’s influence on society will only grow, and the EU is taking proactive steps to ensure that its use is safe, ethical, and aligned with fundamental rights.
In this post, we will explore the EU AI Act’s provisions, how it regulates AI systems based on their risk level, and why companies must prepare for its stringent compliance requirements. Just as with the GDPR, non-compliance with the AI Act could result in significant financial penalties, making it vital for businesses to establish clean, auditable processes supported by technologies like MLOps to ensure compliance.
We are not a law firm but a software company. This post discusses the topic from a practical standpoint and does not replace legal advice.
What is the EU AI Act?
The EU AI Act is a European regulation that entered into force on 1 August 2024. It introduces a risk-based approach to regulating AI systems, categorizing them into four groups based on their potential impact on safety, rights, and freedoms, with each risk class having a different timeline for compliance (an illustrative sketch after the list summarizes these tiers and dates):
- Unacceptable risk: AI systems deemed too dangerous or unethical, such as those used for social scoring or government-run mass surveillance, are prohibited under the Act from 2 February 2025.
- High-risk AI: AI applications with significant implications in areas like law enforcement, healthcare, education, and employment face strict regulatory oversight. From 2 August 2026, these systems must meet rigorous safety, transparency, and ethical standards. Some high-risk applications will only be regulated starting one year later.
- Limited-risk AI: This category includes AI systems that don’t pose significant risks but still require transparency measures. Examples might include AI systems that interact with humans, such as customer service chatbots or AI-powered recommendation engines. For these systems, organizations must inform users that they are interacting with AI. These transparency obligations will come into effect on 2 August 2026.
- Minimal-risk AI: Systems like AI used in video games or spam filters fall under this category. Since these applications pose little to no risk to individuals’ rights or safety, they are largely exempt from regulatory obligations. Businesses deploying minimal-risk AI won’t face additional requirements, though the Act encourages best practices.
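To make these tiers easier to reason about internally, here is a minimal, illustrative Python sketch that captures the categories and application dates summarized above as a simple data structure. The names and fields are our own simplification for this post, not a legal classification tool; mapping a concrete system to a tier still requires a proper assessment.

```python
from dataclasses import dataclass
from datetime import date

@dataclass(frozen=True)
class RiskTier:
    """One EU AI Act risk category as summarized in this post (illustrative only)."""
    name: str
    obligations: str
    applies_from: date | None  # None: no dedicated compliance deadline

# Simplified summary of the four tiers described above -- not legal advice.
RISK_TIERS = {
    "unacceptable": RiskTier(
        "Unacceptable risk",
        "Prohibited (e.g. social scoring)",
        date(2025, 2, 2),
    ),
    "high": RiskTier(
        "High-risk AI",
        "Strict safety, transparency, and ethical requirements",
        date(2026, 8, 2),  # some product-embedded systems follow one year later
    ),
    "limited": RiskTier(
        "Limited-risk AI",
        "Transparency: users must be told they interact with AI",
        date(2026, 8, 2),
    ),
    "minimal": RiskTier(
        "Minimal-risk AI",
        "Largely exempt; best practices encouraged",
        None,
    ),
}

if __name__ == "__main__":
    for tier in RISK_TIERS.values():
        deadline = tier.applies_from.isoformat() if tier.applies_from else "no dedicated deadline"
        print(f"{tier.name}: {tier.obligations} (applies from {deadline})")
```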
Who are you?
Under the EU AI Act, an organization’s role within the AI supply chain determines its specific compliance responsibilities. A provider who develops and supplies the AI system is accountable for ensuring that the system meets all regulatory standards, particularly for high-risk AI. They must ensure safety, transparency, and proper data management throughout the AI lifecycle.
A deployer, the entity that integrates and uses AI in its operations, does not face the same strict requirements as a provider but must still comply with several obligations. For example, they must ensure that an AI system is used according to instructions and monitor its operation to report any identified risks or serious incidents.
Importers and distributors facilitate the sale or distribution of AI systems and are responsible for verifying that the systems they offer comply with the Act’s requirements.
Each role carries distinct compliance duties to ensure accountability throughout the AI system’s development, deployment, and use.
What does this mean for your organization?
A self-assessment is needed to ensure EU AI Act compliance. It is currently unclear which body will oversee these assessments.
Transparency is key, even for limited-risk AI, where users must be informed when interacting with AI systems. Regular audits, clear governance structures, and proactive compliance checks will help avoid penalties and ensure ongoing adherence to the Act’s requirements.
To ensure compliance with the EU AI Act, organizations must assess their AI systems by risk category and apply the appropriate regulations. For high-risk AI, this includes establishing auditable development processes and adhering to strict safety and transparency standards. Using MLOps to continuously monitor and document AI performance is a practical and professional approach. Let’s focus on this part, as it already addresses much of the challenge.
The Role of MLOps in Ensuring Compliance
One key challenge companies face is the complexity of managing AI systems in a compliant and transparent way. This is where MLOps (Machine Learning Operations) plays a crucial role. MLOps is a set of practices and tools that streamline the development, deployment, and monitoring of machine learning models in production.
Here’s how MLOps helps businesses ensure compliance with the EU AI Act:
- Auditable Pipelines: MLOps enables organizations to track the development lifecycle of AI models, from data collection and training to deployment. Tooling-based documentation provides evidence that the AI system meets regulatory requirements (see the tracking sketch after this list).
- Continuous Monitoring: AI systems evolve over time, and high-risk AI needs to be continuously monitored to ensure that it behaves as intended. MLOps ensures that models are consistently checked for bias, accuracy, and ethical concerns, which is critical for compliance (see the monitoring sketch after this list).
- Transparency: MLOps tools often include features allowing developers to log every decision made during the AI development process, ensuring transparency in how models are trained and data is used. This is key to adhering to the transparency obligations of the EU AI Act.
- Automated Reporting: MLOps platforms can generate reports that track how AI systems were developed and deployed, making it easier for companies to prove compliance during audits or inspections.
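As an illustration of what an auditable pipeline and transparency logging can look like in practice, here is a minimal sketch using MLflow’s experiment-tracking API together with scikit-learn. The experiment name, tags, and dataset are illustrative assumptions for this post, not a prescribed compliance schema, and the Act does not mandate any specific tool.

```python
"""Minimal, illustrative sketch of an auditable training run with MLflow.

Assumes `mlflow` and `scikit-learn` are installed; the experiment name, tags,
and dataset are illustrative placeholders, not a prescribed compliance schema.
"""
import hashlib

import mlflow
import mlflow.sklearn
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

# A content hash of the training data helps prove later which data a model saw.
data_hash = hashlib.sha256(X_train.tobytes()).hexdigest()

mlflow.set_experiment("ai-act-demo")  # illustrative experiment name

with mlflow.start_run(run_name="rf-baseline"):
    params = {"n_estimators": 100, "max_depth": 5, "random_state": 42}
    model = RandomForestClassifier(**params).fit(X_train, y_train)

    # Everything logged here becomes part of the run's audit trail.
    mlflow.log_params(params)
    mlflow.set_tag("training_data_sha256", data_hash)
    mlflow.set_tag("intended_purpose", "demo only")  # illustrative tag
    mlflow.log_metric("test_accuracy", accuracy_score(y_test, model.predict(X_test)))
    mlflow.sklearn.log_model(model, "model")
```

Every run logged this way can later be retrieved from the tracking store, giving auditors a reproducible record of which data, parameters, and metrics produced a given model.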
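Continuous monitoring does not have to be complicated to be useful. The sketch below shows a scheduled health check that flags input drift and accuracy degradation. The thresholds, the statistical test, and the synthetic data are illustrative assumptions; a production setup would run this inside your MLOps platform’s scheduler and feed the findings into your incident-reporting process.

```python
"""Minimal, illustrative sketch of a scheduled model-health check.

Thresholds, features, and synthetic data are illustrative assumptions.
"""
import numpy as np
from scipy.stats import ks_2samp

ACCURACY_FLOOR = 0.90   # illustrative threshold agreed during risk assessment
DRIFT_P_VALUE = 0.01    # illustrative significance level for input drift

def check_model_health(reference_feature, live_feature, y_true, y_pred):
    """Return a list of findings that should be reported and documented."""
    findings = []

    # 1. Input drift: has the live feature distribution moved away from training data?
    _, p_value = ks_2samp(reference_feature, live_feature)
    if p_value < DRIFT_P_VALUE:
        findings.append(f"Input drift detected (KS test p={p_value:.4f})")

    # 2. Performance: does live accuracy stay above the agreed floor?
    accuracy = float(np.mean(np.asarray(y_true) == np.asarray(y_pred)))
    if accuracy < ACCURACY_FLOOR:
        findings.append(f"Accuracy {accuracy:.3f} below floor {ACCURACY_FLOOR}")

    return findings

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    reference = rng.normal(0.0, 1.0, 1_000)   # stand-in for training-time data
    live = rng.normal(0.5, 1.0, 1_000)        # shifted live data -> drift
    y_true = rng.integers(0, 2, 1_000)
    y_pred = y_true.copy()
    y_pred[:200] = 1 - y_pred[:200]           # 20% errors -> accuracy 0.8

    for finding in check_model_health(reference, live, y_true, y_pred):
        print("ALERT:", finding)
```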
Please read our post about MLOps to gain more insights on how MLOps increases professionalism while positively impacting development times.
Conclusion
The EU AI Act regulates the use of AI to ensure that it benefits society while safeguarding fundamental rights. With significant penalties for non-compliance, companies operating in Europe must prepare for a future where ethical, transparent, and responsible AI development is not just encouraged but required by law.
Using MLOps as a part of your AI development strategy can help ensure that your organization complies with the EU AI Act. From maintaining auditable workflows to providing real-time monitoring and reporting, MLOps offers a solution for meeting the stringent requirements of high-risk AI systems. By investing in clean processes and compliance infrastructure, businesses can avoid costly penalties and build trust and longevity in a rapidly evolving AI landscape. Get started right now and read our article about MLOps.
Authors © 2024:
- Dr. Jörg Dallmeyer – www.linkedin.com/in/jörg-dallmeyer-5b3452243/
- Ruth Schreiber – www.linkedin.com/in/ruth-schreiber-0565742ba/