
The EU AI Act: Towards horizontal regulation of Artificial Intelligence
With the EU AI Act, its new attempt to establish horizontal regulation for AI, the European Union has taken a decisive step toward balancing technological innovation and risk management. To better understand the implications of this regulation for businesses, particularly in Luxembourg, we spoke with Eva Gram, Head of Codit Luxembourg, a subsidiary of the Proximus Group. Drawing on her expertise in helping companies navigate complex technological challenges, Eva shared her insights on the specifics of the EU AI Act and the role played by Proximus NXT Luxembourg, through initiatives like AI4Gov, in guiding businesses toward the ethical and compliant integration of AI.
The EU AI Act represents the first attempt to implement horizontal regulation for AI. What sets this approach apart from other technological regulations, and how does it help manage the risks associated with specific uses of AI systems?
Eva Gram (EG): “What makes this regulation unique is its risk-based approach. This is a very positive development as it strikes a balance between technological innovation and individual protections, particularly regarding privacy and confidentiality.
With the EU AI Act, each development is assessed based on its expected outcome and the associated risks. For example, an AI system used for managing medical records involves significantly higher risks than a chatbot designed to search for public information. This tailored approach is a major difference compared to other frameworks, such as the GDPR, which applies a ‘one size fits all’ approach. Here, flexibility is key: assessments and measures are adapted to the complexity and severity of the risks.
Moreover, this flexibility allows the regulation to quickly adapt to technological advancements. AI models evolve at an incredible pace, and the Act considers this dynamic by leaving room for integrating smarter and more secure tools over time. This ability to evolve with technology makes it a less rigid and more business-friendly framework.”
What specific measures must companies in Luxembourg implement to comply with the new regulation? How can they ensure the security of AI systems while meeting the AI Act requirements?
EG: “First and foremost, companies need to conduct risk assessments to identify, manage, and mitigate risks. This helps detect potential flaws or vulnerabilities before deploying a system. These assessments must be thorough and tailored to the complexity of the project.
Another key aspect is dataset management. AI models require massive amounts of data to function effectively, but it is crucial to ensure the quality, robustness, and reliability of these datasets while avoiding biases. Data security is equally essential: companies must protect against potential threats, such as cyberattacks, by applying high standards for data integrity, confidentiality, and accessibility.
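One common way to put this kind of dataset check into practice is a disparate-impact test: compare a model's decision rates across groups defined by a sensitive attribute and flag large gaps for review. The sketch below is a minimal illustration of that idea, not the methodology the interview describes; the record layout, the `group`/`approved` field names, and the 0.8 review threshold (a rule of thumb from fairness practice) are all assumptions for the example.

```python
from collections import defaultdict

def approval_rates(records, group_key="group", outcome_key="approved"):
    """Per-group approval rate: approved decisions / total decisions."""
    counts = defaultdict(lambda: [0, 0])  # group -> [approved, total]
    for r in records:
        counts[r[group_key]][0] += r[outcome_key]
        counts[r[group_key]][1] += 1
    return {g: approved / total for g, (approved, total) in counts.items()}

def disparate_impact_ratio(rates):
    """Lowest rate divided by highest; values well below ~0.8 often warrant review."""
    lo, hi = min(rates.values()), max(rates.values())
    return lo / hi if hi else 0.0

# Hypothetical toy records: a protected attribute plus a binary model decision.
records = [
    {"group": "A", "approved": 1}, {"group": "A", "approved": 1},
    {"group": "A", "approved": 1}, {"group": "A", "approved": 0},
    {"group": "B", "approved": 1}, {"group": "B", "approved": 0},
    {"group": "B", "approved": 0}, {"group": "B", "approved": 0},
]
rates = approval_rates(records)
print(rates)                          # {'A': 0.75, 'B': 0.25}
print(disparate_impact_ratio(rates))  # 0.25 / 0.75 ≈ 0.33 -> flag for review
```

In a real compliance workflow this kind of check would run over the full training and evaluation datasets and feed into the documented risk assessment, alongside data-quality and robustness tests.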
Luxembourg has a well-structured ecosystem to support these efforts. Institutions like the National Commission for Data Protection (CNPD), ILNAS, and the National AI Commission provide recommendations and governance to guide companies in achieving compliance.
Lastly, investing in training and awareness is critical for both technical teams and end users. Developers and data scientists must fully understand their responsibilities and the impact of the models or algorithms they design. This requires improved governance, comprehensive documentation, and greater accountability for everyone involved. For end users, transparency is paramount. For instance, when a user interacts with a chatbot or automated system, they must be clearly informed that they are not communicating with a human. This clarity fosters mutual trust between companies and their customers, promoting ethical and informed use of AI systems.”
How does Proximus NXT Luxembourg help its clients understand and implement best practices to comply with the new rules?
EG: “At Proximus NXT Luxembourg, we have implemented several initiatives to guide our clients through this transition. For example, we help businesses integrate a risk assessment layer into their projects. This is particularly relevant in sectors like finance and insurance, where risks are often higher. Take the case of an insurance company: a poorly calibrated AI model could lead to discrimination by wrongfully denying certain contracts. We work with these businesses to prevent such scenarios.
On the technical side, models can be instrumented for monitoring, and we already do this for some of them. This allows us to track their behavior in production and intervene or adjust if the results deviate from the objectives defined during the model's design phase.
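As a rough illustration of what such monitoring can look like in code, the sketch below compares a rolling average of a model's outputs against a baseline fixed at design time and raises a flag when the two diverge. It is a minimal stand-in for real production monitoring; the baseline value, tolerance, and window size are all hypothetical parameters, not values from the interview.

```python
class DriftMonitor:
    """Flag when a model's rolling output mean drifts away from the
    behavior observed at design time (a simple monitoring stand-in)."""

    def __init__(self, baseline_mean, tolerance, window=100):
        self.baseline = baseline_mean  # expected output level at design time
        self.tolerance = tolerance     # acceptable deviation before intervening
        self.window = window           # number of recent predictions to track
        self.recent = []

    def observe(self, prediction):
        """Record one prediction; return True if intervention is warranted."""
        self.recent.append(prediction)
        if len(self.recent) > self.window:
            self.recent.pop(0)
        rolling_mean = sum(self.recent) / len(self.recent)
        return abs(rolling_mean - self.baseline) > self.tolerance

monitor = DriftMonitor(baseline_mean=0.5, tolerance=0.1, window=50)
print(monitor.observe(0.52))   # False: close to the design-time baseline
for p in [0.9] * 50:           # simulate a sustained shift in model outputs
    alarm = monitor.observe(p)
print(alarm)                   # True: rolling mean has drifted beyond tolerance
```

Real deployments would typically monitor several signals at once (input distributions, error rates, per-group outcomes) and route alarms into an incident process rather than a boolean flag.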
Security remains a priority. We conduct in-depth assessments of potential vulnerabilities around data or the models themselves. This includes safeguarding datasets, ensuring comprehensive documentation of processes, and establishing contingency plans to respond effectively in case of incidents.
Finally, we collaborate closely with Luxembourgish institutions to ensure our practices meet local standards, notably through the AI4Gov initiative, a government project that provides tools and practical advice to ministries, administrations, and public servants to help them navigate a complex and fast-moving AI landscape.”
The EU AI Act marks a turning point in the governance of artificial intelligence in Europe. With its flexible, risk-based approach, it provides businesses with a clear and adaptable framework. Thanks to players like Proximus NXT Luxembourg, companies can not only comply with the new requirements but also leverage best practices to integrate AI in an ethical, secure, and sustainable manner.