
Reflections on the Provisional Agreement of the AI Act – Insights & Takeaways for DPOs

Dr. Maria Moloney of PrivacyEngine


    Dr. Maria Moloney reflects on the recent provisional agreement of the AI Act and gives us plenty to think about.

    How did the AI Act begin?

    In February 2020, the European Commission published a White Paper on Artificial Intelligence, which outlined its vision and strategy for developing and regulating AI in the EU. The AI Act was then first presented by the European Commission on April 21st, 2021, forming a significant part of the Commission’s Digital Strategy.

    The Commission initially proposed to classify AI systems into three categories based on their risk level, but ultimately, four categories were agreed upon. The Commission’s draft proposal was then submitted to the European Parliament and the Council of the European Union for further discussion and negotiation. The Council agreed its general approach in December 2022, while the Parliament adopted its position on the proposal in June 2023. The Council presidency and the European Parliament’s negotiators finally reached agreement on the 9th of December 2023.

    Provisional Political Agreement of the AI Act

    Following three lengthy days of talks, the resolution of the AI Act trilogue marked a crucial moment for Europe and the global AI community.

    Members of the European Parliament (MEPs) finally achieved a provisional agreement with the Council on the AI Act. The Act aims to guarantee the safety of AI in Europe, uphold fundamental rights and democracy, ensure environmental sustainability, and foster growth, innovation, and expansion of European businesses. 

    Needless to say, this has always been a big ask, and whether it truly achieves this finely balanced equilibrium, only time will tell. What it has achieved immediately, in my opinion, is to deliver a robust message about the EU’s continued effectiveness as a significant authority in international technology regulation.




    Achievements of the AI Act So Far

    Some of the notable achievements of the AI Act are as follows: it provides safeguards for general-purpose AI, allows limits on the use of biometric identification systems by law enforcement, bans social scoring and manipulative AI, and gives consumers the right to file complaints. It also introduces a revised system of governance for AI at an EU level in the form of an AI Office with some enforcement powers. It provides for better protection of citizens’ rights through the obligation for deployers of high-risk AI systems to conduct a fundamental rights impact assessment before putting an AI system into use.

    The EU approach is somewhat unique in that it has sought a single overarching piece of legislation that sits across all sectors of the economy. This differs from many jurisdictions, where each sector often develops its own legislation to regulate AI in that sector. That is not to say that individual pieces of legislation, guidelines, codes of conduct and best practices will not appear in Europe in the future across various industries and sectors to support and complement the AI Act. Still, it does mean that the AI Act is far more complex and all-encompassing than most pieces of legislation.


    What is important in Europe’s approach is that it draws on previous legislation, such as product safety legislation and policy. This approach recognises that AI is not just one technology: it has multiple uses and applications and, thus, many more implications for society. What the EU has done is take a risk-based approach to legislating for AI, recognising that different types of systems in different contexts present different types of risk.

    The only reservation in this provisional agreement concerns regulating Foundation Models (FMs). Admittedly, the Act has made great progress in this area; however, whether this progress will be sufficient is still unclear. The provisional agreement stipulates that foundation models must adhere to designated transparency requirements before being introduced to the market.  

    A more rigorous framework has been instituted for ‘high-impact’ foundation models. These models, characterised by extensive data training, advanced complexity, and capabilities surpassing the norm, have the potential to spread systemic risks down the value chain, hence the need for tougher controls. Very few existing FMs, however, fall into this ‘high-impact’ category, which leaves many FMs lightly regulated and yet, in my opinion, still possessing the potential for significant harm.

    In summary, the AI Act has been 2.5 years in the making, and incredibly, we are still in the early stages of understanding its full impact. One thing that is certain at this time is that it is seen as hugely significant at both the EU and global levels. The EU is not alone in determining how best to regulate AI systems and provide appropriate oversight while facilitating innovation, keeping people safe, and upholding fundamental rights. It is, however, the first to achieve a provisional political agreement, something that all EU citizens can be satisfied about in the run-up to the holidays.

    What questions are left to be answered?

    • Does the AI Act truly balance innovation and regulation, or will it stifle creativity and hamper growth in Europe?
    • Will the outlined enforcement and compliance mechanisms for high-risk systems be successful and not prove too burdensome for smaller European organisations, or will they put these organisations at a competitive disadvantage compared to larger international organisations?
    • Given that AI technology is changing rapidly, will the AI Act continue to ensure respect for human rights and fundamental freedoms while ensuring public safety and security long into the future? Will it stand the test of time?
    • Will Europe continue to enjoy the ‘Brussels effect’, where the AI Act fosters international cooperation and dialogue to ensure other countries follow the European approach to regulating AI, or will another jurisdiction develop a more robust approach to AI regulation?

    About Dr. Maria Moloney


    Dr. Maria Moloney, Senior Researcher and Consultant at PrivacyEngine
