By Maria Moloney
Agentic AI represents a major shift in how artificial intelligence systems are designed and used. These systems are no longer simply reactive tools that follow predefined instructions. Instead, they can act autonomously, choose their own actions, call on external systems and services, and adapt their behaviour in real time with minimal human oversight.
A recent Financial Times article highlighted just how autonomous these systems can become, describing a scenario in which agents built with OpenClaw technology began communicating among themselves on a social network created specifically for bots. That kind of functionality depends on several core features. Users must grant the agent access to a meaningful part of their computing environment, allow it discretion to try different actions in pursuit of a goal, and provide it with enough memory to retain information from previous sessions. Over time, this allows the agent to build a deeper level of personalisation.
The commercial potential is obvious. In areas such as finance, logistics, and customer service, agentic AI could drive major efficiency gains. However, the same autonomy that makes these systems valuable also creates significant legal and governance challenges, especially when personal data moves across borders.
What Makes Agentic AI Different from Traditional AI?
Traditional AI systems usually operate within tightly defined parameters. They process inputs, produce outputs, and follow relatively predictable workflows. Agentic AI is different because it can make decisions about how to complete a task, which tools to call, where computation should happen, and which services to use along the way.
That makes agentic AI look less like conventional software and more like a technical collaborator. It can execute multi-step tasks, adjust to changing conditions, and optimise outcomes over time. From a governance perspective, this is where the risk begins. The more autonomy the system has, the harder it becomes to track decision-making, assign responsibility, and maintain meaningful oversight.
Why Agentic AI Challenges GDPR Transfer Rules
From a GDPR perspective, the most significant issue is that agentic AI disrupts the assumptions on which cross-border transfer compliance is built. The GDPR assumes that humans decide where personal data goes, who receives it, and for what purpose. It also assumes that those decisions can be mapped, documented, and safeguarded before the transfer happens.
That logic sits at the heart of Chapter V of the GDPR and underpins the European Data Protection Board's approach to international transfers. Controllers are expected to identify recipients, destinations, and onward transfers with a reasonable degree of certainty.
Agentic AI weakens those assumptions. These systems can autonomously select tools, call APIs, route tasks across multiple services, and determine where processing takes place in real time. In practice, that means the system may decide the transfer path itself.
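To make this concrete, consider a minimal sketch of runtime routing. The endpoint names, regions, and latency-based scoring rule below are all hypothetical; the point is that nothing in the agent's decision logic treats jurisdiction as a constraint:

```python
# Hypothetical sketch: an agent selecting a tool endpoint at runtime.
# Endpoint names, regions, and latencies are illustrative assumptions.

ENDPOINTS = [
    {"name": "summarise-eu", "region": "eu-west-1", "latency_ms": 120},
    {"name": "summarise-us", "region": "us-east-1", "latency_ms": 45},
    {"name": "summarise-ap", "region": "ap-south-1", "latency_ms": 200},
]

def pick_endpoint(endpoints):
    # The agent optimises for latency; jurisdiction is not a factor,
    # so the transfer destination is a side effect of the routing rule.
    return min(endpoints, key=lambda e: e["latency_ms"])

chosen = pick_endpoint(ENDPOINTS)
print(chosen["region"])  # us-east-1: the "transfer" happens implicitly
```

Here the processing location is an emergent property of an optimisation rule rather than a documented design decision, which is precisely the kind of choice Chapter V assumes a human has made in advance.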
AI agents are not legal controllers, but they can still make operational decisions based on internal logic rather than fixed human instructions. That creates a serious transparency problem. If the destination is chosen dynamically at runtime, it becomes much harder for an organisation to say that it intentionally selected a specific transfer route.
Why Traditional Transfer Documentation Starts to Break Down
In many real-world agentic AI deployments, organisations may not know exactly where the data goes. These systems often chain together multiple APIs, call model endpoints in different jurisdictions, and rely on tools whose own sub-processors can change over time.
The same user prompt could be processed entirely within the EEA one day, partially in the US the next, and across several cloud regions the following week. That makes traditional compliance documentation increasingly difficult to maintain.
Transfer impact assessments, Article 30 records of processing, and SCC annexes all assume that data flows are relatively stable and knowable in advance. With agentic AI, that assumption no longer holds. The compliance model remains static, while the technical reality becomes dynamic and fluid.
This problem did not begin with agentic AI. Complex cloud environments had already placed pressure on static transfer documentation. However, agentic systems push that pressure to a much more serious level.
Why SCCs and Schrems II Become Harder to Apply
Standard Contractual Clauses rely on a basic premise: you know who the importer is, where they are based, what processing they carry out, and which laws apply to them. That framework was already narrowed by Schrems II, which made clear that SCCs are valid only when they deliver an essentially equivalent level of protection in practice.
Agentic AI makes that much harder to assess. The recipient may not be a clearly identified vendor with a stable role. It may be a dynamically invoked tool, model endpoint, or downstream service with which the organisation has no direct relationship. The jurisdiction may vary by request, and the supporting sub-processors may change regularly.
At that point, the gap between legal theory and technical reality becomes difficult to ignore. You cannot realistically contract with a self-directing software workflow in the same way you contract with a fixed supplier. Regulators will eventually need to address that mismatch, either by developing more flexible safeguards or by taking a stricter position on how agentic systems can lawfully process personal data.
Purpose Limitation and Data Minimisation Risks in Agentic AI
Agentic systems do not process data in a simple, linear way. They can reinterpret tasks, generate sub-tasks, combine data from different sources, and repurpose inputs across multiple steps. That means the original purpose for collecting the data may not align neatly with how the system later uses it.
At the same time, these systems often send more context than is strictly necessary. They may retain conversational memory, preserve task history, and pass personal data into tool calls as a precaution rather than a necessity.
From a transfer perspective, that matters. Every unnecessary personal data element sent outside the EEA may represent a compliance failure, not just a technical inefficiency. This creates tension with the GDPR principles of purpose limitation, data minimisation, necessity, and proportionality.
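One partial mitigation is to filter the agent's context before each tool call so that only strictly necessary fields cross the boundary. The sketch below is illustrative only: the field names and the allow-list are assumptions, and a real deployment would need per-tool policies:

```python
# Hypothetical sketch of a minimisation filter applied before a tool call.
# Field names and the allow-list are assumptions for illustration.

CONTEXT = {
    "task": "find flights to Lisbon",
    "name": "Jane Doe",
    "email": "jane@example.com",
    "passport_no": "X1234567",
    "date": "2025-03-02",
}

# Only the fields this particular tool actually needs for this step
ALLOWED_FOR_FLIGHT_SEARCH = {"task", "date"}

def minimise(context, allowed):
    """Drop every field the downstream tool does not strictly need."""
    return {k: v for k, v in context.items() if k in allowed}

payload = minimise(CONTEXT, ALLOWED_FOR_FLIGHT_SEARCH)
print(payload)  # no name, email, or passport number leaves the boundary
```

The design choice matters: minimisation is enforced at the call boundary rather than left to the agent's own judgement about what context is "helpful".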
Accountability Becomes Harder Across the AI Supply Chain
The traditional controller-processor model becomes harder to apply in agentic AI environments. A typical deployment may involve the organisation using the system, the model provider, an orchestration layer, several tool providers, and one or more cloud vendors. Each of those parties may influence where data goes, how it is processed, and how long it is retained.
That creates a practical accountability problem. If a transfer is unlawful, who is responsible? Who answers to the regulator? Who informs the data subject? Existing GDPR role definitions were built for more linear processing arrangements, not for dynamic, multi-layered AI ecosystems.
This issue is likely to become even more visible as the EU AI Act and the GDPR begin to operate more closely alongside each other. The AI Act's emphasis on clearly allocated responsibilities across the AI value chain will only sharpen the pressure on organisations to define roles more carefully.
Transparency Obligations May Become Difficult to Meet
Articles 13 and 14 of the GDPR require organisations to tell individuals where their personal data goes, which countries receive it, and which recipients are involved. In a conventional processing chain, that is difficult but manageable. In an agentic AI environment, it may not always be possible to answer with certainty before processing begins.
In some cases, the most honest answer may be that the organisation does not know in advance and that the transfer path may change each time. That may be truthful, but it does not fit comfortably within current GDPR transparency expectations.
The same tension appears in the EU AI Act. Both legal regimes depend on meaningful transparency, yet agentic systems introduce probabilistic behaviour and variable execution paths that even their designers may not predict fully.
How Regulators May Respond
The long-term issue is straightforward: the GDPR's transfer regime assumes that humans design and control data flows, while agentic AI replaces fixed workflows with autonomous routing decisions. That creates a structural mismatch between existing legal rules and emerging technical architecture.
Regulators will likely need to respond with more AI-specific transfer governance measures. These could include:
- mandatory geo-fencing for agentic routing
- EEA-only execution modes for personal data
- approved sovereign AI or sovereign agent architectures
- auditable runtime transfer logs
- restrictions on which tools an agent may call
- stronger technical controls around regional processing
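Two of these measures, geo-fencing and auditable runtime transfer logs, can be sketched together. Everything below is a hypothetical illustration: the region codes, tool names, and in-memory log are assumptions, not any real framework's API:

```python
# Hypothetical sketch: an EEA-only allow-list for agent tool calls
# combined with an auditable runtime transfer log. Region codes and
# tool names are illustrative assumptions.

from datetime import datetime, timezone

EEA_REGIONS = {"eu-west-1", "eu-central-1", "eu-north-1"}
TRANSFER_LOG = []

def call_tool(tool, region, payload):
    """Refuse non-EEA destinations and record every routing decision."""
    allowed = region in EEA_REGIONS
    TRANSFER_LOG.append({
        "tool": tool,
        "region": region,
        "allowed": allowed,
        "ts": datetime.now(timezone.utc).isoformat(),
    })
    if not allowed:
        raise PermissionError(f"{tool}: region {region} is outside the EEA geofence")
    return f"{tool} executed in {region}"

call_tool("translate", "eu-central-1", {"text": "hello"})
try:
    call_tool("summarise", "us-east-1", {"text": "hello"})
except PermissionError:
    pass  # blocked, but still logged for the audit trail
```

The log captures what actually happened at runtime, which is the kind of evidence a static SCC annex cannot provide when routing decisions are made per request.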
Over time, the market may also shift away from relying primarily on legal safeguards, such as SCCs and transfer impact assessments, and move towards more technical controls, including region-locked inference, EU-hosted models, privacy-preserving computation, and on-device processing.
A major open question remains unresolved: what counts as a "transfer" in an AI context? If processing happens transiently outside the EEA, does that qualify? If inference happens abroad without storage, is that a transfer? If encrypted remote execution takes place in another jurisdiction, how should that be classified? These questions are not settled, but agentic AI will force regulators to confront them.
Conclusion
Agentic AI is exposing a growing fault line in GDPR compliance. The current transfer regime was built for a world in which humans decide where personal data goes. Agentic AI replaces that with systems that can route data autonomously across tools, services, clouds, and jurisdictions.
That creates a direct clash between static legal assumptions and dynamic AI infrastructure. As organisations deploy more autonomous systems, that clash is likely to become one of the most important regulatory issues in AI governance and international data transfer compliance.