By Dr Maria Moloney
Artificial Intelligence (AI) is not a new phenomenon, and neither is the idea of protecting one’s privacy online. When the Internet came online in the late twentieth century, the need for increased regulation slowly became apparent. Just like our physical world, this new “online” world needed to be controlled, and protections needed to be put in place so that everyone could enjoy it safely. One need that quickly became apparent in the virtual world was the protection of our information, or data protection. In the decades since the advent of this new world, laws protecting our online privacy have spread across the globe.
A quarter of a century into the new millennium, we are seeing a rising need for more protection, this time against the ills of AI. That is not to say that everything about AI is bad; it offers many advantages, but, like every new technology, it also has downsides. Many countries have approached these challenges differently. Just as when the Internet first came along, the options for controlling threats vary from soft laws, policies, and standards to hard laws such as regulations and directives.
Over the last decade, Europe has made great efforts to be the first to regulate both data protection and AI, with the General Data Protection Regulation (GDPR) and the more recent EU AI Act. Some jurisdictions have followed suit, but many are holding off, especially around AI regulation, taking their time to implement rules that best suit their specific requirements. India published its Digital Personal Data Protection (DPDP) Act in 2023. This piece of legislation is quite different to the GDPR in Europe, and India’s approach to AI is also proving quite dissimilar to the European approach. So, compliance experts with experience in the European data protection and AI regimes will need time to understand India’s approach. With that in mind, this blog post explores the differences and similarities between Europe’s and India’s approaches to regulating these two technology-related fields.
Same challenges, different regulatory paths…
An interesting place to start is by looking at how India and the European Union are approaching AI and data protection regulation. What stands out is not just what they are regulating, but how they think about regulation itself.
The EU’s instinct is familiar by now to many of us. What they tend to do is to build a comprehensive legal architecture, define categories and obligations, and give regulators clear tools to enforce them.
India’s instinct is different…
It seems more cautious about over-regulating fast-moving technology and tends to govern through principles and existing legal frameworks. It targets interventions where harm is most likely to occur. The difference between these two jurisdictions becomes very clear when you put India’s DPDP Act, 2023, alongside the EU’s GDPR and, more recently, the EU AI Act.
India’s DPDP Act is best understood as a modern, pragmatic privacy statute. It does not try to replicate the GDPR, nor does it attempt to build a dense body of jurisprudence from day one. Instead, it sets out a clear social contract: individuals, or what the Act terms Data Principals, are entitled to protection of their digital personal data, and organisations, or what the Act calls Data Fiduciaries, may process that data for lawful purposes, but only subject to defined obligations.
Consent plays a more central role in the DPDP Act than it does under the GDPR. The Act also mirrors India’s policy priorities: digital services are allowed to scale and innovation is allowed to thrive, while the rules remain enforceable without overwhelming the regulators or Indian industry. Enforcement of the DPDP Act is controlled through a single national body, the Data Protection Board of India, rather than a decentralised network of supervisory authorities as we have in Europe.
What really brought the DPDP regime to life, though, were the DPDP Rules notified in 2025. These rules have taken the DPDP Act from being a legislative objective to being something that can now be operationalised by industry. They spell out how notices work, how obligations are triggered, and how the system is expected to function in practice. For organisations, the notification of these rules was the moment India’s data protection regime became something that had to be actively run, not just acknowledged. The rules levelled the playing field far earlier in the regime’s lifecycle than we saw with the GDPR, which achieved similar outcomes only gradually, through years of regulatory guidance, enforcement practice, and judicial interpretation across institutions like the European Data Protection Board, the Court of Justice, and national supervisory authorities.
Perhaps India benefited from holding off on early regulation, observing the activity of its counterparts in Europe and learning from their experiences. Perhaps not. We may never know.
Neither regime is better or worse; they simply reflect fundamentally different regulatory philosophies, institutional structures, and policy priorities. Each shapes how data protection is implemented and experienced in practice in their own jurisdictions.
So, let’s break it down…
If you are used to the GDPR, the DPDP Act will feel familiar, but not identical. Both laws are rooted in accountability, individual protection, and organisational responsibility. Both expect organisations to explain what they are doing with personal data and why, and both have extraterritorial ambitions.
Where they diverge is in structure and emphasis. The GDPR is a deeply articulated rights-based instrument, shaped by almost a decade of European constitutional thinking and regulatory practice. It gives organisations multiple legal bases, detailed rights, and a dense body of guidance and case law.
The DPDP Act is more streamlined. It reflects India’s preference for clarity and enforceability over doctrinal complexity. That, by no means, makes it weaker; it simply means that GDPR compliance cannot be “copied and pasted” into India. A certain level of mapping and judgement is required, and any prior assumptions need to be tested.
“GDPR and the EU AI Act are often mentioned together, but they actually do very different jobs”
The EU AI Act is frequently described as the GDPR for AI, but that analogy only works at a very superficial level. The GDPR is about personal data and fundamental rights. The AI Act is about AI systems and systemic risk. You can fall within the scope of the AI Act without processing personal data at all, and you can trigger GDPR obligations without using AI.
What the two laws share is a governance mindset, i.e. risk awareness, documentation, lifecycle controls, and accountability. Each of these pieces of legislation, however, applies that mindset to different problems. The AI Act classifies AI systems by risk, imposes prescriptive obligations on high-risk uses, and introduces new governance concepts such as conformity assessment, post-market monitoring, and AI literacy obligations. It is explicitly a product- and system-oriented regulation.
This is why the EU has ended up with a dual compliance stack: GDPR for data protection, and the AI Act for AI governance. With the forthcoming Digital Omnibus package, the EU is seeking to make this stack flatter and more cohesive.
What about AI and India, then?
When it comes to AI, however, India seems to have deliberately resisted the temptation to pass a single, binding AI law.
In fact, its position has been quite consistent: regulate the uses and harms of a technology, not the technology itself. Added to this, the Government has been explicit that it does not want to stifle innovation in a field that is central to the country’s economic growth, public services, and global competitiveness.
This philosophy is reflected in the AI Governance Guidelines issued by India’s Ministry of Electronics and Information Technology (MeitY). It is important to note that these guidelines are not legislation. The principles focus on responsible, safe, inclusive, and trustworthy AI, and outline AI governance as a continuous process spanning design, deployment, and ongoing monitoring. They address the risks associated with AI, including various biases, misinformation, deepfakes, and the potential for harm to individuals and society as a whole.
Crucially, enforcement is not embedded in the guidelines themselves. India instead relies on existing legal levers, like the DPDP Act, its cybersecurity obligations, various platform regulations, consumer protection laws, and sector-specific oversight, to address instances where AI causes real-world harm. That is why we see targeted interventions, such as the proposed requirements around labelling AI-generated content, rather than a sweeping AI statute.
To sum up, India is building AI governance around its legal system, rather than simply adding an extra layer on top.
India vs the EU: Two Philosophies, Two Compliance Realities
Put these two approaches side by side, and the contrast is quite striking. The EU has chosen legal certainty through classification, prescriptive obligations, and harmonised enforcement. The AI Act tells organisations exactly when they are in scope, what category they fall into, and what they must do.
India, on the other hand, has chosen flexibility. Its AI governance is principles-based, adaptive, and deliberately incomplete in legislative terms. It assumes that not all risks can be anticipated in advance, and that regulation should evolve through guidance, sectoral action, and targeted rules.
Neither approach is inherently better. They do, however, create very different expectations for organisations operating globally. In the EU, AI governance must be auditable with a paper trail. In India, it must be justified as being in the public interest.
The Practical Takeaway
For organisations operating across India and the EU, the real challenge is avoiding false equivalence. It would be wrong to assume that a solution that works for one regulatory regime will naturally satisfy the other. India’s DPDP Act is not a lighter version of the GDPR. India’s AI guidelines are not a soft version of the EU AI Act. And the EU AI Act is not an extension of the GDPR.
The smartest governance strategies would treat these regimes as distinct but interconnected. They would reuse evidence where it makes sense, e.g., in risk assessments, security controls, or specific governance structures, but keep the legal narratives separate. Privacy would remain privacy, and AI governance would remain AI governance. Integrating both auditability and justifiability is not just a legal necessity; it is the key to maintaining cross-border credibility and moving at the speed of the global market.
🎧 Want the DPDP Act explained in plain English?
Listen to our podcast episode on India’s DPDPA.
What it means, what’s changed with the Rules, and what organisations need to do next. Hit play now.