Paris, November 28, 2024 – In the global race for leadership in AI, European lawmakers and several European agencies, as well as a number of leading economic players, have seized on pseudonymisation as a data protection technique. By facilitating the processing of personal data while offering solid guarantees for its protection, robust pseudonymisation, combined with other data protection measures, is a powerful lever for innovation and competitiveness across many economic sectors. These are the conclusions of a study by Professor Théodore Christakis.
At the request of Samman, a law firm specialising in public affairs, Théodore Christakis, Professor of International Law, European Law and Digital Law at Université Grenoble Alpes, has produced an exhaustive and wide-ranging study on the importance of pseudonymisation in positive law in the age of artificial intelligence.
The GDPR and several other fundamental texts of the European Union recognise the great usefulness of pseudonymisation.
Pseudonymisation is a de-identification technique that is particularly popular in scientific, historical or statistical research, as it reduces the risk of identification for the people whose data is used. This technique replaces any identifying characteristic of the data with a pseudonym or value that does not directly identify the data subject.
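As an illustrative sketch only (not drawn from the study), the replacement of direct identifiers described above can be implemented with a keyed hash, where the secret key is held separately from the pseudonymised dataset. The record layout, field names and key below are hypothetical:

```python
import hmac
import hashlib

# Hypothetical secret key; in practice it would be stored apart from
# the pseudonymised data, since whoever holds it can re-identify records.
SECRET_KEY = b"key-held-separately-from-the-data"

def pseudonymise(record, identifying_fields):
    """Replace directly identifying fields with keyed-hash pseudonyms,
    leaving the remaining (useful) attributes untouched."""
    out = dict(record)
    for field in identifying_fields:
        value = str(record[field]).encode("utf-8")
        out[field] = hmac.new(SECRET_KEY, value, hashlib.sha256).hexdigest()[:16]
    return out

record = {"name": "Alice Martin", "city": "Grenoble", "diagnosis": "J45"}
print(pseudonymise(record, ["name"]))
```

Because the key permits re-identification, such data remains personal data under the GDPR; pseudonymisation mitigates risk, it does not anonymise.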
It is regarded by the General Data Protection Regulation (GDPR), the Data Governance Act, the Data Act, the Digital Services Act and all the other legislative texts that form the pillars of the European data strategy as a technique for protecting data and mitigating the risks associated with data processing.
For example, according to the GDPR, which contains no fewer than 15 references to it, pseudonymisation makes it possible to strengthen the security of personal data, protect data by design, promote compliance with several cardinal data protection principles, and mitigate the consequences of a personal data breach.
Case law in the EU and several other European countries recognises the usefulness of pseudonymisation.
The legal debate on pseudonymisation traditionally focuses on two legal approaches: the relative approach, which considers the status of data in relation to the actual capacity for re-identification by a specific party; and the absolute approach, which is based on the abstract possibility of re-identification, regardless of who holds the data. European jurisdictions have not definitively resolved this debate but place emphasis on the practical likelihood of re-identification. Some data protection authorities are nevertheless adopting a strict stance, favouring an ‘all or nothing’ approach: either the data is irreversibly anonymised, in which case the GDPR does not apply (‘nothing’); or there is an abstract possibility of re-identification, in which case the data is considered personal and the GDPR fully applies, meaning pseudonymisation does not offer any ‘exemption’ to its controller (‘all’).
While acknowledging that data protection authorities are right to emphasise the distinction between pseudonymisation and anonymisation, Professor Christakis’s study demonstrates that pseudonymisation is widely recognised at the European level as an effective technique for mitigating the risks associated with the processing of personal data.
In France, the Council of State (the highest administrative court) stressed in 2020 that pseudonymisation, which reduces the risk of data subjects being identified by deleting directly identifying information, helps to guarantee the right to privacy, while taking care to emphasise that the effectiveness of pseudonymisation measures should be verified by the French Data Protection Authority (CNIL).
With artificial intelligence paving the way for revolutionary advances in every sector of our society, the European Union is clearly at a crossroads in 2024.
Several European agencies consistently recommend the use of pseudonymisation. Since 2019, the EU Agency for Cybersecurity (ENISA) has maintained that pseudonymisation is becoming increasingly essential to facilitate the processing of personal data, while offering solid guarantees for its protection.
The success of common European data spaces, first and foremost the European Health Data Space (EHDS), may ultimately depend on a balanced and practical approach to pseudonymisation. It could also play an important role in building learning databases to train generative AI models.
“Pseudonymisation is a pragmatic solution to facilitate the secure sharing of data within common European data spaces, particularly in the field of health, and for the responsible development of AI. It makes it possible to maintain the usefulness of data while offering significant protection, under certain conditions, thus helping to reconcile technological innovation with respect for individual rights. It is therefore appropriate to move beyond the binary debate between ‘relative’ and ‘absolute’ approaches and recognise pseudonymisation as an essential component of a comprehensive data protection strategy. By adopting a balanced, pragmatic approach, it is possible to promote responsible use of data that respects the rights of individuals while fostering innovation.”
Professor Théodore Christakis,
Professor of International Law, European Law and Digital Law and AI Regulation Chair at Université Grenoble Alpes
“The study carried out by Professor Christakis, the first on this scale, is useful because it enables public decision-makers, regulators and other stakeholders to work together in a practical way to identify the technical solutions that respect the principles of EU law and support European AI stakeholders.”
Thaima Samman
Founding partner of Samman