Bridging AI development and governance: Key insights from the IPoP Workshop
The workshop ‘Bridging AI development and governance’, organized in the context of the Interdisciplinary Project on Privacy (IPoP), brought together legal and technology scholars, joined by participants who have obtained the Digital Ethics Officer certification from EDHEC Business School (EDHEC Augmented Law Institute), to discuss frictions and solutions at the intersection of technological development and the regulatory governance of novel AI systems.

The event was hosted on April 3 by the EDHEC Augmented Law Institute at the EDHEC Paris campus. Across three panel discussions and two workshops, the exchanges highlighted the challenges of regulating AI while offering several paths forward.
Opening speeches
Christophe Roquilly (Director of the EDHEC Augmented Law Institute and Honorary Dean of Faculty at EDHEC Business School) opened the day by welcoming the participants to the EDHEC Paris campus. He emphasized the important role of the EDHEC Augmented Law Institute in innovative legal research, education and services.
Antoine Boutet (Assistant Professor at the Inria Privatics research group and IPoP Coordinator) continued with an explanation of the Interdisciplinary Project on Privacy. As the project's name implies, the interdisciplinary nature of the workshop aligns perfectly with its broader goals.
Michaël Van den Poel (Research Engineer at the EDHEC Augmented Law Institute and coordinator of this event) closed the introduction, highlighting how the workshop aims to be a starting point for increased cooperation between legal and technology scholars.
Panel 1: The technological state of the art of AI and its societal impacts
The first panel, moderated by Antoine Boutet, started with a discussion of the recent EDPB Opinion on AI models. Juliette Sénéchal (Lille University) and Pankaj Raj (MIAI) explained how the Opinion provides a legal analysis of the possible anonymity of AI models, noting that, according to the EDPB, a model is considered anonymous only if there is an ‘insignificant’ likelihood of re-identifying personal data using all reasonably likely means. Cédric Eichler (INSA) and Tristan Allard (Irisa) responded by pointing to the lack of agreement on what ‘insignificant’ could mean. They then outlined the wide range of attacks that can be used to extract personal data from AI models, and the difficulty of providing protections against all possible attacks. Cooperation between regulators and developers on how to address these attack risks and which protections to use, including differential privacy, could be a way forward.
Panel 2: AI compliance with existing legal frameworks
The second panel, moderated by Christophe Roquilly, presented three use cases in which AI models were assessed in light of data protection and AI regulation. Benjamin Nguyen (INSA) presented his experience as a member of the committee evaluating the use of AI-augmented cameras during the 2024 Paris Olympics. Compliance with the GDPR was important for this government project, with the committee discussing how to apply concepts such as biometric data to the project. The committee also evaluated the technical utility of the system, which was made possible by the technical knowledge of its members. Geneviève Fieux-Castagnet (SNCF) moved on to the AI Act, describing her experience in applying AI standards to a project that emerged while the AI Act was still under negotiation. By applying these standards early on, she was able to anticipate the content of the AI Act. Isabelle Landreau (IDEMIA) presented her company's work on biometric entry control in companies. She compared different global perceptions of the use of biometrics and the implementation of explicit consent under the GDPR.
Panel 3: Upcoming AI regulation
The third panel, moderated by Gianclaudio Malgieri (Leiden University), provided a deep dive into the different emerging AI frameworks. Ludovica Robustelli (Nantes University) explained the interaction between upcoming and existing regulation. GDPR provisions on explainability and automated decision-making have recently been interpreted broadly by the Court of Justice of the European Union (CJEU), which makes them relevant to topics also covered by the AI Act. She also mentioned Article 10(5) of the AI Act, which aims to provide a way for AI developers to use sensitive data for debiasing. William Letrone (Nantes University) continued, warning that the regulatory landscape for AI is fragmenting. Many loose ends allow the private sector to have a considerable impact on the application of regulation, with the Trump administration stalling AI regulation in the US. Lucas Anjos (Sciences Po) put the novelty of the US approach in context, pointing out how the previous US administration had lobbied against AI regulation in Brazil. Global divergences are also not new to AI but have existed since the era of the internet, raising a paradox of territoriality. According to him, explainability and algorithmic transparency are valuable, and the recent Court of Justice decision is a step in the right direction.
Workshop 1: The need for legal protections in AI governance – Perspectives from technology scholars
The first workshop, moderated by Margo Bernelin (Nantes University), commenced with an introduction by Jan Ramon (Inria), who explained the different legal and technological approaches to thresholds. Whereas AI researchers can express probabilities and risks, legal experts work with abstract legislation, using words such as ‘appropriate’. Bridging this gap requires jointly determining how these abstract norms can be applied to AI. The discussion that followed focused on the uncertainty faced by technologists. While AI researchers can develop multiple attacks capable of extracting data from AI models, controllers required to protect against such attacks often do not know whether they are doing enough to satisfy legal requirements, as they cannot keep up with, or protect against, every imaginable attack while maintaining effective systems.
Workshop 2: How can the law provide these protections?
The second workshop, moderated by Michaël Van den Poel (EDHEC Augmented Law Institute), aimed to provide the legal response to the questions raised by technologists. Amongst legal scholars, there is a consensus that the principles-based approach creates ambiguity in interpretation. This ambiguity can be resolved by judges, but that takes time, which can allow companies to strategically maneuver around the toughest regulation. Regulators working under uncertainty can also be influenced by public pressure, with data protection authorities showing a relatively lenient approach toward current generative AI practices that are often at odds with GDPR principles. Standards can offer clarity but are often drafted behind closed doors and with greater industry influence. Legal scholars should aim to bring knowledge of the reasons for and the content of legislation to technologists, and could also critically assess, jointly with technologists, the outcome of the standardization process, which often lacks transparency and inclusivity.
Conclusion
In his concluding remarks, Gianclaudio Malgieri (Leiden University) noted the importance of interaction between law and technology, as mutual understanding leads to better implementation of regulation. He also emphasized the political nature of decisions on technology, which are always rooted in societal choices.