
Arnaud Billion (EDHEC): "AI facts are often poorly reported, chains of responsibility are opaque, and suppliers are not very accountable"

Arnaud Billion, Associate Professor

In this interview, Arnaud Billion, Associate Professor at EDHEC and researcher at the EDHEC Augmented Law Institute, looks back on the January 2026 conference (Paris): “AI facts: what are the liability rights?” (1).

18 Feb 2026

Arnaud Billion is an Associate Professor at EDHEC and a Researcher at the EDHEC Augmented Law Institute. A specialist in intellectual property law, information technology law and ethics, as well as the philosophy of law and epistemology, he recently organized a conference in Paris on the theme “Fait(s) de l’IA : quels droits de la responsabilité ?” (“AI facts: what are the liability rights?”) (1).

 

What was the aim of this symposium organized by the Institut Présaje in partnership with the ENM, the Paris Court of Appeal, the EFB, and EDHEC?

The purpose of this symposium was to take a fresh look at artificial intelligence, the potential damage it can cause, and how the law can address it. I wanted to steer the symposium in this direction from an academic perspective, because the facts of AI are often poorly reported, the chains of responsibility are opaque, and suppliers are not very accountable (2). Moreover, insurers have yet to develop a workable model for covering AI. In fact, it is the law itself that is being pushed to its limits, as its models struggle to grasp the phenomenon of AI in its entirety.

I was very pleased to be able to bring together experienced researchers and doctoral students, professional magistrates, lawyers, and corporate lawyers, as well as a European Commission official in charge of the “defective products” directive.

 

Why do you say the law is being "defeated" by AI?

From the perspective of civil liability law, which seeks to identify a harmful event and a causal link, artificial intelligence is in a unique situation. Apart from a minority of cases where the technology is embedded in a device in the broad sense (autonomous cars, for example), AI is spreading throughout corporate information systems, making them less deterministic, a mixture of automation and simulation.

The conceptual challenge is therefore to recognize not only systemic damage (as the AI Regulation, or AI Act, invites us to do) but also those responsible for it: in an increasingly automated management system, the few real human decision-makers who could potentially be held liable act as moral buffers, or even statistical scapegoats, while AI agents (which are only procedures) are never accountable.

 

What will be the outcome of this symposium?

We will, of course, publish the proceedings and develop the theoretical models we have begun to propose. I am thinking in particular of the European Commission, which is keen to hear new ideas through the consultations it organizes and the expert groups it brings together.

Furthermore, thanks to its central position in the legal tech market and its institutional roots, the EDHEC Augmented Law Institute can help to clarify these difficult issues, always with a view to overall economic efficiency for the common good.

 

References

(1) Annual symposium of the Presaje Institute, co-organized on January 9, 2026, by EDHEC in Paris, in partnership with the National School for the Judiciary (ENM), the Paris Court of Appeal, and the EFB, the Professional Training School for Bar Associations within the jurisdiction of the Paris Court of Appeal - https://alll.legal/resource-item/colloque-presage/

(2) To supplement this discussion, see also Arnaud Billion's latest book, “AI: Techniques, Ethics, Legal Issues, and Best Practices” (Afnor, 2026), co-authored with Denis Yrieix, Sabrina Hammoudi, Yannis Martin, and Anaëlle Martin - https://nouveautes-editeurs.bnf.fr/accueil?id_declaration=10000001273674&titre_livre=IA_:_Techniques,_%C3%A9thique,_juridique_et_bonnes_pratiques