EDHEC Vox
Perspectives on artificial intelligence from Arnaud Billion and Michelle Sisto (EDHEC)

Michelle Sisto, Associate Professor
Arnaud Billion, Associate Professor

Security risks, labour market disruptions, impacts on mental health and the role of education: Michelle Sisto, Associate Dean at EDHEC and Director of its centre dedicated to AI issues, and Arnaud Billion, Associate Professor and Research Fellow at the EDHEC Augmented Law Institute, discuss artificial intelligence.

15 May 2026

To begin with, what strikes you most about the latest developments in AI?

 

Michelle Sisto: I believe that what happened between Anthropic and the US Department of Defense (1) has highlighted, in a very public manner, just how deeply AI has penetrated the military and geopolitical spheres. The fact that a single company, out of all the players in the AI sector, has set a clear limit on the ethical use of its technology should give us serious cause for concern. The other thing that strikes me is the speed at which agentic AI could be deployed across the economy (2). This raises real questions regarding security and the vulnerability of our economic and decision-making systems.

 

Arnaud Billion: I would even take this a step further. I get the impression that companies are only just beginning to realise the extent of ‘shadow AI’ within their organisations (3). They are using AI systems that they haven’t taken the time to specify technically or functionally, and they are gradually discovering all the potentially strategic data flows heading towards marketplaces – that is to say, other companies. Every employee who issues a prompt generates a data flow from the company to a range of third parties. The first of these is, of course, the model provider. But this provider’s business model often relies on monetising the data collected. How? From whom? I doubt many companies have asked themselves this question. We are therefore facing information systems that leak like a sieve – and this is only just beginning to appear on the radar.

 

M.S.: Especially since these business models are by no means set in stone. They are constantly evolving, starting with the price of tokens (4). However, once you have started to roll out AI within your organisation, you are locked into a business model over which you have no control and which could change overnight.

 

A.B.: This is exactly what Cory Doctorow describes with his concept of ‘enshittification’ (5): the way in which, ever since the dawn of the web – or even computing itself – a tool that initially appears to be an improvement always ends up turning against its user. A basic feature or service suddenly becomes a subscription; a solution you thought you owned becomes a user licence that can be revoked at any time.

 

Arnaud, in your book “IA : Techniques, éthique, juridique et bonnes pratiques” (6), you explain that AI cannot be regarded as a tool. If it isn’t a tool, what is it?

 

Arnaud Billion: Generative AI for the general public is a simulation of a tool, one that takes the form of a question-and-answer machine. The classic characteristics of a tool are, generally speaking, control over the force applied and an idea of the outcome: if I use a hammer to drive in a nail, I control the force I apply to the action, I know that no one else is striking the nail with me, and I have a fairly good idea of the result I can expect. With AI, however, I, the user, find myself at just one point in a vast calculation whose upstream process – and often its downstream one too – I know little or nothing about. I am faced with a window onto an ocean of 1s and 0s: I see the swell, I pour my glass of water into it, but I have no idea of the storm that set it in motion out at sea. If we stick to the technical description, AI is a computer programme. Which means we must revert to best practice in IT: entrusting its deployment to experts, testing, monitoring, securing, and so on.

 

Michelle Sisto: For me, what sets AI apart from a tool is the concept of autonomy. A hammer has no autonomy. But with agentic AI, I can give a task to an AI that will decide for itself how to carry it out, by breaking it down into subtasks and creating mini-programmes. To view AI as merely a tool is to severely underestimate the risks associated with its use.

 

A.B.: Categorising AI as a tool also reveals a certain sense of insecurity. We ‘use’ it, we ‘equip’ ourselves with it; in short, we seek to master something whose workings we feel are beyond our grasp. What this tells us is that we need tools, and AI appears to offer us just that. I think we mistake it for business software, a set of programmes that would genuinely serve a purpose by automating certain tasks.

 

M.S.: Back in 1988, when I was studying mathematics and computer science, people were talking about expert systems for business-support software. Although I was blown away, like everyone else, by the arrival of consumer-grade generative AI in 2022, it remains a general-purpose technology that simulates expertise.

 

A.B.: And as anyone who plays video games knows, spending hours on SimCity won’t make you a good mayor!

 

An MIT study (7) has highlighted the concept of ‘cognitive debt’ associated with the use of generative AI. What is your view on this issue?

 

Michelle Sisto: For me, this is one of the key challenges of AI, particularly from the perspective of our education sector (8). The role of an educator is to help people develop. This MIT study has highlighted what many of us have sensed for some time now. We conducted our own experiments with first-year Master’s students to get them to ‘experience’ the risk: at the start of the class, we asked them to prepare a presentation using AI; then, three hours later, we asked them to give the same presentation without any support. And at that point, there is very often a blank. They can no longer remember what was in the slides or the arguments they presented. Cognitive debt is real (9).

That is why, at EDHEC, we have introduced the pre-Master’s course ‘Me, Myself and AI’ (10), which aims to encourage students to reflect on their relationship with AI and their personal goals. Have I come here just to hand in a series of assignments and get a stamp on my degree? Or am I here to learn, grow, develop my skills and my cognitive potential? We are actually asking them to reflect on how they learn, on the difference between producing work and learning, and on the role AI can play in their educational journey. It is essential to confront them with these questions.

We are fortunate to be working with an audience that has already shown a certain appetite for cognitive activities. When we see how quickly this otherwise privileged group is delegating tasks to AI without questioning it, it is our role as educators to encourage them to take a step back, so that we can train leaders who will be able to spread this approach more widely in the future.

 

Arnaud Billion: We are witnessing a phenomenon of cognitive offloading that can go as far as the outsourcing of thought. Someone who has delegated their capacity for analysis and decision-making has demonstrated nothing at all, and has learnt nothing (11). One must experience this cognitive offloading to realise just how much the prosthesis that is AI is doing us a disservice. Especially as these interfaces are incredibly well designed: everything is done to keep us captive, to eliminate any friction associated with use, which contributes to our gradual degradation. Humans augmented by AI often find themselves diminished (12). Yet combating degradation is precisely the mission of education.

 

What risks does AI pose to mental health?

 

Michelle Sisto: Jonathan Haidt provides an excellent analysis of the impact of social media on young people’s mental health in his book The Anxious Generation (13). Self-esteem, relationships with others, echo chambers – the effects are harmful. Generative AI is the same scenario, only far worse. We conducted a survey among our students and those from six other international schools: one in five students reports a loss of self-confidence due to the use of AI, which, in turn, drives them to use it even more. It is a vicious, self-reinforcing loop that can have extremely harmful long-term effects. For us, this is yet another reason to encourage them to develop their critical thinking skills so they can question their relationship with AI.

 

Arnaud Billion: I would add that the nobility and appeal of the teaching profession really come into their own when it comes to AI and the challenges posed by its roll-out.

 

M.S.: And it gives us the opportunity to develop our teaching methods. It’s a real challenge, but I’m very hopeful!

 

In early March, Anthropic published a report on the impact of AI on the labour market (14). The report notes, in particular, that the jobs most at risk are those held by the most highly qualified workers in the highest pay brackets – the very roles that business school graduates are targeting. How do you approach this issue?

 

Arnaud Billion: I don’t think that AI and employment are two mirror-image curves where, as one rises, the other plummets. At least not in the long term. I believe instead in waves. We are in the first wave, and we must expect a significant number of workers to be ‘replaced’ (15): the senior staff member who uses prompts and will no longer need their junior colleague, the developer who cannot compete with the speed at which AI produces code, and so on. But what will happen next, once we have undermined business processes?

I think the second wave will take the form of a backlash, in response to the decline in quality caused by this haphazard deployment. Then will come the race for skills: everything will need to be put back on track, with human expertise brought back in to correct and repair the processes. To implement an AI diagnostic-assistance feature in a hospital, for example, we will need to standardise and formalise processes (workflows) both upstream and downstream of this intended moment of scaling up. All these additional bureaucratic tasks will place a burden on the hospital’s administrative and clinical staff, whose professional demotivation we will subsequently lament, without making the connection between the two phenomena. It is for this second wave that we must stay the course, for those who come through it will be those who have not given in and who have continued to develop their skills.

 

Michelle Sisto: There is no doubt that, in the short term, AI will have a significant impact on our students’ employment prospects. A closer look at the Anthropic report reveals that the roles and sectors most affected are precisely those for which we train our students: business, management, law, communications, and so on. And for those already in employment, the situation is not necessarily any better: a study by the Boston Consulting Group, published in early March (16), examined the phenomenon of ‘AI Brain Fry’, i.e. the mental exhaustion experienced by professionals who find themselves having to audit or supervise work carried out by AI. We are therefore facing a two-fold challenge: not only access to employment, but also the mental health of those already in work. For us as educators, these are warning signs. This is one of the reasons why we have opened a centre dedicated to AI: we wanted to study these risks and transformations closely in order to adapt our curricula and inform decision-makers.

 

Another major question raised by the development of AI as it becomes integrated into every aspect of our society is that of its governance. Who should regulate AI?

 

Arnaud Billion: One thing is certain: we won’t see any international governance. We’ve barely managed to achieve it for weapons of mass destruction; it’s hard to imagine how we could manage it for AI. European law promotes compliance and the principle of subsidiarity, which means delegating decision-making and control to the lowest possible level: that of the individual stakeholders. It makes perfect sense when you think about it: I am a business, so it is up to me to organise myself to deploy AI that does not destroy my IT system, that serves a business process, that does not discourage my staff and does not render all my knowledge useless. AI compels us to strive for excellence and remain vigilant: those who lose out will be those who have let things slide, who have adopted technologies they do not really understand or which do not fit the needs of their business lines.

 

Michelle Sisto: This, once again, is an issue that ties in with our role as educators. For this hyper-local governance to work, we need ‘leaders’ who are trained in managing the risks that these systems generate. Our work has only just begun!

 

 


References

(1) Anthropic bloque l’utilisation de son IA par le Pentagone pour la « surveillance intérieure de masse » et les « armes complètement autonomes », 27 February 2026. Le Monde - https://www.lemonde.fr/economie/article/2026/02/27/anthropic-refuse-de-ceder-a-l-ultimatum-du-pentagone-et-bloque-l-utilisation-de-son-ia-pour-la-surveillance-de-masse-et-les-armes-autonomes_6668445_3234.html

(2) IA agentique : une technologie qui soulève des interrogations, March 2026 - https://www.vie-publique.fr/en-bref/302417-ia-agentique-une-technologie-qui-suscite-des-questions

(3) IA en entreprise : voici 5 phrases qu’Arnaud Billion nous invite à ne plus prononcer (et pourquoi !), March 2026. EDHEC Vox - https://www.edhec.edu/fr/recherche-et-faculte/edhec-vox/ia-en-entreprise-voici-5-phrases-a-ne-plus-prononcer-et-pourquoi-arnaud-billion-intelligence-artificielle

(4) Explication : les tokens, le langage et la monnaie de l’IA, March 2025 - https://blogs.nvidia.fr/explication-des-jetons-le-langage-et-la-monnaie-de-lia/

(5) Enshittification: Why Everything Suddenly Got Worse and What To Do About It - Cory Doctorow, Oct. 2025. Verso publishing - https://www.versobooks.com/products/3341-enshittification

(6) IA : Techniques, éthique, juridique et bonnes pratiques, January 2026. Afnor éditions. Arnaud Billion, Denis Yrieix, Sabrina Hammoudi, Yannis Martin, Anaëlle Martin - https://www.boutique.afnor.org/fr-fr/livre/ia-techniques-ethique-juridique-et-bonnes-pratiques/fa214015/446147

(7) Your Brain on ChatGPT: Accumulation of Cognitive Debt when Using an AI Assistant for Essay Writing Task, June 2025. Nataliya Kosmyna, Eugene Hauptmann, Ye Tong Yuan, Jessica Situ, Xian-Hao Liao, Ashly Vivian Beresnitzky, Iris Braunstein, Pattie Maes - https://arxiv.org/abs/2506.08872

(8) Michelle Sisto : « L'usage de l'intelligence artificielle doit rester centré sur l’humain et guidé par des valeurs », September 2025. EDHEC Vox - https://www.edhec.edu/fr/recherche-et-faculte/edhec-vox/michelle-sisto-centre-edhec-ia-usage-intelligence-artificielle-doit-rester-centre-humain-guide-valeurs

(9) ChatGPT est-il en train de casser le cerveau humain ? 5 points sur le preprint du MIT sur les effets de l’IA, June 2025. Le Grand Continent - https://legrandcontinent.eu/fr/2025/06/19/chatgpt-cerveau-etude-mit/

(10) L’EDHEC Artificial Intelligence Centre a lancé son Bootcamp “Me, Myself and AI”, January 2026. EDHEC.edu - https://www.edhec.edu/fr/news/edhec-artificial-intelligence-centre-bootcamp-ia

(11) IA générative : le risque de l’atrophie cognitive, July 2025. Polytechnique Insights - https://www.polytechnique-insights.com/tribunes/neurosciences/ia-generative-le-risque-de-latrophie-cognitive/

(12) L'Homme diminué par l'IA - Marius Bertolucci, October 2023. Editions Hermann - https://www.editions-hermann.fr/livre/l-homme-diminue-par-l-ia-marius-bertolucci

(13) The Anxious Generation: How the Great Rewiring of Childhood is Causing an Epidemic of Mental Illness - Jonathan Haidt, March 2024. Allen Lane Ed. - https://jonathanhaidt.com/anxious-generation/

(14) Labor market impacts of AI: A new measure and early evidence, March 2026. Anthropic - https://www.anthropic.com/research/labor-market-impacts

(15) Axelle Arquié, économiste : « Une catastrophe sociale causée par l’IA fait partie des scénarios possibles », March 2026. Le Monde - https://www.lemonde.fr/idees/article/2026/03/01/axelle-arquie-economiste-une-catastrophe-sociale-causee-par-l-ia-fait-partie-des-scenarios-possibles_6668758_3232.html

(16) When Using AI Leads to “Brain Fry”, March 2026, Julie Bedard, Matthew Kropp, Megan Hsu, Olivia T. Karaman, Jason Hawes and Gabriella Rosen Kellerman. HBR - https://hbr.org/2026/03/when-using-ai-leads-to-brain-fry
