EDHEC Vox | Research

AI in business: here are 5 phrases that Arnaud Billion urges us to stop saying (and why!)

Arnaud Billion, Associate Professor

Following the publication of his latest book, "AI: Techniques, Ethics, Legal Issues and Best Practices" (Afnor, 2026), co-authored with four other researchers (1), Arnaud Billion – Associate Professor at EDHEC and researcher affiliated with the EDHEC Augmented Law Institute – revisits five common beliefs about AI in business and explains why we should question them.

10 Mar 2026

 

‘AI, it's clear to me, is just a tool.’

Arnaud Billion: Many of us repeat this phrase, as if to appease a latent feeling of insecurity. But this insistence on usefulness (2) betrays a profound ambiguity: am I really the user of AI?

In our book, a rational comparison with traditional tools reveals a fundamental difference. A real tool, such as a hammer, does not solicit me, is not ‘designed’ to make me want to hammer nails, does not collect my information and does not synchronise with other hammers to guide my movements. AI, on the other hand, works in the opposite way: it transforms our actions, requests and data into information that can be exploited by its owner, who alone derives the final value from it.

In this context, the question of who the real user is arises abruptly (3): isn't it, ultimately, the owner of the AI company? If so, we are no longer the handyman, but the hammer or the nail. We become part of a system in which we help to enrich an asset that does not belong to us, while maintaining the illusion that the tool is under our control.

In our view, it is no longer enough to hope that AI will ‘remain a tool’. It is now imperative to put in place the objective conditions for it to be one: radical transparency and effective control of data flows, so that this technology truly serves those who use it rather than those who own it.

 

‘AI agents are amazing, they really are the new frontier.’

Arnaud Billion: AI agents are on the rise (4). In just a few quick steps, they can be assembled, connected to APIs, admired for their performance, and already referred to as ‘business software’. The effect is spectacular. But behind this apparent ease, we invite you to question these poorly structured programmes disguised as innovation. In reality, they are more like poorly designed ‘electronic spaghetti dishes’. Why?

In software architecture, mastery of inputs and outputs (I/O) is fundamental: clear formats, predictable behaviour, reproducible tests. However, most agents rely on natural language, probabilistic decisions and a stack of heterogeneous building blocks. It works... until it doesn't. The I/O is vague, the logic is implicit and maintainability is fragile (5). The result is the famous spaghetti, brilliant in demo mode but dramatically uncertain in production.
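The contrast between an explicit I/O contract and a free-form agent reply can be sketched in a few lines. This is a minimal illustration, not a recommendation from the book: the scenario, field names and function are hypothetical, and the point is simply that a declared contract rejects unpredictable output instead of letting it flow downstream.

```python
import json

# A disciplined interface declares its I/O contract explicitly:
# every expected field and its type. (Fields here are hypothetical.)
EXPECTED_FIELDS = {"invoice_id": str, "amount_eur": float, "approved": bool}

def validate_agent_output(raw: str) -> dict:
    """Reject any agent reply that does not match the declared contract."""
    data = json.loads(raw)  # raises a ValueError on non-JSON chatter
    for field, expected_type in EXPECTED_FIELDS.items():
        if field not in data:
            raise ValueError(f"missing field: {field}")
        if not isinstance(data[field], expected_type):
            raise ValueError(f"wrong type for field: {field}")
    if set(data) - set(EXPECTED_FIELDS):
        raise ValueError("unexpected extra fields")
    return data

# A well-formed, contract-conforming reply passes:
ok = validate_agent_output(
    '{"invoice_id": "F-123", "amount_eur": 99.5, "approved": true}'
)

# Free-form natural language -- the typical raw agent reply -- is rejected
# instead of silently feeding the next system in the chain:
try:
    validate_agent_output("Sure! I approved invoice F-123 for you.")
except ValueError:
    print("rejected non-conforming output")
```

Nothing here is specific to AI: it is ordinary engineering hygiene. The point of the spaghetti metaphor is precisely that many agent stacks skip this step.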

The problem is not the agent itself, but the belief that probabilistic orchestration can replace engineering. Without this architectural rigour, an agent is not a revolution: it is fuzzy code with a pretty interface. At this stage, their use should remain confined to the demonstration phase, where the illusion of fluidity still takes precedence over the robustness of the system.

 

‘What happens to my data? I don't care, AI helps me so much!’

Arnaud Billion: This is undoubtedly the little phrase that we don't always dare to say out loud, but which floats around in many organisations. We are promised a more flexible, responsive, economical and relevant information system. The hope is appealing: less friction, greater efficiency, better decision-making.

On the contrary, we believe that the question of what information is transmitted is essential (6). It is beginning to be asked: where does the data actually go? Is it sent abroad, stored, reused, cross-referenced? Yet this is only a taste of what is at stake. The more fundamental problem is this: is my company's AI a tool... or a capture interface?

Because beyond the tool presented as high-performance, AI may be nothing more than a collection terminal for the model's publisher. We must ask the question head-on: is that publisher just a data broker, whose business model consists of reselling your prompts to the highest bidder?

Behind the assistant is a publisher whose value is based on analysing and capturing everything that passes through the prompts. By interacting with the model, teams contribute to enriching an asset that does not belong to them. By prompting on a daily basis, could your employees be unwittingly working for another entity?

 

‘I really feel a kind of exchange with AI, the conditions of which I control.’

Arnaud Billion: This impression of mastery clearly needs to be qualified. When AI offers a summary table or suggests exploring another question, is it engaging in genuine ideation, or deploying a manipulative technique for building loyalty?

We believe we need to question the real intention behind the interface: why does it systematically congratulate me on the relevance of my questions? Why does it systematically offer to do more than I ask for? This is not simple courtesy, it is an acquisition strategy. Many AI tools are in fact saturated with dark patterns or subliminal techniques (7).

In an attention economy, at a stage where innovations are spreading so rapidly, the challenge is to acquire and retain as many users as possible. UX designers are well aware of this, and their bible remains the book Hooked (8), a veritable guide to capturing attention. The goal is no longer just to assist, but to create behavioural anchoring.

It is therefore necessary to consider the objective conditions for truly non-parasitic AI, which would prioritise useful application rather than attention enslavement (9). Without this vigilance, we move from a productivity tool to a capture technology that parasitises our intellectual autonomy.

 

‘Yes, I saw that AI consumes a lot of energy, but we'll manage, won't we?’

Arnaud Billion: The idea is appealing: ever-smaller, lighter models, optimised algorithms, an energy footprint that would become negligible once the initial development costs have been absorbed. On paper, it is attractive. In practice, from a software perspective, nothing could be further from the truth: it is the illusion of frugality (10).

Algorithm optimisation, however brilliant, has never saved a single gram of CO2 outside laboratory conditions. The technical reality is quite different: once AI is deployed in a ‘production’ environment, i.e. connected to other software and exposed to constantly increasing load, it keeps producing and transforming 1s and 0s, continuously and without limit.

Whether large or small, slow or fast, optimised or not, AI in operation responds to a logic of constant energy flow. Laboratory optimisation does not change this physical reality of the computer: the actual operation of a system remains massively energy-intensive as soon as it is called upon at scale. Any effort to make AI greener must necessarily integrate this systemic dimension and stop deluding itself with the illusion of frugal models.
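The arithmetic behind this systemic point can be made explicit. The figures below are entirely hypothetical, chosen only to illustrate the mechanism: a per-query optimisation, however real, is easily swamped by the growth in call volume that deployment at scale brings.

```python
# Purely illustrative arithmetic -- every figure here is an assumption,
# not a measurement: the point is the mechanism, not the numbers.
energy_per_query_wh = 3.0        # energy per call before optimisation (assumed)
optimised_per_query_wh = 1.5     # after a 50% "lab" optimisation (assumed)

queries_before = 1_000_000       # daily calls before wide deployment (assumed)
queries_after = 10_000_000       # daily calls once wired into production (assumed)

total_before_kwh = energy_per_query_wh * queries_before / 1000
total_after_kwh = optimised_per_query_wh * queries_after / 1000

print(total_before_kwh)  # 3000.0
print(total_after_kwh)   # 15000.0 -- five times more, despite the optimisation
```

Under these assumed figures, halving the cost per query while volume grows tenfold multiplies total consumption by five: the rebound effect the paragraph describes.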
 

References

(1) IA : Techniques, éthique, juridique et bonnes pratiques (2026) Billion Arnaud ; Denis Yrieix ; Hammoudi Sabrina ; Martin Yannis ; Martin Anaëlle. AFNOR Éditions - https://nouveautes-editeurs.bnf.fr/accueil?id_declaration=10000001273674&titre_livre=IA_:_Techniques,_%C3%A9thique,_juridique_et_bonnes_pratiques

(2) Yann Le Cun (Meta) : « L’IA est un outil comme un autre », mai 2024, Stratégies - https://www.strategies.fr/actualites/culture-tech/LQ3244805C/yann-le-cun-meta-lia-est-un-outil-comme-un-autre.html

(3) Les IA ne sont pas des outils ! Et c’est important de le comprendre, février 2025, L'ADN - https://www.ladn.eu/tech-a-suivre/les-ia-ne-sont-pas-des-outils-et-cest-important-de-le-comprendre/

(4) Petits agents, grandes questions : que sont les agents IA ? - https://www.conseil-ia-numerique.fr/nos-travaux/petits-agents-grandes-questions-que-sont-les-agents-ia

(5) Quand les agents d’IA échappent au contrôle : comprendre les comportements émergents (2026) Management & Data science - https://management-datascience.org/articles/67172/

(6) CNIL, Intelligence artificielle - https://www.cnil.fr/fr/technologies/intelligence-artificielle-ia

(7) Quand l’IA nous manipule : comment réguler les pratiques qui malmènent notre libre arbitre ? Janvier 2025, The Conversation - https://theconversation.com/quand-lia-nous-manipule-comment-reguler-les-pratiques-qui-malmenent-notre-libre-arbitre-246930

(8) Hooked - Comment créer un produit ou un service qui ancre des habitudes (2018), Nir Eyal, Eyrolles - https://www.eyrolles.com/Entreprise/Livre/hooked-comment-creer-un-produit-ou-un-service-qui-ancre-des-habitudes-9782212570939/

(9) Comment l’IA altère-t-elle notre pensée ? Entretien croisé entre une philosophe et un entrepreneur, Oct. 2025, Le Monde - https://www.lemonde.fr/idees/article/2025/10/04/comment-l-ia-altere-t-elle-notre-pensee-entretien-croise-entre-une-philosophe-et-un-entrepreneur_6644371_3232.html

(10) Comment l’IA dévore la planète. Déc. 2025, Le Monde - https://www.lemonde.fr/economie/article/2025/12/26/comment-l-ia-devore-la-planete_6659449_3234.html

 
