
B. Fasterling (EDHEC): “To regulate AI, a risk-based approach is probably the most promising way, but only if firms take data ethics seriously”

Björn Fasterling, Professor

Björn Fasterling, Professor at EDHEC, Researcher at the EDHEC Augmented Law Institute and Director of the "Digital Ethics Officer" programme, helps us take a step back by answering highly contemporary questions about data, privacy, ethics and artificial intelligence.

16 Jan 2023

Data management and protection have been key subjects for firms for years now. But are firms really well equipped to follow the law on the one hand, and to act ethically on the other?

It is interesting that you ask about a possible conflict between ethics and the management of legal compliance and data protection policies. Research has shown that without ethically motivated support in organizations, legal compliance management quickly becomes ineffective, sometimes even counterproductive (1). The problem is that data protection officers, lawyers and compliance managers in companies tend to focus on meeting formal legal requirements by implementing processes and carrying out controls, because such measures can be proven and displayed. Establishing ethical cultures in organizations, however, is much more difficult and less visible (2), but also, as research demonstrates, more important (2) (3). If firms don't take data ethics seriously, their legal compliance and data protection will exist on paper but not in practice. It may be possible to impress authorities and the public with a sophisticated compliance program or refined data protection policies for a while, but "paper compliance" is likely to fail eventually and to land a firm in serious trouble.

What are the main tensions that you see between the need for privacy and the business "gold rush" for data?

Gold rushes were often violent and lawless, and the social transformations they brought about came at the cost of environmental destruction. I am therefore wary of gold rushes. Of course, it is legitimate and necessary that businesses exploit the opportunities offered by technology. However, framing this as a tension between privacy and data use is probably misguided. Both using data and guaranteeing privacy move in the same direction, namely towards enhancing our human capabilities. To me, privacy seems to be one of the most misunderstood fundamental rights. It is not about letting people selfishly hide information or withhold "their" data. Privacy is a kind of shield that protects people so that they can exercise other fundamental rights and freedoms, communicate with others, and be creative (4). Therefore, to me, data use without concern for privacy is suicidal for an innovative open society.

But then, how should we navigate between the need for privacy and data protection and the need to solve key issues that are data-intensive, such as establishing a health database to prevent diseases or combat a pandemic?

When trade-offs between data use and privacy become necessary, for example to combat a pandemic, the aim is to optimize both objectives: protecting public health on the one hand, and respecting fundamental rights, including privacy, on the other. There are several ways of doing so, and technology can help here, too. What is important is to acknowledge that neither value is inherently more important than the other: health is not per se more important than freedom, or vice versa. For each trade-off situation there is an optimum at which both values are realized. For example, a health database may need to contain identifiable personal data when we must combat a very contagious and dangerous pandemic. Yet the identifiable part of the database can be kept very restricted. Further, the database's properties and access policies can vary over time, depending on the urgency of the situation.
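To make this concrete, here is a minimal illustrative sketch in Python of such a time-varying access policy. All names, fields, urgency levels and thresholds are hypothetical, not a description of any real system; actual health databases would rely on established pseudonymization and access-control infrastructure.

from dataclasses import dataclass
from enum import Enum

class Urgency(Enum):
    ROUTINE = 1    # ordinary public-health monitoring
    ELEVATED = 2   # localized outbreak
    PANDEMIC = 3   # declared public-health emergency

@dataclass
class Record:
    pseudonym: str       # stable pseudonymous ID, no direct identity
    region: str          # coarse location, safe to share broadly
    diagnosis_code: str  # medical content needed for epidemiology
    identity_ref: str    # pointer into a separate, restricted identity vault

def visible_fields(urgency: Urgency) -> set:
    """Fields a public-health researcher may see at a given urgency level.
    The identifiable part (identity_ref) is exposed only at the highest
    level, and the policy can be tightened again as the urgency recedes."""
    fields = {"region", "diagnosis_code"}            # always pseudonymous
    if urgency in (Urgency.ELEVATED, Urgency.PANDEMIC):
        fields.add("pseudonym")                      # enables case linkage
    if urgency is Urgency.PANDEMIC:
        fields.add("identity_ref")                   # e.g. contact tracing
    return fields

def query(records, urgency: Urgency):
    """Project each record down to what the current policy allows."""
    allowed = visible_fields(urgency)
    return [{f: getattr(r, f) for f in allowed} for r in records]

db = [Record("p-001", "Hauts-de-France", "J10", "vault/42")]
print(query(db, Urgency.ROUTINE))   # pseudonymous aggregate view only
print(query(db, Urgency.PANDEMIC))  # identity reference becomes visible

The design choice worth noting is that identity never sits in the main table: even at the highest urgency level, a query returns only a reference into a separately guarded vault, which is one way of keeping the identifiable part of the database very restricted.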

How do you see the EU’s Artificial Intelligence Act adopted last December? Is that a game-changer?

The Act is not adopted yet. In December the Council of the European Union adopted a common position, its "General Approach" ("Orientation Générale" in French), with a compromise text on the European Commission's proposal of April 2021 (5). The Council's text responds to the heated debate on how to define AI systems and adopts a narrower definition than the Commission's proposal. In any case, the Council upheld the Act's risk-based approach and its measures to enhance transparency around the use of high-risk AI systems. A risk-based approach is probably the most promising way to regulate AI. Nevertheless, the Act will place a heavy internal management burden on firms that use AI systems. In this respect I see a parallel to compliance programs and data protection law, which also build on risk-based regulatory approaches: if some firms do not manage risks with the aim of preventing negative impacts of their activities on fundamental rights, but instead spend their energy on elaborating processes that merely look compliant, the Act will become burdensome not only for the firms that are genuinely trying to do a good job but also for the legal authorities that must enforce it. And we may still fail to adequately protect the fundamental rights that are at peril in the face of AI.
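As a rough illustration of what a risk-based regulatory structure means in practice, the sketch below encodes the tiered logic around which the Commission's April 2021 proposal is built. The tier names follow that proposal, but the mapping of duties is a heavily simplified assumption for illustration, not the Act's actual text.

from enum import Enum, auto

class RiskTier(Enum):
    UNACCEPTABLE = auto()  # prohibited practices, e.g. social scoring
    HIGH = auto()          # e.g. AI used in recruitment or credit scoring
    LIMITED = auto()       # transparency duties, e.g. chatbots
    MINIMAL = auto()       # no new obligations

# Hypothetical, simplified mapping from tier to a firm's main duties; the
# real obligations are far more detailed (risk management systems,
# conformity assessments, logging, human oversight, etc.).
OBLIGATIONS = {
    RiskTier.UNACCEPTABLE: ["do not place on the market"],
    RiskTier.HIGH: ["risk management system", "conformity assessment",
                    "logging and human oversight"],
    RiskTier.LIMITED: ["inform users they are interacting with AI"],
    RiskTier.MINIMAL: [],
}

def duties(tier: RiskTier) -> list:
    """Look up the duties a firm would need to evidence for a given tier."""
    return OBLIGATIONS[tier]

print(duties(RiskTier.HIGH))

The point of the sketch is simply that obligations scale with risk, which is why classifying a system correctly, rather than elaborating processes that merely look compliant, is where the real management work lies.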

So what should be done then to prevent such negative effects?

I am convinced that we first need to appeal to top management and boards to integrate concern for fundamental rights at the strategic level of the firms they lead. Second, firms should develop a very specific skillset, or even make room for a new profession ("métier" in French). This métier would be that of a risk manager who not only knows how to manage risks under a legal framework, but who also has profound insight into the way technology operates in real life and a thorough understanding of the nature and importance of fundamental rights (6). If firms commit in this way, the Act may become a game-changer, one that helps exploit AI's opportunities while maintaining a free democratic society.

(1) Treviño, L.K., Weaver, G.R., Gibson, D.G. and Toffler, B.L. 1999. 'Managing Ethics and Legal Compliance: What Works and What Hurts'. California Management Review 41(2): 131-151, at: https://journals.sagepub.com/doi/10.2307/41165990

(2) Fasterling, B. 2016. 'Criminal Compliance – Les risques d'un droit pénal du risque'. Revue Internationale de Droit Economique 2016/2: 217-237, at: https://www.cairn.info/revue-internationale-de-droit-economique-2016-2-page-217.htm

(3) Hess, D. 2021. 'The Management and Oversight of Human Rights Due Diligence'. American Business Law Journal, available at SSRN: https://ssrn.com/abstract=3956702

(4) Fasterling, B. 2022. 'Privacy as Vulnerability Protection: Optimizing Trade-Offs with Opportunities to Gain Knowledge'. Research in Law and Economics, Vol. 30, at: https://www.emerald.com/insight/content/doi/10.1108/S0193-589520220000030004/full/html

(5) See the press release 'Artificial Intelligence Act: Council calls for promoting safe AI that respects fundamental rights', at: https://www.consilium.europa.eu/en/press/press-releases/2022/12/06/artificial-intelligence-act-council-calls-for-promoting-safe-ai-that-respects-fundamental-rights/

(6) In a very practical way, the EDHEC Augmented Law Institute aims to fill that gap by offering a new training programme called "Digital Ethics Officer", at: https://alll.legal/formation-digital-ethics-officer-edhec/

 

Photo by Mike Kononov on Unsplash
