"Systemic social media regulation focuses on adjusting the architecture of the platform to encourage discursive excellence"

Interview with Frank Fagan, professor of Law and Research at LegalEdhec

Social media have become a common source of information for citizens and companies alike. Despite the positive impact these media have on society, their architecture permits the proliferation of fake news, hate speech, and electoral interference. At what point do these effects become a serious threat to democracy? If lies spread faster than truth on social media, should these platforms be regulated?

Frank Fagan, Professor of Law and researcher at LegalEDHEC, is publishing a paper entitled Systemic Social Media Regulation in the well-regarded Duke Law & Technology Review. The paper proposes that social media platforms should be self-regulated to the extent that the interests of the platform and the government overlap and converge. Where interests widely diverge, platforms should be regulated by modifying their structure and their use of algorithms in a manner conducive to discursive excellence. By focusing on platform architecture, lawmakers avoid overtly restricting speech that may actually be lawful.


Elon Musk recently said that “there should be regulations on social media to the degree that it negatively affects the public good. We can't have like willy-nilly proliferation of fake news, that's crazy.”

Do you agree with this comment? Why should social media be regulated?

Musk is right that regulation should be weighed in terms of costs and benefits. If social media causes negative impacts, society will benefit from regulation so long as the regulation does more good than harm. But there are other avenues. Consider that data breaches such as Cambridge Analytica often result from negligent handling of data and violations of end user agreements. Basic class action lawsuits, which are available to US plaintiffs, can pressure platforms to take more care. Privacy is taken very seriously in Europe, and the GDPR provides some protection. The US Constitution and various European national laws forbid unlawful speech such as incitement to violence. The concern with fake news is more complicated. Suppression of outside interference in national elections is consistent with our laws, but suppression of lawful content, even if it is fake news, implicates our core political values. Direct regulation of borderline lawful content can become politicized over time, even if it begins with good intentions. Systemic social media regulation focuses on adjusting the architecture of the platform to encourage discursive excellence and to avoid a direct confrontation with our longstanding norms. If platforms get this right, governments need not get involved. But if they fail, the converse is true.


Last April, Mark Zuckerberg appeared before Congress to address concerns such as the proliferation of hate speech, and he announced that Facebook would have AI tools to automatically flag and remove hate speech before it appears within five to ten years.

Is the suppression of bad content the most efficient way to avoid it?

That remains unclear. In the article, I describe the problem that Reddit had with trolls. Reddit would ban them, but the trolls would change their usernames and continue to troll. Reddit finally understood that traditional banning wouldn't work: humans (and their bots) find creative ways to get around rules. So Reddit started a practice called “shadowbanning.” The troll is banned but allowed to continue posting messages. The troll sees the messages appear on the forum, but other Reddit users cannot see them. The troll is essentially shouting at an empty forum while thinking that everyone is listening. No one responds to the invisible messages, and the troll eventually goes away. We might imagine that AI tools can automatically detect and remove unwanted speech, but we cannot foresee all of the consequences. Systemic approaches, such as funnelling questionable content creators to areas of the platform where they have less impact or less incentive to troll, may be superior to outright suppression. As a private entity, Facebook is free, within limits, to suppress hate speech or any other content on its network, though some scholars have suggested that Facebook should be considered a quasi-governmental entity held to constitutional standards, an idea raised by at least one US Supreme Court justice. If that were the case, then Facebook may not be allowed to suppress hate speech (at least in the US, where it is legal so long as it does not incite imminent lawless action), and Facebook would certainly be better off deploying a systemic approach.
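The shadowban mechanism described here can be sketched in a few lines of code. This is a minimal, hypothetical illustration of the idea, not Reddit's actual implementation; the `Forum` class and its methods are invented for this example:

```python
# Minimal sketch of shadowbanning: a shadowbanned user's posts are stored
# normally, but filtered out of every feed except the author's own.
class Forum:
    def __init__(self):
        self.posts = []            # list of (author, message) tuples
        self.shadowbanned = set()  # usernames hidden from everyone else

    def shadowban(self, user):
        self.shadowbanned.add(user)

    def post(self, author, message):
        # The post is always accepted, so the banned user sees no error.
        self.posts.append((author, message))

    def feed(self, viewer):
        # A viewer sees all normal posts, plus their own posts even if
        # they are shadowbanned -- so the ban is invisible to the troll.
        return [(a, m) for a, m in self.posts
                if a not in self.shadowbanned or a == viewer]
```

Usage: after `forum.shadowban("troll")`, a message posted by `"troll"` still appears in the troll's own feed, but every other user's feed omits it. The key design choice is that the write path is unchanged; only the read path filters, which is what makes the ban hard for the troll to detect.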

How can social psychology help us to better understand the bad effects of social media and think about the right approach, regulatory or otherwise? In your paper, you mentioned the example of climate change. Could you elaborate?

This question raises an important point that I think is too often overlooked. Social psychologists are beginning to find that people adopt many of their political views on the basis of a social identity that they construct for themselves. Climate change offers a great example. While not so common in Europe, many skeptics care little about deliberating climate change on the basis of scientific evidence; they evaluate an argument on the basis of how their group evaluates it. Paul Bloom colorfully makes the point by saying that it is more accurate to view skeptics as football supporters, say for Juventus, screaming “Yay, our team!” and “Boo, Milan!”, or the other way around. If this is true, which I think it very often is and in meaningful ways, then the fake news consumption phenomenon has more to do with searching for or reinforcing a group identity and less to do with deliberation over factual and scientific claims. Today, people are more likely to consume news as entertainment or a hobby. This means that suppressing fake news will likely reduce electoral interference only at the margins. Effective reform must be aimed at systemically adjusting the architecture of the forum and nudging people toward platform areas where discursive excellence thrives.

Zuckerberg and some other tech leaders now seem to expect and accept the idea of regulation, but it remains unlikely that the US Congress will act. What do you think? To what extent should European lawmakers be involved?

The most serious US proposal to date essentially applies existing rules for broadcast and print political advertising to platforms. This could help, but probably only in a limited way, for the social psychological reasons given above. At this point, platforms themselves should be leading the charge, searching for ways to align their interests with societal interests. If they fail, then Congress should step in. With respect to Europe, the same logic applies. To the extent that platform and government interests converge, self-regulation makes sense. To the extent that interests diverge, regulation is desirable when its benefits exceed its costs. In my view, an effective way to limit costs and increase benefits is not to target speech directly and potentially erode important norms, but to implement a systemic approach that configures platform architecture in ways that reduce unwanted speech and encourage discursive excellence.

