In this episode of the Liberal Europe Podcast, Leszek Jażdżewski (Fundacja Liberté!) welcomes Maciej Kuziemski, Data Program Adviser at the Ditchley Foundation and an independent expert at the European Commission. They talk about the recent takeover of Twitter by Elon Musk, its implications for liberal democracies, and the future of social media.
Leszek Jażdżewski: Why did Elon Musk decide to buy Twitter?
Maciej Kuziemski: It is a 44-billion-dollar question. I would not know – and, according to the experts, it is not a very sound decision business-wise. What we do know, however, is that Elon Musk must have some sort of a plan for Twitter. I would not assume that he would make such a decision on a whim. In the months to come, we will probably find out what exactly the plan is. I would assume it is something more than just adding an “edit” button on Twitter.
Elon Musk is said to be an orthodox free speech supporter. Do you think we will see more of Donald Trump on Twitter from now on? Meanwhile, there is talk of introducing more control and removing fake profiles from this social media platform. What is in store for Twitter after the takeover?
There are some early signs. For a long time, Elon Musk has been very critical of what he calls censorship on social media platforms. He calls himself a “free speech absolutist” – what he means by that remains to be seen. He also has a history of suppressing free expression – for instance, he famously engaged legal offices to take down unfavorable photos of himself from the Internet. In the past, he also blocked many of his opponents and critics on Twitter.
He has made comments that indicate the direction he wants to take with the platform. He tweeted, and I quote, “By ‘free speech,’ I simply mean that which matches the law. I am against censorship that goes far beyond the law. If people want less free speech, they will ask government to pass laws to that effect. Therefore, going beyond the law is contrary to the will of the people.” Now, this is obviously not a new claim in the content moderation debate, but it has been challenged on two grounds: first, that there is harmful content which is, technically, legal; and second, that it is very difficult for law and regulation to keep pace with the fast-changing ways in which online speech is mediated by platforms.
This approach is rather problematic. There already exist platforms that have adopted a very similar free-speech-maximalist stance – everything that is legal stays on the platform. Gab, for example, has done just that, and we all know where it ended up: it has contributed significantly to radicalization and the spread of misinformation, and in some extreme cases has even led to offline violence.
Who, in your opinion, should be in charge of content moderation on social media? Who shall decide to take down content and on what grounds? What would be the optimal solution?
Again, you are asking a very complex question, and there are no simple answers to it. What I can do is point to a general direction I think would be quite useful. The short answer to your previous question – what do we know about Elon Musk’s plan for Twitter? – was that we do not know. This in itself is quite problematic because, after the Twitter buy-out, we are now in a situation where four of the most significant communication platforms globally are in the hands of two billionaires – Elon Musk and Mark Zuckerberg.
This is hardly a proud moment for anyone concerned with free speech, because it means that life-changing decisions about the boundaries of free speech and about what kind of content we allow on platforms are made by two individuals without any significant scrutiny.
To answer your second question regarding the rules for content moderation: the solutions are to be found in digital public infrastructure models – not unlike what the BBC used to be for television. We should look for solutions in models that are transparent, accountable, and have recourse mechanisms in place. Meanwhile, the takeover of Twitter by Elon Musk is taking us further away from such solutions.
Do you believe that, until now, being publicly listed gave Twitter more accountability and less concentration of power? It seems that the platform is already facing various problems – fake accounts, fake news, the spread of misinformation and lies. Even the decision to suspend Donald Trump’s account was very much disputed. Could having one individual held responsible be a step forward – especially if Elon Musk subscribes to the rules that he might impose? Or is it really a step back?
It is rather surprising to hear a question framed like this from a liberal, but I would say that absolute power corrupts even further. Of course, we could assume that Elon Musk might try to amend certain misdeeds or flaws of Twitter – the platform was not perfect. However, some degree of democratic scrutiny is something we should expect from major communication platforms.
Think of it this way: as we know, Elon Musk is a very entrepreneurial individual with his fingers in many different pies – including significant business interests in China (some of his factories are located there). What do you think would happen if the Chinese authorities asked Elon Musk for access to bits and pieces of data on people critical of the Chinese establishment, in return for letting him keep his operations in China? There is no clear answer to this, but the sole fact that we must consider such questions is quite worrying.
How should we see the responsibility of social media platforms for the content published by users? Are those in charge of the platforms also responsible for what happens on them, as in the blogosphere? Could they be sued for approving or deleting some content? How should we approach these complex social media platforms, which are clearly unique in the history of humankind?
There are no ‘one-size-fits-all’ global solutions. The cybersphere is not immune to cultural differences, which abound. There are countries that try to regulate harmful speech that is technically legal, or try to redraw the boundaries of what is legal and what is not – for instance, through the Network Enforcement Act in Germany or the Online Harms proposals in the United Kingdom. However, these are problematic too: they come from a position of trying to protect vulnerable groups, but at times they come across as limiting the rights of the population – or even as having a semi-authoritarian effect.
There are two major problems with content moderation. First, it is done either by humans or by machines. It is virtually impossible to throw enough humans at the problem to do it well and in real time. Even where it does happen, one may end up in a situation like the famous case of content moderators for YouTube, who were often employed for pennies in developing countries and forced to watch absolutely horrific content (such as beheadings or rapes) so that it would not land on the platform.
The second solution that platforms experiment with is automation – advanced tools to detect what they consider unwelcome on their platforms (be it hate speech, terrorist content, or obscenity and violence), using text or image recognition and the like. The problem with this solution is that it is consistently imprecise – while it may be very good at detecting things that should not be on the platform, it may suppress free speech elsewhere, flagging things that are in fact comedy, irony, and so on.
Overall, there is no good solution. Needless to say, platforms are not neutral. They are not mere conveyor belts for what we post online; there is obviously a huge amount of curation going on. In terms of business models, it is something about which I am least worried. The current business model is essentially weaponizing our attention and trading our data in exchange for ads.
Do you think that social media platforms are compatible with liberal democracy?
I do not think I have an answer to this question. Perhaps not in the way they are designed right now – driven predominantly by profit, with very concentrated ownership and governance. There are a number of concerns one could raise that would deem them incompatible: the lack of democratic oversight, and the secrecy that makes it impossible for civil society to exercise any scrutiny over how our information space is curated. These are some of the main concerns one could have.
Shoshana Zuboff and others have pointed to very dangerous self-sustaining capitalist models based on surveillance and on shaping our decisions before they are even consciously made. One could argue that it is difficult to talk about the freedom to make decisions, freedom of choice, or free will with this kind of manipulation enabled by social media. What should we make of the future of these platforms? Should they be treated as monopolies, which can be a threat to democracy, and broken apart? Or maybe we should approach them in a more traditional manner? Are you optimistic about the EU’s Digital Services Act?
Shoshana Zuboff’s book is a critique, but there is also a very well-established critique of her argumentation. Essentially, what many scholars are concerned with is that when Zuboff writes about surveillance capitalism, she focuses primarily on the former (surveillance) and not so much on the latter (capitalism). A really big question is whether this late-stage global capitalism that we are living in is compatible with liberal democracy. This concentration of power, social ranking, weaponization of our attention, and attempts at monetizing social interactions – is this an optimal space for exercising our rights and for democracy as such?
What I see happening is that there are a lot of things in the regulatory pipeline trying to ensure that we tackle some of these issues – be it, for example, through the Digital Services Act, which you have mentioned, or the AI Act. I would not say that this is just scratching the surface, but some of the most interesting debates to come will try to challenge the concentration of power in a more fundamental way, coming from the position of antitrust and competition law.
Some people still argue that the Western world – the United States, and the European Union – should focus more on trying to be innovative – especially in the areas such as AI, because China and other powers will not wait. That if we do not arm up in time, they will, and we will lose the global competition or will be subjugated to their will and power. Are these warnings warranted? Or is it a false choice between innovation and democracy, privacy, and responsibility?
I do think it is a false dichotomy – a very old-school, 20th-century ‘great-power struggle’ framing, and I am not very fond of it. Of course, innovation is important, but it is even more important to consider to what end. What kind of world are we trying to approximate or perpetuate? Is it about constant growth and global domination, or perhaps about sustaining life on the planet and amending our goal-setting to account for the severity of the climate crisis?
Needless to say, the solutions of today will not solve the problems of tomorrow. However, it seems that we have not yet really tackled the problems of yesterday very well.
Jaron Lanier called on us to delete our social media accounts. Is this the only solution there is for the issues we have been discussing? Or is there another way? What can an individual do to help remedy the situation for themselves and for their immediate social circle? How will the algorithms and social media platforms continue to change our society in the near future?
This is a hard nut to crack, because our infosphere is so entrenched in the dominant social media platforms that any appeal to resist or exit them comes from a position of privilege. It is an enormous privilege to be able to sustain your social networks without the mediation of free social media platforms – not everyone can afford it. These platforms have also become a way of attracting work and making one’s livelihood.
Where I see the most potential is in interoperability – lowering the cost of moving your business, data, and everyday online presence from the predatory platforms that dominate the market today to others that may not yet exist, but which will emerge once the regulatory infrastructure that makes switching easier is in place. It would work similarly to how you can subscribe to any mobile network and still text people on other networks – without any loss of data or convenience. This is what I see happening over the next few years, and it is quite promising.
Unless we challenge the very foundations of the existing business models – which in most cases are about maximizing our time on the platform (regardless of the quality of that time) or maximizing the sharing of content (regardless of its quality) – the future looks rather grim. We have already seen what grave consequences this interconnectedness of content creation and consumption has had for politics, for instance: how it can undermine democratic electoral processes, or shape and give a platform to anti-scientific views in the midst of a global pandemic. Unless we do our utmost to challenge the business models that allowed this, we are in for an unpleasant surprise going forward.
The podcast was recorded on April 27, 2022.
This podcast is produced by the European Liberal Forum in collaboration with Movimento Liberal Social and Fundacja Liberté!, with the financial support of the European Parliament. Neither the European Parliament nor the European Liberal Forum are responsible for the content or for any use that may be made of it.
The podcast, as well as previous episodes, is available on SoundCloud, Apple Podcasts, Stitcher, and Spotify.