The difficult task of controlling content on social media


The EU’s recently approved Digital Services Act imposes obligations on the owners of social networks and other platforms, making them responsible for controlling content and fighting disinformation and hate speech. In theory it is a good idea; putting it into practice is much more complicated.

When it became clear that Elon Musk would indeed take over Twitter, and that with his “free speech first” talk the social network seemed set to become even more of a jungle, the EU’s Commissioner for the Internal Market, Thierry Breton, was quick to tweet that “in Europe, the bird will fly by our rules.” A few days ago, the vice-president of the European Commission, Margrethe Vestager, repeated exactly the same idea: Twitter will have to comply with European legislation to continue operating on the continent.

What exactly is that European legislation that the richest man in the world will have to comply with? In recent weeks, the Commission and the European Parliament have taken decisive steps to adapt laws and regulations to a digital society that changes almost daily. Specifically, in October the Commission gave its final approval to the Digital Services Act regulation, which modifies the now obsolete Electronic Commerce Directive of 2000. The regulation is mandatory for member states and complements the also recently approved Digital Markets Act. The plan is for all this new legislation to be in full force by January 2024.


Against digital threats

The purpose of the Digital Services Act (DSA) is to combat phenomena that may pose a threat to society, such as disinformation and fake news, or the dissemination of illegal content. The new regulation will therefore affect a wide range of service providers: websites, e-commerce companies, social networks, online games, cloud storage, streaming platforms, applications and instant messaging services. To this end, it imposes new obligations on these companies. For example:

— The European Commission and the authorities of the Member States will have access to the algorithms each service provider uses.

— Platforms must act to remove illegal content, and systems must be put in place so that such content can be reported. The regulation distinguishes between illegal content –terrorist propaganda, incitement to hatred or copyright-infringing content, for example– and harmful content, which is more difficult to define because it depends on the socio-cultural traditions and legal regulations of each country.

It also establishes that any provider that reaches more than 45 million users in the EU will be subject to specific regulation, as their reach makes them a potential “systemic risk” to society if the established rules are violated. This group includes, for example, Google, Meta, Amazon, Apple, Microsoft, Twitter, Spotify and all the major social networks.

The DSA considers that the “firepower” of these large service providers entails a series of “systemic risks”, which it divides into three groups:

— Risks due to the dissemination of illegal content: child pornography, incitement to hatred or sale of prohibited products and services.

— Risks to the exercise of fundamental rights, such as freedom of expression and information, the right to life, and the right to non-discrimination.

— Deliberate manipulation in realms that affect health, electoral processes, public safety or the protection of minors. For example, by creating fake accounts, using bots, or other automated behavior.


Gauging the risks

This is the main contribution of the recently approved regulation. As Francis Donnat, a partner at Baker McKenzie in France, and Winston Maxwell, director of law and digital studies at the Polytechnic Institute of Paris, recently wrote in Le Monde, it requires “that these large platforms (…) carry out an annual analysis of the ‘systemic risks’ derived from the design or operation of their services –either in terms of the risks associated with the dissemination of illegal content, or in terms of threats to the exercise of fundamental rights– and come up with algorithmic and human-resource solutions to mitigate these risks.”

In short, the Commission intends to hold tech companies responsible for monitoring and controlling the content, products and services offered on their platforms; they will therefore answer in court if the law is violated. The immediate consequence of this legislative change is that these large companies will take great care to keep their channels clean and will therefore exercise greater control over them. This should be very positive when it comes to curbing the distribution of what the regulation calls “illegal content.”

Any sensible person would agree that child pornography or content that discriminates against people based on their religion, race or sex should not be spread on the networks under any circumstances, and from now on the companies behind these networks will be responsible if this happens. But what about the so-called “harmful content,” which isn’t illegal? For example, in Spain, in recent months there has been an intense political and social debate around the so-called Trans Law, regarding the alleged right of minors to change their sex without any criteria other than their will. Is content advocating for this new right no longer “harmful” just because it’s now a law? And, on the other hand, is any content that opposes that legislation going to be considered “harmful,” meaning the platforms have an obligation to remove it from the country’s conversation?

There are other kinds of examples that could call the legislation into question as well: policies that are perhaps well thought out, but whose ramifications create another problem altogether. What happens to freedom of expression if, for example, Apple or Google decides to remove a social network’s app from its store simply to avoid legal liability?

State supervision

In addition to passing the responsibility for content control on to the platforms, the DSA requires each Member State to designate one or more authorities to oversee the application and supervision of the Regulation. One of these authorities will be designated Digital Services Coordinator, a role that must be independent of the government so that its tasks can be carried out in an impartial and transparent manner.

The Regulation grants broad investigative, enforcement and sanctioning powers to the Digital Services Coordinators, so this authority will be a key element in each country’s regulation of digital services. In Spain, many are of the opinion that the National Commission for Markets and Competition, one of the few official bodies that is truly independent from the government, should take on this role. But adding such powers to that Commission would make it too tempting a prize for any interventionist Executive.

The European Commission’s aim of preventing the spread of socially harmful messages through today’s communication channels is laudable, but, at the same time, it shows that it is very difficult –not to mention dangerous for freedom of expression– to fence in an almost infinite playing field, which is what is now being attempted online.

Translated from Spanish by Lucia K. Maher
