Alongside the Covid-19 pandemic, an equally dangerous emergency is underway: the circulation of false information on the internet, and particularly on social media platforms. This pervasive, worldwide phenomenon calls into question the role that digital platforms should play in tackling disinformation and misinformation. The question, in essence, is this: should digital platforms be in charge of addressing the problem of online disinformation?
Last year Avaaz conducted a study monitoring the spread of misleading pandemic-related content on Facebook. The study also analysed and evaluated the effectiveness of the big tech companies' policies to combat this "Infodemic". The results showed serious shortcomings and significant delays in the implementation of the relevant policies. A year later, the organization published a second study that returns to the issue, comparing the earlier data with the current situation to verify whether any improvement has occurred.
In practical terms, intervening to stop allegedly false content from circulating on social media is left to the self-regulation of the platform. Under this internal practice, the information must be submitted to so-called fact checkers, who verify and certify that it is indeed false; in other words, the fake news must be "debunked". This work is carried out by specialized companies that may be partners of the platform or independent from it. Once the fact check has been notified (without such confirmation, the platform does not intervene on its own initiative), Facebook can take measures consisting of either labelling or removing the content.
The Avaaz study examined a sample of 135 pieces of Facebook content in five languages (English, French, Italian, Spanish and Portuguese) identified as fake by independent fact checkers. A number of key findings emerged from this sample. The first is that the most widespread narrative concerns vaccine side effects, including death. Secondly, Facebook appears more reluctant to intervene against fake content in languages other than English. This results either in a later intervention (30 days for non-English content compared to 24 days for English-language false content) or even in no intervention at all, with the effect that European citizens would seem to be more exposed to the risk of misinformation than Americans (or, more precisely, than citizens of English-speaking countries).
Ultimately, the study highlights how Facebook's policies on Covid-19 disinformation and misinformation in Europe should be reviewed and strengthened, especially in times of global crisis. In this regard, it should be noted that the EU Commission, even before the pandemic, was committed to fighting the spread of this phenomenon. In particular, in 2018 a Code of Practice on Disinformation ("the Code") was adopted and signed by online platforms, the leading social networks and advertisers. The Code aims to implement the Commission's 2018 Communication and also identifies a number of best practices. However, Avaaz suggests that this document should be amended to provide, for example, the following measures: i) retroactive notification of users who interacted with fake content; ii) a reduction in the algorithmic amplification of harmful content; and iii) the establishment of an independent monitoring regulator.
It should also be noted that the newly proposed Digital Services Act ("DSA") aims specifically to tackle the spread of fake news online and to increase transparency by imposing a series of obligations on online platforms, including social media. It will therefore be interesting to see how the legislative process evolves. In the meantime, one thing is certain: institutions have not found a vaccine against the Infodemic… so far.