Call for Papers on "Misinformation, Manipulation and Abuse on Social Media in the Era of COVID-19"

Malicious and abusive behaviors on social media have elicited massive concern for the negative repercussions that online activity can have on personal and collective life. The spread of false information, the rise of AI-manipulated multimedia, and the emergence of various forms of harmful content are just a few of the perils that social media users can, even unknowingly, encounter in the online ecosystem. In times of crisis, these issues only become more pressing, with increased threats for everyday social media users. The ongoing COVID-19 pandemic is no exception: dramatically increased information needs make it an ideal setting for the spread of a multitude of low-credibility and unverified claims, and for malicious actors aiming to take advantage of the resulting chaos. In such a high-stakes scenario, the downstream effects of misinformation exposure or information landscape manipulation can manifest in attitudes and behaviors with potentially serious public health consequences.

By affecting the very fabric of our socio-technical systems, these problems are intrinsically interdisciplinary and require joint efforts to investigate and address both the technical aspects (e.g., how to thwart automated accounts and the spread of low-quality information, how to develop algorithms for detecting deception, automation, and manipulation) and the socio-cultural ones (e.g., why people believe in and share false news, how interference campaigns evolve over time). Fortunately, for COVID-19, several open datasets were promptly made available to foster research on these matters (Chen et al., 2020). Such assets can bootstrap the first wave of studies on the interplay between a global pandemic and online deception, manipulation, and automation.

The purpose of this special issue is to collect contributions proposing models, methods, empirical findings, and/or intervention strategies to investigate and tackle the abuse of social media along several dimensions that include (but are not limited to) infodemics, misinformation, automation, online harassment, false information, and conspiracy theories about the COVID-19 outbreak. In particular, to defend the integrity of online discussions on social media, we aim to stimulate researchers' contributions along two interlaced lines. On one hand, we look for contributions that enhance the understanding of how health misinformation spreads, of the social media actors that play a pivotal part in the diffusion of inaccurate information, and of the impact of their interactions with organic users. On the other hand, we seek to stimulate research on the downstream effects of misinformation and manipulation, on users' perception of and reaction to the wave of questionable information they are exposed to, and on possible strategies to curb the spread of false narratives.

Submission Deadline: July 15, 2020 ***CLOSED***
Publication: October 2020 issue


Please prepare your paper following the journal's submission guidelines. All papers must be submitted to the journal's submission system.

Please select “Yes” for the question “Does this manuscript belong to a special feature?” and then select the special feature “S.I. : Misinformation, Manipulation and Abuse in the Era of COVID-19” during the submission stage.

***Unfortunately, it will take a while for the submission system to be ready to receive submissions for this special issue topic. Please refrain from submitting your paper until further notice is posted on this page. Please note that the system can receive regular paper submissions (i.e., papers not intended for this special issue) as usual.***

Topic and Themes

In this special issue, we address problems related to social media abuse, misbehavior, and
misinformation about COVID-19 along several dimensions that include (but are not limited to):

  • infodemics, misinformation/disinformation diffusion;
  • false news detection and characterization;
  • malicious entities’ (e.g., bots, trolls, cyborgs) activity;
  • coordinated inauthentic behaviors and orchestrated campaigns;
  • information operations;
  • online harassment, abusive behavior, cyberbullying and hate speech;
  • users’ perception of and response to misinformation, and the effects of misinformation exposure.

Guest Editors

  • Emilio Ferrara, University of Southern California, United States
  • Stefano Cresci, Institute for Informatics and Telematics, CNR (IIT-CNR), Italy
  • Luca Luceri, University of Applied Sciences and Arts of Southern Switzerland, Switzerland