
Quality and User Experience - Call for papers: Crowdsourced and Remote User Studies for Quality of Experience and Usability Research

Laboratory studies are an established and essential tool for Quality of Experience (QoE) and User Experience (UX) research. However, they require well-equipped test rooms built according to international standards, as well as personnel to supervise the test participants. They are therefore often cost- and time-intensive. Furthermore, the number of test candidates is often limited by the scarce laboratory space and the need for participants to be physically present in the test environment. Over the last one and a half years, the COVID-19 pandemic has made laboratory studies even more challenging by increasing the organizational overhead and limiting the pool of potential participants.

Two possibilities to overcome the current situation are crowdsourcing and remote user studies. Microtask crowdsourcing has been used successfully for QoE and UX research in recent years. It offers a faster, cheaper, and more scalable approach than laboratory tests. It may also provide a more ecologically valid environment for the experiment, but comes at the cost of less control compared to a laboratory test. Researchers have developed best practices to quickly collect a large number of subjective ratings from a diverse set of participants and have applied the crowdsourcing approach in many domains of QoE research. Some of the main challenges are ensuring the suitability of the test environment and system, verifying the eligibility of participants, and controlling the reliability of responses in the absence of a test moderator.

Other possibilities that have not drawn much attention in recent years are supervised or unsupervised individual remote test procedures. They can be viewed as a hybrid of crowdsourcing and traditional laboratory settings. While the tests are still conducted online, the participants are not anonymous but pre-registered, and they may even be guided via a chat or video conferencing system. Such an approach can benefit from the broader reach of an online study while mitigating the challenges of a completely anonymous and unsupervised, untrusted setting.

In this context, the topic collection aims to foster contributions on the design and optimization of crowdsourced subjective studies for QoE and UX research. A further motivation is to raise awareness of, and promote new research directions in, crowdsourcing and remote evaluations of QoE and UX. The topic collection encourages researchers to submit work on how to apply best practices from crowdsourcing studies in the context of remote user studies with non-anonymous test takers, and vice versa. Work on shared best practices or experiences summarized across multiple studies is also very welcome.

The topic collection also accepts submissions that extend and enhance previously published conference and workshop papers. In such cases, the submission should clearly state that it is an extended version of a published paper, cite the previously published work, and contain at least 40% new content.

Specific Topics of Interest:

  • Crowdsourcing for quantitative and qualitative subjective studies
    • Novel applications
    • Limitations of current crowdsourcing systems
    • Quality control mechanisms and reliability metrics
    • Large-scale crowdsourcing studies and diversity of participants
    • New subjective test methodologies
  • Reproducibility of results
    • Reproducibility and cross-platform studies
    • Assessment and impact of hidden influence factors
    • Bias estimation and bias reduction
    • Automation and workflows
    • Standardization of crowdsourcing test methods
  • Usability and User Experience of crowdsourcing tasks
    • Optimization of task designs, interfaces, and workflows
    • Relation to result quality and worker motivation
    • Enhancing workers’ UX (e.g., by means of gamification of tasks)
    • Quality of complex crowdsourcing workflows (e.g., combination of AI and Crowds)
  • Interconnection of test concepts
    • Studies comparing results from lab, crowdsourcing, and/or remote testing
    • Adaptations of established test standards to the crowdsourcing or remote testing environments
    • Benefits of testing outside the lab
  • Remote user studies
    • Supervised remote user studies
    • Remote studies with non-anonymous users
    • Best practices for remote user studies
  • Shared best practices
    • Impact of the COVID-19 pandemic on the design and execution of user studies
    • Lessons learned from multiple studies
    • Lessons learned from combined remote, crowdsourcing, and/or lab studies


Submission / Decision Timeline:

Submission portal opens: January 1, 2022
Submission portal closes: January 31, 2023

Expected editorial decision turnaround times:
First-round review decisions: 2 months
Deadline for revision submissions: 6 weeks
Notification of final decisions: 2 months
Camera-ready Manuscript: 1 month


Guest Editors:

Matthias Hirth, TU Ilmenau, Institute for Media Technology, Germany
Email: matthias.hirth@tu-ilmenau.de

Babak Naderi, Technische Universität Berlin, Institute of Software Engineering and Theoretical Computer Science, Germany
Email: babak.naderi@tu-berlin.de

Niall Murray, Athlone Institute of Technology, Dept. of Computer and Software Engineering, Ireland
Email: nmurray@research.ait.ie

Kjell Brunnström, RISE Research Institutes of Sweden AB and Mid Sweden University, Sweden
Email: kjell.brunnstrom@ri.se
 

Authors should follow the Springer Journal manuscript format described at the journal site. Manuscripts should be submitted online through the Editorial Manager system: https://www.editorialmanager.com/quex/default1.aspx
