Call for Papers: Special Issue on Performance Evaluation in Computer Vision

Editors:

Daniel Scharstein, Middlebury College, USA
Angela Dai, TU München, Germany
Daniel Kondermann, Quality Match GmbH, Germany
Carsten Rother, HCI Heidelberg, Germany
Torsten Sattler, Chalmers University of Technology, Sweden
Konrad Schindler, ETH Zürich, Switzerland

As the field of computer vision grows and matures, performance evaluation has become essential.  Most sub-areas of computer vision now have established datasets and benchmarks that allow a quantitative evaluation and comparison of current methods.  New benchmarks also often stimulate research into the particular challenges presented by the data. Conversely, important areas lacking high-quality datasets and benchmarks might not receive adequate attention from researchers.

The deep learning revolution has made datasets and performance evaluation even more important.  Learning-based methods require not only large, well-designed training datasets but also well-defined loss functions, which are usually designed to optimize established performance measures. This creates an implicit bias based on the availability of datasets and the definition of performance metrics.

In this special issue we seek all types of contributions broadly relating to performance evaluation in computer vision.  This includes:

  • Manuscripts introducing new performance evaluations or benchmarks (or extended journal-length versions of recent such publications), in particular in areas where quantitative evaluation is challenging or subjective, such as image generation with GANs
  • Manuscripts surveying, evaluating, or comparing existing benchmarks or datasets, for instance measuring how well performance on one benchmark predicts performance on other benchmarks
  • Manuscripts addressing pros and cons of performance evaluations and datasets, as well as recommendations for best practices

We specifically encourage submissions that touch on one or more of the following topics:

  • Dataset quality metrics:
    • Understanding bias in datasets and how it affects either benchmarking or network performance
    • The impact of dataset accuracy on method performance (How accurate do datasets need to be to yield useful insights? Can we save time and cost?)
    • Methods to identify inconsistencies in datasets or vision methods (e.g., self-contradicting training samples)
  • Benchmarking the benchmarks: How do we know if a benchmark is good?
    • What makes a benchmark reproducible? How can benchmarking results be made actionable and informative for developing better methods?
    • How do we measure whether a benchmark is complementary to existing ones and addresses new challenges?
  • Sensitivity/perturbation analyses: what happens if one slightly changes input images or training data?
  • Data architecture:
    • What is a good process to create a dataset for performance evaluation? (Annotation, synthetic images, measurements, and their caveats)
    • What are the important issues and what is a realistic scope?
    • What are the main differences between performance evaluation strategies in industry and in research? How can one inform the other?

Timeline

  • Deadline for manuscripts: April 15
  • First decision: July 1
  • (Potential) revision due: August 15
  • Final decision: October 1

If an extension is needed on your submission due to COVID-19 delays, please send an email to schar@middlebury.edu.

Articles will appear online as soon as they are accepted; the actual issue will likely appear in early/mid 2021.

Submission guidelines: 

• Submit manuscripts to: http://VISI.edmgr.com. Select “SI: Performance Evaluation in Computer Vision” as the article type when submitting.

• Papers must be prepared in accordance with the Journal guidelines: www.springer.com/11263

• Authors are encouraged to submit high-quality, original work that has neither appeared in nor is under consideration by other journals.

• All papers will be reviewed following standard reviewing procedures for the Journal.