Topical Collection on AI Auditing, Assurance, and Certification

Aims, Scope and Objective 

The ethics of Artificial Intelligence (AI Ethics) can be thought of as having undergone three broad phases: the first two concerned principles and processes, while the current phase we read in terms of assurance.

This burgeoning phase is reflected in publications on ‘AI audit’ by regulatory and standards bodies, as well as by academic groups, NGOs and industry players. Indeed, as with the broader AI ethics literature, we anticipate this space to be a highly interdisciplinary engagement between multiple stakeholders. Furthermore, we expect AI assurance to encompass nuances such as those listed here:

  • General and sector-specific assurance: We believe that the satisfaction of a particular standard - e.g. certification, auditability, etc. - will become mandatory, and we anticipate that standards will be both general (national/international) and sector-specific (e.g. financial services, recruitment, marketing, etc.).
  • Governance: Within the context of AI systems, governance can be divided into two broad streams, namely systems of governance and technical assessment. Systems of governance concern the systems and processes that focus on allocating decision-makers, providing appropriate training and education, keeping the human-in-the-loop, and conducting social and environmental impact assessments, all of which fall under mitigation strategies. Technical assessment concerns the systems and processes that render the activity of the technology itself accountable and transparent.
  • Monitoring Interfaces: Drawing on industry precedent, intuitive stop-light performance dashboards have been proposed; these will facilitate monitoring of performance over time. Furthermore, from a regulatory and standards standpoint, the UK’s Information Commissioner’s Office has a colour-coded ‘Assurance Rating’ for data. We envision that this can serve as guidance for producing an equivalent rating for AI assurance.
  • Unknown Risks: Foundational to safety is that steps are taken, and procedures are in place, to prevent harm. One particular problem is how to mitigate unknown risks (cf. ‘red teaming’).
  • Certification: Certification is the part of the assurance process that confirms that a system, process, organisation, etc. satisfies a particular standard.
  • Insurance: We anticipate that insurance of AI systems will emerge as the AI assurance space matures.

This topical collection aims to highlight new developments in AI assurance in the areas outlined above. Notwithstanding this, the collection will be open to further themes, such as diversity in teams, training and reporting.

Guest Editors

Emre Kazim (Lead Guest Editor) University College London, UK, e.kazim@ucl.ac.uk
Adriano Koshiyama, University College London, UK, adriano.koshiyama.15@ucl.ac.uk
Elizabeth Lomas, University College London, UK, e.lomas@ucl.ac.uk
Denise Almeida, University College London, UK, denise.almeida.18@ucl.ac.uk
Pamela Ugwudike, University of Southampton, UK, p.ugwudike@soton.ac.uk
Arthur Gwagwa, Utrecht University, The Netherlands, e.a.gwagwa@uu.nl
Zoe Porter, University of York, UK, zoe.porter@york.ac.uk
Kelly Lyons, University of Toronto, Canada, kelly.lyons@utoronto.ca

Provisional Deadlines

Deadline for submissions: May 31st, 2021
Deadline for review: June 30th, 2021
Decisions: July 30th, 2021
Deadline for revised version by authors: September 10th, 2021
Deadline for 2nd review: September 30th, 2021
Final decisions: October 8th, 2021

Submission

Submissions should be original papers and should not be under consideration for publication elsewhere. Extended versions of high-quality conference papers that have already been published at relevant venues may also be considered, as long as the additional contribution is substantial (at least 30% new content).

Authors must follow the formatting and submission instructions of the AI and Ethics journal at https://www.springer.com/journal/43681.

During the first step in the submission system Editorial Manager, please select “Original Research” as the article type. In further steps, please confirm that your submission belongs to a special issue and choose the appropriate special issue title (AI Auditing, Assurance, and Certification) from the drop-down menu.