
AI and Ethics - Topical Collection on AI Ethics in the Generative AI Era

Aims, Scope and Objective of the Topical Collection

In less than a year, the explosive proliferation of so-called “foundation models” and generative AI (GenAI) applications has ushered in an unprecedented commercialization of AI technologies. Within a few months of its release, ChatGPT captured public attention, amassing over 100 million users and triggering an “age of competition” among large tech corporations vying for market share amidst the GenAI boom. In this topical collection, we will explore the ethical and societal implications of the rapid development and spread of GenAI technologies. Areas of interest include: the potential for scaled AI-generated disinformation and misinformation; novel challenges to academic and research integrity; skills loss and overreliance; algorithmic bias; discriminatory amplification or obfuscation of voices and values in AI-generated content; reification of dominant cultural perspectives that endangers the voices of historically marginalised groups; labour displacement; environmental impacts; governance challenges; and emerging prospects for marshalling GenAI applications for the public good. The goal of the collection is to stimulate rigorous, interdisciplinary, and accessible analysis of the potential risks, opportunities, and ethical impacts created by the accelerating development of GenAI techniques and associated applications.

List of Topics Proposed

  • Economic, social, cultural, political, and legal implications of the scaled development and use of GenAI technologies 
  • Technical and socio-technical challenges presented to the responsible design, development, and deployment of GenAI technologies (e.g. system “hallucination” and lack of explainability)
  • Potential harms to people, society, and the planet presented by the malicious or irresponsible production and use of GenAI technologies
  • Emerging governance techniques and regulatory approaches to GenAI technologies, their project lifecycles, and their supply chains 
  • Explorations of how to use responsibly managed GenAI technologies for the social good or public benefit (e.g. drug discovery, materials innovation, scientific insight)

Guest Editors

David Leslie (Lead Guest Editor), The Alan Turing Institute and Queen Mary University of London, UK, dleslie@turing.ac.uk
Mhairi Aitken, The Alan Turing Institute, UK
Atoosa Kasirzadeh, University of Edinburgh, UK
Peter Smith, University of Sunderland, UK
Rebecca Johnson, University of Sydney, Australia
Harish Arunachalam, Verizon Responsible AI Group, USA

Manuscript submission deadline: 6th October 2023

Submission

Submissions should be original papers and should not be under consideration for publication elsewhere. Extended versions of high-quality conference papers already published at relevant venues may also be considered, provided the additional contribution is substantial (at least 30% new content).

Authors must follow the formatting and submission instructions of the AI and Ethics journal at https://www.springer.com/journal/43681.

In the first step of the submission system, Editorial Manager, please select “Original Research” as the article type. In subsequent steps, please confirm that your submission belongs to a special issue and choose the appropriate special issue title from the drop-down menu.
