
AI and Ethics - Topical Collection on Exploring the Mutations of Society in the Era of Generative AI

Over the past eighteen months, generative AI has acquired a central position in the public space, becoming a key concern both for governments revising their national strategies and for private companies trying to integrate it into their business models. New threats are being considered, such as impersonation techniques and large-scale disinformation campaigns designed to destabilise political regimes, while new capabilities are emerging with the democratisation of AI systems through natural language interfaces. As AI systems' capabilities grow, new questions also arise regarding the creative process and property rights.

Beyond the business opportunities it brings, the wide adoption of generative AI leads us to consider some fundamental questions for our societies: How should we define data ownership and compensate those who produce the data on which LLMs are trained? To what extent should foundation models be open, allowing citizen control through compliance audits without benefiting malicious actors? How will generative AI impact the job market, both overall and across social classes? Will it result in universal-basic-income-based societies powered by artificial workers? What types of interactions may humans develop with artificial agents, and how are these likely to affect their cognitive capacities? Which skills will they lose, and which will they develop? Will artificial agents be granted moral agency, and might progress towards AGI trigger geopolitical tensions?

The 2nd edition of the Paris Conference on AI & Digital Ethics invites researchers from various disciplines to address such questions collaboratively. The conference is hosted by Sciences Po in Paris on June 6th and 7th, 2024. Details can be found at https://www.paris-conference.com. Of the 64 abstracts submitted to the conference, 12 will be selected by the reviewing committee following a double-blind peer-review process. These papers will be presented at the conference and invited to contribute to the Topical Collection in the AI and Ethics journal. Besides these papers, no other submissions will be included in the Topical Collection.

Aims

This Topical Collection intends to underscore the profound changes our societies may undergo with the massive adoption of generative AI, to open new research domains, and to drive the responsible development of such solutions by informing the actors involved.

We aim to gather a collection of papers presenting fresh and innovative ideas for approaching the challenges raised by the responsible use of AI technologies in defence strategies and law enforcement, addressing the challenges of diversity representation in synthetic content, imagining novel uses of AI systems to renew citizens' trust in democratic institutions, and anticipating the social, political, cognitive, and moral implications of increasing machine and human intelligence. These papers will combine ambitious reflections with practical considerations to sketch solutions to some of the most pressing problems our modern societies face.

Scope

The Topical Collection will focus on the mutations of society in the era of generative AI, organised around the following four pillars. Although the conference mainly focuses on generative AI, discussions related to other types of AI systems are also welcome.

1.  AI for defence strategies and law enforcement: In the context of a return to high-intensity conflicts in which AI technologies are used extensively, this track welcomes papers that help society understand the implications of using AI for defence and law enforcement, as well as develop responsible ways of doing so. Typical contributions may examine psychological and sociological aspects of surveillance systems, suggest relevant frameworks and guardrails for deploying them, investigate how to maintain meaningful control over autonomous and semi-autonomous weapon systems, or question the evolution of just war theory in the age of AI.

2.  Generative AI and the problem of diversity: Beyond safety and superalignment concerns, generative AI models face the challenge of cultural and moral pluralism. This track welcomes papers dedicated to understanding how generative models could interact with social values: how such values can be captured, expressed, or voted on; what scale should be considered relevant (national, regional, etc.); and how models could be fine-tuned to adapt to target audiences. Typical contributions may elaborate on, or criticise, holistic approaches such as constitutional AI; question the conception of diversity in data sets and AI development teams beyond race and gender (e.g. social, religious, or disciplinary diversity); or discuss the implications of self-representation for populations (e.g. autotranscendence of the social).

3.  AI to renew democracies: Many democratic regimes, especially in Western countries, currently face a trust crisis, calling for mechanisms to renew the social contract and re-cement communities. This track welcomes papers suggesting innovative uses of digital and AI technologies to renew citizens' trust in their institutions and strengthen democratic regimes. Typical contributions may suggest solutions to make political representatives more accountable, decision-making more transparent, or democratic participation more direct, as well as to counter online misinformation, support public debate and collective decision-making processes, or promote peace between populations and nations.

4.  The implications of (super)intelligence: Recognising artificial agents as intelligent, or superintelligent, may have many consequences, including the expansion of their decision scope as moral agents in the medical space; the impact such recognition may have on other entities, such as animals; and the risks it may create. Contributions addressing these aspects are welcome in this track; typical papers may engage with the literature on moral status, discuss the fair distribution of responsibility for AI-assisted decision tools, suggest approaches and metrics to measure intelligence, or even criticise the relevance of intelligence to addressing moral dilemmas and status.

By examining the impact of generative AI through these four angles, this Topical Collection aims to provide a comprehensive view of the challenges our societies face and their potential solutions in the AI ecosystem, supporting the overarching objectives of the AI and Ethics journal.

Areas of Interest

The Topical Collection is dedicated to promoting dialogue between academic fields, and we encourage submissions from a range of disciplines including moral philosophy, political theory, political science, international relations, law, sociology, and computational social sciences. Transdisciplinary approaches and mixed methods are particularly encouraged, as are collaborations between academic and industry researchers. The objective is to build bridges between research communities and develop more sophisticated approaches to socio-technical systems. The resulting contributions should be relevant to a variety of topics:

  • Long-term impacts of AI-assisted decision-making tools on society 
  • Challenge of aligning LLMs with cultural values
  • Psychological and social impacts of LLMs when mediating social interactions
  • Innovative comprehensive approaches to generative AI safety
  • AI applications to support online deliberation
  • Content integrity and moderation on social media
  • Algorithmic governmentality and the epistemic implications of AI-driven science
  • Use of AI for sustainable economic development 
  • Use of AI in defence strategies and law enforcement 
  • Renovating democracy and preserving digital sovereignty
  • Rethinking property rights in the age of generative AI

Guest Editors

Prof. Brent Mittelstadt, University of Oxford, UK, brent.mittelstadt@oii.ox.ac.uk
Dr. Hubert Etienne, Quintessence AI, Hubert.etienne@quintessenceai.com
Prof. Rob Reich, Stanford University, USA, reich@stanford.edu
Prof. John Basl, Northeastern University, USA, j.basl@northeastern.edu
Dr. Jeff Behrends, Harvard University, USA, jbehrends@fas.harvard.edu
Prof. Dominique Lestel, Ecole Normale Supérieure, France, lestel@ens.fr
Prof. Chloé Bakalar, Meta, cbakalar@meta.com
Dr. Geoff Keeling, Google, gkeeling@google.com
Dr. Giada Pistilli, Hugging Face, giada@huggingface.com
Prof. Marta Cantero Gamito, European University Institute, Italy, marta.cantero@eui.eu

Manuscript Submission Deadline: 1st July 2024

Submission

Submissions should be original papers not under consideration for publication elsewhere. Extended versions of high-quality conference papers already published at relevant venues may also be considered, provided the additional contribution is substantial (at least 30% new content).

Authors must follow the formatting and submission instructions of the AI and Ethics journal at https://www.springer.com/journal/43681.

In the first step of the submission system, Editorial Manager, please select “Original Research” as the article type. In subsequent steps, please confirm that your submission belongs to a special issue and choose the appropriate special issue title from the drop-down menu.

