Journal updates

  • COVID-19 and impact on peer review

    As a result of the significant disruption caused by the COVID-19 pandemic, we are very aware that many researchers will have difficulty meeting the timelines that normally apply to our peer review process. Please do let us know if you need additional time. Our systems will continue to remind you of the original timelines, but we intend to be highly flexible at this time.

  • Call for Papers for Topical Collection:"Ethical Perspectives on Connected and Automated Vehicles (CAVs)" (Deadline March 25, 2021

    Editors: Karolina Zawieska, Nick Reed, Filippo Santoni de Sio

    A follow-up to the European Commission report “Ethics of Connected and Automated Vehicles: Recommendations on road safety, privacy, fairness, explainability and responsibility”


    Scope

    Ethics and Information Technology is a peer-reviewed journal dedicated to advancing the dialogue between moral philosophy and the field of information and communication technology (ICT). The journal aims to foster and promote reflection and analysis intended to make a constructive contribution to answering the ethical, social and political questions associated with the adoption, use, and development of ICT.


    Topic

    This Topical Collection builds on the report recently published by the European Commission, “Ethics of Connected and Automated Vehicles: Recommendations on road safety, privacy, fairness, explainability and responsibility”. The report was written by an Independent Expert Group established by the European Commission to advise on specific ethical issues raised by driverless mobility for road transport, with the goal of promoting a safe and responsible transition to connected and automated vehicles (CAVs). The Guest Editors of this Topical Collection were among the experts who co-authored the report.

    We now welcome contributions that further discuss, elaborate on, or challenge some of the assumptions and recommendations presented in the report. While the main emphasis is on philosophical, legal and technical perspectives on the ethics of CAVs, contributions from other fields relevant to the development of ethical perspectives on CAVs are also welcome (e.g. the perspectives of manufacturers or policy-makers).


    Paper submission

    Submissions should be made through the submission system in compliance with the submission guidelines. Submission of a manuscript implies that the work described has not been published before and that it is not under consideration for publication anywhere else.

    All manuscripts will be subject to a rigorous peer-review process and published as open access articles. The Topical Collection follows a rapid publication approach: articles will be published online as soon as they are accepted.


    Important dates

    • Deadline for submissions: March 25, 2021
    • Final deadline (latest) for reviews: April 21, 2021
    • Final deadline for revisions: May 27, 2021
    • Final date for publication: July 1, 2021

  • Call for Papers for Topical Collection: Ethical, Legal and Responsible AI (deadline: 15 March 2021)

    Editors: Virginia Dignum, Fosca Giannotti, Raja Chatila

    Artificial Intelligence is transforming work, organisations, industries and society. Despite the many potential benefits of this general-purpose technology, there are significant challenges and risks, including privacy, security, ethics, transparency and regulation. Prioritizing ethical, legal, and policy considerations in the development and management of AI systems, so as to ensure the responsible design, production and use of trustworthy AI, requires the integration of approaches from engineering, policy, law and ethics.


    This special issue is the result of collaboration between the EU Horizon 2020 projects HumaneAI-Net, TAILOR and AI4EU.


    This special issue calls for research papers, project reports, or position papers focusing on, but not limited to, the following topics: 

    • Accounts (and criteria of adequacy) of moral values, especially in the context of human-AI interaction (with reference to some prominent values in the Ethics & AI debate, e.g. accountability, privacy, fairness)
    • Experiences in designing and operationalising human values and ethical principles
    • Methods for measuring values and norms in the human-AI ecosystem, as required by an agile approach to designing for values
    • Approaches to understanding how values can change (or how their balance/priorities may shift) as a side effect of complex interaction between humans and AI systems in a complex socio-technical ecosystem, also with respect to the above-mentioned value hierarchy
    • Emergence and resolution of value conflicts by design
    • Theory and methods for dealing with ethical dilemmas and value prioritization, ensuring that such decisions are open, transparent and amenable to argumentation and participation by a wide range of stakeholders
    • Critical scholarship addressing the power structures and imbalances that shape the production and adoption of AI systems
    • The ethical importance of epistemic conditions for responsibility in the design and use of AI systems (e.g. the contextuality of notions such as ‘understanding’, ‘explaining’ and making ‘transparent’ the working of deep learning)
    • Understanding the relation between humane-ness, human-centeredness and human dignity in the application of AI
    • Experiences in education and training in ethical, legal and responsible AI


    Important dates

    • 15 March 2021: Submission deadline
    • 3 May 2021: Notifications on decisions
    • 31 May 2021: Major revisions deadline for selected papers
    • 5 July 2021: Final decision on accepted papers
    • 2 August 2021: Camera-ready papers due
    • Late 2021: Publication of special issue


    Submissions 

    This special issue welcomes submissions from a wide variety of disciplines, including computer science, statistics, law, social sciences, the humanities, and education. Given the multidisciplinary character of the issue, we request that all submissions include a short description of the potential impact of the work reported and of the ethical challenges the authors faced in their work. Please also indicate to which of the following topics the submission contributes:


    • Computer Science
    • Law
    • Social Sciences and Humanities
    • Education
    • Practical applications
    • Educating at the intersection of Law, Social Sciences, Humanities, and Computer Science

    Submissions are limited to 8000 words and must be anonymised for double-blind review.


    Suggested submission types: 

    • Research papers
    • Position papers
    • Papers describing existing projects (maximum length for this type is 2000 words) 


    Contact: Virginia Dignum: virginia@cs.umu.se


    Reviewing committee

    Andreas Theodorou
    Anna Monreale 
    Atia Cortes
    Barry O’Sullivan
    Catholijn Jonker
    Dino Pedreschi
    Elisa Fromont
    Francesca Rossi
    Frank Dignum
    Jose Hernandez Orallo
    Josep Domingo Ferrer
    Juan Carlos Nieves
    Marie Christine Rousset
    Mario Paolucci
    Michael Berthold 
    Paolo Torroni
    Roel Dobbe
    Ron Chrisley


  • Call for Papers for Special Issue:"Prospects for Ethical Uses of Artificial Intelligence in Human Resources" Extended Deadline: March 5th, 2021

    Editors: E. Aizenberg & M. J. Dennis (TU Delft).

    We invite articles for a special issue of Ethics and Information Technology on the ethics of artificial intelligence in human resources, expected to be published in the third quarter of 2021.

    Topic Overview

    The transformative potential of deploying artificial intelligence (AI) in human resources (HR) is now widely accepted in many commercial and institutional contexts. AI recruitment companies promise that their products can save the labour of countless HR professionals, improve the selection of candidates, and do this at a fraction of the cost of the traditional recruitment process. Their products allow hundreds or thousands of job applications to be sifted and evaluated at the touch of a button. Recently, designers of these systems have become more ambitious in the range of tools they offer recruiters. While AI in the HR domain initially specialised in analysing written applications (CVs, motivation letters, etc.), the latest and most innovative products allow employers to analyse facial expressions, speech content, and even the voice tone of their potential employees. Data from these sources — so the creators claim — yield critical insights into the values and character traits of the employees, which the AI uses to make quantitative predictions about how the applicant will fare in a future workplace. As recruiters respond to social-distancing challenges of the COVID-19 pandemic, such tools may appear increasingly attractive.

    Not only can AI analyse job applications more efficiently, it is often touted as fairer in its selection of applicants. Designers of these systems claim that they are free of individual prejudice and systematic bias, and are even better at discerning the virtues and vices of applicants. Furthermore, some proponents have even claimed that AI could select candidates using synthetic categories that are capable of better predicting future job performance than even the most experienced HR professionals. Despite such claims, ethicists are increasingly finding reason to be sceptical of this technology. On the one hand, the ethical challenges of AI in the HR domain mirror those that have received significant attention in other AI application domains, such as policing and criminal justice, especially surrounding the problem of discriminatory profiling (Angwin et al., 2016; Barocas and Selbst, 2016). One common issue in these cases is the use of historically biased data sets in the training of AI algorithms, which results in reinforcement of historical and existing discrimination. Similarly, in the HR domain, data sets based on existing hiring practices can be expected to replicate existing prejudice (Tambe et al., 2019). On the other hand, an emerging ethical problem that has been less investigated thus far is the manner in which the use of AI in HR infringes candidates’/employees’ autonomy over self-representation (Van den Hoven and Manders-Huits, 2008): their ability to choose and control how they communicate their skills, motivation, personality, and experiences, even as they are subjected to reductionist and opaque quantification of these highly nuanced, contextual, and dynamic qualities (Delandshere and Petrosky, 1998; Govaerts and Van der Vleuten, 2013; Lantolf and Frawley, 1988). This special issue focuses on both kinds of problems.

    We invite contributions on the ethics of applying AI in the HR domain for the purpose of recruiting, hiring, employee performance assessment, etc., with special focus on (but not limited to):     

    • Autonomy and control over self-representation/presentation
    • Fairness, transparency, and justice in socio-technical HR organizational practices involving AI
    • Erosion of the idea of a labour market
    • Privacy and the right to a non-work private life
    • Appropriate distribution of roles and responsibilities among candidates/employees/employers/AI
    • Deselection by individual idiosyncrasy and other factors that are not relevant to employment

    Manuscripts should be between approximately 5000 and 8000 words of article content (title, abstract and references are not counted, but tables and quotes do count). Detailed submission guidelines are available on the journal website. During the submission process, please indicate that your submission is for the “Ethics of AI in Human Resources” special issue within the “Additional Information” step in the Editorial Manager.

    References

    • Angwin, J., Larson, J., Mattu, S., et al. (2016). ‘Machine Bias.’ ProPublica, 23 May 2016.
    • Barocas, S. & Selbst, A. D. (2016). ‘Big Data’s Disparate Impact.’ California Law Review 104(3): 671–732. DOI: 10.2139/ssrn.2477899.
    • Delandshere, G. & Petrosky, A. R. (1998). ‘Assessment of Complex Performances: Limitations of Key Measurement Assumptions.’ Educational Researcher 27(2).
    • Govaerts, M. & Van der Vleuten, C. P. (2013). ‘Validity in work-based assessment: Expanding our horizons.’ Medical Education 47(12): 1164–74.
    • Harwell, D. (2019). ‘A face-scanning algorithm increasingly decides whether you deserve the job.’ The Washington Post. https://www.washingtonpost.com/technology/2019/10/22/ai-hiring-face-scanning-algorithm-increasingly-decides-whether-you-deserve-job/
    • Lantolf, J. P. & Frawley, W. (1988). ‘Proficiency: Understanding the Construct.’ Studies in Second Language Acquisition 10(2).
    • Rosenfield, H. & Antonini, L. (2020). ‘Data isn’t just being collected from your phone. It’s being used to score you.’ The Washington Post. https://www.washingtonpost.com/opinions/2020/07/31/data-isnt-just-being-collected-your-phone-its-being-used-score-you/
    • Tambe, P., Cappelli, P. & Yakubovich, V. (2019). ‘Artificial Intelligence in Human Resources Management: Challenges and a Path Forward.’ California Management Review 61(4): 15–42. DOI: 10.1177/0008125619867910.
    • Van den Hoven, J. & Manders-Huits, N. (2008). ‘The Person as Risk, the Person at Risk.’ In: ETHICOMP 2008: Living, Working and Learning Beyond Technology, pp. 408–14.
    • Van den Hoven, J. & Manders-Huits, N. (2008). ‘Moral identification in Identity Management Systems.’ The Future of Identity in the Information Society. International Federation for Information Processing Digital Library.
    • Van den Hoven, J. ‘Information Technology, Privacy, and the Protection of Personal Data.’ In: Information Technology and Moral Philosophy. Cambridge University Press, pp. 301–21.
  • Call for Papers: The ethics and epistemology of explanatory AI in medicine and healthcare (submission deadline: 1 May 2021)

    Guest Editors

    Juan Manuel Durán (TU Delft), Martin Sand (TU Delft), Karin R. Jongsma (UMC Utrecht)

    Ethics and Information Technology is calling for the submission of papers for a Special Issue focusing on the ethics and epistemology of explainable AI in medicine and healthcare. Modern medicine is now largely implemented and driven by diverse AI systems. While medical AI is assumed to be able to “make medicine human again” (Topol, 2019) by diagnosing diseases more accurately and thus freeing doctors to spend more time with their patients, a major issue that emerges with this technology is that of explainability, either of the system itself or of its outcome.

    In recent debates, it has been claimed that “[for] the medical domain, it is necessary to enable a domain expert to understand why an algorithm came up with a certain result” (Holzinger et al. 2020). Holzinger and colleagues suggest that being unable to provide explanations for certain automated decisions could have adverse effects on patients’ trust in those decisions (p. 194). But does trust really require explanation and, if so, which kind of explanation? Alex John London has forcefully contested the requirement of explainability, suggesting that we are aiming for a standard that cannot be upheld in health care: several interventions (e.g., treatments such as Aspirin) are commonly accepted and applied because they are deemed effective, even though we lack an understanding of their underlying causal mechanisms. Accuracy, London argues, is a more important value for medical AI than explainability (London 2019). At this juncture, the central claim thus remains disputed: Is explainability philosophically and computationally possible? Are there suitable alternatives to explainability (e.g., accuracy)? Does explainability play, or should it play, a role (and if so, which one) in the responsible implementation of AI in medicine and healthcare?

    The present Special Issue aims to dive into the heart of this problem, thereby connecting computer science and medical ethics with the philosophy of science, philosophy of medicine, and philosophy of technology. All contributions must relate technical and epistemological issues to the normative and social problems brought up in connection with the use of AI in medicine and healthcare.

    We are particularly interested in contributions that shed new light on the following questions:

    • What are the distinctive characteristics of explanations in AI for medicine and healthcare?
    • Which epistemic and normative values (e.g., explainability, accuracy, transparency) should guide the design and use of AI in medicine and healthcare?
    • Does AI in medicine pose particular requirements for explanations? 
    • Is explanatory pluralism a viable option for medical AI (i.e., pluralism of discipline and pluralism of agents receiving/offering explanations)?
    • Which virtues (e.g., social, moral, cultural, cognitive) are at the basis of explainable medical AI?
    • What is the epistemic and normative connection between explanation and understanding?
    • How are trust (e.g., normative and epistemic) and explainability related? 
    • What kinds of explanations are required to increase trust in medical decisions?
    • What is the role of transparency in explanations in medical AI?
    • How are accountability and explainability related in medical AI?

    Holzinger A, Carrington A, Müller H. Measuring the Quality of Explanations: The System Causability Scale (SCS). KI - Künstliche Intelligenz. 2020;34(2):193-8. doi: 10.1007/s13218-020-00636-z.

    London AJ. Artificial Intelligence and Black-Box Medical Decisions: Accuracy versus Explainability. Hastings Center Report. 2019;49(1):15-21. doi: 10.1002/hast.973.

    Topol EJ. Deep Medicine - How Artificial Intelligence Can Make Healthcare Human Again. New York: Basic Books; 2019.

    Important dates

    1 May 2021: Submission deadline

    End of 2021: Expected time of publication of Special Issue

    Papers must be submitted via the online submission system and shall not exceed 8,000 words including references. Submissions will be double-blind refereed for relevance to the theme as well as for academic rigor and originality. High-quality articles not deemed sufficiently relevant to the Special Issue may be considered for publication in a subsequent non-themed issue. Pre-submission inquiries are encouraged and should be directed to the main guest editor, Juan Manuel Durán (j.m.duran@tudelft.nl).

  • Topical Collection on COVID-19

    We invite submissions for a Topical Collection of Ethics and Information Technology on COVID-19.

    We welcome articles on how information technologies can help tackle the immediate health risks of the pandemic, and on how these technologies can safeguard human well-being, as well as rights and freedoms, in a post-COVID world. Information technologies have a unique role to play in mitigating some of the worst effects of the SARS-CoV-2 virus, but they also introduce new ethical dilemmas. Many of these technologies could have long-term transformative effects for our society. Ethical reflection and analysis is therefore indispensable. Ethics and Information Technology investigates the ethical challenges of information technologies, so priority will be given to contributions that relate these challenges to the COVID-19 crisis, especially those that help us to identify ways to move beyond dilemmatic choices and argue for responsible digital innovations.