Call for Papers for Special Issue: "Prospects for Ethical Uses of Artificial Intelligence in Human Resources." Extended Deadline: March 5th, 2021

Editors: E. Aizenberg & M. J. Dennis (TU Delft).

We invite articles for a special issue of Ethics and Information Technology on the ethics of artificial intelligence in human resources, expected to be published in the third quarter of 2021.

Topic Overview

The transformative potential of deploying artificial intelligence (AI) in human resources (HR) is now widely accepted in many commercial and institutional contexts. AI recruitment companies promise that their products can save the labour of countless HR professionals, improve the selection of candidates, and do so at a fraction of the cost of the traditional recruitment process. Their products allow hundreds or thousands of job applications to be sifted and evaluated at the touch of a button. Recently, designers of these systems have become more ambitious in the range of tools they offer recruiters. While AI in the HR domain initially specialised in analysing written applications (CVs, motivation letters, etc.), the latest and most innovative products allow employers to analyse the facial expressions, speech content, and even the tone of voice of potential employees. Data from these sources, so their creators claim, yield critical insights into applicants' values and character traits, which the AI uses to make quantitative predictions about how an applicant will fare in a future workplace. As recruiters respond to the social-distancing challenges of the COVID-19 pandemic, such tools may appear increasingly attractive.

Not only can AI analyse job applications more efficiently; it is also often touted as fairer in its selection of applicants. Designers of these systems claim that they are free of individual prejudice and systematic bias, and even better at discerning the virtues and vices of applicants. Furthermore, some proponents have claimed that AI could select candidates using synthetic categories that predict future job performance better than even the most experienced HR professionals. Despite such claims, ethicists are increasingly finding reason to be sceptical of this technology. On the one hand, the ethical challenges of AI in the HR domain mirror those that have received significant attention in other AI application domains, such as policing and criminal justice, especially surrounding the problem of discriminatory profiling (Angwin et al., 2016; Barocas and Selbst, 2016). One common issue in these cases is the use of historically biased data sets to train AI algorithms, which reinforces historical and existing discrimination. Similarly, in the HR domain, data sets based on existing hiring practices can be expected to replicate existing prejudice (Tambe et al., 2019). On the other hand, an emerging ethical problem that has been less investigated thus far is the manner in which the use of AI in HR infringes candidates'/employees' autonomy over self-representation (Van den Hoven and Manders-Huits, 2008): their ability to choose and control how they communicate their skills, motivation, personality, and experiences, while being subjected to reductionist and opaque quantification of these highly nuanced, contextual, and dynamic qualities (Delandshere and Petrosky, 1998; Govaerts and Van der Vleuten, 2013; Lantolf and Frawley, 1988). This special issue focuses on both kinds of problems.

We invite contributions on the ethics of applying AI in the HR domain for the purposes of recruiting, hiring, employee performance assessment, etc., with special focus on (but not limited to):

  • Autonomy and control over self-representation/presentation
  • Fairness, transparency, and justice in socio-technical HR organizational practices involving AI
  • Erosion of the idea of a labour market
  • Privacy and the right to a non-work private life
  • Appropriate distribution of roles and responsibilities among candidates/employees/employers/AI
  • Deselection by individual idiosyncrasy and other factors that are not relevant to employment

Manuscripts should be approximately 5000–8000 words of article content (title, abstract, and references are not counted; tables and quotes are). Detailed submission guidelines are available here. During the submission process, please indicate that your submission is for the "Ethics of AI in Human Resources" special issue within the "Additional Information" step in the Editorial Manager.

References

  • Angwin, J., Larson, J., Mattu, S., et al. (2016). 'Machine Bias.' ProPublica, 23 May 2016.
  • Barocas, S. & Selbst, A. D. (2016). 'Big Data's Disparate Impact.' California Law Review 104(3): 671–732. DOI: 10.2139/ssrn.2477899.
  • Delandshere, G. & Petrosky, A. R. (1998). 'Assessment of Complex Performances: Limitations of Key Measurement Assumptions.' Educational Researcher 27(2).
  • Govaerts, M. & Van der Vleuten, C. P. (2013). 'Validity in work-based assessment: Expanding our horizons.' Medical Education 47(12): 1164–74.
  • Harwell, D. (2019). 'A face-scanning algorithm increasingly decides whether you deserve the job.' The Washington Post, 22 October 2019. https://www.washingtonpost.com/technology/2019/10/22/ai-hiring-face-scanning-algorithm-increasingly-decides-whether-you-deserve-job/
  • Lantolf, J. P. & Frawley, W. (1988). 'Proficiency: Understanding the Construct.' Studies in Second Language Acquisition 10(2).
  • Rosenfield, H. & Antonini, L. (2020). 'Data isn't just being collected from your phone. It's being used to score you.' The Washington Post, 31 July 2020. https://www.washingtonpost.com/opinions/2020/07/31/data-isnt-just-being-collected-your-phone-its-being-used-score-you/
  • Tambe, P., Cappelli, P. & Yakubovich, V. (2019). 'Artificial Intelligence in Human Resources Management: Challenges and a Path Forward.' California Management Review 61(4): 15–42. DOI: 10.1177/0008125619867910.
  • Van den Hoven, J. & Manders-Huits, N. (2008). 'The Person as Risk, the Person at Risk.' In: ETHICOMP 2008: Living, Working and Learning Beyond Technology, pp. 408–14.
  • Van den Hoven, J. & Manders-Huits, N. (2008). 'Moral Identification in Identity Management Systems.' The Future of Identity in the Information Society. International Federation for Information Processing Digital Library.
  • Van den Hoven, J. (2008). 'Information Technology, Privacy, and the Protection of Personal Data.' In: Information Technology and Moral Philosophy. Cambridge University Press, pp. 301–21.