Call for Papers - Business Ethics in the Era of Artificial Intelligence


Submission Deadline: September 30th 2020


Guest Editors (in alphabetical order):

Michael Haenlein, ESCP Europe Paris, France, haenlein@escpeurope.eu
Ming-Hui Huang, National Taiwan University, Taiwan, huangmh@ntu.edu.tw
Andreas Kaplan, ESCP Europe Berlin, Germany, kaplan@escpeurope.eu
David Vogel, Haas School of Business, Berkeley, US, vogel@haas.berkeley.edu


Description:

Artificial Intelligence (AI), defined as “a system’s ability to interpret external data correctly, to learn from such data, and to use those learnings to achieve specific goals and tasks through flexible adaptation” (Kaplan and Haenlein 2019, p. 17), is one of the most popular topics across a variety of academic disciplines, industry sectors, and business functions, and it widely influences society at large. While many will first think of the computational, organizational, or technological issues related to AI, this new era also triggers an entire set of ethical dimensions that urgently need to be analyzed, discussed, and reflected upon. As Martin and Freeman (2004, p. 353) point out, “business ethicists are uniquely positioned to analyze the relationship between business, technology, and society”.

There are many examples where the inappropriate use of AI has resulted in unethical outcomes and behavior. These include image recognition services that make offensive classifications of minorities due to biased algorithms; Microsoft’s AI chatbot Tay, which became racist and adopted hate speech after only one day; and Amazon’s facial recognition technology, which simply failed to recognize users with darker skin tones. In addition, AI is starting to enter the workplace and to shape company-employee interactions. Software such as Status Today can scrutinize staff behavior on a minute-to-minute basis, collecting data on who sends emails to whom at what time, who accesses and edits files, and who meets whom, and allows firms to compare such activity data with employee performance. From a broader perspective, the German government recently had to intervene and call out Mercedes-Benz after the company stated that its future self-driving cars would be programmed to prioritize the lives of the car’s passengers over those of other people on the street. This raises the question of who is actually responsible for technological outcomes: machines or their human developers (Johnson 2015; Martin 2018)?


The list of dilemmas created by advances in AI is long (Kaplan and Haenlein 2020). In the field of workforce management, while new jobs and opportunities will arise for some, many others will most likely lose their employment due to automation and digitalization (Huang and Rust 2018) or will require new professional skills in order to remain an active part of the workforce (Huang, Rust, and Maksimovic 2019). Looking at social structures, in industries such as health or elderly home care, will AI further reduce the human dimension, or will patients and senior citizens feel better when they are cared for by AI-driven robots rather than by overworked human staff? Regarding the environment, AI progress will increase pressure on energy demands and raw materials such as cobalt or lithium and lead to more electronic waste, but it could also yield new environmental solutions that humans might not have been able to think of. For firms all over the world there will be a trade-off between data protection and innovation: more (big) data available means better AI systems for the companies that use these data to train them. Therefore, the less regulation on data privacy and security is in place, the more competitive countries are likely to be on the world stage. This will potentially lead to regulatory policy divergence on an international scale (Vogel 2012). The question arises of how companies should resolve such dilemmas (cf. also Brusoni and Vaccaro 2017; Martin, Shilton, and Smith 2019).

As early as 1993, the Journal of Business Ethics published an article dealing with the ethical concerns of artificial decision-making (Khalil 1993). At that time, the AI Winter was just about to turn into another summer, as funding and interest in AI research were about to take off again after a period of disillusionment and scarce budgets. Today, we are in AI’s Fall and harvesting season (Haenlein and Kaplan 2019), with many of those ethical questions remaining and new ones piling up. This makes research in this domain all the more important.

Topics of Interest:

This special issue aims to publish original articles from a wide variety of methodological, disciplinary, and interdisciplinary perspectives that develop new insights with regard to business ethics in this new era of AI. Questions and topics of interest for this special issue include, but are not limited to:

•    Accountability and responsibility of companies in finding an equilibrium between the data protection rights of consumers and the business use of AI
•    Ethical concerns of using AI to control and influence employees and organizational behavior
•    The role and responsibility of companies in a global race for AI dominance, commercial war, and trade relations led by several world regions
•    Cooperation of public administration and the private sector to tackle the ethical dimension of AI
•    Discrimination and bias triggered by AI
•    Sector- and industry-specific ethical concerns of AI, such as in the creative and cultural industries, financial services, health, higher education, insurance, media, public administration, and the like
•    Firms’ moral duty with respect to environmental and sustainability concerns within the evolution of AI
•    Influence of AI usage in the workplace on employee satisfaction and autonomy
•    Liability and moral responsibility of companies with regard to the programming of algorithms and usage of data analytics
•    Literature reviews and conceptual state-of-the-art pieces on business ethics and AI
•    Moral obligation of companies to avoid fake news, deepfakes, online manipulation, and harassment
•    New and innovative sustainable business models enabled by AI
•    Potential of AI in the area of business ethics and corporate social responsibility (CSR) education and training
•    Risks of AI systems meant for positive usage being applied for negative purposes, and companies’ responsibility
•    Social media powered by AI and their influence on consumers


Submission instructions

The submission deadline is September 30th 2020. After an initial screening by the guest editors, a double-blind peer-review process will be applied. Authors are strongly encouraged to consult the Journal of Business Ethics website and its detailed instructions on submitting a paper. All papers must be submitted through the journal’s online submission portal, which will open 60 days prior to the call for papers’ submission deadline. Upon submission, please indicate that your submission is for this Special Issue on AI of the Journal of Business Ethics. Questions about expectations, requirements, the appropriateness of a topic, and the like should be directed to any of the guest editors.


About Journal of Business Ethics

The Journal of Business Ethics publishes only original articles, from a wide variety of methodological and disciplinary perspectives, concerning ethical issues related to business that bring something new or unique to the discourse in their field. The Journal’s impact factor is 3.796 (2018). It is one of the 50 journals used by the Financial Times in compiling its prestigious Business School research rankings.


References

•    Brusoni Stefano and Antonino Vaccaro (2017) Ethics, Technology and Organizational Innovation, Journal of Business Ethics, 143(2), 223-226
•    Haenlein Michael and Andreas Kaplan (2019) A Brief History of AI: On the Past, Present and Future of Artificial Intelligence, California Management Review, 61(4), 5-14
•    Huang Ming-Hui and Roland T. Rust (2018) Artificial Intelligence in Service, Journal of Service Research, 21(2), 155-172
•    Huang Ming-Hui, Roland T. Rust, and Vojislav Maksimovic (2019) The Feeling Economy: Managing in the Next Generation of AI, California Management Review, 61(4), 43-65
•    Johnson Deborah G. (2015) Technology with No Human Responsibility? Journal of Business Ethics, 127(4), 707-715
•    Kaplan Andreas and Michael Haenlein (2019) Siri, Siri in my Hand, who is the Fairest in the Land? On the Interpretations, Illustrations and Implications of Artificial Intelligence, Business Horizons, 62(1), 15-25
•    Kaplan Andreas and Michael Haenlein (2020) Rulers of the world, unite! The challenges and opportunities of Artificial Intelligence, Business Horizons, 63(1)
•    Khalil Omar (1993) Artificial decision-making and artificial ethics: A management concern, Journal of Business Ethics, 12(4), 313–321
•    Martin Kirsten (2018) Ethical Implications and Accountability of Algorithms, Journal of Business Ethics, https://doi.org/10.1007/s10551-018-3921-3
•    Martin Kirsten and R. Edward Freeman (2004) The Separation of Technology and Ethics in Business Ethics, Journal of Business Ethics, 53(4), 353-364
•    Martin Kirsten, Katie Shilton, and Jeffery Smith (2019) Business and the Ethical Implications of Technology, Journal of Business Ethics, https://doi.org/10.1007/s10551-019-04213-9
•    Vogel David (2012) The Politics of Precaution: Regulating Health, Safety, and Environmental Risks in Europe and the United States, Princeton University Press