Student Forum Online Event - Call for Abstracts

The editorial team of the Student Forum of Springer journal AI & Society: Knowledge, Culture and Communication welcomes students and early career researchers to a discussion of new perspectives and key developments in the field of AI and its impact on societies. 

The event will be held online on Wednesday, 16th June 2021. Further details to be confirmed.  

The theme of the event is Inclusive AI in a COVID-19 Era. It will explore the marginalisation of certain groups in society, the widening inequalities apparently exacerbated by AI, and the challenge of addressing these and other social problems through the visionary application of AI and related technologies in the future. The editorial team is delighted to invite postgraduate students and early career researchers to submit an abstract for one of two thematic sessions. Please see below for a detailed description of the two panel sessions. 

Following the event, a Master Class workshop on publishing in the journal AI & Society's Student Forum will take place for postgraduate students and early career researchers. During this Master Class, participants will have the opportunity to meet the Student Forum Associate Editors, receive tips on publishing in the Forum, and learn ways to overcome some of the issues faced by those who wish to publish in the Student Forum.  


Postgraduate students and early career researchers conducting research related to either of the thematic panel sessions are invited to send their abstracts of 250 words maximum, together with a short bio of the authors (max. 100 words) and 3-5 keywords, to: Please include the title of the panel session to which you are submitting your abstract.  
The deadline for abstract submission has been extended to April 23, 2021, 12pm GMT. 


Session I: Stereotypes and Biases 

In order to better understand how AI can be conceptualised, designed, and deployed, it is important to study the effects it has on creating a more diverse, fair and equitable society, or not. Gill (2020) proposes that Artificial Intelligence can provide for the “common good” but that the scientific community needs to study the sociotechnical issues that arise from “the accelerated integration of powerful artificial intelligence systems into core social institutions”. Algorithms have become powerful, and when assembled they become systems capable of “profiling, categorising and predicting who we are, what we want and more” (Hayes et al., 2020). Whilst AI can be a positive development (Gill, 2020), it can have profound and unexpected impacts on individuals and society at large (Altman et al., 2018). According to Altman et al. (2018), “algorithmic approaches to collecting, analysing, classifying and making decisions can affect the wellbeing of individuals, groups and society” and have the potential to “challenge people's control over information collection, sharing, as well as notions of privacy, equity, fairness and autonomy”. 

Altman and colleagues (2018) note the use of the automated decision support software COMPAS (Correctional Offender Management Profiling for Alternative Sanctions) in the US justice system. The COMPAS algorithm is used to determine the risk of an individual reoffending based on a variety of factors, including criminal history and socioeconomic status. In a study by Angwin et al. (2016), evidence of racial bias was found: even when racial and gender characteristics were isolated from the COMPAS system, African Americans were still 45 percent more likely to be assigned a higher risk of reoffending. Caucasians, at the same time, were more likely to be wrongly ascribed a low risk of reoffending. COMPAS is a clear example of the application of AI-informed decision making and the effects such decision making can have on individuals, including long-term and profound consequences such as loss of employment, poorer mental health, and increased risk of disease. 

As Raso et al. (2018) have observed, AI is “not developed within a vacuum”. To understand the social impacts of AI, then, there must be some account of the conditions that precede it; Raso et al. (2018), for example, examine its human rights impacts. Failure to account for these conditions results in AI merely exacerbating existing inequalities. Conceptualising diversity and inclusion becomes all the more difficult due to the nature of AI itself: the pattern recognition and classification at the core of AI can drive exclusionary processes and practices. Biased decisions creep into algorithms through training data, which can reflect current and historical inequalities in the enforcement of law, and through flawed data sampling in which groups are over- or underrepresented. This session invites presentations that draw on such debates and aim to address the broad themes of stereotyping and bias in AI technologies. Presentations focusing on AI usage in the context of the Covid-19 pandemic are welcome but not required. 

Session II: Widening inequalities and limited access to services

A wide range of technologies are an ever-present feature of daily life, including information and communication technologies and medical and assistive devices. Yet, as sociologists and economists have repeatedly demonstrated, the increasing automation and digitalisation of private and public sector services places certain members of society at an advantage while simultaneously disadvantaging others. Inequalities in terms of availability and access to online information and services, as well as design biases built into many technologies that underpin those services, have been identified in numerous studies (gender-based, racial, cultural and language-based, ableist, ageist, rural/urban, and so on). 

New inequalities have emerged and existing ones have been amplified in recent months, as many face-to-face interactions have moved online due to Covid-19 regulations. While many in professional occupations have emerged mostly unscathed from this transition, others, many of them small business owners and self-employed workers, have been laid off or have lost their businesses. For “essential” workers, there is also a heightened risk of infection. People with lower digital literacy, notably older persons, and people with limited Internet access find themselves at a distinct disadvantage, both in accessing services such as healthcare and in maintaining contact with family, friends, and the wider community during rolling lockdowns. This session invites presentations that address the broad theme of social and economic inequalities related to the design and availability of technologies. We welcome research comparing and assessing international, national, or local contexts, conditions and approaches. Presentations focusing on inequalities in the context of the Covid-19 pandemic are welcome but not required.   


AI & Society: Knowledge, Culture and Communication is an international journal publishing refereed scholarly articles, position papers, debates, short communications, and reviews of books and other publications. Established in 1987, the Journal focuses on societal issues including the design, use, management, and policy of information, communications, and new media technologies, with a particular emphasis on cultural, social, cognitive, economic, ethical, and philosophical implications. 

AI & Society has a broad scope and is strongly interdisciplinary. We welcome contributions and participation from researchers and practitioners in a variety of fields including information technologies, humanities, social sciences, arts, and sciences. This includes broader societal and cultural impacts, for example on governance, security, sustainability, identity, inclusion, working life, corporate and community welfare, and well-being of people. Coauthored articles from diverse disciplines are encouraged. 

AI & Society seeks to promote an understanding of the potential, transformative impacts and critical consequences of pervasive technology for societies. Technological innovations, including new sciences such as biotech, nanotech, and neuroscience, offer a great potential for societies, but also pose existential risk. Rooted in the human-centred tradition of science and technology, the Journal acts as a catalyst, promoter, and facilitator of engagement with diversity of voices and over-the-horizon issues of arts, science, technology and society. 

The Student Forum Section of the journal aims to provide an opportunity for postgraduate students and early career researchers to communicate their ongoing research to the wider academic community. This includes theoretical, methodological, and application orientations of research, including case studies, as well as contextual action research experiences. The purpose is to provide students and young researchers with an opportunity to publish their case study work and experiential work on technology and society, with the possibility of their work leading to future full-length Journal papers for AI & Society. Papers in this section are formally reviewed. The Student Forum also hopes to build a bridge to the future and provide opportunities for ideas to be put into action. The Forum links students from many different universities, enabling collaboration and exchanges of experience.