Computational and Mathematical Organization Theory - Special Issue: AI and the Information Environment
Computational and Mathematical Organization Theory is now accepting submissions for the Special Issue on AI and the Information Environment.
Closes April 30, 2025
Guest Editors
Kathleen M. Carley, Carnegie Mellon University
Scott Leo Renshaw, Carnegie Mellon University
Abstract
This special issue invites interdisciplinary research at the intersection of artificial intelligence and the pervasive challenges of disinformation and misinformation in today’s digital environment. Building on stimulating discussions from the 2025 AI and Disinformation mini-conference held at Carnegie Mellon University, this call encourages innovative approaches from computer science, sociology, communications, data science, and related fields. We welcome submissions that explore the mechanisms by which AI both contributes to and can help combat digital disinformation, as well as the broader social, political, and organizational implications of these developments.
Introduction
In recent years, the rapid rise of artificial intelligence (AI) has transformed our digital information environment. AI systems are increasingly integral to creating, distributing, and personalizing content across social media and news platforms. This technological progress, however, has coincided with increasingly sophisticated and widespread disinformation and misinformation campaigns – from politically driven fake news to large-scale influence operations – with clear disruptive potential for organizations and society alike. Numerous academic studies and major news outlets have underscored AI’s role in amplifying misleading narratives, complicating efforts to preserve the availability of reliable, expert-derived information. This concern is evident in general skepticism toward AI-generated news headlines (Altay & Gilardi, 2024), in AI’s application to election-related disinformation (Swenson & Chan, 2024; Chowdhury, 2024; Adam, 2024), and in elaborate, tailored scams targeting vulnerable groups, particularly older adults (NYC Consumer and Worker Protection, 2024; Consumer and Governmental Affairs, 2024). Nonetheless, AI also offers promising avenues for countering disinformation (Li & Callegari, 2024; The University of Queensland, 2024), making it a critical area of inquiry in today’s online information environment.
During the early January 2025 mini-conference on AI and Disinformation held at Carnegie Mellon University, an interdisciplinary group of researchers from computer science, sociology, communications, data science, and related disciplines convened to explore these critical issues. The event featured poster sessions and panels addressing topics including:
• AI and Disinformation/Misinformation Generation and Propagation:
Examining how AI technologies are employed to produce and disseminate false or misleading information.
• Human-Centric Moderation and Collaborative Frameworks:
Exploring “human in the loop” approaches that integrate human judgment with automated systems – such as participatory decision-making – to enhance the detection and mitigation of misinformation.
• Decision Aids and Reliability:
Assessing tools and methods to improve the accuracy and trustworthiness of digital content.
• Platform Policies, Accountability, and Regulatory Frameworks:
Analyzing the role of platforms and policy in moderating the spread of disinformation.
• Impact on Polarization and Social Movements:
Evaluating the broader social consequences of digital disinformation.
• Dynamic Networks, Predictive Modeling, and Advanced Approaches:
Using AI to develop applications and analytic tools more efficiently, and applying novel approaches such as dynamic network analysis to identify and predict the evolving structure of information ecosystems.
These discussions made clear that there is a pressing need for ongoing organizational and computational social science research to deepen our understanding of the evolving landscape of AI-driven disinformation and misinformation.
Scope and Topics
The special issue editors, on behalf of the Computational and Mathematical Organization Theory (CMOT) journal, invite high-quality submissions that examine the technological and sociological implications of AI applications and approaches for humans and human systems (such as organizations) within the online information environment. As CMOT is a computational social science journal, this special issue seeks computational social science approaches; submissions that are purely machine learning or purely qualitative in nature are out of scope.
We welcome research that addresses, but is not limited to, the following themes:
1. AI in Social Engineering: Examining how AI is being used to shape public opinion and influence behavior through social engineering tactics.
2. Algorithmic Amplification and Misinformation: Analyzing how recommendation systems and automated content creation can inadvertently boost the spread of disinformation.
3. Technological and Societal Countermeasures: Proposing innovative methods, tools, and policies aimed at detecting, mitigating, or neutralizing the spread of false information.
4. Computational Social Science Perspectives: Using computational models and data-driven research to understand the socio-technical mechanisms behind digital disinformation.
5. Ethical and Regulatory Challenges: Discussing the broader ethical implications and regulatory challenges that arise from AI-driven disinformation, and their impact on public policy and governance.
6. Interdisciplinary Approaches: Combining insights from diverse fields to tackle the complex and multifaceted nature of digital misinformation. This includes quantitative analyses of case studies as well as qualitative and mixed-methods approaches that capture the experiences and impacts of AI within online information environments.
Submission Guidelines
Submissions must be original works that have not been previously published and are not under review elsewhere. We encourage contributions that:
• Present fresh theoretical insights, methodologies, or empirical findings.
• Incorporate case studies or comparative analyses that demonstrate the role of AI in digital disinformation.
• Provide interdisciplinary perspectives that integrate technical, social, and policy dimensions.
Manuscripts should adhere to the journal’s formatting guidelines and be submitted electronically via our online submission portal. Detailed submission instructions and deadlines are available on the journal’s website.
Important Dates
• Submission Deadline: April 30, 2025
• Reviews and First-Round Decisions: May 31, 2025
• Anticipated Special Issue Publication Date: July 15, 2025