
International Journal of Artificial Intelligence in Education - Call for Papers: Special Issue on the Use of Large Language Models in Education

Guest Editors

  • Wanli Xing, University of Florida, USA
  • Andrew Lan, University of Massachusetts Amherst, USA
  • Scott Crossley, Vanderbilt University, USA
  • Zhou Yu, Columbia University, USA
  • Paul Denny, University of Auckland, New Zealand
  • Nia Nixon, UC-Irvine, USA
  • John Stamper, Carnegie Mellon University, USA

Background

Large language models (LLMs) are deep neural networks, typically built on transformer architectures, with hundreds of millions or even billions of parameters and pre-trained on large quantities of language data. In recent years, LLMs have made significant strides on a wide range of natural language processing (NLP) tasks, including language generation, summarization, comprehension, and classification (Brown et al., 2020). Recent LLMs, such as GPT-4 (OpenAI, 2023) and LLaMA (Touvron et al., 2023), have demonstrated a remarkable ability to understand and generate human-like text, whether accessed through proprietary API services or as open-source releases, making them valuable tools for a variety of applications, including education. Because LLMs inherit semantic and contextual understanding from pretraining and transfer it to new tasks, they fit well in the context of learning engineering and learning analytics (Baker, 2023), providing reusable and scalable technical architectures across subjects (e.g., math: Scarlatos & Lan, 2023; Shen et al., 2021; science: Cooper, 2023; medicine: Luo et al., 2022). Early integrations of LLMs into educational settings have shown promising results for augmenting learning, including item response and student knowledge tracing models for open-ended questions (Liu et al., 2022), socio-emotional support (Li & Xing, 2021), automatic generation of educational content (Sarsa et al., 2022) and especially questions (Wang et al., 2021), and automatic contextual feedback (see the review by Hahn et al., 2021). The potential extension of LLMs to multimodal data further empowers researchers and practitioners to support students’ learning across diverse data sources and formats.

Despite the promise of LLMs in education, there is still a need to explore their potential impact, limitations, and ethical considerations. For example, little is known empirically about the learning experience design of LLM-enabled educational applications and their impact on students’ motivation, engagement, self-efficacy, and learning outcomes. Additionally, LLMs have been trained predominantly on English and adult texts, with relatively little non-English and K-12 data involved in their development, potentially leading to equality and equity issues in education (Abid et al., 2021; Ariely et al., 2022; Kasneci et al., 2023). Finally, LLMs raise ethical concerns (e.g., factuality, safety, fairness, and transparency), creating uncertainty about how to build sustainable and trustworthy AI systems in education (Kasneci et al., 2023; Li et al., 2022). This special issue aims to collect, review, and publish research that investigates the use of LLMs in educational contexts, addresses the challenges and opportunities associated with their deployment, and furthers our understanding of how LLMs might change the nature of teaching and learning (e.g., forms of assessment, computing education).

Rationale, Motivation, and Scope of the Special Issue

To advance our understanding of the role, technical foundations, and ethics of LLMs in education, IJAIED is pleased to announce a special issue on “Use of Large Language Models in Education.” The rationale of this special issue is to bring together cutting-edge research that explores technical extensions of LLMs in AIED, investigates the design and development of LLM-powered implementations in educational settings, highlights the challenges and opportunities associated with their use, and provides insights into how LLMs can be effectively integrated into educational practices and how, and under which conditions, they might change those practices, perhaps fundamentally so. We welcome contributions that align with the aims and scope of IJAIED, focus on the use of LLMs in education, and provide evidence of their impact on teaching and learning.

We are particularly interested in research that addresses the following questions: 

  • How can LLMs effectively facilitate high-quality educational content creation, organization, or dissemination?
  • What are the implications of using LLMs for personalizing learning experiences and promoting learner engagement?
  • How can prior research on technology effectiveness in education inform the effective use of LLMs?
  • How can LLMs be used to improve experimental research in AIED (e.g., ChatGPT as a teammate in collaborative problem-solving experiments)?
  • How can LLMs improve assessment and evaluation processes in education?
  • In what ways can LLMs foster creativity, innovation, and collaboration among learners?
  • In what ways and under what circumstances might the use of LLMs transform learning and teaching?
  • What are the ethical considerations and potential risks associated with the use of LLMs in education, and how can they be mitigated?
  • What are the real-world obstacles and hindrances to the deployment of LLMs in educational environments, and what strategies can be employed to address and surmount them?
  • What approaches can be taken to establish policies and frameworks that promote the conscientious and ethical application of LLMs within the educational context?

We also welcome systematic literature review papers that provide a comprehensive overview of the current state of research on LLMs in education and offer insights into future research directions.

Additional Details of the Topics

For this special issue, we consider all AI methods on-topic, provided the work involves the use of LLMs to support education and discusses their educational implications. Examples include (but are not limited to):

  • Personalized feedback and assessment using LLMs
  • Generation, summarization, or adaptation of educational content and learning materials using LLMs
  • Interactive learning experiences facilitated by LLMs (e.g., educational chatbots, conversational agents)
  • Ethical considerations and potential biases in the use of LLMs in education
  • Privacy and security concerns associated with the use of LLMs in educational settings
  • The impact of LLMs on student engagement, motivation, and learning outcomes
  • The role of LLMs in remote and online learning environments
  • Applications of LLMs for domain modeling and knowledge extraction
  • Recommender systems using LLMs for educational resources and personalized learning pathways
  • Monitoring and supporting student well-being using LLMs
  • Addressing language and cultural diversity in education using LLMs

We encourage submissions that provide empirical evidence of the impact of LLMs on education and that engage with the challenges and opportunities associated with their use. Prospective authors are encouraged to contact the guest editors if they have any questions about the suitability of their work for this special issue.

Timing & Process:

  • Authors submit extended abstracts (1000 words) to EasyChair: Jul. 15, 2023
  • Editorial decision on full paper invitation: Aug. 1, 2023
  • Authors of accepted extended abstracts submit full paper: Oct. 1, 2023
  • 1st review cycle and editorial decision (revision/rejection): Dec. 1, 2023
  • Authors submit revised manuscript: Jan. 15, 2024
  • Final editorial decision (acceptance/rejection): Mar. 1, 2024
  • Publication of the special issue: 2024

Submission Guidelines

Please submit abstracts through EasyChair: https://easychair.org/conferences/?conf=ijaiedllm23

Authors who (based on their abstract) are invited to submit a paper to the special issue should submit the full paper via the IJAIED Editorial Manager: https://www.editorialmanager.com/aied

Choose “SI: Use of LLMs in Education” from the Article Type dropdown.

Submitted papers should present original, unpublished work relevant to one of the topics of the Special Issue. All submitted papers will be evaluated by at least three reviewers on the basis of relevance, significance of contribution, technical quality, scholarship, and quality of presentation. It is the policy of the journal that no submission, or substantially overlapping submission, be published or be under review at another journal or conference at any time during the review process. Manuscripts will be subject to peer review and must conform to the author guidelines available on the IJAIED website at: https://www.springer.com/journal/40593

Author Resources

Authors are encouraged to submit high-quality, original work that has neither appeared in, nor is under consideration by, other journals. Springer provides a host of information about publishing in a Springer journal on its Journal Author Resources page, including FAQs and tutorials, along with Help and Support.


About the Guest Editors:

Wanli Xing is an assistant professor of educational technology in the College of Education at the University of Florida. His research themes are: (1) exploring and leveraging educational big data in various forms and modalities to advance the understanding of learning processes; (2) designing and developing fair, accountable, and transparent learning analytics and AI-powered learning environments; and (3) creating innovative strategies, frameworks, and technologies for AI, data science, and STEM education.

Andrew Lan is an assistant professor in the Manning College of Information and Computer Sciences, University of Massachusetts Amherst. His research focuses on the development, not just application, of artificial intelligence (AI) methods to enable scalable and effective personalized learning in education. With a blend of expertise in core AI/machine learning/natural language processing methods and extensive experience in educational data, his research spans areas such as learner modeling, personalization via content sequencing, generation, and feedback, and human-in-the-loop AI. 

Scott Crossley is a professor in the Peabody College of Education at Vanderbilt University. His primary research focus is natural language processing and the application of computational tools and machine learning algorithms to language learning, writing, and text comprehensibility. His main interest is the development and use of natural language processing tools for assessing writing quality and text difficulty. He is also interested in the development of second language learner lexicons and the potential to examine lexical growth and lexical proficiency using computational algorithms.

Zhou Yu is an associate professor in the Computer Science Department at Columbia University. Zhou works on natural language processing, machine learning, and human communication. Zhou designs algorithms for real-time intelligent interactive systems that coordinate with user actions beyond spoken language, including non-verbal behaviors, to achieve effective and natural communication. In particular, Zhou optimizes human-machine communication via studies of multimodal sensing and analysis, speech and natural language processing, machine learning, and human-computer interaction. The central focus of Zhou’s research is to bring these areas together to design, implement, and deploy end-to-end, real-time, interactive intelligent systems that plan globally, considering interaction history and current user actions, to achieve better user experience and task performance.

Paul Denny is an associate professor in the School of Computer Science at the University of Auckland, New Zealand. His research interests include developing and evaluating tools for supporting collaborative learning, particularly involving student-generated resources, and exploring the ways that students engage with digital learning environments.  He is also interested in the application of AI to education, specifically how feedback from large language models can be integrated into environments for computing education to support novice programmers.

Nia Nixon is an assistant professor of education at UC-Irvine, Vice President of the Society for Learning Analytics Research (SoLAR), and Director of the Language and Learning Analytics Laboratory (LaLA-Lab). Dr. Nixon and her team conduct research on socio-cognitive and affective processes across a range of educational technology interaction contexts and develop computational models of these processes and their relationship to learner outcomes. Their research uses a range of artificial intelligence (AI) techniques, such as computational linguistics and machine learning. Current projects focus on (i) understanding differences in students’ socio-cognitive engagement patterns across gender and racial lines, (ii) identifying interpersonal dynamics that characterize varying levels of creativity/innovation and sense of belonging during collaborative interactions, and (iii) developing AI-based interventions to promote inclusivity in digitally mediated team problem-solving environments.

John Stamper is an Associate Professor at the Human-Computer Interaction Institute at Carnegie Mellon University. He is also the Technical Director of the Pittsburgh Science of Learning Center DataShop. His primary areas of research include Educational Data Mining and Intelligent Tutoring Systems. As Technical Director, John oversees the DataShop, which is the largest open data repository of transactional educational data and a set of associated visualization and analysis tools for researchers in the learning sciences. 

References

Abid, A., Farooqi, M., & Zou, J. (2021). Persistent anti-Muslim bias in large language models. In Proceedings of the 2021 AAAI/ACM Conference on AI, Ethics, and Society (pp. 298-306).

Ariely, M., Nazaretsky, T., & Alexandron, G. (2022). Machine learning and Hebrew NLP for automated assessment of open-ended questions in biology. International Journal of Artificial Intelligence in Education, 33, 1-34.

Baker, R. S. (2023). Learning Analytics: An Opportunity for Education. XRDS: Crossroads, The ACM Magazine for Students, 29(3), 18-21.

Brown, T., Mann, B., Ryder, N., Subbiah, M., Kaplan, J. D., Dhariwal, P., Neelakantan, A., Shyam, P., Sastry, G., Askell, A., Agarwal, S., Herbert-Voss, A., Krueger, G., Henighan, T., Child, R., Ramesh, A., Ziegler, D., Wu, J., Winter, C., Hesse, C., Chen, M., Sigler, E., Litwin, M., Gray, S., Chess, B., Clark, J., Berner, C., McCandlish, S., Radford, A., Sutskever, I. & Amodei, D. (2020). Language models are few-shot learners. Advances in Neural Information Processing Systems, 33, 1877-1901.

Cooper, G. (2023). Examining Science Education in ChatGPT: An Exploratory Study of Generative Artificial Intelligence. Journal of Science Education and Technology, 1-9.

Hahn, M. G., Navarro, S. M. B., Valentín, L. D. L. F., & Burgos, D. (2021). A systematic review of the effects of automatic scoring and automatic feedback in educational settings. IEEE Access, 9, 108190-108198.

Kasneci, E., Sessler, K., Küchemann, S., Bannert, M., Dementieva, D., Fischer, F., Gasser, U., Groh, G., Günnemann, S., Hüllermeier, E., Krusche, S., Kutyniok, G., Michaeli, T., Nerdel, C., Pfeffer, J., Poquet, O., Sailer, M., Schmidt, A., Seidel, T., Stadler, M., Weller, J., Kuhn, J. & Kasneci, G. (2023). ChatGPT for good? On opportunities and challenges of large language models for education. Learning and Individual Differences, 103, 102274.

Li, C., & Xing, W. (2021). Natural language generation using deep learning to support MOOC learners. International Journal of Artificial Intelligence in Education, 31, 186-214.

Li, C., Xing, W., & Leite, W. (2022). Building socially responsible conversational agents using big data to support online learning: A case with Algebra Nation. British Journal of Educational Technology, 53(4), 776-803.

Liu, N., Wang, Z., Baraniuk, R., & Lan, A. (2022, December). GPT-based Open-ended Knowledge Tracing. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing (pp. 3849-3862).

Luo, R., Sun, L., Xia, Y., Qin, T., Zhang, S., Poon, H., & Liu, T. Y. (2022). BioGPT: generative pre-trained transformer for biomedical text generation and mining. Briefings in Bioinformatics, 23(6), Article bbac409.

OpenAI. (2023). GPT-4 Technical Report. ArXiv. https://doi.org/10.48550/arXiv.2303.08774

Sarsa, S., Denny, P., Hellas, A., & Leinonen, J. (2022). Automatic generation of programming exercises and code explanations using large language models. In Proceedings of the 2022 ACM Conference on International Computing Education Research (pp. 27-43).

Scarlatos, A., & Lan, A. (2023). Tree-Based Representation and Generation of Natural and Mathematical Language. arXiv preprint arXiv:2302.07974.

Shen, J. T., Yamashita, M., Prihar, E., Heffernan, N., Wu, X., Graff, B., & Lee, D. (2021). MathBERT: A Pre-trained Language Model for General NLP Tasks in Mathematics Education. In NeurIPS 2021 Math AI for Education Workshop. https://par.nsf.gov/servlets/purl/10386545

Touvron, H., Lavril, T., Izacard, G., Martinet, X., Lachaux, M. A., Lacroix, T., ... & Lample, G. (2023). LLaMA: Open and efficient foundation language models. ArXiv. https://doi.org/10.48550/arXiv.2302.13971

Wang, Z., Lan, A., & Baraniuk, R. (2021, November). Math Word Problem Generation with Mathematical Consistency and Problem Context Constraints. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing (pp. 5986-5999).
