Call for Extended Abstracts

What went wrong? Learning from less successful professional development for mathematics teachers

Description and Rationale

Research into mathematics teacher professional development (PD) programs has taught us much about important principles for these programs, including that they should: be long-term; focus on teachers’ practices and knowledge; engage teachers collaboratively; support and build teachers’ reflective capabilities; work with appropriate tools and resources; and be consistent with theories of teacher learning and development. At the same time, studies of mathematics teachers’ PD have shown considerable variation in design, conceptualization and outcome (Goldsmith, Doerr, & Lewis, 2014). As a field, we also strive to support the scalability and sustainability of successful small-scale programs (Tirosh, Tsamir, & Levenson, 2015). Very often, papers report successful case studies with a small number of teachers, and we are left wondering whether there are less successful cases, and what we might learn from them. Tirosh et al. suggest that “In the future, researchers might consider following up on less successful results in order to gain understanding on why a program failed to have a lasting impact” (2015, p. 156).

We also know that the success of PD programs depends on context, and what works in one context may not work in another. It is often difficult to take into account all of the factors that may influence the effectiveness of programs. Because the field is complex and difficult to navigate, building systematic knowledge across this variety of contexts is also complicated. Knowing about less successful implementations in a range of contexts is an important way to build knowledge in the field.

In informal conversations among researchers, it is not uncommon to hear about what did not work well in their projects, at least not as expected or hoped for. These encounters are often interesting and thought-provoking, but are almost never published. One possible reason for this may be the well-known phenomenon of publication bias, i.e., that researchers and editors are more likely to publish work that shows positive results, and much research that does not report on successful outcomes is shelved (Antonakis, 2017). However, if we are to understand what does work in PD programs, we need to also understand what does not work, so that we can interpret research findings in relation to the full range of research in the area.

This new Special Issue of the Journal of Mathematics Teacher Education, edited by Guest Editors Karin Brodie and Ronnie Karsenty, provides an opportunity to publish research in which "something went wrong", to analyze less successful results, and to point to what may be learned for future endeavors.

We invite Extended Abstracts of up to 1,000 words (excluding references) to be sent by email to the two guest editors by November 30, 2020.

Authors of selected abstracts will be invited to submit a full paper in English of up to 8,000 words (excluding references) by June 1, 2021. Papers will be reviewed by three reviewers (one of whom will be an author of a submitted paper for the Special Issue). Publication of the Special Issue is planned for June to September, 2022.


We invite submissions of the following types:

  • Reports on research showing that the PD project was not as successful as anticipated (in terms of improvements in teachers' knowledge and/or practices; teachers' motivation; teacher dropout rates, etc.). The paper should include an elaborated account of what may have been the reasons for such results, and what can be learned for the benefit of future projects.
  • Reports on studies in which researchers felt, in retrospect, that the methodology designed did not capture the PD gains well enough. The paper should detail the methods used and reflect on discrepancies between formally documented results, which did not point to significant change, and informal results suggesting that gains were actually achieved. We encourage researchers to explicitly unpack possible reasons for such discrepancies and what can be learned in terms of the use of methods.
  • A synthesis of results from several related studies that sheds light on why certain PDs are successful while others, although carefully planned, are less successful. Such synthesis is expected to be comparative in nature, and may relate to a range of components involved in the work of conducting PD and researching its impact.
  • Reports on replication studies of PD models that did not yield (positive) results similar to those of the originally reported study. The paper should attempt to identify reasons for this phenomenon and derive conclusions on what may be learned from this experience.

An important note: It is not the intention of this Special Issue to focus on what is commonly referred to as "limitations of the study". Although it is conventional in our field to state what may limit the generalizability of reported research (e.g., a small number of participants; the issue of representativeness; elements that may cause certain biases, etc.), the intended emphasis here is specifically on reporting results that are considered less successful in the first place, or programs that "failed to have a lasting impact", to use again the words of Tirosh et al. (2015, p. 156).

Guest Editors

Karin Brodie (University of the Witwatersrand, South Africa)

Ronnie Karsenty (Weizmann Institute of Science, Israel)


Important Dates

Extended abstract submissions: November 30, 2020

Invitations for full papers: February 1, 2021

Full paper submissions: June 1, 2021

Publication: June to September, 2022

Length and Language

Extended Abstract length: up to 1,000 words (excluding references)

Full paper length: up to 8,000 words (excluding references)

Language: English

How to submit your Extended Abstract

By email to the two Guest Editors (Karin Brodie and Ronnie Karsenty)

Please include the title and the names and affiliations of all authors


References

Antonakis, J. (2017). On doing better science: From thrill of discovery to policy implications. The Leadership Quarterly, 28, 5-21.

Goldsmith, L. T., Doerr, H. M., & Lewis, C. C. (2014). Mathematics teachers’ learning: A conceptual framework and synthesis of research. Journal of Mathematics Teacher Education, 17, 5-36.

Tirosh, D., Tsamir, P., & Levenson, E. (2015). Fundamental issues concerning the sustainment and scaling up of professional development programmes. ZDM Mathematics Education, 47, 153-159.