Journal of Business and Psychology - Free Access Articles, March 2024

Articles Currently Free to Access

The following five articles are a special collection from the Journal of Business and Psychology, freely accessible until April 30, 2024.


Data Aggregation in Multilevel Research: Best Practice Recommendations and Tools for Moving Forward

James M. LeBreton, Amanda N. Moeller & Jenell L. S. Wittmer

The multilevel paradigm is omnipresent in the organizational sciences, with scholars recognizing that data are almost always nested, either hierarchically (e.g., individuals within teams) or temporally (e.g., repeated observations within individuals). The multilevel paradigm rests on the assumption that relationships between constructs often span different levels, frequently requiring data from a lower level (e.g., employee-level justice perceptions) to be aggregated to a higher level (e.g., team-level justice climate). Given the increased scrutiny in the social sciences around issues of clarity, transparency, and reproducibility, this paper first introduces a set of data aggregation principles that are then used to guide a brief literature review. We found that reporting practices related to data aggregation are highly variable, with little standardization in the information and statistics authors include. We conclude our paper with a Data Aggregation Checklist and a new R package, WGA (Within-Group Agreement & Aggregation), intended to improve the clarity and transparency of future multilevel studies.
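
To give a concrete feel for the within-group agreement statistics this kind of checklist covers, the sketch below shows group-mean aggregation together with James, Demaree, and Wolf's rWG(J) agreement index under a uniform null distribution. The authors' WGA package is written in R; this Python version, with hypothetical team and item names, is only a minimal illustration of the underlying calculation, not the package's implementation.

```python
# Minimal sketch (not the authors' WGA package): group-mean aggregation and the
# rWG(J) within-group agreement index under a uniform null distribution.
# Team labels, item names, and ratings below are hypothetical.
import numpy as np
import pandas as pd

def rwg_j(item_scores: np.ndarray, n_options: int) -> float:
    """rWG(J) for one group: item_scores is (members x items); the uniform-null
    expected error variance for an A-point scale is (A^2 - 1) / 12."""
    sigma_eu = (n_options ** 2 - 1) / 12.0
    j = item_scores.shape[1]
    mean_var = item_scores.var(axis=0, ddof=1).mean()   # mean observed item variance
    ratio = mean_var / sigma_eu
    return (j * (1 - ratio)) / (j * (1 - ratio) + ratio)

# Hypothetical employee-level justice ratings (5-point scale), nested in teams.
df = pd.DataFrame({
    "team": ["A", "A", "A", "B", "B", "B"],
    "just1": [4, 5, 4, 2, 3, 2],
    "just2": [5, 4, 4, 3, 2, 2],
    "just3": [4, 4, 5, 2, 2, 3],
})
items = ["just1", "just2", "just3"]

# Check agreement within each team before aggregating to the team level.
for team, grp in df.groupby("team"):
    print(team, "rWG(J) =", round(rwg_j(grp[items].to_numpy(float), n_options=5), 3))

team_level = df.groupby("team")[items].mean().mean(axis=1)  # team-level justice climate
print(team_level)
```

A common, though debated, rule of thumb treats rWG(J) values of roughly .70 or higher as sufficient agreement to justify aggregating individual responses to the group level.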


Gone Fishin’: Addressing Completeness, Accuracy, and Representativeness in the Search and Coding Processes of Meta-Analyses in the Organizational Sciences

Ernest H. O’Boyle, Martin Götz & Damian C. Zivic

No research question is compelling enough, nor is any meta-analytic procedure advanced enough, to overcome an ineffectual search or inaccurate coding process. The bulk of attention to meta-analyses conducted within the organizational sciences has been directed at establishing the types of research questions meta-analyses are best equipped to address and how best to go about analyzing secondary data. However, the meta-analytic process requires rigor and transparency at every step. Too often, the search and coding are non-systematic, resulting in a deficient and/or contaminated dataset and, ultimately, an inaccurate reflection of the extant literature. Using the analogy of a fishing trip, in which fish are the available studies and the oceans, lakes, and rivers are the sources of data, we highlight best practices and offer actionable takeaways for conducting and reporting a thorough and representative search and an accurate and inclusive coding process for meta-analyses in the organizational sciences.


Assessing Publication Bias: A 7-Step User’s Guide with Best-Practice Recommendations

Sven Kepes, Wenhao Wang & Jose M. Cortina

Meta-analytic reviews are a primary avenue for the generation of cumulative knowledge in the organizational and psychological sciences. Over the past decade or two, concern has been raised about the possibility of publication bias influencing meta-analytic results, which can distort our cumulative knowledge and lead to erroneous practical recommendations. Unfortunately, no clear guidelines exist for how meta-analysts ought to assess this bias. To address this issue, this paper develops a user’s guide with best-practice recommendations for the assessment of publication bias in meta-analytic reviews. To do so, we review the literature on publication bias and develop a step-by-step process for assessing the presence of publication bias and gauging its effects on meta-analytic results. Examples of tools and best practices are provided to aid meta-analysts in implementing the process in their own research. Although the paper is written primarily for organizational and psychological scientists, the guide and recommendations are not limited to any particular scientific domain.
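
The abstract does not spell out the seven steps, but the flavor of the diagnostics such a guide covers can be conveyed with an Egger-type regression test of funnel-plot asymmetry, one widely used publication-bias check. The sketch below uses hypothetical effect sizes and standard errors and is only an illustration; it is not necessarily the procedure the authors recommend.

```python
# Minimal sketch of an Egger-type regression test for funnel-plot asymmetry,
# a common publication-bias check (illustrative only; not necessarily the
# authors' recommended procedure). Effect sizes and SEs are hypothetical.
import numpy as np
from scipy import stats

effects = np.array([0.42, 0.31, 0.55, 0.12, 0.48, 0.25, 0.60, 0.05])  # e.g., Hedges' g
se = np.array([0.10, 0.12, 0.20, 0.08, 0.25, 0.11, 0.30, 0.07])

# Regress the standardized effect (effect / SE) on precision (1 / SE);
# an intercept far from zero suggests small-study (possible publication) bias.
y = effects / se
X = np.column_stack([np.ones_like(se), 1.0 / se])
beta, resid_ss, *_ = np.linalg.lstsq(X, y, rcond=None)

n, k = X.shape
sigma2 = resid_ss[0] / (n - k)                    # residual variance
cov = sigma2 * np.linalg.inv(X.T @ X)             # OLS coefficient covariance
t_intercept = beta[0] / np.sqrt(cov[0, 0])
p_value = 2 * stats.t.sf(abs(t_intercept), df=n - k)
print(f"Egger intercept = {beta[0]:.3f}, t = {t_intercept:.2f}, p = {p_value:.3f}")
```

In practice, such checks are typically run in established meta-analysis software and triangulated across several methods rather than relied on in isolation.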


Normalizing the Use of Single-Item Measures: Validation of the Single-Item Compendium for Organizational Psychology

Russell A. Matthews, Laura Pineault & Yeong-Hyun Hong

The application of single-item measures has the potential to help applied researchers address conceptual, methodological, and empirical challenges. Using a large-scale, evidence-based approach, we empirically examined the degree to which various constructs in the organizational sciences can be reliably and validly assessed with a single item. In Study 1, across 91 selected constructs, 71.4% of the single-item measures demonstrated strong if not very strong definitional correspondence (as a measure of content validity). In Study 2, based on a heterogeneous sample of working adults, we show that the majority of single-item measures examined posed little to no comprehension or usability concerns. Study 3 provides evidence for the reliability of the proposed single-item measures based on test–retest reliabilities across three temporal conditions (1 day, 2 weeks, 1 month). In Study 4, we examined issues of construct and criterion validity using a multi-trait, multi-method approach. Collectively, 75 of the 91 focal measures demonstrated very good or extensive validity, evidencing moderate to high content validity, no usability concerns, moderate to high test–retest reliability, and extensive criterion validity. Finally, in Study 5, we empirically examined the argument that only conceptually narrow constructs can be reliably and validly assessed with single-item measures. Results suggest that there is no relationship between subject matter expert evaluations of construct breadth and the reliability and validity evidence collected across the first four studies. Beyond providing an off-the-shelf compendium of validated single-item measures, we abstract our validation steps, providing a roadmap to replicate and build upon. Limitations and future directions are discussed.
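
To make the test–retest logic of Study 3 concrete, the sketch below simulates two administrations of a hypothetical single-item measure and correlates them; the authors' actual design (three retest intervals, samples of working adults) is of course richer than this toy example.

```python
# Minimal sketch of a test-retest reliability check for a single-item measure
# (simulated data; the authors' actual procedure may differ in detail).
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
true_score = rng.normal(size=200)                      # latent standing on the construct
time1 = true_score + rng.normal(scale=0.5, size=200)   # single-item rating at time 1
time2 = true_score + rng.normal(scale=0.5, size=200)   # same item at the retest occasion

r, p = stats.pearsonr(time1, time2)
print(f"test-retest r = {r:.2f} (p = {p:.3g})")
```

In this simulation, with comparable error at both occasions, the correlation approximates the proportion of stable, systematic variance captured by the single item.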


Optimizing Measurement Reliability in Within-Person Research: Guidelines for Research Design and R Shiny Web Application Tools

Liu-Qin Yang, Wei Wang, Po-Hsien Huang & Anthony Nguyen

Within-person research has become increasingly popular in organizational studies in recent years because of its unique theoretical and methodological advantages for studying dynamic intrapersonal processes (e.g., Dalal et al., Journal of Management 40:1396–1436, 2014; McCormick et al., Journal of Management 46:321–350, 2020). Despite these advances, many organizational researchers still face serious challenges in fully appreciating and appropriately implementing within-person research: specifically, in correctly conceptualizing and computing within-person measurement reliability, and in navigating key within-person research design factors (e.g., number of measurement occasions, T; number of participants, N; and scale length, I) to optimize within-person reliability. Drawing on a comprehensive Monte Carlo simulation with 3,240 data conditions, we offer a practical guideline table showing the expected within-person reliability as a function of key design factors. In addition, we provide three easy-to-use, free R Shiny web applications that allow within-person researchers to conveniently (a) compute expected within-person reliability from their customized research design, (b) compute observed validity from the expected reliability and the hypothesized within-person validity, and (c) compute observed within-person (as well as between-person) reliability from collected within-person research datasets. We hope these much-needed evidence-based guidelines and practical tools will help enhance within-person research in organizational studies.
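
For a rough sense of how design factors such as scale length feed into within-person reliability, the toy Monte Carlo sketch below simulates N persons responding to I items across T occasions and approximates within-person reliability as Cronbach's alpha computed on person-mean-centered item scores. This is an illustrative simplification with made-up parameters, not the authors' 3,240-condition simulation or their R Shiny applications, which should be consulted for actual design planning.

```python
# Toy Monte Carlo sketch (illustrative simplification, not the authors' simulation
# or their R Shiny apps): simulate N persons rated on I items at T occasions, then
# approximate within-person reliability as Cronbach's alpha on person-mean-centered
# item scores. All parameter values below are made up.
import numpy as np

def simulate_within_alpha(N=100, T=10, I=5, loading=0.7, seed=0):
    rng = np.random.default_rng(seed)
    person = rng.normal(size=(N, 1, 1))         # stable between-person differences
    state = rng.normal(size=(N, T, 1))          # occasion-specific (within-person) state
    noise = rng.normal(size=(N, T, I))          # item-level measurement error
    scores = person + loading * state + noise   # N x T x I item responses

    # Person-mean-center each item to isolate within-person variation.
    within = scores - scores.mean(axis=1, keepdims=True)
    flat = within.reshape(N * T, I)             # stack occasions across persons

    # Cronbach's alpha on the within-person (centered) item scores.
    item_var = flat.var(axis=0, ddof=1).sum()
    total_var = flat.sum(axis=1).var(ddof=1)
    return (I / (I - 1)) * (1 - item_var / total_var)

# Expected within-person reliability improves as more items (I) are used per occasion.
for I in (3, 5, 8):
    print(f"I = {I}: within-person alpha ~ {simulate_within_alpha(I=I):.2f}")
```

Running the sketch shows within-person reliability rising as scale length increases, the same kind of design trade-off the paper's guideline table and web applications are intended to quantify far more thoroughly.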

