Free Access Articles, August 2022

Articles Currently Free to Access!

The following five articles are a special collection from the Journal of Business and Psychology, free to access until September 30, 2022.

Normalizing the Use of Single-Item Measures: Validation of the Single-Item Compendium for Organizational Psychology
Russell A. Matthews, Laura Pineault, Yeong-Hyun Hong

The application of single-item measures has the potential to help applied researchers address conceptual, methodological, and empirical challenges. Using a large-scale, evidence-based approach, we empirically examined the degree to which various constructs in the organizational sciences can be reliably and validly assessed with a single item. In Study 1, across 91 selected constructs, 71.4% of the single-item measures demonstrated strong if not very strong definitional correspondence (as a measure of content validity). In Study 2, based on a heterogeneous sample of working adults, we show that the majority of the single-item measures examined raised little to no comprehension or usability concerns. Study 3 provides evidence for the reliability of the proposed single-item measures based on test–retest reliabilities across three temporal conditions (1 day, 2 weeks, 1 month). In Study 4, we examined issues of construct and criterion validity using a multi-trait, multi-method approach. Collectively, 75 of the 91 focal measures demonstrated very good or extensive validity, evidencing moderate to high content validity, no usability concerns, moderate to high test–retest reliability, and extensive criterion validity. Finally, in Study 5, we empirically examined the argument that only conceptually narrow constructs can be reliably and validly assessed with single-item measures. Results suggest that there is no relationship between subject matter expert evaluations of construct breadth and the reliability and validity evidence collected across the first four studies. Beyond providing an off-the-shelf compendium of validated single-item measures, we abstract our validation steps into a roadmap that others can replicate and build upon. Limitations and future directions are discussed.

Testing Moderation in Business and Psychological Studies with Latent Moderated Structural Equations

Gordon W. Cheung, Helena D. Cooper-Thomas, Rebecca S. Lau, Linda C. Wang

Most organizational researchers understand the detrimental effects of measurement errors in testing relationships among latent variables and hence adopt structural equation modeling (SEM) to control for measurement errors. Nonetheless, many of them revert to regression-based approaches, such as moderated multiple regression (MMR), when testing for moderating and other nonlinear effects. The predominance of MMR is likely due to the limited evidence showing the superiority of latent interaction approaches over regression-based approaches, combined with the complicated procedures previously required for testing latent interactions. In this teaching note, we first briefly explain the latent moderated structural equations (LMS) approach, which estimates latent interaction effects while controlling for measurement errors. Then we explain the reliability-corrected single-indicator LMS (RCSLMS) approach to testing latent interactions with summated scales and correcting for measurement errors, yielding results which approximate those from LMS. Next, we report simulation results illustrating that LMS and RCSLMS outperform MMR in terms of accuracy of point estimates and confidence intervals for interaction effects under various conditions. Then, we show how LMS and RCSLMS can be implemented with Mplus, providing an example-based tutorial to demonstrate a 4-step procedure for testing a range of latent interactions, as well as the decisions at each step. Finally, we conclude with answers to some frequently asked questions when testing latent interactions. As supplementary files to support researchers, we provide a narrated PowerPoint presentation, all Mplus syntax and output files, data sets for numerical examples, and Excel files for conducting the loglikelihood values difference test and plotting the latent interaction effects.
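The loglikelihood values difference test mentioned among the supplementary files is a standard likelihood-ratio comparison of nested models; the sketch below shows the case of one added interaction parameter (one degree of freedom), with hypothetical loglikelihood values that are not taken from the paper:

```python
import math

def loglik_difference_test(ll_constrained, ll_unconstrained):
    """Likelihood-ratio test for one added parameter (df = 1),
    e.g., a model with vs. without a single latent interaction term."""
    stat = -2.0 * (ll_constrained - ll_unconstrained)
    # With df = 1, the chi-square survival function reduces to erfc(sqrt(x/2))
    p = math.erfc(math.sqrt(stat / 2.0))
    return stat, p

# Hypothetical loglikelihoods for models without / with the interaction
stat, p = loglik_difference_test(-4521.3, -4515.8)
```

Note that this simple form applies to ordinary maximum likelihood; under robust (MLR) estimation, a scaling correction must be applied to the loglikelihood difference before comparing it to the chi-square distribution.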

The Job Engagement Scale: Development and Validation of a Short Form in English and French

Simon A. Houle, Bruce Louis Rich, Caitlin A. Comeau, Ann-Renée Blais, Alexandre J. S. Morin

The original 18-item Job Engagement Scale (JES18) operationalizes Kahn's (1990) multidimensional hierarchical conceptualization of the investment and expression of an individual's preferred self in role performance. Encompassing three dimensions (i.e., physical, cognitive, and emotional), job engagement is a known predictor of organizational performance and personal outcomes. Using a sample (N = 7185) of military and civilian personnel nested within 60 work units in the Canadian Armed Forces (CAF) and Canadian Department of National Defence (DND), we developed and cross-validated a 9-item short form (the JES9) of the original JES18 in English and French. Results demonstrated that both linguistic versions of the JES9 and JES18 yielded comparable psychometric properties. The scales also displayed measurement invariance as a function of participants' sex (male/female), employee type (civilian/regular force/primary reserve), and role (supervisor/employee). Finally, the associations between scores on the JES9 and the JES18 and a series of covariates (i.e., employees' psychological needs for competence, autonomy, and relatedness, burnout, and turnover intentions) were assessed. Collectively, results highlight the strong psychometric soundness of the English and French versions of the JES9 and the JES18 for organizational practitioners and academics.

Mastering the Use of Control Variables: The Hierarchical Iterative Control (HIC) Approach

Paul E. Spector

There has been growing criticism of the established practice of automatically including control variables into analyses, especially with survey studies. Several authors have explained the pitfalls of improper use and have provided some best practice advice. I build upon this foundation in suggesting a programmatic approach to the use of control variables that can provide evidence to support or refute plausible explanations for why two or more variables are related. The hierarchical iterative control (HIC) approach begins by establishing a connection between two or more variables and then hierarchically adds control variables to rule in or out their possible influence. The HIC approach involves conducting a series of studies to iteratively test relationships among target variables, utilizing a variety of control variable strategies involving multiple methods. A 7-step programmatic approach is described, beginning with development of the research question and background literature review and then conducting empirical tests in a hierarchical (within a study) and iterative (across studies) manner.

The Relative Importance and Interaction of Contextual and Methodological Predictors of Mean rWG for Work Climate

Michael J. Burke, Kristin Smith-Crowe, Maura I. Burke, Ayala Cohen, Etti Doveh, Shuhua Sun

A variety of collective phenomena are understood to exist to the extent that workers agree on their perceptions of the phenomena, such as perceptions of their organization’s climate or perceptions of their team’s mental model. Researchers conducting group-level studies of such phenomena measure individuals’ perceptions via surveys and then aggregate data to the group level if the mean within-group agreement for a sample of groups is sufficiently high. Despite this widespread practice, we know little about the factors potentially affecting mean within-group agreement. Here, focusing on work climate, we report an investigation of a number of expected contextual (social interaction) and methodological predictors of mean rWG, a common statistic for judging within-group agreement in applied psychology and management research. We used the novel approach of meta-CART, which allowed us to assess the relative importance and possible interactions of the predictor variables. Notably, mean rWG values are driven by both contextual (average number of individuals per group and cultural individualism-collectivism) and methodological factors (the number of items in a scale and scale reliability). Our findings are largely consistent with expectations concerning how social interaction affects within-group agreement and psychometric arguments regarding why adding more items to a scale will not necessarily increase the magnitude of an index based on a Spearman-Brown “stepped-up correction.” We discuss the key insights from our results, which are relevant to the study of multilevel phenomena relying on the aggregation of individual-level data and informative for how meta-analytic researchers can simultaneously examine multiple moderator variables.
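For reference, the agreement index discussed above follows James, Demaree, and Wolf (1984), with the number of items J entering through the Spearman-Brown "stepped-up" form the authors mention. The sketch below assumes a uniform null distribution and uses made-up ratings, not data from the article:

```python
import statistics as st

def rwg_j(item_ratings, n_anchors):
    """r_WG(J) for one group under a uniform null distribution
    (James, Demaree & Wolf, 1984).

    item_ratings -- J lists, one per item, each holding the group
                    members' ratings of that item
    n_anchors    -- number of scale points A (e.g., 5 for a 1-5 scale)
    """
    j = len(item_ratings)
    sigma_eu = (n_anchors ** 2 - 1) / 12      # uniform-null variance
    mean_var = st.mean(st.variance(r) for r in item_ratings)
    ratio = mean_var / sigma_eu
    # Spearman-Brown form: J enters numerator and denominator, so adding
    # items does not automatically push the index toward 1
    return (j * (1 - ratio)) / (j * (1 - ratio) + ratio)

# Hypothetical group: four raters, two items, 5-point scale
agreement = rwg_j([[4, 4, 5, 4], [3, 4, 4, 4]], n_anchors=5)
```

Mean rWG, the focal quantity in the article, averages this index across the groups in a sample.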
