International Journal of Primatology - Table for IJOP Reviewer Guidelines
Checklist for transparency in empirical studies (modified from Tools for Transparency in Ecology and Evolution (TTEE) 1.0, downloaded from https://osf.io/y8aqx/ 31 Aug 2016).
Category | Description |
--- | --- |
Introduction | |
Study purpose | State the original purpose for which the study was conducted and data were gathered |
Methods | |
Meta-analysis | If the study is a meta-analysis, comply with the required components of the meta-analysis checklist (see the TTEE checklist at https://osf.io/y8aqx/) |
Context | If the article reports results from a portion of a larger study, include a statement about the broader scope of the larger study and, if appropriate, indicate other publications from this study |
Blinding | If possible, data recorders should be blind to the experimental treatment imposed on the subjects when gathering data. Report whether or not blinding was implemented. |
Location | For field studies, include specific location(s) (e.g., latitude and longitude, elevation) |
Timing of study | Report study start date, end date, duration, and justification for duration and end date |
Timing of sampling | Report timing (date, time of day if appropriate, etc.) and frequency of sampling, including storage duration for samples |
Study conditions | Describe environmental or other conditions that authors believe may be relevant to the study question and taxa (e.g., temperature, light:dark cycle, etc.) |
Subjects and treatments | Report methods used to choose subjects and to allocate subjects to treatments (e.g., randomized assignment), including organism taxon/taxa, source, and background (e.g., inbred lines, commercial seed, wild caught from X number of males and females and laboratory bred for Y generations, etc.), with institutional approvals as required and appropriate |
Design | Describe design of experiment or study, including complete treatment factors and interactions, design structure (e.g., factorial, blocked, nested, hierarchical), nature of experimental units and replicates |
Magnitude of treatment | Report both treatment and control values (with units and variation) for independent (explanatory/predictor) variables |
Sample size determination | Report how sample size was decided or determined. If sample size was not set prior to initiation of study, explain stopping rule for sampling |
Sample sizes | Report sample sizes for all data, including subsets of data (e.g., each treatment group, other subsets), and the sample size used for each statistical analysis. Ideally, also report these in the Results section |
Analysis methods | Provide the precise details of data analysis (including information on computer software programs and packages, and annotated full code or the set of commands used) as supplementary materials with submission, and archive them on a permanently supported platform on publication |
Data | Post data on which analyses are based as supplementary materials with submission and archive them in a permanently supported, publicly accessible database on publication |
Materials | Provide comprehensive materials as supplementary documentation with submission, and archive them on a permanently supported platform on publication. These are materials that are excluded from the Methods section but that might be important for interpreting results or for later attempts to replicate the study |
Voucher specimens | If relevant, possible, and allowable, deposit voucher specimens of the studied taxon/taxa in an appropriate curated collection |
Replication | If study is a replication, identify it as such and identify differences in methods between this study and the original |
Funding and conflicts of interest | Disclose all funding sources and potential conflicts of interest |
Ethics and permit | Provide relevant details of ethical and other required permits if applicable (e.g., name of permit, permit number, etc.) |
Results | |
Complete statistical reporting | List each statistical test and analysis conducted in sufficient detail that they can be replicated and fully understood by those experienced in those methods. Fully report outcomes from each statistical analysis. For most analyses, this includes (but is not limited to) basic parameter estimates of central tendency (e.g., means) or other basic estimates (e.g., regression coefficients, correlations) and of variability (e.g., standard deviation) or associated estimates of uncertainty (e.g., confidence/credible intervals). Thorough and transparent reporting involves additional information that differs depending on the type of analysis conducted. For null hypothesis tests, reporting should at a minimum include the test statistic, degrees of freedom, and p-value. For Bayesian analyses, it should at a minimum include information on the choice of priors and MCMC (Markov chain Monte Carlo) settings (e.g., burn-in, number of iterations, and thinning intervals). For hierarchical and other more complex experimental designs, provide full information on the design and analysis, including identification of the appropriate level for tests (e.g., the denominator used for split-plot experiments) and full reporting of outcomes (e.g., including blocking in the analysis if it was used in the design). Relevant information will differ among other types of analyses, but in all cases it should be sufficient to fully evaluate the design and analysis |
Post hoc acknowledgement | When hypotheses were formulated after data analysis, acknowledge this |
References | |
Citation of archived data, code, and materials | Properly cite any archived data, code, or materials made available by others and used in this manuscript |
Literature cited | By citing an article, authors certify that they have read the original article |