
Cognitive, Affective, & Behavioral Neuroscience

Editor-in-Chief: Marie Banich

ISSN: 1530-7026 (print version)
ISSN: 1531-135X (electronic version)

Journal no. 13415


Instructions for Authors

CONTENTS

GENERAL INFORMATION 

Cognitive, Affective, & Behavioral Neuroscience (CABN) publishes theoretical, review, and primary research articles concerned with behavior and brain processes in humans. This research can involve both normal participants and patients with brain injuries or processes that influence brain function, such as neurological disorders (including both healthy and disordered aging) and psychiatric disorders (e.g., schizophrenia and depression). In addition, articles that use animal models to address cognitive or affective processes involving behavioral, invasive, or imaging methods are also highly welcome. One of the main goals of CABN is to be the premier outlet for strongly psychologically motivated studies of brain-behavior relationships. Thus, the editors highly encourage papers with clear integration between psychological theory and the generation and interpretation of the neuroscientific data. Articles will be appropriate to the journal if they cover: (1) topics relating to cognition, such as perception, attention, memory, language, problem solving, reasoning, and decision-making; (2) topics concerning emotional processes, motivation, reward prediction, and affective states; and (3) topics relating to individual differences in relevant domains, including personality. In all cases, the editors will give highest priority to papers that report a combination of behavioral and neuroscientific methods to address these research topics. We also invite synthetic papers that make use of computational and other approaches to formal modeling. CABN also welcomes multi-study empirical articles or articles integrating multiple methods and approaches to understanding brain-behavior relationships.
Article Formats: As noted above, we are interested in publishing both original research articles and review papers. Review papers may take the form of conceptual or quantitative reviews that address pressing issues in the literature, provide a useful synthesis of an existing research literature, or point to new directions for empirical work. In addition, we are interested in publishing novel theoretical formulations that are relevant to the content mission of CABN, as outlined above. Such theoretical articles should provide a novel approach to a question (or set of questions) relevant to the mission of CABN and/or provide new directions for empirical research.
Article Length: CABN does not have a specific word limit for any of the article formats described above. However, succinctness in presentation and description often enhances the theoretical and empirical impact of an article and should be a guiding force in determining the length of a submitted article.

HOW TO SUBMIT 

Manuscripts are to be submitted electronically via the ScholarOne submission system. If you have not submitted via ScholarOne before, you will first be asked to create an account; otherwise, you can use your existing account.

AFFIRMATIONS AT THE TIME OF SUBMISSION 

To submit a manuscript, the corresponding author must affirm that:
(a) the work conforms to Standard 8 of the American Psychological Association’s Ethical Principles of Psychologists and Code of Conduct [click on “Standard 8” on http://www.apa.org/ethics/code/index.aspx ], which speaks to the ethics of conducting and publishing research and sharing data for the purpose of verification;
(b) if the manuscript includes any copyrighted material, the author understands that, if the manuscript is accepted for publication, he or she will be responsible for obtaining written permission to use that material;
(c) if any of the authors has a potential conflict of interest pertaining to the manuscript, that conflict has been disclosed in a message to the Editor;
(d) the author(s) understand(s) that before a manuscript can be published in Cognitive, Affective, & Behavioral Neuroscience, the copyright to that manuscript must be transferred to the Psychonomic Society (see http://www.psychonomic.org/psp/access.html for details);
(e) the corresponding author is familiar with the Psychonomic Society’s Statistical Guidelines (see the STATISTICAL GUIDELINES section below).

STATISTICAL GUIDELINES 

The Psychonomic Society’s Publications Committee and Ethics Committee and the Editors in Chief of the Society’s six journals worked together (with input from others) to create these guidelines on statistical issues. These guidelines focus on the analysis and reporting of quantitative data. Many of the issues described below pertain to vulnerabilities in null hypothesis significance testing (NHST), in which the central question is whether or not experimental measures differ from what would be expected due to chance. Below we emphasize some steps that researchers using NHST can take to avoid exacerbating those vulnerabilities. Many of the guidelines are long-standing norms about how to conduct experimental research in psychology. Nevertheless, researchers may benefit from being reminded of some of the ways that poor experimental procedure and analysis can compromise research conclusions. Authors are asked to consider the following issues for each manuscript submitted for publication in a Psychonomic Society journal. Some of these issues are specific to NHST, but many of them apply to other approaches as well. We welcome feedback regarding these guidelines via email to info@psychonomic.org with the Subject heading “Statistical Guidelines.”
1. It is important to address the issue of statistical power. Statistical power refers to the probability that a test will reject a false null hypothesis. Studies with low statistical power produce inherently ambiguous results because they often fail to replicate. Thus, it is highly desirable to have ample statistical power and to report an estimate of a priori power (not post hoc power) for tests of your main hypotheses. Best practice when feasible is to draw on the literature and/or theory to make a plausible estimate of effect size and then to test a sufficient number of participants to attain adequate power to detect an effect of that size. There is no hard-and-fast rule specifying “adequate” power, and Editors may judge that other considerations (e.g., novelty, difficulty) partially offset low power. If a priori power cannot be calculated because there is no estimate of effect size, then perhaps the analysis should focus on estimation of the effect size rather than on a hypothesis test. In any case, the Method section should make clear what criteria were used to determine the sample size. The main points here are to (a) do what you reasonably can to attain adequate power and (b) explain how the number of participants was determined. (An illustrative power-analysis sketch appears after these guidelines.)
2. Multiple NHST tests inflate null-hypothesis rejection rates. Tests of statistical significance (e.g., t-tests, analyses of variance) should not be used repeatedly on different subsets of the same data set (e.g., on varying numbers of participants in a study) without statistical correction, because the Type I error rate increases across multiple tests. (A sketch illustrating this inflation, and a standard correction, appears after these guidelines.)
  • A. One concern is the practice of testing a small sample of participants and then analyzing the data and deciding what to do next depending on whether the predicted effect (a) is statistically significant (stop and publish!), (b) clearly is not being obtained (stop, tweak, and start a new experiment), or (c) looks like it might become significant if more participants are added to the sample (test more participants, then reanalyze; repeat as needed). If this “optional stopping rule” has been followed without appropriate corrections, then report that fact and acknowledge that the Type I error rate is inflated by the multiple tests. Depending on the views of the Editor and reviewers, having used this stopping rule may not preclude publication, but unless appropriate corrections to the Type I error rate are made it will lessen confidence in the reported results. Note that Bayesian data analysis methods are less sensitive to problems related to optional stopping than NHST methods.
    B. It is problematic to analyze data and then drop some participants or some observations, re-run the analyses, and then report only the last set of analyses. If participants or observations were eliminated, then explicitly indicate why, when, and how this was done and either (a) report or synopsize the results of analyses that include all of the observations or (b) explain why such analyses would not be appropriate.
    C. Covariate analyses should either be planned in advance or be described as exploratory. It is inappropriate to analyze data without a covariate, then re-analyze those same data with a covariate and report only the latter analysis as confirmation of an idea. It may be appropriate to conduct multiple analyses in exploratory research, but it is important to report those analyses as exploratory and to acknowledge possible inflations of the Type I error rate.
    D. If multiple dependent variables (DVs) are individually analyzed with NHST, the probability that at least one of them will be “significant” by chance alone grows with the number of DVs. Therefore it is important to inform readers of all of the DVs collected that are relevant to the study. For example, if accuracy, latency, and confidence were measured, but the paper focuses on the accuracy data, then report the existence of the other measures and (if possible) adjust the analyses as appropriate. Similarly, if several different measures were used to tap a construct, then it is important to report the existence of all of those indices, not just the ones that yielded significant effects (although it may be reasonable to present a rationale for why discounting or not reporting detailed results for some of the measures is justified). There is no need to report measures that were available to you (e.g., via a participant pool data base) but that are irrelevant to the study.
3. Rich descriptions of the data help reviewers, the Editor, and other readers understand your findings. Thus, it is important to report appropriate measures of variability around means and around effects (e.g., confidence intervals around means and/or around standardized effect sizes). (A sketch illustrating such interval estimates appears after these guidelines.)
4. Cherry picking experiments, conditions, DVs, or observations can be misleading. Give readers the information they need to gain an accurate impression of the reliability and size of the effect in question.
  • A. Conducting multiple experiments with the same basic procedure and then reporting only the subset of those studies that yielded significant results (and putting the other experiments in an unpublished “file drawer”) can give a misleading impression of the size and replicability of an effect. If several experiments testing the same hypothesis with the same or very similar methods have been conducted and have varied in the pattern of significant and null effects obtained (as would be expected, if only due to chance), then you should report both the significant and the non-significant findings. Reporting the non-significant findings can actually strengthen evidence for the existence of an effect when meta-analytical techniques pool effect sizes across experiments. It is not generally necessary to report results from exploratory pilot experiments, such as when pilot experiments were used to estimate effect size, provided the final experiment has high power. In contrast, it is not appropriate to run multiple low-powered pilot experiments on a given topic and then report only the experiments that reject the null hypothesis.
    B. Deciding whether or not to report data from experimental conditions post hoc, contingent on the outcome of NHST, inflates the Type I error rate. Therefore, please inform readers of all of the conditions tested in the study. If, for example, 2nd, 4th, and 6th graders were tested in a study of memory development then it is appropriate to report on all three of those groups, even if one of them yielded discrepant data. This holds even if there are reasons to believe that some data should be discounted (e.g., due to a confound, a ceiling or floor effect in one condition, etc.). Here again, anomalous results do not necessarily preclude publication (after all, even ideal procedures yield anomalous results sometimes by chance). Failing to report the existence of a condition that did not yield the expected data can be misleading.
    C. Deciding to drop participants or observations post hoc contingent on the outcome of NHST inflates the Type I error rate. Best practice is to set inclusion/exclusion criteria in advance and stick to them, but if that is not done then whatever procedure was followed should be reported.
5. Be careful about using null results to infer “boundary conditions” for an effect. A single experiment that does not reject the null hypothesis provides only weak evidence for the absence of an effect. Too much faith in the outcome of a single experiment can lead to hypothesizing after the results are known (HARKing), which can lead to theoretical ideas being defined by noise in experimental results. Unless the experimental evidence for a boundary condition is strong, it may be more appropriate to consider a non-significant experimental finding as a Type II error. Such errors occur at a rate that reflects experimental power (e.g., if power is .80, then 20% of exact replications would be expected to fail to reject the null).
6. Authors should use statistical methods that best describe and convey the properties of their data. The Psychonomic Society does not require authors to use any particular data analysis method. The following sections highlight some important considerations.
  • A. Statistically significant findings are not a prerequisite for publication in Psychonomic Society journals. Indeed, too many significant findings relative to experimental power can indicate bias. Sometimes strong evidence for null effects can be deeply informative for theorizing and for identifying boundary conditions of an effect.
    B. In many scientific investigations the goal of an experiment is to measure the magnitude of an effect with some degree of precision. In such a situation a hypothesis test may be inappropriate as it only indicates whether data appear to differ from some specific theoretical value. Sometimes stronger scientific arguments can be made with confidence intervals (of parameter values or of standardized effect sizes). Moreover, some of the bias issues described above can be avoided by designing experiments to measure effects to a desired degree of precision (range of confidence interval).
    C. The Psychonomic Society encourages the use of data analysis methods other than NHST when appropriate. For example, Bayesian data analysis methods avoid some of the problems described above. They can be used instead of traditional NHST methods for both hypothesis testing and estimation. (A minimal Bayesian estimation sketch appears after these guidelines.)
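
The sketches below illustrate several of the points above in Python. They are not part of the Society's guidelines; they assume the scipy and statsmodels packages are available, and every effect size, sample size, and data value in them is a hypothetical placeholder rather than a recommendation. This first sketch relates to Guideline 1: given a plausible effect-size estimate, it computes the sample size needed for adequate a priori power, and the power actually attained by a smaller sample.

```python
# A priori power analysis for a two-sample design (Guideline 1).
# The effect size d = 0.5 is a hypothetical estimate "drawn from the literature";
# substitute your own. Requires the statsmodels package.
from statsmodels.stats.power import TTestIndPower

power_analysis = TTestIndPower()

# Participants needed per group to detect d = 0.5 with alpha = .05 (two-sided)
# and power = .80.
n_per_group = power_analysis.solve_power(effect_size=0.5, alpha=0.05, power=0.80,
                                         alternative='two-sided')
print(f"Required n per group: {n_per_group:.1f}")      # roughly 64 per group

# Conversely: the power actually attained with only 30 participants per group.
attained = power_analysis.solve_power(effect_size=0.5, nobs1=30, alpha=0.05,
                                      alternative='two-sided')
print(f"Power with n = 30 per group: {attained:.2f}")  # roughly .47
```

Reporting the inputs to such a calculation (the assumed effect size and alpha level) in the Method section is one way to explain how the number of participants was determined.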
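This sketch relates to Guideline 2 (especially 2D): when several dependent variables are tested separately and no true effects exist, the chance of at least one "significant" result is well above .05. A standard family-wise correction (here Holm's method, via statsmodels) restores the nominal rate. All p values and simulation settings are illustrative.

```python
# Family-wise Type I error inflation across multiple DVs (Guideline 2D),
# plus a standard correction. Simulated data; all null hypotheses are true.
import numpy as np
from scipy import stats
from statsmodels.stats.multitest import multipletests

rng = np.random.default_rng(0)
n_sims, n_dvs, n = 2000, 4, 30            # 4 independent DVs, n = 30 per test

any_false_alarm = 0
for _ in range(n_sims):
    pvals = [stats.ttest_1samp(rng.normal(0.0, 1.0, n), 0.0).pvalue
             for _ in range(n_dvs)]
    any_false_alarm += any(p < 0.05 for p in pvals)

# With 4 independent tests the expected rate is 1 - .95**4, about .19, not .05.
print(f"P(at least one 'significant' DV | all nulls true): "
      f"{any_false_alarm / n_sims:.3f}")

# Holm's step-down correction keeps the family-wise error rate at .05.
reject, p_adjusted, _, _ = multipletests([0.012, 0.030, 0.240, 0.800],
                                         alpha=0.05, method='holm')
print("Reject after correction:", reject)
print("Adjusted p values:", p_adjusted.round(3))
```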
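This sketch relates to Guideline 3: alongside the mean difference itself, it reports a 95% confidence interval for that difference and an approximate interval for the standardized effect size (Cohen's d). The data are simulated placeholders, and the interval for d uses a common large-sample approximation to its standard error rather than the exact noncentral-t method.

```python
# Confidence intervals around a mean difference and around Cohen's d (Guideline 3).
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
a = rng.normal(0.4, 1.0, 40)      # condition A (simulated placeholder data)
b = rng.normal(0.0, 1.0, 40)      # condition B
n1, n2 = len(a), len(b)

diff = a.mean() - b.mean()
sp = np.sqrt(((n1 - 1) * a.var(ddof=1) + (n2 - 1) * b.var(ddof=1)) / (n1 + n2 - 2))
se_diff = sp * np.sqrt(1.0 / n1 + 1.0 / n2)
t_crit = stats.t.ppf(0.975, df=n1 + n2 - 2)
print(f"Mean difference = {diff:.2f}, "
      f"95% CI [{diff - t_crit * se_diff:.2f}, {diff + t_crit * se_diff:.2f}]")

# Cohen's d with an approximate 95% CI (large-sample standard error).
d = diff / sp
se_d = np.sqrt((n1 + n2) / (n1 * n2) + d**2 / (2 * (n1 + n2)))
print(f"Cohen's d = {d:.2f}, "
      f"approx. 95% CI [{d - 1.96 * se_d:.2f}, {d + 1.96 * se_d:.2f}]")
```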
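Finally, a sketch related to Guideline 6C: a deliberately simple Bayesian estimate of a mean difference obtained by grid approximation, under the simplifying assumption that the sampling distribution of the observed difference is normal with a known standard error. It is meant only to convey the flavor of estimation-based reporting (posterior mean and credible interval); for fuller treatments see the Bayesian resources listed below, such as Kruschke (2011).

```python
# Minimal Bayesian estimation by grid approximation (Guideline 6C).
# Simplifying assumption: the observed mean difference is normally distributed
# around the true difference with a known standard error.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
a = rng.normal(0.3, 1.0, 40)      # simulated placeholder data
b = rng.normal(0.0, 1.0, 40)

obs_diff = a.mean() - b.mean()
se = np.sqrt(a.var(ddof=1) / len(a) + b.var(ddof=1) / len(b))

grid = np.linspace(-2.0, 2.0, 4001)                 # candidate true differences
step = grid[1] - grid[0]
prior = stats.norm.pdf(grid, loc=0.0, scale=1.0)    # weakly informative prior
likelihood = stats.norm.pdf(obs_diff, loc=grid, scale=se)
posterior = prior * likelihood
posterior /= posterior.sum() * step                 # normalize to a density

cdf = np.cumsum(posterior) * step
lo = grid[np.searchsorted(cdf, 0.025)]
hi = grid[np.searchsorted(cdf, 0.975)]
post_mean = (grid * posterior).sum() * step
print(f"Posterior mean = {post_mean:.2f}, 95% credible interval [{lo:.2f}, {hi:.2f}]")
```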
Last Word
Ultimately, journal Editors work with reviewers and authors to promote good scientific practice in publications in Psychonomic Society journals. A publication decision on any specific manuscript depends on much more than the above guidelines, and individual Editors and reviewers may stress some points more than others. Nonetheless, all else being equal, submissions that comply with these guidelines will represent better science and will be more likely to be published than submissions that deviate from them.
Resources
There are many excellent sources for information on statistical issues. Listed below are some that the 2012 Publications Committee and Editors recommend.
Confidence Intervals:
Cumming, G. (2012). Understanding the new statistics: Effect sizes, confidence intervals, and meta-analysis. New York, NY US: Routledge/Taylor & Francis Group. (see www.latrobe.edu.au/psy/research/projects/esci ).
Masson, M. J., & Loftus, G. R. (2003). Using confidence intervals for graphically based data interpretation. Canadian Journal of Experimental Psychology/Revue Canadienne de Psychologie Expérimentale, 57, 203-220. doi:10.1037/h0087426
Effect Size Estimates:
Ellis, P. D. (2010). The essential guide to effect sizes: Statistical power, meta-analysis and the interpretation of research results. Cambridge University Press. ISBN 978-0-521-14246-5.
Fritz, C. O., Morris, P. E., & Richler, J. J. (2011). Effect size estimates: Current use, calculations and interpretation. Journal of Experimental Psychology: General, 141, 2-18.
Grissom, R. J., & Kim, J. J. (2012). Effect sizes for research: Univariate and multivariate applications (2nd ed.). New York, NY US: Routledge/Taylor & Francis Group.
Meta-analysis:
Cumming, G. (2012). Understanding the new statistics: Effect sizes, confidence intervals, and meta-analysis. New York, NY US: Routledge/Taylor & Francis Group. (see www.latrobe.edu.au/psy/research/projects/esci ).
Littell, J. H., Corcoran, J., & Pillai, V. (2008). Systematic reviews and meta-analysis. New York: Oxford University Press.
Bayesian Data Analysis:
Kruschke, J. K. (2011). Doing Bayesian data analysis: A tutorial with R and BUGS. San Diego, CA US: Elsevier Academic Press. (See www.indiana.edu/~kruschke/DoingBayesianDataAnalysis/)
Kruschke, J. K. (in press). Bayesian estimation supersedes the t test. Journal of Experimental Psychology: General. For a preprint see http://www.indiana.edu/~kruschke/BEST/BEST.pdf .
Power Analysis:
Faul, F., Erdfelder, E., Lang, A., & Buchner, A. (2007). G*Power 3: A flexible statistical power analysis program for the social, behavioral, and biomedical sciences. Behavior Research Methods, 39(2), 175-191. (See http://www.psycho.uni-duesseldorf.de/abteilungen/aap/gpower3/ )

MANUSCRIPT STYLE 

Manuscripts are to adhere to the conventions described in the Publication Manual of the American Psychological Association (6th ed.). See www.apastyle.org/ for information on APA style, or type “APA style” into a search engine to find numerous online sources of information about APA style. Here we highlight only the most fundamental aspects of that style.
    Layout: All manuscripts are to be double-spaced and have 1” margins, with page numbers in the upper right corner of each page.
    Title Page: The title page must include the authors’ names and affiliations and the corresponding author’s address, telephone number, and e-mail address.
    Abstract: There must be an abstract of no more than 250 words.
    Sections: Manuscripts should be divided into sections (and perhaps subsections) appropriate for their content (e.g., introduction/background, Method, Results, etc.), as per APA style.
    Acknowledgments: The Author Note should include sources of financial support and any possible conflicts of interest. If desirable, contributions of different authors may be briefly described here. Reviewers and the Editor should not be thanked in the Author Note.
    Figures and Tables: Figures and tables are to be designed as per APA style.
    Location of Figures, Tables, and Footnotes: In submitted manuscripts, figures and tables can be embedded in the body of the text and footnotes can be placed at the bottom of the page on which the footnoted material is referenced. Note that this is a departure from APA style; if you prefer you can submit the manuscript with the figures, tables, and footnotes at the end, but it is slightly easier for reviewers if these elements appear near the text that refers to them. When a paper is accepted, in the final version that the author submits for production each figure and table must be on a separate page near the end of the manuscript and all footnotes must be listed on a footnote page, as per the APA Publication Manual.
    Citations and References: These should conform to APA style.

ACKNOWLEDGMENTS AND FUNDING INFORMATION

Acknowledgments of people, grants, funds, etc. should be placed in a separate section before the reference list. The names of funding organizations should be written in full. In addition, please provide the funding information in a separate step of the submission process in the peer review system. Funder names should preferably be selected from the standardized list you will see during submission. If the funding institution you need is not listed, it can be entered as free text. Funding information will be published as searchable metadata for the accepted article, whereas acknowledgements are published within the paper.

SUPPLEMENTAL MATERIAL 

Authors are encouraged to attach, in a separate file or files, supplemental material (e.g., data sets such as stimulus norms or raw data, demonstrations or pictorial, auditory, or video stimuli, additional information regarding methods, additional tables or figures, relevant program source code [excluding executable code] for modeling or stimulus generation, or supplementary analyses that are not central to the main thrust of an article). The supplemental material will be reviewed along with the submitted article, or may be added at the time of acceptance in consultation with the Editor. Supplemental material will be published online, linked to the accepted article. The Editor makes decisions regarding supplemental material.

COLOR FIGURES 

Authors are encouraged to use color in figures if they believe that doing so improves the clarity of those figures. With the approval of the Editor, color can be used in the online version of the journal at no cost to authors. Moreover, as of 2011, the Editor has a limited budget for printing hard-copy articles with color figures at no expense to authors. The Editor makes the final decision as to whether or not an article will be printed in hard copy with color: the greater the scientific value of using color, the more likely the Editor is to approve its use. Authors can also pay for printed production of their articles with color figures; the current fee is $1,100 per article (regardless of the number of color figures). Many articles submitted to CABN need color figures to present their data most clearly. As with most journals, we must charge for the publication of color figures in the print version of articles. However, if authors wish, they may opt to publish a black-and-white version of figures/tables in the print version (as long as they are understandable to readers) and publish color versions in the online version of articles.
Whether used only online or both in print and online, color figures should (insofar as is possible) be designed such that grayscale versions are interpretable. This is important because readers may wish to print or photocopy articles in grayscale.

ENGLISH LANGUAGE EDITING

For editors and reviewers to assess the work presented in your manuscript accurately, you need to ensure that the English language is of sufficient quality to be understood. If you need help with writing in English, you should consider:
  • Asking a colleague who is a native English speaker to review your manuscript for clarity.
  • Visiting the English language tutorial which covers the common mistakes when writing in English.
  • Using a professional language editing service where editors will improve the English to ensure that your meaning is clear and identify problems that require your review. Two such services are provided by our affiliates Nature Research Editing Service and American Journal Experts. Springer authors are entitled to a 10% discount on their first submission to either of these services, simply follow the links below.
Please note that the use of a language editing service is not a requirement for publication in this journal and does not imply or guarantee that the article will be selected for peer review or accepted.
If your manuscript is accepted it will be checked by our copyeditors for spelling and formal style before publication.


OTHER QUESTIONS 

If you have questions not answered above, please direct them to the Editor of the journal in question:
Marie T. Banich, Ph.D.
Director, Institute of Cognitive Science
Executive Director, Intermountain Neuroimaging Consortium
Professor, Dept. of Psychology & Neuroscience
D420 Muenzinger Hall
University of Colorado at Boulder
UCB 344
Boulder, CO 80309
Phone: 303-492-6655 Fax: 303-492-7177

For authors and editors


  • Journal Citation Reports® 2016 Impact Factor: 3.263
  • Aims and Scope

    Cognitive, Affective, & Behavioral Neuroscience publishes theoretical, review, and primary research articles concerned with behavior and brain processes in humans, both normal participants and patients with brain injuries or processes that influence brain function, such as neurological disorders (including both healthy and disordered aging) and psychiatric disorders (e.g., schizophrenia and depression). In addition, articles that use animal models to address cognitive or affective processes involving behavioral, invasive, or imaging methods are also highly welcome. One of the main goals of CABN is to be the premier outlet for strongly psychologically motivated studies of brain–behavior relationships. Thus, the editors highly encourage papers with clear integration between psychological theory and the conduct and interpretation of the neuroscientific data. Articles will be appropriate to the journal if they cover: (1) topics relating to cognition, such as perception, attention, memory, language, problem solving, reasoning, and decision-making; (2) topics concerning emotional processes, motivation, reward prediction, and affective states; and (3) topics relating to individual differences in relevant domains, including personality. In all cases, the editors will give highest priority to papers that report a combination of behavioral and neuroscientific methods to address these research topics.

    Further, the editors will give highest priority to papers that include sample sizes that provide adequate power. The fields of psychology and functional neuroimaging have become increasingly concerned that small sample sizes contribute to replication failures in the literature, and are converging on the consensus that there is a need to increase minimum sample sizes.

    We also invite synthetic papers that make use of computational and other approaches to formal modeling. CABN also welcomes multistudy empirical articles or articles integrating multiple methods and approaches to understanding brain–behavior relationships.

    For Manuscript Submission information and Author Instructions, please visit the Psychonomic Society homepage at:

    http://www.psychonomic-journals.org

  • Copyright Information

    Copyright of this Journal is held by The Psychonomic Society, Inc. However, for questions relating to permissions, please visit the following website: http://www.springer.com/rights?SGWID=0-122-0-0-0
