The Clinical Neuropsychologist, 2014
Vol. 28, No. 4, 575–599, http://dx.doi.org/10.1080/13854046.2014.907445

Consolidated Standards of Reporting Trials (CONSORT): Considerations for Neuropsychological Research

Justin B. Miller¹, Mike R. Schoenberg², and Robert M. Bilder³,⁴

¹Cleveland Clinic, Lou Ruvo Center for Brain Health, Las Vegas, NV, USA
²Departments of Psychiatry and Behavioral Neurosciences, Neurology, and Brain Repair, University of South Florida Morsani College of Medicine, Tampa, FL, USA
³Department of Psychiatry and Biobehavioral Sciences, David Geffen School of Medicine at UCLA
⁴UCLA College of Letters & Science, University of California, Los Angeles, CA, USA

Evidence-based medicine (EBM) is a pillar of clinical medicine (Sackett, Rosenberg, Gray, Haynes, & Richardson, 1996), but only recently has this been systematically discussed within the context of clinical neuropsychology (e.g., Chelune, 2010). Across the medical sciences, randomized controlled trials (RCTs) are often considered the most important source of evidence for treatment efficacy. To facilitate the conduct, dissemination, and evaluation of research findings, reporting standards have been implemented, including the Consolidated Standards of Reporting Trials (CONSORT) statement. This paper considers the implications of the CONSORT statement for the reporting of clinical trials that include neuropsychological endpoints. Adopting specific guidelines for trials involving neuropsychological endpoints will ultimately serve to strengthen the empirical evidence base by increasing uniformity within the literature, decreasing ambiguity in reporting, improving trial design, and fostering methodological transparency. Implementing uniform reporting standards will also facilitate meta-analytic review of evidence from published trials, benefiting researchers, clinicians, educators and practitioners, as well as journal editors, reviewers, and ultimately, health care consumers.

Keywords: CONSORT statement; Reporting standards; Randomized controlled trial; Research methods; Cognitive endpoints.

Address correspondence to: Justin B. Miller, 888 W. Bonneville Ave, Las Vegas, NV 89106, USA. E-mail: millerj4@ccf.org

© 2014 Taylor & Francis

INTRODUCTION
Evidence-based medicine (EBM) is defined as “the conscientious, explicit, and
judicious use of current best evidence in making decisions about the care of individual
patients” (Sackett et al., 1996, p. 71) and is increasingly recognized as critical for the
future of clinical neuropsychology (Bilder, 2011; Chelune, 2010; Kaufman, Boxer, &
Bilder, 2013). Across the medical sciences, the evidence upon which best practices are
determined is derived from the peer-reviewed, empirical literature base. In order to
facilitate the conduct, dissemination, and evaluation of research findings, reporting
guidelines have been introduced and widely adopted throughout medicine. Examples of
such guidelines include the Standards for the Reporting of Diagnostic accuracy studies
(STARD; Bossuyt et al., 2003), Strengthening the Reporting of Observational studies in
Epidemiology (STROBE; von Elm et al., 2008), Transparent Reporting of Evaluations
with Nonrandomized Designs (TREND; Des Jarlais, Lyles, & Crepaz, 2004), and the

Consolidated Standards of Reporting Trials (CONSORT; Schulz, Altman, & Moher, 2010), which is specific to randomized controlled trials (RCTs). The American Psycho-
logical Association (APA) has also developed a set of reporting standards, the Journal
Article Reporting Standards (JARS), to promote minimum reporting guidelines for jour-
nals published by the APA (Cooper, Appelbaum, Maxwell, Stone, & Sher, 2008). The
JARS standards are broad in scope and extend reporting guidelines beyond those estab-
lished for a particular study design or specialty (i.e., neuropsychology). In the health
sciences literature, the RCT is typically regarded as the gold standard assessment of
treatment efficacy, and with the increasing prevalence of neuropsychological and cogni-
tive endpoints in RCTs, it is imperative that neuropsychology adopts similar standards
and creates extensions that consider the unique features of these endpoints.

To our knowledge, there has not been any systematic evaluation of CONSORT
adherence within the neuropsychological literature, and at this time there is only one
neuropsychology-specific journal that endorses the CONSORT statement (Neuropsy-
chology). The primary aim of the present paper is to review the CONSORT criteria,
emphasizing their application to RCTs that employ neuropsychological variables as pri-
mary or secondary endpoints. This paper parallels similar efforts calling for generaliza-
tion of reporting guidelines among peer-reviewed journals that target the dissemination
of neuropsychological research (e.g., Bowden, Harrison, & Loring, 2013; Loring &
Bowden, 2013). This paper specifically aims to promote uniform reporting, and to
increase familiarity with the CONSORT guidelines among clinicians, researchers, educa-
tors, reviewers, and editors. While there are similarities across reporting guidelines
emphasizing the complete and transparent reporting of research, and many aspects of
the TREND and STROBE statements in particular are applicable to neuropsychological
research, we have limited our focus to the CONSORT statement. Readers interested in
the implementation of the STROBE statement for observational studies in neuropsychol-
ogy are referred to Loring and Bowden (2013), and those interested in reporting guide-
lines more broadly are referred to the Equator Network (www.equator-network.org). We
believe that improved reporting will increase methodological transparency and facilitate
critical evaluation of published trials, independent replication of reported findings, and
clinical application of obtained results. Implementing established reporting guidelines
would also facilitate literature searches and indexing, decrease ambiguity in reporting,
and ultimately strengthen the empirical evidence base by easing the synthesis of pub-
lished findings (e.g., meta-analysis), which is a cornerstone of evidence-based biomedi-
cal research (Ioannidis, 2005). Although reporting guidelines such as the CONSORT
statement originated in the field of medicine, the same principles of good treatment
design and reporting are nothing new to the field of psychology and are indeed a core
component of the scientist-practitioner model.
The two primary clinical trial research designs important for neuropsychology are
studies that include neuropsychological endpoints, and studies that include a neuropsy-
chological intervention or manipulation. In some studies, neuropsychological measures
and assessment of cognition represent the primary endpoint and method of evaluation
(e.g., traumatic brain injury, Couillet et al., 2010; dementia, Craft et al., 2012;
schizophrenia, Keefe et al., 2007; epilepsy, Masur et al., 2013). Cognition may also serve
as a secondary endpoint or reflect a clinical proxy of disease status (e.g., Parkinson’s
disease, Edwards et al., 2013; Alzheimer’s disease, Smith et al., 2012). In other studies,
the emphasis may be placed on neuropsychological intervention (e.g., cognitive
remediation, neurorehabilitation, and recently “brain training”) with a goal of improving
cognitive ability, emotional well-being, or functional status in populations with cognitive
dysfunction (e.g., schizophrenia, Franck et al., 2013; mild cognitive impairment,
Hampstead, Stringer, Stilla, Giddens, & Sathian, 2012; traumatic brain injury, Hanks,
Rapport, Wertheimer, & Koviak, 2012; multiple sclerosis, Stuifbergen et al., 2012), or
among healthy people who are interested in enhancing cognitive performance or prevent-
ing cognitive decline (Mahncke et al., 2006; Wolinsky et al., 2010; Wolinsky, Vander
Weg, Howren, Jones, & Dotson, 2013). In all of these study designs, standardized report-
ing of neuropsychological research, including operational definitions of cognitive con-
structs, cognitive measures, and cognitive interventions, is essential to increase
uniformity in the literature and strengthen the evidence base.

Reporting standards for medical research have been in use for well over two dec-
ades, after reviews of several medical journals found that the reporting of essential
methodological and statistical aspects of clinical trials was lacking (e.g., Altman &
Dore, 1990; Pocock, Hughes, & Lee, 1987; Schulz, Chalmers, Altman, Grimes, &
Dore, 1995), and that the methodological rigor of trials included in meta-analyses was negatively correlated with reported treatment efficacy, such that lower-quality trials were associated with
misleadingly higher estimates of treatment efficacy (Moher et al., 1998). As a result,
several groups of medical journal editors, clinical trialists, epidemiologists, and method-
ologists collaborated in developing the first version of the Consolidated Standards of
Reporting Trials (CONSORT) statement (Begg et al., 1996; Moher, 1998), which estab-
lished a set of minimum information about a clinical trial that must be included in pub-
lished reports to assure complete and transparent publication of clinical trial results.
The original standards focused on individually randomized, two-group parallel trials
(i.e., randomized controlled trials), and included 21 criteria in addition to a flowchart of
the flow of participants as they progressed through the study. In 2001 the statement
underwent its first revision (Moher, Schulz, & Altman, 2001), which retained only those
items deemed fundamental to reporting. In 2010 the statement underwent a second revi-
sion to its current state (Schulz et al., 2010), which now includes 25 criteria. The most
notable changes included simplification and clarification of language and improvement
in the consistency of item language as well as addition of sub-items to facilitate report-
ing of adherence to the guidelines.
Several official extensions of the CONSORT Statement have been published
including application to cluster randomized trials (Campbell, Elbourne, & Altman,
2004), non-inferiority and equivalence randomized trials (Piaggio, Elbourne, Pocock,
Evans, & Altman, 2012), pragmatic trials (Zwarenstein et al., 2008), non-pharmacologi-
cal treatments (Boutron, Moher, Altman, Schulz, & Ravaud, 2008a, 2008b), as well as
better reporting of harms (Ioannidis et al., 2004), abstracts (Hopewell et al., 2008), and
patient reported outcomes (Calvert et al., 2013). Several unofficial, recognized exten-
sions of the CONSORT Statement have also been published. In 2003 a committee was
formed to evaluate the current state of the behavioral medicine literature and it subse-
quently determined that the extant RCT literature base was neither adequate nor uni-
form (Davidson et al., 2003). As a result, Davidson et al. (2003) proposed the
Evidence-based Behavioral Medicine (EBBM) Statement, an extension of CONSORT that provides explicit descriptions of the 22 CONSORT criteria along with five additional criteria specific to behavioral medicine research. However, the adoption of these publishing
guidelines within the broader field of behavioral medicine, let alone neuropsychology,
is inconsistent and highly variable. Moreover, the extensions to behavioral medicine
lack emphasis on evaluating the reliability and validity of neuropsychological endpoints
employed and many threats to internal and external validity are not directly accounted
for.
Most medical journals require manuscripts reporting RCTs to submit a completed
CONSORT checklist along with the submission in order to be considered for publica-
tion, and a recent proposal has further advocated for a “transparency declaration” by
the lead author (Altman & Moher, 2013). Systematic research comparing studies published before and after implementation of the CONSORT statement has generally found improved reporting of RCTs among journals that had adopted the criteria (e.g., Alvarez, Meyer, Gourraud, & Paul, 2009; Greenfield et al., 2009; Han et al., 2009;
Moher, Jones, & Lepage, 2001; Plint et al., 2006), although concerns regarding a lack
of consistent reporting of critical elements in clinical research remain (e.g., Agha,
Cooper, & Muir, 2007; Folkes, Urquhart, & Grunfeld, 2008; Turner, Shamseer, Altman,
Schulz, & Moher, 2012; Uman et al., 2010). For example, Folkes et al. (2008) con-
ducted a review of four leading general medicine journals and found that only about
half of randomized clinical trials presented the full pre-randomization data suggested by
CONSORT. Turner and colleagues (2012) systematically reviewed 53 meta-analyses
summarizing a combined total of 16,604 RCTs published between 2005 and 2010, and
found that journals adhering to CONSORT tended to publish more complete results of
RCTs than those journals that did not for 25 of 27 reporting outcomes assessed, but
concluded overall that reporting remained suboptimal.
Within the behavioral medicine literature, there have been few systematic evalua-
tions of adherence to reporting guidelines, although a recent review of the clinical psy-
chology, education, criminology, and social work literatures suggests that adherence to
existing reporting guidelines among studies involving social and psychological inter-
ventions was poor (Grant, Mayo-Wilson, Melendez-Torres, & Montgomery, 2013). To
our knowledge, there has not been any study evaluating implementation or adherence
to the JARS reporting guidelines. Thus, the extent to which the quality of research
reports published in APA journals has been improved by adhering to the JARS publi-
cation guidelines remains unknown. Despite evidence that adherence to CONSORT
leads to improved reporting, uptake of reporting guidelines in neuropsychology may be
limited as a result of unfamiliarity among authors and reviewers, and limited endorse-
ment by editorial boards.
Adopting a universal standard for reporting research methodology will also
increase the transparency of neuropsychological studies by clearly and explicitly laying
out the methodology employed in execution of experimental paradigms, as well as data
collection and data analysis. One of the primary tenets of the scientific method is inde-
pendent replication, and in order to replicate a published study, research methodologies
must be laid out in explicit, comprehensive detail in the published manuscript or avail-
able supplements. Although most published studies include a methodological overview,
many do not provide adequate detail documenting the research process to such an
extent as to allow for replication; this is particularly true for statistical analyses. Accu-
rate assessment of cognitive function requires a sophisticated understanding of psycho-
metric theory in order to separate measurement noise from the cognitive signal of interest. Although
development of statistical methods to account for relevant covariates (e.g., practice
effects, reliable change with repeated measurement) is an integral component of
neuropsychological research (particularly RCTs employing cognitive endpoints),
detailed reporting is crucial for critical review.

GENERAL REPORTING RECOMMENDATIONS


The current CONSORT criteria are listed in Table 1, with descriptions of how the
individual criteria might be implemented within neuropsychological research paradigms.
Throughout, examples are given to help the reader appreciate that many aspects of the
CONSORT recommendations do not require extensive additional work or reporting to
meet these recommended guidelines, and that adhering to CONSORT criteria need not
be lengthy or complex. Adherence to CONSORT guidelines affects the reporting of the
research study in its entirety, beginning with the title. The CONSORT group website
(www.consort-statement.org) contains multiple examples of well-reported studies for
reference.

Title and abstract


Reporting standards encourage authors to make the primary methodology of a
study explicit and readily identifiable in the study title. As noted in Criterion 1,
reporting results from a RCT should be clearly stated in the title of the manuscript
by including the phrase (or some variant thereof) “randomized controlled trial.” We
have found multiple examples of recent RCTs in neuropsychology journals that failed
to satisfy this criterion, and many studies employing randomized controlled methodol-
ogies neglected to identify themselves as such. It is further critical to write an abstract
that contains a concise summary of the study design. The importance of an abstract
that conforms to CONSORT cannot be overstated, as the abstract of a paper is the
most publicly accessible component of the manuscript after the title; may be all that
is read after the title (Barry, Ebell, Shaughnessy, Slawson, & Nietzke, 2001; Fontelo,
2011; Haynes et al., 1990; Marcelo et al., 2013; Saint et al., 2000); and major search
engines, including PubMed and Google Scholar, focus on the title and abstract rather than the full text. An official CONSORT extension specific to reporting of both article
and conference abstracts has been published (Hopewell et al., 2008). In brief, the
abstract should identify essential methodological and statistical information, including
the trial design (e.g., parallel, cluster), if (and how) participants were randomized, who was blinded (e.g., participants, care givers, researchers), eligibility criteria for participants and the setting where data were collected, the hypothesis(es) and/or study
objective, type of intervention, and primary and secondary outcome measures
employed.
The abstract should also report the number of participants recruited, the number
randomized to each group, and the number analyzed in each group. Where applicable,
the outcome variables should specify the score metric (e.g., raw vs. standardized scores)
and what combination of variables was used to generate any composite scores or
indexes. The primary outcome measure and estimated effect size (with confidence inter-
val) should also be reported for each group. Conclusions should be summarized and
include a general interpretation.

Table 1. Current CONSORT criteria with description of importance for neuropsychology

Title and abstract

1. Identification as a randomized trial in the title. The primary methodology should be explicitly stated in the title of the manuscript. For example, if it is an archival study, the title should reflect this design, or if it is a RCT, this should be explicit.

1b. Structured summary of trial design, methods, results and conclusions. The abstract should contain, in as much detail as possible, information about each of the primary components of the study, including the methodology (e.g., design, analyses, participants), primary outcomes, and main conclusions.

Introduction

Background and objectives

2a. Scientific background and explanation of rationale. The background section must appropriately and succinctly summarize the relevant literature and gaps.

2b. Specific objectives or hypotheses. Hypotheses and/or objectives should be stated clearly and concisely; hypotheses should follow logically from the arguments set forth in the literature review.

Methods

Trial design

3a. Description of trial design (such as parallel, factorial) including allocation ratio. An overall description of the study methodology and trial design must be presented in clear and simple language. The language used in this description should be sufficiently detailed so as to allow for complete transparency of the design.

3b. Important changes to methods after trial commencement (such as eligibility criteria), with reasons. If the intended protocol is altered in any way at any point after initiation of the trial, this must be clearly stated within the manuscript, including a thorough description of the deviation from the original protocol and justification for the change.

Participants

4a. Eligibility criteria for participants. The specific criteria required for study eligibility must be thoroughly detailed, addressing basic demographic factors and relevant clinical factors (if applicable). In addition, factors that would preclude individuals from participation must also be described, along with rationale and justification for the exclusion.

4b. Settings and locations where data were collected. A description of each site where data were collected is required and should include a broad description of the geographic region (e.g., urban vs. suburban vs. rural) and clinical context (if applicable). If data were collected within a clinical setting, the nature of the clinic should be described, including the primary referral sources, nature of the referrals, etc. If an experimental population is studied, the demographic factors of the target population should be explicated.

Interventions

5. Interventions are described with sufficient detail to allow replication, including how and when they were actually administered. Any experimental manipulation, treatment, or intervention must be clearly and sequentially outlined, beginning with the consent process. A description of key personnel that includes relevant roles and level of training within the experiment must also be included. A thorough description of the control group is also required, including justification for the chosen methodology (e.g., placebo, waitlist control, treatment as usual).

Outcomes

6a. Completely defined, pre-specified primary and secondary outcome measures, including how and when they were assessed. The specific endpoints and outcome measures used to evaluate the intervention must be described in detail, including the rationale for selection, which should be based on operationalized criteria and on theoretical and empirical evidence. Any additional measures evaluated in the context of the study must also be described. The basis of comparison must also be indicated (e.g., parallel groups at endpoint, baseline).

6b. Any changes to trial outcomes after the trial commenced, with reasons. As with any changes to the experimental method, any changes in the predetermined outcome measures must be reported along with the rationale for the change (e.g., an updated test version was published).

Sample size

7a. How sample size was determined. Reporting hypothesized effect sizes based on previous research or relevant literature is encouraged, using a common metric (e.g., Cohen’s d). Determination of sample size should include a formal power analysis that specifies the parameters of the analysis (a worked sketch follows this table).

7b. When applicable, explanation of any interim analyses and stopping guidelines. If interim analyses are planned, a description of what those analyses are and when they will be conducted is warranted. If an a priori decision is made to terminate data collection before the proposed target sample size is reached, justification for this decision should be reported.

Randomization

SEQUENCE GENERATION

8a. Method used to generate the random allocation sequence. The randomization process used to assign participants to treatment groups should be specified in sufficient detail to allow readers to assess randomization methods and the likelihood of bias in participant allocation.

8b. Type of randomization; details of any restriction (such as blocking and block size). A detailed description of the type of randomization is required, particularly if restrictions are in place to assure that the allocation procedure results in comparable groups, which is common in neuropsychological studies where unrestricted randomization may not be possible. If study personnel became aware of block size(s) or stratification procedures, this should be reported, as it may compromise the random allocation procedure and introduce bias.

ALLOCATION CONCEALMENT MECHANISM

9. Mechanism used to implement the random allocation sequence (such as sequentially numbered containers), describing any steps taken to conceal the sequence until interventions were assigned. Allocation sequences should result in random and unpredictable assignment of participants to groups, and the sequence must be concealed from study personnel at all phases of the study, which is distinct from blinding procedures. Study personnel obtaining informed consent and enrolling participants must be ignorant of the next assignment in the sequence in order to ensure true randomization. The steps taken by investigators to conceal the random allocation sequence must be explicitly reported.

IMPLEMENTATION

10. Who generated the random allocation sequence, who enrolled participants, and who assigned participants to interventions. Key study personnel must be identified, including who (or what) generated the allocation sequence, who enrolled participants, who (or what) assigned participants to study groups, and how these individuals remained naïve to the allocation sequence and blinded to group membership. Ideally, those who generate the allocation sequence are separate from those implementing assignments.

BLINDING

11a. If done, who was blinded after assignment to interventions (for example, participants, care providers, those assessing outcomes) and how. In studies where blinding procedures were possible and employed, a detailed description of who was blinded and to what extent is essential, including a thorough description of the methods used to blind. In the event that blinding was not possible or not utilized (as is sometimes the case in neuropsychological or rehabilitation trials), this should be explicated and justified. In these cases, the steps taken to minimize bias must be adequately reported. Any compromise to blinding must be reported.

11b. If relevant, description of the similarity of interventions. Authors should state the method(s) used to make the different treatment conditions similar (and blinded) to participants as well as investigators, and include a comprehensive description of the similarities and dissimilarities between treatment and control conditions.

Statistical methods

12a. Statistical methods used to compare groups for primary and secondary outcomes. The analytic strategy must be described in such detail that a knowledgeable reader could reproduce the statistical findings if given a copy of the dataset, including data entry, quality control (i.e., data checking), and data preparation (e.g., data cleaning, manipulations, transformations, combination). Any normative data used for standardization should be clearly stated and appropriately referenced, and a discussion of how missing data were handled is required.

12b. Methods for additional analyses, such as subgroup analyses and adjusted analyses. Subgroup analyses (if any) should be specified a priori and justified based on empirical evidence. If done, they should be reported in the same manner as the primary analyses. Post-hoc subgroup analyses are discouraged.

Results

Participant flow (a diagram is strongly recommended; see the CONSORT website for examples)

13a. For each group, the numbers of participants who were randomly assigned, received intended treatment, and were analyzed for the primary outcome. Reporting basic demographic characteristics of the recruited and analyzed samples by group is required. At a minimum, age, education, gender, and ethnicity must be reported; reporting any other relevant characteristics (e.g., substance use, polypharmacy, primary language) is encouraged. If the study included a clinical population, relevant clinical factors should also be reported (e.g., diagnosis[es], symptom load, duration, severity).

13b. For each group, losses and exclusions after randomization, together with reasons. Data similar to those reported for the analyzed sample should also be reported for those not completing the trial, along with reasons for attrition. If the number of excluded participants or losses is sufficient to support statistical analysis, efforts should be made to identify any potential systematic factors related to attrition.

Recruitment

14a. Dates defining the periods of recruitment and follow-up. The length of time over which data were collected and the rate of recruitment must be reported. If data were drawn from an archive, the timeframe over which the data were accumulated must be reported. Methods of participant tracking (if applicable) must be reported along with the duration of the follow-up period.

14b. Why the trial ended or was stopped. If the trial ended prematurely, the specific reasons for premature termination must be described.

Baseline data

15. A table showing baseline demographic and clinical characteristics for each group. Baseline demographic and cultural variables previously established or suspected to have an impact on study outcome measures (e.g., age, sex, education, ethnicity), descriptive characteristics of the independent factors of the study (e.g., baseline cognitive status, disease variables), and other concurrent factors (e.g., type and dose of medications or medication-equivalent dosages) should be reported, and statistically significant discrepancies should be noted.

Numbers analyzed

16. For each group, number of participants (denominator) included in each analysis and whether the analysis was by originally assigned groups. If any participants were excluded from analysis after data collection, or if analyses were performed using groupings other than the original assignments (i.e., intention-to-treat vs. per-protocol), justification should be reported.

Outcomes and estimation

17a. For each primary and secondary outcome, results for each group and the estimated effect size and its precision (such as 95% confidence interval). Reporting of all statistical results must follow published criteria consistent with the current edition of the American Psychological Association Publication Manual. Effect sizes using a common metric and significance values should be precisely reported for both significant and non-significant findings; reporting significance values as simply falling below a pre-specified criterion (e.g., p < .05) is discouraged. For each measured outcome, relevant measures of central tendency and distribution statistics should be reported; where applicable, tables are encouraged.

17b. For binary outcomes, presentation of both absolute and relative effect sizes is recommended. Reporting of binary outcomes must include both relative and absolute risk along with confidence intervals in order to present comprehensive and clinically useful information.

Ancillary analyses

18. Results of any other analyses performed, including subgroup analyses and adjusted analyses, distinguishing pre-specified from exploratory. The same reporting standards that apply to the primary analyses also apply to any secondary analyses, clearly specifying which analyses were exploratory or post-hoc and which were pre-specified.

Harms

19. All important harms or unintended effects in each group (for specific guidance see CONSORT for harms). Reporting any potentially adverse events is critical and should be done in a manner consistent with the CONSORT extension specific to harms. Even minor unplanned events (e.g., unstandardized test administration, alternate forms, order effects) can yield significant effects on the measurement of cognitive function and must be noted to critically evaluate study results.

Discussion

Limitations

20. Trial limitations, addressing sources of potential bias, imprecision, and, if relevant, multiplicity of analyses. The discussion section should present a balanced argument addressing the data that support, as well as the data that contradict, each hypothesis, and consider possible alternative explanations for the findings. The limitations of the study should include any and all limitations of the study design, implementation, procedures, measures, or analyses that can affect the data collected and the results. In addition, caveats about potential biases should be clearly specified, along with how these biases have been mitigated.

Generalizability

21. Generalizability (external validity, applicability) of the trial findings. A discussion of the generalizability of the obtained findings is warranted, including cautions against over-generalization, to allow readers to determine the extent to which the trial results generalize to a particular individual or population of interest. If the study is of a targeted intervention, consideration of transportability to other populations is recommended.

Interpretation

22. Interpretation consistent with results, balancing benefits and harms, and considering other relevant evidence. The discussion section should include a logical summary of the study results and the relevance of the findings to the study hypotheses. It should be explicitly stated whether the hypotheses were supported, and if not, why this might have been the case.

Other information

Registration

23. Registration number and name of trial registry. Authors are strongly encouraged to register their trial, provide the name of the registry where the trial was registered (in the United States, this should be ClinicalTrials.gov), and provide the unique registration number assigned, in order to preserve the integrity of reported findings and minimize the consequences of multiple publication of trials, selective reporting of trial data or participants, and the use of per-protocol analyses (as opposed to intent-to-treat analyses). This is of particular significance for studies employing neuropsychological endpoints, which often include a substantially greater number of variables beyond those of primary or secondary interest.

Protocol

24. Where the full trial protocol can be accessed, if available. The availability of additional resources that would allow for replication of the study, if these are not published in full (e.g., treatment manuals, non-proprietary stimuli), should be identified, along with where these materials can be obtained.

Funding

25. Sources of funding and other support (such as supply of drugs), role of funders. Reporting of any potential conflicts of interest is required, as well as reporting of all funding sources.
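As referenced under criterion 7a, the following is a minimal sketch of a formal power analysis for a two-group parallel trial with a continuous cognitive endpoint, using the statsmodels power module. The effect size, alpha, and power values are assumptions chosen for illustration, not values drawn from this paper.

    # Sample size for an independent-samples t-test, given an assumed
    # effect size (Cohen's d = 0.5), two-tailed alpha = .05, power = .80.
    from statsmodels.stats.power import TTestIndPower

    n_per_group = TTestIndPower().solve_power(
        effect_size=0.5, alpha=0.05, power=0.80, ratio=1.0,
        alternative='two-sided')
    print(f"Required sample size per group: {n_per_group:.0f}")  # ~64

Reporting the parameters themselves (effect size, alpha, power, allocation ratio), rather than only the resulting n, is what allows readers to audit the calculation.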

Introduction
The essential component of the introduction is a review of the published literature,
including identified gaps of interest, in order to introduce the reader to the essential
scientific background and establish the theoretical rationale for conducting the research.
Primary aims and specific hypotheses logically follow from the review, and both should
be explicitly stated. It should be noted that hypotheses are pre-specified, testable
statements formulated to address the objective(s) of a trial and are more specific than
objectives. For clinical trials involving a primary or secondary neuropsychological
outcome, a brief review of previous research using the selected cognitive measure(s) is
strongly suggested to establish the rationale for selection of those measures. For research
studies integrating experimental methodologies (e.g., new or unproven measures, use of
existing measures in new patient groups or clinical settings) an explicit framework to
support the use of the new measure should be provided.

Methods
A core component of better reporting is emphasis on methods, including partici-
pants, measures, design, and analyses. Thorough explication of participant recruitment
practices and relevant randomization procedures (where applicable), including methods
used to assign interventions to research participants, is essential, as well as detailed
description of analytic strategies including data preparation and formal analysis. If data
were drawn from an archived database, a description of the database, including the origin
of the data, and how participants were selected for inclusion in the post-hoc study and
what exclusionary criteria were used is recommended. For prospective studies employing
a randomization schedule, the specific criteria outlined in Table 1 should be addressed,
and if randomization was not part of the study design, justification should be provided
and the relevant method(s) by which participants were grouped should be detailed.
Discussion of effective randomization is beyond the scope of the current paper, but key
elements of successful randomization include: (1) adequate generation of an unpredict-
able allocation sequence; (2) concealment of the randomization sequence from research
participants until assignment occurs; and (3) blinding of study team members to participants’ group membership. Providing a comprehensive
description of the participant randomization is a required component of RCTs.
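To make these three elements concrete, the following is a minimal sketch of generating an unpredictable, blocked allocation sequence for a hypothetical two-arm trial; the block size, arm labels, seed, and function name are illustrative assumptions rather than procedures prescribed by the CONSORT statement.

    import random

    def blocked_sequence(n_blocks, block_size=4, arms=("treatment", "control"), seed=2014):
        # A fixed seed makes the generated sequence auditable after the trial.
        rng = random.Random(seed)
        sequence = []
        for _ in range(n_blocks):
            # Each block contains an equal number of assignments per arm.
            block = list(arms) * (block_size // len(arms))
            rng.shuffle(block)
            sequence.extend(block)
        return sequence

    print(blocked_sequence(n_blocks=3))

In practice the sequence would be generated by someone independent of enrollment and held centrally (e.g., in sequentially numbered opaque envelopes) so that staff enrolling participants cannot predict the next assignment.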
Defining the targeted constructs and selected outcome measures is also critical for
RCTs employing neuropsychological endpoints. Across the neuropsychological litera-
ture, there is widespread use of broad cognitive domain labels such as “memory” and
“attention” (see Lezak, Howieson, & Loring, 2004; Strauss, Sherman, & Spreen, 2006),
but there is a relative lack of agreement on how these neuropsychological domains are
operationalized and measured. One result from such heterogeneity is that the same
domain has been defined and subsequently measured in multiple ways, using different
tests that do not necessarily overlap and may no longer reflect the same latent construct.
While efforts such as the National Institutes of Health (NIH) Toolbox (www.nihtoolbox.org) and the Cognitive Atlas (www.cognitiveatlas.org) may assist in creating greater
uniformity in measurement, inconsistent reporting of operational definitions will con-
tinue to limit the conclusions that may be drawn from clinical trials. Increasing CON-
SORT adherence may offer a potential remedy. For example, item 6a emphasizes
“completely defined, pre-specified primary and secondary outcome measures,” which
could be broadened to encourage authors to provide an operational definition of the tar-
get construct(s) of interest.
The process by which measures were selected for inclusion in analyses should
also be explicated. Given the multitude of available measures that can be drawn on to assess most cognitive domains, authors should be encouraged to provide justification for their chosen endpoints and, ideally, to describe the explicit process used during endpoint selection. Citation of previous research or formulation of a theoretical
rationale as to why a given measure or set of measures was selected should be reported,
and if currently available measures are deemed insufficient, the purported limitations
should be clearly stated. In this manner, the specific criteria that each test had to satisfy
in order to be included as an endpoint are made explicit.
For new experimental measures, descriptions of stimuli and process of administra-
tion and scoring should be provided in sufficient detail to enable independent replica-
tion. If variables were transformed, or composite scores constructed from individual
variables, these methods must be clear enough to enable replication, and moreover use
of composites must be theoretically justified. Comprehensive report of analytic proce-
dures is also required. Although most studies provide basic descriptions of the chosen
statistical tests, the level of detail provided is not always comprehensive nor are the
analyses adequate. For example, the analysis plan must include description of appropri-
ate statistical methods to avoid making Type I or Type II errors. If multiple statistical
tests were used (as is typically the case in neuropsychology studies) the issue of multi-
ple comparisons should be addressed and discussed (e.g., Bonferroni correction, or
alternative approaches to controlling false discovery rates) in order to ensure that
chance results are not misinterpreted.
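As one illustration of the kind of correction such an analysis plan might pre-specify, the sketch below adjusts a hypothetical set of per-endpoint p-values using both a Bonferroni correction and the Benjamini-Hochberg false discovery rate procedure from statsmodels; the p-values are invented for the example.

    from statsmodels.stats.multitest import multipletests

    p_values = [0.004, 0.012, 0.030, 0.041, 0.220]  # hypothetical per-endpoint p-values

    for method in ("bonferroni", "fdr_bh"):
        reject, p_adj, _, _ = multipletests(p_values, alpha=0.05, method=method)
        # p_adj holds the adjusted p-values; reject flags endpoints that
        # remain significant after correction.
        print(method, [round(p, 3) for p in p_adj], reject.tolist())

Whichever procedure is chosen, pre-specifying it in the analysis plan prevents the correction from being selected after the results are known.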
An essential component of the publication is that the selected dependent variables are defined in sufficient detail to enable independent replication. Specifically, selected outcome variables must be explicitly defined, including citations if the method relies on
previously published work. To report simply the name of a test that generates multiple
scores which was used as an outcome measure (as is the case with most neuropsycho-
logical measures) is inadequate. Too often, a research report will identify the measure(s) employed—e.g., “Wechsler Memory Scale-Fourth Edition (WMS-IV; Wechsler,
2008)”—but fail to identify which scores from the WMS-IV were used (i.e., subtest
scores, index scores) or if the scores reported reflect raw scores or standard scores.
Rather, the precise variable or index from the measure(s), the specific kind of score used for each variable (e.g., raw score, scaled score, standardized score, demographically adjusted value), and the normative data used for standardization (i.e., test manual, published standardization data, or standardization against a control group within the trial) should be reported.
Given the nature of RCT design, serial assessment is an inherent component and
it is essential that the analytic strategy address and account for systematic biases associ-
ated with repeated measures designs. Among other things, consideration for how prac-
tice effects may differ between groups and how baseline scores may correlate with the
magnitude of change (e.g., Rapport et al., 1997) is imperative and often neglected.
Carry-forward effects (i.e., the intervention influencing subsequent assessments due to
methodological familiarity if the chosen outcomes are sufficiently similar to the inter-
vention) should also be minimized by selecting sufficiently distinct outcome measures,
and accounting for any carry-forward effects in subsequent analyses. Also of significant
importance is the use of statistical methods to assess for reliable change (e.g., reliable
change indices; RCI) to assure changes over time are meaningful (Duff, 2012). There
are multiple methods available to determine reliable change (e.g., Chelune, Naugle,
Luders, Sedlak, & Awad, 1993; Jacobson & Truax, 1991; Maassen, 2004; Maassen,
Bossema, & Brand, 2009; Sawrie, Chelune, Naugle, & Luders, 1996), and determining
which is most appropriate for any given study is beyond the scope of this paper. How-
ever, use of a formal, a priori specified statistical method to determine the degree to
which change over time could be attributed to chance should be a component of any
research paradigm involving multiple assessments. If baseline scores are used as a
covariate then it must be clear that the groups do not differ on the covariate via direct
comparison, or that the regression slopes are similar across groups (i.e., the relation
between the covariate and dependent variables must not differ between groups).
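As a worked illustration of one such method, the sketch below computes the Jacobson and Truax (1991) reliable change index from hypothetical baseline and retest scores; the reliability and standard deviation are assumed values standing in for figures that would ordinarily come from a test manual or a control sample.

    import math

    def rci_jacobson_truax(baseline, retest, sd_baseline, reliability):
        sem = sd_baseline * math.sqrt(1.0 - reliability)  # standard error of measurement
        se_diff = math.sqrt(2.0) * sem                    # standard error of the difference
        return (retest - baseline) / se_diff

    # Example: baseline 95, retest 105, SD 15, test-retest r = .85.
    rci = rci_jacobson_truax(95, 105, 15, 0.85)
    print(f"RCI = {rci:.2f}")  # values beyond +/-1.96 suggest reliable change at p < .05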
Reports of clinical trial outcomes should also consider carefully how the findings
may be interpreted by clinicians. While appreciating the “clinical significance” of any
finding is complex, reports should consider carefully which metrics best complement
the reporting of group mean differences or treatment effect sizes. Some examples of fig-
ures of merit that may better convey clinical meaningfulness of findings include: the
“number needed to treat” statistic, area under the curve of receiver operating character-
istic curves comparing interventions, diagnostic classification rates and success rate
differences (Kraemer & Kupfer, 2006).
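For instance, the number needed to treat follows directly from the binary outcome counts that criterion 17b asks authors to report; the counts below are invented for illustration.

    def number_needed_to_treat(events_tx, n_tx, events_ctrl, n_ctrl):
        risk_tx = events_tx / n_tx        # event rate (e.g., cognitive decline), treated
        risk_ctrl = events_ctrl / n_ctrl  # event rate, control
        arr = risk_ctrl - risk_tx         # absolute risk reduction
        return 1.0 / arr

    # Example: decline in 12/60 treated vs. 24/60 control participants.
    print(round(number_needed_to_treat(12, 60, 24, 60)))  # NNT = 5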
A common problem confronting neuropsychological intervention studies is
potential confounding of the intervention/training methods with the methods used to
assess outcomes. If participants receive training on a particular task, it is critical to
consider the degree to which any learning is specific to the trained task or may gen-
eralize to other tasks that use different assessment methods. It is probably useful to
distinguish “near” from “far” generalization in these experimental designs, to assure
that conclusions are relevant to broader cognitive constructs or domains, and not
restricted to the specific measures on which individuals were trained. Similarly, it is
also important to assure that training or assessment methods do not inadvertently
reveal strategies that may enhance performance on subsequent outcome measures. For
example, instructing participants in the rules to solve a card-sorting test, and then
testing them on the same card-sorting test that used those rules, would not be appro-
priate to test for enhancement of executive functioning. For similar reasons, it is
problematic to use certain tests (e.g., such as some card-sorting tests; Basso,
Bornstein, & Lang, 1999; Basso, Lowery, Ghormley, & Bornstein, 2001), precisely
because they suffer from this “cat out of the bag” phenomenon; once a unique strat-
egy or rule is learned, that learning is not reversible, and thus may bias all subse-
quent measures.
Analyses must consider essential confounds due to participant variables known to
affect neuropsychological functioning (e.g., demographic characteristics, premorbid
ability) and, where appropriate, disease variables (e.g., severity indices), individual dif-
ferences in baseline performance, as well as other factors that are reasonably known to
affect neuropsychological measurement (e.g., medications, task engagement/effort, and/
or comorbid medical conditions). The analytic strategy should address individual partic-
ipant characteristics that might influence scores (i.e., age, education, gender, ethnicity)
and (as appropriate) disease variables (e.g., severity ratings, baseline functioning). The
publication should establish that groups are similar in these characteristics at baseline
via direct group comparison, and assure that group differences on the dependent vari-
able are not due to differences in these characteristics. Any identified differences should
be accounted for in subsequent analyses (e.g., by using these as covariates; determining
if results hold across subsets of the sample, as appropriate). Baseline demographic and clinical information may be quickly summarized in a table, allowing readers to know the characteristics of the participants in the trial, organized by the pri-
mary comparison groups. This information is critical in evidence-based neuropsycho-
logical practice to allow clinicians to determine if a particular trial included individuals
that were representative of a particular patient in basic demographic and clinical aspects
(Chelune, 2010).

Results
The results section should include a description of the flow of participants
through the trial, from selection on through entry, randomization, treatment, and com-
pletion/follow-up. How the participants in each group performed on primary and sec-
ondary outcome measures along with any unexpected outcomes or adverse events must
also be reported. Results sections should make clear exactly how missing data were
handled. In clinical trials, it is important to distinguish between per-protocol (PP) and intention-to-treat (ITT) analyses (Criterion 16), with preference generally given to the lat-
ter to avoid introduction of bias, as ITT analysis takes into account protocol deviations
and participant attrition when properly conducted (Chappell, 2012; Piantadosi, 2005).
With the increasing prevalence of “modified” ITT (mITT) samples (Abraha &
Montedori, 2010) and the variability in how mITT samples are defined (Alshurafa
et al., 2012; Gravel, Opatrny, & Shapiro, 2007), the importance of comprehensive and
thorough description of the analyzed sample cannot be overstated.
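A minimal sketch of how the two analysis sets diverge is given below, assuming a small trial table with hypothetical column names: the ITT set preserves the randomized groups, while the per-protocol set conditions on post-randomization events such as completion.

    import pandas as pd

    df = pd.DataFrame({
        "arm":       ["tx", "tx", "tx", "ctrl", "ctrl", "ctrl"],
        "completed": [True, True, False, True, False, True],
        "outcome":   [12.0, 9.5, 7.0, 8.0, 6.5, 7.5],
    })

    # ITT: analyze everyone as randomized, regardless of protocol deviations.
    itt_means = df.groupby("arm")["outcome"].mean()

    # Per-protocol: restrict to participants who completed the protocol.
    pp_means = df[df["completed"]].groupby("arm")["outcome"].mean()

    print(itt_means, pp_means, sep="\n")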
The inclusion of a “flow” diagram as part of reporting results is strongly recom-
mended, and careful flowchart description can help clarify the effectiveness of the ran-
domization strategy (Hopewell et al., 2011). Inclusion of a flow diagram also has been
found to improve reporting of results (Egger, Juni, & Bartlett, 2001). An appropriately
described diagram can also aid the reader in understanding the study and why some
participants did not receive a treatment as allocated or were lost to follow-up, as partici-
pant attrition post-randomization can be easily communicated with a good flowchart
description. An example of a suggested flow diagram is available on the CONSORT
website. Reporting attrition and participant exclusion is of particular importance in
order to minimize potential for erroneous conclusions (or frank bias) if an investigator
unknowingly creates imbalance on an important variable by excluding participants in
an unequal fashion among groups.
Within the results section, the reporting of each primary (and secondary) outcome
result by group should be included. Continuous data should be represented with appro-
priate measures of central tendency, variance, and range based on the nature of the vari-
able. For binary (e.g., number of participants with and without condition of interest)
and categorical variables, data should be presented with raw counts and percentages for each study group, as these data are essential for calculating relative risk and number-needed-to-treat (see criterion 17b). Precision (e.g., confidence interval)
should also be reported for each group as well as relative risk or odds ratio data where
appropriate. In comparing groups, the difference between groups, mean square error,
effect size, and confidence interval for the contrast between groups is essential to report
for primary and any secondary outcome statistical tests (see criterion 17a). It is also
important that authors clearly indicate the metric of scores presented (e.g., raw, stan-
dard, transformed), and making untransformed, raw scores available either in tables or
manuscript supplements may be particularly beneficial for meta-analysis and permit
independent post-hoc analyses. Unfortunately, selective reporting of test variables is a
widespread problem, particularly for investigator-led studies, and it is crucial for the
transparent reporting of outcomes research that all results of comparisons between
groups be reported regardless of whether or not they are interpreted as meaningful
(Al-Marzouki, Roberts, Evans, & Marshall, 2008; Chan, Hrobjartsson, Haahr, Gotzsche,
& Altman, 2004; Dwan, Gamble, Williamson, & Altman, 2008). When reporting
statistical results, it is essential that the report of statistical tests is not limited to significance
values. For any given statistical test, adequate data must be reported to comprehensively
and critically evaluate the findings including for example: degrees of freedom, the test
statistic (e.g., F-statistic, t statistic) and the associated significance value as appropriate.
If any exploratory sub-analyses were completed, these should be explicitly reported as
such.
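As an illustration of reporting that goes beyond significance values, the sketch below prints the test statistic, degrees of freedom, p-value, and Cohen's d with an approximate 95% confidence interval, using the large-sample variance approximation for d; the group scores are hypothetical.

    import math
    from scipy import stats

    def cohens_d_with_ci(x, y, alpha=0.05):
        nx, ny = len(x), len(y)
        sx, sy = stats.tstd(x), stats.tstd(y)  # sample standard deviations
        pooled_sd = math.sqrt(((nx - 1) * sx**2 + (ny - 1) * sy**2) / (nx + ny - 2))
        d = (sum(x) / nx - sum(y) / ny) / pooled_sd
        # Large-sample approximation to the variance of d.
        se_d = math.sqrt((nx + ny) / (nx * ny) + d**2 / (2 * (nx + ny)))
        z = stats.norm.ppf(1 - alpha / 2)
        return d, (d - z * se_d, d + z * se_d)

    treated = [12, 15, 14, 10, 13, 16, 11, 14]
    control = [10, 11, 9, 12, 10, 8, 11, 9]

    t, p = stats.ttest_ind(treated, control)
    d, ci = cohens_d_with_ci(treated, control)
    df = len(treated) + len(control) - 2
    print(f"t({df}) = {t:.2f}, p = {p:.3f}, d = {d:.2f}, "
          f"95% CI [{ci[0]:.2f}, {ci[1]:.2f}]")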
It is critical in evidence-based neuropsychology for clinicians to be aware of the
intended benefits, as well as unintended harms, of interventions. Reporting of all harms,
adverse events, and unintended consequences experienced by participants by group
within a trial should be a mandatory component of any published trial. However, recent
research suggests that peer-reviewed journal publications of pharmaceutical trials pro-
foundly under-report adverse events in comparison to the associated Clinical Study
Reports (CSR; Wieseler et al., 2013). An official CONSORT extension has been pub-
lished related to reporting of harms (Ioannidis et al., 2004), which includes 10 addi-
tional criteria relevant to reporting adverse and unanticipated events. It is certainly
possible that adverse events might have occurred independent from the intervention and
might be deemed related to the disease or some other uncontrolled variable, but these
adverse events should not be ignored and need to be reported.

Discussion
Many of the CONSORT-recommended guidelines are commonly recognized com-
ponents of discussion sections in neuropsychological reports, while others have been
neglected. A discussion section adhering to CONSORT should provide a concise inter-
pretation of the data, results, and implications for patient care. In general, the discussion
section should present: (a) a brief summary of the key findings; (b) a discussion of the mechanisms, processes, and explanations for the observations; (c) a comparison of the findings of the current study with relevant findings from other published studies, including previous studies that report data inconsistent with or divergent from the current findings; (d) the limitations of the current study, including those stemming from the methods, procedures, and/or statistical analyses employed; and (e) a summary of the clinical implications of the study (and of the broader published literature) for health decision making, together with implications for future research. Of critical importance for translation to clinical practice is a comprehensive discussion of both the study sample and the target population, including their relevant similarities and discrepancies.

CONCLUSIONS
Scientific progress is predicated on an empirical literature in which observations
are carefully described and results of studies are reported so that the same study can be
reproduced and results can be combined across studies to yield more general conclu-
sions. Reporting quality has been improved in other areas of biomedical science by
implementing reporting standards (e.g., CONSORT), and adoption of the CONSORT
criteria should yield similar benefits in the reporting of RCTs in which neuropsycholog-
ical variables are primary or secondary endpoints, or for studies utilizing a neuropsy-
chologically based intervention or manipulation. We argue that the application of
CONSORT guidelines to the reporting of clinical trials employing neuropsychological
endpoints is necessary to maintain clinical neuropsychology as a methodologically rig-
orous, scientific discipline striving to advance the science of brain–behavior relation-
ships.
Improved reporting via adoption of the CONSORT guidelines will benefit
researchers, educators, clinicians, reviewers, and healthcare consumers. A primary bene-
fit of increased adherence to CONSORT guidelines is methodological transparency, thus
facilitating independent replication. Adopting CONSORT guidelines will also improve
the ability of clinicians to consistently and reliably evaluate the quality of trials and
generalizability of findings. Explicit and thorough descriptions of treatments, settings,
and participants will facilitate the translation of science into practice and creation of
evidence-based methods to help guide clinical decision making. As more CONSORT-
adherent articles are published, it is anticipated that the literature base of tests and
concepts will become more consistent, which will aid meta-analysis of studies that
include neuropsychological measures. Perhaps more importantly, adherence to
CONSORT criteria will promote the integration of neuropsychological endpoints within
the science and practice of other biomedical sciences and clinical practices of our
physician and allied health provider colleagues.
The earliest improvements, however, will be realized during manuscript preparation, which will become more streamlined following the introduction of standardized reporting criteria that delineate specific requirements for publication. The efficiency and quality of the peer-review process will also improve, as manuscripts submitted for
review can be required to document minimum reporting standards with the initial sub-
mission, limiting the need for multiple revisions to address issues of inadequate report-
ing. For educators, adaptation of the CONSORT statement to the neuropsychology literature will facilitate the teaching of research methods by providing students with a structured framework both for developing research paradigms and for reviewing and critiquing articles (see Bowden et al., 2013).
As an extension of reporting standards, it is further possible to create systems to
measure the quality of neuropsychological studies, including clinical trials that use neu-
ropsychological endpoints. The Jadad Scale, a 5-point scale based on three items, is
already used widely to evaluate the methodological quality of clinical trials in other
areas of biomedical research (Jadad et al., 1996), but application of this scale among
neuropsychology journals has been limited. Although existing scales such as the Jadad Scale could potentially be applied to neuropsychological studies, they rely heavily on basic methodology and fail to account for some of the unique attributes of neuropsychological research (e.g., construct definition, variable selection, test reliability and validity, measurement confounds). Neuropsychology-specific reporting guidelines would make it more feasible to design measures of credibility and methodological rigor. Those measures could then be applied to advance meta-analytic studies and systematic reviews by clearly identifying and enabling the appropriate weighting of studies, as is already done in other disciplines. Given the significance of meta-analytic studies in evidence-based neuropsychology, which rely heavily on the inclusion of high-quality studies, formal and specific grading criteria for neuropsychological research must be established.
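As a concrete illustration, the following minimal sketch (in Python) scores a trial report in the style of the Jadad Scale (Jadad et al., 1996); the class and field names are our own invention for demonstration and do not reproduce the published instrument. A neuropsychology-specific measure would presumably add items for the attributes noted above, such as construct definition and test reliability.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class TrialReport:
    described_as_randomized: bool
    randomization_appropriate: Optional[bool]  # None if method not described
    described_as_double_blind: bool
    blinding_appropriate: Optional[bool]       # None if method not described
    withdrawals_described: bool

def jadad_score(report: TrialReport) -> int:
    """Compute a 0-5 Jadad-style quality score from reported trial features."""
    score = 0
    if report.described_as_randomized:
        score += 1
        if report.randomization_appropriate is True:
            score += 1  # appropriate method described (e.g., random number table)
        elif report.randomization_appropriate is False:
            score -= 1  # inappropriate method (e.g., alternation)
    if report.described_as_double_blind:
        score += 1
        if report.blinding_appropriate is True:
            score += 1  # credible blinding (e.g., identical placebo)
        elif report.blinding_appropriate is False:
            score -= 1  # blinding claimed but method inappropriate
    if report.withdrawals_described:
        score += 1      # withdrawals and dropouts described by group
    return score

# Example: randomized with an appropriate method, unblinded, dropouts reported
print(jadad_score(TrialReport(True, True, False, None, True)))  # prints 3
```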
The current call for the application and extension of CONSORT guidelines to neuropsychological studies complements other efforts that have been proposed within the behavioral medicine literature. However, the broad scope of the Journal Article Reporting Standards (JARS), advanced by the American Psychological Association, may limit its applicability to randomized clinical trials in which neuropsychological variables serve as a primary or secondary outcome measure. Moreover, given the lack of research assessing the extent to which research reports have adhered to the JARS guidelines, it remains unknown whether the quality of reports published in journals adopting JARS has improved. And although the Evidence-based Behavioral Medicine (EBBM) Statement proposed by Davidson et al. (2003) recognized the problems of using multiple measures to assess psychological constructs, its focus was to identify potential limitations in generalizability caused by poor fidelity of psychological interventions and to identify treatment fidelity indicators for those providing the interventions. A major emphasis of our proposed recommendations, by contrast, is to advance the reporting of randomized clinical trials employing cognitive endpoints by better defining neuropsychological terms, the measures used to quantify these constructs, and how neuropsychological measures are reported.
Implementation of reporting standards should influence only the ways in which trials are reported in the literature, by requiring explicit and transparent documentation of what should already be a component of any well-designed and scientifically rigorous trial. Although the CONSORT criteria are intended specifically for RCTs, many of the same principles are mirrored in other reporting guidelines (e.g., STROBE, TREND), and we agree with Bowden et al. (2013) and Loring and Bowden (2013) that reporting standards should be adopted for the publication of all types of neuropsychological studies. Adhering to the CONSORT guidelines when reporting the results of a randomized trial has not adversely affected the publication of clinical trials and should not add unduly to the length of research reports. With increasing limitations on printed space, full reporting may be challenging; however, space limitations are virtually non-existent for digital formats, and electronic publication has been steadily on the rise over the past decade (Schriger, Chehrazi, Merchant, & Altman, 2011). Use of online supplements and appendices to present detailed methodologies may also reduce space demands for publications in both print and electronic form.
The ultimate goal is to improve the literature, promote greater uniformity among published studies, and advance the practice of evidence-based neuropsychology. Implementation will be best achieved if adopted and promoted by journal editors, who can mandate submission of a reporting checklist along with each submitted manuscript. As an added benefit, inclusion of such a checklist should streamline the review process and expedite publication by reducing the number of revisions required due to inadequate reporting. Extensive resources are available for further reading and review of the CONSORT statement, including the CONSORT group website (www.consort-statement.org) as well as the original publications (Schulz et al., 2010) and explanatory documents (Moher et al., 2012). The present paper is intended as an introduction to the CONSORT guidelines with specific discussion of issues relevant to neuropsychology; it is not intended to serve as a comprehensive resource, nor as a replacement for the CONSORT reporting guideline. Researchers conducting randomized controlled trials using neuropsychological endpoints or interventions are strongly encouraged to consult the CONSORT resources.
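To illustrate how a mandated checklist might streamline review, the hypothetical sketch below flags checklist items that lack a page reference in a submission; only a few paraphrased CONSORT 2010 items are shown, and the data format is invented.

```python
# Paraphrased subset of the 25-item CONSORT 2010 checklist (illustrative only)
REQUIRED_ITEMS = {
    "1a": "Identification as a randomised trial in the title",
    "6a": "Pre-specified primary and secondary outcomes, with how/when assessed",
    "8a": "Method used to generate the random allocation sequence",
    "17a": "For each outcome, results by group with effect size and precision",
    "19": "All important harms or unintended effects in each group",
}

def missing_items(submission: dict) -> list:
    """Return checklist items with no page reference in the submission."""
    return [f"{item}: {desc}" for item, desc in REQUIRED_ITEMS.items()
            if not submission.get(item, "").strip()]

# Example: a submission documenting everything except harms (item 19)
pages_reported = {"1a": "p. 1", "6a": "p. 5", "8a": "p. 6", "17a": "pp. 9-11"}
for problem in missing_items(pages_reported):
    print("Missing:", problem)
```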

ACKNOWLEDGEMENT
This work was supported by the Consortium for Neuropsychiatric Phenomics
(NIH Roadmap for Medical Research grant UL1DE019580).

REFERENCES

Abraha, I., & Montedori, A. (2010). Modified intention to treat reporting in randomised
controlled trials: Systematic review. BMJ, 340, c2697. doi:10.1136/bmj.c2697
Agha, R., Cooper, D., & Muir, G. (2007). The reporting quality of randomised controlled trials in
surgery: A systematic review. International Journal of Surgery, 5(6), 413–422. doi:10.1016/
j.ijsu.2007.06.002
Al-Marzouki, S., Roberts, I., Evans, S., & Marshall, T. (2008). Selective reporting in clinical tri-
als: Analysis of trial protocols accepted by The Lancet. Lancet, 372(9634), 201.
doi:10.1016/s0140-6736(08)61060-0
Alshurafa, M., Briel, M., Akl, E. A., Haines, T., Moayyedi, P., Gentles, S. J., … Guyatt, G. H.
(2012). Inconsistent definitions for intention-to-treat in relation to missing outcome data:
Systematic review of the methods literature. PLoS One, 7(11), e49163. doi:10.1371/journal.
pone.0049163
Altman, D. G., & Dore, C. J. (1990). Randomisation and baseline comparisons in clinical trials.
Lancet, 335(8682), 149–153.
Altman, D. G., & Moher, D. (2013). Declaration of transparency for each research article. BMJ,
347, f4796. doi:10.1136/bmj.f4796
Alvarez, F., Meyer, N., Gourraud, P. A., & Paul, C. (2009). CONSORT adoption and quality of
reporting of randomized controlled trials: A systematic analysis in two dermatology journals.
British Journal of Dermatology, 161(5), 1159–1165. doi:10.1111/j.1365-2133.2009.09382.x
Barry, H. C., Ebell, M. H., Shaughnessy, A. F., Slawson, D. C., & Nietzke, F. (2001). Family
physicians’ use of medical abstracts to guide decision making: Style or substance? Journal
of the American Board of Family Practice, 14(6), 437–442.
Basso, M. R., Bornstein, R. A., & Lang, J. M. (1999). Practice effects on commonly used mea-
sures of executive function across twelve months. The Clinical Neuropsychologist, 13(3),
283–292. doi:10.1076/clin.13.3.283.1743
Basso, M. R., Lowery, N., Ghormley, C., & Bornstein, R. A. (2001). Practice effects on the Wis-
consin Card Sorting Test-64 Card version across 12 months. The Clinical Neuropsychologist,
15(4), 471–478. doi:10.1076/clin.15.4.471.1883
Begg, C., Cho, M., Eastwood, S., Horton, R., Moher, D., Olkin, I., … Stroup, D. F. (1996).
Improving the quality of reporting of randomized controlled trials. The CONSORT statement.
JAMA, 276(8), 637–639.
Bilder, R. M. (2011). Neuropsychology 3.0: Evidence-based science and practice. Journal of the
International Neuropsychological Society, 17(1), 7–13. doi:10.1017/s1355617710001396
Bossuyt, P. M., Reitsma, J. B., Bruns, D. E., Gatsonis, C. A., Glasziou, P. P., Irwig, L. M., … de
Vet, H. C. (2003). Towards complete and accurate reporting of studies of diagnostic accu-
racy: The STARD initiative. BMJ, 326(7379), 41–44.
Boutron, I., Moher, D., Altman, D. G., Schulz, K. F., & Ravaud, P. (2008a). Extending the CON-
SORT statement to randomized trials of nonpharmacologic treatment: Explanation and elabo-
ration. Annals of Internal Medicine, 148(4), 295–309.
Boutron, I., Moher, D., Altman, D. G., Schulz, K. F., & Ravaud, P. (2008b). Methods and pro-
cesses of the CONSORT Group: Example of an extension for trials assessing nonpharmaco-
logic treatments. Annals of Internal Medicine, 148(4), W60–66.
Bowden, S. C., Harrison, E. J., & Loring, D. W. (2013). Evaluating research for clinical signifi-
cance: Using critically appraised topics to enhance evidence-based neuropsychology. The
Clinical Neuropsychologist. doi:10.1080/13854046.2013.776636
Calvert, M., Blazeby, J., Altman, D. G., Revicki, D. A., Moher, D., & Brundage, M. D. (2013).
Reporting of patient-reported outcomes in randomized trials: The CONSORT PRO exten-
sion. JAMA, 309(8), 814–822. doi:10.1001/jama.2013.879
Campbell, M. K., Elbourne, D. R., & Altman, D. G. (2004). CONSORT statement: Extension to
cluster randomised trials. BMJ, 328(7441), 702–708. doi:10.1136/bmj.328.7441.702
Chan, A. W., Hrobjartsson, A., Haahr, M. T., Gotzsche, P. C., & Altman, D. G. (2004). Empirical
evidence for selective reporting of outcomes in randomized trials: Comparison of protocols
to published articles. JAMA, 291(20), 2457–2465. doi:10.1001/jama.291.20.2457
Chappell, R. (2012). Non-inferiority trials. In B. Ravina, J. L. Cummings, M. P. McDermott, &
R. M. Poole (Eds.), Clinical trials in neurology. New York, NY: Cambridge University
Press.
Chelune, G. J. (2010). Evidence-based research and practice in clinical neuropsychology. The
Clinical Neuropsychologist, 24(3), 454–467. doi:10.1080/13854040802360574
Chelune, G. J., Naugle, R. I., Luders, H., Sedlak, J., & Awad, I. A. (1993). Individual change
after epilepsy surgery: Practice effects and base-rate information. Neuropsychology, 7(1),
41–52. doi:10.1037/0894-4105.7.1.41
Cooper, H., Appelbaum, M., Maxwell, S., Stone, A., & Sher, K. J. (2008). Reporting standards
for research in psychology: Why do we need them? What might they be? American Psychol-
ogist, 63(9), 839–851. doi:10.1037/0003-066x.63.9.839
Couillet, J., Soury, S., Lebornec, G., Asloun, S., Joseph, P. A., Mazaux, J. M., & Azouvi, P.
(2010). Rehabilitation of divided attention after severe traumatic brain injury: A randomised
trial. Neuropsychological Rehabilitation, 20(3), 321–339. doi:10.1080/09602010903467746
Craft, S., Baker, L. D., Montine, T. J., Minoshima, S., Watson, G. S., Claxton, A., …Gerton, B.
(2012). Intranasal insulin therapy for Alzheimer disease and amnestic mild cognitive impair-
ment: A pilot clinical trial. Archives of Neurology, 69(1), 29–38. doi:10.1001/archneurol.
2011.233
Davidson, K. W., Goldstein, M., Kaplan, R. M., Kaufmann, P. G., Knatterud, G. L., Orleans, C.
T., … Whitlock, E. P. (2003). Evidence-based behavioral medicine: What is it and how do
we achieve it? Annals of Behavioral Medicine, 26(3), 161–171.
Des Jarlais, D. C., Lyles, C., & Crepaz, N. (2004). Improving the reporting quality of nonrandom-
ized evaluations of behavioral and public health interventions: The TREND statement.
American Journal of Public Health, 94(3), 361–366.
Duff, K. (2012). Evidence-based indicators of neuropsychological change in the individual patient:
Relevant concepts and methods. Archives of Clinical Neuropsychology, 27(3), 248–261.
doi:10.1093/arclin/acr120
Dwan, K., Gamble, C., Williamson, P. R., & Altman, D. G. (2008). Reporting of clinical trials: A
review of research funders’ guidelines. Trials, 9, 66. doi:10.1186/1745-6215-9-66
Edwards, J. D., Hauser, R. A., O’Connor, M. L., Valdes, E. G., Zesiewicz, T. A., & Uc, E. Y.
(2013). Randomized trial of cognitive speed of processing training in Parkinson disease.
Neurology, 81(15), 1284–1290. doi:10.1212/WNL.0b013e3182a823ba
Egger, M., Juni, P., & Bartlett, C. (2001). Value of flow diagrams in reports of randomized
controlled trials. JAMA, 285(15), 1996–1999.
Folkes, A., Urquhart, R., & Grunfeld, E. (2008). Are leading medical journals following their
own policies on CONSORT reporting? Contemporary Clinical Trials, 29(6), 843–846.
doi:10.1016/j.cct.2008.07.004
Fontelo, P. (2011). Consensus abstracts for evidence-based medicine. Evidence Based Medicine,
16(2), 36–38. doi:10.1136/ebm20003
Franck, N., Duboc, C., Sundby, C., Amado, I., Wykes, T., Demily, C., … Vianin, P. (2013).
Specific vs general cognitive remediation for executive functioning in schizophrenia: A multi-
center randomized trial. Schizophrenia Research, 147(1), 68–74. doi:10.1016/
j.schres.2013.03.009
Grant, S. P., Mayo-Wilson, E., Melendez-Torres, G. J., & Montgomery, P. (2013). Reporting qual-
ity of social and psychological intervention trials: A systematic review of reporting guide-
lines and trial publications. PLoS One, 8(5), e65442. doi:10.1371/journal.pone.0065442
Gravel, J., Opatrny, L., & Shapiro, S. (2007). The intention-to-treat approach in randomized
controlled trials: Are authors saying what they do and doing what they say? Clinical Trials,
4(4), 350–356. doi:10.1177/1740774507081223
Greenfield, M. L., Mhyre, J. M., Mashour, G. A., Blum, J. M., Yen, E. C., & Rosenberg, A. L.
(2009). Improvement in the quality of randomized controlled trials among general anesthesiol-
ogy journals 2000 to 2006: A 6-year follow-up. Anesthesia & Analgesia, 108(6), 1916–1921.
doi:10.1213/ane.0b013e31819fe6d7
Hampstead, B. M., Stringer, A. Y., Stilla, R. F., Giddens, M., & Sathian, K. (2012). Mnemonic
strategy training partially restores hippocampal activity in patients with mild cognitive
impairment. Hippocampus, 22(8), 1652–1658. doi:10.1002/hipo.22006
Han, C., Kwak, K. P., Marks, D. M., Pae, C. U., Wu, L. T., Bhatia, K. S., … Patkar, A. A.
(2009). The impact of the CONSORT statement on reporting of randomized clinical trials in
psychiatry. Contemporary Clinical Trials, 30(2), 116–122. doi:10.1016/j.cct.2008.11.004
Hanks, R. A., Rapport, L. J., Wertheimer, J., & Koviak, C. (2012). Randomized controlled trial
of peer mentoring for individuals with traumatic brain injury and their significant others.
Archives of Physical Medicine and Rehabilitation, 93(8), 1297–1304. doi:10.1016/
j.apmr.2012.04.027
Haynes, R. B., McKibbon, K. A., Walker, C. J., Ryan, N., Fitzgerald, D., & Ramsden, M. F.
(1990). Online access to MEDLINE in clinical settings. A study of use and usefulness.
Annals of Internal Medicine, 112(1), 78–84.
Hopewell, S., Clarke, M., Moher, D., Wager, E., Middleton, P., Altman, D. G., & Schulz, K. F.
(2008). CONSORT for reporting randomized controlled trials in journal and conference
abstracts: Explanation and elaboration. PLoS Med, 5(1), e20. doi:10.1371/journal.
pmed.0050020
Hopewell, S., Hirst, A., Collins, G. S., Mallett, S., Yu, L. M., & Altman, D. G. (2011). Reporting
of participant flow diagrams in published reports of randomized trials. Trials, 12, 253.
doi:10.1186/1745-6215-12-253
Ioannidis, J. P. (2005). Why most published research findings are false. PLoS Med, 2(8), e124.
doi:10.1371/journal.pmed.0020124
Ioannidis, J. P., Evans, S. J., Gotzsche, P. C., O’Neill, R. T., Altman, D. G., Schulz, K., & Moher,
D. (2004). Better reporting of harms in randomized trials: An extension of the CONSORT
statement. Annals of Internal Medicine, 141(10), 781–788.
Jacobson, N. S., & Truax, P. (1991). Clinical significance: A statistical approach to defining
meaningful change in psychotherapy research. Journal of Consulting and Clinical Psychol-
ogy, 59(1), 12–19.
Jadad, A. R., Moore, R. A., Carroll, D., Jenkinson, C., Reynolds, D. J., Gavaghan, D. J., &
McQuay, H. J. (1996). Assessing the quality of reports of randomized clinical trials: Is blind-
ing necessary? Controlled Clinical Trials, 17(1), 1–12.
Kaufman, D. A. S., Boxer, O., & Bilder, R. M. (Eds.). (2013). Evidence based science and prac-
tice in neuropsychology: A review. New York, NY: Oxford University Press.
Keefe, R. S., Bilder, R. M., Davis, S. M., Harvey, P. D., Palmer, B. W., Gold, J. M., … Lieberman,
J. A. (2007). Neurocognitive effects of antipsychotic medications in patients with chronic
schizophrenia in the CATIE Trial. Archives of General Psychiatry, 64(6), 633–647.
doi:10.1001/archpsyc.64.6.633
Kraemer, H. C., & Kupfer, D. J. (2006). Size of treatment effects and their importance to clinical
research and practice. Biological Psychiatry, 59(11), 990–996. doi:10.1016/j.biopsych.
2005.09.014
Lezak, M. D., Howieson, D. B., & Loring, D. W. (Eds.). (2004). Neuropsychological assessment
(4th ed.). New York, NY: Oxford University Press.
Loring, D. W., & Bowden, S. C. (2013). The STROBE statement and neuropsychology: Lighting
the way toward evidence-based practice. The Clinical Neuropsychologist. doi:10.1080/
13854046.2012.762552
Maassen, G. H. (2004). The standard error in the Jacobson and Truax Reliable Change Index:
The classical approach to the assessment of reliable change. Journal of the International
Neuropsychological Society, 10(6), 888–893.
Maassen, G. H., Bossema, E., & Brand, N. (2009). Reliable change and practice effects: Out-
comes of various indices compared. Journal of Clinical and Experimental Neuropsychology,
31(3), 339–352. doi:10.1080/13803390802169059
Mahncke, H. W., Connor, B. B., Appelman, J., Ahsanuddin, O. N., Hardy, J. L., Wood, R. A., …
Merzenich, M. M. (2006). Memory enhancement in healthy older adults using a brain plas-
ticity-based training program: A randomized, controlled study. Proceedings of the National
Academy of Science USA, 103(33), 12523–12528. doi:10.1073/pnas.0605194103
Marcelo, A., Gavino, A., Isip-Tan, I. T., Apostol-Nicodemus, L., Mesa-Gaerlan, F. J., Firaza, P.
N., … Fontelo, P. (2013). A comparison of the accuracy of clinical decisions based on full-
text articles and on journal abstracts alone: A study among residents in a tertiary care hospi-
tal. Evidence Based Medicine, 18(2), 48–53. doi:10.1136/eb-2012-100537
Masur, D., Shinnar, S., Cnaan, A., Shinnar, R. C., Clark, P., Wang, J., … Glauser, T. A. (2013).
Pretreatment cognitive deficits and treatment effects on attention in childhood absence epi-
lepsy. Neurology, 81(18), 1572–1580. doi:10.1212/WNL.0b013e3182a9f3ca
Moher, D. (1998). CONSORT: An evolving tool to help improve the quality of reports of ran-
domized controlled trials. Consolidated Standards of Reporting Trials. JAMA, 279(18),
1489–1491.
Moher, D., Hopewell, S., Schulz, K. F., Montori, V., Gotzsche, P. C., Devereaux, P. J., & Altman,
D. G. (2012). CONSORT 2010 explanation and elaboration: Updated guidelines for report-
ing parallel group randomised trials. International Journal of Surgery, 10(1), 28–55.
doi:10.1016/j.ijsu.2011.10.001
Moher, D., Jones, A., & Lepage, L. (2001). Use of the CONSORT statement and quality of
reports of randomized trials: A comparative before-and-after evaluation. JAMA, 285(15),
1992–1995.
Moher, D., Pham, B., Jones, A., Cook, D. J., Jadad, A. R., Moher, M., … Klassen, T. P. (1998).
Does quality of reports of randomised trials affect estimates of intervention efficacy reported
in meta-analyses? Lancet, 352(9128), 609–613. doi:10.1016/s0140-6736(98)01085-x
Moher, D., Schulz, K. F., & Altman, D. G. (2001). The CONSORT statement: Revised recom-
mendations for improving the quality of reports of parallel-group randomized trials. Journal
of the American Podiatric Medical Association, 91(8), 437–442.
Piaggio, G., Elbourne, D. R., Pocock, S. J., Evans, S. J., & Altman, D. G. (2012). Reporting of
noninferiority and equivalence randomized trials: Extension of the CONSORT 2010 state-
ment. JAMA, 308(24), 2594–2604. doi:10.1001/jama.2012.87802
Piantadosi, S. (2005). Treatment effects monitoring. In Clinical Trials: A methodologic perspec-
tive. Hoboken, NJ: John Wiley & Sons, Inc.
Plint, A. C., Moher, D., Morrison, A., Schulz, K., Altman, D. G., Hill, C., & Gaboury, I. (2006).
Does the CONSORT checklist improve the quality of reports of randomised controlled trials?
A systematic review. Medical Journal of Australia, 185(5), 263–267.
Pocock, S. J., Hughes, M. D., & Lee, R. J. (1987). Statistical problems in the reporting of clinical
trials. A survey of three medical journals. New England Journal of Medicine, 317(7),
426–432. doi:10.1056/nejm198708133170706
Rapport, L. J., Axelrod, B. N., Theisen, M. E., Brines, D. B., Kalechstein, A. D., & Ricker, J. H.
(1997). Relationship of IQ to verbal learning and memory: Test and retest. Journal of Clini-
cal and Experimental Neuropsychology, 19(5), 655–666. doi:10.1080/01688639708403751
Sackett, D. L., Rosenberg, W. M., Gray, J. A., Haynes, R. B., & Richardson, W. S. (1996).
Evidence based medicine: What it is and what it isn’t. BMJ, 312(7023), 71–72.
Saint, S., Christakis, D. A., Saha, S., Elmore, J. G., Welsh, D. E., Baker, P., & Koepsell, T. D. (2000).
Journal reading habits of internists. Journal of General Internal Medicine, 15(12), 881–884.
Sawrie, S. M., Chelune, G. J., Naugle, R. I., & Luders, H. O. (1996). Empirical methods for
assessing meaningful neuropsychological change following epilepsy surgery. Journal of the
International Neuropsychological Society, 2(6), 556–564.
Schriger, D. L., Chehrazi, A. C., Merchant, R. M., & Altman, D. G. (2011). Use of the Internet
by print medical journals in 2003 to 2009: A longitudinal observational study. Annals of
Emergency Medicine, 57(2), 153–160 e153. doi:10.1016/j.annemergmed.2010.10.008
Schulz, K. F., Altman, D. G., & Moher, D. (2010). CONSORT 2010 statement: Updated guide-
lines for reporting parallel group randomised trials. PLoS Med, 7(3), e1000251. doi:10.1371/
journal.pmed.1000251
Schulz, K. F., Chalmers, I., Altman, D. G., Grimes, D. A., & Dore, C. J. (1995). The methodolog-
ic quality of randomization as assessed from reports of trials in specialist and general medi-
cal journals. The Online Journal of Current Clinical Trials, Doc No 197, [81 paragraphs].
Smith, G. S., Laxton, A. W., Tang-Wai, D. F., McAndrews, M. P., Diaconescu, A. O., Workman,
C. I., & Lozano, A. M. (2012). Increased cerebral metabolism after 1 year of deep brain
stimulation in Alzheimer disease. Archives of Neurology, 69(9), 1141–1148. doi:10.1001/
archneurol.2012.590
Straus, E., Sherman, E. M. S., & Spreen, O. (Eds.). (2006). A compendium of neuropsychological tests:
Administration, norms, and commentary (3rd ed.). New York, NY: Oxford University Press.
Stuifbergen, A. K., Becker, H., Perez, F., Morison, J., Kullberg, V., & Todd, A. (2012). A ran-
domized controlled trial of a cognitive rehabilitation intervention for persons with multiple
sclerosis. Clinical Rehabilitation, 26(10), 882–893. doi:10.1177/0269215511434997
Turner, L., Shamseer, L., Altman, D. G., Schulz, K. F., & Moher, D. (2012). Does use of the CON-
SORT Statement impact the completeness of reporting of randomised controlled trials pub-
lished in medical journals? A Cochrane review. Systematic Reviews, 1, 60. doi:10.1186/2046-4053-1-60
Uman, L. S., Chambers, C. T., McGrath, P. J., Kisely, S., Matthews, D., & Hayton, K. (2010).
Assessing the quality of randomized controlled trials examining psychological interventions
for pediatric procedural pain: Recommendations for quality improvement. Journal of Pediat-
ric Psychology, 35(7), 693–703. doi:10.1093/jpepsy/jsp104
von Elm, E., Altman, D. G., Egger, M., Pocock, S. J., Gotzsche, P. C., & Vandenbroucke, J. P.
(2008). The Strengthening the Reporting of Observational Studies in Epidemiology
(STROBE) statement: Guidelines for reporting observational studies. Journal of Clinical
Epidemiology, 61(4), 344–349. doi:10.1016/j.jclinepi.2007.11.008
Wechsler, D. (2008). Wechsler Memory Scale (4th ed.). San Antonio, TX: Pearson Education.
Wieseler, B., Wolfram, N., McGauran, N., Kerekes, M. F., Vervolgyi, V., Kohlepp, P., … Grouven,
U. (2013). Completeness of reporting of patient-relevant clinical trial outcomes: Comparison
of unpublished clinical study reports with publicly available data. PLoS Med, 10(10),
e1001526. doi:10.1371/journal.pmed.1001526
Wolinsky, F. D., Mahncke, H., Vander Weg, M. W., Martin, R., Unverzagt, F. W., Ball, K. K., …
Tennstedt, S. L. (2010). Speed of processing training protects self-rated health in older
adults: Enduring effects observed in the multi-site ACTIVE randomized controlled trial.
International Psychogeriatrics, 22(3), 470–478. doi:10.1017/s1041610209991281
Wolinsky, F. D., Vander Weg, M. W., Howren, M. B., Jones, M. P., & Dotson, M. M. (2013). A
randomized controlled trial of cognitive training using a visual speed of processing interven-
tion in middle aged and older adults. PLoS One, 8(5), e61624. doi:10.1371/journal.
pone.0061624
Zwarenstein, M., Treweek, S., Gagnier, J. J., Altman, D. G., Tunis, S., Haynes, B., … Moher, D.
(2008). Improving the reporting of pragmatic trials: An extension of the CONSORT state-
ment. BMJ, 337, a2390. doi:10.1136/bmj.a2390