
Critical Appraisal of a Randomised Controlled Trial Exercise

The following exercise is designed as a practical example to allow you to
critically appraise a randomised controlled trial (RCT). Before answering the
questions you will need to read the published study report of the RCT and a
separate published report of the study protocol. References:

Hohnloser SH, Crijns HJGM, van Eickels M et al. Effects of Dronedarone on
Cardiovascular Events in Atrial Fibrillation. N Engl J Med 2009; 360: 668-78.

Hohnloser SH, Connolly SJ, Crijns HJGM et al. Rationale and Design of
ATHENA: A Placebo-Controlled, Double-Blind, Parallel Arm Trial to Assess the
Efficacy of Dronedarone 400 mg Bid for the Prevention of Cardiovascular
Hospitalization or Death from Any Cause in Patients with Atrial
Fibrillation/Atrial Flutter. J Cardiovasc Electrophysiol 2008; 19: 69-73.

CASP Critical Appraisal Study Tool

The CASP critical appraisal study tool for RCTs is available online at:
http://www.sph.nhs.uk/what-we-do/public-health-workforce/resources/critical-appraisals-skills-programme

The questions in this exercise are based on the questions in the CASP study
tool. In future you can use the questions below to evaluate an RCT or you can
access the tool online at the above address.

Screening Questions

The first two questions are screening questions; if the answer to either of
these is no, the study may be of limited usefulness.

Q1. Did the study ask a clearly focused question?

Q2. Was this an RCT, and was an RCT the appropriate design for the question?

Methodology

Questions 3 to 7 are intended to assess the methodology of the study. The
final published report of the study, which includes the results, gives very little
detail on the methodology, so you will also need to read the published report of
the study protocol to answer these questions.

Q3. Were participants appropriately allocated to intervention and control
groups?

(a) How were participants allocated to the intervention and control groups?
Was the process truly random?
(b) Was stratification used?
(c) How was the randomisation schedule generated and how were
participants allocated to intervention and control groups?
(d) Are the groups well balanced? Are any differences reported between
the groups at entry to the trial?
(e) Are there any differences that may confound the result?

Q4. Were participants, staff and study personnel blind to participants' study
group?

Consider:
That blinding is not always possible
Whether every effort was made to achieve blinding
Whether it matters in this study, i.e. whether there could be observer bias

Q5. Were all patients accounted for?

a) Was there a CONSORT diagram and were all the participants
accounted for?
b) Were the reasons for withdrawal given?
c) Did participants have the option to cross over from the treatment
allocated at randomisation to the other treatment, i.e. could placebo
patients switch to dronedarone or vice versa?
d) Were all participants in each study group followed up, i.e. was there
loss to follow-up?
e) Were all participants' outcomes analysed in the groups to which
they were originally allocated, i.e. was an intention-to-treat analysis
used?

Q6. Were the participants in all groups followed up and the data collected in
the same way?

Q7. Did the study have enough participants to minimise the play of chance?
i.e. was a power calculation reported?
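
A power calculation estimates how many participants are needed to detect a
clinically important difference with acceptable error rates. As a rough
illustration of the idea (not a reproduction of the ATHENA calculation), the
sketch below applies the standard normal-approximation formula for comparing
two proportions; the function name and event rates are invented for this
example.

import math

def per_group_sample_size(p_control, p_treatment):
    # Per-group sample size for comparing two proportions, using the usual
    # normal-approximation formula with two-sided alpha = 0.05 and 80% power.
    # Critical values are hard-coded to avoid extra dependencies.
    z_alpha = 1.959964   # z for 1 - alpha/2 when alpha = 0.05
    z_beta = 0.841621    # z for 80% power
    p_bar = (p_control + p_treatment) / 2
    term = (z_alpha * math.sqrt(2 * p_bar * (1 - p_bar))
            + z_beta * math.sqrt(p_control * (1 - p_control)
                                 + p_treatment * (1 - p_treatment)))
    return math.ceil(term ** 2 / (p_control - p_treatment) ** 2)

# Hypothetical event rates: 40% in the control arm, 35% expected on treatment.
print(per_group_sample_size(0.40, 0.35))   # about 1470 participants per group

When appraising the trial, compare the assumptions stated in the protocol
(expected event rate, target effect size, power) with what was actually
observed.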

Results

The results of an RCT should be scrutinised in a similar way to the methods.
Q8 and Q9 prompt us to ask questions about how meaningful the results
presented actually are.

Q8. How are the results presented and what is the main result?

(a) Are the results presented as a proportion of people experiencing an
outcome, such as risks, or as a measurement, such as a mean or median, or
as survival curves and hazards?
(b) How large is this result and how meaningful is it? (A worked sketch of
common effect measures follows this list.)
(c) How would you sum up the results of the trial in one sentence?
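
To make the question of effect size concrete, the sketch below derives the
usual effect measures from event counts; the counts are invented for
illustration and are not the ATHENA results.

# Hypothetical event counts, for illustration only (not the trial's data).
events_control, n_control = 400, 1000
events_treatment, n_treatment = 320, 1000

risk_control = events_control / n_control          # 0.40
risk_treatment = events_treatment / n_treatment    # 0.32

arr = risk_control - risk_treatment   # absolute risk reduction = 0.08
rr = risk_treatment / risk_control    # relative risk = 0.80
rrr = 1 - rr                          # relative risk reduction = 20%
nnt = 1 / arr                         # number needed to treat = 12.5

print(f"ARR={arr:.2f}, RR={rr:.2f}, RRR={rrr:.0%}, NNT={nnt:.1f}")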

Q9. How precise are these results?


(a) Is a confidence interval reported? (A brief sketch of how such an
interval is computed follows these questions.)
(b) Is a p-value reported?
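
As background to question 9, the sketch below shows how a 95% confidence
interval for a difference in event proportions can be computed with a normal
approximation; the counts are the same invented figures as in the previous
sketch, not trial data, and a trial report would more often quote an interval
around a hazard ratio.

import math

# Same hypothetical counts as above; illustrative only.
events_control, n_control = 400, 1000
events_treatment, n_treatment = 320, 1000

p_c = events_control / n_control
p_t = events_treatment / n_treatment
diff = p_c - p_t

# Standard error of the difference in proportions (normal approximation),
# then a 95% confidence interval using the 1.96 critical value.
se = math.sqrt(p_c * (1 - p_c) / n_control + p_t * (1 - p_t) / n_treatment)
lower, upper = diff - 1.96 * se, diff + 1.96 * se

print(f"Risk difference {diff:.3f}, 95% CI {lower:.3f} to {upper:.3f}")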

Relevance

Once we have assessed the quality of the methodology and considered the
importance of the results, we should think about how the results could be
applied to our local population and whether a change in practice seems
justified.

Q10. Were all important outcomes considered so the results can be applied?

(a) Were the people included in the trial similar to your population?
(b) What was the comparator and was it suitable?
(c) Were the study and follow-up of an appropriate duration for the
disease state and intervention under review?
(d) Does the setting of the trial differ from your local setting?
(e) Could the same treatment be provided in your local setting?
(f) Do the benefits of this treatment outweigh the risks/costs?
(g) Should policy or practice change as a result of the evidence
contained within this trial?
