
International Journal for Quality in Health Care 2004; Volume 16, Supplement 1: pp. i11–i25. 10.1093/intqhc/mzh032

Using clinical indicators in a quality improvement programme targeting cardiac care

ANNABEL HICKEY1, IAN SCOTT1, CHARLES DENARO2, NEIL STEWART1, CAMERON BENNETT2 AND THERESE THEILE2
1Department of Internal Medicine, Princess Alexandra Hospital, Brisbane, 2Department of Internal Medicine, Royal Brisbane Hospital, Brisbane, Australia

Abstract

Rationale. The Brisbane Cardiac Consortium, a quality improvement collaboration of clinicians from three hospitals and five divisions of general practice, developed and reported clinical indicators as measures of the quality of care received by patients with acute coronary syndromes or congestive heart failure.

Development of indicators. An expert panel derived indicators that measured gaps between evidence and practice. Data collected from hospital records and general practice heart-check forms were used to calculate process and outcome indicators for each condition. Our indicators were reliable (kappa scores 0.7–1.0) and widely accepted by clinicians as having face validity. Independent review of indicator-failed, in-hospital cases revealed that, for 27 of 28 process indicators, clinically legitimate reasons for withholding specific interventions were found in <5% of cases.

Implementation and results. Indicators were reported every 6 months in hospitals and every 10 months in general practice. To stimulate practice change, we fed back indicators in conjunction with an education programme, and provided, when requested, customized analyses to different user groups. Significant improvement was seen in 17 of 40 process indicators over the course of the project.

Lessons learned and future plans. Lessons learnt included the need to: (i) ensure brevity and clarity of feedback formats; (ii) liberalize patient eligibility criteria for interventions in order to maximize sample size; (iii) limit the number of data items; (iv) balance effort of indicator validation with need for timely feedback; (v) utilize more economical methods of data collection and entry such as scannable forms; and (vi) minimize the burden of data verification and changes to indicator definitions. Indicator measurement is being continued and expanded to other public hospitals in the state, while divisions of general practice are exploring lower-cost methods of ongoing clinical audit.

Conclusion. Use of clinical indicators succeeded in supporting clinicians to monitor practice standards and to realize change in systems of care and clinician behaviour.

Keywords: cardiac, clinical indicators, performance measures, quality improvement

Downloaded from intqhc.oxfordjournals.org by guest on October 8, 2011

Project rationale and conception

Improving quality of health care requires performance measurement and feedback. Various programmes targeting cardiac care have combined the use of clinical indicators with other quality improvement interventions in identifying and closing the gaps between routine care and evidence-based best practice [1–4]. However, in the successful development and use of clinical indicators, several dimensions need to be carefully considered: (i) defining the purpose of measurement; (ii) determining target standards of care; (iii) formulating data collection methods; (iv) converting data into usable clinical indicators; and (v) maximizing the impact of indicator feedback. In this report we describe our experience in undertaking these steps within a multifaceted quality improvement programme targeting in-hospital and post-hospital care of patients admitted with acute coronary syndromes (ACS) or congestive heart failure (CHF). An overview of our programme is provided in Box 1.

Address reprint requests to A. Hickey, Clinical Services Evaluation Unit, Princess Alexandra Hospital, Ipswich Road, Woolloongabba, Brisbane, Queensland, Australia 4102. E-mail: annabel_hickey@health.qld.gov.au

Box 1 Overview of the Brisbane Cardiac Consortium

The Brisbane Cardiac Consortium (BCC) was one of four quality improvement consortia in Australia to receive 2 years' funding from the federal government under the Clinical Support Systems Program (CSSP) auspiced by the Royal Australasian College of Physicians. The program targeted the in-hospital and post-hospital care of patients admitted with acute coronary syndromes (ACS) and congestive heart failure (CHF). The program ran from October 1, 2000, to August 31, 2002, and involved three teaching hospitals [Royal Brisbane (800 beds), Princess Alexandra (700 beds) and Queen Elizabeth II Hospitals (260 beds)] and four Divisions of General Practice within metropolitan Brisbane.

Program objectives were to optimize care using systematic processes of performance measurement and feedback combined with various quality improvement interventions. The latter included dissemination of locally developed clinical practice guidelines and other forms of decision support, academic detailing of patients and clinicians by clinical pharmacists, provision of patient self-management resources, and close liaison between hospital and general practitioners and community pharmacists regarding future patient management.

Primary outcome measures were changes in clinical indicators between baseline (1/10/00–17/4/01) and post-intervention (15/3/02–30/8/02) periods as measured on all consecutive patients who met pre-specified case definitions.* Post-hospital care indicators were collected on a subset of patients who satisfied eligibility criteria** and gave informed consent to post-hospital follow-up.

The program saw participation of 2495 patients (1584 with ACS; 911 with CHF), 10 cardiologists, 17 general physicians, 20 emergency physicians, 5 clinical pharmacists, 50 medical registrars, 200 residents, 150 nursing staff, and 1020 general practitioners.
*ACS: clinical diagnosis of ACS + elevation of cardiac enzyme markers (troponin or creatine kinase levels elevated to more than 1.5 and 2.0 times upper normal reference range respectively). CHF: clinical diagnosis and at least 3 key clinical signs (elevated jugular venous pressure, gallop rhythm, chest crackles to mid-zones bilaterally, pedal oedema, or pulmonary oedema or cardiomegaly on chest X-ray).
**Absence of major co-morbidity (physical or psychological) which precluded ability to self-care, permanent resident in the greater Brisbane area, community-living, and ability to speak English.

Development and design of clinical indicators

Defining the purpose of measurement

We wanted indicators that would guide and motivate internal quality improvement activities in hospital and general practice, and not be used merely for monitoring purposes by external agencies. To engage health care providers we wanted our indicators to be clinically relevant to specific processes of care that demonstrated an accepted link with desired health outcomes [5]. We aimed to present aggregate indicator data at the level of facility or, where sample size was large enough, at a departmental or group general practice level. As the target groups were identifiable groups of clinicians, we considered it vital that the indicators were robustly accurate.

Determining standards and indicators using multidisciplinary teams

We specified the target standards of care in a set of clinical practice guideline recommendations derived from a systematic review of the research evidence [6–8]. These guidelines were developed over a period of 5 months, using formal group consensus methods, by multidisciplinary panels of general physicians (n = 5), cardiologists (n = 5), general practitioners (GPs; n = 12), clinical pharmacists (n = 2), and advanced physician trainees (n = 3). These panels contained expertise in clinical content and measurement, and represented the interests of potential users of the indicators [9]. The development work was coordinated by the programme manager in liaison with the eight-member Clinical Guideline Working Group. The same multidisciplinary team derived sets of clinical indicators from guideline recommendations and a review of indicators published by other groups [10,11]. Indicators related to both process (what care was given to whom) and outcome (what was the end result of care), were designed to be as explicit and objective as possible, and were developed by group consensus.

Methods

Data collection

Data used to calculate in-hospital care indicators were abstracted by trained nurses from hospital medical records retrieved shortly after discharge. Post-hospital care data were collected using a 1-page heart-check form completed during GP consultations at 3, 6, and 12 months post-discharge. Examples of data forms used are provided in supplementary data. We restricted data elements for each indicator to those that were objectively verifiable and easily retrieved from medical records or GP case notes.
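The pre-specified case definitions in Box 1 lend themselves to a simple screening check at patient selection. The sketch below is our own illustration, not the Consortium's code; the function and field names are invented, with thresholds taken from the Box 1 footnote.

```python
def meets_acs_definition(clinical_dx: bool, troponin_ratio: float, ck_ratio: float) -> bool:
    """ACS case definition (Box 1): clinical diagnosis of ACS plus troponin
    elevated to >1.5x, or creatine kinase to >2.0x, the upper normal reference
    range. Ratios are measured level divided by the upper reference value."""
    return clinical_dx and (troponin_ratio > 1.5 or ck_ratio > 2.0)

def meets_chf_definition(clinical_dx: bool, key_signs: set) -> bool:
    """CHF case definition (Box 1): clinical diagnosis plus at least 3 key
    clinical or radiological signs (elevated JVP, gallop rhythm, bilateral
    crackles, pedal oedema, pulmonary oedema or cardiomegaly on X-ray)."""
    return clinical_dx and len(key_signs) >= 3

# A clinical diagnosis of ACS with troponin at 2.1x the upper reference
# limit meets the case definition.
print(meets_acs_definition(True, troponin_ratio=2.1, ck_ratio=1.0))  # True
```

Encoding the case definition this explicitly is what allows consecutive patients to be ascertained uniformly across sites and abstracters.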
Conversion of data into useful clinical indicators

Improving care processes known to have a direct link with health outcomes was our prime objective. To this end, process indicators predominated over outcome indicators, particularly as large, risk-adjusted samples are required to detect significant changes in outcome indicators (such as in-hospital death or 30-day readmission) given their relatively low event rates (in our experience <15%) [12]. We routinely gave clinicians information on 28 in-hospital and 12 post-hospital process indicators: 13 and six for ACS, and 15 and six for CHF, respectively (see Tables 1 and 2). We wanted our process indicators to accurately discriminate between eligible patients who should receive interventions (sensitivity) and ineligible patients who should not (specificity) [13].

We calculated each indicator as a proportion. In-hospital process indicators comprised the number of eligible patients actually receiving the intervention (numerator) over the number of patients eligible to receive it (denominator). Eligibility for each intervention was assessed as definite indication with no relative or absolute contra-indications as ascertained from hospital records. In contrast, post-discharge indicators were simply rates of overall prescription of drugs or non-drug interventions among all evaluable patients who had consented to follow-up and had been reviewed by GPs. This simpler format was deliberately designed to reduce the opportunity costs to GPs of collecting more detailed data about patient eligibility. Outcome indicators for in-hospital care comprised in-hospital death, 30-day same-cause readmission rate, and median length of stay. No outcome indicators for post-hospital care were included.

We sought to make our indicators credible and capable of eliciting clinician responses by maximizing the following attributes.

Relevance. Our process indicators were locally agreed, evidence-based, and important to both patients and clinicians [14].

Reliability. We minimized measurement error due to variations in sampling, data collection, or analysis [15] using strategies listed in Table 3.
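The indicator arithmetic described above (eligible patients receiving an intervention over all eligible patients), and the kappa statistic used in the re-abstraction audits listed in Table 3, can both be sketched briefly. This is an illustrative reconstruction rather than the Consortium's actual code, and the `Patient` fields are invented:

```python
from dataclasses import dataclass

@dataclass
class Patient:
    # Invented fields standing in for abstracted record items.
    indicated: bool          # definite indication for the intervention
    contraindicated: bool    # any relative or absolute contra-indication
    received: bool           # intervention actually given

def indicator_rate(patients):
    """Process indicator as a proportion: numerator = eligible patients who
    received the intervention; denominator = all eligible patients (definite
    indication with no contra-indications)."""
    eligible = [p for p in patients if p.indicated and not p.contraindicated]
    if not eligible:
        return None  # indicator not reportable for this sample
    return sum(p.received for p in eligible) / len(eligible)

def cohen_kappa(rater_a, rater_b):
    """Chance-corrected agreement between two abstracters on a binary data
    item, as used in re-abstraction audits of randomly chosen records."""
    n = len(rater_a)
    p_obs = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    pa, pb = sum(rater_a) / n, sum(rater_b) / n
    p_exp = pa * pb + (1 - pa) * (1 - pb)  # agreement expected by chance
    return 1.0 if p_exp == 1 else (p_obs - p_exp) / (1 - p_exp)

cohort = [Patient(True, False, True), Patient(True, False, True),
          Patient(True, False, False),
          Patient(True, True, False),    # contra-indicated: not in denominator
          Patient(False, False, False)]  # not indicated: not in denominator
print(indicator_rate(cohort))  # 2 of 3 eligible patients treated
```

Note that the two patients outside the denominator do not dilute the indicator; this is the sensitivity/specificity discrimination the text describes.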
Potential for error in collection of post-discharge data by GPs was minimized by use of a 1-page form containing straightforward instructions and allowing attachment of printouts of tests and medications from the GP's electronic records. Accuracy of discharge medications was confirmed by cross-checking information on the in-hospital and GP forms with that recorded in pharmacy databases. Re-abstraction audits of 5% of randomly chosen hospital records found the level of agreement between data abstracters and principal investigators to be high: kappa scores were 1.0 for case definition and ≥0.7 for all other items. The accuracy of GP-returned forms could not be verified as GP case notes were inaccessible to independent reviewers. However, scrutiny of returned forms confirmed entries in >70% of all data fields, with 95% completion for medication use.

Validity. We validated in-hospital process indicators by independently reviewing charts of all seemingly eligible patients who failed to receive specific interventions in the first

and last 6-month periods for those indicators that showed a failure rate of >20% (see below). The only indicator for which clinically legitimate reasons were detected for withholding care in >5% of indicator-failed cases was prescription of warfarin in patients with CHF. Review of the 113 patients deemed eligible for warfarin who did not receive this drug disclosed legitimate reasons for withholding it in 64 (56%), including relative contra-indications such as frailty or risk of falls in 43 (67%) of these cases.

Impact. To influence clinician behaviour, we chose indicators that had the capacity to signal the existence of important evidence-practice gaps and that were potentially amenable to change [16,17]. We presented indicator values in easy-to-interpret graphical and tabular formats, which depicted changes in indicator rates over time (Figures 1 and 2) and which included a reference standard (or target) of 100% of highly eligible patients receiving the stated intervention. In addition, we employed various strategies (see below) to enhance the ability of indicator feedback to stimulate clinician attempts to improve care [18,19].

Maximization of clinical indicator impact

Feedback of in-hospital care indicators occurred at 6-monthly intervals, while feedback of post-discharge (general practice) indicators occurred over two rounds separated by 18 months, as follow-up data were collected more slowly and the patient sample was only one-third that of the in-hospital sample.

Dissemination

We disseminated results via hospital unit meetings, hospital and divisions of general practice newsletters, drug and therapeutics bulletins, grand round presentations, educational meetings, and in-service training sessions. We ensured doctors, nurses, and clinical pharmacists at all levels were exposed to indicator feedback [20].

Feedback partnered with education and discussion

Clinical audit and periodic feedback, in isolation, engender only moderate levels of change [21].
Lack of skills, knowledge, and leadership are some of the known barriers to practice improvement [22]. Consequently, we coupled indicator feedback in hospitals with small group discussions about guideline recommendations and the significance of the indicator findings. Practice improvement was promoted through citing case studies of successful innovation elsewhere, and the formation of interdisciplinary groups that took responsibility for improving specific care processes [23].

Customized indicator analyses

Encouraged by the work of others [17], and in response to local requests, we provided more personalized reports to individual hospital consultants about performance within his or her unit if patient samples were large enough to give statistically meaningful trends. These reports compared that consultant's performance with peers from all three hospitals and included patient demographics to reconcile any differences in case mix.
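For reports of this kind, an individual consultant's indicator rate can be compared with the pooled peer rate using a standard two-proportion z-test. The sketch below is our illustration (the paper does not specify which test, if any, was used) and the figures are hypothetical:

```python
from math import sqrt, erf

def two_proportion_z(x1, n1, x2, n2):
    """z statistic and two-sided p-value for comparing an individual rate
    (x1/n1) with a pooled peer rate (x2/n2), normal approximation."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    phi = 0.5 * (1 + erf(abs(z) / sqrt(2)))  # standard normal CDF at |z|
    return z, 2 * (1 - phi)

# Hypothetical: one consultant discharges 35 of 50 eligible patients on a
# beta-blocker versus 255 of 300 across the peer group.
z, p = two_proportion_z(35, 50, 255, 300)
print(f"z = {z:.2f}, p = {p:.3f}")
```

With small unit-level samples this test is underpowered, which is why such reports were only offered where samples were "large enough to give statistically meaningful trends".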

Feedback from providers and modifications to the original plan

Clinician responses to reported indicators in the first feedback round centred, not surprisingly, on data quality and face validity,

Table 1 Process indicators for in-hospital care, with eligibility criteria (inclusions and exclusions)

Acute coronary syndromes

Presentation
- ECG: proportion of patients receiving ECG within 10 minutes of hospital arrival. Inclusions: all patients. Exclusions: nil.
- Thrombolysis: proportion of highly eligible patients receiving thrombolysis. Inclusions: patients with ST segment elevation or new left bundle branch block. Exclusions: recent trauma or surgery, cardiopulmonary resuscitation, prior cerebrovascular accident or transient ischaemic attack, uncontrolled hypertension (>180/100 mmHg), coagulopathy, active gastrointestinal bleeding, late (>12 hours) presentation, patient refusal, uncertain diagnosis, or scheduled for primary angioplasty.
- Time to lysis: proportion of highly eligible patients receiving thrombolysis within 60 and 30 minutes of hospital arrival. Inclusions: all highly eligible patients receiving thrombolysis. Exclusions: nil.

In-hospital course
- Cardiac counselling: proportion of highly eligible patients receiving in-hospital cardiac counselling. Inclusions: all patients. Exclusions: nil.
- Assessment of serum lipids: proportion of patients undergoing testing of serum lipids. Inclusions: all patients. Exclusions: nil.

Discharge status1
- β-blocker: proportion of highly eligible patients prescribed β-blocker. Inclusions: all patients. Exclusions: asthma, severe chronic obstructive pulmonary disease (FEV1 <50% predicted or severe COPD recorded in medical record), pulse rate at discharge <60 b.p.m., systolic blood pressure at discharge <90 mmHg, cardiogenic shock, adverse drug reaction.
- Anti-platelet agents: proportion of highly eligible patients prescribed anti-platelet agents (aspirin or clopidogrel). Inclusions: all patients. Exclusions: active peptic ulcer, recent or past major bleeding, concurrent warfarin therapy, adverse drug reaction to aspirin or clopidogrel.
- ACE inhibitors: proportion of highly eligible patients prescribed ACE inhibitors. Inclusions: past history or in-hospital onset of congestive heart failure, LV ejection fraction <40% or LV systolic dysfunction on echocardiography. Exclusions: serum potassium >5.5 mmol/l, serum creatinine >0.3 mmol/l, systolic blood pressure <90 mmHg, severe aortic stenosis (defined as aortic valve area <0.9 cm2), adverse drug reaction.
- Lipid-lowering agents: proportion of highly eligible patients prescribed lipid-lowering agents. Inclusions: random serum cholesterol >4.0 mmol/l. Exclusions: adverse drug reaction.

(continued)

Table 1 continued

- Cardiac rehabilitation: proportion of highly eligible patients referred to outpatient cardiac rehabilitation programme. Inclusions: all patients. Exclusions: primary or rescue angioplasty, age >75 years, current smoker, severe COPD, stroke with hemiplegic deficit, renal disease (serum creatinine ≥0.2 mmol/l), advanced liver disease, advanced cancer, alcohol/drug dependence, living in residential care.
- Coronary angiography: proportion of highly eligible patients undergoing early coronary angiography (during index admission or scheduled within 30 days of discharge). Inclusions: recurrent angina or reinfarction, or NSTEMI, or inducible ischaemia on non-invasive testing.
- Non-invasive risk stratification: proportion of highly eligible patients undergoing non-invasive risk assessment (during index admission or within 30 days of discharge). Inclusions: all patients. Exclusions: coronary angiography (performed or scheduled) or other exclusions as listed under coronary angiography.

Congestive heart failure

At presentation
- Recording underlying causes: proportion of patients for whom underlying causes for heart failure were recorded in hospital notes. Inclusions: all patients. Exclusions: nil.
- Recording acute precipitants: proportion of patients for whom acute precipitants were recorded in hospital notes. Inclusions: all patients. Exclusions: nil.
- Fluid regimens: proportion of patients for whom a fluid management regimen was explicitly recorded in hospital notes. Inclusions: all patients. Exclusions: nil.
- Daily weighing: proportion of patients undergoing daily weighing to assess effectiveness of diuresis. Inclusions: all patients. Exclusions: nil.
- DVT prophylaxis: proportion of patients receiving DVT prophylaxis. Inclusions: all patients. Exclusions: concurrent warfarin therapy.
- Dietitian review: proportion of patients receiving dietitian review regarding salt and fluid intake. Inclusions: all patients. Exclusions: nil.
- Testing of thyroid function: proportion of patients undergoing thyroid function testing. Inclusions: atrial fibrillation as new arrhythmia.

(continued)

Table 1 continued

- Assessment of left ventricular function: proportion of patients who have undergone left ventricular imaging either during index admission or within previous 12 months. Inclusions: all patients. Exclusions: nil.
- Clinical pharmacist review: proportion of patients receiving review by clinical pharmacist. Inclusions: all patients. Exclusions: nil.

Discharge status1
- ACE inhibitors: proportion of highly eligible patients prescribed ACE inhibitor at discharge. Inclusions: LV ejection fraction <40% or LV systolic dysfunction on LV imaging. Exclusions: serum potassium >5.5 mmol/l, serum creatinine >0.3 mmol/l, systolic blood pressure <90 mmHg, severe aortic stenosis (defined as aortic valve area <0.9 cm2), renal artery stenosis, adverse drug reaction.
- ACE inhibitor dose: proportion of highly eligible patients prescribed ACE inhibitors at discharge who receive target dose. Inclusions: all highly eligible patients receiving ACE inhibitors. Exclusions: nil.
- β-blockers: proportion of highly eligible patients prescribed β-blockers at discharge. Inclusions: LV ejection fraction <40% or LV systolic dysfunction on LV imaging. Exclusions: systolic blood pressure <90 mmHg, pulse rate <60 b.p.m., past history of severe COPD (defined as above) or asthma, adverse drug reaction.
- Warfarin: proportion of highly eligible patients prescribed warfarin at discharge. Inclusions: chronic atrial fibrillation or flutter. Exclusions: systolic blood pressure >150 mmHg, serum creatinine >0.30 mmol/l, past history of active peptic ulcer disease or recent major bleed (including intracranial bleeding), perceived risk of fall, adverse drug reaction.
- Deleterious agents: proportion of patients who did not receive deleterious agents (class I antiarrhythmic agents, verapamil, diltiazem, NSAIDs, tricyclic antidepressants). Inclusions: all patients. Exclusions: nil.
- Clinic follow-up: proportion of patients scheduled for outpatient clinic review within 4 weeks of discharge. Inclusions: all patients. Exclusions: nil.

ECG, electrocardiogram; NSTEMI, non-ST segment elevation myocardial infarction; LV, left ventricular; FEV, forced expiratory volume; COPD, chronic obstructive pulmonary disease; ACE, angiotensin-converting enzyme; NSAID, non-steroidal anti-inflammatory drug; DVT, deep venous thrombosis.
1Patients discharged alive and not transferred to other hospitals.

with some recipients wanting further verification of data accuracy and justification of the method of indicator calculation. The rigorous research approach to measurement can appear at odds with the more pragmatic process of measurement for improvement [24]. Clinicians experiencing difficulty in accepting indicator feedback were invited to review the records of their indicator-failed patients. Only four clinicians accepted this offer and none asked for the reported indicators to be revised. In the second round we emphasized indicators for which improvement in care would result in the greatest gains in

patient outcome. We offered potential solutions based on literature reviews and action research conducted elsewhere. We collaborated with various groups in providing more customized data that better informed their attempts to improve care. This support-giving approach was further consolidated in the third round of feedback, by which time data quality was no longer an issue. Various multidisciplinary groups were now acting across professional boundaries to improve specific aspects of care [23]. At this time, we surveyed a representative sample (n = 150) of hospital clinicians about the usefulness and impact of indicator feedback. Although the response rate was low (25%), >70% of respondents found feedback useful and wished to continue receiving it, while 52% reported that it had resulted in changes to practice.

During the course of our implementation, we modified the project design as a result of clinician feedback and internal re-appraisal.

Brevity and clarity. Our first post-discharge care feedback newsletter to GPs met with almost universal dismissal on the grounds that it contained too much uninterpretable data that could not be assimilated by busy practitioners. Consequently we restricted ourselves to simple tables and boxed key messages (see Figure 3), which elicited positive responses.

Table 2 Process indicators for post-discharge care measured at 3, 6, and 12 months following hospital discharge

Acute coronary syndromes
- Anti-platelet agent: proportion of patients receiving anti-platelet therapy
- β-blockers: proportion of patients receiving β-blockers
- ACE inhibitor: proportion of patients receiving ACE inhibitor
- Lipid-lowering agents: proportion of patients receiving lipid-lowering therapy
- Control of serum lipids: proportion of patients achieving serum cholesterol <4 mmol/l or low density lipoprotein <2.6 mmol/l
- Smoking cessation counselling: proportion of active smokers who have received smoking cessation counselling

Congestive heart failure
- ACE inhibitor: proportion of patients receiving ACE inhibitor
- ACE inhibitor dose: proportion of patients receiving ACE inhibitor who receive target dose
- β-blockers: proportion of patients receiving β-blockers
- Control of blood pressure: proportion of patients achieving systolic blood pressure ≤130 mmHg and diastolic blood pressure ≤80 mmHg
- Weight monitoring: proportion of patients who are being weighed at each follow-up visit to assess fluid status
- Exercise prescription: proportion of patients receiving exercise prescription

ACE, angiotensin-converting enzyme.

Table 3 Strategies to enhance indicator reliability
- Develop a specific case definition for patient selection
- Construct a core dataset of items with unambiguous operational definitions
- Develop a procedure and data entry manual that defines all data fields
- Train data collectors in the use of standardized datasheets to collect information using the data entry manual
- Minimize errors in database design and reporting through regular communication between clinicians, data managers, and statisticians
- Develop database logic checks to alert when incorrect data are being entered, e.g. a discharge date that is before an admission date
- Pilot test the indicators and data collection methods, refine them over a set period (in our case, 3 months), and update the data entry manual accordingly
- Monitor data reliability with frequent interim analyses and regular meetings of data collectors to ensure consistency of data item interpretation and measurement
- Undertake re-abstraction audits of randomly selected hospital charts to assess level of agreement among abstracters (inter-rater reliability)

Liberalization of eligibility criteria.
With certain pharmacological indicators (e.g. warfarin, ACE inhibitors) we found that by using restricted indications and inclusive contra-indications, very few patients demonstrated eligibility for treatment, and thus the indicator was rendered useless. In such cases we created, by consensus, a new version of the indicator with more liberal eligibility criteria that still accorded with evidence-based clinical decision-making.

Balancing effort of validation with the need for timely feedback. We found that repeated minor corrections of the raw data aimed at maximizing validity of the calculated indicator did not make any significant difference to the reported indicator value, and simply delayed its timely release which, in turn,

reduced its impact on clinicians. We reconfirmed Hannan's dictum: do not wait for better data; perfect should not be the enemy of good [25].
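The database logic checks listed in Table 3 (the authors' example is a discharge date that precedes the admission date) amount to simple record-level assertions. A sketch with invented field names; the date check is from Table 3, the range checks are plausible additions of our own:

```python
from datetime import date

def logic_check(record):
    """Return messages for internally inconsistent entries so that errors
    are caught at data entry rather than at analysis."""
    problems = []
    if record["discharge_date"] < record["admission_date"]:
        problems.append("discharge date precedes admission date")
    ef = record.get("lv_ejection_fraction")
    if ef is not None and not 0 <= ef <= 100:
        problems.append("ejection fraction outside 0-100%")
    sbp = record.get("systolic_bp")
    if sbp is not None and not 50 <= sbp <= 300:
        problems.append("implausible systolic blood pressure")
    return problems

rec = {"admission_date": date(2001, 3, 10),
       "discharge_date": date(2001, 3, 8),  # keyed out of order
       "lv_ejection_fraction": 35, "systolic_bp": 120}
print(logic_check(rec))  # ['discharge date precedes admission date']
```

Checks of this kind catch transcription slips early, reducing the burden of data verification that the authors list among their lessons learnt.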

Figure 1 Graphical feedback of consecutive series of values of process indicators for in-hospital care of patients with acute coronary syndromes (panel: discharge status; y-axis: % of eligible patients receiving treatment). Series 1: patients discharged 17 October 2000 to 16 April 2001 (note: no data on heparin use were collected for patients in this series). Series 2: patients discharged 17 April 2001 to 20 September 2001. Series 3: patients discharged 21 September 2001 to 18 February 2002. Series 4: patients discharged 19 February 2002 to 31 August 2002. A-A, patients receiving anti-platelet agents; ACE-I, patients receiving angiotensin-converting enzyme inhibitors; B-B, patients receiving β-blocker; LLA, patients receiving lipid-lowering agents; Heparin, patients receiving heparin infusion.

Avoiding frequent minor changes in indicator definition [26]. While data sets should allow for changes in indicator definitions to reflect evidence-based changes in practice, minor changes served only to confuse clinicians wanting to make comparisons across feedback periods. After the first round we ceased making different versions of the same indicators, except where mandated by the publication of results of important new clinical trials.

Discontinuation of indicator reports based on small samples. We originally envisaged a 3-month rather than a 6-month cycle for in-hospital feedback. While we desired timely indicator feedback, the shorter cycle led to smaller patient samples which, for some indicators with low event rates, introduced excess random error [27]. While statistical process control methods [28,29] could have been used to correct for such error, we were concerned that adding such analyses to our feedback formats would have jeopardized their interpretability for the majority of clinicians.

Discontinuation of formal feedback sessions.
By the third round of feedback, hospital clinicians did not desire feedback to be accompanied by formal discussion sessions, suggesting that external facilitation becomes redundant once the culture of improvement has been established.
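The statistical process control methods mentioned above [28,29] typically take the form of a p-chart: the indicator proportion plotted per feedback period against 3-sigma limits around the pooled rate. A minimal sketch, our illustration with made-up figures:

```python
from math import sqrt

def p_chart(points):
    """points: (observed_rate, sample_size) per feedback period.
    Returns the centre line and, per period, (rate, lcl, ucl, in_control).
    Points outside the 3-sigma limits suggest special-cause variation
    rather than random error from small samples."""
    p_bar = sum(r * n for r, n in points) / sum(n for _, n in points)
    rows = []
    for rate, n in points:
        s = sqrt(p_bar * (1 - p_bar) / n)  # binomial SD of a proportion
        lcl, ucl = max(0.0, p_bar - 3 * s), min(1.0, p_bar + 3 * s)
        rows.append((rate, lcl, ucl, lcl <= rate <= ucl))
    return p_bar, rows

# Three made-up feedback periods; the jump to 0.90 falls outside the limits.
centre, rows = p_chart([(0.70, 200), (0.75, 210), (0.90, 190)])
```

The limits widen as the per-period sample shrinks, which is exactly the small-sample random error that led the authors to abandon the 3-month feedback cycle.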

Results

Preliminary results

Baseline process indicators suggested suboptimal performance in several areas [30,31], with subsequent improvement in most indicators and significant change in 17 of 40 (Tables 4–7). It is impossible to gauge the extent to which these improvements in care can be attributed solely to clinical indicator feedback among several concurrent quality improvement interventions. However, focus group discussions and results of the previously mentioned questionnaire survey suggested that indicator feedback had stimulated changes in practice.

Lessons learned and future plans

Our experience yielded several lessons that are guiding future plans. Firstly, keep the number of data elements to a minimum. Our dataset for in-hospital care comprised 171 variables for ACS and 204 for CHF, many relating to patient demographics and clinical characteristics, which we reasoned were relevant to determining process-of-care eligibility or conducting risk-adjusted outcome comparisons. In retrospect, a large fraction of these data (and the effort involved in collecting them) added little to the validity of reported indicators and was not required for case-mix adjustment in the presence of case definitions and standardized patient ascertainment. In a further extension of our work, data elements have been reduced to 50 for ACS and 45 for CHF [32]. Secondly, more economical methods of data collection and entry are worth exploring. In the absence of universal electronic medical records, proformas that are read by text recognition software are being trialled [33], affording more rapid importation of data into databases and a reduction in transcription errors.
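The earlier point that outcome indicators need large samples given their low event rates (<15%) can be made concrete with a standard two-proportion sample-size approximation. This is our sketch, not a calculation from the paper:

```python
from math import sqrt, ceil

def n_per_period(p1, p2, z_alpha=1.96, z_beta=0.84):
    """Approximate patients needed per measurement period to detect a change
    from rate p1 to rate p2 (two-sided alpha = 0.05, 80% power)."""
    p_bar = (p1 + p2) / 2
    term = (z_alpha * sqrt(2 * p_bar * (1 - p_bar))
            + z_beta * sqrt(p1 * (1 - p1) + p2 * (1 - p2)))
    return ceil(term ** 2 / (p1 - p2) ** 2)

# Detecting a fall in an outcome such as in-hospital mortality from 10% to 7%
# takes over a thousand patients per period, far beyond a 6-month sample;
# a process indicator moving from 60% to 75% needs only about 150.
print(n_per_period(0.10, 0.07), n_per_period(0.60, 0.75))
```

This asymmetry is the quantitative reason process indicators predominated over outcome indicators in the programme.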

Figure 2 Tabular feedback of clinical indicators for in-hospital care of acute coronary syndromes.

Figure 3 Feedback forms relating to post-discharge (general practice) care.

Figure 3 continued.

Table 4 Changes in process and outcome indicators for in-hospital care for acute coronary syndromes

Indicator                                                Baseline (n = 428)   Final remeasurement period (n = 436)
                                                         (1/10/00-17/4/01)    (15/2/02-30/8/02)
Presentation
  Electrocardiograph within 10 minutes of presentation1  145/238 (61%)        170/244 (70%)
  Lysis administration                                   49/49 (100%)         39/39 (100%)
  Lysis in 60 minutes                                    35/39 (71%)          28/39 (72%)
  Lysis in 30 minutes                                    17/49 (35%)          16/39 (41%)
In-hospital course
  Lipid levels checked                                   311/428 (73%)        336/436 (77%)
  Non-invasive risk stratification tests                 17/57 (30%)          17/55 (31%)
  Coronary angiography                                   41/45 (91%)          43/46 (93%)
  In-hospital cardiac counselling1                       168/351 (48%)        212/371 (57%)
Discharge status
  β-blocker                                              212/251 (84%)        202/239 (85%)
  Anti-platelet agents                                   301/318 (95%)        321/334 (96%)
  ACE inhibitors1                                        105/143 (73%)        113/139 (81%)
  Lipid-lowering agents1                                 165/202 (82%)        197/223 (88%)
  Referral to outpatient cardiac rehabilitation1         24/351 (8%)          64/371 (17%)
Outcomes
  In-hospital mortality                                  28/379 (7.4%)        23/394 (5.8%)
  Re-admission (same cause) in 30 days                   26/351 (7.4%)        17/371 (4.6%)
  Median length of hospital stay (days)1                 7.0                  6.0

ACE, angiotensin-converting enzyme. 1Significant change (P < 0.05).

Thirdly, in terms of sustainability, our costings suggest that measurement and feedback systems used ~50% (or $500 000) of the programme budget, which was expended on 2500 patients (i.e. approximately $200 per patient). Trends in the data suggest that costs are more than likely to be offset by improvements in health outcomes such as the observed 1-day reduction in median length of hospital stay for patients with ACS. The minimization of datasets and the use of previously mentioned scanning technology will significantly reduce this infrastructure cost. In the Queensland state public hospital system, our revised methods for measuring and reporting clinical indicator data are being continued in the three consortium hospitals and extended to 14 other large hospitals under the auspices of the Cardiac Collaborative of the Queensland Health Collaborative for Healthcare Improvement [32]. Our indicator experience is also assisting the Cardiac Society of Australia and New Zealand (CSANZ) [34] and the National Institute of Clinical Studies (NICS) [35] to develop national indicator sets for ACS and CHF, respectively, and to inform the quality improvement strategies that these organizations may want to sponsor in the future.

As a result of our programme, five divisions of general practice representing all GPs in metropolitan Brisbane have become involved in clinical audit and feedback. However, the long-term continuation of this activity in its current form is proving to be a challenge due to the expense and labour involved. Less than ideal response rates limit the sample size, requiring longer feedback cycles in order to accrue meaningful data.

Performance measurement and feedback are an integral part of quality improvement initiatives. We attempted to develop a sustainable system of indicator collection and reporting as part of a programme targeting care of two common cardiac conditions associated with significant mortality

and morbidity. Taking a patient-centred focus, we provided feedback for indicators targeting both hospital and community providers. At an organizational level, our methodology is being extended to multiple public hospitals throughout our state, while the smaller, less resourced divisions of general practice are building on their experience to develop lower cost systems for ongoing clinical audit.

Acknowledgements

We are appreciative of the extensive support given by Queensland Health, and the cooperation of the members of the clinical indicator expert panels and others involved in indicator analysis, in particular: Dr Kathleen Armstrong, Dr John Atherton, Dr John Bennett, Mr Neil Cottrell, Dr Christine Fawcett, Dr Judy Flores, Dr Andrew Galbraith, Dr Paul Garrahy, Ms Ann Hadwyn, Professor Tom Marwick, Dr Alison Mudge, Dr Mark Morris, Dr Bronwyn Pierce, Ms Daniela Sanders, and Ms Justine Thiele. The authors gratefully acknowledge the funding support of the Australian Commonwealth

Table 5 Changes in process and outcome indicators for in-hospital care for congestive heart failure

Indicator                               Baseline (n = 220)   Re-measurement period (n = 235)
                                        (1/10/00-17/4/01)    (15/2/02-30/8/02)
In-hospital course
  Recording underlying causes           188/220 (85%)        215/235 (91%)
  Recording precipitating factors1      160/220 (75%)        211/235 (90%)
  Limiting fluids1                      89/220 (40%)         128/235 (54%)
  Weighing daily1                       121/220 (55%)        148/235 (63%)
  DVT prophylaxis1                      31/104 (30%)         94/128 (73%)
  Dietician review                      38/220 (17%)         44/235 (19%)
  Thyroid function test1                16/31 (52%)          41/52 (79%)
  Echocardiograph                       135/220 (61%)        164/235 (70%)
  Clinical pharmacist review1           105/191 (55%)        142/219 (65%)
Discharge status
  ACE inhibitors                        58/71 (82%)          61/71 (86%)
  ACE inhibitor target dose             82/136 (60%)         108/164 (66%)
  β-blocker1                            47/135 (35%)         88/152 (58%)
  Warfarin                              22/50 (44%)          27/63 (41%)
  Deleterious agents (avoidance of)     180/191 (94%)        214/219 (98%)
  Physician clinic follow-up1           87/191 (46%)         130/219 (59%)
Outcomes
  In-hospital mortality1                21/212 (9.9%)        11/230 (4.8%)
  Re-admission (same cause) in 30 days  10/199 (5.0%)        13/224 (5.8%)
  Median length of hospital stay (days) 7.0                  7.0

ACE, angiotensin-converting enzyme; DVT, deep venous thrombosis. 1Significant change (P < 0.05).

Table 6 Changes in process indicators for post-hospital care (acute coronary syndromes)

Indicator                  3-month follow-up               6-month follow-up
                           Baseline   Remeasurement        Baseline   Remeasurement
                           (n = 95)   (n = 89)             (n = 93)   (n = 104)
Anti-platelet agent        81%        90%                  83%        87%
β-blocker                  65%        74%                  61%        70%
ACE inhibitor              59%        57%                  61%        63%
Lipid-lowering agents      81%        81%                  81%        83%
Control of serum lipids    50%        44%                  60%        68%
Smoking cessation          17%        50%1                 15%        39%1

ACE, angiotensin-converting enzyme. 1Significant change (P < 0.05).

Table 7 Changes in process indicators for post-hospital care (congestive heart failure)

Indicator                  3-month follow-up               6-month follow-up
                           Baseline   Remeasurement        Baseline   Remeasurement
                           (n = 47)   (n = 38)             (n = 45)   (n = 54)
ACE inhibitor              72%        87%                  69%        69%
ACE inhibitor dose         55%        66%                  63%        70%
β-blocker                  38%        66%1                 38%        48%
Control of blood pressure  63%        74%                  45%        68%1
Weight monitoring          69%        100%1                57%        69%
Exercise prescription      40%        58%1                 44%        63%1

1Significant change (P < 0.05).
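The significance footnotes in Tables 4-7 mark baseline-to-remeasurement differences at P < 0.05. As an illustrative sketch (the paper does not state which test was used), a pooled two-proportion z-test reproduces the pattern for two Table 4 indicators:

```python
import math

def two_proportion_z(x1: int, n1: int, x2: int, n2: int) -> float:
    """z statistic for H0: p1 == p2, using the pooled proportion."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return (p2 - p1) / se

# In-hospital cardiac counselling (Table 4, flagged): 168/351 -> 212/371
print(round(two_proportion_z(168, 351, 212, 371), 2))  # 2.5: beyond 1.96, P < 0.05

# Anti-platelet agents (Table 4, not flagged): 301/318 -> 321/334
print(round(two_proportion_z(301, 318, 321, 334), 2))  # 0.89: not significant
```

The second comparison also shows a ceiling effect: at 95-96% uptake, even the full six-month sample cannot demonstrate a significant change.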

Department of Health and Ageing who made this programme possible through the Clinical Support Systems Program (a joint initiative with the Royal Australasian College of Physicians).

References

1. DeLong JF, Allman RM, Sherrill RG, Schliesz N. A congestive heart failure project with measured improvements in care. Eval Health Prof 1998; 21: 472-486.
2. Mehta R, Montoye CK, Gallogly M et al. Improving quality in care of acute myocardial infarction: the Guidelines Applied to Practice (GAP) initiative in south-east Michigan. J Am Med Assoc 2002; 287: 1269-1276.
3. Marciniak TA, Ellerbeck EF, Radford MJ et al. Improving quality of care for Medicare patients with acute myocardial infarction: results from the Cooperative Cardiovascular Project. J Am Med Assoc 1998; 279: 1351-1357.
4. Axtell SS, Ludgwig E, Lope-Candales P. Interventions to improve adherence to ACC/AHA recommended adjunctive medications for the management of patients with an acute myocardial infarction. Clin Cardiol 2001; 24: 114-118.
5. Rubin H, Pronovost P, Diette GB. From a process of care to a measure: the development and testing of a quality indicator. Int J Qual Health Care 2001; 13: 489-496.
6. Braunwald E, Antman E, Beasley J et al. ACC/AHA 2002 guideline update for the management of patients with unstable angina and non-ST-segment elevation myocardial infarction: summary article. A report of the American College of Cardiology/American Heart Association Task Force on Practice Guidelines (Committee on the Management of Patients with Unstable Angina). Circulation 2002; 106: 1893-1900.
7. Aroney C, Boyden A, Jelinek M, for the Unstable Angina Working Group. Management of Unstable Angina Guidelines 2000. Med J Aust 2000; 173 (suppl.): S65-S88.
8. National Heart Foundation of Australia and Cardiac Society of Australia and New Zealand Chronic Heart Failure Clinical Practice Guidelines Writing Panel. Guidelines for management of patients with chronic heart failure in Australia. Med J Aust 2001; 174: 459-466.
9. Cook DJ, Greengold NL, Ellrodt AG, Weingarten SR. The relation between systematic reviews and practice guidelines. Ann Intern Med 1997; 127: 210-216.
10. Quality of Care and Outcomes Research in CVD and Stroke Working Groups. Measuring and improving quality of care. A report from the AHA/ACC First Scientific Forum on Assessment of Healthcare Quality in Cardiovascular Disease and Stroke. Circulation 2000; 101: 1483-1493.
11. Krumholz HM, Baker DW, Ashton CM et al. Evaluating quality of care for patients with heart failure. Circulation 2000; 101: e122. http://circ.ahajournals.org/cgi/content/full/101/12/e122 Accessed 3 March 2003.
12. Mant J. Process versus outcome indicators in the assessment of quality of health care. Int J Qual Health Care 2001; 13: 475-480.
13. Hofer TP, Bernstein SJ, Hayward RA, DeMoner S. Validating quality indicators for hospital care. Jt Comm J Qual Improv 1997; 23: 455-467.
14. Brook RH, McGlynn EA, Cleary PD. Measuring quality of care. N Engl J Med 1996; 335: 960-970.
15. Huff ED. Comprehensive reliability assessment and comparison of quality indicators and their components. J Clin Epidemiol 1997; 50: 1395-1404.
16. Fitzgerald ME, Molinari GF, Bausell RB. The empowering potential of quality improvement data. Eval Health Prof 1998; 21: 419-428.
17. Mani O, Mehta RH, Tsai T et al. Assessing performance reports to individual providers in the care of acute coronary syndromes. Jt Comm J Qual Improv 2002; 28: 220-232.
18. Rainwater JA, Romano PS, Antonius DM. The California Hospital Outcomes Project: how useful is California's report card for quality improvement? Jt Comm J Qual Improv 1998; 24: 31-39.
19. Goddard M, Davies HT, Dawson D, Mannion R, McInnes F. Clinical performance measurement: part 2: avoiding the pitfalls. J R Soc Med 2002; 95: 549-551.
20. Jeacocke D, Heller R, Smith J, Anthony D, Williams JS, Dugdale A. Combining quantitative and qualitative research to engage stakeholders in developing indicators in general practice. Aust Health Rev 2002; 25: 12-18.
21. Thomson O'Brien MA, Oxman AD, Davis DA et al. Audit and feedback: effects on professional practice and health care outcomes (Cochrane Review). In: The Cochrane Library, Issue 4. Oxford: Update Software, 2002.
22. Berwick DM, James B, Coye MJ. Connection between quality measurement and improvement. Med Care 2003; 41: I-30-I-38.
23. Lurie JD, Merrens EJ, Lee J, Splaine ME. An approach to hospital quality improvement. Med Clin North Am 2002; 86: 825-845.
24. Solberg LI, Mosser G, McDonald S. The three faces of performance measurement: improvement, accountability, and research. Jt Comm J Qual Improv 1997; 23: 135-147.
25. Hannan EL. Measuring hospital outcomes: don't make perfect the enemy of good! J Health Serv Res Policy 1998; 3: 67-69.
26. Ohm B, Brown J. Quality improvement pitfalls: how to overcome them. J Healthc Qual 1997; 19: 16-20.
27. Zaslavsky AM. Statistical issues in reporting quality data: small samples and casemix variation. Int J Qual Health Care 2001; 13: 481-488.
28. Maleyeff J, Kaminsky FC, Jubinville A, Fenn CA. A guide to using performance measurement systems for continuous improvement. J Healthc Qual 2001; 23: 33-37.
29. Gibberd R, Pathmeswaran A, Burtenshaw K. Using clinical indicators to identify areas for quality improvement. J Qual Clin Practice 2000; 20: 136-144.
30. Scott IA, Denaro CP, Flores JL, Bennett C, Hickey A, Mudge AP. Quality of care of patients hospitalised with acute coronary syndromes. Intern Med J 2002; 32: 502-511.
31. Scott IA, Denaro CP, Bennett C et al. Quality of care of patients hospitalised with congestive heart failure. Intern Med J 2003; 33: 140-151.
32. Queensland Health Collaborative for Healthcare Improvement Cardiac Collaborative, Brisbane, Australia. http://www.qheps.health.qld.gov.au/chi/home.htm Accessed 11 December 2002.
33. Thoma G. Automating the production of bibliographic records for Medline. An R&D report of the Communications Engineering Branch, Lister Hill National Center for Biomedical Communications, National Library of Medicine. Bethesda, MD: National Library of Medicine. http://archive.nlm.nih.gov/pubs/thoma/mars2001.php Accessed September 2001.
34. Cardiac Society of Australia and New Zealand/National Heart Foundation of Australia Acute Coronary Syndrome Working Group, Sydney, Australia. http://www.csanz.edu.au Accessed 11 December 2002.
35. National Institute of Clinical Studies Heart Failure Data Expert Group, Melbourne, Australia. http://www.nicsl.com.au Accessed 11 March 2003.

Accepted for publication 21 November 2003
