High-quality hospitals are a hallmark of Canada's publicly funded healthcare system. Hospital quality matters to patients, and to the healthcare policy-makers charged with ensuring the delivery of safe, high-quality care. When quality suffers in hospitals, the public often hears about it.
To take one indicator of quality, local and national media regularly report on hospital outbreaks of Clostridium difficile (C. difficile) and associated deaths (“B.C. hospital declares C. difficile outbreak,” 2011; Hunter, 2012; Nagel, 2012). C. difficile is a bacterium that causes intestinal inflammation and diarrhea, and it spreads through hospitals when hygiene and infection control processes are inadequate. C. difficile infection rates are often used as a hospital quality measure, and in some regions they are publicly reported (e.g. by the Vancouver Island Health Authority and the Fraser Health Authority).
A review of the impact of healthcare facility-acquired C. difficile in Europe indicated that 30-day mortality rates for patients diagnosed with C. difficile infection were between 2.8% and 29.8%, with an additional length of stay of between 16 and 37 days (Wiegand et al., 2012). C. difficile outbreaks also impose significant costs on the healthcare system. A Canadian analysis of adverse events in acute care hospitals estimated the economic burden of C. difficile infections at $46 million, compared with $36 million for methicillin-resistant Staphylococcus aureus (MRSA) infections and $24 million for surgical site infections (Etchells et al., 2012).
Before policy-makers, healthcare funders or hospital managers can begin to address quality problems, they must first define what quality is. Healthcare quality is the degree to which health care services increase the likelihood of desired health outcomes and are consistent with the current state of evidence (Berenholtz, Dorman, Ngo, & Pronovost, 2002). From this definition, specific indicators can be developed to monitor and promote quality and to compare hospitals or other healthcare providers with one another and over time (Copnell et al., 2009). In this introduction to quality indicators, we review their theory, application and potential in Canada.
Hospital quality indicators
Most discussions of hospital quality frame the issue in the context of Donabedian’s work on assessing care delivery according to three different aspects of evaluation: structure, process and outcome (Donabedian, 2005; Evans, Lowinger, Sprivulis, Copnell, & Cameron, 2009; Schuster, McGlynn, & Brook, 2005).
Outcome measures report whether a patient's health improved as a result of the care they received, by evaluating changes in the patient's health status (Schuster et al., 2005). Process measures report what activities or treatments a patient receives. Process measures are the most common type of evaluation, with clinical effectiveness, patient centeredness and patient safety being the most common indicators (Groene, Skau, & Frølich, 2008).
Structure refers to the physical characteristics of the organization and delivery of the healthcare system, such as the number of hospital beds available. This definition encompasses the physical structures around care, such as hospital buildings and other infrastructure (Schuster et al., 2005).
Outcome measures
Outcomes are the results of care. An ideal outcome measure would show the effect of specific, evidence-based care on the health of a patient (Mainz, 2003; Berenholtz et al., 2002). Common outcome indicators include mortality and morbidity measures (Mainz, 2003).
Outcomes of care are determined by several factors related to the patient, the illness, and the healthcare system. In other words, differences in outcome may not be due to the quality of healthcare provided but instead to the severity of the patient’s illness or comorbidities. Thus, standardized data collection and risk adjustment are important for interpreting outcomes data (Mainz, 2003).
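To make risk adjustment concrete, the sketch below computes an indirectly standardized mortality ratio: observed deaths divided by the deaths expected given each patient's predicted risk. It is a minimal illustration in Python; the patient data and predicted risks are hypothetical, and in practice the expected risks would come from a validated risk-adjustment model.

```python
# Minimal sketch of indirect standardization for a mortality outcome.
# All data are hypothetical; real expected risks would come from a
# validated risk-adjustment model (age, comorbidities, severity, etc.).

patients = [
    # (died_in_hospital, predicted_risk_of_death)
    (True, 0.20),
    (False, 0.05),
    (False, 0.10),
    (True, 0.40),
    (False, 0.15),
]

observed_deaths = sum(1 for died, _ in patients if died)
expected_deaths = sum(risk for _, risk in patients)

# SMR > 1 suggests more deaths than expected given the case mix; < 1, fewer.
smr = observed_deaths / expected_deaths
print(f"Observed: {observed_deaths}, expected: {expected_deaths:.2f}, SMR: {smr:.2f}")
```

A ratio like this supports fair comparison only if the underlying risk model captures the relevant differences between patient populations, which is precisely the concern about risk adjustment raised below.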
The main strengths of health outcome measures are that they directly measure whether patients' health improved, and that they can be evaluated over long periods of time (Mainz, 2003; Lieberthal, 2008). Outcome indicators are most applicable when variations in the care provided to a patient have a significant effect on outcomes (Mainz, 2003; Mant, 2001).
A major weakness of outcome indicators is the difficulty of linking a poor outcome (say, mortality) to specific steps that can be taken to improve quality (Freeman, 2002). Other weaknesses are that outcomes lack standardized definitions and that outcomes data are not always collected using the same methods (Freeman, 2002). Some researchers have also raised concerns that adjustments to indicators to account for variations do not adequately reflect underlying differences in populations (Lilford, Mohammed, Spiegelhalter, & Thomson, 2004).
Process measures
Processes are actions that a provider undertakes on behalf of a patient to improve the patient's health, and process measures reflect whether an action was done or how well it was done (Mainz, 2003). These indicators can be effective measures of quality since they link processes to (assumed) improvements in health outcomes (Evans et al., 2009). Processes of care are the most common type of indicator (Copnell et al., 2009). Some examples of process indicators include the following (Mainz, 2003); a minimal calculation sketch appears after the list:
- Proportion of patients with myocardial infarction who received thrombolysis
- Proportion of patients treated according to clinical guidelines
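Process indicators of this kind are typically computed as simple proportions: eligible patients who received the process, divided by all eligible patients. Here is a minimal Python sketch with hypothetical patient records; the eligibility flag and field layout are illustrative only.

```python
# Hypothetical records: (eligible_for_thrombolysis, received_thrombolysis).
# Patients who are not eligible are excluded from the denominator.
records = [
    (True, True),
    (True, False),
    (False, False),  # contraindication: excluded from the denominator
    (True, True),
]

received = [got_it for eligible, got_it in records if eligible]
rate = sum(received) / len(received)
print(f"Thrombolysis rate among eligible MI patients: {rate:.0%}")
```

Because the numerator and denominator are defined by explicit eligibility criteria rather than by patient outcomes, proportions like this do not require risk adjustment, which is one of the strengths noted below.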
Some strengths of process indicators are that they provide clear feedback about care with actionable steps to improve results, can be collected at the point of care, do not need to be risk adjusted, and can be sensitive to differences in quality of care (Evans et al., 2009; Schmaltz, Williams, Chassin, Loeb, & Wachter, 2011).
One weakness of process indicators is that for many indicators the evidence linking processes to outcomes is limited (Copnell et al., 2009). For example, there is no clinical trial evidence that brain imaging for acute stroke patients improves their health outcomes (Gandjour, Kleinschmit, Littmann, & Lauterbach, 2002). According to a literature review, almost 30% of indicators in use were selected on the basis of expert opinion rather than evidence of clinical effectiveness (Gandjour et al., 2002).
Structural indicators
Structural indicators measure institutional characteristics that can affect quality of care; these can include the type or amount of resources used to deliver care (Mainz, 2003). Structural indicators typically relate to the presence or number of staff, clients, money, beds, supplies, and buildings, and can specifically include (Mainz, 2003):
- Ratio of specialists to other doctors
- Access to specific technologies (e.g. MRI)
- Presence of specialty units
- Clinical guidelines
- Accreditation status
Structural indicators are the least common, accounting for less than 10% of all quality indicators used (Copnell et al., 2009). Evidence linking structural factors to quality of care has been developed for a few indicators but is limited overall (Lilford et al., 2004).
One example of a structural factor that can affect quality of care is the presence of dedicated in-hospital stroke units. Evidence shows that these units reduce morbidity, mortality and length of stay compared with management on general medical wards (Zhu et al., 2009; “Organised inpatient (stroke unit) care for stroke,” 2007).
The conundrum of measuring quality
Measuring hospital quality is complex and, as discussed above, can be approached in many ways. Excellent or poor performance on one quality measure does not by itself make a hospital a high or low quality one. A balanced range of indicators needs to be examined in order to fairly assess the overall quality of a hospital.
This complexity has given rise to hospital scorecards, such as those used in Ontario. These scorecards look at quality along four dimensions: system integration and change, patient satisfaction, clinical utilization and outcomes, and financial performance and condition. In the clinical utilization and outcomes category are measures such as readmission rates, adverse events, and access to coronary angiography for patients with acute myocardial infarction (heart attack).
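One way such a scorecard's dimensions might be rolled up into a single summary number is a weighted composite. The sketch below is purely illustrative: the equal weights and 0-100 indicator scores are hypothetical assumptions, not Ontario's actual scorecard methodology.

```python
# Illustrative composite score across the four Ontario scorecard
# dimensions. Weights and scores are hypothetical.
scorecard = {
    "system integration and change":       (0.25, 72),
    "patient satisfaction":                (0.25, 85),
    "clinical utilization and outcomes":   (0.25, 78),
    "financial performance and condition": (0.25, 64),
}

composite = sum(weight * score for weight, score in scorecard.values())
print(f"Composite hospital score: {composite:.1f} / 100")
```

Any such rollup embeds value judgments in the choice of weights, which is one reason balanced scorecards usually report the individual dimensions as well.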
Another common way to round out hospital scorecards, or otherwise get a more complete view of hospital quality, is to include measures that come directly from patients. Asking patients about the quality of care they have received provides valuable information to hospital administrators and policy-makers. There are two main types of patient-reported measure: patient reported outcome measures (PROMs) and patient reported experience measures (PREMs). These are not measures of hospital quality in the same way that process, outcome or structural measures are, but they help to provide a more complete picture of overall hospital quality.
For more information on PROMs, visit Patient Reported Outcomes; for more information on PREMs, visit Hospital Care Quality Information from the Consumer Perspective (HCAHPS).
Quality measurement can also create perverse incentives (i.e. gaming and/or data manipulation) and can undermine a culture that promotes quality improvement (Freeman, 2002). Data availability is another problem when measuring quality: data must be available, valid and reliably measured over time, and other data concerns include robustness, sensitivity and specificity (Freeman, 2002).
In addition, some literature suggests that quality measures convey a false sense of objectivity, given the weak evidence that supports their use (Freeman, 2002). Indicators are rarely chosen based on empirical evidence, but more commonly as a result of expert opinions, data availability, and public opinion and media perceptions of risk (Copnell et al., 2009).
There is no single best process for selecting which indicators to use to measure hospital quality. For the OECD's Health Care Quality Indicator (HCQI) Project, the selection of quality indicators was based on the conceptual frameworks that already existed in each member country, resulting in a multi-dimensional framework of both process and outcome indicators (Kelley & Hurst, 2006). The inclusion of specific indicators was based on three criteria:
- the importance of what is being measured
- the scientific soundness of the measure
- the feasibility/cost of obtaining data (Kelley & Hurst, 2006)
Quality and funding
Some countries have linked the quality of care provided in hospitals to funding, creating financial incentives for high quality hospital care. Value-based purchasing (VBP) is one such strategy: it rewards hospitals for the delivery of high quality, efficient care (Damberg et al., 2007). VBP activities include attempts by organized purchasers in the US to use their purchasing power to improve quality (Maio, Goldfarb, Carter, & Nash, 2003) by linking incentive payments to quality and cost containment (Mehrotra, Damberg, Sorbero, & Teleki, 2009).
Another way some countries link hospital funding to quality is by not paying hospitals for unplanned readmissions, on the assumption that unplanned readmissions are consequences of poor quality care. This is the approach taken in Germany and the U.K., where payments to hospitals are reduced or eliminated for unplanned readmissions (Averill et al., 2009; Busse, Geissler, Quentin, & Wiley, 2011).
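A minimal sketch of how such a payment rule might work is shown below. The 30-day readmission window, the tariff amounts and the all-or-nothing payment reduction are hypothetical assumptions; actual German and U.K. rules differ in their details.

```python
from datetime import date, timedelta

# Hypothetical payment rule: an unplanned readmission within 30 days of
# the previous discharge receives no payment. Window and tariffs are
# illustrative, not any country's actual rule.
WINDOW = timedelta(days=30)

admissions = [
    {"admit": date(2024, 1, 5), "discharge": date(2024, 1, 10), "tariff": 8000, "planned": True},
    {"admit": date(2024, 1, 25), "discharge": date(2024, 1, 30), "tariff": 8000, "planned": False},
]

def total_payable(admissions):
    total = 0
    last_discharge = None
    for a in sorted(admissions, key=lambda a: a["admit"]):
        is_penalized = (
            last_discharge is not None
            and not a["planned"]
            and a["admit"] - last_discharge <= WINDOW
        )
        total += 0 if is_penalized else a["tariff"]
        last_discharge = a["discharge"]
    return total

print(f"Total payable: ${total_payable(admissions):,}")  # 8000: second stay unpaid
```

The hard design question hidden in a rule like this is how to distinguish unplanned from planned readmissions reliably in administrative data, since misclassification shifts financial risk onto hospitals.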
Conclusions
Quality reporting in healthcare is widespread, but currently no Canadian province rates its hospitals' quality. The Canadian Institute for Health Information (CIHI) publicly reports a multitude of hospital quality indicators, but leaves their interpretation to the individual.
Policies that link hospital quality and funding are emerging in a number of countries. These approaches rely heavily on comprehensive and accurate data, of the kind we already have for hospital care in Canada. However, even in places like the US, with robust data collection (to generate ‘charges’), there is little agreement on the best method for measuring hospital quality.
A few organizations that measure hospital quality
- Agency for Healthcare Research and Quality (AHRQ) (30 indicators): The AHRQ’s mission is to improve the quality, safety, efficiency and effectiveness of healthcare for all Americans. They focus on comparing the effectiveness of treatment, quality improvement and patient safety, health information technology, prevention and care management, and healthcare value.
- Joint Commission on Accreditation of Healthcare Organizations (JCAHO) (51 indicators): The JCAHO is an independent not-for-profit organization that accredits and certifies healthcare organizations and programs in the US. Its mission is to improve healthcare by evaluating organizations and inspiring them to excel in providing safe and effective care.
- Centers for Medicare and Medicaid Services (CMS)/Hospital Quality Alliance (HQA) (29 indicators): Hospital Compare was created by CMS and the Hospital Quality Alliance as a website that provides information about the quality of care at over 4,000 Medicare-certified hospitals across the US.
- National Centre for Health Outcomes Development (NCHOD) (28 indicators): The Oxford University NCHOD's main areas of work include condition-specific outcome indicators, population-based outcome measures, patient-assessed health instruments and outcome indicators derived from linked HES and ONS mortality data.
- World Health Organisation, Performance Assessment Tool for Quality Improvement in Hospitals (PATH) (25 indicators): PATH was developed by the WHO Regional Office for Europe to support hospitals in collecting data on their performance, comparing their performance with that of a peer group, and initiating quality improvement activities. The assessment has six dimensions: clinical effectiveness, efficiency, staff orientation, responsive governance, safety and patient centeredness.
- Organisation for Economic Co-operation and Development (OECD) (49 indicators): The OECD Health Care Quality Indicators project aims to measure and compare the quality of health service provision across countries. A set of quality indicators has been developed at the health-system level, allowing assessment of the impact of particular factors on the quality of health services.