CMS publishes overview of quality improvement and quality measurement

Quality Improvement and Quality Measurement

The vision of the CMS Quality Strategy is to optimize health outcomes by improving quality and transforming the health care system. CMS serves the public as a trusted partner with a steadfast focus on improving outcomes, the beneficiary/consumer experience of care, and population health, and on reducing health care costs through improvement. Among the areas of focus are:

  • Leading quality measurement alignment, prioritization, and implementation, and developing new, innovative measures;
  • Guiding quality improvement across the nation and fostering learning networks that generate results.

The close connection between quality measurement and quality improvement is also evident in the Merit-based Incentive Payment System (MIPS), in which participants earn performance-based payment adjustments based on evidence-based and practice-specific data in the categories of quality, improvement activities, advancing care information, and cost (data collection starting in 2018).

How exactly are quality measurement and quality improvement connected? The starting point is always the definition of quality from the National Academy of Medicine: the degree to which health services for individuals and populations increase the likelihood of desired health outcomes and are consistent with current professional knowledge [1]. Both quality measurement and quality improvement should increase the likelihood of desired health outcomes, using different but mutually supporting mechanisms.

The mechanism of quality improvement is standardization. First, behavior is made systematic so that the same inputs result in the same outputs within the bounds of uncertainty (randomness). Second, behavior is aligned with evidence (e.g., guidelines and systematic reviews). The PDSA Cycle (Plan-Do-Study-Act) is a systematic series of steps for identifying the patient, process, or system characteristics associated with “non-standardized behavior.” Through each repetition of the PDSA Cycle, behavior becomes more systematic and more aligned [2]. We standardize behavior through both structure and process. Structure might include physical capital (e.g., an EHR system), leadership, or culture. Process might include knowledge capital (e.g., standard operating procedures) or human capital (e.g., education and training).
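
To make the loop concrete, here is a minimal, purely illustrative Python sketch of repeated PDSA cycles; the process, the 20% prediction, and all numbers are hypothetical assumptions, not a CMS method. Each cycle plans a change and predicts its effect, does the change on a small scale, studies the observed variation against the prediction, and acts by adopting the change only when variation actually fell.

    import random
    import statistics

    random.seed(0)

    def run_process(sd, n=30):
        # Do: run the changed process n times; outputs vary around a target of 100.
        return [random.gauss(100, sd) for _ in range(n)]

    sigma = 10.0  # current process variation (hypothetical starting point)
    for cycle in range(1, 5):
        # Plan: propose a change and predict it will cut variation by ~20%.
        predicted_sd = 0.8 * sigma
        # Do: the change's true effect is uncertain; try it on a small scale.
        outputs = run_process(sigma * random.uniform(0.7, 1.1))
        # Study: compare observed variation with the prediction.
        observed_sd = statistics.stdev(outputs)
        print(f"Cycle {cycle}: predicted SD {predicted_sd:.1f}, observed SD {observed_sd:.1f}")
        # Act: adopt the change only if variation actually fell; otherwise keep the old process.
        sigma = min(sigma, observed_sd)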

The mechanisms of quality measurement are selection and choice [3]. A quality measure is a tool for making “good decisions,” defined as decisions that make a good result more likely and an unforeseen or poorly understood adverse result less likely. Consumers use quality measures to select high-performing clinicians, and low-performing clinicians choose to allocate resources to become high performing. Process measures that are evidence-based and well specified, and outcome measures that are reliable, valid, and risk-adjusted, support “good decisions.”
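
As one concrete illustration of what “risk-adjusted” means, the sketch below computes an observed-to-expected (O/E) ratio and a risk-standardized rate, a common construction for outcome measures. The patient-level predicted risks and the population rate are hypothetical assumptions; real measures derive them from a fitted risk model.

    # Hypothetical patient records: (observed outcome: 1 = event, 0 = none,
    # model-predicted risk of the event given the patient's characteristics).
    patients = [
        (1, 0.30), (0, 0.10), (0, 0.05), (1, 0.40), (0, 0.15), (0, 0.20),
    ]

    observed = sum(outcome for outcome, _ in patients)  # events that occurred
    expected = sum(risk for _, risk in patients)        # events the risk model predicts
    population_rate = 0.18                              # hypothetical overall event rate

    # O/E > 1 suggests more events than the case mix predicts (worse than expected).
    oe_ratio = observed / expected
    risk_standardized_rate = oe_ratio * population_rate
    print(f"O/E = {oe_ratio:.2f}, risk-standardized rate = {risk_standardized_rate:.3f}")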

How are quality measurement and quality improvement mutually supportive? Benchmarking of quality measures against comparison groups of peers with similar patient, process, and system characteristics suggests “best practices” that may be implemented in quality improvement. Process quality measures associated with outcome measures decompose behavior into non-discretionary and discretionary components. When outcome measures are in the early phases of maturity, the discretionary components may suggest concepts for future biomedical, clinical, or health services research that may contribute to the advancement of professional knowledge [4].
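
As a small illustration of benchmarking (with hypothetical peer groups and scores), the sketch below computes a 90th-percentile benchmark within each peer group; clinicians at or above it are candidates whose practices may be worth studying and spreading through quality improvement.

    from statistics import quantiles

    # Hypothetical measure scores grouped by peers with similar patients and systems.
    scores_by_peer_group = {
        "small_rural": [0.62, 0.71, 0.58, 0.80, 0.66, 0.74, 0.69, 0.77, 0.81, 0.60],
        "large_urban": [0.85, 0.79, 0.90, 0.88, 0.76, 0.83, 0.91, 0.87, 0.80, 0.84],
    }

    for group, scores in scores_by_peer_group.items():
        # quantiles(..., n=10) returns nine cut points; the last is the 90th percentile.
        p90 = quantiles(scores, n=10)[-1]
        print(f"{group}: 90th-percentile benchmark = {p90:.2f}")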

References

1. Institute of Medicine. Crossing the Quality Chasm: A New Health System for the 21st Century. Washington, D.C.: National Academy Press; 2001.

2. Langley GJ, et al. The Improvement Guide: A Practical Approach to Enhancing Organizational Performance. 2nd ed. San Francisco: Jossey-Bass; 2009.

3. Berwick DM, James B, Coye MJ. Connections between quality measurement and improvement. Med Care. 2003;41(1 Suppl):I30-I38.

4. Califf RM, Peterson ED, Gibbons RJ, Garson A, Brindis RG, Beller GA, Smith SC. Integrating quality into the cycle of therapeutic development. J Am Coll Cardiol. 2002;40(11):1895-1901.

Measures Management Up Close

Each month, we will bring you an in-depth look at a measures management topic.

A Closer Look at Continuing Evaluation

In the August 2016 issue, we provided an overview of Measure Use, Continuing Evaluation, and Maintenance. In this article, we take a closer look at Continuing Evaluation. Evaluation answers the question, “Is the measure adding value, and does it meet the required criteria?”

To help CMS ensure the continued soundness of its measures, the measure developer must provide strong evidence that a measure currently in use still adds value to quality reporting and incentive programs and that its construction remains sound throughout its lifecycle. To determine this, CMS considers the following:

  • Measurement results provide meaningful information to consumers and healthcare providers.
  • Measurement results drive significant improvements in healthcare quality and health outcomes.
  • The measure specifications accurately and clearly target the aspects of the measure that are important to collection and reporting.
  • Collection of the data does not create an undue burden on those collecting it.
  • The calculation methods provide a clear and accurate representation of variation in quality or efficiency of care or health outcomes.
  • The measure is unique or “best in class” when compared to competing measures.

Some measures, such as Patient-Reported Outcome-Based performance measures, have additional criteria to meet. The overarching principle is that these measures should put the patient first. Patient-Reported Outcome-Based measures must be:

  • Psychometrically sound—In addition to the usual validity and reliability criteria, cultural and language considerations and the burden on patients of responding should be considered (one common reliability check is sketched after this list).
  • Person-centered—These measures should reflect collaboration and shared decision making with patients. Patients become more engaged when they can give feedback on outcomes important to them.
  • Meaningful—These measures should capture impact on health-related quality of life, symptom burden, experience with care, and achievement of personal goals.
  • Amenable to change—The outcomes of interest must be responsive to specific healthcare services or interventions.
  • Implementable—Data collection directly from patients involves challenges of burden to patients, health literacy of patients, cultural competence of providers, and adaptation to computer-based platforms. Evaluation should address how to manage these challenges.
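
As a concrete example of the reliability side of psychometric soundness (referenced in the first bullet above), the sketch below computes Cronbach’s alpha, a standard internal-consistency statistic, for a multi-item instrument. The four-item design and all responses are hypothetical.

    from statistics import pvariance

    # Hypothetical data: rows = patients, columns = item responses on a 1-5 scale.
    responses = [
        [4, 5, 4, 4],
        [2, 2, 3, 2],
        [5, 4, 5, 5],
        [3, 3, 3, 4],
        [1, 2, 1, 2],
    ]

    k = len(responses[0])                                   # number of items
    items = list(zip(*responses))                           # column-wise view of the data
    item_var = sum(pvariance(item) for item in items)       # sum of per-item variances
    total_var = pvariance([sum(row) for row in responses])  # variance of total scores
    alpha = (k / (k - 1)) * (1 - item_var / total_var)
    print(f"Cronbach's alpha = {alpha:.2f}")  # values near 1 suggest internal consistency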

Measure evaluation occurs throughout the lifecycle, from information gathering in the conceptualization phase through maintenance. To assist the developer and reviewers in this process, a Measure Evaluation Report is available in the Blueprint for the CMS Measures Management System; it lists the criteria a measure must meet, such as scientific acceptability (the extent to which the measure produces reliable and valid results about the intended area of measurement) and usability (how readily intended audiences can understand the results and use them).

Measures must also be continuously reevaluated during maintenance, with reports submitted at specified intervals. A comprehensive reevaluation must be completed every three years, while ad hoc reviews are conducted as needed. Measure updates review the specifications and look for opportunities for harmonization. Though the evaluation details may differ across the specific reevaluations, the general principles are the same. CMS uses evaluation results to help determine whether to adopt, retire, retain, revise, suspend, or remove a measure.

Updated CMS Measures Inventory Now Posted

The CMS Measures Inventory and the Measures under Development (MUD) List were updated and publicly posted on the CMS website (https://www.cms.gov/Medicare/Quality-Initiatives-Patient-Assessment-Instruments/QualityMeasures/CMS-Measures-Inventory.html) on February 17. The Inventory covers 31 programs and 2,064 unique measures and is accompanied by the CMS Measures Inventory User Guide. The MUD List contains 29 programs and 522 unique measures. The next public posting will be in July 2017. For any questions regarding the Inventory or the MUD List, please contact MMSSupport@battelle.org.
