December 15, 2010

Metrics ideas and thoughts

Don't use measuring tape to determine your CDI program success. Try out these metrics instead.

I thought I’d discuss a few metrics I use which are not the standard metrics such as percentage of cases reviewed, query rate, response rate, agree rate, etc. One of my motivations in considering and reflecting on metric choices is to find fairly simple concepts and numbers (simple to communicate, explain, and understand at a glance) that at the same time provide a good, solid picture over time and fairly and broadly reflect the work and benefits of CDI.  Metrics need to be chosen both for executive-level data as well as for detailed program-level conversations; some metrics are not suited for both.

Comment: I am including at least one purely financial metric.  I believe it unwise for a CDI program not to be aware of its total impact on the financial health of its organization.  In today’s developing environment of providing more care for less reimbursement, this must be considered.  However, we all know that financial return is not the sole benefit, and many agree this should not be the primary consideration of CDI activity.  We need some manner to concretely demonstrate and measure CDI activity and successes in addition to direct financial returns.

Chart value

One analysis I particularly like is a metric I refer to as chart value.  I calculate it for the entire program as well as for each individual CDI specialist.  I believe it helps to partially level comparison among different clinical areas. For example, surgical areas have fewer query opportunities (see poll: Do you find many query opportunities in your surgical unit?), but there is often a larger financial impact with those surgical CCs or MCCs.

Calculation for any given time frame is simple: total financial gain divided by total number of cases reviewed equals chart value.
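As a quick sketch of the calculation, here it is in Python (the dollar figure and case count below are made up for illustration, not real program data):

```python
# Hypothetical period totals, for illustration only.
total_financial_gain = 250_000.00  # total financial gain for the period (dollars)
cases_reviewed = 500               # total cases reviewed in the same period

# Chart value: gain per chart reviewed.
chart_value = total_financial_gain / cases_reviewed
print(f"Chart value: ${chart_value:,.2f} per chart reviewed")  # $500.00 per chart
```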

I find this useful in several aspects:

  • It provides a nice perspective on a quarterly (or monthly) basis for program or individual performance.
  • When there is a dynamic transfer process between units, this metric helps to clarify and understand a lower individual performance when compared with the activity on a particular discharge unit. Many opportunities from previous units were already captured for CCs or principal diagnosis clarifications, and the final CDI specialist was picking up what remained.
  • Also, as mentioned, it helps to ‘level’ the comparison when comparing individual performances.
  • The variables over time of the number of queries, the number of cases reviewed, etc. are also somewhat leveled out.

Net ROI (Return on Investment)

This is another simple calculation that is very common in a variety of settings.  It directly addresses the question of the comparative strength of a CDI program’s contribution.  But this does NOT only mean financial ROI. There is an interesting perspective and possibility with ROI: if one would like to use some other measure of improvement in the numerator position (other than financial impact), it can demonstrate how much it cost to gain a given amount of ‘X’.  X could certainly be increased documentation of risk of mortality (ROM)/severity of illness (SOI), or some measure of the complexity of coding (I will ask for examples from our coding professionals), etc.  As the denominator, I have used my total department budget and costs, including payroll, consultant costs, equipment, etc.
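A minimal sketch of the financial version of this calculation, using one common definition of net ROI (gain minus cost, over cost); the annual figures are invented for illustration:

```python
# Hypothetical annual figures, for illustration only.
total_gain = 1_200_000.00  # financial impact attributed to CDI for the year
total_cost = 400_000.00    # department budget: payroll, consultants, equipment, etc.

# Net ROI: net gain per dollar spent on the program.
net_roi = (total_gain - total_cost) / total_cost
print(f"Net ROI: {net_roi:.1f}")  # 2.0 — each dollar spent returned two dollars net
```

Swapping a non-financial measure (e.g., count of ROM/SOI improvements) into `total_gain` gives the “cost to gain so much X” version described above.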

Impact percentage
(If anyone wants to suggest a snazzier name for this metric, please do!) This is a new metric (or at least I have not seen anyone else using it).  Essentially, I’ve rolled up all of the customary metrics into one single figure of merit.

First, determine what your CDI department considers a successful query. Whether that measure is improved finance, improved ROM/SOI, or additional/more specific ICD-9 coding (than would have been captured without a query) does not really matter. The measure is the percentage of all cases reviewed that result in a ‘win’.   To calculate, simply divide the number of “wins” by the total number of cases reviewed.

Note, I deliberately removed the factor of case coverage (how many of the targeted cases are reviewed).  However, every other step or metric is rolled up into this one figure of merit:  query rate, response rate, and agree rate as well as whether or not the intended gain was actually reflected in the coding process.
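The calculation itself is a one-liner; here is a sketch with invented counts:

```python
# Hypothetical counts, for illustration only.
wins = 180            # cases where the query produced the intended outcome
cases_reviewed = 600  # all cases reviewed, queried or not

# Impact percentage: wins as a share of everything reviewed.
impact_percentage = wins / cases_reviewed * 100
print(f"Impact percentage: {impact_percentage:.1f}%")  # 30.0%
```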

Adjusted case review volumes

Fair notice: this metric came from another hospital program, though I can’t remember from whom specifically. In essence, it answers the question: how much volume would the CDI have contributed if they had worked at the same level of efficiency but taken no time off? Partially, this metric derives from personnel factors. Depending on the seniority of individual team members and accrued time off, individual health incidents, full- versus part-time status, etc., simply counting the number of cases reviewed by each individual does not provide a level comparison.

The adjustment is simple. Take the actual number of cases reviewed and divide by the fractional value of time worked.  Examples for fractional value of time worked include:

  • If there were a maximum possible 160 hours, but the CDI is half time and took four hours of vacation time, then they only worked 76 hours, so that fraction would be 0.475
  • For a second full time CDI who took eight hours of vacation and 16 hours of sick time, the fraction would be 0.85.

So, if one CDI specialist reviewed 80 cases and another CDI specialist reviewed 120, their respective adjusted volumes are 80/.475 = 168 and 120/.85 = 141.  Looks to me like the part-time CDI was more efficient. Of course, there are other variables which might influence a CDI specialist’s productivity, but at least this metric provides a starting point from which to delve deeper.
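The steps above can be sketched as a small helper (the function name and the 160-hour period are my choices for illustration; the case counts and hours are taken from the examples above):

```python
def adjusted_volume(cases_reviewed: int, hours_worked: float, max_hours: float = 160.0) -> float:
    """Adjust raw review counts by the fraction of available time actually worked."""
    fraction_worked = hours_worked / max_hours
    return cases_reviewed / fraction_worked

# Figures from the examples above:
# half-time CDI with 4 hours of vacation -> 76 of 160 hours (fraction 0.475)
part_time = adjusted_volume(80, 76)
# full-time CDI with 24 hours off -> 136 of 160 hours (fraction 0.85)
full_time = adjusted_volume(120, 136)
print(round(part_time), round(full_time))  # 168 141
```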

I would certainly like to hear from others about their own unique metrics.  What do you feel are your most important measurements? I would greatly appreciate feedback as I am always looking for a better answer (as well as help identifying the right answer)!

Entry Information

Filed Under: CDI Profession, Growing your program


Donald A. Butler About the Author: Donald A. Butler entered the nursing profession in 1993, and served 11 years with the US Navy Nurse Corps in a wide variety of settings and experiences. Since CDI program implementation in 2006, he has served as the Clinical Documentation Improvement Manager at Vidant Medical Center (an 860 bed tertiary medical center serving the 29 counties of Eastern North Carolina). Searching for better answers or at least questions, Butler says he has the privilege to support an outstanding team of CDI professionals, enjoys interacting with his CDI peers and is blessed with a wonderful family.

Comments: 6

  1. Great article! I enjoy looking at metrics! I track all of the metrics at our facility. However, we have a “canned” program from a vendor, so we didn’t have to “think” about what we wanted to track; we were told what to track and the significance of each metric. In answer to your question, these are my top 3 most important measurements: DRG impact %, match/no-match % between coder and CDS, and MD response rate (agree, disagree, no response). I think your productivity metric is a great idea! We don’t look at that here, but we should, so I will present this option to our Program Director. We also track why we don’t match with the coder: if the information changes at discharge and we couldn’t have known, we track that; if we didn’t sequence the PDX and secondary DX correctly and should have, we track that. This is helpful over longer periods of time to show how consistent the medical record is or whether more training is needed for the CDS. Thanks for your great article.

  2. Cindy, thanks.
    I cannot speak to other consultant services, but we started with JA Thomas 5 years ago. Their software and system, as they teach it, are a good basis. One particularly good facet is the ability to download a lot of detail on each case reviewed, raw data that is otherwise buried in the database. With that ability, you can do just about any data analysis on program data and performance that you might want to pursue. I find a mix of both Excel and Access very helpful. You can also pull in other data sources for analysis, which is fun too.

    Perhaps other consultant software systems don’t allow data export or reporting customization? JATA did not provide custom reporting and did not easily allow additional data collection. I personally quickly started exploring data analysis and reporting on my own to get what I wanted.

    There are elements of the consultant data collection that do limit customization. For example, I’ve never liked the restriction of agree/decline/no-response. There are some shades of grey in there (agreed verbally, for example) and there is no way to add such options.

    I believe we need to individually craft our programs, philosophy, etc. That questioning and conversation probably ought to start early in a program’s life and never stop.

    If a consultant does not provide the type of data you might want, talk to them!

  3. Glenn Krauss

    Don, excellent article. Some thoughts to consider are the development of metrics that measure the impact and effectiveness of educating physicians on the merits and importance of clinical documentation. I am speaking of elements of quality of clinical documentation that impact the physician’s business of the practice of medicine. As collaborators with physicians, we must not lose sight of improving clinical documentation in ways that benefit the physician as well as the hospital.

    Just some food for thought

  4. Glenn,
    Great point. I do believe that improving physicians’ understanding of, and collaboration on, coding, documentation, quality, etc. is an important avenue to enhance partnership and responsiveness with physicians.

    Hopefully you will be able to address metrics with your CDI & E/M audioconference today (I unfortunately am not able to dial in). I am sure that your content will help anyone wanting to consider possible metrics through a much increased understanding of the topic.

    My first thought is to look at a couple of things: first, the E/M distribution (numbers at each E/M level); second, trends in auditing (especially when physicians do their own E/M assignment) for the frequency of documentation not supporting the assigned E/M.

    Certainly an area for me to do some learning this year!

  5. As a new CDI I am concerned about metrics vs. quality, especially when coding staff is not on board and refuses to answer any questions. I believe quality should be a factor, but with large companies, numbers seem to be the bottom line. Would appreciate any feedback related to caseload, etc.

  6. Donald A. Butler


    I agree that quality needs to be a primary focus — both in daily CDI activity and proficiency (clinical and coding knowledge, putting together the picture, asking a well-founded and compliant query, etc.) as well as in keeping your personal focus on pursuing a complete and accurate medical record vs. solely chasing the dollars and volume (there have been a number of posts regarding this discussion).

    Working on the relationship with your coding professional peers is critical — and may need to involve administration to help (depending on circumstances). That is a crucial area that has been addressed both in the blog as well as via CDI Talk.

    As far as caseload, etc., I encourage you to keep a sharp eye out for the Query Survey report — I know Melissa has been working on compiling the results. There should be some good benchmarks from the questions on the survey.

    For basic metrics, I think an experienced CDS should be able to handle 150+ discharges a month. Query rates will vary, but I personally feel that somewhere between 15%–25% is a ballpark. Gaining physician response is largely up to you, so I would be shooting for >90%. Agreement (i.e., the physician providing the documentation you were expecting) should be >75% — but, in my opinion, there is danger once above 90%. My concern at that point is whether the physician is actually answering with their honest expert opinion (I am not always right), or whether I am only asking ‘easy’ questions.

