by Debbie Mackaman, RHIA, CHCO
We’ve been on quite a ride since the start of the Recovery Auditor (RA) efforts. In 2012, RAs continued to expand. This report examines how providers have adjusted their approach in the past year, and it looks forward to some new initiatives and proposals that could alter the state of RAs as a whole.
The Revenue Cycle Institute’s 2013 Recovery Auditor Benchmarking Survey had 325 respondents, representing both small and large hospitals, from all four RA regions. These respondents represented a number of different departments, including compliance, HIM, PFS, case management, and clinical documentation improvement.
The main theme of this year’s survey is the growing state of the RAs, and the fact that they only seem to be gaining speed.
The percentage of providers that have had recoupments from automated reviews rose by 14% this year. In addition, the number of providers that have seen record requests for complex or semi-automated reviews has increased from 82% to 91%. This may not be shocking, as CMS continues to approve more issues and the scope of the RAs continues to expand, but it underscores that the audits are ever-changing and will force providers to stay on their toes.
Another item providers need to pay attention to is the arrival of prepayment RA reviews. Right now, only two issues have been approved—MS-DRGs 312 and 069—with short inpatient hospital stays looming, and only 11 states are in the prepayment demonstration, but that does not mean providers are unfamiliar with the prepayment review process. According to our survey, 74% of respondents have seen a prepayment review from their MAC (52%), their RA (6%), or both (16%). A total of 52% said they are not specifically changing any internal processes related to prepayment concerns, but 33% said they’ve heightened awareness in departments affected by RAs. However, almost half of the respondents have felt the need to revise internal processes to meet auditor scrutiny, which appears to be increasing from year to year. As the RA program expands, providers need to be flexible enough to adjust to current issues and anticipate upcoming reviews.
Another RA-related demonstration project is the Part A to Part B rebilling demonstration. While only 380 hospitals are participating in this demonstration, 28 responded to the survey. Of these, the majority (62%) have been able to rebill and get reimbursed. Only 14% have not rebilled anything yet, but were planning on doing so in the future. This demonstration project will last three years, and although it could assist facilities in recouping some of their costs for inpatient stays that were deemed not medically necessary, it is certainly not an answer to resolving the daily operational issues facilities face.
The third recent development is the complaint filed by the American Hospital Association (AHA) against RAs. As you all probably know, the AHA, along with several hospitals, filed a lawsuit against the RAs for unfair Medicare practices. Based on this bold move, I was curious what respondents thought about the initial action. Sixty percent think that something will happen as a result of the complaint, but 43% of those think that it will only involve very minor changes. On the other side, 17% think that nothing will happen, while 25% are just happy to see the RAs being called out on their current practices. This will be an interesting topic to follow and may set a precedent for other audit processes. Although respondents appear to be pessimistic regarding the potential outcome of the lawsuit, the fact that RAs are being challenged should give providers hope for changes going forward.
Editor’s Note: Debbie Mackaman, RHIA, CHCO, is a Regulatory Specialist with HCPro, Inc., and teaches its Medicare Boot Camps. This post is an excerpt from the benchmark report. To read the entire report, visit The Revenue Cycle Institute.
In light of the most recent physician query practice guidance released in the February edition of the Journal of AHIMA “Guidelines for Achieving a Compliant Query Practice,” ACDIS wants to know how your query practices have changed or progressed.
Please take a few minutes to participate in this 35-question survey. It asks for input regarding CDI query rates, physician response rates, auditing and tracking efforts, and policy creation. ACDIS will share the results in the featured article section of the website and provide a download to members in the upcoming CDI Journal.
Editor’s Note: Join query practice brief committee members William E. Haik, MD, FCCP, CDI-P and Cheryl Ericson, MS, RN, CCDS, CDI-P on Monday, March 4, 1-2:30 p.m., eastern for the webcast “Physician Queries: Comply with new ACDIS/AHIMA guidance.”
Q: Do you predict coder productivity will decline as a result of ICD-10? If so, what do you think the declines will be six months after implementation?
A: These are just my predictions, but I think that inpatient cases are going to drop to 2.5–3 records per hour. Currently we’re upwards of 3–3.5 per hour in non-teaching/tertiary environments.
On the ambulatory surgery side, I think those are going to drop to 5.5–6.5 per hour, and I really think it will be closer to 5. HCPro’s 2011 Coder Productivity Survey results show coders completing 6–7 cases per hour at the time. The reason I give these estimates is that we’re going to have more of a challenge with surgeons being able to provide coders the information needed. So I really do think it will be at the lower end of that range.
And if you’re one of those facilities that codes today in both ICD-9 and CPT® and if you can continue that practice in ICD-10 and CPT, then you’re going to have more of a reduction, closer to 4 cases per hour just because of the two different thinking patterns for the two coding classifications.
For non-interventional radiology outpatient testing cases, we’re averaging approximately 25–30 per hour right now. I think that will also go down slightly, to a range of 23–26.
Editor’s Note: Rose T. Dunn, MBA, RHIA, CPA, FACHE, FHFMA, chief operating officer of St. Louis–based First Class Solutions, Inc., answered this question during the February 29–March 2, 2012, “JustCoding Virtual Summit: ICD-10-CM and ICD-10-PCS.” Her response was originally published on JustCoding.com.
ACDIS sister publication and network the Revenue Cycle Institute is conducting its annual Recovery Auditor Benchmarking survey to determine how facilities nationwide are handling this intricate auditing and denials management process. The survey explores experiences with Recovery Auditors and inquires about plans your facility has put into place to deal with them. It covers new developments, such as prepayment audits, Medicaid RACs, Part A to Part B rebilling, and much more.
ACDIS understands that many CDI professionals also play an important role in managing Recovery Auditor record requests as well as in denial management processes. We value your input and appreciate your time and effort in completing this anonymous survey. We know you must be as interested as we are in finding out what your peers are doing to handle the Recovery Auditor process, so all of this data will be compiled and shared with you in an upcoming report.
To take the short survey, please click here.
When the first ACDIS ICD-10 implementation benchmarking survey was published in October 2010, there were three years left before actual implementation. At that time, 52% of the more than 300 survey respondents indicated they had only basic awareness of ICD-10. Another 44% of respondents indicated their facility did not have an ICD-10 training timeline, and 51% said CDI staff didn’t have a seat at the table regarding ICD-10 implementation. (ACDIS members can read the entire report in the CDI Journal section of the ACDIS website.)
Fast forward to December 2012.
What many thought would be a mad scramble to the finish line ahead of ICD-10 implementation instead seemingly remains a struggle to secure resources. In August, CMS announced a one-year delay of ICD-10 implementation, setting the new “final” date at October 1, 2014. Nevertheless, just last month the American Medical Association (AMA) vocalized its opposition to ICD-10 implementation, according to a preliminary report of its 2012 interim meeting.
According to an ACDIS website poll following the delay announcement, 52% of respondents indicated that they were relieved to have the one-year extension; 24% indicated that no one in their facility had received ICD-10 training.
To obtain data regarding CDI program involvement in ICD-10 preparations, ACDIS composed a 30-question follow-up ICD-10 survey. The survey aims to further clarify what role CDI specialists play in preparing for the upcoming transition to ICD-10 and gauge where facilities stand in the planning process. Just as it did in 2010, your participation matters. Help us make the 2013 ICD-10 benchmarking survey meaningful by adding your experiences to our results. Click here to complete the survey.
During a recent CDI Talk conversation, I alluded to data available in the CMS IPPS Final Rule that CDI specialists can use to benchmark their progress and compare their efforts against national norms. It may take a little digging, development, and analysis but such effort is worth it.
One drawback of this data, however, is its lack of comparability to hospitals similar to one’s own. Another drawback is that the data represent averages, and who wants to be only average? When you use this data to compare your individual organization’s performance against the national norms, keep in mind that an effective CDI program should likely be above those national benchmark averages. I say this for two reasons: First, many hospitals don’t have any CDI efforts in place and others have meager or ineffective programs, so one can expect the national reported average to be lower than what an effective CDI program might observe as “average” at its facility. Second, best-in-class is never average. I believe we all want to be effective if not “the best” in our CDI practices. Despite these two drawbacks, the data is free and available to everyone, so why not take advantage of it?
The following analysis is based on the data contained in Table 5, Table 7A, and Table 7B. Table 5 is, of course, the MS-DRG table for fiscal year (FY) 2013. The ICD-9-CM data used in Tables 7A and 7B is from FY 2011; that data is then pushed through the v29 grouper (FY12, Table 7A) and the v30 grouper (FY13, Table 7B). The primary purpose of Tables 7A and 7B is to present length of stay (LOS) data at different percentiles; they also provide the case volume across the entire Medicare data set for each MS-DRG. I used this table to forecast the possible impacts (assuming no changes in documentation) when acute renal failure was downgraded to a complication/comorbidity (CC) in 2010. (Read that article, “CDI analysis can help facilities understand impact of MCC downgrade,” on the ACDIS Blog.)
Since we have access to the DRG volumes and the DRG relative weights (RW) from Table 5, we can start to examine frequency, distribution, and so on. So, let’s start slicing data. National discharge volumes equal 10,771,161 and the national case-mix index (CMI) equals 1.6045.
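The slicing described above starts from a simple volume-weighted average: CMI is the sum of each DRG’s volume times its relative weight, divided by total discharges. Here is a minimal sketch; the DRG numbers and relative weights below are illustrative placeholders, not actual Table 5 values.

```python
# Case-mix index (CMI) = sum(volume * relative weight) / sum(volume).
# DRG volumes and relative weights here are made up for illustration,
# not taken from the IPPS Final Rule Table 5.
drg_data = {
    # MS-DRG: (discharge volume, relative weight)
    470: (450_000, 2.0895),
    871: (400_000, 1.8527),
    292: (200_000, 0.9771),
}

total_discharges = sum(vol for vol, _ in drg_data.values())
total_weight = sum(vol * rw for vol, rw in drg_data.values())
cmi = total_weight / total_discharges

print(f"Discharges: {total_discharges:,}  CMI: {cmi:.4f}")
```

The same calculation, run over all MS-DRGs in Table 5, reproduces the national figures cited above; filtering the dictionary to subsets (medical only, surgical only, surgical without DRGs 3 and 4) gives the sliced CMIs discussed below.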
Let’s review some background before digging in deeper. Under the IPPS reimbursement system, ICD-9-CM diagnosis and procedure codes are grouped into Major Diagnostic Categories (MDCs). Most of these groupings are by body system; however, there are a few exceptions: the “pre-MDCs,” HIV, and multiple significant trauma. Each MDC is further subdivided into Medicare severity diagnosis-related groups (MS-DRGs), which can be classified as medical or surgical based on the presence of an ICD-9-CM procedure code that CMS classifies as requiring additional resources and that, therefore, impacts the DRG payment. The most basic initial MS-DRG organization sorts DRGs into three groups: medical DRGs, surgical DRGs, and an odd group called “Pre,” which largely consists of the most resource-intensive cases (transplants, heart machines, ECMO, etc.).
DRGs 3 and 4 are, of course, the rather heavily weighted DRGs for patients who received a tracheostomy but don’t have a principal head/neck diagnosis. These are likely the most widely occurring DRGs in the “Pre” group (they appear at many more, and smaller, hospitals than transplants and the like).
| MS-DRG Type | % of total cases |
| --- | --- |
| Surg without DRG 3 or 4 | |
| Surg without Pre | |
As an aside, note the very small difference in the percentage of total volume when eliminating DRGs 3 and 4: 0.2% of the total cases. Yet, in the table below, pulling out those few cases drives the CMI for surgical cases down by 0.0984, which is a 3.5% decrease.
Whenever I see an unexpected change (either up or down) in CMI, the first place I investigate the cause is in the general med/surg split and then the volumes of DRGs 3 and 4. Then I look to see if there were any changes in service line volumes due to various possible factors, such as a short term change in physician staffing among certain higher weighted DRGs, a change in facility focus or operational capacity, as well as any significant market changes.
Why DRGs 3 and 4? For a hospital with a CMI of 2.0 and 1,000 discharges a month, just one less DRG 3 case a month will drive the CMI down roughly 0.016, which is a 0.8% decrease. One case is easily lost in the weeds if your focus is on case volumes.
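That sensitivity is easy to verify with a little arithmetic. In this sketch, the DRG 3 relative weight of 17.6 is an assumed, illustrative figure (heavily weighted trach DRGs fall in this neighborhood), not a value taken from the article.

```python
# How much does losing one DRG 3 case move the CMI?
# Assumptions (illustrative, not from the article):
#   - baseline CMI of 2.0 and 1,000 discharges in the month
#   - DRG 3 relative weight of roughly 17.6
baseline_cmi = 2.0
monthly_discharges = 1_000
drg3_weight = 17.6  # assumed relative weight, for illustration only

total_weight = baseline_cmi * monthly_discharges  # total relative weight

# Remove a single DRG 3 case from the month and recompute the CMI:
new_cmi = (total_weight - drg3_weight) / (monthly_discharges - 1)
drop = baseline_cmi - new_cmi

print(f"New CMI: {new_cmi:.4f}, drop: {drop:.4f} ({drop / baseline_cmi:.1%})")
```

One high-weight case barely registers in volume counts, but it moves the monthly CMI by nearly a full percent, which is why the med/surg split and DRG 3/4 volumes are the first places to look when CMI shifts unexpectedly.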
| MS-DRG Type | Number of Discharges |
| --- | --- |
| Surg without DRG 3 or 4 | |
| Surg without Pre | |
With such a disparity in CMI between surgical and medical cases, and considering the relatively small slice of all patients that surgical DRGs represent, using the total CMI as a metric for CDI effectiveness might be considered fraught with risk.
At first glance, here is how the DRGs break down by the influence of secondary diagnoses on DRG assignment:
| | | | |
| --- | --- | --- | --- |
| Pair CC or MCC | 3% | 1% | 8% |
It is interesting to see where the volume variations between medicine and surgery are, specifically in the column for “None” (e.g., chest pain, TIA, and syncope) and the column for “Pair CC or MCC.” The volumes between the MCC pair and the triplet are similar for both medical and surgical DRGs. However, as one can see below, the overall capture of secondary diagnoses is rather different between the medical and surgical DRGs.
Many CDI managers use case-mix index (CMI) as the primary metric for determining the success or failure of their program. If the CMI rises in a given month, the CDI staff is doing its job, appropriately querying physicians for the correct principal diagnosis and accompanying complications/comorbidities. If the CMI dips, CDI staff aren’t getting physicians to respond, or aren’t reviewing records thoroughly enough.
Or so goes the common logic.
But using CMI as your solitary or even principal metric for success is fraught with problems. Sure, CMI shows a good snapshot of the type of patients a hospital is treating. But as a cold piece of data in isolation it does not tell the story of what is going on inside the walls of a given facility. For example, what happens if a high-volume heart surgeon in your hospital takes two weeks’ vacation this summer? Your CMI will dip, perhaps significantly if you work in a small facility. What happens if your hospital adds an expensive new neurosurgery service line? Your CMI is going to climb. And both of these factors are out of the hands of CDI. Is this the measure you ultimately want to be judged against?
Glenn Krauss, BBA, RHIA, CCS, CCS-P, CPUR, FCS, PCS, C-CDIS, CCDS, an independent revenue cycle consultant from Madison, Wis., and a member of the ACDIS advisory board, has uncovered another problem with CMI as a CDI metric: It doesn’t account for takebacks from Recovery Auditors (commonly known by their original acronym, RACs), Medicare Administrative Contractors (MACs), and other audit entities.
Krauss refers to CMI “as the cost to buy the product. We should be using gross margin, instead of CMI. Gross margin is gross increase in case mix—minus the take-backs. That’s the net benefit.”
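Krauss’s gross-margin idea can be sketched numerically. Every figure below (the blended payment rate, the CMI gain attributed to CDI, and the take-back total) is hypothetical, chosen only to illustrate the calculation he describes.

```python
# "Gross margin" view of CDI benefit: reimbursement gained from
# case-mix improvement, minus dollars later recouped by auditors.
# All figures are hypothetical and for illustration only.
base_rate = 6_000.00     # hypothetical blended payment per unit of relative weight
cmi_gain = 0.05          # CMI improvement attributed to CDI over the period
discharges = 1_000       # discharges in the period
takebacks = 120_000.00   # auditor recoupments in the same period

gross_increase = cmi_gain * discharges * base_rate  # gross case-mix gain, in dollars
net_benefit = gross_increase - takebacks            # Krauss's "gross margin"

print(f"Gross increase: ${gross_increase:,.0f}  Net benefit: ${net_benefit:,.0f}")
```

A program that reports only the gross increase can look successful even while auditors claw back a large share of it; netting out validated recoupments gives a truer picture of CDI impact.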
Krauss cites a New England hospital currently under scrutiny as part of an Office of the Inspector General study. An auditor is reviewing more than 100 records of DRG 252, Other vascular procedures with MCC. These records were selected primarily for the fact that they contained only one MCC.
In many of these charts encephalopathy was written only once in the chart, and without the necessary consistency or continuity.
“Undoubtedly, these MCCs will ultimately be denied by the reviewer,” Krauss says. The result is an artificially high CMI that will come back down. “What is the net benefit if we don’t solidify the chart to remain accurate?” Krauss asks.
Instead of declaring victory after a query results in documentation of a single shaky CC or MCC, Krauss says CDI specialists should pursue “valid and explicit, well orchestrated documentation throughout the chart.” This solidifies the entire chart and ultimately results in a more accurate CMI.
In short, CDI departments shouldn’t ignore CMI. But if you do use it, make sure you account for other contributing factors. Deduct valid auditor recoupments from your numbers. And strive in your efforts to create a strong chart, top to bottom, that can withstand scrutiny. Doing so ensures that your CMI is a true reflection of severity of illness—and not an easy auditor target.
Editor’s Note: This article was originally published in the July 2012 edition of the CDI Journal.
Well, I’m not sure about all that pumpkin pie cooling on the windowsill and cider doughnuts stuff, but it was October 2007 when ACDIS first asked members about the age of their programs. There were 274 responses to that first survey. Of those, the largest group (30%) said their program was less than one year old.
It’s been five years since we asked that question. Last week’s poll illustrated something interesting. For the most part, CDI programs have grown up. Responses to the recent poll totaled 308. Of these, 32% indicated they were five years old or older. Compare that against the 18% of five-plus programs back in 2007.
Depending on your perspective (are you a glass half-full or a glass half-empty sort of person?) that might sound like good news. As programs mature they’re more likely to look past the basic CC/MCC capture and case-mix index focus of younger programs. As programs mature they’re more likely to increase staffing levels since they’ve been able to prove their accomplishments to the facility powers-that-be. As programs mature they’re more likely to have increased physician engagement. And so forth.
However, does the 6% in the recent poll who indicated that their programs were less than a year old reflect somewhat of a stagnation of new CDI startups? Experts don’t seem to think so. Over and over again in the upcoming (July 1) edition of the CDI Journal, experts point to CDI efforts as the balm to soothe many of healthcare reform’s banes. Value-Based Purchasing initiatives? CDI can help capture severity and mortality documentation. ICD-10 preparation? CDI staff members can start ensuring the capture of that granularity now, and they can help educate the rest of the staff, too.
All this seems to point to a greater need for CDI programs at facilities across the country in a variety of venues. All this seems to point to the possibility that CDI roles could expand to the physician office setting, to the critical access setting and more.
Yet, unlike at the time of the first poll, taken at MS-DRG implementation, it seems new program implementation may have slowed. As frequently happens when analyzing data, more questions have arisen from the responses. Maybe (wink, wink) the low rate of “younger” programs is simply due to the fact that facilities got on the CDI train years ago and only a few have yet to implement a program.
Wouldn’t that be nice?
It is probably more likely that priorities such as electronic medical record implementation and ICD-10 preparation have shifted funding and focus away from CDI efforts. If so, that’s a shame because improving the quality of clinical documentation is really the reason behind those initiatives.
We know CDI programs are growing. (Read Donald A. Butler’s estimate in this previous blog post.) The question is when we will see another uptick in new program initiatives. As a general guess, I’ll say “not too long now.” More specifically, I’d bet that uptick comes as soon as ICD-10 implementation goes live.
Although you might not have heard of it before, ACDIS has formed a group called the CDI Roadmap Committee to help develop and define some of the core structures that the CDI profession has been lacking. These include the broad goals and objectives of CDI, staffing and productivity considerations, setting new goals for mature programs, and a realistic structured outline to help map out the way.
The CDI Roadmap Committee has been meeting since September 2011. The committee currently consists of the following members:
- Glenn Krauss, RHIA, CCS, CCS-P, CPUR, FCS, PCS, CCDS, C-CDI, ACDIS Advisory Board Member, Independent Revenue Cycle Consultant in Madison, WI.
- Lynne Spryszak, RN, CCDS, CPC-A, ACDIS Advisory Board Member and independent HIM consultant in Roselle, IL.
- Donna D. Wilson, RHIA, CCS, CCDS, ACDIS Advisory Board Member and Senior Director of Compliance Concepts, Inc. in Wexford, PA.
- Cheryl Ericson, MS, RN, ACDIS Advisory Board Member and CDI manager for the Medical University of South Carolina in Charleston, SC.
- Gail B. Marini, RN, MM, CCS, LNC, ACDIS Advisory Board Member and CDI manager for South Shore Hospital in Weymouth, MA.
- Beth Kennedy, RN, BS, CCS, CCDS, Associate Director, Documentation Improvement Program CMO, The Care Management Company, LLC., Montefiore Medical Center in Bronx, NY.
The majority of the group’s first meeting was spent discussing the purpose and intent of the group and defining both short- and long-term objectives. The committee determined that its objective is to create a phased approach to CDI success. The team decided to develop a pre-implementation timeline/checklist, then delved deeper into the goals and objectives of a basic CDI program and the requirements and expectations for staff.
At subsequent meetings, members offered drafts of a pre-implementation checklist with items such as assembling a steering committee and an outline for developing a project plan. The group also discussed sample orientation checklists, collected job descriptions for physician advisors, CDI supervisors, and CDI specialists, and discussed potential CDI evaluation criteria and assessment of CDI staff coding and clinical skills.
The CDI Roadmap Committee will likely take a break after it completes the “pre-implementation” and “implementation” phases of the timeline, and continue work on the “ongoing maintenance” and “advanced-level CDI” phases at a later date.
The committee plans to send its work to the ACDIS advisory board for approval and compile its findings in a series of White Papers available as free resources to the ACDIS membership.
Editor’s Note: This article first appeared in the March 15 edition of CDI Strategies.
Over the past several years there have been a number of conversations that touch on physician leadership involvement with CDI. Programs can and do achieve success, but so much more is achieved when there is a proactive and supportive medical voice.
Physician leadership can come from a number of sources and in a variety of forms. Some CDI programs (a few anyway) report directly or indirectly to a physician executive (medical staff functions, chief medical officer [CMO], etc.) and other programs report to the quality department where a physician executive is frequently directly involved. In these circumstances, I hope the physician executive maintains some amount of time dedicated for CDI efforts.
Some organizations are fortunate enough to have physician leadership within the broader organization that is (or have been convinced to be) very supportive to CDI efforts. From what I’ve heard, these frequently include CMOs and chiefs of staff and/or service lines within a given facility. Finally, some physicians, such as a medical director, physician champion, advisor, or liaison, devote a portion of their time to work directly with CDI. (Read more about the expanding roles and responsibilities of CDI physician advisors in the January 2012 edition of the CDI Journal.)
Furthermore, even with supportive medical staff leadership, how that support translates into action varies. Some facilities provide physicians time to offer educational sessions to their CDI and coding teams. Others provide CDI education sessions to entire physician groups by service line.
Most CDI programs earn physician leadership and support through the tireless efforts of the CDI staff and program leaders. Only occasionally have I seen this support present from the very beginning.
I’d like to look at the “state of affairs” with regard to physician leadership. One ACDIS weekly online poll (2008) addressed the simple question of whether respondents had a “physician champion” and whether that champion was effective. The results were rather surprising: only 46% indicated they had a physician champion, and half of those respondents actually rated him/her as ineffective. So, according to that poll, only 23% of programs had an effective physician advisor.
ACDIS repeated the poll (with slightly different wording) in April 2011, and though the results showed some improvement, they were still discouraging. In 2011, 31% described having a very beneficial physician champion, 22% described their physician champion as “minimally effective,” 24% felt the position was not affordable, and 16% indicated that their program could not find a good candidate. Even more surprising to me, 7% said they simply did not see the need for the role.
Additional polls from 2008 which echo the theme of limited physician support for CDI programs include:
- “How have physicians reacted to your CDI program and query requests?” where only 40% reported a positive response from physicians
- “Are your physicians catching on to your CDI program?” 3% yes, 74% yes and no, 23% no
- “Do you have any physicians who refuse to participate in your CDI program?” where 81% indicated anywhere from one to many physicians refuse
Other recent poll responses illustrate different aspects of physician involvement in CDI, but I thought these painted an interesting picture.
Don’t forget the most recent study, published in the January CDI Journal, in which 73% (178 individuals) indicated that their physician advisor spends five hours or less dedicated to CDI efforts, and 54% described their advisor as either moderately effective or ineffective.
I think it is important to have data to effectively measure any focus area of interest. I believe a couple of key metric data pieces provide insight to the level of success with physician engagement. In any analysis, I would include items such as:
- Physician response rates
- Severity of illness (SOI)/risk of mortality (ROM) data
- Trends in volume of queries and more specifically the focus of queries (Do CDI staff ask the same queries repeatedly?)
I specifically would not include physician agreement rate, except in the broader sense of looking for individual outlier physicians: those who agree with whatever the CDI specialist asks, or those who never agree with the premise of a CDI specialist’s query.
As always, I’d love to hear what elements other CDI programs use to statistically validate their physicians’ involvement with and support of their CDI programs.
Quite a bit of material is available between the ACDIS online polls (I have fun with those, obviously), various blog postings, journal articles, and conference presentations that offer useful information regarding physician engagement. Several provide inspiring examples of successes. Various items from other organizations are in the public domain.
If you are interested, shoot me an e-mail or leave a comment here and I can develop a partial list of links.
I am sure most agree that fostering physician engagement in CDI efforts is one of the key challenges of every CDI program.
I certainly don’t have many great answers to this question, and I’d like to hear more thoughts, experiences, and success stories. I know some great examples would be wonderful Journal articles or blog posts.
I will toss in a final thought. Organizational cultural change typically takes five years. Certainly obtaining physician interest in documentation and coded data represents a significant cultural change.
Sometimes I wonder if we just need to practice a little more persistence and a lot more patience.