All Entries Tagged With: "Compliance"

Medicare Compliance Review provides new blueprint for CDI efforts

Glenn Krauss

If you haven’t seen the Office of Inspector General (OIG) report “Medicare Compliance Review of University of Cincinnati Medical Center [UCMC] for Calendar Years 2010 and 2011,” take a look at the OIG website.

What you will see is eye-opening: The OIG reviewed a sample of claims it deemed improperly billed by the 695-bed hospital and, by extrapolating the error rate, determined that UCMC owes more than $9.8 million in improper payments.

The next question you should consider as a CDI specialist is: How can I protect my hospital from a similarly catastrophic review by the OIG? By effecting positive change in clinical documentation that represents “true” documentation improvement rather than a narrowly defined CDI focus on the capture of CCs/MCCs, says Glenn Krauss, BBA, RHIA, CCS, CCS-P, CPUR, FCS, PCS, CCDS, C-CDI, a manager with Accretive Health in Chicago.

CDI specialists tend to look only at solidifying individual diagnoses in the chart, but often ignore equally important supporting information like clinical indicators to support admission to the facility.

“Do we have good solid documentation of the patient’s DRG, or do we have diagnoses with little clinical support? Are we just sending automatic queries?” he asks. “Often we’re not focused on getting a solid, effective, and encompassing history and physical [H&P] that accurately captures the patient’s history of present illness [HPI] reflective of the patient’s severity of illness, signs and symptoms.”

Physicians tend to elaborate on a patient’s past illnesses rather than the present illness. A sound HPI consists of a chronological description of the development of the patient’s present illness from the first sign and/or symptom to the present, Krauss notes. “There is often inconsistent clinical context, or a lack of it, for the reason for the admission. Doctors need this context for their billing, and [hospitals] need it for quality,” he says.


Book Excerpt: Physician queries and the regulatory environment

The Physician Queries Handbook, 2nd ed.

The advancement of CDI efforts brings with it unique challenges. As benevolent a mission as CDI may seem to have, for many facilities the focus of concurrent physician queries continues to be “optimizing” information in the medical record in order to increase reimbursement. When such efforts do not reflect the care provided to the patient, these practices could be construed as fraud, particularly when data patterns appear to illustrate inconsistencies with national norms.

When AdvanceMed Corporation, the Zone Program Integrity Contractor (ZPIC) for CMS, parsed its data, it identified eight aberrant providers, all essentially from the same healthcare system. After years of investigations and subsequent negotiations between the facilities and the U.S. Department of Justice (DOJ), the facilities ultimately paid an $8.9 million settlement. The DOJ found that, in most cases, “the timing of changes in peer comparison data–from below average to above average–coincided with implementation of CDI programs.”

Similarly, when a 2005 Maryland qui tam case settled for nearly $3 million in June 2009, prosecutors pointed to CDI efforts related to leading queries as the crux of the allegations.

Of course, healthcare providers must ensure the financial solvency of their organizations just as government officials must ensure the solvency of their healthcare funding programs. Both sides of this fiscal conundrum face growing financial frustration as they continue to search for an underlying cause of expanding healthcare costs. Nevertheless, when a facility submits a claim to the federal government for payment of services that were never provided, it risks being accused of False Claims Act (FCA) violations, investigations by the Office of the Inspector General, and, in some cases, prosecution by the DOJ.

Editor’s Note: This excerpt comes from The Physician Queries Handbook, Second Edition, by Marion Kruse, MBA, RN.

Q&A: Multiple choice options on query templates

You've got questions? Let us know!

Q: Our CDI department is developing clarification forms and I have voiced concern with some of the templates. For example, the anemia clarification lists many possible diagnoses including aplastic anemia. If the listed condition would not be clinically acceptable based on the clinical indicators and treatment, should this diagnosis even be listed? I did raise the issue with our physician advisor and he is concerned with the forms, too. Personally, I feel we should not list diagnoses that are not clinically accurate for the specific case.

A: The AHIMA query practice briefs (the latest, Guidelines for Achieving a Compliant Query Practice, created in affiliation with ACDIS and published in February) state that multiple-choice queries must list “reasonable” diagnoses. With that reference in mind, it is inappropriate to include options on a query that are not supported by clinical indicators.

Although it is good that those creating the query templates want to include as many types of anemia as possible, sometimes there is only one appropriate/relevant diagnosis. In such situations, it is okay for the query form to offer only one specific diagnosis option as long as the form also includes an “other” option with a line for comments and an “unable to determine” option. The risk with query templates is including information not applicable to a particular patient during a specific episode of care; the CDI specialist/coder needs the ability to edit/customize the template to suit the situation.

The creation of query templates has many benefits, however.

For starters, they provide a comprehensive starting point for the CDI specialist to work from. In a situation where an anemia query is warranted the CDI specialist could pull up the query template and adapt it to that particular patient’s medical record, including relevant clinical indicators and eliminating inappropriate options.

Furthermore, when CDI professionals include multidisciplinary members in the query creation process, such efforts can prove educational for all involved. Physicians can offer clinical, diagnostic insight, and HIM professionals may offer insight into coding nuances. Such inclusionary efforts at the outset also help ensure all vested parties work together. In short, it can help ensure support for the CDI program and its documentation improvement efforts.

Hope this helps.

Q&A: Anyone on the care team can answer a query and be compliant

Ask your CDI question in the comment section.

Q: For some reason, I was under the impression that a query could be answered by any healthcare provider, even one just doing a review of the case (a fellow hospitalist, for instance). I know that a treating provider is, of course, the way to go, but I was wondering about this as an option for getting the query completed, and whether, if we do go this route, the record would stand up to a Recovery Audit review.

A: Any member of the treating medical team can answer a query (not just the attending physician) as long as the documentation does not conflict with that of the attending provider. Coding can occur based on the documentation of any licensed independent practitioner (NP, PA, MD, DO, or resident) who provides direct treatment.

CDI specialists may leave queries in the medical record addressed to the medical team rather than a particular provider to ensure a timely response as the attending physician may not be making rounds the day the query is issued.

The exception may be electronic medical records that require the query to be addressed to a particular person, in which case it would probably go to the attending but, if possible, be copied to all members of the treating medical team. I previously worked at an academic medical center, and we never had a problem with any member of the treating medical team addressing a query.

With that said, the attending physician is ultimately responsible for the medical record, which is why some CDI programs address their queries directly to that individual. But I don’t know of any guidance that says queries can only be issued to the attending physician. The only exception is if there is conflicting documentation in the medical record. In that case the attending physician must provide final clarification.

I think it is also important to address the role of the CDI/UR/CM physician advisor in health record documentation. Although a physician advisor is a practitioner who can provide direct patient care under the scope of their license, it is inappropriate for them to document within the medical record unless they are part of the treating medical team (e.g., the patient is under their care and they are assuming/sharing responsibility for the care of the patient).

In other words, when the physician advisor is responsible for the care/treatment of a patient they can use their knowledge of CDI to ensure accurate documentation within the medical record; however, it is inappropriate for the physician advisor to document in the medical record/answer queries when they are not involved in the care of the particular patient. Documentation under these circumstances can be viewed as fraudulent because it appears the health record is being modified for the purpose of reimbursement or some other outcome metric rather than as part of patient care.

Editor’s Note: Cheryl Ericson, MS, RN, CCDS, CDIP, education director at HCPro, Inc. and an AHIMA-approved ICD-10-CM/PCS trainer in Danvers, MA, answered this question. For information about CDI-related Boot Camps taught by Ericson, visit 

Case-mix index: Use with caution

Don't get mixed up by case-mix index metrics

Many CDI managers use case-mix index (CMI) as the primary metric for determining the success or failure of their program. If the CMI rises in a given month, the CDI staff is doing its job, appropriately querying physicians for the correct principal diagnosis and accompanying complications/comorbidities. If the CMI dips, CDI staff aren’t getting physicians to respond, or aren’t reviewing records thoroughly enough.

Or so goes the common logic.

But using CMI as your solitary or even principal metric for success is fraught with problems. Sure, CMI shows a good snapshot of the type of patients a hospital is treating. But as a cold piece of data in isolation it does not tell the story of what is going on inside the walls of a given facility. For example, what happens if a high-volume heart surgeon in your hospital takes two weeks’ vacation this summer? Your CMI will dip, perhaps significantly if you work in a small facility. What happens if your hospital adds an expensive new neurosurgery service line? Your CMI is going to climb. And both of these factors are out of the hands of CDI. Is this the measure you ultimately want to be judged against?

Glenn Krauss, BBA, RHIA, CCS, CCS-P, CPUR, FCS, PCS, C-CDIS, CCDS, an independent revenue cycle consultant from Madison, Wis., and a member of the ACDIS advisory board, has uncovered another problem with CMI as a CDI metric: It doesn’t account for takebacks from Recovery Auditors (commonly known by their original acronym, RACs), Medicare Administrative Contractors (MACs), and other audit entities.

Krauss refers to CMI as “the cost to buy the product. We should be using gross margin, instead of CMI. Gross margin is gross increase in case mix—minus the take-backs. That’s the net benefit.”
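The gross-versus-net distinction can be sketched with simple arithmetic. All of the figures below are hypothetical, invented purely to illustrate the idea; they do not come from Krauss or any real hospital:

```python
# Hypothetical illustration of gross vs. net case-mix benefit.
# Every number here is made up for illustration only.

base_payment_rate = 6000.00   # hospital's blended base payment rate, $
cmi_before = 1.50             # case-mix index before the CDI program
cmi_after = 1.58              # case-mix index after the CDI program
discharges = 1000             # Medicare discharges in the period
takebacks = 210000.00         # auditor recoupments (RAC/MAC), $

# Gross benefit: revenue attributable to the CMI increase alone
gross_benefit = (cmi_after - cmi_before) * base_payment_rate * discharges

# Net benefit ("gross margin" in Krauss's terms): gross minus take-backs
net_benefit = gross_benefit - takebacks

print(f"Gross benefit: ${gross_benefit:,.0f}")   # Gross benefit: $480,000
print(f"Net benefit:   ${net_benefit:,.0f}")     # Net benefit:   $270,000
```

A CMI-only metric would report the $480,000 figure; after the denied MCCs are recouped, the program actually delivered $270,000.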

Krauss cites a New England hospital currently under scrutiny as part of an Office of the Inspector General study. An auditor is reviewing more than 100 records of DRG 252, Other vascular procedures with MCC. These records were selected primarily for the fact that they contained only one MCC.

In many of these charts, encephalopathy was documented only once, without the necessary consistency or continuity.

“Undoubtedly, these MCCs will ultimately be denied by the reviewer,” Krauss says. The result is an artificially high CMI that will come back down. “What is the net benefit if we don’t solidify the chart to remain accurate?” Krauss asks.

Instead of declaring victory after a query results in documentation of a single shaky CC or MCC, Krauss says CDI specialists should pursue “valid and explicit, well orchestrated documentation throughout the chart.” This solidifies the entire chart and ultimately results in a more accurate CMI.

In short, CDI departments shouldn’t ignore CMI. But if you do use it, make sure you account for other contributing factors. Deduct valid auditor recoupments from your numbers. And strive in your efforts to create a strong chart, top to bottom, that can withstand scrutiny. Doing so ensures that your CMI is a true reflection of severity of illness—and not an easy auditor target.

Editor’s Note: This article was originally published in the July 2012 edition of the CDI Journal.

Coding for inpatient postoperative complications requires explicit documentation

Capture appropriate documentation for coding postoperative complications.

Determining when to code a post-surgical complication as opposed to simply considering it to be an expected outcome after surgery can be a complicated task.

A complication is “a condition that occurred after admission that, because of its presence with a specific principal diagnosis, would cause an increase in the length of stay by at least one day in at least 75% of the patients,” according to CMS.

Therefore, documentation of a postoperative condition does not necessarily indicate a link between the condition and the surgery, according to Audrey G. Howard, RHIA, senior consultant for 3M Health Information Systems in Atlanta, who will join Cheryl Manchenton, RN, BSN, an inpatient consultant for 3M Health Information Systems, on Thursday, July 12, for the live audio conference “Inpatient Postoperative Complications: Resolve your facility’s documentation and coding concerns.”

For a condition to be considered a postoperative complication all of the following must be true:

  • It must be more than a routinely expected condition or occurrence, and there should be evidence that the provider was evaluating, monitoring, or treating the condition
  • There must be a cause and effect relationship between the care provided and the condition
  • Physician documentation must indicate that the condition is a complication

According to Coding Clinic, Third Quarter, 2009, p.5, “If the physician does not explicitly document whether the condition is a complication of the procedure, then the physician should be queried for clarification.”

Coding Clinic, First Quarter, 2011, pp. 13–14 further emphasizes this point and clarifies that it is the physician’s responsibility to distinguish a condition as a complication, stating that “only a physician can diagnose a condition, and the physician must explicitly document whether the condition is a complication.”

For example, a physician may document a “postoperative ileus,” but it is very common for a patient to have an ileus after surgery, Howard says. Therefore, this alone does not qualify as a postoperative complication.

“If nothing is being evaluated, monitored, [or] treated, increasing nursing care, or increasing the patient’s length of stay, I would not pick up that postop ileus as a secondary diagnosis even though it was documented by the physician,” Howard says.

Editor’s Note: This article first published on

To register for the July 12 program visit

ACDIS/AHIMA joint project seeks volunteers

ACDIS invites you to join a work group to research industry needs and develop best practices for physician/provider queries. The work group deliverables will serve to educate members and the industry on ongoing physician and provider query questions, and will be conducted in conjunction with the American Health Information Management Association (AHIMA).

Work group activity will be conducted via weekly one-hour conference calls. No face-to-face meetings are planned. The weekly calls will begin in July. To volunteer, send a resume, a brief email statement of your qualifications, and a short description of your experience with physician queries to ACDIS Director Brian Murphy at

Please note that the deadline for response is the close of business on June 15th.

Tales from the Classroom: Abandoning the CC/MCC emphasis

Look for complete documentation in the medical record, not just for diagnoses and conditions that improve DRG assignment and increase reimbursement.

As the lead instructor of HCPro’s CDI Boot Camp I have the opportunity to teach new and old (or, rather, experienced) CDI specialists in a live classroom setting. I primarily teach what we call our “open-reg” (that’s shop-talk for open registration) classes which are offered at various dates and locations around the country. However, I am also frequently asked to teach the CDI Boot Camp for a specific facility or a local group of hospitals or hospital system. (In shop-talk, we call this an “onsite” class.) Sometimes the students are all from one facility and other times three (or more) local hospitals band together to bring the CDI Boot Camp to their area.

For those of you who haven’t met me or heard me teach, I am a huge advocate for CDI specialists, whether they are coders, nurses, physicians or mid-level providers (nurse practitioners, physician assistants). I have had all of these types of students in class and I like to think that everyone who comes to my classes takes away at least one thing they can use to improve their program or do their job better.

As someone who has done the job (I was previously the CDI reviewer/manager for a 400-bed acute care facility) I think I am able to address the reality (and the frustrations) of working in the CDI role on a daily basis. And I should say that I see the CDI role as one which effects long-lasting changes in provider behavior and documentation patterns.

On CDI Talk recently Melissa Varnavas, the Assistant Director of ACDIS, posed the question: “What is on your wish list for 2012?” Some suggested they’d wish to change the opinions of those who view CDI as a revenue enhancement tool. The discussion there reminded me of an experience I once had during a Boot Camp. The individual who introduced me on the first day told the group that their facility was providing the classroom training because “[CDI staff] have to focus on getting the highest DRG, increase the CMI, and really hone in on getting those MCCs and CCs!”

Students in my classes won’t ever hear me tell them to query a provider solely to capture an MCC or CC. And Boot Camp attendees will never hear me tell them to query because one diagnosis results in a higher-paying DRG than another. Of course I teach the concepts of DRG assignment and the difference between an MCC and a CC—that is the world we live in. Medicare is not going to stop using the MS-DRG system just because we don’t like it.

I do not focus on queries for increased reimbursement because I know from experience that when CDI programs stop focusing on the almighty DRG and adjust their efforts to querying whenever greater specificity is required for accurate, specific code assignment, the case-mix index improves, facilities start to report complications accurately, quality measures look better, and, yes, programs also receive what they deserve under the IPPS.

In the above-mentioned scenario, that individual’s introduction did not deter me from teaching what I ethically believe are CDI program best practices. I think the students in this particular class, many of whom had been CDI specialists for several years, were relieved to hear me say: If you do the right thing for the right reason, you’ll do fine in the long run. I believe most of them knew the essence of this all along.

As a member of the ACDIS Advisory Board since its inception as well as the lead CDI Boot Camp instructor I am aware that I do not solely represent myself when I talk to students. I know that I must also present a positive image of the CDI profession and ACDIS and what we are all working so hard to achieve: A complete, accurate written representation of the care provided to patients in our facilities. Nothing more. Nothing less.

There may be people who don’t want to hear that message, but if I’m teaching your CDI team that’s what you’re going to hear.

A peck of PEPPER, Part 1

Peter Piper picked a peck, how about your CDI program? Have you taken a peek at your PEPPER?

My analytical side is always harassing me to get it more involved in what I do. So I decided to dig into our hospital’s PEPPERs. PEPPER is the Program for Evaluating Payment Patterns Electronic Report, issued quarterly. (Calling it a PEPPER report is like calling an ATM an ATM machine; it comes from the department of redundancy department.) The acronym ST-PEPPER stands for the short-term acute care hospital PEPPER.

Glenn Krauss previously provided a good overview of PEPPER both here on the ACDIS Blog and through his contributions to an article in the April 2011 edition of the CDI Journal. While I have been aware of PEPPER for some time, I did not have access to our reports until fairly recently. And, to be honest, PEPPER can be a little intimidating. You need to become familiar with what the reports tell you and then you have to be comfortable doing a little digging into your own facility data after you’ve reviewed the reports.

I really recommend that a new user of PEPPER become comfy cozy with the user guide provided by PEPPER Resources. You might discover that it’s not really that difficult to learn, and if you love to crunch numbers and analyze information the way I do, it’s almost fun.

If you don’t understand percents versus percentiles, now is the time to learn the difference. For areas that PEPPER identifies as potential audit risks, your report will give you a percentage based on a numerator/denominator. The numerator is always the targeted DRGs, and the denominator is a larger base of DRGs.

So if you are looking at the stroke/intracranial hemorrhage target area, the numerator is the number of cases in DRGs 61-66 (CVA/ICH with or without thrombolytics), and the denominator is DRGs 61-69 (all of the above, plus carotid disease and TIAs). Let’s say your hospital had 20 cases in DRGs 61-66; that would be the numerator. Your facility had 80 cases in DRGs 61-69, so 80 would be the denominator. The ratio would be 20/80, or 1/4, or 25%. That is your percent.

Now PEPPER takes your percent and compares it to the percents for hospitals in your state, in your Medicare Administrative Contractor/Fiscal Intermediary (MAC/FI) jurisdiction, and in the nation. The percentile is where your hospital falls among its peers. If your facility percent is higher than that of 75% of the hospitals in the nation, your national percentile is 75%. If your percent is lower than the percent held by 75% of the hospitals in the nation, your national percentile is 25%. If that’s still too confusing, the PEPPER Resources website offers tutorials.

PEPPER focuses on what it calls “outliers,” hospitals whose percentile is above 80 or below 20. Hospitals whose percentile is above 80 (remember, their percent is higher than that of 80% of hospitals in the group) are high outliers. Hospitals whose percentile is below 20 (their percent is lower than that of 80% of hospitals in the group) are low outliers. For many of the target areas, PEPPER recommends that high outliers review their charts for overcoding and that low outliers review for undercoding.
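The percent, percentile, and outlier logic above can be sketched in a few lines of code. The case counts and peer-hospital percents below are hypothetical, not taken from any real PEPPER:

```python
from bisect import bisect_left

# Hypothetical stroke/ICH target-area counts (illustration only)
target_cases = 20     # cases in DRGs 61-66 (numerator)
total_cases = 80      # cases in DRGs 61-69 (denominator)

percent = 100 * target_cases / total_cases   # 25.0

# Percent values for peer hospitals in the comparison group (hypothetical)
peer_percents = sorted([10.0, 18.0, 22.0, 24.0, 30.0, 35.0, 41.0, 55.0])

# Percentile: share of peers whose percent falls below ours
rank = bisect_left(peer_percents, percent)
percentile = 100 * rank / len(peer_percents)  # 4 of 8 below -> 50.0

# PEPPER flags outliers above the 80th or below the 20th percentile
if percentile > 80:
    flag = "high outlier (review for overcoding)"
elif percentile < 20:
    flag = "low outlier (review for undercoding)"
else:
    flag = "not an outlier"

print(f"percent={percent:.1f}%, percentile={percentile:.0f} -> {flag}")
```

Here the hospital's 25% sits in the middle of its peer group, so no outlier flag is raised; push the peer percents down (or the target count up) and the same logic produces a high-outlier flag.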

The Office of the Inspector General (OIG) is tasked with detecting and preventing fraud, waste, and abuse; improving economy and efficiency; and holding accountable those who do not meet requirements or who violate the law. Among the numerous focus areas in the 2012 OIG Work Plan (a document that outlines the OIG’s target areas for the coming year) is Medicare inpatient and outpatient payments—to be evaluated by reviewing the most risky and least risky hospitals, as determined by data analysis. It’s fascinating reading.

So when you get your PEPPER, which is an Excel spreadsheet, the first thing you should see is your “Compare” page. On the Compare page, you will get an overview of each target area, with numbers specific to your hospital, for the last reported quarter of data.

What I like to do is go through it and highlight all the targets in which my hospital is a high or low outlier for either national, jurisdictional, or state. (National is most important, followed by jurisdictional, then state.) Then, I scrutinize those target areas, pull cases in those target DRGs, and review them for coding accuracy and clinical documentation support.

But PEPPERs include 12 quarters’ worth of data, so evaluating trends is important. After the Compare page, there is a tab for each target area, followed by a tab with a line graph of the hospital’s percentiles for the previous 12 quarters, if data is available. So maybe this quarter we were just under the 80th percentile, but for the previous 11 quarters, we were above. Should I look at this target? Yes, I think so.

Don’t assume that being a high outlier for a given target automatically means there’s a problem. There may not be. You may trigger as a high outlier for stroke/intracranial hemorrhage (ICH) because you are a stroke center and receive a large number of referrals for stroke/ICH. You may trigger as a high outlier for 2-day stays in heart failure because your hospital aggressively follows core measures, and you’ve got a great relationship with the nursing home next door and the home health agency across the street–so you get your patients out quickly. Nevertheless, I would still do a random audit of cases in those DRGs to protect your facility against potential audit risks.

At the same time, I would also not assume that because your hospital is a low outlier for a target area, you are free and clear.

You want your medical records to reflect the most accurate severity of illness, intensity of services, and resources expended, and too many low outliers might mean you’re not capturing those variables effectively. Working with your report will give you an enhanced awareness of which areas are particularly vulnerable to scrutiny, so all the target areas should command your attention. Look at what processes are working well, and try to apply them across the board.

In my next entry, I will discuss some of the target areas and talk about strategies for using PEPPER to its best advantage. Happy hunting!

Collaboration: Coding and me

A Venn diagram helps illustrate where overlapping interests lie.

I realize that many of the faithful members of ACDIS are, indeed, coders, but most of us have a nursing background, so I’m going to give my two cents on the coder/CDI specialist relationship from a nursing perspective and hope that the coders among us will forgive me.

The first thing and the last thing that coders and nurses need to understand is that nobody knows everything.  If you remember a Venn diagram—yes, those big bubbles with the overlap in the middle that you learned in 7th grade math—and apply it here, we have the coding world, and we have the nursing world, and we have that great big space in the middle where we cross paths. Nevertheless, we also must bear in mind that there is space on the left and right where never the twain shall meet.

Both nurses and coders have studied anatomy and physiology, we all know medical terminology, and we all have some understanding of coding guidelines and principles.  That’s where we meet.

But coders have studied coding, and they can typically code 30-40 or more charts per day with staggering precision. The average nurse doesn’t spend time assigning CPT codes, or E-codes, or worrying about whether the femur fracture is of the head, the shaft, or the condyle part of the bone, the way coders do.

Likewise, the average coder has never been in the room with the hundreds or thousands of patients that the nurse has seen, has not personally observed or helped treat the signs and symptoms associated with the myriad medical conditions people can acquire, and does not have the in-depth knowledge of intricacies of medical management that nurses have.

When I first started as a CDI specialist, it took time for the coders to get used to me and what I could do for them—and to them.  Because my orientation was bare bones and my preceptor was literally in the next state, I had to learn by mistake. And boy, did I make mistakes.

I can’t tell you how long it took me to grasp that hypertension in a patient with chronic renal disease codes out differently than it does for hypertension in the general population. I’m still embarrassed to admit that I nagged a coder to take a vascular ulcer as a CC on a patient with peripheral vascular disease because I didn’t understand how to apply the combination code.

It took persistence and patience but eventually the coders realized that not only was I a fast learner, but that there were some things that I could teach them.  One coder was coding atrial fibrillation (AF) with rapid ventricular response (RVR) as ventricular tachycardia, which not only added CCs to the coding summaries, but drastically altered the dynamic of those charts.  As a former cardiac care unit (CCU) nurse, I knew that AF with RVR is absolutely not “v-tach.”  I argued my case, and even enlisted one of our electrophysiologists to help me explain the situation.

The electrophysiologist was able to verify that AF w/RVR is definitely not v-tach, and further emphasized that if v-tach were to be coded, it would completely change the treatment protocols he would have been expected to perform.  By pressing the issue, I might have lost our facility some CCs, but I think I saved us a lot of heartache in future audits.

I have tremendous respect for the work that coders do.  It pains me to see adversarial relationships between coders and nurses.  Everybody wants to be right, especially if their work is going to be graded negatively if they’re not officially right. But some nurses are just determined to prove that they know more than coders—and vice-versa.

I really miss the days when I could just call a coder for a consult on a complex case while the patient was still in-house, and when the coder could call me to ask my take on a confusing chart they were coding.

It may be difficult for more experienced coders to understand the need for a CDI program when they have been sending back-end queries for years without help. So those CDI specialists who do have a nursing background may be in a situation where they need to prove their value—not by fighting with coders but by sharing clinical expertise in a nonjudgmental manner.

We need to remember that everyone’s goal is an accurate, pristine chart, regardless of who gets credit.

I suppose there are some relationships that will always be sticky.  Let’s just make this one stick.