Not so very long ago, The Joint Commission and ASHE announced the creation of an information resource to assist with all those pesky EC/LS findings that have been reproducing like proverbial rabbits (here’s coverage of that announcement and coverage of those rapidly reproducing findings).
Well, since that announcement, the elves have been very busy cobbling together bits and pieces of this and that, with the end result being a rather interesting blend of stuff (please note that I did not employ the more severe descriptor—stuff and nonsense), with titles like “Is Your Hospital’s Air Ventilation System Putting Your Patients At Risk?” (this one’s in the Leadership module, so I guess they’re asking the question of organizational leadership). I truly hope that your response to that particular query would be “absolutely not,” but I’ve also been working this part of the street long enough to know (absolutely, if you will allow me a brief moment of hyperbole) that there are few absolutes when it comes to the management of the physical environment.
Which leads me to the follow-up thought: Recognizing that there is always the potential for the performance of air ventilation systems to drift a little out of expected ranges, at what point does the performance of air ventilation systems actually put patients at risk? And perhaps most importantly, have you identified those “points” in the performance “curve” that result in conditions that could legitimately cause harm to our patients? And please know that I understand (in perhaps a very basic sense, but I think I can call it an understanding) how properly designed and maintained HVAC systems contribute to the reduction of HAIs, etc. But with any fluid situation, there is an ebb and a flow to conditions, etc., that, again, may veer into the “red” zone from a compliance standpoint. But let me ask you—particularly those of you who have experienced out-of-range conditions/values—have those conditions resulted in a discernible impact on your infection control rates, especially those relating to surgical site infections?
BTW, I’m asking because I really don’t know what you folks are experiencing. And, for those of you who have identified shortcomings on the mechanical side of things, are your Infection Control folks keeping a close (or closer) eye on where those shortcomings might manifest themselves as a function of impact to patients? From the information posted in the Portal (I think I’m going to capitalize), remedying compliance issues in this regard is a simple four-step process (you can find the example of improved compliance there). Who knew it would be so easy? (I could have had a V8!) I don’t think anyone in the field is looking at this as a simple, or easy, task.
At any rate, despite the best efforts of the Portal, until we have buildings (and staff) that are a little closer to perfect, I think we’re going to continue to see a lot of EC/LS findings during survey. Ohboyohboyohboyohboyohboy!
Also, as I think about it, please be sure to check out the Clarifications and Expectations column in the September issue of Joint Commission Perspectives; there are some interesting points to be gleaned, the particulars of which we will cover in a wee bit, so watch this space!
When it comes to the Life Safety document review, there is still a fair amount of survey risk exposure and (no surprise) a fair number of survey findings. EC.02.03.05 is the sixth most frequently cited standard; 44% of the hospitals surveyed in the first six months of 2013 were cited under it!
Some of the findings have related to irregularities in the testing process. You have to make sure that your device inventory numbers match up; if you had 50 pull stations in last year’s testing documentation and 55 pull stations this year, you had better know why, and it had better be because you added some pull stations; otherwise it could get ugly. But one thing I’ve seen in a couple of recent surveys (both in TJC surveys and in my own practice) is trouble with the documentation of fire alarm and sprinkler system testing and maintenance that is performed by in-house staff. A typical example is the weekly fire pump test—pretty much everyone does that one in house. Now, there’s nothing wrong with that, but you have to make sure that the documentation of in-house activities is in compliance with EP 25, which requires the inclusion of the following information:
- Name of the activity
- Date of the activity
- Required frequency of the activity
- Name and contact information, including affiliation, of the person who performed the activity
- NFPA standard(s) referenced for the activity
- Results of the activity
In my experience, most folks have the first three pretty well in hand, but sometimes those last three get lost a bit in the shuffle. It might be worth a review of your in-house documentation to make sure you have all the required elements in place.
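For those of you tracking in-house testing electronically, a quick completeness check against those six elements might be sketched something like this (the field names are my own invention for illustration, not anything TJC prescribes):

```python
# Hypothetical completeness check for in-house testing documentation
# against the six elements required by EC.02.03.05, EP 25.
# Field names are illustrative only.
REQUIRED_FIELDS = [
    "activity_name",        # name of the activity
    "activity_date",        # date of the activity
    "required_frequency",   # e.g., "weekly" for the fire pump test
    "performed_by",         # name, contact info, and affiliation of the person
    "nfpa_reference",       # NFPA standard(s) referenced for the activity
    "results",              # results of the activity
]

def missing_elements(record):
    """Return the EP 25 elements absent or blank in a documentation record."""
    return [f for f in REQUIRED_FIELDS if not record.get(f)]

# Example: a weekly fire pump test record with the first three elements
# in hand, but the last three lost in the shuffle
pump_test = {
    "activity_name": "Weekly fire pump churn test",
    "activity_date": "2013-09-03",
    "required_frequency": "weekly",
}
print(missing_elements(pump_test))
# -> ['performed_by', 'nfpa_reference', 'results']
```

Nothing fancy, but run against a year’s worth of records it will tell you in short order whether those last three elements are consistently making it onto the paperwork.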
I could have sworn that I had covered this last year, but I can find no indication that I ever got past the title of this little piece of detritus, so I guess better late than never.
One of the more interestingly painful survey findings that I’ve come across hinges on the use of a household item that previously had caused little angst in survey circles—I speak of the mighty tissue paper! Any number of survey dings have resulted from tissue paper either being blown or sucked in the wrong direction, based on whether a space is supposed to be positive or negative. And this lovely little finding has generated a fair amount of survey distress, as it usually (I can’t say always, but I don’t know of this coming up in a survey in which the following did not occur) drives a follow-up visit from CMS as a Condition-level finding under Physical Environment/Infection Control.
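For what it’s worth, the basic check in question—does the measured pressure differential agree with the design intent for the space—can be sketched in a few lines (the room names, design intents, and readings below are all invented for illustration):

```python
# Hypothetical comparison of measured room pressure differentials
# (inches of water column) against design intent. All data invented.
rooms = {
    "OR-3":        {"design": "positive", "reading_in_wc": +0.012},
    "AIIR-201":    {"design": "negative", "reading_in_wc": -0.015},
    "Soiled-Util": {"design": "negative", "reading_in_wc": +0.004},  # blowing the wrong way
}

def pressure_mismatches(rooms):
    """Return rooms whose measured differential contradicts design intent."""
    bad = []
    for name, data in rooms.items():
        measured = "positive" if data["reading_in_wc"] > 0 else "negative"
        if measured != data["design"]:
            bad.append(name)
    return bad

print(pressure_mismatches(rooms))
# -> ['Soiled-Util']
```

The tissue paper gives you the same pass/fail answer, of course; the point of logging actual readings is that you can see a room drifting toward zero before it flips the wrong way.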
The primary “requirements” in this regard reside under A-Tag 0726 and can be found below. Now, I’m thinking that tissue paper might not be the most efficacious measure of pressure relationships, which (sort of—give me a little leeway here) begs the question of whether you should be prepared to “smoke” the doorway/window/etc. for which the tissue paper might not be sensitive enough to the subtleties of the pressures involved. I think it’s a reasonable thing to plan for, not least because there can be a whole lot at stake. So, I’ll ask you to review the materials below and be prepared to discuss…
(Rev. 37, Issued: 10-17-08; Effective/Implementation Date: 10-17-08)
§482.41(c)(4) – There must be proper ventilation, light, and temperature controls in pharmaceutical, food preparation, and other appropriate areas.
Interpretive Guidelines §482.41(c)(4)
There must be proper ventilation in at least the following areas:
• Areas using ethylene oxide, nitrous oxide, glutaraldehydes, xylene, pentamidine, or other potentially hazardous substances;
• Locations where oxygen is transferred from one container to another;
• Isolation rooms and reverse isolation rooms (both must be in compliance with Federal and State laws, regulations, and guidelines such as OSHA, CDC, NIH, etc.);
• Pharmaceutical preparation areas (hoods, cabinets, etc.); and
• Laboratory locations.
There must be adequate lighting in all the patient care areas, and food and medication preparation areas.
Temperature, humidity and airflow in the operating rooms must be maintained within acceptable standards to inhibit bacterial growth and prevent infection, and promote patient comfort. Excessive humidity in the operating room is conducive to bacterial growth and compromises the integrity of wrapped sterile instruments and supplies. Each operating room should have separate temperature control. Acceptable standards such as from the Association of Operating Room Nurses (AORN) or the American Institute of Architects (AIA) should be incorporated into hospital policy.
The hospital must ensure that an appropriate number of refrigerators and/or heating devices are provided and ensure that food and pharmaceuticals are stored properly and in accordance with nationally accepted guidelines (food) and manufacturer’s recommendations (pharmaceuticals).
Survey Procedures §482.41(c)(4)
• Verify that all food and medication preparation areas are well lighted.
• Verify that the hospital is in compliance with ventilation requirements for patients with contagious airborne diseases, such as tuberculosis, patients receiving treatments with hazardous chemicals, surgical areas, and other areas where hazardous materials are stored.
• Verify that food products are stored under appropriate conditions (e.g., time, temperature, packaging, location) based on a nationally-accepted source such as the United States Department of Agriculture, the Food and Drug Administration, or other nationally-recognized standard.
• Verify that pharmaceuticals are stored at temperatures recommended by the product manufacturer.
• Verify that each operating room has temperature and humidity control mechanisms.
• Review temperature and humidity tracking log(s) to ensure that appropriate temperature and humidity levels are maintained.
Kind of vague, yes indeedy do! Purposefully vague—all in the eye of the beholder. Lots of verification and ensuring work, if you ask me, but this should give you a sense of some of the things on which you might consider focusing a little extra attention.
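On that last survey procedure—reviewing the temperature and humidity tracking logs—a simple screen for out-of-range entries could look something like the sketch below. The limits here are commonly cited ballpark figures, not anything pulled from the Interpretive Guidelines; use whatever ranges your policy actually incorporates (AORN, AIA, etc.):

```python
# Hypothetical screen of an OR temperature/humidity log against the
# ranges adopted in hospital policy. The limits are illustrative
# placeholders, not regulatory values.
TEMP_RANGE_F = (68.0, 75.0)   # assumed policy range, degrees Fahrenheit
RH_RANGE_PCT = (20.0, 60.0)   # assumed policy range, percent relative humidity

def out_of_range(log_entries):
    """Return (timestamp, reason) pairs for readings outside the policy ranges."""
    flags = []
    for ts, temp_f, rh_pct in log_entries:
        if not TEMP_RANGE_F[0] <= temp_f <= TEMP_RANGE_F[1]:
            flags.append((ts, f"temperature {temp_f} F"))
        if not RH_RANGE_PCT[0] <= rh_pct <= RH_RANGE_PCT[1]:
            flags.append((ts, f"humidity {rh_pct}%"))
    return flags

log = [
    ("2013-09-01 07:00", 70.5, 45.0),   # in range
    ("2013-09-01 15:00", 66.0, 45.0),   # temperature low
    ("2013-09-02 07:00", 71.0, 64.0),   # humidity high
]
print(out_of_range(log))
```

The surveyors are going to run their eyeballs down those same logs; better that you find the excursions (and can point to what you did about them) first.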
Last week, we started the discussion regarding findings relative to the inspection, testing, and maintenance of medical gas systems, which reminds me that I kind of skirted exactly how those findings were manifesting themselves.
The most common variant involves organizations that have established a less-frequent-than-annual schedule for the med gas system components, particularly the outlets (as they are usually the most numerous of the system components). Folks are doing half or a third or a quarter of their outlets on an annual basis, but they have not specifically identified that time frame in the Utility Systems Management Plan (USMP). So feel free to give your USMP a quick check: make sure you’ve defined the time frame(s) for the med gas system components and that your practice accurately reflects what is in the management plan (a mismatch between plan and practice being the other most common way this standard generates findings). Identify the time frame for the testing, etc., and make sure that what the management plan says accurately reflects the process (I know there’s a certain inescapable logic to this, but I’ve seen folks get tagged for it, so please just take a moment to make sure…).
How do we determine those time frames? Well, once again we can ping back through to EC.02.05.01, this time stopping at EP 4, which requires the identification (in writing—but of course) of inspection, testing, and maintenance intervals “based on criteria such as manufacturers’ recommendations, risk levels, or hospital experience.” I think that pretty much captures the gamut of possible criteria, but I’ll throw the question out to the studio audience: Anyone using anything other than those criteria? If so, please share. This would be required for all the utility systems on the inventory, so the next question becomes: What’s on your inventory and how did you populate that inventory?
Jumping back a wee bit further to EC.02.05.01, EP 2, it appears that we can either maintain an inventory that contains all operating components of utility systems or establish the inventory based on “risks for infection, occupant needs, and systems critical to patient care (including all life-support systems).” Now, I’m not at all certain what you folks might be doing individually (I suspect it will have at least something to do with the complexity of your systems and their component elements), but I’m going to guess we have a mix of both strategies of inventory creation. So the task then becomes one of fitting the medical gas system, in total or in pieces, into that decision, then considering the criteria noted under EP 4 to wrap things up in the form of a lovely little risk assessment. Then update the USMP to reflect whatever it is you’ve determined and you should be good to go.
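If you’re on one of those fractional schedules—say, a quarter of the outlets each year on a four-year cycle—the rotation math is simple enough to sketch (the outlet IDs and the four-year interval here are invented; the point is that the cycle you pick has to be the cycle written into the USMP):

```python
# Hypothetical rotation: test one quarter of the med gas outlets each
# year so every outlet gets tested within the cycle defined in the USMP.
# Outlet IDs and the four-year cycle are illustrative only.
def outlets_due(outlet_ids, year, cycle_years=4):
    """Split the inventory into cycle_years groups; return this year's group."""
    group = year % cycle_years
    return [oid for i, oid in enumerate(sorted(outlet_ids))
            if i % cycle_years == group]

outlets = [f"MG-{n:03d}" for n in range(1, 13)]   # 12 outlets: MG-001..MG-012
for yr in (2013, 2014, 2015, 2016):
    print(yr, outlets_due(outlets, yr))
```

Every outlet lands in exactly one group, so over the four years the whole inventory gets covered—and, just as importantly, you can show a surveyor exactly which outlets were due in any given year.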
A word of caution/advice: Once you’ve done the risk assessment, picked the maintenance strategy, determined the frequency, and updated the USMP, please remember that it is always a wise move to periodically evaluate the decisions you’ve made relative to, well, basically anything in your USMP/inventory. And a fine spot to do that (if you prefer to call it an opportunity, you’ll get no grief from me) is the annual evaluation process. It comes down to a simple question: Have the maintenance strategies, frequencies, and activities provided reliable performance in support of patient care activities? And while the answer is also pretty simple (yes or no, maybe with a periodic instance of “don’t know” thrown in for good measure), it might be useful to develop a measurement that will tell you when the process is not working well. It could be something like “unscheduled disruptions resulting from preventable conditions” (which might indicate you need to increase your frequencies) or delays in care and/or treatment as a result of unscheduled disruptions (I am a very big fan of EC measurements that tie performance in the care environment back to the bedside—powerful stuff), things like that.
We always want to try and base our risk decisions on data, but sometimes you have to pick a course based on that rapidly vanishing commodity—common sense. When that occurs, I’d want to have some means of “telling” whether the decision was a good one, fold that into (or through) the annual evaluation process, and then move on to the next challenge (and there will surely be another challenge…any minute now). Hope you found this discussion helpful. I will again solicit any feedback that might be percolating out there—I love to know what you all are doing with this stuff, and so does the rest of the class.
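A measurement like the “unscheduled disruptions” example above could be as simple as a rate per reporting period—something along these lines (the event data and the preventable/not-preventable categories are made up for illustration):

```python
# Hypothetical tally of unscheduled utility-system disruptions by quarter,
# separating those judged preventable (a rising preventable fraction being
# a possible signal that PM frequencies need to increase). Data invented.
from collections import Counter

events = [
    ("2013-Q1", "preventable"), ("2013-Q1", "not_preventable"),
    ("2013-Q2", "preventable"), ("2013-Q2", "preventable"),
    ("2013-Q3", "not_preventable"),
]

def preventable_rate(events):
    """Fraction of unscheduled disruptions judged preventable, per period."""
    totals, preventable = Counter(), Counter()
    for period, category in events:
        totals[period] += 1
        if category == "preventable":
            preventable[period] += 1
    return {p: preventable[p] / totals[p] for p in sorted(totals)}

print(preventable_rate(events))
# -> {'2013-Q1': 0.5, '2013-Q2': 1.0, '2013-Q3': 0.0}
```

Pair that with a count of care delays attributable to those same disruptions and you have a measurement that ties the mechanical room back to the bedside, which is exactly the kind of thing the annual evaluation ought to be chewing on.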
Now there may be some folks out there who are thinking that there are certain topics to which I have administered beatings akin to the deceased equine, but sometimes there are other folks who appear to share at least some of my “wacky” perspectives on how to manage safety in the healthcare environment.
So, I encourage you to contact the individual in your organization responsible for coordinating Joint Commission accreditation and ask them to share with you the February 2013 issue of Joint Commission Perspectives. If you turn to p. 9, you will find the latest column penned by George Mills, entitled “Safety Champions—Making Health Care Safety Everyone’s Business.” And to this, I say hallelujah! Those of you who’ve been with me since we started this little space (it’s been years and years, I tell you, years and years) will recognize this as a common theme (I think I’ve twisted it every which way over time, but you should recognize the basic form), and still one that I believe holds a key to compliance success (I refrain from referring to it as “the” key, because the education “key” is pretty gosh-darn important as well).
And, interestingly enough, Mr. Mills’ column in the March 2013 issue of Perspectives focused on, wait for it…
Can I get an A(ssess)MEN(ts)! Stay tuned: You know I’ll have something to add to that conversation…
I know we’ve (at least sort of) talked about this before (for those of you who might need some thought refreshment on that count, or if you’re new to the conversation, see this previous post), but there are still some findings being generated during Joint Commission surveys this year, so I figured it might be worthwhile to revisit one aspect of the whole nuclear medicine security issue.
Let me preface things by noting that I don’t believe there have been many instances (a number that approaches zero) in which nuclear medicine deliveries to hospitals have been diverted or otherwise redirected for nefarious purposes. That said, there are certain provisions in the regulations regarding radiation safety and control programs in healthcare that require couriers delivering nuclear materials to your hot lab (presuming you have one) to be escorted while they are in the hot lab. If you are interested in finding out what the deal might be for you, the first point to keep in mind is that some states (a handful or so) administer their radiation control programs in accordance with the Nuclear Regulatory Commission (NRC) statutes, which do require the escort into the hot lab. But (and isn’t there always a “but”?) there are a great many other states that have an “agreement” with the NRC that allows them to pretty much make their own way in this regard (to see where your state figures into the equation, this would be a good place to start).
Now the good survey folks from our friends at TJC know about the requirement for escorting the couriers, but they are not necessarily conversant with the requirements for the agreement states—and some of the agreement states do not specifically require the escorting of the couriers into the hot lab. So you need to know (yes, another in the long list of things you need to “know”) what the requirements are in your state, so if it does come up in survey (and it is coming up with increasing frequency), you will know where you stand from a compliance standpoint. As a further thought on this coming up as a survey finding, I suspect that you would need to be prepared to show the surveyor(s) the regulatory evidence that you don’t have to do the escort thing, and, if that is not sufficient evidence in the moment (and we’ll discuss how that might happen in a moment), then you will probably need to make full use of the post-survey clarification process.
Now, the reason I suspect that the state regs might not be enough revolves around the general concept of best practices, etc., which are becoming increasingly similar to actual regulations (or so it seems—it might just be my overactive imagination. I think not, but I’m prepared to admit that there is a possibility). To that end, I suggest (and if you’ve been paying any attention over the years I’ve been scribbling this blog, you probably have a good idea where I’m going now—and I certainly wouldn’t want to disappoint) that you conduct a (ta-da!) risk assessment to demonstrate that the levels of security in place are of sufficient robust-ity (I know that’s not a real word, but shouldn’t it oughta be?) that an unescorted courier results in minimal, if any, risk to your organization.
As I look back at this little screed, I’m glad that I did not promise (or otherwise imply) that I was going to be brief. At any rate, make sure you understand the security requirements in your state and make sure that you are poised and ready to educate any surveyors (real or imagined) that might push on your process.
There’s been an interesting development on the survey front this year; it may be merely a blip on the compliance radar screen (I know of two instances in which this happened for sure, but if you folks know of more, please share), but if it signals a sea change in how The Joint Commission is administering surveys, you’d best have your ducks in a row.
So, I’ve heard tell of two instances in which the survey team arrived at an organization with the results of the previous triennial survey clutched in their paws, with the intent being to validate that the actions submitted as part of the Evidence of Standards Compliance (ESC) process did indeed remedy the cited deficiency. Now I think we can agree that the degree to which we can fix something and keep it fixed over the course of 36 or so months can be a bit of a, how shall we say, crap shoot. As we’ve noted in one fashion or another, lo these many years, fixing is easy—keeping it fixed is way difficult.
And so, dear friends, those of you in the survey bucket for 2013 should dig out those survey results from last time, review the ESC submittal, and make sure that what was accepted by TJC as a means of demonstrating compliance with the standards is indeed the condition/practice that is in place now. And the reason this is so very, very important, just to complete the thought, is that there is a pesky little standard in the APR chapter of your beloved Accreditation Manual (APR stands for Accreditation Participation Requirements, and the standard in question is APR.01.02.01) that requires that “(t)he hospital provides accurate information throughout the accreditation process.” So if a surveyor gets to thinking that there may have been some less-than-forthcoming aspect of your action plans, etc., you could be looking at a Preliminary Denial of Accreditation, a most unpleasant state of affairs, I assure you. So let’s give those “old” findings at least one more ride around the track and make sure that we’ve dotted all the “i’s” and crossed all the “t’s.”
In case you’ve not heard (I don’t see as much info on the various list servs I monitor when it comes to the timing of the unannounced survey process), there have been some instances this year when unannounced Joint Commission surveys have been occurring months earlier than anticipated (nobody has gone outside of their official “window,” which opens 18 months prior to the anniversary date of your last triennial survey).
Certainly during 2012 there were some folks who went six weeks or so early, but now we’re talking four or five months early. There was some indication that the incredibly reliable nasty weather in the Midwest and Northeast over the last little while has resulted in some schedule juggling by the folks in Chicago (and doing as much traveling as I do, I can well understand the impact of stinky weather). As has become an increasingly familiar mantra, you can’t predict future survey activities/results based on past experiences, but I figured it might be worth sharing the possibility. You all are supremely prepared for your survey, I’m sure, but forewarned is forearmed.
One other survey process wrinkle of some note is the tale of an organization that was anticipating a five-day onsite survey and ended up having a four-day survey—with additional surveyors on the team to compress the five days of activities into four days. So, for those of you with five-day surveys who have blocked off Mondays in hopes of maybe blocking out the entire week, there may be a little surprise in store. This has only happened once that I know of, but if anyone out there has a story to share on that front, I’m sure we would all be very interested to hear.
At any rate, as I type this, I am looking out at a very gray day at the airport in Chicago with a forecast of snow. I guess we’re not quite a week into spring, so this must just be a period of transition. Hopefully we transition pretty darn quickly. I do wonder “where those birdies is” (with apologies to Mr. Nash)…
If you’re reading this, then in all likelihood you’re a regular subscriber to this august publication (august in February—what kind of crazy talk is that? But I digress). In which case, I’m sure you read with some interest the article a couple of weeks ago in which one Mr. George Mills (of the Joint Commission Mills) called out facilities professionals for something akin to dereliction of duty (okay, that might be a wee bit hyperbolic, but this topic, and Mr. Mills’ stance on said topic, are as serious as all get-out), based on the continued frequency of findings in the EC/LS part of the survey process.
At any rate, back in October 2012, Mr. Mills addressed a group of facility managers during a webinar sponsored by ASHE. During the webinar, there was much discussion of the persistence of EC/LS findings during surveys, including attribution of many of those findings to what was characterized as a “lack of management.” I think we can agree that, as characterizations go, that is a very strongly worded characterization indeed. So what types of things are resulting in this level of unhappiness in Chicago? Stay tuned and we’ll find out. (By the way, be prepared not to be surprised about much, or even any, of the sticking points during surveys. If you’ve been following this space for any period of time, you are already intimately familiar with the foibles and follies of the modern-day survey process.)