
Now bring me some sticky findings—and bring them right here!

Another hodgepodge of stuff this week. I suppose with the holidays, I could be lazy and break these up into smaller chunks of bloggy goodness—maybe next week, but first some ponderings.

I was penning some thoughts relative to the current state of accreditation and a common theme kept reasserting itself: the recent changes are going to do absolutely nothing to help decrease the number of findings in the physical environment and, in fact, are much more likely to increase the number (and probably types) of findings experienced during regulatory inspections. Now, I suppose this is rather an extension of the alignment with CMS (I mean, who ever received a visit from those folks and escaped completely unscathed? Perhaps some, but not a whole bunch, I’d venture to guess) and how that philosophy (scorched earth seems like a particularly apropos descriptor—at least at the moment) aligns with the idea/sense/concept that perfection is a noble goal, but not particularly obtainable on this particular space-spinning blue sphere. I’ve said it before, I’ll (no doubt) say it again: they are going to find “stuff” when they visit you—they have to! But that brings me back to the age-old question of what value this level of attention to minutiae brings to the process. I don’t think there’s anyone among us who believes that we have achieved a level of perfection of heretofore untold proportions—which has a lot to do with why we have to show up at work every day, n’est-ce pas? There has got to be a better way to facilitate improvement in the management of the healthcare environment without brandishing the regulatory equivalent of a sharp stick (if not a cattle prod). So, as we wind down the 2016 season, those one-off OFIs have now been converted into a cluster of regulatory middle fingers—ouch! Okay, hopping down from the soapbox.

In the December 2016 issue of Perspectives, there is a fair discussion of how the Interim Life Safety Measures (ILSM) process is going to be utilized (perhaps even evaluated) during the survey process. In the October Perspectives, there was coverage of how a completed project (that involved ILSM implementation) would be reviewed to evaluate the effectiveness of the ILSM process. There was also discussion indicating that construction-related deficiencies would not be cited as specific RFIs but rather as a function of the ILSM performance elements. But the December publication offers yet another nuance to the process—when you have a Life Safety Code® deficiency identified during survey, there will be a resultant “discussion” of the deficiency and an inquiry as to which ILSM will be implemented to protect building occupants until such time as the deficiency (or deficiencies) is corrected. I think the important thing to keep in mind here is that the requirement is to implement your ILSM policy, which would then provide criteria for determining which, if any, of the ILSMs would be implemented. I also think that now would be a really good time to dust off your ILSM policy and run it through a couple of test deficiencies to ensure that your policy supports a reasonable approach to ILSM implementation. Finally (on this subject), in the days when clarification of findings was a worthwhile endeavor, it never “paid” to fix stuff during the survey (fixing a condition was tantamount to admitting that you had messed up), but now that everything gets cited, the simplest ILSM to implement is “none at all because we fixed the condition.” Can somebody give me a “that’s a pain in the posterior”? Amen!

As a final thought (or perhaps thoughts) for the week, I think we have to treat any construction or renovation activities as an invasive procedure, so we need to come up with a process akin to the Universal Protocol adopted by the folks in surgery to make sure that everyone is on the same page before the activity starts (and that especially includes contractor staff—I am absolutely convinced that we could do a better job with that process). As an offshoot of this, I think it might be time to adopt a process for periodically evaluating the construction/renovation management process, much as we evaluate the 6+1 EC/EM functions. I can’t think of a single “normal” process that has more potential for disruption, angst, chaos—you name it—than the construction and renovation process. Some folks are fortunate enough to have in-house resources for the management of these activities, but even then there can be opportunities for improvement; the communications process springs to mind as being frequently flawed.

Until next time, I bid you as much holiday cheer as you can tolerate!

Can we count painful survey findings and new requirements as blessings?

First off, please accept my bestest wishes to you and yours for a most joyous and restful (or as restful as you want it to be) Thanksgiving holiday.

To paraphrase a certain musical ensemble, what a long, strange compliance year it’s been. Hopefully, 2016 will head off into the realm of history with a whimper (I think we’ve experienced enough “bangs” to take us well into 2017 and beyond). And so, a little casserole of safety stuff to tide you over ’til next week. First up, some risk assessment deliciousness, courtesy of NFPA 99.

I had intended to discuss this a few weeks back, but there has been a lot to discuss these past few weeks. At any rate, I was able to get a look at the CMS update portion of the Executive Briefings presentation and it appears that there was some discussion relating to the practical application of how a space is used to determine the risk category for the equipment and/or systems used to support that space. My sense of this is that it’s not so much the space itself, but rather what processes, etc., exist within the space you are evaluating, using the definitions from NFPA 99. So, the methodology focuses on an analysis of facility systems and equipment based on the risks associated with failures of those systems:

  • Category 1—Facility systems in which failure of such equipment or system is likely to cause major injury or death of patients or caregivers
  • Category 2—Facility systems in which failure of such equipment or system is likely to cause minor injury to patients or caregivers
  • Category 3—Facility systems in which failure of such equipment is not likely to cause injury to patients or caregivers
  • Category 4—Facility systems in which failure of such equipment would have no impact on patient care

So, moving to the definitions in NFPA 99, you sort the above concepts based on how the space is used:

  • Facility systems and equipment for critical care rooms would be Category 1
  • Facility systems and equipment for general care rooms would be Category 2
  • Facility systems and equipment for basic care rooms would be Category 3
  • Facility systems and equipment for support rooms would be Category 4

Each of the chapters in NFPA 99 (gas and vacuum systems, gas equipment, electrical systems, HVAC, etc.) has provisions for the different categories, as applicable, so it appears that the expectation (at least as it was presented at Exec Briefings) is that the organization of the facilities systems and equipment would reflect this methodology. To be honest, I think this may be more of an issue of re-packaging how things are equipped and maintained; maybe by including the category designation on work orders, etc. I don’t know that this is going to extend to TJC’s activities, though with the bad marks it received on its CMS report card, it seems unlikely that TJC will become more reasonable…time, as they say, will tell.
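Purely by way of illustration (nothing here comes from TJC or CMS), a minimal sketch of how that room-use-to-category sorting might be captured so the designation can ride along on work orders could look like this; the room inventory, helper function, and sample work order are hypothetical.

```python
# Hypothetical sketch: sorting spaces into NFPA 99 (2012) risk categories based on
# how the space is used, per the methodology described above. The room inventory,
# helper name, and sample work order are made up for illustration.

ROOM_USE_TO_CATEGORY = {
    "critical care": 1,  # failure likely to cause major injury or death
    "general care": 2,   # failure likely to cause minor injury
    "basic care": 3,     # failure not likely to cause injury
    "support": 4,        # failure would have no impact on patient care
}

def category_for_room(room_use: str) -> int:
    """Return the risk category for a given room-use designation."""
    key = room_use.strip().lower()
    if key not in ROOM_USE_TO_CATEGORY:
        raise ValueError(f"Unknown room use {room_use!r}; assess it before assigning a category")
    return ROOM_USE_TO_CATEGORY[key]

# Example: tag a work order with the category so maintenance paperwork reflects the methodology.
work_order = {
    "location": "ICU bed 7",          # hypothetical location
    "system": "medical gas outlet",   # hypothetical system
    "risk_category": category_for_room("critical care"),
}
print(work_order)
```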

Another potential complication for survey year 2017 (I’m pretty confident of this, but not yet certain about the timeline for implementation) is a broadening of the Evidence of Standards Compliance (ESC) process to include at least two more considerations. At the moment, the ESC process requires a response to the following categories: Who (is responsible for the correction); What (was done to correct the deficiency); When (the corrective action was completed); How (the corrective action was implemented and will be sustained), and Measure of Success (for those pesky “C” performance elements—to which we will bid a hearty “adieu” on January 1, 2017). I think we’re all pretty familiar with that part of the process (I can’t imagine that anyone’s had a survey with no findings in the physical environment, though I suppose the infamous “bell” curve might dictate otherwise), but there is indication that with the removal of the Measure of Success category, we will have two additional elements to document within the framework/context of the corrective action: Leadership Involvement and Preventive Analysis. At the moment, it appears that the sequence will look something like this:

  • Who:
  • Leadership Involvement:
  • What:
  • When:
  • How:
  • Preventive Analysis:

I think being able to account for leadership involvement is a pretty straightforward response (probably the best way to frame this would be to identify the boss of whoever the “who” would be; and perhaps that boss’ boss, depending on the circumstance), but I suspect that the Preventive Analysis portion of the response could get quite complicated. As near as I can tell, it would be an amalgam of the root cause that resulted in the finding and the strategy for preventing future deficiencies, although minimizing the risk of recurrence might be a more useful viewpoint—as I like to tell folks, it’s the easiest thing in the world to fix something and among the most difficult things to keep that something fixed. Hopefully, this will end up being no more than a little more water under the bridge, but I guess as long as findings in the physical environment remain a focus, the sustainment of corrective actions will be part of the conversation.
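To make the sequence above a bit more concrete, here is a minimal sketch of what an ESC response might look like if you tracked it as structured data; the field names follow the categories listed above, while the record layout and sample values are strictly hypothetical.

```python
# Hypothetical sketch of an ESC corrective-action record using the categories above;
# the dataclass and the sample values are illustrative only.
from dataclasses import dataclass

@dataclass
class ESCResponse:
    who: str                     # person responsible for the correction
    leadership_involvement: str  # the "who's" boss (and perhaps that boss's boss)
    what: str                    # what was done to correct the deficiency
    when: str                    # when the corrective action was completed
    how: str                     # how the correction was implemented and will be sustained
    preventive_analysis: str     # root cause plus the strategy for minimizing recurrence

example = ESCResponse(
    who="Facilities manager",
    leadership_involvement="Director of facilities; VP of support services",
    what="Replaced missing escutcheon on sprinkler head in soiled utility room",
    when="2017-01-15",
    how="Escutcheon checks added to monthly EOC rounding checklist",
    preventive_analysis="Escutcheons dislodged during above-ceiling work; above-ceiling permits now include a closing inspection",
)
print(example)
```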

And on that note, I bid you a Thanksgiving to eclipse all yet experienced: gobble, gobble!

The song changes and yet remains the same…

There was a time when The Joint Commission actually seemed to be encouraging folks to fully engage with the clarification process in all its bountiful goodness. And I certainly hope that folks have been using that process to ensure that they don’t (or didn’t) have to “fix” processes, etc., that might not have been absolutely perfect in execution, but were not, by any stretch of the imagination, broken. But now, it appears that the bounty is going to be somewhat less bountiful as TJC has announced changes to the process, effective January 1, 2017. Please forgive my conspiracy theorist take on this, but it does seem that the new order in the accreditation world lends itself to survey reports with an increasing number of findings, rather than a reduction—and I am shocked! Okay, perhaps “shocked” is a tad hyperbolic. BTW, in a new Advocacy Alert to members, it appears that ASHE has come to the same conclusion, so it’s not just me…hoorah!

And so, the changes:


  • Any required documents that are not available at the time of survey will no longer be eligible for the clarification process (basically, the vendor ate my homework). It is important for everyone to have a very clear understanding of what TJC means by “required documents”—there is a list on your organization’s Joint Commission extranet site. My advice, if you have not already done so, is to immediately coordinate the download of that list with your organization’s survey coordinator (or whoever holds the keys to accessing that information—it may even be you!) and start formulating a process for making sure that those documents are maintained in as current a fashion as possible. And make sure your vendors are very, very clear on how much time they have to provide you with the documentation, as well as letting you know ASAP whether you have any deficiencies/discrepancies to manage—that 60-day correction window can close awfully quickly!
  • While I never really liked to employ this strategy, there were times when you could use clerical errors in the survey document to have things removed from the survey report. Areas that were misidentified on the report (non-existent to your facility; not apropos to the cited finding, for example, identification of a rated door or wall where there is none, etc.) or perhaps the location of the finding was so vague as to be impossible to identify—these have all been used successfully, but (apparently) no more. Now whether this means that there will be more in-depth discussions with the survey team as they prepare the report is unknown at this time, but even if one slips by (and I can tell you, the survey reports in general are much more exact—and exacting—in their description of the deficiencies and their locations), it won’t be enough to remove it from the report (though it could make your ESC submittal a bit more challenging if you can’t tell what it is or where it is).
  • The other piece of this is, with the removal of “C” Elements of Performance, you can no longer go the audit route to demonstrate that you were in substantial compliance at the time of survey. So now, effectively, everything is being measured against “perfection” (son of a…); miss one month’s check on a single fire extinguisher and—boom—finding! One rated door that doesn’t latch? Boom—finding! One sprinkler head with dust or a missing escutcheon? Boom—finding! And, as we touched on last week, it’s not just your primary location (aka, “the hospital”) that’s in play—you have got to be able to account for all those pesky little care sites, even the ones for which you are not specifically providing services. Say, for example, the landlord at one of your off-sites is responsible for doing the fire extinguisher checks; if something is missed (and hey, what’s the likelihood of that happening…), then you are vulnerable to a finding. So, unless you are prepared to be absolutely, positively perfect, you’d best be making sure that your organization’s leadership understands that the new survey reality is not likely to be very pretty.

I would like nothing better than to tell you that with the leadership change in Washington there will be a loosening of the regulatory death grip that is today’s reality, but somehow I don’t think that’s gonna happen…

Pauline’s Preposterously Perilous Permutations

Or, for the less aged folks, we could use Penelope Pitstop’s Preposterously Perilous Permutations…

I’ve recently had the opportunity to review some fourth quarter (2016) Joint Commission survey reports and I have to tell you that I’m not seeing any indication of the rosiest of futures when it comes to the physical environment. (I keep trying to convince myself that it is merely because of my perspective that things seem to be weighted so heavily in the direction of the physical environment—it is, after all, my “beat.”) That being said, there does seem to be a trend in “where” the findings are being found, so to speak. And that, my friends, is in the outpatient setting, particularly physician office practices.

The story kind of starts with the “reveal” of TJC’s prepublication of the 2017 EC and LS chapters. I suspect that we will continue to discuss the various and sundry permutations of peril that will befall us as we move through the process, but this week I wanted to focus on a corner of the Life Safety chapter that doesn’t necessarily get a lot of attention: the Ambulatory Health Care Occupancy standards and performance elements.

Contained within the Ambulatory Health Care Occupancy section are some notes, one of which appears to be very much business as usual when it comes to determining what rules in and what rules out for ambulatory surgery services, and so we have something to the effect that the ambulatory-related standards apply to care locations where four or more patients (at the same time) are provided either anesthesia or outpatient services that render those patients incapable of saving themselves in an emergency (I’m paraphrasing a bit here—our friends in Chicago are very attentive to verbatim quotes of their content—you’d think that the Cubs win might put them in a better frame of mind, but perhaps that’s too much to ask. Maybe they’re sore winners…).

So, we got that one, yes? Pretty straightforward, very much in keeping with how we’ve been managing our outpatient environments, etc.

But then we move on to the second note, and the slope gets a bunch more slippery (and again, I paraphrase): if you use TJC accreditation for deemed status purposes, the ambulatory LS standards apply to outpatient surgical departments in hospitals—regardless of how many patients are rendered incapable (so that’s one patient all the way up to however many patients you can render incapable of self-preservation…ouch!). Now, I guess we could have some fairly lengthy discussion about exactly what constitutes “outpatient surgical departments in hospitals.” Does that mean physically within the four walls of the hospital? Does it mean operated under the hospital’s license or CMS Certification Number (CCN)? At the moment, I’m tending to lean towards the latter, just because it would be so much more messy. It will be interesting to see how this whole thing rolls out into survey reality; it is entirely possible that folks are already having these discussions with their TJC account reps as planning for the 2017 survey season begins in earnest. If anyone has some indication of how, for instance, office-based surgical procedures are being accounted for in the process, I’d be curious to hear it. Can you imagine having an LS surveyor heading out to all those physician offices in which surgical procedures are occurring? It’s about half past Halloween, but that’s a pretty scary thought. Sooooo, you might want to start evaluating your offsite locations for compliance with the LS.03.01.XX standards and performance elements.

Some other potential vulnerabilities relate to the management of high-level disinfection activities in these same office environments. I’m seeing a lot of the same types of findings that were once associated with areas like ultrasound, cardiology, etc., basically locations in which instruments and equipment are being manually disinfected. Lately I’ve seen findings relating to eyewash stations (check those disinfectant products to make sure that if you have a corrosive product, you’ve got a properly ANSI-configured eyewash station and, if you have one, make sure it’s being checked on a weekly basis), management of disinfectant temperature, ensuring there is sufficient ventilation, making sure secondary containers are properly labeled (including biohazard labels), using PPE in accordance with the disinfectant product’s Instructions for Use, etc. The real “danger” here is that this appears to be becoming a mix that results in significant survey impact relative to the physical environment, infection control, even surgical services. These are findings that can “squirt” (small pun intended) in many different directions, causing a big freaking mess, particularly when it comes down to clinical surveyors conducting the outpatient portion of the survey. You might want to make sure you’ve got a very robust means of communications from the outpatient sites to ensure that you can nip these types of findings in the bud. But you also probably want to do a little focused education with the folks out in the hinterlands to ensure that PPE is available and used, products are being used properly, etc. I know it becomes “one more thing” to do, but I think we have to come to grips with the reality that the surveyors are becoming very adept at generating lots and lots of findings in the physical environment; they understand that there are locations in almost any healthcare organization that are not “attended” quite as robustly and that if they pick at certain common vulnerabilities, they will be rewarded with findings. We need to take that away from them, toot sweet!

Keep documenting those risk assessments: the Conditions of Participation and other regulatory rapscallia still do not tell us how to appropriately maintain a safe environment, so we have to be diligent in plotting our own course(s). We get to decide how we do this, but we do have to actually make those decisions—and make them in a manner that provides evidence of the process. I know it probably seems like a lot of drudge work, but it’s pretty much what we have to do.

As a closing note, I’d like to thank all the veterans for their service, pride and dignity—we are all the better for it!

I’ve got a feeling…

Just a quick drop of the microphone to let you know that our friends in Chicago are presenting a webinar on the SAFER methodology that The Joint Commission will use during hospital surveys starting in January. As we’ve discussed previously, with the removal of standard types (As and Cs and whatever else you can conjure up) and the introduction of the “Survey Analysis for Evaluating Risk (SAFER) matrix to prioritize resources and focus corrective action plans in areas that are in most need of compliance activities and interventions,” it appears that once again we are heading into some white water rapids (certainly Class 4, with intermittent bursts of Class 5/6—better wear your life vest). That said, it appears that the webinar (scheduled for November 15) is for a limited number of attendees, but I do think that it might be useful to listen in to hear what pearls may (or may not) be uttered. You can register here and it also appears that the session will be recorded and made available on the TJC website (as near as I can tell, the webinar is free, so check your local listings).

Ciao for now. Back next week with more fun than you can shake a stick at…

History shows again and again how standards (and EPs) point out the folly of men…

It’s beginning to look like the proofreaders in Chicago must be enduring some late nights watching the Cubs! I don’t know about you folks, but I rely rather heavily on the regular missives from The Joint Commission, collectively known as Joint Commission E-Alerts. The E-Alerts deliver regular packages of yummy goodness to my email box (okay, that may be a little hyperbolic) and yesterday’s missive was no exception. Well, actually, there was an exception—more on that in a moment.

While it did not get top billing in the Alert (which seems kind of odd given what’s been going on this year), the pre-publication changes to the Life Safety chapter of the accreditation manual have been revealed, including comparison tables between what we had in January 2016 and what we can expect in January 2017. Interestingly enough, the comparison tables include the Environment of Care (EC) chapter stuff as well (though the EC chapter did not merit a mention in the E-Alert), so there’s lots of information to consider (which we will be doing over the course of the next little while) and some subtle alterations to the standards/EP language. For example (and this is the first “change” that I noted in reviewing the 112 pages of standards/EPs), the note for EC.02.02.01, EP 9 (the management of risks associated with hazardous gases and vapors) expands the “reach” to specifically include waste anesthetic gas disposal and laboratory rooftop exhaust (yes, I know…very sexy stuff!). It does appear that at least some of the changes are more clarification than anything truly new (it’s tough to figure out the split between what is truly “new” and what is merely a clarification of existing stuff—check out the note under EC.02.03.05, EP 1 regarding supervisory signal devices, because it provides a better sense of what could be included in the mix). Another interesting change under EC.02.03.05 (and this applies to all the testing EPs) is that where previously the requirement was for the completion dates of the testing to be documented, now the requirement actually states that the results of the testing are to be documented in addition to the completion dates. Again, a subtle change in the language and certainly nothing that they haven’t been surveying to. Oh, and one addition to the canon is the annual inspection and testing of door assemblies “by individuals who can demonstrate knowledge and understanding of the operating components of the door being tested. Testing begins with a pre-test visual inspection; testing includes both sides of the opening.” At any rate, I will keep plowing through the comparison table. (Remember in the old days, it would be called a crosswalk. Has the 21st century moved so far ahead that folks don’t know what a crosswalk is anymore?)

The top billing in yesterday’s All Hallows Eve E-Alert (making it an Eve-Alert, I suppose) went to the latest installment in that peppiest of undertakings, the Physical Environment Portal. Where the proofreaders’ comment comes into play is that the Alert mentions the posting of the information relative to LS.02.01.30 (which happened back in August), but when you click on the link, it takes you to the update page, where the new material is identified as covering LS.02.01.35, so there is updated material, though you couldn’t really tell by the Alert. So, we have general compliance information for the physical environment folks, some kicky advice and information for organizational leadership, and (Bonus! Bonus! Bonus!) information regarding the clinical impact of appropriately maintaining fire suppression systems (there is mention of sprinkler systems, but also portable fire extinguishers). I’d be interested to see if anyone finds the clinical impact information to be of particular use/effectiveness. I don’t know that compliance out in the field (or, more appropriately, noncompliance) is based on how knowledgeable folks are about what to do and what not to do, though perhaps it is the importance of the fire suppression systems and the reasons for having such systems (Can you imagine having to evacuate every time the fire alarm activates? That would be very stinky.) that is getting lost in the translation. I have no reason to think that the number of findings is going to be decreasing in this area (if you’re particularly interested, the comparison table section on LS.02.01.35 begins on p. 80 of that document—none of the changes that I can see appear likely to make compliance any easier), so I guess we’ll have to keep an eye on the final pages of survey year 2016 and the opening of the 2017 survey season. Be still my beating heart!

I wanna know: Have you ever seen the rain?

In our intermittently continuing series on the (final!) adoption of the 2012 Life Safety Code®, we turn to the one area about which I still have the most concerns—the magic land of NFPA 99. My primary concern is that while NFPA 99 contains lots and lots of references to risk assessments and the processes therein, I’m still not entirely convinced that the CMS oversight of the regulatory compliance process is going to embrace risk assessments to the extent that would allow us to plot our own compliance courses. I guess I will have to warily keep my fingers crossed and keep an eye on what actually occurs during CMS surveys of the physical environment. So, on to this week’s discussion…

When considering the various and sundry requirements relating to the installation and ongoing inspection, testing and maintenance of electrical system components, one of the key elements is the management of risk associated with electrical shock in wet procedure locations. NFPA 99 defines a wet procedure location as “(t)he area in a patient care room where a procedure is performed that is normally subject to wet conditions while patients are present, including standing fluids on the floor or drenching of the work area, either of which condition is intimate to the patient or staff.”

Typically, based on that description, the number of areas that would “rule in” for consideration as wet procedure locations is pretty limited (and depending on the nature, etc., of the procedures being performed, maybe even more limited than that). But in the modern age, the starting point for this discussion (and this is specifically provided for under section 6.3.2.2.8.4 of the 2012 edition of NFPA 99) is that operating rooms are to be considered wet procedure locations—unless a risk assessment conducted by the healthcare governing body (yow!) determines otherwise (all my yammering over the years about risk assessments is finally paying off—woo hoo!). By the way, there is a specific definition of “governing body”: the person or persons who have overall legal responsibility for the operations of a healthcare facility. This means you’re going to have to get your boss (and your boss’ boss and maybe your boss’ boss’ boss) to play in the sandbox on this particular bit of assessmentry.

Fortunately, our good friends at ASHE have developed a lovely risk assessment tool (this is a beta version) to assist in this regard and they will share the tool with you in exchange for just a few morsels of information (and, I guess, a pledge to provide them with some useful feedback as you try out the tool—they do ask nicely, so I hope you would honor their request if you check this out—and I really think you should). Since I’m pretty certain that we can attribute a fair amount of expertise to any work product emanating from ASHE (even free stuff!), I think we can reasonably work with this tool in the knowledge that we would be able to present it to a surveyor and be able to discuss how we made the necessary determinations relative to wet procedure locations. And speaking of surveys and surveyors, I also don’t think it would be unreasonable to think that this might very well be an imminent topic of conversation once November 5 rolls around and we begin our new compliance journey in earnest. Remember, there is what I will call an institutional tendency to focus on what has changed in the regulations as opposed to what remains the same. And I think that NFPA 99 is going to provide a lot of fodder for the survey process over the next little while. I mean think about it, we’re still getting “dinged” for requirements that are almost two decades old—I think it will be a little while before we get our arms (and staff) around the ins and outs of the new stuff. Batten down the hatches: Looks like some rough weather heading our way!

At any rate, here’s the link to the wet procedure location assessment tool.
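If it helps to see the default-plus-override logic in one place, here is a tiny sketch of the determination described above (operating rooms start out as wet procedure locations unless a documented, governing-body-approved risk assessment says otherwise); the function and its parameters are my own invention, not anything from NFPA 99 or the ASHE tool.

```python
# Hypothetical sketch of the default described above: operating rooms are wet procedure
# locations unless a governing-body-approved risk assessment determines otherwise
# (NFPA 99-2012, 6.3.2.2.8.4). Function name and parameters are made up.

def is_wet_procedure_location(room_type: str,
                              assessment_on_file: bool = False,
                              governing_body_determined_dry: bool = False) -> bool:
    """Return True if the space should be treated as a wet procedure location."""
    if room_type.strip().lower() == "operating room":
        # Default is "wet" unless a documented, governing-body-approved assessment says otherwise.
        return not (assessment_on_file and governing_body_determined_dry)
    # Other spaces rule in only if the wet-conditions definition actually applies;
    # that determination is left to your own assessment (placeholder default here).
    return False

print(is_wet_procedure_location("operating room"))              # True: the default
print(is_wet_procedure_location("operating room", True, True))  # False: assessed and approved as dry
```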

Hope everyone has a safe and festively spooky (or spookily festive) All Hallows Eve!

Is you is or is you ain’t a required policy?

Yet another mixed bag this week, mostly from the mailbag, but perhaps some other bags will enter into the conversation. We shall see, we shall see.

First up, we have the announcement of a new Joint Commission portal that deals with resources for preventing workplace violence. The portal includes some real-world examples, some of the information coming from hospitals with which I have worked in the past (both coasts are covered). There is also invocation of the Occupational Safety & Health Administration (lots of links this week). I know that everyone out there in the listening audience is working very diligently towards minimizing workplace violence risks and perhaps there’s some information of value to be had. If you should happen to uncover something particularly compelling as you wander over to the Workplace Violence Portal, please share it with the group. Bullying behavior is a real culture disruptor, and the more we can share ideas that help to manage all the various disruptors, the better place we’ll be in.

And speaking of a better place, I did want to bring to your attention some findings that have been cropping up during Joint Commission surveys of late. The findings relate to being able to demonstrate that you have documented a risk assessment of the areas in which you manage behavioral health patients; particularly those areas of your ED that are perhaps not as absolutely safe as they might otherwise be, in order to have sufficient flexibility to use those rooms for “other” patients. Unless you have a pretty significant volume of behavioral health patients, it’s probably going to be tough to designate any “safe” rooms to be used for behavioral health patients only, so in all likelihood you’re going to have to deal with some level of risk. I suppose it would be appropriate at this juncture to point out that it is nigh on impossible to provide an absolutely risk-free environment; the reality of the situation is that for the management of individuals intent on hurting themselves, the “safety” of the environment on its own is not enough. Just as with any risk, we work to reduce the risk to the extent possible and work to manage what risks remain. That said, if you have not documented an assessment of the physical environment in the areas in which you manage behavioral health patients, it is probably a worthwhile activity to have in your back pocket. I think an excellent starting point would be to check out the most recent edition of the Design Guide for the Built Environment of Behavioral Health Facilities, which is available from the Facilities Guidelines Institute. There’s a ton of information about products, strategies, etc. for managing this at-risk patient population. And please keep in mind that, as you go through the process, you may very well uncover some risks for which you feel that some level of intervention is indicated (this is not a static patient population—they change, and you may need to change your environment to keep pace), in which case it is very important to let the clinical folks know that you’ve identified an opportunity and then brainstorm with them to determine how to manage the identified risk(s) until such time as corrective measures can be taken. Staff being able to speak to the proactive management of identified risks is a very powerful strategy for keeping everybody safe. So please keep that in mind, particularly if you haven’t formally looked at this in a bit.

As a closing thought for the week, I know there are a number of folks (could be lots) who purchased those customizable EOC manuals back in the day and ever since have been managing like a billion policies, which, quite frankly, tends to be an enormous pain in the posterior. I’m not entirely certain where all these policies came from, but I can tell you that the list of policies that you are required to have is actually fairly limited:

  • Hazard Communications Plan (OSHA)
  • Bloodborne Pathogens Exposure Control Plan (OSHA)
  • Respiratory Protection Program (OSHA)
  • Emergency Operations Plan (CMS & Accreditation Organizations)
  • Interim Life Safety Measures Policy (CMS & Accreditation Organizations)
  • Radiation Protection Program (State)
  • Safety Management Plan (Accreditation Organizations)
  • Security Management Plan (Accreditation Organizations)
  • Hazardous Materials & Waste Management Plan (Accreditation Organizations)
  • Fire Safety Management Plan (Accreditation Organizations)
  • Medical Equipment Management Plan (Accreditation Organizations)
  • Utility Systems Management Plan (Accreditation Organizations)
  • Security Incident Procedure (Accreditation Organizations)
  • Smoking Policy (Accreditation Organizations)
  • Utility Disruption Response Procedure (Accreditation Organizations)

Now I will freely admit that I kind of stretched things a little bit (you could, for example, make the case that CMS does not specifically require an ILSM policy; you could also make the case that it is past time for the management plans to go the way of <insert defunct thing here>, at the very least leaving it up to the individual organizations to determine how useful the management plans might be in real life…). At any rate, there is no requirement to have any policies, etc., beyond the list here (unless, of course, I have left one out). So, no policy for changing a light bulb (regardless of whether it wants to change) or a policy for writing policies. You’ll want to have guidelines and procedures, but please don’t fall into the policy “trap”: Keep it simple, smarty!
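One way to keep that short list from ballooning back into a billion policies is to track it as a simple inventory with the requiring authority attached; the sketch below is just one hedged way to do that, and the review dates and helper function are my own additions rather than a requirement from anyone.

```python
# Hypothetical sketch: tracking the required policies listed above along with the
# authority that requires them. The review dates and the helper are illustrative only.

REQUIRED_POLICIES = [
    {"policy": "Hazard Communications Plan", "authority": "OSHA", "last_review": "2016-06-01"},
    {"policy": "Bloodborne Pathogens Exposure Control Plan", "authority": "OSHA", "last_review": "2016-06-01"},
    {"policy": "Emergency Operations Plan", "authority": "CMS & Accreditation Organizations", "last_review": "2016-09-15"},
    {"policy": "Interim Life Safety Measures Policy", "authority": "CMS & Accreditation Organizations", "last_review": "2016-11-01"},
    # ...the remaining items from the list above would follow the same pattern.
]

def policies_due_for_review(inventory, cutoff):
    """Return policies whose last documented review predates the cutoff (ISO dates compare lexically)."""
    return [entry["policy"] for entry in inventory if entry["last_review"] < cutoff]

print(policies_due_for_review(REQUIRED_POLICIES, "2016-08-01"))
```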

A toast(er) to all that have gone…

Earlier this week, I received a question regarding the need to do a risk assessment that would allow (or prohibit) the use of toasters in break rooms, etc., due to the open heating element. I should probably mention that this “finding” was not at the hands of The Joint Commission, but rather one of the other acronymic accreditation agencies, but these things do tend to travel across agency boundaries, so it may be a topic of conversation for your “house.” At any rate, the request was aimed more at identifying a format for documenting the risk assessment (an example of which follows), as the surveyor who cited the toasters indicated that a risk assessment supporting continued use of the toasters would be sufficient. Special survey hint: If a surveyor indicates that a risk assessment would be an acceptable strategy for whatever practice or condition might be in question, you should consider that a pretty good indication that there is no specific regulatory guidance in any direction for the subject at hand. Though I will also note that if a surveyor does not “bite” on a risk assessment, it doesn’t mean that there is a specific regulation/statute/etc. that specifies compliance, so even if there appears to be no relief from a risk assessment, a thorough review of what is actually required is always a good idea. Which probably represents a good point to discuss the risk assessment components:

  1. Issue Statement. Basically a recap of the condition or practice that has been identified as being problematic/a vulnerability, etc. Using this week’s topic—the use of open element appliances in break rooms, etc. (no reason to confine the discussion to toasters; might as well include toaster ovens, grills, and other such appliances)
  2. Regulatory Analysis. Reviewing what is specifically indicated in the regulations: CMS Conditions of Participation; Accreditation Agency standards and performance elements; state and local laws and regulations should definitely be discussed, as well as any other Authorities Having Jurisdiction (AHJ) that might weigh in on the topic. For the open element appliance discussion, I always encourage folks to check with their property insurer (they are a very important, and frequently overlooked, AHJ); they might not tell you that you can or can’t do something (again, based on whether there is an actual regulatory requirement), but they might tell you that if you do X and have a fire, etc., they might elect not to cover damages.
  3. Literature Review. Review any manufacturer recommendations or information from specialty society or trade associations. Staying with our friends the toasters, most of the devices in use in your organization are probably manufactured “For Household Use Only”; you might be hard-pressed in the risk assessment to be able to indicate definitively that the devices are being used in accordance with that level of use (I mean I love toast as much as the next person, but I don’t toast a whole loaf every day…). As a consultative aside, my philosophy has always been to encourage (okay, mandate, but only when I was in a position to make the call) the use of commercial-grade toasters. Yes, they are more expensive, but they are also less likely to self-immolate, which (in my book) is rather a good thing. We definitely don’t need things bursting into flames in our break rooms, etc.
  4. Review of Safety, Quality and Risk Management Data. Check your records. You know you’ve had accidental activations of the fire alarm system (though I do believe that toaster events have faded to a distant second behind microwave popcorn). Is there evidence that your organization is not doing an appropriate job of managing these devices/appliances? I suppose you could take into consideration anecdotal data, but I would be very careful, as that can be tricky.
  5. Operational Considerations and Analysis. Discuss how things are being managed now: how often are the appliances being cleaned, serviced, etc.? Is that often enough? Is there sufficient smoke detection, suppression, etc.? Do you need to have “official” guidelines for safe toaster use (no sticky, gooey toaster strudels, etc.)? If you’re going to allow something (recognizing that a prohibition is the easiest thing to police from a surveillance perspective), you may find that folks will require a bit of sensible direction to manage the risks effectively.
  6. Organizational Position and Policy Statement/Approval and Adoption. Once you’ve figured out what you want to do, just outline the position you are adopting, make sure that what you’re doing is not in opposition to any existing policy or plan, and then run it through the appropriate committees for final approval and adoption by the organization. In most instances, there is absolutely no reason to establish a specific policy for these things; set it up as a guideline or a protocol or a standard operating procedure (SOP). There are really very few policies that are required by law or regulation. Please don’t feel the need to populate your EOC manuals with a million and one incidental policies (I think this might be a good topic of future conversation).

There are many ways to “skin” a risk assessment and the methodology indicated above may not be suitable for all audiences, but it is a very good way to document the thoughtful analysis of an issue (be it identified during a survey or during your own surveillance activities), particularly when logic does not immediately prevail. (And believe me, logic doesn’t prevail as often as it used to. It makes me sad to think about all the gyrations that have been “committed” because we’ve been forced to deal with something that is “possible” as opposed to “probable” or “actual.” And if you’re thinking that the management of cardboard is somewhere in that equation, you would indeed be correct…) It all goes back to the subtle dynamics between what you “have” to do versus what you “could” do—to a very large extent, at least in terms of the regulations, we get to make our own way in the world. But that world is full of surveyors who are perfectly willing to disagree with any decision we’ve ever made; and they tend not to allow us to do the risk assessment math in our heads (pity, that). This is a pretty straightforward way to get your work on paper. I hope you find it useful.
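For folks who like a fill-in-the-blanks starting point, here is a minimal sketch of the six components above as a reusable template; the section names mirror the list, and everything else (the function, the toaster example text) is hypothetical.

```python
# Hypothetical sketch: a fill-in-the-blanks template for the six risk assessment
# components described above. Section names mirror the list; everything else is illustrative.

RISK_ASSESSMENT_SECTIONS = [
    "Issue Statement",
    "Regulatory Analysis",
    "Literature Review",
    "Review of Safety, Quality and Risk Management Data",
    "Operational Considerations and Analysis",
    "Organizational Position and Policy Statement/Approval and Adoption",
]

def new_risk_assessment(issue):
    """Start a blank assessment document for a given issue statement."""
    document = {section: "" for section in RISK_ASSESSMENT_SECTIONS}
    document["Issue Statement"] = issue
    return document

# Example (the toaster language is illustrative only):
assessment = new_risk_assessment("Use of open-element appliances (toasters, toaster ovens, grills) in break rooms")
assessment["Regulatory Analysis"] = "No specific CMS/AHJ prohibition identified; property insurer consulted."
for section, text in assessment.items():
    print(f"{section}: {text or '[to be completed]'}")
```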

Ah, the fresh (de)scent of hell…

Two relatively disparate topics of conversation this week; one that I suppose could be characterized as good news, the other not so much…

First, the good news: The Joint Commission is continuing in its review and revision of the various and sundry accreditation programs and has earmarked a number of EC performance elements for the scrap heap, one of which is kind of interesting (and none of which is what I had really hoped for—the management plans, like the monster in some horror flicks, just keep coming back for more). So the requirements that are either redundant or will be left up to the decision of each organization are as follows:


  • The requirement to monitor and report all incidents in which medical equipment is suspected in or attributed to the death, serious injury, or serious illness of any individual. Reason: Already required by the Safe Medical Devices Act of 1990.
  • The requirement to have procedures that address how to obtain emergency repair services. Reason: Issue should be left to organization discretion.
  • The requirement to provide emergency access to all locked and occupied spaces. Reason: Should be left to organization discretion.
  • Requirement for staff and LIPs to describe or demonstrate methods for eliminating and minimizing physical risks in the environment of care. Reason: Left to organization discretion.
  • Requirement for staff and LIPs to describe or demonstrate how to report EC risks. Reason: Left to organization discretion.
  • Requirement for semiannual environmental tours in patient care areas. Reason: Left to organization discretion.
  • Requirement for annual environmental tours in non-patient care areas. Reason: Organization discretion.
  • Requirement to use environmental tours to identify environmental deficiencies, et al. Reason: (all together now!) Organization discretion.
  • Requirement for representatives from clinical, administrative, and support services to participate in the analysis of EC data. Reason: You guessed it!
  • Requirement to evaluate changes to determine if they resolved environmental safety issues. Reason: not quite what you might be thinking—It’s because this element is implicit in the requirement for your organization to take action on the identified opportunities to resolve environmental safety issues. But wait: How are we going to identify opportunities if we are wicked discreet about the environmental tours? Hmmm…

So we lose 10 performance elements that will now become “ghost” standards (don’t get any ectoplasm on you…icky!). Clearly the expectation that these elements are going to be present somehow and/or somewhere in your EC program is not going away and, to be honest, I’m not convinced (at least at the moment) that you’ll be able to risk assess your way out of a lot of this stuff. I’m most disappointed (after the management plans—I really, really, really don’t have a whole lot of use for them—they bring no intrinsic value to the process and are naught but an exercise in paperwork) in the removal of the specific requirements for staff to be able to describe or demonstrate methods for eliminating risks and to be able to report EC risks. I suppose you could decide that folks don’t have to know that stuff, but I have spent a lot of time and energy beating the drum for the “spread” of safety to point of care/point of service folks. Safety does not live on a committee; it does not live on a periodic survey process. Safety lives everywhere in your “house” every moment of every day. Somehow removal of those two EPs makes me a little verklempt…

But not as verklempt as some of the folks in Chicago might be of late. Quick background: Periodically, CMS is charged with notifying Congress as to how the various and sundry accreditation organizations are faring when it comes to surveying to the Conditions of Participation, which is pretty much the fundamental task of the deemed status process. At any rate, the information that CMS shared with those pesky Congresspersons can be found here. Of particular interest to this conversation is the information beginning at the bottom of p. 38 of the document, where you will find a table that outlines the disparity rate between Condition-level findings identified by the accrediting organizations (referred to as AOs in the report) and those found by CMS during validation surveys. While (and I don’t think it’s much of a surprise) CMS does ferret out things that were missed during the regular accreditation survey, of the “big three” accreditors of hospitals (AOA/HFAP, DNV, TJC), only TJC did not improve its disparity score in FY 2014 (as the only accrediting agency for psychiatric and critical access hospitals, it didn’t do real well there, either—see pp. 39-40).

But where things get kind of ugly for us is the table (lucky #13) on p. 44, which lists the types of findings missed most frequently in hospitals by the accreditation organizations as compared to CMS. And the most frequently missed Condition of Participation? Why, it’s our old friend, the Physical Environment! The environment fares somewhat better in psychiatric hospitals (which, to be honest, surprises me a little, but it may be a question of a small sample size; unless, of course, your sample size is HUGE!) and about the same in critical access hospitals. At this point, I think I’ve probably yapped enough for one week, but I would encourage you to check out the analysis of the physical environment findings starting on p. 49. It doesn’t paint a particularly bright picture, particularly if there were any of you folks anticipating a return to the clinical side of things during surveys. All signs point to even more scrutiny (happy, happy, joy, joy!) of the physical environment…imagine that.

Batten down the hatches, mateys—we’re in for quite a blow!