Catching up on some recent e-mail questions, there was one regarding hazard surveillance activities and the oversight of off-site buildings (not identified under the hospital’s license) where there are no patient services provided. So the question became whether these locations had to be included in the hazard surveillance program and whether they would be subject to a Joint Commission visit during survey.
So, taking a look at the Joint Commission “role,” while it is most unlikely that the survey team would visit an off-site location with no patient services (not quite 0%, but something very close to that), a pain-in-the-butt surveyor might check the hazard surveillance round documentation against the list of hospital departments. Now the standard/elements in question, EC.04.01.01, EPs 12 and 13, refer only to patient and nonpatient care areas, so I think the thing to do is to be very specific in identifying locations in the scope of your management plans (I mean, what exactly is a nonpatient care area? A nonpatient area—got that; a patient care area—got that. But this hybrid is a little vague). Ultimately, the whole process sets up based on what you’ve identified as the appropriate inclusions, etc., so you can certainly make the determination of what would rule into the program, or indeed rule out of it.
That said, I have a concern in the event that OSHA were to rear its ever so lovely head. It would be of critical importance to demonstrate some sort of oversight; one strategy that comes to mind would be to develop a self-inspection process for those areas and fold that into the formal surveillance process. As a safety professional, I’m having a hard time saying that these “other” locations can be culled out of the main process (in my experience, it is never a good thing for people to think that they are somehow being ignored or not appropriately tended). I think the thing to do is to set up a less-invasive process that will allow some sort of feedback loop if environmental issues crop up in these other locations. Better you find out about issues than to have somebody drop a dime to the big “O.”
Recently I fielded a question from someone who was reviewing an MRI safety plan and was curious about how the four-zone “defense” would work when the MRI is in a self-contained trailer that is not part of the building. Now my first thought on this was whether the MRI service is provided under contract (including staffing) or whether the service was staffed by hospital folks and the trailer under some kind of lease arrangement. My somewhat snotty response would have been to lean on the contract folks to work out how it all fits together, but then I was thinking: What if we had to work it out on our own, for whatever reason?
At first blush, I can see establishing three zones without too much difficulty. It may be necessary to combine Zones 2 and 3, but Zone 1 (i.e., general public) would be outside the trailer and Zone 4 (i.e., screened MRI patients under constant direct supervision of trained MRI personnel) would be in the magnet room, so that’s pretty reasonable. But again, what about Zone 2 (i.e., unscreened MRI patients) and Zone 3 (i.e., screened MRI patients and personnel)? I don’t think there would be enough room in the trailer for screened and unscreened folks, but maybe Zone 1 could be the ground level outside the trailer, Zone 2 could be the area just outside the trailer (on the lift and/or stairs—depends on the configuration of the trailer), which would leave Zone 3 in the control area.
Or I suppose you could do a risk assessment to demonstrate that the risks can be appropriately and reliably managed without adoption of the four-zone setup, but one would need to make sure that all staff in the area (and I do mean “all” staff) can speak to the protection measures in place. Querying of staff has been coming up in recent TJC surveys and if staff cannot speak to the zones and differing levels of safety/protection, it results in citations under the management of hazardous energy sources element of the Hazardous Materials and Waste Management standard. Probably not a bad thing to check on after lunch this afternoon.
Not that long ago, I participated in a discussion regarding the terminal cleaning of surgical procedure rooms, particularly the recommended frequency of using wet vacuums and a liberal application of water to ensure appropriate results. As a function of that discussion, it appeared that the organization in question had adopted a weekly frequency for the use of wet vacs, and it also appeared that the OR rooms were noted to have a buildup of materials (you know what they are—no need to get graphic here) in the corners of the rooms. The overarching question was what the TJC survey risk might be given the current practice and conditions.
Now I honestly don’t know why people struggle so much with this, but here goes (and yes, for those of you keeping score, before we finish this conversation, I will invoke the mighty risk assessment). The Joint Commission standards do not specify any particular methodology for the cleaning of operating rooms, primarily (to my way of thinking) because there is far too much variability in the surgical environment for them to “require” anything more than what they do now: EC.02.06.01, EP 20, which requires that “areas used by patients are clean and free of offensive odors.” And that, as they say, is that. (By the way, in case I have not previously noted this, this EP is now a Direct Impact finding and is showing up with far greater frequency in survey results. I characterize this as the “return of the white gloves.”)
Based on the conversation related above, if there is a buildup of nasty stuff in the corners, then clearly whatever frequency and/or method they are employing is not particularly effective—even if they are using a wet vac only once a week, they shouldn’t be getting a whole lot in the way of buildup in the corners, so putting my EVS hat on, there are at least some competency/education opportunities lurking in them thar corners. Will a wet vac be enough? Read on…
Even the CDC, in its Guidelines for Environmental Infection Control, points to other “published” guidance when it comes to the cleaning of ORs, which, to my mind, points to the AORN recommendations (I don’t know if there is a more contemporary edition; this is from 2007, but I think the basic concepts still apply). Now there are a whole slew of things to do in the AORN guidance, including a process for before/between-case cleaning (which indicates the process would be in place even after the room is terminally cleaned). As a bit of consultative advice, I would be inclined to advise folks not to cherry-pick the “easy” pieces from the AORN guidance. If one were to consider the AORN guidance as a “best” practice, then we would need to look at it as a whole, at least as a starting point. As with all universal guidance documents, the “one size fits all” aim almost never results in universal applicability, but if we are going to adopt something along the lines of a hybrid kind of deal (wait for it!), we would need to have this framed within the context of a risk assessment.
So if, for instance, a surveyor were to cite a hospital for not following the AORN guidance (and there is no TJC standard that says you have to; good idea, yes, mandate, no), without citing some level of failure relative to the cleaning process, then that would be an opportunity for clarification. But, if the organization can demonstrate that whatever methodology it is using is effective, based on data analysis, infection rates, etc., then who’s to say that its “strategy” is not every bit as valid as the recommendations of AORN? But, if the same hospital gets cited because it has sanitation opportunities in its ORs, then it is dead in the water, so to speak. As something of an editorial comment, it has been a very long time since I’ve been in an OR that I would call clean; most are okay, but for instance, it is very rare indeed that the anesthesia machine does not have a buildup of dust. Now I know that the dust comes primarily from linen and the undoing of surgical packs, etc. in the room (that stuff does nothing but aerosolize lint—yuck!), which means pretty much every time an OR room gets used, we are introducing more stuff that will need to be cleaned.
In conclusion: Buildup in the corners will likely lead to a TJC or CMS finding, and that is as it should be; folks who are experiencing less-than-ideal levels of cleanliness need to look at changing their process, but I don’t think it necessarily makes sense to tie your future to adopting anyone’s specific guidance until you’ve figured out the reasons for having a deficient process in the first place. That way, if you should choose not to adopt what can legitimately be characterized as a best practice (the AORN guidance), then you had better have a real good reason (which would include validation of the effectiveness of the alternative) for doing so.
This is just like the cardboard issue. There is no regulatory mandate in any direction on this, but it is a risk to be managed effectively. Nowadays, each organization’s responsibility, and, strangely enough, The Joint Commission’s expectations, is to know whether it is managing identified risks appropriately. There can truly be no more reliance on the “no news is good news” philosophy—reactive risk management will only cause trouble during survey. Improvement is a beautiful thing, but you need to understand how and why improvement occurred.
Boston’s buzzing today as hockey fans celebrate the Bruins winning their first Stanley Cup in 39 years, but that’s not the only action that took place here this week. Earlier in the week, the National Fire Protection Association (NFPA) held its 2011 Conference and Expo in Boston, which was followed by the NFPA Technical Meeting on Tuesday and Wednesday.
Of particular interest to healthcare facilities folks, at the Technical Meeting the association approved new versions of NFPA 101, Life Safety Code® (LSC), and NFPA 99, Standard for Health Care Facilities. The 2012 editions of each standard are expected to be published officially in the next few months.
Once the 2012 editions are published, CMS and The Joint Commission are expected to follow suit and adopt the 2012 editions. Currently, both require hospitals to comply with the 2000 edition of the LSC. The most recent edition of the LSC was published in 2009.
It could take up to 18 months before CMS adopts a newer edition of the LSC. Once that happens, The Joint Commission, Det Norske Veritas, and the Healthcare Facilities Accreditation Program will also adopt it, and then accredited hospitals must comply with the new requirements.
Visit the NFPA’s Conference blog for more information on the votes and see the upcoming issue of Healthcare Life Safety Compliance for details and analysis of these actions and what they’ll mean for your facility.
By now, I suspect that most of you have heard about some of the “editorial” changes that will be taking effect in a couple of weeks—just in time for the Independence Day festivities, though I don’t know that this should result in much in the way of fireworks.
So the first item revolves around the whole business occupancy as emergency services provider and/or community-designated disaster receiving station, which I suppose is a concern for some folks. But I can’t think of too many folks with business occupancies that provide emergency services or (even less likely, I’m thinking) community-designated disaster receiving stations, and even if you do, why would you not include these locations in your regular exercise schedule? Again, something of which to be mindful, but I shouldn’t think it would be a problem as long as you’re paying attention. Which leads me to the “other” point, otherwise known as Note 4 under EM.03.01.03, which appears to pile on a bit when it comes to your post-emergency exercise activities.
As with so many of the more intricate meanderings of the Joint, as far as I’m concerned, this merely clarifies what was already implied in the standard. To be honest, the exercise section of the Emergency Management chapter is actually kind of useful (along with the standards covering the management of volunteer practitioners during an emergency—that is a very well-crafted set of standards/expectations and can actually assist folks in identifying appropriate strategies, but I digress).
My interpretation of the whole EM.03.01.03 magillah is that it is a clear move to a classic performance improvement cycle: You do an exercise, you identify improvements, implement the improvements, use the next exercise to evaluate the changes, identify more improvements, implement them, use the next exercise to evaluate those changes, and so on. Where this can get tricky is when you’re “playing” with the community, because they will almost invariably have a different agenda for the exercise than the hospital will, so the hospital then has to become creative in building its improvements (and the evaluation thereof) into that drill. Sometimes the improvements are so broad-ranging that they will easily “fit” in any scenario, but others maybe not so much. The other point to keep in mind (and this dovetails very nicely with the recent blog item on interim gas measures) is that if you cannot implement an identified improvement prior to the next drill, you are supposed to identify interim measures to “bridge the gap” until implementation (the note for EM.03.01.03, EP 16 states that when modifications requiring substantive resources cannot be accomplished before the next response exercise, interim measures are put in place until final modifications can be made). As far as I’m concerned, the “requirement” in the new note already existed as EP 17, which requires subsequent exercises to reflect the modifications and interim measures identified in previous exercises. They’ve changed the language some (and perhaps made it clearer), but I thought that what they had was clear enough. It may be that they felt they did no one any favors by burying it at the end of the chapter and decided to move it to a position of greater prominence. All we have is conjecture at this point. So, my advice would be to utilize your organization’s performance improvement model to track exercises and performance therein and keep the ball rolling.
As to how this may impact organizational resources (or the lack thereof), I don’t know that I’m prepared to throw that towel in just yet. I see far too much procrastination when it comes to emergency management efforts; it’s almost always part of someone’s job, but not the primary part, so accountability for emergency planning slides down the hierarchy. Part of it is that, historically, most hospitals haven’t had to deal with what I will loosely describe as “overwhelming” events, so it becomes a cost- (or risk-) benefit analysis. We have to devote our resources towards the stuff that’s most important. Now I’m certainly not prepared to say organizations feel that emergency preparedness is not important, but when you don’t have the resources to even make critical infrastructure improvements (which can actually increase the likelihood of an overwhelming event), you spend more time fixing stuff, putting out fires, etc. As I’ve noted in more places than I care to think, emergency preparedness is a journey, not a destination. We will never get to a point where we can look back and say that we’ve done all we can (unless, of course, the Mayans are correct about next year, then maybe, just maybe we won’t have to worry as much about this stuff).
The “best” result you can really expect from an emergency response exercise is the identification of questions or issues that you can’t immediately resolve. That’s where you find your real improvement opportunities and/or vulnerabilities. There will always be quick fixes, but to find a real process opportunity—that’s real gold.
A client of mine recently happened upon one of these opportunities. (Now this may be something you’ve already dealt with—and good for you. Everybody comes to these things in their own way, form, time, etc. But if this is a concept you’ve not really addressed, then it’s something to consider for future exercises.)
The general scenario was one that resulted in an influx of patients. One of the downstream events during the exercise was that the ICU was directed by Incident Command to plan for the admission of pediatric and other patients who wouldn’t be considered typical to the populations served in the ICU. In the course of the exercise, concerns were raised by the ICU staff regarding how this “shift” would be accounted for in hospital policy, what happens to existing policies for “normal” operations, and the recognition that staff caring for these patients do not necessarily have demonstrated competencies relative to the needs of these patient populations. This finally led to the question of the accountability/liability of the hospital and any individual practitioner responding to the immediate needs.
As you can readily see, there are a lot of complications involved here, some of which are working in opposition. First we’ll start with Joint Commission requirement EM.02.02.11, EP 4, which requires the hospital to have a strategy for managing an increase in demand for clinical services for certain vulnerable populations, including pediatrics. Fortunately (I’m choosing to be optimistic about this), that’s pretty much all The Joint Commission says about it: we have to have a process, but how that process works is entirely up to us. The next complication is going to be under what circumstances we would need to plan for such an event. Would it be an emergency of such far-reaching consequences that the “normal” rules are suspended? In such a case, we may have a little leeway (note the “may”—more on that in a moment) in terms of how we emergently manage these patient populations, though I suspect that it will be of fairly limited duration (we could certainly look to post-Katrina New Orleans for an example of how “bad” things can get, and there’ll still be someone to jump ugly on your decisions after the dust has cleared).
Part of our due diligence, now that the question is raised, is to consult with the state board of registration in nursing to see if it has any guidance. Clearly, we could get in a situation in which baseline competencies and scope of practice might be exceeded. From a risk management perspective, we need to have a very, very clear understanding of what that can and cannot mean. I can’t imagine that the question hasn’t been pondered by someone at the state level, maybe not quite as succinctly as this, but it’s a question that can equally apply to any and every healthcare organization in the state (not to mention the country, but I guess I just did). The other part of the due diligence would be to try and craft some basic expectations/competencies to be used as a framework during emergent events. I don’t know how much you could set up ahead of time (and I suppose from a compliance perspective, one would have to consider the merit of Memorandums of Understanding with healthcare organizations that may have ready access to some of these “other” resources—and would be willing to share them).
At any rate, this is something for which there is a regulatory expectation of planning and identification of response capabilities. Although the requirement does not force us to “have” these resources, it does require that we have a plan for managing such a situation should it arise.
Hospital safety is being questioned after a patient shot and killed a doctor at Florida Hospital in Orlando on May 27.
Last month, a 53-year-old patient shot and killed a 41-year-old transplant surgeon in the hospital’s parking garage and then killed himself, reports the Los Angeles Times.
Since the murder, the hospital has stepped up its security, and police escorts are available for those who need them.
Security experts say physicians are becoming more common targets of angry patients.
One of the occupational hazards of my job is that I tend not to leave the job behind when I travel. I am forever looking at exit signs and blocked exit doors, testing battery-powered emergency lights in hotels, etc. One of the things that has become somewhat the bane of my existence is certain configurations of hand washing facilities. Now I believe that if one has a manually operated sink, then one must have access to paper towels to turn off the water or risk recontamination of the hands.
Now I know the risk is probably fairly small, but it is a risk that can be managed pretty easily. I have no real preference between automatic and manual-op faucets (I tend to find the manual-op a little more reliable). But if there’s a manual-op sink, I want to see the paper towels (I refuse to use my shirt tail to turn off a faucet, toilet paper just doesn’t “feel” right, and leaving the water running is not very green). And I’ve heard about using elbows, etc. to turn off the water (try that with a knob-like fixture!), but somehow that doesn’t seem all that effective either. There’s a certain fast food restaurant (one tangential hint—its founder was the opposite of a buoyant substance) that uniformly has manual-op sinks and hand dryers in its restrooms, and it drives me nuts (a short distance to travel many days, but I digress).
In this age of superbugs, etc. (I’m writing this on May 19, so I’m assuming that some of us will still be around after this weekend’s departure on the part of whoever it is who’ll be departing: I don’t believe that I will be in that number, but if you are, I’m hoping the next plane doesn’t have as many hand washing challenges), hand washing remains one of, if not the most, effective means of protecting ourselves, and perhaps all of humanity, from these pesky critters. So please, please, please (at this point, I have to drop to the ground, so I can don my cape) make sure that your hand washing setups are conducive to maintaining good hygiene. If you’ve got manual-op faucets, give the washers something with which to turn off the faucet, and if we’re lucky, maybe the lights…
The entire 367-bed St. John’s Regional Medical Center in Joplin, MO, needed to be evacuated after a tornado tore through the city May 22.
The tornado caused extensive damage to the hospital. The nine-story facility’s walls were moved 10 feet out of place, debris was strewn about patient rooms, and medical records and other documents were blown away, reports CBS News. Hospital staff promptly evacuated the 183 patients from the facility and the emergency room and set up a triage center in the parking lot. After evaluating patients, nurses sent them to other facilities for treatment.
St. John’s readied the facility before the tornado struck, declaring “Condition Gray,” prompting patients and visitors to go into protected parts of the building such as stairwells.
A physician at the hospital, Jim Riscoe, MD, praised the hospital’s staff to CBS News, saying they acted professionally and communicated well.
One little item that seems to be popping up with increasing regularity is the citing of deficiencies noted in medical gas and vacuum testing reports that are not being managed in a demonstrable way. In some instances, folks have been tagged for the amount of time between identification and resolution, others for not demonstrably managing the risks associated with a delay in resolving something, etc.
Now we’ve certainly had the “you can live and die during a survey based on what your vendors are documenting” discussion before, though that exchange has tended to focus on life safety device testing. This has the appearance of a fairly significant sea change relative to survey findings, and I believe that it represents an overall move towards a risk management process that is not unlike the process outlined under the interim life safety measure (ILSM) standard and associated performance elements. Only time will tell as to how this affects the tenor of surveys and survey results, but I would ask you to think about this:
Any time a deficiency (any type of deficiency—doesn’t specifically have to be a life safety deficiency) or otherwise “unsafe” condition cannot be resolved immediately, there is an inherent risk to be managed. That’s just the nature of the beast. And that risk can be fairly variable in terms of its impact on our facilities and occupants, but how do we know unless we run it up the ol’ risk assessment flagpole and see if any salutes are forthcoming?
Certainly, it is within our purview to prioritize our responses to stuff that needs to get done, but it would seem that if we do not have our t’s crossed and i’s dotted (until our eyes are crossed), we could be at risk for citation for taking too much time to complete our corrective action (yeah, I know, how much is too much?). But I think if we have a solid, documented process for assessing the risk and identifying any interim strategies for minimizing the risks until such time as the condition can be corrected, we can effectively manage the risk involved with a delay, both from an operational standpoint and a survey management standpoint.
At the end of the day (cliché alert), it is our responsibility to ensure that safety is maintained at an appropriate level (also meaning that it is our responsibility to identify what appropriate looks like) at all times, even when things are not in perfect shape. When things are not in perfect shape, the mission then becomes one of figuring out how we make sure that any gaps are bridged, which effectively means some sort of interim measure (if you look closely, you’ll see the concept of interim measures doesn’t just show up under the ILSM standard—it’s in the emergency power standards, as well as in the emergency management chapter). It is up to us to manage our resources effectively, and sometimes that means that we can’t fix everything right away—and that’s okay.
As long as we make sure that the associated risks are managed just as effectively as we manage those resources. So, keep close tabs on those maintenance and testing documents getting shoved under your door; any one of them could be trouble during survey.
Which reminds me, if your testing vendors are not providing you with a complete summary of any and all deficient findings identified during the testing and maintenance activities, I plead with you to demand this of them. The surveyors now have more than enough time to find that little mission critical device failure buried on page 15 of the testing report, and they’re going to ask you what you’re doing about it. If you didn’t get to page 15 in the report, you’re going to end up with some egg on your face and probably an RFI (Requirement for Improvement), which won’t make anyone happy. A simple executive summary of deficiencies (or, if they didn’t find anything, a simple statement that says “yes, we didn’t find anything”) gives you one place to look and one access point for managing the risks associated with the findings. I know I don’t have time to read through all those reports, and I suspect your time can be better spent, but perhaps that’s just me…