In our intermittently continuing series on the (final!) adoption of the 2012 Life Safety Code®, we turn to the one area about which I still have the most concerns—the magic land of NFPA 99. My primary concern is that while NFPA 99 contains lots and lots of references to risk assessments and the processes therein, I’m still not entirely convinced that the CMS oversight of the regulatory compliance process is going to embrace risk assessments to the extent that would allow us to plot our own compliance courses. I guess I will have to warily keep my fingers crossed and keep an eye on what actually occurs during CMS surveys of the physical environment. So, on to this week’s discussion…
When considering the various and sundry requirements relating to the installation and ongoing inspection, testing and maintenance of electrical system components, one of the key elements is the management of risk associated with electrical shock in wet procedure locations. NFPA 99 defines a wet procedure location as “(t)he area in a patient care room where a procedure is performed that is normally subject to wet conditions while patients are present, including standing fluids on the floor or drenching of the work area, either of which condition is intimate to the patient or staff.”
Typically, based on that description, the number of areas that would “rule in” for consideration as wet procedure locations is pretty limited (and depending on the nature, etc., of the procedures being performed, maybe even more limited than that). But in the modern age, the starting point for this discussion (and this is specifically provided for under section 6.3.2.2.8.4 of the 2012 edition of NFPA 99) is that operating rooms are to be considered wet procedure locations—unless a risk assessment conducted by the healthcare governing body (yow!) determines otherwise (all my yammering over the years about risk assessments is finally paying off—woo hoo!). By the way, there is a specific definition of “governing body”: the person or persons who have overall legal responsibility for the operations of a healthcare facility. This means you’re going to have to get your boss (and your boss’ boss and maybe your boss’ boss’ boss) to play in the sandbox on this particular bit of assessmentry.
Fortunately, our good friends at ASHE have developed a lovely risk assessment tool (this is a beta version) to assist in this regard and they will share the tool with you in exchange for just a few morsels of information (and, I guess, a pledge to provide them with some useful feedback as you try out the tool—they do ask nicely, so I hope you would honor their request if you check this out—and I really think you should). Since I’m pretty certain that we can attribute a fair amount of expertise to any work product emanating from ASHE (even free stuff!), I think we can reasonably work with this tool in the knowledge that we would be able to present it to a surveyor and be able to discuss how we made the necessary determinations relative to wet procedure locations. And speaking of surveys and surveyors, I also don’t think it would be unreasonable to think that this might very well be an imminent topic of conversation once November 5 rolls around and we begin our new compliance journey in earnest. Remember, there is what I will call an institutional tendency to focus on what has changed in the regulations as opposed to what remains the same. And I think that NFPA 99 is going to provide a lot of fodder for the survey process over the next little while. I mean think about it, we’re still getting “dinged” for requirements that are almost two decades old—I think it will be a little while before we get our arms (and staff) around the ins and outs of the new stuff. Batten down the hatches: Looks like some rough weather heading our way!
At any rate, here’s the link to the wet procedure location assessment tool.
Hope everyone has a safe and festively spooky (or spookily festive) All Hallows Eve!
Talk about your regulatory supergroups: it’s almost like the second coming of Crosby, Stills, Nash, Young and all manner of goodness or maybe the Fantastic Four (or in this case, the Spectacular Six)! Back on September 21, the modern healthcare environment equivalent of the Justice League (AAMI, ASHE, AORN, ASHRAE, APIC, and FGI) published a Joint Interim Guidance (JIG) on HVAC in the Operating Room and Sterile Processing Department. The intent of this JIG (sometimes acronyms can be fun) is to address what the Spectacular Six deem “the biggest challenge for owners and designers of health care facilities,” which is “to understand the purpose and scope of the various requirements” of the “conflicting and sometimes unclear” standards and guidelines for the management of heating, ventilation, and air-conditioning (HVAC) so “patient and staff safety and comfort can be managed.” You can check out the JIG document here.
Without spoiling too much of your interest in discovering all the particulars, the JIG speaks to the dichotomy inherent between the minimum design requirements and criteria that are used to construct an HVAC system (the FGI/ASHRAE/ASHE side of the equation) and the guidelines that “are intended to guide the daily operation of the HVAC system and clinical practice once the health care facility is occupied” (this is where AAMI/AORN/APIC come in). As any number of you have experienced in real (and sometimes really painful) time, this dichotomy is very much at the heart of the regulatory survey process at the moment (somehow in my heart of hearts I knew that we could be continuing this conversation). But mayhap there is a light at the end of the tunnel (that is not an oncoming train!): the Spectacular Six have begun working towards a harmonization of the HVAC guidance in the various standards. They’ve been working on this since late April and the JIG is, for all intents and purposes, their first work product. I think it’s an excellent start and I hope that their work is allowed to continue with minimal interference from outside influences (who that could be, I have no idea…).
An important part of the JIG is their advice to healthcare organizations (it’s on p. 2 of the document—I’ll let you reflect on it in your spare time) and those of you who’ve been following this space for a while will guess that my fave-o-rite topic is going to feature quite prominently in this movement: our old friend, the risk assessment! (Admit it, you knew it was going to go that route!) The goal of the assessment is to make a determination of HVAC operating parameters for critical areas that meet patient, personnel, and product storage needs, with an eye towards the identification of appropriate corrective measures to mitigate risk, etc. I think we’ve been honking this horn a bit as we’ve traversed this landscape, but I think the critical opportunity/challenge is going to be based on the Spectacular Six’s intent to communicate/advocate directly with the regulatory folks in this regard. I haven’t yet seen anything official from CMS in this regard, but based on its adoption of the tenets of the Joint Quality Advisory from January of this year (again, a number of web locales for info, including this one), I think we can reasonably anticipate some level of guidance to the surveyor corps in the not-too-distant future.
So perhaps we can work through this in relatively straightforward fashion; at the end of the day, our charge is to ensure that we are providing the safest possible environment for patients, staff, and visitors. But in meeting that charge, we also need to make sure that we are not writing checks that our building systems can’t cash. We should be able to identify appropriate, safe performance parameters that appropriately address all the risk factors and identify a response plan that we can effectively implement when we have conditions that increase the risk to unacceptable levels. If you ask me, that sounds like business as usual…
In looking through the ol’ e-mail bag, I received a request for info relative to what the “magic number” was to be able to “count” an influx exercise in compliance with the Emergency Management standards. In looking at this question, I thought to myself that I don’t know that we’ve tackled this one yet here in the blogosphere and since my experience has generally been that questions like this rarely occur to just one set of folks, I figured there’s no time like the present. And so…
In looking at the performance element in question, all it says is that at least one of the two emergency response exercises includes an influx of simulated patients…and that’s all it says (you can go check!). Which means, in the parlance of the survey world, it is up to each organization to define what level of incoming patient volume is sufficient to be characterized as requiring implementation of the emergency response plan. In short, there is no “magic number” to guide us, so it becomes something akin to a risk assessment in that we need to determine what number and acuity (in some ways, acuity is the more important metric when discussing such matters) of incoming patients pushes our “normal” response capabilities past the point where response would still be considered normal. For smaller hospitals, the influx number is likely to be smaller; one really sick or injured patient might be enough to tip the scales where larger organizations might be able to manage volumes in the tens (I’m thinking hundreds is a bridge too far for the purposes of this conversation) without breaking a sweat.
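Since the standard leaves the threshold entirely to each organization, one way to make the determination concrete is a simple decision rule that weighs acuity along with raw volume. To be clear, nothing like this appears in the Emergency Management standards—the function name, the acuity weights, and the surge threshold below are all hypothetical values you would tailor to your own response capabilities:

```python
# Illustrative sketch only: the acuity weights and surge threshold are
# hypothetical values each organization would set for itself.
ACUITY_WEIGHTS = {"minor": 1, "moderate": 3, "critical": 10}

def exceeds_normal_response(incoming, surge_threshold=20):
    """Return True when an influx of (acuity, count) pairs would push
    the organization past its normal response capability."""
    load = sum(ACUITY_WEIGHTS[acuity] * count for acuity, count in incoming)
    return load > surge_threshold

# A small hospital might set surge_threshold=8, so one critical patient
# plus a couple of minor ones tips the scales; a large center might use 60.
print(exceeds_normal_response([("critical", 2), ("minor", 3)]))  # True (load 23 > 20)
```

The point of the sketch is less the arithmetic than the documentation: whatever weights and threshold you pick, writing them down is what turns “we’ll know it when we see it” into a defensible definition of influx.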
To be honest, my experience has been that a lot of hospitals (perhaps even most hospitals) “do” influx on a regular basis. Volume is rarely a static reality—it ebbs and flows like the tide, though not nearly as predictably. And since it is for all intents and purposes impossible to predict what types (illness/injury/level of acuity) of patients your organization might have to manage, you have to employ some level of response flexibility.
Getting back to the magic number, it’s going to be pretty much up to each organization to determine what constitutes an influx. To complete the thought relative to managing influx on a regular basis, I’m not entirely certain why the standards-based requirement hasn’t morphed (Evolved? Mutated?) into a requirement for an evacuation exercise. We do influx all the time; evacuation, not so much. So what’s your magic number?
In our continuing coverage of stories from the survey beat, I have an interesting one to share with you regarding my most favorite of subjects: risk assessments. During a recent FSA survey (what’s that, you ask? Why, that’s the nifty replacement for the “old” PPR process—yet another kicky acronym, in this case standing for Focused Standards Assessment), a hospital was informed by the surveyor that it was required to conduct an annual risk assessment regarding emergency eyewash stations. Now I will admit that I got this information secondhand, so you may invoke the traditional grain of salt. But it does raise an interesting question in regard to the risk assessment process: Is it a one-and-done or is there an obligation to revisit things from time to time?
Now, purely from a contrarian standpoint, I would argue against a “scheduled” risk assessment on some specific recurring basis, unless, of course, there is a concern that the management of the risk (in question) as an operational consideration is not as easily assured as might otherwise serve the purpose of safety. If we take the eyewash equipment as an example, as it deals primarily with response to a chemical exposure, I would consider this topic as being a function of the Hazard Communication standard, which is, by definition, a performance standard. So as long as we are appropriately managing the involved risks, we should be okay. And I know that we are monitoring the management of those risks as a function of safety rounds and the review of occupational injury reports, etc. If you look at a lot of the requirements relating to monitoring, a theme emerges—that we need to adjust to changes in the process if we are to properly manage the risks. If someone introduces a new chemical product into the workplace, then yes, we need to assess how that change is going to impact occupational safety. But again, if we are monitoring the EC program effectively, this is a process that “lives” in the program and really doesn’t benefit from a specific recurrence schedule. We do the risk assessment to identify strategies to manage risks and then we monitor to ensure that the risks are appropriately managed. And if they aren’t being appropriately managed…then it’s time to get out the risk assessment again.
Last week, we started the discussion regarding findings relative to the inspection, testing, and maintenance of medical gas systems, which reminds me that I kind of skirted exactly how those findings were manifesting themselves.
The most common variant is for organizations that have established a less-frequent-than-annual schedule for the med gas system components, particularly the outlets (as they are usually the most numerous of the system components). Folks are doing half or a third or a quarter of their outlets on an annual basis, and they have not specifically identified the time frame in the Utility Systems Management Plan (USMP; feel free to give your USMP a quick check to see if you’ve defined the time frame(s) for the med gas system components and that your practice accurately reflects what is in the management plan, which is the other most common way this standard generates findings). Make sure you identify the time frame for the testing, etc., and make sure that what the management plan says accurately reflects the process (I know there’s a certain inescapable logic to this, but I’ve seen folks get tagged for this, so please just take a moment to make sure…).
How do we determine those time frames? Well, once again we can ping back through to EC.02.05.01, this time stopping at EP 4, which requires the identification (in writing—but of course) of inspection, testing, and maintenance intervals “based on criteria such as manufacturers’ recommendations, risk levels, or hospital experience.” I think that pretty much captures the gamut of possible criteria, but I’ll throw the question out to the studio audience: Anyone using anything other than those criteria? If so, please share. This would be required for all the utility systems on the inventory, so the next question becomes: What’s on your inventory and how did you populate that inventory?
Jumping back a wee bit further to EC.02.05.01, EP 2, it appears that we would choose between an inventory that contains all operating components of utility systems or we would establish the inventory based on “risks for infection, occupant needs, and systems critical to patient care (including all life-support systems).” Now, I’m not at all certain what you folks might be doing individually (I suspect it will have at least something to do with the complexity of your systems and the component elements), but I’m going to guess we have a mix of both strategies of inventory creation. So the task then becomes one of fitting the medical gas system, in total or in pieces, into that decision, then considering the criteria noted under EP 4 to wrap things up in the form of a lovely little risk assessment. Then update the USMP to reflect whatever it is you’ve determined and you should be good to go.
A word of caution/advice: Once you’ve done the risk assessment, picked the maintenance strategy, determined the frequency, and updated the USMP, please remember that it is always a wise move to periodically evaluate the decision you made relative to, well, basically anything in your USMP/inventory thing. And a fine spot to do that (if you prefer to call it an opportunity, you’ll get no grief from me) is the annual evaluation process. It comes down to a simple question: Have the maintenance strategies, frequencies, and activities provided reliable performance in support of patient care activities? And while the answer is also pretty simple (yes or no, maybe with a periodic instance of “don’t know” thrown in for good measure), it might be useful to develop a measurement that will tell you when the process is not working well. Could be something like “unscheduled disruptions resulting from preventable conditions” (which might indicate you need to increase your frequencies) or delays in care and/or treatment as the result of unscheduled disruptions (I am a very big fan of EC measurements that tie performance in the care environment back to the bedside—powerful stuff), things like that.
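As a rough illustration of the kind of measurement described above, here is a sketch of computing an “unscheduled disruptions resulting from preventable conditions” rate from a work-order log. The record fields, the sample data, and the idea of a locally chosen review trigger are all assumptions for illustration—nothing here comes from the EC standards themselves:

```python
# Hypothetical work-order records; the field names are assumptions.
work_orders = [
    {"system": "med_gas", "unscheduled": True,  "preventable": True},
    {"system": "med_gas", "unscheduled": True,  "preventable": False},
    {"system": "med_gas", "unscheduled": False, "preventable": False},
    {"system": "hvac",    "unscheduled": True,  "preventable": True},
]

def preventable_disruption_rate(orders, system):
    """Share of a system's work orders that were unscheduled disruptions
    arising from preventable conditions."""
    relevant = [o for o in orders if o["system"] == system]
    hits = [o for o in relevant if o["unscheduled"] and o["preventable"]]
    return len(hits) / len(relevant) if relevant else 0.0

# If the rate creeps above a locally chosen trigger (say 5%), that's the
# signal to revisit maintenance frequencies in the annual evaluation.
rate = preventable_disruption_rate(work_orders, "med_gas")
print(f"{rate:.0%}")  # 33%
```

The useful part is the trend, not the snapshot: tracked year over year in the annual evaluation, a rising rate is exactly the early warning that a frequency decision needs another look.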
We always want to try and base our risk decisions on data, but sometimes you have to pick a course based on that rapidly vanishing commodity—common sense. When that occurs, I’d want to have some means of “telling” whether the decision was a good one, fold that into (or through) the annual evaluation process, and then move on to the next challenge (and there will surely be another challenge…any minute now). Hope you found this discussion helpful. I will again solicit any feedback that might be percolating out there—I love to know what you all are doing with this stuff, and so does the rest of the class.
All things being equal, I suspect that folks are wrestling with the very expansive elements of the Sentinel Event Alert regarding the management of medical device alarms (Sentinel Event Alert #50). I think the important thing to do is to first document a risk assessment that takes into account the various clinical alarm systems, identifying those that might legitimately be considered critical and use that as the starting point. I think it would behoove you to involve the folks in clinical engineering as they probably have performance data that would be very useful (equipment they find during preventive maintenance activities that perhaps is not appropriately configured for audibility, etc.). It would also be important to include any occurrence reporting data that might indicate there have been issues involving audibility of clinical alarms, etc. To be honest, I am not so sure that any one organization’s approach will be sufficiently universal to be used across the board beyond a simple “these are the specific risks involved with the clinical alarms you’ll be using” (in recognition that this may vary based on clinical location/service) and “these are the specific strategies we are using to appropriately eliminate/mitigate those risks.” I know that may sound like an overly simple approach, but if you do a good job on the risk assessment groundwork, you will have everything you need to manage the education process. As a further enticement, the Sentinel Event Alert web page noted above also includes links to a couple of podcasts that discuss the Alert in pretty fair detail. I don’t know that I’d recommend listening to them on the treadmill, but it’s probably a good way to combine work and exercise…
Recently, a client sent me a question regarding assessing his surgical procedure rooms as wet locations. This was primarily as a function of the changes to NFPA 99, which brings the concept of wet locations and surgery back into the mix (the 2012 edition of NFPA 99 defines wet procedure locations as the “area in a patient care room where a procedure is performed that is normally subject to wet conditions while a patient is present, including standing fluids on the floor or drenching of the work area, either of which condition is intimate to the patient or staff,” NFPA 99 – 2012: 3.3.184). Previously, operating rooms were not considered wet locations as a rule, but now it appears that the pendulum has swung in the other direction.
To that end, the American Society for Healthcare Engineering (ASHE) issued an advocacy statement last year recommending that organizations form a risk assessment group to develop a process for evaluating surgical procedure rooms to determine which of these areas, if any, might legitimately be considered wet locations. Now based on the definition from NFPA 99, you could probably rule out a lot of procedure areas (rooms designated for eye surgery, neurosurgery, ENT surgery, etc.), but in other areas it may require some observations of the procedures being performed to determine the extent of standing fluids, etc. Once you’ve determined that you have wet locations, then you would need to move to provide appropriate protection (GFCI protection, isolated power, etc.). And there are other considerations as well, based on the activities in the space, the “state” of the equipment used in the space, etc. There can be any number of contributing factors that could increase the risk to staff and patients in wet locations. Appendix B of the 2005 edition of NFPA 99 speaks of such things as line-powered equipment within reach; damaged line cords, attachment plugs, or exposed metal presenting a risk of direct exposure to a conductor; damaged equipment with “live” metal; exposed metal that has become ungrounded; a person making contact with a live metal surface; etc.
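The triage described above—rule out the procedure rooms that are dry by nature first, then observe the rest—can be sketched as a simple decision helper. This is purely illustrative: the procedure categories and the standing-fluids criterion are paraphrased from the NFPA 99 definition and the ASHE guidance, and nothing here substitutes for the actual risk assessment conducted by the governing body:

```python
# Procedure types that can generally be ruled out as wet locations
# (eye, neuro, ENT, etc.); the set membership here is an assumption.
TYPICALLY_DRY = {"eye", "neuro", "ent"}

def wet_location_disposition(procedure_type, standing_fluids_observed):
    """Rough first-pass disposition for a surgical procedure room.
    Returns a suggested next step, not a final determination."""
    if procedure_type in TYPICALLY_DRY:
        return "likely dry - document the rationale"
    if standing_fluids_observed:
        return "wet location - provide GFCI or isolated power"
    return "observe procedures before deciding"

print(wet_location_disposition("eye", False))
print(wet_location_disposition("urology", True))
```

Note that every branch ends in documentation of some kind—even the “likely dry” rooms need a recorded rationale, since the 2012 starting presumption is that an OR is a wet procedure location until the assessment says otherwise.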
As with so many things, the key process is the almighty risk assessment, so if you’ve not yet wrestled with this bear, you might find it useful to start the process (in full disclosure, the ASHE advocacy statement came out last year—and if you think certain three-letter regulatory agencies are not familiar with this bit of news, I would encourage you to think otherwise). Sometimes codes change for good reasons, sometimes maybe not so much, but we have an obligation to provide the safest possible environments for patients and staff and this looks like something that can be at least determined fairly simply (fixing this if you have issues is likely to be much less simple).
One of the things that continually comes up on my pondering list is how to enlist the eyes, ears, noses, and fingers of frontline staff in the pursuit of the early identification of risks in the physical environment. Unless one of the facilities maintenance folks happens to be in the right place at the right time, in all likelihood, an aberrant condition is going to manifest itself to somebody working out at the point of care/point of service. And my firm belief is that the organizations that manage environmental risks most effectively (including the “risks” associated with unannounced regulatory survey visits) are the organizations that have most effectively harnessed these hundreds, if not thousands, of agents in the field 24/7.
So, my latest take on this is that we can subdivide the totality of every (and, really, any) organization into two main constituencies—finders and fixers. The key is to get the finders mobilized, so the fixers (who, truth be told, in most organizations are currently finder-fixers) can focus on actually repairing/replacing stuff. I’m at a loss to explain why this can be such a difficult undertaking, so I’ll ask you, dear reader: What do you think? Or if you’ve found a way to really mobilize the “finders” in your organization, how did you make it happen? Did you have to guilt them into it, did you establish a “bounty” system for reporting conditions, etc.? I am firmly convinced that if we can enlist these folks in the identification of hazards, we can really move towards a process for ensuring constant readiness.
I’m going to guess that you all out in the audience do not necessarily place The Joint Commission’s Perspectives periodical on your list of must-reads, but for the May and June 2012 issues (and who knows beyond that), you really owe it to yourself to grab a copy and prepare for some hard-hitting door and barrier conversation with our esteemed colleague, one Mr. George Mills, Director of the Engineering Department at The Joint Commission.
At any rate, I think we can point to an increasing level of frustration on the part of the various and sundry regulatory agencies (and us, don’t forget us) relative to the number of findings in the life safety (LS) chapter and the omnipresence of these issues in the most frequently cited standards during surveys. How do we make this go away? The answer to that question, interestingly enough, is adopting a risk-based strategy for the ongoing inspection and maintenance of whatever building component is in play – this month it’s doors.
I’m presuming (and please don’t attempt to disabuse me of this notion) that you are all dutifully conducting security risk assessments on a regular basis. As you conduct them, I’m sure you find that the risks of some events are greater in some areas than in others. So, I have to ask: When you’ve completed your security risk assessment, do you identify specific strategies, including the use of technology, for minimizing those risks to the extent possible? If you’re not including that facet in the risk assessment process, you might want to consider doing so.
Recently, I was looking at a survey report in which an ambulatory surgery center was cited during a TJC survey because they had not installed a panic alarm “at the registrar’s desk in order to obtain immediate assistance in an emergent or hostile situation.” Now, as with so many things that have been popping up during surveys, I don’t disagree with the concept of having panic alarms at those customer service/interaction points where unhappy folks (or folks of any ilk) can experience the need to vent their frustrations, etc. But that said, I think I’d first be looking at what tools have been provided to staff to actively manage, if not de-escalate, these negative encounters. I would much prefer to avoid having to use a panic alarm by appropriately managing the encounter, much like I would just as soon not “need” to have an emergency eyewash station.
I’m a great believer in the proactive management of risk, but I’m also a great believer in implementing risk management and response strategies that make operational sense. So, the question to the studio audience is: Where have you installed panic alarms and where have you not installed panic alarms, and why? There’s always the risk that some surveyor will disagree with your strategy, but if that strategy was derived through thoughtful analysis of the involved risks, does that not meet the intent of all this?
I like the concept of best practice as much as anyone, but I also recognize that there is a tremendous amount of variability in the safety landscape. Just because something works in one place does not necessarily mean that it will work in all cases—that’s the mystical, magical, and ultimately mythical power of the panacea. One size doesn’t fit all—never has, never will. But if we’re going to be held to that type of an expectation, how does that help anyone? Ok, jumping down from soapbox for now, but rest assured, you’ll see me back up here before too long.