As the trees turn over their colors, it (sometimes) gives me time to go back over stuff we’ve covered out here in the blogosphere, with the intent of trying to capture some things of note that I think are worth mentioning, even if they are not quite “hot off the presses.”
One of the interesting shifts is the subtle redefining of several of the compliance time frames invoked throughout the standards and performance elements. Not all of the definitions changed, but in the interest of full disclosure, I think we should include the lot of them:
- Every 36 months/every three years = 36 months from the date of the last event, plus or minus 45 days
- Annually/every 12 months/once a year/every year = one year from the date of the last event, plus or minus 30 days
- Every six months = six months from the date of the last event, plus or minus 20 days
- Quarterly/every quarter = every three months, plus or minus 10 days
- Monthly/30-day intervals/every month = 12 times per year, once per month
- Every week = once per week
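For those who like to see the arithmetic, the definitions above reduce to a simple date calculation. Here's a minimal Python sketch (the 91-day and 182-day intervals are my rough stand-ins for "three months" and "six months"; the tolerances come straight from the definitions listed above):

```python
from datetime import date, timedelta

# Interval and tolerance for each defined frequency (approximated in days).
WINDOWS = {
    "36 months": (timedelta(days=365 * 3), timedelta(days=45)),
    "annual":    (timedelta(days=365),     timedelta(days=30)),
    "6 months":  (timedelta(days=182),     timedelta(days=20)),
    "quarterly": (timedelta(days=91),      timedelta(days=10)),
}

def compliance_window(last_event: date, frequency: str):
    """Return (earliest, due, latest) dates for the next required event."""
    interval, tolerance = WINDOWS[frequency]
    due = last_event + interval
    return due - tolerance, due, due + tolerance

# If the last quarterly test was March 15, the next one is due June 14,
# with an acceptable window of June 4 through June 24.
earliest, due, latest = compliance_window(date(2014, 3, 15), "quarterly")
```

Nothing fancy, but it makes the point: each frequency now has a defined window, and anything outside that window is a countable finding.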
A particularly curious impact of this shift is the abandonment of the time-honored "not less than 20 days and not more than 40 days" interval for emergency power testing activities. Now we have "at least monthly" for those very same activities, which means you may want to schedule your generator tests earlier in the month; that way, if you have to postpone/delay the testing of your generator(s) to work around patient care activities, etc., you won't run out of "month." It will be interesting to see how this translates into the survey process.
The other thing that I’m “watching” is how that definition of quarterly is going to dovetail with how you would conduct fire drills. Is there going to be yet another “counting” vulnerability? I know the Conditions of Participation indicate that fire drills are to be conducted at “unexpected times under varying conditions,” which somehow seems to fly in the face of an every three months plus or minus 10 days. Maybe that’s a big enough window to keep things unexpected; I guess we’ll see how things unfold.
One of my favorite pastimes when I’m driving (and I do get to drive a fair amount) is to listen to the public radio station in whatever area I might be traveling. And if the travel deities are truly smiling on me, I get to listen to a revolving set of programs collectively known as Public Radio Remix. My “description” of Public Radio Remix (if you’re curious about any and all manner of things, please check it out) is something akin to driving cross-country late at night where radio stations fade in and out and you end up experiencing a fairly wide swath of the human condition (bear with me—I do have some relevant content to share).
One of my favorite shows on Public Radio Remix is one called 99% Invisible, which started out as a project of a public radio station in San Francisco (KALW) and the American Institute of Architects (AIA). Now, I know you folks would probably recognize the AIA as you’ve been taken to task over the years relative to compliance with the AIA’s Guidelines for the Design and Construction of Health Care Facilities (I’m pretty sure the folks at The Joint Commission have a copy or two of the Guidelines on their shelves), so this is where I kind of tie this back around to our normal avenue of discussion.
At any rate, recently I was driving to the airport early in the morning and I “bumped” into the episode that revolved primarily around the design, etc., of fire escapes, but went on to cover a lot of elements of egress. I will tell you that I had my safety geek “on” for the drive to the airport that morning. The episode is a wee bit less than 20 minutes in length, but if you have a spare 20 minutes (Okay, I think this is interesting enough to recommend you use one of your non-spare 20 minutes), I think you’ll find this a pretty cool story.
I must warn you that you might find yourself “trapped” in the 99% Invisible experience (there are many very interesting stories in addition to this one), so I will ask you to please enjoy responsibly.
In preparation for our journey into the restrooms of your mind (sorry—organization), you might consider a couple of things. Practicing this during surveillance rounds is probably a good thing, as is increasing folks' familiarity with the potential expectations of the process. But in practicing, you can also consider identifying an organizational standard for responding to restroom call signals; that way you build at least a little flexibility into the process, maybe enough to push back a little during survey if you can allow for some variability.
Another restroom-related finding has had to do with the restrooms in waiting areas in clinic settings (ostensibly restrooms that can be used by either patients or non-patients who may be in the waiting area). There is a requirement for a nurse call to be installed in patient restrooms, but there is no requirement for a nurse call to be installed in a public restroom. So what are these restrooms in waiting areas? I would submit to you that, in general, restrooms in waiting areas ought to be considered public restrooms and thus not required to have nurse calls. Are there potential exceptions to this? Of course there are—and that's where the risk assessment comes into play. Perhaps you have a clinic setting in which the patient population being served is sufficiently at risk to warrant some extra protections. Look at whether there have been any instances of unattended patients getting into distress, etc. (attended versus unattended is a very interesting parameter for looking at this stuff). Also, look at what the patients are being seen for; maybe cardiac patients are at a high enough risk to warrant a little extra.
At the end of the process, you should have a very good sense of what you need to have from a risk perspective. That way if you have a surveyor who cites you for not having a nurse call in a waiting area restroom, you can point to the risk assessment process (and ongoing monitoring of occurrences, etc.) as evidence that you are appropriately managing the associated risks—even without the nurse call. In the absence of specifically indicated requirements, our responsibility is to appropriately manage the identified/applicable risks—and how we do that is an organizational decision. The risk assessment process allows us the means of making those decisions defensible.
One of the more common questions that I receive during my travels is “When do you need to do a risk assessment?” I wish that there were a simple response to this, but (as I have learned ad nauseam) there are few things in this safety life that are as simple as I’d like them to be. But I can give you an example of something that you might be inclined to look at as a function of your risk assessment process: restrooms (oh boy oh boy oh boy)!
While I can’t honestly characterize this as a trend (I suspect that, at the moment, this is the province of a handful or so of surveyors), there seems to be an increasing amount of attention paid to restrooms—both public and patient—during surveys. This attention has included nurse call alarms (or the lack thereof), the ability of staff to “enter” restrooms to assist someone in distress, the length of the nurse call cords, etc. Now you might not think that there is a whole heck of a lot of trouble that could result from this type of scrutiny, but I can tell you that things can get a little squirrelly during survey (mostly the rescuing-someone-from-the-restroom part) if you don’t have your arms around these spaces.
For example (and I think we talked about this as a general observation a while back), there are some surveyors who will almost delight in locking themselves in a restroom, activating the nurse call system, and waiting to see how long it takes for staff to respond to—and enter!—the restroom (there is a Joint Commission performance element that requires hospitals to be able to access locked, occupied spaces; this would be one of those). Although there is no specific standards-based timeframe for response in these situations, the tacit expectation is that staff will be ready to respond, including emergency entry into the restroom, upon their arrival on the scene. This means that they would either immediately possess the means of entering the restroom or have such means immediately at their disposal. This, of course, would be subject to the type of lock on the restroom door, etc., but for the purposes of this situation, we must assume that the patient is unable to unlock the door on their own. So, this becomes both a patient safety risk and a potential survey risk.
Stay tuned for some thoughts on how best to manage these types of situations.
We’ve been observing Fire Prevention Week (Fire Safety Week’s “real” name) since 1920, when President Woodrow Wilson issued a proclamation establishing National Fire Prevention Day; the observance was expanded to a week in 1922. If you’re interested in the “story” of Fire Prevention Week, please check out the National Fire Protection Association (NFPA) website—it even includes mention of Mrs. O’Leary’s cow.
While there is much to applaud in the healthcare industry relative to maintaining our facilities in fire-safe shape, there are still improvement opportunities in this regard. And one of the most compelling of those opportunities resides in the area of surgical fire prevention. According to the Association of periOperative Registered Nurses (AORN) in the October 2014 issue of AORN Journal, an estimated 550 to 650 surgical fires still occur annually in procedural environments where the risks of fire reach their zenith.
As we’ve seen from past experiences, AORN is certainly considered a source of expert information and guidance, and I think folks in surgical environments would be well served to start looking at AORN’s three strategies for strengthening their fire safety programs:
- Bring together a multidisciplinary team of fire safety stakeholders
- Think about fire safety in the context of high reliability to tackle the systematic and non-systematic causes for surgical fires
- Make fire prevention part of daily discussion
I don’t want to steal all the thunder, so my consultative advice is to seek out a copy of the article (you can try here) and make preventing surgical fires part of your Fire Prevention Week.
In what is clearly one of the busiest years for regulatory upheaval in the healthcare safety world (at least in recent memory), CMS has, yet again, turned things on their ear—and to what all appearances seems to be a most positive potential outcome—in its ongoing series of categorical waivers. And this on a topic that has caused a ton of gnashed teeth and much sorrowful wailing: the use of relocatable power taps.
You will recall (it seems no more than minutes ago) that back in June (2014), George Mills, director of The Joint Commission’s Department of Engineering, was tasked with the dubious honor of announcing to the world that, basically, the use of relocatable power taps to power medical equipment in patient care areas was on the no-no list. Since then, many facilities and safety folks (okay, probably just about everyone to one degree or another) have been spending countless hours trying to figure out how to make this happen. So I guess this means that CMS has decided that Mr. Mills doesn’t have to get painted with the “bad guy” brush any longer, as they have issued a categorical waiver that provides a fair amount of flexibility for the presence of RPTs in the patient care environment.
Now history has taught us, if nothing else, that that flexibility is going to vary quite a bit depending on your facility and the results of the inevitable risk assessment; but presumably you’ve already started the risk assessment process like good little girls and boys, yes? There is a lot of fairly useful (at least at first blush—we have also learned how useful can become useless in the blink of an eye) information to be had in the memo, which you can find here. If you have not yet had a chance to look this over, I would encourage you to do so before you make any “big” decisions on how you’re going to manage these pesky little items (hopefully, this “relief” has not come too late to head off sweeping seizures of power strips, etc.).
Maybe it’s Christmas come a bit early (or maybe we just power-shifted into winter), but I would encourage you to unwrap this present very carefully (some assembly required) and try not to break it on the first day…
No doubt there will be questions, so please use this forum as you wish.
Recently I fielded a question regarding the requirements for organizations to have department-level emergency response plans and what those requirements might represent in terms of specific elements, etc. I have to admit that my initial reaction was that I really didn’t see much rationale in the creation of detailed department-level response plans; to be honest, it sounded very much like busy work, but that may just be me. But upon reflection on what is actually required (at least for the moment—I’m still waiting on the Conditions of Participation “version” of emergency response, which I’m sure will result in some interesting conversation), there may be some value in at least looking at the concept, even if I can’t make a completely unassailable case for department-level plans (with some exceptions, though those may pivot on an organization-versus-department assessment). Recognize that there is nothing in the requirements that specifies department-level plans; department-level planning is certainly in the mix, but written plans, not so much.
By parsing the response elements to the tried and true Joint Commission model, we’d want to account for communications, management of resources and assets, management of staff roles and responsibilities, management of safety and security, management of utility systems and capacities, and the management of patient care and support activities (is that six elements? Yes!). My thought is that the critical infrastructure needs would “live” in the organization’s response plan and that most of the department-level plans would be along the lines of “consult with incident command” during a response activation—and isn’t that kind of the purpose of IC anyway?
Which leads me to the question: how much is a department-level plan going to deviate from (or bring value to) what is already included in the organizational response plan? I’m having a very difficult time convincing myself that what any organization “needs” when it comes to emergency response is yet another layer of plans. For all intents and purposes, the more layers you have underneath the command function, the more intricate the communication lines become, and to my way of thinking, intricacy is not necessarily a hallmark of effective emergency response. When I think of the command function/structure, while you certainly want to have some “distance” between the deciders and the doers, I would think that (at least at the organization level) you would want an org chart that is reasonably “flat” (precipitous command structures make me nervous; they just seem to be less flexible in the moment).
So, dear audience, have any of you folks gone down this road of developing department-level response plans (recognizing that there are certain departments, like materials management and food services, that have a role in supporting the entire organization’s response capabilities)? If you have, has it been worth the efforts to do so? Or did you look at it and decide, from a prioritization standpoint, that the value in doing so did not represent a worthwhile investment? Any feedback/discussion would be very much appreciated.
Another frequent survey finding of late (and I have to admit that, on many levels, this one really befuddles me) is a cornucopia of issues relating to fire alarm and sprinkler testing documentation. Basically, everything under EC.02.03.05 (and I do mean everything—it’s the soup, it’s the nuts, it’s the documentation—oy!). I had managed to convince myself that there was no way that EC.02.03.05 would continue to be among the most frequently cited standards, but sure enough, it’s #4 on The Joint Commission’s list of top-cited standards for the first half of 2014. For some reason (and we will discuss the contributing factors I’ve seen in the field in a moment), this one doesn’t seem to go away.
What I’ve seen pretty much breaks down into two fairly broad (but curiously specific on some levels) categories: the quality of the service (and by extension, the documentation) of fire alarm and sprinkler system testing vendors; and, a failure to “embrace” the elements of documentation that are prescribed by TJC.
The documentation requirements are, for all intents and purposes, very straightforward—come survey time, you either have all the elements or you don’t:
- Name of the activity
- Date of the activity
- Required frequency of the activity
- Name and contact information, including affiliation, of the person(s) who performed the activity
- NFPA standard(s) referenced for the activity
- Results of the activity
All your fire alarm, fire suppression, etc. documentation absolutely, positively has to have all of those elements. It doesn’t matter whether the testing is performed by vendors or by in-house staff—every activity has to have this documentation, every time. If you don’t have this in place for every activity, every time it happens, then you will be cited during survey. If the paperwork doesn’t indicate the testing results for each of your notification appliances (horns, strobes, etc.), then no soup for you! Someone in your organization had best be verifying that each of the required documentation elements is in place for all your testing activities—all of ’em, all of ’em, all of ’em.
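If it helps to make the verification concrete, here is a minimal Python sketch of that someone-had-best-be-verifying step. The field names are my own shorthand, not TJC's exact wording, and the record shape is hypothetical:

```python
# Required documentation elements for each testing activity (shorthand names).
REQUIRED_ELEMENTS = [
    "activity_name",
    "activity_date",
    "required_frequency",
    "performer_name_contact_affiliation",
    "nfpa_standard_referenced",
    "results",
]

def missing_elements(record: dict) -> list:
    """Return the required elements that are absent or empty in a record."""
    return [e for e in REQUIRED_ELEMENTS if not record.get(e)]

# A complete record passes; strip out any one field and it gets flagged.
record = {
    "activity_name": "Quarterly fire alarm notification appliance test",
    "activity_date": "2014-06-12",
    "required_frequency": "quarterly",
    "performer_name_contact_affiliation": "J. Smith, Acme Fire Co., 555-0100",
    "nfpa_standard_referenced": "NFPA 72",
    "results": "All notification appliances passed",
}
```

Run something like this against every vendor report before you file it, and "every activity, every time" becomes a checklist instead of a hope.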
And speaking of looking over your documentation, please make sure that there are no ugly little deficiencies buried in the report that might prompt questions about how long it took to fix something—or indeed whether that ugly little deficiency has been corrected at all! Remember, the clock starts ticking when the deficiency is identified, and you know how much time you have (and believe you me, it ain’t much) to get things taken care of. Also, make sure that those device counts are consistent from quarter to quarter and year to year, and if they’re not consistent, that you have an explanation as to why the numbers don’t match up. If you had 60 pull stations tested last year and didn’t add or take any away, then there darn well better be 60 pull stations tested 12 months later. And if you have testing activities chunked into quarters, make sure the same chunks are tested in the same quarters year to year. I know this sounds simple (I also know I probably sound like a lunatic, but if you had seen what I’ve seen this year…), but way too many folks are getting jammed on this for me to stay quiet for long.
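The device-count cross-check is equally easy to automate. A hedged sketch (the data shape is hypothetical, and the counts below are just the pull-station example from above):

```python
def count_discrepancies(prior: dict, current: dict) -> list:
    """Flag device types whose tested counts changed year over year."""
    issues = []
    for device, prior_count in prior.items():
        current_count = current.get(device, 0)
        if current_count != prior_count:
            issues.append((device, prior_count, current_count))
    return issues

prior_year = {"pull_stations": 60, "horns": 120, "strobes": 118}
this_year = {"pull_stations": 58, "horns": 120, "strobes": 118}
# Pull stations dropped from 60 to 58: either two devices were removed,
# or two got missed -- and you'd better have the explanation on file.
discrepancies = count_discrepancies(prior_year, this_year)
```

Every flagged discrepancy should map to a documented addition, removal, or explanation; anything that doesn't is exactly what a surveyor will find first.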
In the nearly six months I’ve been back in the consulting world, one trend during Joint Commission surveys stands out as the most likely to result in survey heartache (and heartburn). And that trend, my friends, has everything to do with the management of environmental conditions in surgical (and other) environments. Clearly, the folks at TJC have struck a motherlode of potential findings—and I have no reason to think that these strikes will be abating any time soon. My advice to you is to start cracking the books—one tome in particular (okay, not so much a tome because it’s really not quite long enough, but if we were to somehow measure its impact…).
For those of you who have not yet procured a copy of the American Society of Heating, Refrigerating and Air-Conditioning Engineers (ASHRAE) Standard 170, Ventilation of Health Care Facilities, I cannot encourage you strongly enough to bite the bullet and get yourself a copy of this august standard. I can almost guarantee that doing so will decrease the likelihood of survey ugliness, perhaps even for the foreseeable future.
Now, this volume—a mere 14 pages in length—contains a lovely table (pages 9-12, for those of you keeping score at home) that identifies all the areas in a hospital (hey, maybe even your hospital…imagine that!) in which there are specific design parameters for temperature, humidity, air flow, air exchange rates, and pressurization—pretty much everything that is causing so much pain during TJC surveys of late (I’ve seen a significant increase in the number of Condition-level TJC survey results, almost exclusively the result of failures to manage these conditions).
Once you have this volume in your hot little hands, turn to page 9 and start looking at all the places where you can expect scrutiny (a word to those facing survey in the near future: there is an indication that the focus is expanding to include any areas in which invasive procedures are performed—can you say interventional radiology and IVF? I knew you could). My recommendation is to start working through the list (and, rest assured, it’s a pretty lengthy list) and identify where you stand compliance-wise relative to the design parameters listed. And if you should find that you have some compliance vulnerabilities in these areas, please, please, please reach out to your infection control practitioner to start working on a risk assessment/response protocol to manage the risks associated with those non-compliant conditions. It may be the only thing standing between you and an awful journey into the darkness of a Condition-level finding—a journey none of us would want to make.
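As you work through the table, the compliance check for each space amounts to comparing what you measure against the design range. A minimal Python sketch of that comparison follows; the operating room range shown here is an illustrative placeholder, so pull the actual parameters for each space from the table on pages 9-12 rather than trusting my numbers:

```python
# space: (temp_low_F, temp_high_F, rh_low_pct, rh_high_pct)
# Placeholder values for illustration only -- verify against ASHRAE 170.
DESIGN_RANGES = {
    "operating_room": (68.0, 75.0, 20.0, 60.0),
}

def out_of_range(space: str, temp_f: float, rh_pct: float) -> list:
    """Return which measured parameters fall outside the design range."""
    t_lo, t_hi, rh_lo, rh_hi = DESIGN_RANGES[space]
    problems = []
    if not t_lo <= temp_f <= t_hi:
        problems.append("temperature")
    if not rh_lo <= rh_pct <= rh_hi:
        problems.append("humidity")
    return problems
```

Logging this for every listed space, every day, gives you exactly the trail you'll want to hand your infection control practitioner when a reading drifts out of range.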
Well, now that we are well and truly ensconced in the post-July 2014 world, perhaps things will quiet down a bit on the updated standards front. It’s been a very busy first half of 2014 relative to The Joint Commission’s ongoing alignment with the CMS Conditions of Participation and perhaps they’ll allow the smoke to clear a bit so we can get down to figuring out how much impact the changes to the standards will have in the medical equipment and utility systems management pursuits. Kind of makes you wonder what’s left to update/align, but let’s hold that card for another day.
So, the last salvo in June saw some fairly interesting edits of (to? you be the judge) the medical equipment and utility systems management standards and performance elements (visit here for details). As near as I can tell, the most eventful changes relate to the change of the life support and non-life support equipment categories to a somewhat more expansive (or at least it seems that way to me) categorization of high-risk (which includes life support medical and utility systems equipment) and non-high-risk (which includes pretty much everything else). To be honest, most (probably all, but I don’t want to use too big a blanket for this) of the programs I’ve had the privilege to review/evaluate have moved to the high-medium-low-no risk strategy for assigning preventive maintenance activities and frequencies, so I’m not sure that this will require any fundamental changes to how folks are administering their programs. But (and there’s always, always, always one of those when there is an official change in the standards), I am curious to see how these changes will be applied during accreditation surveys. I expect the life safety surveyors to have a good grasp on the practical impact of the changes, but what about the rest of the survey team as they wander around the corridors of healthcare organizations across the country? It’s not unheard of for standards changes to “drive” an increase in findings in those particular areas as surveyor knowledge expands/contracts/evolves/mutates, so it will be interesting to see what types of findings may fall out of the changes.
I guess my best advice at the moment is to do a careful assessment of where your program is relative to the “new” standards, particularly if you have adopted an “alternative equipment maintenance” (AEM) program (this must be that alternative lifestyle I keep hearing about…). I suspect we are all going to need to be prepared to make full use of the post-survey process (especially the clarification process) to demonstrate the “compliance-ness” of our programs. As I tell folks at virtually every stop on my never-ending tour of hospitals, there will always be surveyors who disagree with programmatic decisions you’ve made. Your task/responsibility is to have a very clear understanding of how your program meets the intent and the spirit of the standards, regardless of how something might “look” to a surveyor. At the end of the day, it’s about supplying our customers with safe and reliable medical and utility systems equipment—and as long as we can demonstrate that within the confines of the standards, then we have fully honored that obligation. And that, my friends, is what compliance-ness is all about.