
How many plans must a response planner plan before he is called a response planner?

Recently I fielded a question regarding the requirements for organizations to have department-level emergency response plans and what those requirements might represent in terms of specific elements, etc. I have to admit that my initial reaction was that I really didn’t see much rationale in the creation of detailed department-level response plans; to be honest, it sounded very much like busy work, but that may just be me. But upon reflecting on what is actually required (at least for the moment—still waiting on the Conditions of Participation “version” of emergency response—I’m sure that will result in some interesting conversation), while I can’t make a completely unassailable case for department-level plans (with some exceptions, but those may pivot on an organization-versus-department assessment), there may be some value in at least looking at the concept (in recognition that there is nothing in the requirements that specifies department-level plans; department-level planning is certainly in the mix, but written plans, not so much).

By parsing the response elements to the tried and true Joint Commission model, we’d want to account for communications, management of resources and assets, management of staff roles and responsibilities, management of safety and security, management of utility systems and capacities, and the management of patient care and support activities (is that six elements? Yes!).  My thought is that the critical infrastructure needs would “live” in the organization’s response plan and that most of the department-level plans would be along the lines of “consult with incident command” during a response activation—and isn’t that kind of the purpose of IC anyway?

Which leads me to the question of how much a department-level plan is going to deviate from, or bring value to, what is already included in the organizational response plan. I’m having a very difficult time convincing myself that what any organization “needs” when it comes to emergency response is yet another layer of plans. For all intents and purposes, the more layers you have underneath the command function, the more intricate the communication lines become, and to my way of thinking, intricacy is not necessarily a hallmark of effective emergency response. When I think of the command function/structure, while you certainly want to have some “distance” between the deciders and the doers, I would think that (at least at the organization level) you would want an org chart that is reasonably “flat” (precipitous command structures make me nervous; they just seem to be less flexible in the moment).

So, dear audience, have any of you folks gone down this road of developing department-level response plans (recognizing that there are certain departments, like materials management and food services, that have a role in supporting the entire organization’s response capabilities)? If you have, has it been worth the effort to do so? Or did you look at it and decide, from a prioritization standpoint, that it did not represent a worthwhile investment? Any feedback/discussion would be very much appreciated.

Make sure you go on (okay, over) the papers!

Another frequent survey finding of late (and I have to admit that, on many levels, this one really befuddles me) is a cornucopia of issues relating to fire alarm and sprinkler testing documentation. Basically, everything under EC.02.03.05 (and I do mean everything—it’s the soup, it’s the nuts, it’s the documentation—oy!). I had managed to convince myself that there was no way that EC.02.03.05 would continue to be among the most frequently cited standards, and sure enough, it’s #4 on The Joint Commission’s list of top-cited standards for the first half of 2014. For some reason (and we will discuss what contributing factors I’ve seen in the field in a moment), this one doesn’t seem to go away.

What I’ve seen pretty much breaks down into two fairly broad (but curiously specific on some levels) categories: the quality of the service (and, by extension, the documentation) provided by fire alarm and sprinkler system testing vendors; and a failure to “embrace” the elements of documentation that are prescribed by TJC.

The documentation requirements are, for all intents and purposes, very straightforward—come survey time, you either have all the elements or you don’t: the name of the activity; the date of the activity; the required frequency of the activity; the name and contact information, including affiliation, of the person(s) who performed the activity; the NFPA standard(s) referenced for the activity; and the results of the activity. All your fire alarm, fire suppression, etc. documentation absolutely, positively has to have all of those elements. It doesn’t matter whether the testing is performed by vendors or by in-house staff—every activity has to have this documentation every time. If you don’t have this in place for every activity, every time it happens, then you will be cited during survey. If the paperwork doesn’t indicate the testing results for each of your notification appliances (horns, strobes, etc.), then no soup for you! Someone in your organization had best be verifying that each of the required documentation elements is in place for all your testing activities—all of ‘em, all of ‘em, all of ‘em.
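If you’re inclined to automate that verification, here’s a minimal sketch in Python of how you might audit each test record for the required elements. The field names are hypothetical (my own shorthand, not anything prescribed by TJC):

```python
# Minimal sketch: audit fire alarm/suppression test records for the
# required documentation elements. Field names are hypothetical,
# not anything prescribed by TJC.
REQUIRED_FIELDS = [
    "activity_name",       # name of the activity
    "activity_date",       # date the activity was performed
    "required_frequency",  # e.g., "quarterly", "annually"
    "performer_name",      # name of the person(s) who performed it
    "performer_contact",   # contact info, including affiliation
    "nfpa_reference",      # NFPA standard(s) referenced
    "results",             # results of the activity
]

def missing_elements(record: dict) -> list:
    """Return required documentation elements absent from a test record."""
    return [field for field in REQUIRED_FIELDS if not record.get(field)]

# Example: a vendor report that omits the NFPA reference and the results.
record = {
    "activity_name": "Notification appliance test",
    "activity_date": "2014-06-02",
    "required_frequency": "annually",
    "performer_name": "J. Smith",
    "performer_contact": "Acme Fire Co. (vendor), 555-0100",
}
print(missing_elements(record))  # ['nfpa_reference', 'results']
```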

And speaking of looking over your documentation, please make sure that there are no ugly little deficiencies buried in the report that might prompt questions about how long it took to fix something—or indeed whether that ugly little deficiency has been corrected at all! Remember, the clock starts ticking when the deficiency is identified, and you know how much time you have (and believe you me, it ain’t much time) to get things taken care of. Also, make sure that those device counts are consistent from quarter to quarter and year to year, and if they’re not consistent, that you have an explanation as to why the numbers don’t match up. If you had 60 pull stations tested last year and didn’t add or take any away, then there darn well better be 60 pull stations tested 12 months later. And if you have testing activities chunked into quarters, make sure the same chunks are tested in the same quarters year to year. I know this sounds simple (I also know I probably sound like a lunatic, but if you had seen what I’ve seen this year…), but way too many folks are getting jammed on this for me to stay quiet for long.
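For those who would rather not eyeball device counts across reports, a small sketch along these lines can flag year-over-year drift. The device types and counts below are invented for illustration:

```python
# Minimal sketch: flag device-count drift between testing periods.
# Device types and counts are invented for illustration.
def count_changes(prior: dict, current: dict) -> dict:
    """Return device types whose tested counts changed between periods."""
    return {
        device: (prior.get(device, 0), current.get(device, 0))
        for device in set(prior) | set(current)
        if prior.get(device, 0) != current.get(device, 0)
    }

last_year = {"pull_stations": 60, "horns": 45, "strobes": 112}
this_year = {"pull_stations": 58, "horns": 45, "strobes": 112}

for device, (was, now) in count_changes(last_year, this_year).items():
    print(f"{device}: {was} -> {now}; be ready to explain the difference")
```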

This one belongs on your shelf…big time!

In the nearly six months I’ve been back in the consulting world, one trend during Joint Commission surveys stands out as the most likely to result in survey heartache (and heartburn). And that trend, my friends, has everything to do with the management of environmental conditions in surgical (and other) environments. Clearly, the folks at TJC have struck a motherlode of potential findings—and I have no reason to think that these strikes will be abating any time soon. My advice to you is to start cracking the books—one tome in particular (okay, not so much a tome because it’s really not quite long enough, but if we were to somehow measure its impact…).

For those of you who have not yet procured a copy of the American Society of Heating, Refrigerating and Air-Conditioning Engineers (ASHRAE) Standard 170, Ventilation of Health Care Facilities, I cannot encourage you strongly enough to bite the bullet and get yourself a copy of this august standard. I can almost guarantee that doing so will decrease the likelihood of survey ugliness, perhaps even for the foreseeable future.

Now, this volume—a mere 14 pages in length—contains a lovely table (pages 9-12, for those of you keeping score at home) that identifies all the areas in a hospital (hey, maybe even your hospital…imagine that!) for which there are specific design parameters for temperature, humidity, air flow, air exchange rates, and pressurization—pretty much everything that is causing so much pain during TJC surveys of late (I’ve seen a significant increase in the number of Condition-level TJC survey results, almost exclusively the result of lapses in managing these conditions).

Once you have this volume in your hot little hands, turn to page 9 and start looking at all the places where you can expect scrutiny (word to those facing survey in the near future: there is an indication that the focus is expanding to include any areas in which invasive procedures are performed. Can you say interventional radiology and IVF? I knew you could.). My recommendation is to start working through the list (and, rest assured, it’s a pretty lengthy list) and identify where you are, compliance-wise, relative to the design parameters listed. And if you should find that you have some compliance vulnerabilities in these areas, please, please, please reach out to your infection control practitioner to start working on a risk assessment/response protocol to manage the risks associated with those non-compliant conditions. It may be the only thing standing between you and an awful journey into the darkness of a Condition-level finding—a journey none of us would want to make.
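If it helps to think about the gap assessment in concrete terms, here’s a rough sketch of comparing measured conditions against design parameters of the kind tabulated in Standard 170. The numeric values below are placeholders only; pull the actual parameters from your edition of the standard:

```python
# Rough sketch: compare measured room conditions against design
# parameters of the kind tabulated in ASHRAE Standard 170. The values
# below are placeholders; use the actual table from your edition.
DESIGN = {
    "operating_room": {
        "temp_f": (68, 75),      # design temperature range
        "rh_pct": (30, 60),      # design relative humidity range
        "min_total_ach": 20,     # minimum total air changes per hour
        "pressure": "positive",  # pressurization vs. adjacent areas
    },
}

def violations(area: str, measured: dict) -> list:
    """List the design parameters a set of measurements fails to meet."""
    spec, problems = DESIGN[area], []
    lo, hi = spec["temp_f"]
    if not lo <= measured["temp_f"] <= hi:
        problems.append(f"temperature {measured['temp_f']}F outside {lo}-{hi}F")
    lo, hi = spec["rh_pct"]
    if not lo <= measured["rh_pct"] <= hi:
        problems.append(f"humidity {measured['rh_pct']}% outside {lo}-{hi}%")
    if measured["total_ach"] < spec["min_total_ach"]:
        problems.append(f"{measured['total_ach']} ACH below minimum "
                        f"{spec['min_total_ach']}")
    if measured["pressure"] != spec["pressure"]:
        problems.append(f"pressurization {measured['pressure']}, "
                        f"should be {spec['pressure']}")
    return problems

print(violations("operating_room",
                 {"temp_f": 66, "rh_pct": 28, "total_ach": 22,
                  "pressure": "positive"}))
```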

May you live in interesting times…no duh!

Well, now that we are well and truly ensconced in the post-July 2014 world, perhaps things will quiet down a bit on the updated standards front. It’s been a very busy first half of 2014 relative to The Joint Commission’s ongoing alignment with the CMS Conditions of Participation, and perhaps they’ll allow the smoke to clear a bit so we can get down to figuring out how much impact the changes to the standards will have on the medical equipment and utility systems management pursuits. Kind of makes you wonder what’s left to update/align, but let’s hold that card for another day.

So, the last salvo in June saw some fairly interesting edits of (to? you be the judge) the medical equipment and utility systems management standards and performance elements (visit here for details). As near as I can tell, the most eventful changes relate to the replacement of the life support and non-life support equipment categories with a somewhat more expansive (or at least it seems that way to me) categorization of high-risk (which includes life support medical and utility systems equipment) and non-high-risk (which includes pretty much everything else). To be honest, most (probably all, but I don’t want to use too big a blanket for this) of the programs I’ve had the privilege to review/evaluate have moved to the high-medium-low-no risk strategy for assigning preventive maintenance activities and frequencies, so I’m not sure that this will require any fundamental changes to how folks are administering their programs. But (and there’s always, always, always one of those when there is an official change in the standards), I am curious to see how these changes will be applied during accreditation surveys. I expect the life safety surveyors to have a good grasp on the practical impact of the changes, but what about the rest of the survey team as they wander around the corridors of healthcare organizations across the country? It’s not unheard of for standards changes to “drive” an increase in findings in those particular areas as surveyor knowledge expands/contracts/evolves/mutates, so it will be interesting to see what types of findings may fall out of the changes.

I guess my best advice at the moment is to do a careful assessment of where your program is relative to the “new” standards, particularly if you have adopted an “alternative equipment maintenance” (AEM) program (this must be that alternative lifestyle I keep hearing about…). I suspect we are all going to need to be prepared to make full use of the post-survey process (especially the clarification process) to demonstrate the “compliance-ness” of our programs. As I tell folks at virtually every stop on my never-ending tour of hospitals, there will always be surveyors who will disagree with programmatic decisions that you’ve made. Your task/responsibility is to have a very clear understanding of how your program meets the intent and the spirit of the standards, regardless of how something might “look” to a surveyor. At the end of the day, it’s about supplying our customers with safe and reliable medical and utility systems equipment—and as long as we can demonstrate that within the confines of the standards, then we have fully honored that obligation. And that, my friends, is what compliance-ness is all about.

Can one be too resourceful? I think not…

I generally don’t use this space as a means of promotion/marketing, but every once in a while, I like to share information on resources that I think could be really useful to the safety community. Certainly, those of you who’ve been “with me” since the start of this journey in the blogosphere (way back in June 2007—oh, the places we have seen!) will have learned about resources, met some folks (my esteemed former Greeley colleague Brother Brad Keyes being a notable example), and hopefully found various kernels of knowledge and insight that have somehow fostered a greater understanding and sense of community.

So, this posting I wanted to chat a bit about some gents in Florida on whom I have come to rely for information and insight in the realms of emergency power and life safety compliance: Messrs. Dan Chisholm, Sr. and Dan Chisholm, Jr. I would encourage you all to visit their respective websites and sign up for email updates—always good stuff. The latest missives involve the confusion over diesel fuel testing requirements on the emergency power side of things, and the conversion of patient rooms into combustible storage rooms, along with some considerations that can be expected with the adoption of the 2012 Life Safety Code® (Dean Samet of TSIG being a guest contributor for that article—way to go, Dean!).

All too frequently, compliance comes down to being able to account for the various and sundry interpretations of the codified landscape—and it never hurts to have a little expertise in your back pocket! So if you haven’t yet made their acquaintance, please check out Dan, Sr. and Dan, Jr. and bring their expertise into your practice. You’ll be glad you did!

Transformers: When finders become fixers

Way back when (and it was way longer ago than I might have thought—time flies when you’re having fun), we first discussed the idea of finders and fixers in the healthcare world (go here for a refresh on that conversation). Since then, I have proselytized that fairly simple concept in a majority of my consulting work, but I recently had kind of a breakthrough that I wanted to throw out there for your consideration.

One of my personal primary directives when I am consulting is that if I find something that I can resolve on my own, I feel it is my obligation to fix it. So you may see me doing something as simple as picking up trash as I walk along (even outside) or wiping up a spill in a refrigerator—basically, conditions that one could consider “quick” fixes. I started thinking about how we could take things to the “next” level in the evolution of the finder and fixer equation, and I came up with a hybrid creation: finder/fixers. Wouldn’t that be a pretty nifty way of managing minor conditions and deficiencies in the environment? Folks at the point of care/point of service who are so empowered that they, as a matter of course, would just resolve the issue on their own. I think that would be pretty cool.

Part of me thinks that the finder/fixer thing might be a little bit of a bridge too far, but maybe just imagining such a world might make it a little more possible. So: do you have any finder/fixers in your organization? And if you do, did you “grow” them or did they emerge fully-formed? I’m really trying to spread this accountability for managing the environment as far as I can and any data/information can only help.

Tiptap through the tulips

In what has turned out to be one of the busier periods when it comes to changes in regulatory oversight of the physical environment, we have George Mills, director of the Department of Engineering at The Joint Commission, announcing a considerable shift relative to the use of power strips/relocatable power taps with medical equipment in patient care areas. According to the press release from the Association for the Advancement of Medical Instrumentation (AAMI)’s 2014 annual conference (at which Mr. Mills was a featured speaker), we should consider the following areas as being included in “patient care areas”: operating rooms, patient rooms, and “areas devoted to recovery, exams, and diagnostic procedures.” That looks like a pretty inclusive list from where I’m sitting. What say you? BTW, if you want to see the whole press release, you can find it courtesy of the AAMI.

At any rate, from wherever you’re sitting, this is going to be a pretty big freaking deal for way more organizations than not. All that said, at least at the moment (as of June 13, 2014), I’ve not seen anything in writing from CMS (generally, when a change of this magnitude comes down the pike, they’ll send out a letter to inform their surveyors how to enforce new requirements); my hope is that perhaps things will have smoothed out a bit when that missive arrives.

A couple of other items came out of the AAMI conference, including the announcement that CMS has determined that ultrasound equipment is considered radiologic equipment and can’t be included in any alternative equipment management program. The long and short of that is that ultrasound devices will have to be inspected, tested, and maintained in accordance with manufacturer recommendations. Not sure how many folks have strayed from that path, but if you have, you need to stray back.

The final bit of word from the conference (and feel free to make your own determination as to whether it’s good news or bad news) is that, effective July 1, 2014, all hospitals that use TJC for accreditation must maintain a written inventory of all medical equipment and identify “high-risk” medical equipment, which would include (as you would probably be able to guess), but is not limited to, life support equipment. And by way of revisiting the whole alternative equipment management program concept, if you are indeed managing any of the equipment in your inventory through the graces of an alternative equipment management program, then those devices must be identified as such. As we’ve seen in the past, requirements for written information/documentation can result in a fair amount of scrutiny, so I think we can expect the same thing to happen with these changes.
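As a purely illustrative sketch (the field names are my own, not anything TJC specifies), a written inventory that explicitly flags high-risk and AEM-managed devices might be structured like so:

```python
# Illustrative sketch: a written medical equipment inventory that flags
# high-risk and AEM-managed devices. Field names are my own invention.
from dataclasses import dataclass

@dataclass
class Device:
    asset_id: str
    description: str
    high_risk: bool    # includes, but is not limited to, life support
    aem_managed: bool  # maintained under an alternative equipment
                       # maintenance program rather than strictly per
                       # manufacturer recommendations

inventory = [
    Device("VENT-001", "ICU ventilator", high_risk=True, aem_managed=False),
    Device("PUMP-118", "Infusion pump", high_risk=False, aem_managed=True),
]

print("High-risk:", [d.asset_id for d in inventory if d.high_risk])
print("AEM-managed:", [d.asset_id for d in inventory if d.aem_managed])
```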

Is this evidence of a refocusing on all things medical equipment during the survey process (don’t forget to keep clinical alarm safety on the front burner too!)? Tough to say, but past practice would seem to indicate perhaps, yes. Beyond that, only time will tell…

When things really start to add up (and not in a particularly nice way…)

Our continuing coverage of the survey wars brings us to the June 4 edition of Joint Commission Online, in which it was revealed that we can anticipate Joint Commission survey reports bulking up over the next little while (you can determine whether that bulk is the result of banned substances). This “bulk” is being introduced as TJC strives ever harder toward alignment with the requirements (and expectations) of the folks at CMS, and I’m all a-tingle—not!

Henceforth (kind of makes it sound almost biblical), TJC will be adding a section to every survey report entitled Opportunities for Improvement, or OFI (to differentiate OFIs from PFIs and RFIs and any other FIs that might be swirling around the compliance world). The OFI section is going to be reserved for all those pesky little single instances of non-compliance that fall under “C” performance elements (you will no doubt recall that “A” performance elements are already scored on a single instance of non-compliance—you either have it or you don’t. And if you don’t…).

In current survey practice, “C” performance elements require the survey team to identify at least two instances of non-compliance in order to generate an RFI. For example, during the facility tour, the LS surveyor finds a single door that does not close and latch properly. In the past, that finding would be absent from the final report, but now it will reside in the OFI section of the report. The good news here is that you will not have to submit an Evidence of Standards Compliance (ESC) for the items in the OFI section; you just have to fix them. Also, any open PFIs from previous surveys, or PFIs approved by the surveyor during the survey visit, will be enumerated in the survey report. You’ll still be resolving them via the normal process, within six months of the projected completion date, etc., so that piece of it doesn’t change. I guess it’s just a means of keeping open PFIs on everyone’s radar (whether this is an offshoot of the number of overdue PFIs found in TJC’s recent review of eSOCs is anybody’s guess, but I’m betting… yeah, pretty much). I suppose another by-product of this “highlighting” of open PFIs is added impetus to make sure that you get things resolved prior to your triennial survey, but it is certainly not a requirement. You just have to adhere to your committed completion dates.
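To make the scoring logic concrete, here’s a toy sketch of the A/C element decision as I’ve described it above (my rendering, not an official TJC algorithm):

```python
# Toy sketch of the scoring logic as described above; my rendering,
# not an official TJC algorithm.
def classify_finding(ep_type: str, instances: int):
    """Return 'RFI', 'OFI', or None for a count of non-compliance instances."""
    if instances == 0:
        return None
    if ep_type == "A":
        return "RFI"  # "A" elements: a single instance generates an RFI
    if ep_type == "C":
        # "C" elements need at least two instances for an RFI; a single
        # instance now lands in the new OFI section of the report.
        return "RFI" if instances >= 2 else "OFI"
    raise ValueError(f"unknown performance element type: {ep_type}")

print(classify_finding("C", 1))  # OFI (e.g., one door that didn't latch)
print(classify_finding("C", 2))  # RFI
print(classify_finding("A", 1))  # RFI
```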

All that said, clearly we’ll be dealing with more “findings” on the report, which presumably means that TJC will have more evidence for CMS that it really is looking carefully at compliance issues and that it is identifying deficiencies during the survey process. I do believe that everyone in the process—the regulators and the regulated—is committed to providing safe, quality care to patients, but I guess how that care is going to be delivered is subject to interpretation. Same as it ever was…

Wait ’til your father gets home…

Well, it would seem that there are any number of folks out there in the safety world who are not familiar with the expectations relating to the timely completion of PFIs, and now The Joint Commission has decided that it needs to use a bigger stick (bigger than a finding of Conditional Accreditation) to garner the attention of the miscreants who have PFIs that are more than six months past their projected completion date and have not requested an extension.

So, according to information posted on The Joint Commission website on May 30, notifications addressed to each organization’s primary accreditation contact and facilities director have gone out to all organizations with overdue PFIs. And not to put too fine a point on things, if the overdue PFIs have not been resolved or an extension requested, a second notification to those same folks will go out on June 12, with the addition of each organization’s CEO.

If I’m doing my math correctly, those of you who were in the penalty box as of May 30 have just about a week to either resolve the deficiency or request an extension—or have a little face time with your CEO (I’m thinking that third curtain is probably the one you’d want to avoid). A third notification is scheduled to go out on June 23, if the first two messages weren’t sufficiently impactful. And if that still isn’t enough, there’ll be an opportunity for some phone time with one of the engineers from the Standards Interpretation Group (SIG), after which further recalcitrance will be rewarded with an on-site visit. I’m getting goose bumps just thinking about it.

Now, I know that sometimes things can get a little hectic as we do battle against the forces of evil, but this is one priority that’s going to have to stay way up on the list (BTW: Going forward, any PFIs that go more than six months past the projected completion date will generate an automatic notification to the engineering folks in Chicago). And thus, I encourage you to do a couple of things:

  • If you have an open PFI that has gone more than six months past the projected completion date (and that means you got a notification), either resolve the issue or request an extension.
  • Be very judicious in identifying projected completion dates for future PFIs. Make sure you give yourself enough time to resolve the deficiency; if you have to build in some time for the vagaries of the budgeting process, then please do so (and don’t forget to assess for ILSM implementation—a most important thing to remember). It is possible that you might be “challenged” during a survey relative to the completion timeframe, so you need to be thoughtful about how you allocate time (for example, giving yourself 10 years to replace a door is probably a scenario that will raise some eyebrows). But as long as you are “honest” in your ILSM assessment, you will be able to demonstrate that you are appropriately managing the risks associated with the deficiency—for as long as it takes to resolve it. (A quick date-math sketch of the six-month trigger follows this list.)
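As promised, a minimal sketch of the six-month trigger. The PFI names and dates are invented, and I’m approximating six months as 183 days:

```python
# Minimal sketch: flag PFIs more than six months past their projected
# completion date (the trigger for the notifications described above).
# PFI names/dates are invented; six months is approximated as 183 days.
from datetime import date, timedelta

SIX_MONTHS = timedelta(days=183)

def is_overdue(projected_completion: date, today: date) -> bool:
    """True if a PFI is more than ~six months past its projected date."""
    return today > projected_completion + SIX_MONTHS

pfis = {
    "PFI-042 (door replacement)": date(2013, 10, 1),
    "PFI-057 (sprinkler escutcheons)": date(2014, 3, 15),
}
today = date(2014, 6, 5)
for name, projected in pfis.items():
    if is_overdue(projected, today):
        print(f"{name}: overdue; resolve it or request an extension")
```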

I might like you better…never say never!

One of the things I frequently share with folks when I’m doing client engagements is that (at least when I, or they, are doing the looking) it is a very good thing to find things: deficiencies, inconsistent practices, etc. My experience has been that the folks in the field will tell you that (insert deficiency/practice here) could “never happen,” and my experience has also been that pretty much everything happens eventually (never being a very, very long time, indeed). You can have the best systems, processes, education, and staff that ever have been, but, inevitably, something within those systems, processes, education, or staff will break down or otherwise not quite make the mark. I guess the management of safety and risk in the physical environment boils down to the management of imperfections. Other than certain things in nature (Old Faithful, the sun), the list of things that work perfectly every time is pretty short, and I suspect that there’s nothing on the perfect list that is wrought by human hands.

At any rate, I think the underlying subtext is to take full advantage of as many opportunities to poke around as you can make available. The Joint Commission requires surveillance rounds at a set frequency (twice per year in patient care areas, at least annually everywhere else), but that has to be considered the baseline. The more often you can look, the greater your chances of finding something you’ve not seen before (and presumably had been told could “never happen”). People make mistakes all the time. It is, after all, the nature of humans.
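If it’s useful, here’s a bare-bones sketch of tracking when the next round is due under those baseline frequencies. The area names are invented, and I’m approximating six months as 182 days:

```python
# Bare-bones sketch: when is the next surveillance round due under the
# baseline frequencies? Area names are invented; six months ~= 182 days.
from datetime import date, timedelta

FREQUENCY = {
    "patient_care": timedelta(days=182),      # twice per year
    "non_patient_care": timedelta(days=365),  # at least annually
}

def next_due(last_round: date, area_type: str) -> date:
    """Return the latest acceptable date for the next surveillance round."""
    return last_round + FREQUENCY[area_type]

print(next_due(date(2014, 1, 15), "patient_care"))      # 2014-07-16
print(next_due(date(2014, 1, 15), "non_patient_care"))  # 2015-01-15
```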