
Be prepared

As the flu season commences, the specter of Ebola Virus Disease (EVD) and its “presentation” of flu-like symptoms is certainly going to make this a most challenging flu season. While (as this item goes to press) we’ve not seen any of the exposure cases that occurred in the United States result in significant harm to folks (the story in Africa remains less optimistic), it seems it may be a while before we see an operational end to the need to be prepared to handle Ebola patients in our hospitals. And given that preparedness in general is inextricably woven into the fabric of day-to-day operations in healthcare, we can see right off the mark that this may engender some unexpected dynamics as we move through the process.

And, strangely enough, The Joint Commission has taken an interest in how well hospitals are prepared to respond to this latest of potential pandemics. Certainly, the concept of having to respond to a pandemic has figured in the preparation activities of hospitals across the country over the past few years, and there’s been a lot of focus on preparations for the typical (and atypical) flu season. And when The Joint Commission takes an interest in a timely condition in the healthcare landscape, it increases the likelihood that questions might be raised during the current survey season.

Fortunately, TJC has made available its thoughts on how best to prepare for the management of Ebola patients, and I think you can very safely assume that this information will guide surveyors as they apply their own knowledge and experience to the conversation. Minimally, I think we can expect some “coverage” of the topic in the Emergency Management interview session: the process for establishing your incident command structure in the event of a case of EVD showing up in your ED; whether you have sufficient access to resources to respond appropriately over the long haul; etc.

Historically, there’s been a fair amount of variability from flu season to flu season—hopefully we’ll be able to put all that experience to work to manage this year’s course of treatment. As a final thought, if you’ve not had the opportunity to check out the latest words from the Centers for Disease Control and Prevention (CDC) on the subject, I would direct your attention to recent CDC info on management of patients and PPE.

I suppose, if nothing else, the past few weeks of our encounter with Ebola demonstrate something along the lines of the best laid plans of mice and men: it’s up to us to make sure that those plans do not go far astray (with apologies to Robert Burns).

Let’s raise a toast to the qualified individuals—you know who they are (don’t you?)

There’s been a fair amount of discussion in the trenches about the changing dynamics relative to the maintenance of medical equipment and utility systems equipment in the face of all the changes in Joint Commission standards, invocation of CMS as the supreme authority, etc. And while I’m absolutely convinced that this is a conversation that is likely to evolve/mutate over time (the topic of relocatable power taps springs to mind as an example of some fairly rapid-cycle change in approach), I figure we can at least start the conversation and see what happens.

From a compliance standpoint, I think the critical dynamic is how much, if any, of your medical equipment and/or utility systems equipment inventory is being maintained through the application of an alternative maintenance strategy, which effectively means any maintenance strategy that is not in strict adherence to what is recommended by the original equipment manufacturer (OEM). I know a lot of folks are using a database system for managing preventative maintenance activities, but I will admit to not being sufficiently familiar with each system’s capabilities, so I may go somewhat far afield in this portion of the program. Please feel free to issue an e-dope slap if I get things going sideways.

If you are fortunate enough to have a system that is (or is capable of) keeping track of the inventory as a function of whether it is being maintained under an alternative maintenance strategy, then that becomes the basis of any discussion in this regard during a survey. Once you’ve done that initial “sort” (if it is applicable; if you’re maintaining everything in accordance with the OEM, then you really only need to state that in the applicable management plan), then I would recommend a second sort for each category (you could call one category OEM and the other AEMS—Alternative Equipment Maintenance Strategy) to determine what high-risk equipment resides in each of those categories, if any. Ideally (at least from a strict compliance approach), you are not managing any high-risk equipment through an alternative strategy; that will make things potentially less contentious during survey. If you do find that you have some high-risk equipment in the AEMS bucket, then you’re probably going to have to move those processes into the OEM bucket—hopefully a minimally disruptive task.
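If your database can export the inventory to something tabular, a minimal sketch of that two-pass “sort” might look like the following (the field names, such as maintenance_strategy and high_risk, are hypothetical; use whatever your system actually calls them):

```python
from dataclasses import dataclass

@dataclass
class Device:
    """One row exported from the maintenance database (field names are hypothetical)."""
    asset_id: str
    description: str
    maintenance_strategy: str  # "OEM" or "AEM"
    high_risk: bool            # life-support / high-risk designation

def sort_inventory(devices):
    """First sort: OEM vs. AEM. Second sort: flag any high-risk items in the AEM bucket."""
    oem = [d for d in devices if d.maintenance_strategy.upper() == "OEM"]
    aem = [d for d in devices if d.maintenance_strategy.upper() == "AEM"]
    # High-risk equipment being maintained under an alternative strategy is the
    # piece most likely to draw attention (and may need to move to the OEM bucket).
    aem_high_risk = [d for d in aem if d.high_risk]
    return oem, aem, aem_high_risk

inventory = [
    Device("ME-0001", "Infusion pump", "AEM", False),
    Device("ME-0002", "Ventilator", "OEM", True),
    Device("ME-0003", "Defibrillator", "AEM", True),  # candidate for the OEM bucket
]
oem, aem, flagged = sort_inventory(inventory)
print(f"{len(oem)} OEM items, {len(aem)} AEM items, {len(flagged)} high-risk items under AEM")
```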

I guess the other piece of this that could end up being a pain in the butt is if folks had elected not to include all of the medical and/or utility system equipment components in the inventory, in which case you’re going to have to “capture” anything you might have left out to start with. As an editorial aside, the more I think about this whole thing, the more peeved I become, because while I understand what they’re getting at from the larger perspective of ensuring safety and operational reliability, I can’t escape the thought that the scrutiny here is completely out of proportion to how this process has actually performed over time. In general, we are not hurting people through our medical equipment/utility systems, but it is what it is, I guess. Still, I can’t help but think they must have better things to do than worry about this stuff.

At any rate, the next step depends on whether you’re managing any of this equipment through an alternative maintenance strategy/system. If that is the case, you need to go through the list of equipment being managed in this fashion and provide support for the determination of whether it is safe to permit the equipment to be maintained in an alternate manner, based on criteria that include the following (there’s a brief documentation sketch after the list):

  • How the equipment is used, including the seriousness and prevalence of harm during normal use
  • Likely consequences of equipment failure or malfunction, including seriousness and prevalence of harm
  • Availability of alternative or back-up equipment in the event the equipment fails or malfunctions
  • Incident history of identical or similar equipment
  • Maintenance requirements of the equipment
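If it helps to give that determination some structure, here is a minimal sketch of a per-item record capturing each of the criteria above (the field names and the sample content are entirely my own illustration, not anything TJC or CMS prescribes):

```python
from dataclasses import dataclass, asdict
import json

@dataclass
class AEMDetermination:
    """One documented AEM determination; one field per criterion listed above."""
    asset_id: str
    how_used: str                  # how the equipment is used, incl. seriousness/prevalence of harm in normal use
    failure_consequences: str      # likely consequences of failure or malfunction
    backup_availability: str       # availability of alternative or back-up equipment
    incident_history: str          # incident history of identical or similar equipment
    maintenance_requirements: str  # maintenance requirements of the equipment
    determined_by: str             # the qualified individual(s) who performed the analysis
    safe_for_aem: bool

record = AEMDetermination(
    asset_id="ME-0001",
    how_used="General infusion on med/surg units; low prevalence of harm in normal use",
    failure_consequences="Therapy interruption; device alarms and patient is clinically supervised",
    backup_availability="Spare pumps held in central distribution",
    incident_history="No incidents with this model in the past three years",
    maintenance_requirements="Annual performance verification and electrical safety testing",
    determined_by="Clinical engineering manager (determination reviewed by the EOC Committee)",
    safe_for_aem=True,
)
print(json.dumps(asdict(record), indent=2))
```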


The other piece of this is that the above analysis has to be performed by a “qualified individual(s)” (and, as with the responsibility for maintaining the SOC, there is no specific direction as to what constitutes a qualified individual; I would recommend making that determination and running it through the EOC Committee for their knowledge/approval). At the end of the day, the complexities all grow out of the use of alternative maintenance strategies for medical/utility systems equipment, with the end game being (as is noted under both EC.02.04.01 and EC.02.05.01) that “the strategies of an AEM program must not reduce the safety of equipment and must be based on accepted standards of practice” (they specifically cite AAMI/ANSI EQ56: 2013 Recommended Practice for a Medical Equipment Management Program).

Unfortunately, I think this is mostly going to be grunt work unless you’ve never gone down the AEM road (which would make little or no sense; I don’t think anyone has enough resources to absolutely manage every piece of medical and/or utility system equipment in accordance with OEM). It’s getting everything into the correct categories and moving on from there.

You may need to reset your compliance calendar

As the trees turn over their colors, it (sometimes) gives me time to go back over stuff we’ve covered out here in the blogosphere, with the intent of trying to capture some things of note that I think are worth mentioning, even if they are not quite “hot off the presses.”

One of the interesting shifts is the subtle redefining of several of the compliance time frames invoked throughout the standards and performance elements. Not all of the definitions changed, but in the interest of full disclosure, I think we should include the lot of them (there’s a quick date-arithmetic sketch after the list):

  • Every 36 months/every three years = 36 months from the date of the last event, plus or minus 45 days
  • Annually/every 12 months/once a year/every year = one year from the date of the last event, plus or minus 30 days
  • Every six months = six months from the date of the last event, plus or minus 20 days
  • Quarterly/every quarter = every three months, plus or minus 10 days
  • Monthly/30-day intervals/every month = 12 times per year, once per month
  • Every week = once per week
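To make the window arithmetic concrete, here is a minimal sketch (my own illustration, not anything published by TJC) of turning the date of the last event into an acceptable range for the next one, using approximate day counts for the nominal intervals:

```python
from datetime import date, timedelta

# Nominal interval (in days) and allowed variance (in days) for each defined
# frequency above; the day counts are approximations for illustration only.
FREQUENCIES = {
    "every 36 months": (3 * 365, 45),
    "annually":        (365, 30),
    "every 6 months":  (182, 20),
    "quarterly":       (91, 10),
}

def compliance_window(last_event: date, frequency: str) -> tuple:
    """Return the (earliest, latest) acceptable dates for the next event."""
    interval_days, variance_days = FREQUENCIES[frequency]
    target = last_event + timedelta(days=interval_days)
    return target - timedelta(days=variance_days), target + timedelta(days=variance_days)

# Example: a quarterly activity last performed on October 15, 2014
earliest, latest = compliance_window(date(2014, 10, 15), "quarterly")
print(f"Next event due between {earliest} and {latest}")
```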


A particularly curious impact of this shift is the abandonment of the time-honored “not less than 20 days and not more than 40 days” interval for emergency power testing activities. Now we have “at least monthly” for those very same activities, which probably means you should consider scheduling your generator tests earlier in the month, so that if you have to postpone/delay the testing of your generator(s) to work around patient care activities, etc., you don’t run out of “month.” It will be interesting to see how this translates into the survey process.

The other thing that I’m “watching” is how that definition of quarterly is going to dovetail with how you would conduct fire drills. Is there going to be yet another “counting” vulnerability? I know the Conditions of Participation indicate that fire drills are to be conducted at “unexpected times under varying conditions,” which somehow seems to fly in the face of an every-three-months-plus-or-minus-10-days schedule. Maybe that’s a big enough window to keep things unexpected; I guess we’ll see how things unfold.

Good egress

One of my favorite pastimes when I’m driving (and I do get to drive a fair amount) is to listen to the public radio station in whatever area I might be traveling. And if the travel deities are truly smiling on me, I get to listen to a revolving set of programs collectively known as Public Radio Remix. My “description” of Public Radio Remix (if you’re curious about any and all manner of things, please check it out) is something akin to driving cross-country late at night where radio stations fade in and out and you end up experiencing a fairly wide swath of the human condition (bear with me—I do have some relevant content to share).

One of my favorite shows on Public Radio Remix is one called 99% Invisible, which started out as a project of a public radio station in San Francisco (KALW) and the American Institute of Architects (AIA). Now, I know you folks would probably recognize the AIA as you’ve been taken to task over the years relative to compliance with the AIA’s Guidelines for the Design and Construction of Health Care Facilities (I’m pretty sure the folks at The Joint Commission have a copy or two of the Guidelines on their shelves), so this is where I kind of tie this back around to our normal avenue of discussion.

At any rate, recently I was driving to the airport early in the morning and I “bumped” into the episode that revolved primarily around the design, etc., of fire escapes, but went on to cover a lot of elements of egress. I will tell you that I had my safety geek “on” for the drive to the airport that morning. The episode is a wee bit less than 20 minutes in length, but if you have a spare 20 minutes (Okay, I think this is interesting enough to recommend you use one of your non-spare 20 minutes), I think you’ll find this a pretty cool story.

I must warn you that you might find yourself “trapped” in the 99% Invisible experience (there are many very interesting stories in addition to this one), so I will ask you to please enjoy responsibly.

I don’t think you’re spending enough time in the restroom…

In preparation for our journey into the restrooms of your mind (sorry—organization), you might consider a couple of things. Practicing this during surveillance rounds is probably a good thing, since it increases folks’ familiarity with the potential expectations of the process. But in practicing, you can also consider identifying an organizational standard for responding to restroom call signals; that way you can build at least a little flexibility into the process, maybe enough to push back a little during survey if you can allow for some variability.

Another restroom-related finding has had to do with the restrooms in waiting areas in clinic settings (ostensibly, restrooms that can be used by either patients or non-patients who may be in the waiting area). There is a requirement for a nurse call to be installed in patient restrooms, but there is no requirement for a nurse call to be installed in a public restroom. So what are these restrooms in waiting areas? I would submit to you that, in general, restrooms in waiting areas ought to be considered public restrooms and thus not required to have nurse calls. Are there potential exceptions to this? Of course there are—and that’s where the risk assessment comes into play. Perhaps you have a clinic setting in which the patient population being served is sufficiently at risk to warrant some extra protections. Look at whether there have been any instances of unattended patients getting into distress, etc. (attended versus unattended is a very interesting parameter for looking at this stuff). Also, look at what the patients are being seen for; maybe cardiac patients are at a high enough risk to warrant a little extra.
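If you want to give that risk assessment a consistent shape, here is a minimal sketch of a per-restroom record built around the factors just mentioned (the field names and the sample entry are purely illustrative):

```python
from dataclasses import dataclass

@dataclass
class RestroomRiskAssessment:
    """One waiting-area restroom; fields mirror the factors discussed above (illustrative only)."""
    location: str
    patient_population: str   # what the clinic's patients are being seen for
    typically_attended: bool  # are waiting patients generally attended or unattended?
    incident_history: str     # any instances of patients getting into distress
    nurse_call_installed: bool
    determination: str        # the organization's documented decision

assessment = RestroomRiskAssessment(
    location="Cardiology clinic waiting area, 2nd floor",
    patient_population="Cardiac patients, many arriving unaccompanied",
    typically_attended=False,
    incident_history="None in the past 24 months",
    nurse_call_installed=True,
    determination="Higher-risk population; nurse call retained and a response standard applied",
)
print(assessment)
```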

At the end of the process, you should have a very good sense of what you need to have from a risk perspective. That way if you have a surveyor who cites you for not having a nurse call in a waiting area restroom, you can point to the risk assessment process (and ongoing monitoring of occurrences, etc.) as evidence that you are appropriately managing the associated risks—even without the nurse call. In the absence of specifically indicated requirements, our responsibility is to appropriately manage the identified/applicable risks—and how we do that is an organizational decision. The risk assessment process allows us the means of making those decisions defensible.

More songs about risk assessments…

One of the more common questions that I receive during my travels is “When do you need to do a risk assessment?” I wish that there were a simple response to this, but (as I have learned ad nauseam) there are few things in this safety life that are as simple as I’d like them to be. But I can give you an example of something that you might be inclined to look at as a function of your risk assessment process: restrooms (oh boy oh boy oh boy)!

While I can’t honestly characterize this as a trend (I suspect that, at the moment, this is the province of a handful or so of surveyors), there seems to be an increasing amount of attention paid to restrooms—both public and patient—during surveys. That attention has included nurse call alarms (or lack thereof), the ability of staff to “enter” restrooms to assist someone in distress, the length of the nurse call cords, etc. Now you might not think that there was a whole heck of a lot of trouble that could result from this type of scrutiny, but I can tell you that things can get a little squirrelly during survey (mostly around rescuing someone from the restroom) if you don’t have your arms around these spaces.

For example (and I think we’ve talked about this as a general observation a while back), there are some surveyors who will almost delight in locking themselves in a restroom, activating the nurse call system, and waiting to see how long it takes for staff to respond to—and enter!—the restroom (there is a Joint Commission performance element that requires hospitals to be able to access locked, occupied spaces; this would be one of those). Although there is no specific standards-based timeframe for response in these situations, the tacit expectation is that staff will be ready to respond, including emergency entry into the restroom, upon their arrival on the scene. This means that they would either immediately possess the means of entering the restroom or would have an immediate means at their disposal. This, of course, would be subject to the type of lock on the restroom door, etc., but for the purposes of this situation, we must assume that the patient is unable to unlock the door on their own. So, this becomes both a patient safety risk and a potential survey risk.

Stay tuned for some thoughts on how best to manage these types of situations.

How are you celebrating Fire Safety Week (October 5-11)?

We’ve been observing Fire Prevention Week (Fire Safety Week’s “real” name) since 1920, when President Woodrow Wilson issued a proclamation establishing National Fire Prevention Day; the observance was expanded to a full week in 1922. If you’re interested in the “story” of Fire Prevention Week, please check out the National Fire Protection Association (NFPA) website—it even includes mention of Mrs. O’Leary’s cow.

While there is much to applaud in the healthcare industry relative to keeping our facilities in fire-safe shape, there are still improvement opportunities in this regard. And one of the most compelling of those opportunities resides in the area of surgical fire prevention. According to the Association of periOperative Registered Nurses (AORN) in the October 2014 issue of AORN Journal, an estimated 550 to 650 surgical fires still occur annually in procedural environments where the risks of fire reach their zenith.

As we’ve seen from past experiences, AORN is certainly considered a source of expert information and guidance, and I think surgical environments would be well served to start looking at these three strategies for strengthening their fire safety programs:

  • Bring together a multidisciplinary team of fire safety stakeholders
  • Think about fire safety in the context of high reliability to tackle the systematic and non-systematic causes for surgical fires
  • Make fire prevention part of daily discussion

I don’t want to steal all the thunder, so my consultative advice is to seek out a copy of the article (you can try here) and make preventing surgical fires part of your Fire Prevention Week.

Sometimes miracles really do happen…BREAKING NEWS!

In what is clearly one of the busiest years for regulatory upheaval in the healthcare safety world (at least in recent memory), CMS has, yet again, turned things on their ear—and, to all appearances, toward a most positive potential outcome—in its ongoing series of categorical waivers. And this on a topic that has caused a ton of gnashed teeth and much sorrowful wailing: the use of relocatable power taps.

You will recall (it seems no more than minutes ago) that back in June (2014), George Mills, director of The Joint Commission’s Department of Engineering, was tasked with the dubious honor of announcing to the world that, basically, the use of relocatable power taps to power medical equipment in patient care areas was on the no-no list. Since then, many (okay, probably just about everyone to one degree or another) facilities and safety folks have been spending countless hours trying to figure out how to make this happen. So I guess this means that CMS has decided that Mr. Mills doesn’t have to get painted with the “bad guy” brush any longer as they have issued a categorical waiver that provides a fair amount of flexibility for the presence of RPTs in the patient care environment.

Now history has taught us, if nothing else, that that flexibility is going to vary quite a bit depending on your facility and the results of the inevitable risk assessment; but presumably you’ve already started the risk assessment process like good little girls and boys, yes? There is a lot of fairly useful (at least at first blush—we have also learned how useful can become useless in the blink of an eye) information to be had in the memo, which you can find here. If you have not yet had a chance to look this over, I would encourage you to do so before you make any “big” decisions on how you’re going to manage these pesky little items (hopefully, this “relief” is not coming too late to avoid having to undo sweeping seizures of power strips, etc.).

Maybe it’s Christmas come a bit early (or maybe we just power-shifted into winter), but I would encourage you to unwrap this present very carefully (some assembly required) and try not to break it on the first day…

No doubt there will be questions, so please use this forum as you wish.

How many plans must a response planner plan before he is called a response planner?

Recently I fielded a question regarding the requirements for organizations to have department-level emergency response plans and what those requirements might represent in terms of specific elements, etc. I have to admit that my initial reaction was that I really didn’t see much rationale in the creation of detailed department-level response plans; to be honest, it sounded very much like busy work, but that may just be me. But upon reflecting on what is actually required (at least for the moment—I’m still waiting on the Conditions of Participation “version” of emergency response, which I’m sure will result in some interesting conversation), while I can’t make a completely unassailable case for department-level plans (with some exceptions, but those may pivot on an organization-versus-department assessment), there may be some value in at least looking at the concept (in recognition that there is nothing in the requirements that specifies department-level plans; department-level planning is certainly in the mix, but written plans, not so much).

By parsing the response elements according to the tried-and-true Joint Commission model, we’d want to account for communications, management of resources and assets, management of staff roles and responsibilities, management of safety and security, management of utility systems and capacities, and the management of patient care and support activities (is that six elements? Yes!). My thought is that the critical infrastructure needs would “live” in the organization’s response plan and that most of the department-level plans would be along the lines of “consult with incident command” during a response activation—and isn’t that kind of the purpose of IC anyway?

Which leads me to the question of how much a department-level plan is going to deviate from, or bring value to, what is already included in the organizational response plan. I’m having a very difficult time convincing myself that what any organization “needs” when it comes to emergency response is yet another layer of plans. For all intents and purposes, the more layers you have underneath the command function, the more intricate the communication lines become, and to my way of thinking, intricacy is not necessarily a hallmark of effective emergency response. When I think of the command function/structure, while you certainly want to have some “distance” between the deciders and the doers, I would think that (at least at the organization level) you would want an org chart that is reasonably “flat” (precipitous command structures make me nervous; they just seem to be less flexible in the moment).

So, dear audience, have any of you folks gone down this road of developing department-level response plans (recognizing that there are certain departments, like materials management and food services, that have a role in supporting the entire organization’s response capabilities)? If you have, has it been worth the efforts to do so? Or did you look at it and decide, from a prioritization standpoint, that the value in doing so did not represent a worthwhile investment? Any feedback/discussion would be very much appreciated.

Make sure you go on (okay, over) the papers!

Another frequent survey finding of late (and I have to admit that, on many levels, this one really befuddles me) is a cornucopia of issues relating to fire alarm and sprinkler testing documentation. Basically, everything under EC.02.03.05 (and I do mean everything—it’s the soup, it’s the nuts, it’s the documentation—oy!). I had managed to convince myself that there was no way EC.02.03.05 would continue to be among the most frequently cited standards, but sure enough, it’s #4 on The Joint Commission’s list of top-cited standards for the first half of 2014. For some reason (and we will discuss the contributing factors I’ve seen in the field in a moment), this one doesn’t seem to go away.

What I’ve seen pretty much breaks down into two fairly broad (but curiously specific on some levels) categories: the quality of the service (and by extension, the documentation) of fire alarm and sprinkler system testing vendors; and, a failure to “embrace” the elements of documentation that are prescribed by TJC.

The documentation requirements are, for all intents and purposes, very straightforward—come survey time, you either have all the elements or you don’t: the name of the activity; the date of the activity; the required frequency of the activity; the name and contact information, including affiliation, of the person(s) who performed the activity; the NFPA standard(s) referenced for the activity; and the results of the activity. All your fire alarm, fire suppression, etc. documentation absolutely, positively has to have all of those elements. It doesn’t matter if the testing is performed by vendors or by in-house staff—every activity has to have this documentation every time. If you don’t have this in place for every activity, every time it happens, then you will be cited during survey. If the paperwork doesn’t indicate the testing results for each of your notification appliances (horns, strobes, etc.), then no soup for you! Someone in your organization had best be verifying that each of the required documentation elements is in place for all your testing activities – all of ‘em, all of ‘em, all of ‘em.
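For the verification piece, here is a minimal sketch of checking a single testing record against those required elements (the field names are my own invention; a real vendor report obviously won’t arrive this tidy):

```python
# Required documentation elements per EC.02.03.05, as listed above.
REQUIRED_ELEMENTS = [
    "activity_name",
    "activity_date",
    "required_frequency",
    "performer_name_contact_affiliation",
    "nfpa_standard_referenced",
    "results",
]

def missing_elements(record: dict) -> list:
    """Return any required elements that are absent or blank in a testing record."""
    return [element for element in REQUIRED_ELEMENTS if not record.get(element)]

record = {
    "activity_name": "Quarterly fire alarm notification appliance test",
    "activity_date": "2014-09-12",
    "required_frequency": "Quarterly",
    "performer_name_contact_affiliation": "J. Smith, Acme Fire Protection (vendor), 555-0100",
    "nfpa_standard_referenced": "NFPA 72",
    "results": "",  # a blank result should be flagged, not waved through
}
gaps = missing_elements(record)
print("Missing elements:", gaps if gaps else "none")
```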

And speaking of looking over your documentation, please make sure that there are no ugly little deficiencies buried in the report that might prompt questions about how long it took to fix something—or indeed whether that ugly little deficiency has been corrected at all! Remember, the clock starts ticking when the deficiency is identified, and you know how much time you have (and believe you me, it ain’t much time) to get things taken care of. Also, make sure that those device counts are consistent from quarter to quarter and year to year, and if they’re not consistent, that you have an explanation as to why the numbers don’t match up. If you had 60 pull stations tested last year and didn’t add or take any away, then there darn well better be 60 pull stations tested 12 months later. And if you have testing activities chunked into quarters, make sure the same chunks are tested in the same quarters year to year. I know this sounds simple (I also know I probably sound like a lunatic, but if you had seen what I’ve seen this year…), but way too many folks are getting jammed on this for me to stay quiet for long.
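And for the device-count piece, here is a minimal sketch of comparing counts by device type from one testing cycle to the next (again, purely illustrative):

```python
def count_discrepancies(previous: dict, current: dict) -> dict:
    """Compare device counts by type between two testing cycles; return any that changed."""
    device_types = set(previous) | set(current)
    return {
        t: (previous.get(t, 0), current.get(t, 0))
        for t in device_types
        if previous.get(t, 0) != current.get(t, 0)
    }

last_cycle = {"pull stations": 60, "horns": 120, "strobes": 118}
this_cycle = {"pull stations": 58, "horns": 120, "strobes": 118}

for device_type, (then, now) in count_discrepancies(last_cycle, this_cycle).items():
    print(f"{device_type}: {then} last cycle vs. {now} this cycle; be ready to explain the difference")
```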