Author Archive for Steve MacArthur

Steve MacArthur

Steve MacArthur is a safety consultant with The Greeley Company in Danvers, Mass. He brings more than 30 years of healthcare management and consulting experience to his work with hospitals, physician offices, and ambulatory care facilities across the country. He is the author of HCPro's Hospital Safety Director's Handbook and is contributing editor for Briefings on Hospital Safety. Contact Steve at stevemacsafetyspace@gmail.com.

A hospital in trouble is a temporary thing: Post-survey blues!

As you might well imagine, based on the number of findings floating around, as well as CMS’ continuing scrutiny of the various and sundry accreditation organizations (the latest report card is out and it doesn’t look too lovely—more on that next week after I’ve had a chance to digest some of the details), there are a fair number of organizations facing survey jeopardy for perhaps the first time in their history. And a lot of that jeopardy is based on findings in the physical environment (ligature risks and procedural environment management being the primary drivers), which has resulted in no little chagrin on the part of safety and facility professionals (I don’t think anyone really thinks that it would or could happen in their facility, but that’s not the type of philosophy that will keep the survey wolves at bay). The fact of the matter is (I know I’ve said this before, though it’s possible that I’ve not yet bent your collective ears on this point) that there are no perfect buildings, particularly in the healthcare world. They are never more perfect than the moment before you put people in them—after that, it is a constant battle.

Unlike any other time in recorded history, the current survey epoch is all about generating findings, and the imperfect nature of humans and their interactions with their environment create a “perfect storm” of opportunities to grow those numbers. And when you think about it, there is always something to find, so those days of minimal to no findings were really more aberrant than they probably seemed at the time.

The other piece of this is the dreaded adverse accreditation decision: preliminary denial of this, termination of that, and on and on. The important thing to remember when those things happen is that you will be given (well, hopefully it’s you and not your organization sailing off into the sunset without you) an opportunity to identify corrective action plans for all those pesky little findings. I can’t tell you it doesn’t suck to be in the thick of an adverse accreditation decision because it truly, truly does suck, but just keep in mind that it is a process with an end point. There may be some choppy seas in the harbor, but you have the craft (both figuratively and literally) to successfully make landfall, so don’t give up the ship.

Shine on you crazy fire response plan!

One of the things I’ve been doing over the past couple of weeks has been reading through the EC/LS/EM standards and performance elements to see what pesky little items may have shown up since the last time I did a really thorough review. My primary intent is to see if I can find any “Easter eggs” that might provide fodder for findings because of a combination of specificity and curiosity. At any rate, while looking through the fire safety portion of the manual, I noticed a performance element that speaks to the availability of a written copy of your fire response plan. That makes sense to me; you can never completely rely on electronic access (it is very reliable, but a hard-copy backup seems reasonable). The odd component of the performance element is the specificity of the location for the fire response plan to be available—“readily available with the telephone operator or security.”

Now, I know that most folks can pull off that combo as an either/or, but there are smaller, rural facilities that may not have that capacity (I think my personal backup would be the nursing supervisor), so it makes me wonder what the survey risks are for those folks who don’t have 24/7 switchboard or security coverage. At the end of the day, I would think that you could do a risk assessment (what, another one!?!?!?) and pass it through your EC Committee (that kind of makes the Committee sound like some sort of sieve or colander), and then if the topic comes up during survey, you can push back if you happen to encounter a literalist surveyor (insert comment about the likelihood of that occurring). As there is no specific requirement to have a 24/7 telephone operator or security presence (is it useful from an operational standpoint to do so? Absolutely—but nowhere is it specifically required), I think that this should be an effective means of ensuring you stay out of the hot water of survey. For me, “readily available” is the important piece of this, not so much how you make it happen.

At any rate, this may be much ado about nothing (a concept to which I am no stranger), but it was just one of those curious requirements that struck me enough to blather on for a bit.

As a closing note, a quick shout-out to the folks in the areas hit by various and sundry weather-related emergencies the past little while. I hope that things are moving quickly back to normal and kudos for keeping things going during very trying times. Over the years, I’ve worked with a number of folks down in that area and I have always been impressed with the level of preparedness. I would wish that you didn’t have to be tested so dramatically, but I am confident that you all (or all y’all, as the case may be) were able to weather the weather in appropriate fashion.

Everybody here comes from somewhere: Leveling the post-survey field

Well, if the numbers published in the September Perspectives are any indication, a lot of folks are going to be working through the post-survey Evidence of Standards Compliance process, so I thought I would take a few moments to let you know what has changed since the last time (if ever—perhaps your last survey was a clean one) you may have embarked upon the process.

So, what used to be a (relatively) simple accounting of Who (is ultimately responsible for the corrective action), What (actions were taken to correct the findings), When (each of the applicable actions were taken), and How (compliance is going to be sustained) has now morphed into something somewhat more involved:

  • Assigning Accountability (for corrective actions and sustained compliance)
  • Assigning Accountability – Leadership Involvement (this is for those especially painful findings in the dark orange and red boxes in the SAFER matrix – again, corrective actions and sustained compliance)
  • Correcting the Non-Compliance – Preventive Analysis (again, this is for those big-ticket findings – the expectation is that there will be analysis of the findings/conditions cited to ensure that the underlying causative factors were addressed along with the correction of the findings)
  • Correcting the Non-Compliance (basically, this mashes together the What and When from the old regimen)
  • And last, but by no means least, Ensuring Sustained Compliance

This last bit is a multifocal outline of how ongoing compliance will be monitored, how often the monitoring activities will occur (don’t over-promise on those frequencies, boys and girls; keep it real and operationally possible), what data is going to be collected from the monitoring process, and to whom, and how often, that data is going to be reported.

Now, I “get” the whole sustaining correction “thing,” but I’ve worked in healthcare long enough to recognize that, while our goal may be perfection in all things, perfection tends not to exist within our various spheres of influence. And I know lots of folks feel rather more inadequate than not when they look at the list of findings at the end of survey (really, any survey—internal, external—there’s always lots to find), which I don’t think brings a ton of value to the process. Gee thanks, Mr. Surveyor, for pointing out that one sprinkler head with dust on it; gee thanks, Ms. Surveyor, for pointing out that missing eyewash check. I believe in, and take very seriously, our charge to ensure that we are facilitating an appropriate physical environment for care, treatment, and services to be provided to patients in the safest possible manner. If I recall correctly, the standards-based expectation refers to “minimize or eliminate,” and I can’t help thinking that minimization (which clearly doesn’t equal elimination) ought to count for something.

Ah, I guess that’s just getting a little too whiny, but I think you see what I’m saying. At any rate, be prepared to provide a more in-depth accounting of the post-survey process than has been the case in the past.

The other piece of the post-survey picture is the correction of those Life Safety Code® deficiencies or ligature risk items that cannot be corrected within 60 days; the TJC portal for each organization, inclusive of the Statement of Conditions section, has a lot of information/instruction regarding how those processes unfold after the survey. While I know you can’t submit anything until you’ve been well and truly cited for it during survey, I think it would be a really good thing to hop on the old extranet site and check out what questions you need to consider, etc., if you have to engage a long-term corrective action or two. While in some ways it is not as daunting as it first seems, there is an expectation for a very (and I do mean very, very) thorough accounting of the corrective actions, timelines, etc., and I think it a far better strategy to at least eyeball the stuff (while familiarity is said to breed contempt, it also breeds understanding) before you’re embroiled in the survey process for real.

Pay a great deal of attention to the man behind the curtain: More ligature survey stuff!

This week’s installment is rather brief and (at least for the moment) is germane only to those folks with inpatient behavioral health units. During a recent TJC survey of a behavioral health hospital, I was able to catch a glimpse into the intent behind the information revealed last November (holy moly, it’s been almost a year!). I have to admit that the “cadence” of this particular guidance was a little confusing to me at the time, but now I “get” it.

In discussing the recommendations regarding nursing stations (nursing stations with an unobstructed view so that a patient attempt at self-harm at the nursing station would be easily seen and interrupted), the article in Perspectives goes on to indicate that areas behind self-closing/self-locking doors do not need to be ligature-resistant. The consideration that I want to share with you is that a self-closing/self-locking door is not the same as a door that is always locked (maybe you figured that out as a proactive stance; I always considered control over locked spaces to be sufficiently reliable, but it would seem that is not the case). At any rate, if you take the guidance at its word, if you have a space on your behavioral health unit that has ligature risks contained therein, then you’d best have doors that self-close and lock. You may have a lot of doors that secure ligature-present spaces but do not self-close and lock; if that’s the case, you may want to reach out to the Standards Interpretation Group for official feedback on this. All I can tell you is that it’s been cited in at least one recent survey and it does reflect the content shared last November (I think it would have been my inclination to separate the nursing station concept from the “other” areas for the sake of clarity, but I can see where things “fall” now that it’s come up during a survey), so it’s definitely worth some consideration in your “house.”

I’ve been there, I know the way: More Executive Briefings goodness

You’ve probably seen a smattering of stuff related to the (still ongoing as I write this) rollout of this year’s edition of Joint Commission Executive Briefings. As near as I can tell, during the survey period of June 1, 2017 to May 31, 2018, there were about 27 hospitals that did not “experience” a finding in the Environment of Care (EC) chapter (98% of hospitals surveyed got an EC finding) and a slightly larger number that had no LS findings (97% of hospitals got a Life Safety chapter finding). So, bravo to those folks who managed to escape unscathed—that is no small feat given the amount of survey time (and survey eyes) looking at the physical environment. Not sure what the secret is for those folks, but if there’s anyone out there in the studio audience who would like to share their recipe for success (even anonymously: I can be reached directly at stevemacsafetyspace@gmail.com), please do, my friends, please do.

Another interesting bit of information deals with the EC/LS findings that are “pushing” into the upper right-hand sectors of the SAFER matrix (findings with a moderate or high likelihood of harm and a pattern or widespread level of occurrence). Now, I will freely admit that I am not convinced that the matrix setup works as well for findings in the physical environment, particularly since the numbers involved are so small (and yes, I understand that it’s a very small sample size). For example, if you have three dusty sprinkler heads in three locations, that gets you a spot in the “widespread” category. I don’t know, it just makes me grind my teeth a little more fiercely. And the EP cited most frequently in the high likelihood of harm category? EC.02.02.01 EP5—handling of hazardous materials! I am reasonably confident that a lot of those findings have to do with the placement/maintenance of eyewash stations (and I’ve seen a fair number of what I would characterize as draconian “reads” on all manner of considerations relating to eyewash stations, which reminds me: if you don’t have maintenance-free batteries for your emergency generators and you don’t have ready access to emergency eyewash equipment when those batteries are being inspected/serviced, then you may be vulnerable during your next survey).

At the end of the day, I suppose there is no end to what can be (and, clearly, is) found in the physical environment, and I absolutely “get” the recent focus on pressure relationships and ligature risks (and, soon enough, probably Legionella, which was a featured topic in the EC presentation), but a lot of the rest of this “stuff” seems a little like padding to me…

If it’s September, it’s time for Executive Briefings!

I suspect that, over the next few weeks, as I learn of stuff coming out of the various and sundry Joint Commission Executive Briefings sessions, I’ll be sharing some thoughts, etc., in that regard here in the ol’ blog.

The first thing to “pop” at me was some information regarding Chapter 15 (Features of Fire Protection) in NFPA 99 Health Facilities Code (2012 edition) relating to the management of surgical fire risks. If you’ve not had a chance to check out section 13 of said chapter, I think it will be worth your while, as there are a couple of things that in the past one might have described as best practices. But, with the official adoption of NFPA 99 by CMS, this has become (more or less, but definitely more than before) the law of the land. From a practical standpoint, I can absolutely get behind the concepts contained in this section (I’m pretty comfortable with the position that any surgical fire is at least one more than we should have), but from a strict compliance standpoint, I know that it can be very challenging to get the folks up in surgery to “play ball” with the physical environment rules and regulations.

As one might expect, the whole thing breaks down into a few components: hazard assessment; establishment of fire prevention procedures; management of germicides and antiseptics; establishment of emergency procedures; and orientation and training. I think the piece of this that might benefit from some focused attention relates to the management of germicides and antiseptics, particularly as a function of the required “timeout” for the germicide/antiseptic application process. And yes, my friends, I did say “required”; Section 15.13.3.6 indicates (quite specifically) that a preoperative “timeout” period shall be conducted prior to the initiation of any surgical procedure using flammable liquid germicides or antiseptics to verify that:

  • Application site of flammable germicide or antiseptic is dry prior to draping and use of electrosurgery, cautery, or a laser
  • Pooling of solution has not occurred or has been corrected
  • Any solution-soaked materials have been removed from the operating room (OR) prior to draping and use of electrosurgery, cautery, or a laser

Now, I will freely and openly admit that I’ve not done a deep dive into the later chapters of NFPA 99 (though that’s on my to-do list), so I hadn’t bumped into this, but I can definitely see this being a potential vulnerability, particularly in light of the recent FDA scrutiny (and it goes to Linda B’s question in follow-up to a recent blog posting—I probably should have tumbled to this at that point—mea maxima culpa). At any rate, nothing in this section of NFPA 99 is arguable unless you don’t have it in place and a surveyor “goes there,” so perhaps you should be sure that your OR folks are already “there” sooner rather than later.

Two closing items:

  • The good folks at the Facilities Guidelines Institute have provided a state-by-state resource identifying which states have adopted the FGI guidelines (completely, partially, not really). You can find that information here.
  • Also, Triumvirate Environmental is presenting a couple of webinars over the next little while that might be of interest. The one this week (sorry for the short notice) deals with the EPA’s recently established Hazardous Waste e-Manifest Program, and then the week after next, there’s a program on Best Practices to Optimize Your Waste Documentation Program. While I can’t call these crazy risky survey vulnerabilities, EC.02.02.01 is still percolating around the top of the most frequently cited list, so it never hurts to obtain greater familiarity with this stuff.

Enjoy your week safely!

Changing (not so much) perspectives on survey trends: Infection Control and Medication Safety

By now I suspect that you’ve probably seen/heard that the survey results for the first half of 2018 are only surprising to the extent that there are no surprises (well, maybe a small one, but more on that in a moment). There’s a little bit of jockeying for position, but I think that we can safely say that the focus on the physical environment (inclusive of environmental concerns relating to infection control and prevention) is continuing on apace. There’s a little bit of shifting, and the frequencies with which the various standards are being cited are a wee bit elevated, but the lion’s share of the survey results that I’ve seen are indicative of them continuing to find the stuff they will always be able to find in this era when a single deficiency gets you a survey “ding.” The continuing hegemony of LS.02.01.35 just tells me that dusty sprinklers, missing escutcheons, stacked-too-high storage, etc., can be found just about anywhere if the survey team wants to look for it.

One interesting “new” arrival to the top 10 is IC.02.01.01, which covers implementation of the organization’s infection control plan. I have seen this cited, and, interestingly enough, the findings have involved the maintenance of ice machines (at least so far) and other utility systems equipment with infection control implications, such as sterilizers (for which there is a specific EP under the utilities management standards). I suspect that what we have here is the beginning of a focus on how infection control and prevention oversight dovetails with the management of the physical environment. I know that this is typically a most collaborative undertaking in hospitals, but we have seen how the focus on the “low hanging fruit” can generate consternation about the overall management of programs. As I’ve noted countless times, there are no perfect environments, but if you don’t/can’t get survey credit for appropriately managing those imperfections, it can be rather disheartening.

There are a couple of other items of note in the September issue of Perspectives, mostly involving the safe preparation of medications. As you know, there are equipment, utility systems, environmental concerns, etc., that can influence the medication preparation processes. The Consistent Interpretations column focuses on that very subject, and while the survey finding numbers seem to be rather modest, it does make me think that this could be an area of significant focus moving forward. I would encourage you to check out the information in Perspectives and keep a close eye on the medication preparation environment(s)—it may save you a little heartache later on.

Never say never: The ligature risk conversation continues…

I truly was thinking that perhaps I could go a couple more weeks without coming back to the ligature risk topic, but continued percolation in this area dictates otherwise. So here’s one news item and one (all too consultative) recommendation.

If you took a gander at the September issue of Briefings on Accreditation and Quality, you will have noted that the Healthcare Facilities Accreditation Program (HFAP) isn’t revising their existing standards in the wake of the recent CMS memorandum indicating that The Joint Commission’s (TJC) focused work on the subject of managing physical environment risks and behavioral health patients is an acceptable starting point (and I am very serious about that descriptor—I don’t see this ending real soon, but more on that in a moment). I’m not sure if HFAP makes as much use of Frequently Asked Questions forums as TJC does (and, with that use, the “weight” of standards), so it may be that they will start to pinpoint things (strategies, etc.) outside of revising their standards (which prompts the question—at least to me—as to whether TJC will eventually carve out the FAQs into specific elements of performance…only time will tell). At any rate, HFAP had done some updating (already approved by CMS) prior to the recent CMS memorandum and, since that updating draws on existing CMS guidance (which tends not to be too specific in terms of how you do things), should be in reasonable shape. You can see a little more detail as to where the applicable HFAP standards “live” by checking out this and this. I would imagine that the other accreditation organizations are looking at/planning on how to go after this stuff in the field, and I suspect that everyone is going to get a taste of over-interpretation and all that fun stuff.

In the “dropping of the other shoe” department, recent survey results are pointing towards a more concerted look at the “back end” of this whole process—clear identification of mitigation strategies, education of applicable staff on the risks and mitigation strategies, and building this whole process into ongoing competency evaluation. You really have to look at the proactive risk assessment (and please, please, please make sure that you identify everything in the environment as a risk to be managed; I know it’s a pain in the butt to think so, but there continue to be survey findings relating to items the survey team feels are risks that were not specifically identified in the assessment) as the starting point and build a whole system/program around that assessment, inclusive of initial and ongoing education, ongoing competency evaluation, etc. Once again, it would seem that we are not going to be given credit for doing the math in our (collective) head; you have to be prepared to “show” all your work, because if you don’t, you’ll find yourself with a collection of survey findings in the orange/red sections of the ol’ SAFER matrix—and that is not a good thing at all. We are (likely) not perfect in the management of behavioral health patients (and perfection is clearly the goal/end game of this), but right now anything short of that has to be considered a vulnerability. If you self-identify a risk that you have not yet resolved and you do not specifically indicate the mitigation strategy (in very nearly all circumstances, that’s going to be one-to-one observation), then you are at survey risk. I cannot stress enough that (at least for now) less is not more, so plan accordingly!

Conflagrations of unknown origin: Surgical fire prevention trends

Well, it appears that there remain opportunities for improvement in providing a fire-safe experience for surgical patients, at least based on the latest missive from the FDA. The safety communication (released at the end of May 2018) indicates that the FDA continues to receive reports of preventable surgical fires. I can’t think of too many circumstances—OK, none—in which a surgical fire could legitimately be considered unpreventable, though I have no doubt that you all have tales to tell of clinicians who feel that everything was done correctly and there was still a fire. I’d be interested in hearing some of those.

At any rate, the communication indicates several component strategies for appropriately managing the risk(s) associated with surgical fires—and if you guessed that a risk assessment figures into that equation, it may be that we have covered this ground before. So:

  • Conduct a fire risk assessment at the beginning of each surgical procedure
  • Encourage communication among surgical team members
  • Ensure the safe use and administration of oxidizers
  • Ensure the safe use of any devices that may serve as an ignition source
  • Ensure the safe use of surgical suite items that may serve as a fuel source
  • Plan and practice how to manage a surgical fire

I don’t think there’s anything that is particularly revelatory—these are by no means new expectations (for us or by us). It does appear that the FDA is going to be leaning on the various accreditation organizations (TJC, DNV, HFAP, CIHQ, AAAHC, etc., though TJC is the only organization specifically mentioned—aren’t they special!), so I think we may see yet another round of ratcheting things up in regards to surgical fire drills, providing education to clinicians, etc. I don’t know how much reaching out you might do relative to actual events in your surgical procedure areas (I can’t say that I always see a ton of information beyond fire drill and education documentation), but I think you’ll want to be able to speak to this as a proactive undertaking. Somebody must be monitoring these types of things and if it’s not you, you need to figure out who it is and keep yourself informed.

As something of a preemptive thought, I ran across a podcast entitled “Nurses for Health Environments” (you can find some background and links to the podcast here). I haven’t had a chance to check it out (I listen to podcasts as I work towards my 10K steps before breakfast, but I always seem to have a backlog of stuff to listen to), but I do believe that partnering with nurses and other clinicians (who make up a very large percentage of the healthcare culture) in managing the environment makes a great deal of sense. Stewardship of the environment has to happen at every level of every organization, so I would urge you to check it out and maybe recommend it as a listening opportunity for the clinicians in your organization. I’ve always believed that marketing is an important piece of what we do as safety professionals and any (and every) insight into what folks are thinking about, etc., is worth consideration.

How green is your dashboard? Using the annual evaluation process to make improvements

I was recently fielding a question about the required frequencies for hazard surveillance rounds (hint: there are no longer required frequencies—it is expected that each organization will determine how the frequency of rounding and the effective management of the program complement each other) and it prompted me to look at what was left of the back end of the EC chapter (and there really isn’t a lot compared to what was once almost biblical in implication). I think we can agree that there has been a concerted effort over time to enhance/encourage the management of the physical environment as a performance improvement activity (it’s oft been said that the safety committee is among the most important non-medical staff committees in any organization—and even more so if you have physician participation) and there’s been a lot of work on dashboards and scorecards aimed at keeping the physical environment in the PI mix.

But in thinking back to some of the EC scorekeeping documents I’ve reviewed over the years (and this includes annual evaluations of the program), the overarching impression I have is one of a lot of green with a smattering of yellow, with a rather infrequent punctuation of red. Now I “get” that nobody wants to air their dirty laundry, or at least everyone wants to control how and where that type of information is disseminated, but I keep coming back to the list of most frequently cited standards and wonder how folks are actually managing the dichotomy of trying to run an effective program while having a survey (aimed at those imperfections that make us crazy) that flies in the face of a mostly (if not entirely) green report card.

While it’s always a good thing to know where you stand relative to your daily compliance stuff, when it comes down to communication of PI data, it’s not so much about what you’re doing well, but where you need to make improvements. I venture to predict that the time will come when the survey process starts to focus on how improvement opportunities are communicated to leadership and how effective those communications are in actually facilitating improvement. It’s not so much about “blaming” barriers, but rather the facilitation of barrier removal. There will always be barriers to compliance in one form or another; our task is to move our organizations past those barriers. With the amount of data that needs to be managed by organizational leadership, you have to make the most of those opportunities when direct communications are possible/encouraged. And if there are considerations for which the assistance of organizational leadership is indicated, you have a pipeline in place to get that done with the annual evaluation process.