
Shoo beedoobee – splattered, splattered!

In the never-ending discourse on the subject of emergency eyewash stations, I’d like to take a moment to remind folks that the TJC surveyors appear to have access to the ANSI Emergency Eyewash and Shower Equipment Standards and have become very diligent in ferreting out (apologies to the ferrets in the audience – I don’t mean to offend) practices that are not consistent with the “recommendations” contained therein. And so, let me say this:

If your organization has chosen to maintain your emergency eyewash stations on a lesser frequency than weekly, then you had best have conducted a risk assessment to demonstrate that you are ensuring the same level of safety that you would if you were maintaining them on a weekly basis. Water temperature, water pressure, “cleanliness” of the flushing fluid, whether access to the equipment is obstructed – these all need to be considered in the mix, because, and I can tell you this with a great deal of certainty, if you are doing these inspections less than weekly, you will be cited during survey. If you have not conducted the risk assessment to demonstrate that the lesser frequency is appropriate, then you will have to move to a weekly program. (Sort of a “you can pay me now or you can pay me later” kind of deal.) But rest assured that eyewash stations are definitely in the mix, so make like a Boy Scout (do I really need to finish that thought? I didn’t think so…)
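For folks who like to see the logic laid out, the risk factors above could be captured in a simple checklist structure. This is a minimal, hypothetical sketch – the field names and pass criteria are mine for illustration, not from ANSI or any other standard:

```python
# Hypothetical eyewash-station risk assessment checklist.
# Keys and criteria are illustrative only; tailor to your own assessment.
EYEWASH_CHECKS = {
    "water_temperature_tepid": True,   # flushing fluid in a tepid range
    "water_pressure_adequate": True,   # full, steady flow when activated
    "flushing_fluid_clean": True,      # no discoloration or debris
    "access_unobstructed": True,       # clear path to the equipment
}

def lesser_frequency_defensible(checks: dict) -> bool:
    """A lesser-than-weekly frequency is only defensible if every
    risk factor in the assessment is consistently controlled."""
    return all(checks.values())

print(lesser_frequency_defensible(EYEWASH_CHECKS))
```

The point of the sketch is simply that a single uncontrolled factor should push you back to the weekly program.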

By the way, I’ve also caught wind of the invocation of AAMI standards when it comes to the placement of emergency eyewash equipment in the contaminated section of central sterile. (Also, don’t forget to keep those pressure relationships in check. Clean central sterile should never be negative to dirty central sterile). Now I will freely admit that I am not as conversant with the AAMI standards as I am with, say, OSHA standards, and perhaps even the ANSI standards dealing with this stuff. Again, the rhetorical question becomes: How many rocks do we need to turn over before we can safely determine that there isn’t some funky consensus standard lurking in the weeds that is not in strict concurrence with accepted practice? Why can’t these guys just get along…jeez!

She’s a laughing, giggling whirly bird – oh heli!

Interesting development on the survey front in the last couple of weeks. I’m not at all sure what it means, but I thought I would share it with you all, make of it what you will.

During a recent survey in the Sunshine State, a hospital was cited for not having the “recommended (maximum) rotor circumference signage on the pad nor the other recommended signs that are recommended by the FAA” (signs such as “MRI in use,” etc.). Now we could certainly have a good time parsing the whole “recommended signs that are recommended” phraseology, but I keep coming back to that word “recommended.” How far do we have to go to ensure that we have somehow accounted for every recommendation for every possible risk that we might encounter? Yeah, beats me, too, but in the interest of furthering the applicable knowledge base, let’s step to the web for some edification:

First I draw your attention to the Advisory Circular issued by the FAA back in 2004. I can’t seem to lay my mitts on anything more contemporary than this, but if you find something more recent, please share.

Now, as we scan the first page of this most comprehensive document, we see a little statement that I think makes the TJC survey citation a little more squishy than I would prefer: “This AC is not mandatory and does not constitute a regulation except when Federal funds are specifically dedicated for heliport construction.” To me, “not mandatory” sounds like a really big case of the “we don’t have to’s” – what do you think?

Turning to Chapter 4, Hospital Heliports (this is on page 95 of the document), I will freely admit that there’s a lot of interesting/cool information. (Did you know that the FAA recommends Portland Cement Concrete for ground level facilities—who knew? Do you have Portland Cement Concrete for your ground level facilities? I certainly hope so). Anyway, the chapter describes a lot of important stuff about hazards and markings, including MRI impact, etc.

I’m going to guess that you’ve been having helicopters fly in and out of your airspace on a regular basis, and in all likelihood, some of you are already up to speed on this.  For those of you for whom this might be virgin territory, my advice would be to consult with the folks actually doing the flying and find out what they want to see with your helicopter setup. This would constitute what I euphemistically refer to as a risk assessment. You may have encountered that term once or twice here in Mac’s Safety Space. I can’t imagine that you’d not have heard by now if the pilots using your pad have issues. (I’ve never found them to be shy about safety—nor should they be.) Still, it’s never a bad idea to reach out periodically to make sure that everything is both hunky and dory; it’s really the least we can do.

Breaker, breaker…

Recently I received a question from a colleague regarding a survey finding (an RFI) under EC.02.05.01, performance element numero 7, which requires hospitals to map the distribution of their utility systems. The nature of the finding was that there was an electrical panel in which the panel schedule did not accurately reflect the status of the breakers contained therein.

My guess is that there was a breaker labeled as a “spare” that was in the “on” position, which is a pretty common finding if one should choose to look for such a condition. At any rate, the finding went on to outline that staff were unaware of the last time the mapping of the electrical distribution was verified. The question thus became: How often do we need to be verifying panel schedules, since the standard doesn’t specify and there is no supporting FAQ, etc., to provide guidance?

Now, first, I don’t know that this would be the most appropriate place to cite this condition; my preference would be for EP #8, which requires the labeling of utility systems controls to facilitate partial or complete emergency shutdowns, but I digress. Strictly speaking, any time any work is done in an electrical panel, the panel schedule should be verified for accuracy, which means that any breaker that is in the “on” position should be identified as such on the panel schedule. This is not specifically a Joint Commission requirement, but I think that we can agree that the concept, once one settles the matter as a function of logic and appropriate risk management behavior, “lives” in NFPA 70 the National Electrical Code®.

As I noted above, unfortunately, this is a very easy survey finding if the surveyor looks at enough panels; it is virtually impossible to not have at least one breaker in the “on” position that is identified on the panel schedule as a spare or not identified at all. That said, if you get cited for this, you are probably going to have to wrestle with this at some point and your facilities folks are going to have to come up with a process for managing this risk, as it’s really not safe to have inaccurately labeled electrical panels.

As to a desired frequency, without having any sense of how many panels are involved, which would be a key indicator for how often the folks would be able to reasonably assure compliance (a concept not very far away from the building maintenance program [BMP] concept), it’s tough to predict what would be sufficient. That said, the key compliance element remains who has access to the electrical panels. From my experience, the problem with the labeling of the breakers comes about when someone pops a breaker and tries to reset it without reaching out to the facilities folks. Someone just goes flipping things back and forth until the outlet is working again (floor buffing machine operators are frequent offenders in this regard).

From a practical standpoint, I think the thing to do in the immediate term (if it hasn’t already occurred) is to conduct a survey of all the panels to establish a baseline and go from there, paying particular attention to the breakers that are not properly labeled in the initial survey. Those are the breakers I’d try to secure a little better, just to make sure that they are not accessible by folks who shouldn’t be monkeying around with them. Another unfortunate aspect of this problem is that both EP 7 and EP 8 are “A” performance elements, so it’s a one-strike-and-you’re-out scenario. Certainly worth a look-see, perhaps during hazard surveillance rounds.
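If your facilities folks want to formalize that baseline survey, the core check is nothing more than comparing each breaker’s physical position against its schedule label. A minimal sketch, with entirely hypothetical data structures and labels:

```python
# Hypothetical panel-schedule baseline check: flag breakers whose
# physical position contradicts the schedule (e.g., a "spare" that's on).
def find_mismatches(panel: dict) -> list:
    """Return slot numbers where a breaker labeled 'spare' (or not
    labeled at all) is found in the 'on' position."""
    issues = []
    for slot, info in panel.items():
        labeled_spare = info["label"].strip().lower() in ("spare", "")
        if labeled_spare and info["position"] == "on":
            issues.append(slot)
    return issues

# Illustrative panel data only.
panel_2b = {
    1: {"label": "Corridor receptacles", "position": "on"},
    2: {"label": "spare", "position": "on"},   # the classic survey finding
    3: {"label": "", "position": "off"},       # unlabeled but de-energized
}
print(find_mismatches(panel_2b))  # [2]
```

Run it panel by panel during the baseline survey and you have a punch list of exactly the breakers to relabel and secure.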

So many panels, so little time…

From the muddy banks of compliance

Let’s break from form a little bit and start with a question:

How often are you (and by you, I mean your organization) screening contracted staff, including physicians, physician assistants, nurse practitioners, etc.?

A recent TJC survey resulted in a finding under the HR standards because the process was being administered on an every-other-year (biennial) cycle. The finding vaguely referenced OSHA guidelines in identifying this deficiency, but the specific regulatory reference point was not provided (though apparently a call to Chicago validated that this was the case). Now, anyone who’s worked with me in real time knows that I have an exhaustive (and, at times, exhausting) curiosity about such matters. The deficiency “concepts” are usually sourced back to a “they,” as in, “they told me I had to do this” or “they told me I had to do that.” I am always, always, always curious as to who this “they” might be and whether “they” were good enough to provide the applicable chapter and verse. The answer, more often than not, is “no.” Perhaps someday we’ll discuss the whimsical nature of the “Authority Having Jurisdiction” (AHJ) concept, but we’ll save that for another day.

At any rate, I did a little bit of digging around to try and locate a regulatory source on this and in this instance, the source exists; however, the standard is not quite as mandatory as one might first presume (If you’re thinking that this is going to somehow wrap around another risk assessment conversation, you are not far from wrong). So, a wee bit of history:

Back in 1994, the CDC issued their Guidelines for Preventing the Transmission of Mycobacterium tuberculosis in Health-Care Facilities, which, among other things, advise a risk-based approach to screening. Appendix C speaks to the screening requirements for all healthcare workers, regardless of who they work for, so the guidance would be to include contract folks. The risk level is determined via a risk assessment (Appendix B of the Guidelines is a good start for that). So, for a medium exposure risk environment, the CDC recommends annual screening, but for a low exposure risk environment, they recommend screening at time of hire, with no further screening required (unless your exposure risk increases, which should be part of the annual infection control risk assessment).

But, in 1996, OSHA issued a directive that indicates annual screening as the minimum requirement, even for low-risk exposures, and even while referencing the CDC guidance – with medium-risk folks having semi-annual screening and high-risk folks being screened on a quarterly basis. So, friends, how are you managing folks in your environment, particularly the aforementioned contracted staff? Do you own them or is it the responsibility of their contracted employer? Does this stuff give you a headache when you think about it too much? It sure gives me one…occupational hazard, I guess. At any rate, it’s certainly worth checking to see whether a risk assessment for TB exposure has been conducted. The OSHA guidance document clearly indicates that if you haven’t, it’s the responsibility of the surveyor to conduct one for you, and I don’t know that I’d be really keen on having that happen.
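To keep those two sets of intervals straight, the frequencies described above can be laid out as a simple lookup. The risk-level names are my own shorthand – verify the categories and intervals against the actual CDC guidelines and OSHA directive before relying on them:

```python
# Screening intervals as summarized above; a hedged reference sketch.
# Risk-level labels are shorthand, not official terminology.
OSHA_1996_SCREENING = {   # the 1996 OSHA directive (the stricter of the two)
    "low": "annual",
    "medium": "semi-annual",
    "high": "quarterly",
}

CDC_1994_SCREENING = {    # the 1994 CDC guidelines
    "low": "at hire only",
    "medium": "annual",
}

def required_interval(risk_level: str) -> str:
    """Return the OSHA interval, since the directive sets annual as the
    floor even for low-risk settings."""
    return OSHA_1996_SCREENING[risk_level]

print(required_interval("low"))  # annual
```

The practical takeaway: your risk assessment determines the row, but the OSHA column is the one a surveyor is likely to hold you to.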

And now for something completely…the same

Time for a quick roundup of some recent survey trends:

  • We’ve talked about the overarching issues with weekly testing of plumbed eyewash stations any number of times over the years and I am always happy to respond to direct questions. The key element here is that if your organization is not conducting an at least weekly testing regimen for your plumbed eyewash stations and has not documented a risk assessment indicating that a lesser frequency is appropriate, it will likely be cited. My consultative advice: If you’re not testing at least weekly, please do so, or do the risk assessment homework.
  • With the extra life safety surveyor time during survey, the likelihood of encounters with frontline staff is on the rise. And apparently, it is not enough for folks to know what they are doing, but there is also an expectation that they will understand why they do what they do, primarily in the context of supporting patient care (which we all do—everything that happens in a hospital can trace back to the patient). I guess it won’t be enough for folks to be able to respond appropriately when asked how they would respond to a fire. They also need to understand how their response fits into the grand scheme of things. I really believe that folks understand why their jobs are important; we just need to prepare them for the question. Probably more on this as it develops.
  • 96 bottles of beer on the wall, 96 bottles of beer—but will that be enough beer to last 96 hours (I guess it depends on how thirsty you are)? So the question becomes this: If a surveyor asks to see your 96-hour capability assessment, what would you do, and perhaps most importantly, can you account for it in your Emergency Operations Plan? My general thought in this regard is that the 96-hour benchmark would be something that one would re-visit periodically, just as you would your hazard vulnerability assessment, in response to changing conditions, both internal and external.
  • As a final thought for this installment, please make sure that you (that would be the royal “you”) are conducting annual fire drills in all those lovely little off-site locations listed as business occupancies on your Statement of Conditions. And make very sure that staff are aware that you are conducting those fire drills. There’s been a wee bit of an upsurge in fire drill findings based on the on-site staff not being able to “remember” any fire drills, in some instances, for several years. The requirement is annual and I don’t think any of us wishes to get tagged for something as incidental as this one.

Your mother should know…but what if she doesn’t?

I’ve noticed a little word popping up in recent survey reports, a word that strikes fear in my heart. I don’t know how widespread this might be, so if you folks have an opportunity to weigh in on this conversation, I’d be really interested in what you all are seeing out there. And so, today’s word of the day is “should.” Now scream real loud!

I think we are all pretty familiar with those things that are legitimately (perhaps specifically is a better descriptor) required by the standards. I like to refer to those requirements as “have to’s” – we have to test certain fire alarm components at certain frequencies, we have to conduct quarterly fire drills, etc. Now I’m certainly not someone who is going to complain when a regulator (of any stripe) provides us with concrete strategies (this is where we move from the “have to’s” to the “how’s,” inching ever closer to “should”) for achieving or maintaining compliance.

But the question I keep coming back to is: how are we supposed to know about these unwritten requirements (we used to call them ghost standards—boo!) if they are, as noted, unwritten? And the reason I keep coming back to this question is that I’ve been seeing a number of findings lately that revolve around what an organization “should” do. And please, I am not necessarily disagreeing with the wisdom engendered in many, if not most, of these findings (we’ll mention a couple of examples in a moment), but there is a point at which the line between what is consultative advice and what is actually required blurs so completely that any tipping point, compliance-wise, is almost completely subjective.

As an example (and remember, I’m not disagreeing with the concept), a recent survey report included a finding because the “clean” side of central sterile was negative to the adjacent corridor, with the qualifier “air flow should always be positive from clean areas to less clean areas.” Concept-wise, I’m down with that (I’m not loving the use of “always” in this type of context—how long is always, or maybe it should be how frequently is always, but I digress), but where in the Joint Commission standards does it say that, apart from the all-encompassing appropriate provision of pressure relationships, etc.? That “should” really undercuts the whole statement. Is this something we “must” do within the context of the standard or are we trying to leverage behaviors by acting like something is deficient when it is not necessarily the case?

Another “should” that came up recently involved the results of a vendor’s testing of the medical gas system. Now you and I both know that our vendors are not always the most efficient when it comes to providing written documentation of their activities. In this particular instance, the testing had been done in June, and the report had been delivered in August, mere moments before the survey started. Within the report, there were some issues with master alarms that required repair work—repair work that had not yet been completed. Now, as near as I can tell, each organization still gets to prioritize the expenditure of resources, etc., presumably based on some sort of risk assessment (that’s a “should” for all you folks keeping track), but the finding in question ends with a resounding statement that the facility “should” have required the vendor to provide a deficiency report at the time of the inspection. Conceptually, you’ll get no argument from me, and as consultative advice I will tell you that it is a positively stupendous idea to know what problems are out there before your testing vendors leave the premises. Remember, you “own” the fixes as soon as they are identified, and if there are delays, you’d best have a pretty gosh-darn good reason for it. In fact, I would have to consider that strategy as a best practice in managing maintenance and testing activities, but where does it say that in the standards?

All we are is dust in the…

One of the critical processes when one embarks upon a program of construction and/or renovation is the management of infection control risks, particularly if the work is to be done in, or adjacent to, occupied patient areas. Now, I’m sure that you are all more than familiar with the infection control risk assessment (ICRA) matrix (you can find one in HCPro’s Infection Prevention Policy and Procedure Manual for Hospitals as well as in a number of locations on the Web). One question I’ve encountered recently is not so much about the risk assessment piece itself, but rather how one determines the amount of oversight (including how frequently IC rounds would be done in construction areas, and who would be qualified to conduct those rounds, etc.) and operational considerations like waste removal (i.e., frequencies and methodologies), cleaning floors (i.e., frequencies and methodologies), what types of walk-off mats to use, and stuff like that.

Now, if we know anything about anything, we know that there is not generally a great deal of guidance when it comes to the specifics of these types of things. And by now, we also know that there’s going to be some sort of risk assessment when it comes to making those decisions. So, the question I put to you folks in the field, in the spirit of sharing: How are you working through these types of operational decisions? Have you done anything that worked really well? Anything that worked so poorly that you get hives just thinking about it? The ICRA will help us determine what we have to do. How then do we take the next step to effectively implementing those identified strategies?

I look forward to hearing from you all, even if it’s to ask pointed questions. Operators are standing by…call now!

Determining the need for a quality report on clinical alarms

Q: Is there a certain standard or EP that speaks specifically to alarms of medical equipment and the requirement for someone to do a report?

A: Years ago, there was an NPSG that related to clinical alarm audibility on the units and ensuring that alarms could be heard at all points on the unit, but this has been gone for some time now. For some reason (relating to the focus of the NPSG, my opinion is that upon closer examination, it was not so much an equipment management failure mode as it was a function of the behaviors of clinicians), its somewhat notorious and exalted status has diminished over time (although, based on information provided at the recent Joint Commission Executive Briefings, there is a sense that concerns surrounding this may be on the rise).


A balancing act – no nets, no problem!

I’m sure you’ve all been discussing the shooting that happened last week at Johns Hopkins, as I have. I don’t know that this changes the landscape all that much – we know this threat exists, and we know that there is only so much preventative medicine that we can employ without turning our facilities into armed camps.

At this point, I am not familiar with a lot of specific detail – sometimes a person’s parent can be the focus of a lot of ill feelings, and sometimes those feelings will prompt an action far in excess of normal behavior. That being said, I trust that you are all establishing a means of continuously identifying workplace violence risks, and establishing response plans with municipal law enforcement.


Implementing workplace violence policies

Howdy, safety profs!

I’ve received a number of inquiries lately looking for workplace violence policies. I figured that if a few folks have questions, that’s indication enough that there may be some other folks out there looking for these elusive policies as well.

With all that said, to be honest, I don’t know that I would advise pursuing policy development. It’s more than likely that any policies you would need to support the management of risks associated with workplace violence are already in existence. The key to compliance is to follow the risk assessment recommendations in the SEA and, for all intents and purposes, conduct a gap analysis based on the elements identified in the SEA.