
AHAP Conference opportunities

The Association for Healthcare Accreditation Professionals (AHAP) is hosting its 6th Annual Conference May 10, 2012 – May 11, 2012 in Orlando, FL. It offers so many amazing opportunities to save money, get expert advice, and show off your hospital a bit. I should also note that if you’re one of the first 50 paid registrants you’ll receive a free full-day ticket to any Walt Disney World® Theme Park*! Download the online brochure to learn more.

So what is it and why am I talking about it? The 6th Annual AHAP Conference brings together survey professionals from across the country to discuss solutions and best practices to achieve continual survey readiness and compliance with ever-changing standards and regulations.

What are the opportunities?

  • Accreditation Specialist Boot Camp
  • Presentation of the first annual Accreditation Professional of the Year award
  • Unique roundtable discussion with representatives from HFAP, DNV, the American Heart Association, and The Joint Commission
  • Exciting new poster event featuring research and best practices from your peers. Find out how to submit a poster and save 50% on your registration.
  • Learn about:
    • Regulatory changes in 2012 and top RFIs: Staying ahead of The Joint Commission and CMS
    • What accreditation professionals need to know about Life Safety Code®
    • To certify or not to certify? Seeking The Joint Commission disease-specific certifications
    • Making the switch from The Joint Commission to DNV: One hospital’s experience with both surveys
    • Understanding tracer methodology and the survey process
    • A practical approach to policy management
    • Suicide Risk: Solutions to rapid assessment, Environment of Care, and documentation issues
    • Understanding hospital recognition programs for optimal cardiovascular and stroke care

Learn more.

*Offer ends March 8th.

Things can only get better…

As we sprint rapidly toward March and beyond (it seems like it was just January!), I suspect that folks are wrapping up their annual evaluations of the objectives, scope, performance, and effectiveness of their environment of care (EC) programs. One of the “open” questions I like to ask during EC interview sessions is: Looking back over the last 12 months or so, how did you manage to improve the management of risk in the physical environment? Or, in the vernacular: What got better over the last year?

This question usually engenders a fair amount of discussion, depending on the group (try to avoid having too many wallflowers in your real EC interview – it’s all about group participation). But the question that I don’t always have time to ask is: Based on the initiatives just described, how do you know that they actually represented improvement? Do you look at specific metrics/benchmarks/performance measures to make that determination? Is it based on impression? If you were tasked with really having to prove the value of the EC program’s efforts, how would you do that?

The other side of that coin: What about initiatives that didn’t work? Realistically, nobody succeeds with everything that might be implemented over the course of time. Sometimes organizational culture is not ready for certain types of changes; sometimes other factors come into play. So, do you have any home runs or ugly strikeouts to share? Minimally, it will serve as a means of identifying (yet again) that the similarities in our various practices generally outweigh the disparities; we are, after all, in this together. Looking forward to hearing how folks are doing! So, in a nutshell:

What got better – how do you know?

What didn’t get better – do you know why?

Manufacturer recommendations?!? We don’t need no stinkin’ manufacturer recommendations…

As you’re no doubt aware, there is some movement afoot relative to the inclusion (or exclusion, depending on your preference and organizational experience) of the risk assessment concept when it comes to the establishment of preventative maintenance frequencies for medical equipment. The Joint Commission has historically encouraged the use of data and past performance to provide a backdrop for the most efficient utilization of clinical engineering resources. However, CMS has been pretty adamant and absolute in their preference for manufacturer recommendations for PM frequencies to be the “be all, end all” source for determining such things.

As we take on the next in our informal series (CMS: what up with them?), we bounce once again to the web–

Click here.

–and find that, lo and behold, there has been some relaxation in terms of scheduled PM frequency. The caveat, at least for the moment, is that although we can judiciously schedule preventative maintenance activities to our heart’s content, we’d best not stray from the manufacturer recommendations for what those activities will include.

Now, off the top of my head, I can’t think of too many instances in which you would modify manufacturer recommendations for such activities, but maybe you can. I’m not sure how much practical effect this will have; my gut says it helps in the long run, since the fundamental change is toward more flexible scheduling (they’d have been looking for us to follow manufacturer recommendations for the PM activities anyway, so this is really nothing new as near as I can tell – please feel free to disabuse me of that notion).

What say all you clinical engineers out there?

You’ve got a new favorite…

Generally speaking, we in Safety Land don’t get too involved with Centers for Medicare & Medicaid Services (CMS) doings until they show up on our doorstep. But sometimes, the Feds weigh in on matters that can have far-ranging implications for safety operations. I think we need go back no further than the turn of 2010 to 2011, when it looked as if CMS was going to turn the whole world into a healthcare occupancy. Fortunately, through the good graces and advocacy of ASHE, that’s a bullet dodged. Bravo.

At any rate, there is a means of tracking interpretations, utterances, and the like—and it’s web-based (your tax dollars actually at work):

https://www.cms.gov/SurveyCertificationGenInfo/PMSR/list.asp

Basically, this site is a repository of all sorts of what we might euphemistically characterize as “CMS Survey and Certification memoranda, guidance, clarifications and instructions to State Survey Agencies and CMS Regional Offices.” (Okay, not my characterization; it’s what CMS calls this stuff.) Certainly not everything found here is germane to safety and the environment, but it is searchable. (I couldn’t offer an opinion yet on how efficient the search capacity might be; to be determined.) The information could be considered a—if not the—final word on what’s happening at the ol’ Centers for Medicare & Medicaid Services. I don’t know that you would need to check it every day (and I can’t quite find a means of signing up for e-mail notifications of new postings), but it is probably worthy of a drop-in from time to time.

And I’ll return to claim your hand – as the King of California

I generally don’t single out any of the myriad potential demographics of this portion of the blogosphere, but The Joint Commission’s January 2012 Perspectives has singled out you good folks keeping the safety faith out in California, based on some state-level legislation promulgated back in 2010.

The focus of the legislation is those folks engaged in CT scanning activities, which I’m going to guess includes just about everyone (the standard applies to ambulatory, critical access, and hospital accreditation). I don’t see this as a particular nuisance for folks. I believe that everyone with a compliant radiation control program is on top of this, but if you’re not—even if outside of California—this new element of performance (EP) might be worthy of consideration moving forward.

EP #17 (an “A” EP) under EC.02.04.03 requires at least an annual measurement of the actual radiation dose produced by each CT imaging system, and further requires that the radiation dose displayed on the system is within 20% of the actual radiation dose measured. Naturally, the dates of these verifications would be documented (and, by extension, made available during survey).
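
For those who like to see the arithmetic spelled out, here’s a minimal sketch (Python, purely for illustration) of what that 20% comparison amounts to; the function name, units, and pass/fail framing are my own assumptions, not anything pulled from the EP language or the California statute.

```python
def displayed_dose_within_tolerance(measured_dose, displayed_dose, tolerance=0.20):
    """Return True if the console-displayed dose is within +/- tolerance
    (20% by default) of the independently measured dose; both values in the
    same units (e.g., mGy)."""
    if measured_dose <= 0:
        raise ValueError("measured dose must be a positive number")
    deviation = abs(displayed_dose - measured_dose) / measured_dose
    return deviation <= tolerance

# Example: a display reading of 23 against a measured 20 is off by 15%,
# so it falls inside the 20% window; a reading of 25 (off by 25%) would not.
print(displayed_dose_within_tolerance(20.0, 23.0))  # True
print(displayed_dose_within_tolerance(20.0, 25.0))  # False
```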

(We’ll be chatting more about what documents and documentation could be considered “reviewable” during survey—it’s a long list.)

Now, a 20% margin is a pretty wide range, I would say. In fact, if there’s anyone out there in Cali who’d care to weigh in, would you mind speaking to how you’re managing this process and what your experiences have been? I’m going to guess the 20% tag is fairly attainable on a regular basis, but maybe not. It’s not really something that I’ve focused on in the past. It does seem that legislation on the coasts tends to ripple across the compliance landscape, so maybe a future concern is best dealt with now.

At any rate, if you have stories to share, by all means, please include us.

Breaker, breaker…

Recently I received a question from a colleague regarding a survey finding, an RFI under EC.02.05.01, performance element numero 7, which requires hospitals to map the distribution of their utility systems. The nature of the finding was that there was an electrical panel in which the panel schedule did not accurately reflect the status of the breakers contained therein.

My guess is that there was a breaker labeled as a “spare” that was in the “on” position, which is a pretty common finding if one should choose to look for such a condition. At any rate, the finding went on to outline that staff were unaware of the last time the mapping of the electrical distribution was verified. The question thus became: How often do we need to be verifying panel schedules, since the standard doesn’t specify and there is no supporting FAQ, etc., to provide guidance?

Now, first, I don’t know that this would be the most appropriate place to cite this condition; my preference would be for EP #8, which requires the labeling of utility systems controls to facilitate partial or complete emergency shutdowns, but I digress. Strictly speaking, any time any work is done in an electrical panel, the panel schedule should be verified for accuracy, which means that any breaker that is in the “on” position should be identified as such on the panel schedule. This is not specifically a Joint Commission requirement, but I think we can agree that the concept, once one settles the matter as a function of logic and appropriate risk management behavior, “lives” in NFPA 70, the National Electrical Code®.

As I noted above, unfortunately, this is a very easy survey finding if the surveyor looks at enough panels; it is virtually impossible to not have at least one breaker in the “on” position that is identified on the panel schedule as a spare or not identified at all. That said, if you get cited for this, you are probably going to have to wrestle with this at some point and your facilities folks are going to have to come up with a process for managing this risk, as it’s really not safe to have inaccurately labeled electrical panels.

As to a desired frequency, without having any sense of how many panels are involved, which would be a key indicator for how often the folks would be able to reasonably assure compliance (a concept not very far away from the building maintenance program [BMP] concept), it’s tough to predict what would be sufficient. That said, the key compliance element remains who has access to the electrical panels. From my experience, the problem with the labeling of the breakers comes about when someone pops a breaker and tries to reset it without reaching out to the facilities folks. Someone just goes flipping things back and forth until the outlet is working again (floor buffing machine operators are frequent offenders in this regard).

From a practical standpoint, I think the thing to do in the immediate term (if it’s not already occurred) is to conduct a survey of all the panels to establish a baseline and go from there, paying particular attention to the breakers that are not properly labeled in the initial survey. Those are the breakers I’d try to secure a little better, just to make sure that they are not accessible to folks who shouldn’t be monkeying around with them. Another unfortunate aspect of this problem is that both EP 7 and EP 8 are “A” performance elements, so it’s a one-strike-and-you’re-out scenario. Certainly worth a look-see, perhaps during hazard surveillance rounds.
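
If it helps to picture the baseline idea, here’s a bare-bones sketch of how those panel findings might be tracked; the structure, field names, and example entry are all hypothetical, offered only to illustrate the approach, not any required format.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class PanelRecord:
    """One entry in a hypothetical panel-schedule baseline survey."""
    panel_id: str
    location: str
    last_verified: date
    discrepancies: list = field(default_factory=list)  # e.g., breakers labeled "spare" but found on

    @property
    def needs_follow_up(self):
        # Panels with discrepancies get priority during hazard surveillance rounds
        return bool(self.discrepancies)

# Hypothetical example entry from an initial baseline survey
panel = PanelRecord("EP-2B", "second-floor electrical closet", date(2012, 3, 1),
                    ["breaker 12 labeled 'spare' but in the on position"])
print(panel.needs_follow_up)  # True
```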

So many panels, so little time…

From the muddy banks of compliance

Let’s break from form a little bit and start with a question:

How often are you (and by you, I mean your organization) screening contracted staff, including physicians, physician assistants, nurse practitioners, etc.?

A recent TJC survey resulted in a finding under the HR standards because the process was being administered on a biennial (every-other-year) cycle. The finding vaguely referenced OSHA guidelines in identifying this deficiency, but the specific regulatory reference point was not provided (though apparently a call to Chicago validated that this was the case). Now, anyone who’s worked with me in real time knows that I have an exhaustive (and, at times, exhausting) curiosity about such matters. The deficiency “concepts” are usually sourced back to a “they,” as in, “they told me I had to do this” or “they told me I had to do that.” I am always, always, always curious as to who this “they” might be and whether “they” were good enough to provide the applicable chapter and verse. The answer, more often than not, is “no.” Perhaps someday we’ll discuss the whimsical nature of the “Authority Having Jurisdiction” (AHJ) concept, but we’ll save that for another day.

At any rate, I did a little bit of digging around to try and locate a regulatory source on this, and in this instance the source exists; however, the standard is not quite as mandatory as one might first presume (if you’re thinking that this is going to somehow wrap around another risk assessment conversation, you are not far off). So, a wee bit of history:

Back in 1994, the CDC issued their Guidelines for Preventing the Transmission of Mycobacterium tuberculosis in Health-Care Facilities (http://www.cdc.gov/mmwr/pdf/rr/rr5417.pdf), which, among other things, advise a risk-based approach to screening. Appendix C speaks to the screening requirements for all healthcare workers, regardless of who they work for; the guidance would be to include contract folks. The risk level is determined via a risk assessment (Appendix B of the Guidelines is a good start for that). So, for a medium exposure risk environment, the CDC recommends annual screening, but for a low exposure risk environment, they recommend screening at time of hire, with no further screening required (unless your exposure risk increases, which should be part of the annual infection control risk assessment).

But in 1996, OSHA issued a directive (http://www.osha.gov/pls/oshaweb/owadisp.show_document?p_table=DIRECTIVES&p_id=1586) that, even while referencing the CDC guidance, indicates annual screening as the minimum requirement even for low-risk settings, with medium-risk folks having semi-annual screening and high-risk folks being screened on a quarterly basis. So, friends, how are you managing folks in your environment, particularly the aforementioned contracted staff? Do you own them or is it the responsibility of their contracted employer? Does this stuff give you a headache when you think about it too much? It sure gives me one…occupational hazard, I guess. At any rate, it’s certainly worth checking to see whether a risk assessment for TB exposure has been conducted. The OSHA guidance document clearly indicates that if you haven’t conducted one, it’s the responsibility of the surveyor to conduct one for you, and I don’t know that I’d be really keen on having that happen.
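
For the scorecard-minded, here’s a rough side-by-side of the screening frequencies as I read the two documents. This is a sketch for illustration only: the category labels are simplified, your own Appendix B-style risk assessment is what actually drives the category, and the source documents remain the final word.

```python
# Simplified summary of TB screening frequencies as described above; verify
# against the CDC guidelines and the 1996 OSHA directive before relying on it.
SCREENING_FREQUENCY = {
    # exposure-risk category: (CDC guideline approach, 1996 OSHA directive minimum)
    "low":    ("baseline at hire, no routine re-screening", "annual"),
    "medium": ("annual",                                    "semi-annual"),
    "high":   ("see the guideline itself",                  "quarterly"),
}

def screening_frequencies(risk_category):
    """Return (CDC approach, OSHA minimum) for a given exposure-risk category."""
    return SCREENING_FREQUENCY[risk_category.lower()]

print(screening_frequencies("low"))  # ('baseline at hire, no routine re-screening', 'annual')
```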

In your eyes – the light, the heat … the chemicals?

A couple of weeks ago, a client was asking me about who should be performing the weekly checks of eyewash stations. A clinical surveyor consultant had given them the impression that this should be the responsibility of maintenance staff. Now, I’m not sure if this direction was framed as a “must” or a “would be a good idea,” but what I can tell you is that there is no specific regulatory guidance in any direction on this topic. I do, however, have a fairly succinct opinion on the topic—yeah, I know you’re surprised to hear that!—which I will now share with you.

Certainly we want to establish a process to ensure the checks will be done when they need to be done. I agree that maintenance folks are typically more diligent when it comes to such routine activities than clinical folks often are. However, from an end-user education standpoint, I think it is way more valuable for the folks who may have to use the device in the area to actually practice its operation. If they do have a splash exposure, they would have a moderately increased familiarity with the location, proper operation, etc., of the device. Ideally, the eyewash will never have to be used because all our engineering controls and PPE will prevent that splash (strictly speaking, the eyewash is a last resort for when all our other safeguards have failed or otherwise broken down).

I’m also a believer (not quite like Neil Diamond, maybe more like Smash Mouth) that providing for the safety of an organization is a shared responsibility. Sure, those of us who call ourselves safety professionals help guide the way. But real safety lives at the point of care/point of service, where everyone works. So it’s only appropriate that each one of us take a piece of the action.

Now be thankful…

As we begin 2012, I am curious as to how folks fared with their EC programs last year (2011). Whether it be blessing, curse, reason to give thanks, or reason to give up (never!): what worked for you, what didn’t work, and what do you feel comfortable sharing with the rest of the safety community?

From my experiences, I witnessed yet another year in which folks were charged with doing more with less, and I have no sense that 2012 will be bringing any wealth of riches to hospital safety programs. Part of the problem is that the safety community has once again proven itself more than adept at finding a way to make things work, make sure folks are safe, and generally make sure the wheels don’t fall off the safety bus. So, to paraphrase that estimable sage, one P. Frampton: I want you to show me the way. The only unique thing about challenges is how we meet them. In the spirit of giving, I exhort you to share your wisdom with this community.

And in exchange? You would have my personal gratitude and my warmest wishes to you and your family for a most joyous New Year. (Hey, I’ve got a budget too…)

I need to know

Another challenge that’s been rearing its ugly little head is the requirement for staff and licensed independent practitioners (LIPs) to describe or demonstrate actions to be taken in the event of an environment of care incident, as well as knowing how to report an environment of care risk. I will freely admit that this one can be most tricky to pull off.

The tricky piece, at least in my estimation, is that any data that would be gathered during survey would be the result of direct interaction with staff in the care environment. For staff, one strategy would be for them to contact their immediate supervisor to report a risk, or to be able to articulate the use of a work order system to notify facilities, biomedical, safety, and/or environmental services of conditions needing resolution. Alternatively, some hospitals have a single phone number for reporting unsafe conditions. Presumably, staff can also speak to their specific roles in emergency response activations such as fire, security, disaster, etc.

As to the LIPs, this task can be exponentially more difficult since, strictly speaking, the expectations of this group are pretty much the same as for the rest of the house. I’m presuming that you have an emergency phone number for reporting codes and fire events; an LIP who can articulate familiarity with that number and those processes would go a long way toward a finding of compliance. At a minimum, they ought to be able to do more than ignore an unsafe condition and should be able to set some sort of reasonably attainable resolution in motion.

Again, I’ve not seen this come up a great deal with the LIPs, though certainly the rest of the cadre of front line staff would be considered targets during a survey. I think the key approach is to very clearly and very simply define what constitutes appropriate responses of staff and practitioners. When The Joint Commission doesn’t specifically define what they mean in a standard, it behooves us to define how compliance works in our organizations.