
What’s lurking with the storage revisions in the EC proposal?

One other item that I found interesting in the proposed revisions to the EC standards was in the revised design and maintenance of the environment (currently EC.8.10, EP #1, soon to be EC.7.01).

The revision states that the “organization provides sufficient storage space to meet patient needs.”

Now you might say, “So what, that’s already in there!” and you wouldn’t be incorrect. But the current EP under EC.8.10 speaks to specific design elements relating to space for patient personal property, while a requirement for “sufficient storage space to meet patient needs” can be extrapolated into the rest of the environment, maybe to include corridor clutter and stuff like that.

Now it may be that the pending Life Safety Code compliance chapter will preclude the use of this EP as a “general duty clause” relating to storage issues in healthcare. I’m guessing that the building maintenance program will take a larger profile in the new LS chapter (and I know there has been some discussion relative to expanding the BMP to include maintaining corridor widths), so perhaps that’s how this will end up as a compliance issue.

That said, I can’t help but think that as I travel around the country, I have yet to see new construction in healthcare that really provides an appropriate “answer” for the storage of patient-support equipment. It’s been a past practice to invoke the facilities master plan concept as a response to regulatory scrutiny of less-than-ideal storage accommodations. This revision for EC.7.01 may represent a ratcheting up of what will be tolerated.

I guess we’ll have to keep an eye on this one.

Proposals for safety, security, and staff education

What’s that, you say? The Joint Commission’s proposed EC revisions combine safety and security?

I’m thinking you can’t be too surprised that emergency management is moving out into its own chapter (cue “She’s Leaving Home” by The Beatles).

With safety and security, some of the critical items are still there (e.g., abduction events, identification, security-sensitive areas, and grounds and equipment). In the pursuit of streamlined compliance, the safety/security changes represent a smaller degree of specific “requirements” in that multiple common items have more or less been piggybacked into a single EP.

For example, the separate elements involving procedures to follow in the event of a security incident and those related to the handling of an abduction event have been rolled into one. Again, nothing Earth-shattering as far as that goes; the revisions just take advantage of the economies of concept (or maybe construct is le mot juste).

Another interesting development in the field review is the potential return of the staff education and competency requirements relative to safety from the haven of the human resources chapter.

To be honest, I don’t know that this is going to be especially helpful; I frequently found that it was easier to get the HR folks to give you more than a nanosecond at orientation when they understood that they had some risk exposure during survey. I am hopeful, but by no means certain, that we won’t lose any ground if this comes to pass, which you can take as encouragement to use the comment period to make mention of this issue.

We’ll chat more about how much time we get to orient and educate staff in the future. Suffice it to say for now, I haven’t run into anyone who was devoting too much time to safety education.

The more things change … defragging the EC chapter

No doubt many of you have heard that the comment period has begun for the proposed changes to the environment of care chapter (in case you missed it, here’s the link).

Before you click open the offerings, my consultative recommendation is to look at the crosswalk last. I looked at it first, and it gave me such a headache that I figured it would take a team from CSI: HCPro to unravel the many mysteries of this crime scene.

NB: This little screed is based on the likelihood of fairly limited morphing of the proposed changes during the comment period. I didn’t find a great deal that was truly objectionable, which was very nice given the apocalyptic slant in the news and current events realm, but I digress.

I suppose this could be reduced to a smoldering (and perhaps moldering) pile of papyrus before this is over, but I suspect not.

By the way, those of you who have been obsessive about including the specific EC numbers in your policies will now have to change them all again. If I may be so bold as to suggest this, just reference “Joint Commission environment of care standards” in your policies rather than specific standard numbers. Don’t abbreviate (those of you who changed all your JCAHO references know what I mean); I’d run with the full language. My gut tells me that the terms “Joint Commission” and “environment of care” will be pretty much standard language in perpetuity (though do say that with fingers well-crossed).

My initial thought is that The Joint Commission’s proposed revisions represent something of a simplification, almost like defragging your hard drive to optimize performance by moving “common” performance elements together.

The best example of this is the proposed EC.1.01.1 in the revised standards, as it delineates the requirement for management plans in one fell swoop as opposed to a performance element in each function section.

By the way, the familiar seven EC management plans dwindle to five in the proposed revisions, with the combining of safety and security, and the removal from the EC chapter of the emergency management requirements. It appears emergency management is coming to a new chapter near you.

More on this later . . .

Joint Commission opens field review for EC revisions

Hi everyone –

It’s Scott Wallask up here at HCPro with a quick note for you. If you haven’t already seen it, The Joint Commission posted a field review of proposed revisions and renumbering to the EC standards for 2009.

A good thing to read before you look at the actual standards revisions is the link to the “Important chapter information.” You’ll see that the emergency management standards are being proposed for their own chapter in the manual, as is the current EC.5.20 for Life Safety Code compliance (sounds like an LS chapter is coming).

Thanks,
Scott Wallask
Senior Managing Editor
swallask@hcpro.com

Ahh, a gray area in the risk assessment process

Let’s look at a less straightforward example of a risk assessment and its possible problems.

Say, for example, you have a wheeled medication control device that’s located in an area that is not completely secured and is near a ground-level exit. Suppose the device is plugged into a wall outlet that will send a signal to the pharmacy (staffed 24/7) if someone unplugs it from the outlet or the data link, and the device is further monitored by an operator at the switchboard during off-shifts when the area is not occupied. The arrangement sounds simple enough.

Now let’s imagine that when The Joint Commission arrives, a surveyor tells you he or she wants to see a demonstration of the alarm to the pharmacy, mostly due to the device’s proximity to that ground-level exit. So, the device is unplugged and then the waiting begins (talk about sweating bullets). Fully 10 minutes elapse before a response from the pharmacy, and now you’re looking at an RFI.

“But,” you tell the surveyor, “we did a risk assessment and we believe that this is an appropriate slate of interventions.” The surveyor, however, is not budging and is not going to be persuaded just because you invoke the risk assessment. It may be something you can overturn on appeal, but could you have done more?

In a perfect world, you’d be able to provide The Joint Commission with performance data that supports a finding of full compliance. So the question then becomes: what kind of performance data could we have for this type of situation? I’m glad you asked.

As part of your schedule for periodic testing of your security systems (you know, testing panic alarms, intrusion alarms, door alarms, all that good stuff), you could also do some field validation of those items for which you’ve done risk assessments.

For the example noted above, I’d be inclined to use an off-shift fire drill (probably third shift, when I know the switchboard staff will be at its “thinnest”) to see if I can get in, unplug the medication control device, and get out, all without interference.

If I can’t get out, then I have the beginnings of a compelling data set to document a successful intervention. I don’t think I’d be inclined to rely on a single attempt to prove my point; I’d try for a few attempts at least.

Even if I couldn’t get out with the device, I might be able to take a bunch of medications, which also sends us back to the drawing board.
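If it helps to keep those attempts in a form you can actually report, here is a minimal sketch of how one might log drill results and roll them up into a summary for the safety committee. Everything in it is hypothetical (the record layout, the field names, and the five-minute response target are mine, not from any particular security system or standard); it's only meant to illustrate the kind of data set I have in mind.

```python
# Illustrative sketch only: the drill records, field names, and response target
# below are hypothetical, not drawn from any particular security system.
from dataclasses import dataclass
from datetime import date


@dataclass
class DrillAttempt:
    """One off-shift attempt to walk off with the medication control device."""
    when: date
    alarm_received: bool      # did the pharmacy/switchboard get the signal?
    response_minutes: float   # minutes until someone responded (if the alarm worked)
    device_removed: bool      # did the "intruder" make it out with the device?


def summarize(attempts: list[DrillAttempt], target_minutes: float = 5.0) -> str:
    """Roll drill results into the kind of summary a safety committee can use."""
    total = len(attempts)
    alarmed = sum(a.alarm_received for a in attempts)
    timely = sum(a.alarm_received and a.response_minutes <= target_minutes
                 for a in attempts)
    prevented = sum(not a.device_removed for a in attempts)
    return (f"{total} attempts: alarm received {alarmed}/{total}, "
            f"response within {target_minutes:g} minutes {timely}/{total}, "
            f"removal prevented {prevented}/{total}")


if __name__ == "__main__":
    drills = [
        DrillAttempt(date(2008, 1, 12), True, 4.0, False),
        DrillAttempt(date(2008, 2, 9), True, 11.0, True),
        DrillAttempt(date(2008, 3, 15), False, 0.0, True),
    ]
    print(summarize(drills))
    # -> 3 attempts: alarm received 2/3, response within 5 minutes 1/3, removal prevented 1/3
```

However you choose to track it, the point is the same: a handful of documented attempts with dates and outcomes carries a lot more weight with a surveyor than “we tested it and it worked.”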

Ultimately, when it comes to doing risk assessments, there are a couple of truisms that ought to be observed as you move through the process:

  1. There are likely going to be multiple interventions that could be employed to handle risks (I know I’ve said this ad nauseam), so pick one and live with it for a while. Once you’re comfortable you’ve got a good sense of it, then look at other interventions. If you employ too many interventions at once, it is nigh impossible to figure out what actually worked. An incremental approach will give you the edge on solidifying improvements. You don’t want to go back a year later (or even less) and find that your improvement didn’t quite hold.
  2. Go back periodically to validate that the expectations you established when you started this thing are still being met (sometimes one must make assumptions going into the risk assessment process, and we know what can happen when we assume). I think we can stipulate for everyday application that if an issue, condition, or practice rises to the point where you invoke the mighty risk assessment, then you want to collect some performance data (and report that data to your safety committee).

It is too easy to assume that everything will be accepted at face value during a survey (it’s nice if it goes that way, but let’s be real here). It’s not enough to say that something is performing acceptably. You need to back it up with supporting data (the scientific method lives on).

Use surveillance rounds, fire drills, whatever; gather as much data as you can and be prepared to present it to a surveyor, your boss, and your boss’ boss. Performance data greatly reduces the likelihood of dispute during survey, and also points you in the direction of further improvements.

Safety hotspots during surveys

What will be big-ticket survey focuses in the coming months?

Based on my experience:

  • Emergency power is going to continue to be big
  • Life safety is a perennial challenge
  • I think we’re going to see increasing attention paid to the management of security-sensitive areas within healthcare

I fear that we are not going to see declines in violent episodes in the healthcare environment, so the responsibility is ours to appropriately manage that risk.

Security video concerns and Spam

As an aside, I saw a documentary not that long ago about security advances in facial and body recognition technology. John Cleese of Monty Python fame was prominently featured.

Regular video footage, though useful, can be defeated via disguise, which is my point with this Cleesian digression. Just remember this little cautionary tale if your security department uses video to monitor suspicious people.

Even though you can’t depend on pictures as an absolute identifier (more on identification technology in the future, with a special guest), it is worth checking the video images during your drills to make sure that you’re getting the quality (angles, clarity, etc.) that will keep you out of hot water when your boss wants to “go to the videotape.”

A loop or noose with risk assessments?

In past discussions relative to risk assessments, I feel like I’ve given short shrift to an important part of the process: closing the loop and making sure it stays closed.

In many cases, it’s not merely enough to have conducted a risk assessment (EC.1.10, EP #4); there is also an expectation that the interventions you identify to manage the risks “…achieve the lowest potential adverse impact on the safety and health…” (EC.1.10, EP #5).

And, at least as far as the scientific method is concerned, the only way you can be sure that you’ve achieved that goal is to collect and analyze performance data relative to the intervention.

For instance, there are a number of ways that you can provide your staff members with access to material safety data sheets. Sometimes it seems like new technologies emerge every day in this realm. Be that as it may, OSHA’s hazard communication standard, like many of the risk management concerns you’re likely to face, is primarily a performance-based undertaking. OSHA doesn’t necessarily tell you how to do it, beyond the goal of ensuring access (see these interpretations of the hazcom standard, 1910.1200).

So long as you can demonstrably meet the requirement of ensuring access, from a compliance standpoint you should be in good shape. That said, I’m sure you have processes in place that can also help you comply with the hazcom standard, such as:

  • Hazard surveillance rounds
  • Spot-checking during fire drills
  • Annual evaluations of the hazardous materials and waste management program

Thus, these activities become the source of data in support of, or in opposition to, your organization’s compliance.

But wait, we’re not done spinning this one . . .

Return of the son of the risk assessment process

In speaking with folks over the last few months, I’ve found there is still a great deal of interest in risk assessments, though I stop short of characterizing it as confusionary (I love to make up words, much like The Colbert Report).

In particular, people want to know how to do a risk assessment, what it needs to look like, what the surveyors are looking for, etc.

Don’t think about this so much as a show-and-tell endeavor, but rather as establishing a process that helps you identify the risks to which your organization is most vulnerable. And the key point relative to survey is that you know best what the risks are and how to evaluate them.

As a word of caution, a process does not usually involve a single step, so please, please, please refrain from presenting your hazard surveillance rounds as analogous to a risk assessment. The rounds are an important part of the process, to be sure (it’s tough to proactively identify those pesky risks unless you go out and look for them), but they’re not the whole process.

Stay safe,
Steve Mac.
smacarthur@greeley.com

A statement of the survey conditions

So, what’s up in the 2007 survey year?

The short (and not at all sweet) answer: environment of care RFIs! And with no end in sight as far as 2008 is concerned.

An interesting phenomenon related to the rise of RFIs in the EC is how they’ve impacted the world of the survey coordinator. But let’s take a look at the level of exposure.

Currently, there are 24 EC standards, which means there are 24 opportunities in the EC by which organizations can receive an RFI (and that number will rise to 31 in 2008 with the newly configured emergency management standards). When you’re dealing with an ever-shrinking threshold for conditional and/or preliminary denial of accreditation (use this link to see the thresholds for your organization: http://www.jcrinc.com/14866/), having this much potential for adding to the RFI “nut” creates concern on the part of survey coordinators everywhere.

And, of course, we have the presence of two standards that can result in conditional or preliminary denial without getting a whiff of a threshold:

  • EC.5.20, EP #5: Sufficient progress toward the completion of your PFIs
  • EC.5.50, all three EPs: Identifying and implementing interim life safety measures

I think that we can stipulate that many, if not most, survey coordinators have a fairly limited comfort zone when it comes to all things great and EC. But a common theme has been bubbling to the top this year: sweeping assurances from the EC folks in the hospital that everything is A-OK, only for the survey team to arrive at a significantly different assessment.

As we continue through the process, it is quite possible that you’ll see more input from resources external to your organization (perhaps, dare I say, in the guise of consultants; please don’t shudder at the thought). And so, it may become a big part of your job to “manage” these resources to the benefit of your practice and your organization.

My best advice, consultative though it may be, is to reach out to the folks charged with managing the survey process in your house. Your input in the decision-making process might be the difference between a torturous review of your program and an opportunity to use this external voice to advocate for your position.