Last week, the good folks at The Joint Commission announced the list of the five most challenging standards for hospitals surveyed during the first six months of 2016. (For those of you still reluctant to subscribe to the email updates, you can find the details for all accreditation programs here.) For the purposes of this discussion, the focus will be on the hospital accreditation program, but if you want to talk details specific to your organization and you are not a hospital, just drop a line.
While there has been some jockeying for position (the once insurmountable Integrity of Egress is starting to fade a wee bit—kind of like an aging heavyweight champion), I think we can place this little grouping squarely in the realm of the management of the physical environment:
- EC.02.06.01: safe environment
- IC.02.02.01: reducing the risk of infections associated with medical equipment, devices, and supplies
- EC.02.05.01: utility systems risks
- LS.02.01.20: integrity of egress
- LS.02.01.35: provision and maintenance of fire extinguishing systems
I suspect that these will be a topic of conversation at the various and sundry TJC Executive Briefings sessions to be held over the next couple of weeks or so. It is interesting to note that while Project REFRESH (the survey process's new makeover) has (more or less) star billing (we covered this a little bit back in May), they are devoting the afternoon to the physical environment, both as a straight-ahead session helmed by George Mills and as a function of the management of infection control risks, with a crossover that includes Mr. Mills. I shan't be a fly on the wall for these sessions (sometimes it's better to keep one's head down in the witless protection program), but I know some folks who know some folks, so I'm sure I'll get at least a little bit of the skinny…
I don't think we need to discuss the details of the top five; we've been rassling with them for a couple of years now, and PEP or no PEP (more on the Physical Environment Portal in a moment), I don't believe that there's much in the way of surprises lurking within these most challenging of quintuplets (if you have a pleasant or unpleasant surprise to share, please feel free to do so). And therein, I think, lies a bit of a conundrum/enigma/riddle. As near as I can tell, TJC and ASHE have devoted a fair amount of resources to populating the PEP with stuff. LS.02.01.35 has not had its day in the port-ular sunshine yet, but it's next on the list for publication, perhaps even this month; I'm not sure about IC.02.02.01, though I believe there is enough crossover into the physical environment world that it might even be the most valuable portal upon which they might chortle. And the PEP does not appear to have had a substantial impact on how often these standards are being cited (I still long for the days of the list of the 20 most frequently cited standards; I suspect that list is well populated with EC/LS/IC/maybe EM findings). As I look at a lot of the content, I am not entirely certain that there's much information contained therein that was not already very close to common knowledge, meaning I don't know that additional education is going to improve things. Folks know what they're not supposed to do. And with the elimination of "C" performance elements and the Plans for Improvement process, how difficult is it going to be to find a single
- door that doesn’t latch
- sprinkler head with dust or paint on it
- fire extinguisher that is not quite mounted or inspected correctly
- soiled utility room that is not demonstrably negative
- day in which temperature or humidity was out of range
- day of refrigerator temperature out of range with no documented action
- missing crash cart check
- infusion pump with an expired inspection sticker
- lead apron in your offsite imaging center that dodged its annual fluoroscopy
- missed eyewash station check
- mis- or unlabeled spray bottle
- open junction box
I think you understand what we’re looking at here.
At any rate, I look at this and I think about this (probably more than is of benefit, but what can one do…), and even if you have the most robust ownership and accountability at point of care/point of service, I don't see how it is possible to have a reasonably thorough survey (and I do recognize that there is still some fair variability in the survey "experience") and not get tapped for a lot of this stuff. This may be the new survey reality. And while I don't disagree that the management of the physical environment is deserving of focus during the survey process, I think it's going to generate a lot of angst in the world of the folks charged with managing the many imperfections endemic to spaces occupied by people. I guess we can hope that at some point the performance elements can be rewritten to push towards a systematic management of the physical environment as a performance improvement approach. The framework is certainly there, but it doesn't necessarily tie across as a function of the survey process (at least not demonstrably so).

I guess the best thing for us to do is to focus very closely on the types of deficiencies/imperfections noted above and start to manage them as data, but only to the extent that the data can teach us something we don't know. I've run into a lot of organizations that are rounding, rounding, rounding and collecting scads of information about stuff that is broken, needs correction, etc., but they never seem to get ahead. Often, this is a function of DRIP (Data Rich, Information Poor). At this point, I firmly believe that if we do not focus on making improvements that are aimed at preventing/mitigating these conditions (again, check out that list above; I don't think there's anything there that should come as a surprise), the process is doomed to failure.
As I tell folks all the time, it is the easiest thing in the world to fix something (and we still need to keep the faith with that strategy), but it is the hardest thing in the world to keep it fixed. But that latter “thing” is exactly where the treasure is buried in this whole big mess. There is never going to be a time when we can round and not find anything—what we want to find is something new, something different. If we are rounding, rounding, rounding and finding the same thing time after time after time, then we are not improving anything. We’re just validating that we’re doing exactly the opposite. And that doesn’t seem like a very useful thing at all…
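Since the recurring theme here is managing rounding findings as data rather than as a pile of one-off fixes, a minimal sketch of what that could look like follows. This is purely illustrative; the rounding log entries, the category names, and the `repeat_findings` helper are all made up for the example, not drawn from any TJC requirement.

```python
from collections import Counter

# Hypothetical rounding log: each entry is (location, finding category).
# The categories mirror the kinds of imperfections listed above.
ROUNDING_LOG = [
    ("3 West", "door not latching"),
    ("3 West", "open junction box"),
    ("4 North", "door not latching"),
    ("3 West", "door not latching"),
    ("ED", "unlabeled spray bottle"),
]

def repeat_findings(log, threshold=2):
    """Return finding categories cited at least `threshold` times.

    Repeats are the signal that fixing alone isn't working and the
    underlying process needs attention.
    """
    counts = Counter(category for _, category in log)
    return {cat: n for cat, n in counts.items() if n >= threshold}

print(repeat_findings(ROUNDING_LOG))
# -> {'door not latching': 3}
```

The point is not the tooling; it's that a tally like this separates the "something new" findings from the same thing found time after time after time, which is where the improvement work actually lives.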
In the absence of any new content on The Joint Commission’s Physical Environment Portal (the PEP ain’t none too peppy of late), I guess we’re going to have to return to our old standby for the latest and greatest coming out of Chicago: Perspectives! The August Perspectives has a fair amount of content pertinent to our little circle, so it probably makes too much sense to cover those key items and announcements.
The front page headline (as it should be) relates the ongoing tale of the dearly departing PFI process (which, I suppose, kind of makes this something of an obituary). Effective August 1, 2016, open PFI items will no longer be reviewed by the survey team nor will they be included in the Final Report generated by the survey. All Life Safety chapter deficiencies will become Requirements for Improvement (RFI) with a 60-day submittal window for your Evidence of Standards Compliance (and remember, one of the other TJC practices that departed this year was the “C” performance elements, so all of those pesky Opportunities for Improvement (OFI) at the end of your past survey reports will now become RFIs). Also, only equivalency requests related to survey events will be reviewed. More on that part of the big picture in a moment.
Also in the August Perspectives comes the official print announcement that the requirements of the 2012 Life Safety Code® will not be surveyed until November 1, 2016 (which should make for a very interesting few months in survey land for those of you moving towards the "closing" of your survey window), giving everyone on the regulatory compliance team a chance to complete the online education program and giving CMS time to update the survey forms and K-Tags. Apparently, the self-directed education program takes about 20 hours to complete (you can see the entire CMS memorandum here). The education program includes a pre- and post-test and requires a passing score of 85%. I'm kind of curious about the format (I'm thinking perhaps the classic multiple choice format) and even more curious about whether they would ever make such a thing available to safety and facilities professionals. Presumably this means that whoever comes to your door on Tuesday, November 1 to survey your building will have passed the test. Would it be rude to ask them how they fared?
Next we turn to the "Clarifications and Expectations" column which, for all intents and purposes, is something of a recap of the PFI stuff, with the additional indication that TJC will no longer offer extensions and that the automatic six-month grace period is no longer available. Ostensibly, this means that those of you with open PFIs had probably better start cleaning things up. I'm still waiting to see something (anything?) on the subject of inaccessible fire and smoke dampers; I think I've previously mentioned instances in which CMS has forced the issue of correcting the dampers, but I can't help but think that could be a very big pain in the posterior for some folks. I'd like to think that if these were simple to fix, they would already have been corrected (we wouldn't take advantage of the process, would we?), so this could create a fairly burdensome situation for folks.
For those archivists among you, there is some interesting background on the 60-day time limit. Section §488.28(d) of the Code of Federal Regulations states: “Ordinarily a provider or supplier is expected to take the steps needed to achieve compliance within 60 days of being notified of the deficiencies, but the State survey agency may recommend that additional time be granted by the Secretary in individual situations, if in its judgment, it is not reasonable to expect compliance within 60 days, for example, a facility must obtain the approval of its governing body, or engage in competitive bidding.” Now that does provide a little sense of what will “fly” if one is forced to ask for a time-limited waiver (TLW—another acronym for the alphabet soup of compliance), but it’s tough to say whether any flexibility extends beyond those elements (who would ever have thought that competitive bidding might be helpful!).
Anyway, one thing relating to the SOC/PFI maelstrom (at least tangentially, and not mentioned in the August Perspectives) is the question of whether the presentation of categorical waivers at the beginning of the survey process is still required. Certainly, the effective adoption date of the 2012 LSC (July 5, 2016) might be the tipping point for informing the survey team of any categorical waivers your organization has adopted, but I think the most appropriate cutoff date (if you will) for this practice would be November 1, 2016, when CMS (and its minions) are charged with surveying to the requirements of the 2012 LSC. My overarching thought in this regard is that presenting the waivers to the survey team at the start of the survey certainly doesn't hurt you, and since the 2000 edition of the LSC is still the primary survey reference, it seems most appropriate to continue highlighting the waivers for the time being.
Back to Perspectives: One final EC-related item, for those of you with memory care units, there is specific coverage of the expectations under EC.02.06.01 relative to patient stimulation (or overstimulation), outdoor spaces for patients and residents with dementia, and other environmental elements. While these requirements apply to the Memory Care Certification chapter of the Nursing Care Center manual, again, if you happen to have a memory care unit within your span of control, you might find these expectations/performance elements useful in managing the environment. Even when not required, sometimes there are elements worth considering. After all, improving the patient experience as a function of the physical environment is one of our most important charges.
We’ll see how long this particular screed goes on when we get to the end…
In my mind (okay, what's left of it), the "marketing" of safety and the management of the physical environment is an important component of your program. I have also learned over time that it is very rare indeed when one can "force" compliance onto an organization. Rather, I think you have to coax them into seeing things your way. At this point, I think we can all agree that compliance comes in many shapes, colors, sizes, etc., with the ideal "state" of compliance representing what is easiest (or most convenient) for staff to do. If we make compliance too difficult (from a practical standpoint as well as a conceptual one), we tend to lose folks right out of the gate, and believe you me, we need everybody on board for the duration of the compliance ride.
For instance, I believe one of the cornerstone processes/undertakings on the compliance ride is the effectiveness of the reporting of imperfections in the physical environment (ideally, that report is generated in the same moment—or just after—the imperfection “occurs”). There are few things that frustrate me more than a wall that was absolutely pristine the day before, and is suddenly in possession of a 2- to 3-inch hole! There’s no evidence that something bored out of the wall (no debris on the floor under the hole), so the source of the hole must have been something external to the hole (imagine that!). So you go to check and see if some sort of notification had occurred and you find out, not so much. Somebody had to be there when it happened and who knows how many folks had walked by since its “creation,” but it’s almost like the hole is invisible to the naked eye or perhaps there’s some sort of temporal/spatial disruption going on—but I’m thinking probably not.
I’m reasonably certain that one can (and does) develop an eye/sense for some of the more esoteric elements of compliance (e.g., the surveyor who opens a cabinet drawer, reaches in, and pulls out the one expired item in the drawer), but do we need to educate folks to recognize holes in the wall as something that might need a wee bit of fixing? It would seem so…
At any rate, in trying to come up with some sort of catch phrase/mantra, etc., to promote safety, I came up with something that I wanted to share with the studio audience. I’d appreciate any feedback you’d be inclined to share:
WE MUST BE ABLE:
I’m a great believer in the power of the silly/hokey concept when you’re trying to inspire folks; when you think of the most memorable TV ads, the ones that are funny tend to be the most memorable in terms of concept and product (the truly weird ads are definitely memorable, but more often than not I couldn’t tell you what product was being advertised). I think that as a four-part vision, the above might be pretty workable. What do you think?
One of the most interesting parts of my job is helping folks through the actual Joint Commission survey process. Even as a somewhat distant observer, I can’t help but think that the average survey (in my experience) is about a day longer than it needs to be. Now, I recognize that some of that on-site time is dedicated to entering findings into the computer, so I get that. But there are certain parts of the process, like, oh I don’t know, the EC/EM interview session, that could be significantly reduced, if not dispensed with entirely. Seriously, once you’ve completed the survey of the actual environment, how much more information might you need to determine whether an organization has its act together?
At any rate, I suppose this rant is apropos of not very much, but the thought does occur to me from time to time. So I ask you: is there anybody out there who feels the length of the survey was just right or, heaven forbid, not long enough? As I've always maintained, TJC (or, for that matter, any regulatory survey type, including consultants) tends to look its best when you see it in the rear view mirror as you drive off into the future. I know the process is intended to be helpful on some level, but somehow the disruption never seems to result in a payoff worth the experience. But hey, that may just be me…
Any thoughts you’d like to share would be most appreciated.
I’m sure you’ve all had a chance to look over the April 2014 issue of Perspectives, in which EC and LS findings combined to take seven of the top 10 most frequently cited standards during 2013, with issues relating to the integrity of egress taking the top spot.
At this point, I don’t think there are any surprises lurking within those most frequently occurring survey vulnerabilities (if someone out there in the audience has encountered a survey finding that was surprising, I would be most interested in hearing about it). The individual positions in the Top 10 may shift around a bit, but I think that it’s pretty clear that, at the very least, the focus of the TJC survey process has remained fairly constant these past couple of years.
Generally speaking, my sense about the TJC survey cycle is that specific focus items tend to occur in groups of three (based on the triennial survey cycle, with the assumption being that during each three-year period every hospital would be surveyed; and yes, I do know what happens when you assume…), and I think that 2013 may well represent the end of the first go-round of the intensive life safety survey process (I really believe that 2009-2010 were sort of beta-testing years). So the question I have for you good citizens of the safety world: Has anyone been surveyed yet this year? With follow-up questions of:
- Did you feel you were better prepared to manage the survey process this time?
- Was the survey process different this time?
- More of the same?
- More difficult?
- Less difficult?
I’m hoping to get a good sense of whether the tidal wave of EC/LS findings has indeed crested, so anyone interested in sharing would have my gratitude. Please feel free to respond to the group at large by leaving a comment here or if you prefer a little more stealthy approach, please e-mail me at email@example.com or firstname.lastname@example.org.
During a recent survey, an interesting question was posed to the folks in Facilities, a question more than interesting enough to bring to your attention. The folks were asked to produce a policy that describes how they prioritize corrective maintenance work orders and they, in turn, asked me if I had such a thing. In my infinitely pithy response protocol, I indicated that I was not in the habit of collecting materials that are not required by regulatory standard. Now, I’m still not sure what the context of the question might have been (I will be visiting with these folks in the not too distant future and I plan on asking about the contextual applications of such a request), but it did give me cause to ponder the broader implications of the question.
I feel quite confident that developing a simple ranking scheme would be something that you could implement without having to go the whole policy route (I am personally no big fan of policies—they tend to be more complicated than they need to be and it’s frequently tougher to follow a policy 100% of the time, which is pretty much where the expectation bar is set during survey). I think something along the lines of:
Priority 1 – Immediate Threat to Health/Safety
Priority 2 – Direct Impact on Patient Care
Priority 3 – Indirect Impact on Patient Care
Priority 4 – No patient care impact
Priority 5 – Routine repairs
would work pretty well under most, if perhaps not all, circumstances. The circumstance I can “see” that might not quite lend itself to a specific hierarchy is when you have to run things on a “first come, first served” basis. Now I recognize that since our workforces are incredibly nimble (unlike regulatory agencies and the like), we can re-prioritize things based on their impact on important processes, so the question I keep coming back to is how can a policy ever truly reflect the complexities of such a process without somehow ending up with an “out of compliance with your policy” situation? This process works (or I guess in some instances, doesn’t) because of the competence of the staff involved with the process. I don’t see where a policy gets you that, but what do I know?
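For what it's worth, a five-level scheme like the one above is simple enough to encode directly in a work-order system without anything resembling a policy. The sketch below is purely hypothetical (the `Priority` names, the `WorkOrder` shape, and the sample orders are mine, not anything required by standard); the tie-break on intake order is one way to preserve first come, first served within a priority level.

```python
from dataclasses import dataclass
from enum import IntEnum

# Hypothetical encoding of the five-level ranking above; names are
# illustrative, not drawn from any regulatory requirement.
class Priority(IntEnum):
    IMMEDIATE_THREAT = 1       # Immediate threat to health/safety
    DIRECT_PATIENT_CARE = 2    # Direct impact on patient care
    INDIRECT_PATIENT_CARE = 3  # Indirect impact on patient care
    NO_PATIENT_IMPACT = 4      # No patient care impact
    ROUTINE = 5                # Routine repairs

@dataclass
class WorkOrder:
    description: str
    priority: Priority
    sequence: int  # intake order, for first-come-first-served ties

queue = [
    WorkOrder("Patch wall hole, 3 West", Priority.ROUTINE, 1),
    WorkOrder("OR humidity out of range", Priority.DIRECT_PATIENT_CARE, 2),
    WorkOrder("Blocked egress door", Priority.IMMEDIATE_THREAT, 3),
]

# Dispatch by priority; first come, first served within a level.
queue.sort(key=lambda wo: (wo.priority, wo.sequence))
print([wo.description for wo in queue])
# -> ['Blocked egress door', 'OR humidity out of range', 'Patch wall hole, 3 West']
```

Of course, the competence of the staff doing the re-prioritizing is what makes any such scheme work in practice; the ranking just records the decision.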
I could have sworn that I had covered this last year, but I can find no indication that I ever got past the title of this little piece of detritus, so I guess better late than never.
One of the more interestingly painful survey findings that I've come across hinges on the use of a household item that previously had caused little angst in survey circles: I speak of the mighty tissue paper! There have been any number of survey dings resulting from tissue paper being blown or sucked in the wrong direction, based on whether a space is supposed to be positive or negative. And this lovely little finding has generated a fair amount of survey distress, as it usually (I can't say always, but I don't know of this coming up in a survey in which the following did not occur) drives a follow-up visit from CMS as a Condition-level finding under Physical Environment/Infection Control.
The primary "requirements" in this regard reside under A-Tag 0726 and can be found below. Now I'm thinking that tissue paper might not be the most efficacious measure of pressure relationships, which (sort of; give me a little leeway here) begs the question of whether you should be prepared to "smoke" the doorway/window/etc. for which the tissue paper might not be sensitive enough to the subtleties of the pressures involved. I think it's a reasonable thing to plan for, not least because there can be a whole lot at stake. So, I'll ask you to review the materials below and be prepared to discuss…
(Rev. 37, Issued: 10-17-08; Effective/Implementation Date: 10-17-08)
§482.41(c)(4) – There must be proper ventilation, light, and temperature controls in pharmaceutical, food preparation, and other appropriate areas.
Interpretive Guidelines §482.41(c)(4)
There must be proper ventilation in at least the following areas:
• Areas using ethylene oxide, nitrous oxide, glutaraldehydes, xylene, pentamidine, or other potentially hazardous substances;
• Locations where oxygen is transferred from one container to another;
• Isolation rooms and reverse isolation rooms (both must be in compliance with Federal and State laws, regulations, and guidelines such as OSHA, CDC, NIH, etc.);
• Pharmaceutical preparation areas (hoods, cabinets, etc.); and
• Laboratory locations.
There must be adequate lighting in all the patient care areas, and food and medication preparation areas.
Temperature, humidity and airflow in the operating rooms must be maintained within acceptable standards to inhibit bacterial growth and prevent infection, and promote patient comfort. Excessive humidity in the operating room is conducive to bacterial growth and compromises the integrity of wrapped sterile instruments and supplies. Each operating room should have separate temperature control. Acceptable standards such as from the Association of Operating Room Nurses (AORN) or the American Institute of Architects (AIA) should be incorporated into hospital policy.
The hospital must ensure that an appropriate number of refrigerators and/or heating devices are provided and ensure that food and pharmaceuticals are stored properly and in accordance with nationally accepted guidelines (food) and manufacturer’s recommendations (pharmaceuticals).
Survey Procedures §482.41(c)(4)
• Verify that all food and medication preparation areas are well lighted.
• Verify that the hospital is in compliance with ventilation requirements for patients with contagious airborne diseases, such as tuberculosis, patients receiving treatments with hazardous chemicals, surgical areas, and other areas where hazardous materials are stored.
• Verify that food products are stored under appropriate conditions (e.g., time, temperature, packaging, location) based on a nationally-accepted source such as the United States Department of Agriculture, the Food and Drug Administration, or other nationally-recognized standard.
• Verify that pharmaceuticals are stored at temperatures recommended by the product manufacturer.
• Verify that each operating room has temperature and humidity control mechanisms.
• Review temperature and humidity tracking log(s) to ensure that appropriate temperature and humidity levels are maintained.
Kind of vague, yes indeedy do! Purposefully vague—all in the eye of the beholder. Lots of verification and ensuring work, if you ask me, but this should give you a sense of some of the things about which you might consider focusing a little extra attention.
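As far as the "review the temperature and humidity tracking log(s)" survey procedure goes, the check itself is mechanical enough to sketch. The bands below (68-75°F, 20-60% RH) are common reference values only; the standard your organization has adopted into policy (AORN, the AIA/ASHRAE guidance, state code) governs the actual limits, and the log format here is invented for the example.

```python
# Flag any logged reading outside an acceptable band. These bands are
# assumptions for illustration; substitute your adopted standard's limits.
TEMP_RANGE_F = (68.0, 75.0)
HUMIDITY_RANGE_PCT = (20.0, 60.0)

def out_of_range_days(log):
    """Return the days whose temperature or humidity falls outside the
    configured bands. Each log entry: (day, temp_f, rh_pct)."""
    flagged = []
    for day, temp, rh in log:
        if not (TEMP_RANGE_F[0] <= temp <= TEMP_RANGE_F[1]) or \
           not (HUMIDITY_RANGE_PCT[0] <= rh <= HUMIDITY_RANGE_PCT[1]):
            flagged.append(day)
    return flagged

# Hypothetical OR log: Tuesday's humidity is low, Wednesday's temp is high.
or_log = [("Mon", 70.1, 45.0), ("Tue", 70.3, 18.5), ("Wed", 76.2, 40.0)]
print(out_of_range_days(or_log))
# -> ['Tue', 'Wed']
```

The survey expectation isn't just the log; it's the documented action taken on the flagged days, so anything this kind of check turns up needs a paper trail behind it.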
An interesting topic came across my desk relative to a January 2013 survey, and it pertains to the use of your HVA process as a means of driving staff education initiatives.
During the Emergency Management interview session during this particular survey, the surveyor wanted to know about the organization’s hazard vulnerability analysis (HVA) process and how it worked. So, that’s pretty normal—there are lots of ways to administer the HVA process—I prefer the consensus route, but that’s me.
But then the follow-up question was "How do you use the HVA to educate staff as to the actions they need to take?" Now, when I first looked at that, I was thinking that the HVA process is designed more as a means of prioritizing response activities, resource allocations, and communications to local, regional, and other emergency response agencies, etc., but staff education? Not really sure about that…
But the more I considered it, the more I thought to myself: if you're going to look at vulnerability as a true function of preparedness, then you would have to include the education of staff as to their roles and responsibilities during an emergency as a critical metric in evaluating that level of preparedness. The HVA should not only tell you where you are now, but also give you a sense of where you need to take things to make improvements, and from those improvements, presumably, there will be some element of staff education. A question I like to ask of folks is: "What is the emergency that you are most likely to experience for which you are least prepared?" Improvement does not usually reside in things you already do well or frequently. It's generally the stuff that you don't get to practice as often that can be problematic during real-life events. One example is the management of volunteer practitioners; this can be a fairly involved process, and if you haven't practiced it during an exercise, there may be complexities that get in the way of being able to respond appropriately during an actual emergency. Which is why I recommend running a couple of folks through the volunteer process during an exercise; if you haven't practiced it, what better time?
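To make the "where you are now versus where you need to be" idea concrete, here's a sketch of one common HVA scoring shape, in which relative risk rises with probability and impact and falls with preparedness (the widely used Kaiser Permanente HVA tool works along broadly similar lines, though the details differ). The hazards and scores below are invented for illustration.

```python
# Relative risk on 0..1: probability scaled by the severity that your
# preparedness (education, exercises, resources) has not yet offset.
def relative_risk(probability, impact, preparedness, scale=3):
    """All inputs are on a 0..scale rating; preparedness offsets impact."""
    severity = max(impact - preparedness, 0)
    return (probability / scale) * (severity / scale)

# Invented scores: (probability, impact, preparedness), each rated 0..3.
hazards = {
    "Utility failure": (3, 3, 1),
    "Volunteer practitioner surge": (1, 3, 0),  # rarely exercised
    "Severe weather": (2, 2, 2),
}

# Rank hazards; high risk paired with low preparedness points at the
# education and exercise gaps worth closing first.
ranked = sorted(hazards, key=lambda h: relative_risk(*hazards[h]), reverse=True)
print(ranked)
```

Under this scoring, a well-drilled hazard drops down the list even if it's likely, which is exactly the "most likely for which you are least prepared" lens described above.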
In case you've not heard (I don't see much info on the various listservs I monitor when it comes to the timing of the unannounced survey process), there have been some instances this year in which unannounced Joint Commission surveys have been occurring months earlier than anticipated (nobody has gone outside of their official "window," which opens 18 months prior to the anniversary date of your last triennial survey).
Certainly during 2012, there were some folks who went six weeks or so early, but we’re talking four or five months early. There was some indication that the incredibly reliable nasty weather in the Midwest and Northeast over the last little while has resulted in some schedule juggling by the folks in Chicago (and doing as much traveling as I do, I can well understand the impact of stinky weather). As has become an increasingly familiar mantra, you can’t predict future survey activities/results based on past experiences, but I figured it might be worth sharing the possibility. You all are supremely prepared for your survey I’m sure, but I figured I’d share that bit of info.
One other survey process wrinkle of some note is the tale of an organization that was anticipating a five-day onsite survey and ended up having a four-day survey—with additional surveyors on the team to compress the five days of activities into four days. So, for those of you with five-day surveys who have blocked off Mondays in hopes of maybe blocking out the entire week, there may be a little surprise in store. This has only happened once that I know of, but if anyone out there has a story to share on that front, I’m sure we would all be very interested to hear.
At any rate, as I type this, I am looking out at a very gray day at the airport in Chicago with a forecast of snow. I guess we’re not quite a week into spring, so this must just be a period of transition. Hopefully we transition pretty darn quickly. I do wonder “where those birdies is” (with apologies to Mr. Nash)…
Another perpetually sticky wicket in the survey process (and we've discussed this, oh, once or twice before) is the timeliness of documentation from maintenance and testing vendors and the expectations of how that process has to be managed. During an ASHE-sponsored webinar last fall, George Mills posited the scenario in which there is a delay (delay times can vary, but you probably have a pretty good idea of how long you have to wait for reports to come back from your vendors) in receiving a report for fire alarm testing in which a handful of devices failed during routine testing. If you don't receive the failure information immediately upon its identification by the vendor, what you are saying, in effect, is that it's okay for you not to "know" (there's that word again) how reliable your fire alarm system is for a month while you're waiting for the report. Any of you who think that it is indeed "okay" not to know might want to think about another line of work. From an empirical standpoint, a failed fire alarm device puts the building occupants (patients, staff; you know, those folks) at a greater risk, which is never, never, never a good idea. And what if you don't get the report for six weeks, the failed devices haven't been replaced, and now you're looking at the possibility of having to manage the deficiency with a PFI, an ILSM assessment, the whole magillah? Truthfully, you have better things to do with your time.
Mr. Mills' suggestion (and I think it's a good one, having made it myself at least once or twice in the past) is to ensure (either contractually or otherwise) that any identified deficiencies are in your possession before the vendor "completes" the work. You can set it up so they let you know at the end of each testing day (that would be my preference) or at the end of the engagement, but you have got to have that information in your possession as soon as it can be made available to you. The occupants of your building depend on each and every element of your systems (fire alarm, fire extinguishment, medical gas and vacuum, emergency power; you know that list by heart), and it's your responsibility to see that they are managed appropriately.
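One way to keep the faith on report turnaround is to track it as data, too. The sketch below flags any test whose deficiency report hasn't landed within an agreed window; the one-day window mirrors the end-of-each-testing-day preference above, and the systems, dates, and `overdue_reports` helper are all hypothetical.

```python
from datetime import date, timedelta

# Assumed contractual window: deficiency results due within one day of
# testing (i.e., the end-of-each-testing-day arrangement described above).
REPORT_WINDOW = timedelta(days=1)

def overdue_reports(tests, today):
    """Return systems whose results are late. Each test entry:
    (system, test_date, report_received_date or None if still waiting)."""
    overdue = []
    for system, tested, received in tests:
        if received is None and today - tested > REPORT_WINDOW:
            overdue.append(system)  # still waiting past the window
        elif received is not None and received - tested > REPORT_WINDOW:
            overdue.append(system)  # arrived, but later than agreed
    return overdue

tests = [
    ("Fire alarm, Bldg A", date(2016, 8, 1), None),
    ("Med gas, Bldg B", date(2016, 8, 8), date(2016, 8, 8)),
]
print(overdue_reports(tests, today=date(2016, 8, 10)))
# -> ['Fire alarm, Bldg A']
```

Anything that shows up on a list like this is a conversation with the vendor, not a shrug; the whole point is never being in the position of not knowing.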