All Entries Tagged With: "survey prep"

So many FSAs, so little time…and all we get is MBW

Flexible Spending Account, Federal Student Aid, Food Services of America, Focused Standards Assessment.

So, I am forced to pick one. While I’m sure the lot of them is most estimable in many ways, I suppose the choice is clear: the freaking Focused Standards Assessment (kind of makes it an FFSA, or a double-F S A…what the…).

Just to refresh things a bit, the FSA is a requirement of the accreditation process in which a healthcare organization (I’m thinking that if you weren’t in healthcare, you probably would be choosing one of the other FSAs) reviews its compliance with a selected batch of Joint Commission accreditation requirements. The selections include elements from the National Patient Safety Goals, some direct and indirect impact standards and performance elements, high-risk areas, as well as the RFIs from your last survey—and I know you’ve continued to “work” those Measures of Success from your last survey. Ostensibly, this is very much an “open book” test, if you will—a test you get to grade for yourself and one for which there is no requirement to share the results with the teacher (in this case, The Joint Commission—I really don’t understand why folks submit their results to TJC, but some do—I guess some things are just beyond my ken…).

The overarching intent is to establish a process that enhances an organization’s continuous survey readiness activities (of course, as I see various and sundry survey results, I can’t help but think that the effectiveness of this process would be tough to quantify). I guess it’s somewhat less invasive than the DNV annual consultative visits, though you could certainly bring in consultants to fulfill the role of surveyor for this process if some fresh eyes are what your organization needs to keep things moving on the accreditation front.

I will freely admit to getting hung up a bit on the efficacy of this as a process; much like the required management plans (an exercise in compliance), this process doesn’t necessarily bring a lot of value to the table. Unless you actually conduct a thorough evaluation of the organization’s compliance with the 45 Environment of Care performance elements, 13 Emergency Management performance elements, 23 Life Safety performance elements (15 for healthcare occupancies, eight for ambulatory healthcare occupancies)—and who really has the time for all that—then does the process have any value beyond MBW (more busy work)? I throw the question out to you folks—the process is required by TJC, so I don’t want anyone to get in trouble for sharing—but if anyone has made good use of this process, I would be very interested in hearing all about it.

This is my last piece on the FSA process for the moment, unless folks are clamoring for something in particular. I had intended to list the EPs individually, but I think my best advice is for you to check them out for yourself. That said, I have a quick and dirty checklist of the required elements (minus the EP numbers, but those are kind of etched into my brain at this point). If you want a copy, just email me at smacarthur@greeley.com.

Fear is not sustainable

A Welshman of some repute once noted that “fear is a man’s best friend,” and while that may have been the case in a Darwinian sense, I don’t know that the safety community can rely as much on it as a means of sustainable improvement. I’ve worked in healthcare for a long time and I have definitely encountered organizational leaders who traded in the threat of reprisal if imperfections were encountered in the workplace (and trust me when I say that “back in the day” something as simple as a match behind a door—left by a prickly VP to see how long it stayed there—could result in all sorts of holy heck). That approach typically resulted in various recriminations, fingerpointing, etc., none of which ended up meaning much in the way of sustained improvement. What happened was (to quote another popular bard—one from this side of the pond), folks tended to “end up like a dog that’s been beat too much,” so when the wicked witch goes away, the fear goes too, and with it the driving force to stay one step ahead of the sheriff (mixing a ton of metaphors here—hopefully I haven’t tipped the obfuscation scales).

At any rate, this all ties back to the manner in which the accreditation surveys are being performed, which is based on a couple of “truisms”:

 

  1. There is no such thing as a perfect building/environment/process, etc.
  2. Buildings are never more perfect than the moment before you put people in them.
  3. You know that.
  4. The regulators know that.
  5. The regulators can no longer visit your facility and return a verdict of no findings, because there are always things to find.
  6. See #1.

Again, looking at the survey process, the clinical surveyors may look at, I don’t know, maybe a couple of dozen patients at the most, during a survey. But when it comes to the physical environment, there are hundreds of thousands of square feet (and if you want to talk cubic feet, the numbers get quite large, quite quickly) that are surveyed—and not just by the Life Safety (LS) surveyor. Every member of the survey team is looking at the physical environment (with varying degrees of competency—that’s an editorial aside), so scrutiny of the physical environment has basically evolved (mutated?) since 2007 from a couple hours of poking around by an administrative surveyor to upwards of 30 hours (based on a three-day survey; the LS surveyor accounts for 16 hours, and the other team members doing tracers will account for at least another 16 hours or so) of looking around your building. So the question really becomes how long and how hard will they have to look to find something that doesn’t “smell” right to them. And I think we all know the answer to that…

It all comes back (at least in my mind’s eye) to how effectively we can manage the imperfections that we know are out there. People bump stuff, people break stuff, people do all kinds of things that result in “wear and tear” and while I do recognize that the infamous “non-intact surface” makes it more difficult to clean and/or maintain, is there a hospital anywhere that has absolutely pristine horizontal and vertical surfaces, etc.? I tend to think not, but the follow-up question is: to what extent do these imperfections contribute to a physical environment that does not safely support patient care? This is certainly a question for which we need to have some sense of where we stand—I’m guessing there’s nobody out there with a 0% rate for healthcare-acquired infections, so to what degree can we say that all these little dings and scrapes do not put patients at risk beyond what we can manage? My gut says that the environment (or at least the environmental conditions that I’m seeing cited during surveys) is not the culprit, but I don’t know. As you all know by now (if you’ve been keeping tabs on me for any length of time), I am a big proponent of the risk assessment process, but has it come to the point where we have to conduct a risk assessment for, say, a damaged head wall in a patient room? Yes, I know we want to try and fix these types of conditions, but there are certain things that you can’t do while a patient is in the room and I really don’t think that it enhances patient care to be moving patients hither and yon to get in and fix surfaces, etc. But if we don’t do that, we run the risk of getting socked during a survey.

The appropriate management of the physical environment is a critical component of the safe delivery of healthcare and the key dynamic in that effort is a robust process for reporting imperfections as soon as possible (the “if you see something, say something” mantra—maybe we could push on “if you do something, say something”) so resources can be allocated for corrective actions. And somehow, I don’t think fear is going to get us to that point. We have to establish a truly collaborative, non-knee-jerk punitive relationship with the folks at the point of care, point of service. We have to find out when and where there are imperfections to be perfected as soon as humanly possible, otherwise, the prevalence of EC/LS survey findings will continue in perpetuity (or something really close to that). And while there may be some employment security pour moi in that perpetual scrutiny, I would much rather have a survey process that focuses on how well we manage the environment and not so much on the slings and arrows of day-to-day wear and tear. What say you?

Portal chortling: Who wants to be surveyed at Christmas?

I know that this is typically characterized as a season of giving, but I have somewhat of a huge favor to ask of you folks out there in the depths of the blogosphere, so I hope you will bear with me.

With an almost astonishing regularity, the first of each month continues to bring with it a new module being posted in the Environment of Care portal. For the month of December 2015, the featured topic is the Built Environment, inclusive of elements covered under EC.02.06.01, which (as you may recall) was the #1 most frequently cited standard during the first six months (Freudian typo: When I first typed this passage, I came up with “first sux months”—make of that what you will…).

Since I know a lot of folks have been tapped on this one (both as a function of the published data and my own experiences), I was keen to look over the new material—including the latest fireside chat from our partners in compliance, George Mills, director of engineering at TJC, and Dale Woodin, executive director at ASHE, which covers EP 1 and EP 13 (in separate episodes). One of the interesting things I noticed was, in describing the many and varied findings that are generated under EP 1, a direct comparison was made to OSHA’s General Duty Clause as a function of how this particular EP is being used. Now, the GDC concept as a part of TJC’s survey efforts is certainly not unknown to us (in the “old” days, EC.02.01.01 used to be the catch-all for general safety findings) and basically it comes down to pretty much anything that isn’t quite as it should be (what I have taken to euphemistically describing as imperfections). Could be stained ceiling tiles, could be non-intact flooring, walls, or horizontal surfaces. Could be nurse call cords that are not properly configured (too long, too short, too wrapped around restroom grab bars), could be improperly segregated compressed gas cylinders. The list of possibilities is pretty much infinite.

The second video episode talks about maintaining temperature, humidity, and air-pressure relationships in the “other” locations (pretty much everywhere that isn’t an invasive procedure area or an area that supports invasive procedure areas). I know that there’s been some consternation from findings relating to issues such as pressure relationships in clean utility and soiled utility rooms (clean rooms have to blow and soiled rooms have to suck, so to speak), pressure relationships in pharmacies (positive), laboratories (negative) and so on. There’s some discussion about how these types of conditions might manifest themselves in the environment and the importance of staying on top of these things, particularly during surveys (personal note: my consultative advice is to have an action plan for checking all these various areas that have pressure relationship requirements the moment you learn that “Elvis,” my code name for TJC, is in the building). It is very, very clear that the Life Safety surveyor is going to be checking pressure relationships early on in the survey process—you want to have a very, very good idea of where you stand in the applicable areas.
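The checking itself doesn’t have to be elaborate. As a purely illustrative sketch (the room names and the readings below are invented for the example, not drawn from any code table), here’s the shape of a quick walk-through check: positive rooms should read above corridor pressure, negative rooms below.

```python
# Required pressure relationships relative to the corridor.
# (Illustrative mapping only -- consult your own ventilation tables.)
REQUIRED_RELATIONSHIP = {
    "clean_utility": "positive",   # clean rooms "blow"
    "soiled_utility": "negative",  # soiled rooms "suck"
    "pharmacy": "positive",
    "laboratory": "negative",
}

def out_of_spec(readings):
    """Return room names whose measured differential pressure
    (inches of water column, relative to the corridor) contradicts
    the required relationship."""
    failures = []
    for room, pressure in readings:
        required = REQUIRED_RELATIONSHIP[room]
        if required == "positive" and pressure <= 0:
            failures.append(room)
        elif required == "negative" and pressure >= 0:
            failures.append(room)
    return failures

# Hypothetical readings from a quick round with a handheld manometer:
readings = [
    ("clean_utility", 0.01),
    ("soiled_utility", 0.005),   # positive reading -- should be negative
    ("pharmacy", 0.02),
    ("laboratory", -0.01),
]
print(out_of_spec(readings))  # -> ['soiled_utility']
```

The point isn’t the code, of course—it’s having a written plan of which rooms get checked, in what order, the moment “Elvis” arrives.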

At any rate, the favor I have to ask (and I’m sure I’ve gone on long enough that the favor is blissfully in the past) is for those of you who’ve viewed the contents of the portal (according to TJC figures, there were 48,000 views of the first two modules; I know I account for a couple of those, but clearly others have checked things out, though it might be interesting to see how many of that number are TJC surveyors…), particularly those of you who have been surveyed in the last few months: Has the material actually been helpful? Part of me feels that the materials are presented in such a general fashion that it makes them less useful from a practical standpoint (perhaps the better part of me), but since I don’t have to worry so much about day-to-day stuff anymore, I will freely admit that I’m too far away from it to be able to say. That said, I am really keen to hear if you think they’ve done a good job, not-so-good job, or somewhere in between. Pretty much any sense of whether the material has been helpful would be most welcome (of course, I could ask the same question about this space, as well, so feel free to weigh in—I always like feedback).

As a final note for this week’s epistle, you may be curious to read about what TJC’s leadership thinks about the portal. You may recall a bit of hand-wringing at the beginning of the year, by Mark Pelletier, the COO of accreditation and certification operations at TJC, regarding the recent “spike” in EC/LS findings (you can find my comments, including a link to Mr. Pelletier’s comments from January, here). As we all know very well, the torture in the EC/LS world has continued (presumably until morale is restored), but the EC Portal is being looked upon as “a light at the end” (at the end of what, I’m not sure, as it isn’t specifically indicated). The thing I keep coming back to in my mind’s eye is that the typical list of findings (again, my “imperfections”) comprises the types of conditions and practices that, while not perfect (yes, we are imperfect), do not significantly increase the risks to patients, staff, visitors, etc. If these imperfections are not managed correctly, they could indeed become something unmanageable, but I’m just not convinced that the environment is the big boogie man when it comes to healthcare-acquired infections, which is pretty much the raison d’etre for this whole focus. I keep telling myself that it’s job security, but it frustrates the bejeezus out of me…

Mr. Pelletier’s latest can be found here.

And on that note, I wish you a most joyous holiday season and a safe and inspiring New Year! I may find the urge to put fingers to keys twixt now and the end of the year, but if I do not, please know that it’s taking every ounce of my self-control not to pontificate about something. Consider the silence my gift to you!

Be well and stay in touch as you can!

One score and no years ago: Guess who’s 20?

John Palmer, who edits Briefings on Hospital Safety, among other nifty periodicals, asked me to weigh in on the 20th anniversary of the EC chapter, with a particular emphasis on how (or where) things are now in comparison to the (oh so very dark) pre-EC days. And he did this in full recognition of my tendency to respond at length (imagine that!). At any rate, I decided that these thoughts would be good to share with all y’all (I can’t absolutely swear to all the dates; I think I’m pretty close on all of them, but if there are temporal errors, I take full responsibility…)

Prior to the “creation” of the Environment of Care as a chapter (you can trace the term Environment of Care back to 1989), The Joint Commission had a chapter in the accreditation manual known as Plant Technology and Safety Management (PTSM). The PTSM standards, while significantly more minimalistic than the present-day requirements, did cover the safety waterfront, as it were, but with the advent of Joint Commission’s Shared Visions/New Pathways marketing (you may assume that I am using “marketing” as the descriptor with a little bit of tongue in cheek) of their accreditation services, what I would describe as a modernization of the various standards chapters began, including the “birth” of the EC chapter in 1995. With that, things became a little more stratified, particularly with the “reveal” of the seven EC functions (safety, security, hazmat, emergency, life/fire, medical equipment, utility systems). This raised the profile of the physical environment a bit, but a true concerted focus on the EC really didn’t occur until 2007, when the Life Safety surveyor program was introduced, primarily in response to data gleaned by CMS from validation surveys. The Joint Commission survey process, prior to 2007, really didn’t have a reliable means of capturing life safety and related deficiencies. Since then, the survey focus on the physical environment has continued to grow, to the point that now it very much eclipses the clinical component of the survey process, at least in terms of the number and types of findings.

Are things “better”? I suppose one could make the case that things have improved incrementally over time, but it’s tough to say how much direct influence the EC chapter has had on things (and the subsequent “peeling off” of the Life Safety and Emergency Management chapters). Clearly, the healthcare environment is significantly different than 20 years ago, both in terms of the inherent risks and the resources available to manage those risks. (Increased technology is pretty much a good thing, but reductions in spending, probably not so much. I’m sure you can come up with a pretty good list of pros and cons without too much difficulty.) You could also make the case (purely based on the number of findings…and I think TJC has a dog in that fight) that if hospitals are safer, it’s because of the level of scrutiny. I tend to think that the “true” answer resides in the development of the healthcare safety professional as a vocational endeavor, with the added thought that unsafe places tend not to stay in business for very long these days. So perhaps somewhere in the middle…

Better? Worse? Different…definitely! I will say that I firmly believe that the amount of survey jeopardy being generated at the moment leans towards the hyperbolic; there are certainly organizations that need to get their acts together a little more fully than they do at present. But not every organization that ends up in the manure is completely deserving of that status. I recognize that TJC has to be super-diligent in demonstrating their value to the accreditation process. But being accredited can’t become the be-all, end-all of the process. The responsibility of each healthcare organization (and, by extension, each caregiver) is to take care of patients in the safest possible manner and being attentive to the survey process can’t come at the expense of that responsibility. Sometimes, I fear, it does just that. I could probably say something pithy about job security, but…

If you set things up correctly…they will still find stuff!

Those of you who are frequent readers of this little space are probably getting tired of me harping on this subject. And while I will admit that I find the whole thing a tad disconcerting, I guess this gives me something to write about (the toughest thing about doing the blog is coming up with stuff I think you folks would find of interest). And so, there is an extraordinary likelihood that you will have multiple EC/LS findings during your next triennial Joint Commission visit—and I’m not entirely convinced that there’s a whole lot you can do to prevent that from happening (you are not powerless in the process, but more on that in a moment).

Look at it this way: Do you really think that you can have a regulatory surveyor run through your place for two or three days and at the end “admit” that they couldn’t find any deficiencies? I’ve worked in healthcare long enough to remember when a “no finding” survey was possible, but the odds are definitely stacked against the healthcare professionals when it comes to this “game.” And what amazes me even more than that is when folks are surprised when it happens! Think about it: CMS has been taking free kicks on TJC’s noggin for almost 10 years at this point—because they weren’t finding enough issues during the triennial survey process. BTW, I’m not saying that there’s a quota system in place; although there are certainly instances in which surveyors over-interpret standards and performance elements, I can honestly say that I don’t find too many findings that were not (more or less) legitimate. But we’re really and truly not talking about big-ticket scary, immediate jeopardy kind of conditions. We are definitely talking mostly about the minutiae of the safety world—the imperfections, if you will—the slings and arrows of outrageous fortune that one must endure when one allows humans to enter one’s hallowed halls. People mess stuff up. They usually don’t mean to (though there are some mistakes, and I think you can probably think of some examples in your own halls), but as one is wont to say, feces occurs. And there’s a whole segment of each healthcare organization charged with cleaning up that feces—wherever and however it occurs.

So what it all comes down to is this: you have to know what’s going on in your building and you have to know where you stand as a function of compliance, with the subset of that being that you have to have a robust process for identifying conditions soon enough and far enough “upstream” to be able to manage them appropriately. We’ve discussed the finder/fixer dynamic in the past (here’s a refresher), so I won’t belabor that point, but we need to use that process to generate compliance data. Strictly speaking, you really, really, really need to acquaint yourselves with the “C” Elements of Performance; compliance is determined as a rate and if you can demonstrate that your historical compliance rate is 90% or better, then you are in compliance with that standard/EP. But if you’re not using the surveillance process, the finder/fixer process, the tracer process, the work order process, the above the ceiling permitting process, ad nauseum, to generate data that can be used to determine compliance, then you are potentially looking at a very long survey process. Again, it goes back to my opening salvo; they are going to find “stuff” and if you are paying good attention to what goes on in your organization, then they shouldn’t be able to find anything that you don’t already know about.

The management of the physical environment is, at its heart, a performance improvement undertaking. As a support process for hardwiring ongoing sustained improvement, a process for the proactive risk assessment of conditions in the physical environment is essential. As an example, the next assessment would use the slate of findings from your most recent surveillance rounds to extrapolate the identification of additional risks in the physical environment. For all intents and purposes, it is impossible to provide a physical environment that is completely risk free, so the key focus becomes one of identifying risks, prioritizing the resolution of those risks that can be resolved (immediate and long-term), and developing strategies for managing those risks that are going to require resource planning and allocation over an extended period of time. The goal of the process is to ensure that the organization can articulate the appropriate management of these risks and to be able to provide data (occurrence reporting, etc.) to support the determination of that level of safety. Establishing a feedback loop for the management of risk allows the organization to fully integrate past actions into the improvement continuum. If you think of the improvement continuum as a football field (it is, after all, the season for such metaphors) or indeed any game “environment,” you need to know where you are in order to figure out where you need to go/be. The scrutiny of the physical environment has never been greater and there’s no reason to think that that is going to change any time soon. Your “power” is in preparing for the survey by being prepared to make full use of the post-survey clarification process—yup, they found a couple of doors that didn’t close and latch, a fire extinguisher that missed a monthly inspection or two, and on and on.
Anticipate what they’ll find based on what you see every time you “look” (again, it’s nothing “new” to you—or shouldn’t be) and start figuring out where you are on the grid. That way, they can find what they want (which they will; no point in fighting it anymore) and you can say, thanks for pointing that out, but I know that my compliance rate for doors/fire extinguishers/etc. is 90%, 91%, 92%, etc. We want them to work very hard to find stuff, but find stuff they will (that’s a little Yoda-esque). We just have to know what to do “aftah.”
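For what it’s worth, the arithmetic behind the “C” EP logic is simple enough to sketch. The numbers below are invented for illustration; the point is that each data stream (rounds, work orders, tracers) feeds a compliant-over-total rate that you can hold up against the 90% threshold during clarification.

```python
def compliance_rate(compliant, total):
    """Compliance rate for a 'C' Element of Performance:
    compliant observations divided by total observations."""
    if total == 0:
        raise ValueError("no observations recorded")
    return compliant / total

# Hypothetical surveillance data: (compliant, total inspected).
observations = {
    "fire_doors_latching": (188, 200),
    "extinguisher_monthly_checks": (460, 480),
}

THRESHOLD = 0.90  # the commonly cited 90% bar for "C" EPs

for ep, (ok, total) in observations.items():
    rate = compliance_rate(ok, total)
    status = "compliant" if rate >= THRESHOLD else "below threshold"
    print(f"{ep}: {rate:.1%} ({status})")
```

Nothing fancy—but if the rate isn’t coming out of a real data-collection process (rounds, work orders, the finder/fixer loop), you have nothing to hand the surveyor.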

Same as it ever was… same as it ever was… same as it ever was…

As the back-to-school sales reach their penultimate conclusion, I look back on the year so far and am amazed at how quickly we’ve blown through fully two-thirds of 2015—yow! For a while it seemed like winter was never going to release us from its icy grasp and now we’re looking forward to its return, so I guess we have naught to do but look forward towards the onslaught of 2016. I hope, for all our sakes, it is a kinder and gentler new year.

But before the past little while takes on the rosy hue of nostalgia (as it almost always does), our friends in Chicago have provided an excellent opportunity to reflect on the “sins” of the past by revealing the most frequently cited standards during the first six months of 2015. And to almost no one’s surprise, four out of the top five most frequently cited standards (at the moment, the “reveal” is only for the top five—I guess we’ll find out about the rest of the top 10 at some point) are smack dab in the middle of the management of the physical environment, with the top three most frequently cited standards for hospitals being EC.02.06.01 (#1 with 59% of hospitals surveyed being cited), IC.02.02.01 (#2 and 54%) and EC.02.05.01 (#3 and 53%; looks like a real fight for that #2 spot), all of which reflect elements tying together the management of the physical environment with the control and prevention of infection (not everything cited is in the physical environment/infection control bucket, but from what I can gather, rather a fair amount is related to just that).

At this point (and I fully recognize that this is a rather reiterative statement), I’m going to crawl out on a limb and say that the single greatest survey vulnerability for any (and every) healthcare organization is the management of the surgical/procedural/support environments. The hegemony of this aspect of the survey (and regulatory compliance) process comes very close to defying understanding. At this point, there’s no real surprise that this is an (if not the, and I would argue “the” is the word) area of intense survey scrutiny, so what’s the deal?!? Forty percent of the hospitals surveyed from January to June appear to have done okay on this, or is that number really a red herring? It would not surprise me if 100% of the hospitals surveyed ran afoul of one of the top three. Anybody out there surveyed so far this year that managed to escape relatively scot-free on this?

I’ve certainly done a lot of yammering in this regard over the past few months (years?) and it appears that I am raging against the dying of the light to minimal effect. I have a lot of ideas about this, but I guess I’m putting it out there: has anybody really got this under control? I think we all have a stake in this thing and the sooner we can get our hands on an effective process for managing this, the better. I will admit that it is entirely possible that, particularly given the age of a lot of hospital infrastructure components, this is not going to go away until they stop focusing as much on it. At this point, I haven’t run into too many folks that have been cited under the big three for whom infection rates are anything other than what would normally be expected—though perhaps infection rates are higher than they “should/can” be—I guess we could be in the midst of a paradigm shift on this. I don’t want to have to wait to find out.

Letting the days go by…

Leave it better than you found it!

This past week (and this coming week as well), I’ve been on vacation in Maine (code name: A Beautiful Place by the Sea), which affords me the luxury of observing a lot of human behaviors, some interesting, some not so much. Some winning, and others that just grate.

There’s been a movement to reduce the amount of “invasive” plant species that have, in some instances, overtaken the natural landscape (and no, I’m pretty sure that this reduction is not going to extend to tourists, though I bet there are moments…). So something of a reclamation project is underway, the result of which will (ideally) be a sustainable and less intrusive beautification. Where things go a little awry is in the areas somewhat off the more deliberately beauteous locales, which offer what appear to be too many opportunities for the dark underside of human behavior to hold sway. Each morning, I make a circuit of the area and have noted beer and soda cans tossed into bushes, dirty diapers tossed under those same bushes and all manner of detritus left behind, presumably because the effort to properly dispose of these items was greater than what could be tolerated in the moment. My walk, at least partially, includes collecting some trash (I will admit that I’ve avoided the dirty diapers—I will have to prepare better in the future) along the way, but I have a pretty good sense of where the waste receptacles are along the way, so it’s not like I have to lug the stuff for miles.

At this point, you’re probably asking yourself: What does this have to do with healthcare safety and the myriad related conditions and practices that I might encounter during the workday? Well, the thought that keeps returning to the front of my head goes back to the age-old task of trying to “capture” these conditions at the point at which they occur, or at least when they are identified (yes, it’s another “see something, say something” tale). When we encounter unsafe conditions during rounds—damaged walls, unattended spills, etc.—we “know” that these things did not happen by themselves, so what prevented the originator of the condition from at least saying, “Oh poop, I need to tell somebody about that hole in the wall/spill on the floor so it can be remedied.” Not a particularly difficult thing conceptually, but human behavior-wise, it seems like it is an impossible task. I suppose you could look at it as job security (hahaha!), but having to manage all these little “dings” keeps us away from paying attention to the big and bigger dings that we know are out there. I suspect that I’m probably not supposed to be thinking about this stuff so much when on vacation, but I guess that’s part of my brain that never really shuts off. And don’t get me started about people who leave shopping carts out in the middle of the parking lot at the grocery store (yes, that’s me pushing a line of carts either to the cart corral or back to the store—it is a most consistent manifestation of my OCD). Hope your August is proving to be most splendid!

Oh no, Mr. Bill!

I always view with great interest the weekly missives coming from The Joint Commission’s various house organs, particularly when there’s stuff regarding the management of the physical environment. And one of the more potentially curious/scary “relationships” is that between the good folks in Chicago and the (I shan’t editorialize) folks at the Occupational Safety & Health Administration. They’ve had a nodding acquaintance over the years, but there is evidence in some quarters (I’ve seen a decided uptick in survey findings relating to hazardous materials and waste inventories—as we’ve noted before, a list of your Safety Data Sheets is not going to be enough on its own to demonstrate compliance with the Hazard Communication standard) that concerns relative to occupational health and safety are becoming a target area during Joint Commission surveys.

At any rate, buried in last Wednesday’s action-packed edition of Joint Commission Online, there was an item highlighting OSHA’s updated list of key hazards for investigators to focus on during healthcare inspections.

Now I can’t imagine that the list of key hazards would come as a surprise to anyone in the field (in case you were wondering, they are: musculoskeletal disorders (MSD) related to patient or resident handling; bloodborne pathogens; workplace violence; tuberculosis; and slips, trips and falls—surprise!), as these are pretty typically the most frequently experienced occupational risks in our industry. What remains to be seen, and what I suspect we need to be keeping in mind as the wars for accreditation supremacy continue, is whether this OSHA guidance translates across to TJC survey methods and practices (I don’t think TJC is as “beholden” to OSHA as they are to CMS, but who knows what the future may hold). That said, I don’t think it would be unwise or in any way inappropriate to shine as much “light” as possible on your organization’s efforts to manage these occupational risks. I’m guessing your most frequently experienced occupational illness and injury tallies are going to include at least two or three of the big five (I suspect that TB may be the least frequent for hospitals, though if you count unprotected/unanticipated exposures, the numbers might be a little higher). Perhaps (if you have not already done so) some performance indicators relating to the management of these risks (successful or unsuccessful) might be a worthwhile consideration as we continue through the EC/safety evaluation cycle (I know some of you are doing your evaluations based on the fiscal year cycle, of which many are wrapping as we speak). And remember, there’s no rule that says you can’t develop and implement new indicators mid-cycle. Take a good look at the numbers you have and figure out whether your organization is where it needs to be from a performance standpoint. 
If the numbers are good, it might behoove you to ask the question of whether that level of performance is the result of good design or good fortune (there’s nothing wrong with good fortune, though it does tend to be less reliable than good design). As with so many of our critical processes, the more we can hardwire compliance/good practice, the easier our jobs become. Perhaps that’s an overly optimistic thought, but as I gaze out over Boston Harbor this morning, optimism doesn’t seem misplaced—optimism is good to have when flying!

Sound the alarm…no, wait, silence the alarm…no, wait—what?!?

Now that we have almost reached the summer solstice, I guess it’s time to start thinking/talking about 2016 and what it might bring from an accreditation perspective—it will be here almost before we know it (time flies when you’re having fun—and we’re having too much fun, are we not?).

One of the developments that I am watching with a bit of interest (if only because it is not at all clear how this is going to be administered in the field) is the next step in the clinical alarm National Patient Safety Goal (for those of you keeping score, that’s NPSG.06.01.01 if you need to find it in your accreditation manual—and I’m sure you’re sleeping with that under your pillow…). Presumably, at this point you have covered the elements that are already fully surveyable—establishment of alarm system safety as an organizational priority (pretty simple, that one) and identification of the most important alarm signals based on:

  • input from medical staff and clinical departments (Have you got documentation for that?)
  • risk to patients if the alarm signal is not attended or if it malfunctions
  • whether specific alarm signals are needed or unnecessarily contribute to alarm noise and alarm fatigue
  • potential for patient harm based on internal incident history
  • published best practices and guidelines (Can you say AAMI and ECRI? Sure you can!)

Everyone out there in radioland should have this much of the package in place. Now, it’s time to do something with that process.

Starting January 1, 2016, each organization is on the hook for establishing policies and procedures for managing the alarms identified through the process noted above. The policies/procedures need to address elements such as clinically appropriate settings for alarm signals; when alarms can be disabled; setting, changing, and managing alarm parameters; and monitoring and responding to alarm signals. And, of course, we need to educate staff and LIPs about the purpose and proper operation of the alarm systems for which they are responsible (that’s a pretty good swath of education, I’m thinking).

At any rate, I’m curious about a couple of things: how are you folks coming with this process? And while I understand the importance of the safe use of clinical alarms, how much of a deal is this, really? I completely recognize that in the zero-tolerance world of healthcare in the 21st century, one event that traces back to an issue with the appropriate use of a clinical alarm is one event too many, particularly as a function of patient safety. Perhaps the question should be this: has mandating this process resulted in your hospital being safer? I know this is a “have to,” though there is certainly enough gray to allow for some customization of approach (I suspect that a cookie-cutter approach is not the best strategy—too many different alarms in too many different environments). So what’s this going to look like in the hands of our friends from Chicago when they darken our collective doors? If anyone has feedback on how this is playing during survey, that would be wonderful; even if you just share the story with me (I’ll remove any identifying remarks), I (and the blogosphere) will be forever in your debt.

Happy Flag Week (it hardly seems reasonable to hold it to just a day)!

Opinions are like…

Over time, I’ve developed certain thoughts relative to the management of the survey process, one of which relates to the ever-changing (maybe evolving, maybe mutating) regulatory survey process, and I think it boils down to a couple of basic expectations (at least on my part):

  • You always run the risk of having a surveyor disagree with any (and every) decision you’ve ever made relative to the operational management of risk, particularly as a function of standards-based compliance
  • Your (or indeed any) Authority Having Jurisdiction always reserves the right to disagree with anything they, or anyone else, has ever told you was “okay” to put into place (and this includes plan review for new or renovated spaces)

Recent survey experiences are littered with the remains of practices and conditions that were never cited in the past but in the latest go-round have become representative of a substandard approach to managing whatever risk might be in question. For example, just consider how the survey of the surgical environment has changed (and changed very rapidly, if you ask me) from what was typically a fairly non-impactful experience (there were any number of instances in which the Life Safety surveyor didn’t even dress out to go into the OR proper) to the area generating the top three most frequently cited standards during TJC surveys in 2014. That, my friends, is a whole lot of schwing in the survey process.

The bottom line message is, more or less, based on the adage “Future expectations are not necessarily indicative of past experiences.” You have to look at everything you are doing as a function of how your practices/conditions actually comply with the standards. Just as there are many ways to skin the proverbial catfish (skinning a catfish makes more sense to me in this modern era than skinning a feline), there are many ways to comply with what are typically rather open-ended compliance standards. As long as you can “trace” the practice or condition back to compliance with the standards/performance elements, then, even if you have a surveyor who disagrees with your approach to things, you can feel comfortable that you can “go to the mat” post-survey, using the clarification process to demonstrate how your organization achieves compliance relative to the finding.

As a somewhat related aside, it is important to remember that you are only required to respond to what is actually written in the finding. Very often I run into folks who want to respond to more than what is actually in the report, usually because they remember what the surveyor “said” during the survey. Surveyors, like everyone, have opinions about how and what and where, etc., and they certainly have every right to hold those opinions (sometimes in higher regard than is warranted, but I digress). Opinions are rarely based on an absolute standards-based requirement. So, the tip-off comes in different forms: maybe they say you “should” do something in a certain way, or something similarly non-definitive. They typically stay away from things that you “must” or “have to” do. You “have to” comply with the standards and you “have to” comply with your organization’s policies and procedures, but beyond those points, you have to chart your own course of compliance. You know best what will work to effectively ensure that you have an appropriately managed care environment (and, presumably, the performance data to back up that knowledge).