
But I got the crystal ball, he said!

And he held it to the light…

In their (seemingly) never-ending quest to remain something (I'm not quite sure what that something might be, but I suspect it has to do with continuing bouts of hot water and CMS), our friends in Chicago are working towards modifying the process/documentation for providing post-survey Evidence of Standards Compliance (for the remainder of this piece, I will refer to it by the acronymically inclined ESC). The aim of the changes is to "help organizations focus on detailing the critical aspects of corrective actions taken to resolve" deficiencies identified during survey. Previously, the queries included in appropriate ESC submittals revolved around the following: identifying who was ultimately responsible for the corrective action(s); what actions were completed to correct the finding(s); when the corrective actions were completed; and how you will sustain compliance (that last one being, as they say, the sticky wicket, to be sure).

The future state will be (more or less) an expansion of those concerns, with extra-special consideration for those findings identified as higher-risk Requirements for Improvement (RFIs) based on their "position" in the matrix thingy in your final report (findings that show up in the dark orange and red areas of the matrix). The changes are roughly characterized as delving "deeper into the specifics" of the original gang of four elements, so now we have the following: assigning accountability by indicating who is ultimately responsible for corrective action and sustained compliance (not a big change for that one); assigning accountability for leadership involvement (only for the high-risk findings—whew!) by indicating which member(s) of leadership support future compliance; corrective actions for the findings of noncompliance—this will combine the "what you did" with the "when you finished it"; for high-risk findings, you will also have to provide information on the corrective actions as a function of preventative analysis (this sounds like a big ol' pain in the rumpus room, don't it?); and, finally, an accounting of how you will ensure sustained compliance, which will have to include monitoring activities (including frequency), the type of data to be collected from the monitoring activities, and how, and to whom, the data will be reported.
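If it helps to see all those moving parts in one place, here's a minimal sketch of the expanded submittal as a data structure. TJC doesn't publish anything resembling a schema, so every field name below is my own invention, purely an organizing aid:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ESCResponse:
    """One record per RFI; all field names here are hypothetical."""
    finding: str                      # the cited deficiency
    responsible_person: str           # who is ultimately accountable for correction
    corrective_action: str            # "what you did" combined with "when you finished it"
    completion_date: str
    monitoring_activity: str          # how sustained compliance will be checked
    monitoring_frequency: str         # e.g., weekly, monthly, quarterly
    data_collected: str               # the type of data the monitoring yields
    reported_to: str                  # how, and to whom, the data gets reported
    # Extra freight required only for high-risk (dark orange/red) findings:
    leadership_sponsor: Optional[str] = None   # leadership member supporting future compliance
    preventive_analysis: Optional[str] = None  # why it happened and how recurrence is prevented
```

The two optional fields at the bottom are the additional burden that comes with landing in the hot corner of the matrix; everything above them applies to every finding.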

In the past, there was always the lurking (almost ghoulish) presence of what's going to happen if you have repeat findings from survey to survey, and this new process sounds like it might be paving the way for more obstreperous future survey outcomes. But I'd like to know a little bit more about what might be considered a repeat finding—does it have to be the same condition in the same place, or is it enough to get cited for the same standard/performance element combo? If the former is the case, then I "get" them being a little more fussy about the process (in full recognition that every organization has some repeat-offender tendencies), but if it's the latter, then (insert deity of choice) help us all, 'cause it's probably going to get uglier before we see improvement. Or maybe it will just be repeats in the high-risk zone of the matrix—I think that's also pretty reasonable, though I do think they (the Chicagoans) could do a little better in ensuring consistent approaches/interpretations, particularly when it comes to ligature risks.

All that said, I stand on my thought (and let me tell you, that's not an easy task) that there are no perfect buildings, no perfect physical environments, etc., and that's pretty much supported by what I've seen being cited during surveys—the rough edges are where the greatest number of findings can be generated. And since they only have to find one instance of any condition in order to generate an RFI, the numbers are not in favor of the folks who have to maintain the physical environment. If you're interested in the official notice, the links below will take you to the announcement article, as well as a delightful graphic presentation—oh boy!

You better start swimming or you’ll sink like a stone…

In their pursuit of continuing relevance in an ever-changing regulatory landscape, The Joint Commission announced what appears to be a fairly significant change in the survey reporting process. At first blush, it appears that this change is going to make the post-survey process a little simpler, recognizing that simplification of process sometimes ends up not being quite so simple. But as always, I will choose to remain optimistic until proven otherwise.

So the changes in the process as outlined in this week's missive shake out into three categories: scoring methodology, post-survey follow-up activities, and submission time frames for Evidence of Standards Compliance (ESC). And I have to say that the changes are not only interesting, but appear to represent something of a shift in the framework for administering surveys. Relative to the scoring methodology, it appears that the intent is to do away with the "A" and "C" categories, as well as the designation of whether the performance element is a direct or indirect impact finding. The new process will revolve around a determination of whether a deficient practice or condition is likely to cause harm and, more or less, how frequently the deficient practice or condition is observed. As with so many things in the regulatory realm, this new methodology reduces to a kicky new acronym, SAFER (Survey Analysis For Evaluating Risk), and comes complete with a matrix upon which each deficiency will be placed. You can see the matrix in all its glory through the link above, but it's basically a 3 x 3 grid with an x-axis of scope (the frequency with which the deficiency was observed) and a y-axis of likelihood to result in harm. This new format should make for an interesting-looking survey report, to say the least.
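To make the geometry concrete, here's a toy rendering of that grid. The two axes (scope and likelihood of harm) come straight from the announcement; the specific labels and the color assignments are my own guesses, not TJC's official rendering:

```python
# Toy model of the SAFER matrix: a 3 x 3 grid with scope on the x-axis
# and likelihood of harm on the y-axis. Labels and colors are illustrative guesses.
SCOPE = ["limited", "pattern", "widespread"]   # how often the deficiency was observed
LIKELIHOOD = ["low", "moderate", "high"]       # likelihood to result in harm

def risk_zone(likelihood: str, scope: str) -> str:
    """The farther up and to the right a finding lands, the hotter the zone."""
    score = LIKELIHOOD.index(likelihood) + SCOPE.index(scope)  # ranges 0..4
    return ["green", "yellow", "orange", "dark orange", "red"][score]

print(risk_zone("low", "limited"))       # lower left corner -> "green"
print(risk_zone("high", "widespread"))   # upper right corner -> "red"
```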

Relative to the post-survey follow-up activities, it appears that the section of the survey report (Opportunities for Improvement) that enumerates those single instances of non-compliance for “C” Elements of Performance will “no longer exist” (which makes sense if they are doing away with the “C” Element of Performance concept). While it is not explicitly noted, I’m going to go out on a limb here and guess that this means that the deficiencies formerly known as Opportunities for Improvement will be “reported” as Requirements for Improvement (or whatever RFIs become in the SAFER model), so we may be looking at having to respond to any and all deficiencies that are identified during the course of the survey. To take that thought a wee bit further, I’m thinking that this might also alter the process for clarifying findings post-survey. I don’t imagine for a moment that this is the last missive that TJC will issue on this topic, so I guess we’ll have to wait and see how things unfold.

As far as the ESC submission timeframes go, with the departure of the direct and indirect designations for findings comes a "one size fits all" due date of 60 days (I'm glad it wasn't a "45 days fits all" timeframe), so that makes things a little less complicated. But there is a notation that information regarding the sustainment of corrective actions will be required depending on where the findings fall on the matrix, which presumably means that deficiencies clustered in the lower left corner of the matrix (low probability of harm, infrequent occurrence) will drive a simple correction, while findings in the upper right corner of the matrix will require a little more forethought and planning in regard to corrective actions.
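Carrying the toy matrix one step further, the practical upshot might look something like this; the 60-day due date is from the announcement, but the rule for when sustainment evidence kicks in is pure supposition on my part:

```python
ESC_DUE_DAYS = 60  # a single due date for all findings, per the announcement

def needs_sustainment_detail(likelihood: int, scope: int) -> bool:
    """Hypothetical: likelihood and scope each scored 0-2 (as in the grid above);
    findings toward the upper right (combined score of 3 or more) would owe
    monitoring/sustainment detail rather than just a simple correction."""
    return likelihood + scope >= 3

print(needs_sustainment_detail(0, 0))  # lower left corner: simple fix -> False
print(needs_sustainment_detail(2, 2))  # upper right corner: full plan -> True
```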

The rollout timeframe outlined in the article indicates that psychiatric hospitals that use TJC for deemed status accreditation will start seeing the new format beginning June 6 (one month from this writing) and everyone else will start seeing the matrix in their accreditation survey reports starting in January 2017. I'm really curious to see how this is all going to pan out in relation to the survey of the physical environment. Based on past practices, I suspect that (for the most part) the deficiencies identified in the EC/EM/LS part of the survey process would mostly reside in that lower left corner, but I suppose this may result in a focus on fewer specific elements (say, penetrations) and a more concerted approach to finding those types of deficiencies. But with the adoption of the 2012 Life Safety Code®, I guess this gives us something new to wait for…

One thing leads to another

An interesting development on the survey front this year: it may be merely a blip on the compliance radar screen (I know of two instances in which this happened for sure—but if you folks know of more, please share), but if it signals a sea change in how The Joint Commission is administering surveys, you'd best have your ducks in a row.

So, I’ve heard tell of two instances in which the survey team arrived at an organization with the results of the previous triennial survey clutched in their paws, with the intent being to validate that the actions submitted as part of the Evidence of Standards Compliance (ESC) process did indeed remedy the cited deficiency. Now I think we can agree that the degree to which we can fix something and keep it fixed over the course of 36 or so months can be a bit of a, how shall we say, crap shoot. As we’ve noted in one fashion or another, lo these many years, fixing is easy—keeping it fixed is way difficult.

And so, dear friends, those of you in the survey bucket for 2013 should dig out those survey results from last time, review the ESC submittal, and make sure that what was accepted by TJC as a means of demonstrating compliance with the standards is indeed the condition/practice in place now. And the reason this is so very, very important, just to complete the thought, is that there is a pesky little standard in the APR chapter of your beloved Accreditation Manual (APR stands for Accreditation Participation Requirements, and the standard in question is APR.01.02.01) that requires that "(t)he hospital provides accurate information throughout the accreditation process." So if a surveyor gets to thinking that there may have been some less-than-forthcoming aspect of your action plans, etc., you could be looking at a Preliminary Denial of Accreditation, a most unpleasant state of affairs, I assure you. So let's give those "old" findings at least one more ride around the track and make sure that we've dotted all the "i's" and crossed all the "t's."