
Everybody here comes from somewhere: Leveling the post-survey field

Well, if the numbers published in the September Perspectives are any indication, a lot of folks are going to be working through the post-survey Evidence of Standards Compliance process, so I thought I would take a few moments to let you know what has changed since the last time (if ever—perhaps your last survey was a clean one) you may have embarked upon the process.

So, what used to be a (relatively) simple accounting of Who (is ultimately responsible for the corrective action), What (actions were taken to correct the findings), When (each of the applicable actions was taken), and How (compliance is going to be sustained) has now morphed into a somewhat more involved:

  • Assigning Accountability (for corrective actions and sustained compliance)
  • Assigning Accountability – Leadership Involvement (this is for those especially painful findings in the dark orange and red boxes in the SAFER matrix – again, corrective actions and sustained compliance)
  • Correcting the Non-Compliance – Preventive Analysis (again, this is for those big-ticket findings – the expectation is that there will be analysis of the findings/conditions cited to ensure that the underlying causative factors were addressed along with the correction of the findings)
  • Correcting the Non-Compliance (basically, this mashes together the What and When from the old regimen)
  • And last, but by no means least, Ensuring Sustained Compliance

This last bit is a multifocal outline of how ongoing compliance will be monitored, how often the monitoring activities will occur (don’t over-promise on those frequencies, boys and girls; keep it real and operationally possible), what data is going to be collected from the monitoring process, and to whom, and how often, that data is going to be reported.

Now, I “get” the whole sustaining correction “thing,” but I’ve worked in healthcare long enough to recognize that, while our goal may be perfection in all things, perfection tends not to exist within our various spheres of influence. And I know lots of folks feel rather more inadequate than not when they look at the list of findings at the end of survey (really, any survey—internal, external—there’s always lots to find), which I don’t think brings a ton of value to the process. Gee thanks, Mr. Surveyor, for pointing out that one sprinkler head with dust on it; gee thanks, Ms. Surveyor, for pointing out that missing eyewash check. I believe in, and take very seriously, our charge to ensure that we are facilitating an appropriate physical environment for care, treatment, and services to be provided to patients in the safest possible manner. If I recall, the standards-based expectation refers to minimize or eliminate, and I can’t help thinking that minimization (which clearly doesn’t equal elimination) is the more realistic target.

Ah, I guess that’s just getting a little too whiny, but I think you see what I’m saying. At any rate, be prepared to provide a more in-depth accounting of the post-survey process than has been the case in the past.

The other piece of the post-survey picture is the correction of those Life Safety Code® deficiencies or ligature risk items that cannot be corrected within 60 days; the TJC portal for each organization, inclusive of the Statement of Conditions section, has a lot of information/instruction regarding how those processes unfold after the survey. While I know you can’t submit anything until you’ve been well and truly cited for it during survey, I think it would be a really good thing to hop on the old extranet site and check out what questions you need to consider, etc., if you have to engage a long-term corrective action or two. While in some ways it is not as daunting as it first seems, there is an expectation for a very (and I do mean very, very) thorough accounting of the corrective actions, timelines, etc., and I think it a far better strategy to at least eyeball the stuff (while familiarity is said to breed contempt, it also breeds understanding) before you’re embroiled in the survey process for real.

You better start swimming or you’ll sink like a stone…

In its pursuit of continuing relevance in an ever-changing regulatory landscape, The Joint Commission has announced what appears to be a fairly significant change in the survey reporting process. At first blush, it appears that this change is going to make the post-survey process a little simpler, recognizing that simplification of process sometimes ends up not being quite so simple. But as always, I will choose to remain optimistic until proven otherwise.

So the changes in the process as outlined in this week’s missive shake out into three categories: scoring methodology, post-survey follow-up activities, and submission time frames for Evidence of Standards Compliance (ESC). And I have to say that the changes are not only interesting, but appear to represent something of a shift in the framework for administering surveys. Relative to the scoring methodology, it appears that the intent is to do away with the “A” and “C” categories, as well as the designation of whether the performance element is a direct or indirect impact finding. The new process will revolve around a determination of whether a deficient practice or condition is likely to cause harm and, more or less, how frequently the deficient practice or condition is observed. As with so many things in the regulatory realm, this new methodology reduces to a kicky new acronym, SAFER (Survey Analysis For Evaluating Risk), and comes complete with a matrix upon which each deficiency will be placed. You can see the matrix in all its glory through the link above, but it’s basically a 3 x 3 grid with an x-axis of scope (frequency with which the deficiency was observed) and a y-axis of likelihood to result in harm. This new format should make for an interesting-looking survey report, to say the least.

Relative to the post-survey follow-up activities, it appears that the section of the survey report (Opportunities for Improvement) that enumerates those single instances of non-compliance for “C” Elements of Performance will “no longer exist” (which makes sense if they are doing away with the “C” Element of Performance concept). While it is not explicitly noted, I’m going to go out on a limb here and guess that this means that the deficiencies formerly known as Opportunities for Improvement will be “reported” as Requirements for Improvement (or whatever RFIs become in the SAFER model), so we may be looking at having to respond to any and all deficiencies that are identified during the course of the survey. To take that thought a wee bit further, I’m thinking that this might also alter the process for clarifying findings post-survey. I don’t imagine for a moment that this is the last missive that TJC will issue on this topic, so I guess we’ll have to wait and see how things unfold.

As far as the ESC submission timeframes, with the departure of the direct and indirect designations for findings comes a “one size fits all” due date of 60 days (I’m glad it wasn’t a “45 days fits all” timeframe), so that makes things a little less complicated. But there is a notation that information regarding the sustainment of corrective actions will be required depending on where the findings fall on the matrix, which presumably means that deficiencies clustered in the lower left corner of the matrix (low probability of harm, infrequent occurrence) will drive a simple correction while findings in the upper right corner of the matrix will require a little more forethought and planning in regards to corrective actions.

The rollout timeframe outlined in the article indicates that psychiatric hospitals that use TJC for deemed status accreditation will start seeing the new format beginning June 6 (one month from this writing) and everyone else will start seeing the matrix in their accreditation survey reports starting in January 2017. I’m really curious to see how this is all going to pan out in relation to the survey of the physical environment. Based on past practices, I suspect that (for the most part) the deficiencies identified in the EC/EM/LS part of the survey process would mostly reside in that lower left corner of the matrix, but I suppose this may result in focus on fewer specific elements (say, penetrations) and a more concerted approach to finding those types of deficiencies. But with the adoption of the 2012 Life Safety Code®, I guess this gives us something new to wait for…