Are Your Sites Failing Your Training? If Not, They Should Be!
Did you know that a recent webinar audience of seasoned clinical research professionals all agreed on the key metric that should define investigative site training success? Can you guess what it was?
It had nothing to do with timely completion of training, accuracy on protocol knowledge checks, or site satisfaction with the training approach. Instead, there was unanimous agreement that site training success should be defined as zero defects. More specifically, zero IMPORTANT protocol deviations. That’s a lofty goal for sure, especially since we know that too many IMPORTANT protocol deviations can sink a trial. But we are convinced that the goal of zero important deviations is absolutely achievable with the right training strategy and methodology. Certainly, this assumes a reasonable study design, but barring insurmountable protocol-related issues, there is really no reason why studies should suffer from “deviation-itis.”
As an industry, we’re getting better at identifying and tracking deviations in real time. Catching and mitigating deviations early is one weapon in our arsenal, but by the time a deviation has occurred, the damage is already done. The subject and/or data integrity may have already been put at risk. It’s like trying to put the toothpaste back in the tube: you may be able to salvage what’s left, but you’ve already wasted that resource (granted, toothpaste isn’t as precious an asset as a clinical trial subject, but you get the point).
The other limitation of linking zero defects to training success is that it’s a lagging indicator. It should still be a key metric, but it doesn’t allow the sponsor or CRO to identify which sites and site staff are most and least likely to deviate, or in which areas deviations are most likely to occur.
Enter protocol simulation training. By simulating the most realistic scenarios that sites will face in the study, you can actually measure how they will perform. Will they make the right decision on:
- What to do if the subject takes a restricted medication just before the randomization visit?
- How to address a temperature excursion of the investigational product?
- Whether an AE is reportable as an SAE or event of special interest?
- What to do if the subject misses a dose, or misses a study visit where a procedure can’t be completed?
Identifying and taking action on potential deviations such as these is one thing, but ideally the training allows the learner to think critically through preventive actions. The simulation scenarios should not only model what can go wrong and assess potential corrective actions but, more importantly, show how the issues can be prevented.
Simulation allows site staff to practice the protocol in a safe environment before real subjects and real data are exposed to risk. There is a well-known adage that real learning takes place when you make a mistake, so we actually want our sites to fail in the training. In fact, failing fast and failing often in the simulation accelerates their protocol learning curve. Conversely, the better sites know the material, the quicker they get through the training and the more their competency in making the correct decisions is reinforced.
Simulation can be done in live Investigator Meeting workshops, during Site Initiation Visits, and through eLearning. Simulated eLearning, however, provides a number of advantages. You can measure how many attempts it takes a learner to reach the correct answer, and these analytics provide a leading indicator of the site’s performance in the trial. eLearning also creates an opportunity to provide feedback and mentoring in a consistent manner whenever a learner makes a mistake.
Here’s to allowing sites to FAIL in the simulated trial so they can SUCCEED in the real one!