Telling Ain’t Training

Whether ensuring GCP compliance or the correct application of a study protocol, training is a critical part of any clinical trial. During the life of a study, there may be several points at which a sponsor requires additional or more intensive training to supplement its initial protocol training. In choosing the most suitable training method, sponsors must identify what they hope to achieve with their training efforts. Is the study complex enough to require a change in behavior? Will sites need to adjust past compliance methods?

Traditional training methods, such as multi-hundred-page slide decks and SIV lectures, can be sufficient for sites with straightforward protocols, but more often than not, training centered on lectures and lengthy material isn’t enough. After all, sites know better than anyone that telling ain’t training.

Training is always provided in preparation for beginning a new protocol, and this is the first point at which a new drug sponsor should look carefully at its training. Companies need to have a clear picture of what the goals of the training are, Jenna Rouse, Chief Experience Officer, advised. For instance, if all that is needed is proof of GCP training for experienced research staff, a simple checklist approach can be sufficient.

But if the training is intended to change behavior, learners will have to practice the behaviors critical to the study’s success before they are in front of a subject; more interactive approaches, such as simulation, may be better suited for this, she said.

And the protocol itself may determine whether a traditional training approach or a more intensive one is most appropriate. For example, if the protocol requires treatments or procedures that differ from the standard of care, physicians, nurses and other research staff will have to overcome existing mental muscle memory in order to absorb the new way of working with patients. In cases like this, the ability to practice in a consequence-free environment using simulation can be a valuable tool to ensure that the protocol is carried out correctly, Rouse said.

Essentially, when a sponsor has a protocol that includes any degree of complexity that requires both short- and longer-term recall, more intensive training may be required, Rouse said, adding, “The level of complexity of the protocol should dictate the complexity of training.”

Visual elements should ideally be combined with quick-reference job aids to trigger easy recollection of specific procedures, she noted.


Protocol deviations, amendments demand response


During the course of a clinical trial, protocol deviations may occur. If a significant deviation—one that is critical to the study’s future—occurs, companies likely need to scrutinize their previous training.

Anytime a protocol is amended, research staff must be educated on the changes as well. This additional training is particularly important if the amendment was made in response to a challenge that put patient safety or data quality at risk, Beth Harper, chief learning officer at Pro-ficiency, said.

Sponsors will usually take action if enrollment is behind or other deadlines are not being met, Harper noted. These interventions are not usually triggered by a quality issue, but they may still result in additional training if that is identified as a way to address the root problem.

The CRA often has a critical role to play, conducting a root cause analysis to figure out the source of a deviation at a site. Issues could include a new coordinator who needs more help or guidance, lack of PI oversight or lack of clarity in the protocol, Harper said. ICH guidance requires a root cause analysis followed by action to address that cause.

“The typical response is that the CRA will retrain,” she said. “That could mean sending a reminder to affected staff, conducting a thorough walk-through of the site or just re-presenting the SIV training. The latter is what usually happens, but it is better for retraining to be more tailored.”

Instead, sponsors should select a new approach designed specifically to change the way research staff conduct themselves, Rouse said. For instance, if a webinar or PowerPoint presentation were used for initial training, simulations of realistic scenarios could be provided for re-training in problematic procedures.

And if simulations had previously been used, companies can look at any metrics from that training and compare them to researcher performance during the study. If conditions on-site are significantly different from those presented in the simulations, this could lead to errors even among researchers who performed well during initial training. For example, a simulation built around dosing for five-year-old children weighing about 40 lbs would not prepare research staff at a site that ended up enrolling teenagers weighing around 120 lbs.

That means close analysis of the protocol to address any points of confusion that were not caught previously would be an important part of developing any additional training, Harper said. New training could be as simple as adding a vignette to the existing scenarios or creating a mini-module focused on the specific problem. For instance, a mini-module might be created just for research pharmacists because the initial training didn’t anticipate certain challenges or gray areas in the pharmacy instructions.

“It is possible that researchers may have made the right decisions in the simulations, but if circumstances were different on site compared to how they were presented in the simulated scenarios, a disconnect could happen,” Rouse said. “It’s necessary to evaluate what is happening at the site to identify these problems and address them via training.”

Above all, companies need to be able to evaluate their training when issues arise at any point in a clinical trial’s life cycle. Ideally, the training should produce a behavior-based result, so it can be determined whether researchers understand how to make the right decisions at key decision points. Traditionally, sponsors and research sites have not had analytics in this area until a trial was underway. In-person or remote lecture-based training, for instance, provides little feedback on how well learners understand and can apply the material.

Use of simulation-based training, on the other hand, can provide analytics before a study even starts, giving companies a baseline to compare to if things do go sideways at some point during the trial.

And even when training is identified as the cause of a problem, it may not be the true root cause, Harper cautioned. Issues with the protocol itself, patient compliance challenges or other factors can affect performance and will not be addressed by retraining.


Training as part of settlements


Finally, companies often invest in additional or big-gun training in the face of regulatory or legal actions. For instance, a warning letter from the FDA, or even just Form 483 observations from an inspection, might raise red flags that training is inadequate. Failure to follow the protocol is consistently the top observation during FDA inspections of clinical research sites.

“With FDA, problems in training would usually be noticed at the submission point, and the drug would then not be approved,” Margaret Richardson, general counsel and head of regulatory affairs at Pro-ficiency, explained. “The sponsor would get that feedback and have to spend lots of money to rectify it at that point.”

And additional training is often required as part of a court settlement or a settlement agreement with a state or federal agency. Court cases or settlements with the DOJ, for instance, will typically include assignment of an ombudsman. The government will review everything to make sure the issues that led to the case are addressed and the identified problem is solved. In such cases, Richardson explained, the settlement will define what counts as adequate retraining, and the training may even be reviewed in advance for adequacy.

Simulation programs like those offered by Pro-ficiency can be useful in these situations because they provide analytics that show not only that research staff completed the training, but also what scenarios they were given, where they succeeded immediately and where they may have needed support to perform optimally.

These sorts of problems may or may not result in a permanent change to a company’s training approach. In the case of a major, systemic problem, for instance, training might undergo a more global change, Richardson suggested. For a more discrete issue, on the other hand, the response may be more limited.

Whether for limited fixes to address specific problems or for development of entirely new training programs, however, companies must understand that a read-and-sign approach will make little, if any, change. Engagement needs to be a central philosophy of clinical research training, as keeping staff interested improves the quality of training. And the use of real-world scenarios that staff can relate to is an important part of both engagement and ensuring transfer of skills to the real world.
