Mitigating Study Risk With Performance Analytics

Increasingly, both sponsors and research sites are turning to various types of performance metrics to support risk-based management of clinical trials. But collecting and analyzing performance data after a trial has begun may not be the best approach; if data on site and researcher performance can be collected and analyzed while site staff are being trained on a protocol, for instance, sponsors have an opportunity to head off key risks before the first patient is enrolled in a study.

The research industry has been paying closer attention to the use of performance analytics to improve the way trials are conducted. In 2017, for instance, ACRP launched an initiative with CRO Analytics to provide insight on performance analytics and how they might improve clinical research, including by reducing costs and shortening timelines.

Avoiding problems that put data at risk or cause adverse events, either of which could undermine a study, requires vigorous risk management, or “the exercise of thinking in advance about study risks and implementing mitigation strategies,” Dawn Niccum, executive director of QA and compliance at inSeption Group, said in a 2021 issue of Bioprocess Online. “It demands stakeholders identify critical study processes and calculate risk associated with those processes.”

And in a 2020 issue of Clinical Researcher, Patrick Hughes, co-founder and chief commercial officer at CluePoints, noted that the ICH guideline on GCPs extends risk-based management to all aspects of clinical trial execution, an approach that “has opened a tremendous opportunity to plan and manage clinical research more effectively and efficiently.” The ICH E6(R2) guideline on GCPs outlines what an ideal risk-based quality management system should include, such as risk identification, evaluation, control and communication, among other features.

“The first step in proactive data monitoring is to identify what is possible to mitigate, eliminate, and accept,” Hughes wrote. “This all forms part of various plans, including those for data, training, monitoring, statistical analysis, safety, medical monitoring, quality, and other functional plans.”

In other words, performance analytics can help reduce risks and improve clinical research. But when are the metrics gathered that form the basis of those analytics? Ideally, sponsors would want to have that information as early in the process as possible. And protocol-specific training could offer the perfect opportunity to gather data on where critical problems—those that can impact data quality and patient safety—are most likely to occur.

And the earlier sponsors can get quality information on what is likely to happen at each site participating in an upcoming clinical trial, the more likely they will be to get everything right from the start. Getting that information up front is a major risk management win.

 

The value of training-generated analytics


Better use of analytics to measure and track how individual sites, as well as specific staff, are performing in terms of properly following the protocol can help sponsors better target their remediation efforts. Necessary retraining could be provided only at those sites and to those individuals that are having deviations, for instance. And that training could focus specifically on the deviations seen.

Without such analytics, however, sponsors often provide remedial training to all staff at all sites rather than targeting only the sites—or the individuals—making the errors.

And that’s not all. Sponsors can also find and address these risks during initial training if they plan well. Providing training that allows research staff to practice their roles and tasks under a given protocol before a study begins can allow sponsors to collect clear data on how well-prepared each site and its staff is to conduct the protocol correctly.

The right training can help identify these areas and better orient both sponsors and sites to mitigate these risks. The key is to select metrics that reveal where risks are most likely to occur and to track the occurrence of risky actions and decisions by clinical research staff.

And among the primary risks that sponsors seek to avoid are protocol deviations, which can have significant impacts on clinical trials. The FDA reported in its annual BIMO metrics report covering 2021 inspectional findings that protocol deviations, or failure to follow the investigational plan, continue to hold the top spot among most-frequent Form 483 observations, a position the category also held in 2020 and for the last several years.

When crafting a protocol, sponsors generally take a prescriptive view of where they think problems are likely to occur. For instance, procedures required under the protocol that differ from the usual standard of care could pose a risk of deviations, because physicians, nurses and other staff may fall back automatically into the way they are used to doing things.

Performance metrics tracked and collected during initial training can produce analytics that allow sites and sponsors to identify and correct problems before they affect study participants, thus reducing the risk of protocol deviations, as well as risks that could directly affect patient health or data integrity.

But traditional forms of training don’t provide a way to measure how well various research staff are able to perform their jobs under a given protocol. They don’t let sponsors see if risk has been mitigated until they start seeing deviations after a study has started. When that occurs, sponsors must engage in those costly and time-intensive remediation efforts. In addition to taking up sponsor time and money, these retraining efforts also can eat up a lot of site time, often without additional payment.

Simulation-based training, on the other hand, lends itself especially well to collecting and presenting this type of data. Pro-ficiency, for example, provides simulation-based training that lets sites practice all parts of a protocol in a consequence-free environment. Every decision users make during a simulation experience is tracked, collected, and presented in actionable performance and compliance reports. Moreover, these reports can provide valuable insights to sponsors, with the ability to identify which sites and which individuals are having issues with a particular part of the protocol before they affect study timelines or budgets.

 

A heat map dashboard is just one example of the reporting capabilities simulation-based training offers. From sites to individual investigators, these dashboards can be invaluable in promptly detecting problematic areas and providing proactive assistance before lagging performance leads to deviations.

Simulation-based training customized for a given study, for instance, can track performance at both the individual and site level. These analytics can help sponsors better identify risk areas related to deviation from the protocol and address them with targeted training before the first patient is enrolled.

Behavior-based performance metrics can help sponsors predict performance by sites and their staff. And these metrics, or analytics, can be used to identify weak sites, under-performing staff, or even potential pain points in the protocol itself. Sponsors can take this information and provide carefully targeted support to the individuals or sites that need it most, heading off risks of protocol deviations before they occur within the actual study.
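
To make this concrete, here is a minimal sketch, in Python, of how training decision logs might be rolled up into a site-by-step risk view of the kind a heat map dashboard presents. The record layout, the field names and the 50% threshold are illustrative assumptions, not Pro-ficiency’s actual schema or reporting logic.

```python
from collections import defaultdict

# Hypothetical training decision log: one record per decision made in a simulation.
# Field names and the threshold below are illustrative only.
decisions = [
    {"site": "Site 101", "trainee": "Coordinator A", "protocol_step": "IP preparation", "correct": False},
    {"site": "Site 101", "trainee": "Coordinator A", "protocol_step": "Dosing visit",   "correct": True},
    {"site": "Site 101", "trainee": "Nurse B",       "protocol_step": "IP preparation", "correct": False},
    {"site": "Site 205", "trainee": "Coordinator C", "protocol_step": "IP preparation", "correct": True},
    {"site": "Site 205", "trainee": "Coordinator C", "protocol_step": "Dosing visit",   "correct": True},
]

def error_rates(records, key):
    """Return {(key value, protocol step): error rate} over first-attempt decisions."""
    totals, errors = defaultdict(int), defaultdict(int)
    for rec in records:
        cell = (rec[key], rec["protocol_step"])
        totals[cell] += 1
        if not rec["correct"]:
            errors[cell] += 1
    return {cell: errors[cell] / totals[cell] for cell in totals}

RISK_THRESHOLD = 0.5  # flag cells where half or more of first attempts were wrong

site_heatmap = error_rates(decisions, "site")
for (site, step), rate in sorted(site_heatmap.items()):
    flag = "REVIEW BEFORE FIRST PATIENT IN" if rate >= RISK_THRESHOLD else "ok"
    print(f"{site:>8} | {step:<15} | {rate:.0%} incorrect | {flag}")
```

Aggregated this way, the output points a sponsor at a specific cell, such as a single site struggling with investigational product preparation, rather than prompting blanket retraining across every site and every topic.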

 

Pretesting of staff decisions


This type of training system can be used to determine staff’s first responses to instructions on how to conduct protocol procedures, and to make corrections if their first actions are incorrect.

When determining what metrics to track, sponsors should think through the elements of a study that will most impact its success and what the greatest impact of noncompliance could be. For instance, deviations like lack of an initial or signature on some paperwork are easy to fix, while deviations in dosing or patient procedures would cause more problems. Important considerations are whether lab techs and other staff are handling the investigational product correctly and whether patient visits and tests are conducted correctly.

The analytics can help sponsors see where sites’ natural inclinations lie in areas of risk. For instance, is there a particular procedure that they tend to miss the mark on? This will help sponsors drill down into how staff are handling their protocol-specific roles and allow for highly targeted remedial work, if any is needed.

This information allows sponsors to tell which sites or coordinators struggle with correctly applying inclusion/exclusion criteria when enrolling patients, or which sites have problems with the dosing regimen under the protocol. Sponsors can then address each site’s specific weaknesses to uncover the root of the problem and provide additional training to avoid problems when the study begins.

For instance, a protocol might include a complicated preparation procedure for the investigational drug that states the product can’t be shaken, but must be undulated to mix – and if bubbles are seen, it cannot be used. If a researcher’s first response to practicing the procedure is to shake the product, see bubbles and set it down, that can indicate to the sponsor that this staff member did not fully understand the prep instructions. 

Analytics can help point out weak areas in a protocol. In the example above, if the investigational product prep procedure is a challenge at multiple sites, that could be a signal to the sponsor that extra effort is needed in this part of the training before the trial starts.

In short, customized simulation-based training that measures carefully selected performance metrics can provide analytics that allow sponsors to identify key areas of risk, particularly regarding protocol deviations that could impact patient safety or data quality and integrity. Not only does this reduce the risk of citations for protocol deviations once a study is underway, it also can save time and money that would otherwise be spent on re-training when deviations begin occurring with real study participants.

Analytics allow both sponsors and sites to understand problems and make better plans to fix them, taking less time away from the clinical trial and reducing overall study risk.

 

To learn more about a training approach which integrates predictive analytics, visit http://proficiency1.wpenginepowered.com/prescriptive-analytics.

Correcting Site Enrollment In 3 Steps

Patient enrollment is a common subject of discussion among both sponsors and research sites. Challenges to enrollment can be myriad, and the root causes are hard to pin down. But effective enrollment is critical to the ultimate success of any clinical trial, and there are steps that research sites and sponsors alike can take to ensure that each site is capable of enrolling sufficient patients before they sign on for a particular clinical trial.

In theory, sponsors should be picking sites that have known patients who fit a protocol’s inclusion/exclusion criteria. But that does not always happen, for a variety of reasons. For instance, a site may have access to many patients with a specified condition but still struggle to enroll them in accordance with the protocol requirements.

According to some estimates, fewer than 7% of sites enroll the patient numbers they promise, even though sponsors tend to cut site estimates of available patients by one-third to one-half to account for overestimation.

More sophisticated sites already do this kind of evaluation. With tighter personnel resources, sites have to be more discriminating; they cannot afford to take on studies that will sit in contract approval for a year without knowing whether they actually have the patients, and many will not move forward without validating feasibility first. Some sites wait for sponsors to agree to pay for this work; others simply skip it, even though ICH makes it their obligation. The practice is changing, but slowly.

 

Conduct enrollment feasibility validation


Savvy sites will validate enrollment feasibility by conducting their own data mining against the inclusion/exclusion criteria established in a protocol. This should include an evaluation of how many patients meet the trial criteria out of the total number with the target condition seen during a particular period, such as a month. This can include a review of outreach partners like affiliated healthcare systems and patient advocacy groups, if needed.
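
As a rough illustration of that data-mining step, the sketch below (in Python) counts how many patients seen in a given period would clear a protocol’s inclusion/exclusion criteria and shows which criterion screens out the most people. The patient records, criteria and cutoffs are hypothetical examples, not drawn from any real protocol.

```python
# Hypothetical feasibility check: how many of last month's patients with the
# target condition would actually qualify under the protocol's I/E criteria?
patients = [
    {"age": 54, "hba1c": 8.2, "on_insulin": False, "egfr": 72},
    {"age": 71, "hba1c": 7.1, "on_insulin": True,  "egfr": 48},
    {"age": 47, "hba1c": 9.4, "on_insulin": False, "egfr": 90},
]

# Each criterion is a (label, test) pair; labels and cutoffs are illustrative only.
criteria = [
    ("Age 18-65",         lambda p: 18 <= p["age"] <= 65),
    ("HbA1c 7.5-10.0%",   lambda p: 7.5 <= p["hba1c"] <= 10.0),
    ("Not on insulin",    lambda p: not p["on_insulin"]),
    ("eGFR >= 60 mL/min", lambda p: p["egfr"] >= 60),
]

eligible = [p for p in patients if all(test(p) for _, test in criteria)]
print(f"{len(eligible)} of {len(patients)} patients seen this month appear eligible")

# Per-criterion attrition shows which requirement screens out the most patients.
for label, test in criteria:
    passing = sum(1 for p in patients if test(p))
    print(f"  {label:<20} passed by {passing}/{len(patients)}")
```

A yield figure like this, along with the per-criterion attrition, is the kind of evidence a site can keep for internal go/no-go decisions or share back with the sponsor.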

It is noteworthy that these activities are included in ICH requirements, which state that research sites must provide evidence of their ability to meet the demands of a clinical trial, including its enrollment goals.

Information gleaned in this way can be used strictly internally to determine if a particular trial is viable, or it may be shared back with the sponsor, possibly with suggestions on modifications to the inclusion/exclusion criteria that might boost the number of potential patients the site could provide while still meeting the demands of the protocol.

If sites take that approach, they must be sure to have data to back up their recommendations. Regardless, however, those suggestions may not be accepted. Safety or regulatory considerations may require that certain exclusions be included in a protocol, for instance, and the sponsor can’t do anything about that.

And despite the ICH requirement and the intrinsic value in these activities, some sites may opt to take on a clinical trial even if a feasibility validation shows scant numbers of viable patients; sites can be concerned that they wouldn’t be selected for future trials if they pass on one.

Another consideration is how—and whether—sites are compensated for these efforts by trial sponsors. Some sponsors and CROs do require an enrollment feasibility validation step and include compensation for the time and effort required to complete that evaluation. However, this is not universal, leading some sites to skip this step simply because they cannot afford to have their key staff spend time on uncompensated work.

 

Develop an enrollment strategy


If a site determines that it can indeed meet the enrollment needs of a protocol, it should develop an enrollment strategy that maximizes the resources it has, including automating some steps or having clerical staff conduct simple yes/no reviews of criteria.

In a nutshell, sites must filter through potential patients to see which do and do not meet the protocol inclusion/exclusion criteria; however, inefficiencies in this process are rampant. Some of these decisions are simple—such as whether a patient does or does not have the specified disease state—and can potentially be handled by an algorithmic review of health records and/or clerical staff using simple keywords.

It’s not common for sites to think about segmenting the enrollment criteria in their workflow. Many sites just cut-and-paste the enrollment criteria into a list or spreadsheet without considering the order in which they review each criterion. For larger sites with expansive IT capabilities, AI could be applied to electronic records to screen patients out based on simple, clear-cut criteria like age or stage of disease. Similarly, a medical assistant could pre-screen for some inclusions.

At some point, higher-level staff such as PIs or coordinators will need to be involved, but if other resources can be brought to bear to reduce the number of patients those employees must review, they can focus on the tougher decisions and more detailed screening, initial testing and informed consent procedures, which often occur concurrently.

But just as they struggle with the human resources side of feasibility validation, sites cannot afford to have key staff devote too many FTE hours to patient screening. If only one patient out of 100 screened ultimately qualifies, for instance, that can add up to a lot of hours spent for very little return.

Criteria can be batched into items that can be prescreened by AI, medical assistants or clerical staff, while more nuanced decisions—such as concomitant conditions or risk combinations—are left to an investigator or coordinator. By spreading the workload in this way, sites can increase the efficiency of their process for identifying and enrolling prime candidates.
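
One way to picture that division of labor is as a tiered filter: objective yes/no items are resolved first by clerical staff or an automated records query, and only the survivors are queued for coordinator or PI review. The sketch below, in Python, is purely illustrative; the tier assignments, field names and criteria are assumptions made for the example.

```python
# Hypothetical tiered pre-screening: cheap, objective checks run first so that
# coordinators and PIs only review patients who survive the simple filters.
patients = [
    {"id": "P-001", "age": 34, "diagnosis_confirmed": True,  "notes": "on beta blocker"},
    {"id": "P-002", "age": 12, "diagnosis_confirmed": True,  "notes": ""},
    {"id": "P-003", "age": 58, "diagnosis_confirmed": False, "notes": ""},
    {"id": "P-004", "age": 45, "diagnosis_confirmed": True,  "notes": "prior study participant"},
]

def tier1_clerical(p):
    """Simple yes/no items a medical assistant or records query can resolve."""
    return p["age"] >= 18 and p["diagnosis_confirmed"]

def needs_clinical_judgment(p):
    """Anything nuanced (concomitant meds, risk combinations) goes to the coordinator/PI."""
    return bool(p["notes"])

coordinator_queue, auto_excluded = [], []
for p in patients:
    if not tier1_clerical(p):
        auto_excluded.append(p["id"])
    else:
        # Survivors of tier 1 go to higher-level staff, flagged if clinical judgment is clearly needed.
        coordinator_queue.append((p["id"], needs_clinical_judgment(p)))

print("Excluded at tier 1:", auto_excluded)
print("For coordinator/PI review:", coordinator_queue)
```

The point is not the code itself but the ordering: the cheapest checks run first, so the most expensive reviewers see the shortest list.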

Technology can play a role here, as well. Challenges like the inability to log into the CRF system because it is not yet finalized, or having to log into multiple systems to complete screening and enrollment tasks, can lead to an amount of work that is disproportionate to the payment the sponsor provides. A carefully considered strategy that plans patient screening and enrollment activities around availability of all necessary technical systems can ease this burden.

 

Provide protocol-specific enrollment training


And sponsors can help with workflow by keeping it in mind when developing protocol-specific training for their sites. That training could suggest ways in which sites could batch patient reviews to improve the efficiency of ruling potential patients in or out.

Carefully targeted training is especially important in the current environment of frequent staff turnover at research sites. A WCG survey of 50 site leaders conducted in 2021 indicated that nearly one in five sites has staff turnover rates greater than 30%.

And sponsors need to think hard about how they provide training to sites, Jenna Rouse, chief experience officer at Pro-ficiency, said, because one size does not fit all.

“You have to inform sites the right way, using something that helps visualize and absorb complex enrollment standards, as well as what your expectations are for enrollment,” she cautioned. “Enrollment strategies are more effective than enrollment criteria lists. Helping sites with that will help them enroll more effectively.”

In other words, rather than simple lists of inclusion and exclusion criteria presented to a site, sponsors should strive to provide clear illustrations of those criteria, how they relate to each other and how they relate to trial schedules and visit timelines.

An important consideration is that sites are expected to implement enrollment criteria over a substantial period of time, often with a degree of lag between when they receive training and when they begin enrolling patients in a trial. Providing aids to help remind site staff of how they should conduct enrollment activities and what the sponsor’s expectations are can be a helpful part of any enrollment strategy. Useful job aids could include checklists aligned with the workflow, as well as reminders about which criteria do and do not need to be reviewed by a physician or research nurse.

And while it may take more effort up front to provide this sort of training, it can also help sponsors avoid having to retrain sites or deal with enrollment shortfalls after a study has begun.

In fact, the notion of pay now or pay later applies to the approach both sites and sponsors should take to patient enrollment planning and implementation. Carefully analyzing a site’s true capacity to provide patients from existing pools, providing detailed and strategically organized training and developing a strategy to efficiently use resources to identify carefully targeted patients more likely to opt into—and stay in—a clinical trial are ultimately necessary to ensure that both enrollment targets and study timelines are properly met.

Want to learn more about how to address pesky enrollment difficulties? Visit http://proficiency1.wpenginepowered.com/enrollment to uncover how to enroll more and lose less.

Reducing Protocol Deviations With Training-Based Metrics

Protocol deviations can have a detrimental effect on new drug development, adding cost and time to clinical trials and even putting the acceptability of data generated during a study at risk. Despite this, protocol deviations seem to be viewed as just part of doing business in the clinical research arena. And one question that arises is whether training in individual protocols is adequate, given the frequency with which deviations occur.

In its recent analysis of inspectional findings during 2021, the FDA said in its annual BIMO metrics report that protocol deviations or failure to follow the investigational plan was the most common observation listed on Form 483s after a BIMO inspection. This was also the most-frequent Form 483 observation during 2020, and has been at the top of the list for the last several years, the FDA noted.

And in its January/February 2022 Impact Report, Tufts University Center for the Study of Drug Development (CSDD) reported that protocol deviations rose in the 2018-2020 period compared to the 2013-2015 period, as have substantial amendments to protocols. According to the CSDD report, Phase 3 trials had the highest number of deviations per protocol, with an average of 118.5, which affected 32.8% of patients enrolled. Phase 2 studies saw an average of 75.3 deviations affecting 30% of patients, and Phase 1 trials had an average of just 8.7 deviations affecting 15.3% of patients.

Oncology trials saw a higher number of deviations, on average, compared to other areas of research, with Phase 2 and 3 oncology protocols having 30% more deviations compared to non-oncology protocols. Additionally, protocols outsourced to CROs had more deviations and substantial amendments per protocol, CSDD reported.

But despite the potential for negative repercussions, protocol deviations remain an ongoing challenge for researchers. As a matter of fact, in a June 2020 blog post, clinical trial platform provider Castor referred to protocol deviations as “the new normal.”

Precisely quantifying the impact of all of these protocol deviations on clinical research is difficult because of the variation among different areas of study and even among individual clinical trials. However, it is clear that deviations do affect trial progress and outcomes.

 

Industry seeking to change


Despite these figures, however, researchers and sponsors appear dedicated to reducing or even eliminating protocol deviations. In fact, some research organizations use protocol compliance as one of the metrics by which site performance is measured. For instance, Diane Whitham et al., on behalf of the Site Performance Metrics for Multicenter Randomized Trials Collaboration, listed several metrics related to protocol compliance in an October 2018 Trials article, including:

  • Percentage of randomized participants with at least one protocol violation;
  • Percentage of randomized participants receiving allocated intervention as intended per protocol;
  • Number of missed visits per number of randomized participants;
  • Number of late visits per number of randomized participants; and
  • Number of critical or major audit findings per number of randomized participants.
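
As a minimal sketch of how metrics like those listed above could be computed from per-participant site records, consider the following Python fragment; the record structure and field names are illustrative assumptions rather than the Whitham et al. definitions verbatim.

```python
# Hypothetical per-participant records for one site; field names are illustrative.
participants = [
    {"id": "S01-001", "violations": 0, "received_allocated_intervention": True,  "missed_visits": 0, "late_visits": 1},
    {"id": "S01-002", "violations": 2, "received_allocated_intervention": True,  "missed_visits": 1, "late_visits": 0},
    {"id": "S01-003", "violations": 0, "received_allocated_intervention": False, "missed_visits": 0, "late_visits": 2},
]

n = len(participants)
pct_with_violation = 100 * sum(1 for p in participants if p["violations"] > 0) / n
pct_per_protocol = 100 * sum(1 for p in participants if p["received_allocated_intervention"]) / n
missed_per_randomized = sum(p["missed_visits"] for p in participants) / n
late_per_randomized = sum(p["late_visits"] for p in participants) / n

print(f"Participants with >=1 protocol violation: {pct_with_violation:.0f}%")
print(f"Received allocated intervention per protocol: {pct_per_protocol:.0f}%")
print(f"Missed visits per randomized participant: {missed_per_randomized:.2f}")
print(f"Late visits per randomized participant: {late_per_randomized:.2f}")
```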

Separately, an entire audience of seasoned clinical research professionals attending a 2021 webinar agreed that one key metric should define success for site training: zero important protocol deviations.

The message seems to be that the goal of zero deviations—or at least zero important deviations—can be achieved with the right strategy. The industry is clearly keen to avoid the potential blowback from protocol deviations, which can include additional time to complete a trial and the costs associated with extending it.

Protocol deviations can increase costs by forcing a study to be extended, additional patients to be enrolled, work to be replicated and product development timelines to be prolonged. Additional monitoring and retraining costs may be incurred, as well. Deviations also may have an impact on data quality that could ultimately have long-term implications for product approval.

In fact, the ICH E9 guidance, Statistical Principles for Clinical Trials, stated clearly that protocol deviations “always bear the potential of invalidating the trial results.”

And all protocol deviations require documentation and reporting, Laurie Halloran, president and CEO of Halloran Consulting Group, noted in the May 2, 2018 Life Science Leader, activities that eat up staff resources and can add to costs in that way. If an unreported deviation is discovered during a regulatory inspection, it is likely to result in penalties from the FDA or other regulators.

In some cases, deviations may lead to protocol amendments, which can delay trial completion, potentially costing $35,000 per day or more, according to a 2015 survey by Clinverse, a statistic still used for cost estimates. Amendments can add an average of 30 days—and associated costs—to the length of a study, according to a 2021 report from CSDD.

Protocol deviations—whether enrolling an unqualified patient, incorrectly performing a visit or procedure or failing to collect data necessary to interpret primary endpoints—can result in unusable data that requires additional time and resources to replicate. In its March/April 2021 Impact Report, CSDD noted that mid-study amendments typically require 30 days to complete before a study can resume, adding significantly to the time required to complete a clinical trial.

 

Addressing deviation risks with training


There are myriad reasons behind protocol deviations. Ever-increasing protocol complexity could be one factor that contributes to protocol deviations. CSDD noted that protocols have been steadily increasing in complexity since 2009, with increasing numbers of endpoints and distinct protocol procedures. Those increases provide more opportunities for investigators or research staff to make mistakes.

Sometimes a change in procedure may be made to ensure patient safety, as in the case of emergency treatment, Halloran said.

And social distancing requirements during the COVID-19 pandemic spurred some protocol deviations, such as replacing in-person site visits for patients with remote or telehealth visits. The FDA in August 2021 updated its guidance, Conduct of Clinical Trials of Medical Products During the COVID-19 Public Health Emergency, recognizing these necessities and clarifying what protocol deviations due to COVID would be acceptable with appropriate reporting.

But these relatively innocuous reasons are not the only causes of protocol deviations. For instance, deviations may occur because the protocol is hard to understand or allows different interpretations at different sites. And they may occur because study teams have not been trained adequately or because knowledge isn’t transferred well during staff turnover. In addition, insufficient performance-based tracking during site training makes it difficult to take proactive action against the high-risk areas for protocol deviations that result from site error.

All of these causes can be addressed with the right type of training. Thorough, protocol-specific training can help ensure that investigators, coordinators and other research staff fully understand how to perform all tasks included in the protocol. And properly designed training will develop mental muscle memory so that researchers can perform correctly under pressure or when substantial time has passed since training. Besides developing participants’ comprehension, proper training should track performance and allow sponsors to identify and correct potential problems before they impact a study’s budget or timeline.

For instance, protocol simulation-based training can mimic realistic scenarios and guide research staff through each step, including making the right decision when problems occur, like:

  • A patient takes a restricted medication just before the randomization visit;
  • A temperature excursion occurs for the investigational product; or
  • A patient misses a dose or a visit where a procedure was to be completed.

Ideally, the training allows the learner to think critically through preventive actions. The simulation scenarios should not only model what can go wrong and assess potential corrective actions, but, more importantly, show how the issues can be prevented. Every decision users make in a simulation experience can be tracked, collected, and presented in performance and compliance reports. These reports provide invaluable insight that can identify and address individual or site-wide weak points before a protocol deviation occurs. In contrast to traditional training, simulation-based training aims to ensure study training is consistent, intense, and proactive.

Simulation-based training in particular allows research staff to practice a protocol and all its nuances in advance of applying it to real patients and real data, with the associated real potential risks. And in addition to identifying and addressing protocol deviation risks, the designed scenarios can allow research staff to learn how to prevent the issues that lead to deviations before they arise.

And this practical approach to training can help sponsors and researchers reduce the incidence of protocol deviations and avoid the additional costs—in both time and money—associated with fixing those deviations.

 

Contact us to discover how our simulation-based training approach can reduce protocol deviations across studies.

Telling Ain’t Training

Whether ensuring GCP compliance or the correct application of a study protocol, training is a critical part of any clinical trial. During the life of a study, there may be several points at which a sponsor may require additional or more intensive training to supplement the initial protocol training. In choosing the most suitable training method, sponsors must identify what they are hoping to achieve with their training efforts. Is the study complex, requiring a change in behavior? Will sites need to adjust past compliance methods?

Traditional training methods, such as multi-hundred-page slide decks and SIV lectures, can be sufficient for sites with straightforward protocols, but more often than not, training centered around lectures and lengthy material isn’t enough. After all, sites know better than anyone that telling ain’t training.

Training is always provided in preparation for beginning a new protocol. This is the first point at which a new drug sponsor should look carefully at their training. Companies need to have a clear picture of what the goals of the training are, Jenna Rouse, Chief Experience Officer, advised. For instance, if proof of GCP training for experienced research staff is needed, a simple checklist approach can be sufficient.

But if the training is intended to change behavior, the learners will have to practice behaviors critical to the success of the study, before they are in front of a subject; more interactive approaches, such as use of simulation, may be better suited for this, she said.

And the protocol itself may determine whether a traditional training approach or a more intensive one is most appropriate. For example, if the protocol requires treatments or procedures that differ from the standard of care, physicians, nurses and other research staff will have to overcome existing mental muscle memory in order to absorb the new way of working with patients. In cases like this, the ability to practice in a consequence-free environment using simulation can be a valuable tool to ensure that the protocol is carried out correctly, Rouse said.

Essentially, when a sponsor has a protocol that includes any degree of complexity that requires both short- and longer-term recall, more intensive training may be required, Rouse said, adding, “The level of complexity of the protocol should dictate the complexity of training.”

Visual elements should ideally be combined with quick-reference job aids to trigger easy recollection of specific procedures, she noted.

 

Protocol deviations, amendments demand response


During the course of a clinical trial, protocol deviations may occur. If a significant deviation occurs, one that threatens the study’s future, companies likely need to scrutinize their previous training.

Anytime a protocol is amended, research staff must be educated on changes, as well. This additional training is particularly important if the protocol amendment was due to a challenge that put patient safety or data quality at risk, Beth Harper, chief learning officer at Pro-ficiency, said.

Sponsors will usually take action if enrollment is behind or other deadlines are not being met, Harper noted. These are not usually triggered by a quality issue, but still may result in additional training if that is identified as a way to address the root problem.

The CRA often has a critical role to play, conducting a root cause analysis to figure out what might be the source of a deviation at a site. Examples of issues could include a new coordinator who needs more help or guidance, lack of PI oversight or lack of clarity in the protocol, Harper said. ICH guidance requires conduct of a root cause analysis and then action to address that cause.

“The typical response is that the CRA will retrain,” she said. “That could mean sending a reminder to affected staff, conducting a thorough walk-through of the site or just re-presenting the SIV training. The latter is what usually happens, but it is better for retraining to be more tailored.”

Instead, sponsors should select a new approach designed specifically to change the way research staff conduct themselves, Rouse said. For instance, if a webinar or PowerPoint presentation were used for initial training, simulations of realistic scenarios could be provided for re-training in problematic procedures.

And if simulations had previously been used, companies can look at any metrics from that training and compare that information to researcher performance during the study. If conditions on-site are significantly different from those presented in simulations, for instance, this could lead to errors even among researchers who performed well during initial training. For example, a simulation that included dosing for five-year-old children weighing about 40 lbs. might not prepare research staff at a site that ended up enrolling teenagers weighing around 120 lbs.

That means close analysis of the protocol to address any points of confusion that were not caught previously would be an important part of developing any additional training, Harper said. New training could be as simple as adding a vignette to the existing scenarios or creating a mini-module focusing on the specific problem. For instance, a mini-module might be created just for research pharmacists because the initial training didn’t anticipate certain challenges or gray areas that might arise with the pharmacy instructions.

“It is possible that researchers may have made the right decisions in the simulations, but if circumstances were different on site compared to how they were presented in the simulated scenarios, a disconnect could happen,” Rouse said. “It’s necessary to evaluate what is happening at the site to identify these problems and address it via training.”

Above all, companies need to be able to evaluate their training when issues arise at any point in a clinical trial’s life cycle. Ideally, the training should provide some behavior-based result, where it can be determined that researchers understand how to make the right decisions at key decision points. Traditionally, sponsors and research sites have not had analytics in this area until a trial was underway. Use of in-person or remote lecture-based training, for instance, provides little feedback on how well learners understand and can apply the training.

Use of simulation-based training, on the other hand, can provide analytics before a study even starts, giving companies a baseline to compare to if things do go sideways at some point during the trial.

And even when training is identified as the cause behind a problem, it may not be the true root cause, Harper cautioned. Issues with the protocol itself, patient compliance challenges or other issues can affect performance and not be addressed by re-training.

 

Training as part of settlements


Finally, companies often invest in additional or “big gun” training in the face of regulatory or legal actions. For instance, a warning letter from the FDA, or even just Form 483 observations from an inspection, might raise red flags that training is inadequate. Failure to follow the protocol is consistently the top observation during FDA inspections of clinical research sites.

“With FDA, problems in training would usually be noticed at the submission point, and the drug would then not be approved,” Margaret Richardson, general counsel and head of regulatory affairs at Pro-ficiency, explained. “The sponsor would get that feedback and have to spend lots of money to rectify it at that point.”

And additional training is often required as part of a court settlement or a settlement agreement with a state or federal agency. Court cases or settlements, with the DOJ, for instance, will typically include assignment of an ombudsman. The government will review everything to make sure the issues that led to the case are addressed and the identified problem is solved. In such cases, Richardson explained, the settlement will define what is adequate in terms of retraining, and the training may even be reviewed in advance for adequacy.

Simulation programs like those offered by Pro-ficiency can be useful in these situations because they provide analytics able to show not only that research staff completed the training, but also what scenarios they were given, where they succeeded immediately and where they may have needed support to perform optimally.

These sorts of problems may or may not result in a permanent change to a company’s training approach. In the case of a major, systemic problem, for instance, training might undergo a more global change, Richardson suggested. For a more discrete issue, on the other hand, the response may be more limited.

Whether for limited applications that address specific problems or for development of entirely new training programs, however, companies must understand that a read-and-sign approach will make little, if any, change. Engagement needs to be a central philosophy for clinical research training, as keeping staff interested will improve the quality of training. And use of real-world scenarios that staff can relate to is an important part both of engagement and of ensuring transfer of skills to the real world.

SIVs: Identify Skill Gaps Before Retraining

As a former CRA, I have spent the better part of my career mentoring, teaching, and helping CRAs become competent at monitoring clinical trials to ensure compliance with protocol and GCP. In light of this experience, I urge all CRAs and monitors to STOP THE MADNESS of having CRAs reconduct site training at site initiation visits (SIVs). Instead of retraining sites across all topics, I propose that SIVs focus on retraining sites in their weakest areas.

 

A focus on evaluating protocol-specific training already conducted at a research site could hold a key to better focusing SIVs on critical factors and reducing the amount of time spent on these meetings.

The SIV is used to confirm that a site has hit several milestones needed to be ready to start enrolling patients. These milestones include:

  • All staff have been properly trained and qualified;
  • Any necessary equipment—from e-diaries and electronic data capture (EDC) systems to special equipment like ECGs—is in place and working properly;
  • Staff who must use the equipment have appropriate access and are able to use it correctly; and
  • The study drug is on-site, properly documented and stored, and in sufficient quantity for the expected number of patients.

The process of evaluating these areas—often including both completion of a feasibility questionnaire and an on-site qualification visit—can eat up as much as 30% of a clinical trial’s timeline, according to a March 2021 blog by Clinical Research IO (CRIO).

Traditionally, all of these areas have been reviewed during an in-person meeting, the SIV. And often, this meeting, in order to check off all of the above boxes, has included long lectures about how to conduct the trial in accordance with the protocol. However, sites, along with some sponsors and CROs, have recently begun to question whether this approach is the most effective and efficient method of confirming a site’s readiness to begin a clinical trial.

And this is spurring some sponsors to question the value of the SIV in its traditional form. A better approach, many sites, sponsors and CROs agree, could be to replace the rote SIV with a more targeted and customized approach.

 

Why re-conduct training?

A particular area of focus has been in the training of clinical trial staff. Some research sites are pushing back against long, drawn-out SIVs that include a repeat of training their staff have already completed. Many sites believe they gain nothing from this process; rather, it is a box that is being mindlessly checked off.

In theory, if the staff have already been to the investigator meeting and already done online training, this should not need to be re-done. No site staff should need training in the protocol at this point.

And the training provided at the SIV—typically a lecture-type presentation with a slide deck—may not add anything of value to staff capabilities and understanding. A one-size-fits-all lecture by a CRA who is not a trainer is unlikely to correctly target any site-, protocol- or job-specific areas where weaknesses may legitimately exist.

A better approach would be to provide site-specific training only as needed for identified knowledge gaps. The goal must be to make sure that any problems are addressed before a site begins enrolling patients for a new clinical trial. And rote checking off items on a broad, overly generalized to-do list is probably not the best way to accomplish that goal.

 

Insight available from training metrics

Training systems that use simulation can offer a great tool for sponsors and CRAs to streamline the SIV. For instance, Pro-ficiency’s training approach generates metrics that show how well each participant did during the training and highlights weak areas that may warrant addressing in an SIV. Because the training walks staff through actual protocol scenarios, the metrics can easily flag areas where the protocol may not be well-understood, or where certain staff skills need to be upgraded.

For instance, if the metrics provided show that the PI didn’t understand or struggled with something during training, or that no one at a given site understood some aspect of the simulation, this information can be used to develop highly targeted training as part of the SIV. In such cases, the SIV could offer an opportunity to run through any specific problematic scenarios again, narrowly targeting only training gaps.

Conversely, if all staff at a site passed their training simulations with flying colors, the sponsor or CRA could correctly conclude that no additional training needs to be included in an SIV.

Metrics like this can act as valuable inputs for creating a customized SIV for an individual site. Anyone with administrator privileges can run reports on Pro-ficiency training results and review the metrics to see who did or did not do well in the training, and where they had difficulty. A CRA can request that information to help determine whether training should be part of an SIV. And if sponsors or CROs have any training concerns, they can ask the CRA to discuss those concerns.

Another approach could be to change the training portion of an SIV from the usual lecture to an opportunity for the site to ask questions based on their experience with the already-completed training. Savvy sites are already doing this, developing lists of questions based on “day-in-the-life” mock runs through the protocol for each position. Many sites may reach out to the sponsor or CRA independently with these questions.

The point is that the focus shifts from the sponsor to the site. This is appropriate, because under GCP requirements, the PI—not the sponsor—is responsible for the quality of the trial, including staff oversight, competency and training.

 

Focus on site-specific needs

An important consideration is what an SIV is intended to accomplish. In many cases, some of these goals can be achieved using other mechanisms. If that is the case, it makes sense for these things to be done outside of an SIV.

When developing SIVs for individual sites, sponsors and CRAs should strive to do away with anything perfunctory and anything that can be verified otherwise. If a particular site has unique needs, such as specialized equipment, customized program planning or another need, addressing that in an SIV could have value. However, even in those cases, much of that work could be done remotely.

Outside of training, SIVs are used to confirm equipment readiness and correct supply and handling of the investigational drug.  If a site is proven to have all of the equipment and drug supplies needed and is ready to start enrollment, it might even be possible to eliminate the SIV altogether and allow that site to begin. And since much of this should have been addressed in training already, the training metrics can be used to identify gaps in these areas.

Both the training portion and the SIV are more effective and valuable if they involve a conversation, rather than someone talking at site staff. This is more easily accomplished by discussing identified problems from previous training in the protocol or on equipment versus subjecting staff to a PowerPoint presentation and lecture that may cover things in which they already are competent, and could also miss areas where confusion does exist.

Even when a formal SIV is warranted, sponsors and sites need to consider whether an in-person meeting is necessary. Many things, even demonstration of equipment availability and staff competency in using it, can be done remotely, involving only the specific personnel necessary.

Another advantage to the selectively targeted SIV approach is how well the process lends itself to being done virtually. For instance, access to systems can be confirmed by checking as an administrator that the necessary people are logged in; this demonstrates the ability to access the necessary systems for data entry and management, communications and other purposes. Similarly, a virtual call via Zoom or other approved platform can be used to view the pharmacy and confirm the presence of necessary equipment.

And the industry seems to be shifting in that direction to some degree. The CRIO blog also indicated that many sponsors and CRAs are increasingly moving to technology to allow more aspects of the SIV process to be conducted remotely.

Virtual meetings can even be used to demonstrate that equipment turns on and functions properly, and to troubleshoot any problems noted. Personnel can even demonstrate proficiency in use of equipment via a virtual call.

Ultimately, it’s important that the SIV process change from a rote, one-way, box-checking approach to a more conversational, interactive, focused and targeted approach that sets sites up for success.

Preventing Protocol Deviations With Simulation-Based Training

Protocol deviations can have a devastating impact on the quality and integrity of key study data and potentially affect patient safety. And there can be serious regulatory repercussions for failing to follow the protocol precisely, as well. Taking the right approach when training investigators, research coordinators and other key staff can be critical to avoiding protocol deviations.

And simulation-based training in particular allows research staff to practice a protocol and all its nuances in advance of applying it to real patients and real data, with the associated real potential risks. And in addition to identifying and addressing protocol deviation risks, the designed scenarios can allow research staff to learn how to prevent the issues that lead to deviations before they occur.

FDA inspectors will focus primarily on protocol deviations that are preventable and that the agency deems important, meaning the deviation may significantly impact the completeness, accuracy and/or reliability of key study data or may significantly affect patient rights or safety.

The FDA’s report on inspection trends for 2020 indicated that failure to follow the study protocol remains the most-frequent Form 483 observation made during BIMO inspections of clinical research operations. Some of the top issues mentioned have included patients enrolled who did not meet inclusion criteria or had an exclusion, incorrect performance of study procedures, missed labs or assessments, missing or late AE/SAE reports and missing protocol-required documentation. Other top observations were:

  • Inaccurate or inadequate case histories;
  • Insufficient accountability records;
  • IRB problems;
  • Failure to report AEs to sponsors; and
  • Problems with informed consent forms or procedures.

And while warning letters issued to clinical investigators are not commonplace, they are also a risk that can derail a study by diverting resources to correct problems noted by the FDA so as to keep the research moving forward. For instance, the FDA has issued four warning letters to clinical investigators in 2021 to date, three of which have included “failure to ensure that the investigation was conducted according to the investigational plan” among the citations. Specific violations included:

  • Enrolling patients who did not meet all inclusion criteria;
  • Enrolling patients who showed exclusions at the initial screening visit;
  • Not applying defined patient randomization criteria and procedures;
  • Failure to perform required lab tests at time points specified in the protocol; and
  • Failure to follow the protocol-specified dosing regimens.

The FDA noted that these trends have been consistent for the last several years, with failure to follow the protocol or investigational plan consistently landing in first place among the most common Form 483 observations. The reasons for protocol deviations are many, Beth Harper, chief learning officer at Pro-ficiency, a company that provides customized training solutions for clinical research sites, said. And appropriate training can help to alleviate some of the issues that lead to protocol deviations.

 

Complex protocols challenge protocol compliance


Protocol complexity is one factor that could contribute to protocol deviations. In early 2021, the Tufts Center for the Study of Drug Development (CSDD) noted that protocols have been steadily increasing in complexity since 2009. For instance, the CSDD report said that Phase II and III protocols now generally have about 20 endpoints, with an average of 1.6 primary endpoints, up 27% since 2009. Additionally, the mean number of distinct protocol procedures has risen 44% in the same time frame.

And this increase in procedures and endpoints could provide more opportunities for misunderstanding or mistakes.

“As protocols become more complex, it is possible that some procedures may not be feasible within the timelines specified, for instance, because operational needs were not considered alongside scientific requirements,” Harper said.

And traditional training approaches may not be sufficient to the task of ensuring that research staff fully understand the fine points of all procedures included in a protocol. Simulation-based training that requires researchers to go through individual steps of a process or procedure in a life-like scenario can be a more effective way of preparing investigators, coordinators and other research staff to avoid protocol deviations.

The value of this type of training is gaining attention throughout the industry. For instance, Elio Mazzone et al. noted in a paper in the August 2021 Annals of Surgery that competency-based progression simulation training was more effective than traditional training in reducing procedural errors and increasing the number of correct steps taken. A key, the paper said, lies in continuous feedback and the ability to repeat tasks until they are done correctly.

Pro-ficiency’s training solutions, for instance, provide realistic scenarios that give learners freedom to make a variety of decisions. After considering the range of possibilities for a procedure, for instance, the training will guide research staff through likely scenarios, offering multiple options at key decision points and providing immediate feedback as to whether the learner is on the right track or not.

 

Decision-making skills emphasized


And as research staff progress through each procedure, from enrollment and consenting through scheduled lab tests and exams, Pro-ficiency’s training system tracks each decision made, provides immediate feedback and generates behavior-based analytics that can flag areas where staff may need additional training or where the protocol may be confusing and need clarification.

Additionally, Harper noted, Pro-ficiency experts dissect each protocol to predict potential performance problems, looking for areas where researchers may be tripped up.

During training utilizing simulation methodologies, however, research staff actively participate in making the types of decisions necessary to ensure that the protocol is followed precisely. Simulation-based training that walks researchers through the necessary steps for all processes and procedures required by a protocol can help ensure that they fully understand the protocol.

It can also help prepare research staff to correctly handle unusual situations or grey areas that might arise.

In some cases, research staff may need to go through a scenario several times before getting to the right answer. In those instances, it’s important to be able to determine whether there is a problem with an individual’s understanding, the level of knowledge and experience at a given site or, if multiple employees at multiple sites have similar difficulties, whether the protocol itself is unclear.

And in other cases, a protocol amendment may be required. Alternatively, supplemental training and job aids, such as checklists, cheat sheets or decision trees, may be provided to help ensure that researchers conduct all activities in line with the protocol.

It’s also important to remember that there may be considerable lag time—months in the case of rare diseases or specific types of cancer, for instance—between researchers receiving training and putting it to use with actual patients; job aids can be useful in these instances, as well.

Another factor that might lead to protocol deviations is inexperienced site staff, which also can be addressed via training designed to target weak areas. Simulated scenarios can be developed that will guide research staff through procedures or in using equipment with which they might be unfamiliar.

Other potential factors that could lead to protocol deviations include language barriers, regional availability of supplies, equipment or staff expertise and issues of cultural sensitivity. For instance, CSDD noted in its 2021 report that the mean number of countries in which studies are conducted has also grown substantially since 2009.

As more trials include multinational sites, this can be an important consideration. Voiceovers and subtitles can address language differences, and specialized training modules may be designed for sites facing availability or cultural issues.

Maximizing Site Enrollment with Improved Protocol Comprehension

Sponsors and CROs trying to achieve better enrollment in clinical trials often overlook the most essential stakeholder in the process — the investigative sites that implement the trials and interact with the patients throughout. At any given moment, tens of thousands of clinical trials are running across the globe, so it is unsurprising that patient enrollment has become a top priority. Ineffective patient recruitment often delays research, potentially compromising sponsors’ timelines and prompting a dreaded “study rescue” situation.

When enrollment is lagging and potential subjects are being excluded left and right, the first thing to determine is whether the study’s protocol is the root of the problem. Sites suffer the most from overly complicated protocols, which often put enrollment, data collection and study fulfillment at risk. Throughout the recruitment process, potential subjects moving through pre-screening and screening may be disqualified for unnecessary reasons along the way. Poorly written protocols create unnecessary challenges for sites, confusing investigators and leading them to disqualify ideal subjects or to spend valuable time and resources screening unsuitable candidates.

A deep-dive diagnostic evaluation of the way your protocol describes the eligibility criteria, and of the way you teach your sites to identify and evaluate the ideal subject for the study, can go a long way toward preventing and overcoming enrollment challenges. Here are some strategies for improving existing and developing protocols:

 

1. Perfectly describing the eligibility criteria – Segmentation strategies

Let’s face it: protocols may never be perfectly executed, but they can be perfectly written, at least from an inclusion/exclusion criteria standpoint. Split out composite or multi-part criteria rather than lumping things together. If you can’t answer a simple yes or no to each part of a criterion, then the site won’t be able to determine whether a potential subject is in or out. There is no shame in having a few more easily understandable criteria. Lumping things together, or worse, describing end-of-study procedures alongside an eligibility criterion, creates unnecessary confusion. Segmenting the criteria into pre-screening vs. screening (post-consent) stages can help optimize the site’s workflow to quickly rule out subjects who have permanent exclusions. Clearly articulating the time-based criteria will help sites understand how long they have to wait before re-evaluating a potential subject.
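
For teams that keep eligibility criteria in an electronic pre-screening log or similar tool, here is a minimal sketch of what segmented criteria might look like as structured data. The field names and example criteria are hypothetical illustrations, not drawn from any specific protocol or system.

from dataclasses import dataclass
from typing import Optional

@dataclass
class Criterion:
    """One atomic eligibility question that can be answered yes or no."""
    text: str                       # a single question, never a composite
    stage: str                      # "pre-screening" or "screening" (post-consent)
    exclusion_type: str             # "permanent", "time-based", or "n/a" for inclusions
    recheck_after_days: Optional[int] = None  # only meaningful for time-based exclusions

# A composite criterion split into atomic, clearly staged parts (illustrative only)
criteria = [
    Criterion("Age 18-75 at time of consent", "pre-screening", "n/a"),
    Criterion("Documented history of condition X", "pre-screening", "permanent"),
    Criterion("Received vaccine Y within the past 30 days", "screening",
              "time-based", recheck_after_days=30),
]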

 

2. Job aids

Site-wide comprehension of your eligibility criteria can be improved with one cost-effective tool: a well-crafted I/E criteria checklist. And no, we don’t mean copying and pasting the criteria as written in the protocol. We mean converting them into an easy-to-use checklist that clearly segments the component parts of the criteria by stage (pre-screening vs. screening) and by permanent vs. time-based or temporary exclusions, as described above. Make it simple for site staff to walk through the criteria in a way that quickly separates subjects who don’t warrant further attention from those who should be put on a watch-and-wait list. Such a checklist can be incorporated into any ongoing study without any other adjustments.
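
As a rough illustration (not a prescribed format), the walk-through logic such a checklist encodes can be sketched as follows, assuming each criterion has already been split into an atomic yes/no question and tagged by exclusion type; all field names and values here are hypothetical.

def triage_exclusions(exclusions):
    """Classify a potential subject from completed exclusion-criteria answers.

    Each item is a dict such as:
      {"applies": bool, "type": "permanent" or "time-based", "recheck_after_days": int or None}
    Inclusion criteria are assumed to have been confirmed separately.
    """
    # Any applicable permanent exclusion rules the subject out immediately.
    if any(e["applies"] and e["type"] == "permanent" for e in exclusions):
        return "rule out"
    # Applicable time-based exclusions put the subject on a watch-and-wait list.
    waits = [e["recheck_after_days"] for e in exclusions
             if e["applies"] and e["type"] == "time-based"]
    if waits:
        return f"watch and wait (re-evaluate in {max(waits)} days)"
    return "proceed to screening"

print(triage_exclusions([
    {"applies": True, "type": "time-based", "recheck_after_days": 30},
]))  # watch and wait (re-evaluate in 30 days)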

 

3. Use visuals!

Improving total protocol comprehension across sites does not always require a complete overhaul of the written document, but don’t shy away from flow diagrams or decision trees. If you can map out a process, or visualize the steps a site will have to take, you can describe it in the protocol in clear, understandable terms. These simple infographics enhance understanding with little added effort from the protocol writer. Visually representing key parts of the protocol ensures that the important steps translate easily across sites and greatly reduces site errors.

 

4. Site Training

When sites struggle to recruit and randomize subjects and the protocol wording is not to blame, it is time to reassess your site training approach. The majority of site training is unoptimized and represents a missed opportunity for sponsors and CROs. Effective clinical site training prevents deviations by giving users the chance to learn from known challenges encountered in earlier studies and at other sites. Site staff are not all created equal in terms of their experience, so a “one size fits all” approach is a surefire way to set your study up for failure.

Pro-ficiency’s simulation-based online training provides users with training tailored to their individual needs. With Pro-ficiency’s superior clinical site training and analytics to track performance, sponsors can ensure sites are prepared and understand their protocol well enough to maximize enrollment and minimize deviations. Preparing site staff with training that guarantees true and total protocol comprehension is the safest way to ensure that a poorly written protocol won’t get in the way of your enrollment and study timelines.

Visit http://proficiency1.wpenginepowered.com/our-approach/ to discover a better way to train clinical sites.

Behavior Should Drive Clinical Trial Training, Not Test Scores

Analytics that focus on measuring changes in research staff behavior during training can be a strong predictor of performance during an active clinical trial, leading to better protocol and GCP compliance. This can mean fewer errors that affect data, more effective and efficient patient enrollment and better regulatory compliance, among other benefits.

It’s well-accepted that training is important to ensure that researchers conduct clinical trials in accordance with the research protocols. While it’s easy to provide training materials to research staff and check off the “training completed” box, it is more challenging to confirm that the training provided will yield behavior changes that support adherence to the protocol.

Despite the understanding that training is a crucial part of study start-up activities, “failure to adhere to the protocol” remains a leading observation during BIMO inspections, according to FDA’s 2020 statistics. And beyond regulatory compliance, errors in such critical areas as patient enrollment, investigational product handling and conduct of routine tests and patient visits can hinder the success of a trial.

There are several areas that warrant tracking, as Beth Harper, chief learning officer at Pro-ficiency, discussed in an October blog. For instance, things can often go wrong with patient enrollment if staff don’t fully absorb training on how to assess whether a patient meets enrollment criteria. Enrollment of patients who are not eligible can negatively affect data integrity, while missing qualified patients can hinder enrollment of a sufficient population.

Preparation and administration of study drugs is another critical area. Researchers must be able to take the right action if, for instance, a study drug arrives out of the required temperature range or appears contaminated upon inspection. Other challenges might include patient non-compliance with the dosing regimen or non-tolerance of the specified dose.

Other areas that Harper highlighted included handling of missed site visits, problems with equipment needed to perform study procedures, and AE management and reporting.

In addition, deeper analytics are possible, noted Catherine De Castro, chief solutions officer at Pro-ficiency. These could include separating critical from non-critical metrics and measuring the level of study impact that particular wrong decisions might have.

That means it’s important for research sites, as well as sponsors, to have confidence that training has rendered all staff capable of making the right decisions in line with both GCPs and protocol requirements when challenging situations present themselves.

But how can organizations accurately assess training effectiveness? The key lies in the ability to measure how the training affects learners’ decision-making abilities, rather than how well they regurgitate information via a test or other traditional approach.

Traditional training approaches, which often feature a simple pass/fail test to assess research staff’s basic knowledge post-training, generally fail to capture this critical assessment.

A simulation-based approach to training, on the other hand, can provide this type of essential feedback, helping research sites and sponsors evaluate whether staff who have undergone the training can reliably make the right decisions.

 

Four tiers of training measurements 

For instance, under Pro-ficiency’s simulation training method, rather than taking a test or exam to confirm knowledge, the participant is tested continuously as they navigate various simulated scenarios. This adaptive learning environment helps develop the critical thinking and decision-making skills necessary to successfully implement a protocol.

This behavior tracking can generate analytics that identify where individual staff or sites are stronger or weaker. These analytics can even flag areas of a protocol that could prove problematic to all participating research sites. This information allows sites to take quick action to ensure that all research staff are capable of conducting key tasks correctly.
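
As a rough illustration of the kind of aggregation involved, consider tallying first-attempt errors by site and protocol step. The data layout, step names and threshold below are hypothetical and are not a description of Pro-ficiency’s actual system.

from collections import defaultdict

# Each record is one learner decision captured during simulation training:
# (site_id, protocol_step, correct_on_first_attempt); fields are illustrative only.
decisions = [
    ("site-01", "eligibility assessment", True),
    ("site-01", "IP temperature excursion", False),
    ("site-02", "IP temperature excursion", False),
    ("site-02", "eligibility assessment", True),
]

totals, errors = defaultdict(int), defaultdict(int)
for site, step, correct in decisions:
    totals[(site, step)] += 1
    if not correct:
        errors[(site, step)] += 1

# Flag site/step combinations whose first-attempt error rate crosses a chosen threshold.
THRESHOLD = 0.5
for key in totals:
    rate = errors[key] / totals[key]
    if rate >= THRESHOLD:
        print(f"{key[0]}: consider targeted retraining on '{key[1]}' ({rate:.0%} missed)")

# If the same step is flagged at most or all sites, the protocol language itself,
# rather than any individual site, may be the source of confusion.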

There are four tiers that research sites and sponsors need to look at when evaluating the effectiveness of training, De Castro explained.

The first of these is the learners’ reaction to the training; this includes ensuring that the content engages them, which boosts the likelihood of the training having a meaningful impact on behaviors and decision-making over the course of a clinical trial. However, engaging well with training does not guarantee changes to future behavior.

The second tier is application of learning. In traditional training, learners’ ability to apply information is usually measured via some sort of test. But passing a test is no guarantee that researchers will alter their behavior or decision-making in real life. Test-taking can measure an individual’s short-term recall of recently learned information, but it has little ability to predict whether, or how well, that information will be applied in practice.

The third tier—focusing on learners’ behavior—is the meat of the matter. At this level, learners’ behavior is tracked to determine whether the training improves decision-making to avoid deviation from procedures mandated by the protocol. Simulation-based training, such as that provided by Pro-ficiency, can offer a critical way to not only focus on this tier of evaluation during training, but also to track its effectiveness.

“This is a hard thing to track,” De Castro said. “With simulations, you’re not just asking questions and giving answers. You are putting [learners] in specific scenarios, looking at the decisions they are making, and tracking that behavior to predict what they’ll do in real life.”

The tracking of behaviors provided by simulation-based training can help in this area by essentially modeling a decision tree. Each decision by a learner in different scenarios is tracked; the simulation responds with consequences, good or bad, that reflect real-life applications.
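
A minimal sketch of that decision-tree idea follows. The scenario text, answer options and consequences are invented for illustration and are not taken from any Pro-ficiency module.

# One simulated scenario node: the learner picks an option, the simulation
# records the choice and plays back a consequence that mirrors real life.
scenario = {
    "prompt": "The investigational product arrives above its allowed temperature range.",
    "options": {
        "A": ("Quarantine the shipment and notify the sponsor",
              "Correct: the excursion is documented and no patient receives compromised product."),
        "B": ("Dispense as usual",
              "Deviation: compromised product could reach a patient."),
    },
    "correct": "A",
}

def run_scenario(node, choice, log):
    """Record the learner's decision and return the consequence to display."""
    _label, consequence = node["options"][choice]
    log.append({"choice": choice, "correct": choice == node["correct"]})
    return consequence

decision_log = []
print(run_scenario(scenario, "B", decision_log))  # feedback shown to the learner
print(decision_log)                               # behavior data feeding the analytics above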

The fourth tier, De Castro noted, focuses on the results of training. In the case of clinical trials, the key question to answer is whether the training reduces protocol deviations. Whereas the tracking of behaviors during training can predict research staff performance, data from sponsors collected during the course of a clinical trial is necessary to determine real-world results.

The Fallacy of Footnotes Within Clinical Protocols

As part of our series covering all the do’s and don’ts surrounding protocols, I wanted to call attention to one of the greatest urban legends in clinical research…the supposed requirement to include endless footnotes (in size 6 or 8 font, no less!) at the end of the schedule of assessments in the protocol. I have searched high and low for regulatory guidance documents that mandate such a practice, to no avail. Common protocol templates created by organizations and industry collaboratives vary in their use of footnotes, but nowhere have I found that footnotes are required to get a protocol approved. Yet it is a common practice, and one that contributes more to non-compliance than just about any other facet of the protocol.

It is not uncommon to see upwards of 21 footnotes in a typical Phase III study. In one recent protocol I evaluated, there were 17 footnotes, six asterisked notes and eight “notes” provided as part of the schedule of assessments. I confess that I wasn’t sure what the differences were between these, or what prompted the decision to use three different formats for conveying the procedural requirements rather than just using footnotes. But what I was sure about is that sites would have difficulty understanding and complying with the requirements.

The solution is simple.  Just add an extra column to the schedule of events with the content of the footnote.  Rather than the site having to hunt around for an important footnote that might appear several pages later in the protocol, the information will be readily available and easily readable right next to the required procedure.

Instead of this…

*Be sure to do this procedure prior to the morning dosing on days with fasting labs*
(a footnote that appears several pages away from the schedule of assessments)

Do this…

(the same instruction placed in an extra column of the schedule of assessments, directly next to the required procedure)

A coworker recently mentioned to me that Elon Musk’s claim to fame has been challenging the norm.  The willingness to ask “why are we doing it this way?” has led to some of his innovative discoveries and his success.  So I was inspired to ask the hard question for all you protocol writers out there:

“Why are you creating more unnecessary confusion for your sites by using footnotes instead of adding in an extra column to the schedule of assessments?”

It costs nothing but could prevent hundreds of protocol deviations.

While you may not be rewarded with the billions of dollars that Mr. Musk has acquired for his inventions, challenging the norm when you write or review your next protocol will reap even greater rewards…happy sites…cleaner data…safer subjects. Now that’s what I call a great return on investment!

Increasing Diversity in Clinical Research with High-Quality Training

Over the last several years, the pharma industry, along with the world at large, has become increasingly focused on issues of diversity. This has been reflected in regulatory guidelines, government and industry policies and in the clinical trial recruitment phase of new drug research and development. However, relatively little attention has gone to diversity within the clinical research workforce, which many industry insiders have suggested can significantly enhance study population diversity.

But sponsors and research sites will need to invest in training to ensure that diversity and inclusiveness are priorities not only for individual clinical trials, but also for general operations.

The bulk of focus regarding diversity has been on ensuring sufficient representation in clinical trial populations in order to meet regulatory standards for data submitted in support of marketing authorization. For instance, current NIH policy, as updated in 2017, encourages inclusion of women and minorities in clinical trials. The policy requires valid analysis and reporting of sex/gender, race and ethnicity through ClinicalTrials.gov. And in the autumn of 2020, the FDA issued a guidance on enhancing diversity in clinical trial populations.

In November of 2019, the NIH published a notice of interest in diversity among workers in the research and discovery process; that notice did not, however, address the clinical research workforce. 

Industry groups also have focused on diversity. The Society for Clinical Research Sites (SCRS), for instance, has developed tools through its Diversity Awareness Program, including an assessment tool that guides research sites in improving their ability to recruit diverse patient populations for their clinical trials.

“However, little emphasis on diversity in the workplace is evidenced in the clinical research industry,” Erika Stevens, chair of the ACRP board of trustees and leader of Transformation Advisory Solutions at Recherche Transformation Rapide, wrote earlier this year in a column for Clinical Researcher.

But that could be changing. In 2019, ACRP’s Partners in Workforce Advancement initiative launched a digital media campaign to boost awareness of the clinical research profession among a more diverse student population.

Stevens also noted that ACRP has launched a Diversity Advisory Council aimed at growing the diversity of the clinical research workforce, helping to further develop the existing minority workforce and improving engagement within the industry on the value of diversity and inclusion.

“Developing a larger, more diverse workforce is imperative to the existence, quality and efficiency of clinical research and the inclusion of more diverse clinical trial participants,” ACRP wrote on the “Partners in Workforce Advancement” page on its website.

The value of a diverse workforce—particularly in how it can affect clinical trial recruitment—is an increasingly noticeable theme within the industry. For instance, PRA Health Sciences stated on its website that “diverse clinical research starts with the healthcare workforce.”

“Diversity initiatives are a hallmark of clinical research. From FDA-penned guidance and pharma initiatives to years of research showing that diverse trials improve outcomes, researchers have placed ample focus on the importance of diverse trial populations,” PRA wrote. “Diversity is also essential in the clinical research workforce.”

PRA suggested that increasing diversity in the clinical research workforce may improve diversity among clinical trial participants.

“If you see it, you remember it,” Mary Beth Panagos, simulation producer at Pro-ficiency, agreed, discussing how diversity-focused training can enhance diversity in trial populations. “You are looking out for it.”

Training is important because unconscious bias can be introduced unwittingly, Panagos said. If a training program features only white Americans, for instance, it might be easier to dismiss someone from Bangladesh.

Minority participants often cite a lack of trust in researchers. That trust can be built via what PRA refers to as culturally competent communications, also known as cross-cultural communication or culturally congruent communication. What it boils down to is the ability of clinical researchers to provide a shared decision-making environment that considers the context of a patient’s background.

“When the clinical research workforce—those in charge of designing and executing trials, as well as recruiting and retaining patients—is already operating from a place of diversity, equity and inclusion, the barriers to creating culturally competent communication decrease,” PRA concludes.

 

Diversity-focused training

And how can sponsors, CROs and research sites ensure culturally competent communication throughout a clinical trial’s life cycle? One answer that looms large is to employ training methods that emphasize diversity and inclusiveness in a variety of ways.

Panagos pointed out that there are many indicators of diversity beyond the commonly considered categories of race and sex. Age, gender, sexual orientation, ethnicity, socioeconomic status, even geographical location and body type, are among the many factors that can and should be considered.

It begins with representation in training materials, Panagos said. For instance, ensuring that training programs represent scenarios that are considered nontraditional (a father taking a daughter to a doctor’s visit, or a black person in a position of authority, for example) is an important part of that effort.

Pro-ficiency’s simulation-focused training, for instance, is designed and developed with diversity in mind, Panagos said. The production team is always focused on ensuring that diverse races, genders, ethnicities, etc. are represented in all roles shown in the training.

And clients have largely proven to be on board with these initiatives, with little pushback seen, she said.

A central part of that focus is hiring actors for the simulations who represent diverse populations. Pro-ficiency, for instance, often uses a North Carolina community theatre organization for a great deal of its casting needs; this organization has proven well able to provide actors who meet race, age, gender and other requirements. Local casting can be important for this type of training, to ensure that populations local to research sites are properly represented.

“Diversity starts in the scripts,” she said. “This includes finding diverse actors. Sometimes we have to hire based on availability, but we make a point to cast diversity.”

“One of the big things is making sure that we are well-represented,” she added. “That means making sure that diversity includes positions of authority in the training materials. For instance, rather than casting a white person as the doctor, research coordinator or other authority figure in a particular scene, the production team should make a point to choose a minority actor for some of those roles.”

And when developing training for a specific clinical trial, it’s important to understand the target patient population. The training should reflect the race, age, body type and health conditions of that population.

Local representation can be important, particularly for large, multi-site trials that include international sites, or even just sites spread across a variety of discrete communities that may be culturally, racially or otherwise distinct from each other.

In addition to representational casting, diversity among the team developing a training program is an important factor, Pro-ficiency’s Natalie Rosen noted. Input from a diverse training development team can help ensure that the training presents different types of learning experiences simultaneously.

Diversity in the training design team helps ensure that the training is designed with diversity and accessibility from the outset, she explained.

For example, by using training that is designed to serve a neurodiverse population, a research site or pharma company can provide training that will work well for a variety of learning styles among neurotypical people, as well. Pro-ficiency’s simulation-based training provides information that is read, heard and seen, Rosen explained. This ensures that all learners can remain engaged throughout the training program.

“A program built around diverse needs from the beginning will benefit everyone,” Rosen said. “When training is designed and applied for the experiences of a very diverse population, it will benefit everyone in that population.”

And there is always room to increase inclusiveness. For example, training could include representations of people with disabilities, chronic health conditions or who are not neurotypical. If these individuals could be found among the target population of a particular study or site, it would be appropriate to include them in training for that trial or that site. And it can be worthwhile to include that representation generally, as well.

“We have metrics that we track across all sites,” Panagos said. “Those metrics look at critical decision points or how often trainees get something wrong. Keeping metrics that look at diversity is an important part of this process. For instance, problems in understanding a protocol could result from faulty translation in multi-national studies or from cultural misunderstandings.”