Research Abstracts to be Presented at the 10th Annual International Meeting on Simulation in Healthcare: Phoenix, Arizona

An Expert Systems-based Virtual Patient Simulation System for Assessing and Mentoring Clinician Decision Making: Acceptance, Reach and Outcomes

D. D. Hadden; TheraSim, Durham, NC.

INTRODUCTION: Traditional clinical training methods are expensive and nonstandardized, remove clinicians from the practice setting, and their impact is difficult to measure. We report on user performance with an interactive web-based simulation and data analysis program in which practitioners have managed hundreds of virtual patients with a wide array of medical conditions.

METHODS: Using an interactive virtual medical records interface, clinicians receive electronic mentoring and testing (dual mode) by reviewing histories, ordering tests, making diagnoses among hundreds of choices, and choosing treatments from more than 1,000 medications and other therapies. The simulations can show or hide in-session diagnostic and therapeutic information, which is produced by an expert-system-based artificial intelligence (AI) engine. The AI provides guideline- and evidence-based feedback on the appropriateness of choices. Finally, the simulation shows an explanation of reasonable choices for the case, a mini-review of the general topic, and the user's errors, warnings, and deviations from guideline-, evidence-, and expert-consensus-driven recommendations. All choices are recorded for analysis. This paper summarizes 5-year results from 422 cases across 81 CME programs involving 41 medical conditions and appearing in a variety of internet and hospital venues.
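To make the feedback mechanism concrete, the following is a minimal sketch of how an expert system might score a clinician's choices against guideline recommendations and emit the errors and warnings described above. All names, rule categories, and point weights here are illustrative assumptions, not TheraSim's actual engine.

```python
# Hypothetical guideline-based scoring sketch. The rule categories
# (required / acceptable / contraindicated) and the point weights are
# assumptions for illustration only.

def score_session(user_choices, guideline):
    """Compare a clinician's choices against a guideline rule set.

    user_choices: set of chosen order/diagnosis/therapy identifiers.
    guideline: dict with 'required', 'acceptable', 'contraindicated' sets.
    Returns a score out of 100, a list of errors, and a list of warnings.
    """
    errors, warnings = [], []
    for choice in user_choices:
        if choice in guideline["contraindicated"]:
            errors.append(f"contraindicated choice: {choice}")
        elif choice not in guideline["required"] | guideline["acceptable"]:
            warnings.append(f"deviation from guideline: {choice}")
    # Required steps the user never performed also count as errors.
    for item in guideline["required"] - user_choices:
        errors.append(f"missed required step: {item}")
    score = max(0, 100 - 20 * len(errors) - 5 * len(warnings))
    return score, errors, warnings

# Example: the user orders one required test but also a contraindicated
# drug, and misses a required therapy (identifiers are hypothetical).
guideline = {
    "required": {"cd4_count", "first_line_arv"},
    "acceptable": {"viral_load"},
    "contraindicated": {"drug_x"},
}
score, errors, warnings = score_session({"cd4_count", "drug_x"}, guideline)
```

In a mentoring (guidance-on) deployment, the errors and warnings would be surfaced to the user in-session; in a testing deployment they would only be recorded for later analysis.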

RESULTS: Usage: 122,990 registered users representing 200 countries have attempted 402,508 sessions, with a completion rate of 49%. Of the approximately 5 million page views, the average user viewed 71 pages (31 pages/session) and spent 18 minutes per case. Errors: the average score was 60 points (of 100), and 66% scored 80 points. Fewer errors occurred with testing, while users who failed to diagnose appropriately made more therapy-related errors. Outcomes: in an analysis of a neurology program involving 1,946 users and 2,642 sessions, all clinical guidance was turned off for 100 sessions in each of 3 patient simulation cases. Success in making a difficult diagnosis increased from 12% to 36% with guidance operative, an incorrect diagnosis was avoided in 74% of sessions with guidance vs. 48% without, and appropriate treatment was more likely with guidance turned on: 74% vs. 44%, 67% vs. 52%, and 42% vs. 23%. User satisfaction remains positive for most users, with average scores of 4.2 of 5.0 across various questionnaires. In 5 HIV training simulation deployments during 2006–2007 in 3 African countries using WHO guidelines and involving 2,780 pre-/post-test simulations, 241 clinicians passed 71% of pre-tests. After clinical feedback was activated, scores increased by 35 points, yielding a final pass rate of 93% (p < 0.001 vs. pre-test). Similar improvements were noted in 3 separate programs at 80 hospital sites in 2008 and 2009, the latter utilizing a competency-based model, which eliminated the need for formal post-testing.

DISCUSSION/CONCLUSIONS: Expert systems-based virtual patient simulation shows promise as a mechanism for assessing practitioner skill, detecting skill gaps, and providing electronic mentoring. These systems can extend the patient simulation process into chronic and infectious disease states, an area largely overlooked by mannequin-based simulators.

Reference: https://journals.lww.com/simulationinhealthcare/toc/2009/00440#1075927235