Debriefing is a core component of simulation-based learning. Reflection on the events of a completed simulation is a crucial step whereby participants learn and modify their behavior. This reflection is generally guided by a facilitator, whose goal is to identify knowledge gaps and attempt to address them. Commercially available courses exist to teach debriefing, and workshops devoted to improving these skills are often part of the curriculum at national and international simulation courses. Also, many programs host their own internal debriefing training. Numerous debriefing models exist to help facilitate the debriefing of participants within a simulation.
Fewer models exist to facilitate the debriefing of the faculty running the simulation. As of this publication date, the International Nursing Association for Clinical Simulation and Learning (INACSL) has published a curriculum of best practices in simulation, including session facilitation and debriefing. While the summary article mentions maintaining debriefing skills through observed practice and peer evaluation, it does not explicitly address specific evaluation tools.
Several post-simulation evaluation tools have been created to meet this need, but they vary in focus. Some assess the simulation as a whole, others focus only on the debriefing portion, and a few focus specifically on the facilitator.
One early tool created to evaluate the faculty debriefer is the Objective Structured Assessment of Debriefing (OSAD). Described in the surgery literature in 2012, the OSAD was initially developed by researchers through a review of the existing literature and focused interviews of providers and receivers of debriefing. These results were then synthesized into a list of eight features essential to effective debriefing, which became the categories of the final OSAD: approach, environment, engagement, reaction, reflection, analysis, diagnosis, and application. Trained observers rate each category on a 5-point Likert scale with descriptive anchors. Benefits of the OSAD include its brevity (an estimated 5 minutes to complete), validity, and interrater reliability.
A similar development process was used to develop a pediatric-specific OSAD. This tool had related categories and 5-point Likert benchmarks. It was later validated and adopted by several committees and simulation centers for debriefing standardization and faculty development.
Citing concerns with the traditional, paper-based form of the OSAD, other researchers have created a modified electronic version (eOSAD). These authors note its good interrater reliability, its ease of use with video-recorded debriefing sessions, and the ability to add comments, which was missing from the traditional OSAD survey. They also describe protection from data corruption as a benefit, namely by eliminating the risk of losing paper copies or introducing transcription errors. One major limitation of the eOSAD, however, is its requirement for real-time computer and internet access.
An alternative to the OSAD for evaluating faculty debriefing and simulation sessions is the Debriefing Assessment for Simulation in Healthcare (DASH). This tool, first described in 2012, evaluates six elements of debriefing. Using descriptions of observable behaviors as anchors, participants receive a rating on a 7-point effectiveness scale. The tool has undergone validation testing and demonstrated reliability. It requires standardized rater training, which generally takes place via webinar.
Authors of the DASH note that it applies to simulations across a variety of domains and disciplines. Indeed, since its development, it has been used to evaluate debriefing in several contexts. In addition to its use in faculty development, it has served to measure outcomes in research studies with a simulation component. For example, a modified version of the DASH (the DASH student version) has been used to compare outcomes of faculty-led vs. resident-led debriefing sessions for medical students and residents. It has also been used to evaluate interprofessional simulation debriefings.
More recently, the allied health literature introduced an alternative evaluation tool. The Peer Assessment Debriefing Instrument (PADI) adds a component of self-evaluation to the post-debriefing assessment. It evaluates eight aspects of planning and conducting a simulation debriefing. Both the debriefer and the evaluator rate performance on multiple elements within each domain on a 4-point Likert scale. They then compare responses, opening a conversation that allows the debriefer to focus the feedback on particular areas they want the evaluator to address. The PADI is reliable and valid across healthcare disciplines for evaluating debriefing.
Creators of the PADI suggest its benefits include the short time necessary to learn and implement the tool. They also suggest it could be a valuable source of data when evaluating teaching skills and effectiveness.
Some authors have created their own debriefing evaluation tools for their studies, but these are not in common use outside of that particular setting. Others modify existing scales to fit their needs. For example, a self-reported debriefing quality scale based on the OSAD and DASH was used during the initial development of the TeamGAINS debriefing tool.
Still others devote a portion of a more comprehensive tool to debriefing, as with the Facilitator Competency Rubric, which provides a more holistic evaluation of nursing simulation facilitators.
Several tools seek participant evaluations of the debriefer as part of a larger assessment of the simulated learning session. These include the Simulation Design Scale, created by the National League for Nursing, the Simulation Effectiveness Tool-Modified, and the Debriefing Experience Scale. Each is a post-event evaluation that students complete to describe their perception of the effectiveness of a faculty member or simulation event.
Evaluations of faculty debriefing after simulation have no direct clinical significance. Instead, tools such as the OSAD, DASH, PADI, and others allow faculty to receive an assessment of their debriefing skills during a single simulation event; the improved debriefing skills that result may then enhance learning and, ultimately, clinical care.
Enhancing Healthcare Team Outcomes
Simulation-based medical education is a growing component of medical and nursing education. It is used to test systems, enhance communication, and improve teamwork. The post-simulation debriefing is the primary venue for exploring and addressing knowledge and behavior gaps. Principles such as psychological safety and a nonjudgmental attitude are crucial to enhancing this learning environment. Faculty development of the facilitators leading these debriefing sessions may improve debriefing quality and, by extension, the team's learning.