Browsing by Author "Kane, Sara K."
Now showing 1 - 3 of 3
Item
The amount of supervision trainees receive during neonatal resuscitation is variable and often dependent on subjective criteria
(Springer Nature, 2018-08) Kane, Sara K.; Lorant, Diane E.; Medicine, School of Medicine
Objective: Measure variation in delivery room supervision provided by neonatologists using hypothetical scenarios and determine the factors used to guide entrustment decisions.
Study Design: A survey was distributed to members of the American Academy of Pediatrics Section on Perinatal Pediatrics. Neonatologists were presented with various newborn resuscitation scenarios and asked to choose the level of supervision they thought appropriate and to grade factors on their importance in making entrustment decisions.
Results: There was significant variation in the supervision neonatologists deemed necessary for most scenarios (deviation from the mode 0.36–0.69). Post-graduate year of training and environmental circumstances influence the amount of autonomy neonatologists grant trainees. Few neonatologists have an objective assessment of a trainee's competence in neonatal resuscitation available to them, and most never document how the trainee performed.
Conclusion: Delivery room supervision is often determined by subjective evaluation of trainees' competence and may not provide a level of supervision congruent with their capability.

Item
Creation and validation of tool to assess resident competence in neonatal resuscitation
(Elsevier, 2018) Kane, Sara K.; Lorant, Diane E.; Pediatrics, School of Medicine
Background: The American Board of Pediatrics requires that pediatricians be able to initiate stabilization of a newborn. After residency, 45% of general pediatricians routinely attend deliveries. However, there is no standard approach or tool to measure resident proficiency in newborn resuscitation across training programs. In a national survey, we found large variability in faculty assessment of the amount of supervision trainees need for various resuscitation scenarios. Objective documentation of trainee performance would permit competency-based decisions on the level of supervision required and facilitate feedback on trainee performance.
Methods: A simplified tool was created following the Neonatal Resuscitation Program (NRP) algorithm, with emphasis on communication, leadership, knowledge of equipment, and initial stabilization. To achieve content validity, the tool was evaluated by the NRP steering committee. To assess the internal structure of the tool, we filmed 10 simulated resuscitation scenarios, 9 of which contained errors. Experienced resuscitation team members used the tool to assess the performance of the team leader in the videos. To evaluate the response process, the tool was used to assess experienced resuscitators in real time at academic and non-academic sites.
Results: The NRP steering committee approved the tool, providing evidence of content validity. Performance of the team leader in the simulated videos was assessed by 16 evaluators using the tool. The intra-class correlation coefficient was 0.86, indicating excellent agreement. There was no statistical difference in scores between 102 resuscitations led by experienced resuscitators at academic and non-academic hospitals (p=0.98), which demonstrates generalizability.
Conclusions: The tool we have developed to assess performance in initiating newborn resuscitation shows evidence of construct validity based on assessment of content and internal structure (inter-observer agreement, response processes, and generalizability).
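The "excellent agreement" among the 16 evaluators above is the kind of result usually reported as an intra-class correlation coefficient. The sketch below is a minimal, hypothetical illustration of that calculation (not the authors' analysis code), assuming evaluator scores in long format and using the third-party pingouin package; the column names and toy ratings are invented for the example.

```python
# Minimal, hypothetical sketch of an intra-class correlation calculation for
# inter-rater agreement. The table below is invented toy data, not study data.
import pandas as pd
import pingouin as pg

# Long format: one row per (video, evaluator) pair with the checklist score given.
ratings = pd.DataFrame({
    "video":     [1, 1, 1, 2, 2, 2, 3, 3, 3, 4, 4, 4, 5, 5, 5],
    "evaluator": ["A", "B", "C"] * 5,
    "score":     [18, 17, 18, 12, 13, 12, 20, 20, 19, 15, 14, 15, 17, 18, 17],
})

icc = pg.intraclass_corr(data=ratings, targets="video",
                         raters="evaluator", ratings="score")
# ICC2 treats evaluators as a random sample of possible raters, a common choice
# when many different team members may score a resuscitation.
print(icc.set_index("Type").loc["ICC2", ["ICC", "CI95%"]])
```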
Item
Development and Implementation of a Quick Response (QR) Code System to Streamline the Process for Fellows’ Evaluation in the Pediatric Intensive Care Unit (PICU) and the Neonatal Intensive Care Unit (NICU) at a Large Academic Center
(Springer Nature, 2023-10-22) Kane, Sara K.; Wetzel, Elizabeth A.; Niehaus, Jason Z.; Abu-Sultaneh, Samer; Beardsly, Andrew; Bales, Melissa; Parsons, Deb; Rowan, Courtney M.; Pediatrics, School of Medicine
Background/Objective: Useful feedback and evaluation are critical to a medical trainee's development. While most academic physicians understand that giving feedback to learners is essential, many do not consider the components needed for feedback to be truly useful, and there are barriers to implementation. We sought to use a quick response (QR) code system to solicit feedback for trainees in two pediatric subspecialties (pediatric critical care and neonatal-perinatal medicine) at one institution to increase the quality and quantity of feedback received.
Methods: New evaluations were modified from the existing evaluations and imported into online systems with QR code capability. Each fellow was given a QR code linking to evaluations and encouraged to solicit feedback and evaluations in a variety of clinical settings and scenarios. Evaluation numbers and the quality of evaluations were assessed and compared pre- and post-intervention.
Results: The number of completed evaluations increased for both the pediatric critical care fellows and the neonatal-perinatal medicine fellows. There was no overall change in the quality of written evaluations received. Satisfaction with the evaluation system improved for faculty and fellows of both training programs.
Conclusion: In our critical care units, we successfully implemented a QR code-driven evaluation for our fellows that improved access for the faculty and allowed learners to solicit evaluations, without compromising the number or quality of evaluations.
What's New: Quick response (QR) codes can be used by learners to solicit evaluations and feedback from faculty. They can increase the quantity of written evaluations received without affecting their quality.
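The QR-code workflow described above amounts to generating, for each fellow, a scannable code that points at that fellow's online evaluation form. A minimal sketch of that step, assuming the third-party Python qrcode package and a hypothetical form URL (neither is taken from the paper), might look like this:

```python
# Minimal sketch: generate a printable QR code that links a faculty member's
# phone to one fellow's evaluation form. The URL and filename are hypothetical
# placeholders, not the actual system used in the study.
import qrcode

evaluation_url = "https://example.edu/eval-form?fellow=jane-doe&unit=NICU"  # hypothetical link

img = qrcode.make(evaluation_url)   # encode the link as a QR image
img.save("jane_doe_eval_qr.png")    # print or display for the fellow's badge
print("Scanning the saved code opens:", evaluation_url)
```

In the study, each fellow carried such a code so that faculty could open and complete the evaluation at the point of care.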