Browsing by Subject "Observer Variation"
Now showing 1 - 1 of 1
Item: Foundations for Studying Clinical Workflow: Development of a Composite Inter-Observer Reliability Assessment for Workflow Time Studies (American Medical Informatics Association, 2019)
Authors: Lopetegui, Marcelo; Yen, Po-Yin; Embi, Peter; Payne, Philip
Affiliation: Medicine, School of Medicine

Abstract: The ability to understand and measure the complexity of clinical workflow gives hospital managers and researchers the knowledge needed to assess some of the most critical issues in healthcare. Given the central role of workflow time studies in informing decision makers, major efforts are underway to address the technique's existing methodological inconsistencies. Chief among these concerns, the lack of a standardized methodology for ensuring the reliability of human observers stands as a priority. In this paper, we highlight the limitations of current Inter-Observer Reliability Assessments (IORAs) and propose a novel composite score for conducting them systematically. The composite score comprises:
a) an overall agreement based on Kappa, computed over virtually created one-second tasks, which provides a global assessment of agreement over time;
b) a naming agreement based on Kappa, which requires pairing observations by time overlap;
c) a duration agreement based on the concordance correlation coefficient, which evaluates the correlation between task durations;
d) a timing agreement, based on descriptive statistics of the gaps between timestamps of same-task classes; and
e) a sequence agreement based on the Needleman-Wunsch sequence alignment algorithm.
We hereby provide a first step toward standardized reliability reporting in workflow time studies. This new composite IORA protocol is intended to empower workflow researchers with a standardized and comprehensive method for validating observers' reliability and, in turn, the validity of their data and results.
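To make the components more concrete, the sketch below illustrates (in Python) how three of the five agreements described in the abstract could be computed for two observers' task logs: a per-second Kappa (component a), a duration agreement via Lin's concordance correlation coefficient (component c), and a sequence agreement via Needleman-Wunsch alignment (component e). This is not the authors' implementation; the data layout (task label with start and end seconds), the "idle" filler label, the alignment scoring parameters, and the score normalization are all illustrative assumptions.

```python
# Hedged sketch of three composite-IORA components; data structures and
# parameters are assumptions, not the published protocol.
from collections import Counter
from statistics import mean, pvariance

# Each observation: (task_label, start_second, end_second), half-open interval.
Obs = tuple[str, int, int]

def per_second_labels(observations: list[Obs], horizon: int) -> list[str]:
    """Expand a task log into one label per second ('idle' where nothing was coded)."""
    labels = ["idle"] * horizon
    for task, start, end in observations:
        for t in range(start, min(end, horizon)):
            labels[t] = task
    return labels

def cohens_kappa(a: list[str], b: list[str]) -> float:
    """Cohen's Kappa between two equal-length label sequences."""
    n = len(a)
    observed = sum(x == y for x, y in zip(a, b)) / n
    freq_a, freq_b = Counter(a), Counter(b)
    expected = sum(freq_a[k] * freq_b.get(k, 0) for k in freq_a) / (n * n)
    return (observed - expected) / (1 - expected) if expected < 1 else 1.0

def concordance_cc(x: list[float], y: list[float]) -> float:
    """Lin's concordance correlation coefficient for paired task durations."""
    mx, my = mean(x), mean(y)
    cov = mean((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    return 2 * cov / (pvariance(x) + pvariance(y) + (mx - my) ** 2)

def needleman_wunsch_agreement(a: list[str], b: list[str],
                               match=1, mismatch=-1, gap=-1) -> float:
    """Global alignment score of two task sequences, scaled to [0, 1] with crude bounds."""
    rows, cols = len(a) + 1, len(b) + 1
    score = [[0] * cols for _ in range(rows)]
    for i in range(1, rows):
        score[i][0] = i * gap
    for j in range(1, cols):
        score[0][j] = j * gap
    for i in range(1, rows):
        for j in range(1, cols):
            diag = score[i - 1][j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
            score[i][j] = max(diag, score[i - 1][j] + gap, score[i][j - 1] + gap)
    best = max(len(a), len(b)) * match   # upper bound on the alignment score
    worst = max(len(a), len(b)) * gap    # lower bound on the alignment score
    return (score[-1][-1] - worst) / (best - worst)

if __name__ == "__main__":
    # Hypothetical 80-second observation window coded by two observers.
    obs_a = [("chart review", 0, 30), ("order entry", 30, 55), ("chart review", 55, 80)]
    obs_b = [("chart review", 0, 28), ("order entry", 28, 60), ("chart review", 60, 80)]
    sec_a, sec_b = per_second_labels(obs_a, 80), per_second_labels(obs_b, 80)
    print("per-second kappa:", round(cohens_kappa(sec_a, sec_b), 3))
    print("duration CCC:", round(concordance_cc([30, 25, 25], [28, 32, 20]), 3))
    print("sequence agreement:", round(
        needleman_wunsch_agreement([t for t, _, _ in obs_a], [t for t, _, _ in obs_b]), 3))
```

The naming agreement (component b) would additionally require a rule for pairing each observation with its best time-overlapping counterpart before computing Kappa, and the timing agreement (component d) would summarize the distribution of start/stop timestamp gaps for matched same-task observations; both are omitted here for brevity.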