A Psychometric Evaluation of Script Concordance Tests for Measuring Clinical Reasoning

dc.contributor.advisor: Pike, Gary R. (Gary Robert), 1952-
dc.contributor.author: Wilson, Adam Benjamin
dc.contributor.other: Humbert, Aloysius J.
dc.contributor.other: Brokaw, James J.
dc.contributor.other: Seifert, Mark F.
dc.date.accessioned: 2014-01-29T16:44:45Z
dc.date.issued: 2013-06
dc.degree.date: 2013
dc.degree.discipline: Department of Anatomy & Cell Biology
dc.degree.grantor: Indiana University
dc.degree.level: Ph.D.
dc.description: Indiana University-Purdue University Indianapolis (IUPUI)
dc.description.abstract:
Purpose: Script concordance tests (SCTs) are assessments purported to measure clinical data interpretation. The aims of this research were to (1) test the psychometric properties of SCT items, (2) directly examine the construct validity of SCTs, and (3) explore the concurrent validity of six SCT scoring methods while also considering validity at the item difficulty and item type levels.
Methods: SCT scores from a problem-solving SCT (SCT-PS; n=522) and an emergency medicine SCT (SCT-EM; n=1040) were used to investigate the aims of this research. An item analysis was conducted to optimize the SCT datasets, to categorize items by difficulty and type, and to test for gender biases. A confirmatory factor analysis tested whether SCT scores conformed to a theorized unidimensional factor structure. Exploratory factor analyses examined the effects of six SCT scoring methods on construct validity. The concurrent validity of each scoring method was also tested via a one-way multivariate analysis of variance (MANOVA) and Pearson's product-moment correlations. Repeated-measures analysis of variance (ANOVA) and one-way ANOVA tested the discriminatory power of the SCTs according to item difficulty and type.
Results: Item analysis identified no gender biases. A combination of moderate model-fit indices and poor factor loadings from the confirmatory factor analysis suggested that the SCTs under investigation did not conform to a unidimensional factor structure. Exploratory factor analyses of six different scoring methods repeatedly revealed weak factor loadings, and the extracted factors consistently explained only a small portion of the total variance. The concurrent validity study showed that all six scoring methods discriminated between medical training levels, despite lower reliability coefficients for the 3-point scoring methods. In addition, examinees as MS4s significantly (p<0.001) outperformed their own MS2 SCT scores in all difficulty categories. Cross-sectional analysis of SCT-EM data revealed significant differences (p<0.001) between experienced EM physicians, EM residents, and MS4s at each level of difficulty. When considering item type, diagnostic and therapeutic items differentiated between all three training levels, while investigational items could not readily distinguish between MS4s and EM residents.
Conclusions: The results of this research contest the assertion that SCTs measure a single common construct. These findings raise questions about the latent constructs measured by SCTs and challenge the overall utility of SCT scores. The outcomes of the concurrent validity study provide evidence that multiple scoring methods reasonably differentiate between medical training levels. Concurrent validity was also observed when considering item difficulty and item type.
dc.identifier.uri: https://hdl.handle.net/1805/3877
dc.identifier.uri: http://dx.doi.org/10.7912/C2/2096
dc.language.iso: en_US
dc.subject: psychometrics
dc.subject: script concordance test
dc.subject: factor analysis
dc.subject: clinical reasoning
dc.subject.lcsh: Psychometrics -- Research -- Methodology -- Evaluation
dc.subject.lcsh: Analysis of variance
dc.subject.lcsh: Multivariate analysis
dc.subject.lcsh: Medical logic -- Measurement
dc.subject.lcsh: Factor analysis -- Research -- Methodology
dc.subject.lcsh: Item response theory -- Research -- Methodology -- Evaluation
dc.subject.lcsh: Examinations -- Design and construction
dc.subject.lcsh: Uncertainty
dc.subject.lcsh: Sex role -- Research -- Methodology
dc.subject.lcsh: Cognitive learning theory
dc.subject.lcsh: Educational tests and measurements -- Research
dc.subject.lcsh: Cognitive psychology -- Research
dc.subject.lcsh: A Psychometric Evaluation of Script Concordance Tests for Measuring Clinical Reasoning
dc.type: Thesis