Test-Retest Reliability and Interpretation of Common Concussion Assessment Tools: Findings from the NCAA-DoD CARE Consortium

Date
2018-05
Language
American English
Found At
Springer
Abstract

BACKGROUND:

Concussion diagnosis is typically made through clinical examination and supported by performance on clinical assessment tools. Performance on commonly implemented and emerging assessment tools is known to vary between administrations even in the absence of concussion.

OBJECTIVE:

To evaluate the test-retest reliability of commonly implemented and emerging concussion assessment tools across a large, nationally representative sample of student-athletes.

METHODS:

Participants (n = 4874) from the Concussion Assessment, Research and Education (CARE) Consortium completed annual baseline assessments on two or three occasions. Each assessment included measures of self-reported concussion symptoms, motor control, brief and extended neurocognitive function, reaction time, oculomotor/oculovestibular function, and quality of life. Consistency between years 1 and 2 and between years 1 and 3 was estimated using intraclass correlation coefficients (ICCs) or kappa coefficients and effect sizes (Cohen's d). Clinical interpretation guidelines were also generated using confidence intervals to account for non-normally distributed data.
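
The abstract names intraclass correlation coefficients and Cohen's d as the test-retest statistics but does not specify which variants were used. The sketch below shows one common way such statistics could be computed for paired year-1/year-2 baseline scores: an ICC(2,1) model (two-way random effects, absolute agreement, single measure) and a pooled-SD form of Cohen's d. Both model choices, all function names, and the simulated data are assumptions for illustration, not the authors' analysis pipeline.

    import numpy as np

    def icc_2_1(scores):
        """ICC(2,1): two-way random-effects, absolute-agreement, single-measure
        intraclass correlation. `scores` is an (n_athletes, k_sessions) array,
        e.g. column 0 = year 1 baseline, column 1 = year 2 baseline."""
        n, k = scores.shape
        grand = scores.mean()
        row_means = scores.mean(axis=1)   # per-athlete means across sessions
        col_means = scores.mean(axis=0)   # per-session means across athletes

        # Two-way ANOVA decomposition of the total sum of squares
        ss_rows = k * np.sum((row_means - grand) ** 2)
        ss_cols = n * np.sum((col_means - grand) ** 2)
        ss_total = np.sum((scores - grand) ** 2)
        ss_err = ss_total - ss_rows - ss_cols

        ms_rows = ss_rows / (n - 1)
        ms_cols = ss_cols / (k - 1)
        ms_err = ss_err / ((n - 1) * (k - 1))

        return (ms_rows - ms_err) / (
            ms_rows + (k - 1) * ms_err + k * (ms_cols - ms_err) / n
        )

    def cohens_d(year1, year2):
        """Effect size for the year-to-year change, using the pooled SD of the
        two administrations (one common convention; the abstract does not state
        which variant was used)."""
        pooled_sd = np.sqrt((np.var(year1, ddof=1) + np.var(year2, ddof=1)) / 2)
        return (np.mean(year2) - np.mean(year1)) / pooled_sd

    # Illustrative use with simulated symptom-score baselines (not CARE data)
    rng = np.random.default_rng(0)
    year1 = rng.normal(5, 3, size=500).clip(0)
    year2 = (0.5 * year1 + rng.normal(2.5, 3, size=500)).clip(0)
    print(icc_2_1(np.column_stack([year1, year2])), cohens_d(year1, year2))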

RESULTS:

Reliability for the self-reported concussion symptom, motor control, and brief and extended neurocognitive assessments from year 1 to year 2 ranged from 0.30 to 0.72, while effect sizes ranged from 0.01 to 0.28 (i.e., small). Reliability for these same measures ranged from 0.34 to 0.66 for the year 1-3 interval, with effect sizes ranging from 0.05 to 0.42 (i.e., small to less than medium). The year 1-2 reliability for the reaction time, oculomotor/oculovestibular function, and quality-of-life measures ranged from 0.28 to 0.74, with effect sizes from 0.01 to 0.38 (i.e., small to less than medium).

CONCLUSIONS:

This investigation found less-than-optimal test-retest reliability for most common and emerging concussion assessment tools. Despite this finding, their use remains necessary in the absence of a gold-standard diagnostic measure, with the ultimate goal of developing more refined and sound tools for clinical use. Clinical interpretation guidelines are provided so that clinicians can apply these measures with a known degree of certainty.
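
The methods state that the clinical interpretation guidelines were generated using confidence intervals chosen to accommodate non-normally distributed data, but the exact procedure is not given in the abstract. The sketch below shows one plausible nonparametric version: percentile bounds on the distribution of year-to-year change scores, outside of which a later change would be flagged as atypical. The function names, the 90% coverage value, and the use of simple percentiles are all assumptions for illustration.

    import numpy as np

    def interpretation_bounds(year1, year2, coverage=0.90):
        """Percentile bounds on year-to-year change scores. With coverage=0.90,
        roughly 90% of healthy athletes' retest changes fall inside the bounds,
        without assuming the changes are normally distributed."""
        diffs = np.asarray(year2) - np.asarray(year1)
        tail = (1.0 - coverage) / 2.0 * 100.0
        low, high = np.percentile(diffs, [tail, 100.0 - tail])
        return low, high

    def change_is_atypical(baseline, post_injury, bounds):
        """Flag a change from baseline that falls outside the typical retest range."""
        change = post_injury - baseline
        return change < bounds[0] or change > bounds[1]

    # Illustrative use with simulated, skewed symptom counts (not CARE data)
    rng = np.random.default_rng(1)
    year1 = rng.poisson(4, size=500)
    year2 = rng.poisson(4, size=500)
    bounds = interpretation_bounds(year1, year2)
    print(bounds, change_is_atypical(baseline=3, post_injury=15, bounds=bounds))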

Cite As
Broglio, S. P., Katz, B. P., Zhao, S., McCrea, M., McAllister, T., CARE Consortium Investigators, … Lintner, L. (2018). Test-Retest Reliability and Interpretation of Common Concussion Assessment Tools: Findings from the NCAA-DoD CARE Consortium. Sports Medicine (Auckland, N.Z.), 48(5), 1255–1268. http://doi.org/10.1007/s40279-017-0813-0
Journal
Sports Medicine
Source
PMC
Type
Article
Version
Final published version