Browsing by Subject "Alternative forms"
Item: Avoid or Embrace? Practice Effects in Alzheimer’s Disease Prevention Trials (Frontiers Media, 2022-06-16)
Authors: Aschenbrenner, Andrew J.; Hassenstab, Jason; Wang, Guoqiao; Li, Yan; Xiong, Chengjie; McDade, Eric; Clifford, David B.; Salloway, Stephen; Farlow, Martin; Yaari, Roy; Cheng, Eden Y. J.; Holdridge, Karen C.; Mummery, Catherine J.; Masters, Colin L.; Hsiung, Ging-Yuek; Surti, Ghulam; Day, Gregory S.; Weintraub, Sandra; Honig, Lawrence S.; Galvin, James E.; Ringman, John M.; Brooks, William S.; Fox, Nick C.; Snyder, Peter J.; Suzuki, Kazushi; Shimada, Hiroyuki; Gräber, Susanne; Bateman, Randall J.; Dominantly Inherited Alzheimer Network Trials Unit (DIAN-TU)
Department: Neurology, School of Medicine
Abstract: Demonstrating a slowing in the rate of cognitive decline is a common outcome measure in clinical trials in Alzheimer's disease (AD). Selection of cognitive endpoints typically includes modeling candidate outcome measures in the many richly phenotyped observational cohort studies available. An important part of choosing cognitive endpoints is a consideration of improvements in performance due to repeated cognitive testing (termed "practice effects"). Because primary and secondary AD prevention trials consist predominantly of cognitively unimpaired participants, practice effects may be substantial and may have considerable impact on detecting cognitive change. The extent to which practice effects in AD prevention trials resemble those from observational studies, and how these potential differences impact trials, is unknown. In the current study, we analyzed data from the recently completed DIAN-TU-001 clinical trial (TU) and the associated DIAN-Observational (OBS) study. Results indicated that asymptomatic mutation carriers in the TU exhibited persistent practice effects on several key outcomes spanning the entire trial duration. Critically, these practice-related improvements were larger on certain tests in the TU relative to matched participants from the OBS study. Our results suggest that the magnitude of practice effects may not be captured by modeling potential endpoints in observational studies, where assessments are typically less frequent and drug expectancy effects are absent. Using alternate instrument forms (represented in our study by computerized tasks) may partly mitigate practice effects in clinical trials, but incorporating practice effects as outcomes may also be viable. Thus, investigators must carefully consider practice effects (either by minimizing them or modeling them directly) when designing AD prevention trials with cognitive endpoints, by utilizing trial data with similar assessment frequencies.