MildInt: Deep Learning-Based Multimodal Longitudinal Data Integration Framework

dc.contributor.author: Lee, Garam
dc.contributor.author: Kang, Byungkon
dc.contributor.author: Nho, Kwangsik
dc.contributor.author: Sohn, Kyung-Ah
dc.contributor.author: Kim, Dokyoon
dc.contributor.department: Radiology & Imaging Sciences, IU School of Medicine
dc.date.accessioned: 2019-09-09T16:17:08Z
dc.date.available: 2019-09-09T16:17:08Z
dc.date.issued: 2019-06-28
dc.description.abstract: As large amounts of heterogeneous biomedical data become available, numerous methods for integrating such datasets have been developed to extract complementary knowledge from multiple source domains. Recently, deep learning approaches have shown promising results in a variety of research areas. However, applying deep learning requires expertise in constructing a deep architecture that can handle multimodal longitudinal data. Thus, in this paper, a deep learning-based Python package for data integration is developed. The Python package, the deep learning-based multimodal longitudinal data integration framework (MildInt), provides a preconstructed deep learning architecture for classification tasks. MildInt comprises two learning phases: learning feature representations from each modality of data and training a classifier for the final decision. Adopting a deep architecture in the first phase leads to learning more task-relevant feature representations than a linear model does. In the second phase, a linear regression classifier is used for detecting and investigating biomarkers from the multimodal data. Thus, by combining the linear model and the deep learning model, higher accuracy and better interpretability can be achieved. We validated the performance of our package using simulated data and real data. For the real data, as a pilot study, we used clinical and multimodal neuroimaging datasets in Alzheimer's disease to predict disease progression. MildInt is capable of integrating multiple forms of numerical data, including time series and non-time series data, to extract complementary features from a multimodal dataset.
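The two-phase design described in the abstract (a recurrent encoder such as a gated recurrent unit per longitudinal modality, followed by a simple linear classifier over the concatenated representations) can be sketched in a few lines of Python. This is an illustrative sketch under assumptions, not the MildInt API: the use of PyTorch, the class name TwoPhaseIntegrator, and all modality dimensions and hidden sizes are hypothetical.

# Illustrative sketch only -- not the MildInt API. PyTorch, the class name,
# and all dimensions are assumptions chosen for demonstration.
import torch
import torch.nn as nn

class TwoPhaseIntegrator(nn.Module):
    """Phase 1: one GRU per longitudinal modality learns a fixed-size
    feature representation. Phase 2: the representations are concatenated
    and fed to a single linear layer with a sigmoid, i.e. a simple,
    interpretable linear classifier for the final decision."""

    def __init__(self, modality_dims, hidden_dim=16):
        super().__init__()
        # One GRU encoder per modality (e.g. cognitive scores, MRI, CSF).
        self.encoders = nn.ModuleList(
            [nn.GRU(input_size=d, hidden_size=hidden_dim, batch_first=True)
             for d in modality_dims]
        )
        # Linear classifier over the concatenated modality representations.
        self.classifier = nn.Linear(hidden_dim * len(modality_dims), 1)

    def forward(self, sequences):
        # sequences: one tensor per modality, each (batch, visits, features).
        feats = []
        for encoder, x in zip(self.encoders, sequences):
            _, h_n = encoder(x)            # h_n: (1, batch, hidden_dim)
            feats.append(h_n.squeeze(0))   # last hidden state as the feature
        fused = torch.cat(feats, dim=1)
        return torch.sigmoid(self.classifier(fused))  # P(progression)

# Toy usage: two modalities (3 and 5 features), 4 visits, batch of 8 subjects.
model = TwoPhaseIntegrator(modality_dims=[3, 5])
probs = model([torch.randn(8, 4, 3), torch.randn(8, 4, 5)])
print(probs.shape)  # torch.Size([8, 1])

Keeping the weights of the final linear layer inspectable is what gives the interpretability the abstract refers to, while the recurrent encoders handle the longitudinal structure of each modality.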
dc.identifier.citation: Lee, G., Kang, B., Nho, K., Sohn, K. A., & Kim, D. (2019). MildInt: Deep Learning-Based Multimodal Longitudinal Data Integration Framework. Frontiers in Genetics, 10, 617. doi:10.3389/fgene.2019.00617
dc.identifier.uri: https://hdl.handle.net/1805/20874
dc.language.iso: en_US
dc.publisher: Frontiers
dc.relation.isversionof: 10.3389/fgene.2019.00617
dc.relation.journal: Frontiers in Genetics
dc.rights: Attribution-NonCommercial-NoDerivs 3.0 United States
dc.rights.uri: http://creativecommons.org/licenses/by-nc-nd/3.0/us/
dc.source: PMC
dc.subject: Multimodal deep learning
dc.subject: Data integration
dc.subject: Gated recurrent unit
dc.subject: Alzheimer’s disease
dc.subject: Python package
dc.title: MildInt: Deep Learning-Based Multimodal Longitudinal Data Integration Framework
dc.type: Article
Files
Original bundle
Name: fgene-10-00617.pdf
Size: 1.24 MB
Format: Adobe Portable Document Format
License bundle
Name: license.txt
Size: 1.99 KB
Format: Item-specific license agreed upon to submission