Browsing by Subject "Principal component analysis"
Now showing 1-7 of 7
Item: 3D Reconstruction of Coronary Artery Vascular Smooth Muscle Cells (PLOS, 2016)
Luo, Tong; Chen, Huan; Kassab, Ghassan S.; Department of Biomedical Engineering, Purdue School of Engineering and Technology, IUPUI

Aims: The 3D geometry of individual vascular smooth muscle cells (VSMCs), which is essential for understanding the mechanical function of blood vessels, is currently not available. This paper introduces a new 3D segmentation algorithm to determine VSMC morphology and orientation.

Methods and Results: A total of 112 VSMCs from six porcine coronary arteries were used in the analysis. A 3D semi-automatic segmentation method was developed to reconstruct individual VSMCs from cell clumps and to extract their 3D geometry. A new edge-blocking model was introduced to recognize cell boundaries, while an edge-growing method was developed for optimal interpolation and edge verification. The proposed methods operate on a user-selected region of interest (ROI) and interactive responses for a limited set of key edges. Enhanced cell-boundary features were used to construct each cell's initial boundary for further edge growing. A unified framework of morphological parameters (dimensions and orientations) was proposed for the 3D volume data. A virtual phantom was designed to validate the tilt-angle measurements, while the other parameters extracted from the 3D segmentations were compared with manual measurements to assess the accuracy of the algorithm. The length, width, and thickness of VSMCs were 62.9±14.9 μm, 4.6±0.6 μm, and 6.2±1.8 μm (mean±SD). In the longitudinal-circumferential plane of the blood vessel, VSMCs align off the circumferential direction at two mean angles of -19.4±9.3° and 10.9±4.7°, while the out-of-plane angle (i.e., radial tilt angle) was 8±7.6° with a median of 5.7°.

Conclusions: A 3D segmentation algorithm was developed to reconstruct individual VSMCs of blood vessel walls from optical image stacks. The results were validated with a virtual phantom and manual measurements. The obtained 3D geometries can be used in mathematical models and lead to a better understanding of vascular mechanical properties and function.
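The abstract does not spell out how the dimensions and angles are computed from the reconstructed volumes. As a hedged illustration only (this is not the authors' edge-blocking/edge-growing pipeline), one common way to derive length, width, thickness, and tilt from an already-segmented voxel set is a PCA of the voxel coordinates; `cell_geometry`, its arguments, and the axis conventions below are assumptions made for the sketch.

```python
import numpy as np

def cell_geometry(voxels, voxel_size=(1.0, 1.0, 1.0)):
    """Estimate dimensions and orientation of one segmented cell.

    voxels: (N, 3) array of voxel indices belonging to the cell, assumed
            ordered (circumferential, longitudinal, radial); voxel_size
            gives the physical edge lengths, e.g. in micrometres.
    """
    pts = np.asarray(voxels, float) * np.asarray(voxel_size, float)
    centered = pts - pts.mean(axis=0)
    # Principal axes of the voxel cloud: eigenvectors of its covariance.
    evals, evecs = np.linalg.eigh(np.cov(centered.T))
    order = np.argsort(evals)[::-1]              # longest axis first
    evecs = evecs[:, order]
    # Extents along the principal axes approximate length/width/thickness.
    proj = centered @ evecs
    extents = proj.max(axis=0) - proj.min(axis=0)
    major = evecs[:, 0]                          # unit vector, long axis
    # In-plane angle off the circumferential direction, and radial tilt.
    in_plane = np.degrees(np.arctan2(major[1], major[0]))
    tilt = np.degrees(np.arcsin(abs(major[2])))
    return extents, in_plane, tilt
```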
Item: Explicit Modeling of Ancestry Improves Polygenic Risk Scores and BLUP Prediction (Wiley, 2015-09)
Chen, Chia-Yen; Han, Jiali; Hunter, David J.; Kraft, Peter; Price, Alkes L.; Department of Epidemiology, Richard M. Fairbanks School of Public Health

Polygenic prediction using genome-wide SNPs can provide high prediction accuracy for complex traits. Here, we investigate how to account for genetic ancestry when conducting polygenic prediction. We show that the accuracy of polygenic prediction in structured populations may be partly due to genetic ancestry. However, we hypothesized that explicitly modeling ancestry could improve polygenic prediction accuracy. We analyzed three GWAS of hair color (HC), tanning ability (TA), and basal cell carcinoma (BCC) in European Americans (sample sizes from 7,440 to 9,822) and considered two widely used polygenic prediction approaches: polygenic risk scores (PRSs) and best linear unbiased prediction (BLUP). We compared polygenic prediction without correction for ancestry to polygenic prediction with ancestry as a separate component in the model. In 10-fold cross-validation using the PRS approach, the R² for HC increased by 66% (from 0.0456 to 0.0755; P < 10⁻¹⁶), the R² for TA increased by 123% (from 0.0154 to 0.0344; P < 10⁻¹⁶), and the liability-scale R² for BCC increased by 68% (from 0.0138 to 0.0232; P < 10⁻¹⁶) when explicitly modeling ancestry, which prevents ancestry effects from entering into each SNP effect and being overweighted. Surprisingly, explicitly modeling ancestry produces a similar improvement when using the BLUP approach, which fits all SNPs simultaneously in a single variance component and causes ancestry to be underweighted. We validate our findings via simulations, which show that the differences in prediction accuracy will increase in magnitude as sample sizes increase. In summary, our results show that explicitly modeling ancestry can be important in both PRS and BLUP prediction. (An illustrative sketch of ancestry-adjusted scoring appears after the next entry.)

Item: Monitoring compositional changes of the lipid fraction of fingermark residues deposited on paper during storage (Forensic Chemistry, 2016-11-01)
Frick, A.A.; Chidlow, G.; Goodpaster, John V.; Lewis, S.W.; van Bronswijk, W.

Characterising the changes in fingermark composition as a function of time is of great value for improving fingermark detection capabilities by understanding the processes and circumstances under which target compounds degrade. In this study, gas chromatography-mass spectrometry was used to monitor relative changes in the lipids of latent fingermarks over 28 days. Principal component analysis of the relative composition of 15 lipids in fingermarks showed that fingermark age was a significant contributor to the variability observed in the data, but that inter-donor variability was also significant. This was attributed principally to changes in the relative amount of squalene, which decreased rapidly in the fingermarks. It was also observed, however, that most fingermarks exhibited relatively small changes in composition during the first seven days, followed by more rapid changes up to 28 days. Significant inter-donor variation in both initial fingermark composition and the rates and nature of loss processes was observed, which was reflected in the relative projection of samples from different donors. Finally, samples stored with no exposure to light or airflow for 28 days were projected significantly closer to the samples analysed on the day of deposition than those exposed to light, due to the reduced photodegradation rate of squalene.
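For the ancestry-modeling entry above: the paper compares prediction with and without ancestry in the model, and the snippet below is only a minimal sketch of the adjusted variant, assuming genotypes are already coded as a numeric matrix and that per-SNP weights come from an external GWAS. `prs_r2_with_ancestry` and its defaults are invented for illustration.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import KFold

def prs_r2_with_ancestry(genotypes, phenotype, snp_weights, n_pcs=10):
    """10-fold cross-validated R^2 of a PRS with ancestry included as a
    separate model component (hypothetical helper; names illustrative)."""
    prs = genotypes @ snp_weights               # raw polygenic risk score
    pcs = PCA(n_components=n_pcs).fit_transform(genotypes)
    # Fitting ancestry PCs alongside the PRS keeps ancestry effects out of
    # the per-SNP weights, where they would otherwise be overweighted.
    X = np.column_stack([prs, pcs])
    scores = []
    for train, test in KFold(10, shuffle=True, random_state=0).split(X):
        model = LinearRegression().fit(X[train], phenotype[train])
        scores.append(model.score(X[test], phenotype[test]))
    return float(np.mean(scores))
```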
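For the fingermark study directly above, a minimal sketch of the chemometric step: PCA scores computed from relative lipid composition. The data here are synthetic stand-ins, since the GC-MS tables are not reproduced in the abstract.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# Hypothetical data: rows are fingermark samples, columns are the relative
# amounts of 15 target lipids quantified by GC-MS.
rng = np.random.default_rng(0)
X = rng.random((60, 15))
X = X / X.sum(axis=1, keepdims=True)      # convert to relative composition

# Standardize so abundant lipids (e.g. squalene) do not dominate, then
# project onto the first two principal components; plotting these scores
# against fingermark age and donor would reveal the trends discussed above.
scores = PCA(n_components=2).fit_transform(StandardScaler().fit_transform(X))
print(scores[:5])
```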
Item: Multimodal data integration via mediation analysis with high-dimensional exposures and mediators (Wiley, 2022)
Zhao, Yi; Li, Lexin; Alzheimer's Disease Neuroimaging Initiative; Biostatistics and Health Data Science, School of Medicine

Motivated by an imaging proteomics study for Alzheimer's disease (AD), in this article we propose a mediation analysis approach with high-dimensional exposures and high-dimensional mediators to integrate data collected from multiple platforms. The proposed method combines principal component analysis with penalized least squares estimation for a set of linear structural equation models. The former reduces the dimensionality and produces uncorrelated linear combinations of the exposure variables, whereas the latter achieves simultaneous path selection and effect estimation while allowing the mediators to be correlated. Applying the method to the AD data identifies numerous interesting protein peptides, brain regions, and protein-structure-memory paths, which are in accordance with and also supplement existing findings of AD research. Additional simulations further demonstrate the effective empirical performance of the method. (A two-stage sketch of this idea appears after the entries below.)

Item: Principal component analysis of hybrid functional and vector data (Wiley, 2021)
Jang, Jeong Hoon; Biostatistics and Health Data Science, School of Medicine

We propose a practical principal component analysis (PCA) framework that provides a nonparametric means of simultaneously reducing the dimensions of, and modeling, functional and vector (multivariate) data. We first introduce a Hilbert space that combines functional and vector objects as a single hybrid object. The framework, termed PCA of hybrid functional and vector data (HFV-PCA), is then based on the eigen-decomposition of a covariance operator that captures simultaneous variations of functional and vector data in the new space. This approach leads to interpretable principal components that have the same structure as each observation, and a single set of scores that serves well as a low-dimensional proxy for hybrid functional and vector data. To support practical application of HFV-PCA, the explicit relationship between the hybrid PC decomposition and the functional and vector PC decompositions is established, leading to a simple and robust estimation scheme in which the components of HFV-PCA are calculated from the components estimated by existing functional and classical PCA methods. This estimation strategy allows flexible incorporation of sparse and irregular functional data as well as multivariate functional data. We derive consistency results and asymptotic convergence rates for the proposed estimators, and demonstrate the efficacy of the method through simulations and an analysis of renal imaging data. (A naive sketch follows the next entry.)

Item: Principal Component Analysis Reduces Collider Bias in Polygenic Score Effect Size Estimation (Springer, 2022)
Thomas, Nathaniel S.; Barr, Peter; Aliev, Fazil; Stephenson, Mallory; Kuo, Sally I-Chun; Chan, Grace; Dick, Danielle M.; Edenberg, Howard J.; Hesselbrock, Victor; Kamarajan, Chella; Kuperman, Samuel; Salvatore, Jessica E.; Medical and Molecular Genetics, School of Medicine

In this study, we test principal component analysis (PCA) of measured confounders as a method to reduce collider bias in polygenic association models. We present results from simulations and from application of the method in the Collaborative Study on the Genetics of Alcoholism (COGA) sample, with a polygenic score for alcohol problems, DSM-5 alcohol use disorder as the target phenotype, and two collider variables: tobacco use and educational attainment. Simulation results suggest that the assumptions regarding the correlation structure and the availability of measured confounders are complementary, such that meeting one assumption relaxes the other. Application of the method in COGA shows that PC covariates reduce collider bias when tobacco use is the collider variable. Application of this method may improve PRS effect size estimation in some cases by reducing the effect of collider bias, making efficient use of data resources that are available in many studies.
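Referring back to the mediation-analysis entry above: the abstract does not give the authors' estimator, so this is only a generic two-stage sketch combining PCA of the exposures with L1-penalized (lasso) path regressions; `pca_mediation` and its tuning constants are assumptions.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import Lasso

def pca_mediation(X, M, y, n_pcs=5, alpha=0.1):
    """Two-stage sketch: PCA-reduced exposures -> penalized path estimates.

    X: (n, p) exposures, M: (n, q) mediators, y: (n,) outcome.
    Returns sparse exposure->mediator and mediator->outcome coefficients.
    """
    # Stage 0: uncorrelated, low-dimensional exposure components.
    Z = PCA(n_components=n_pcs).fit_transform(X)
    # Stage 1: each mediator on the exposure components with an L1 penalty,
    # giving sparse exposure->mediator paths.
    alpha_paths = np.vstack([Lasso(alpha=alpha).fit(Z, M[:, j]).coef_
                             for j in range(M.shape[1])])
    # Stage 2: outcome on mediators and exposure components jointly, so
    # mediator->outcome paths are estimated with direct effects adjusted.
    fit = Lasso(alpha=alpha).fit(np.column_stack([M, Z]), y)
    beta_paths = fit.coef_[:M.shape[1]]
    return alpha_paths, beta_paths
```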
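For the HFV-PCA entry above: the paper works in a genuine hybrid Hilbert space and estimates hybrid components from separate functional and classical PCA fits. The snippet below is only a naive finite-grid stand-in that concatenates functions sampled on a common grid with the vector part and eigen-decomposes the pooled covariance.

```python
import numpy as np

def hybrid_pca(F, V, n_comp=3):
    """Naive hybrid PCA sketch.

    F: (n_subjects, n_gridpoints) functions sampled on a common grid.
    V: (n_subjects, d) vector covariates.
    """
    F = F - F.mean(axis=0)
    V = V - V.mean(axis=0)
    H = np.hstack([F, V])                # one hybrid object per subject
    evals, evecs = np.linalg.eigh(np.cov(H.T))
    idx = np.argsort(evals)[::-1][:n_comp]
    components = evecs[:, idx]           # same structure as each observation
    scores = H @ components              # single set of hybrid scores
    return components, scores, evals[idx]
```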
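For the collider-bias entry directly above, a minimal sketch of the adjustment itself: regress the target phenotype on the polygenic score plus leading PCs of the measured confounders. Variable names and the choice of five PCs are illustrative, not taken from the paper.

```python
import numpy as np
import statsmodels.api as sm
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

def pgs_effect_with_confounder_pcs(pgs, phenotype, confounders, n_pcs=5):
    """PGS effect size with PCs of measured confounders as covariates."""
    pcs = PCA(n_components=n_pcs).fit_transform(
        StandardScaler().fit_transform(confounders))
    # Including confounder PCs absorbs collider pathways that would
    # otherwise bias the PGS coefficient.
    X = sm.add_constant(np.column_stack([pgs, pcs]))
    fit = sm.OLS(phenotype, X).fit()
    return fit.params[1], fit.bse[1]   # PGS effect size and standard error
```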
Item: The Reasons for Heavy Drinking Questionnaire: Factor Structure and Validity in Alcohol-Dependent Adults Involved in Clinical Trials (Rutgers, 2016)
Adams, Zachary W.; Schacht, Joseph P.; Randall, Patrick; Anton, Raymond F.; Psychiatry, School of Medicine

Objective: People consume alcohol at problematic levels for many reasons, and these different motivational pathways may have different biological underpinnings. Valid, brief measures that discriminate individuals' reasons for drinking could facilitate inquiry into whether varied drinking motivations account for differential response to pharmacotherapies for alcohol use disorders. The current study evaluated the factor structure and predictive validity of a brief measure of alcohol use motivations developed for use in randomized clinical trials, the Reasons for Heavy Drinking Questionnaire (RHDQ).

Method: The RHDQ was administered before treatment to 265 participants (70% male) with alcohol dependence, per the Diagnostic and Statistical Manual of Mental Disorders, Fourth Edition, in three pharmacotherapy randomized clinical trials. Principal components analysis was used in one half of the sample to determine the RHDQ factor structure; this structure was then verified with confirmatory factor analysis in the second half. The resulting factors were evaluated against indices of alcohol dependence severity.

Results: A two-factor solution was identified, with factors interpreted as Reinforcement and Normalizing. Reinforcement scores were weakly to moderately associated with severity, whereas Normalizing scores were moderately to strongly associated with severity. In all cases in which significant associations between RHDQ scores and severity indices were observed, the relationship was significantly stronger for Normalizing than for Reinforcement.

Conclusions: The RHDQ is a promising brief assessment of motivations for heavy alcohol use, particularly in the context of randomized clinical trials. Additional research should address the stability of the factor structure in non-treatment-seeking individuals and the RHDQ's utility in detecting and accounting for changes in drinking behavior, including in response to intervention.
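A hedged sketch of the split-sample strategy described above: derive a PCA structure on one random half of respondents and check its stability on the other. A true confirmatory factor analysis needs dedicated SEM software, so the held-out check below only compares loading patterns via Tucker's congruence coefficient; every name here is illustrative.

```python
import numpy as np
from sklearn.decomposition import PCA

def split_half_pca(items, n_factors=2, seed=0):
    """items: (n_respondents, n_items) questionnaire responses."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(items))
    half = len(items) // 2
    A, B = items[idx[:half]], items[idx[half:]]

    def loadings(X):
        X = (X - X.mean(0)) / X.std(0)
        pca = PCA(n_components=n_factors).fit(X)
        # Scale components by sqrt(eigenvalue) to get item loadings.
        return pca.components_.T * np.sqrt(pca.explained_variance_)

    LA, LB = loadings(A), loadings(B)
    # Tucker's congruence per factor; abs() handles PCA sign flips.
    congruence = np.array([
        abs(LA[:, k] @ LB[:, k]) /
        np.sqrt((LA[:, k] @ LA[:, k]) * (LB[:, k] @ LB[:, k]))
        for k in range(n_factors)])
    return LA, LB, congruence
```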