Browsing by Subject "Computational intelligence"
Now showing 1 - 4 of 4
Item: AI Based Modelling and Optimization of Turning Process (2012-08)
Authors: Kulkarni, Ruturaj Jayant; El-Mounayri, Hazim; Anwar, Sohel; Wasfy, Tamer
Abstract: In this thesis, an Artificial Neural Network (ANN) technique is used to model and simulate the turning process. Significant machining parameters (spindle speed, feed rate, and depth of cut) and process parameters (surface roughness and cutting forces) are considered. It is shown that a Multi-Layer Back Propagation Neural Network is capable of performing this particular task. A Design of Experiments approach is used for efficient selection of the parameter values used during experiments, reducing experimental cost and time. Particle Swarm Optimization is used for constrained optimization of the machining parameters to minimize surface roughness as well as cutting forces. ANN and Particle Swarm Optimization, two computational intelligence techniques, when combined provide an efficient computational strategy for finding optimum solutions. The proposed method can handle multi-parameter optimization problems for processes with non-linear relationships between input and output parameters, e.g. milling and drilling. In addition, the methodology provides a reliable, fast and efficient tool that can offer suitable solutions to many problems faced by the manufacturing industry today.

Item: Computational Analysis of Flow Cytometry Data (2013-07-12)
Authors: Irvine, Allison W.; Dundar, Murat; Tuceryan, Mihran; Mukhopadhyay, Snehasis; Fang, Shiaofen
Abstract: The objective of this thesis is to compare automated methods for analyzing flow cytometry data. Flow cytometry is an important and efficient tool for analyzing the characteristics of cells, used in several fields including immunology, pathology, marine biology, and molecular biology. It measures light scatter from cells and fluorescent emission from dyes attached to the cells. Two main tasks must be performed. The first is compensation: adjusting the measured fluorescence to correct for the overlap of the spectra of the fluorescent markers used to characterize a cell's chemical characteristics. The second is to use the amount of each marker present in a cell to identify its phenotype. Several methods are compared for these tasks. For compensation, the Unconstrained Least Squares, Orthogonal Subspace Projection, Fully Constrained Least Squares and Fully Constrained One Norm methods are compared; the Fully Constrained Least Squares method gives the best overall results in terms of accuracy and running time. For classifying cells based on the amounts of dyes present in each cell, Spectral Clustering, Gaussian Mixture Modeling, Naive Bayes classification, Support Vector Machines and Expectation Maximization using a Gaussian mixture model are compared. The generative models created by the Naive Bayes and Gaussian Mixture Modeling methods classify cells most accurately. These supervised methods may be the most useful when online classification is necessary, such as in cell-sorting applications of flow cytometers. Unsupervised methods may be used to replace manual analysis entirely when no training data are given. Expectation Maximization combined with a cluster-merging post-processing step gives the best results among the unsupervised methods considered.
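To make the compensation step concrete, the following is a minimal sketch of spectral unmixing by unconstrained least squares, one of the methods compared in the thesis above. The spillover matrix and measurements are hypothetical numbers for illustration only, not data or code from the work.

```python
import numpy as np

# spillover[i, j] = fraction of dye j's signal detected in channel i (hypothetical values)
spillover = np.array([[1.00, 0.15, 0.02],
                      [0.10, 1.00, 0.20],
                      [0.01, 0.12, 1.00]])

# measured fluorescence per detector channel for two cells (rows = cells), hypothetical
measured = np.array([[520.0, 180.0, 60.0],
                     [ 90.0, 640.0, 210.0]])

# Unconstrained least squares: solve spillover @ x = measured for each cell.
# lstsq handles all cells at once when the right-hand side has one column per cell.
compensated, *_ = np.linalg.lstsq(spillover, measured.T, rcond=None)
print(compensated.T)  # estimated true dye amounts per cell
```

The constrained variants compared in the thesis keep the same linear spillover model but solve it under additional constraints (for example, non-negativity of the dye amounts) instead of the plain least-squares solve shown here.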
Item: A nonparametric Bayesian perspective for machine learning in partially-observed settings (2014-07-31)
Authors: Akova, Ferit; Dundar, Mehmet Murat; Qi, Yuan Alan
Abstract: The robustness and generalizability of supervised learning algorithms depend on how well the labeled data set represents the real-life problem. In many real-world domains, however, we may not have full knowledge of the underlying data-generating mechanism, which may even have an evolving nature that introduces new classes continually. This constitutes a partially-observed setting, in which it would be impractical to obtain a labeled data set exhaustively defined by a fixed set of classes. Traditional supervised learning algorithms, which assume an exhaustive training library, would misclassify a future sample of an unobserved class with probability one, leading to an ill-defined classification problem. Our goal is to address situations where this assumption is violated by a non-exhaustive training library, a very realistic yet overlooked issue in supervised learning. In this dissertation we pursue a new direction for supervised learning by defining self-adjusting models that relax the fixed-model assumption imposed on classes and their distributions. We let the model adapt itself to prospective data by dynamically adding new classes/components as the data demand, which gradually makes the model more representative of the entire population. In this framework, we first employ suitably chosen nonparametric priors to model class distributions for observed as well as unobserved classes, and then use new inference methods to classify samples from observed classes and to discover and model novel classes for samples from unobserved classes. This thesis presents the initial steps of an ongoing effort to address one of the most overlooked bottlenecks in supervised learning, and it indicates the potential for new perspectives in some of the most heavily studied areas of machine learning: novelty detection, online class discovery and semi-supervised learning.
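A toy illustration of the self-adjusting idea described above: assign a sample to an existing class if some class explains it well enough, otherwise spawn a new class seeded at that sample. The Gaussian class models and the log-likelihood threshold below are simplifications chosen for this sketch; the dissertation's actual inference uses nonparametric Bayesian priors, not this heuristic.

```python
import numpy as np
from scipy.stats import multivariate_normal

class SelfAdjustingClassifier:
    """Toy classifier that discovers a new class for poorly explained samples."""

    def __init__(self, novelty_threshold=-10.0, default_cov=1.0):
        self.classes = []                      # list of (mean, covariance) per class
        self.novelty_threshold = novelty_threshold
        self.default_cov = default_cov

    def add_class(self, mean, cov):
        self.classes.append((np.asarray(mean, float), np.asarray(cov, float)))

    def predict(self, x):
        x = np.asarray(x, float)
        if self.classes:
            scores = [multivariate_normal(m, c).logpdf(x) for m, c in self.classes]
            best = int(np.argmax(scores))
            if scores[best] >= self.novelty_threshold:
                return best                    # explained by an observed class
        # no existing class explains the sample: discover a new class seeded at x
        self.add_class(x, self.default_cov * np.eye(x.size))
        return len(self.classes) - 1

clf = SelfAdjustingClassifier()
clf.add_class([0.0, 0.0], np.eye(2))           # one class known from labeled data
print(clf.predict([0.2, -0.1]))                # assigned to the known class (0)
print(clf.predict([8.0, 8.0]))                 # poorly explained, so a new class (1) is created
```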
Item: PAGER 2.0: an update to the pathway, annotated-list and gene-signature electronic repository for Human Network Biology (Oxford Academic, 2018-01-04)
Authors: Yue, Zongliang; Zheng, Qi; Neylon, Michael T.; Yoo, Minjae; Shin, Jimin; Zhao, Zhiying; Tan, Aik Choon; Chen, Jake Yue; BioHealth Informatics, School of Informatics and Computing
Abstract: Integrative Gene-set, Network and Pathway Analysis (GNPA) is a powerful data analysis approach developed to help interpret high-throughput omics data. In PAGER 1.0, we demonstrated that researchers can gain unbiased and reproducible biological insights with the introduction of PAGs (Pathways, Annotated-lists and Gene-signatures) as the basic data representation elements. In PAGER 2.0, we improve the utility of integrative GNPA by significantly expanding the coverage of PAGs and PAG-to-PAG relationships in the database, defining a new metric to quantify PAG data quality, and developing new software features to simplify online integrative GNPA. Specifically, we included 84 282 PAGs spanning 24 different data sources that cover human diseases, published gene-expression signatures, drug-gene and miRNA-gene interactions, pathways and tissue-specific gene expression. We introduced a new normalized Cohesion Coefficient (nCoCo) score to assess the biological relevance of genes inside a PAG, and an RP-score to rank genes and assign gene-specific weights inside a PAG. The companion web interface contains numerous features to help users query and navigate the database content. The database content can be freely downloaded and is compatible with third-party Gene Set Enrichment Analysis tools. We expect PAGER 2.0 to become a major resource for integrative GNPA. PAGER 2.0 is available at http://discovery.informatics.uab.edu/PAGER/.
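As a usage-level illustration of the kind of gene-set analysis that PAGs feed into, the snippet below runs a standard hypergeometric enrichment test of a query gene list against a single PAG's gene set. This is a generic textbook test, not PAGER 2.0's internal scoring (the nCoCo and RP-score definitions are given in the paper); the gene identifiers and background size are hypothetical.

```python
from scipy.stats import hypergeom

def enrichment_p_value(query_genes, pag_genes, background_size):
    """P(overlap >= observed) when the query is drawn at random from the background."""
    query, pag = set(query_genes), set(pag_genes)
    overlap = len(query & pag)
    # hypergeom.sf(k - 1, M, n, N) = P(X >= k) for population M, n successes, N draws
    return hypergeom.sf(overlap - 1, background_size, len(pag), len(query))

# Hypothetical example: 5 of 40 query genes fall inside a 200-gene PAG, background of 20,000 genes
p = enrichment_p_value([f"G{i}" for i in range(40)],
                       [f"G{i}" for i in range(35, 235)],
                       background_size=20_000)
print(f"enrichment p-value: {p:.3g}")
```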