Browsing by Subject "feature extraction"
Now showing 1 - 9 of 9
Item A-MnasNet: Augmented MnasNet for Computer Vision (IEEE, 2020-08)
Shah, Prasham; El-Sharkawy, Mohamed; Electrical and Computer Engineering, School of Engineering and Technology
Convolutional Neural Networks (CNNs) play an essential role in deep learning and are extensively used in computer vision. They are complex but very effective at extracting features from an image or a video stream. After AlexNet [5] won the ILSVRC [8] in 2012, there was a drastic increase in research related to CNNs, and many state-of-the-art architectures such as VGG Net [12], GoogleNet [13], ResNet [18], Inception-v4 [14], Inception-ResNet-v2 [14], ShuffleNet [23], Xception [24], MobileNet [6], MobileNetV2 [7], SqueezeNet [16], SqueezeNext [17], and others were introduced. The trend was to add more layers to make CNNs more effective, but model size grew as well. This problem was addressed by new algorithms that reduced model size, so CNN models can now be deployed on mobile devices. These mobile models are small and fast, which in turn reduces the computational cost of the embedded system. This paper follows a similar idea: it proposes a new model, Augmented MnasNet (A-MnasNet), derived from MnasNet [1]. Trained on the CIFAR-10 [4] dataset, the model achieves a validation accuracy of 96.89% with a model size of 11.6 MB. It outperforms its baseline architecture, MnasNet, which achieves a validation accuracy of 80.8% with a model size of 12.7 MB when trained on CIFAR-10.

Item A Cancellable and Privacy-Preserving Facial Biometric Authentication Scheme (IEEE, 2017)
Phillips, Tyler; Zou, Xukai; Li, Feng; Computer and Information Science, School of Science
In recent years, biometric, or "who you are," authentication has grown rapidly in acceptance and use. Biometric authentication offers users the convenience of not having to carry a password, PIN, smartcard, etc. Instead, users rely on their inherent biometric traits for authentication and, as a result, risk having their biometric information stolen. The security of users' biometric information is of critical importance within a biometric authentication scheme, as compromised data can reveal sensitive information: race, gender, illness, etc. A cancellable biometric scheme, the "BioCapsule" scheme, proposed by researchers from Indiana University Purdue University Indianapolis, aims to mask users' biometric information and preserve users' privacy. The BioCapsule scheme can be easily embedded into existing biometric authentication systems, and it has been shown to preserve user privacy, resist several types of attacks, and have minimal effect on biometric authentication system accuracy. In this research we present a facial authentication system which employs several cutting-edge techniques. We tested our proposed system on several face databases, both with and without the BioCapsule scheme embedded into our system. By comparing our results, we quantify the effects the BioCapsule scheme, and its security benefits, have on the accuracy of our facial authentication system.
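As a point of reference for the A-MnasNet entry above, the following is a minimal PyTorch sketch of how one might train a stock MnasNet on CIFAR-10 and report validation accuracy and on-disk model size. It uses the unmodified torchvision MnasNet and does not reproduce the architectural changes that define A-MnasNet; the input resizing, learning rate, and epoch count are illustrative assumptions.

```python
# Hypothetical sketch: train the stock torchvision MnasNet on CIFAR-10 and
# report validation accuracy and model size. Not the A-MnasNet architecture.
import os
import torch
import torch.nn as nn
import torchvision
from torchvision import transforms

device = "cuda" if torch.cuda.is_available() else "cpu"

# CIFAR-10 images are 32x32; resize so the stride-32 backbone keeps some spatial detail.
tfm = transforms.Compose([
    transforms.Resize(64),
    transforms.ToTensor(),
    transforms.Normalize((0.4914, 0.4822, 0.4465), (0.247, 0.243, 0.261)),
])
train_set = torchvision.datasets.CIFAR10("data", train=True, download=True, transform=tfm)
val_set = torchvision.datasets.CIFAR10("data", train=False, download=True, transform=tfm)
train_loader = torch.utils.data.DataLoader(train_set, batch_size=128, shuffle=True)
val_loader = torch.utils.data.DataLoader(val_set, batch_size=256)

model = torchvision.models.mnasnet1_0(num_classes=10).to(device)  # baseline MnasNet
opt = torch.optim.SGD(model.parameters(), lr=0.05, momentum=0.9, weight_decay=5e-4)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(30):  # illustrative epoch count
    model.train()
    for x, y in train_loader:
        x, y = x.to(device), y.to(device)
        opt.zero_grad()
        loss_fn(model(x), y).backward()
        opt.step()

# Validation accuracy on the held-out CIFAR-10 test split.
model.eval()
correct = total = 0
with torch.no_grad():
    for x, y in val_loader:
        pred = model(x.to(device)).argmax(dim=1).cpu()
        correct += (pred == y).sum().item()
        total += y.numel()
print(f"validation accuracy: {100.0 * correct / total:.2f}%")

# Model size on disk (state dict), one way to compare model footprints.
torch.save(model.state_dict(), "mnasnet_cifar10.pt")
print(f"model size: {os.path.getsize('mnasnet_cifar10.pt') / 1e6:.1f} MB")
```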
Item County-level Geographic Distributions of Diabetes in Relation to Multiple Factors in the United States (IEEE, 2018-03)
Oraz, Gulzhahan; Luo, Xiao; Computer and Information Science, School of Science
The increasing prevalence of diagnosed diabetes has drawn the attention of researchers in recent years. In this study, a feature selection method based on linear regression is used to identify the factors most strongly associated with diabetes prevalence from the national county health ranking data sets. The Expectation-Maximization clustering algorithm is then used to identify geo-clusters of counties based on these factors and their relation to diabetes prevalence for the years 2014 to 2017. The results identify county-level geographic disparities and trends in diabetes and the related factors over the four-year period.

Item CoupleNet: Coupling Global Structure with Local Parts for Object Detection (IEEE, 2017-10)
Zhu, Yousong; Zhao, Chaoyang; Wang, Jinqiao; Zhao, Xu; Wu, Yi; Lu, Hanqing; Medicine, School of Medicine
Region-based Convolutional Neural Network (CNN) detectors such as Faster R-CNN and R-FCN have already shown promising results for object detection by combining a region proposal subnetwork with a classification subnetwork. Although R-FCN achieves higher detection speed while keeping the detection performance, global structure information is ignored by its position-sensitive score maps. To fully explore the local and global properties, in this paper we propose a novel fully convolutional network, named CoupleNet, to couple the global structure with local parts for object detection. Specifically, the object proposals obtained by the Region Proposal Network (RPN) are fed into the coupling module, which consists of two branches. One branch adopts position-sensitive RoI (PSRoI) pooling to capture the local part information of the object, while the other employs RoI pooling to encode the global and context information. Next, we design different coupling strategies and normalization methods to make full use of the complementary advantages of the global and local branches. Extensive experiments demonstrate the effectiveness of our approach. We achieve state-of-the-art results on all three challenging datasets, i.e., a mAP of 82.7% on VOC07, 80.4% on VOC12, and 34.4% on COCO. Code will be made publicly available.
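To make the two-step pipeline in the county-level diabetes entry above concrete, here is a hedged scikit-learn sketch of the same general idea: select the factors most associated with an outcome using a linear-regression-based score, then cluster counties with a Gaussian mixture model fitted by Expectation-Maximization. The synthetic data, factor counts, and parameter choices below are assumptions, not the study's actual data or settings.

```python
# Hypothetical sketch: linear-regression-based feature selection followed by
# EM (Gaussian mixture) clustering, loosely mirroring the study's two steps.
# The data here is synthetic; the real study used county health ranking data.
import numpy as np
from sklearn.feature_selection import SelectKBest, f_regression
from sklearn.mixture import GaussianMixture
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n_counties, n_factors = 3000, 40                      # assumed sizes
X = rng.normal(size=(n_counties, n_factors))          # candidate health/socioeconomic factors
diabetes_prevalence = 0.6 * X[:, 0] - 0.4 * X[:, 3] + rng.normal(scale=0.5, size=n_counties)

# Step 1: keep the factors most associated with prevalence (univariate linear-regression F-test).
selector = SelectKBest(score_func=f_regression, k=8)
X_selected = selector.fit_transform(X, diabetes_prevalence)
print("selected factor indices:", selector.get_support(indices=True))

# Step 2: EM clustering of counties on the selected factors plus the prevalence itself.
features = StandardScaler().fit_transform(
    np.column_stack([X_selected, diabetes_prevalence]))
gmm = GaussianMixture(n_components=5, covariance_type="full", random_state=0)
county_cluster = gmm.fit_predict(features)            # geo-cluster label per county
print("counties per cluster:", np.bincount(county_cluster))
```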
Item Developing New Image Registration Techniques and 3D Displays for Neuroimaging and Neurosurgery (Office of the Vice Chancellor for Research, 2013-04-05)
Zheng, Yuese; Jing, Yici; Nguyen, Thanh; Zajac, Sarah; Wright, Jacob; Catania, Robin
Image-guided surgery requires that the pre-operative data used for planning be aligned with the patient during surgery, so a fast, effective volume registration algorithm is needed. Such an algorithm can also be used to develop surgical training presentations. This research extends existing methods and techniques to improve convergence and speed of execution, with the aim of finding the most promising speed improvements while maintaining the accuracy needed for the neurosurgery application. In the recent phase, we focus on feature extraction and the time-accuracy trade-off. Medical image volumes acquired from MRI or CT scans provided by the Indiana University School of Medicine were used as test cases, and additional synthetic data with ground truth was developed by the informatics students. The speed enhancements to the registration are compared against the ground truth using mean-squared-error metrics, and algorithm execution time with and without the speed improvements is measured on standard personal computer (PC) hardware. Additionally, the informatics students are developing a 3D movie that shows the surgical and pre-operative data overlay and presents the results of the speed improvements from the remaining students’ work. Our testing indicates that intelligently selecting a subset of the data points used for registration should improve speed significantly. Preliminary results show that although real-time image registration is a challenging task for neurosurgery applications, intelligent preprocessing provides a promising solution. Final results will be available at the paper presentation.

Item Discerning Feature Supported Encoder for Image Representation (IEEE, 2019)
Wang, Shuyang; Ding, Zhengming; Fu, Yun; Computer Information and Graphics Technology, School of Engineering and Technology
Inspired by the recent successes of deep architectures, the auto-encoder and its variants have been intensively explored for image clustering and classification tasks by learning effective feature representations. The conventional auto-encoder attempts to uncover the data's intrinsic structure by constraining the output to be as close to the input as possible, so that the hidden representation can faithfully reconstruct the input data. One issue that arises, however, is that such representations might not be optimized for specific tasks, e.g., image classification and clustering, since they compress not only the discriminative information but also redundant and even noisy patterns within the data. In other words, not all hidden units benefit the target task; some mainly represent task-irrelevant patterns. In this paper, a general framework named discerning feature supported encoder (DFSE) is proposed, which integrates the auto-encoder and feature selection into a unified model. Specifically, feature selection is applied to the learned hidden-layer features to separate the task-relevant ones from the task-irrelevant ones. Meanwhile, the encoder can in turn concentrate discriminative information on the selected task-relevant units. To this end, our proposed algorithm can generate more effective image representations by distinguishing the task-relevant features from the task-irrelevant ones. Experiments on image classification and clustering are conducted to evaluate our algorithm, and results on several benchmarks demonstrate that our method achieves better performance than state-of-the-art approaches in both scenarios.
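The DFSE entry above describes coupling an auto-encoder with feature selection over its hidden units; the paper learns both jointly in one model. As a simplified, hedged illustration of the underlying idea only, the sketch below trains a plain auto-encoder on synthetic data and then scores each hidden unit against the labels to keep the most task-relevant ones. The sizes, scoring rule, and two-stage structure are assumptions for illustration, not the authors' DFSE formulation.

```python
# Hypothetical sketch: train a small auto-encoder, then rank its hidden units by
# how informative they are about the class labels and keep only the top ones.
# This is a two-stage simplification, not the unified DFSE model from the paper.
import torch
import torch.nn as nn
from sklearn.feature_selection import mutual_info_classif

torch.manual_seed(0)

class AutoEncoder(nn.Module):
    def __init__(self, d_in=64, d_hidden=32):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(d_in, d_hidden), nn.ReLU())
        self.decoder = nn.Linear(d_hidden, d_in)

    def forward(self, x):
        h = self.encoder(x)
        return self.decoder(h), h

# Synthetic data: two classes that differ only in the first input dimension.
n, d_in = 1000, 64
y = torch.randint(0, 2, (n,))
x = torch.randn(n, d_in) + 2.0 * y.unsqueeze(1) * torch.eye(d_in)[0]

model = AutoEncoder(d_in)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(200):                      # plain reconstruction training
    recon, _ = model(x)
    loss = nn.functional.mse_loss(recon, x)
    opt.zero_grad()
    loss.backward()
    opt.step()

# Score each hidden unit against the labels; keep the k most task-relevant units.
with torch.no_grad():
    _, hidden = model(x)
scores = mutual_info_classif(hidden.numpy(), y.numpy(), random_state=0)
k = 8
task_relevant_units = scores.argsort()[::-1][:k]
print("selected hidden units:", task_relevant_units)
# A downstream classifier would now use hidden[:, task_relevant_units] only.
```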
Item Investigation of Malicious Portable Executable File Detection on the Network using Supervised Learning Techniques (IEEE, 2017-05)
Vyas, Rushabh; Luo, Xiao; McFarland, Nichole; Justice, Connie; Computer Information and Graphics Technology, School of Engineering and Technology
Malware continues to be a critical concern for everyone from home users to enterprises. Today, most devices are connected to the Internet through networks, so malicious code can spread easily and rapidly. The objective of this paper is to examine how malicious portable executable (PE) files can be detected on the network using machine learning algorithms. The efficiency and effectiveness of network detection rely on the number of features and the learning algorithms. In this work, we examined 28 features extracted from the metadata, packing, and imported DLLs and functions of four different types of PE files for malware detection. The results showed that the proposed system achieves a 98.7% detection rate and a 1.8% false positive rate, with an average scanning speed of 0.5 seconds per file in our testing environment.

Item Review of constraints on vision-based gesture recognition for human–computer interaction (IEEE, 2018-01)
Chakraborty, Biplab Ketan; Sarma, Debajit; Bhuyan, M. K.; MacDorman, Karl F.; Human-Centered Computing, School of Informatics and Computing
The ability of computers to recognise hand gestures visually is essential for progress in human-computer interaction. Gesture recognition has applications ranging from sign language to medical assistance to virtual reality. However, gesture recognition is extremely challenging, not only because of its diverse contexts, multiple interpretations, and spatio-temporal variations, but also because of the complex non-rigid properties of the hand. This study surveys major constraints on vision-based gesture recognition occurring in detection and pre-processing, representation and feature extraction, and recognition. Current challenges are explored in detail.

Item Simplicity of Kmeans versus Deepness of Deep Learning: A Case of Unsupervised Feature Learning with Limited Data (IEEE, 2015-12)
Dundar, Murat; Kou, Qiang; Zhang, Baichuan; He, Yicheng; Rajwa, Bartek; Department of Computer and Information Sciences, School of Science
We study a bio-detection application as a case study to demonstrate that Kmeans-based unsupervised feature learning can be a simple yet effective alternative to deep learning techniques for small data sets with limited intra- as well as inter-class diversity. We investigate the effect on classifier performance of data augmentation as well as feature extraction with multiple patch sizes and at different image scales. Our data set includes 1833 images from four different classes of bacteria, with each bacterial culture captured at three different wavelengths and the overall data collected during a three-day period. The limited number and diversity of images, potential random effects across multiple days, and the multi-mode nature of the class distributions pose a challenging setting for representation learning. Using images collected on the first day for training, the second day for validation, and the third day for testing, Kmeans-based representation learning achieves 97% classification accuracy on the test data. This compares very favorably to the 56% accuracy achieved by deep learning and the 74% accuracy achieved by handcrafted features. Our results suggest that data augmentation or dropping connections between units offers little help for deep-learning algorithms, whereas a significant boost can be achieved by Kmeans-based representation learning through data augmentation and by concatenating features obtained at multiple patch sizes or image scales.
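As a hedged illustration of the Kmeans-based representation learning described in the last entry, the sketch below follows the familiar patch-dictionary recipe: extract random patches at two patch sizes, learn a Kmeans codebook for each, encode every image by its distances to the centroids, concatenate the encodings across patch sizes, and train a linear classifier. It runs on scikit-learn's small digits images as a stand-in; the bacterial image set, patch sizes, and cluster counts from the paper are not reproduced.

```python
# Hypothetical sketch: Kmeans-based unsupervised feature learning with features
# concatenated across two patch sizes, then a linear classifier on top.
# Uses the small sklearn digits images as a stand-in data set.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.feature_extraction.image import extract_patches_2d
from sklearn.cluster import MiniBatchKMeans
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.RandomState(0)
digits = load_digits()
images, labels = digits.images, digits.target        # 8x8 grayscale images

def fit_codebook(train_imgs, patch_size, n_clusters=32):
    """Fit a Kmeans codebook on random patches of one size."""
    patches = np.vstack([
        extract_patches_2d(img, patch_size, max_patches=20, random_state=rng)
        .reshape(-1, patch_size[0] * patch_size[1])
        for img in train_imgs
    ])
    return MiniBatchKMeans(n_clusters=n_clusters, random_state=0, n_init=3).fit(patches)

def encode(imgs, km, patch_size):
    """Represent each image by its mean distance to every centroid."""
    feats = []
    for img in imgs:
        p = extract_patches_2d(img, patch_size).reshape(-1, patch_size[0] * patch_size[1])
        feats.append(km.transform(p).mean(axis=0))    # distances to centroids, averaged
    return np.array(feats)

X_tr, X_te, y_tr, y_te = train_test_split(images, labels, test_size=0.3, random_state=0)

# Concatenate features learned at two different patch sizes.
patch_sizes = [(3, 3), (5, 5)]
codebooks = [fit_codebook(X_tr, ps) for ps in patch_sizes]
F_tr = np.hstack([encode(X_tr, km, ps) for km, ps in zip(codebooks, patch_sizes)])
F_te = np.hstack([encode(X_te, km, ps) for km, ps in zip(codebooks, patch_sizes)])

clf = LogisticRegression(max_iter=2000).fit(F_tr, y_tr)
print("test accuracy:", clf.score(F_te, y_te))
```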