Browsing by Subject "deep learning"
Now showing 1 - 10 of 19
Item: Application of Edge-to-Cloud Methods Toward Deep Learning (IEEE, 2022-10)
Authors: Choudhary, Khushi; Nersisyan, Nona; Lin, Edward; Chandrasekaran, Shobana; Mayani, Rajiv; Pottier, Loic; Murillo, Angela P.; Virdone, Nicole K.; Kee, Kerk; Deelman, Ewa
Department: Library and Information Science, School of Computing and Informatics
Scientific workflows are important in modern computational science and are a convenient way to represent complex computations, which are often geographically distributed among several computers. In many scientific domains, scientists use sensors (e.g., edge devices) to gather data, such as CO2 levels or temperature, which are usually sent to a central processing facility (e.g., a cloud). However, these edge devices are often not powerful enough to perform basic computations or machine learning inference, so applications need the power of cloud platforms to generate scientific results. This work explores the execution and deployment of a complex workflow on an edge-to-cloud architecture in a use case of the detection and classification of plankton. In the original application, images were captured by cameras attached to buoys floating in Lake Greifensee (Switzerland). We developed a workflow based on that application. The workflow aims to pre-process images locally on the edge devices (i.e., buoys) and then transfer the data from each edge device to a cloud platform.
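The edge-to-cloud split described above can be illustrated with a minimal, framework-agnostic sketch. All function names and the toy threshold classifier below are hypothetical stand-ins; the actual study implements this as a Pegasus workflow executed with HTCondor on the Chameleon cloud platform.

```python
def edge_preprocess(raw_image):
    """Runs on the buoy: lightweight preprocessing only, since the edge
    device cannot handle heavy inference itself. Here: min-max normalization
    of a flat list of pixel intensities."""
    lo, hi = min(raw_image), max(raw_image)
    return [(p - lo) / (hi - lo) if hi > lo else 0.0 for p in raw_image]

def cloud_classify(image):
    """Runs on the cloud platform: the heavyweight step (a stand-in for a
    plankton classification model; here just a mean-brightness threshold)."""
    return "plankton" if sum(image) / len(image) > 0.5 else "background"

def pipeline(raw_images):
    """Each edge device preprocesses locally, then transfers to the cloud."""
    return [cloud_classify(edge_preprocess(img)) for img in raw_images]
```

The point of the split is that only small, normalized arrays cross the network, while the expensive model runs where compute is plentiful.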
Here, we developed a Pegasus workflow that runs using HTCondor, and we leveraged the Chameleon cloud platform and its recent CHI@Edge feature to mimic such a deployment and study its feasibility in terms of performance and deployment.

Item: AuthN-AuthZ: Integrated, User-Friendly and Privacy-Preserving Authentication and Authorization (IEEE, 2020-10)
Authors: Phillips, Tyler; Yu, Xiaoyuan; Haakenson, Brandon; Goyal, Shreya; Zou, Xukai; Purkayastha, Saptarshi; Wu, Huanmei
Department: BioHealth Informatics, School of Informatics and Computing
In this paper, we propose a novel, privacy-preserving, integrated authentication and authorization scheme (dubbed AuthN-AuthZ). The proposed scheme can address both the usability and privacy issues often posed by authentication through its use of privacy-preserving Biometric-Capsule-based authentication. Each Biometric-Capsule encapsulates a user's biometric template as well as their role within a hierarchical Role-Based Access Control model. As a result, AuthN-AuthZ offers novel efficiency by performing both authentication and authorization simultaneously in a single operation. To the best of our knowledge, our scheme's integrated AuthN-AuthZ operation is the first of its kind. The proposed scheme is flexible in design and allows for the secure use of robust deep learning techniques, such as the recently proposed, state-of-the-art facial feature representation method ArcFace.
We conduct extensive experiments to demonstrate the robust performance of the proposed scheme and its AuthN-AuthZ operation.

Item: Automated lesion detection of breast cancer in [18F] FDG PET/CT using a novel AI-based workflow (Frontiers, 2022-11-14)
Authors: Leal, Jeffrey P.; Rowe, Steven P.; Stearns, Vered; Connolly, Roisin M.; Vaklavas, Christos; Liu, Minetta C.; Storniolo, Anna Maria; Wahl, Richard L.; Pomper, Martin G.; Solnes, Lilja B.
Department: Medicine, School of Medicine
Applications based on artificial intelligence (AI) and deep learning (DL) are rapidly being developed to assist in the detection and characterization of lesions on medical images. In this study, we developed and examined an image-processing workflow that combines traditional image processing with AI technology and uses a standards-based approach for disease identification and quantitation to segment and classify tissue within a whole-body [18F]FDG PET/CT study.
Methods: One hundred thirty baseline PET/CT studies from two multi-institutional preoperative clinical trials in early-stage breast cancer were semi-automatically segmented using techniques based on PERCIST v1.0 thresholds, and the individual segmentations were classified by tissue type by an experienced nuclear medicine physician. These classifications were then used to train a convolutional neural network (CNN) to automatically accomplish the same tasks.
Results: Our CNN-based workflow demonstrated a sensitivity for detecting disease (either primary lesion or lymphadenopathy) of 0.96 (95% CI [0.90, 1.00], 99% CI [0.87, 1.00]), a specificity of 1.00 (95% CI [1.00, 1.00], 99% CI [1.00, 1.00]), a DICE score of 0.94 (95% CI [0.89, 0.99], 99% CI [0.86, 1.00]), and a Jaccard score of 0.89 (95% CI [0.80, 0.98], 99% CI [0.74, 1.00]).
Conclusion: This pilot work has demonstrated the ability of an AI-based workflow using DL-CNNs to specifically identify breast cancer tissue, as determined by [18F]FDG avidity, in a PET/CT study.
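All four reported metrics (sensitivity, specificity, DICE, Jaccard) derive from pixel-level counts of true/false positives and negatives between a predicted segmentation and ground truth. A minimal pure-Python sketch (an illustration, not the authors' code):

```python
def overlap_metrics(pred, truth):
    """Compute sensitivity, specificity, DICE, and Jaccard for two binary
    masks given as flat lists of 0/1 pixel labels of equal length."""
    tp = sum(1 for p, t in zip(pred, truth) if p == 1 and t == 1)
    tn = sum(1 for p, t in zip(pred, truth) if p == 0 and t == 0)
    fp = sum(1 for p, t in zip(pred, truth) if p == 1 and t == 0)
    fn = sum(1 for p, t in zip(pred, truth) if p == 0 and t == 1)
    sensitivity = tp / (tp + fn)            # recall on diseased pixels
    specificity = tn / (tn + fp)            # recall on healthy pixels
    dice = 2 * tp / (2 * tp + fp + fn)      # overlap, weighting agreement twice
    jaccard = tp / (tp + fp + fn)           # intersection over union
    return sensitivity, specificity, dice, jaccard

# Toy 8-pixel masks: one missed pixel (fn) and one spurious pixel (fp).
pred  = [1, 1, 1, 0, 0, 0, 0, 1]
truth = [1, 1, 1, 1, 0, 0, 0, 0]
sens, spec, dice, jac = overlap_metrics(pred, truth)
# -> sens 0.75, spec 0.75, dice 0.75, jaccard 0.6
```

Note that DICE and Jaccard are monotonically related (DICE = 2J / (1 + J)), which is why the paper's two scores move together.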
The high sensitivity and specificity of the network support the idea that AI can be trained to recognize specific tissue signatures, both normal and diseased, in molecular imaging studies using radiopharmaceuticals. Future work will explore the applicability of these techniques to other disease types and alternative radiotracers, as well as the accuracy of fully automated and quantitative detection and response assessment.

Item: Crime Detection from Pre-crime Video Analysis (2024-05)
Authors: Kilic, Sedat; Tuceryan, Mihran; Zheng, Jiang Yu; Tsechpenakis, Gavriil; Durresi, Arjan
This research investigates the detection of pre-crime events, specifically targeting behaviors indicative of shoplifting, through the advanced analysis of CCTV video data. The study introduces an innovative approach that leverages augmented human pose and emotion information within individual frames, combined with the extraction of activity information across subsequent frames, to enhance the identification of potential shoplifting actions before they occur. Utilizing a diverse set of models, including 3D Convolutional Neural Networks (CNNs), Graph Neural Networks (GNNs), Recurrent Neural Networks (RNNs), and a specially developed transformer architecture, the research systematically explores the impact of integrating additional contextual information into video analysis. By augmenting frame-level video data with detailed pose and emotion insights, and by focusing on the temporal dynamics between frames, our methodology aims to capture the nuanced behavioral patterns that precede shoplifting events. A comprehensive experimental evaluation of our models across different configurations reveals a significant improvement in the accuracy of pre-crime detection. The findings underscore the crucial role of combining visual features with augmented data and the importance of analyzing activity patterns over time for a deeper understanding of pre-shoplifting behaviors.
The study's contributions are multifaceted, including a detailed examination of pre-crime frames, strategic augmentation of video data with added contextual information, the creation of a novel transformer architecture customized for pre-crime analysis, and an extensive evaluation of various computational models to improve predictive accuracy.

Item: Deep Learning Based Crop Row Detection (2022-05)
Authors: Doha, Rashed Mohammad; Anwar, Sohel; Al Hasan, Mohammad; Li, Lingxi
Detecting crop rows from video frames in real time is a fundamental challenge in the field of precision agriculture. The deep learning-based semantic segmentation method U-Net, although successful in many tasks related to precision agriculture, performs poorly on this task. The reasons include the paucity of large-scale labeled datasets in this domain, the diversity of crops, and the diversity in appearance of the same crops at various stages of their growth. In this work, we discuss the development of a practical, real-life crop row detection system in collaboration with an agricultural sprayer company. Our proposed method takes the output of semantic segmentation using U-Net and then applies a clustering-based probabilistic temporal calibration that can adapt to different fields and crops without the need for retraining the network. Experimental results validate that our method can be used both for refining the results of the U-Net to reduce errors and for frame interpolation of the input video stream. Upon the availability of more labeled data, we switched our approach from a semi-supervised model to a fully supervised, end-to-end crop row detection model using a Feature Pyramid Network (FPN). Central to the FPN is a pyramid pooling module that extracts features from the input image at multiple resolutions. This gives the network the ability to use both local and global features when classifying pixels as crop rows.
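The multi-resolution idea behind the pyramid pooling module can be sketched in a few lines of pure Python: the same input is pooled at several window sizes, so fine scales keep local detail while coarse scales summarize global context. This is a toy illustration, not the authors' FPN implementation.

```python
def avg_pool(grid, k):
    """Average-pool a 2D grid (list of lists) with a k x k window, stride k."""
    h, w = len(grid), len(grid[0])
    pooled = []
    for i in range(0, h - h % k, k):
        row = []
        for j in range(0, w - w % k, k):
            window = [grid[i + di][j + dj] for di in range(k) for dj in range(k)]
            row.append(sum(window) / (k * k))
        pooled.append(row)
    return pooled

def pyramid_features(grid, scales=(1, 2, 4)):
    """Pool the same input at several resolutions, as in pyramid pooling."""
    return {k: avg_pool(grid, k) for k in scales}

# Toy 4x4 "image": scales 1, 2, 4 yield 4x4, 2x2, and 1x1 feature maps.
image = [[0, 0, 1, 1],
         [0, 0, 1, 1],
         [1, 1, 0, 0],
         [1, 1, 0, 0]]
feats = pyramid_features(image)
```

In the real network the pooled maps are produced by learned convolutions and concatenated, so a pixel's final classification sees both its neighborhood and the whole-field layout of the rows.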
After training the FPN on the labeled dataset, our method obtained a mean IoU (Jaccard Index) score of over 70% on the test set. We trained our method on only a subset of the corn dataset and tested its performance on multiple variations of weed pressure and crop growth stages to verify that the performance translates across these variations and is consistent across the entire dataset.

Item: Detecting Traffic Information From Social Media Texts With Deep Learning Approaches (IEEE, 2018-11)
Authors: Chen, Yuanyuan; Lv, Yisheng; Wang, Xiao; Li, Lingxi; Wang, Fei-Yue
Department: Electrical and Computer Engineering, School of Engineering and Technology
Mining traffic-relevant information from social media data has become an emerging topic due to the real-time and ubiquitous nature of social media. In this paper, we focus on a specific problem in social media mining: extracting traffic-relevant microblogs from Sina Weibo, a Chinese microblogging platform. We transform it into a machine learning problem of short text classification. First, we apply the continuous bag-of-words model to learn word embedding representations based on a data set of three billion microblogs. Compared to the traditional one-hot vector representation of words, word embeddings can capture semantic similarity between words and have proven effective in natural language processing tasks. Next, we propose using convolutional neural networks (CNNs), long short-term memory (LSTM) models, and their combination, LSTM-CNN, to extract traffic-relevant microblogs, with the learned word embeddings as inputs. We compare the proposed methods with competitive approaches, including a support vector machine (SVM) model based on bag-of-n-gram features, an SVM model based on word vector features, and a multi-layer perceptron model based on word vector features.
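The embedding-based setup can be illustrated with a minimal sketch: average word vectors into one sentence vector, then compare it to class centroids by cosine similarity. All words and vectors below are toy values, not the paper's CBOW embeddings learned from three billion microblogs, and the centroid classifier is a stand-in for the deep models.

```python
import math

# Toy 3-dimensional word embeddings (hypothetical values).
EMB = {
    "crash":   [0.9, 0.1, 0.0],
    "highway": [0.8, 0.2, 0.1],
    "jam":     [0.7, 0.3, 0.0],
    "movie":   [0.0, 0.1, 0.9],
    "music":   [0.1, 0.0, 0.8],
}

def sentence_vector(tokens):
    """Average the embeddings of known tokens into one fixed-length vector."""
    vecs = [EMB[t] for t in tokens if t in EMB]
    return [sum(v[i] for v in vecs) / len(vecs) for i in range(3)]

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

# Hypothetical class centroids: traffic-relevant vs. everything else.
CENTROIDS = {"traffic": [0.8, 0.2, 0.0], "other": [0.0, 0.1, 0.9]}

def classify(tokens):
    """Assign the class whose centroid is most similar to the sentence vector."""
    v = sentence_vector(tokens)
    return max(CENTROIDS, key=lambda c: cosine(v, CENTROIDS[c]))
```

The key property, which one-hot vectors lack, is that unseen but semantically related words land near each other in the embedding space, so short, noisy microblogs can still be matched by meaning.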
Experiments show the effectiveness of the proposed deep learning approaches.

Item: Development of an Automated Visibility Analysis Framework for Pavement Markings Based on the Deep Learning Approach (MDPI, 2020-11)
Authors: Kang, Kyubyung; Chen, Donghui; Peng, Cheng; Koo, Dan; Kang, Taewook; Kim, Jonghoon
Department: Computer and Information Science, School of Science
Pavement markings play a critical role in reducing crashes and improving safety on public roads. As road pavements age, maintenance work for safety purposes becomes critical. However, inspecting all pavement markings at the right time is very challenging due to the lack of available human resources. This study was conducted to develop an automated condition analysis framework for pavement markings using machine learning technology. The proposed framework consists of three modules: a data processing module, a pavement marking detection module, and a visibility analysis module. The framework was validated through a case study using pavement marking training data sets from the U.S. It was found that the detection model of the framework was very precise, meaning most of the identified pavement markings were correctly classified. In addition, within the proposed framework, visibility was confirmed as an important factor for driver safety and maintenance, and visibility standards for pavement markings were defined.

Item: Development of Automated Incident Detection System Using Existing ATMS CCTV (Purdue University, 2019)
Authors: Chien, Stanley; Chen, Yaobin; Yi, Qiang; Ding, Zhengming
Department: Electrical and Computer Engineering, School of Engineering and Technology
The Indiana Department of Transportation (INDOT) has over 300 digital cameras along highways in populated areas of Indiana. These cameras are used to monitor traffic conditions around the clock, all year round. Currently, the videos from these cameras are observed by human operators.
The main objective of this research is to develop an automatic real-time system that monitors traffic conditions using the INDOT CCTV video feeds, carried out by a collaborative research team from the Transportation Active Safety Institute (TASI) at Indiana University-Purdue University Indianapolis (IUPUI) and the Traffic Management Center (TMC) of INDOT.

Item: eyeSay: Brain Visual Dynamics Decoding With Deep Learning & Edge Computing (IEEE, 2022-07-25)
Authors: Zou, Jiadao; Zhang, Qingxue
Department: Biomedical Engineering and Informatics, Luddy School of Informatics, Computing, and Engineering
Brain visual dynamics encode rich functional and biological patterns of the neural system and, if decoded, hold great promise for many applications, such as intention understanding, cognitive load quantification, and neural disorder measurement. Here we focus on understanding brain visual dynamics for the amyotrophic lateral sclerosis (ALS) population, and we propose a novel system that allows these so-called 'locked-in' patients to 'speak' with their brain visual movements. More specifically, we propose an intelligent system to decode the eye bio-potential signal, the electrooculogram (EOG), and thereby understand the patient's intention. We first propose leveraging a deep learning framework for automatic feature learning and classification of the brain visual dynamics, aiming to translate the EOG into meaningful words. We then design and develop an edge computing platform on the smartphone, which can execute the deep learning algorithm, visualize the brain visual dynamics, and display the edge inference results, all in real time. Evaluated on 4,500 trials of brain visual movements performed by multiple users, our novel system demonstrated a high eye-word recognition rate of up to 90.47%. The system is shown to be intelligent, effective, and convenient for decoding brain visual dynamics for ALS patients.
This research is thus expected to greatly advance the decoding and understanding of brain visual dynamics by leveraging machine learning and edge computing innovations.

Item: Few-Shot Transfer Learning to improve Chest X-Ray pathology detection using limited triplets (arXiv, 2022-04)
Authors: Bhimireddy, Ananth Reddy; Burns, John Lee; Purkayastha, Saptarshi; Gichoya, Judy Wawira
Department: BioHealth Informatics, School of Informatics and Computing
Deep learning approaches applied to medical imaging have reached near-human or better-than-human performance on many diagnostic tasks. For instance, the CheXpert competition on detecting pathologies in chest x-rays has shown excellent multi-class classification performance. However, training and validating deep learning models require extensive collections of images and still produce false inferences, as identified by a human-in-the-loop. In this paper, we introduce a practical approach to improving the predictions of a pre-trained model through Few-Shot Learning (FSL). After training and validating a model, a small number of false inference images are collected to retrain the model using Image Triplets: a false positive or false negative, a true positive, and a true negative. The retrained FSL model produces considerable gains in performance with only a few epochs and few images. In addition, FSL opens rapid retraining opportunities for human-in-the-loop systems, where a radiologist can relabel false inferences and the model can be quickly retrained. We compare our retrained model's performance with existing FSL approaches in medical imaging that train and evaluate models at once.
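The Image Triplet construction described above (one false inference paired with a true positive and a true negative) can be sketched as follows. The record format and function names are illustrative assumptions, not the paper's actual data pipeline.

```python
import random

def build_image_triplets(records, seed=0):
    """Group model outputs into Image Triplets for few-shot retraining:
    each triplet pairs one false inference (false positive or false
    negative) with one true positive and one true negative.

    `records` is a list of (image_id, predicted_label, true_label)
    tuples with binary labels; this layout is hypothetical."""
    rng = random.Random(seed)
    false_infs = [r for r in records if r[1] != r[2]]
    true_pos = [r for r in records if r[1] == r[2] == 1]
    true_neg = [r for r in records if r[1] == r[2] == 0]
    triplets = []
    for fi in false_infs:
        if true_pos and true_neg:
            triplets.append((fi, rng.choice(true_pos), rng.choice(true_neg)))
    return triplets

# Toy predictions: image C is a false negative, so it anchors one triplet.
records = [("A", 1, 1), ("B", 0, 0), ("C", 0, 1), ("D", 1, 1)]
triplets = build_image_triplets(records)
```

Because each triplet is anchored on a radiologist-identified false inference, only a handful of images is needed per retraining round, which is what makes the human-in-the-loop cycle fast.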