Browsing by Subject "Generative Adversarial Networks"
Now showing 1 - 2 of 2
Item
Generative Adversarial Networks for Creating Synthetic Free-Text Medical Data: A Proposal for Collaborative Research and Re-use of Machine Learning Models
(AMIA Informatics Summit 2021 Conference Proceedings, 2021-03)
Kasthurirathne, Suranga N.; Dexter, Gregory; Grannis, Shaun J.

Restrictions on sharing Patient Health Identifiers (PHI) limit cross-organizational re-use of free-text medical data. We leverage Generative Adversarial Networks (GANs) to produce synthetic unstructured free-text medical data with low re-identification risk, and assess the suitability of these datasets for replicating machine learning models. We trained GAN models using unstructured free-text laboratory messages pertaining to salmonella, and identified the most accurate models for creating synthetic datasets that reflect the informational characteristics of the original dataset. Natural Language Generation metrics comparing the real and synthetic datasets demonstrated high similarity. Decision models generated using these datasets reported high performance metrics, and there was no statistically significant difference in the performance measures reported by models trained on real versus synthetic datasets. Our results inform the use of GAN models to generate synthetic unstructured free-text data with limited re-identification risk, and the use of such data to enable collaborative research and re-use of machine learning models.

Item
Investigation of Backdoor Attacks and Design of Effective Countermeasures in Federated Learning
(2024-08)
Palanisamy Sundar, Agnideven; Zou, Xukai; Li, Feng; Luo, Xiao; Hu, Qin; Tuceryan, Mihran

Federated Learning (FL), a novel subclass of Artificial Intelligence, decentralizes the learning process by enabling participants to benefit from a comprehensive model trained on a broader dataset without directly sharing private data. This approach integrates multiple local models into a global model, mitigating the need for large individual datasets.
However, the decentralized nature of FL increases its vulnerability to adversarial attacks. These include backdoor attacks, which subtly alter classification in selected categories, and Byzantine attacks, which aim to degrade overall model accuracy. Detecting and defending against such attacks is challenging, as adversaries can participate in the system while masquerading as benign contributors. This thesis provides an extensive analysis of the various security attacks, highlighting the distinct elements of each and the inherent vulnerabilities of FL that facilitate them. The focus is primarily on backdoor attacks, which are stealthier and more difficult to detect than Byzantine attacks. We explore defense strategies effective in identifying malicious participants or mitigating attack impacts on the global model. The primary aim of this research is to evaluate the effectiveness and limitations of existing server-level defenses and to develop novel defense mechanisms under diverse threat models. These include scenarios where the server collaborates with clients to thwart attacks, cases where the server remains passive but benign, and situations where no server is present, requiring clients to independently minimize and isolate attacks while enhancing main-task performance. Throughout, we ensure that the interventions compromise the performance of neither the global nor the local models. The research predominantly utilizes 2D and 3D datasets to underscore the practical implications and effectiveness of the proposed methodologies.
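The aggregation step the second abstract describes, integrating multiple local models into a single global model, is commonly implemented as federated averaging (FedAvg). The sketch below is illustrative only, not the thesis's method: the function and variable names are hypothetical, and each client's model is reduced to a flat NumPy array of parameters weighted by local dataset size.

```python
# Minimal FedAvg-style aggregation sketch (assumption: each client's
# model is a flattened NumPy parameter vector; names are hypothetical).
import numpy as np

def federated_average(client_weights, client_sizes):
    """Combine local parameter vectors into a global model,
    weighting each client by the size of its local dataset."""
    total = sum(client_sizes)
    stacked = np.stack(client_weights)       # shape: (n_clients, n_params)
    coeffs = np.array(client_sizes) / total  # per-client mixing weights
    return coeffs @ stacked                  # weighted sum of parameters

# Three clients with toy 2-parameter "models" and unequal data sizes.
clients = [np.array([1.0, 2.0]), np.array([3.0, 4.0]), np.array([5.0, 6.0])]
sizes = [10, 10, 20]
global_model = federated_average(clients, sizes)
print(global_model)  # [3.5 4.5]
```

Because the server only sees parameter vectors, a malicious client can submit a poisoned update; this weighted averaging offers no built-in detection, which is the vulnerability the thesis's defenses target.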