Center for Biomedical Informatics
Browsing Center for Biomedical Informatics by Author "Grannis, Shaun J."
Item

Generative Adversarial Networks for Creating Synthetic Free-Text Medical Data: A Proposal for Collaborative Research and Re-use of Machine Learning Models
(AMIA Informatics Summit 2021 Conference Proceedings, 2021-03)
Kasthurirathne, Suranga N.; Dexter, Gregory; Grannis, Shaun J.

Restrictions on sharing Patient Health Identifiers (PHI) limit cross-organizational re-use of free-text medical data. We leverage Generative Adversarial Networks (GANs) to produce synthetic unstructured free-text medical data with low re-identification risk, and assess the suitability of these datasets for replicating machine learning models. We trained GAN models using unstructured free-text laboratory messages pertaining to salmonella, and identified the most accurate models for creating synthetic datasets that reflect the informational characteristics of the original dataset. Natural Language Generation metrics comparing the real and synthetic datasets demonstrated high similarity. Decision models generated using these datasets reported high performance metrics, and there was no statistically significant difference in performance measures between models trained using real and synthetic datasets. Our results support the use of GAN models to generate synthetic unstructured free-text data with limited re-identification risk, and the use of this data to enable collaborative research and re-use of machine learning models.

Item

Identifying Biases in Clinical Decision Models Designed to Predict Need of Wraparound Services
(AMIA Informatics Summit 2021 Conference Proceedings, 2021-03)
Kasthurirathne, Suranga N.; Vest, Joshua R.; Grannis, Shaun J.

Investigation of systemic biases in AI models for the clinical domain has been limited. We re-created a series of models predicting the need for wraparound services, and inspected them for biases across age, gender, and race using the AI Fairness 360 framework. The AI models reported performance metrics comparable to the original efforts. Investigation of biases using the AI Fairness 360 framework found a low likelihood that patient age, gender, and race are introducing bias into our algorithms.
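The first abstract compares real and synthetic free-text datasets using Natural Language Generation metrics. The abstract does not name the specific metrics used; the sketch below illustrates one common choice, a BLEU-1-style modified unigram precision, computed in pure Python over invented sample lab messages (the salmonella strings are hypothetical placeholders, not data from the study).

```python
from collections import Counter

def unigram_precision(synthetic, real):
    """Fraction of synthetic-corpus tokens that also appear in the real
    corpus, clipped by real-corpus counts (BLEU-1-style modified precision).
    Values near 1.0 indicate high lexical similarity."""
    real_counts = Counter(tok for msg in real for tok in msg.split())
    syn_counts = Counter(tok for msg in synthetic for tok in msg.split())
    overlap = sum(min(c, real_counts[tok]) for tok, c in syn_counts.items())
    total = sum(syn_counts.values())
    return overlap / total if total else 0.0

# Hypothetical example messages, for illustration only
real = ["culture positive for salmonella", "salmonella not detected"]
synthetic = ["culture positive for salmonella species"]
print(round(unigram_precision(synthetic, real), 2))  # → 0.8
```

In practice such corpus-level metrics (BLEU, ROUGE, perplexity) would be computed over the full real and synthetic message sets, alongside the downstream check the abstract describes: training identical decision models on each corpus and comparing their performance.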
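The second abstract inspects models for bias across demographic groups using the AI Fairness 360 framework. As a minimal illustration of the kind of group-fairness metric that framework provides, the sketch below computes disparate impact, the ratio of favorable-outcome rates between an unprivileged and a privileged group, in plain Python. The predictions, age groups, and privileged-group choice are all invented for the example, not taken from the study.

```python
def disparate_impact(outcomes, groups, privileged):
    """Ratio of favorable-outcome rates: unprivileged over privileged.
    A value near 1.0 suggests the model flags both groups at similar rates;
    a common rule of thumb treats values below 0.8 as a potential concern."""
    def rate(in_privileged):
        selected = [o for o, g in zip(outcomes, groups)
                    if (g == privileged) == in_privileged]
        return sum(selected) / len(selected)
    return rate(False) / rate(True)

# Hypothetical predictions: 1 = flagged as needing wraparound services
preds = [1, 0, 1, 1, 0, 1, 0, 1]
ages  = ["<65", "<65", "<65", "<65", ">=65", ">=65", ">=65", ">=65"]
print(round(disparate_impact(preds, ages, privileged="<65"), 2))  # → 0.67
```

The same calculation would be repeated for each protected attribute (age, gender, race) and complemented by other metrics such as statistical parity difference or equal opportunity difference, which AI Fairness 360 also reports.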