Browsing by Subject "fairness"
Now showing 1 - 3 of 3
Item: Denoising Individual Bias for Fairer Binary Submatrix Detection (ACM, 2020-10)
Wan, Changlin; Chang, Wennan; Zhao, Tong; Cao, Sha; Zhang, Chi; Biostatistics, School of Public Health
Low-rank representation of binary matrices is powerful in disentangling sparse individual-attribute associations and has found wide application. Existing binary matrix factorization (BMF) and co-clustering (CC) methods often assume i.i.d. background noise. However, this assumption is easily violated in real data, where heterogeneous row- or column-wise probabilities of binary entries produce disparate element-wise background distributions and undermine the rationale of existing methods. We propose a binary data denoising framework, BIND, which improves the detection of true patterns by estimating the row- or column-wise mixture distribution of patterns and disparate background, and eliminating the binary attributes that are more likely to come from the background. BIND is supported by thoroughly derived mathematical properties of the row- and column-wise mixture distributions. Our experiments on synthetic and real-world data demonstrate that BIND effectively removes background noise and drastically increases the fairness and accuracy of state-of-the-art BMF and CC methods. (An illustrative sketch of this denoising idea appears after the listing.)

Item: An Interactive Approach to Bias Mitigation in Machine Learning (IEEE, 2021-10)
Wang, Hao; Mukhopadhyay, Snehasis; Xiao, Yunyu; Fang, Shiaofen; Computer and Information Science, School of Science
Underrepresentation and misrepresentation of protected groups in the training data are a significant source of bias for Machine Learning (ML) algorithms, resulting in decreased confidence and trustworthiness of the generated ML models. Such bias can be mitigated by incorporating both objective and subjective (through human users) measures of bias, and compensating for them by means of a suitable selection algorithm over subgroups of the training data. In this paper, we propose a methodology for integrating bias detection and mitigation strategies through interactive visualization of machine learning models in selected protected spaces. In this approach, the performance of a (partially generated) ML model is visualized and evaluated by a human user, or a community of human users, for the potential presence of bias using both objective and subjective criteria. Guided by such human feedback, the ML algorithm can implement a variety of remedial sampling strategies to mitigate the bias in an iterative human-in-the-loop approach. We also provide experimental results on a benchmark ML dataset to demonstrate that such an interactive ML approach holds considerable promise for detecting and mitigating bias in ML models. (A sketch of this human-in-the-loop loop appears after the listing.)

Item: A Penalized Likelihood Method for Balancing Accuracy and Fairness in Predictive Policing (IEEE, 2018-10)
Mohler, George; Raje, Rajeev; Carter, Jeremy; Valasik, Matthew; Brantingham, P. Jeffrey; Computer and Information Science, School of Science
Racial bias in predictive policing algorithms has been the focus of recent research, and, in the case of Hawkes processes, feedback loops are possible in which biased arrests are amplified through self-excitation, leading to hotspot formation and further arrests of minority populations. In this article we develop a penalized likelihood approach for introducing fairness into point process models of crime. In particular, we add a penalty term to the likelihood function that encourages the amount of police patrol received by each of several demographic groups to be proportional to the representation of that group in the total population. We apply our model to historical crime incident data in Indianapolis and measure the fairness and accuracy of the two approaches across several crime categories. We show that fairness can be introduced into point process models of crime so that patrol levels proportionally match demographics, though at a cost of reduced accuracy of the algorithms. (A sketch of this penalty term appears after the listing.)
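The first item above rests on the idea that 1-entries explained by row- or column-specific background noise should be removed before factorization. The sketch below illustrates that idea under a much simpler independence background model; the function name denoise_binary, the marginal-rate estimate, and the 0.5 threshold are assumptions for illustration only, not the BIND algorithm or its mixture-distribution derivation.

```python
import numpy as np

def denoise_binary(X, threshold=0.5):
    """Remove 1-entries that an independent row/column background model
    already explains well, before running BMF or co-clustering.

    This is a deliberately simple heuristic standing in for BIND's
    mixture-distribution estimation; the independence background model
    and the threshold are illustrative assumptions.
    """
    X = np.asarray(X, dtype=float)
    row_rate = X.mean(axis=1, keepdims=True)       # row-wise density of 1s
    col_rate = X.mean(axis=0, keepdims=True)       # column-wise density of 1s
    overall = max(X.mean(), 1e-12)
    # Expected rate of a 1 under an independence (pure background) model.
    background_rate = np.clip(row_rate * col_rate / overall, 0.0, 1.0)
    # Keep only the 1-entries that the background alone does not explain.
    return ((X == 1) & (background_rate < threshold)).astype(int)

# Toy usage: a planted 6x6 pattern inside a 40x40 matrix with one very
# noisy row; denoising would be applied before any factorization step.
rng = np.random.default_rng(0)
X = (rng.random((40, 40)) < 0.05).astype(int)
X[:6, :6] = 1                                  # planted submatrix pattern
X[-1, :] = (rng.random(40) < 0.7).astype(int)  # heterogeneous background row
X_clean = denoise_binary(X)
print("ones before:", X.sum(), "ones after:", X_clean.sum())
```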
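The second item describes an iterative human-in-the-loop cycle: train a model, show per-group performance, let a user flag a poorly served subgroup, resample, and retrain. A minimal sketch of that loop follows, with a worst-group rule standing in for the human user and the visual interface; the synthetic data, the logistic model, and the oversampling size are illustrative assumptions, not the paper's benchmark setup.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

# Synthetic training data with a binary protected attribute in column 0.
# Group 1 is deliberately underrepresented (10% of rows) to induce bias.
n = 2000
group = (rng.random(n) < 0.10).astype(int)
x = rng.normal(size=(n, 3))
y = ((x[:, 0] + 0.5 * group + rng.normal(scale=0.5, size=n)) > 0).astype(int)
features = np.column_stack([group, x])

def group_accuracy(model, features, y, group):
    """Per-group accuracy, serving as the 'objective' bias measure shown to the user."""
    pred = model.predict(features)
    return {g: float((pred == y)[group == g].mean()) for g in (0, 1)}

train_idx = np.arange(n)
for round_ in range(3):
    model = LogisticRegression().fit(features[train_idx], y[train_idx])
    scores = group_accuracy(model, features, y, group)
    print(f"round {round_}: per-group accuracy {scores}")

    # Stand-in for interactive feedback: the 'user' flags the worst-served
    # group; a real deployment would rely on the visual interface instead.
    flagged = min(scores, key=scores.get)

    # Remedial sampling: duplicate rows from the flagged group so the next
    # training round sees that subgroup more often.
    extra = rng.choice(np.where(group == flagged)[0], size=200)
    train_idx = np.concatenate([train_idx, extra])
```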
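The third item names a concrete mechanism: a penalty added to the likelihood that pulls each demographic group's share of patrol toward its share of the population. A minimal sketch of that idea follows, assuming a plain Poisson intensity over grid cells rather than the paper's Hawkes process, with an illustrative squared-error penalty and weight lam.

```python
import numpy as np
from scipy.optimize import minimize

# Toy grid: per-cell crime counts, covariates, and demographic composition.
# The data, the penalty form, and the Poisson model are illustrative
# assumptions; the paper's Hawkes-process likelihood is not reproduced here.
rng = np.random.default_rng(2)
n_cells, n_groups = 50, 2
counts = rng.poisson(3.0, size=n_cells)                      # observed events per cell
covariates = np.column_stack([np.ones(n_cells), rng.normal(size=n_cells)])
group_frac = rng.dirichlet(np.ones(n_groups), size=n_cells)  # cell demographics
pop_share = group_frac.mean(axis=0)                          # group share of population

def penalized_nll(beta, lam=10.0):
    """Negative Poisson log-likelihood plus a fairness penalty.

    Patrol is assumed to be allocated in proportion to the fitted intensity,
    and the penalty pushes each group's share of patrol toward its share of
    the population, as described in the abstract above.
    """
    intensity = np.exp(covariates @ beta)          # fitted rate per cell
    nll = np.sum(intensity - counts * np.log(intensity))
    patrol = intensity / intensity.sum()           # patrol allocation per cell
    patrol_share = patrol @ group_frac             # patrol received by each group
    penalty = np.sum((patrol_share - pop_share) ** 2)
    return nll + lam * penalty

beta_hat = minimize(penalized_nll, x0=np.zeros(2), method="Nelder-Mead").x
fitted = np.exp(covariates @ beta_hat)
print("group patrol shares:", (fitted / fitted.sum()) @ group_frac)
print("population shares:  ", pop_share)
```

Increasing lam trades likelihood fit for closer agreement between patrol shares and population shares, mirroring the accuracy-versus-fairness trade-off reported in the abstract.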