Browsing by Author "Vorm, Eric S."
Now showing 1 - 2 of 2
Item
Assessing Demand for Transparency in Intelligent Systems Using Machine Learning (IEEE, 2018-07)
Vorm, Eric S.; Miller, Andrew D.; Human-Centered Computing, School of Informatics and Computing

Intelligent systems offering decision support can lessen cognitive load and improve the efficiency of decision making in a variety of contexts. These systems assist users by evaluating multiple courses of action and recommending the right action at the right time. Modern intelligent systems using machine learning introduce new capabilities in decision support, but they can come at a cost. Machine learning models provide little explanation of their outputs or reasoning process, making it difficult to determine when it is appropriate to trust them or, when they fail, what went wrong. To improve trust and ensure appropriate reliance on these systems, users must be afforded increased transparency, enabling an understanding of the system's reasoning and an explanation of its predictions or classifications. Here we discuss the salient factors in designing transparent intelligent systems using machine learning and present the results of a user-centered design study. We propose design guidelines derived from our study and discuss next steps for designing for intelligent system transparency.

Item
Assessing the Value of Transparency in Recommender Systems: An End-User Perspective (ACM, 2018-10)
Vorm, Eric S.; Miller, Andrew D.; Human-Centered Computing, School of Informatics and Computing

Recommender systems, especially those built on machine learning, are increasing in popularity, as well as in complexity and scope. Systems that cannot explain their reasoning to end-users risk losing users' trust and failing to achieve acceptance. Users demand interfaces that afford them insight into a system's internal workings, allowing them to build appropriate mental models and calibrated trust. Building interfaces that provide this level of transparency, however, is a significant design challenge, with many competing design features and little empirical research to guide implementation. We investigated how end-users of recommender systems value different categories of information when deciding what to do with computer-generated recommendations in contexts involving high risk to themselves or others. Our findings will inform the future design of decision support in high-criticality contexts.