Into the Black Box: Designing for Transparency in Artificial Intelligence

dc.contributor.advisor: Miller, Andrew
dc.contributor.author: Vorm, Eric Stephen
dc.contributor.other: Bolchini, Davide
dc.contributor.other: Reda, Khairi
dc.contributor.other: Fedorikhin, Sasha
dc.date.accessioned: 2019-12-27T14:10:23Z
dc.date.available: 2019-12-27T14:10:23Z
dc.date.issued: 2019-11
dc.degree.date: 2019
dc.degree.discipline: School of Informatics and Computing
dc.degree.grantor: Indiana University
dc.degree.level: Ph.D.
dc.description: Indiana University-Purdue University Indianapolis (IUPUI)
dc.description.abstract: The rapid infusion of artificial intelligence into everyday technologies means that, in the very near future, consumers are likely to interact daily with intelligent systems that provide suggestions and recommendations. While these technologies promise much, their current lack of transparency risks confusing end users and limits their market viability. Efforts are underway to make machine learning models more transparent, but HCI currently lacks an understanding of how model-generated explanations should translate into the practicalities of system design. To address this gap, my research took a pragmatic approach to improving system transparency for end users. Through a series of three studies, I investigated the need for and value of transparency to end users, and explored methods of improving system designs to achieve greater transparency in intelligent systems that offer recommendations. My research produced a summarized taxonomy of the motivations that lead users to ask questions of intelligent systems, which is useful for considering the type and category of information users might appreciate when interacting with AI-based recommendations. I also developed a categorization of explanation types, called explanation vectors, organized into groups that correspond to user knowledge goals; explanation vectors give system designers options for delivering explanations of system processes beyond basic explainability. I developed a detailed user typology, a four-factor categorization of the predominant attitudes and opinions of everyday users interacting with AI-based recommendations, which is useful for understanding the range of user sentiment toward AI-based recommender features and potentially for tailoring interface design by user type. Lastly, I developed and tested the System Transparency Evaluation Method (STEv), which allows real-world systems and prototypes to be evaluated and improved through a low-cost query method. The results of this dissertation offer concrete direction to interaction designers on how these findings might manifest in interfaces that are more transparent to end users. These studies provide a framework and methodology complementary to existing HCI evaluation methods, and lay the groundwork upon which other research into improving system transparency can build.
dc.identifier.uri: https://hdl.handle.net/1805/21600
dc.identifier.uri: http://dx.doi.org/10.7912/C2/957
dc.language.iso: en_US
dc.subject: Embodied Artificial Intelligence
dc.subject: Human Centered Computing
dc.subject: Human Computer Interaction
dc.subject: Human Systems Integration
dc.subject: Machine Learning
dc.subject: User Centered Design
dc.title: Into the Black Box: Designing for Transparency in Artificial Intelligence
dc.type: Dissertation
Files
Original bundle:
Name: Vorm_iupui_0104D_10407.pdf
Size: 2.65 MB
Format: Adobe Portable Document Format
License bundle:
Name: license.txt
Size: 1.99 KB
Format: Item-specific license agreed upon to submission