Davide Bolchini
Dr. Bolchini's research seeks to identify the potential and limits of aural navigation paradigms for enhancing the effectiveness of web navigation, through a series of evaluation studies involving visually impaired participants using screen readers and sighted participants using mobile devices. Website navigation depends on web pages that visually communicate extensive information almost instantaneously, including content, overall semantics, orientation cues, and navigation possibilities. For users who are visually impaired or who cannot look at a screen while performing other tasks (e.g., driving or walking), this multidimensional communication may be difficult or even impossible to access. Existing aural technologies (e.g., screen readers, aural browsers) and web accessibility standards, although powerful and enabling, do not fully address this problem, as they read content aloud rather than conceptually translating a complex communication process. In this context, audio is a strictly linear channel, which makes aural navigation in large information architectures very difficult and frustrating. Supported by the National Science Foundation, Dr. Bolchini's research explores innovative design strategies for the aural navigation of complex web information architectures, where users exclusively or primarily listen to, rather than look at, content and navigational prompts.
Browsing Davide Bolchini by Author "Bolchini, Davide"
Item: Active Reading Behaviors in Tablet-based Learning (AACE, 2015-07)
Palilonis, Jennifer; Bolchini, Davide; Department of Human-Centered Computing, School of Informatics and Computing

Active reading is fundamental to learning. However, there is little understanding about whether traditional active reading frameworks sufficiently characterize how learners study multimedia tablet textbooks. This paper explores the nature of active reading in the tablet environment through a qualitative study that engaged 30 students in an active reading experience with two tablet textbook modules. We discovered novel study behaviors learners enact that are key to the active reading experience with tablet textbooks. Results illustrate that existing active reading tools do little to support learners when they struggle to make sense of and subsequently remember content delivered in multiple media formats, are distracted by the mechanics of interactive content, and grapple with the transient nature of audiovisual material. We collected valuable user feedback and uncovered key deficiencies in existing active reading tools that hinder successful multimedia tablet textbook reading experiences. Our work can inform future designs of tools that support active reading in this environment.

Item: ANFORA (Aural Navigation Flows on Rich Architectures) (Office of the Vice Chancellor for Research, 2012-04-13)
Ghahari, Romisa R.; George-Palilonis, Jennifer; Bolchini, Davide

Existing web applications make users focus their visual attention on the mobile device while browsing content and services on the go. To support eyes-free, mobile experiences, designers can minimize interaction with the device by leveraging the auditory channel. Whereas acoustic interfaces have been shown to be effective in reducing visual attention, a perplexing challenge is designing the aural information architectures typical of the web. To address this problem, we introduce Aural Navigation Flows on Rich Architectures (ANFORA), a novel design framework that transforms existing information architectures into linear, aural flows. We demonstrate our approach in ANFORAnews, a semi-aural mobile site designed to browse large collections of news stories. A study with frequent news readers (N=20) investigated the usability and navigation experience with ANFORAnews in a mobile setting. Aural flows are enjoyable, easy to use and appropriate for eyes-free, mobile contexts. Future work will optimize the mechanisms to customize content and control the aural navigation.
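To make the aural-flow concept above concrete, here is a minimal Python sketch of how a hierarchical information architecture might be linearized into a playlist-like aural flow. The Page structure, the linearize function, and the print-based text-to-speech stand-in are illustrative assumptions, not the ANFORA implementation.

```python
# Hypothetical sketch: flattening a hierarchical information architecture
# into an "aural flow", a playlist-like sequence a listener can step through.
# Names and structure are illustrative assumptions, not the ANFORA codebase.

from dataclasses import dataclass, field


@dataclass
class Page:
    title: str
    summary: str                        # short text to be read aloud
    children: list["Page"] = field(default_factory=list)


def linearize(page: Page) -> list[Page]:
    """Flatten the IA tree depth-first into one linear aural flow."""
    flow = [page]
    for child in page.children:
        flow.extend(linearize(child))
    return flow


if __name__ == "__main__":
    site = Page("News", "Top stories", [
        Page("World", "World news summary"),
        Page("Sports", "Sports news summary"),
    ])
    for position, page in enumerate(linearize(site), start=1):
        # A text-to-speech engine would read these prompts in sequence.
        print(f"{position}. {page.title}: {page.summary}")
```

The point of the transformation is that a listener only ever needs "next" and "previous", rather than a visual map of the hierarchy.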
Item: Endorsement, Prior Action, and Language: Modeling Trusted Advice in Computerized Clinical Alerts (ACM, 2016-05)
Chattopadhyay, Debaleena; Duke, Jon; Bolchini, Davide; Human-Centered Computing, School of Informatics and Computing

The safe prescribing of medications via computerized physician order entry routinely relies on clinical alerts. Alert compliance, however, remains surprisingly low, with up to 95% often ignored. Prior approaches, such as improving presentational factors in alert design, had limited success, mainly due to physicians' lack of trust in computerized advice. While designing trustworthy alerts is key, actionable design principles to embody elements of trust in alerts remain little explored. To mitigate this gap, we introduce a model to guide the design of trust-based clinical alerts, based on what physicians value when trusting advice from peers in clinical activities. We discuss three key dimensions to craft trusted alerts: using colleagues' endorsement, foregrounding physicians' prior actions, and adopting a suitable language. We exemplify our approach with emerging alert designs from our ongoing research with physicians and contribute to the current debate on how to design effective alerts to improve patient safety.

Item: Fast and Discreet Access to Web Services for the Blind through Screenless Browsing (Office of the Vice Chancellor for Research, 2016-04-08)
Bolchini, Davide; Abhishek Dara, Joe; Bhat, Dhanashree; Pachhapukur, Shilpa; Chamboneth, Yhareli

Web services on our smartphones have become an integral part of our daily lives. Services like Google Maps and Yelp have helped us explore the world better. However, the blind and visually impaired (BVI) spend unnecessary cognitive and mechanical effort navigating complex menus displayed on a mobile device before they can locate and access the content of their interest. More direct access may happen via voice-based services (e.g., Siri), but at the cost of breaking privacy and social boundaries. To combat this issue, we propose Screenless Browsing: combining hand gesture recognition with aural navigation patterns that enable the BVI to quickly navigate aural menus through nimble, discreet hand movements. We propose to decouple the friction-prone mechanical interaction with a mobile display from the navigation experience. We demonstrate our approach by: (1) introducing novel aural browsing menus that combine web content with binary splitting, dynamic sorting and playlists to accelerate navigation across collections; (2) mapping aural menu navigation to the robust and simple vocabulary of hand movements enabled by Myo, an off-the-shelf muscle-controlled armband; (3) reifying our approach by iteratively prototyping Screenless Browsing of mobile applications for the BVI; (4) conducting a user study to assess the limits and potential of our approach with participants from the Indiana School for the Blind and Visually Impaired (ISBVI). We believe that the ability to access web services on the move without taking the phone out of the pocket will empower the BVI to navigate and explore places effectively. Our work exemplifies a novel way to reduce unwanted friction with the device and maximize the content experience for the BVI.
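The binary splitting mentioned in point (1) above lends itself to a short illustration: each gesture halves the list of candidates, so a listener can reach one of N items in about log2(N) moves. The gesture labels, the speak stub, and the function name below are hypothetical, and the Myo recognition pipeline is not shown.

```python
# Hypothetical sketch of binary splitting for aural menus: each gesture halves
# the candidate list, so a listener reaches one of N items in ~log2(N) moves.
# Gesture names and the speak() stub are illustrative assumptions.

def speak(text: str) -> None:
    print(f"[TTS] {text}")  # stand-in for a text-to-speech call


def binary_split_navigate(items: list[str], gestures: list[str]) -> str:
    lo, hi = 0, len(items) - 1
    while lo < hi:
        mid = (lo + hi) // 2
        speak(f"First half ends at '{items[mid]}'. Split left or right?")
        gesture = gestures.pop(0)        # e.g., recognized from an armband
        if gesture == "left":            # keep the first half
            hi = mid
        else:                            # "right": keep the second half
            lo = mid + 1
    speak(f"Selected '{items[lo]}'")
    return items[lo]


if __name__ == "__main__":
    headlines = [f"Story {n}" for n in range(1, 9)]
    # Three gestures suffice for eight items: log2(8) = 3.
    binary_split_navigate(headlines, ["right", "left", "left"])
```

Compared with reading a long list aloud item by item, the halving strategy trades a little per-step decision effort for far fewer steps.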
Item: From Critique to Collaboration: Rethinking Computerized Clinical Alerts (Office of the Vice Chancellor for Research, 2016-04-08)
Bolchini, Davide; Chattopadhyay, Debaleena; Jia, Yuan; Ghahari, Romisa R.; Duke, Jon

The safe prescribing of medications via computerized physician order entry routinely relies on clinical alerts. Alert compliance, however, remains surprisingly low, with up to 96% of such alerts ignored daily. Prior approaches, such as improving presentational factors in alert design, had limited success, mainly due to physicians' lack of trust in computerized advice. While designing trustworthy alerts is key, actionable design principles to embody elements of trust in alerts remain little explored. To address this issue, we focus on improving the trust between physicians and computerized advice by examining why physicians trust their medical colleagues. To understand trusted advice among physicians, we conducted three contextual inquiries in a hospital setting (n = 22) and corroborated our findings with a survey (n = 37). Drivers that guided physicians in trusting peer advice included: timeliness of the advice, collaborative language, empathy, level of specialization, and medical hierarchy. Based on these findings, we introduced seven design directions for trust-based alerts: endorsement, transparency, team sensing, collaborative, empathic, conflict mitigating, and agency laden. Grounded in these results, we then proposed a model to guide the design of trust-based clinical alerts. Our model consists of three key dimensions: using colleagues' endorsement, foregrounding physicians' prior actions, and adopting a suitable language. Using this model, we iteratively designed, pruned, and validated a set of novel alert designs. We are currently evaluating eleven alert designs in an online survey with physicians. The ongoing survey evaluates the likelihood of alert compliance and the perceived value of our proposed trust-based alerts. Next, we are planning in-lab studies to evaluate physicians' cognitive load during decision making and measure attention to different trust cues using gaze duration and trajectories. Our work contributes to the current debate on how to design effective alerts to improve patient safety. Acknowledgements: This research material is based on work supported by the National Science Foundation under Grant #1343973. Any opinions, findings and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect those of the NSF.
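The three dimensions of the model above (colleagues' endorsement, physicians' prior actions, suitable language) can be pictured as fields of an alert record. The following is a minimal, hypothetical sketch; the field names and rendering are assumptions for illustration, not the study's instrument.

```python
# Hypothetical sketch: representing the three trust dimensions of the model
# above as fields of a clinical alert. Field names are illustrative assumptions.

from dataclasses import dataclass


@dataclass
class TrustBasedAlert:
    message: str                 # the clinical advice itself
    endorsed_by: list[str]       # colleagues' endorsement, e.g. peer names
    prior_actions: list[str]     # physician's own earlier, related decisions
    language_style: str          # e.g. "collaborative" rather than imperative

    def render(self) -> str:
        endorsement = ", ".join(self.endorsed_by) or "no endorsements yet"
        history = "; ".join(self.prior_actions) or "no prior related actions"
        return (f"{self.message}\n"
                f"  Endorsed by: {endorsement}\n"
                f"  Your prior actions: {history}\n"
                f"  Tone: {self.language_style}")


if __name__ == "__main__":
    alert = TrustBasedAlert(
        message="Consider reducing the warfarin dose.",
        endorsed_by=["Dr. A (cardiology)"],
        prior_actions=["Adjusted warfarin for this patient last month"],
        language_style="collaborative",
    )
    print(alert.render())
```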
Item: Guidelines to Incorporate a Clinician User Experience (UX) into the Design of Patient-Operated mHealth (ACM, 2017-05)
Tunnell, Harry; Faiola, Anthony; Bolchini, Davide; Human-Centered Computing, School of Informatics and Computing

This interactivity demonstration paper highlights how a patient-operated mHealth solution can be designed to improve clinician understanding of a patient's health status during a first face-to-face encounter. Patients can use smartphones to retrieve personal health information that is difficult to recall from memory. This provides an opportunity to improve patient-clinician collaboration. To explore this idea, a mixed-method study with 12 clinicians in a simulated encounter was conducted. A smartphone personal health record was prototyped and used for an experimental study. Communication, efficiency, and effectiveness were improved for clinicians who experienced the prototype. Study outcomes included a validated set of design guidelines for mHealth tools to support better patient-clinician communication.

Item: InterActive Reading: Understanding Strategies Learners Use to Study Multimedia Content in Tablet-Based Textbooks (Office of the Vice Chancellor for Research, 2013-04-05)
Palilonis, Jennifer; Bolchini, Davide

Active reading of educational textbooks is a complex meta-cognitive process. The traditional framework for active reading is conceptualized as the combination of three types of actions: annotation (e.g., highlighting and note taking), reorganization (e.g., outlining and summarizing) and browsing (e.g., studying annotations and outlines to prepare for future recall). However, as the traditional textbook paradigm evolves to include interactive, multimedia tablet-based products, dramatic changes are on the horizon for the ways in which educational content is delivered and consumed. Tablet devices allow textbook authors, publishers and developers to integrate multimedia content, such as video, audio, animations and interactive visualizations, with traditional expository text, designed as a browse-able book. However, existing tablet devices (e.g., iPad, Kindle Fire) only offer tools that support traditional active reading of text-based content. This research project reports findings of an exploratory qualitative study that examines what new active reading strategies emerge when learners engage with tablet-based multimedia textbooks. Participants were presented with one of two tablet textbooks developed using Apple's iBooks Author. The texts included a number of content forms, including traditional expository text, videos and animations, clickable keywords, image galleries, and interactive information graphics. Concept mapping tests were conducted to determine what students learned during their tablet study sessions, and semi-structured interviews were conducted to determine how easy or difficult it was for participants to actively study videos and animations. Early results suggest that the active learning tools developed for the tablet, namely highlighting and bookmarking, are not sufficient for multimedia content, and new tools must be developed to better support such activities. Future research and development are discussed.

Item: Laid-Back, Touchless Collaboration around Wall-size Displays: Visual Feedback and Affordances (http://www.powerwall.mdx.ac.uk/, 2013-04-27)
Chattopadhyay, Debaleena; Bolchini, Davide

To facilitate interaction and collaboration around ultra-high-resolution, Wall-Size Displays (WSD), post-WIMP interaction modes like touchless and multi-touch have opened up new, unprecedented opportunities. Yet to fully harness this potential, we still need to understand fundamental design factors for successful WSD experiences. Some of these include visual feedback for touchless interactions, novel interface affordances for at-a-distance, high-bandwidth input, and the techno-social ingredients supporting laid-back, relaxed collaboration around WSDs. This position paper highlights our progress in a long-term research program that examines these issues and spurs new, exciting research directions. We recently completed a study aimed at investigating the properties of visual feedback in touchless WSD interaction, and we discuss some of our findings here. Our work exemplifies how research in WSD interaction calls for re-conceptualizing basic, first principles of Human-Computer Interaction (HCI) to pioneer a suite of next-generation interaction environments.

Item: Navigating the Aural Web (Office of the Vice Chancellor for Research, 2011-04-08)
Bolchini, Davide

The current paradigm of web navigation poses great obstacles to users in two eyes-free scenarios: mobile computing and information access for the visually impaired. The common thread of these scenarios is the inability to efficiently navigate complex information architectures, due to the mechanical and cognitive limitations that emerge while listening to, instead of looking at, information and navigation prompts. New paradigms for aural navigation design are still unexplored, yet they are crucial to address increasingly important requirements. Inspired by the effective practice of human-to-human aural dialogues, we present work-in-progress research, funded by a 3-year NSF grant, that introduces innovative design strategies for aural navigation in complex information architectures typical of the web. Specifically, in this exhibit we introduce and demonstrate design patterns supporting aural back navigation in large collections, aimed at improving the efficiency and usability of aural navigation. Current evaluation thrusts of the new navigation techniques involve blind users accessing the web through screen readers and sighted users using a mobile application prototype.
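As a rough illustration of the aural back navigation described in the item above, the sketch below keeps a history stack so a listener can retrace steps without re-hearing a full index list. The class name and its spoken announcements are hypothetical, not the patterns evaluated in the exhibit.

```python
# Hypothetical sketch: a history stack for aural back navigation, letting a
# listener retrace steps without replaying full index lists. Illustrative only.

class AuralHistory:
    def __init__(self) -> None:
        self._stack: list[str] = []

    def visit(self, page: str) -> None:
        self._stack.append(page)
        print(f"[TTS] Now on: {page}")

    def back(self) -> None:
        """Return to the previous page, announcing it aurally."""
        if len(self._stack) > 1:
            self._stack.pop()
            print(f"[TTS] Back to: {self._stack[-1]}")
        else:
            print("[TTS] You are at the start; nowhere to go back to.")


if __name__ == "__main__":
    history = AuralHistory()
    for page in ["Topic list", "Story 3", "Related story"]:
        history.visit(page)
    history.back()  # returns to "Story 3" without replaying the topic list
```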
Item: Navigating the Aural Web: Augmenting User Experience for Visually Impaired and Mobile Users (Office of the Vice Chancellor for Research, 2013-04-05)
Bolchini, Davide; Yang, Tao; Gadde, Prathik; Ghahari, Romisa Rohani

The current web navigation paradigm structures interaction around vision and thus hampers users in two eyes-free scenarios: mobile computing and information access for the visually impaired. Users in both scenarios are unable to navigate complex information architectures efficiently because of the strictly linear perceptual bandwidth of the aural channel. To combat this problem, we are conducting a long-term research program aimed at establishing novel design strategies that can augment aural navigation while users browse complex information architectures typical of the web. A pervasive problem in designing for web accessibility (especially for screen-reader users) is providing efficient access to a large collection of contents, which is manifested in long lists indexing the underlying contents. Cognitively managing the interaction with long lists is cumbersome in the aural paradigm because users need to listen attentively to each list item to decide which link to follow, and then select it. For every non-relevant page selected, screen-reader users need to go back to the list to select another page. Our most recent study compared the performance of index-based web navigation to guided-tour navigation (navigation without lists) for screen-reader users. Guided-tour navigation allows users to move directly back and forth across the content pages of a collection, bypassing lists. An experiment (N=10), conducted at the Indiana School for the Blind and Visually Impaired (ISBVI), examined these web navigation strategies during fact-finding tasks. Guided-tour navigation significantly reduced time on task, number of pages visited, number of keystrokes, and perceived cognitive effort while enhancing the navigational experience. By augmenting existing navigational methods for screen-reader users, our research offers design strategies to web designers to improve web accessibility without costly site redesign. This research material is based upon work supported by the National Science Foundation under Grant #1018054.
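The guided-tour strategy above can be illustrated in a few lines: the tour steps directly between sibling content pages, so no return to the index list is needed between pages. Class and method names and the text-to-speech stand-in are assumptions, not the study's prototype.

```python
# Hypothetical sketch contrasting index-based and guided-tour navigation:
# a guided tour moves directly between sibling pages, bypassing the index list.
# Names are illustrative assumptions, not the prototype used in the study.

class GuidedTour:
    def __init__(self, pages: list[str]) -> None:
        self.pages = pages
        self.position = 0
        print(f"[TTS] {self.pages[self.position]}")

    def next(self) -> None:
        if self.position < len(self.pages) - 1:
            self.position += 1
            print(f"[TTS] {self.pages[self.position]}")

    def previous(self) -> None:
        if self.position > 0:
            self.position -= 1
            print(f"[TTS] {self.pages[self.position]}")


if __name__ == "__main__":
    # Index-based navigation would return to a long list after each page;
    # the guided tour steps straight to the next fact-finding candidate.
    tour = GuidedTour(["Article A", "Article B", "Article C"])
    tour.next()      # straight to Article B, no list in between
    tour.previous()  # straight back to Article A
```

The design choice mirrors the reported result: removing the round trip through the index list reduces keystrokes and pages visited per fact-finding task.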