Browsing by Subject "embodied interaction"
Now showing 1 - 4 of 4
Designing embodied interactions for informal learning: two open research challenges (ACM, 2019-06)
Cafaro, Francesco; Trajkova, Milka; Alhakamy, A’aeshah; Human-Centered Computing, School of Informatics and Computing
Interactive installations that are controlled with gestures and body movements have been widely used in museums due to their tremendous educational potential. The design of such systems, however, remains problematic. In this paper, we reflect on two open research challenges that we observed when crafting a Kinect-based prototype installation for data exploration at a science museum: (1) making the user aware that the system is interactive; and (2) increasing the discoverability of hand gestures and body movements.

Framed Guessability: Improving the Discoverability of Gestures and Body Movements for Full-Body Interaction (ACM, 2018-04)
Cafaro, Francesco; Lyons, Leilah; Antle, Alissa N.; Human-Centered Computing, School of Informatics and Computing
The wide availability of body-sensing technologies (such as the Nintendo Wii and Microsoft Kinect) has the potential to bring full-body interaction to the masses, but the design of hand gestures and body movements that can be easily discovered by the users of such systems is still a challenge. In this paper, we revise and evaluate Framed Guessability, a design methodology for crafting discoverable hand gestures and body movements that focuses participants' suggestions within a "frame," i.e., a scenario. We elicited gestures and body movements via the Guessability and Framed Guessability methods, consulting 89 participants in the lab. We then conducted an in-situ quasi-experimental study with 138 museum visitors to compare the discoverability of the gestures and body movements elicited with these two methods. We found that the Framed Guessability movements were more discoverable than those generated via traditional Guessability, even though the museum installation made no reference to the frame.

Move Your Body: Engaging Museum Visitors with Human-Data Interaction (ACM, 2020-04)
Trajkova, Milka; Alhakamy, A’aeshah; Cafaro, Francesco; Mallappa, Rashmi; Kankara, Sreekanth R.; Human-Centered Computing, School of Informatics and Computing
Museums have embraced embodied interaction: its novelty generates buzz and excitement among their patrons, and it has enormous educational potential. Human-Data Interaction (HDI) is a class of embodied interactions that enables people to explore large sets of data using interactive visualizations controlled with gestures and body movements. In museums, however, HDI installations have no utility if visitors do not engage with them. In this paper, we present a quasi-experimental study that investigates how different ways of representing the user ("mode type") next to a data visualization alter the way in which people engage with an HDI system. We consider four mode types: avatar, skeleton, camera overlay, and control. Our findings indicate that the mode type affects the number of visitors who interact with the installation, the gestures that people perform, and the amount of time that visitors spend observing the data on display and interacting with the system.

Show Me How You Interact, I Will Tell You What You Think: Exploring the Effect of the Interaction Style on Users' Sensemaking about Correlation and Causation in Data (ACM, 2021-06)
Alhakamy, A’aeshah; Trajkova, Milka; Cafaro, Francesco; Human-Centered Computing, School of Informatics and Computing
Findings from embodied cognition suggest that our whole body (not just our eyes) plays an important role in how we make sense of data when we interact with data visualizations. In this paper, we present the results of a study that explores how different designs of the "interaction" (with a data visualization) alter the way in which people report and discuss correlation and causation in data. We conducted a lab study with two experimental conditions: Full-Body (participants interacted with a 65" display showing geo-referenced data using gestures and body movements) and Gamepad (people used a joypad to control the system). Participants tended to agree less with statements that portray correlation and causation in the data after using the Gamepad system. Additionally, discourse analysis based on Conceptual Metaphor Theory revealed that users made fewer remarks based on FORCE schemata in Gamepad than in Full-Body.