Browse by Author
Browsing by Author "Wachs, Juan"
Now showing 1 - 6 of 6
Item: Agreement Study Using Gesture Description Analysis (IEEE, 2020-10)
Authors: Madapana, Naveen; Gonzalez, Glebys; Zhang, Lingsong; Rodgers, Richard; Wachs, Juan
Department: Neurological Surgery, School of Medicine
Abstract: Choosing adequate gestures for touchless interfaces is a challenging task that has a direct impact on human-computer interaction. Such gestures are commonly determined by the designer through ad-hoc, rule-based, or agreement-based methods. Previous approaches to assess agreement grouped the gestures into equivalence classes and ignored the integral properties that are shared between them. In this work, we propose a generalized framework that inherently incorporates the gesture descriptors into the agreement analysis (GDA). In contrast to previous approaches, we represent gestures using binary description vectors and allow them to be partially similar. In this context, we introduce a new metric referred to as Soft Agreement Rate (SAR) to measure the level of agreement and provide a mathematical justification for this metric. Further, we performed computational experiments to study the behavior of SAR and demonstrate that existing agreement metrics are a special case of our approach. Our method was evaluated and tested through a guessability study conducted with a group of neurosurgeons; nevertheless, our formulation can be applied to any other user-elicitation study. Results show that the level of agreement obtained by SAR is 2.64 times higher than the previous metrics. Finally, we show that our approach complements the existing agreement techniques by generating an artificial lexicon based on the most agreed properties.
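The abstract does not give the SAR formula, but the core idea (gestures as binary description vectors that may be partially similar) can be illustrated with a minimal sketch. The similarity measure below (Jaccard over binary descriptors) and the function names are assumptions for illustration only, not the paper's definition of SAR.

```python
# Hypothetical sketch: soft agreement over binary gesture descriptors.
# The actual SAR metric is defined in the paper; this is only a proxy.
import numpy as np

def pairwise_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Jaccard similarity between two binary descriptor vectors,
    used here as a stand-in for partial gesture similarity."""
    intersection = np.logical_and(a, b).sum()
    union = np.logical_or(a, b).sum()
    return float(intersection / union) if union else 1.0

def soft_agreement(proposals: list) -> float:
    """Average pairwise similarity of the gestures proposed by
    participants for a single referent in an elicitation study."""
    n = len(proposals)
    if n < 2:
        return 1.0
    sims = [pairwise_similarity(proposals[i], proposals[j])
            for i in range(n) for j in range(i + 1, n)]
    return float(np.mean(sims))

# Example: three participants describe a proposed gesture with five
# binary properties (e.g., one-handed, dynamic, circular trajectory, ...).
proposals = [np.array([1, 0, 1, 1, 0]),
             np.array([1, 0, 1, 0, 0]),
             np.array([1, 1, 1, 1, 0])]
print(f"soft agreement: {soft_agreement(proposals):.2f}")
```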
Item: Eye-Tracking Metrics Predict Perceived Workload in Robotic Surgical Skills Training (Sage, 2019)
Authors: Wu, Chuhao; Cha, Jackie; Sulek, Jay; Zhou, Tian; Sundaram, Chandru P.; Wachs, Juan; Yu, Denny
Department: Urology, School of Medicine
Abstract: Objective: The aim of this study is to assess the relationship between eye-tracking measures and perceived workload in robotic surgical tasks. Background: Robotic techniques provide improved dexterity, stereoscopic vision, and an ergonomic control system over laparoscopic surgery, but the complexity of the interfaces and operations may pose new challenges to surgeons and compromise patient safety. Limited studies have objectively quantified workload and its impact on performance in robotic surgery. Although not yet implemented in robotic surgery, minimally intrusive and continuous eye-tracking metrics have been shown to be sensitive to changes in workload in other domains. Methods: Eight surgical trainees participated in 15 robotic skills simulation sessions. In each session, participants performed up to 12 simulated exercises. Correlation and mixed-effects analyses were conducted to explore the relationships between eye-tracking metrics and perceived workload. Machine learning classifiers were used to determine the sensitivity of differentiating between low and high workload with eye-tracking features. Results: Gaze entropy increased as perceived workload increased, with a correlation of .51. Pupil diameter and gaze entropy distinguished differences in workload between task difficulty levels, and both metrics increased as task difficulty increased. The classification model using eye-tracking features achieved an accuracy of 84.7% in predicting workload levels. Conclusion: Eye-tracking measures can detect perceived workload during robotic tasks. They can potentially be used to identify task contributors to high workload and provide measures for robotic surgery training. Application: Workload assessment can be used for real-time monitoring of workload in robotic surgical training and provide assessments for performance and learning.

Item: From the Dexterous Surgical Skill to the Battlefield - A Robotics Exploratory Study (Oxford University Press, 2021)
Authors: Gonzalez, Glebys T.; Kaur, Upinder; Rahma, Masudur; Venkatesh, Vishnunandan; Sanchez, Natalia; Hager, Gregory; Xue, Yexiang; Voyles, Richard; Wachs, Juan
Department: Surgery, School of Medicine
Abstract: Introduction: Short response time is critical for future military medical operations in austere settings or remote areas. Such effective patient care at the point of injury can greatly benefit from the integration of semi-autonomous robotic systems. To achieve autonomy, robots would require massive libraries of maneuvers collected with the goal of training machine learning algorithms. Although this is attainable in controlled settings, obtaining surgical data in austere settings can be difficult. Hence, in this article, we present the Dexterous Surgical Skill (DESK) database for knowledge transfer between robots. The peg transfer task was selected as it is one of the six main tasks of laparoscopic training. In addition, we provide a machine learning framework to evaluate novel transfer learning methodologies on this database. Methods: A set of surgical gestures was collected for a peg transfer task, composed of seven atomic maneuvers referred to as surgemes. The collected Dexterous Surgical Skill dataset comprises a set of surgical robotic skills using four robotic platforms: Taurus II, simulated Taurus II, YuMi, and the da Vinci Research Kit. Then, we explored two different learning scenarios: no-transfer and domain-transfer. In the no-transfer scenario, the training and testing data were obtained from the same domain, whereas in the domain-transfer scenario, the training data are a blend of simulated and real robot data, which are tested on a real robot. Results: Using simulation data to train the learning algorithms enhances the performance on the real robot where limited or no real data are available. The transfer model showed an accuracy of 81% for the YuMi robot when the ratio of real-to-simulated data was 22% to 78%. For the Taurus II and the da Vinci, the model showed an accuracy of 97.5% and 93%, respectively, when training only with simulation data. Conclusions: The results indicate that simulation can be used to augment training data to enhance the performance of learned models in real scenarios. This shows potential for the future use of surgical data from the operating room in deployable surgical robots in remote areas.
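The domain-transfer scenario described in the DESK abstract (train on a blend of simulated and real robot data, test on real robot data) can be sketched as follows. The 22%-to-78% real-to-simulated ratio follows the abstract, but the features, labels, and classifier are placeholders, not the paper's framework.

```python
# Minimal sketch of the "domain-transfer" evaluation setup: train a surgeme
# classifier on a blend of simulated and real-robot data, test on held-out
# real-robot data. All data below are synthetic placeholders.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

# Placeholder kinematic features and surgeme labels (7 atomic maneuvers).
X_sim, y_sim = rng.normal(size=(780, 16)), rng.integers(0, 7, 780)
X_real, y_real = rng.normal(size=(300, 16)), rng.integers(0, 7, 300)

# Blend: all simulated data plus enough real data for a 22:78 real:sim ratio.
n_real_train = int(len(X_sim) * 22 / 78)
X_train = np.vstack([X_sim, X_real[:n_real_train]])
y_train = np.concatenate([y_sim, y_real[:n_real_train]])
X_test, y_test = X_real[n_real_train:], y_real[n_real_train:]

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)
print("real-robot accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```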
Item: Medical Telementoring Using an Augmented Reality Transparent Display (Elsevier, 2016-06)
Authors: Andersen, Daniel; Popescu, Voicu; Cabrera, Maria Eugenia; Shanghavi, Aditya; Gomez, Gerardo; Marley, Sherri; Mullis, Brian; Wachs, Juan
Department: IU School of Nursing
Abstract: Background: The goal of this study was to design and implement a novel surgical telementoring system called the System for Telementoring with Augmented Reality (STAR) that uses a virtual transparent display to convey precise locations in the operating field to a trainee surgeon. This system was compared with a conventional system based on a telestrator for surgical instruction. Methods: A telementoring system was developed and evaluated in a study that used a 1 × 2 between-subjects design with telementoring system (STAR or conventional) as the independent variable. The participants in the study were 20 premedical or medical students who had no prior experience with telementoring. Each participant completed a task of port placement and a task of abdominal incision under telementoring using either the STAR or the conventional system. The metrics used to test performance when using the system were placement error, number of focus shifts, and time to task completion. Results: When compared with the conventional system, participants using STAR completed the two tasks with less placement error (45% and 68%) and with fewer focus shifts (86% and 44%), but more slowly (19% for each task). Conclusions: Using STAR resulted in decreased annotation placement error and fewer focus shifts, but greater times to task completion. STAR placed virtual annotations directly onto the trainee surgeon's view of the operating field, conveying location with great accuracy; this technology helped avoid shifts in focus and decreased depth perception, and enabled fine-tuning the execution of the task to match telementored instruction, but led to greater times to task completion.

Item: Robust High-Level Video Stabilization for Effective AR Telementoring (IEEE, 2019-03)
Authors: Lin, Chengyuan; Rojas-Muñoz, Edgar; Cabrera, Maria Eugenia; Sanchez-Tamayo, Natalia; Andersen, Daniel; Popescu, Voicu; Noguera, Juan Antonio Barragan; Zarzaur, Ben; Murphy, Pat; Anderson, Kathryn; Douglas, Thomas; Griffis, Clare; Wachs, Juan
Department: Medicine, School of Medicine
Abstract: This poster presents the design, implementation, and evaluation of a method for robust high-level stabilization of the mentee's first-person video in augmented reality (AR) telementoring. This video is captured by the front-facing built-in camera of an AR headset and stabilized by rendering, from a stationary viewpoint, a planar proxy of the workspace that is projectively texture-mapped with the video feed. The result is stable, complete, up to date, continuous, distortion free, and rendered from the mentee's default viewpoint. The stabilization method was evaluated in two user studies, in the context of number matching and cricothyroidotomy training, respectively. Both showed a significant advantage of our method compared with unstabilized visualization.
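The planar-proxy idea behind the stabilization (re-rendering the moving headset view onto a fixed plane representing the workspace) can be approximated with a simple homography warp. The sketch below assumes tracked workspace landmarks and uses OpenCV; the actual system's capture, tracking, and rendering pipeline is not specified here.

```python
# Sketch: warp one headset frame onto a stationary planar proxy of the
# workspace using a homography estimated from tracked landmarks.
# Landmark detection and the AR headset capture loop are placeholders.
import cv2
import numpy as np

def stabilize_frame(frame, workspace_pts_px, proxy_pts_px, out_size=(1280, 720)):
    """Warp one frame so the workspace appears from a fixed viewpoint.

    workspace_pts_px: Nx2 (N >= 4) landmark coordinates in the moving frame.
    proxy_pts_px:     Nx2 coordinates of the same landmarks in the proxy view.
    """
    H, _ = cv2.findHomography(np.float32(workspace_pts_px),
                              np.float32(proxy_pts_px), cv2.RANSAC)
    return cv2.warpPerspective(frame, H, out_size)
```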
Item: Sensor-based indicators of performance changes between sessions during robotic surgery training (Elsevier, 2021)
Authors: Wu, Chuhao; Cha, Jackie; Sulek, Jay; Sundaram, Chandru P.; Wachs, Juan; Proctor, Robert W.; Yu, Denny
Department: Urology, School of Medicine
Abstract: Training of surgeons is essential for safe and effective use of robotic surgery, yet current assessment tools for learning progression are limited. The objective of this study was to measure changes in trainees' cognitive and behavioral states as they progressed through a robotic surgeon training curriculum at a medical institution. Seven surgical trainees in urology who had no formal robotic training experience participated in the simulation curriculum. They repeatedly performed 12 robotic skills exercises of varying difficulty in separate sessions. EEG (electroencephalogram) activity and eye movements were measured throughout to calculate three metrics: engagement index (an indicator of task engagement), pupil diameter (an indicator of mental workload), and gaze entropy (an indicator of randomness in the gaze pattern). Performance scores (completion of task goals) and mental workload ratings (NASA Task Load Index) were collected after each exercise, and changes in performance scores between training sessions were calculated. Analysis of variance, repeated measures correlation, and machine learning classification were used to diagnose how cognitive and behavioral states associate with performance increases or decreases between sessions. The changes in performance were correlated with changes in engagement index (r_rm = −.25, p < .001) and gaze entropy (r_rm = −.37, p < .001). Changes in cognitive and behavioral states were able to predict training outcomes with 72.5% accuracy. Findings suggest that cognitive and behavioral metrics correlate with changes in performance between sessions. These measures can complement current feedback tools used by medical educators and learners for skills assessment in robotic surgery training.
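The repeated measures correlation (r_rm) reported above accounts for each trainee contributing many paired session-to-session changes. A minimal sketch with the pingouin library is shown below; the data frame and column names are synthetic placeholders, not the study's variables.

```python
# Sketch: repeated-measures correlation between session-to-session changes in
# gaze entropy and changes in performance score. Data are synthetic.
import numpy as np
import pandas as pd
import pingouin as pg

rng = np.random.default_rng(1)
trainees = np.repeat([f"T{i}" for i in range(1, 8)], 10)  # 7 trainees x 10 deltas
d_entropy = rng.normal(size=len(trainees))
d_score = -0.4 * d_entropy + rng.normal(scale=0.9, size=len(trainees))

df = pd.DataFrame({"trainee": trainees,
                   "delta_gaze_entropy": d_entropy,
                   "delta_score": d_score})

# r_rm treats each trainee as their own control across repeated sessions.
print(pg.rm_corr(data=df, x="delta_gaze_entropy", y="delta_score",
                 subject="trainee"))
```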