Browsing by Subject "augmented reality"

Item: Augmented Reality Future Step Visualization for Robust Surgical Telementoring (Wolters Kluwer, 2019-02)
Andersen, Daniel S.; Cabrera, Maria E.; Rojas-Muñoz, Edgar J.; Popescu, Voicu S.; Gonzalez, Glebys T.; Mullis, Brian; Marley, Sherri; Zarzaur, Ben L.; Wachs, Juan P.; Surgery, School of Medicine
Introduction: Surgical telementoring connects expert mentors with trainees performing urgent care in austere environments. However, such environments impose unreliable network quality, with significant latency and low bandwidth. We have developed an augmented reality telementoring system that includes future step visualization of the medical procedure. Pregenerated video instructions of the procedure are dynamically overlaid onto the trainee's view of the operating field when the network connection with a mentor is unreliable.
Methods: Our future step visualization uses a tablet suspended above the patient's body, through which the trainee views the operating field. Before trainee use, an expert records a "future library" of step-by-step video footage of the operation. Videos are displayed to the trainee as semitransparent graphical overlays. We conducted a study in which participants completed a cricothyroidotomy under telementored guidance. Participants used one of two telementoring conditions: a conventional telestrator or our system with future step visualization. During the operation, the connection between trainee and mentor was bandwidth throttled. Recorded metrics were idle time ratio, recall error, and task performance.
Results: Participants in the future step visualization condition had a 48% smaller idle time ratio (14.5% vs. 27.9%, P < 0.001), 26% less recall error (119 vs. 161, P = 0.042), and 10% higher task performance scores (rater 1 = 90.83 vs. 81.88, P = 0.008; rater 2 = 88.54 vs. 79.17, P = 0.042) than participants in the telestrator condition.
Conclusions: Future step visualization in surgical telementoring is an important fallback mechanism when the trainee/mentor network connection is poor, and it is a key step towards semiautonomous and then completely mentor-free medical assistance systems.

Item: Augmented Reality in Medical Education and Training (Taylor & Francis, 2016)
Herron, Jennifer; Ruth Lilly Medical Library
Augmented reality, while not necessarily a new technology, is becoming better known and gaining momentum in medical education through Google Glass and Microsoft’s HoloLens. Not only can augmented reality aid in student education, but it can also impact patient care through its ability to enhance medical training. Medical libraries can take part in this new endeavor by being aware of augmented reality applications that can benefit students and educators.

Item: A Collaborative Augmented Reality Framework Based on Distributed Visual SLAM (IEEE, 2017-09)
Egodagamage, Ruwan; Tuceryan, Mihran; Computer and Information Science, School of Science
Visual Simultaneous Localization and Mapping (SLAM) has been used for markerless tracking in augmented reality applications. Distributed SLAM helps multiple agents collaboratively explore and build a global map of the environment while estimating their locations in it. One of the main challenges in distributed SLAM is identifying overlaps between the agents' local maps, especially when their initial relative positions are not known. We developed a collaborative AR framework with freely moving agents that have no knowledge of their initial relative positions. Each agent in our framework uses a camera as its only input device for the SLAM process, and the framework identifies map overlaps between agents using an appearance-based method.
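As a rough illustration of the appearance-based overlap check described in the abstract above, the following sketch compares local features from one agent's keyframe against another's and flags a candidate overlap when enough distinctive matches survive a ratio test. It uses OpenCV ORB features; the feature type, thresholds, and function names are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of appearance-based map-overlap detection between two SLAM
# agents (illustrative assumptions only; not the paper's code).
import cv2

def candidate_overlap(keyframe_a, keyframe_b, min_matches=40, ratio=0.75):
    """Return True if two grayscale keyframe images appear to show the same place."""
    orb = cv2.ORB_create(nfeatures=1000)
    kp_a, des_a = orb.detectAndCompute(keyframe_a, None)
    kp_b, des_b = orb.detectAndCompute(keyframe_b, None)
    if des_a is None or des_b is None:
        return False  # one of the keyframes has too little texture
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING)
    pairs = matcher.knnMatch(des_a, des_b, k=2)
    # Lowe's ratio test keeps only distinctive correspondences.
    good = [p[0] for p in pairs
            if len(p) == 2 and p[0].distance < ratio * p[1].distance]
    return len(good) >= min_matches
```

In a distributed setting, each agent would exchange compact keyframe descriptors rather than full images, and a confirmed overlap would seed a relative-pose estimate used to merge the agents' local maps.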

Item: Dynamic Illumination for Augmented Reality with Real-Time Interaction (IEEE, 2019-03)
Alhakamy, A’aeshah; Tuceryan, Mihran; Computer and Information Science, School of Science
Current augmented and mixed reality systems suffer from a lack of correct illumination modeling, in which virtual objects are rendered under the same lighting conditions as the real environment. While the entertainment industry has produced astonishing results in multiple media forms, that process is mostly accomplished offline. The illumination information extracted from the physical scene is used to interactively render the virtual objects, which results in more realistic output in real-time. In this paper, we present a method that detects the physical illumination of a dynamic scene and then uses the extracted illumination to render the virtual objects added to the scene. The method has three steps that are assumed to work concurrently in real-time. The first is the estimation of the direct illumination (incident light) from the physical scene using computer vision techniques through a 360° live-feed camera connected to the AR device. The second is the simulation of indirect illumination (reflected light) from real-world surfaces onto the rendered virtual objects, using region capture of 2D texture from the AR camera view. The third is defining the virtual objects with proper lighting and shadowing characteristics using shader language through multiple passes. Finally, we tested our work under multiple lighting conditions to evaluate the accuracy of the results, based on whether the shadows cast by the virtual objects are consistent with the shadows cast by the real objects, at a reduced performance cost.

Item: An Empirical Evaluation of the Performance of Real-Time Illumination Approaches: Realistic Scenes in Augmented Reality (Springer, 2019)
Alhakamy, A’aeshah; Tuceryan, Mihran; Computer and Information Science, School of Science
Although augmented, virtual, and mixed reality (AR/VR/MR) systems have been widely developed, and many of these applications have accomplished significant results, rendering a virtual object under the appropriate illumination model of the real environment is still under investigation. The entertainment industry has presented astounding outcomes in several media forms, albeit the rendering process has mostly been done offline. The physical scene contains the illumination information, which can be sampled and then used to render the virtual objects in real-time for a realistic scene. In this paper, we evaluate the accuracy of our previously and currently developed systems, which provide real-time dynamic illumination for coherent interactive augmented reality, based on the virtual object's appearance in association with the real world and related criteria. The system achieves this through three simultaneous aspects. (1) The first is to estimate the incident light angle in the real environment using a live-feed 360° camera instrumented on an AR device. (2) The second is to simulate the reflected light using two routes: (a) global cube map construction and (b) local sampling. (3) The third is to define the shading properties of the virtual object to depict the correct lighting assets and suitable shadowing imitation. Finally, the performance efficiency is examined for both routes of the system to reduce the overall cost. The results are also evaluated through shadow observation and a user study.
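The incident-light estimation step that both illumination papers above describe can be approximated with a small sketch: locate the brightest region of the 360° live feed (assumed here to be an equirectangular frame) and convert its centroid into a 3D light direction. The percentile threshold and coordinate conventions are illustrative assumptions; the papers' actual computer-vision pipeline is not reproduced here.

```python
# Sketch: estimate a dominant incident-light direction from a 360-degree
# equirectangular frame by finding the centroid of the brightest pixels.
# Threshold and coordinate conventions are illustrative assumptions.
import numpy as np

def incident_light_direction(equirect_rgb, percentile=99.5):
    """Return a unit 3D direction pointing toward the brightest scene region."""
    gray = equirect_rgb.astype(np.float32).mean(axis=2)
    h, w = gray.shape
    mask = gray >= np.percentile(gray, percentile)
    ys, xs = np.nonzero(mask)
    u, v = xs.mean(), ys.mean()          # pixel centroid of the bright region
    # Equirectangular mapping: u -> azimuth (longitude), v -> elevation (latitude).
    azimuth = (u / w) * 2.0 * np.pi - np.pi
    elevation = np.pi / 2.0 - (v / h) * np.pi
    d = np.array([np.cos(elevation) * np.sin(azimuth),
                  np.sin(elevation),
                  np.cos(elevation) * np.cos(azimuth)])
    return d / np.linalg.norm(d)
```

The returned direction could drive a directional light in the renderer, while the reflected-light (indirect) term described above would come from sampling texture regions of the AR camera view around the virtual object.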

Item: Heuristic Based Sensor Ranking Algorithm for Indoor Tracking Applications (Office of the Vice Chancellor for Research, 2013-04-05)
Rybarczyk, Ryan; Raje, Rajeev R.; Tuceryan, Mihran
Location awareness in an indoor setup is an important function in many application domains such as asset management, critical care, and augmented reality. Location awareness, or tracking, of an object within an indoor setting requires a high degree of accuracy, as room-to-room location may be very important. With the current proliferation of smart devices, which often carry a multitude of built-in sensors, and of other inexpensive sensors, it is now possible to build a network of sensors for tracking within an indoor environment without the high cost of installing dedicated tracking infrastructure. To increase accuracy as well as coverage area, various sensors may be used to track an object. In this heterogeneous tracking situation, it is important for the tracking infrastructure to decide quickly and accurately which of the available sensors to use, whether all of them or a subset. Challenges related to heterogeneous data fusion and clock synchronization must be addressed in order to provide accurate location estimates. We have proposed a heuristic-based ranking algorithm to address these challenges. In this algorithm, the individual sensors are ranked based upon their quality of service (QoS) attributes, and the resulting ranking is used by a filtering service during the sensor selection process. This information is provided to the filtering service when a sensor joins the tracking infrastructure and is subsequently updated only during idle periods, thereby avoiding additional overhead. We have implemented this algorithm in the existing prototypical Enhanced Distributed Object Tracking System (e-DOTS). e-DOTS has been evaluated extensively, and the results of these experiments validate the hypothesis that accurate indoor tracking can be achieved using a heterogeneous ensemble of cheap, mobile sensors. Our current investigation involves incorporating trust associated with sensors and deploying e-DOTS in a typical healthcare setup.
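A heuristic QoS-based ranking of the kind described above can be pictured with a short sketch. The attribute set (accuracy, latency, battery) and the weights below are assumptions chosen for illustration; the abstract does not specify which QoS attributes e-DOTS actually uses.

```python
# Sketch of a heuristic QoS-based sensor ranking. The attribute names and
# weights are illustrative assumptions, not the e-DOTS specification.
from dataclasses import dataclass

@dataclass
class SensorQoS:
    name: str
    accuracy_m: float   # expected positional error in meters (lower is better)
    latency_ms: float   # reporting latency in milliseconds (lower is better)
    battery: float      # remaining battery fraction in [0, 1] (higher is better)

def rank_sensors(sensors, w_acc=0.5, w_lat=0.3, w_bat=0.2):
    """Return sensors sorted best-first by a weighted heuristic score."""
    def score(s):
        # Convert "lower is better" attributes into "higher is better" terms.
        return (w_acc / (1.0 + s.accuracy_m)
                + w_lat / (1.0 + s.latency_ms / 100.0)
                + w_bat * s.battery)
    return sorted(sensors, key=score, reverse=True)

# Example: a filtering service would keep the top-k ranked sensors for a fix.
candidates = [SensorQoS("wifi-ap-3", 4.0, 120.0, 1.0),
              SensorQoS("ble-beacon-7", 1.5, 60.0, 0.4),
              SensorQoS("camera-2", 0.3, 200.0, 1.0)]
best_two = rank_sensors(candidates)[:2]
```

Consistent with the low-overhead policy described in the abstract, such a ranking would be recomputed only when a sensor joins the infrastructure or during idle periods.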

Item: “I Want to Experience the Past”: Lessons from a Visitor Survey on How Immersive Technologies Can Support Historic Interpretation (MDPI, 2021-01)
Ress, Stella A.; Cafaro, Francesco; Human-Centered Computing, School of Informatics and Computing
This paper utilizes a visitor survey conducted at an open-air museum in New Harmony, Indiana, to discuss design guidelines for immersive technologies that support historic interpretation, specifically the visitor's ability to experience the past. We focus on three themes that emerged from the survey: (1) Visitors at this site skewed older, with nearly a quarter over 70; (2) Despite literature suggesting the opposite, visitors at New Harmony liked to learn from a tour guide; and (3) Visitors said they wanted to “experience the past.” The very notion of a single “experience” of the past, however, is complicated at New Harmony and other historic sites because they interpret multiple periods of significance. Ultimately, our findings suggest that immersive technologies must be suited to older visitors, make use of the tour guide, and facilitate visitors' ability to “experience the past” in such a way that they feel immersed in multiple timelines at the same site.

Item: Robust High-Level Video Stabilization for Effective AR Telementoring (IEEE, 2019-03)
Lin, Chengyuan; Rojas-Muñoz, Edgar; Cabrera, Maria Eugenia; Sanchez-Tamayo, Natalia; Andersen, Daniel; Popescu, Voicu; Noguera, Juan Antonio Barragan; Zarzaur, Ben; Murphy, Pat; Anderson, Kathryn; Douglas, Thomas; Griffis, Clare; Wachs, Juan; Medicine, School of Medicine
This poster presents the design, implementation, and evaluation of a method for robust high-level stabilization of the mentee's first-person video in augmented reality (AR) telementoring. The video is captured by the front-facing built-in camera of an AR headset and stabilized by rendering, from a stationary view, a planar proxy of the workspace that is projectively texture-mapped with the video feed. The result is stable, complete, up to date, continuous, distortion free, and rendered from the mentee's default viewpoint. The stabilization method was evaluated in two user studies, in the context of number matching and of cricothyroidotomy training, respectively. Both showed a significant advantage of our method compared with unstabilized visualization.
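The planar-proxy stabilization described in the last abstract can be roughly imitated in 2D: estimate a homography from each headset frame to a stationary reference view of the (near-planar) workspace and warp the frame into that view. This OpenCV-based sketch is only a stand-in for the paper's approach of projectively texture-mapping the video onto a rendered planar proxy; the feature choice, match count, and RANSAC threshold are assumptions.

```python
# Sketch: stabilize a first-person video by warping each frame onto a
# stationary reference view of a (near-)planar workspace via a homography.
# Feature choice, match count, and RANSAC threshold are assumptions.
import cv2
import numpy as np

def stabilize_frame(frame, reference):
    """Warp a live frame into the viewpoint of a stationary reference image."""
    orb = cv2.ORB_create(nfeatures=2000)
    kp_r, des_r = orb.detectAndCompute(reference, None)
    kp_f, des_f = orb.detectAndCompute(frame, None)
    if des_r is None or des_f is None:
        return frame  # too little texture to stabilize; pass the frame through
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des_f, des_r), key=lambda m: m.distance)[:200]
    if len(matches) < 4:
        return frame  # a homography needs at least four correspondences
    src = np.float32([kp_f[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp_r[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    if H is None:
        return frame
    h, w = reference.shape[:2]
    # Re-render the feed as if seen from the stationary reference viewpoint.
    return cv2.warpPerspective(frame, H, (w, h))
```

Applied frame by frame, this keeps the workspace fixed on screen from the mentee's default viewpoint, which is the effect the stabilization method aims for, though the paper achieves it in 3D through the rendered proxy rather than a raw image warp.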