Browsing by Subject "Augmented Reality"
Now showing 1 - 4 of 4
Item
e-DTS 2.0: A Next-Generation of a Distributed Tracking System (2012-03-20)
Rybarczyk, Ryan Thomas; Raje, Rajeev; Tuceryan, Mihran; Linos, Panos

A key component in tracking is identifying relevant data and combining it to produce an accurate estimate of both the location and the orientation of an object marker as it moves through an environment. This thesis proposes an enhancement to an existing tracking system, the enhanced distributed tracking system (e-DTS), in the form of e-DTS 2.0, provides an empirical analysis of these enhancements, and offers suggestions for future improvements. When a camera identifies an object within its frame of view, it communicates with a JINI-based service to expose this information to any client who wishes to consume it. This communication uses the JINI Multicast Lookup Protocol to support dynamic discovery of sensors as they are added to or removed from the environment during tracking. The client can then retrieve this information from the service and apply a fusion technique to estimate the marker's current location with respect to a given coordinate system. The coordinate-system handoff and transformation is a key component of the e-DTS 2.0 tracking process, as it improves the agility of the system.

Item
Extraction and Integration of Physical Illumination in Dynamic Augmented Reality Environments (2020-12)
Alhakamy, A'aeshah A.; Tuceryan, Mihran; Fang, Shiaofen; Zheng, Jiang Yu; Mukhopadhyay, Snehasis

Although current augmented, virtual, and mixed reality (AR/VR/MR) systems offer advanced and immersive experiences in the entertainment industry across countless media forms, they lack correct direct and indirect illumination modeling, in which virtual objects are rendered under the same lighting conditions as the real environment. Some systems rely on baked GI, pre-recorded textures, and light probes, mostly produced offline, to stand in for real-time global illumination (GI). Instead, illumination information can be extracted from the physical scene and used to interactively render virtual objects into the real world, producing a more realistic final scene in real time. This work approaches the problem of visual coherence in AR by proposing a system that detects the real-world lighting conditions in dynamic scenes, then uses the extracted illumination information to render the objects added to the scene.

The system comprises several major components. First, incident light (direct illumination) is detected in the physical scene with computer vision techniques based on the topological structural analysis of 2D images, using a live-feed 360-degree camera mounted on the AR device to capture the entire radiance map (a rough sketch of this step follows below). Physics-based light polarization also eliminates or reduces false-positive lights such as white surfaces, reflections, or glare, which negatively affect the light detection process. Second, the reflected light (indirect illumination) that bounces between real-world surfaces is simulated and rendered onto the virtual objects, reflecting those surfaces' presence in the virtual world. Third, the shading characteristics and properties of each virtual object are defined to depict the correct lighting, with suitable shadow casting.
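As a rough illustration of the incident-light detection step described above, here is a minimal Python sketch assuming OpenCV and a single equirectangular frame from the 360-degree camera. The function name, intensity threshold, and area cutoff are illustrative assumptions, not values from the dissertation; OpenCV's findContours does, however, implement the kind of topological structural analysis of binarized 2D images the abstract refers to.

```python
import cv2
import numpy as np

def detect_light_sources(frame_bgr, intensity_thresh=240, min_area=25.0):
    """Locate bright regions in an equirectangular radiance-map frame."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    # Keep only near-saturated pixels as light-source candidates.
    _, mask = cv2.threshold(gray, intensity_thresh, 255, cv2.THRESH_BINARY)
    # findContours implements Suzuki's topological structural analysis.
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    h, w = gray.shape
    lights = []
    for c in contours:
        area = cv2.contourArea(c)
        if area < min_area:            # discard speckles and sensor noise
            continue
        m = cv2.moments(c)
        cx, cy = m["m10"] / m["m00"], m["m01"] / m["m00"]
        # Map pixel coordinates on the equirectangular image to
        # spherical angles, giving each light a direction in the scene.
        azimuth = (cx / w) * 2.0 * np.pi - np.pi
        elevation = np.pi / 2.0 - (cy / h) * np.pi
        lights.append((azimuth, elevation, area))
    return lights
```

In a pipeline like the one described in this abstract, a polarization-based filtering pass would precede this step to suppress glare and white-surface false positives.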
Fourth, geometric properties of the real scene, including plane detection, 3D surface reconstruction, and simple meshing, are incorporated into the virtual scene for more realistic depth interactions between real and virtual objects. These components are developed as methods that work simultaneously in real time for photo-realistic AR. The system is tested under several lighting conditions, with accuracy evaluated from the error between real and virtual objects' cast shadows and interactions. For system efficiency, rendering time is compared against previous work, and a user study provides further evaluation of human perception. The overall performance of the system is investigated to keep its cost to a minimum.

Item
Innovative mixed reality advanced manufacturing environment with haptic feedback (2018-07-13)
Satterwhite, Jesse C.; Ben-Miled, Zina; El-Mounayri, Hazim; Rogers, Christian; Wasfy, Tamer

In immersive eLearning environments, incorporating haptic feedback has been demonstrated to improve the software's pedagogical effectiveness. Because of this, and thanks to recent advancements in virtual reality (VR) and mixed reality (MR) environments, more immersive, authentic, and viable pedagogical tools have been created. However, the advanced manufacturing industry has not fully embraced mixed reality training tools, and there is currently a need for effective haptic feedback techniques in advanced manufacturing environments. The MR-AVML, a proposed CNC milling machine training tool, is designed to include two forms of haptic feedback, thereby providing users with a natural and intuitive experience: users run a virtual machine seen through the Microsoft HoloLens while interacting with a physical representation of the machine controller. A pedagogical study found the MR-AVML to be 6.06% more effective than a version of the environment with no haptic feedback, and only 1.35% less effective than hands-on training led by an instructor. This shows that including haptic feedback in an advanced manufacturing training environment can improve pedagogical effectiveness.

Item
Registration and Localization of Unknown Moving Objects in Markerless Monocular SLAM (2023-05)
Troutman, Blake; Tuceryan, Mihran; Fang, Shiaofen; Tsechpenakis, Gavriil; Hu, Qin

Simultaneous localization and mapping (SLAM) is a general device-localization technique that uses real-time sensor measurements to develop a virtualization of the sensor's environment while also using this growing virtualization to determine the sensor's position and orientation. This is useful for augmented reality (AR), in which a user looks through a head-mounted display (HMD) or viewfinder to see virtual components integrated into the real world. Visual SLAM (i.e., SLAM in which the sensor is an optical camera) is used in AR to determine exact device/headset movement so that the virtual components can be accurately redrawn to the screen, matching the perceived motion of the world around the user as the user moves the device/headset. However, many potential AR applications need access to more than device localization data to be useful; they may need to leverage environment data as well. Additionally, most SLAM solutions make the naive assumption that the environment surrounding the system is completely static (non-moving).
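To make the per-frame camera-localization step of visual SLAM concrete, here is a minimal, hypothetical Python sketch using OpenCV's solvePnPRansac on already-matched 3D map points and their 2D image observations. The placeholder data and camera intrinsics are illustrative assumptions, not values from the dissertation.

```python
import cv2
import numpy as np

# Placeholder correspondences: in a real system these come from
# matching map-point descriptors to keypoints in the current frame.
object_points = np.random.rand(50, 3).astype(np.float32)       # 3D map points
image_points = np.random.rand(50, 2).astype(np.float32) * 480  # 2D observations

# Illustrative pinhole intrinsics for a 640x480 camera.
K = np.array([[525.0,   0.0, 320.0],
              [  0.0, 525.0, 240.0],
              [  0.0,   0.0,   1.0]], dtype=np.float32)

# RANSAC-based PnP estimates the camera pose while rejecting outlier
# matches -- for example, features that sit on moving objects.
ok, rvec, tvec, inliers = cv2.solvePnPRansac(
    object_points, image_points, K, None, reprojectionError=3.0)

if ok:
    R, _ = cv2.Rodrigues(rvec)      # rotation vector -> 3x3 rotation matrix
    camera_center = -R.T @ tvec     # camera position in world coordinates
```

The RANSAC inlier test already hints at the problem discussed next: features on moving objects violate the static-world model and tend to surface as outliers.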
Given these circumstances, AR stands to benefit substantially from a SLAM solution that detects objects that move in the scene and ultimately provides localization data for each of these objects. This is known as the dynamic SLAM problem. Current attempts to address it often use machine learning to develop models that identify the parts of the camera image belonging to one of many classes of potentially moving objects. The limitation of these approaches is that it is impractical to train models to identify every possible object that moves; additionally, some potentially moving objects may be static in the scene, which these approaches often do not account for. Other attempts at the dynamic SLAM problem also localize the moving objects they detect, but these systems almost always rely on depth sensors or stereo camera configurations, which have significant limitations in real-world use cases. This dissertation presents a novel approach for registering and localizing unknown moving objects in the context of markerless, monocular, keyframe-based SLAM, with no prior information required about object structure, appearance, or existence. The work also details a novel deep learning solution for determining SLAM map initialization suitability in structure-from-motion-based initialization approaches. The dissertation validates these approaches by implementing them in a markerless, monocular SLAM system called LUMO-SLAM, built from the ground up to demonstrate this approach to unknown moving object registration and localization. Results collected for the LUMO-SLAM system address the accuracy of its camera localization estimates, the accuracy of its moving object localization estimates, and the consistency with which it registers moving objects in the scene. These results show that this solution to the dynamic SLAM problem, though not a practical solution for every use case, can register and localize unknown moving objects accurately enough to be useful for some applications of AR without compromising the system's ability to perform accurate camera localization.
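As a generic illustration of one core idea behind detecting unknown moving objects (and explicitly not the dissertation's LUMO-SLAM algorithm), the sketch below flags map points whose reprojection error under the estimated camera pose is large; persistently high error is a common cue that a point lies on a moving object. All names and thresholds here are hypothetical.

```python
import numpy as np

def flag_moving_points(points_w, observations_px, R, t, K, err_thresh_px=4.0):
    """Flag 3D map points that are inconsistent with a static world.

    points_w:        (N, 3) map points in world coordinates
    observations_px: (N, 2) matched 2D observations in the current frame
    R, t:            estimated camera rotation (3x3) and translation (3,)
    K:               3x3 camera intrinsic matrix
    """
    # Project each world point into the current frame.
    cam = (R @ points_w.T).T + t          # world -> camera coordinates
    proj = (K @ cam.T).T
    proj = proj[:, :2] / proj[:, 2:3]     # perspective divide -> pixels

    # A large reprojection error means the static-world model cannot
    # explain this observation; the point may lie on a moving object.
    err = np.linalg.norm(proj - observations_px, axis=1)
    return err > err_thresh_px
```

In practice, a dynamic SLAM system would accumulate such evidence over multiple frames and cluster the flagged points into per-object groups before estimating each object's own trajectory, which is the registration-and-localization problem this dissertation addresses.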