Browsing by Author "Alhakamy, A’aeshah"
Item: Augmented Reality-Assisted Deep Reinforcement Learning-Based Model towards Industrial Training and Maintenance for NanoDrop Spectrophotometer (MDPI, 2023-06-29)
Alatawi, Hibah; Albalawi, Nouf; Shahata, Ghadah; Aljohani, Khulud; Alhakamy, A’aeshah; Tuceryan, Mihran
Computer and Information Science, School of Science
The use of augmented reality (AR) technology is growing in the maintenance industry because it can improve efficiency and reduce costs by providing real-time guidance and instruction to workers during repair and maintenance tasks. AR can also assist with equipment training and visualization, allowing users to explore the equipment’s internal structure and size. The adoption of AR in maintenance is expected to increase as hardware options expand and development costs decrease. To implement AR for job aids in mobile applications, 3D spatial information and equipment details must be addressed and calibrated using image-based or object-based tracking, which is essential for integrating 3D models with physical components. The present paper proposes an AR-assisted deep reinforcement learning (RL)-based system for NanoDrop Spectrophotometer training and maintenance that can be used for rapid repair procedures in the Industry 4.0 (I4.0) setting. The system uses a camera to detect the target asset via feature matching, tracking techniques, and 3D modeling. Once detection is complete, AR technologies generate clear and easily understandable instructions for the maintenance operator’s device. According to the research findings, the model’s target technique achieved a mean reward of 1.000 with a standard deviation of 0.000, indicating that every reward obtained in the given task or environment was identical and that there was no variability in the outcomes.

Item: Designing embodied interactions for informal learning: two open research challenges (ACM, 2019-06)
Cafaro, Francesco; Trajkova, Milka; Alhakamy, A’aeshah
Human-Centered Computing, School of Informatics and Computing
Interactive installations that are controlled with gestures and body movements have been widely used in museums due to their tremendous educational potential. The design of such systems, however, remains problematic. In this paper, we reflect on two open research challenges that we observed when crafting a Kinect-based prototype installation for data exploration at a science museum: (1) making the user aware that the system is interactive; and (2) increasing the discoverability of hand gestures and body movements.

Item: Dynamic Illumination for Augmented Reality with Real-Time Interaction (IEEE, 2019-03)
Alhakamy, A’aeshah; Tuceryan, Mihran
Computer and Information Science, School of Science
Current augmented and mixed reality systems suffer from a lack of correct illumination modeling, in which the virtual objects would be rendered under the same lighting conditions as the real environment. While the entertainment industry has produced astonishing results in multiple media forms, the procedure is mostly accomplished offline. The illumination information extracted from the physical scene is used to interactively render the virtual objects, which results in more realistic output in real time. In this paper, we present a method that detects the physical illumination in a dynamic scene and then uses the extracted illumination to render the virtual objects added to the scene. The method has three steps that are assumed to work concurrently in real time. The first is the estimation of the direct illumination (incident light) from the physical scene using computer vision techniques through a 360° live-feed camera connected to the AR device. The second is the simulation of indirect illumination (reflected light) from real-world surfaces onto the rendered virtual objects, using region capture of 2D textures from the AR camera view. The third is defining the virtual objects with proper lighting and shadowing characteristics using a shader language through multiple passes. Finally, we tested our work under multiple lighting conditions, evaluating the accuracy of the results based on whether the shadows cast by the virtual objects were consistent with the shadows cast by the real objects, at a reduced performance cost.
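As a rough illustration of the first step above (estimating incident light from the 360° live feed), here is a minimal sketch in Python, assuming an equirectangular frame and a brightest-region heuristic; the function name and the heuristic are illustrative assumptions, not the paper's actual pipeline.

```python
# Illustrative only: find a dominant incident-light direction in an
# equirectangular 360-degree frame by locating its brightest region.
import cv2
import numpy as np

def estimate_incident_light(equirect_bgr):
    """Return (azimuth, elevation) in radians for the brightest region."""
    gray = cv2.cvtColor(equirect_bgr, cv2.COLOR_BGR2GRAY)
    gray = cv2.GaussianBlur(gray, (51, 51), 0)   # smooth out sensor noise
    _, _, _, max_loc = cv2.minMaxLoc(gray)       # location of peak brightness
    h, w = gray.shape
    azimuth = (max_loc[0] / w) * 2.0 * np.pi - np.pi     # longitude, [-pi, pi]
    elevation = np.pi / 2.0 - (max_loc[1] / h) * np.pi   # latitude, [-pi/2, pi/2]
    return azimuth, elevation
```

The recovered angles could then drive a directional light on the rendering side, which is the role the estimated incident light plays in the method described above.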
Item: An Empirical Evaluation of the Performance of Real-Time Illumination Approaches: Realistic Scenes in Augmented Reality (Springer, 2019)
Alhakamy, A’aeshah; Tuceryan, Mihran
Computer and Information Science, School of Science
Although Augmented, Virtual, and Mixed Reality (AR/VR/MR) systems have been widely developed and many of these applications have accomplished significant results, rendering a virtual object under the appropriate illumination model of the real environment is still under investigation. The entertainment industry has presented astounding outcomes in several media forms, albeit with a rendering process that has mostly been done offline. The physical scene contains the illumination information, which can be sampled and then used to render the virtual objects in real time for a realistic scene. In this paper, we evaluate the accuracy of our previously and currently developed systems, which provide real-time dynamic illumination for coherent interactive augmented reality, based on the virtual object’s appearance in association with the real world and related criteria. The system achieves this through three simultaneous aspects. (1) The first is to estimate the incident light angle in the real environment using a live-feed 360° camera instrumented on an AR device. (2) The second is to simulate the reflected light using two routes: (a) global cube map construction and (b) local sampling. (3) The third is to define the shading properties for the virtual object to depict the correct lighting assets and suitable shadowing imitation. Finally, the performance efficiency is examined for both routes of the system to reduce the overall cost. The results are also evaluated through shadow observation and a user study.

Item: Exploring Casual COVID-19 Data Visualizations on Twitter: Topics and Challenges (MDPI, 2020-09)
Trajkova, Milka; Alhakamy, A’aeshah; Cafaro, Francesco; Vedak, Sanika; Mallappa, Rashmi; Kankara, Sreekanth R.
Human-Centered Computing, School of Informatics and Computing
Social networking sites such as Twitter have been a popular choice for people to express their opinions, report real-life events, and provide a perspective on what is happening around the world. In the outbreak of the COVID-19 pandemic, people have used Twitter to spontaneously share data visualizations from news outlets and government agencies and to post casual data visualizations that they individually crafted. We conducted a Twitter crawl of 5409 visualizations (from the period between 14 April 2020 and 9 May 2020) to capture what people are posting. Our study explores what people are posting, what they retweet the most, and the challenges that may arise when interpreting COVID-19 data visualizations on Twitter. Our findings show that multiple factors, such as the source of the data, who created the chart (individual vs. organization), the type of visualization, and the variables on the chart, influence the retweet count of the original post. We identify and discuss five challenges that arise when interpreting these casual data visualizations, and we discuss recommendations that Twitter users should consider when designing COVID-19 data visualizations to facilitate data interpretation and to avoid the spread of misconceptions and confusion.

Item: Image Denoising Using A Generative Adversarial Network (IEEE, 2019-03)
Alsaiari, Abeer; Rustagi, Ridhi; Alhakamy, A’aeshah; Thomas, Manu Mathew; Forbes, Angus G.
Computer and Information Science, School of Science
Animation studios render 3D scenes using a technique called path tracing, which enables them to create high-quality photorealistic frames. Path tracing involves shooting thousands of rays into a pixel at random (Monte Carlo); the rays hit objects in the scene and, based on the reflective properties of those objects, are reflected, refracted, or absorbed. The colors returned by these rays are averaged to determine the color of the pixel, and this process is repeated for all pixels. Due to the computational complexity, it might take 8-16 hours to render a single frame. We implemented a neural network-based solution using a generative adversarial network (GAN) that, once the network is trained, reduces the time it takes to render a frame to less than a second. The main idea behind the proposed method is to render the image using a much smaller number of samples per pixel than is normal for path tracing (e.g., 1, 4, or 8 samples instead of, say, 32,000 samples) and then pass the noisy, incompletely rendered image to our network, which is capable of generating a high-quality photorealistic image.
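To make the sampling idea above concrete, here is a minimal sketch of the per-pixel Monte Carlo average the abstract describes; `trace_ray` is a hypothetical stand-in for a full path tracer, and the denoising step is indicated only schematically, since the trained GAN itself is not reproduced here.

```python
# Minimal sketch: a pixel's color is the average of the colors returned by
# randomly jittered rays. At 1-8 samples per pixel the estimate is fast but
# noisy, which is the kind of input a trained denoiser then cleans up.
import numpy as np

def render_pixel(trace_ray, samples_per_pixel):
    """Monte Carlo estimate of one pixel's color."""
    colors = [trace_ray(np.random.uniform(0.0, 1.0, size=2))  # sub-pixel jitter
              for _ in range(samples_per_pixel)]
    return np.mean(colors, axis=0)

# noisy = render every pixel at a low sample count (e.g., 8 spp)
# clean = generator(noisy)  # trained GAN maps the noisy render to a clean frame
```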
Item: Move Your Body: Engaging Museum Visitors with Human-Data Interaction (ACM, 2020-04)
Trajkova, Milka; Alhakamy, A’aeshah; Cafaro, Francesco; Mallappa, Rashmi; Kankara, Sreekanth R.
Human-Centered Computing, School of Informatics and Computing
Museums have embraced embodied interaction: its novelty generates buzz and excitement among their patrons, and it has enormous educational potential. Human-Data Interaction (HDI) is a class of embodied interactions that enables people to explore large sets of data using interactive visualizations that users control with gestures and body movements. In museums, however, HDI installations have no utility if visitors do not engage with them. In this paper, we present a quasi-experimental study that investigates how different ways of representing the user ("mode type") next to a data visualization alter the way in which people engage with an HDI system. We consider four mode types: avatar, skeleton, camera overlay, and control. Our findings indicate that the mode type affects the number of visitors who interact with the installation, the gestures that people make, and the amount of time that visitors spend observing the data on display and interacting with the system.
Item: Polarization-Based Illumination Detection for Coherent Augmented Reality Scene Rendering in Dynamic Environments (Springer, 2019)
Alhakamy, A’aeshah; Tuceryan, Mihran
Computer and Information Science, School of Science
Integrating a virtual object into the real world in a perceptually coherent manner, using the physical illumination information of the current environment, is still an open problem. Several researchers have investigated the problem and produced high-quality results; however, pre-computation and the offline availability of resources were the essential assumptions on which their systems relied. In this paper, we propose a novel and robust approach that identifies the incident light in the scene using the polarization properties of the light wave and uses this information to produce a visually coherent augmented reality within a dynamic environment. This approach is part of a complete system with three simultaneous components that run in real time: (i) the detection of the incident light angle, (ii) the estimation of the reflected light, and (iii) the creation of the shading properties required to provide any virtual object with the detected lighting, reflected shadows, and adequate materials. Finally, the system performance is analyzed, where our approach reduces the overall computational cost.
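As a hedged sketch of how polarization can reveal illumination, the snippet below fits the standard transmitted-intensity model I(theta) = a + b*cos(2*(theta - phi)) to captures taken through a linear polarizer at known orientations; the capture setup and function are assumptions for illustration, not the system described above.

```python
# Illustrative only: recover the angle of linear polarization phi from
# intensities measured through a polarizer at known angles, via a linear
# least-squares fit of I(theta) = a + b*cos(2*(theta - phi)).
import numpy as np

def angle_of_polarization(angles_rad, intensities):
    """Return the dominant polarization angle phi in radians."""
    # Expand b*cos(2(theta - phi)) = c1*cos(2 theta) + c2*sin(2 theta),
    # so the model is linear in (a, c1, c2).
    A = np.column_stack([np.ones_like(angles_rad),
                         np.cos(2.0 * angles_rad),
                         np.sin(2.0 * angles_rad)])
    a, c1, c2 = np.linalg.lstsq(A, np.asarray(intensities), rcond=None)[0]
    return 0.5 * np.arctan2(c2, c1)

# Example with four captures at 0, 45, 90, and 135 degrees:
angles = np.deg2rad([0.0, 45.0, 90.0, 135.0])
phi = angle_of_polarization(angles, [3.0, 2.0, 1.0, 2.0])  # synthetic intensities
```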