
Browsing by Subject "Reinforcement learning"

Now showing 1 - 5 of 5
  • Augmented Reality-Assisted Deep Reinforcement Learning-Based Model towards Industrial Training and Maintenance for NanoDrop Spectrophotometer
    (MDPI, 2023-06-29) Alatawi, Hibah; Albalawi, Nouf; Shahata, Ghadah; Aljohani, Khulud; Alhakamy, A’aeshah; Tuceryan, Mihran; Computer and Information Science, School of Science
    The use of augmented reality (AR) technology is growing in the maintenance industry because it can improve efficiency and reduce costs by providing real-time guidance and instruction to workers during repair and maintenance tasks. AR can also assist with equipment training and visualization, allowing users to explore the equipment’s internal structure and size. The adoption of AR in maintenance is expected to increase as hardware options expand and development costs decrease. To implement AR for job aids in mobile applications, 3D spatial information and equipment details must be addressed and calibrated using image-based or object-based tracking, which is essential for integrating 3D models with physical components. The present paper proposes a system that uses an AR-assisted deep reinforcement learning (RL)-based model for NanoDrop Spectrophotometer training and maintenance, suitable for rapid repair procedures in the Industry 4.0 (I4.0) setting. The system uses a camera to detect the target asset via feature matching, tracking techniques, and 3D modeling. Once detection is complete, AR technologies generate clear and easily understandable instructions on the maintenance operator’s device. According to the research findings, the model’s target technique resulted in a mean reward of 1.000 and a standard deviation of 0.000, meaning that all rewards obtained in the given task or environment were exactly the same; a reward standard deviation of 0.000 indicates no variability in the outcomes.
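As a minimal illustration of the reported evaluation metrics, the sketch below shows how a mean reward of 1.000 with a standard deviation of 0.000 is typically computed: the trained policy is rolled out for a number of evaluation episodes and the per-episode returns are aggregated. The `policy` and `env` objects here are hypothetical stand-ins, not the authors' AR-assisted environment or agent.

```python
# Minimal sketch (not the paper's code) of aggregating per-episode returns
# from a trained policy; a std of 0.000 means every episode earned the same return.
import numpy as np

def evaluate(policy, env, n_episodes=100):
    returns = []
    for _ in range(n_episodes):
        obs, done, total = env.reset(), False, 0.0
        while not done:
            obs, reward, done = env.step(policy(obs))  # hypothetical 3-tuple step API
            total += reward
        returns.append(total)
    returns = np.asarray(returns)
    return returns.mean(), returns.std()

class _AlwaysSolvedEnv:                        # toy stand-in: one step, reward 1, done
    def reset(self): return 0
    def step(self, action): return 0, 1.0, True

print(evaluate(lambda obs: 0, _AlwaysSolvedEnv()))   # -> (1.0, 0.0)
```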
  • Childhood neglect is associated with alterations in neural prediction error signaling and the response to novelty
    (Cambridge University Press, 2024-10-24) Aloi, Joseph; Crum, Kathleen I.; Blair, Karina S.; Zhang, Ru; Bashford-Largo, Johannah; Bajaj, Sahil; Hwang, Soonjo; Averbeck, Bruno B.; Tottenham, Nim; Dobbertin, Matthew; Blair, R. James R.; Psychiatry, School of Medicine
    Background: One in eight children experience early life stress (ELS), which increases risk for psychopathology. ELS, particularly neglect, has been associated with reduced responsivity to reward. However, little work has investigated the computational specifics of this disrupted reward response - particularly with respect to the neural response to Reward Prediction Errors (RPE), a critical signal for successful instrumental learning - and the extent to which these responses are augmented for novel stimuli. The goal of the current study was to investigate the associations of abuse and neglect with the neural representation of RPE to novel and non-novel stimuli. Methods: One hundred and seventy-eight participants (aged 10-18, M = 14.9, s.d. = 2.38) engaged in the Novelty task while undergoing functional magnetic resonance imaging. In this task, participants learn to choose novel or non-novel stimuli to win monetary rewards varying from $0 to $0.30 per trial. Levels of abuse and neglect were measured using the Childhood Trauma Questionnaire. Results: Adolescents exposed to high levels of neglect showed reduced RPE-modulated blood oxygenation level dependent response within medial and lateral frontal cortices, particularly when exploring novel stimuli (p < 0.05, corrected for multiple comparisons), relative to adolescents exposed to lower levels of neglect. Conclusions: These data expand on previous work by indicating that neglect, but not abuse, is associated with impairments in neural RPE representation within medial and lateral frontal cortices. However, there was no association between neglect and behavioral impairments on the Novelty task, suggesting that these neural differences do not necessarily translate into behavioral differences within the context of the Novelty task.
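For readers unfamiliar with the term, the reward prediction error (RPE) referenced above is commonly formalized as the difference between the obtained and the expected reward. The sketch below shows a simple Rescorla-Wagner-style update using the task's $0 to $0.30 reward range; the learning rate and trial sequence are illustrative assumptions, not the study's fitted parameters.

```python
# Minimal sketch of per-trial reward prediction errors (RPEs) under a
# Rescorla-Wagner / temporal-difference style value update.
def rpe_updates(rewards, alpha=0.1):
    """Return the per-trial RPEs for a learned value estimate V."""
    V, rpes = 0.0, []
    for r in rewards:
        delta = r - V          # RPE: obtained reward minus expected reward
        V += alpha * delta     # value estimate moves a fraction alpha toward r
        rpes.append(delta)
    return rpes

# e.g. a hypothetical run of $0.30 wins and $0 misses on novel-stimulus trials
print(rpe_updates([0.30, 0.30, 0.0, 0.30, 0.0]))
```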
  • Multi-criteria decision making using reinforcement learning and its application to food, energy, and water systems (FEWS) problem
    (2021-12) Deshpande, Aishwarya; Mukhopadhyay, Snehasis; Tuceryan, Mihran; Xia, Yuni
    Multi-criteria decision making (MCDM) methods have evolved over the past several decades. In today’s world of rapidly growing industries, MCDM has proven to be significant in many application areas. In this study, a decision-making model is devised using reinforcement learning to solve multi-criteria optimization problems. A learning automata algorithm is used to identify an optimal solution in the presence of single and multiple environments (criteria) using Pareto optimality. The application of this model is also discussed, where it provides an optimal solution to the food, energy, and water systems (FEWS) problem.
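The abstract does not specify which learning automata update rule is used, so the sketch below assumes the standard linear reward-inaction (L_R-I) scheme in a single environment; the learning rate and toy reward probabilities are illustrative, and the actual FEWS criteria are not modeled.

```python
# Minimal sketch of a linear reward-inaction (L_R-I) learning automaton:
# on a favorable response the chosen action's probability grows, otherwise
# the probabilities are left unchanged.
import random

def l_ri_automaton(n_actions, environment, steps=10_000, a=0.01):
    p = [1.0 / n_actions] * n_actions              # action probabilities
    for _ in range(steps):
        action = random.choices(range(n_actions), weights=p)[0]
        beta = environment(action)                  # 1 = favorable, 0 = unfavorable
        if beta == 1:                               # reward: shift mass toward action
            p = [pj + a * (1 - pj) if j == action else pj * (1 - a)
                 for j, pj in enumerate(p)]
    return p

# Toy single-criterion environment: action 2 is rewarded most often.
probs = l_ri_automaton(4, lambda act: int(random.random() < [0.2, 0.4, 0.8, 0.3][act]))
print(probs)   # probability mass should concentrate on action 2
```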
  • OpenGraphGym: A Parallel Reinforcement Learning Framework for Graph Optimization Problems
    (Springer, 2020-06-15) Zheng, Weijian; Wang, Dali; Song, Fengguang; Krzhizhanovskaya, Valeria V.; Závodszky, Gábor; Lees, Michael H.; Dongarra, Jack J.; Sloot, Peter M. A.; Brissos, Sérgio; Teixeira, João; Computer and Information Science, School of Science
    This paper presents an open-source, parallel AI environment (named OpenGraphGym) to facilitate the application of reinforcement learning (RL) algorithms to combinatorial graph optimization problems. The environment incorporates a basic deep reinforcement learning method and several graph embeddings to capture graph features; it also allows users to rapidly plug in and test new RL algorithms and graph embeddings for graph optimization problems. This new open-source RL framework is targeted at achieving both high performance and high quality of the computed graph solutions. The framework forms the foundation of several ongoing research directions, including (1) benchmark work on different RL algorithms and embedding methods for classic graph problems; (2) advanced parallel strategies for extreme-scale graph computations; and (3) performance evaluation on real-world graph solutions.
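The abstract does not show OpenGraphGym's actual API, so the sketch below is a hypothetical gym-style environment for one classic graph problem (minimum vertex cover); it only illustrates the state/action/reward loop that a graph-RL framework of this kind wraps around a combinatorial problem. The class and method names are assumptions, not the library's interface.

```python
# Hypothetical gym-style environment for minimum vertex cover: the state is the
# set of still-uncovered edges, an action adds one node to the cover, and each
# added node costs -1 so that smaller covers earn higher total reward.
class VertexCoverEnv:
    def __init__(self, edges):
        self.edges = set(map(frozenset, edges))
        self.reset()

    def reset(self):
        self.cover, self.uncovered = set(), set(self.edges)
        return self.uncovered

    def step(self, node):
        self.cover.add(node)
        self.uncovered = {e for e in self.uncovered if node not in e}
        return self.uncovered, -1.0, not self.uncovered

# A trivial policy for illustration: pick any endpoint of any uncovered edge.
env = VertexCoverEnv([(0, 1), (1, 2), (2, 3)])
state, done = env.reset(), False
while not done:
    action = next(iter(next(iter(state))))
    state, reward, done = env.step(action)
print(sorted(env.cover))
```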
  • Predicting Attention Shaping Response in People with Schizophrenia
    (Wolters Kluwer, 2021) Beaudette, Danielle M.; Gold, James M.; Waltz, James; Thompson, Judy L.; Cherneski, Lindsay; Martin, Victoria; Monteiro, Brian; Cruz, Lisa N.; Silverstein, Steven M.; Psychology, School of Science
    People with schizophrenia often experience attentional impairments that hinder learning during psychological interventions. Attention shaping is a behavioral technique that improves attentiveness in this population. Because reinforcement learning (RL) is thought to be the mechanism by which attention shaping operates, we investigated whether pre-shaping RL performance predicted the level of response to attention shaping in people with schizophrenia. Contrary to hypotheses, a steeper attentiveness growth curve was predicted by less intact pretreatment RL ability and lower baseline attentiveness, which together accounted for 59% of the variance. Moreover, baseline attentiveness accounted for over 13 times more variance in response to attention shaping than did RL ability. These results suggest that attention shaping is most effective for lower-functioning patients, and that those high in RL ability may already be close to ceiling in their response to reinforcers. Attention shaping may not be a primarily RL-driven intervention, and other mechanisms of its effects should be considered.