Browsing by Subject "Eye tracking"
Item: Machine Learning Classification of Facial Affect Recognition Deficits after Traumatic Brain Injury for Informing Rehabilitation Needs and Progress (2020-12)
Iffat Naz, Syeda; Christopher, Lauren; King, Brian; Neumann, Dawn

A common impairment after a traumatic brain injury (TBI) is a deficit in emotion recognition, such as making inferences about others' intentions. Some researchers have found these impairments in 39% of the TBI population. Much of the information needed to make inferences about emotions and mental states comes from visually presented, nonverbal cues (e.g., facial expressions or gestures). Theory of mind (ToM) deficits after TBI are partially explained by impaired visual attention to, and processing of, these important cues. This research found that patients with deficits in visual processing differ from healthy controls (HCs). Furthermore, we found that visual processing problems can be identified from eye-tracking data collected with industry-standard eye-tracking hardware and software. We predicted that the eye-tracking data of the overall population would correlate with TASIT test scores. The visual processing of impaired participants (those who answered at least one TASIT question incorrectly) and unimpaired participants (those who answered all TASIT questions correctly) differs significantly. We divided the eye-tracking data into 3-second time blocks of time-series data to detect the individual blocks most salient to the TASIT score. Our preliminary results suggest that we can predict impairment across the whole population using eye-tracking data, improving the F1 score from 0.54 to 0.73.
For this, we developed optimized support vector machine (SVM) and random forest (RF) classifiers.

Item: Sensor-based indicators of performance changes between sessions during robotic surgery training (Elsevier, 2021)
Wu, Chuhao; Cha, Jackie; Sulek, Jay; Sundaram, Chandru P.; Wachs, Juan; Proctor, Robert W.; Yu, Denny; Urology, School of Medicine

Training of surgeons is essential for the safe and effective use of robotic surgery, yet current assessment tools for learning progression are limited. The objective of this study was to measure changes in trainees' cognitive and behavioral states as they progressed through a robotic surgeon training curriculum at a medical institution. Seven surgical trainees in urology with no formal robotic training experience participated in the simulation curriculum. They repeatedly performed 12 robotic skills exercises of varying difficulty in separate sessions. EEG (electroencephalogram) activity and eye movements were measured throughout to calculate three metrics: engagement index (an indicator of task engagement), pupil diameter (an indicator of mental workload), and gaze entropy (an indicator of randomness in gaze patterns). Performance scores (completion of task goals) and mental workload ratings (NASA Task Load Index) were collected after each exercise. Changes in performance scores between training sessions were calculated. Analysis of variance, repeated-measures correlation, and machine learning classification were used to assess how cognitive and behavioral states associate with performance increases or decreases between sessions. Changes in performance were correlated with changes in engagement index (rrm = −.25, p < .001) and gaze entropy (rrm = −.37, p < .001). Changes in cognitive and behavioral states predicted training outcomes with 72.5% accuracy. Findings suggest that cognitive and behavioral metrics correlate with changes in performance between sessions.
These measures can complement current feedback tools used by medical educators and learners for skills assessment in robotic surgery training.
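Both abstracts rely on derived eye-tracking metrics: the first study splits gaze data into 3-second time blocks, and the second uses gaze entropy as an indicator of randomness in gaze patterns. As a rough illustration only (not the authors' code), here is a minimal Python sketch of computing stationary gaze entropy over 3-second windows, assuming normalized 0-1 screen coordinates and a hypothetical 8×8 spatial grid:

```python
import numpy as np

def gaze_entropy(x, y, bins=8):
    """Shannon entropy (bits) of the spatial distribution of gaze points.

    Higher values indicate more dispersed (more random) gaze patterns.
    The 8x8 grid and 0-1 coordinate range are illustrative assumptions.
    """
    hist, _, _ = np.histogram2d(x, y, bins=bins, range=[[0, 1], [0, 1]])
    p = hist.ravel() / hist.sum()
    p = p[p > 0]  # drop empty cells; 0 * log(0) is treated as 0
    return float(-np.sum(p * np.log2(p)))

def windowed_entropy(x, y, t, window=3.0, bins=8):
    """Split a gaze trace into fixed-length time blocks and score each one."""
    scores = []
    start, end = t[0], t[-1]
    while start < end:
        m = (t >= start) & (t < start + window)
        if m.sum() > 1:
            scores.append(gaze_entropy(x[m], y[m], bins=bins))
        start += window
    return scores

# Synthetic 60 Hz trace: 6 s of gaze tightly clustered near screen center
rng = np.random.default_rng(0)
t = np.arange(0, 6, 1 / 60)
x = np.clip(rng.normal(0.5, 0.05, t.size), 0, 1)
y = np.clip(rng.normal(0.5, 0.05, t.size), 0, 1)
print(windowed_entropy(x, y, t))  # one entropy value per 3-second block
```

Published gaze-entropy definitions vary (e.g., stationary vs. transition entropy), so the binning and normalization here are stand-ins; per-block values like these could then serve as features for the SVM/RF classifiers the first abstract describes.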