IU Indianapolis ScholarWorks

Browsing by Subject "camera"

Now showing 1 - 5 of 5
    Autonomous Embedded System Enabled 3-D Object Detector: (with Point Cloud and Camera)
    (IEEE, 2019-09) Katare, Dewant; El-Sharkawy, Mohamed; Electrical and Computer Engineering, School of Engineering and Technology
    An autonomous vehicle, or present-day smart vehicle, is equipped with several ADAS safety features such as Blind Spot Detection, Forward Collision Warning, Lane Departure and Parking Assistance, Surround View System, and Vehicular Communication Systems. Recent research substitutes deep learning algorithms for these traditional methods, using optimal sensors. This paper discusses the perception tasks related to autonomous vehicles, specifically the computer-vision approach of 3-D object detection, and proposes a model compatible with an embedded system using the RTMaps framework. The proposed model is based on two sensors, a camera and a LiDAR, connected to an autonomous embedded system; they provide the sensed inputs to a deep learning classifier, which estimates the position of physical objects and predicts 3-D bounding boxes around them. Frustum PointNet, a contemporary architecture for 3-D object detection, is used as the base model and is implemented with extended functionality. The architecture is trained and tested on the KITTI dataset and achieves competitive validation precision and accuracy. The presented model is deployed on the Bluebox 2.0 platform with the RTMaps Embedded framework.
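The abstract above describes a frustum-style pipeline: a 2-D camera detection selects the LiDAR points lying inside its viewing frustum, and a learned regressor then estimates a 3-D box from those points. A minimal Python sketch of that idea follows; the function names, the toy intrinsics, and the axis-aligned box standing in for the learned regressor are illustrative assumptions, not the paper's implementation.

import numpy as np

def crop_frustum(points_cam, box_2d, intrinsics):
    """Keep points (already in camera coordinates) whose image projection
    falls inside a 2-D detection box (x1, y1, x2, y2)."""
    x1, y1, x2, y2 = box_2d
    z = points_cam[:, 2]
    # Pinhole projection: u = fx*X/Z + cx, v = fy*Y/Z + cy.
    u = intrinsics[0, 0] * points_cam[:, 0] / z + intrinsics[0, 2]
    v = intrinsics[1, 1] * points_cam[:, 1] / z + intrinsics[1, 2]
    mask = (z > 0) & (u >= x1) & (u <= x2) & (v >= y1) & (v <= y2)
    return points_cam[mask]

def estimate_3d_box(frustum_points):
    """Placeholder for the learned box regressor: an axis-aligned box
    around the frustum points serves as a rough estimate here."""
    center = frustum_points.mean(axis=0)
    size = frustum_points.max(axis=0) - frustum_points.min(axis=0)
    return center, size

# Toy example: a small synthetic point cloud and one 2-D detection.
K = np.array([[721.5, 0.0, 609.6],
              [0.0, 721.5, 172.8],
              [0.0, 0.0, 1.0]])
points = np.random.uniform([-2, -1, 5], [2, 1, 15], size=(500, 3))
frustum = crop_frustum(points, box_2d=(500, 100, 700, 250), intrinsics=K)
if len(frustum):
    center, size = estimate_3d_box(frustum)
    print("box center:", center, "box size:", size)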
    CardioCam: Leveraging Camera on Mobile Devices to Verify Users While Their Heart is Pumping
    (ACM, 2019-05) Liu, Jian; Shi, Cong; Chen, Yingying; Liu, Hongbo; Gruteser, Marco; Computer and Information Science, School of Science
    With the increasing prevalence of mobile and IoT devices (e.g., smartphones, tablets, smart-home appliances), massive amounts of private and sensitive information are stored on these devices. To prevent unauthorized access, existing user verification solutions either rely on the complexity of user-defined secrets (e.g., passwords) or resort to specialized biometric sensors (e.g., fingerprint readers), but users may still suffer from various attacks, such as password theft, shoulder surfing, smudge, and forged-biometrics attacks. In this paper, we propose CardioCam, a low-cost, general, hard-to-forge user verification system leveraging the unique cardiac biometrics extracted from the readily available built-in cameras in mobile and IoT devices. We demonstrate that unique cardiac features can be extracted from the cardiac motion patterns in a fingertip pressed on the built-in camera. To mitigate the impact of varying ambient lighting conditions and human movements in practical scenarios, CardioCam develops a gradient-based technique to optimize the camera configuration and dynamically selects the most sensitive pixels in a camera frame to extract reliable cardiac motion patterns. Furthermore, morphological characteristic analysis is used to derive user-specific cardiac features, and a feature transformation scheme grounded in Principal Component Analysis (PCA) is developed to enhance the robustness of the cardiac biometrics for effective user verification. With the prototyped system, extensive experiments involving 25 subjects demonstrate that CardioCam can achieve effective and reliable user verification with over 99% average true positive rate (TPR) while keeping the false positive rate (FPR) as low as 4%.
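The PCA-based feature transformation mentioned in the abstract can be sketched as follows: per-beat waveform segments are projected into a low-dimensional space, and a probe is accepted if it falls close to the enrolled user's centroid. This is only an illustrative approximation of the general technique; the function names, the distance rule, and the threshold are assumptions, not CardioCam's actual scheme.

import numpy as np
from sklearn.decomposition import PCA

def extract_cardiac_features(beat_segments, n_components=8):
    """Fit PCA on per-beat waveform segments (one row per beat) and
    return the low-dimensional projections used as feature vectors."""
    pca = PCA(n_components=n_components)
    features = pca.fit_transform(beat_segments)
    return pca, features

def verify(pca, enrolled_features, probe_segment, threshold=2.0):
    """Accept the probe if its projection lies within an (assumed)
    Euclidean-distance threshold of the enrolled user's centroid."""
    probe = pca.transform(probe_segment.reshape(1, -1))[0]
    centroid = enrolled_features.mean(axis=0)
    return np.linalg.norm(probe - centroid) < threshold

# Toy example with synthetic per-beat segments of 100 samples each.
rng = np.random.default_rng(0)
enroll = rng.normal(size=(40, 100))
pca, feats = extract_cardiac_features(enroll)
print("accepted:", verify(pca, feats, rng.normal(size=100)))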
    Forward Collision Prediction with Online Visual Tracking
    (IEEE, 2019-09) Kollazhi Manghat, Surya; El-Sharkawy, Mohamed; Electrical and Computer Engineering, School of Engineering and Technology
    Safety is the key aspect when it comes to driving. Self-driving vehicles are equipped with driver-assistive technologies such as Adaptive Cruise Control, Forward Collision Warning (FCW), and Collision Mitigation by Braking (CMbB) to ensure safety while driving. This paper proposes a method that follows a lean implementation of multi-target tracking and 3-D bounding box detection without processing much visual information. Object tracking is an integral part of environment sensing; it enables the vehicle to estimate the surrounding objects' trajectories to accomplish motion planning. Advances in object detection methods greatly benefit the tracking-by-detection approach, leading to a less complex tracking methodology and a lower computational cost. Estimation based on a particle filter is added to precisely associate the tracklets with detections. The model estimates and plots bounding boxes for the objects in its camera range and predicts their 3-D positions in camera coordinates from monocular camera data, using deep learning combined with geometric constraints from the 2-D bounding box; the actual distance from the vehicle camera is then calculated. The model is evaluated on the KITTI car dataset.
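A core step of any tracking-by-detection pipeline like the one described above is associating predicted tracklet boxes with new detections. The sketch below shows one common way to do this (IoU cost plus Hungarian assignment); it is a generic illustration under assumed box formats, not the paper's particle-filter implementation.

import numpy as np
from scipy.optimize import linear_sum_assignment

def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def associate(tracklets, detections, min_iou=0.3):
    """Match predicted tracklet boxes to detections by maximizing IoU."""
    cost = np.array([[1.0 - iou(t, d) for d in detections] for t in tracklets])
    rows, cols = linear_sum_assignment(cost)
    return [(r, c) for r, c in zip(rows, cols) if 1.0 - cost[r, c] >= min_iou]

# Toy example: two tracklets and two detections in pixel coordinates.
tracks = [(100, 100, 150, 180), (300, 120, 360, 200)]
dets = [(305, 125, 362, 198), (98, 102, 148, 178)]
print(associate(tracks, dets))  # expected: [(0, 1), (1, 0)]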
    A Multi Sensor Real-time Tracking with LiDAR and Camera
    (IEEE, 2020-01) Kollazhi Manghat, Surya; El-Sharkawy, Mohamed; Electrical and Computer Engineering, School of Engineering and Technology
    Self-driving cars are equipped with various driver-assistive technologies (ADAS) such as Forward Collision Warning (FCW), Adaptive Cruise Control, and Collision Mitigation by Braking (CMbB) to ensure safety. Tracking plays an important role in ADAS systems for understanding a dynamic environment. This paper proposes a 3-D multi-target tracking method that follows a lean, detection-based implementation with real-time operation as the aim. Object tracking is an integral part of environment sensing; it enables the vehicle to estimate the surrounding objects' trajectories to accomplish motion planning. Advances in object detection methodologies greatly benefit the tracking-by-detection approach. The proposed method implements 2-D tracking on camera data and 3-D tracking on LiDAR point cloud data. The estimated states from the two sensors are fused to obtain a more optimal state of the objects present in the surroundings. The multi-object tracking performance is evaluated on the publicly available KITTI dataset.
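One simple way to fuse the per-sensor state estimates mentioned in the abstract is inverse-variance weighting, where the lower-variance sensor dominates the fused estimate. The sketch below illustrates that generic idea with assumed variances and positions; the paper's actual fusion scheme may differ.

import numpy as np

def fuse_states(state_a, var_a, state_b, var_b):
    """Inverse-variance weighted fusion of two independent estimates of
    the same object state; the lower-variance estimate gets more weight."""
    w_a, w_b = 1.0 / var_a, 1.0 / var_b
    fused = (w_a * state_a + w_b * state_b) / (w_a + w_b)
    fused_var = 1.0 / (w_a + w_b)
    return fused, fused_var

# Toy example: x/y position of one tracked object from each pipeline.
cam_xy = np.array([12.4, 3.1])    # from 2-D camera tracking (back-projected)
lidar_xy = np.array([12.1, 3.3])  # from 3-D LiDAR tracking
pos, var = fuse_states(cam_xy, 0.5, lidar_xy, 0.1)
print("fused position:", pos, "fused variance:", var)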
    Real-Time 3-D Segmentation on An Autonomous Embedded System: using Point Cloud and Camera
    (IEEE, 2019-07) Katare, Dewant; El-Sharkawy, Mohamed; Electrical and Computer Engineering, School of Engineering and Technology
    A present-day autonomous vehicle relies on several sensor technologies for its autonomous functionality. The sensors, based on their type and mounting location on the vehicle, can be categorized as line-of-sight and non-line-of-sight sensors and are responsible for different levels of autonomy. The line-of-sight sensors are used to execute actions related to localization, object detection, and complete environment understanding. Environment understanding for an autonomous vehicle can be achieved through segmentation. Several traditional and deep learning techniques providing semantic segmentation of camera input are already available; with advances in computing processors, the progression is toward deep learning applications replacing traditional methods. This paper presents an approach that combines camera and LiDAR input for semantic segmentation. The proposed model for outdoor scene segmentation is based on Frustum PointNet and ResNet; it utilizes the 3-D point cloud and camera input for 3-D bounding box prediction across moving and non-moving objects, and thus recognizes and understands the scene at the point-cloud or pixel level. For real-time application, the model is deployed on the RTMaps framework with Bluebox (an embedded platform for autonomous vehicles). The proposed architecture is trained with the Cityscapes and KITTI datasets.
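A common way to combine camera segmentation with a LiDAR point cloud, in the spirit of the approach above, is to project each point into the image and inherit the class label of the pixel it lands on. The sketch below illustrates this under assumed intrinsics and a dummy per-pixel class mask; it is not the paper's Frustum PointNet + ResNet model.

import numpy as np

def label_points(points_cam, seg_mask, intrinsics):
    """Assign each point (in camera coordinates) the class of the pixel it
    projects onto; points outside the image or behind the camera get -1."""
    h, w = seg_mask.shape
    z = points_cam[:, 2]
    u = np.round(intrinsics[0, 0] * points_cam[:, 0] / z + intrinsics[0, 2]).astype(int)
    v = np.round(intrinsics[1, 1] * points_cam[:, 1] / z + intrinsics[1, 2]).astype(int)
    labels = np.full(len(points_cam), -1, dtype=int)
    valid = (z > 0) & (u >= 0) & (u < w) & (v >= 0) & (v < h)
    labels[valid] = seg_mask[v[valid], u[valid]]
    return labels

# Toy example: random points and a dummy segmentation mask.
K = np.array([[700.0, 0.0, 640.0], [0.0, 700.0, 360.0], [0.0, 0.0, 1.0]])
pts = np.random.uniform([-5, -2, 4], [5, 2, 30], size=(1000, 3))
mask = np.random.randint(0, 19, size=(720, 1280))  # e.g. 19 Cityscapes classes
print(np.bincount(label_points(pts, mask, K) + 1)[:5])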