Sensor Fusion in Neural Networks For Object Detection

Advisor: El-Sharkawy, Mohamed
Author: Prasanna, Sheetal
Committee members: King, Brian; Rizkalla, Maher
Date issued: 2022-05
Degree: M.S.E.C.E., Electrical & Computer Engineering, Purdue University (2022)
Institution: Indiana University-Purdue University Indianapolis (IUPUI)
Abstract: Object detection is an increasingly popular tool in many fields, especially in the development of autonomous vehicles. The task of object detection involves localizing objects in an image, constructing a bounding box to determine the presence and location of each object, and classifying each object into its appropriate class. Object detection applications are commonly implemented using convolutional neural networks along with feature pyramid networks to extract features. Another technique commonly used in the automotive industry is sensor fusion. Each automotive sensor (camera, radar, and lidar) has its own advantages and disadvantages. Fusing two or more sensors and using the combined information is a popular method of balancing the strengths and weaknesses of each individual sensor, and sensor fusion within an object detection network has been found to be an effective way of obtaining accurate models. Accurate detection and classification of objects in images is a vital step in the development of autonomous, or self-driving, vehicles. Many studies have proposed methods to improve neural networks or object detection networks, including data augmentation and hyperparameter optimization. This thesis improves a camera and radar fusion network by applying techniques from these areas. Additionally, this research explores the novel idea of integrating a third sensor, the lidar, into an existing camera and radar fusion network. The models were trained on the nuScenes dataset, one of the largest automotive datasets available today. Using augmentation, hyperparameter optimization, sensor fusion, and annotation filters, the CRF-Net was trained to achieve an accuracy score 69.13% higher than the baseline.
URI: https://hdl.handle.net/1805/29162
DOI: http://dx.doi.org/10.7912/C2/2912
Language: en_US
Rights: Attribution 4.0 International (https://creativecommons.org/licenses/by/4.0)
Type: Thesis
Files

Original bundle:
- Prasanna_Purdue_University_Thesis_FINAL.pdf (13.13 MB, Adobe Portable Document Format)
License bundle:
- license.txt (1.99 KB, item-specific license agreed to upon submission)