On Evaluating Black-Box Explainable AI Methods for Enhancing Anomaly Detection in Autonomous Driving Systems

dc.contributor.author: Nazat, Sazid
dc.contributor.author: Arreche, Osvaldo
dc.contributor.author: Abdallah, Mustafa
dc.contributor.department: Electrical and Computer Engineering, Purdue School of Engineering and Technology
dc.date.accessioned: 2024-08-27T08:51:37Z
dc.date.available: 2024-08-27T08:51:37Z
dc.date.issued: 2024-05-29
dc.description.abstract: Recent advancements in autonomous driving come with the associated cybersecurity risk of compromised networks of autonomous vehicles (AVs), motivating the use of AI models for detecting anomalies on these networks. In this context, using explainable AI (XAI) to explain the behavior of these anomaly-detection AI models is crucial. This work introduces a comprehensive framework for assessing black-box XAI techniques for anomaly detection in AVs, enabling the examination of both global and local XAI methods that elucidate the decisions of AI models classifying anomalous AV behavior. Using six evaluation metrics (descriptive accuracy, sparsity, stability, efficiency, robustness, and completeness), the framework evaluates two well-known black-box XAI techniques, SHAP and LIME: each technique is first applied to identify the primary features crucial for anomaly classification, and extensive experiments then assess SHAP and LIME across the six metrics on two prevalent autonomous driving datasets, VeReMi and Sensor. This study advances the deployment of black-box XAI methods for real-world anomaly detection in autonomous driving systems, contributing valuable insights into the strengths and limitations of current black-box XAI methods within this critical domain.
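The abstract describes applying LIME to identify the features most influential in an anomaly classifier's decisions. As a minimal sketch of that idea (not the paper's code), the following fits a locally weighted linear surrogate around a single instance, using only NumPy; the `scorer` function is a hypothetical toy anomaly model standing in for a real AV anomaly detector, and all parameter values are illustrative assumptions.

```python
import numpy as np

def lime_style_explanation(predict_fn, x, n_samples=500, kernel_width=0.75, seed=0):
    """Fit a locally weighted linear surrogate around instance x.

    predict_fn maps an (n, d) array to anomaly scores in [0, 1].
    Returns per-feature coefficients: larger magnitude means more
    influence on the model's decision near x.
    """
    rng = np.random.default_rng(seed)
    d = x.shape[0]
    # Perturb the instance with Gaussian noise around x.
    Z = x + rng.normal(scale=0.5, size=(n_samples, d))
    y = predict_fn(Z)
    # Proximity weights: perturbations closer to x count more.
    dist = np.linalg.norm(Z - x, axis=1)
    w = np.exp(-(dist ** 2) / (kernel_width ** 2))
    # Weighted least squares for the local linear surrogate (with intercept).
    Zb = np.hstack([Z, np.ones((n_samples, 1))])
    sw = np.sqrt(w)
    coef, *_ = np.linalg.lstsq(Zb * sw[:, None], y * sw, rcond=None)
    return coef[:-1]  # drop the intercept term

# Hypothetical toy anomaly scorer: feature 0 dominates, feature 2 is irrelevant.
def scorer(Z):
    return 1.0 / (1.0 + np.exp(-(3.0 * Z[:, 0] + 0.5 * Z[:, 1])))

phi = lime_style_explanation(scorer, np.array([0.2, -0.1, 0.4]))
ranking = np.argsort(-np.abs(phi))  # features ordered by local influence
```

With this toy scorer, feature 0 is ranked as the most influential feature and the irrelevant feature 2 receives a near-zero coefficient, mirroring how the paper's framework uses XAI-derived feature rankings as the input to its six evaluation metrics.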
dc.eprint.version: Final published version
dc.identifier.citation: Nazat S, Arreche O, Abdallah M. On Evaluating Black-Box Explainable AI Methods for Enhancing Anomaly Detection in Autonomous Driving Systems. Sensors (Basel). 2024;24(11):3515. Published 2024 May 29. doi:10.3390/s24113515
dc.identifier.uri: https://hdl.handle.net/1805/42963
dc.language.iso: en_US
dc.publisher: MDPI
dc.relation.isversionof: 10.3390/s24113515
dc.relation.journal: Sensors
dc.rights: Attribution 4.0 International
dc.rights.uri: https://creativecommons.org/licenses/by/4.0
dc.source: PMC
dc.subject: Anomaly detection
dc.subject: Autonomous driving
dc.subject: Explainable AI
dc.subject: Shapley additive explanations
dc.subject: LIME
dc.subject: Feature extraction
dc.subject: VeReMi dataset
dc.title: On Evaluating Black-Box Explainable AI Methods for Enhancing Anomaly Detection in Autonomous Driving Systems
dc.type: Article
Files

Original bundle
Name: Nazat2024Evaluating-CCBY.pdf
Size: 2.44 MB
Format: Adobe Portable Document Format

License bundle
Name: license.txt
Size: 2.04 KB
Format: Item-specific license agreed upon to submission