Authors: Kaur, Davinder; Uslu, Suleyman; Durresi, Arjan; Badve, Sunil; Dundar, Murat
Date Available: 2023-02-07
Date Issued: 2021-06
Citation: Kaur, D., Uslu, S., Durresi, A., Badve, S., & Dundar, M. (2021). Trustworthy Explainability Acceptance: A New Metric to Measure the Trustworthiness of Interpretable AI Medical Diagnostic Systems. In L. Barolli, K. Yim, & T. Enokido (Eds.), Complex, Intelligent and Software Intensive Systems (pp. 35–46). Springer International Publishing. https://doi.org/10.1007/978-3-030-79725-6_4
Handle: https://hdl.handle.net/1805/31168
Abstract: We propose the Trustworthy Explainability Acceptance metric to evaluate explainable AI systems with experts in the loop. Our metric calculates acceptance by quantifying the distance between the explanations generated by the AI system and the reasoning provided by the experts, based on their expertise and experience. Our metric also evaluates the trust of the experts, using our trust mechanism, so that different groups of experts can be included. Our metric can be easily adapted to any interpretable AI system and used in the standardization process of trustworthy AI systems. We illustrate the proposed metric on a high-stakes medical AI application: predicting Ductal Carcinoma in Situ (DCIS) recurrence. Our metric successfully captures experts' acceptance of the AI system's explanations for DCIS recurrence.
Language: en
Rights: Publisher Policy
Keywords: AI systems; trust; medical diagnostic systems
Title: Trustworthy Explainability Acceptance: A New Metric to Measure the Trustworthiness of Interpretable AI Medical Diagnostic Systems
Type: Conference proceedings
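The abstract describes acceptance as a distance between an AI system's explanation and expert reasoning, aggregated with expert trust scores. The sketch below is a hypothetical illustration of that idea only — the function names, the choice of normalized Euclidean distance, and the trust-weighted average are all assumptions, not the authors' actual formulas from the paper.

```python
import math

def acceptance(ai_weights, expert_weights):
    """Acceptance as 1 minus a normalized Euclidean distance between an AI
    explanation vector and one expert's reasoning vector (both assumed to be
    feature-importance scores in [0, 1]). Hypothetical sketch, not the
    paper's exact metric."""
    d = math.sqrt(sum((a - e) ** 2 for a, e in zip(ai_weights, expert_weights)))
    max_d = math.sqrt(len(ai_weights))  # largest possible distance for [0, 1] weights
    return 1.0 - d / max_d

def trust_weighted_acceptance(ai_weights, experts):
    """Aggregate acceptance over several experts, weighting each expert's
    score by a trust value in (0, 1] (assumed form of the trust mechanism)."""
    total_trust = sum(trust for _, trust in experts)
    return sum(trust * acceptance(ai_weights, weights)
               for weights, trust in experts) / total_trust

# Example: two experts with trust 0.8 and 0.2; the first agrees exactly
# with the AI explanation, the second disagrees completely.
score = trust_weighted_acceptance([1.0, 0.0],
                                  [([1.0, 0.0], 0.8), ([0.0, 1.0], 0.2)])
```

Under this toy setup, perfect agreement yields an acceptance of 1.0 and maximal disagreement yields 0.0, so the trust-weighted score above works out to 0.8 — dominated by the more-trusted expert, which matches the abstract's goal of folding expert trust into the acceptance measure.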