IU Indianapolis ScholarWorks
  • Communities & Collections
  • Browse ScholarWorks
  • English
  • Català
  • Čeština
  • Deutsch
  • Español
  • Français
  • Gàidhlig
  • Italiano
  • Latviešu
  • Magyar
  • Nederlands
  • Polski
  • Português
  • Português do Brasil
  • Suomi
  • Svenska
  • Türkçe
  • Tiếng Việt
  • Қазақ
  • বাংলা
  • हिंदी
  • Ελληνικά
  • Yкраї́нська
  • Log In
    or
    New user? Click here to register.Have you forgotten your password?
  1. Home
  2. Browse by Subject

Browsing by Subject "explainability"

Now showing 1 - 1 of 1
    Global Translation of Classification Models
    (MDPI, 2022-05-11) Al-Merri, Mohammad; Ben Miled, Zina; Electrical and Computer Engineering, School of Engineering and Technology
The widespread and growing use of machine learning models, particularly in critical areas such as law, underscores the need for global interpretability. Models that cannot be audited are vulnerable to biases inherited from the datasets used to develop them, and locally interpretable models are vulnerable to adversarial attacks. To address these issues, the present paper proposes a new methodology that can translate any existing machine learning model into a globally interpretable one. The proposed model, MTRE-PAN, is a hybrid SVM-decision tree architecture that leverages the interpretability of linear hyperplanes by creating a set of polygons that delimit the decision boundaries of the target model. Moreover, the present paper introduces two new metrics, certain model parity and boundary model parity, which can be used to accurately evaluate the performance of the interpretable model near the decision boundaries. These metrics are used to compare MTRE-PAN to a previously proposed interpretable architecture called TRE-PAN. Like TRE-PAN, MTRE-PAN aims to provide global interpretability. The comparisons are performed on target models developed using three benchmark datasets: Abalone, Census, and Diabetes. The results show that MTRE-PAN generates interpretable models with fewer leaves and higher agreement with the target models, especially around the most important regions of the feature space, namely the decision boundaries.
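The general recipe the abstract describes, approximating an opaque target model with a tree of linear hyperplanes and then measuring agreement (parity) separately near the decision boundary, can be illustrated with a toy surrogate. The sketch below is only an assumption-laden illustration of that idea, not the paper's MTRE-PAN algorithm: the `build_hyperplane_tree` and `Node` names, the recursion scheme, and the probability-based boundary proxy are inventions for this example, using scikit-learn's `LinearSVC` for the hyperplanes and an MLP as the target model.

```python
# Hedged sketch: recursively fit linear separators to the TARGET MODEL's
# predictions so the resulting tree of half-spaces approximates its decision
# boundary. Illustrative only; not the MTRE-PAN algorithm from the paper.
import numpy as np
from sklearn.datasets import make_moons
from sklearn.neural_network import MLPClassifier
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)

class Node:
    def __init__(self, label=None, svm=None, left=None, right=None):
        self.label, self.svm, self.left, self.right = label, svm, left, right

def build_hyperplane_tree(X, y_target, depth, max_depth=4, min_samples=20):
    """Recursively split X with hyperplanes fit to the target model's labels
    (not the ground truth); leaves store the majority target label."""
    majority = int(np.round(y_target.mean()))
    if depth == max_depth or len(X) < min_samples or len(set(y_target)) == 1:
        return Node(label=majority)
    svm = LinearSVC(dual=False, max_iter=5000).fit(X, y_target)
    side = svm.decision_function(X) >= 0
    if side.all() or (~side).all():  # hyperplane failed to split this region
        return Node(label=majority)
    return Node(svm=svm,
                left=build_hyperplane_tree(X[~side], y_target[~side],
                                           depth + 1, max_depth, min_samples),
                right=build_hyperplane_tree(X[side], y_target[side],
                                            depth + 1, max_depth, min_samples))

def predict(node, x):
    # Descend through hyperplane tests until a leaf label is reached.
    while node.label is None:
        node = node.right if node.svm.decision_function([x])[0] >= 0 else node.left
    return node.label

# Opaque target model that the surrogate must "translate".
X, y = make_moons(n_samples=1000, noise=0.2, random_state=0)
target = MLPClassifier(hidden_layer_sizes=(32, 32), max_iter=2000,
                       random_state=0).fit(X, y)

tree = build_hyperplane_tree(X, target.predict(X), depth=0)

# Parity = agreement between surrogate and target on fresh samples; points
# with target probability near 0.5 serve as a crude boundary-region proxy.
X_test = rng.uniform(X.min(0), X.max(0), size=(5000, 2))
y_hat_target = target.predict(X_test)
y_hat_tree = np.array([predict(tree, x) for x in X_test])
near_boundary = np.abs(target.predict_proba(X_test)[:, 1] - 0.5) < 0.1

print("overall parity :", (y_hat_tree == y_hat_target).mean())
print("boundary parity:", (y_hat_tree[near_boundary]
                           == y_hat_target[near_boundary]).mean())
```

Reporting parity separately on near-boundary samples mirrors the motivation the abstract gives for boundary-aware metrics: overall agreement can look high even when the surrogate misplaces the boundary, because most of the feature space lies far from it.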