Browsing by Author "Kaur, Davinder" (7 items)
Control Theoretical Modeling of Trust-Based Decision Making in Food-Energy-Water Management (Springer, 2021)
Authors: Uslu, Suleyman; Kaur, Davinder; Rivera, Samuel J.; Durresi, Arjan; Babbar-Sebens, Meghna; Tilt, Jenna H. (Computer and Information Science, School of Science)
Abstract: We propose a hybrid Human-Machine decision-making approach to manage Food-Energy-Water resources. In our system, trust among human actors during decision making is measured and managed. Furthermore, such trust is used to pressure human actors to choose among the solutions generated by algorithms that satisfy the community's preferred trade-offs among various objectives. We model the trust-based loops in decision making using control theory; in this system, the feedback signal is the trust pressure that actors receive from their peers. Using control theory, we studied the dynamics of an actor's trust and then modeled the change in solution distances. In both scenarios, we also calculated the settling times, expressed as a number of rounds, and the stability using the transfer functions and their Z-transforms, to show whether and when the decision making is finalized.
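
The settling-time calculation mentioned in this abstract can be pictured with a toy first-order loop. The sketch below is a minimal illustration under assumed dynamics, not the authors' model: the update rule T[k+1] = T[k] + gain * (pressure - T[k]), the gain, and the 2% tolerance are all assumptions introduced here.

```python
# Minimal sketch (not the paper's model): a first-order, discrete-time trust
# loop in which an actor's trust moves toward the peer trust pressure each
# negotiation round. The update rule, gain, and tolerance are assumptions
# chosen only to illustrate a settling time measured in rounds.
import math

def simulate_trust(t0: float, pressure: float, gain: float, rounds: int) -> list[float]:
    """Iterate T[k+1] = T[k] + gain * (pressure - T[k]) for a number of rounds."""
    trust = [t0]
    for _ in range(rounds):
        trust.append(trust[-1] + gain * (pressure - trust[-1]))
    return trust

def settling_rounds(gain: float, tolerance: float = 0.02) -> int:
    """Rounds until the error |T[k] - pressure| falls below `tolerance` times the
    initial error; the error of this loop decays as (1 - gain)**k, so
    k = ceil(log(tolerance) / log(|1 - gain|)). Stable for 0 < gain < 2."""
    return math.ceil(math.log(tolerance) / math.log(abs(1.0 - gain)))

if __name__ == "__main__":
    trajectory = simulate_trust(t0=0.2, pressure=0.8, gain=0.35, rounds=15)
    print(f"trust after 15 rounds ~ {trajectory[-1]:.3f}")        # approaches 0.8
    print(f"settling time ~ {settling_rounds(gain=0.35)} rounds")  # about 10 rounds
```

With a gain of 0.35, for instance, the error shrinks by a factor of 0.65 per round, so the loop settles to within 2% of the peer-pressure value in roughly ten rounds; stability corresponds to the pole of the Z-domain transfer function lying inside the unit circle.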

Requirements for Trustworthy Artificial Intelligence – A Review (Springer, 2021)
Authors: Kaur, Davinder; Uslu, Suleyman; Durresi, Arjan (Computer and Information Science, School of Science)
Abstract: The field of algorithmic decision-making, particularly Artificial Intelligence (AI), has been changing drastically. With the availability of massive amounts of data and an increase in processing power, AI systems are being used in a vast number of high-stakes applications, so it becomes vital to make these systems reliable and trustworthy. Different approaches have been proposed to make these systems trustworthy. In this paper, we review these approaches and summarize them based on the principles proposed by the European Union for trustworthy AI. This review provides an overview of the different principles that are important for making AI trustworthy.

Trustability for Resilient Internet of Things Services on 5G Multiple Access Edge Cloud Computing (MDPI, 2022-12-16)
Authors: Uslu, Suleyman; Kaur, Davinder; Durresi, Mimoza; Durresi, Arjan (Computer and Information Science, School of Science)
Abstract: Billions of Internet of Things (IoT) devices and sensors are expected to be supported by fifth-generation (5G) wireless cellular networks. This highly connected structure is predicted to attract new and unseen types of attacks on devices, sensors, and networks, which require advanced mitigation strategies and active monitoring of system components. Therefore, a paradigm shift is needed, from traditional prevention and detection approaches toward resilience. This study proposes a trust-based defense framework to ensure resilient IoT services on 5G multi-access edge computing (MEC) systems. The framework is built on the trustability metric, an extension of the concept of reliability that measures how much a system can be trusted to maintain a given level of performance under a specific successful attack vector. Furthermore, trustability is traded off against system cost to measure the net utility of the system. Systems using multiple sensors with different levels of redundancy were tested, and the framework was shown to measure the trustability of the entire system. Different types of attacks were then simulated on an edge cloud with multiple nodes, and the trustability was compared against the capabilities of dynamically adding nodes for redundancy and removing untrusted nodes. Finally, the defense framework measured the net utility of the service, comparing two types of edge clouds, with and without the node deactivation capability. Overall, the proposed defense framework based on trustability ensures a satisfactory level of resilience for IoT on 5G MEC systems, as a trade-off against an accepted cost of redundant resources under various attacks.

Trustworthy Acceptance: A New Metric for Trustworthy Artificial Intelligence Used in Decision Making in Food–Energy–Water Sectors (Springer, 2021-04)
Authors: Barolli, Leonard; Woungang, Isaac; Enokido, Tomoya; Uslu, Suleyman; Kaur, Davinder; Rivera, Samuel J.; Durresi, Arjan; Durresi, Mimoza; Babbar-Sebens, Meghna (Computer and Information Science, School of Science)
Abstract: We propose, for the first time, a trustworthy acceptance metric and its measurement methodology to evaluate the trustworthiness of AI-based systems used in decision making in Food-Energy-Water (FEW) management. The proposed metric is a significant step forward in the standardization process of AI systems: it is essential to standardize the trustworthiness of AI systems, but until now standardization efforts have remained at the level of high-level principles. The measurement methodology of the proposed metric includes human experts in the loop and is based on our trust management system. Our metric captures and quantifies the transparent evaluation of the system by field experts on as many control points as the users find desirable. We illustrate the trustworthy acceptance metric and its measurement methodology using AI in decision-making scenarios in the Food-Energy-Water sectors; however, the proposed metric and its methodology can be easily adapted to other fields of AI application. We show that our metric successfully captures the aggregated acceptance of any number of experts, can be used to take multiple measurements at various points of the system, and provides confidence values for the measured acceptance.
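
A rough picture of the expert-in-the-loop aggregation described in this abstract: the sketch below combines the acceptance values given by several experts using trust weights and reports a dispersion-based confidence alongside the aggregate. It is a minimal illustration under stated assumptions, not the published formula; the class names and the confidence definition are assumptions introduced here.

```python
# Minimal sketch (assumptions, not the published formula): aggregate the
# acceptance values that field experts assign to one control point of an
# AI-based decision system, weighting each expert by their trust, and report
# a simple dispersion-based confidence alongside the aggregate.
from dataclasses import dataclass

@dataclass
class ExpertRating:
    acceptance: float  # expert's acceptance at this control point, in [0, 1]
    trust: float       # trust weight assigned to this expert, in (0, 1]

def aggregate_acceptance(ratings: list[ExpertRating]) -> tuple[float, float]:
    """Return (trust-weighted mean acceptance, confidence). Confidence is taken
    here as 1 minus the trust-weighted standard deviation of the ratings, an
    illustrative choice rather than the paper's definition."""
    total_trust = sum(r.trust for r in ratings)
    mean = sum(r.trust * r.acceptance for r in ratings) / total_trust
    variance = sum(r.trust * (r.acceptance - mean) ** 2 for r in ratings) / total_trust
    return mean, 1.0 - variance ** 0.5

if __name__ == "__main__":
    ratings = [ExpertRating(0.9, 0.8), ExpertRating(0.7, 0.6), ExpertRating(0.8, 0.9)]
    acceptance, confidence = aggregate_acceptance(ratings)
    print(f"aggregated acceptance ~ {acceptance:.2f}, confidence ~ {confidence:.2f}")
```

Running the same aggregation at several control points yields one (acceptance, confidence) pair per point, which matches the abstract's idea of measuring on as many control points as the users find desirable.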

Trustworthy AI: Ensuring Explainability & Acceptance (2023-12)
Authors: Kaur, Davinder; Durresi, Arjan; Tuceryan, Mihran; Dundar, Murat; Hu, Qin
Abstract: In the dynamic realm of Artificial Intelligence (AI), this study explores the multifaceted landscape of Trustworthy AI with a dedicated focus on achieving both explainability and acceptance. The research addresses the evolving dynamics of AI, emphasizing the essential role of human involvement in shaping its trajectory. A primary contribution of this work is the introduction of a novel "Trustworthy Explainability Acceptance Metric", tailored for the evaluation of AI-based systems by field experts. Grounded in a versatile distance-based acceptance approach, this metric provides a reliable measure of acceptance value. Practical applications of this metric are illustrated, particularly in a critical domain such as medical diagnostics. Another significant contribution is the proposal of a trust-based security framework for 5G social networks. This framework enhances security and reliability by incorporating community insights and leveraging trust mechanisms, presenting a valuable advancement in social network security. The study also introduces an artificial conscience-control module model, innovating with the concept of "Artificial Feeling." This model is designed to enhance the adaptability of AI systems based on user preferences, ensuring controllability, safety, reliability, and trustworthiness in AI decision-making; this innovation contributes to fostering broader societal acceptance of AI technologies. Additionally, the research conducts a comprehensive survey of the foundational requirements for establishing trustworthiness in AI. Emphasizing fairness, accountability, privacy, acceptance, and verification/validation, this survey lays the groundwork for understanding and addressing ethical considerations in AI applications. The study concludes with an exploration of quantum alternatives, offering fresh perspectives on algorithmic approaches in trustworthy AI systems and pushing the boundaries of traditional algorithms. In summary, this work contributes significantly to the discourse on Trustworthy AI, ensuring both explainability and acceptance in the intricate interplay between humans and AI systems. Through its diverse contributions, the research offers valuable insights and practical frameworks for the responsible and ethical deployment of AI in various applications.

Trustworthy Explainability Acceptance: A New Metric to Measure the Trustworthiness of Interpretable AI Medical Diagnostic Systems (Springer, 2021-06)
Authors: Kaur, Davinder; Uslu, Suleyman; Durresi, Arjan; Badve, Sunil; Dundar, Murat (Computer and Information Science, School of Science)
Abstract: We propose the Trustworthy Explainability Acceptance metric to evaluate explainable AI systems with an expert-in-the-loop. Our metric calculates acceptance by quantifying the distance between the explanations generated by the AI system and the reasoning provided by the experts, based on their expertise and experience. Our metric also evaluates the trust of the experts, using our trust mechanism, so that different groups of experts can be included. The metric can be easily adapted to any interpretable AI system and used in the standardization process of trustworthy AI systems. We illustrate the proposed metric on the high-stakes medical AI application of predicting Ductal Carcinoma in Situ (DCIS) recurrence and show that it successfully captures the experts' acceptance of the explainability of the AI system for DCIS recurrence.
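
The distance-based idea in this abstract can be sketched by treating both the AI system's explanation and each expert's reasoning as normalized feature-importance vectors. The code below is a minimal illustration under assumptions introduced here, not the paper's implementation; the Euclidean distance, the normalization, and the DCIS-style feature names are all hypothetical.

```python
# Minimal sketch (illustrative assumptions, not the paper's implementation):
# turn the distance between the AI explanation and an expert's reasoning,
# both given as importance vectors that sum to 1, into an acceptance value,
# then aggregate across experts using trust weights.
import math

def explanation_acceptance(ai_importance: dict[str, float],
                           expert_importance: dict[str, float]) -> float:
    """Acceptance = 1 - normalized Euclidean distance between the two vectors.
    sqrt(2) is the largest possible distance between two non-negative vectors
    that each sum to 1."""
    features = set(ai_importance) | set(expert_importance)
    dist = math.sqrt(sum((ai_importance.get(f, 0.0) - expert_importance.get(f, 0.0)) ** 2
                         for f in features))
    return 1.0 - min(dist / math.sqrt(2.0), 1.0)

def trust_weighted_acceptance(ai_importance: dict[str, float],
                              experts: list[tuple[float, dict[str, float]]]) -> float:
    """experts: list of (trust, importance_vector) pairs."""
    total_trust = sum(trust for trust, _ in experts)
    return sum(trust * explanation_acceptance(ai_importance, vec)
               for trust, vec in experts) / total_trust

if __name__ == "__main__":
    # Hypothetical feature names; not taken from the DCIS study.
    ai = {"tumor_size": 0.5, "nuclear_grade": 0.3, "margin_status": 0.2}
    experts = [(0.9, {"tumor_size": 0.4, "nuclear_grade": 0.4, "margin_status": 0.2}),
               (0.7, {"tumor_size": 0.6, "nuclear_grade": 0.2, "margin_status": 0.2})]
    print(f"explainability acceptance ~ {trust_weighted_acceptance(ai, experts):.2f}")
```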

A Trustworthy Human–Machine framework for collective decision making in Food–Energy–Water management: The role of trust sensitivity (Elsevier, 2021-02)
Authors: Uslu, Suleyman; Kaur, Davinder; Rivera, Samuel J.; Durresi, Arjan; Babbar-Sebens, Meghna; Tilt, Jenna H. (Computer and Information Science, School of Science)
Abstract: We propose a hybrid Trustworthy Human–Machine collective decision-making framework to manage Food–Energy–Water (FEW) resources. Decisions for managing such resources not only impact the environment but also influence the economic productivity of FEW sectors and the well-being of society. Therefore, while algorithms can be used to develop optimal solutions under various criteria, it is essential to explain such solutions to the community; more importantly, the community should accept such solutions for them to be realistically applicable. In our collaborative computational framework for decision support, machines and humans interact to converge on the best solutions accepted by the community. In this framework, trust among human actors during decision making is measured and managed using a novel trust management framework. Furthermore, such trust is used to encourage human actors, depending on their trust sensitivity, to choose among the solutions generated by algorithms that satisfy the community's preferred trade-offs among various objectives. In this paper, we show different decision-making scenarios with continuous and discrete solutions. We then propose a game-theory approach in which actors maximize a payoff that combines their share and their trust, weighted by their trust sensitivity, and we run simulations of decision-making scenarios with actors having different distributions of trust sensitivities. Results showed that when actors have high trust sensitivity, consensus is reached 52% faster than in scenarios with low trust sensitivity. The use of ratings of ratings increased solution trustworthiness by 50%, and the same level of solution trustworthiness was reached 2.7 times faster when ratings of ratings were included.
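
The trust-sensitivity effect reported in this abstract (highly trust-sensitive actors converging on community-preferred solutions faster) can be illustrated with a toy payoff. The sketch below is a minimal illustration, not the authors' game-theoretic model: the linear payoff, the candidate solutions, and the numbers are assumptions introduced here.

```python
# Minimal sketch (illustrative assumptions, not the authors' payoff function):
# each actor chooses among algorithm-generated candidate solutions so as to
# maximize a payoff combining the actor's own share with the trust the actor
# expects to keep, weighted by the actor's trust sensitivity.
from dataclasses import dataclass

@dataclass
class Candidate:
    name: str
    share: float           # actor's normalized share under this solution, in [0, 1]
    expected_trust: float  # trust the actor expects from peers after choosing it, in [0, 1]

def best_choice(candidates: list[Candidate], trust_sensitivity: float) -> Candidate:
    """Pick the candidate maximizing payoff = share + trust_sensitivity * expected_trust."""
    return max(candidates, key=lambda c: c.share + trust_sensitivity * c.expected_trust)

if __name__ == "__main__":
    candidates = [
        Candidate("self-favoring", share=0.9, expected_trust=0.3),
        Candidate("community-preferred", share=0.6, expected_trust=0.9),
    ]
    for sensitivity in (0.2, 1.5):  # low vs. high trust sensitivity
        pick = best_choice(candidates, sensitivity)
        print(f"trust sensitivity {sensitivity}: chooses {pick.name}")
```

With low trust sensitivity the self-favoring solution wins the payoff comparison, while high trust sensitivity flips the choice to the community-preferred solution, consistent with the abstract's observation that highly trust-sensitive actors reach consensus faster.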