Browsing by Author "Rivera, Samuel J."
Now showing 1 - 3 of 3
Item
Control Theoretical Modeling of Trust-Based Decision Making in Food-Energy-Water Management (Springer, 2021)
Uslu, Suleyman; Kaur, Davinder; Rivera, Samuel J.; Durresi, Arjan; Babbar-Sebens, Meghna; Tilt, Jenna H.; Computer and Information Science, School of Science
We propose a hybrid Human-Machine decision-making system to manage Food-Energy-Water resources. In our system, trust among human actors during decision making is measured and managed. Furthermore, this trust is used to pressure human actors to choose among the solutions generated by algorithms that satisfy the community's preferred trade-offs among various objectives. We model the trust-based loops in decision making using control theory; the feedback signal is the trust pressure that actors receive from their peers. Using control theory, we studied the dynamics of an actor's trust and then modeled the change in solution distances. In both scenarios, we also calculated the settling times, in number of rounds, and the stability using the transfer functions and their Z-transforms, to show whether and when the decision making is finalized.

Item
Trustworthy Acceptance: A New Metric for Trustworthy Artificial Intelligence Used in Decision Making in Food–Energy–Water Sectors (Springer, 2021-04)
Barolli, Leonard; Woungang, Isaac; Enokido, Tomoya; Uslu, Suleyman; Kaur, Davinder; Rivera, Samuel J.; Durresi, Arjan; Durresi, Mimoza; Babbar-Sebens, Meghna; Computer and Information Science, School of Science
We propose, for the first time, a trustworthy acceptance metric and its measurement methodology to evaluate the trustworthiness of AI-based systems used in decision making in Food-Energy-Water (FEW) management. The proposed metric is a significant step forward in the standardization process of AI systems: standardizing the trustworthiness of AI systems is essential, but until now, standardization efforts have remained at the level of high-level principles. The measurement methodology of the proposed metric includes human experts in the loop, and it is based on our trust management system. Our metric captures and quantifies the system's transparent evaluation by field experts at as many control points as the users desire. We illustrate the trustworthy acceptance metric and its measurement methodology using AI in decision-making scenarios in the Food-Energy-Water sectors; however, the proposed metric and its methodology can be easily adapted to other fields of AI applications. We show that our metric successfully captures the aggregated acceptance of any number of experts, can be used to take multiple measurements at various points of the system, and provides confidence values for the measured acceptance.

Item
A Trustworthy Human–Machine framework for collective decision making in Food–Energy–Water management: The role of trust sensitivity (Elsevier, 2021-02)
Uslu, Suleyman; Kaur, Davinder; Rivera, Samuel J.; Durresi, Arjan; Babbar-Sebens, Meghna; Tilt, Jenna H.; Computer and Information Science, School of Science
We propose a hybrid Trustworthy Human–Machine collective decision-making framework to manage Food–Energy–Water (FEW) resources. Decisions for managing such resources affect not only the environment but also the economic productivity of FEW sectors and the well-being of society. Therefore, while algorithms can be used to develop optimal solutions under various criteria, it is essential to explain such solutions to the community; more importantly, the community must accept such solutions for them to be applied realistically. In our collaborative computational framework for decision support, machines and humans interact to converge on the best solutions accepted by the community. In this framework, trust among human actors during decision making is measured and managed using a novel trust management framework. Furthermore, this trust is used to encourage human actors, depending on their trust sensitivity, to choose among the solutions generated by algorithms that satisfy the community's preferred trade-offs among various objectives. In this paper, we show different decision-making scenarios with continuous and discrete solutions. Then, we propose a game-theory approach in which actors maximize a payoff combining their share and their trust, weighted by their trust sensitivity. We run simulations of decision-making scenarios with actors having different distributions of trust sensitivities. Results showed that when actors have high trust sensitivity, consensus is reached 52% faster than in scenarios with low trust sensitivity. Utilizing ratings of ratings increased the solution trustworthiness by 50%, and the same level of solution trustworthiness is reached 2.7 times faster when ratings of ratings are included.
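The first item above models trust pressure as a discrete-time feedback loop and reads settling time and stability off the transfer functions' Z-transforms. A minimal sketch of that idea in Python; the first-order update rule, the gain value, and the 2% settling band are illustrative assumptions, not the paper's actual model:

```python
# Sketch: trust pressure as a discrete-time feedback loop. Each round, an
# actor's trust t[k] is nudged toward the community consensus level by peer
# pressure. The closed loop is first order, T(z) = g / (z - (1 - g)), so the
# error decays by a factor of (1 - g) per round.

def simulate_trust(t0: float, consensus: float, gain: float, rounds: int) -> list[float]:
    """First-order update: t[k+1] = t[k] + gain * (consensus - t[k])."""
    trust = [t0]
    for _ in range(rounds):
        trust.append(trust[-1] + gain * (consensus - trust[-1]))
    return trust

def settling_time(series: list[float], final: float, band: float = 0.02) -> int:
    """Rounds until the trajectory stays within +/- band of the final value."""
    for k in range(len(series)):
        if all(abs(v - final) <= band * abs(final) for v in series[k:]):
            return k
    return len(series)

if __name__ == "__main__":
    traj = simulate_trust(t0=0.2, consensus=0.8, gain=0.3, rounds=40)
    print("settled after", settling_time(traj, final=0.8), "rounds")
```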
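The second item's metric aggregates the acceptance of any number of experts and attaches a confidence value to each measurement. A minimal sketch, assuming a trust-weighted mean for the aggregate and a dispersion-based confidence; both formulas are assumptions, not the paper's definitions:

```python
def trustworthy_acceptance(acceptances: list[float], trusts: list[float]) -> tuple[float, float]:
    """Aggregate expert acceptance scores (0..1), weighted by each expert's
    trust, and return (acceptance, confidence).

    Assumed form: confidence is high when trusted experts agree, i.e. 1 minus
    the trust-weighted dispersion around the aggregate.
    """
    total = sum(trusts)
    aggregate = sum(a * t for a, t in zip(acceptances, trusts)) / total
    dispersion = sum(t * abs(a - aggregate) for a, t in zip(acceptances, trusts)) / total
    return aggregate, 1.0 - dispersion

# One measurement at one control point of the system:
scores = [0.9, 0.8, 0.85, 0.4]   # acceptance reported by four experts
weights = [1.0, 0.9, 0.8, 0.3]   # trust assigned to each expert
acc, conf = trustworthy_acceptance(scores, weights)
print(f"acceptance={acc:.3f} confidence={conf:.3f}")
```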
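The third item has actors maximize a payoff that trades their share of the resource against the trust they stand to gain or lose, scaled by their individual trust sensitivity. A sketch of one such choice, with a hypothetical additive payoff and made-up numbers, illustrating why trust-sensitive actors concede toward the community solution:

```python
def payoff(share: float, trust_gain: float, sensitivity: float) -> float:
    """Illustrative payoff: the actor's resource share plus the trust gained
    by moving toward the community solution, with the trust term weighted by
    the actor's trust sensitivity (assumed form, not the paper's)."""
    return share + sensitivity * trust_gain

def best_offer(offers: list[tuple[float, float]], sensitivity: float) -> tuple[float, float]:
    """Pick the (share, trust_gain) offer that maximizes this actor's payoff."""
    return max(offers, key=lambda o: payoff(o[0], o[1], sensitivity))

# Each candidate solution gives the actor a share and a trust gain or loss.
offers = [(0.70, -0.20), (0.55, 0.05), (0.40, 0.30)]

# A trust-insensitive actor holds out for the largest share; a trust-sensitive
# actor gives up share to protect reputation, which is the mechanism behind
# the faster consensus reported in the paper.
print(best_offer(offers, sensitivity=0.2))   # -> (0.7, -0.2)
print(best_offer(offers, sensitivity=2.0))   # -> (0.4, 0.3)
```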