Browsing by Author "Tilt, Jenna H."
Now showing 1 - 2 of 2
Item: Control Theoretical Modeling of Trust-Based Decision Making in Food-Energy-Water Management (Springer, 2021)
Authors: Uslu, Suleyman; Kaur, Davinder; Rivera, Samuel J.; Durresi, Arjan; Babbar-Sebens, Meghna; Tilt, Jenna H.
Affiliation: Computer and Information Science, School of Science
Abstract: We propose a hybrid Human–Machine decision-making system to manage Food–Energy–Water resources. In our system, trust among human actors during decision making is measured and managed. Furthermore, such trust is used to pressure human actors to choose among the solutions generated by algorithms that satisfy the community's preferred trade-offs among various objectives. We model the trust-based feedback loops in decision making using control theory; in this system, the feedback signal is the trust pressure that actors receive from peers. Using control theory, we studied the dynamics of an actor's trust and then modeled the change in solution distances. In both scenarios, we also calculated the settling times and the stability, using the transfer functions and their Z-transforms, as the number of rounds needed to show whether and when the decision making is finalized.

Item: A Trustworthy Human–Machine framework for collective decision making in Food–Energy–Water management: The role of trust sensitivity (Elsevier, 2021-02)
Authors: Uslu, Suleyman; Kaur, Davinder; Rivera, Samuel J.; Durresi, Arjan; Babbar-Sebens, Meghna; Tilt, Jenna H.
Affiliation: Computer and Information Science, School of Science
Abstract: We propose a hybrid Trustworthy Human–Machine collective decision-making framework to manage Food–Energy–Water (FEW) resources. Decisions for managing such resources affect not only the environment but also the economic productivity of FEW sectors and the well-being of society. Therefore, while algorithms can be used to develop optimal solutions under various criteria, it is essential to explain such solutions to the community. More importantly, the community should accept such solutions so that they can realistically be applied. In our collaborative computational framework for decision support, machines and humans interact to converge on the best solutions accepted by the community. In this framework, trust among human actors during decision making is measured and managed using a novel trust management framework. Furthermore, such trust is used to encourage human actors, depending on their trust sensitivity, to choose among the solutions generated by algorithms that satisfy the community's preferred trade-offs among various objectives. In this paper, we show different decision-making scenarios with continuous and discrete solutions. Then, we propose a game-theory approach in which actors maximize a payoff over their share and their trust, weighted by their trust sensitivity. We run simulations of decision-making scenarios with actors having different distributions of trust sensitivities. Results showed that when actors have high trust sensitivity, consensus is reached 52% faster than in scenarios with low trust sensitivity. Utilizing ratings of ratings increased solution trustworthiness by 50%; moreover, the same level of solution trustworthiness was reached 2.7 times faster when ratings of ratings were included.
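The qualitative finding that higher trust sensitivity speeds convergence can be illustrated with a toy discrete-time feedback loop. This is only a sketch under stated assumptions, not the papers' actual model: the update rule (each actor moves its proposed solution toward the group mean at a rate given by its trust sensitivity), the actor counts, and the function name `rounds_to_consensus` are all illustrative choices.

```python
import random

def rounds_to_consensus(sensitivities, tol=0.01, max_rounds=1000, seed=42):
    """Toy trust-pressure loop: each round, every actor moves its proposed
    solution toward the group mean at a rate equal to its trust sensitivity.
    Returns the number of rounds until the spread of solutions falls below
    `tol` (a rough analogue of a settling time for this first-order system).
    Illustrative only; not the model used in the papers above."""
    rng = random.Random(seed)
    solutions = [rng.uniform(0.0, 1.0) for _ in sensitivities]
    for r in range(1, max_rounds + 1):
        mean = sum(solutions) / len(solutions)
        # Trust pressure pulls each actor toward the community position,
        # scaled by that actor's trust sensitivity k.
        solutions = [s + k * (mean - s) for s, k in zip(solutions, sensitivities)]
        if max(solutions) - min(solutions) < tol:
            return r
    return max_rounds

high = rounds_to_consensus([0.8] * 5)  # highly trust-sensitive actors
low = rounds_to_consensus([0.2] * 5)   # weakly trust-sensitive actors
print(high, low)  # higher sensitivity -> fewer rounds to consensus
```

With identical sensitivities the spread contracts by a constant factor each round, so the loop behaves like a first-order discrete system whose settling time shrinks as sensitivity grows, mirroring the direction of the reported 52% result (though not its magnitude, which depends on the papers' actual dynamics).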