Browsing by Subject "Causal Inference"
Now showing 1 - 2 of 2
Count-Regression-Based Empirical Causal Analysis from a Potential Outcomes Perspective: Accounting for Boundedness, Discreteness, Dispersion and Unobservable Confounding (2024-06)
Kazeminezhad, Golnoush; Terza, Joseph V.; Harle, Christopher A.; Morrison, Wendy; Russell, Steven

Empirical economic research is primarily driven by the desire to offer scientific evidence that informs the study of cause and effect. In this dissertation, I develop new models for count-regression-model-based (CRM-based) causal effect estimation, in which the value of the outcome of interest is restricted to the non-negative integers. I implement first-order two-stage residual inclusion (FO-2SRI) methods, in the context of the general potential outcomes framework, that accommodate unobservable confounding and the nonlinearities arising from the intrinsic characteristics of count-valued outcomes: boundedness (the outcome is non-negative), discreteness (the outcome has countable support), and dispersion (the conditional variance and other higher-order conditional moments of the outcome need not equal its conditional mean). The focus here is on the case in which the causal variable is continuous. The newly proposed causal effect estimators are compared with extant FO-2SRI estimators based on conventional control function methods and with the linear instrumental variables (LIV) estimator. A series of simulation studies investigates the accuracy of the proposed estimators relative to the extant ones and examines the robustness of the fully nonlinear CRM-based FO-2SRI methods to an important type of misspecification error. The models are also applied to real-world data from Nigeria to investigate the effect of women's education on their fertility decisions in a developing country.
The results of the simulation studies reveal that estimates obtained via the newly proposed estimators are very accurate and diverge widely from the results of the extant control function and LIV methods. Moreover, one of the new estimators, which allows dispersion flexibility, dominated all other estimators (aside from a few extreme dispersion cases) with regard to avoidance of misspecification bias. Finally, the results showed the same estimator to be quite accurate over a wide range of values of the dispersion parameter (which measures mean/variance divergence). The real-data analysis yielded similar results, indicating that increasing women's education decreases childbearing.

Trustworthy AI: Ensuring Explainability & Acceptance (2023-12)
Kaur, Davinder; Durresi, Arjan; Tuceryan, Mihran; Dundar, Murat; Hu, Qin

In the dynamic realm of Artificial Intelligence (AI), this study explores the multifaceted landscape of Trustworthy AI with a dedicated focus on achieving both explainability and acceptance. The research addresses the evolving dynamics of AI, emphasizing the essential role of human involvement in shaping its trajectory. A primary contribution of this work is the introduction of a novel "Trustworthy Explainability Acceptance Metric", tailored for the evaluation of AI-based systems by field experts. Grounded in a versatile distance-acceptance approach, this metric provides a reliable measure of acceptance value. Practical applications of this metric are illustrated, particularly in a critical domain such as medical diagnostics. Another significant contribution is the proposal of a trust-based security framework for 5G social networks. This framework enhances security and reliability by incorporating community insights and leveraging trust mechanisms, presenting a valuable advancement in social network security. The study also introduces an artificial conscience-control module model, innovating with the concept of "Artificial Feeling."
This model is designed to enhance AI system adaptability based on user preferences, ensuring controllability, safety, reliability, and trustworthiness in AI decision-making. This innovation contributes to fostering increased societal acceptance of AI technologies. Additionally, the research conducts a comprehensive survey of foundational requirements for establishing trustworthiness in AI. Emphasizing fairness, accountability, privacy, acceptance, and verification/validation, this survey lays the groundwork for understanding and addressing ethical considerations in AI applications. The study concludes with an exploration of quantum alternatives, offering fresh perspectives on algorithmic approaches in trustworthy AI systems. This exploration broadens the horizons of AI research, pushing the boundaries of traditional algorithms. In summary, this work significantly contributes to the discourse on Trustworthy AI, ensuring both explainability and acceptance in the intricate interplay between humans and AI systems. Through its diverse contributions, the research offers valuable insights and practical frameworks for the responsible and ethical deployment of AI in various applications.
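The core technique named in the first abstract, first-order two-stage residual inclusion (FO-2SRI) with a continuous causal variable and a count-valued outcome, can be sketched in a few lines. The sketch below is illustrative only: it assumes a plain Poisson second stage and a simulated data-generating process chosen here for the example, whereas the dissertation develops more general, dispersion-flexible CRM-based estimators. Stage one regresses the continuous causal variable on an instrument; stage two includes the first-stage residual as an extra regressor in the count regression, which absorbs the unobservable confounder.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000

# Simulated data with unobservable confounding: u drives both the
# continuous causal variable x and the count outcome y (assumed DGP,
# not the dissertation's actual design).
z = rng.normal(size=n)                 # instrument
u = rng.normal(size=n)                 # unobservable confounder
x = 0.5 * z + u + rng.normal(size=n)   # endogenous continuous treatment
lam = np.exp(0.3 + 0.4 * x - 0.8 * u)  # count mean: bounded, nonlinear
y = rng.poisson(lam)                   # discrete, non-negative outcome

# Stage 1: OLS of x on the instrument; keep the first-stage residual.
Z = np.column_stack([np.ones(n), z])
b1 = np.linalg.lstsq(Z, x, rcond=None)[0]
resid = x - Z @ b1

# Stage 2: Poisson regression of y on x plus the residual
# (residual inclusion), fit by Newton-Raphson on the log-likelihood.
X2 = np.column_stack([np.ones(n), x, resid])
beta = np.zeros(X2.shape[1])
for _ in range(50):
    mu = np.exp(X2 @ beta)
    grad = X2.T @ (y - mu)                 # score vector
    hess = X2.T @ (X2 * mu[:, None])       # observed information
    step = np.linalg.solve(hess, grad)
    beta = beta + step
    if np.max(np.abs(step)) < 1e-10:
        break

print(beta[1])  # 2SRI estimate of the causal coefficient on x
```

With this data-generating process the second-stage coefficient on x should land near the true value 0.4, whereas a naive Poisson regression of y on x alone would be biased by the confounder. A dispersion-flexible variant would replace the Poisson second stage with, for example, a negative binomial model.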