Browsing by Subject "federated learning"
Now showing 1 - 9 of 9
Item: Alliance Makes Difference? Maximizing Social Welfare in Cross-Silo Federated Learning (IEEE, 2024-02)
Chen, Jianan; Hu, Qin; Jiang, Honglu; Computer and Information Science, Purdue School of Science
As one of the typical settings of Federated Learning (FL), cross-silo FL allows organizations to jointly train an optimal Machine Learning (ML) model. In this case, some organizations may try to obtain the global model without contributing their local training power, lowering the social welfare. In this article, we model the interactions among organizations in cross-silo FL as a public goods game and theoretically prove that there exists a social dilemma where the maximum social welfare is not achieved in Nash equilibrium. To overcome this dilemma, we employ the Multi-player Multi-action Zero-Determinant (MMZD) strategy to maximize the social welfare. With the help of the MMZD, an individual organization can unilaterally control the social welfare without extra cost. Since the MMZD strategy can be adopted by all organizations, we further study the case of multiple organizations jointly adopting the MMZD strategy to form an MMZD Alliance (MMZDA). We prove that the MMZDA strategy can strengthen the control of the maximum social welfare. Experimental results validate that the MMZD strategy is effective in obtaining the maximum social welfare and that the MMZDA can achieve a larger maximum value.

Item: GAN-inspired Defense Against Backdoor Attack on Federated Learning Systems (IEEE, 2023-09)
Sundar, Agnideven Palanisamy; Li, Feng; Zou, Xukai; Gao, Tianchong; Hosler, Ryan; Computer Science, Luddy School of Informatics, Computing, and Engineering
Federated Learning (FL) provides an opportunity for clients with limited data resources to combine and build better Machine Learning models without compromising their privacy. However, aggregating contributions from various clients implies that the errors present in some clients' resources will also be propagated to all clients through the combined model. Malicious entities leverage this weakness to disrupt the normal functioning of the FL system for their own gain. A backdoor attack is one such attack, where the malicious entities act as clients and implant a small trigger into the global model. Once implanted, the model performs the attacker-desired task in the presence of the trigger but acts benignly otherwise. In this paper, we build a GAN-inspired defense mechanism that can detect and defend against such backdoor triggers. The unavailability of labeled benign and backdoored models has prevented researchers from building detection classifiers. We tackle this problem by utilizing the clients as Generators to construct the required dataset. We place the Discriminator on the server side, where it acts as a binary classifier that detects backdoored models. We experimentally prove the proficiency of our approach with the image-based non-IID datasets CIFAR10 and CelebA. Our prediction-probability-based defense mechanism successfully removes all the influence of backdoors from the global model.
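For the GAN-inspired defense above, a minimal sketch of the server-side filtering idea follows: a small discriminator scores each flattened client update as backdoored or benign, and flagged updates are excluded before averaging. The network shape, the threshold, and the helper names (`UpdateDiscriminator`, `filter_and_aggregate`) are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class UpdateDiscriminator(nn.Module):
    """Binary classifier scoring a flattened model update: 1 ~ backdoored, 0 ~ benign.
    (Hypothetical architecture; the paper's Discriminator is trained on
    client-generated examples, which is omitted here.)"""
    def __init__(self, dim):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim, 256), nn.ReLU(),
            nn.Linear(256, 64), nn.ReLU(),
            nn.Linear(64, 1), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.net(x)

def filter_and_aggregate(global_params, client_updates, discriminator, threshold=0.5):
    """Drop updates the discriminator flags as backdoored, then average the rest
    into the flat global parameter vector."""
    kept = []
    for update in client_updates:                  # each update: 1-D tensor of parameter deltas
        p_backdoor = discriminator(update.unsqueeze(0)).item()
        if p_backdoor < threshold:
            kept.append(update)
    if not kept:                                   # nothing trusted: keep the global model unchanged
        return global_params
    return global_params + torch.stack(kept).mean(dim=0)
```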
Item: Incentive Mechanism Design for Joint Resource Allocation in Blockchain-Based Federated Learning (IEEE, 2023-05)
Wang, Zhilin; Hu, Qin; Li, Ruinian; Xu, Minghui; Xiong, Zehui; Computer Science, Luddy School of Informatics, Computing, and Engineering
Blockchain-based federated learning (BCFL) has recently gained tremendous attention because of its advantages, such as decentralization and privacy protection of raw data. However, there have been few studies focusing on the allocation of resources for the participating devices (i.e., clients) in the BCFL system. In particular, in the BCFL framework where the FL clients are also the blockchain miners, clients have to train the local models, broadcast the trained model updates to the blockchain network, and then perform mining to generate new blocks. Since each client has a limited amount of computing resources, the problem of allocating computing resources between training and mining needs to be carefully addressed. In this paper, we design an incentive mechanism to help the model owner (MO) (i.e., the BCFL task publisher) assign each client appropriate rewards for training and mining; the client then determines the amount of computing power to allocate to each subtask based on these rewards, following a two-stage Stackelberg game. After analyzing the utilities of the MO and clients, we transform the game model into two optimization problems, which are sequentially solved to derive the optimal strategies for both the MO and the clients. Further, considering that the local training information of each client may not be known by others, we extend the game model with analytical solutions to the incomplete-information scenario. Extensive experimental results demonstrate the validity of our proposed schemes.

Item: Intelligent Device Selection in Federated Edge Learning with Energy Efficiency (2021-12)
Peng, Cheng; Hu, Qin; Kang, Kyubyung; Zou, Xukai
Due to the increasing demand from mobile devices for real-time responses from cloud computing services, federated edge learning (FEL) has emerged as a new computing paradigm that utilizes edge devices to achieve efficient machine learning while protecting their data privacy. Implementing efficient FEL suffers from the challenges of devices' limited computing and communication resources, as well as unevenly distributed datasets, which has inspired several existing studies focusing on device selection to optimize time consumption and data diversity. However, these studies fail to consider the energy consumption of edge devices given their limited power supply, which can seriously affect the cost-efficiency of FEL through unexpected device dropouts. To fill this gap, we propose a device selection model capturing both energy consumption and data diversity optimization, under the constraints of time consumption and training data amount. We then solve the optimization problem by reformulating the original model and designing a novel algorithm, named E2DS, to greatly reduce the time complexity. By comparing with two classical FEL schemes, we validate the superiority of our proposed device selection mechanism for FEL with extensive experimental results. Furthermore, for each device in a real FEL environment, multiple tasks typically occupy the CPU at the same time, so the CPU frequency available for training fluctuates constantly, which may lead to large errors in computing energy consumption. To solve this problem, we deploy reinforcement learning to learn the frequency so as to approximate its real value. Compared to increasing data diversity, we also consider a more direct way to improve convergence speed by using loss values. We then formulate an optimization problem that minimizes the energy consumption and maximizes the loss values to select the appropriate set of devices. After reformulating the problem, we design a new algorithm, FCE2DS, which achieves better convergence speed and accuracy. Finally, we compare the performance of the proposed scheme with the previous scheme and the traditional scheme to verify its improvement in multiple aspects.
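For the device-selection formulation in the item above (balancing energy consumption and data diversity under time and data-amount constraints), the toy greedy heuristic below conveys the flavor of the trade-off. It is not the E2DS or FCE2DS algorithm; the fields and the diversity-per-joule scoring rule are assumptions.

```python
from dataclasses import dataclass

@dataclass
class Device:
    name: str
    energy: float      # estimated energy (J) to train and upload in one round
    time: float        # estimated round completion time (s)
    data: int          # number of local training samples
    diversity: float   # diversity score of the local dataset (higher is better)

def select_devices(devices, deadline, min_data):
    """Greedy selection: prefer diversity per joule, respect the round deadline,
    and stop once the required amount of training data is covered."""
    eligible = [d for d in devices if d.time <= deadline]
    eligible.sort(key=lambda d: d.diversity / d.energy, reverse=True)
    chosen, covered = [], 0
    for d in eligible:
        chosen.append(d)
        covered += d.data
        if covered >= min_data:
            return chosen
    return []          # infeasible: the data-amount constraint cannot be met
```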
Item: Nothing Wasted: Full Contribution Enforcement in Federated Edge Learning (IEEE Xplore, 2021-10)
Hu, Qin; Wang, Shengling; Xiong, Zehui; Cheng, Xiuzhen; Computer and Information Science, School of Science
The explosive amount of data generated at the network edge makes mobile edge computing an essential technology to support real-time applications, calling for powerful data processing and analysis provided by machine learning (ML) techniques. In particular, federated edge learning (FEL) has become prominent in securing the privacy of data owners by keeping the training data local. Existing studies on FEL either utilize in-process optimization or remove unqualified participants in advance. In this paper, we enhance the collaboration of all edge devices in FEL to guarantee that the ML model is trained using all available local data, accelerating the learning process. To that aim, we propose a collective extortion (CE) strategy under the imperfect-information multi-player FEL game, which is proved to be effective in helping the server efficiently elicit the full contribution of all devices without suffering any economic loss. Technically, our proposed CE strategy extends the classical extortion strategy from controlling the proportionate share of expected utilities for a single opponent to the homogeneous control over a group of players, which further presents the attractive trait of being impartial to all participants. Both theoretical analysis and experimental evaluations validate the effectiveness and fairness of our proposed scheme.

Item: Online-Learning-Based Fast-Convergent and Energy-Efficient Device Selection in Federated Edge Learning (IEEE, 2023-03)
Peng, Cheng; Hu, Qin; Wang, Zhilin; Liu, Ryan Wen; Xiong, Zehui; Computer and Information Science, Purdue School of Science
As edge computing faces increasingly severe data security and privacy issues on edge devices, a framework called federated edge learning (FEL) has recently been proposed to enable machine learning (ML) model training at the edge, ensuring communication efficiency and data privacy protection for edge devices. In this paradigm, training efficiency has long been challenged by the heterogeneity of communication conditions, computing capabilities, and available datasets at devices. Currently, researchers focus on solving this challenge via device selection from the perspective of optimizing either energy consumption or convergence speed. However, considering only one of them is insufficient to guarantee long-term system efficiency and stability. To fill the gap, we propose an optimization problem for device selection in FEL that simultaneously minimizes the total energy consumption of selected devices and maximizes the convergence speed of the global model, under the constraints of training data amount and time consumption. For the accurate calculation of energy consumption, we deploy online bandit learning to estimate the CPU-cycle frequency availability of each device, based on which an efficient algorithm, named fast-convergent energy-efficient device selection (FCE2DS), is proposed to solve the optimization problem with a low level of time complexity. Through a series of comparative experiments, we evaluate the performance of the proposed FCE2DS scheme, verifying its high training accuracy and energy efficiency.
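The online-bandit idea in the item above, estimating each device's available CPU-cycle frequency from noisy per-round observations, can be sketched with a generic UCB estimator. This stand-in does not reproduce FCE2DS or its confidence terms; the class and the top-k selection helper are assumptions.

```python
import math

class FreqUCB:
    """Upper-confidence-bound estimate of each device's available CPU-cycle frequency."""
    def __init__(self, n_devices):
        self.counts = [0] * n_devices      # times each device has been selected
        self.means = [0.0] * n_devices     # running mean of observed frequencies
        self.t = 0                         # total observations so far

    def update(self, device, observed_freq):
        self.t += 1
        self.counts[device] += 1
        c = self.counts[device]
        self.means[device] += (observed_freq - self.means[device]) / c

    def estimate(self, device):
        if self.counts[device] == 0:
            return float("inf")            # force exploration of never-selected devices
        bonus = math.sqrt(2 * math.log(self.t) / self.counts[device])
        return self.means[device] + bonus

def top_k_devices(estimator, k):
    """Keep the k devices with the highest optimistic frequency estimates
    (a proxy for low per-round training time and energy)."""
    ranked = sorted(range(len(estimator.counts)), key=estimator.estimate, reverse=True)
    return ranked[:k]
```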
Item: Privacy-preserving federated learning: Application to behind-the-meter solar photovoltaic generation forecasting (Elsevier, 2023-05)
Hosseini, Paniz; Taheri, Saman; Akhavan, Javid; Razban, Ali; Mechanical and Energy Engineering, Purdue School of Engineering and Technology
The growing usage of decentralized renewable energy sources has made accurate estimation of their aggregated generation crucial for maintaining grid flexibility and reliability. However, the majority of distributed photovoltaic (PV) systems are behind-the-meter (BTM) and invisible to utilities, leading to three challenges in obtaining an accurate forecast of their aggregated output. Firstly, traditional centralized prediction algorithms used in previous studies may not be appropriate due to privacy concerns; there is therefore a need for decentralized forecasting methods, such as federated learning (FL), to protect privacy. Secondly, there has been no comparison between localized, centralized, and decentralized forecasting methods for BTM PV production, and the trade-off between prediction accuracy and privacy has not been explored. Lastly, the computational time of data-driven prediction algorithms has not been examined. This article presents an FL power forecasting method for PVs, which uses federated learning as a decentralized collaborative modeling approach to train a single model on data from multiple BTM sites. The machine learning network used to design this FL-based BTM PV forecasting model is a multi-layered perceptron, which ensures the privacy and security of the data. Comparing the suggested FL forecasting model to non-private centralized and entirely private localized models revealed a high level of accuracy, with an RMSE that is 18.17% lower than localized models and 9.9% higher than centralized models.

Item: Resource Optimization for Blockchain-Based Federated Learning in Mobile Edge Computing (IEEE, 2024-05)
Wang, Zhilin; Hu, Qin; Xiong, Zehui; Computer Science, Luddy School of Informatics, Computing, and Engineering
With the rapid rise of mobile edge computing (MEC) and blockchain-based federated learning (BCFL), more studies suggest deploying BCFL on edge servers. In this case, edge servers with restricted resources face the dilemma of serving both mobile devices with their offloading tasks and the BCFL system with model training and blockchain consensus, without sacrificing service quality on either side. To address this challenge, this article proposes a resource allocation scheme for edge servers to provide optimal services at minimum cost. Specifically, we first analyze the energy consumption of the MEC and BCFL tasks, considering the completion time of each task as the service quality constraint. Then, we model the resource allocation challenge as a multivariate, multi-constraint, convex optimization problem. Solving the problem in a progressive manner, we design two algorithms based on the alternating direction method of multipliers (ADMM) for both homogeneous and heterogeneous situations, where equal and on-demand resource distribution strategies are, respectively, adopted. The validity of our proposed algorithms is proved via rigorous theoretical analysis. Moreover, the convergence and efficiency of our proposed resource allocation schemes are evaluated through extensive experiments.
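For the edge-server resource split in the item above (one CPU budget shared between MEC offloading service and BCFL training/consensus), a two-block ADMM sketch is shown below. The quadratic costs around each side's demanded share, the penalty parameter, and the fixed iteration count are assumptions; the paper's actual objectives and its homogeneous/heterogeneous variants are richer.

```python
def admm_cpu_split(cpu_total, mec_demand, bcfl_demand, a=1.0, b=1.0, rho=1.0, iters=200):
    """Split a server's CPU budget between MEC offloading (x) and BCFL work (z)
    with two-block scaled-form ADMM, subject to x + z = cpu_total.

    Illustrative objective: a*(x - mec_demand)^2 + b*(z - bcfl_demand)^2."""
    x, z, u = 0.0, 0.0, 0.0                         # primal variables and scaled dual
    for _ in range(iters):
        # x-update: argmin_x a*(x - mec_demand)^2 + (rho/2)*(x + z - cpu_total + u)^2
        x = (2 * a * mec_demand - rho * (z - cpu_total + u)) / (2 * a + rho)
        # z-update: argmin_z b*(z - bcfl_demand)^2 + (rho/2)*(x + z - cpu_total + u)^2
        z = (2 * b * bcfl_demand - rho * (x - cpu_total + u)) / (2 * b + rho)
        u += x + z - cpu_total                      # dual update enforces the shared budget
    return x, z

# Example: a 10-unit CPU budget contested by demands of 7 (MEC) and 6 (BCFL)
mec_share, bcfl_share = admm_cpu_split(10.0, 7.0, 6.0)
```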
Item: Solving the Federated Edge Learning Participation Dilemma: A Truthful and Correlated Perspective (IEEE, 2022-07)
Hu, Qin; Li, Feng; Zou, Xukai; Xiao, Yinhao; Computer and Information Science, School of Science
An emerging computational paradigm, named federated edge learning (FEL), enables intelligent computing at the network edge while preserving data privacy for edge devices. Given their constrained resources, achieving high execution performance for FEL is a great challenge. Most state-of-the-art studies concentrate on enhancing FEL from the perspective of system operation procedures, taking few precautions during the composition step of the FEL system. Though a few recent studies recognize the importance of FEL formation and propose server-centric device selection schemes, the impact of data sizes is largely overlooked. In this paper, we take advantage of game theory to depict the decision dilemma among edge devices regarding whether or not to participate in FEL given their heterogeneous sizes of local datasets. To realize both individual and global optimization, the server is employed to solve the participation dilemma, which requires accurate information about devices' local datasets. Hence, we utilize mechanism design to enable truthful information solicitation. With the help of correlated equilibrium, we derive a decision-making strategy for devices from the global perspective, which can achieve the long-term stability and efficacy of FEL. For scalability, we optimize the computational complexity of the basic solution to the polynomial level. Lastly, extensive experiments based on both real and synthetic data are conducted to evaluate our proposed mechanisms, with experimental results demonstrating their performance advantages.
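To make the correlated-equilibrium idea in the last item concrete, the helper below checks whether a server-recommended distribution over joint participate/abstain profiles leaves no device better off by deviating from its recommendation. The binary action set and the dictionary-based data structures are illustrative assumptions, not the paper's mechanism.

```python
def is_correlated_equilibrium(recommendation, utilities, n_players, actions=(0, 1), tol=1e-9):
    """recommendation: dict mapping joint-action tuples (0 = abstain, 1 = participate)
    to probabilities; utilities[i][profile]: payoff of player i under that profile.
    Returns True if no player gains by unilaterally ignoring any recommended action."""
    for i in range(n_players):
        for rec_a in actions:               # action the server recommends to player i
            for dev_a in actions:           # action the player might deviate to
                if dev_a == rec_a:
                    continue
                gain = 0.0
                for profile, p in recommendation.items():
                    if profile[i] != rec_a:
                        continue
                    deviated = profile[:i] + (dev_a,) + profile[i + 1:]
                    gain += p * (utilities[i][deviated] - utilities[i][profile])
                if gain > tol:              # profitable deviation found
                    return False
    return True
```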