Browsing by Subject "edge computing"
Now showing 1 - 3 of 3
Item
AI on the Edge with CondenseNeXt: An Efficient Deep Neural Network for Devices with Constrained Computational Resources (2021-08)
Kalgaonkar, Priyank B.; El-Sharkawy, Mohamed A.; King, Brian S.; Rizkalla, Maher E.
The research work presented within this thesis proposes a neoteric variant of deep convolutional neural network architecture, CondenseNeXt, designed specifically for ARM-based embedded computing platforms with constrained computational resources. CondenseNeXt is an improved version of CondenseNet, the baseline architecture whose roots can be traced back to ResNet. CondenseNeXt replaces the group convolutions in CondenseNet with depthwise separable convolutions and introduces group-wise pruning, a model compression technique that removes redundant and insignificant elements which are either irrelevant or do not affect the performance of the network. To relieve the harsh effects of pruning, cardinality, a new dimension added to the existing spatial dimensions, and a class-balanced focal loss function, whose weighting factor is inversely proportional to the number of samples, have been incorporated into the design of CondenseNeXt's algorithm. Furthermore, extensive analyses of this novel CNN architecture were performed on three benchmark image datasets, CIFAR-10, CIFAR-100 and ImageNet, by deploying the trained weights onto an ARM-based embedded computing platform, the NXP BlueBox 2.0, for real-time image classification. The outputs are observed in real time in the RTMaps Remote Studio console to verify the correctness of the predicted classes. CondenseNeXt achieves state-of-the-art image classification performance on the three benchmark datasets, including CIFAR-10 (4.79% top-1 error), CIFAR-100 (21.98% top-1 error) and ImageNet (7.91% single-model, single-crop top-5 error), and up to a 59.98% reduction in forward FLOPs compared to CondenseNet. CondenseNeXt can also achieve a final trained model size of 2.9 MB, albeit at the cost of a 2.26% loss in accuracy. It thus performs image classification on ARM-based computing platforms with outstanding efficiency, without requiring CUDA-enabled GPU support.
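The item above centers on replacing CondenseNet's learned group convolutions with depthwise separable convolutions. As a rough illustration of that building block only (not the authors' CondenseNeXt implementation; the layer arrangement and hyperparameters below are assumptions), a depthwise separable convolution factors a standard convolution into a per-channel 3x3 depthwise convolution followed by a 1x1 pointwise convolution:

```python
# Minimal PyTorch sketch of a depthwise separable convolution block.
# Illustrative only; not the CondenseNeXt code described in the thesis.
import torch
import torch.nn as nn


class DepthwiseSeparableConv(nn.Module):
    def __init__(self, in_channels: int, out_channels: int, stride: int = 1):
        super().__init__()
        # Depthwise: one 3x3 filter per input channel (groups == in_channels).
        self.depthwise = nn.Conv2d(
            in_channels, in_channels, kernel_size=3, stride=stride,
            padding=1, groups=in_channels, bias=False)
        # Pointwise: 1x1 convolution that mixes information across channels.
        self.pointwise = nn.Conv2d(in_channels, out_channels, kernel_size=1,
                                   bias=False)
        self.bn = nn.BatchNorm2d(out_channels)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.act(self.bn(self.pointwise(self.depthwise(x))))


if __name__ == "__main__":
    block = DepthwiseSeparableConv(32, 64)
    print(block(torch.randn(1, 32, 32, 32)).shape)  # torch.Size([1, 64, 32, 32])
```

Compared with a full 3x3 convolution over all channel pairs, this factorization cuts parameters and forward FLOPs substantially, which is the efficiency lever the abstract refers to.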
Item
Intelligent Device Selection in Federated Edge Learning with Energy Efficiency (2021-12)
Peng, Cheng; Hu, Qin; Kang, Kyubyung; Zou, Xukai
Due to the increasing demand from mobile devices for real-time responses from cloud computing services, federated edge learning (FEL) has emerged as a new computing paradigm that utilizes edge devices to achieve efficient machine learning while protecting their data privacy. Implementing efficient FEL suffers from the challenges of devices' limited computing and communication resources, as well as unevenly distributed datasets, which has inspired several existing research efforts focusing on device selection to optimize time consumption and data diversity. However, these studies fail to consider the energy consumption of edge devices given their limited power supply, which can seriously affect the cost-efficiency of FEL through unexpected device dropouts. To fill this gap, we propose a device selection model capturing both energy consumption and data diversity optimization, under constraints on time consumption and the amount of training data. We then solve the optimization problem by reformulating the original model and designing a novel algorithm, named E2DS, which greatly reduces the time complexity. By comparing with two classical FEL schemes, we validate the superiority of our proposed device selection mechanism for FEL with extensive experimental results.
Furthermore, for each device in a real FEL environment, multiple tasks occupy the CPU at the same time, so the CPU frequency available for training fluctuates continuously, which may lead to large errors in estimating the computing energy consumption. To solve this problem, we deploy reinforcement learning to learn the frequency so as to approach its real value. In addition, rather than increasing data diversity, we consider a more direct way to improve the convergence speed, based on loss values. We then formulate an optimization problem that minimizes the energy consumption and maximizes the loss values to select the appropriate set of devices. After reformulating the problem, we design a new algorithm, FCE2DS, as the solution, which achieves better convergence speed and accuracy. Finally, we compare the performance of the proposed scheme with the previous scheme and the traditional scheme to verify its improvement in multiple aspects.

Item
Solving the Federated Edge Learning Participation Dilemma: A Truthful and Correlated Perspective (IEEE, 2022-07)
Hu, Qin; Li, Feng; Zou, Xukai; Xiao, Yinhao; Computer and Information Science, School of Science
An emerging computational paradigm named federated edge learning (FEL) enables intelligent computing at the network edge while preserving data privacy for edge devices. Given their constrained resources, it becomes a great challenge to achieve high execution performance for FEL. Most state-of-the-art approaches concentrate on enhancing FEL from the perspective of system operation procedures, taking few precautions during the composition step of the FEL system. Though a few recent studies recognize the importance of FEL formation and propose server-centric device selection schemes, the impact of data sizes is largely overlooked. In this paper, we take advantage of game theory to depict the decision dilemma among edge devices regarding whether or not to participate in FEL, given their heterogeneous sizes of local datasets. To realize both individual and global optimization, the server is employed to solve the participation dilemma, which requires accurate information about devices' local datasets. Hence, we utilize mechanism design to enable truthful information solicitation. With the help of correlated equilibrium, we derive a decision-making strategy for devices from the global perspective, which can achieve the long-term stability and efficacy of FEL. For scalability, we optimize the computational complexity of the basic solution to the polynomial level. Lastly, extensive experiments based on both real and synthetic data are conducted to evaluate our proposed mechanisms, with experimental results demonstrating their performance advantages.
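The second item formalizes per-round participant selection as an optimization over energy cost, a time budget, and either data diversity or reported loss values. The sketch below is a hypothetical greedy stand-in for that idea, not the E2DS or FCE2DS algorithm from the abstract; the scoring rule, weights, and device fields are assumptions made purely for illustration:

```python
# Hypothetical greedy device selection: favor devices reporting high local
# loss (more room to improve the global model) at low estimated energy cost,
# while keeping the synchronous round within a time budget.
from dataclasses import dataclass
from typing import List


@dataclass
class Device:
    name: str
    loss: float          # latest reported local loss value
    energy_cost: float   # estimated energy (J) for one local training round
    round_time: float    # estimated wall-clock time (s) for one local round


def select_devices(devices: List[Device], time_budget: float,
                   alpha: float = 1.0, beta: float = 0.1) -> List[Device]:
    """Greedily maximize alpha*loss - beta*energy subject to the time budget."""
    ranked = sorted(devices, key=lambda d: alpha * d.loss - beta * d.energy_cost,
                    reverse=True)
    chosen, slowest = [], 0.0
    for d in ranked:
        # A synchronous FEL round lasts as long as its slowest participant.
        if max(slowest, d.round_time) <= time_budget:
            chosen.append(d)
            slowest = max(slowest, d.round_time)
    return chosen


if __name__ == "__main__":
    pool = [Device("dev-a", loss=1.8, energy_cost=3.0, round_time=12.0),
            Device("dev-b", loss=0.6, energy_cost=1.0, round_time=5.0),
            Device("dev-c", loss=2.4, energy_cost=9.0, round_time=30.0)]
    print([d.name for d in select_devices(pool, time_budget=20.0)])  # ['dev-a', 'dev-b']
```

The papers' actual algorithms reformulate the problem to reach polynomial time complexity and, in FCE2DS, estimate the fluctuating CPU frequency with reinforcement learning before computing energy; this sketch only conveys the selection trade-off they optimize.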
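The third item resolves the participation dilemma through a correlated equilibrium coordinated by the server. As a minimal sketch of what a correlated equilibrium is, and nothing more (this is not the paper's mechanism, and the payoff numbers are invented), the script below computes a welfare-maximizing correlated equilibrium of a toy two-device participate-or-not game by linear programming:

```python
# Toy correlated equilibrium: choose a distribution over joint actions such
# that no device gains by deviating from its recommended action, then pick
# the distribution maximizing total utility. Payoffs are illustrative only.
import numpy as np
from scipy.optimize import linprog

# u[i][a1][a2]: device i's utility when device 1 plays a1 and device 2 plays a2
# (0 = stay out of FEL, 1 = participate).
u = [
    np.array([[0.0, 0.0], [-1.0, 2.0]]),  # device 1
    np.array([[0.0, -1.0], [0.0, 2.0]]),  # device 2
]

actions = [(a1, a2) for a1 in (0, 1) for a2 in (0, 1)]  # joint action profiles
n = len(actions)

# Objective: maximize expected total utility -> minimize its negative.
c = -np.array([u[0][a] + u[1][a] for a in actions])

# Incentive constraints: for each device i, recommendation a, deviation b,
#   sum over profiles where i plays a of p(profile) * (u_i(a,.) - u_i(b,.)) >= 0.
A_ub, b_ub = [], []
for i in (0, 1):
    for a in (0, 1):
        for b in (0, 1):
            if a == b:
                continue
            row = np.zeros(n)
            for k, prof in enumerate(actions):
                if prof[i] != a:
                    continue
                dev = list(prof)
                dev[i] = b
                row[k] = u[i][prof] - u[i][tuple(dev)]
            A_ub.append(-row)  # linprog expects <= constraints, so negate
            b_ub.append(0.0)

A_eq, b_eq = [np.ones(n)], [1.0]  # probabilities sum to one

res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
              A_eq=np.array(A_eq), b_eq=np.array(b_eq), bounds=[(0, 1)] * n)
for prof, p in zip(actions, res.x):
    print(prof, round(p, 3))
```

In the paper this idea is combined with mechanism design so that devices report their dataset sizes truthfully before the server derives the correlated recommendation; the toy example above skips that step entirely.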