Browsing by Subject "Memory management"
Now showing 1 - 2 of 2
Item: DAG-based Task Orchestration for Edge Computing (IEEE, 2022)
Li, Xiang; Abdallah, Mustafa; Suryavansh, Shikhar; Chiang, Mung; Bagchi, Saurabh
Engineering Technology, Purdue School of Engineering and Technology

Edge computing promises to exploit underlying computation resources closer to users to help run latency-sensitive applications such as augmented reality and video analytics. However, one key missing piece has been how to incorporate personally owned, unmanaged devices into a usable edge computing system. The primary challenges arise from the heterogeneity, lack of interference management, and unpredictable availability of such devices. In this paper, we propose IBDASH, an orchestration framework that schedules application tasks on an edge system comprising a mix of commercial and personal edge devices. IBDASH targets reducing both the end-to-end execution latency and the probability of failure for applications whose tasks have dependencies, captured by directed acyclic graphs (DAGs). IBDASH takes the memory constraints of each edge device and the network bandwidth into consideration. To assess the effectiveness of IBDASH, we run real application tasks on real edge devices with widely varying capabilities, and feed these measurements into a simulator that runs IBDASH at scale. Compared to three state-of-the-art edge orchestration schemes and two intuitive baselines, IBDASH reduces the end-to-end latency and the probability of failure by 14% and 41% on average, respectively.
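The core idea described above, assigning topologically ordered DAG tasks to heterogeneous devices while respecting per-device memory limits, can be sketched in a few lines. This is only an illustrative, greedy sketch, not IBDASH itself: the real framework also models failure probability, bandwidth, and interference, and every name below (`schedule_dag`, the task and device dictionaries) is invented for the example.

```python
from collections import deque

def schedule_dag(tasks, deps, devices):
    """Greedily assign DAG tasks to devices, respecting memory limits.

    tasks:   {task: (exec_time_estimate, mem_required)}
    deps:    {task: set of prerequisite tasks}
    devices: {device: mem_capacity}
    Returns {task: device}, assigned in a valid topological order.
    """
    # Kahn's algorithm: start from tasks with no unmet prerequisites.
    indeg = {t: len(deps.get(t, set())) for t in tasks}
    ready = deque(t for t, d in indeg.items() if d == 0)
    children = {t: set() for t in tasks}
    for t, prereqs in deps.items():
        for p in prereqs:
            children[p].add(t)

    device_finish = {d: 0.0 for d in devices}  # when each device frees up
    assignment = {}
    while ready:
        t = ready.popleft()
        exec_time, mem = tasks[t]
        # Among devices with enough memory, pick the one that would
        # finish this task earliest.
        feasible = [d for d, cap in devices.items() if cap >= mem]
        best = min(feasible, key=lambda d: device_finish[d] + exec_time)
        device_finish[best] += exec_time
        assignment[t] = best
        for c in children[t]:
            indeg[c] -= 1
            if indeg[c] == 0:
                ready.append(c)
    return assignment
```

A task that exceeds a small device's memory is forced onto a larger one, even if the small device is idle; a production scheduler would additionally weigh network transfer cost and each device's failure rate.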
The main takeaway from our work is that it is feasible to combine personal and commercial devices into a usable edge computing platform, one that delivers low and predictable latency and high availability.

Item: HBONext: HBONet with Flipped Inverted Residual (IEEE Xplore, 2021-06)
Joshi, Sanket Ramesh; El-Sharkawy, Mohamed
Electrical and Computer Engineering, School of Engineering and Technology

Top-performing deep CNN (DCNN) architectures are presented every year, evaluated on their suitability and performance for embedded edge applications, especially image classification. There are many obstacles to making these neural network architectures hardware friendly: the limited memory, scarce computational resources, and energy constraints of such devices. The addition of bottleneck modules, which exploit channel interdependencies using either depthwise or groupwise convolutional features, has further helped this classification problem. The classical inverted residual block, a well-known design methodology, has gained renewed attention due to its growing popularity in portable applications. This paper presents a mutated version of the Harmonious Bottleneck (DHbneck) with a Flipped Inverted Residual (FIR) block, which outperforms the existing HBONet architecture in both accuracy and model size. Unlike the existing inverted residual, the FIR block performs identity mapping and spatial transformation at its higher dimensions. The devised architecture is tested and validated on the public CIFAR-10 dataset. The baseline HBONet architecture reaches 80.97% accuracy on CIFAR-10 with a model size of 22 MB; in contrast, the proposed HBONext architecture improves validation accuracy to 88.30% while shrinking the model to 7.66 MB.
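The structural difference the HBONext abstract describes, moving the identity mapping to the block's expanded dimension, can be illustrated by tracking channel counts through each block. This is a hypothetical shape-level sketch based only on the abstract's wording, not the paper's actual FIR implementation; both function names are invented, and the expansion factor of 6 follows the common MobileNetV2 convention.

```python
def inverted_residual_channels(c_in, expand=6):
    """Channel flow through a classical inverted residual block
    (MobileNetV2-style): the skip connection joins the *narrow* ends."""
    expanded = c_in * expand   # 1x1 pointwise expansion
    depthwise = expanded       # 3x3 depthwise conv, channels unchanged
    projected = c_in           # 1x1 pointwise projection back down
    skip_dim = c_in            # identity mapping at the low dimension
    return expanded, depthwise, projected, skip_dim


def flipped_inverted_residual_channels(c_in, expand=6):
    """Channel flow through a hypothetical flipped inverted residual (FIR):
    per the abstract, identity mapping and spatial transformation happen
    at the block's *higher* (expanded) dimension instead."""
    expanded = c_in * expand   # expand first, as before
    depthwise = expanded       # spatial transformation at the high dimension
    skip_dim = expanded        # identity mapping also at the high dimension
    projected = c_in           # projection back down after the residual add
    return expanded, depthwise, projected, skip_dim
```

For a 16-channel input, both blocks expand to 96 channels, but the classical block adds its residual at 16 channels while the flipped variant adds it at 96, which is where the abstract locates FIR's identity mapping.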