Browsing by Subject "Kernel"
A two-branch multi-scale residual attention network for single image super-resolution in remote sensing imagery (IEEE, 2024)
Patnaik, Allen; Bhuyan, Manas K.; MacDorman, Karl F.

High-resolution remote sensing imagery finds applications in diverse fields, such as land-use mapping, crop planning, and disaster surveillance. To offer detailed and precise insights, reconstructing edges, textures, and other features is crucial. Despite recent advances in detail enhancement through deep learning, disparities between original and reconstructed images persist. To address this challenge, we propose a two-branch multi-scale residual attention network for single-image super-resolution reconstruction. The network gathers complex information about input images from two branches with convolution layers of different kernel sizes. The two branches extract both low-level and high-level features from the input image. The network incorporates multi-scale efficient channel attention and spatial attention blocks to capture channel and spatial dependencies in the feature maps. This results in more discriminative features and more accurate predictions. Moreover, residual modules with skip connections can help to overcome the vanishing gradient problem. We trained the proposed model on the WHU-RS19 dataset, collated from Google Earth satellite imagery, and validated it on the UC Merced, RSSCN7, AID, and real-world satellite datasets.
The experimental results show that our network uses features at different levels of detail more effectively than state-of-the-art models.

Maximum Density Divergence for Domain Adaptation (IEEE, 2021)
Li, Jingjing; Chen, Erpeng; Ding, Zhengming; Zhu, Lei; Lu, Ke; Shen, Heng Tao; Computer Information and Graphics Technology, Purdue School of Engineering and Technology

Unsupervised domain adaptation addresses the problem of transferring knowledge from a well-labeled source domain to an unlabeled target domain where the two domains have distinct data distributions. The essence of domain adaptation is thus to mitigate the distribution divergence between the two domains. State-of-the-art methods pursue this idea by either conducting adversarial training or minimizing a metric that defines the distribution gap. In this paper, we propose a new domain adaptation method named adversarial tight match (ATM), which enjoys the benefits of both adversarial training and metric learning. Specifically, we first propose a novel distance loss, named maximum density divergence (MDD), to quantify the distribution divergence. MDD minimizes the inter-domain divergence ("match" in ATM) and maximizes the intra-class density ("tight" in ATM). Then, to address the equilibrium challenge in adversarial domain adaptation, we leverage the proposed MDD within an adversarial domain adaptation framework. Finally, we tailor MDD into a practical learning loss and present ATM. Both empirical evaluation and theoretical analysis verify the effectiveness of the proposed method. Experimental results on four benchmarks, both classical and large-scale, show that our method achieves new state-of-the-art performance on most evaluations.
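The two-branch idea in the first abstract (parallel convolution branches with different kernel sizes, fused and passed through a residual skip connection) can be sketched in a toy 1-D NumPy form. The function name, the 3-tap and 5-tap kernels, and fusion by summation are illustrative assumptions, not the paper's actual architecture:

```python
import numpy as np

def multiscale_residual_block(x, k_small, k_large):
    """Toy 1-D analogue of a two-branch multi-scale residual block.

    Each branch convolves the input with a kernel of a different size,
    the branch outputs are fused by summation, and a skip connection
    adds the input back, as in a residual module. All names, sizes,
    and the fusion rule are illustrative, not the paper's design.
    """
    branch_a = np.convolve(x, k_small, mode="same")  # fine-detail branch
    branch_b = np.convolve(x, k_large, mode="same")  # wider-context branch
    fused = branch_a + branch_b                      # simple fusion
    return fused + x                                 # residual skip connection

signal = np.arange(8, dtype=float)
out = multiscale_residual_block(signal,
                                k_small=np.ones(3) / 3.0,   # 3-tap average
                                k_large=np.ones(5) / 5.0)   # 5-tap average
print(out.shape)  # (8,)
```

The `mode="same"` padding keeps both branch outputs the same length as the input, which is what allows the element-wise fusion and the skip connection to work.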
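The "match" and "tight" intuition behind MDD in the second abstract can be sketched as a toy NumPy loss: an inter-domain term that compares the source and target feature means, plus an intra-class term that measures the spread of each source class around its centroid, so minimizing the sum both aligns the domains and tightens the classes. The function name and both terms are simplifying assumptions, not the paper's exact formulation:

```python
import numpy as np

def toy_mdd_loss(src, tgt, src_labels):
    """Toy sketch of the maximum density divergence (MDD) idea.

    Inter-domain term: squared distance between source and target feature
    means ("match"). Intra-class term: mean squared distance of each source
    sample to its class centroid; minimizing it increases class density
    ("tight"). Illustrative only, not the published loss.
    """
    inter = np.sum((src.mean(axis=0) - tgt.mean(axis=0)) ** 2)
    intra = 0.0
    for c in np.unique(src_labels):
        members = src[src_labels == c]
        centroid = members.mean(axis=0)
        intra += np.mean(np.sum((members - centroid) ** 2, axis=1))
    return inter + intra

rng = np.random.default_rng(0)
src = rng.normal(size=(20, 4))
labels = np.array([0] * 10 + [1] * 10)
print(toy_mdd_loss(src, src, labels) < toy_mdd_loss(src, src + 5.0, labels))  # True
```

Shifting the target features away from the source raises only the inter-domain term, so the loss grows, which is the behavior an adversarial training loop would then push back against.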