ScholarWorks Indianapolis

Browsing by Subject "domain adaptation"

Now showing 1 - 4 of 4
  • Item
    Cycle-consistent Conditional Adversarial Transfer Networks
    (ACM, 2019-10) Li, Jingjing; Chen, Erpeng; Ding, Zhengming; Zhu, Lei; Lu, Ke; Huang, Zi; Computer Information and Graphics Technology, School of Engineering and Technology
    Domain adaptation investigates the problem of cross-domain knowledge transfer, where the labeled source domain and the unlabeled target domain have distinct data distributions. Recently, adversarial training has been successfully applied to domain adaptation and has achieved state-of-the-art performance. However, a critical weakness remains in current adversarial models, arising from the equilibrium challenge of adversarial training: although most existing methods are able to confuse the domain discriminator, they cannot guarantee that the source and target domains are sufficiently similar. In this paper, we propose a novel approach named cycle-consistent conditional adversarial transfer networks (3CATN) to handle this issue. Our approach addresses domain alignment by leveraging adversarial training. Specifically, we condition the adversarial networks on the cross-covariance of learned features and classifier predictions to capture the multimodal structures of the data distributions. However, since the classifier predictions are not certain, conditioning strongly on them is risky when they are inaccurate. We therefore further propose that truly domain-invariant features should be translatable from one domain to the other. To this end, we introduce two feature-translation losses and one cycle-consistency loss into the conditional adversarial domain adaptation networks. Extensive experiments on both classical and large-scale datasets verify that our model outperforms previous state-of-the-art methods by significant margins.
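    The conditioning step can be pictured with a short, hypothetical PyTorch sketch: the domain discriminator sees the outer product of features and classifier predictions rather than the features alone, and a cycle-consistency term asks that features translated source-to-target and back are preserved. Module names, dimensions, and loss weighting below are illustrative assumptions, not the authors' released implementation.

        import torch
        import torch.nn as nn
        import torch.nn.functional as F

        class ConditionalDiscriminator(nn.Module):
            """Domain discriminator conditioned on features x predictions (illustrative)."""
            def __init__(self, feat_dim=256, num_classes=31):
                super().__init__()
                self.net = nn.Sequential(
                    nn.Linear(feat_dim * num_classes, 1024), nn.ReLU(),
                    nn.Linear(1024, 1),
                )

            def forward(self, features, predictions):
                # Outer product of softmax predictions and features captures the
                # multimodal (class-conditional) structure of the distribution.
                joint = torch.bmm(predictions.unsqueeze(2), features.unsqueeze(1))
                return self.net(joint.flatten(1))  # one domain logit per sample

        def cycle_consistency_loss(f_src, f_tgt, g_s2t, g_t2s):
            # Truly domain-invariant features should survive a round-trip translation
            # between the two domains.
            return (F.l1_loss(g_t2s(g_s2t(f_src)), f_src)
                    + F.l1_loss(g_s2t(g_t2s(f_tgt)), f_tgt))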
  • Item
    Domain Adaptation Tracker With Global and Local Searching
    (IEEE, 2018) Zhao, Fei; Zhang, Ting; Wu, Yi; Wang, Jinqiao; Tang, Ming; Medicine, School of Medicine
    Most convolutional neural network (CNN)-based trackers locate the target only within a local area, which makes it hard for them to recapture the target after drifting into the background. In addition, most state-of-the-art trackers spend a large amount of time training CNN-based classification networks online to adapt to the current domain. In this paper, to address these two problems, we propose a robust domain adaptation tracker based on CNNs. The proposed tracker contains three CNNs: a local location network (LL-Net), a global location network (GL-Net), and a domain adaptation classification network (DA-Net). For the former problem, if we conclude from the output of the LL-Net that the tracker has drifted into the background, we search for the target over a global area of the current frame using the GL-Net. For the latter problem, we propose a CNN-based DA-Net with a domain adaptation (DA) layer. By pre-training the DA-Net offline, it can adapt to the current domain by updating only the parameters of the DA layer in a single training iteration whenever online training is triggered, which makes the tracker run five times faster than MDNet with comparable tracking performance. Experimental results show that our tracker performs favorably against state-of-the-art trackers on three popular benchmarks.
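    The speed-up claim rests on how little of the network is re-trained online. A rough PyTorch sketch of that idea follows; the layer sizes and the single-step update are assumptions for illustration, not the paper's actual architecture.

        import torch
        import torch.nn as nn
        import torch.nn.functional as F

        class DANet(nn.Module):
            """Classification network with a small domain adaptation (DA) layer (illustrative)."""
            def __init__(self, feat_dim=512):
                super().__init__()
                self.shared = nn.Sequential(nn.Linear(feat_dim, 512), nn.ReLU())  # pre-trained offline
                self.da_layer = nn.Linear(512, 512)   # the only part updated online
                self.classifier = nn.Linear(512, 2)   # target vs. background

            def forward(self, x):
                return self.classifier(torch.relu(self.da_layer(self.shared(x))))

        def online_update(model, feats, labels, lr=1e-3):
            # Freeze everything except the DA layer, then take one gradient step.
            for p in model.parameters():
                p.requires_grad_(False)
            for p in model.da_layer.parameters():
                p.requires_grad_(True)
            opt = torch.optim.SGD(model.da_layer.parameters(), lr=lr)
            loss = F.cross_entropy(model(feats), labels)
            opt.zero_grad()
            loss.backward()
            opt.step()
            return loss.item()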
  • Item
    Graph Adaptive Knowledge Transfer for Unsupervised Domain Adaptation
    (Springer, 2018) Ding, Zhengming; Li, Sheng; Shao, Ming; Fu, Yun; Electrical and Computer Engineering, School of Engineering and Technology
    Unsupervised domain adaptation has attracted considerable attention, as it facilitates learning on an unlabeled target domain by borrowing knowledge from a well-established source domain. Recent practice in domain adaptation extracts effective features by incorporating pseudo labels for the target domain to better resolve cross-domain distribution divergences. However, existing approaches separate target label optimization and domain-invariant feature learning into different steps. To address this issue, we develop a novel Graph Adaptive Knowledge Transfer (GAKT) model to jointly optimize target labels and domain-free features in a unified framework. Specifically, semi-supervised knowledge adaptation and label propagation on the target data are coupled so that they benefit each other, and hence the marginal and conditional disparities across domains are better alleviated. Experimental evaluation on two cross-domain visual datasets demonstrates the effectiveness of our approach in facilitating the unlabeled target learning task, compared with state-of-the-art domain adaptation approaches.
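    The label-propagation side of that coupling can be sketched in a few lines of NumPy: spread source labels to target samples over a similarity graph built on shared features, then feed the resulting pseudo labels back into feature learning. This is a generic illustration under an assumed RBF affinity graph, not the GAKT formulation itself.

        import numpy as np

        def propagate_labels(src_feat, src_labels, tgt_feat, num_classes, alpha=0.99, iters=20):
            feats = np.vstack([src_feat, tgt_feat])
            # Row-normalized RBF affinity graph over source and target samples.
            d2 = ((feats[:, None, :] - feats[None, :, :]) ** 2).sum(-1)
            W = np.exp(-d2 / (2.0 * d2.mean()))
            np.fill_diagonal(W, 0.0)
            S = W / W.sum(axis=1, keepdims=True)
            # Clamp source rows to their one-hot labels; target rows start at zero.
            Y = np.zeros((len(feats), num_classes))
            Y[np.arange(len(src_labels)), src_labels] = 1.0
            F = Y.copy()
            for _ in range(iters):
                F = alpha * (S @ F) + (1.0 - alpha) * Y
            return F[len(src_feat):].argmax(axis=1)  # pseudo labels for target samples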
  • Item
    Joint Adversarial Domain Adaptation
    (ACM, 2019-10) Li, Shuang; Liu, Chi Harold; Xie, Binhui; Su, Limin; Ding, Zhengming; Huang, Gao; Computer Information and Graphics Technology, School of Engineering and Technology
    Domain adaptation aims to transfer the rich label knowledge in large amounts of source data to unlabeled target data, and has raised significant interest in multimedia analysis. Existing research mainly focuses on learning domain-wise transferable representations via statistical moment matching or adversarial adaptation techniques, while ignoring the class-wise mismatch across domains, which results in inaccurate distribution alignment. To address this issue, we propose a Joint Adversarial Domain Adaptation (JADA) approach to simultaneously align domain-wise and class-wise distributions across source and target in a unified adversarial learning process. Specifically, JADA solves two complementary minimax problems jointly. The feature generator aims not only to fool the well-trained domain discriminator in order to learn domain-invariant features, but also to minimize the disagreement between two distinct task-specific classifiers' predictions in order to synthesize target features near the support of the source domain in a class-wise manner. As a result, the learned transferable features are equipped with more discriminative structure and effectively avoid mode collapse. Additionally, JADA enables efficient end-to-end training via a simple back-propagation scheme. Extensive experiments on several real-world cross-domain benchmarks, including VisDA-2017, ImageCLEF, Office-31, and digits, verify that JADA gains remarkable improvements over other state-of-the-art deep domain adaptation approaches.
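    The generator's side of the two coupled minimax problems might look roughly like the following PyTorch fragment: a supervised source term, a domain-confusion term, and a class-wise discrepancy term between the two task classifiers. The module interfaces and equal loss weighting are assumptions made for illustration, not JADA's published code.

        import torch
        import torch.nn.functional as F

        def generator_objective(G, D, C1, C2, x_src, y_src, x_tgt):
            f_src, f_tgt = G(x_src), G(x_tgt)
            # Supervised loss on labeled source data through both task classifiers.
            cls = F.cross_entropy(C1(f_src), y_src) + F.cross_entropy(C2(f_src), y_src)
            # Domain-wise alignment: try to make target features look source-like to D.
            adv = F.binary_cross_entropy_with_logits(
                D(f_tgt), torch.ones(f_tgt.size(0), 1, device=f_tgt.device))
            # Class-wise alignment: shrink the two classifiers' disagreement on target data.
            p1 = F.softmax(C1(f_tgt), dim=1)
            p2 = F.softmax(C2(f_tgt), dim=1)
            discrepancy = (p1 - p2).abs().mean()
            return cls + adv + discrepancy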