ScholarWorks Indianapolis

Browsing by Author "Fu, Yun"

Now showing 1 - 8 of 8
    Deep Decision Tree Transfer Boosting
    (IEEE, 2019) Jiang, Shuhui; Mao, Haiyi; Ding, Zhengming; Fu, Yun; Computer Information and Graphics Technology, School of Engineering and Technology
    Instance transfer approaches consider source and target data together during training and borrow examples from the source domain to augment the training data when there is limited or no labeled data in the target domain. Among them, boosting-based transfer learning methods (e.g., TrAdaBoost) are the most widely used. When dealing with more complex data, we may consider more complex hypotheses (e.g., a decision tree with deeper layers). However, with hypotheses of fixed, high complexity, TrAdaBoost and its variants may face overfitting problems. Even worse, in the transfer learning scenario, a decision tree with deep layers may overfit the differently distributed data in the source domain. In this paper, we propose a new instance transfer learning method, Deep Decision Tree Transfer Boosting (DTrBoost), in which weights are learned and assigned to base learners by minimizing data-dependent learning bounds across both source and target domains in terms of Rademacher complexities. This guarantees that we can learn decision trees with deep layers without overfitting. Theoretical proofs and experimental results demonstrate the effectiveness of the proposed method.
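The instance-reweighting idea behind TrAdaBoost that the abstract builds on can be sketched as follows. This is a minimal illustration of one reweighting step, not the DTrBoost algorithm itself; the function name and the specific update rates are assumptions for exposition:

```python
import numpy as np

def tradaboost_weight_update(w_src, w_tgt, err_src, err_tgt, T):
    """One TrAdaBoost-style reweighting step (simplified sketch).

    err_src / err_tgt hold 1.0 where the current base learner misclassifies
    an instance, 0.0 otherwise. Misclassified source instances are
    down-weighted (they likely come from a different distribution), while
    misclassified target instances are up-weighted, as in classic AdaBoost.
    """
    eps = 1e-10
    # weighted target-domain error of the current base learner
    e_t = np.sum(w_tgt * err_tgt) / (np.sum(w_tgt) + eps)
    e_t = min(max(e_t, eps), 0.5 - eps)
    beta_t = e_t / (1.0 - e_t)                  # target update rate (< 1)
    beta = 1.0 / (1.0 + np.sqrt(2.0 * np.log(len(w_src)) / T))  # source rate
    w_src_new = w_src * beta ** err_src         # shrink misclassified source
    w_tgt_new = w_tgt * beta_t ** (-err_tgt)    # grow misclassified target
    return w_src_new, w_tgt_new
```

Running this over T rounds, source examples that keep disagreeing with the target concept gradually lose influence.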
    Discerning Feature Supported Encoder for Image Representation
    (IEEE, 2019) Wang, Shuyang; Ding, Zhengming; Fu, Yun; Computer Information and Graphics Technology, School of Engineering and Technology
    Inspired by the recent successes of deep architectures, the auto-encoder and its variants have been intensively explored for image clustering and classification by learning effective feature representations. The conventional auto-encoder attempts to uncover the data's intrinsic structure by constraining the output to be as identical to the input as possible, so that the hidden representation can faithfully reconstruct the input data. One issue that arises, however, is that such representations might not be optimal for specific tasks, e.g., image classification and clustering, since they compress not only the discriminative information but also redundant information and even noise within the data. In other words, not all hidden units benefit the specific task; some units mainly represent task-irrelevant patterns. In this paper, a general framework named the discerning feature supported encoder (DFSE) is proposed, which integrates the auto-encoder and feature selection into a unified model. Specifically, feature selection is applied to the learned hidden-layer features to separate the task-relevant ones from the task-irrelevant ones. Meanwhile, the selected task-relevant hidden units can in turn encode more discriminative information. In this way, the proposed algorithm generates a more effective image representation by distinguishing task-relevant features from task-irrelevant ones. Experiments in two scenarios, image classification and clustering, are conducted to evaluate the algorithm. Results on several benchmarks demonstrate that our method achieves better performance than state-of-the-art approaches in both scenarios.
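The core idea of screening hidden units for task relevance can be illustrated with a stand-alone selection step. This is a stand-in for DFSE's joint optimization, not the paper's method: it uses absolute correlation with a label vector as the relevance score, which is an assumption chosen only for illustration:

```python
import numpy as np

def select_task_relevant_units(H, y, k):
    """Rank hidden-unit activations H (n_samples x n_units) by a simple
    relevance score and keep the top-k units.

    Score = |Pearson correlation| between each unit's activation and the
    label vector y (an illustrative criterion, not DFSE's)."""
    Hc = H - H.mean(axis=0)
    yc = y - y.mean()
    denom = Hc.std(axis=0) * yc.std() + 1e-12
    score = np.abs((Hc * yc[:, None]).mean(axis=0)) / denom
    keep = np.argsort(score)[::-1][:k]   # indices of top-k relevant units
    return np.sort(keep)
```

In the full model the selection would feed back into the encoder so that the retained units become more discriminative, rather than being a one-shot post-hoc filter.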
    Dual Low-Rank Decompositions for Robust Cross-View Learning
    (IEEE, 2018) Ding, Zhengming; Fu, Yun; Computer Information and Graphics Technology, School of Engineering and Technology
    Cross-view data are increasingly common, as different viewpoints and sensors attempt to richly represent data in various views. However, cross-view data from different views exhibit a significant divergence: cross-view samples from the same category have lower similarity than samples from different categories within the same view. Considering that each cross-view sample is drawn from two intertwined manifold structures, i.e., a class manifold and a view manifold, we propose in this paper a robust cross-view learning framework that seeks a robust view-invariant low-dimensional space. Specifically, we develop a dual low-rank decomposition technique to unweave these intertwined manifold structures from one another in the learned space. Moreover, we design two discriminative graphs to constrain the dual low-rank decompositions by fully exploiting prior knowledge. Thus, the proposed algorithm captures more within-class knowledge and mitigates the view divergence, yielding a more effective view-invariant feature extractor. Furthermore, the proposed method is flexible enough to address the challenging cross-view learning scenario in which the view information of the training data is available while that of the evaluation data is unknown. Experiments on face and object benchmarks demonstrate the effectiveness of the designed model over state-of-the-art algorithms.
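Low-rank decompositions of the kind described above are typically solved with singular value thresholding as the basic proximal step. The sketch below shows only that building block, under the assumption that the reader wants a concrete handle on "low-rank decomposition"; it is not the paper's dual-decomposition solver:

```python
import numpy as np

def svt(X, tau):
    """Singular value thresholding: the proximal operator of the nuclear
    norm, the standard building block inside low-rank decomposition
    solvers. Shrinks every singular value of X by tau (flooring at 0),
    which drives X toward a low-rank matrix."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    s_thr = np.maximum(s - tau, 0.0)    # soft-threshold singular values
    return U @ np.diag(s_thr) @ Vt
```

A dual decomposition would alternate steps like this one on two low-rank components (class and view), each constrained by its own discriminative graph.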
    Graph Adaptive Knowledge Transfer for Unsupervised Domain Adaptation
    (Springer, 2018) Ding, Zhengming; Li, Sheng; Shao, Ming; Fu, Yun; Electrical and Computer Engineering, School of Engineering and Technology
    Unsupervised domain adaptation has attracted considerable attention, as it facilitates learning on an unlabeled target domain by borrowing knowledge from a well-established source domain. Recent practice in domain adaptation extracts effective features by incorporating pseudo labels for the target domain to better address cross-domain distribution divergence. However, existing approaches treat target label optimization and domain-invariant feature learning as separate steps. To address this issue, we develop a novel Graph Adaptive Knowledge Transfer (GAKT) model to jointly optimize target labels and domain-free features in a unified framework. Specifically, semi-supervised knowledge adaptation and label propagation on target data are coupled so as to benefit each other, and hence the marginal and conditional disparities across domains are better alleviated. Experimental evaluation on two cross-domain visual datasets demonstrates the effectiveness of the designed approach in facilitating unlabeled target-task learning, compared to state-of-the-art domain adaptation approaches.
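The label-propagation half of the coupled scheme can be illustrated in isolation. This is the generic textbook iteration on a graph, not GAKT's coupled objective; the affinity matrix, clamping scheme, and parameter values are assumptions:

```python
import numpy as np

def propagate_labels(W, Y0, labeled, alpha=0.9, n_iter=50):
    """Graph-based label propagation (generic version).

    W : (n, n) symmetric affinity matrix over all samples.
    Y0: (n, c) one-hot labels for labeled nodes, zeros elsewhere.
    labeled: indices whose labels are clamped each iteration.
    Returns the predicted class index for every node."""
    D = W.sum(axis=1)
    S = W / np.sqrt(np.outer(D, D))          # symmetric normalization
    F = Y0.copy()
    for _ in range(n_iter):
        F = alpha * S @ F + (1 - alpha) * Y0  # diffuse labels over the graph
        F[labeled] = Y0[labeled]              # clamp the known labels
    return F.argmax(axis=1)
```

In GAKT this propagation would be interleaved with feature adaptation, so the graph itself improves as the features become more domain-invariant.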
    Marginalized Multiview Ensemble Clustering
    (IEEE, 2019-04) Tao, Zhiqiang; Liu, Hongfu; Li, Sheng; Ding, Zhengming; Fu, Yun; Computer Information and Graphics Technology, School of Engineering and Technology
    Multiview clustering (MVC), which aims to explore the underlying cluster structure shared by multiview data, has drawn increasing research effort in recent years. To exploit the complementary information among multiple views, existing methods mainly learn a common latent subspace or develop a certain loss across different views, while ignoring higher-level information such as the basic partitions (BPs) generated by single-view clustering algorithms. In light of this, we propose a novel marginalized multiview ensemble clustering (M²VEC) method. Specifically, we solve MVC in an ensemble clustering (EC) fashion, generating BPs for each view individually and seeking a consensus partition. By this means, we naturally leverage the complementary information of multiview data within the same partition space. To boost the robustness of our approach, a marginalized denoising process is adopted to mimic data corruption and noise, providing robust partition-level representations for each view by training a single-layer autoencoder. A low-rank and sparse decomposition is seamlessly incorporated into the denoising process to explicitly capture the consistency information while compensating for the distinctness between heterogeneous features. Spectral consensus graph partitioning is also incorporated, making M²VEC a unified optimization framework. Moreover, a multilayer M²VEC is delivered in a stacked fashion to encapsulate nonlinearity into partition-level representations for handling complex data. Experimental results on eight real-world datasets show the efficacy of our approach compared with several state-of-the-art multiview and EC methods. We also show that our method performs well with partial multiview data.
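The "partition space" that M²VEC operates in is easiest to see through the simplest ensemble-clustering consensus: a co-association matrix built from several basic partitions. The sketch below uses connected components over thresholded co-association as the consensus step, which is a deliberately simple stand-in for the paper's spectral consensus graph partitioning:

```python
import numpy as np

def consensus_labels(partitions, threshold=0.5):
    """Ensemble-clustering consensus via co-association (simplified).

    partitions: list of label arrays, one basic partition (BP) per view.
    C[i, j] = fraction of BPs that put items i and j in the same cluster;
    the consensus clusters are the connected components of C > threshold."""
    n = len(partitions[0])
    C = np.zeros((n, n))
    for p in partitions:
        p = np.asarray(p)
        C += (p[:, None] == p[None, :])     # 1 where the BP agrees
    C /= len(partitions)
    labels = -np.ones(n, dtype=int)
    cur = 0
    for i in range(n):                      # flood-fill connected components
        if labels[i] >= 0:
            continue
        stack, labels[i] = [i], cur
        while stack:
            j = stack.pop()
            for m in range(n):
                if labels[m] < 0 and C[j, m] > threshold:
                    labels[m] = cur
                    stack.append(m)
        cur += 1
    return labels
```

Note that the BPs' own label values never need to align across views; only their grouping structure matters, which is exactly why consensus in partition space sidesteps heterogeneous features.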
    Robust Discriminative Metric Learning for Image Representation
    (IEEE, 2018-11) Ding, Zhengming; Shao, Ming; Hwang, Wonjun; Suh, Sungjoo; Han, Jae-Joon; Choi, Changkyu; Fu, Yun; Computer Information and Graphics Technology, School of Engineering and Technology
    Metric learning has attracted significant attention in the past decades, owing to appealing advances in various real-world applications such as person re-identification and face recognition. Traditional supervised metric learning seeks a discriminative metric that minimizes the pairwise distance between within-class data samples while maximizing the pairwise distance between samples from different classes. However, building a robust and discriminative metric remains a challenge, especially for corrupted data in real-world applications. In this paper, we propose a Robust Discriminative Metric Learning algorithm (RDML) based on fast low-rank representation and a denoising strategy. To be specific, the metric learning problem is guided by a discriminative regularization that incorporates pairwise or class-wise information. Moreover, low-rank basis learning is jointly optimized with the metric to better uncover the global data structure and remove noise. Furthermore, fast low-rank representation is implemented to mitigate the computational burden and ensure scalability on large-scale datasets. Finally, we evaluate the learned metric on several challenging tasks, e.g., face recognition/verification, object recognition, and image clustering. The experimental results verify the effectiveness of the proposed algorithm in comparison with many metric learning algorithms, including deep learning ones.
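The object a supervised metric learner optimizes is a Mahalanobis-type metric M = LᵀL; once L is learned, distances are just Euclidean distances in the transformed space. A minimal sketch of that evaluation step (L here is an arbitrary linear map supplied by hand, whereas RDML learns it jointly with a low-rank basis):

```python
import numpy as np

def mahalanobis_sq(x, y, L):
    """Squared distance under the metric M = L.T @ L:
    d(x, y)^2 = (x - y)^T M (x - y) = ||L (x - y)||^2."""
    d = L @ (x - y)
    return float(d @ d)
```

With L equal to the identity this reduces to squared Euclidean distance; a learned L stretches directions that separate classes and collapses noisy ones.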
    Structure-Preserved Unsupervised Domain Adaptation
    (IEEE, 2019-04) Liu, Hongfu; Shao, Ming; Ding, Zhengming; Fu, Yun; Computer Information and Graphics Technology, School of Engineering and Technology
    Domain adaptation has been a primary approach to addressing the lack of labels in many data mining tasks. Although considerable effort has been devoted to domain adaptation with promising results, most existing work learns a classifier on a source domain and then predicts labels for the target data, so that only the instances near the decision boundary determine the hyperplane and the structure of the data as a whole is ignored. Moreover, little work has addressed multi-source domain adaptation. To that end, we develop a novel unsupervised domain adaptation framework that preserves the whole structure of the source domains to guide target structure learning in a semi-supervised clustering fashion. To our knowledge, this is the first time the domain adaptation problem has been reformulated as a semi-supervised clustering problem with target labels as missing values. Furthermore, by introducing an augmented matrix, a non-trivial solution is designed that maps exactly onto a K-means-like optimization problem with a modified distance function and centroid update rule, solvable in an efficient way. Extensive experiments on several widely used databases show the substantial improvements of the proposed approach over state-of-the-art methods.
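The "semi-supervised clustering with target labels as missing values" view can be sketched with a K-means-style loop in which centroids are anchored by the labeled source structure. This is a rough analogue for intuition only; the function name is an assumption and the paper's augmented-matrix formulation and modified update rule are not reproduced here:

```python
import numpy as np

def structure_preserving_kmeans(X_src, y_src, X_tgt, n_iter=20):
    """K-means-like assignment of target points, with centroids anchored
    by labeled source data (illustrative analogue of the clustering view).

    Centroids start at the source class means (the 'preserved structure');
    target points are treated as samples with missing labels and are
    assigned and re-averaged jointly with the source points."""
    k = int(y_src.max()) + 1
    cent = np.stack([X_src[y_src == c].mean(axis=0) for c in range(k)])
    y_tgt = np.zeros(len(X_tgt), dtype=int)
    for _ in range(n_iter):
        d = ((X_tgt[:, None, :] - cent[None, :, :]) ** 2).sum(axis=2)
        y_tgt = d.argmin(axis=1)            # fill in the missing labels
        for c in range(k):                  # centroids use both domains
            pts = np.vstack([X_src[y_src == c], X_tgt[y_tgt == c]])
            cent[c] = pts.mean(axis=0)
    return y_tgt
```

Because every source point, not just boundary instances, pulls on its class centroid, the whole source structure shapes the target assignment.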
    Toward Resolution-Invariant Person Reidentification via Projective Dictionary Learning
    (IEEE, 2019-06) Li, Kai; Ding, Zhengming; Li, Sheng; Fu, Yun; Computer Information and Graphics Technology, School of Engineering and Technology
    Person reidentification (ReID) has recently been widely investigated for its vital role in surveillance and forensics applications. This paper addresses the low-resolution (LR) person ReID problem, which is of great practical significance because pedestrians are often captured at low resolutions by surveillance cameras. Existing methods cope with this problem via complicated and time-consuming strategies that make them less favorable in practice, and their performance remains far from satisfactory. Instead, we solve this problem by developing a discriminative semicoupled projective dictionary learning (DSPDL) model, which adopts the efficient projective dictionary learning strategy and jointly learns a pair of dictionaries and a mapping function to model the correspondence of the cross-view data. A parameterless cross-view graph regularizer incorporating both positive- and negative-pair information is designed to enhance the discriminability of the dictionaries. Another weakness of existing approaches is that they are only applicable when the cross-camera image sets have a globally uniform resolution gap; this undermines their practicality, because in practice the resolution gap between cross-camera images often varies from person to person. To overcome this hurdle, we extend the DSPDL model to the varying-resolution-gap scenario, essentially by learning multiple pairs of dictionaries and multiple mapping functions. A novel technique is proposed to rerank and fuse the results obtained from all dictionary pairs. Experiments on five public datasets show that the proposed method achieves superior performance to state-of-the-art ones.
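The efficiency trick behind projective dictionary learning is that the expensive sparse-coding step is replaced by a learned linear projection: codes are A = P X, and reconstruction is D (P X). The closed-form PCA-like pair below is an illustrative special case of that idea, not the DSPDL model:

```python
import numpy as np

def fit_projective_pair(X, k):
    """Fit a synthesis dictionary D and analysis projection P such that
    X ~= D @ (P @ X), with codes obtained by projection instead of
    iterative sparse coding.

    Illustrative special case: D = top-k left singular vectors of X and
    P = D.T, which reconstructs X exactly when rank(X) <= k."""
    U, _, _ = np.linalg.svd(X, full_matrices=False)
    D = U[:, :k]    # synthesis dictionary (columns = basis atoms)
    P = D.T         # analysis projection: codes A = P @ X
    return D, P
```

DSPDL additionally couples two such pairs across camera views with a learned mapping, so that LR probe codes can be matched against high-resolution gallery codes.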
  • Copyright © 2025 The Trustees of Indiana University