Browsing by Author "Ding, Zhengming"
Now showing 1 - 10 of 24
Item: Adversarial Dual Distinct Classifiers for Unsupervised Domain Adaptation (IEEE, 2021)
Authors: Jing, Taotao; Ding, Zhengming
Department: Electrical and Computer Engineering, School of Engineering and Technology
Abstract: Unsupervised domain adaptation (UDA) attempts to recognize unlabeled target samples by building a learning model from a differently distributed, labeled source domain. Conventional UDA concentrates on extracting domain-invariant features through deep adversarial networks. However, most of these methods seek to match the feature distributions of the two domains without considering the task-specific decision boundaries across the various classes. In this paper, we propose a novel Adversarial Dual Distinct Classifiers Network (AD2CN) to align the source and target data distributions while simultaneously matching the task-specific category boundaries. Specifically, a domain-invariant feature generator embeds the source and target data into a latent common space under the guidance of discriminative cross-domain alignment. Moreover, we design two classifiers with different structures to identify the unlabeled target samples under the supervision of the labeled source domain data. Such dual distinct classifiers with diverse architectures capture complementary knowledge of the target data structure from different perspectives. Extensive experimental results on several cross-domain visual benchmarks demonstrate the model's effectiveness in comparison with other state-of-the-art UDA methods.

Item: Characterization of Proteoform Post-Translational Modifications by Top-Down and Bottom-Up Mass Spectrometry in Conjunction with Annotations (American Chemical Society, 2023)
Authors: Chen, Wenrong; Ding, Zhengming; Zang, Yong; Liu, Xiaowen
Department: BioHealth Informatics, School of Informatics and Computing
Abstract: Many proteoforms can be produced from a gene due to genetic mutations, alternative splicing, post-translational modifications (PTMs), and other variations. PTMs in proteoforms play critical roles in cell signaling, protein degradation, and other biological processes. Mass spectrometry (MS) is the primary technique for investigating PTMs in proteoforms, and two alternative MS approaches, top-down and bottom-up, have complementary strengths. The combination of the two approaches has the potential to increase the sensitivity and accuracy of PTM identification and characterization. In addition, protein and PTM knowledge bases, such as UniProt, provide valuable information for PTM characterization and verification. Here, we present a software pipeline, PTM-TBA (PTM characterization by Top-down and Bottom-up MS and Annotations), for identifying and localizing PTMs in proteoforms by integrating top-down and bottom-up MS as well as PTM annotations. We assessed PTM-TBA using a technical triplicate of bottom-up and top-down MS data of SW480 cells. On average, database search of the top-down MS data identified 2000 mass shifts, 814.5 (40.7%) of which were matched to 11 common PTMs and 423 of which were localized. Of the mass shifts identified by top-down MS, PTM-TBA verified 435 mass shifts using the bottom-up MS data and UniProt annotations.

Item: Cycle-consistent Conditional Adversarial Transfer Networks (ACM, 2019-10)
Authors: Li, Jingjing; Chen, Erpeng; Ding, Zhengming; Zhu, Lei; Lu, Ke; Huang, Zi
Department: Computer Information and Graphics Technology, School of Engineering and Technology
Abstract: Domain adaptation investigates the problem of cross-domain knowledge transfer where the labeled source domain and the unlabeled target domain have distinct data distributions. Recently, adversarial training has been successfully applied to domain adaptation and has achieved state-of-the-art performance. However, current adversarial models still have a critical weakness arising from the equilibrium challenge of adversarial training: although most existing methods can confuse the domain discriminator, they cannot guarantee that the source and target domains are sufficiently similar. In this paper, we propose a novel approach named Cycle-consistent Conditional Adversarial Transfer Networks (3CATN) to handle this issue. Our approach handles domain alignment through adversarial training. Specifically, we condition the adversarial networks on the cross-covariance of the learned features and the classifier predictions to capture the multimodal structure of the data distributions. However, since classifier predictions are uncertain, conditioning strongly on them is risky when they are inaccurate. We therefore further propose that truly domain-invariant features should be translatable from one domain to the other. To this end, we introduce two feature translation losses and one cycle-consistency loss into the conditional adversarial domain adaptation networks. Extensive experiments on both classical and large-scale datasets verify that our model outperforms previous state-of-the-art methods by significant margins.
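The 3CATN abstract above conditions the adversarial networks on the cross-covariance of learned features and classifier predictions. As a rough, hypothetical illustration of that conditioning step only (not the authors' implementation, and omitting the feature translation and cycle-consistency losses), the sketch below forms the outer product of each feature vector with its softmax prediction and flattens it into the input a domain discriminator would receive; all array shapes and names are assumptions.

```python
import numpy as np

def softmax(logits):
    """Numerically stable softmax over the last axis."""
    z = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def conditioned_input(features, class_logits):
    """Outer product of features and predicted class probabilities.

    features:     (batch, d) learned feature vectors
    class_logits: (batch, c) classifier outputs
    Returns a (batch, d * c) array that a domain discriminator could take as
    input, so domain alignment is conditioned on the (uncertain) class
    predictions rather than on the features alone.
    """
    probs = softmax(class_logits)
    outer = np.einsum('bd,bc->bdc', features, probs)
    return outer.reshape(features.shape[0], -1)

# Toy usage with random stand-in features and logits.
rng = np.random.default_rng(0)
feats = rng.normal(size=(4, 8))    # 4 samples, 8-dim features
logits = rng.normal(size=(4, 3))   # 3 classes
print(conditioned_input(feats, logits).shape)  # (4, 24)
```

In the full method this conditioned input would be fed to a discriminator trained adversarially against the feature generator, with the translation and cycle-consistency losses guarding against conditioning on inaccurate predictions.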
Item: Deep Decision Tree Transfer Boosting (IEEE, 2019)
Authors: Jiang, Shuhui; Mao, Haiyi; Ding, Zhengming; Fu, Yun
Department: Computer Information and Graphics Technology, School of Engineering and Technology
Abstract: Instance transfer approaches consider source and target data together during the training process and borrow examples from the source domain to augment the training data when labels in the target domain are limited or absent. Among them, boosting-based transfer learning methods (e.g., TrAdaBoost) are the most widely used. When dealing with more complex data, we may consider more complex hypotheses (e.g., decision trees with deeper layers). However, with hypotheses of fixed, high complexity, TrAdaBoost and its variants may suffer from overfitting. Even worse, in the transfer learning scenario, a deep decision tree may overfit the differently distributed data in the source domain. In this paper, we propose a new instance transfer learning method, Deep Decision Tree Transfer Boosting (DTrBoost), in which the weights assigned to the base learners are learned by minimizing data-dependent learning bounds across both source and target domains in terms of Rademacher complexities. This guarantees that we can learn decision trees with deep layers without overfitting. The theoretical analysis and experimental results demonstrate the effectiveness of the proposed method.

Item: Development of Automated Incident Detection System Using Existing ATMS CCTV (Purdue University, 2019)
Authors: Chien, Stanley; Chen, Yaobin; Yi, Qiang; Ding, Zhengming
Department: Electrical and Computer Engineering, School of Engineering and Technology
Abstract: The Indiana Department of Transportation (INDOT) has over 300 digital cameras along highways in populated areas of Indiana. These cameras are used to monitor traffic conditions around the clock, all year round. Currently, the videos from these cameras are observed by human operators. The main objective of this research, conducted by a collaborative team from the Transportation Active Safety Institute (TASI) at Indiana University-Purdue University Indianapolis (IUPUI) and the INDOT Traffic Management Center (TMC), is to develop an automatic real-time system that monitors traffic conditions using the INDOT CCTV video feeds.
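The Deep Decision Tree Transfer Boosting abstract above builds on boosting-based instance transfer in the TrAdaBoost family. The following is a minimal, generic TrAdaBoost-style sketch with shallow decision trees, shown only to illustrate how source-instance weights shrink when source examples are misclassified; it is not DTrBoost itself, which instead learns the learner weights from Rademacher-complexity-based bounds. The tree depth, number of rounds, and binary-label assumption are all arbitrary choices for illustration.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def tradaboost_sketch(Xs, ys, Xt, yt, rounds=10, max_depth=3):
    """Generic TrAdaBoost-style instance transfer with decision trees.

    Xs, ys: labeled source data (different distribution)
    Xt, yt: small amount of labeled target data
    Source weights shrink when source points are misclassified, so the
    ensemble gradually focuses on source examples that resemble target data.
    """
    ns, nt = len(Xs), len(Xt)
    X = np.vstack([Xs, Xt])
    y = np.concatenate([ys, yt])
    w = np.ones(ns + nt) / (ns + nt)
    beta_src = 1.0 / (1.0 + np.sqrt(2.0 * np.log(ns) / rounds))
    learners = []
    for _ in range(rounds):
        clf = DecisionTreeClassifier(max_depth=max_depth)
        clf.fit(X, y, sample_weight=w)
        err = clf.predict(X) != y
        # Weighted error measured on the target portion only.
        eps = np.sum(w[ns:][err[ns:]]) / np.sum(w[ns:])
        eps = np.clip(eps, 1e-6, 0.499)
        beta_tgt = eps / (1.0 - eps)
        # Down-weight misclassified source points, up-weight hard target points.
        w[:ns][err[:ns]] *= beta_src
        w[ns:][err[ns:]] *= 1.0 / beta_tgt
        w /= w.sum()
        learners.append((clf, np.log(1.0 / beta_tgt)))
    return learners

def predict(learners, X):
    """Weighted vote of the boosted trees (binary labels 0/1 assumed)."""
    votes = sum(a * (clf.predict(X) == 1) for clf, a in learners)
    total = sum(a for _, a in learners)
    return (votes >= total / 2).astype(int)
```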
Item: Discerning Feature Supported Encoder for Image Representation (IEEE, 2019)
Authors: Wang, Shuyang; Ding, Zhengming; Fu, Yun
Department: Computer Information and Graphics Technology, School of Engineering and Technology
Abstract: Inspired by the recent successes of deep architectures, the auto-encoder and its variants have been intensively explored for image clustering and classification tasks through learning effective feature representations. The conventional auto-encoder attempts to uncover the data's intrinsic structure by constraining the output to be as close to the input as possible, so that the hidden representation can faithfully reconstruct the input data. One issue that arises, however, is that such representations might not be optimized for specific tasks, e.g., image classification and clustering, since they compress not only the discriminative information but also a great deal of redundancy and even noise in the data. In other words, not all hidden units benefit the specific task, while some units mainly represent task-irrelevant patterns. In this paper, a general framework named Discerning Feature Supported Encoder (DFSE) is proposed, which integrates the auto-encoder and feature selection into a unified model. Specifically, feature selection is applied to the learned hidden-layer features to separate the task-relevant units from the task-irrelevant ones. In turn, the encoder can concentrate its discriminative capacity on the selected task-relevant units. As a result, the proposed algorithm generates more effective image representations by distinguishing task-relevant features from task-irrelevant ones. Experiments on image classification and clustering are conducted to evaluate the algorithm. Results on several benchmarks demonstrate that the method achieves better performance than state-of-the-art approaches in both scenarios.

Item: Dual Low-Rank Decompositions for Robust Cross-View Learning (IEEE, 2018)
Authors: Ding, Zhengming; Fu, Yun
Department: Computer Information and Graphics Technology, School of Engineering and Technology
Abstract: Cross-view data are now ubiquitous, as different viewpoints or sensors attempt to richly represent data in various views. However, cross-view data from different views present a significant divergence: cross-view data from the same category have lower similarity than data from different categories within the same view. Considering that each cross-view sample is drawn from two intertwined manifold structures, i.e., the class manifold and the view manifold, in this paper we propose a robust cross-view learning framework to seek a robust, view-invariant low-dimensional space. Specifically, we develop a dual low-rank decomposition technique to unweave these intertwined manifold structures from one another in the learned space. Moreover, we design two discriminative graphs to constrain the dual low-rank decompositions by fully exploiting prior knowledge. Thus, the proposed algorithm captures more within-class knowledge and mitigates the view divergence to obtain a more effective view-invariant feature extractor. Furthermore, the proposed method is flexible enough to address the challenging cross-view learning scenario in which only the view information of the training data is available, while the view information of the evaluation data is unknown. Experiments on face and object benchmarks demonstrate the effective performance of the designed model over state-of-the-art algorithms.
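The dual low-rank decomposition abstract above separates intertwined class and view structures through low-rank modeling. Purely as a hypothetical illustration of the basic building block such methods typically rely on, the snippet below applies singular value thresholding, the proximal operator of the nuclear norm commonly used when optimizing low-rank terms; it is not the paper's dual decomposition or its discriminative graph constraints, and the matrix sizes and threshold are arbitrary.

```python
import numpy as np

def svd_threshold(M, tau):
    """Singular value thresholding: proximal operator of tau * nuclear norm.

    Shrinks every singular value of M by tau (clipping at zero), the standard
    step used inside many low-rank decomposition solvers.
    """
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    s_shrunk = np.maximum(s - tau, 0.0)
    return (U * s_shrunk) @ Vt

# Toy usage: a noisy rank-2 matrix; thresholding above the noise level
# typically recovers an (approximately) rank-2 matrix.
rng = np.random.default_rng(0)
low_rank = rng.normal(size=(50, 2)) @ rng.normal(size=(2, 30))
noisy = low_rank + 0.1 * rng.normal(size=(50, 30))
recovered = svd_threshold(noisy, tau=2.0)
print(np.linalg.matrix_rank(recovered, tol=1e-6))
```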
Item: Graph Adaptive Knowledge Transfer for Unsupervised Domain Adaptation (Springer, 2018)
Authors: Ding, Zhengming; Li, Sheng; Shao, Ming; Fu, Yun
Department: Electrical and Computer Engineering, School of Engineering and Technology
Abstract: Unsupervised domain adaptation has attracted considerable attention, as it facilitates learning on an unlabeled target domain by borrowing knowledge from a well-established source domain. Recent domain adaptation practice extracts effective features by incorporating pseudo labels for the target domain to better resolve cross-domain distribution divergences. However, existing approaches separate target label optimization and domain-invariant feature learning into different steps. To address this issue, we develop a novel Graph Adaptive Knowledge Transfer (GAKT) model to jointly optimize target labels and domain-free features in a unified framework. Specifically, semi-supervised knowledge adaptation and label propagation on the target data are coupled so that they benefit each other, and hence the marginal and conditional disparities across domains are better alleviated. Experimental evaluation on two cross-domain visual datasets demonstrates the effectiveness of the designed approach in facilitating the unlabeled target learning task, compared with state-of-the-art domain adaptation approaches.

Item: Joint Adversarial Domain Adaptation (ACM, 2019-10)
Authors: Li, Shuang; Liu, Chi Harold; Xie, Binhui; Su, Limin; Ding, Zhengming; Huang, Gao
Department: Computer Information and Graphics Technology, School of Engineering and Technology
Abstract: Domain adaptation aims to transfer rich label knowledge from large amounts of source data to unlabeled target data, and it has raised significant interest in multimedia analysis. Existing research mainly focuses on learning domain-wise transferable representations via statistical moment matching or adversarial adaptation techniques, while ignoring the class-wise mismatch across domains, which results in inaccurate distribution alignment. To address this issue, we propose a Joint Adversarial Domain Adaptation (JADA) approach to simultaneously align domain-wise and class-wise distributions across source and target in a unified adversarial learning process. Specifically, JADA solves two complementary minimax problems jointly. The feature generator aims not only to fool the well-trained domain discriminator to learn domain-invariant features, but also to minimize the disagreement between two distinct task-specific classifiers' predictions, so as to synthesize target features near the source support in a class-wise manner. As a result, the learned transferable features are equipped with more discriminative structure and effectively avoid mode collapse. Additionally, JADA enables efficient end-to-end training via a simple back-propagation scheme. Extensive experiments on several real-world cross-domain benchmarks, including VisDA-2017, ImageCLEF, Office-31, and digits, verify that JADA achieves remarkable improvements over other state-of-the-art deep domain adaptation approaches.
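The GAKT abstract above couples label propagation on the target data with domain-free feature learning. The snippet below is only a bare-bones, generic graph label propagation step (normalized affinity graph, iterative spreading of known labels), not GAKT's joint optimization; the RBF affinity, propagation coefficient, and iteration count are all assumptions made for the sketch.

```python
import numpy as np

def label_propagation(X, y_init, labeled_mask, alpha=0.9, iters=50, gamma=1.0):
    """Spread labels over a symmetrically normalized affinity graph.

    X:            (n, d) features of all samples (e.g., source + target)
    y_init:       (n, c) one-hot labels, zero rows for unlabeled samples
    labeled_mask: (n,) boolean, True where the label is known
    Returns soft label scores for every sample.
    """
    # RBF affinity matrix and normalization S = D^-1/2 W D^-1/2.
    sq_dists = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    W = np.exp(-gamma * sq_dists)
    np.fill_diagonal(W, 0.0)
    d_inv_sqrt = 1.0 / np.sqrt(W.sum(axis=1) + 1e-12)
    S = W * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]

    F = y_init.astype(float).copy()
    for _ in range(iters):
        F = alpha * (S @ F) + (1.0 - alpha) * y_init
        F[labeled_mask] = y_init[labeled_mask]  # clamp known labels
    return F
```

In a joint framework like the one described above, a propagation step of this kind would alternate with (or be coupled to) updates of the learned features, rather than running once on fixed features as here.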
Item: Knowledge Reused Outlier Detection (IEEE, 2019-03)
Authors: Yu, Weiren; Ding, Zhengming; Hu, Chunming; Liu, Hongfu
Department: Computer and Information Science, School of Science
Abstract: Tremendous effort has been invested in unsupervised outlier detection, which is conducted on unlabeled data sets under abnormality assumptions. With abundant related labeled data available as auxiliary information, we consider transferring knowledge from the labeled source data to facilitate unsupervised outlier detection on the target data set. To make full use of the source knowledge, the source and target data are pooled for joint clustering and outlier detection, with the source data's cluster structure used as a constraint. To achieve this, a categorical utility function is employed to regularize the partition of the target data so that it is consistent with the source data labels. With an augmented matrix, the problem is solved by a K-means-based method with a rigorous mathematical formulation and a theoretical convergence guarantee. We use four real-world data sets and eight outlier detection methods of different kinds for extensive experiments and comparison. The results demonstrate the effectiveness and significant improvements of the proposed method in terms of outlier detection and cluster validity metrics. Moreover, a parameter analysis is provided as a practical guide, and a noisy-source-label analysis shows that the proposed method can handle real applications where source labels may be noisy.
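The knowledge reused outlier detection abstract above solves a constrained, K-means-based joint clustering over pooled source and target data. As a loose illustration of only the final scoring idea, the sketch below clusters pooled data with ordinary K-means and flags the points farthest from their assigned centroid as outliers; it ignores the paper's categorical utility constraint and augmented-matrix formulation, and the cluster count and contamination rate are assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans

def distance_based_outliers(X, n_clusters=3, contamination=0.05, seed=0):
    """Flag points far from their K-means centroid as outliers.

    Returns a boolean mask (True = flagged outlier) and per-point distance
    scores. `contamination` is the assumed outlier fraction.
    """
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=seed).fit(X)
    centers = km.cluster_centers_[km.labels_]
    scores = np.linalg.norm(X - centers, axis=1)
    threshold = np.quantile(scores, 1.0 - contamination)
    return scores > threshold, scores

# Toy usage: three tight blobs plus a few far-away points.
rng = np.random.default_rng(0)
blobs = np.vstack([rng.normal(loc=c, scale=0.3, size=(50, 2)) for c in (0, 5, 10)])
stray = rng.uniform(-10, 20, size=(5, 2))
mask, _ = distance_based_outliers(np.vstack([blobs, stray]))
print(mask.sum(), "points flagged")
```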