Abstract: Among conventional pattern recognition training methods, supervised learning with abundant labels achieves high recognition accuracy. In practice, however, samples often lack labels, or existing labeled samples cannot be used directly because the target samples follow a different distribution. To address these issues, unsupervised domain adaptation recognizes samples of an unlabeled target domain by exploiting data from a source domain that is fully labeled but differently distributed. Considering the situation in which the target recognition samples and the source training samples follow different distributions, an optimal representation learning method for unsupervised domain adaptation was proposed. Two representation matrices were introduced in a common subspace of the domain samples, and optimization constraints were imposed on them so that the source domain and the target domain optimally represent each other, thereby reducing the distributional divergence between the domains. In this way, the unlabeled target-domain samples could be recognized using the fully labeled source-domain samples (i.e., transfer learning). Experiments on three common unsupervised domain adaptation datasets show that the proposed method outperforms conventional transfer learning methods and deep learning methods in recognition accuracy, verifying the validity and robustness of the proposed algorithm.
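The mutual-representation idea summarized in the abstract can be illustrated with a toy sketch. This is not the paper's actual algorithm: the shared subspace `P` is assumed here to come from pooled PCA, and the representation matrix `C` is assumed to be a ridge-regression solution; the names `P`, `C`, `lam`, and the class-score aggregation are all illustrative choices.

```python
import numpy as np

# Toy sketch of representing unlabeled target samples by labeled source
# samples in a common subspace (assumed formulation, not the paper's method).
rng = np.random.default_rng(0)
Xs = rng.normal(size=(50, 20))              # labeled source samples
ys = rng.integers(0, 3, size=50)            # source labels (3 classes)
Xt = rng.normal(loc=0.5, size=(40, 20))     # unlabeled, shifted target samples

# Step 1: a simple common subspace P via PCA on the pooled, centered data.
X = np.vstack([Xs, Xt])
mu = X.mean(axis=0)
U, _, _ = np.linalg.svd((X - mu).T @ (X - mu))
P = U[:, :5]                                # d x k projection (k = 5 assumed)
Zs, Zt = (Xs - mu) @ P, (Xt - mu) @ P       # projected source / target

# Step 2: representation matrix C so that Zt is (approximately) a linear
# combination of source samples: Zt ~= C @ Zs, with ridge regularization.
lam = 1.0                                   # assumed regularization strength
C = (Zt @ Zs.T) @ np.linalg.inv(Zs @ Zs.T + lam * np.eye(Zs.shape[0]))

# Step 3: label each target sample by the source class whose samples carry
# the largest total representation weight.
scores = np.stack([np.abs(C[:, ys == c]).sum(axis=1) for c in range(3)], axis=1)
pred = scores.argmax(axis=1)                # predicted target labels
```

The ridge term keeps the `n_s x n_s` system invertible; the paper's optimization constraints on the two representation matrices would replace this closed-form step.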