Abstract:
For the classification of high-dimensional data, the PCA joint subspace model can accurately describe the probability distribution of the sample data of a class and thus improve the classification accuracy of the corresponding Bayesian classifier. In this paper, we first give a theoretical formalization of the PCA joint subspace model; in particular, we state its two basic assumptions explicitly. Moreover, we prove that the commonly used heuristic value of the parameter referred to as the "representative eigenvalue" of the residual subspace is exactly its maximum likelihood estimate. We further generalize the expression for the probability distribution of the residual subspace and, on this basis, establish a generalized class-wise joint subspace algorithm for Bayesian classification. Finally, experimental results on several real-world datasets demonstrate the superiority of the generalized algorithm.
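To make the idea concrete, the following is a minimal illustrative sketch (not the paper's actual algorithm) of a class-conditional likelihood under a PCA joint subspace model: the top-k eigenpairs of the class covariance span the principal subspace, and the residual subspace is summarized by a single "representative eigenvalue" rho, taken here as the mean of the discarded eigenvalues, which the paper shows to be its maximum likelihood estimate. All names and the choice of k are hypothetical.

```python
import numpy as np

def class_loglik(x, mean, eigvals, eigvecs, k):
    """Log-likelihood of sample x under a PCA joint-subspace model of one class.

    eigvals/eigvecs: eigendecomposition of the class covariance, sorted in
    descending eigenvalue order. The first k eigenpairs define the principal
    subspace; the residual subspace uses one representative eigenvalue rho
    (the mean of the discarded eigenvalues, i.e. its ML estimate).
    """
    d = mean.shape[0]
    centered = x - mean
    # Coordinates in the k principal directions.
    y = eigvecs[:, :k].T @ centered
    # Residual energy: total squared norm minus principal-subspace energy.
    resid = centered @ centered - y @ y
    rho = eigvals[k:].mean()  # representative eigenvalue (ML estimate)
    # Gaussian log-density with the tail eigenvalues replaced by rho.
    return -0.5 * (np.sum(y**2 / eigvals[:k]) + resid / rho
                   + np.sum(np.log(eigvals[:k])) + (d - k) * np.log(rho)
                   + d * np.log(2 * np.pi))
```

A Bayesian classifier would fit one such model per class (mean and covariance eigendecomposition from that class's training samples) and assign a test sample to the class maximizing `class_loglik` plus the log prior.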