Dimension Reduction with Extreme Learning Machine
Abstract
The features learned by PCA (eigenvectors) and by linear AE are not able to represent data as parts (e.g., the nose in a face image). This paper introduces a dimension reduction framework which to some extent represents data as parts, has fast learning speed, and learns the between-class scatter subspace. To this end, this paper investigates a linear and non-linear dimension reduction framework referred to as extreme learning machine AE (ELM-AE) and sparse ELM-AE (SELM-AE). In contrast to tied-weight AE, the hidden neurons in ELM-AE and SELM-AE need not be tuned; their parameters (e.g., input weights of additive neurons) are initialized using orthogonal and sparse random weights, respectively. The framework is evaluated on the USPS handwritten digit recognition and CIFAR-10 object recognition data sets.
Conventional AE tunes input weights and output weights iteratively to learn features of the data. This paper instead investigates linear and non-linear ELM-AE and SELM-AE with orthogonal and sparse random hidden neurons.
In contrast to the common perception that a linear AE learns the variance information of the data, the proposed linear ELM-AE and linear SELM-AE with random neurons in theory learn the between-class scatter matrix, which reduces the distance between data points belonging to the same cluster.
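The mechanism described above — random, untuned hidden neurons with output weights solved in closed form, then data projected through the transpose of the output weights — can be sketched as follows. This is a minimal illustration, not the paper's implementation; the dimensions, regularization constant, and random data are placeholders, and the ridge-regularized least-squares solution for the output weights follows the standard ELM-AE formulation.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, k = 200, 20, 5  # samples, input dimension, reduced dimension (illustrative)

X = rng.normal(size=(n, d))  # placeholder data

# Random input weights and biases; orthogonalized, and never tuned (ELM-AE)
W = rng.normal(size=(d, k))
W, _ = np.linalg.qr(W)       # orthogonal random input weights
b = rng.normal(size=k)

# Hidden-layer output with a non-linear activation (non-linear ELM-AE);
# dropping the tanh gives the linear variant
H = np.tanh(X @ W + b)

# Output weights solved analytically via ridge-regularized least squares:
# beta = (I/C + H^T H)^(-1) H^T X
C = 1e3
beta = np.linalg.solve(H.T @ H + np.eye(k) / C, H.T @ X)  # shape (k, d)

# Dimension-reduced representation: project the data onto beta^T
X_reduced = X @ beta.T  # shape (n, k)
```

Because only `beta` is computed, and in a single closed-form step, training avoids the iterative weight updates of a conventional AE, which is the source of the fast learning speed claimed above.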
Normalized mean squared error (NMSE) values show that non-linear ELM-AE and SELM-AE learn a better model than the tied-weight AE (TAE).