Method of pruning the hidden layer of the extreme learning machine based on correlation coefficient

P. V. de Campos Souza
2018-01-01

Abstract

Extreme learning machines (ELMs) are very efficient tools for data classification and offer a more efficient alternative to architectures that use backpropagation to determine the weights of the intermediate layers of the learning model. One problem faced by the ELM is over-fitting, which usually occurs when the number of neurons in the hidden layer is high. Several pruning methods for the ELM have been proposed, including statistical techniques and control by correlation. This article provides a simple way to prune the ELM hidden layer based on the correlation coefficient between elements of the ELM architecture, allowing the model to have a simpler topology for its pattern classification tasks by selecting the neurons that are better able to predict the responses and that carry less entropy between the data. In the end, it is verified that the ELM pruned by correlation techniques maintains training and test accuracy statistically equivalent to the original ELM proposal, with the advantage of using fewer neurons in the hidden layer: by selecting the neurons that contribute most to the model, it keeps training and test accuracy equivalent to that obtained with structures containing more neurons, as well as to other models that use statistical and correlation techniques to prune the ELM architecture. Statistical tests confirm that the proposed solution yields an ELM with a simpler internal architecture and a high capacity for precise pattern classification, allowing the proposed methodology to be seen as an efficient pruning technique for the ELM.
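The abstract describes the idea only at a high level. Below is a minimal sketch of one way correlation-based pruning of an ELM hidden layer could look; it is an illustrative reconstruction, not the author's exact algorithm. The function names, the sigmoid activation, the ranking of neurons by the absolute Pearson correlation of their activations with the target, and the `n_hidden`/`keep` values are all assumptions.

```python
# Illustrative sketch: train an ELM, rank hidden neurons by |Pearson correlation|
# of their activations with the target, keep the strongest ones, and re-solve
# the output weights by least squares on the pruned hidden layer.
import numpy as np

def elm_correlation_prune(X, y, n_hidden=100, keep=30, seed=None):
    """Return (W, b, beta) for a correlation-pruned ELM.
    `n_hidden` and `keep` are illustrative hyperparameters."""
    rng = np.random.default_rng(seed)
    n_features = X.shape[1]

    # Random input weights and biases (standard ELM initialization).
    W = rng.normal(size=(n_features, n_hidden))
    b = rng.normal(size=n_hidden)

    # Hidden-layer activations (sigmoid).
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))

    # Rank neurons by absolute correlation between activation and target.
    corr = np.nan_to_num(
        [abs(np.corrcoef(H[:, j], y)[0, 1]) for j in range(n_hidden)]
    )
    selected = np.argsort(corr)[::-1][:keep]

    # Refit output weights on the pruned hidden layer (Moore-Penrose pseudoinverse).
    H_pruned = H[:, selected]
    beta = np.linalg.pinv(H_pruned) @ y
    return W[:, selected], b[selected], beta

def elm_predict(X, W, b, beta):
    """Predict with the pruned ELM; threshold the output for binary labels."""
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))
    return H @ beta
```

In practice the output weights could also be obtained with a regularized least-squares solve, and the correlation measure could be replaced by another relevance criterion; the sketch only shows the structure of the prune-then-refit step described in the abstract.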
Year: 2018
ISBN: 978-1-5386-4626-7

Use this identifier to cite or link to this document: https://hdl.handle.net/11582/341016