Pruning Extreme Wavelets Learning Machine by Automatic Relevance Determination

Paulo V. de Campos Souza
2019-01-01

Abstract

Extreme learning machines (ELMs) are used in various contexts in artificial intelligence, such as pattern classification, time series prediction, and regression, and they offer a more efficient way of training hidden-layer weights to determine the values of the learning model. In essence, however, the model determines these weights randomly, and the Moore-Penrose pseudoinverse defines only the weights of the output layer. Random weights make this learning a black box, because there is no relationship between the hidden-layer weights and the problem data. This paper proposes initializing the weights and biases of the hidden layer through the wavelet transform, which allows these two parameters, previously initialized at random, to be more representative of the problem domain, so that the frequency range of the network's input patterns helps define the weights of the ELM hidden layer. To improve the representativeness of the data, a feature selection technique based on automatic relevance determination is applied to select the most characteristic dimensions of the problem. The network structure uses activation functions of the rectified linear unit (ReLU) type. The proposed model was evaluated on binary pattern classification with real classes, and the results show that the proposal of this work improves classification accuracy and can therefore be considered a feasible approach to training neural networks based on extreme learning machines.
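The abstract describes the standard ELM training scheme that the paper modifies: random hidden-layer weights and biases, a ReLU hidden activation, and output weights obtained with the Moore-Penrose pseudoinverse. The following is a minimal sketch of that baseline in Python using only NumPy; the function names and toy data are illustrative, and the wavelet-based initialization and ARD pruning proposed in the paper would replace the random draw of the hidden-layer parameters and the choice of input dimensions.

```python
import numpy as np

# Minimal ELM baseline sketch (not the paper's exact method):
# hidden-layer weights and biases are drawn at random, ReLU is the
# hidden activation, and only the output weights are solved with the
# Moore-Penrose pseudoinverse. The paper's wavelet initialization and
# ARD feature pruning would replace the random draw of W and b below.

def relu(z):
    return np.maximum(z, 0.0)

def elm_train(X, y, n_hidden=50, rng=np.random.default_rng(0)):
    """X: (n_samples, n_features); y: (n_samples,) targets in {-1, +1}."""
    n_features = X.shape[1]
    W = rng.standard_normal((n_features, n_hidden))   # random input weights
    b = rng.standard_normal(n_hidden)                 # random biases
    H = relu(X @ W + b)                               # hidden-layer output matrix
    beta = np.linalg.pinv(H) @ y                      # output weights via pseudoinverse
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.sign(relu(X @ W + b) @ beta)

# Hypothetical usage on toy binary data:
X = np.random.default_rng(1).standard_normal((100, 10))
y = np.sign(X[:, 0])
W, b, beta = elm_train(X, y, n_hidden=30)
accuracy = np.mean(elm_predict(X, W, b, beta) == y)
```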
2019
ISBN: 978-3-030-20256-9
ISBN: 978-3-030-20257-6
Files in this record:
There are no files associated with this record.

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11582/341069