Learning the Width of Activations in Neural Networks

Trentin, Edmondo
1996-01-01

Abstract

This report introduces a novel algorithm to learn the width of non-linear activation functions (of arbitrary analytical form) in layered networks. The algorithm is based on a steepest gradient-descent technique and relies on the inductive proof of a theorem that involves the novel concept of the expansion function of the activation associated with a given unit of the neural net. Experimental results obtained in a speaker normalization task with a mixture of Multilayer Perceptrons show a dramatic improvement in performance with respect to standard Back-Propagation training.
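The abstract does not give the update equations, but the underlying idea of adapting an activation "width" by steepest gradient descent, jointly with the weights, can be sketched as follows. This is a minimal illustration under assumed choices (a sigmoid f(a) = 1/(1 + exp(-a/λ)), a squared-error loss, one width per unit, separate learning rates, a toy XOR task); it is not the report's actual algorithm, nor its expansion-function derivation.

```python
# Minimal sketch, NOT the report's algorithm: it only illustrates learning a
# per-unit "width" lam of a sigmoid f(a) = 1/(1 + exp(-a/lam)) by steepest
# gradient descent, together with the connection weights. The activation form,
# the parameter name, the toy task, and the learning rates are assumptions.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(a, lam):
    """Sigmoid with adaptable width lam (larger lam -> flatter activation)."""
    return 1.0 / (1.0 + np.exp(-a / lam))

# Toy data: XOR with a 2-2-1 network.
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
T = np.array([[0.], [1.], [1.], [0.]])

# Parameters: weights, biases, and one width per unit.
W1 = rng.normal(scale=0.5, size=(2, 2)); b1 = np.zeros(2); lam1 = np.ones(2)
W2 = rng.normal(scale=0.5, size=(2, 1)); b2 = np.zeros(1); lam2 = np.ones(1)

eta_w, eta_lam = 0.5, 0.1   # separate learning rates (an assumption)

for epoch in range(5000):
    # Forward pass
    a1 = X @ W1 + b1                 # net input, hidden layer
    h = sigmoid(a1, lam1)            # hidden activations
    a2 = h @ W2 + b2                 # net input, output layer
    y = sigmoid(a2, lam2)            # network output

    # Backward pass for E = 0.5 * sum (y - t)^2, using
    # df/da = f(1-f)/lam  and  df/dlam = -a * f(1-f) / lam^2
    err = y - T
    d2 = err * y * (1 - y) / lam2
    dlam2 = np.sum(err * y * (1 - y) * (-a2) / lam2**2, axis=0)
    dh = d2 @ W2.T
    d1 = dh * h * (1 - h) / lam1
    dlam1 = np.sum(dh * h * (1 - h) * (-a1) / lam1**2, axis=0)

    # Steepest-descent updates for weights, biases, and widths
    W2 -= eta_w * (h.T @ d2);  b2 -= eta_w * d2.sum(axis=0);  lam2 -= eta_lam * dlam2
    W1 -= eta_w * (X.T @ d1);  b1 -= eta_w * d1.sum(axis=0);  lam1 -= eta_lam * dlam1

    # Keep widths positive for numerical safety (an assumption of this sketch).
    lam1 = np.maximum(lam1, 1e-2); lam2 = np.maximum(lam2, 1e-2)

print("outputs:", sigmoid(sigmoid(X @ W1 + b1, lam1) @ W2 + b2, lam2).ravel())
print("learned widths:", lam1, lam2)
```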
Files in this record:
There are no files associated with this record.

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11582/1296
