
Giving AdaBoost a Parallel Boost

Merler, Stefano; Furlanello, Cesare; Caprile, Bruno Giovanni
2004-01-01

Abstract

AdaBoost is one of the most popular classification methods in use. Unlike other ensemble methods (e.g., Bagging), AdaBoost is inherently sequential, which may limit its practical applicability in many data-intensive, real-world applications. In this paper, a scheme is presented for the parallelization of AdaBoost. The procedure builds upon earlier results concerning the dynamics of AdaBoost weights, and yields approximations to the standard AdaBoost models that can be easily and efficiently distributed over a network of computing nodes. Margin maximization properties of the proposed procedure are discussed, and experiments are reported on both synthetic and benchmark data sets.
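
As context for the abstract (the paper's parallelization scheme itself is not reproduced in this record), the following is a minimal sketch of standard binary AdaBoost with decision stumps. It illustrates the round-to-round weight dependency that makes the algorithm inherently sequential: the weights used in round t are only available once the model of round t-1 has been fit. All names and the stump learner here are illustrative assumptions, not taken from the paper.

import numpy as np

def fit_stump(X, y, w):
    """Pick the threshold stump minimizing the weighted error (y in {-1,+1})."""
    n, d = X.shape
    best = (np.inf, 0, 0.0, 1)  # (error, feature, threshold, polarity)
    for j in range(d):
        for t in np.unique(X[:, j]):
            for s in (1, -1):
                pred = np.where(s * (X[:, j] - t) >= 0, 1, -1)
                err = w[pred != y].sum()
                if err < best[0]:
                    best = (err, j, t, s)
    return best

def adaboost(X, y, T=20):
    """Standard AdaBoost: each round depends on the previous round's weights."""
    n = len(y)
    w = np.full(n, 1.0 / n)
    ensemble = []
    for _ in range(T):
        err, j, t, s = fit_stump(X, y, w)
        err = max(err, 1e-12)
        alpha = 0.5 * np.log((1 - err) / err)
        pred = np.where(s * (X[:, j] - t) >= 0, 1, -1)
        # Sequential bottleneck: the next round's weights require this round's model.
        w *= np.exp(-alpha * y * pred)
        w /= w.sum()
        ensemble.append((alpha, j, t, s))
    return ensemble

def predict(ensemble, X):
    """Weighted vote of the stumps."""
    score = sum(a * np.where(s * (X[:, j] - t) >= 0, 1, -1)
                for a, j, t, s in ensemble)
    return np.sign(score)

The key point for parallelization is the weight update inside the loop: because w depends on every previously fitted model, rounds cannot be run independently, which is the dependency the paper's approximation scheme is designed to relax.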
Files associated with this product:
No files are associated with this product.

Documents in IRIS are protected by copyright, and all rights are reserved unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11582/2573

Note: the data displayed have not been validated by the university.
