Bias, Variance, and Error Correcting Output Codes for Local Learners
Ricci, Francesco
1997-01-01
Abstract
This paper focuses on a bias-variance decomposition analysis of a local learning algorithm, the nearest neighbor classifier, that has been extended with 'error correcting output codes'. This extended algorithm often considerably reduces the 0-1 (i.e., classification) error in comparison with nearest neighbor (Ricci & Aha, 1997). The analysis presented here reveals that this performance improvement is obtained by drastically reducing bias at the cost of increasing variance. We also show that, even in classification problems with few classes (m ≤ 5), extending the codeword length beyond the limit that assures column separation yields an error reduction. This error reduction occurs not only in the variance, which is due to the voting mechanism used for error-correcting output codes, but also in the bias.
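To make the construction concrete, the following is a minimal sketch of error-correcting output codes with a 1-nearest-neighbor base learner, in the spirit of the method the abstract describes. It is not the paper's implementation: the code matrix, function names, and the plain Euclidean 1-NN subroutine are illustrative assumptions. Each column of the code matrix relabels the training set into a binary problem, and decoding picks the class whose codeword is nearest in Hamming distance to the predicted bit string (the voting step mentioned above).

```python
import numpy as np

def one_nn_predict(X_train, y_train, x):
    """Return the label of the training point nearest to x (Euclidean 1-NN)."""
    dists = np.linalg.norm(X_train - x, axis=1)
    return y_train[np.argmin(dists)]

def ecoc_predict(X_train, y_train, code_matrix, x):
    """Classify x via error-correcting output codes over 1-NN bit classifiers.

    code_matrix[c, b] is bit b of the codeword assigned to class c. Each
    column induces a binary relabeling of the training set, hence a binary
    1-NN problem; the final class is the one whose codeword is closest in
    Hamming distance to the predicted bit string.
    """
    n_classes, n_bits = code_matrix.shape
    bits = np.empty(n_bits, dtype=int)
    for b in range(n_bits):
        y_bin = code_matrix[y_train, b]            # relabel by bit b
        bits[b] = one_nn_predict(X_train, y_bin, x)
    hamming = np.sum(code_matrix != bits, axis=1)  # distance to each codeword
    return int(np.argmin(hamming))

# Toy usage (hypothetical data): 3 classes, 7-bit codewords, i.e. longer
# than the minimum needed for column separation.
rng = np.random.default_rng(0)
X = rng.normal(size=(30, 2)) + np.repeat(np.eye(3, 2) * 4, 10, axis=0)
y = np.repeat(np.arange(3), 10)
codes = np.array([[0, 0, 0, 0, 0, 0, 0],
                  [0, 1, 1, 1, 1, 0, 0],
                  [1, 0, 1, 1, 0, 1, 0]])
print(ecoc_predict(X, y, codes, X[5]))  # expected: 0
```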