An empirical evaluation of tinyML architectures for Class-Incremental Continual Learning

Francesco Paissan, Elisabetta Farella
2024-01-01

Abstract

Neural networks excel at addressing real-world tasks, yet their computational demands often confine them to cloud-based platforms. Recent literature has responded with compute-efficient neural architectures for edge devices (e.g., microcontroller units). Nonetheless, the proliferation of edge devices makes it relevant to address the dynamic nature of real-world environments, where models must adapt to shifting data distributions and integrate new information without forgetting previously acquired knowledge. Continual Learning (CL) addresses this issue by enabling models to learn new tasks sequentially while retaining knowledge from previous ones. In this paper, we study how efficient neural networks perform on the task of Class-Incremental Continual Learning. In particular, we evaluate the PhiNets architecture family on the well-established CORe50 and CIFAR-10 benchmarks and present a feasibility study for Latent Replay on edge devices. In terms of performance, PhiNet models outperform MobileNet architectures on the CIFAR-10 dataset, achieving 4.47% higher accuracy. Remarkably, PhiNet reaches this accuracy while using only 0.012% of the computation required by MobileNet. This attests not only to its superior performance but also to its substantial computational efficiency, affirming the feasibility of deploying PhiNet models in real-world applications.
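To make the Latent Replay setup referenced in the abstract concrete, the following is a minimal sketch, not the paper's exact configuration. It assumes a PyTorch model split into a frozen feature extractor (here called `frozen_body`) and a trainable classifier `head`; the `LatentReplayBuffer` class, the split point, and the reservoir-sampling policy are illustrative assumptions.

```python
import torch
import torch.nn as nn

class LatentReplayBuffer:
    """Reservoir-style buffer of intermediate activations ("latents").

    Storing latents instead of raw inputs shrinks the replay memory and
    skips recomputing the frozen layers, which is what makes the scheme
    attractive on microcontroller-class devices.
    """

    def __init__(self, capacity: int):
        self.capacity = capacity
        self.latents, self.labels = [], []
        self.seen = 0  # total samples observed in the stream

    def add(self, z: torch.Tensor, y: torch.Tensor):
        for zi, yi in zip(z, y):
            self.seen += 1
            if len(self.latents) < self.capacity:
                self.latents.append(zi.detach().cpu())
                self.labels.append(yi.detach().cpu())
            else:
                # Reservoir sampling keeps a uniform sample over the stream.
                j = torch.randint(0, self.seen, (1,)).item()
                if j < self.capacity:
                    self.latents[j] = zi.detach().cpu()
                    self.labels[j] = yi.detach().cpu()

    def sample(self, n: int, device):
        idx = torch.randperm(len(self.latents))[:n]
        z = torch.stack([self.latents[i] for i in idx]).to(device)
        y = torch.stack([self.labels[i] for i in idx]).to(device)
        return z, y

def train_step(frozen_body, head, buffer, x, y, optimizer, criterion):
    """One class-incremental step: mix new latents with replayed old ones."""
    with torch.no_grad():  # layers below the replay point stay frozen
        z_new = frozen_body(x)
    buffer.add(z_new, y)
    z_old, y_old = buffer.sample(x.size(0), x.device)
    z = torch.cat([z_new, z_old])
    t = torch.cat([y, y_old])
    loss = criterion(head(z), t)
    optimizer.zero_grad()
    loss.backward()  # gradients flow only through the head
    optimizer.step()
    return loss.item()
```

Freezing everything below the replay layer is the key design choice: only the head is trained and only compact latents are stored, which is why the paper can study the feasibility of this scheme on edge hardware.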

Use this identifier to cite or link to this document: https://hdl.handle.net/11582/345427