Efficient Fine-tuning of Audio Spectrogram Transformers via Soft Mixture of Adapters

Cappellazzo, Umberto; Falavigna, Daniele; Brutti, Alessio
2024-01-01

Abstract

Mixture of Experts (MoE) architectures have recently started burgeoning due to their ability to scale a model's capacity while keeping the computational cost affordable, leading to state-of-the-art results in numerous fields. While MoE has mostly been investigated for the pre-training stage, its use in parameter-efficient transfer learning (PETL) settings is underexplored. To narrow this gap, this paper attempts to demystify the use of MoE for PETL of Audio Spectrogram Transformers on audio and speech downstream tasks. Specifically, we propose Soft Mixture of Adapters (Soft-MoA). It exploits adapters as the experts and, leveraging the recent Soft MoE method, relies on a soft assignment between the input tokens and experts to keep the computational time limited. Extensive experiments across 4 benchmarks demonstrate that Soft-MoA outperforms the single-adapter method and performs on par with the dense MoA counterpart. Finally, we present ablation studies on key elements of Soft-MoA. Our code is available at https://github.com/umbertocappellazzo/PETL_AST.
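To make the soft-assignment idea concrete, the following is a minimal, hypothetical PyTorch sketch of a Soft-MoA layer in the spirit of Soft MoE: learnable per-slot vectors produce dispatch weights (each slot is a convex combination of the input tokens) and combine weights (each token is a convex combination of the slot outputs), and each slot is processed by a bottleneck adapter acting as an expert. All names, dimensions, and hyperparameters here are illustrative assumptions, not taken from the paper or the linked repository.

import torch
import torch.nn as nn

class Adapter(nn.Module):
    """Bottleneck adapter: down-projection, non-linearity, up-projection."""
    def __init__(self, d_model: int, bottleneck: int):
        super().__init__()
        self.down = nn.Linear(d_model, bottleneck)
        self.up = nn.Linear(bottleneck, d_model)
        self.act = nn.ReLU()

    def forward(self, x):
        return self.up(self.act(self.down(x)))

class SoftMoA(nn.Module):
    """Soft mixture of adapter experts with soft token-to-slot assignment (sketch)."""
    def __init__(self, d_model: int, bottleneck: int, num_experts: int, slots_per_expert: int = 1):
        super().__init__()
        self.slots_per_expert = slots_per_expert
        num_slots = num_experts * slots_per_expert
        # One learnable vector per slot; dot products with tokens give the assignment logits.
        self.slot_embed = nn.Parameter(torch.randn(d_model, num_slots) * d_model ** -0.5)
        self.experts = nn.ModuleList([Adapter(d_model, bottleneck) for _ in range(num_experts)])

    def forward(self, x):
        # x: (batch, tokens, d_model)
        logits = torch.einsum("btd,ds->bts", x, self.slot_embed)
        dispatch = logits.softmax(dim=1)  # each slot = convex combination of tokens
        combine = logits.softmax(dim=2)   # each token = convex combination of slot outputs
        slots = torch.einsum("bts,btd->bsd", dispatch, x)
        # Each adapter expert processes its own group of slots.
        chunks = slots.split(self.slots_per_expert, dim=1)
        expert_out = torch.cat([e(c) for e, c in zip(self.experts, chunks)], dim=1)
        return torch.einsum("bts,bsd->btd", combine, expert_out)

if __name__ == "__main__":
    layer = SoftMoA(d_model=768, bottleneck=64, num_experts=4)  # illustrative sizes
    tokens = torch.randn(2, 100, 768)                           # e.g., AST patch tokens
    print(layer(tokens).shape)                                  # torch.Size([2, 100, 768])

Because every token attends to every slot with non-zero weight, the layer avoids the hard, discrete routing of classical MoE while keeping the per-layer cost bounded by the (small) number of slots rather than the number of experts times the number of tokens.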
Files in this record:
File: cappellazzo24_interspeech.pdf
Access: open access
License: Public domain
Size: 754.54 kB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11582/357487