Painting the Starry Night using XiNets
Alberto Ancilotto; Elisabetta Farella
2024-01-01
Abstract
Neural style transfer (NST) is a powerful technique that merges the content of one image with the artistic style of another. However, existing NST models often suffer from computational inefficiency, limiting their practical applicability. In this paper, we propose a lightweight NST approach based on XiNets, neural networks developed specifically for microcontrollers and embedded applications. Our XiNet-based pipeline achieves an extremely lightweight style transfer mechanism and high-quality stylized outputs, significantly reducing computational complexity. Furthermore, we explore various optimizations and loss functions to improve the generator network's output quality. The combination of our novel loss terms and an extremely lightweight generative network yields performance comparable to that of networks orders of magnitude larger. Experimental results demonstrate style transfer quality comparable to state-of-the-art approaches while running on embedded devices such as a Raspberry Pi 4 single-board computer and an STM32H7 microcontroller. Our lightweight NST framework enables real-time style transfer applications on resource-constrained devices without compromising visual fidelity.
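The abstract does not detail the loss formulation used to train the XiNet generator. As a rough sketch of the standard NST objective that such pipelines typically build on (this is an assumption for illustration, not the paper's actual loss terms), the training objective combines a content loss on feature activations with a style loss on Gram matrices of those activations:

```python
import numpy as np

def gram_matrix(features):
    """Feature-correlation (Gram) matrix of a flattened feature map.

    features: array of shape (channels, height * width), e.g. activations
    from one layer of a feature-extraction network.
    """
    c, n = features.shape
    return features @ features.T / (c * n)

def style_transfer_loss(gen_feats, content_feats, style_feats,
                        alpha=1.0, beta=1e3):
    """Classic NST objective: weighted sum of content and style terms.

    alpha and beta are hypothetical weights; real pipelines tune them
    (and usually sum the style term over several layers).
    """
    # Content loss: mean squared error against the content image's features
    content_loss = np.mean((gen_feats - content_feats) ** 2)
    # Style loss: mean squared error between Gram matrices, which compare
    # feature correlations rather than spatial layout
    style_loss = np.mean((gram_matrix(gen_feats) - gram_matrix(style_feats)) ** 2)
    return alpha * content_loss + beta * style_loss
```

With identical generated, content, and style features the loss is zero, which is a quick sanity check when wiring such an objective into a training loop.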