Machine learning-driven Scaling and Placement of Virtual Network Functions at the Network Edges

Tejas Subramanya; Roberto Riggio
2019-01-01

Abstract

Network Function Virtualization is a promising technology that proposes to decouple network functions from their underlying hardware and transform them into software entities called Virtual Network Functions (VNFs). This approach offers network operators more flexibility to instantiate, configure, scale, and migrate VNFs at runtime depending on demand. By deploying these VNFs at the network edges (e.g., base stations), emerging use cases such as connected cars can be supported. However, in such an environment, efficient VNF placement and orchestration mechanisms are needed to address the challenges of continuously changing network dynamics, service latency requirements, and user mobility patterns. The purpose of this paper is twofold. First, we propose a neural-network model (a class of machine-learning techniques) that assists proactive auto-scaling by predicting the number of VNF instances required as a function of the network traffic they must process. Trained on traffic traces collected from a commercial mobile network, the model achieves a prediction accuracy of 97%. Second, we provide an Integer Linear Programming (ILP) formulation for placing these VNFs at edge nodes with the primary objective of minimizing the end-to-end latency from all users to their respective VNFs. Our results show a latency improvement of up to 75% when VNFs are placed at the network edges.
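Since this record contains only the abstract, the sketch below is not the authors' implementation; it merely illustrates the kind of neural-network scaling predictor the abstract describes, assuming scikit-learn's MLPClassifier, two illustrative input features (offered traffic and active users), and a synthetic scaling rule of one VNF instance per 250 Mbps. All feature names, layer sizes, and data are assumptions.

```python
# Minimal sketch of a feed-forward neural network that predicts the number of VNF
# instances needed for a given traffic load (illustrative assumptions, not the paper's model).
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

# Hypothetical features per monitoring interval: offered traffic (Mbps) and active users.
traffic = rng.uniform(0, 1000, size=(5000, 1))
users = rng.integers(1, 500, size=(5000, 1)).astype(float)
X = np.hstack([traffic, users])

# Hypothetical label: required number of VNF instances, assuming one instance per 250 Mbps.
y = np.ceil(traffic[:, 0] / 250).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

# Small feed-forward network treating the instance count as a class label.
model = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=1000, random_state=0)
model.fit(X_train, y_train)

print(f"test accuracy: {model.score(X_test, y_test):.2%}")
```

The placement side is described only at the level of its objective (minimizing end-to-end latency from users to their VNFs). A generic latency-aware assignment ILP of that kind could be written as follows, where the binary variable $x_{u,n}$ (VNF of user $u$ placed on edge node $n$), latency $d_{u,n}$, demand $r_u$, and node capacity $c_n$ are assumed symbols rather than the paper's notation:

```latex
\min \sum_{u \in U} \sum_{n \in N} d_{u,n}\, x_{u,n}
\quad \text{s.t.} \quad
\sum_{n \in N} x_{u,n} = 1 \;\; \forall u \in U, \qquad
\sum_{u \in U} r_u\, x_{u,n} \le c_n \;\; \forall n \in N, \qquad
x_{u,n} \in \{0,1\}.
```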

Use this identifier to cite or link to this document: https://hdl.handle.net/11582/317865