On the Difficulty of Hiding Keys in Neural Networks
Pasquini, C.
2020-01-01
Abstract
To defend neural networks against malicious attacks, recent approaches propose the use of secret keys in the training or inference pipelines of learning systems. While this concept is innovative and the results are promising in terms of attack mitigation and classification accuracy, the effectiveness of these defenses relies on the secrecy of the key, an aspect that is often not discussed. In this short paper, we explore this issue for the case of a recently proposed key-based deep neural network. White-box experiments on multiple models and datasets, using the original key-based method and our own extensions, show that it is currently possible to extract secret key bits with relatively limited effort.
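The abstract's central claim is that a white-box attacker, who can inspect the model and observe its outputs, can recover the secret key bits with limited effort. The following is a minimal hypothetical sketch of that idea, not the paper's actual attack: it assumes a toy defense in which a secret binary key flips the sign of each input feature before a known linear model is applied. With white-box knowledge of the weights and a handful of observed outputs, the effective (key-dependent) weights can be recovered by least squares, and each key bit then follows from a sign comparison.

```python
# Hypothetical toy example (NOT the scheme attacked in the paper): a
# linear model whose inputs are sign-flipped by a secret binary key.
import numpy as np

rng = np.random.default_rng(0)
D = 16                                    # toy feature dimension
secret_key = rng.integers(0, 2, size=D)   # secret bits of the defender
w = rng.normal(size=D)                    # weights, known to a white-box attacker

# Keyed model: bit 0 keeps the feature, bit 1 flips its sign.
signs = 1 - 2 * secret_key
X = rng.normal(size=(4 * D, D))           # inputs the attacker controls
y = X @ (signs * w)                       # observed model outputs

# Attack: the effective weights v_j = s_j * w_j are identifiable from
# (X, y) by least squares; each key bit is the sign of v_j / w_j.
v, *_ = np.linalg.lstsq(X, y, rcond=None)
recovered = (np.sign(v / w) < 0).astype(int)

print(int((recovered == secret_key).sum()), "of", D, "key bits recovered")
```

The point of the sketch is only to make the threat model concrete: once the key's influence on the computation is observable in a white-box setting, its bits become ordinary unknowns that standard estimation can solve for.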
File: IH2020.pdf (authorized users only; request a copy)
Type: Post-print document
License: NOT PUBLIC - Private/restricted access
Size: 1.04 MB
Format: Adobe PDF
Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.