
Risk-Based Privacy-Aware Access Control for Threat Detection Systems

Metoui, Nadia;
2017-01-01

Abstract

Threat detection systems collect and analyze large amounts of security log data to detect potential attacks. Since log data from enterprise systems may contain sensitive and personal information, access should be limited to the data relevant to the task at hand, as mandated by data protection regulations. To this end, data need to be pre-processed (anonymized) to eliminate or obfuscate the sensitive information that is not strictly necessary for the task. Additional security/accountability measures may also be applied to reduce the privacy risk, such as logging access to the personal data or imposing deletion obligations. Anonymization reduces the privacy risk, but it should be carefully applied and balanced against the utility requirements of the different phases of the process: a preliminary analysis may require fewer details than an in-depth investigation of a suspect set of logs. We propose a risk-based privacy-aware access control framework for threat detection systems, where each access request is evaluated by comparing the privacy risk and the trustworthiness of the request. When the risk is too large compared to the trust level, the framework can apply adaptive adjustment strategies to decrease the risk (e.g., by selectively obfuscating the data) or to increase the trust level to perform a given task (e.g., by imposing enforceable obligations on the user). We show how the framework can simultaneously address both the privacy and the utility requirements. The experimental results presented in the paper show that the framework achieves meaningful results and real-time performance within an industrial threat detection solution.
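To make the risk/trust comparison and the adaptive adjustment strategies described above concrete, the following is a minimal sketch of how such an evaluation could look. All names and values (AccessRequest, the per-attribute risk weights, the trust bonus for accepting obligations) are hypothetical illustrations and are not taken from the paper.

```python
from dataclasses import dataclass, field

@dataclass
class AccessRequest:
    """Hypothetical access request: the log attributes needed for a task."""
    attributes: set                     # e.g. {"username", "src_ip", "timestamp"}
    trust_level: float                  # trustworthiness of requester/context, in [0, 1]
    obligations: list = field(default_factory=list)

# Illustrative per-attribute privacy-risk weights (assumed, not from the paper).
RISK_WEIGHTS = {"username": 0.5, "src_ip": 0.3, "timestamp": 0.1}

def privacy_risk(attributes):
    """Aggregate privacy risk of disclosing the requested attributes."""
    return sum(RISK_WEIGHTS.get(a, 0.2) for a in attributes)

def evaluate(request):
    """Grant access when risk <= trust; otherwise try adaptive adjustments."""
    risk = privacy_risk(request.attributes)
    if risk <= request.trust_level:
        return "grant", request

    # Strategy 1: decrease the risk by selectively obfuscating the riskiest
    # attributes that are not strictly necessary for the task.
    reduced = set(request.attributes)
    for attr in sorted(reduced, key=lambda a: -RISK_WEIGHTS.get(a, 0.2)):
        reduced.discard(attr)  # obfuscate / generalize this attribute
        if privacy_risk(reduced) <= request.trust_level:
            return "grant-obfuscated", AccessRequest(reduced, request.trust_level)

    # Strategy 2: increase the trust level by attaching enforceable obligations
    # (e.g. logging the access, deleting the data after the investigation).
    request.obligations += ["log_access", "delete_after_investigation"]
    request.trust_level += 0.2  # illustrative trust bonus for accepting obligations
    if risk <= request.trust_level:
        return "grant-with-obligations", request

    return "deny", request

# Usage example: a preliminary analysis with a modest trust level.
req = AccessRequest({"username", "src_ip", "timestamp"}, trust_level=0.4)
decision, adjusted = evaluate(req)
print(decision, adjusted.attributes, adjusted.obligations)
```

In this sketch a low-trust preliminary analysis is granted access only to an obfuscated view of the logs (the username is dropped), which mirrors the paper's point that earlier phases of the process can be served with fewer details than an in-depth investigation.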


Use this identifier to cite or link to this document: https://hdl.handle.net/11582/313247