Evolving fuzzy neural classifier that integrates uncertainty from human-expert feedback

Paulo Vitor de Campos Souza;
2023-01-01

Abstract

Evolving fuzzy neural networks are models capable of solving complex problems in a wide variety of contexts. In general, the quality of the data evaluated by a model has a direct impact on the quality of its results. Some data-collection procedures generate uncertainty, which experts can identify in order to choose more suitable forms of model training. This paper proposes the integration of expert input on labeling uncertainty into evolving fuzzy neural classifiers (EFNC), in an approach called EFNC-U. Uncertainty is considered in the class labels provided by experts, who may not be entirely confident in their labeling or may have limited experience with the application scenario from which the data originate. Further, we aimed to create highly interpretable fuzzy classification rules that give a better understanding of the process and thus enable the user to elicit new knowledge from the model. To evaluate our technique, we performed binary pattern-classification tests in two application scenarios: cyber-intrusion detection and auction-fraud detection. By explicitly considering class-label uncertainty in the update process of EFNC-U, improved accuracy trend lines were achieved compared to fully (and blindly) updating the classifiers with uncertain data. Integrating (simulated) labeling uncertainty below 20% led to accuracy trends similar to those obtained on the original streams (unaffected by uncertainty), which demonstrates the robustness of our approach up to that uncertainty level. Finally, interpretable rules were elicited for a particular application (auction fraud identification), with reduced (and thus readable) antecedent lengths and with certainty values attached to the consequent class labels. Additionally, the average expected uncertainty of each rule was derived from the uncertainty levels of the samples that formed it.
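
The abstract does not reproduce the paper's update equations, but the mechanism it describes can be roughly illustrated. The minimal Python sketch below weights each incremental rule update by the expert's labeling confidence (1 - u), skips samples whose label uncertainty is too high to learn from, and tracks per rule the average uncertainty of the samples that formed it. All names and parameters here (Rule, EFNCU, vigilance, u_max) are assumptions made for illustration, not the authors' actual formulation.

    # Illustrative sketch only: an uncertainty-aware incremental classifier
    # in the spirit of EFNC-U, NOT the paper's exact method.
    import numpy as np

    class Rule:
        """One fuzzy rule: a prototype (center), a consequent class label,
        and bookkeeping for the rule-level expected uncertainty."""
        def __init__(self, center, label, u):
            self.center = np.asarray(center, dtype=float)
            self.label = label          # consequent class label
            self.conf_mass = 1.0 - u    # accumulated update confidence
            self.n = 1                  # number of samples that formed the rule
            self.u_sum = u              # sum of their expert uncertainty levels

        @property
        def avg_uncertainty(self):
            # average expected uncertainty of the rule, as in the abstract
            return self.u_sum / self.n

    class EFNCU:
        """Toy uncertainty-aware evolving classifier (illustrative only)."""
        def __init__(self, vigilance=1.0, u_max=0.8):
            self.rules = []
            self.vigilance = vigilance  # distance threshold for spawning rules
            self.u_max = u_max          # skip samples the expert is too unsure of

        def partial_fit(self, x, y, u):
            x = np.asarray(x, dtype=float)
            if u >= self.u_max:
                return                  # label too uncertain to learn from
            conf = 1.0 - u
            matches = [r for r in self.rules if r.label == y]
            if matches:
                r = min(matches, key=lambda r: np.linalg.norm(x - r.center))
                if np.linalg.norm(x - r.center) <= self.vigilance:
                    # confidence-weighted incremental mean of the rule center:
                    # uncertain labels pull the prototype less than certain ones
                    r.conf_mass += conf
                    r.center += (conf / r.conf_mass) * (x - r.center)
                    r.n += 1
                    r.u_sum += u
                    return
            self.rules.append(Rule(x, y, u))

        def predict(self, x):
            x = np.asarray(x, dtype=float)
            r = min(self.rules, key=lambda r: np.linalg.norm(x - r.center))
            return r.label

A stream is then consumed as triples (features, expert label, expert uncertainty):

    clf = EFNCU(vigilance=1.5, u_max=0.8)
    for x, y, u in [([0.1, 0.2], 0, 0.05), ([0.2, 0.1], 0, 0.30), ([0.9, 0.8], 1, 0.10)]:
        clf.partial_fit(x, y, u)
    print(clf.predict([0.15, 0.15]))                           # -> 0
    print([round(r.avg_uncertainty, 3) for r in clf.rules])    # per-rule uncertainty

The point of the confidence weighting is the behavior the abstract reports: updates driven by uncertain labels contribute less than fully trusted ones, instead of the classifier being fully (and blindly) updated with uncertain data.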

Use this identifier to cite or link to this document: https://hdl.handle.net/11582/341092