Leveraging Contracts for Failure Monitoring and Identification in Automated Driving Systems
Srajan Goyal, Alberto Griggio, Stefano Tonetta
2025-01-01
Abstract
As the deployment of AI agents in Automated Driving Systems (ADS) becomes increasingly prevalent, ensuring their safety and reliability is of paramount importance. This paper presents a novel approach to enhancing the safety assurance of automated driving systems by employing formal contracts to specify and refine system-level properties. The proposed framework leverages formal methods to specify contracts that capture the expected behavior of perception and control components in ADS. These contracts serve as a basis for systematically validating system behavior against safety requirements during the design and testing phases. We demonstrate the efficacy of our approach using the CARLA simulator and an off-the-shelf AI agent. By monitoring the components' contracts on the ADS simulation, we were able to identify not only the causes of system failures but also the situations that could lead to a system failure, facilitating the debugging and maintenance of the AI agents.
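The core idea of monitoring component contracts over a simulation trace can be sketched as follows. This is a minimal, hedged illustration under assumed names: the `Contract` class, the predicate-based contract encoding, the braking example, and the trace format are all hypothetical and are not taken from the paper's actual framework.

```python
# Minimal sketch of assume/guarantee contract monitoring over a trace.
# A contract is modeled as a pair of boolean predicates over a state:
# if the assumption holds, the component must satisfy the guarantee.
# All names and predicates here are illustrative assumptions.
from dataclasses import dataclass
from typing import Callable, Dict, List

State = Dict[str, float]

@dataclass
class Contract:
    name: str
    assumption: Callable[[State], bool]  # obligation on the environment
    guarantee: Callable[[State], bool]   # obligation on the component

def check_trace(contract: Contract, trace: List[State]) -> List[int]:
    """Return the indices of trace steps where the assumption holds
    but the guarantee is violated (i.e., component-level failures)."""
    return [i for i, s in enumerate(trace)
            if contract.assumption(s) and not contract.guarantee(s)]

# Hypothetical braking contract for a control component: if an obstacle
# is detected within 20 m, the controller must apply significant braking.
braking = Contract(
    name="brake_on_obstacle",
    assumption=lambda s: s["obstacle_dist"] < 20.0,
    guarantee=lambda s: s["brake"] > 0.5,
)

trace = [
    {"obstacle_dist": 50.0, "brake": 0.0},  # assumption false: vacuously OK
    {"obstacle_dist": 15.0, "brake": 0.8},  # assumption and guarantee hold
    {"obstacle_dist": 10.0, "brake": 0.0},  # guarantee violated
]
print(check_trace(braking, trace))  # → [2]
```

Checking contracts step by step on simulation logs in this style is what allows a violation to be localized to a specific component and time step, rather than only observing a system-level crash.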