Many autonomous systems use deep neural networks (DNNs) to perceive the world around them. However, no DNN is perfect, and misperception can have devastating consequences in safety-critical applications. DeepDECS is a controller synthesis method that uses DNN verification techniques to capture the uncertainty of DNN perception components, and produces correct-by-construction controllers for such autonomous systems.
The diagram on the right depicts the DeepDECS controller synthesis process, including its four inputs. The first is a discrete-time Markov chain (DTMC) modelling the behaviour of the autonomous system within its application, assuming perfect perception; the second is a set of requirements specified in probabilistic computation tree logic (PCTL). The third input is a pre-trained perception DNN that will be used in the autonomous system, and the fourth is a representative test dataset for this DNN. These last two inputs are used, along with \(n\) verification methods, in Stage 1 ('DNN uncertainty quantification') of DeepDECS to derive a set of \(2^{n}\) confusion matrices. These confusion matrices are then used to estimate the DNN classification/misclassification probabilities for an augmented, DNN-perception DTMC obtained automatically in Stage 2 ('Model augmentation') of DeepDECS. The augmented model contains an accurate representation of the imperfect DNN perception component. Finally, Stage 3 ('Controller synthesis') of DeepDECS employs probabilistic model checking to synthesise a correct-by-construction controller for the autonomous system.
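To make Stage 1 concrete, the sketch below shows how a confusion matrix is computed from a test dataset and row-normalised into the (mis)classification probabilities that parameterise the augmented DTMC of Stage 2. This is a minimal illustration, not the DeepDECS tooling itself; the function and variable names are our own.

```python
def confusion_matrix(labels, preds, n_classes):
    """Count how often true class i is classified as class j
    over a representative test dataset."""
    cm = [[0] * n_classes for _ in range(n_classes)]
    for y, y_hat in zip(labels, preds):
        cm[y][y_hat] += 1
    return cm


def class_probabilities(cm):
    """Row-normalise the confusion matrix to estimate the
    classification/misclassification probabilities used to
    parameterise the DNN-perception DTMC."""
    return [[c / sum(row) if sum(row) else 0.0 for c in row]
            for row in cm]


# Toy binary example: 0 = 'no collision course', 1 = 'collision course'.
labels = [0, 0, 1, 1, 1, 0]
preds = [0, 1, 1, 1, 0, 0]
cm = confusion_matrix(labels, preds, 2)        # [[2, 1], [1, 2]]
probs = class_probabilities(cm)                # e.g. P(predict 1 | true 0) = 1/3
```

With \(n\) verification methods, the test dataset is partitioned by the \(2^{n}\) possible combinations of per-input verification verdicts, and one such matrix is computed per partition.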
Inspired by recent research on the integration of DNNs into a wide variety of autonomous systems, we used DeepDECS to develop a mobile robot collision-limitation controller. We considered a robot travelling between locations A and B via waypoints (see left), where the robot (blue) may encounter and potentially collide with another moving agent (red). The robot therefore uses DNN perception at each waypoint to assess whether it is on a collision course. The DNN's inputs are the distance between the robot and the collider (\(x\), \(y\)), the linear and angular velocities of the collider (\(s\), \(\dot{\theta}\)), and the initial heading of the collider (\(\theta\)). The input values are all relative to the robot's local reference frame. Based on the DNN output, the controller decides whether the robot should proceed to the next waypoint or wait for a while at its current one. The requirements for the robot are to minimise the travel time between waypoints and to minimise the risk of a dangerous collision.
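The per-waypoint decision loop can be sketched as follows. The perception stand-in below is a toy rule, not the trained DNN, and the threshold and heading test are assumptions made purely for illustration; in the real system the synthesised controller's choice also accounts for the DNN's quantified uncertainty.

```python
import math


def perceive_collision_course(x, y, s, theta_dot, theta):
    """Toy stand-in for the perception DNN: flags a collision course
    when the collider is close and heading roughly towards the robot.
    Inputs mirror the DNN's: relative position (x, y), collider linear
    and angular velocity (s, theta_dot), and initial heading theta,
    all in the robot's local reference frame."""
    distance = math.hypot(x, y)
    heading_towards_robot = math.cos(theta) < 0
    return distance < 5.0 and heading_towards_robot


def controller_action(on_collision_course):
    """The synthesised controller, abstracted to its two actions:
    proceed to the next waypoint, or wait at the current one."""
    return "wait" if on_collision_course else "proceed"
```

For example, a nearby collider heading back towards the robot yields `"wait"`, while a distant one yields `"proceed"`.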
Clicking on the camera icon on the left will open a video visualising a simulation of a robot travelling between two locations using a controller synthesised by the DeepDECS process.
We used DeepDECS to design a proof-of-concept driver-attentiveness management system for shared-control autonomous cars. Developed as part of our SafeSCAD project and inspired by the first United Nations regulation on vehicles with Level 3 automation, this system uses: (i) specialised sensors to monitor key car parameters (velocity, lane position, etc.) and driver biometrics (eye movement, heart rate, etc.), (ii) a three-class DNN to predict the driver's response to a request to resume manual driving, and (iii) a deterministic controller to issue visual/acoustic/haptic alerts when the driver is insufficiently attentive. We used an existing DNN (DeepTake) trained and validated with driver data from a SafeSCAD user study performed within a driving simulator, and the test dataset used for our DNN uncertainty quantification was obtained from the same study. The requirements for the controller are to minimise the risk, which depends on the driver's attentiveness level, and to minimise the nuisance caused to the driver by the alerts.
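The deterministic alert controller in (iii) can be sketched as a simple mapping from the DNN's three predicted response classes to alert modalities. The class labels and the escalation policy below are illustrative assumptions, not the synthesised SafeSCAD controller.

```python
# Hypothetical mapping from the three-class DNN prediction of the
# driver's response to a take-over request, to the alert issued.
# Class meanings and alert escalation are assumptions for illustration.
ALERTS = {
    "unresponsive": "visual+acoustic+haptic",  # strongest alert
    "slow": "visual+acoustic",                 # moderate alert
    "attentive": None,                         # no alert: avoid nuisance
}


def issue_alert(predicted_response):
    """Deterministic controller: pick an alert based on the DNN's
    predicted driver-response class."""
    return ALERTS[predicted_response]
```

A DeepDECS-synthesised controller would additionally weigh the DNN's misclassification probabilities when trading off risk against driver nuisance.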
A model augmentation tool that automates Stage 2 of DeepDECS, a DeepDECS tutorial, and all the models, DNNs, and datasets from the two case studies are freely available in our GitHub repo.
This project has received funding from the UKRI project ‘Trustworthy Autonomous Systems Node in Resilience’, the UKRI Global Research and Innovation Programme, and the Assuring Autonomy International Programme. The authors are grateful to the developers of the DeepTake deep neural network for sharing the DeepTake data sets, and to the University of York’s Viking research computing cluster team for providing access to their systems.