Anomaly Detection Through Explanations

Abstract

Under most conditions, complex systems are imperfect. When errors occur, as they inevitably will, systems need to be able to (1) localize the error and (2) take appropriate action to mitigate its repercussions. In this talk, I present new methodologies for detecting and explaining errors in complex systems. My novel contribution is a system-wide monitoring architecture composed of introspective, overlapping committees of subsystems. Each subsystem is encapsulated in a “reasonableness” monitor, an adaptable framework that supplements local decisions with commonsense data and reasonableness rules. This framework is dynamic and introspective: it allows each subsystem to defend its decisions in different contexts, both to the committees it participates in and to itself. For reconciling system-wide errors, I developed a comprehensive architecture that I call “Anomaly Detection through Explanations (ADE).” The ADE architecture contributes an explanation synthesizer that produces an argument tree, which in turn can be traced and queried to determine the support for a decision and to construct counterfactual explanations. I have applied this methodology to detect incorrect labels in semi-autonomous vehicle data and to reconcile inconsistencies in simulated, anomalous driving scenarios. My work has opened up the new area of explanatory anomaly detection, working towards a vision in which complex systems will be articulate by design: they will be dynamic, internal explanations will be part of their design criteria, and system-level explanations will be provided and open to challenge in an adversarial proceeding.
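
To make the argument-tree idea concrete, here is a minimal, hypothetical Python sketch of a structure that can be traced to show the support behind a decision and mined for counterfactuals. The ArgumentNode class, its methods, and the driving example are illustrative assumptions for exposition, not the actual ADE implementation.

```python
# Hypothetical sketch of an argument tree for a committee decision.
# All names and semantics here are illustrative assumptions, not ADE's code.
from dataclasses import dataclass, field
from typing import List

@dataclass
class ArgumentNode:
    claim: str                       # e.g., a subsystem's local decision
    supports: bool = True            # does this node support its parent's claim?
    children: List["ArgumentNode"] = field(default_factory=list)

    def trace(self, depth: int = 0) -> None:
        """Print the tree, marking supporting (+) and contradicting (-) claims."""
        tag = "+" if self.supports else "-"
        print("  " * depth + f"[{tag}] {self.claim}")
        for child in self.children:
            child.trace(depth + 1)

    def support(self) -> int:
        """Net support at the leaves: supporting evidence minus contradicting."""
        if not self.children:
            return 1 if self.supports else -1
        return sum(child.support() for child in self.children)

    def counterfactuals(self) -> List[str]:
        """Collect contradicting claims; flipping any would change the support."""
        found = [] if self.supports else [self.claim]
        for child in self.children:
            found.extend(child.counterfactuals())
        return found

# Usage: a committee reconciling a perception decision (invented scenario).
root = ArgumentNode("Object ahead is a pedestrian")
root.children = [
    ArgumentNode("Vision subsystem: human-shaped silhouette"),
    ArgumentNode("LiDAR subsystem: object moving at 30 m/s", supports=False),
]
root.trace()
print("net support:", root.support())
print("counterfactuals:", root.counterfactuals())
```

Querying the tree this way shows both why a decision holds (its supporting branches) and what would have to be different for it to change (its contradicting branches), which is the role the explanation synthesizer plays in the architecture described above.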

Date
Jul 14, 2021
Location
remote