Anomaly Detection Through Explanations
Under most conditions, complex machines are imperfect. When errors occur, as they inevitably will, these machines need to be able to (1) localize the error and (2) take appropriate action to mitigate the repercussions of a possible failure. My thesis contributes a system architecture that reconciles local errors and inconsistencies amongst parts. I represent a complex machine as a hierarchical model of introspective subsystems working together towards a common goal. The subsystems communicate in a common symbolic language. In the process of this investigation, I constructed a set of reasonableness monitors to diagnose and explain local errors, and a system-wide architecture, Anomaly Detection through Explanations (ADE), which reconciles system-wide failures. The ADE architecture contributes an explanation synthesizer that produces an argument tree, which in turn can be backtracked and queried for support and counterfactual explanations. I have applied my results to explain incorrect labels in semi-autonomous vehicle data. A series of test simulations shows the accuracy and performance of this architecture on real-world, anomalous driving scenarios. My work has opened up the new area of explanatory anomaly detection, towards a vision in which complex machines will be articulate by design; dynamic, internal explanations will be part of the design criteria; and system-level explanations will be able to be challenged in an adversarial proceeding.

PhD Thesis