Development and Application of Fault Detectability Performance Metrics for Instrument Calibration Verification and Anomaly Detection
J. Wesley Hines, Dustin R. Garvey
Abstract
Traditionally, the calibration of safety-critical nuclear instrumentation has been performed during each refueling outage. However, many nuclear plants have moved toward condition-directed rather than time-directed calibration, accomplished through the use of on-line monitoring (OLM). OLM commonly uses an autoassociative empirical modeling architecture to assess instrument channel performance. An autoassociative architecture predicts a group of correct sensor values from a supplied group of measured sensor values that is usually corrupted with process and instrument noise and may also contain faults such as sensor drift or complete failure. This paper describes one such autoassociative model architecture, autoassociative kernel regression (AAKR), and presents five metrics that may be used to evaluate its performance: the previously developed accuracy, auto sensitivity, and cross sensitivity metrics, along with two new fault detectability performance metrics for application to instrument calibration verification (ICV) and anomaly detection. These metrics are calculated for an AAKR model of an operating nuclear power plant steam system and are used to describe the effects of model architecture on performance. It is shown that the ability of an empirical model to detect sensor faults in ICV systems depends largely on the model uncertainty and, to a lesser degree, on its auto sensitivity. It is also shown that the ability of an empirical model to detect anomalies via the Sequential Probability Ratio Test (SPRT) is likewise related to uncertainty, and that the SPRT detectability is on the order of 50% smaller than the ICV detectability. These results provide a framework for model development: models intended for ICV and anomaly detection tasks should focus on minimizing uncertainty.
Furthermore, the ICV and anomaly detection performance metrics are shown to be within the traditional ±1% calibration tolerance, and their performance under artificially faulted conditions is shown to be in direct agreement with their theoretical foundations.
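To make the autoassociative prediction step concrete, the following is a minimal illustrative sketch of an AAKR estimator, not code from the paper. The function name `aakr_predict` and the Gaussian-kernel bandwidth `h` are assumptions chosen for the example: a query vector of (possibly faulted) sensor readings is compared against a memory matrix of fault-free historical observations, and the corrected estimate is the kernel-weighted average of the memory vectors.

```python
import numpy as np

def aakr_predict(X_memory, x_query, h=1.0):
    """Illustrative AAKR sketch (names and bandwidth are assumptions).

    X_memory : (n, p) array of fault-free historical sensor observations
    x_query  : (p,) vector of current (possibly faulted) sensor readings
    h        : Gaussian kernel bandwidth
    Returns the (p,) corrected sensor estimate.
    """
    # Euclidean distance from the query to each memory vector
    d = np.linalg.norm(X_memory - x_query, axis=1)
    # Gaussian kernel converts distances to similarity weights
    w = np.exp(-d**2 / (2.0 * h**2))
    # Weighted average of memory vectors yields the corrected estimate
    return (w @ X_memory) / w.sum()

# Example: two perfectly correlated sensors; the second reading has drifted
X_mem = np.array([[1.0, 1.0], [2.0, 2.0], [3.0, 3.0]])
x_new = np.array([2.0, 2.4])  # sensor 2 reads high
estimate = aakr_predict(X_mem, x_new, h=0.5)
```

Because the estimate is a convex combination of fault-free memory vectors, a drifted input is pulled back toward the correlation structure of the training data, which is what allows the residual (measured minus predicted) to expose the fault.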