Development and Application of Fault Detectability Performance Metrics for Instrument Calibration Verification and Anomaly Detection
J. Wesley Hines, Dustin R. Garvey
JPRR Vol 1, No 1 (2006); doi:10.13176/11.5 
Abstract
Traditionally, the calibration of safety-critical nuclear instrumentation has been performed during each refueling outage. However, many nuclear plants have moved toward condition-directed rather than time-directed calibration. This condition-directed calibration is accomplished through on-line monitoring (OLM), which commonly uses an autoassociative empirical modeling architecture to assess instrument channel performance. An autoassociative architecture predicts a group of correct sensor values when supplied a group of sensor values that is usually corrupted with process and instrument noise and may also contain faults such as sensor drift or complete failure. This paper describes one such autoassociative model architecture, autoassociative kernel regression (AAKR), and presents five metrics that may be used to evaluate its performance. These include the previously developed accuracy, auto sensitivity, and cross sensitivity metrics, along with two new fault detectability performance metrics for instrument calibration verification (ICV) and anomaly detection. These metrics are calculated for an AAKR model of an operating nuclear power plant steam system and used to describe the effects of model architecture on performance. It is shown that the ability of an empirical model to detect sensor faults in ICV systems depends largely on the model uncertainty and, to a lesser degree, on its auto sensitivity. It is also shown that the ability of an empirical model to detect anomalies via the Sequential Probability Ratio Test (SPRT) is likewise related to uncertainty, and that the SPRT detectability is on the order of 50% smaller than the ICV detectability. These results provide a framework for model development: models intended for ICV and anomaly detection tasks should focus on minimizing uncertainty. Furthermore, the ICV and anomaly detection performance metrics are shown to be within the traditional +/-1% calibration tolerance, and their performance under artificially faulted conditions is shown to be in direct agreement with their theoretical foundations.
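To make the autoassociative architecture concrete, the following is a minimal sketch of an AAKR estimator of the kind described in the abstract, assuming a Gaussian kernel over Euclidean distances to a memory matrix of fault-free historical observations; the function name aakr_predict and the bandwidth parameter are illustrative, not taken from the paper.

```python
import numpy as np

def aakr_predict(memory, queries, bandwidth=1.0):
    """Minimal autoassociative kernel regression (AAKR) sketch.

    memory    : (n_mem, n_sensors) matrix of fault-free historical observations
    queries   : (n_obs, n_sensors) matrix of possibly noisy/faulted observations
    bandwidth : Gaussian kernel bandwidth h (tuning parameter, assumed here)

    Returns an (n_obs, n_sensors) matrix of corrected sensor estimates.
    """
    preds = np.empty_like(queries, dtype=float)
    for i, x in enumerate(queries):
        # Euclidean distance from the query vector to every memory vector
        d = np.linalg.norm(memory - x, axis=1)
        # Gaussian kernel weights: nearby memory vectors dominate the estimate
        w = np.exp(-d**2 / (2.0 * bandwidth**2))
        # Weighted average of memory vectors yields the "correct" sensor values
        preds[i] = w @ memory / w.sum()
    return preds
```

Residuals between the query observations and these predictions would then feed the fault detectability analysis (e.g., ICV limit checks or an SPRT), with the bandwidth and memory vector selection acting as the model architecture choices whose effect on uncertainty the paper examines.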