Science.gov

Sample records for sequential anomaly detection

  1. An anomaly detection and isolation scheme with instance-based learning and sequential analysis

    SciTech Connect

    Yoo, T. S.; Garcia, H. E.

    2006-07-01

This paper presents an online anomaly detection and isolation (FDI) technique using an instance-based learning method combined with a sequential change detection and isolation algorithm. The proposed method uses kernel density estimation techniques to build statistical models of the given empirical data (null hypothesis). The null hypothesis is associated with a set of alternative hypotheses modeling the abnormalities of the system. The decision procedure involves a sequential change detection and isolation algorithm. Notably, the proposed method enjoys asymptotic optimality, as the applied change detection and isolation algorithm is optimal in minimizing the worst mean detection/isolation delay for a given mean time before a false alarm or a false isolation. The applicability and performance of this methodology are illustrated with a redundant sensor data set. (authors)
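
    A minimal, self-contained Python sketch of the general idea (not the authors' implementation): a kernel density estimate of the nominal data serves as the null model, and a CUSUM-style statistic accumulates log-likelihood ratios against an alternative density. The alternative density, threshold, and data below are hypothetical stand-ins, not values from the paper.

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)
normal_data = rng.normal(0.0, 1.0, size=2000)        # empirical "null" sample
null_kde = gaussian_kde(normal_data)                  # null-hypothesis density
alt_kde = gaussian_kde(rng.normal(3.0, 1.0, 500))     # hypothetical abnormal-mode density

def sequential_detect(stream, threshold=10.0):
    """Alarm at the first time the cumulative log-likelihood ratio exceeds the threshold."""
    score = 0.0
    for t, x in enumerate(stream):
        llr = np.log(alt_kde(x)[0] + 1e-300) - np.log(null_kde(x)[0] + 1e-300)
        score = max(0.0, score + llr)                 # CUSUM-style recursion
        if score > threshold:
            return t                                  # detection time
    return None

stream = np.concatenate([rng.normal(0, 1, 300), rng.normal(3, 1, 50)])
print(sequential_detect(stream))                      # alarms shortly after sample 300
```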

  2. Anomaly Detection in Dynamic Networks

    SciTech Connect

    Turcotte, Melissa

    2014-10-14

    Anomaly detection in dynamic communication networks has many important security applications. These networks can be extremely large and so detecting any changes in their structure can be computationally challenging; hence, computationally fast, parallelisable methods for monitoring the network are paramount. For this reason the methods presented here use independent node and edge based models to detect locally anomalous substructures within communication networks. As a first stage, the aim is to detect changes in the data streams arising from node or edge communications. Throughout the thesis simple, conjugate Bayesian models for counting processes are used to model these data streams. A second stage of analysis can then be performed on a much reduced subset of the network comprising nodes and edges which have been identified as potentially anomalous in the first stage. The first method assumes communications in a network arise from an inhomogeneous Poisson process with piecewise constant intensity. Anomaly detection is then treated as a changepoint problem on the intensities. The changepoint model is extended to incorporate seasonal behavior inherent in communication networks. This seasonal behavior is also viewed as a changepoint problem acting on a piecewise constant Poisson process. In a static time frame, inference is made on this extended model via a Gibbs sampling strategy. In a sequential time frame, where the data arrive as a stream, a novel, fast Sequential Monte Carlo (SMC) algorithm is introduced to sample from the sequence of posterior distributions of the change points over time. A second method is considered for monitoring communications in a large scale computer network. The usage patterns in these types of networks are very bursty in nature and don’t fit a Poisson process model. For tractable inference, discrete time models are considered, where the data are aggregated into discrete time periods and probability models are fitted to the
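
    The thesis's full models (seasonal changepoints, SMC updating) go well beyond an abstract-level sketch, but the underlying conjugate Gamma-Poisson machinery can be illustrated compactly. The sketch below, with hypothetical priors and synthetic data, scores each possible changepoint in a stream of counts by a Bayes factor computed from closed-form marginal likelihoods.

```python
import numpy as np
from scipy.special import gammaln

def log_marginal_poisson(x, a=1.0, b=1.0):
    """Log marginal likelihood of Poisson counts under a conjugate Gamma(a, b) prior on the rate."""
    x = np.asarray(x)
    n, s = len(x), x.sum()
    return (a * np.log(b) - gammaln(a) + gammaln(a + s)
            - (a + s) * np.log(b + n) - gammaln(x + 1).sum())

def changepoint_log_bayes_factors(x):
    """Log Bayes factor of 'rate changes at k' versus 'single rate', for each split point k."""
    full = log_marginal_poisson(x)
    return [log_marginal_poisson(x[:k]) + log_marginal_poisson(x[k:]) - full
            for k in range(1, len(x))]

rng = np.random.default_rng(0)
counts = np.concatenate([rng.poisson(2, 50), rng.poisson(8, 20)])
lbf = changepoint_log_bayes_factors(counts)
print(int(np.argmax(lbf)) + 1)      # most plausible change location (true change at 50)
```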

  3. Automated anomaly detection processor

    NASA Astrophysics Data System (ADS)

    Kraiman, James B.; Arouh, Scott L.; Webb, Michael L.

    2002-07-01

    Robust exploitation of tracking and surveillance data will provide an early warning and cueing capability for military and civilian Law Enforcement Agency operations. This will improve dynamic tasking of limited resources and hence operational efficiency. The challenge is to rapidly identify threat activity within a huge background of noncombatant traffic. We discuss development of an Automated Anomaly Detection Processor (AADP) that exploits multi-INT, multi-sensor tracking and surveillance data to rapidly identify and characterize events and/or objects of military interest, without requiring operators to specify threat behaviors or templates. The AADP has successfully detected an anomaly in traffic patterns in Los Angeles, analyzed ship track data collected during a Fleet Battle Experiment to detect simulated mine laying behavior amongst maritime noncombatants, and is currently under development for surface vessel tracking within the Coast Guard's Vessel Traffic Service to support port security, ship inspection, and harbor traffic control missions, and to monitor medical surveillance databases for early alert of a bioterrorist attack. The AADP can also be integrated into combat simulations to enhance model fidelity of multi-sensor fusion effects in military operations.

  4. Seismic data fusion anomaly detection

    NASA Astrophysics Data System (ADS)

    Harrity, Kyle; Blasch, Erik; Alford, Mark; Ezekiel, Soundararajan; Ferris, David

    2014-06-01

Detecting anomalies in non-stationary signals has valuable applications in many fields including medicine and meteorology, such as identifying possible heart conditions from electrocardiography (ECG) signals or predicting earthquakes via seismographic data. Given the many choices of anomaly detection algorithms, it is important to compare possible methods. In this paper, we examine and compare two approaches to anomaly detection and see how data fusion methods may improve performance. The first approach involves using an artificial neural network (ANN) to detect anomalies in a wavelet de-noised signal. The other method uses a perspective neural network (PNN) to analyze an arbitrary number of "perspectives" or transformations of the observed signal for anomalies. Possible perspectives include wavelet de-noising, the Fourier transform, peak-filtering, etc. In order to evaluate these techniques via signal fusion metrics, we must apply signal preprocessing techniques such as de-noising methods to the original signal and then use a neural network to find anomalies in the generated signal. From this secondary result it is possible to use data fusion techniques that can be evaluated via existing data fusion metrics for single and multiple perspectives. The result will show which anomaly detection method, according to the metrics, is better suited overall for anomaly detection applications. The method used in this study could be applied to compare other signal processing algorithms.
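
    As an illustration of the preprocessing stage, the sketch below (assumed, not the paper's code; it uses the PyWavelets package) de-noises a signal by wavelet soft-thresholding and then flags large residuals, a simple stand-in for the ANN/PNN anomaly stages described above.

```python
import numpy as np
import pywt  # PyWavelets

def wavelet_denoise(signal, wavelet="db4", level=4):
    """Soft-threshold the detail coefficients (universal threshold) and reconstruct."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745            # robust noise estimate
    thresh = sigma * np.sqrt(2.0 * np.log(len(signal)))       # universal threshold
    coeffs[1:] = [pywt.threshold(c, thresh, mode="soft") for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)[: len(signal)]

def flag_anomalies(signal, k=4.0):
    """Flag samples whose residual after de-noising exceeds k standard deviations."""
    resid = signal - wavelet_denoise(signal)
    return np.abs(resid) > k * resid.std()
```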

  5. Model selection for anomaly detection

    NASA Astrophysics Data System (ADS)

    Burnaev, E.; Erofeev, P.; Smolyakov, D.

    2015-12-01

Anomaly detection based on one-class classification algorithms is broadly used in many applied domains like image processing (e.g. detection of whether a patient is "cancerous" or "healthy" from a mammography image), network intrusion detection, etc. Performance of an anomaly detection algorithm crucially depends on the kernel used to measure similarity in a feature space. The standard approaches (e.g. cross-validation) for kernel selection, used in two-class classification problems, cannot be used directly due to the specific nature of the data (absence of data from a second, abnormal class). In this paper we generalize several kernel selection methods from the binary-class case to the case of one-class classification and perform an extensive comparison of these approaches using both synthetic and real-world data.
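
    For context, the sketch below (assuming scikit-learn) shows the one-class setting the paper addresses: a one-class SVM must be fitted with only normal data, so the RBF kernel width cannot be tuned by ordinary cross-validation. The surrogate score used here is a naive placeholder, not one of the paper's proposed selection criteria.

```python
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(1)
X_train = rng.normal(size=(500, 2))                 # "normal" class only, no anomalies

def acceptance_rate(gamma, nu=0.05):
    """Naive surrogate criterion: fraction of training points the fitted model accepts."""
    model = OneClassSVM(kernel="rbf", gamma=gamma, nu=nu).fit(X_train)
    return float((model.predict(X_train) == 1).mean())

for gamma in [0.01, 0.1, 1.0, 10.0]:                # candidate kernel widths
    print(gamma, acceptance_rate(gamma))
```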

  6. Sequential detection of web defects

    DOEpatents

    Eichel, Paul H.; Sleefe, Gerard E.; Stalker, K. Terry; Yee, Amy A.

    2001-01-01

A system for detecting defects on a moving web having a sequential series of identical frames uses an imaging device to form a real-time camera image of a frame and a comparator to compare elements of the camera image with corresponding elements of an image of an exemplar frame. The comparator provides an acceptable indication if the pair of elements are determined to be statistically identical, and a defective indication if the pair of elements are determined to be statistically not identical. If the pair of elements is neither acceptable nor defective, the comparator recursively compares the element of said exemplar frame with corresponding elements of other frames on said web until one of the acceptable or defective indications occurs.

  7. Survey of Anomaly Detection Methods

    SciTech Connect

    Ng, B

    2006-10-12

This survey defines the problem of anomaly detection and provides an overview of existing methods. The methods are categorized into two general classes: generative and discriminative. A generative approach involves building a model that represents the joint distribution of the input features and the output labels of system behavior (e.g., normal or anomalous) and then applying the model to formulate a decision rule for detecting anomalies. On the other hand, a discriminative approach aims directly to find the decision rule, with the smallest error rate, that distinguishes between normal and anomalous behavior. For each approach, we give an overview of popular techniques and provide references to state-of-the-art applications.

  8. Efficient Computer Network Anomaly Detection by Changepoint Detection Methods

    NASA Astrophysics Data System (ADS)

    Tartakovsky, Alexander G.; Polunchenko, Aleksey S.; Sokolov, Grigory

    2013-02-01

We consider the problem of efficient on-line anomaly detection in computer network traffic. The problem is approached statistically, as that of sequential (quickest) changepoint detection. A multi-cyclic setting of quickest change detection is a natural fit for this problem. We propose a novel score-based multi-cyclic detection algorithm. The algorithm is based on the so-called Shiryaev-Roberts procedure. This procedure is as easy to employ in practice and as computationally inexpensive as the popular Cumulative Sum chart and the Exponentially Weighted Moving Average scheme. The likelihood-ratio-based Shiryaev-Roberts procedure has appealing optimality properties; in particular, it is exactly optimal in a multi-cyclic setting geared to detect a change occurring at a far time horizon. It is therefore expected that an intrusion detection algorithm based on the Shiryaev-Roberts procedure will perform better than other detection schemes. This is confirmed experimentally on real traces. We also discuss the possibility of complementing our anomaly detection algorithm with a spectral-signature intrusion detection system with false alarm filtering and true attack confirmation capability, so as to obtain a synergistic system.
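
    A minimal sketch of the Shiryaev-Roberts recursion for a fully specified Gaussian pre-/post-change model is shown below; the paper's score-based, multi-cyclic variant for network traffic is more involved, and the parameters here are illustrative only.

```python
import numpy as np

def shiryaev_roberts(observations, mu0=0.0, mu1=1.0, sigma=1.0, threshold=1e4):
    """Alarm when R_n = (1 + R_{n-1}) * L_n first crosses the threshold A."""
    r = 0.0
    for n, x in enumerate(observations, start=1):
        # likelihood ratio of x under the post-change vs pre-change Gaussian density
        lr = np.exp((mu1 - mu0) * (x - (mu0 + mu1) / 2.0) / sigma**2)
        r = (1.0 + r) * lr
        if r >= threshold:
            return n                                  # stopping (detection) time
    return None
```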

  9. Conscious and unconscious detection of semantic anomalies.

    PubMed

    Hannon, Brenda

    2015-01-01

When asked "What superhero is associated with bats, Robin, the Penguin, Metropolis, Catwoman, the Riddler, the Joker, and Mr. Freeze?" people frequently fail to notice the anomalous word Metropolis. The goals of this study were to determine whether detection of semantic anomalies, like Metropolis, is conscious or unconscious and whether this detection is immediate or delayed. To achieve these goals, participants answered anomalous and nonanomalous questions as their reading times for words were recorded. Comparisons between detected versus undetected anomalies revealed slower reading times for detected anomalies, a finding that suggests that people immediately and consciously detected anomalies. Further, comparisons between first and second words following undetected anomalies versus nonanomalous controls revealed some slower reading times for first and second words, a finding that suggests that people may have unconsciously detected anomalies but this detection was delayed. Taken together, these findings support the idea that when we are immediately aware of a semantic anomaly (i.e., immediate conscious detection) our language processes make immediate adjustments in order to reconcile the contradictory information of anomalies with the surrounding text; however, even when we are not consciously aware of semantic anomalies, our language processes still make these adjustments, although these adjustments are delayed (i.e., delayed unconscious detection). PMID:25624136

  10. Data Mining for Anomaly Detection

    NASA Technical Reports Server (NTRS)

    Biswas, Gautam; Mack, Daniel; Mylaraswamy, Dinkar; Bharadwaj, Raj

    2013-01-01

The Vehicle Integrated Prognostics Reasoner (VIPR) program describes methods for enhanced diagnostics as well as a prognostic extension to the current state-of-the-art Aircraft Diagnostic and Maintenance System (ADMS). VIPR introduced a new anomaly detection function for discovering previously undetected and undocumented situations, where there are clear deviations from nominal behavior. Once a baseline (nominal model of operations) is established, the detection and analysis is split between on-aircraft outlier generation and off-aircraft expert analysis to characterize and classify events that may not have been anticipated by individual system providers. Offline expert analysis is supported by data curation and data mining algorithms that can be applied in both supervised and unsupervised learning contexts. In this report, we discuss efficient methods to implement the Kolmogorov complexity measure using compression algorithms, and run a systematic empirical analysis to determine the best compression measure. Our experiments established that the combination of the DZIP compression algorithm and the CiDM distance measure provides the best results for capturing relevant properties of time series data encountered in aircraft operations. This combination was used as the basis for developing an unsupervised learning algorithm to define "nominal" flight segments using historical flight segments.
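
    The DZIP/CiDM combination reported above is specific to that work; as a generic illustration of compression-based dissimilarity, the sketch below computes the normalized compression distance (NCD) with zlib, one standard way to approximate Kolmogorov-complexity-based distances.

```python
import zlib

def ncd(x: bytes, y: bytes) -> float:
    """Normalized compression distance: (C(xy) - min(C(x), C(y))) / max(C(x), C(y))."""
    cx, cy = len(zlib.compress(x)), len(zlib.compress(y))
    cxy = len(zlib.compress(x + y))
    return (cxy - min(cx, cy)) / max(cx, cy)

a = b"0101010101010101" * 32
b = b"0101010101010101" * 32
c = bytes(range(256)) * 2
print(ncd(a, b), ncd(a, c))   # the similar pair scores lower than the dissimilar pair
```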

  11. A New, Principled Approach to Anomaly Detection

    SciTech Connect

    Ferragut, Erik M; Laska, Jason A; Bridges, Robert A

    2012-01-01

Intrusion detection is often described as having two main approaches: signature-based and anomaly-based. We argue that only unsupervised methods are suitable for detecting anomalies. However, there has been a tendency in the literature to conflate the notion of an anomaly with the notion of a malicious event. As a result, the methods used to discover anomalies have typically been ad hoc, making it nearly impossible to systematically compare models or regulate the number of alerts. We propose a new, principled approach to anomaly detection that addresses the main shortcomings of ad hoc approaches. We provide both theoretical and cyber-specific examples to demonstrate the benefits of our more principled approach.

  12. Analytic sequential methods for detecting network intrusions

    NASA Astrophysics Data System (ADS)

    Chen, Xinjia; Walker, Ernest

    2014-05-01

In this paper, we propose analytic sequential methods for detecting port-scan attackers, which routinely perform random "portscans" of IP addresses to find vulnerable servers to compromise. In addition to rigorously controlling the probability of falsely implicating benign remote hosts as malicious, our method performs significantly faster than other current solutions. We have developed explicit formulae for quick determination of the parameters of the new detection algorithm.
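
    The paper's analytic formulae are not reproduced here; for orientation, the sketch below shows a classic sequential probability ratio test (SPRT) of the kind underlying Threshold-Random-Walk-style portscan detectors, with illustrative success probabilities and error rates.

```python
import math

def sprt(outcomes, theta_benign=0.8, theta_scanner=0.2, alpha=0.01, beta=0.01):
    """outcomes: iterable of booleans (True = connection attempt succeeded).
    Returns 'scanner', 'benign', or 'undecided' for the remote host."""
    low = math.log(beta / (1 - alpha))       # accept-benign boundary
    high = math.log((1 - beta) / alpha)      # declare-scanner boundary
    llr = 0.0
    for ok in outcomes:
        if ok:
            llr += math.log(theta_scanner / theta_benign)
        else:
            llr += math.log((1 - theta_scanner) / (1 - theta_benign))
        if llr >= high:
            return "scanner"
        if llr <= low:
            return "benign"
    return "undecided"

print(sprt([False] * 8))   # a run of failed connections triggers the scanner decision
```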

  13. A sequential framework for image change detection.

    PubMed

    Lingg, Andrew J; Zelnio, Edmund; Garber, Fred; Rigling, Brian D

    2014-05-01

    We present a sequential framework for change detection. This framework allows us to use multiple images from reference and mission passes of a scene of interest in order to improve detection performance. It includes a change statistic that is easily updated when additional data becomes available. Detection performance using this statistic is predictable when the reference and image data are drawn from known distributions. We verify our performance prediction by simulation. Additionally, we show that detection performance improves with additional measurements on a set of synthetic aperture radar images and a set of visible images with unknown probability distributions. PMID:24818249

  14. Anomaly detection for internet surveillance

    NASA Astrophysics Data System (ADS)

    Bouma, Henri; Raaijmakers, Stephan; Halma, Arvid; Wedemeijer, Harry

    2012-06-01

Many threats in the real world can be related to the activity of persons on the internet. Internet surveillance aims to predict and prevent attacks and to assist in finding suspects based on information from the web. However, the amount of data on the internet increases rapidly, and it is time-consuming to monitor many websites. In this paper, we present a novel method to automatically monitor trends and find anomalies on the internet. The system was tested on Twitter data. The results showed that it can successfully recognize abnormal changes in activity or emotion.

  15. Modeling And Detecting Anomalies In Scada Systems

    NASA Astrophysics Data System (ADS)

    Svendsen, Nils; Wolthusen, Stephen

The detection of attacks and intrusions based on anomalies is hampered by the limits of specificity underlying the detection techniques. However, in the case of many critical infrastructure systems, domain-specific knowledge and models can impose constraints that potentially reduce error rates. At the same time, attackers can use their knowledge of system behavior to mask their manipulations, causing adverse effects to be observed only after a significant period of time. This paper describes elementary statistical techniques that can be applied to detect anomalies in critical infrastructure networks. A SCADA system employed in liquefied natural gas (LNG) production is used as a case study.

  16. Spectral anomaly detection in deep shadows.

    PubMed

    Kanaev, Andrey V; Murray-Krezan, Jeremy

    2010-03-20

    Although several hyperspectral anomaly detection algorithms have proven useful when illumination conditions provide for enough light, many of these same detection algorithms fail to perform well when shadows are also present. To date, no general approach to the problem has been demonstrated. In this paper, a novel hyperspectral anomaly detection algorithm that adapts the dimensionality of the spectral detection subspace to multiple illumination levels is described. The novel detection algorithm is applied to reflectance domain hyperspectral data that represents a variety of illumination conditions: well illuminated and poorly illuminated (i.e., shadowed). Detection results obtained for objects located in deep shadows and light-shadow transition areas suggest superiority of the novel algorithm over standard subspace RX detection. PMID:20300158

  17. Anomaly Detection for Discrete Sequences: A Survey

    SciTech Connect

    Chandola, Varun; Banerjee, Arindam; Kumar, Vipin

    2012-01-01

    This survey attempts to provide a comprehensive and structured overview of the existing research for the problem of detecting anomalies in discrete/symbolic sequences. The objective is to provide a global understanding of the sequence anomaly detection problem and how existing techniques relate to each other. The key contribution of this survey is the classification of the existing research into three distinct categories, based on the problem formulation that they are trying to solve. These problem formulations are: 1) identifying anomalous sequences with respect to a database of normal sequences; 2) identifying an anomalous subsequence within a long sequence; and 3) identifying a pattern in a sequence whose frequency of occurrence is anomalous. We show how each of these problem formulations is characteristically distinct from each other and discuss their relevance in various application domains. We review techniques from many disparate and disconnected application domains that address each of these formulations. Within each problem formulation, we group techniques into categories based on the nature of the underlying algorithm. For each category, we provide a basic anomaly detection technique, and show how the existing techniques are variants of the basic technique. This approach shows how different techniques within a category are related or different from each other. Our categorization reveals new variants and combinations that have not been investigated before for anomaly detection. We also provide a discussion of relative strengths and weaknesses of different techniques. We show how techniques developed for one problem formulation can be adapted to solve a different formulation, thereby providing several novel adaptations to solve the different problem formulations. We also highlight the applicability of the techniques that handle discrete sequences to other related areas such as online anomaly detection and time series anomaly detection.

  18. Anomaly Detection Using Behavioral Approaches

    NASA Astrophysics Data System (ADS)

    Benferhat, Salem; Tabia, Karim

Behavioral approaches, which represent normal/abnormal activities, have been widely used in recent years in intrusion detection and computer security. Nevertheless, most works showed that they are ineffective for detecting novel attacks involving new behaviors. In this paper, we first study this recurring problem, due on the one hand to inadequate handling of anomalous and unusual audit events and on the other hand to insufficient decision rules which do not meet behavioral approach objectives. We then propose to enhance the standard decision rules in order to fit behavioral approach requirements and better detect novel attacks. Experimental studies carried out on real and simulated http traffic show that these enhanced decision rules improve the detection of most novel attacks without triggering higher false alarm rates.

  19. Hyperspectral Anomaly Detection in Urban Scenarios

    NASA Astrophysics Data System (ADS)

    Rejas Ayuga, J. G.; Martínez Marín, R.; Marchamalo Sacristán, M.; Bonatti, J.; Ojeda, J. C.

    2016-06-01

We have studied the spectral features of reflectance and emissivity in the pattern recognition of urban materials in several single hyperspectral scenes through a comparative analysis of anomaly detection methods and their relationship with city surfaces, with the aim of improving information extraction processes. Spectral ranges of the visible-near infrared (VNIR), shortwave infrared (SWIR) and thermal infrared (TIR) from hyperspectral data cubes of the AHS sensor and of HyMAP and MASTER over two cities, Alcalá de Henares (Spain) and San José (Costa Rica) respectively, have been used. In this research no prior knowledge of the targets is assumed; thus, the pixels are automatically separated according to their spectral information, significantly differentiated with respect to a background, either globally for the full scene, or locally by image segmentation. Several experiments on urban and semi-urban scenarios have been designed, analyzing the behaviour of the standard RX anomaly detector and of different methods based on subspace, image projection and segmentation-based anomaly detection. A new technique for anomaly detection in hyperspectral data called DATB (Detector of Anomalies from Thermal Background), based on dimensionality reduction by projecting targets with unknown spectral signatures onto a background calculated from thermal spectrum wavelengths, is presented. First results and their consequences for unsupervised classification and information extraction processes are discussed.
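
    For reference, the standard global RX detector used as the baseline above can be written in a few lines; the sketch below (assumed array shapes, no thresholding) scores each pixel by its Mahalanobis distance to the scene background statistics.

```python
import numpy as np

def rx_scores(cube):
    """cube: (rows, cols, bands) hyperspectral image -> per-pixel RX anomaly score."""
    h, w, b = cube.shape
    X = cube.reshape(-1, b).astype(float)
    mu = X.mean(axis=0)                                   # background mean spectrum
    cov_inv = np.linalg.pinv(np.cov(X, rowvar=False))     # (pseudo-)inverse covariance
    diff = X - mu
    scores = np.einsum("ij,jk,ik->i", diff, cov_inv, diff)
    return scores.reshape(h, w)
```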

  20. Detecting data anomalies methods in distributed systems

    NASA Astrophysics Data System (ADS)

    Mosiej, Lukasz

    2009-06-01

Distributed systems have become the most popular systems in big companies. Nowadays many telecommunications companies want to hold large volumes of data about all customers. Obviously, those data cannot be stored in a single database because of many technical difficulties, such as data access efficiency, security reasons, etc. On the other hand, there is no need to hold all data in one place, because companies already have dedicated systems to perform specific tasks. In distributed systems there is a redundancy of data, and each system holds only the data of interest, in an appropriate form. Data updated in one system should also be updated in the other systems which hold those data. There are technical problems with updating those data in all systems in a transactional way. This article is about data anomalies in distributed systems. Available data anomaly detection methods are presented. Furthermore, an initial concept for new data anomaly detection methods is described in the last section.

  1. Network Anomaly Detection Based on Wavelet Analysis

    NASA Astrophysics Data System (ADS)

    Lu, Wei; Ghorbani, Ali A.

    2008-12-01

Signal processing techniques have been applied recently for analyzing and detecting network anomalies due to their potential to find novel or unknown intrusions. In this paper, we propose a new network signal modelling technique for detecting network anomalies, combining wavelet approximation and system identification theory. In order to characterize network traffic behaviors, we present fifteen features and use them as the input signals in our system. We then evaluate our approach with the 1999 DARPA intrusion detection dataset and conduct a comprehensive analysis of the intrusions in the dataset. Evaluation results show that the approach achieves high detection rates in terms of both attack instances and attack types. Furthermore, we conduct a full day's evaluation in a real large-scale WiFi ISP network where five attack types are successfully detected from over 30 million flows.

  2. Fusion and normalization to enhance anomaly detection

    NASA Astrophysics Data System (ADS)

    Mayer, R.; Atkinson, G.; Antoniades, J.; Baumback, M.; Chester, D.; Edwards, J.; Goldstein, A.; Haas, D.; Henderson, S.; Liu, L.

    2009-05-01

This study examines normalizing the imagery and the optimization metrics to enhance anomaly and change detection, respectively. The RX algorithm, the standard anomaly detector for hyperspectral imagery, more successfully extracts bright rather than dark man-made objects when applied to visible hyperspectral imagery. However, normalizing the imagery prior to applying the anomaly detector can help detect some of the problematic dark objects, but can also miss some bright objects. This study jointly fuses images of RX applied to normalized and unnormalized imagery and has a single decision surface. The technique was tested using imagery of commercial vehicles in an urban environment gathered by a hyperspectral visible/near-IR sensor mounted on an airborne platform. Combining detections first requires converting the detector output to a target probability. The observed anomaly detections were fitted with a linear combination of chi-square distributions, and these weights were used to help compute the target probability. Receiver Operating Characteristic (ROC) curves quantitatively assessed the target detection performance. The target detection performance is highly variable depending on the relative number of candidate bright and dark targets and false alarms, controlled in this study by using vegetation and street line masks. The joint Boolean OR and AND operations also generate variable performance depending on the scene. The joint SUM operation provides a reasonable compromise between the OR and AND operations and has good target detection performance. In addition, normalization based on the correlation coefficient and least squares generates new transforms related to canonical correlation analysis (CCA) and normalized image regression (NIR). Transforms based on CCA and NIR performed better than the standard approaches. Only RX detection applied to the unnormalized difference imagery provides adequate change detection performance.

  3. Sequential decision rules for failure detection

    NASA Technical Reports Server (NTRS)

    Chow, E. Y.; Willsky, A. S.

    1981-01-01

The formulation of the decision making of a failure detection process as a Bayes sequential decision problem (BSDP) provides a simple conceptualization of the decision rule design problem. As the optimal Bayes rule is not computable, a methodology that is based on the Bayesian approach and aimed at a reduced computational requirement is developed for designing suboptimal rules. A numerical algorithm is constructed to facilitate the design and performance evaluation of these suboptimal rules. The result of applying this design methodology to an example shows that this approach is a useful one.

  4. Anomaly Detection Techniques for Ad Hoc Networks

    ERIC Educational Resources Information Center

    Cai, Chaoli

    2009-01-01

    Anomaly detection is an important and indispensable aspect of any computer security mechanism. Ad hoc and mobile networks consist of a number of peer mobile nodes that are capable of communicating with each other absent a fixed infrastructure. Arbitrary node movements and lack of centralized control make them vulnerable to a wide variety of…

  5. Adaptive sequential methods for detecting network intrusions

    NASA Astrophysics Data System (ADS)

    Chen, Xinjia; Walker, Ernest

    2013-06-01

In this paper, we propose new sequential methods for detecting port-scan attackers, which routinely perform random "portscans" of IP addresses to find vulnerable servers to compromise. In addition to rigorously controlling the probability of falsely implicating benign remote hosts as malicious, our method performs significantly faster than other current solutions. Moreover, our method guarantees that the maximum amount of observational time is bounded. In contrast to the previously most effective method, the Threshold Random Walk algorithm, which is explicit and analytical in nature, our proposed algorithm involves parameters to be determined by numerical methods. We have introduced computational techniques such as iterative minimax optimization for quick determination of the parameters of the new detection algorithm. A framework of multi-valued decisions for detecting portscanners and DoS attacks is also proposed.

  6. Automatic detection of anomalies in Space Shuttle Main Engine turbopumps

    NASA Astrophysics Data System (ADS)

    Lo, Ching F.; Whitehead, B. A.; Wu, Kewei

    1992-07-01

    A prototype expert system (developed on both PC and Symbolics 3670 lisp machine) for detecting anomalies in turbopump vibration data has been tested with data from ground tests 902-473, 902-501, 902-519, and 904-097 of the Space Shuttle Main Engine (SSME). The expert system has been utilized to analyze vibration data from each of the following SSME components: high-pressure oxidizer turbopump, high-pressure fuel turbopump, low-pressure fuel turbopump, and preburner boost pump. The expert system locates and classifies peaks in the power spectral density of each 0.4-sec window of steady-state data. Peaks representing the fundamental and harmonic frequencies of both shaft rotation and bearing cage rotation are identified by the expert system. Anomalies are then detected on the basis of sequential criteria and two threshold criteria set individually for the amplitude of each of these peaks: a prior threshold used during the first few windows of data in a test, and a posterior threshold used thereafter. In most cases the anomalies detected by the expert system agree with those reported by NASA. The two cases where there is significant disagreement will be further studied and the system design refined accordingly.

  7. Automatic detection of anomalies in Space Shuttle Main Engine turbopumps

    NASA Technical Reports Server (NTRS)

    Lo, Ching F.; Whitehead, B. A.; Wu, Kewei

    1992-01-01

    A prototype expert system (developed on both PC and Symbolics 3670 lisp machine) for detecting anomalies in turbopump vibration data has been tested with data from ground tests 902-473, 902-501, 902-519, and 904-097 of the Space Shuttle Main Engine (SSME). The expert system has been utilized to analyze vibration data from each of the following SSME components: high-pressure oxidizer turbopump, high-pressure fuel turbopump, low-pressure fuel turbopump, and preburner boost pump. The expert system locates and classifies peaks in the power spectral density of each 0.4-sec window of steady-state data. Peaks representing the fundamental and harmonic frequencies of both shaft rotation and bearing cage rotation are identified by the expert system. Anomalies are then detected on the basis of sequential criteria and two threshold criteria set individually for the amplitude of each of these peaks: a prior threshold used during the first few windows of data in a test, and a posterior threshold used thereafter. In most cases the anomalies detected by the expert system agree with those reported by NASA. The two cases where there is significant disagreement will be further studied and the system design refined accordingly.

  8. OPAD data analysis. [Optical Plumes Anomaly Detection

    NASA Technical Reports Server (NTRS)

    Buntine, Wray L.; Kraft, Richard; Whitaker, Kevin; Cooper, Anita E.; Powers, W. T.; Wallace, Tim L.

    1993-01-01

    Data obtained in the framework of an Optical Plume Anomaly Detection (OPAD) program intended to create a rocket engine health monitor based on spectrometric detections of anomalous atomic and molecular species in the exhaust plume are analyzed. The major results include techniques for handling data noise, methods for registration of spectra to wavelength, and a simple automatic process for estimating the metallic component of a spectrum.

  9. Sequential Bayesian Detection: A Model-Based Approach

    SciTech Connect

    Sullivan, E J; Candy, J V

    2007-08-13

Sequential detection theory has been known for a long time, evolving from Wald's work in the late 1940s and Middleton's classic exposition in the 1960s, coupled with the concurrent enabling technology of digital computer systems and the development of sequential processors. Its development, when coupled to modern sequential model-based processors, offers a reasonable way to attack physics-based problems. In this chapter, the fundamentals of sequential detection are reviewed from the Neyman-Pearson theoretical perspective and formulated for both linear and nonlinear (approximate) Gauss-Markov, state-space representations. We review the development of modern sequential detectors and incorporate the sequential model-based processors as an integral part of their solution. Motivated by a wealth of physics-based detection problems, we show how both linear and nonlinear processors can seamlessly be embedded into the sequential detection framework to provide a powerful approach to solving non-stationary detection problems.

  10. Sequential Bayesian Detection: A Model-Based Approach

    SciTech Connect

    Candy, J V

    2008-12-08

Sequential detection theory has been known for a long time, evolving from Wald's work in the late 1940s and Middleton's classic exposition in the 1960s, coupled with the concurrent enabling technology of digital computer systems and the development of sequential processors. Its development, when coupled to modern sequential model-based processors, offers a reasonable way to attack physics-based problems. In this chapter, the fundamentals of sequential detection are reviewed from the Neyman-Pearson theoretical perspective and formulated for both linear and nonlinear (approximate) Gauss-Markov, state-space representations. We review the development of modern sequential detectors and incorporate the sequential model-based processors as an integral part of their solution. Motivated by a wealth of physics-based detection problems, we show how both linear and nonlinear processors can seamlessly be embedded into the sequential detection framework to provide a powerful approach to solving non-stationary detection problems.

  11. Gravity anomaly detection: Apollo/Soyuz

    NASA Technical Reports Server (NTRS)

    Vonbun, F. O.; Kahn, W. D.; Bryan, J. W.; Schmid, P. E.; Wells, W. T.; Conrad, D. T.

    1976-01-01

The Goddard Apollo-Soyuz Geodynamics Experiment is described. It was performed to demonstrate the feasibility of tracking and recovering high frequency components of the earth's gravity field by utilizing a synchronous orbiting tracking station such as ATS-6. Gravity anomalies of 5 mgal or larger having wavelengths of 300 to 1000 kilometers on the earth's surface are important for geologic studies of the upper layers of the earth's crust. Short-wavelength gravity anomalies of the Earth were detected from space. Two prime areas of data collection were selected for the experiment: (1) the center of the African continent and (2) the Indian Ocean Depression centered at 5° north latitude and 75° east longitude. Preliminary results show that the detectability objective of the experiment was met in both areas as well as at several additional anomalous areas around the globe. Gravity anomalies of the Karakoram and Himalayan mountain ranges, ocean trenches, as well as the Diamantina Depth, can be seen. Maps outlining the anomalies discovered are shown.

  12. System and method for anomaly detection

    DOEpatents

    Scherrer, Chad

    2010-06-15

    A system and method for detecting one or more anomalies in a plurality of observations is provided. In one illustrative embodiment, the observations are real-time network observations collected from a stream of network traffic. The method includes performing a discrete decomposition of the observations, and introducing derived variables to increase storage and query efficiencies. A mathematical model, such as a conditional independence model, is then generated from the formatted data. The formatted data is also used to construct frequency tables which maintain an accurate count of specific variable occurrence as indicated by the model generation process. The formatted data is then applied to the mathematical model to generate scored data. The scored data is then analyzed to detect anomalies.

  13. Sequential detection of learning in cognitive diagnosis.

    PubMed

    Ye, Sangbeak; Fellouris, Georgios; Culpepper, Steven; Douglas, Jeff

    2016-05-01

    In order to look more closely at the many particular skills examinees utilize to answer items, cognitive diagnosis models have received much attention, and perhaps are preferable to item response models that ordinarily involve just one or a few broadly defined skills, when the objective is to hasten learning. If these fine-grained skills can be identified, a sharpened focus on learning and remediation can be achieved. The focus here is on how to detect when learning has taken place for a particular attribute and efficiently guide a student through a sequence of items to ultimately attain mastery of all attributes while administering as few items as possible. This can be seen as a problem in sequential change-point detection for which there is a long history and a well-developed literature. Though some ad hoc rules for determining learning may be used, such as stopping after M consecutive items have been successfully answered, more efficient methods that are optimal under various conditions are available. The CUSUM, Shiryaev-Roberts and Shiryaev procedures can dramatically reduce the time required to detect learning while maintaining rigorous Type I error control, and they are studied in this context through simulation. Future directions for modelling and detection of learning are discussed. PMID:26931602
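
    As a toy illustration of the change-point machinery discussed above, the sketch below runs a Bernoulli CUSUM on a sequence of 0/1 item scores to detect an upward shift in the probability of a correct response; the pre-/post-learning probabilities and threshold are illustrative, not values from the paper.

```python
import math

def bernoulli_cusum(responses, p0=0.3, p1=0.8, threshold=3.0):
    """responses: iterable of 0/1 item scores. Returns the item index at detection, else None."""
    s = 0.0
    for n, y in enumerate(responses, start=1):
        llr = y * math.log(p1 / p0) + (1 - y) * math.log((1 - p1) / (1 - p0))
        s = max(0.0, s + llr)                 # CUSUM recursion
        if s >= threshold:
            return n                          # learning declared after item n
    return None

print(bernoulli_cusum([0, 1, 0, 0, 1, 1, 1, 1, 1, 1]))   # detects after the run of successes
```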

  14. Sequential Changepoint Approach for Online Community Detection

    NASA Astrophysics Data System (ADS)

    Marangoni-Simonsen, David; Xie, Yao

    2015-08-01

    We present new algorithms for detecting the emergence of a community in large networks from sequential observations. The networks are modeled using Erdos-Renyi random graphs with edges forming between nodes in the community with higher probability. Based on statistical changepoint detection methodology, we develop three algorithms: the Exhaustive Search (ES), the mixture, and the Hierarchical Mixture (H-Mix) methods. Performance of these methods is evaluated by the average run length (ARL), which captures the frequency of false alarms, and the detection delay. Numerical comparisons show that the ES method performs the best; however, it is exponentially complex. The mixture method is polynomially complex by exploiting the fact that the size of the community is typically small in a large network. However, it may react to a group of active edges that do not form a community. This issue is resolved by the H-Mix method, which is based on a dendrogram decomposition of the network. We present an asymptotic analytical expression for ARL of the mixture method when the threshold is large. Numerical simulation verifies that our approximation is accurate even in the non-asymptotic regime. Hence, it can be used to determine a desired threshold efficiently. Finally, numerical examples show that the mixture and the H-Mix methods can both detect a community quickly with a lower complexity than the ES method.

  15. Anomaly detection in the maritime domain

    NASA Astrophysics Data System (ADS)

    Roy, Jean

    2008-04-01

Defence R&D Canada is developing a Collaborative Knowledge Exploitation Framework (CKEF) to support the analysts in efficiently managing and exploiting relevant knowledge assets to achieve maritime domain awareness in joint operations centres of the Canadian Forces. While developing the CKEF, anomaly detection has been clearly recognized as an important aspect requiring R&D. An activity has thus been undertaken to implement, within the CKEF, a proof-of-concept prototype of a rule-based expert system to support the analysts regarding this aspect. This expert system has to perform automated reasoning and output recommendations (or alerts) about maritime anomalies, thereby supporting the identification of vessels of interest and threat analysis. The system must contribute to a lower false alarm rate and a better probability of detection by drawing operators' attention to vessels worthy of scrutiny. It must provide explanations as to why the vessels may be of interest, with links to resources that help the operators dig deeper. Mechanisms are necessary for the analysts to fine tune the system, and for the knowledge engineer to maintain the knowledge base as the expertise of the operators evolves. This paper portrays the anomaly detection prototype and describes the knowledge acquisition and elicitation session conducted to capture the know-how of the experts; the formal knowledge representation enablers and the ontology required for aspects of the maritime domain that are relevant to anomaly detection, vessels of interest, and threat analysis; the prototype high-level design and implementation on the service-oriented architecture of the CKEF; and other findings and results of this ongoing activity.

  16. Geomagnetic anomaly detected at hydromagnetic wave frequencies

    NASA Astrophysics Data System (ADS)

    Meloni, A.; Medford, L. V.; Lanzerotti, L. J.

    1985-04-01

We report the discovery, in northwestern Illinois, of a geomagnetic anomaly, using hydromagnetic wave frequencies as the source spectrum. Three portable magnetometer stations with computer-compatible digital data acquisition systems were operated in a longitude array at Plano and Ashton, Illinois, and Cascade, Iowa (total separation ~200 km), in 1981-1982. Analysis of the natural geomagnetic field fluctuations in the hydromagnetic wave regime reveals that the vertical components of the detected fluctuations are essentially 180° out of phase between Plano/Ashton and Cascade for variations with periods of ~30-120 s. The observations can be modeled in terms of a shallow (~10-20 km) north-south oriented geomagnetic anomaly of enhanced conductivity located between Ashton and Cascade, approximately parallel to the Mississippi River valley.

  17. Detecting syntactic and semantic anomalies in schizophrenia.

    PubMed

    Moro, Andrea; Bambini, Valentina; Bosia, Marta; Anselmetti, Simona; Riccaboni, Roberta; Cappa, Stefano F; Smeraldi, Enrico; Cavallaro, Roberto

    2015-12-01

One of the major challenges in the study of language in schizophrenia is to identify specific levels of the linguistic structure that might be selectively impaired. While historically a main semantic deficit has been widely claimed, results are mixed, with evidence also of syntactic impairment. This might be due to heterogeneity in materials and paradigms across studies, which often do not allow one to tap into single linguistic components. Moreover, the interaction between linguistic and neurocognitive deficits is still unclear. In this study, we concentrated on syntactic and semantic knowledge. We employed an anomaly detection task including short and long sentences with either syntactic errors violating the principles of Universal Grammar, or a novel form of semantic errors, resulting from a contradiction in the computation of the whole sentence meaning. Fifty-eight patients with a diagnosis of schizophrenia were compared to 30 healthy subjects. Results showed that, in patients, only the ability to identify syntactic anomalies, both in short and long sentences, was impaired. This result cannot be explained by working memory abilities or psychopathological features. These findings suggest the presence of an impairment of syntactic knowledge in schizophrenia, at least partially independent of the cognitive and psychopathological profile. On the contrary, we cannot conclude that there is a semantic impairment, at least in terms of compositional semantics abilities. PMID:26519554

  18. Recursive SAM-based band selection for hyperspectral anomaly detection

    NASA Astrophysics Data System (ADS)

    He, Yuanlei; Liu, Daizhi; Yi, Shihua

    2010-10-01

Band selection has been widely used in hyperspectral image processing for dimension reduction. In this paper, a recursive SAM-based band selection (RSAM-BBS) method is proposed. Once two initial bands are given, RSAM-BBS is performed in a sequential manner, and at each step the band that can best describe the spectral separation of two hyperspectral signatures is added to the bands already selected, until the spectral angle reaches its maximum. In order to demonstrate the utility of the proposed band selection method, an anomaly detection algorithm is developed, which first extracts the anomalous target spectrum from the original image using the automatic target detection and classification algorithm (ATDCA), followed by maximum spectral screening (MSS) to estimate the background average spectrum, and then implements RSAM-BBS to select the bands that participate in the subsequent adaptive cosine estimator (ACE) target detection. As shown in the experimental results on the AVIRIS dataset, fewer than five bands selected by RSAM-BBS can achieve detection performance comparable to using the full set of bands.
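
    The quantity driving the selection is the spectral angle between two signatures restricted to the chosen bands. The sketch below shows that computation together with a simplified greedy loop standing in for RSAM-BBS; the stopping rule (a fixed band budget) and the inputs are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np

def spectral_angle(s1, s2):
    """Spectral angle (radians) between two signatures over the same set of bands."""
    s1, s2 = np.asarray(s1, float), np.asarray(s2, float)
    cos = np.dot(s1, s2) / (np.linalg.norm(s1) * np.linalg.norm(s2))
    return float(np.arccos(np.clip(cos, -1.0, 1.0)))

def greedy_band_selection(target, background, init_bands, n_bands):
    """Greedily add the band that most increases the target/background spectral angle."""
    target, background = np.asarray(target, float), np.asarray(background, float)
    selected = list(init_bands)
    candidates = set(range(len(target))) - set(selected)
    while len(selected) < n_bands and candidates:
        best = max(candidates,
                   key=lambda b: spectral_angle(target[selected + [b]],
                                                background[selected + [b]]))
        selected.append(best)
        candidates.remove(best)
    return selected
```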

  19. Clustering and Recurring Anomaly Identification: Recurring Anomaly Detection System (ReADS)

    NASA Technical Reports Server (NTRS)

    McIntosh, Dawn

    2006-01-01

This viewgraph presentation reviews the Recurring Anomaly Detection System (ReADS). The Recurring Anomaly Detection System is a tool to analyze text reports, such as aviation reports and maintenance records: (1) text clustering algorithms group large quantities of reports and documents, reducing human error and fatigue; (2) it identifies interconnected reports, automating the discovery of possible recurring anomalies; and (3) it provides a visualization of the clusters and recurring anomalies. We have illustrated our techniques on data from Shuttle and ISS discrepancy reports, as well as ASRS data. ReADS has been integrated with a secure online search

  20. A model for anomaly classification in intrusion detection systems

    NASA Astrophysics Data System (ADS)

    Ferreira, V. O.; Galhardi, V. V.; Gonçalves, L. B. L.; Silva, R. C.; Cansian, A. M.

    2015-09-01

Intrusion Detection Systems (IDS) are traditionally divided into two types according to the detection methods they employ, namely (i) misuse detection and (ii) anomaly detection. Anomaly detection has been widely used, and its main advantage is the ability to detect new attacks. However, the analysis of the anomalies generated can become expensive, since they often carry no clear information about the malicious events they represent. In this context, this paper presents a model for automated classification of alerts generated by an anomaly-based IDS. The main goal is either to classify the detected anomalies into well-defined taxonomies of attacks or to identify whether an alert is a false positive misclassified by the IDS. Some common attacks on computer networks were considered, and we achieved important results that can equip security analysts with better resources for their analyses.

  1. Anomaly detection enhanced classification in computer intrusion detection

    SciTech Connect

    Fugate, M. L.; Gattiker, J. R.

    2002-01-01

This report describes work with the goal of enhancing capabilities in computer intrusion detection. The work builds upon a study of classification performance that compared various methods of classifying information derived from computer network packets into attack versus normal categories, based on a labeled training dataset. This previous work validates our classification methods and clears the ground for studying whether and how anomaly detection can be used to enhance this performance. The DARPA project that initiated the dataset used here concluded that anomaly detection should be examined to boost the performance of machine learning in the computer intrusion detection task. This report investigates the data set for aspects that will be valuable for anomaly detection application, and supports these results with models constructed from the data. In this report, the term anomaly detection means learning a model from unlabeled data, and using this to make some inference about future data. Our data is a feature vector derived from network packets: an 'example' or 'sample'. On the other hand, classification means building a model from labeled data, and using that model to classify unlabeled (future) examples. There is some precedent in the literature for combining these methods. One approach is to stage the two techniques, using anomaly detection to segment data into two sets for classification. An interpretation of this is a method to combat nonstationarity in the data. In our previous work, we demonstrated that the data has substantial temporal nonstationarity. With classification methods that can be thought of as learning a decision surface between two statistical distributions, performance is expected to degrade significantly when classifying examples that are from regions not well represented in the training set. Anomaly detection can be seen as a problem of learning the density (landscape) or the support (boundary) of a statistical distribution so that

  2. Statistical Anomaly Detection for Monitoring of Human Dynamics

    NASA Astrophysics Data System (ADS)

    Kamiya, K.; Fuse, T.

    2015-05-01

Understanding of human dynamics has drawn attention in various areas. Due to the widespread use of positioning technologies based on GPS or public Wi-Fi, location information can be obtained with high spatial-temporal resolution as well as at low cost. By collecting sets of individual location information in real time, monitoring of human dynamics is now considered possible and is expected to lead to dynamic traffic control in the future. Although this monitoring focuses on detecting anomalous states of human dynamics, anomaly detection methods have been developed ad hoc and are not fully systematized. This research aims to define an anomaly detection problem for human dynamics monitoring with gridded population data and to develop an anomaly detection method based on that definition. Based on a comprehensive review we have conducted, we discuss the characteristics of anomaly detection for human dynamics monitoring and categorize our problem as a semi-supervised anomaly detection problem that detects contextual anomalies behind time-series data. We developed an anomaly detection method based on a sticky HDP-HMM, which is able to estimate the number of hidden states according to the input data. Results of an experiment with synthetic data showed that our proposed method has good fundamental performance with respect to the detection rate. Through an experiment with real gridded population data, an anomaly was detected when and where an actual social event had occurred.

  3. Multicriteria Similarity-Based Anomaly Detection Using Pareto Depth Analysis.

    PubMed

    Hsiao, Ko-Jen; Xu, Kevin S; Calder, Jeff; Hero, Alfred O

    2016-06-01

We consider the problem of identifying patterns in a data set that exhibit anomalous behavior, often referred to as anomaly detection. Similarity-based anomaly detection algorithms detect abnormally large amounts of similarity or dissimilarity, e.g., as measured by the nearest neighbor Euclidean distances between a test sample and the training samples. In many application domains, there may not exist a single dissimilarity measure that captures all possible anomalous patterns. In such cases, multiple dissimilarity measures can be defined, including nonmetric measures, and one can test for anomalies by scalarizing using a nonnegative linear combination of them. If the relative importance of the different dissimilarity measures is not known in advance, as in many anomaly detection applications, the anomaly detection algorithm may need to be executed multiple times with different choices of weights in the linear combination. In this paper, we propose a method for similarity-based anomaly detection using a novel multicriteria dissimilarity measure, the Pareto depth. The proposed Pareto depth analysis (PDA) anomaly detection algorithm uses the concept of Pareto optimality to detect anomalies under multiple criteria without having to run an algorithm multiple times with different choices of weights. The proposed PDA approach is provably better than using linear combinations of the criteria, and shows superior performance in experiments with synthetic and real data sets. PMID:26336154

  4. Automated Network Anomaly Detection with Learning, Control and Mitigation

    ERIC Educational Resources Information Center

    Ippoliti, Dennis

    2014-01-01

    Anomaly detection is a challenging problem that has been researched within a variety of application domains. In network intrusion detection, anomaly based techniques are particularly attractive because of their ability to identify previously unknown attacks without the need to be programmed with the specific signatures of every possible attack.…

  5. Fluorescence sensor for sequential detection of zinc and phosphate ions.

    PubMed

    An, Miran; Kim, Bo-Yeon; Seo, Hansol; Helal, Aasif; Kim, Hong-Seok

    2016-12-01

    A new, highly selective turn-on fluorescent chemosensor based on 2-(2'-tosylamidophenyl)thiazole (1) for the detection of zinc and phosphate ions in ethanol was synthesized and characterized. Sensor 1 showed a high selectivity for zinc compared to other cations and sequentially detected hydrogen pyrophosphate and hydrogen phosphate. The fluorescence mechanism can be explained by two different mechanisms: (i) the inhibition of excited-state intramolecular proton transfer (ESIPT) and (ii) chelation-induced enhanced fluorescence by binding with Zn(2+). The sequential detection of phosphate anions was achieved by the quenching and subsequent revival of ESIPT. PMID:27343439

  6. Fluorescence sensor for sequential detection of zinc and phosphate ions

    NASA Astrophysics Data System (ADS)

    An, Miran; Kim, Bo-Yeon; Seo, Hansol; Helal, Aasif; Kim, Hong-Seok

    2016-12-01

A new, highly selective turn-on fluorescent chemosensor based on 2-(2'-tosylamidophenyl)thiazole (1) for the detection of zinc and phosphate ions in ethanol was synthesized and characterized. Sensor 1 showed a high selectivity for zinc compared to other cations and sequentially detected hydrogen pyrophosphate and hydrogen phosphate. The fluorescence mechanism can be explained by two different mechanisms: (i) the inhibition of excited-state intramolecular proton transfer (ESIPT) and (ii) chelation-induced enhanced fluorescence by binding with Zn2+. The sequential detection of phosphate anions was achieved by the quenching and subsequent revival of ESIPT.

  7. Network anomaly detection system with optimized DS evidence theory.

    PubMed

    Liu, Yuan; Wang, Xiaofeng; Liu, Kaiyu

    2014-01-01

    Network anomaly detection has attracted increasing attention with the rapid development of computer networks. Some researchers have applied fusion methods and DS evidence theory to network anomaly detection, but with low performance, and they did not account for the complicated and varied nature of network traffic. To achieve a high detection rate, we present a novel network anomaly detection system with optimized Dempster-Shafer evidence theory (ODS) and a regression basic probability assignment (RBPA) function. In this model, we add a weight for each sensor to optimize DS evidence theory according to its previous prediction accuracy, and RBPA employs each sensor's regression ability to address complex network behavior. Through four kinds of experiments, we find that our network anomaly detection model has a better detection rate, and that both the RBPA and ODS optimization methods improve system performance significantly. PMID:25254258
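
    A toy illustration of weighted evidence combination in this spirit (a simplified stand-in, not the paper's ODS/RBPA functions): two sensors report basic probability assignments over {normal, anomaly}, each discounted by an accuracy-derived weight before Dempster's rule is applied.

      def discount(bpa, w):
          # Discount a BPA over {normal 'n', anomaly 'a', ignorance 'theta'}
          # by a sensor weight w derived from its past accuracy (assumed).
          return {'n': w * bpa['n'], 'a': w * bpa['a'],
                  'theta': 1.0 - w * (bpa['n'] + bpa['a'])}

      def combine(m1, m2):
          # Dempster's rule of combination on the two-element frame.
          conflict = m1['n'] * m2['a'] + m1['a'] * m2['n']
          norm = 1.0 - conflict
          return {'n': (m1['n'] * m2['n'] + m1['n'] * m2['theta'] + m1['theta'] * m2['n']) / norm,
                  'a': (m1['a'] * m2['a'] + m1['a'] * m2['theta'] + m1['theta'] * m2['a']) / norm,
                  'theta': m1['theta'] * m2['theta'] / norm}

      s1 = discount({'n': 0.2, 'a': 0.7, 'theta': 0.1}, w=0.9)   # historically accurate sensor
      s2 = discount({'n': 0.5, 'a': 0.4, 'theta': 0.1}, w=0.6)   # weaker sensor
      print(combine(s1, s2))                                     # fused belief in an anomaly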

  8. Network Anomaly Detection System with Optimized DS Evidence Theory

    PubMed Central

    Liu, Yuan; Wang, Xiaofeng; Liu, Kaiyu

    2014-01-01

    Network anomaly detection has attracted increasing attention with the rapid development of computer networks. Some researchers have applied fusion methods and DS evidence theory to network anomaly detection, but with low performance, and they did not account for the complicated and varied nature of network traffic. To achieve a high detection rate, we present a novel network anomaly detection system with optimized Dempster-Shafer evidence theory (ODS) and a regression basic probability assignment (RBPA) function. In this model, we add a weight for each sensor to optimize DS evidence theory according to its previous prediction accuracy, and RBPA employs each sensor's regression ability to address complex network behavior. Through four kinds of experiments, we find that our network anomaly detection model has a better detection rate, and that both the RBPA and ODS optimization methods improve system performance significantly. PMID:25254258

  9. Post-processing for improving hyperspectral anomaly detection accuracy

    NASA Astrophysics Data System (ADS)

    Wu, Jee-Cheng; Jiang, Chi-Ming; Huang, Chen-Liang

    2015-10-01

    Anomaly detection is an important topic in the exploitation of hyperspectral data. Based on the Reed-Xiaoli (RX) detector and a morphology operator, this research proposes a novel technique for improving the accuracy of hyperspectral anomaly detection. Firstly, the RX-based detector is used to process a given input scene. Then, a post-processing scheme using a morphology operator is employed to detect those pixels around high-scoring anomaly pixels. Tests were conducted using two real hyperspectral images with ground truth information, and the results, based on receiver operating characteristic curves, illustrate that the proposed method reduced the false alarm rates of the RX-based detector.
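
    A minimal sketch of this two-stage idea, assuming a global RX detector and a 3x3 binary dilation of the thresholded detection map (the paper's exact morphology operator and threshold choice are not specified here):

      import numpy as np
      from scipy import ndimage

      def rx_scores(cube):
          # cube: (rows, cols, bands); returns the Mahalanobis distance of each
          # pixel spectrum from the global background mean and covariance.
          h, w, b = cube.shape
          X = cube.reshape(-1, b).astype(float)
          d = X - X.mean(axis=0)
          cov_inv = np.linalg.pinv(np.cov(X, rowvar=False))
          return np.einsum('ij,jk,ik->i', d, cov_inv, d).reshape(h, w)

      cube = np.random.rand(50, 50, 20)
      cube[25, 25, :] += 3.0                              # implanted anomalous pixel
      scores = rx_scores(cube)
      mask = scores > np.percentile(scores, 99.5)         # high-scoring anomaly pixels
      post = ndimage.binary_dilation(mask, structure=np.ones((3, 3)))  # also flag neighbours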

  10. Hyperspectral anomaly detection method based on auto-encoder

    NASA Astrophysics Data System (ADS)

    Bati, Emrecan; Çalışkan, Akın; Koz, Alper; Alatan, A. A.

    2015-10-01

    A major drawback of most of the existing hyperspectral anomaly detection methods is the lack of an efficient background representation, which can successfully adapt to the varying complexity of hyperspectral images. In this paper, we propose a novel anomaly detection method which represents the hyperspectral scenes of different complexity with the state-of-the-art representation learning method, namely the auto-encoder. The proposed method first encodes the spectral image into a sparse code, then decodes the coded image, and finally assesses the coding error at each pixel as a measure of anomaly. A Predictive Sparse Decomposition auto-encoder is utilized in the proposed anomaly detection method due to its efficient joint learning of the encoding and decoding functions. The performance of the proposed anomaly detection method is tested on both visible-near infrared (VNIR) and long wave infrared (LWIR) hyperspectral images and compared with the conventional anomaly detection method, namely the Reed-Xiaoli (RX) detector. The experiments have verified the superiority of the proposed anomaly detection method in terms of receiver operating characteristics (ROC) performance.
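
    As a minimal stand-in for the reconstruction-error idea (assuming a linear PCA coder in place of the Predictive Sparse Decomposition auto-encoder used in the paper), the per-pixel reconstruction error of the spectra serves as the anomaly measure:

      import numpy as np
      from sklearn.decomposition import PCA

      cube = np.random.rand(40, 40, 30)                    # toy (rows, cols, bands) scene
      X = cube.reshape(-1, cube.shape[-1])

      coder = PCA(n_components=5).fit(X)                   # "encode" spectra into a low-dim code
      recon = coder.inverse_transform(coder.transform(X))  # "decode" back to spectra
      anomaly_map = np.linalg.norm(X - recon, axis=1).reshape(cube.shape[:2])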

  11. Load characterization and anomaly detection for voice over IP traffic.

    PubMed

    Mandjes, Michel; Saniee, Iraj; Stolyar, Alexander L

    2005-09-01

    We consider the problem of traffic anomaly detection in IP networks. Traffic anomalies typically arise when there is focused overload or when a network element fails and it is desired to infer these purely from the measured traffic. We derive new general formulae for the variance of the cumulative traffic over a fixed time interval and show how the derived analytical expression simplifies for the case of voice over IP traffic, the focus of this paper. To detect load anomalies, we show it is sufficient to consider cumulative traffic over relatively long intervals such as 5 min. We also propose simple anomaly detection tests including detection of over/underload. This approach substantially extends the current practice in IP network management where only the first-order statistics and fixed thresholds are used to identify abnormal behavior. We conclude with the application of the scheme to field data from an operational network. PMID:16252813
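
    A hedged sketch of the kind of load test described above: call arrivals are aggregated into 5-minute windows and a window is flagged when its cumulative traffic deviates from a robust centre by more than three estimated standard deviations. The Poisson toy data, the Gaussian approximation and the factor of three are illustrative choices, not the paper's derived formulae.

      import numpy as np

      rng = np.random.default_rng(1)
      per_second = rng.poisson(lam=20, size=3 * 3600)     # three hours of call arrivals
      per_second[6000:6300] += 40                         # injected focused overload

      window = 300                                        # 5-minute aggregation
      loads = per_second.reshape(-1, window).sum(axis=1)
      centre, spread = np.median(loads), loads.std()
      flagged = np.nonzero(np.abs(loads - centre) > 3 * spread)[0]
      print(flagged)                                      # indices of anomalous windows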

  12. Lidar detection algorithm for time and range anomalies

    NASA Astrophysics Data System (ADS)

    Ben-David, Avishai; Davidson, Charles E.; Vanderbeek, Richard G.

    2007-10-01

    A new detection algorithm for lidar applications has been developed. The detection is based on hyperspectral anomaly detection that is implemented for time anomaly where the question "is a target (aerosol cloud) present at range R within time t1 to t2" is addressed, and for range anomaly where the question "is a target present at time t within ranges R1 and R2" is addressed. A detection score significantly different in magnitude from the detection scores for background measurements suggests that an anomaly (interpreted as the presence of a target signal in space/time) exists. The algorithm employs an option for a preprocessing stage where undesired oscillations and artifacts are filtered out with a low-rank orthogonal projection technique. The filtering technique adaptively removes the one over range-squared dependence of the background contribution of the lidar signal and also aids visualization of features in the data when the signal-to-noise ratio is low. A Gaussian-mixture probability model for two hypotheses (anomaly present or absent) is computed with an expectation-maximization algorithm to produce a detection threshold and probabilities of detection and false alarm. Results of the algorithm for CO2 lidar measurements of bioaerosol clouds Bacillus atrophaeus (formerly known as Bacillus subtilis niger, BG) and Pantoea agglomerans, Pa (formerly known as Erwinia herbicola, Eh) are shown and discussed.
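
    A sketch of the two-hypothesis score model described above, assuming a two-component Gaussian mixture fitted by EM (via scikit-learn) to the detection scores; the threshold is taken where the posterior probability of the higher-mean ("anomaly present") component exceeds 0.5. The score distributions below are synthetic.

      import numpy as np
      from sklearn.mixture import GaussianMixture

      rng = np.random.default_rng(2)
      scores = np.concatenate([rng.normal(0.0, 1.0, 2000),     # background detection scores
                               rng.normal(5.0, 1.0, 60)])      # target-present scores

      gmm = GaussianMixture(n_components=2, random_state=0).fit(scores[:, None])
      anom = gmm.means_.ravel().argmax()                       # component with the larger mean
      post = gmm.predict_proba(scores[:, None])[:, anom]
      threshold = scores[post > 0.5].min()
      pd = (scores[2000:] > threshold).mean()                  # empirical probability of detection
      pfa = (scores[:2000] > threshold).mean()                 # empirical false-alarm probability
      print(threshold, pd, pfa)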

  13. Lidar detection algorithm for time and range anomalies.

    PubMed

    Ben-David, Avishai; Davidson, Charles E; Vanderbeek, Richard G

    2007-10-10

    A new detection algorithm for lidar applications has been developed. The detection is based on hyperspectral anomaly detection that is implemented for time anomaly where the question "is a target (aerosol cloud) present at range R within time t(1) to t(2)" is addressed, and for range anomaly where the question "is a target present at time t within ranges R(1) and R(2)" is addressed. A detection score significantly different in magnitude from the detection scores for background measurements suggests that an anomaly (interpreted as the presence of a target signal in space/time) exists. The algorithm employs an option for a preprocessing stage where undesired oscillations and artifacts are filtered out with a low-rank orthogonal projection technique. The filtering technique adaptively removes the one over range-squared dependence of the background contribution of the lidar signal and also aids visualization of features in the data when the signal-to-noise ratio is low. A Gaussian-mixture probability model for two hypotheses (anomaly present or absent) is computed with an expectation-maximization algorithm to produce a detection threshold and probabilities of detection and false alarm. Results of the algorithm for CO(2) lidar measurements of bioaerosol clouds Bacillus atrophaeus (formerly known as Bacillus subtilis niger, BG) and Pantoea agglomerans, Pa (formerly known as Erwinia herbicola, Eh) are shown and discussed. PMID:17932542

  14. Sequential Detection of Fission Processes for Harbor Defense

    SciTech Connect

    Candy, J V; Walston, S E; Chambers, D H

    2015-02-12

    With the large increase in terrorist activities throughout the world, the timely and accurate detection of special nuclear material (SNM) has become an extremely high priority for many countries concerned with national security. The detection of radionuclide contraband based on their γ-ray emissions has been attacked vigorously with some interesting and feasible results; however, the fission process of SNM has not received as much attention due to its inherent complexity and required predictive nature. In this paper, on-line, sequential Bayesian detection and estimation (parameter) techniques to rapidly and reliably detect unknown fissioning sources with high statistical confidence are developed.

  15. A New Methodology for Early Anomaly Detection of BWR Instabilities

    SciTech Connect

    Ivanov, K. N.

    2005-11-27

    The objective of the performed research is to develop an early anomaly detection methodology so as to enhance safety, availability, and operational flexibility of Boiling Water Reactor (BWR) nuclear power plants. The technical approach relies on suppression of potential power oscillations in BWRs by detecting small anomalies at an early stage and taking appropriate prognostic actions based on an anticipated operation schedule. The research utilizes a model of coupled (two-phase) thermal-hydraulic and neutron flux dynamics, which is used as a generator of time series data for anomaly detection at an early stage. The model captures critical nonlinear features of coupled thermal-hydraulic and nuclear reactor dynamics and (slow time-scale) evolution of the anomalies as non-stationary parameters. The time series data derived from this nonlinear non-stationary model serves as the source of information for generating the symbolic dynamics for characterization of model parameter changes that quantitatively represent small anomalies. The major focus of the presented research activity was on developing and qualifying algorithms of pattern recognition for power instability based on anomaly detection from time series data, which later can be used to formulate real-time decision and control algorithms for suppression of power oscillations for a variety of anticipated operating conditions. The research being performed in the framework of this project is essential to make significant improvement in the capability of thermal instability analyses for enhancing safety, availability, and operational flexibility of currently operating and next generation BWRs.

  16. Evaluation schemes for video and image anomaly detection algorithms

    NASA Astrophysics Data System (ADS)

    Parameswaran, Shibin; Harguess, Josh; Barngrover, Christopher; Shafer, Scott; Reese, Michael

    2016-05-01

    Video anomaly detection is a critical research area in computer vision. It is a natural first step before applying object recognition algorithms. Many algorithms that detect anomalies (outliers) in videos and images have been introduced in recent years. However, these algorithms behave and perform differently depending on the domains and tasks to which they are applied. In order to better understand the strengths and weaknesses of outlier algorithms and their applicability in a particular domain/task of interest, it is important to measure and quantify their performance using appropriate evaluation metrics. There are many evaluation metrics that have been used in the literature, such as precision curves, precision-recall curves, and receiver operating characteristic (ROC) curves. In order to construct these different metrics, it is also important to choose an appropriate evaluation scheme that decides when a proposed detection is considered a true or a false detection. Choosing the right evaluation metric and the right scheme is very critical, since the choice can introduce positive or negative bias in the measuring criterion and may favor (or work against) a particular algorithm or task. In this paper, we review evaluation metrics and popular evaluation schemes that are used to measure the performance of anomaly detection algorithms on videos and imagery with one or more anomalies. We analyze the biases introduced by these schemes by measuring the performance of an existing anomaly detection algorithm.
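
    For concreteness, a small example of building the metrics discussed above from per-frame anomaly scores and binary ground truth, using scikit-learn (the scores and labels are synthetic):

      import numpy as np
      from sklearn.metrics import auc, precision_recall_curve, roc_curve

      rng = np.random.default_rng(3)
      truth = np.concatenate([np.zeros(950), np.ones(50)])       # 50 anomalous frames
      scores = np.concatenate([rng.normal(0, 1, 950), rng.normal(2, 1, 50)])

      fpr, tpr, _ = roc_curve(truth, scores)
      precision, recall, _ = precision_recall_curve(truth, scores)
      print("ROC AUC:", auc(fpr, tpr))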

  17. Multiple-Instance Learning for Anomaly Detection in Digital Mammography.

    PubMed

    Quellec, Gwenole; Lamard, Mathieu; Cozic, Michel; Coatrieux, Gouenou; Cazuguel, Guy

    2016-07-01

    This paper describes a computer-aided detection and diagnosis system for breast cancer, the most common form of cancer among women, using mammography. The system relies on the Multiple-Instance Learning (MIL) paradigm, which has proven useful for medical decision support in previous works from our team. In the proposed framework, breasts are first partitioned adaptively into regions. Then, features derived from the detection of lesions (masses and microcalcifications) as well as textural features, are extracted from each region and combined in order to classify mammography examinations as "normal" or "abnormal". Whenever an abnormal examination record is detected, the regions that induced that automated diagnosis can be highlighted. Two strategies are evaluated to define this anomaly detector. In a first scenario, manual segmentations of lesions are used to train an SVM that assigns an anomaly index to each region; local anomaly indices are then combined into a global anomaly index. In a second scenario, the local and global anomaly detectors are trained simultaneously, without manual segmentations, using various MIL algorithms (DD, APR, mi-SVM, MI-SVM and MILBoost). Experiments on the DDSM dataset show that the second approach, which is only weakly-supervised, surprisingly outperforms the first approach, even though it is strongly-supervised. This suggests that anomaly detectors can be advantageously trained on large medical image archives, without the need for manual segmentation. PMID:26829783

  18. Anomaly Detection in Power Quality at Data Centers

    NASA Technical Reports Server (NTRS)

    Grichine, Art; Solano, Wanda M.

    2015-01-01

    The goal during my internship at the National Center for Critical Information Processing and Storage (NCCIPS) is to implement an anomaly detection method through the StruxureWare SCADA Power Monitoring system. The benefit of the anomaly detection mechanism is to provide the capability to detect and anticipate equipment degradation by monitoring power quality prior to equipment failure. First, a study is conducted that examines the existing techniques of power quality management. Based on these findings, and the capabilities of the existing SCADA resources, recommendations are presented for implementing effective anomaly detection. Since voltage, current, and total harmonic distortion demonstrate Gaussian distributions, effective set-points are computed using this model, while maintaining a low false positive count.
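
    An illustrative computation in this spirit (not taken from the report): alarm set-points for a Gaussian-distributed power-quality signal are chosen from a target per-sample false-positive rate.

      import numpy as np
      from scipy.stats import norm

      target_fp_rate = 1e-4                              # two-sided false-alarm budget per sample
      z = norm.ppf(1 - target_fp_rate / 2)

      voltage = np.random.normal(480.0, 2.0, 100_000)    # toy RMS voltage measurements
      mu, sigma = voltage.mean(), voltage.std()
      low, high = mu - z * sigma, mu + z * sigma         # computed set-points
      alarms = (voltage < low) | (voltage > high)
      print(low, high, alarms.mean())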

  19. Anomaly Detection In Additively Manufactured Parts Using Laser Doppler Vibrometry

    SciTech Connect

    Hernandez, Carlos A.

    2015-09-29

    Additively manufactured parts are susceptible to non-uniform structure caused by the unique manufacturing process. This can lead to structural weakness or catastrophic failure. Using laser Doppler vibrometry and frequency response analysis, non-contact detection of anomalies in additively manufactured parts may be possible. Preliminary tests show promise for small-scale detection, but further work is necessary.

  20. Visual analytics of anomaly detection in large data streams

    NASA Astrophysics Data System (ADS)

    Hao, Ming C.; Dayal, Umeshwar; Keim, Daniel A.; Sharma, Ratnesh K.; Mehta, Abhay

    2009-01-01

    Most data streams usually are multi-dimensional, high-speed, and contain massive volumes of continuous information. They are seen in daily applications, such as telephone calls, retail sales, data center performance, and oil production operations. Many analysts want insight into the behavior of this data. They want to catch the exceptions in flight to reveal the causes of the anomalies and to take immediate action. To guide the user in finding the anomalies in the large data stream quickly, we derive a new automated neighborhood threshold marking technique, called AnomalyMarker. This technique is built on cell-based data streams and user-defined thresholds. We extend the scope of the data points around the threshold to include the surrounding areas. The idea is to define a focus area (marked area) which enables users to (1) visually group the interesting data points related to the anomalies (i.e., problems that occur persistently or occasionally) for observing their behavior; (2) discover the factors related to the anomaly by visualizing the correlations between the problem attribute with the attributes of the nearby data items from the entire multi-dimensional data stream. Mining results are quickly presented in graphical representations (i.e., tooltip) for the user to zoom into the problem regions. Different algorithms are introduced which try to optimize the size and extent of the anomaly markers. We have successfully applied this technique to detect data stream anomalies in large real-world enterprise server performance and data center energy management.

  1. Firefly Algorithm in detection of TEC seismo-ionospheric anomalies

    NASA Astrophysics Data System (ADS)

    Akhoondzadeh, Mehdi

    2015-07-01

    Anomaly detection in time series of different earthquake precursors is an essential step toward creating an early warning system with an allowable uncertainty. Since these time series are often nonlinear, complex and massive, the applied prediction method should be able to detect discord patterns in a large volume of data in a short time. This study acknowledges the Firefly Algorithm (FA) as a simple and robust predictor to detect TEC (Total Electron Content) seismo-ionospheric anomalies around the times of some powerful earthquakes, including Chile (27 February 2010), Varzeghan (11 August 2012) and Saravan (16 April 2013). Outstanding anomalies were observed 7 and 5 days before the Chile and Varzeghan earthquakes, respectively, and 3 and 8 days prior to the Saravan earthquake.

  2. A hybrid approach for efficient anomaly detection using metaheuristic methods.

    PubMed

    Ghanem, Tamer F; Elkilani, Wail S; Abdul-Kader, Hatem M

    2015-07-01

    Network intrusion detection based on anomaly detection techniques has a significant role in protecting networks and systems against harmful activities. Different metaheuristic techniques have been used for anomaly detector generation. Yet, the reported literature has not studied the use of the multi-start metaheuristic method for detector generation. This paper proposes a hybrid approach for anomaly detection in large scale datasets using detectors generated based on the multi-start metaheuristic method and genetic algorithms. The proposed approach takes some inspiration from negative selection-based detector generation. The evaluation of this approach is performed using the NSL-KDD dataset, which is a modified version of the widely used KDD CUP 99 dataset. The results show its effectiveness in generating a suitable number of detectors with an accuracy of 96.1% compared to other competing machine learning algorithms. PMID:26199752

  3. [Anomaly Detection of Multivariate Time Series Based on Riemannian Manifolds].

    PubMed

    Xu, Yonghong; Hou, Xiaoying; Li, Shuting; Cui, Jie

    2015-06-01

    Multivariate time series problems widely exist in production and daily life. Anomaly detection has provided valuable information in financial, hydrological and meteorological fields, and in research areas such as earthquakes, video surveillance and medicine. In order to find exceptions in a time sequence quickly and efficiently, and to present them in an intuitive way, this study combined Riemannian manifolds with statistical process control charts, using a sliding window and describing each window of the time sequence by its covariance matrix, to achieve anomaly detection in multivariate time series and its visualization. We used simulated MA data streams and abnormal electrocardiogram data from MIT-BIH as experimental objects to verify the anomaly detection method. The results showed that the method was reasonable and effective. PMID:26485975
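
    A minimal sketch of the idea, assuming a log-Euclidean distance between sliding-window covariance matrices and a Shewhart-style control limit; the window length, distance and limit are illustrative choices rather than the paper's exact settings.

      import numpy as np

      def cov_log(x):
          # Matrix logarithm of the (regularized) window covariance, via eigendecomposition.
          c = np.cov(x, rowvar=False) + 1e-6 * np.eye(x.shape[1])
          w, v = np.linalg.eigh(c)
          return (v * np.log(w)) @ v.T

      rng = np.random.default_rng(4)
      series = rng.normal(size=(2000, 3))
      series[1500:] *= 2.5                                  # variance-shift anomaly

      win = 100
      ref = cov_log(series[:win])                           # reference (in-control) window
      dists = [np.linalg.norm(cov_log(series[s:s + win]) - ref, 'fro')
               for s in range(0, len(series) - win + 1, win)]
      limit = np.mean(dists[:10]) + 3 * np.std(dists[:10])  # control limit from early windows
      print([i for i, d in enumerate(dists) if d > limit])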

  4. Anomaly Detection Based on Sensor Data in Petroleum Industry Applications

    PubMed Central

    Martí, Luis; Sanchez-Pi, Nayat; Molina, José Manuel; Garcia, Ana Cristina Bicharra

    2015-01-01

    Anomaly detection is the problem of finding patterns in data that do not conform to an a priori expected behavior. This is related to the problem in which some samples are distant, in terms of a given metric, from the rest of the dataset, where these anomalous samples are indicated as outliers. Anomaly detection has recently attracted the attention of the research community, because of its relevance in real-world applications, like intrusion detection, fraud detection, fault detection and system health monitoring, among many others. Anomalies themselves can have a positive or negative nature, depending on their context and interpretation. However, in either case, it is important for decision makers to be able to detect them in order to take appropriate actions. The petroleum industry is one of the application contexts where these problems are present. The correct detection of such types of unusual information empowers the decision maker with the capacity to act on the system in order to correctly avoid, correct or react to the situations associated with them. In that application context, heavy extraction machines for pumping and generation operations, like turbomachines, are intensively monitored by hundreds of sensors each that send measurements with a high frequency for damage prevention. In this paper, we propose a combination of yet another segmentation algorithm (YASA), a novel fast and high quality segmentation algorithm, with a one-class support vector machine approach for efficient anomaly detection in turbomachines. The proposal is meant for dealing with the aforementioned task and to cope with the lack of labeled training data. As a result, we perform a series of empirical studies comparing our approach to other methods applied to benchmark problems and a real-life application related to oil platform turbomachinery anomaly detection. PMID:25633599
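
    A rough sketch of the one-class SVM part of this pipeline; the YASA segmentation step is replaced here by fixed-length windows and two simple per-window features, which is an assumption made only for illustration.

      import numpy as np
      from sklearn.svm import OneClassSVM

      rng = np.random.default_rng(5)
      signal = rng.normal(0.0, 1.0, 5000)                  # toy turbomachine sensor stream
      signal[4000:4200] += 4.0                             # injected fault

      win = 50
      segments = signal.reshape(-1, win)
      features = np.column_stack([segments.mean(axis=1), segments.std(axis=1)])

      model = OneClassSVM(nu=0.05, kernel='rbf', gamma='scale').fit(features[:60])
      labels = model.predict(features)                     # -1 marks anomalous segments
      print(np.nonzero(labels == -1)[0])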

  5. Anomaly detection based on sensor data in petroleum industry applications.

    PubMed

    Martí, Luis; Sanchez-Pi, Nayat; Molina, José Manuel; Garcia, Ana Cristina Bicharra

    2015-01-01

    Anomaly detection is the problem of finding patterns in data that do not conform to an a priori expected behavior. This is related to the problem in which some samples are distant, in terms of a given metric, from the rest of the dataset, where these anomalous samples are indicated as outliers. Anomaly detection has recently attracted the attention of the research community, because of its relevance in real-world applications, like intrusion detection, fraud detection, fault detection and system health monitoring, among many others. Anomalies themselves can have a positive or negative nature, depending on their context and interpretation. However, in either case, it is important for decision makers to be able to detect them in order to take appropriate actions. The petroleum industry is one of the application contexts where these problems are present. The correct detection of such types of unusual information empowers the decision maker with the capacity to act on the system in order to correctly avoid, correct or react to the situations associated with them. In that application context, heavy extraction machines for pumping and generation operations, like turbomachines, are intensively monitored by hundreds of sensors each that send measurements with a high frequency for damage prevention. In this paper, we propose a combination of yet another segmentation algorithm (YASA), a novel fast and high quality segmentation algorithm, with a one-class support vector machine approach for efficient anomaly detection in turbomachines. The proposal is meant for dealing with the aforementioned task and to cope with the lack of labeled training data. As a result, we perform a series of empirical studies comparing our approach to other methods applied to benchmark problems and a real-life application related to oil platform turbomachinery anomaly detection. PMID:25633599

  6. Anomalies.

    ERIC Educational Resources Information Center

    Online-Offline, 1999

    1999-01-01

    This theme issue on anomalies includes Web sites, CD-ROMs and software, videos, books, and additional resources for elementary and junior high school students. Pertinent activities are suggested, and sidebars discuss UFOs, animal anomalies, and anomalies from nature; and resources covering unexplained phenomena like crop circles, Easter Island,…

  7. Anomaly detection using classified eigenblocks in GPR image

    NASA Astrophysics Data System (ADS)

    Kim, Min Ju; Kim, Seong Dae; Lee, Seung-eui

    2016-05-01

    Automatic landmine detection systems using ground penetrating radar have been widely researched. For an automatic mine detection system, speed is an important factor. Many techniques for mine detection have been developed based on statistical background. Among them, a detection technique employing Principal Component Analysis (PCA) has been used for clutter reduction and anomaly detection. However, the PCA technique can retard the entire process because of the large basis dimension and the large number of inner product operations required. In order to overcome this problem, we propose a fast anomaly detection system using the 2D DCT and PCA. Our experiments use a set of data obtained from a test site where anti-tank and anti-personnel mines are buried. We evaluate the proposed system in terms of the ROC curve. The result shows that the proposed system performs much better than conventional PCA systems from the viewpoint of speed and false alarm rate.
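
    A hedged sketch of the approach described above: blocks of a GPR image are compressed with a 2-D DCT (keeping only low-frequency coefficients) before PCA, and the PCA reconstruction error of each block is used as the anomaly score. The block size and number of retained coefficients are illustrative assumptions.

      import numpy as np
      from scipy.fft import dctn
      from sklearn.decomposition import PCA

      image = np.random.rand(128, 128)                     # toy GPR B-scan
      image[64:80, 64:80] += 1.0                           # buried-object-like blob

      blk, keep = 16, 6
      blocks = [image[r:r + blk, c:c + blk]
                for r in range(0, 128, blk) for c in range(0, 128, blk)]
      feats = np.array([dctn(b, norm='ortho')[:keep, :keep].ravel() for b in blocks])

      pca = PCA(n_components=4).fit(feats)
      err = np.linalg.norm(feats - pca.inverse_transform(pca.transform(feats)), axis=1)
      print(err.argmax())                                  # index of the most anomalous block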

  8. Profile-based adaptive anomaly detection for network security.

    SciTech Connect

    Zhang, Pengchu C. (Sandia National Laboratories, Albuquerque, NM); Durgin, Nancy Ann

    2005-11-01

    As information systems become increasingly complex and pervasive, they become inextricably intertwined with the critical infrastructure of national, public, and private organizations. The problem of recognizing and evaluating threats against these complex, heterogeneous networks of cyber and physical components is a difficult one, yet a solution is vital to ensuring security. In this paper we investigate profile-based anomaly detection techniques that can be used to address this problem. We focus primarily on the area of network anomaly detection, but the approach could be extended to other problem domains. We investigate using several data analysis techniques to create profiles of network hosts and perform anomaly detection using those profiles. The ''profiles'' reduce multi-dimensional vectors representing ''normal behavior'' into fewer dimensions, thus allowing pattern and cluster discovery. New events are compared against the profiles, producing a quantitative measure of how ''anomalous'' the event is. Most network intrusion detection systems (IDSs) detect malicious behavior by searching for known patterns in the network traffic. This approach suffers from several weaknesses, including a lack of generalizability, an inability to detect stealthy or novel attacks, and lack of flexibility regarding alarm thresholds. Our research focuses on enhancing current IDS capabilities by addressing some of these shortcomings. We identify and evaluate promising techniques for data mining and machine-learning. The algorithms are ''trained'' by providing them with a series of data-points from ''normal'' network traffic. A successful algorithm can be trained automatically and efficiently, will have a low error rate (low false alarm and miss rates), and will be able to identify anomalies in ''pseudo real-time'' (i.e., while the intrusion is still in progress, rather than after the fact). We also build a prototype anomaly detection tool that demonstrates how the techniques might

  9. Locality-constrained anomaly detection for hyperspectral imagery

    NASA Astrophysics Data System (ADS)

    Liu, Jiabin; Li, Wei; Du, Qian; Liu, Kui

    2015-12-01

    Detecting a target with low-occurrence-probability from unknown background in a hyperspectral image, namely anomaly detection, is of practical significance. Reed-Xiaoli (RX) algorithm is considered as a classic anomaly detector, which calculates the Mahalanobis distance between local background and the pixel under test. Local RX, as an adaptive RX detector, employs a dual-window strategy to consider pixels within the frame between inner and outer windows as local background. However, the detector is sensitive if such a local region contains anomalous pixels (i.e., outliers). In this paper, a locality-constrained anomaly detector is proposed to remove outliers in the local background region before employing the RX algorithm. Specifically, a local linear representation is designed to exploit the internal relationship between linearly correlated pixels in the local background region and the pixel under test and its neighbors. Experimental results demonstrate that the proposed detector improves the original local RX algorithm.

  10. Attention focusing and anomaly detection in systems monitoring

    NASA Technical Reports Server (NTRS)

    Doyle, Richard J.

    1994-01-01

    Any attempt to introduce automation into the monitoring of complex physical systems must start from a robust anomaly detection capability. This task is far from straightforward, for a single definition of what constitutes an anomaly is difficult to come by. In addition, to make the monitoring process efficient, and to avoid the potential for information overload on human operators, attention focusing must also be addressed. When an anomaly occurs, more often than not several sensors are affected, and the partially redundant information they provide can be confusing, particularly in a crisis situation where a response is needed quickly. The focus of this paper is a new technique for attention focusing. The technique involves reasoning about the distance between two frequency distributions, and is used to detect both anomalous system parameters and 'broken' causal dependencies. These two forms of information together isolate the locus of anomalous behavior in the system being monitored.
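
    A small sketch of the distribution-distance idea, using a Jensen-Shannon distance between histograms of nominal and current readings for one sensor; the specific distance and binning are assumptions, not necessarily the measure used in the paper.

      import numpy as np
      from scipy.spatial.distance import jensenshannon

      rng = np.random.default_rng(6)
      nominal = rng.normal(0.0, 1.0, 5000)                 # archived nominal readings
      current = rng.normal(0.8, 1.3, 500)                  # recent window with a drifted parameter

      bins = np.linspace(-6, 6, 41)
      p, _ = np.histogram(nominal, bins=bins, density=True)
      q, _ = np.histogram(current, bins=bins, density=True)
      print(jensenshannon(p, q))                           # larger distance = more anomalous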

  11. The use of Compton scattering in detecting anomaly in soil-possible use in pyromaterial detection

    NASA Astrophysics Data System (ADS)

    Abedin, Ahmad Firdaus Zainal; Ibrahim, Noorddin; Zabidi, Noriza Ahmad; Demon, Siti Zulaikha Ngah

    2016-01-01

    Compton scattering can provide a signature for landmine detection based on the dependence between a density anomaly and the energy change of scattered photons. In this study, the 4.43 MeV gamma rays of an Am-Be source were used to perform Compton scattering. Two detectors, each with a radius of 1.9 cm, were placed at a distance of 8 cm from the source; thallium-doped sodium iodide NaI(Tl) detectors were used for detecting the gamma rays. Nine anomalies were used in this simulation. Each anomaly is a cylinder with a radius of 10 cm and a height of 8.9 cm, buried 5 cm deep in a soil bed measuring 80 cm in radius and 53.5 cm in height. Monte Carlo simulations indicated that the scattering of photons is directly proportional to the density of the anomalies. The difference between the detector response with and without an anomaly, namely the contrast ratio, is linearly related to the density of the anomaly. Anomalies of air, wood and water give positive contrast ratio values, whereas explosive, sand, concrete, graphite, limestone and polyethylene give negative contrast ratio values. Overall, the contrast ratio values are greater than 2% for all anomalies. The strong contrast ratios result in good detection capability and distinction between anomalies.

  12. On-line Flagging of Anomalies and Adaptive Sequential Hypothesis Testing for Fine-feature Characterization of Geosynchronous Satellites

    NASA Astrophysics Data System (ADS)

    Chaudhary, A.; Payne, T.; Kinateder, K.; Dao, P.; Beecher, E.; Boone, D.; Elliott, B.

    The objective of on-line flagging in this paper is to perform interactive assessment of geosynchronous satellite anomalies such as cross-tagging of satellites in a cluster, solar panel offset change, etc. This assessment will utilize a Bayesian belief propagation procedure and will include automated update of baseline signature data for the satellite, while accounting for seasonal changes. Its purpose is to enable an ongoing, automated assessment of satellite behavior through its life cycle using the photometry data collected during the synoptic search performed by a ground or space-based sensor as a part of its metrics mission. The change in the satellite features will be reported along with the probabilities of Type I and Type II errors. The objective of adaptive sequential hypothesis testing in this paper is to define future sensor tasking for the purpose of characterizing fine features of the satellite. The tasking will be designed to maximize new information with the least number of photometry data points to be collected during the synoptic search by a ground or space-based sensor. Its calculation is based on the utilization of information entropy techniques. The tasking is defined by considering a sequence of hypotheses in regard to the fine features of the satellite. The optimal observation conditions are then ordered in order to maximize new information about a chosen fine feature. The combined objective of on-line flagging and adaptive sequential hypothesis testing is to progressively discover new information about the features of a geosynchronous satellite by leveraging the regular but sparse cadence of data collection during the synoptic search performed by a ground or space-based sensor.

  13. Robust and efficient anomaly detection using heterogeneous representations

    NASA Astrophysics Data System (ADS)

    Hu, Xing; Hu, Shiqiang; Xie, Jinhua; Zheng, Shiyou

    2015-05-01

    Various approaches have been proposed for video anomaly detection. Yet these approaches typically suffer from one or more limitations: they often characterize the pattern using its internal information, but ignore its external relationship which is important for local anomaly detection. Moreover, the high-dimensionality and the lack of robustness of pattern representation may lead to problems, including overfitting, increased computational cost and memory requirements, and high false alarm rate. We propose a video anomaly detection framework which relies on a heterogeneous representation to account for both the pattern's internal information and external relationship. The internal information is characterized by slow features learned by slow feature analysis from low-level representations, and the external relationship is characterized by the spatial contextual distances. The heterogeneous representation is compact, robust, efficient, and discriminative for anomaly detection. Moreover, both the pattern's internal information and external relationship can be taken into account in the proposed framework. Extensive experiments demonstrate the robustness and efficiency of our approach by comparison with the state-of-the-art approaches on the widely used benchmark datasets.

  14. A spring window for geobotanical anomaly detection

    NASA Technical Reports Server (NTRS)

    Bell, R.; Labovitz, M. L.; Masuoka, E. J.

    1985-01-01

    The observation of senescence of deciduous vegetation to detect soil heavy metal mineralization is discussed. A gridded sampling of two sites of Quercus alba L. in south-central Virginia in 1982 is studied. The data reveal that smaller leaf blade lengths are observed in the soil site with copper, lead, and zinc concentrations. A random study in 1983 of red and white Q. rubra L., Q. prinus L., and Acer rubrum L., to confirm previous results is described. The observations of blade length and bud breaks show a 7-10 day lag in growth in the mineral site for the oak trees; however, the maple trees are not influenced by the minerals.

  15. Solar cell anomaly detection method and apparatus

    NASA Technical Reports Server (NTRS)

    Miller, Emmett L. (Inventor); Shumka, Alex (Inventor); Gauthier, Michael K. (Inventor)

    1981-01-01

    A method is provided for detecting cracks and other imperfections in a solar cell, which includes scanning a narrow light beam back and forth across the cell in a raster pattern, while monitoring the electrical output of the cell to find locations where the electrical output varies significantly. The electrical output can be monitored on a television type screen containing a raster pattern with each point on the screen corresponding to a point on the solar cell surface, and with the brightness of each point on the screen corresponding to the electrical output from the cell which was produced when the light beam was at the corresponding point on the cell. The technique can be utilized to scan a large array of interconnected solar cells, to determine which ones are defective.

  16. Extending TOPS: Knowledge Management System for Anomaly Detection and Analysis

    NASA Astrophysics Data System (ADS)

    Votava, P.; Nemani, R. R.; Michaelis, A.

    2009-12-01

    Terrestrial Observation and Prediction System (TOPS) is a flexible modeling software system that integrates ecosystem models with frequent satellite and surface weather observations to produce ecosystem nowcasts (assessments of current conditions) and forecasts useful in natural resources management, public health and disaster management. We have been extending the Terrestrial Observation and Prediction System (TOPS) to include capability for automated anomaly detection and analysis of both on-line (streaming) and off-line data. While there are large numbers of anomaly detection algorithms for multivariate datasets, we are extending this capability beyond the anomaly detection itself and towards an automated analysis that would discover the possible causes of the anomalies. There are often indirect connections between datasets that manifest themselves during occurrences of external events, and rather than searching exhaustively throughout all the datasets, our goal is to capture this knowledge and provide it to the system during automated analysis. This results in more efficient processing, since we do not need to process all the datasets with the original anomaly detection algorithms, which is often compute intensive; we also achieve data reduction, because we do not need to store all the datasets in order to search for possible connections but can download selected data on demand based on our analysis. For example, an anomaly observed in vegetation Net Primary Production (NPP) can relate to an anomaly in vegetation Leaf Area Index (LAI), which is a fairly direct connection, as LAI is one of the inputs for NPP; however, the change in LAI could be caused by a fire event, which is not directly connected with NPP. Because we are able to capture this knowledge, we can analyze fire datasets and, if there is a match with the NPP anomaly, infer that a fire is a likely cause. The knowledge is captured using the OWL ontology language, where connections are defined in a schema

  17. Method for Real-Time Model Based Structural Anomaly Detection

    NASA Technical Reports Server (NTRS)

    Smith, Timothy A. (Inventor); Urnes, James M., Sr. (Inventor); Reichenbach, Eric Y. (Inventor)

    2015-01-01

    A system and methods for real-time model-based vehicle structural anomaly detection are disclosed. A real-time measurement corresponding to a location on a vehicle structure during an operation of the vehicle is received, and the real-time measurement is compared to expected operation data for the location to provide a modeling error signal. The statistical significance of the modeling error signal is calculated to provide an error significance, and the persistence of the error significance is determined. A structural anomaly is indicated if the persistence exceeds a persistence threshold value.
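
    A minimal illustration of the persistence test described above: a modeling error is declared significant when it exceeds k standard deviations of a nominal reference period, and a structural anomaly is indicated only when the significance persists beyond a threshold number of consecutive samples. The values of k and the persistence threshold are assumptions.

      import numpy as np

      rng = np.random.default_rng(7)
      expected = np.zeros(1000)                            # model-predicted response
      measured = expected + rng.normal(0, 0.1, 1000)
      measured[600:] += 0.5                                # sustained structural shift

      error = measured - expected
      k, persist = 3.0, 20
      significant = np.abs(error) > k * error[:200].std()  # error-significance test

      run = 0
      for t, s in enumerate(significant):
          run = run + 1 if s else 0
          if run > persist:
              print("structural anomaly indicated at sample", t)
              break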

  18. Automated anomaly detection for Orbiter High Temperature Reusable Surface Insulation

    NASA Astrophysics Data System (ADS)

    Cooper, Eric G.; Jones, Sharon M.; Goode, Plesent W.; Vazquez, Sixto L.

    1992-11-01

    The description, analysis, and experimental results of a method for identifying possible defects on High Temperature Reusable Surface Insulation (HRSI) of the Orbiter Thermal Protection System (TPS) is presented. Currently, a visual postflight inspection of Orbiter TPS is conducted to detect and classify defects as part of the Orbiter maintenance flow. The objective of the method is to automate the detection of defects by identifying anomalies between preflight and postflight images of TPS components. The initial version is intended to detect and label gross (greater than 0.1 inches in the smallest dimension) anomalies on HRSI components for subsequent classification by a human inspector. The approach is a modified Golden Template technique where the preflight image of a tile serves as the template against which the postflight image of the tile is compared. Candidate anomalies are selected as a result of the comparison and processed to identify true anomalies. The processing methods are developed and discussed, and the results of testing on actual and simulated tile images are presented. Solutions to the problems of brightness and spatial normalization, timely execution, and minimization of false positives are also discussed.

  19. Gaussian Process for Activity Modeling and Anomaly Detection

    NASA Astrophysics Data System (ADS)

    Liao, W.; Rosenhahn, B.; Yang, M. Ying

    2015-08-01

    Complex activity modeling and anomaly identification are among the most interesting and desired capabilities for automated video behavior analysis. A number of different approaches have been proposed in the past to tackle this problem. There are two main challenges for activity modeling and anomaly detection: 1) most existing approaches require sufficient data and supervision for learning; 2) the most interesting abnormal activities arise rarely and are ambiguous among typical activities, i.e., hard to define precisely. In this paper, we propose a novel approach to model complex activities and detect anomalies by using non-parametric Gaussian Process (GP) models in a crowded and complicated traffic scene. In comparison with parametric models such as the HMM, GP models are non-parametric and have their advantages. Our GP models exploit implicit spatial-temporal dependence among local activity patterns. The learned GP regression models give a probabilistic prediction of regional activities at the next time interval based on observations at present. An anomaly is detected by comparing the actual observations with the prediction in real time. We verify the effectiveness and robustness of the proposed model on the QMUL Junction Dataset. Furthermore, we provide a publicly available manually labeled ground truth of this data set.
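
    A sketch of the predict-then-compare step under simplifying assumptions: a scikit-learn GP regressor is trained on a nominal activity sequence, and new observations are flagged when they fall outside three predictive standard deviations. The paper's spatial-temporal GP models over local activity patterns are richer than this one-dimensional toy.

      import numpy as np
      from sklearn.gaussian_process import GaussianProcessRegressor
      from sklearn.gaussian_process.kernels import RBF, WhiteKernel

      t = np.linspace(0, 10, 200)[:, None]                 # time axis for one region
      nominal = np.sin(t).ravel() + np.random.normal(0, 0.1, 200)
      gp = GaussianProcessRegressor(kernel=1.0 * RBF() + WhiteKernel()).fit(t, nominal)

      new_day = np.sin(t).ravel() + np.random.normal(0, 0.1, 200)
      new_day[150] += 2.0                                  # abnormal burst of activity
      mean, std = gp.predict(t, return_std=True)
      print(np.nonzero(np.abs(new_day - mean) > 3 * std)[0])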

  20. Sequential damage detection based on the continuous wavelet transform

    NASA Astrophysics Data System (ADS)

    Liao, Yizheng; Balafas, Konstantinos; Rajagopal, Ram; Kiremidjian, Anne S.

    2015-03-01

    This paper presents a sequential structural damage detection algorithm that is based on a statistical model for the wavelet transform of the structural responses. The detector uses the coefficients of the wavelet model and does not require prior knowledge of the structural properties. Principal Component Analysis is applied to select and extract the most sensitive features from the wavelet coefficients as the damage sensitive features. The damage detection algorithm is validated using the simulation data collected from a four-story steel moment frame. Various features have been explored and the detection algorithm was able to identify damage. Additionally, we show that for a desired probability of false alarm, the proposed detector is asymptotically optimal on the expected delay.
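
    A hedged sketch of such a pipeline, assuming PyWavelets for the continuous wavelet transform, per-window coefficient statistics, PCA for the damage-sensitive feature, and a simple CUSUM-style sequential test; the wavelet, scales and test parameters are illustrative choices, not the paper's.

      import numpy as np
      import pywt
      from sklearn.decomposition import PCA

      rng = np.random.default_rng(8)
      response = np.sin(0.2 * np.arange(4000)) + rng.normal(0, 0.1, 4000)
      response[2500:] += 0.4 * np.sin(0.05 * np.arange(1500))   # "damage" component

      coeffs, _ = pywt.cwt(response, scales=np.arange(1, 31), wavelet='morl')
      windows = coeffs.reshape(30, 20, 200).transpose(1, 0, 2)  # 20 windows of 200 samples
      feats = np.array([w.std(axis=1) for w in windows])        # per-scale coefficient spread
      score = PCA(n_components=1).fit_transform(feats).ravel()  # damage-sensitive feature

      z = np.abs((score - score[:5].mean()) / score[:5].std())  # standardize against baseline
      s, drift, h = 0.0, 1.0, 8.0                               # one-sided CUSUM parameters
      for i, zi in enumerate(z):
          s = max(0.0, s + zi - drift)
          if s > h:
              print("damage detected at window", i)
              break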

  1. Limitations of Aneuploidy and Anomaly Detection in the Obese Patient.

    PubMed

    Zozzaro-Smith, Paula; Gray, Lisa M; Bacak, Stephen J; Thornburg, Loralei L

    2014-01-01

    Obesity is a worldwide epidemic and can have a profound effect on pregnancy risks. Obese patients tend to be older and are at increased risk for structural fetal anomalies and aneuploidy, making screening options critically important for these women. Failure rates for first-trimester nuchal translucency (NT) screening increase with obesity, while the ability to detect soft-markers declines, limiting ultrasound-based screening options. Obesity also decreases the chances of completing the anatomy survey and increases the residual risk of undetected anomalies. Additionally, non-invasive prenatal testing (NIPT) is less likely to provide an informative result in obese patients. Understanding the limitations and diagnostic accuracy of aneuploidy and anomaly screening in obese patients can help guide clinicians in counseling patients on the screening options. PMID:26237478

  2. Extending TOPS: Ontology-driven Anomaly Detection and Analysis System

    NASA Astrophysics Data System (ADS)

    Votava, P.; Nemani, R. R.; Michaelis, A.

    2010-12-01

    Terrestrial Observation and Prediction System (TOPS) is a flexible modeling software system that integrates ecosystem models with frequent satellite and surface weather observations to produce ecosystem nowcasts (assessments of current conditions) and forecasts useful in natural resources management, public health and disaster management. We have been extending the Terrestrial Observation and Prediction System (TOPS) to include a capability for automated anomaly detection and analysis of both on-line (streaming) and off-line data. In order to best capture the knowledge about data hierarchies, Earth science models and implied dependencies between anomalies and occurrences of observable events such as urbanization, deforestation, or fires, we have developed an ontology to serve as a knowledge base. We can query the knowledge base and answer questions about dataset compatibilities, similarities and dependencies so that we can, for example, automatically analyze similar datasets in order to verify a given anomaly occurrence in multiple data sources. We are further extending the system to go beyond anomaly detection towards reasoning about possible causes of anomalies that are also encoded in the knowledge base as either learned or implied knowledge. This enables us to scale up the analysis by eliminating a large number of anomalies early on during the processing by either failure to verify them from other sources, or matching them directly with other observable events without having to perform an extensive and time-consuming exploration and analysis. The knowledge is captured using OWL ontology language, where connections are defined in a schema that is later extended by including specific instances of datasets and models. The information is stored using Sesame server and is accessible through both Java API and web services using SeRQL and SPARQL query languages. Inference is provided using OWLIM component integrated with Sesame.

  3. Anomaly detection for machine learning redshifts applied to SDSS galaxies

    NASA Astrophysics Data System (ADS)

    Hoyle, Ben; Rau, Markus Michael; Paech, Kerstin; Bonnett, Christopher; Seitz, Stella; Weller, Jochen

    2015-10-01

    We present an analysis of anomaly detection for machine learning redshift estimation. Anomaly detection allows the removal of poor training examples, which can adversely influence redshift estimates. Anomalous training examples may be photometric galaxies with incorrect spectroscopic redshifts, or galaxies with one or more poorly measured photometric quantity. We select 2.5 million `clean' SDSS DR12 galaxies with reliable spectroscopic redshifts, and 6730 `anomalous' galaxies with spectroscopic redshift measurements which are flagged as unreliable. We contaminate the clean base galaxy sample with galaxies with unreliable redshifts and attempt to recover the contaminating galaxies using the Elliptical Envelope technique. We then train four machine learning architectures for redshift analysis on both the contaminated sample and on the preprocessed `anomaly-removed' sample and measure redshift statistics on a clean validation sample generated without any preprocessing. We find an improvement on all measured statistics of up to 80 per cent when training on the anomaly removed sample as compared with training on the contaminated sample for each of the machine learning routines explored. We further describe a method to estimate the contamination fraction of a base data sample.
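
    A sketch of the anomaly-removal step, assuming scikit-learn's EllipticEnvelope as the Elliptical Envelope technique and a toy matrix of photometric features; the contamination level is an assumed parameter.

      import numpy as np
      from sklearn.covariance import EllipticEnvelope

      rng = np.random.default_rng(9)
      clean = rng.normal(0, 1, size=(5000, 4))             # reliable-redshift training features
      bad = rng.normal(0, 6, size=(50, 4))                 # contaminating unreliable galaxies
      X = np.vstack([clean, bad])

      detector = EllipticEnvelope(contamination=0.01).fit(X)
      keep = detector.predict(X) == 1                      # +1 inlier, -1 anomaly
      X_train = X[keep]                                    # train the redshift model on this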

  4. Sparsity-driven anomaly detection for ship detection and tracking in maritime video

    NASA Astrophysics Data System (ADS)

    Shafer, Scott; Harguess, Josh; Forero, Pedro A.

    2015-05-01

    This work examines joint anomaly detection and dictionary learning approaches for identifying anomalies in persistent surveillance applications that require data compression. We have developed a sparsity-driven anomaly detector that can be used for learning dictionaries to address these challenges. In our approach, each training datum is modeled as a sparse linear combination of dictionary atoms in the presence of noise. The noise term is modeled as additive Gaussian noise and a deterministic term models the anomalies. However, no model for the statistical distribution of the anomalies is made. An estimator is postulated for a dictionary that exploits the fact that since anomalies by definition are rare, only a few anomalies will be present when considering the entire dataset. From this vantage point, we endow the deterministic noise term (anomaly-related) with a group-sparsity property. A robust dictionary learning problem is postulated where a group-lasso penalty is used to encourage most anomaly-related noise components to be zero. The proposed estimator achieves robustness by both identifying the anomalies and removing their effect from the dictionary estimate. Our approach is applied to the problem of ship detection and tracking from full-motion video with promising results.

  5. A multilevel approach to sequential detection of pictorial features

    NASA Technical Reports Server (NTRS)

    Ramapriyan, H. K.

    1976-01-01

    The problem of detecting the local similarity between templates in a given class and a given image using a hierarchically ordered sequential decision rule is examined. It is proposed that the set of templates be partitioned and a 'representative template' be defined for each of the partitions. Several levels of partitioning are defined. Elimination of mismatching locations and termination of computation can take place at each level of detection. Each level of testing is over a more restrictive subset of the template class than the previous level. Criteria are given for selecting representative templates, the ordering of components of a template vector for error evaluation, and the threshold sequences to be used in deciding about a 'match'. Suboptimal solutions are given satisfying these criteria. Examples showing recognition of linear features in test patterns and photographs obtained by aerial and spaceborne sensors are provided.

  6. Spectral anomaly methods for aerial detection using KUT nuisance rejection

    NASA Astrophysics Data System (ADS)

    Detwiler, R. S.; Pfund, D. M.; Myjak, M. J.; Kulisek, J. A.; Seifert, C. E.

    2015-06-01

    This work discusses the application and optimization of a spectral anomaly method for the real-time detection of gamma radiation sources from an aerial helicopter platform. Aerial detection presents several key challenges over ground-based detection. For one, larger and more rapid background fluctuations are typical due to higher speeds, larger field of view, and geographically induced background changes. As well, the possible large altitude or stand-off distance variations cause significant steps in background count rate as well as spectral changes due to increased gamma-ray scatter with detection at higher altitudes. The work here details the adaptation and optimization of the PNNL-developed algorithm Nuisance-Rejecting Spectral Comparison Ratios for Anomaly Detection (NSCRAD), a spectral anomaly method previously developed for ground-based applications, for an aerial platform. The algorithm has been optimized for two multi-detector systems; a NaI(Tl)-detector-based system and a CsI detector array. The optimization here details the adaptation of the spectral windows for a particular set of target sources to aerial detection and the tailoring for the specific detectors. As well, the methodology and results for background rejection methods optimized for the aerial gamma-ray detection using Potassium, Uranium and Thorium (KUT) nuisance rejection are shown. Results indicate that use of a realistic KUT nuisance rejection may eliminate metric rises due to background magnitude and spectral steps encountered in aerial detection due to altitude changes and geographically induced steps such as at land-water interfaces.

  7. Anomaly Detection for Next-Generation Space Launch Ground Operations

    NASA Technical Reports Server (NTRS)

    Spirkovska, Lilly; Iverson, David L.; Hall, David R.; Taylor, William M.; Patterson-Hine, Ann; Brown, Barbara; Ferrell, Bob A.; Waterman, Robert D.

    2010-01-01

    NASA is developing new capabilities that will enable future human exploration missions while reducing mission risk and cost. The Fault Detection, Isolation, and Recovery (FDIR) project aims to demonstrate the utility of integrated vehicle health management (IVHM) tools in the domain of ground support equipment (GSE) to be used for the next generation launch vehicles. In addition to demonstrating the utility of IVHM tools for GSE, FDIR aims to mature promising tools for use on future missions and document the level of effort - and hence cost - required to implement an application with each selected tool. One of the FDIR capabilities is anomaly detection, i.e., detecting off-nominal behavior. The tool we selected for this task uses a data-driven approach. Unlike rule-based and model-based systems that require manual extraction of system knowledge, data-driven systems take a radically different approach to reasoning. At the basic level, they start with data that represent nominal functioning of the system and automatically learn expected system behavior. The behavior is encoded in a knowledge base that represents "in-family" system operations. During real-time system monitoring or during post-flight analysis, incoming data is compared to that nominal system operating behavior knowledge base; a distance representing deviation from nominal is computed, providing a measure of how far "out of family" current behavior is. We describe the selected tool for FDIR anomaly detection - Inductive Monitoring System (IMS), how it fits into the FDIR architecture, the operations concept for the GSE anomaly monitoring, and some preliminary results of applying IMS to a Space Shuttle GSE anomaly.
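
    A loose sketch of the data-driven idea (not IMS itself): nominal telemetry vectors are clustered to form an "in-family" knowledge base, and new vectors are scored by their distance to the nearest nominal cluster centre. The cluster count and data are assumptions for illustration.

      import numpy as np
      from sklearn.cluster import KMeans

      rng = np.random.default_rng(10)
      nominal = rng.normal(0, 1, size=(3000, 6))           # nominal GSE telemetry vectors
      km = KMeans(n_clusters=20, n_init=10, random_state=0).fit(nominal)

      new = np.vstack([rng.normal(0, 1, size=(10, 6)),
                       rng.normal(4, 1, size=(2, 6))])     # two off-nominal vectors appended
      dists = np.linalg.norm(new[:, None, :] - km.cluster_centers_[None], axis=2).min(axis=1)
      print(dists)                                         # large distance = far out of family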

  8. Anomaly detection in clutter using spectrally enhanced LADAR

    NASA Astrophysics Data System (ADS)

    Chhabra, Puneet S.; Wallace, Andrew M.; Hopgood, James R.

    2015-05-01

    Discrete return (DR) Laser Detection and Ranging (Ladar) systems provide a series of echoes that reflect from objects in a scene. These can be first, last or multi-echo returns. In contrast, Full-Waveform (FW)-Ladar systems measure the intensity of light reflected from objects continuously over a period of time. In a camouflaged scenario, e.g., objects hidden behind dense foliage, a FW-Ladar penetrates such foliage and returns a sequence of echoes including buried faint echoes. The aim of this paper is to learn local patterns of co-occurring echoes characterised by their measured spectra. A deviation from such patterns defines an abnormal event in a forest/tree depth profile. As far as the authors know, neither DR nor FW-Ladar data, combined with multiple spectral measurements, has been applied to anomaly detection. This work presents an algorithm that allows detection of spectral and temporal anomalies in FW-Multi Spectral Ladar (FW-MSL) data samples. An anomaly is defined as a full waveform temporal and spectral signature that does not conform to a prior expectation, represented using a learnt subspace (dictionary) and set of coefficients that capture co-occurring local patterns using an overlapping temporal window. A modified optimization scheme is proposed for subspace learning based on stochastic approximations. The objective function is augmented with a discriminative term that represents the subspace's separability properties and supports anomaly characterisation. The algorithm detects several man-made objects and anomalous spectra hidden in a dense clutter of vegetation and also allows tree species classification.
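
    A much-simplified sketch of the dictionary-based scoring idea is given below, using off-the-shelf mini-batch dictionary learning and reconstruction error. The discriminative term and the stochastic-approximation update of the paper are omitted, and all parameter values are assumptions.

      import numpy as np
      from sklearn.decomposition import MiniBatchDictionaryLearning

      def fit_waveform_dictionary(windows, n_atoms=32):
          # Learn a dictionary of co-occurring echo patterns from overlapping
          # temporal windows of nominal full-waveform returns.
          dl = MiniBatchDictionaryLearning(n_components=n_atoms, alpha=1.0,
                                           transform_algorithm='omp',
                                           transform_n_nonzero_coefs=5,
                                           random_state=0)
          return dl.fit(windows)

      def anomaly_score(dl, window):
          # Windows the dictionary cannot reconstruct well do not conform to the
          # learnt local patterns and are scored as anomalous.
          code = dl.transform(window.reshape(1, -1))
          recon = (code @ dl.components_).ravel()
          return float(np.linalg.norm(window - recon))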

  9. Anomaly-based intrusion detection for SCADA systems

    SciTech Connect

    Yang, D.; Usynin, A.; Hines, J. W.

    2006-07-01

    Most critical infrastructure, such as chemical processing plants, electrical generation and distribution networks, and gas distribution, is monitored and controlled by Supervisory Control and Data Acquisition (SCADA) systems. These systems have been the focus of increased security attention, and there are concerns that they could be the target of international terrorists. With the constantly growing number of internet-related computer attacks, there is evidence that our critical infrastructure may also be vulnerable. Researchers estimate that malicious online actions may have caused damages of $75 billion as of 2007. One of the interesting countermeasures for enhancing information system security is called intrusion detection. This paper briefly discusses the history of research in intrusion detection techniques and introduces the two basic detection approaches: signature detection and anomaly detection. Finally, it presents the application of techniques developed for monitoring critical process systems, such as nuclear power plants, to anomaly intrusion detection. The method uses an auto-associative kernel regression (AAKR) model coupled with the sequential probability ratio test (SPRT), applied to a simulated SCADA system. The results show that these methods can be generally used to detect a variety of common attacks. (authors)
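
    The AAKR step can be sketched in a few lines. The Gaussian kernel, the bandwidth value, and the idea of feeding the residuals to a sequential test are illustrative assumptions about the general technique rather than the paper's exact configuration.

      import numpy as np

      def aakr_estimate(memory, query, bandwidth=1.0):
          # memory : matrix of nominal (fault-free) sensor vectors, one per row
          # query  : current sensor vector
          # The corrected estimate is a Gaussian-kernel-weighted average of the
          # memory vectors; the residual (query - estimate) is what a sequential
          # test such as the SPRT then monitors for drift.
          d2 = ((memory - query) ** 2).sum(axis=1)
          w = np.exp(-d2 / (2.0 * bandwidth ** 2))
          w /= w.sum() + 1e-12
          return w @ memory

      # residual = query - aakr_estimate(memory, query)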

  10. Anomaly detection based on the statistics of hyperspectral imagery

    NASA Astrophysics Data System (ADS)

    Catterall, Stephen P.

    2004-10-01

    The purpose of this paper is to introduce a new anomaly detection algorithm for application to hyperspectral imaging (HSI) data. The algorithm uses characterisations of the joint (among wavebands) probability density function (pdf) of HSI data. Traditionally, the pdf has been assumed to be multivariate Gaussian or a mixture of multivariate Gaussians. Other distributions have been considered by previous authors, in particular Elliptically Contoured Distributions (ECDs). In this paper we focus on another distribution, which has only recently been defined and studied. This distribution has a more flexible and extensive set of parameters than the multivariate Gaussian does, yet the pdf takes on a relatively simple mathematical form. The result of all this is a model for the pdf of a hyperspectral image, consisting of a mixture of these distributions. Once a model for the pdf of a hyperspectral image has been obtained, it can be incorporated into an anomaly detector. The new anomaly detector is implemented and applied to some medium wave infra-red (MWIR) hyperspectral imagery. Comparison is made with a well-known anomaly detector, and it will be seen that the results are promising.

  11. Claycap anomaly detection using hyperspectral remote sensing and lidargrammetric techniques

    NASA Astrophysics Data System (ADS)

    Garcia Quijano, Maria Jose

    Clay-capped waste sites are a common means of disposing of the more than 40 million tons of hazardous waste produced in the United States every year (EPA, 2003). Due to the potential threat that hazardous waste poses, it is essential to closely monitor the performance of these facilities. Development of a monitoring system that exploits spectral and topographic changes over hazardous waste sites is presented. Spectral anomaly detection is based upon the observed changes in absolute reflectance and spectral derivatives in centipede grass (Eremochloa ophiuroides) under different irrigation levels. The spectral features that provide the best separability among irrigation levels were identified using Stepwise Discriminant Analyses. The Red Edge Position was selected as a suitable discriminant variable to compare the performance of a global and a local anomaly detection algorithm using a DAIS 3715 hyperspectral image. Topographical anomaly detection is assessed by evaluating the vertical accuracy of two LIDAR datasets acquired from two different altitudes (700 m and 1,200 m AGL) over a clay-capped hazardous waste site at the Savannah River National Laboratory, SC, using the same Optech ALTM 2050 sensor and Cessna 337 platform. Additionally, a quantitative comparison is performed to determine the effect that decreasing platform altitude and increasing posting density have on the vertical accuracy of the LIDAR data collected.

  12. GPR anomaly detection with robust principal component analysis

    NASA Astrophysics Data System (ADS)

    Masarik, Matthew P.; Burns, Joseph; Thelen, Brian T.; Kelly, Jack; Havens, Timothy C.

    2015-05-01

    This paper investigates the application of Robust Principal Component Analysis (RPCA) to ground penetrating radar as a means to improve GPR anomaly detection. The method consists of a preprocessing routine to smoothly align the ground and remove the ground response (haircut), followed by mapping to the frequency domain, applying RPCA, and then mapping the sparse component of the RPCA decomposition back to the time domain. A prescreener is then applied to the time-domain sparse component to perform anomaly detection. The emphasis of the RPCA algorithm on sparsity has the effect of significantly increasing the apparent signal-to-clutter ratio (SCR) as compared to the original data, thereby enabling improved anomaly detection. This method is compared to detrending (spatial-mean removal) and classical principal component analysis (PCA), and the RPCA-based processing is seen to provide substantial improvements in the apparent SCR over both of these alternative processing schemes. In particular, the algorithm has been applied to field-collected impulse GPR data and has shown significant improvement in terms of the ROC curve relative to detrending and PCA.
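
    The low-rank/sparse decomposition at the core of this approach can be sketched with the standard inexact augmented Lagrangian solver for principal component pursuit. The preprocessing (ground alignment, frequency-domain mapping) and the prescreener from the paper are not shown, and the default parameter choices below are assumptions.

      import numpy as np

      def shrink(M, tau):
          # entry-wise soft thresholding
          return np.sign(M) * np.maximum(np.abs(M) - tau, 0.0)

      def svd_threshold(M, tau):
          # singular value thresholding (soft threshold applied to singular values)
          U, s, Vt = np.linalg.svd(M, full_matrices=False)
          return U @ np.diag(shrink(s, tau)) @ Vt

      def rpca(X, lam=None, tol=1e-7, max_iter=500):
          # Decompose X into a low-rank component L (clutter) and a sparse
          # component S (potential targets) via inexact ALM iterations.
          X = np.asarray(X, dtype=float)
          m, n = X.shape
          lam = lam or 1.0 / np.sqrt(max(m, n))
          mu = 0.25 * m * n / (np.abs(X).sum() + 1e-12)
          S = np.zeros_like(X)
          Y = np.zeros_like(X)
          norm_X = np.linalg.norm(X, 'fro')
          for _ in range(max_iter):
              L = svd_threshold(X - S + Y / mu, 1.0 / mu)
              S = shrink(X - L + Y / mu, lam / mu)
              R = X - L - S
              Y += mu * R
              if np.linalg.norm(R, 'fro') / norm_X < tol:
                  break
          return L, S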

  13. Automatic detection of anomalies in Space Shuttle Main Engine turbopumps

    NASA Technical Reports Server (NTRS)

    Lo, Ching F. (Principal Investigator); Whitehead, Bruce; Wu, Kewei; Rogers, George

    1992-01-01

    A prototype expert system for detecting anomalies in turbopump vibration data has been tested with data from ground tests 902-473, 902-501, 902-519, and 904-097 of the Space Shuttle Main Engine (SSME). The expert system has been utilized to analyze vibration data from each of the following SSME components: high-pressure oxidizer turbopump, high-pressure fuel turbopump, low-pressure fuel turbopump, and preburner boost pump. The expert system locates and classifies peaks in the power spectral density of each 0.4 s window of steady-state data. Peaks representing the fundamental and harmonic frequencies of both shaft rotation and bearing cage rotation are identified by the expert system. Anomalies are then detected on the basis of two thresholds set individually for the amplitude of each of these peaks: a prior threshold used during the first few windows of data in a test, and a posterior threshold used thereafter. In most cases the anomalies detected by the expert system agree with those reported by NASA. The two cases where there is significant disagreement will be further studied and the system design refined accordingly.
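
    The windowed spectral-peak tracking described above can be sketched as follows. The use of Welch's method, the tolerance handling, and the two-threshold rule are simplified assumptions standing in for the expert system's actual peak-classification logic, and the numeric values are illustrative.

      import numpy as np
      from scipy import signal

      def window_peak_amplitudes(x, fs, target_freqs, window_s=0.4):
          # Amplitude of the spectral peak nearest each expected shaft/cage
          # harmonic in successive 0.4 s windows of a vibration signal.
          n = int(window_s * fs)
          peaks = []
          for start in range(0, len(x) - n + 1, n):
              f, pxx = signal.welch(x[start:start + n], fs=fs, nperseg=n)
              amps = [pxx[np.argmin(np.abs(f - tf))] for tf in target_freqs]
              peaks.append(amps)
          return np.array(peaks)   # rows: windows, columns: tracked peaks

      def flag_anomalies(peak_amps, n_prior=5, k=3.0):
          # Two-stage thresholding: a fixed prior threshold for the first few
          # windows, then a running (posterior) threshold from past windows.
          flags = np.zeros(peak_amps.shape, dtype=bool)
          prior = peak_amps[:n_prior].mean(axis=0) * 2.0   # assumed prior threshold
          flags[:n_prior] = peak_amps[:n_prior] > prior
          for i in range(n_prior, len(peak_amps)):
              hist = peak_amps[:i]
              flags[i] = peak_amps[i] > hist.mean(axis=0) + k * hist.std(axis=0)
          return flags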

  14. Colorimetric multiplexed immunoassay for sequential detection of tumor markers.

    PubMed

    Wang, Jing; Cao, Ya; Xu, Yuanyuan; Li, Genxi

    2009-10-15

    In this paper, a very simple and easily operated colorimetric multiplexed immunoassay method for sequential detection of tumor markers is presented. Magnetic microparticles conjugated with biotinylated antibodies are first added into the test solution. After fast magnetic collection, these complexes are separated from non-specific proteins. Through different enzymatic reactions of 3,3',5,5'-tetramethylbenzidine (TMB) and o-phenylenediamine (OPD), catalyzed by horseradish peroxidase molecules loaded on the surfaces of gold nanoparticles, the two antigens carcinoembryonic antigen and alpha-fetoprotein can be detected even with the naked eye. The detection limit obtained from the spectrophotometric measurements is as low as 0.02 ng/mL. The proposed method also has high specificity and reproducibility, as well as an excellent efficiency of 94 min for the detection of serum samples. This new multiplexed immunoassay method might therefore be a promising approach for the diagnosis of cancer and some other diseases in clinical applications. PMID:19726177

  15. Detection of Low Temperature Volcanogenic Thermal Anomalies with ASTER

    NASA Astrophysics Data System (ADS)

    Pieri, D. C.; Baxter, S.

    2009-12-01

    Predicting volcanic eruptions is a thorny problem, as volcanoes typically exhibit idiosyncratic waxing and/or waning pre-eruption emission, geodetic, and seismic behavior. It is no surprise that increasing our accuracy and precision in eruption prediction depends on assessing the time-progressions of all relevant precursor geophysical, geochemical, and geological phenomena, and on more frequently observing volcanoes when they become restless. The ASTER instrument on the NASA Terra Earth Observing System satellite in low earth orbit provides important capabilities in the area of detection of volcanogenic anomalies such as thermal precursors and increased passive gas emissions. Its unique high spatial resolution multi-spectral thermal IR imaging data (90m/pixel; 5 bands in the 8-12um region), bore-sighted with visible and near-IR imaging data, and combined with off-nadir pointing and stereo-photogrammetric capabilities make ASTER a potentially important volcanic precursor detection tool. We are utilizing the JPL ASTER Volcano Archive (http://ava.jpl.nasa.gov) to systematically examine 80,000+ ASTER volcano images to analyze (a) thermal emission baseline behavior for over 1500 volcanoes worldwide, (b) the form and magnitude of time-dependent thermal emission variability for these volcanoes, and (c) the spatio-temporal limits of detection of pre-eruption temporal changes in thermal emission in the context of eruption precursor behavior. We are creating and analyzing a catalog of the magnitude, frequency, and distribution of volcano thermal signatures worldwide as observed from ASTER since 2000 at 90m/pixel. Of particular interest as eruption precursors are small low contrast thermal anomalies of low apparent absolute temperature (e.g., melt-water lakes, fumaroles, geysers, grossly sub-pixel hotspots), for which the signal-to-noise ratio may be marginal (e.g., scene confusion due to clouds, water and water vapor, fumarolic emissions, variegated ground emissivity, and

  16. Using Physical Models for Anomaly Detection in Control Systems

    NASA Astrophysics Data System (ADS)

    Svendsen, Nils; Wolthusen, Stephen

    Supervisory control and data acquisition (SCADA) systems are increasingly used to operate critical infrastructure assets. However, the inclusion of advanced information technology and communications components and elaborate control strategies in SCADA systems increase the threat surface for external and subversion-type attacks. The problems are exacerbated by site-specific properties of SCADA environments that make subversion detection impractical; and by sensor noise and feedback characteristics that degrade conventional anomaly detection systems. Moreover, potential attack mechanisms are ill-defined and may include both physical and logical aspects.

  17. Inflight and Preflight Detection of Pitot Tube Anomalies

    NASA Technical Reports Server (NTRS)

    Mitchell, Darrell W.

    2014-01-01

    The health and integrity of aircraft sensors play a critical role in aviation safety. Inaccurate or false readings from these sensors can lead to improper decision making, resulting in serious and sometimes fatal consequences. This project demonstrated the feasibility of using advanced data analysis techniques to identify anomalies in Pitot tubes resulting from blockage such as icing, moisture, or foreign objects. The core technology used in this project is referred to as noise analysis because it relates sensors' response time to the dynamic component (noise) found in the signal of these same sensors. This analysis technique has used existing electrical signals of Pitot tube sensors that result from measured processes during inflight conditions and/or induced signals in preflight conditions to detect anomalies in the sensor readings. Analysis and Measurement Services Corporation (AMS Corp.) has routinely used this technology to determine the health of pressure transmitters in nuclear power plants. The application of this technology for the detection of aircraft anomalies is innovative. Instead of determining the health of process monitoring at a steady-state condition, this technology will be used to quickly inform the pilot when an air-speed indication becomes faulty under any flight condition as well as during preflight preparation.

  18. Hierarchical Kohonenen net for anomaly detection in network security.

    PubMed

    Sarasamma, Suseela T; Zhu, Qiuming A; Huff, Julie

    2005-04-01

    A novel multilevel hierarchical Kohonen Net (K-Map) for an intrusion detection system is presented. Each level of the hierarchical map is modeled as a simple winner-take-all K-Map. One significant advantage of this multilevel hierarchical K-Map is its computational efficiency. Unlike other statistical anomaly detection methods such as nearest neighbor approach, K-means clustering or probabilistic analysis that employ distance computation in the feature space to identify the outliers, our approach does not involve costly point-to-point computation in organizing the data into clusters. Another advantage is the reduced network size. We use the classification capability of the K-Map on selected dimensions of data set in detecting anomalies. Randomly selected subsets that contain both attacks and normal records from the KDD Cup 1999 benchmark data are used to train the hierarchical net. We use a confidence measure to label the clusters. Then we use the test set from the same KDD Cup 1999 benchmark to test the hierarchical net. We show that a hierarchical K-Map in which each layer operates on a small subset of the feature space is superior to a single-layer K-Map operating on the whole feature space in detecting a variety of attacks in terms of detection rate as well as false positive rate. PMID:15828658
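
    A single winner-take-all layer of the kind the hierarchy is built from can be sketched as below. The number of units, the fixed learning rate, and the training loop are illustrative assumptions, and the paper's confidence-based cluster labelling and multilevel stacking are not shown.

      import numpy as np

      class WinnerTakeAllKMap:
          # Minimal winner-take-all Kohonen layer (a sketch, not the paper's exact net).
          # Each level of the hierarchy is a map like this, trained on a small
          # subset of the feature dimensions.
          def __init__(self, n_units, n_features, lr=0.1, epochs=10, seed=0):
              rng = np.random.default_rng(seed)
              self.w = rng.normal(size=(n_units, n_features))
              self.lr, self.epochs = lr, epochs

          def fit(self, X):
              for _ in range(self.epochs):
                  for x in X:
                      j = np.argmin(np.linalg.norm(self.w - x, axis=1))  # winning unit
                      self.w[j] += self.lr * (x - self.w[j])             # move only the winner
              return self

          def assign(self, X):
              # Index of the winning unit for each record (used for cluster labelling).
              return np.array([np.argmin(np.linalg.norm(self.w - x, axis=1)) for x in X])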

  19. Towards Reliable Evaluation of Anomaly-Based Intrusion Detection Performance

    NASA Technical Reports Server (NTRS)

    Viswanathan, Arun

    2012-01-01

    This report describes the results of research into the effects of environment-induced noise on the evaluation process for anomaly detectors in the cyber security domain. This research was conducted during a 10-week summer internship program from the 19th of August, 2012 to the 23rd of August, 2012 at the Jet Propulsion Laboratory in Pasadena, California. The research performed lies within the larger context of the Los Angeles Department of Water and Power (LADWP) Smart Grid cyber security project, a Department of Energy (DoE) funded effort involving the Jet Propulsion Laboratory, California Institute of Technology and the University of Southern California/Information Sciences Institute. The results of the present effort constitute an important contribution towards building more rigorous evaluation paradigms for anomaly-based intrusion detectors in complex cyber-physical systems such as the Smart Grid. Anomaly detection is a key strategy for cyber intrusion detection; it operates by identifying deviations from profiles of nominal behavior and is thus conceptually appealing for detecting "novel" attacks. Evaluating the performance of such a detector requires assessing: (a) how well it captures the model of nominal behavior, and (b) how well it detects attacks (deviations from normality). Current evaluation methods produce results that give insufficient insight into the operation of a detector, inevitably resulting in a significantly poor characterization of a detector's performance. In this work, we first describe a preliminary taxonomy of key evaluation constructs that are necessary for establishing rigor in the evaluation regime of an anomaly detector. We then focus on clarifying the impact of the operational environment on the manifestation of attacks in monitored data. We show how dynamic and evolving environments can introduce high variability into the data stream, perturbing detector performance. Prior research has focused on understanding the impact of this

  20. Computationally efficient strategies to perform anomaly detection in hyperspectral images

    NASA Astrophysics Data System (ADS)

    Rossi, Alessandro; Acito, Nicola; Diani, Marco; Corsini, Giovanni

    2012-11-01

    In remote sensing, hyperspectral sensors are effectively used for target detection and recognition because of their high spectral resolution, which allows discrimination of different materials in the sensed scene. When a priori information about the spectrum of the targets of interest is not available, target detection turns into anomaly detection (AD), i.e. searching for objects that are anomalous with respect to the scene background. In the field of AD, anomalies can generally be associated with observations that statistically depart from the background clutter, the latter being intended either as a local neighborhood surrounding the observed pixel or as a large part of the image. In this context, much effort has been devoted to reducing the computational load of AD algorithms so as to furnish information for real-time decision making. In this work, a sub-class of AD methods is considered that aims at detecting small rare objects that are anomalous with respect to their local background. Such techniques not only are characterized by mathematical tractability but also allow the design of real-time strategies for AD. Among these methods, one of the most established anomaly detectors is the RX algorithm, which is based on a local Gaussian model of the background. In the literature, the RX decision rule has been employed to develop computationally efficient algorithms implemented in real-time systems. In this work, a survey of computationally efficient methods to implement the RX detector is presented, where advanced algebraic strategies are exploited to speed up the estimation of the covariance matrix and of its inverse. The comparison of the overall number of operations required by the different implementations of the RX algorithm is given and discussed by varying the RX parameters, in order to show the computational improvements achieved with the introduced algebraic strategy.
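
    For reference, a global (whole-image background) variant of the RX decision rule is sketched below; the local sliding-window estimation and the algebraic speed-ups surveyed in the paper are omitted, so this is a baseline illustration rather than the optimized implementations discussed above.

      import numpy as np

      def rx_global(cube):
          # cube: hyperspectral image of shape (rows, cols, bands).
          # Score each pixel by its Mahalanobis distance from the background mean
          # under a multivariate Gaussian background model.
          rows, cols, bands = cube.shape
          X = cube.reshape(-1, bands).astype(float)
          mu = X.mean(axis=0)
          cov_inv = np.linalg.pinv(np.cov(X, rowvar=False))  # pinv guards against singular covariance
          centered = X - mu
          scores = np.einsum('ij,jk,ik->i', centered, cov_inv, centered)
          return scores.reshape(rows, cols)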

  1. Rule-based expert system for maritime anomaly detection

    NASA Astrophysics Data System (ADS)

    Roy, Jean

    2010-04-01

    Maritime domain operators/analysts have a mandate to be aware of all that is happening within their areas of responsibility. This mandate derives from the needs to defend sovereignty, protect infrastructures, counter terrorism, detect illegal activities, etc., and it has become more challenging in the past decade, as commercial shipping turned into a potential threat. In particular, a huge portion of the data and information made available to the operators/analysts is mundane, from maritime platforms going about normal, legitimate activities, and it is very challenging for them to detect and identify the non-mundane. To achieve such anomaly detection, they must establish numerous relevant situational facts from a variety of sensor data streams. Unfortunately, many of the facts of interest just cannot be observed; the operators/analysts thus use their knowledge of the maritime domain and their reasoning faculties to infer these facts. As they are often overwhelmed by the large amount of data and information, automated reasoning tools could be used to support them by inferring the necessary facts, ultimately providing indications and warning on a small number of anomalous events worthy of their attention. Along this line of thought, this paper describes a proof-of-concept prototype of a rule-based expert system implementing automated rule-based reasoning in support of maritime anomaly detection.

  2. Visual Mismatch Negativity Reveals Automatic Detection of Sequential Regularity Violation

    PubMed Central

    Stefanics, Gábor; Kimura, Motohiro; Czigler, István

    2011-01-01

    Sequential regularities are abstract rules based on repeating sequences of environmental events, which are useful for making predictions about future events. Here, we tested whether the visual system is capable of detecting sequential regularity in unattended stimulus sequences. The visual mismatch negativity (vMMN) component of the event-related potentials is sensitive to the violation of complex regularities (e.g., object-related characteristics, temporal patterns). We used the vMMN component as an index of violation of conditional (if, then) regularities. In the first experiment, to investigate the emergence of vMMN and other change-related activity to the violation of conditional rules, red and green disk patterns were delivered in pairs. The majority of pairs comprised disk patterns with identical colors, whereas in deviant pairs the colors were different. The probabilities of the two colors were equal. The second member of the deviant pairs elicited a vMMN with longer latency and more extended spatial distribution to deviants with lower probability (10 vs. 30%). In the second (control) experiment the emergence of vMMN to violation of a simple, feature-related rule was studied using oddball sequences of stimulus pairs where deviant colors were presented with 20% probability. Deviant colored patterns elicited a vMMN, and this component was larger for the second member of the pair, i.e., after a shorter inter-stimulus interval. This result corresponds to the SOA/(v)MMN relationship expected on the basis of a memory-mismatch process. Our results show that the system underlying vMMN is sensitive to abstract, conditional rules. Representation of such rules implicates expectation of a subsequent event; therefore vMMN can be considered a correlate of violated predictions about the characteristics of environmental events. PMID:21629766

  3. Detection of Anomalies in Hydrometric Data Using Artificial Intelligence Techniques

    NASA Astrophysics Data System (ADS)

    Lauzon, N.; Lence, B. J.

    2002-12-01

    This work focuses on the detection of anomalies in hydrometric data sequences, such as 1) outliers, which are individual data having statistical properties that differ from those of the overall population; 2) shifts, which are sudden changes over time in the statistical properties of the historical records of data; and 3) trends, which are systematic changes over time in the statistical properties. For the purpose of the design and management of water resources systems, it is important to be aware of these anomalies in hydrometric data, for they can induce a bias in the estimation of water quantity and quality parameters. These anomalies may be viewed as specific patterns affecting the data, and therefore pattern recognition techniques can be used for identifying them. However, the number of possible patterns is very large for each type of anomaly and consequently large computing capacities are required to account for all possibilities using the standard statistical techniques, such as cluster analysis. Artificial intelligence techniques, such as the Kohonen neural network and fuzzy c-means, are clustering techniques commonly used for pattern recognition in several areas of engineering and have recently begun to be used for the analysis of natural systems. They require much less computing capacity than the standard statistical techniques, and therefore are well suited for the identification of outliers, shifts and trends in hydrometric data. This work constitutes a preliminary study, using synthetic data representing hydrometric data that can be found in Canada. The analysis of the results obtained shows that the Kohonen neural network and fuzzy c-means are reasonably successful in identifying anomalies. This work also addresses the problem of uncertainties inherent to the calibration procedures that fit the clusters to the possible patterns for both the Kohonen neural network and fuzzy c-means. Indeed, for the same database, different sets of clusters can be

  4. Detection of chiral anomaly and valley transport in Dirac semimetals

    NASA Astrophysics Data System (ADS)

    Zhang, Cheng; Zhang, Enze; Liu, Yanwen; Chen, Zhigang; Liang, Sihang; Cao, Junzhi; Yuan, Xiang; Tang, Lei; Li, Qian; Gu, Teng; Wu, Yizheng; Zou, Jin; Xiu, Faxian

    Chiral anomaly is the non-conservation of chiral charge pumped by a topologically nontrivial gauge field, which has been predicted to exist in the emergent quasiparticle excitations in Dirac and Weyl semimetals. However, so far, such a pumping process has not been clearly demonstrated and lacks a convincing experimental identification. Here, we report the detection of the charge pumping effect and the related valley transport in Cd3As2 driven by external electric and magnetic fields (EB). We find that the chiral imbalance leads to a non-zero gyrotropic coefficient, which can be confirmed by the EB-generated Kerr effect. By applying B along the current direction, we observe a negative magnetoresistance despite the giant positive one in other directions, a clear indication of the chiral anomaly. Remarkably, a robust nonlocal response in valley diffusion originating from the chiral anomaly persists up to room temperature when B is parallel to E. The ability to manipulate the valley polarization in Dirac semimetals opens up a brand-new route to understand their fundamental properties through external fields and to utilize the chiral fermions in valleytronic applications.

  5. Segmentation of laser range image for pipe anomaly detection

    NASA Astrophysics Data System (ADS)

    Liu, Zheng; Krys, Dennis

    2010-04-01

    Laser-based scanning can provide a precise surface profile. It has been widely applied to the inspection of pipe inner walls and is often used along with other types of sensors, like sonar and closed-circuit television (CCTV). These measurements can be used for pipe deterioration modeling and condition assessment. Geometric information needs to be extracted to characterize anomalies in the pipe profile. Since laser scanning measures distance, segmentation with a threshold is a straightforward way to isolate the anomalies. However, thresholding with a fixed distance value does not work well for the laser range image due to intensity inhomogeneity, which is caused by uncontrollable factors during the inspection. Thus, a local binary fitting (LBF) active contour model is employed in this work to process the laser range image, and an image phase congruency algorithm is adopted to provide the initial contour as required by the LBF method. The combination of these two approaches can successfully detect the anomalies from a laser range image.

  6. New models for hyperspectral anomaly detection and un-mixing

    NASA Astrophysics Data System (ADS)

    Bernhardt, M.; Heather, J. P.; Smith, M. I.

    2005-06-01

    It is now established that hyperspectral images of many natural backgrounds have statistics with fat-tails. In spite of this, many of the algorithms that are used to process them appeal to the multivariate Gaussian model. In this paper we consider biologically motivated generative models that might explain observed mixtures of vegetation in natural backgrounds. The degree to which these models match the observed fat-tailed distributions is investigated. Having shown how fat-tailed statistics arise naturally from the generative process, the models are put to work in new anomaly detection and un-mixing algorithms. The performance of these algorithms is compared with more traditional approaches.

  7. Inductive inference model of anomaly and misuse detection

    SciTech Connect

    Helman, P.

    1997-01-01

    Further consequences of the inductive inference model of anomaly and misuse detection are presented. The results apply to the design of both probability models for the inductive inference framework and to the design of W&S rule bases. The issues considered include: the role of misuse models M{sub A}, the selection of relevant sets of attributes and the aggregation of their values, the effect on a rule base of nonmaximal rules, and the partitioning of a set of attributes into a left hand and right hand side.

  8. System for Anomaly and Failure Detection (SAFD) system development

    NASA Technical Reports Server (NTRS)

    Oreilly, D.

    1993-01-01

    The System for Anomaly and Failure Detection (SAFD) algorithm was developed as an improvement over the current redline system used in the Space Shuttle Main Engine Controller (SSMEC). Simulation tests and execution against previous hot fire tests demonstrated that the SAFD algorithm can detect engine failures as much as tens of seconds before the redline system recognized the failure. Although the current algorithm only operates during steady state conditions (engine not throttling), work is underway to expand the algorithm to work during transient conditions. This task assignment originally specified developing a platform for executing the algorithm during hot fire tests at Technology Test Bed (TTB) and installing the SAFD algorithm on that platform. Two units were built and installed in the Hardware Simulation Lab and at the TTB in December 1991. Since that time, the task primarily entailed improvement and maintenance of the systems, additional testing to prove the feasibility of the algorithm, and support of hot fire testing. This document addresses the work done since the last report of June 1992. The work on the System for Anomaly and Failure Detection during this period included improving the platform and the algorithm, testing the algorithm against previous test data and in the Hardware Simulation Lab, installing other algorithms on the system, providing support for operations at the Technology Test Bed, and providing routine maintenance.

  9. Anomaly depth detection in trans-admittance mammography: a formula independent of anomaly size or admittivity contrast

    NASA Astrophysics Data System (ADS)

    Zhang, Tingting; Lee, Eunjung; Seo, Jin Keun

    2014-04-01

    Trans-admittance mammography (TAM) is a bioimpedance technique for breast cancer detection. It is based on the comparison of tissue conductivity: cancerous tissue is identified by its higher conductivity in comparison with the surrounding normal tissue. In TAM, the breast is compressed between two electrical plates (in a similar architecture to x-ray mammography). The bottom plate has many sensing point electrodes that provide two-dimensional images (trans-admittance maps) that are induced by voltage differences between the two plates. Multi-frequency admittance data (Neumann data) are measured over the range 50 Hz-500 kHz. TAM aims to determine the location and size of any anomaly from the multi-frequency admittance data. Various anomaly detection algorithms can be used to process TAM data to determine the transverse positions of anomalies. However, existing methods cannot reliably determine the depth or size of an anomaly. Breast cancer detection using TAM would be improved if the depth or size of an anomaly could also be estimated, properties that are independent of the admittivity contrast. A formula is proposed here that can estimate the depth of an anomaly independent of its size and the admittivity contrast. This depth estimation can also be used to derive an estimation of the size of the anomaly. The proposed estimations are verified rigorously under a simplified model. Numerical simulation shows that the proposed method also works well in general settings.

  10. Sequential Model-Based Detection in a Shallow Ocean Acoustic Environment

    SciTech Connect

    Candy, J V

    2002-03-26

    A model-based detection scheme is developed to passively monitor an ocean acoustic environment along with its associated variations. The technique employs an embedded model-based processor and a reference model in a sequential likelihood detection scheme. The monitor is therefore called a sequential reference detector. The underlying theory for the design is developed and discussed in detail.

  11. System for Anomaly and Failure Detection (SAFD) system development

    NASA Astrophysics Data System (ADS)

    Oreilly, D.

    1992-07-01

    This task specified developing the hardware and software necessary to implement the System for Anomaly and Failure Detection (SAFD) algorithm, developed under Technology Test Bed (TTB) Task 21, on the TTB engine stand. This effort involved building two units: one unit to be installed in the Block II Space Shuttle Main Engine (SSME) Hardware Simulation Lab (HSL) at Marshall Space Flight Center (MSFC), and one unit to be installed at the TTB engine stand. Rocketdyne personnel from the HSL performed the task. The SAFD algorithm was developed as an improvement over the current redline system used in the Space Shuttle Main Engine Controller (SSMEC). Simulation tests and execution against previous hot fire tests demonstrated that the SAFD algorithm can detect engine failure as much as tens of seconds before the redline system recognized the failure. Although the current algorithm only operates during steady state conditions (engine not throttling), work is underway to expand the algorithm to work during transient conditions.

  12. Log Summarization and Anomaly Detection for Troubleshooting Distributed Systems

    SciTech Connect

    Gunter, Dan; Tierney, Brian L.; Brown, Aaron; Swany, Martin; Bresnahan, John; Schopf, Jennifer M.

    2007-08-01

    Today's system monitoring tools are capable of detecting system failures such as host failures, OS errors, and network partitions in near-real time. Unfortunately, the same cannot yet be said of the end-to-end distributed software stack. Any given action, for example, reliably transferring a directory of files, can involve a wide range of complex and interrelated actions across multiple pieces of software: checking user certificates and permissions, getting details for all files, performing third-party transfers, understanding re-try policy decisions, etc. We present an infrastructure for troubleshooting complex middleware, a general purpose technique for configurable log summarization, and an anomaly detection technique that works in near-real time on running Grid middleware. We present results gathered using this infrastructure from instrumented Grid middleware and applications running on the Emulab testbed. From these results, we analyze the effectiveness of several algorithms at accurately detecting a variety of performance anomalies.

  13. System for Anomaly and Failure Detection (SAFD) system development

    NASA Technical Reports Server (NTRS)

    Oreilly, D.

    1992-01-01

    This task specified developing the hardware and software necessary to implement the System for Anomaly and Failure Detection (SAFD) algorithm, developed under Technology Test Bed (TTB) Task 21, on the TTB engine stand. This effort involved building two units: one unit to be installed in the Block II Space Shuttle Main Engine (SSME) Hardware Simulation Lab (HSL) at Marshall Space Flight Center (MSFC), and one unit to be installed at the TTB engine stand. Rocketdyne personnel from the HSL performed the task. The SAFD algorithm was developed as an improvement over the current redline system used in the Space Shuttle Main Engine Controller (SSMEC). Simulation tests and execution against previous hot fire tests demonstrated that the SAFD algorithm can detect engine failure as much as tens of seconds before the redline system recognized the failure. Although the current algorithm only operates during steady state conditions (engine not throttling), work is underway to expand the algorithm to work during transient conditions.

  14. Identification and detection of anomalies through SSME data analysis

    NASA Technical Reports Server (NTRS)

    Pereira, Lisa; Ali, Moonis

    1990-01-01

    The goal of the ongoing research described in this paper is to analyze real-time ground test data in order to identify patterns associated with the anomalous engine behavior, and on the basis of this analysis to develop an expert system which detects anomalous engine behavior in the early stages of fault development. A prototype of the expert system has been developed and tested on the high frequency data of two SSME tests, namely Test #901-0516 and Test #904-044. The comparison of our results with the post-test analyses indicates that the expert system detected the presence of the anomalies in a significantly early stage of fault development.

  15. Anomaly Detection in Test Equipment via Sliding Mode Observers

    NASA Technical Reports Server (NTRS)

    Solano, Wanda M.; Drakunov, Sergey V.

    2012-01-01

    Nonlinear observers were originally developed based on the ideas of variable structure control, and for the purpose of detecting disturbances in complex systems. In this anomaly detection application, these observers were designed for estimating the distributed state of fluid flow in a pipe described by a class of advection equations. The observer algorithm uses collected data in a piping system to estimate the distributed system state (pressure and velocity along a pipe containing liquid gas propellant flow) using only boundary measurements. These estimates are then used to further estimate and localize possible anomalies such as leaks or foreign objects, and instrumentation metering problems such as incorrect flow meter orifice plate size. The observer algorithm has the following parts: a mathematical model of the fluid flow, observer control algorithm, and an anomaly identification algorithm. The main functional operation of the algorithm is in creating the sliding mode in the observer system implemented as software. Once the sliding mode starts in the system, the equivalent value of the discontinuous function in sliding mode can be obtained by filtering out the high-frequency chattering component. In control theory, "observers" are dynamic algorithms for the online estimation of the current state of a dynamic system by measurements of an output of the system. Classical linear observers can provide optimal estimates of a system state in case of uncertainty modeled by white noise. For nonlinear cases, the theory of nonlinear observers has been developed and its success is mainly due to the sliding mode approach. Using the mathematical theory of variable structure systems with sliding modes, the observer algorithm is designed in such a way that it steers the output of the model to the output of the system obtained via a variety of sensors, in spite of possible mismatches between the assumed model and actual system. The unique properties of sliding mode control

  16. Anomaly detection applied to a materials control and accounting database

    SciTech Connect

    Whiteson, R.; Spanks, L.; Yarbro, T.

    1995-09-01

    An important component of the national mission of reducing the nuclear danger includes accurate recording of the processing and transportation of nuclear materials. Nuclear material storage facilities, nuclear chemical processing plants, and nuclear fuel fabrication facilities collect and store large amounts of data describing transactions that involve nuclear materials. To maintain confidence in the integrity of these data, it is essential to identify anomalies in the databases. Anomalous data could indicate error, theft, or diversion of material. Yet, because of the complex and diverse nature of the data, analysis and evaluation are extremely tedious. This paper describes the authors' work in the development of analysis tools to automate the anomaly detection process for the Material Accountability and Safeguards System (MASS) that tracks and records the activities associated with accountable quantities of nuclear material at Los Alamos National Laboratory. Using existing guidelines that describe valid transactions, the authors have created an expert system that identifies transactions that do not conform to the guidelines. Thus, this expert system can be used to focus the attention of the expert or inspector directly on significant phenomena.

  17. Anomaly detection of microstructural defects in continuous fiber reinforced composites

    NASA Astrophysics Data System (ADS)

    Bricker, Stephen; Simmons, J. P.; Przybyla, Craig; Hardie, Russell

    2015-03-01

    Ceramic matrix composites (CMC) with continuous fiber reinforcements have the potential to enable the next generation of high-speed hypersonic vehicles and/or significant improvements in gas turbine engine performance due to their exhibited toughness when subjected to high mechanical loads at extreme temperatures (2200F+). Reinforced fiber composites (RFC) provide increased fracture toughness, crack growth resistance, and strength, though little is known about how stochastic variation and imperfections in the material affect material properties. In this work, tools are developed for quantifying anomalies within the microstructure at several scales. The detection and characterization of anomalous microstructure is a critical step in linking production techniques to properties, as well as in accurate material simulation and property prediction for the integrated computational materials engineering (ICME) of RFC-based components. It is desired to find statistical outliers for any number of material characteristics such as fibers, fiber coatings, and pores. Here, fiber orientation, or `velocity', and `velocity' gradient are developed and examined for anomalous behavior. Categorizing anomalous behavior in the CMC is approached by multivariate Gaussian mixture modeling. A Gaussian mixture is employed to estimate the probability density function (PDF) of the features in question, and anomalies are classified by their likelihood of belonging to the statistical normal behavior for that feature.
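
    The mixture-based scoring step can be sketched with an off-the-shelf Gaussian mixture. The feature choice (e.g. local fiber 'velocity' and its gradient), the component count, and the use of negative log-likelihood as the anomaly score are assumptions made only for illustration.

      import numpy as np
      from sklearn.mixture import GaussianMixture

      def fit_anomaly_scorer(normal_features, n_components=3):
          # Model the distribution of nominal microstructure features with a
          # Gaussian mixture; anomalies are observations with low likelihood.
          gmm = GaussianMixture(n_components=n_components, covariance_type='full',
                                random_state=0).fit(normal_features)

          def score(x):
              # higher score = less likely under the nominal mixture = more anomalous
              return -gmm.score_samples(np.atleast_2d(x))

          return score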

  18. A high-order statistical tensor based algorithm for anomaly detection in hyperspectral imagery.

    PubMed

    Geng, Xiurui; Sun, Kang; Ji, Luyan; Zhao, Yongchao

    2014-01-01

    Recently, high-order statistics have received more and more interest in the field of hyperspectral anomaly detection. However, most of the existing high-order statistics based anomaly detection methods require stepwise iterations, since they are direct applications of blind source separation. Moreover, these methods usually produce multiple detection maps rather than a single anomaly distribution image. In this study, we exploit the concept of the coskewness tensor and propose a new anomaly detection method, called COSD (coskewness detector). COSD does not need iteration and can produce a single detection map. The experiments based on both simulated and real hyperspectral data sets verify the effectiveness of our algorithm. PMID:25366706

  19. Near-Real Time Anomaly Detection for Scientific Sensor Data

    NASA Astrophysics Data System (ADS)

    Gallegos, I.; Gates, A.; Tweedie, C. E.; goswami, S.; Jaimes, A.; Gamon, J. A.

    2011-12-01

    Verification (SDVe) prototype tool identified anomalies detected by the expert-specified data properties over the EC data. Scientists using DaProS and SDVe were able to detect environmental variability, instrument malfunctioning, and seasonal and diurnal variability in EC and hyperspectral datasets. The results of the experiment also yielded insights regarding the practices followed by scientists to specify data properties, and it exposed new data properties challenges and a potential method for capturing data quality confidence levels.

  20. Apparatus for detecting a magnetic anomaly contiguous to remote location by squid gradiometer and magnetometer systems

    DOEpatents

    Overton, Jr., William C.; Steyert, Jr., William A.

    1984-01-01

    A superconducting quantum interference device (SQUID) magnetic detection apparatus detects magnetic fields, signals, and anomalies at remote locations. Two remotely rotatable SQUID gradiometers may be housed in a cryogenic environment to search for and locate unambiguously magnetic anomalies. The SQUID magnetic detection apparatus can be used to determine the azimuth of a hydrofracture by first flooding the hydrofracture with a ferrofluid to create an artificial magnetic anomaly therein.

  1. FRaC: a feature-modeling approach for semi-supervised and unsupervised anomaly detection

    PubMed Central

    Brodley, Carla; Slonim, Donna

    2011-01-01

    Anomaly detection involves identifying rare data instances (anomalies) that come from a different class or distribution than the majority (which are simply called “normal” instances). Given a training set of only normal data, the semi-supervised anomaly detection task is to identify anomalies in the future. Good solutions to this task have applications in fraud and intrusion detection. The unsupervised anomaly detection task is different: Given unlabeled, mostly-normal data, identify the anomalies among them. Many real-world machine learning tasks, including many fraud and intrusion detection tasks, are unsupervised because it is impractical (or impossible) to verify all of the training data. We recently presented FRaC, a new approach for semi-supervised anomaly detection. FRaC is based on using normal instances to build an ensemble of feature models, and then identifying instances that disagree with those models as anomalous. In this paper, we investigate the behavior of FRaC experimentally and explain why FRaC is so successful. We also show that FRaC is a superior approach for the unsupervised as well as the semi-supervised anomaly detection task, compared to well-known state-of-the-art anomaly detection methods, LOF and one-class support vector machines, and to an existing feature-modeling approach. PMID:22639542

  2. Sensor Anomaly Detection in Wireless Sensor Networks for Healthcare

    PubMed Central

    Haque, Shah Ahsanul; Rahman, Mustafizur; Aziz, Syed Mahfuzul

    2015-01-01

    Wireless Sensor Networks (WSN) are vulnerable to various sensor faults and faulty measurements. This vulnerability hinders efficient and timely response in various WSN applications, such as healthcare. For example, faulty measurements can create false alarms which may require unnecessary intervention from healthcare personnel. Therefore, an approach to differentiate between real medical conditions and false alarms will improve remote patient monitoring systems and quality of healthcare service afforded by WSN. In this paper, a novel approach is proposed to detect sensor anomaly by analyzing collected physiological data from medical sensors. The objective of this method is to effectively distinguish false alarms from true alarms. It predicts a sensor value from historic values and compares it with the actual sensed value for a particular instance. The difference is compared against a threshold value, which is dynamically adjusted, to ascertain whether the sensor value is anomalous. The proposed approach has been applied to real healthcare datasets and compared with existing approaches. Experimental results demonstrate the effectiveness of the proposed system, providing high Detection Rate (DR) and low False Positive Rate (FPR). PMID:25884786
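
    The predict-and-compare scheme with a dynamically adjusted threshold can be sketched as follows. The moving-average predictor and the mean-plus-k-sigma threshold rule are simplified assumptions standing in for the paper's specific predictor and adaptation rule.

      import numpy as np

      def detect_sensor_anomalies(values, window=10, k=3.0):
          # Predict each sample from its recent history (simple moving average),
          # then compare the prediction error to a threshold that adapts to the
          # running spread of past errors.
          values = np.asarray(values, dtype=float)
          flags = np.zeros(len(values), dtype=bool)
          errors = []
          for i in range(window, len(values)):
              pred = values[i - window:i].mean()
              err = abs(values[i] - pred)
              if len(errors) >= window:
                  thresh = np.mean(errors) + k * np.std(errors)
                  flags[i] = err > thresh
              errors.append(err)
          return flags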

  3. Online anomaly detection in crowd scenes via structure analysis.

    PubMed

    Yuan, Yuan; Fang, Jianwu; Wang, Qi

    2015-03-01

    Abnormal behavior detection in crowd scenes is a continuing challenge in the field of computer vision. To tackle this problem, this paper starts from a novel structural modeling of crowd behavior. We first propose an informative structural context descriptor (SCD) for describing a crowd individual, which originally introduces the potential energy function of a particle's interforce in solid-state physics to intuitively conduct visual contextual cueing. For computing the crowd SCD variation effectively, we then design a robust multi-object tracker to associate the targets in different frames, which employs the incremental analytical ability of the 3-D discrete cosine transform (DCT). By online spatial-temporal analysis of the SCD variation of the crowd, the abnormality is finally localized. Our contribution lies mainly in three aspects: 1) the new exploration of abnormality detection from structure modeling, where the motion difference between individuals is computed by a novel selective histogram of optical flow, which enables the proposed method to deal with more kinds of anomalies; 2) the SCD description that can effectively represent the relationship among the individuals; and 3) the 3-D DCT multi-object tracker that can robustly associate a limited number of targets (instead of all), which makes tracking analysis in high-density crowd situations feasible. Experimental results on several publicly available crowd video datasets verify the effectiveness of the proposed method. PMID:24988603

  4. Sequential detection of a weak target in a hostile ocean environment

    SciTech Connect

    Candy, J V; Sullivan, E J

    2005-03-14

    When the underlying physical phenomenology (medium, sediment, bottom, etc.) is space-time varying, with corresponding nonstationary statistics characterizing the noise and uncertainties, sequential methods must be applied to capture the underlying processes. Sequential detection and estimation techniques offer distinct advantages over batch methods. A reasonable signal processing approach to this class of problem is to employ adaptive or parametrically adaptive signal and noise models to capture these phenomena. In this paper, we develop a sequential approach to solve the signal detection problem in a nonstationary environment.

  5. Fault detection on a sewer network by a combination of a Kalman filter and a binary sequential probability ratio test

    NASA Astrophysics Data System (ADS)

    Piatyszek, E.; Voignier, P.; Graillot, D.

    2000-05-01

    One of the aims of sewer networks is the protection of the population against floods and the reduction of pollution discharged to the receiving water during rainy events. To meet these goals, managers have to instrument the sewer networks and set up real-time control systems. Unfortunately, a component fault (leading to intolerable behaviour of the system) or a sensor fault (deteriorating the process view and disturbing the local automatism) makes sewer network supervision delicate. In order to ensure adequate flow management during rainy events, it is essential to set up procedures capable of detecting and diagnosing these anomalies. This article introduces a real-time fault detection method, applicable to sewer networks, for the follow-up of rainy events. The method consists in comparing the sensor response with a forecast of this response. This forecast is provided by a model, and more precisely by a state estimator: a Kalman filter. The Kalman filter provides not only a flow estimate but also an entity called the 'innovation'. In order to detect abnormal operations within the network, this innovation is analysed with the binary sequential probability ratio test of Wald. Moreover, by cross-checking available information at several nodes of the network, a diagnosis of the detected anomalies is carried out. This method provided encouraging results during the analysis of several rain events on the sewer network of Seine-Saint-Denis County, France.
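
    A scalar sketch of the two building blocks is given below: a linear Kalman filter that outputs the innovation sequence, and Wald's binary SPRT applied to the normalized innovations. The state model, shift size, and error probabilities are illustrative assumptions, not the values used for the Seine-Saint-Denis network.

      import numpy as np

      def innovations(z, x0, P0, F, H, Q, R):
          # Scalar linear Kalman filter; returns the innovation (measurement minus
          # one-step prediction) and its variance at every time step.
          x, P = x0, P0
          nu, S = [], []
          for zk in z:
              x, P = F * x, F * P * F + Q            # predict
              nu_k, S_k = zk - H * x, H * P * H + R  # innovation and its variance
              K = P * H / S_k                        # Kalman gain
              x, P = x + K * nu_k, (1 - K * H) * P   # update
              nu.append(nu_k)
              S.append(S_k)
          return np.array(nu), np.array(S)

      def wald_sprt(nu, S, shift=2.0, alpha=0.01, beta=0.01):
          # Binary SPRT: H0 = zero-mean normalized innovations (normal operation),
          # H1 = mean shifted by `shift` standard deviations (anomaly).
          A, B = np.log((1 - beta) / alpha), np.log(beta / (1 - alpha))
          llr = 0.0
          for k in range(len(nu)):
              r = nu[k] / np.sqrt(S[k])
              llr += shift * r - 0.5 * shift ** 2    # Gaussian log-likelihood ratio increment
              if llr >= A:
                  return 'anomaly', k
              if llr <= B:
                  return 'normal', k                 # in continuous monitoring the test is then restarted
          return 'undecided', len(nu)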

  6. A Bayesian group sequential approach to safety signal detection.

    PubMed

    Chen, Wenfeng; Zhao, Naiqing; Qin, Guoyou; Chen, Jie

    2013-01-01

    Clinical safety data, usually reported as clinically manifested adverse events (AEs) according to the Medical Dictionary for Regulatory Activities (MedDRA), are routinely collected during the course of a clinical trial involving comparative groups, and periodic monitoring of the safety events is often required to determine whether excessive occurrence of a set of AEs is associated with treatment. To accommodate the structure of reported AEs within the MedDRA system, a Bayesian hierarchical model has been proposed for the analysis of clinical safety data. However, the characteristics of sequential use of the Bayesian method have not been studied. In this paper the Bayesian hierarchical model is applied in a group sequential manner for multiple interim analyses of safety events. A decision-theoretic approach is employed to determine threshold values in the safety signaling process. The proposed approach is illustrated through simulations and a real example. PMID:23331232

  7. Study of Southern Tyrrhenian and Sicilian regions by a sequential procedure to integrate WAM seismic tomographies and Bouguer anomaly data

    NASA Astrophysics Data System (ADS)

    Panepinto, S.; Calo, M. M.; Luzio, D.; Dorbath, C.

    2009-12-01

    A procedure to obtain 3D velocity-density models and earthquake relocations by integrated inversion of P and S wave traveltimes and Bouguer anomaly distribution was applied to a large dataset concerning the Southern Tyrrhenian and Sicilian areas. The seismic dataset was subdivided into two subsets for separate inversions, whose results were later joined by the WAM (Weighted Average Model) technique. This is a post-processing technique, proposed by Calò et al. (2009), by which preliminary tomographic models are unified in a common 3D grid. The first dataset comprises 28873 P and 9990 S arrival times of 1800 earthquakes located in the area 14°30′ E - 17°E, 37°N - 41°N, while the second dataset contains 31250 P and 13588 S arrival times of 1951 events located in the area 11° E - 15°48′ E, 36°30′N - 39°N. The selected events were recorded by at least 10 stations in the period 1981-2005 and are marked by RMS < 0.50 s. The second dataset was integrated with P-wave traveltimes picked on several seismic profiles carried out in the study region. The Bouguer anomaly measurements were interpolated at the nodes of an 8x8 km regular grid covering the area 12° E - 16°01′ E, 36°13′ N - 38°31′ N. The proposed procedure makes it possible to invert seismic and gravimetric data with a sequential technique, avoiding the problematic optimization of the relative weights to assign to the different types of data. A first WAM provides preliminary Vp, Vs and Vp/Vs models and a first hypocentral relocation. Since the obtained Vs model seems poorly constrained by the S wave arrival times, the Vp model is converted into a new Vs model, through a Vs-Vp correlation law proposed by T.M. Brocher (2005), and used, jointly with the Vp model, as input for a second WAM. The results of this second step are used to derive, by the empirical Brocher's equations, two density distributions associated with the Vp and Vs models. These density models are statistically compared and the distribution of

  8. SCADA Protocol Anomaly Detection Utilizing Compression (SPADUC) 2013

    SciTech Connect

    Gordon Rueff; Lyle Roybal; Denis Vollmer

    2013-01-01

    There is a significant need to protect the nation’s energy infrastructures from malicious actors using cyber methods. Supervisory Control and Data Acquisition (SCADA) systems may be vulnerable due to the insufficient security implemented during the design and deployment of these control systems. This is particularly true for older legacy SCADA systems that are still commonly in use. The purpose of INL’s research on the SCADA Protocol Anomaly Detection Utilizing Compression (SPADUC) project was to determine if and how data compression techniques could be used to identify and protect SCADA systems from cyber attacks. Initially, the concept was centered on how to train a compression algorithm to recognize normal control system traffic versus hostile network traffic. Because large portions of the TCP/IP message traffic (called packets) are repetitive, the concept of using compression techniques to differentiate “non-normal” traffic was proposed. In this manner, malicious SCADA traffic could be identified at the packet level before its payload is delivered. Previous research has shown that SCADA network traffic has traits desirable for compression analysis. This work investigated three different approaches to identifying malicious SCADA network traffic using compression techniques. The preliminary analyses and results presented herein clearly differentiate normal from malicious network traffic at the packet level, at a very high confidence level for the conditions tested. Additionally, the master dictionary approach used in this research initially appears to provide a meaningful way to categorize and compare packets within a communication channel.
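
    A minimal sketch of the compression idea, not the INL implementation: the "master dictionary" of normal traffic is approximated by a concatenated corpus of normal packets, and a new packet is scored by the extra bytes it costs to compress together with that corpus. The packet contents and scoring threshold are assumptions.

```python
# Illustrative compression-distance scoring of packets against a corpus of
# normal traffic; packet contents and the decision threshold are made up.
import zlib

def build_baseline(normal_packets):
    corpus = b"".join(normal_packets)
    return corpus, len(zlib.compress(corpus, 9))

def anomaly_score(packet, corpus, corpus_size):
    """Extra compressed bytes per payload byte when appended to the corpus."""
    joint = len(zlib.compress(corpus + packet, 9))
    return (joint - corpus_size) / max(len(packet), 1)

normal = [b"READ coil=12 value=0;" * 4 for _ in range(50)]
corpus, base_size = build_baseline(normal)

benign = b"READ coil=12 value=1;" * 4
hostile = bytes(range(90, 170))            # stand-in for injected traffic
for name, pkt in [("benign", benign), ("hostile", hostile)]:
    print(name, round(anomaly_score(pkt, corpus, base_size), 3))
```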

  9. A Comparative Evaluation of Unsupervised Anomaly Detection Algorithms for Multivariate Data.

    PubMed

    Goldstein, Markus; Uchida, Seiichi

    2016-01-01

    Anomaly detection is the process of identifying unexpected items or events in datasets, which differ from the norm. In contrast to standard classification tasks, anomaly detection is often applied to unlabeled data, taking only the internal structure of the dataset into account. This challenge is known as unsupervised anomaly detection and is addressed in many practical applications, for example in network intrusion detection, fraud detection, as well as in the life science and medical domains. Dozens of algorithms have been proposed in this area, but unfortunately the research community still lacks a comparative universal evaluation as well as common publicly available datasets. These shortcomings are addressed in this study, where 19 different unsupervised anomaly detection algorithms are evaluated on 10 different datasets from multiple application domains. By publishing the source code and the datasets, this paper aims to be a new well-founded basis for unsupervised anomaly detection research. Additionally, this evaluation reveals the strengths and weaknesses of the different approaches for the first time. Besides the anomaly detection performance, the computational effort, the impact of parameter settings, and the global/local anomaly detection behavior are outlined. As a conclusion, we give advice on algorithm selection for typical real-world tasks. PMID:27093601

  10. A Comparative Evaluation of Unsupervised Anomaly Detection Algorithms for Multivariate Data

    PubMed Central

    Goldstein, Markus; Uchida, Seiichi

    2016-01-01

    Anomaly detection is the process of identifying unexpected items or events in datasets, which differ from the norm. In contrast to standard classification tasks, anomaly detection is often applied to unlabeled data, taking only the internal structure of the dataset into account. This challenge is known as unsupervised anomaly detection and is addressed in many practical applications, for example in network intrusion detection, fraud detection, as well as in the life science and medical domains. Dozens of algorithms have been proposed in this area, but unfortunately the research community still lacks a comparative universal evaluation as well as common publicly available datasets. These shortcomings are addressed in this study, where 19 different unsupervised anomaly detection algorithms are evaluated on 10 different datasets from multiple application domains. By publishing the source code and the datasets, this paper aims to be a new well-founded basis for unsupervised anomaly detection research. Additionally, this evaluation reveals the strengths and weaknesses of the different approaches for the first time. Besides the anomaly detection performance, the computational effort, the impact of parameter settings, and the global/local anomaly detection behavior are outlined. As a conclusion, we give advice on algorithm selection for typical real-world tasks. PMID:27093601
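
    For readers who want a hands-on starting point, the sketch below scores three widely used unsupervised detectors from scikit-learn with ROC AUC on a toy dataset with injected outliers. It is not the benchmark of the paper; the dataset, parameters and metric choice are assumptions.

```python
# Toy comparison of three unsupervised detectors from scikit-learn; higher
# score = more anomalous, quality summarised by ROC AUC against known labels.
import numpy as np
from sklearn.ensemble import IsolationForest
from sklearn.metrics import roc_auc_score
from sklearn.neighbors import LocalOutlierFactor
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)
normal = rng.normal(0.0, 1.0, size=(500, 5))
outliers = rng.uniform(-6.0, 6.0, size=(25, 5))
X = np.vstack([normal, outliers])
y = np.r_[np.zeros(len(normal)), np.ones(len(outliers))]   # 1 = anomaly

iforest = IsolationForest(random_state=0).fit(X)
ocsvm = OneClassSVM(nu=0.05, gamma="scale").fit(X)
lof = LocalOutlierFactor(n_neighbors=20).fit(X)

scores = {
    "IsolationForest": -iforest.decision_function(X),
    "OneClassSVM": -ocsvm.decision_function(X),
    "LOF": -lof.negative_outlier_factor_,
}
for name, s in scores.items():
    print(f"{name:16s} ROC AUC = {roc_auc_score(y, s):.3f}")
```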

  11. Remote detection of geobotanical anomalies associated with hydrocarbon microseepage

    NASA Technical Reports Server (NTRS)

    Rock, B. N.

    1985-01-01

    As part of the continuing study of the Lost River, West Virginia NASA/Geosat Test Case Site, an extensive soil gas survey of the site was conducted during the summer of 1983. This soil gas survey has identified an order of magnitude methane, ethane, propane, and butane anomaly that is precisely coincident with the linear maple anomaly reported previously. This and other maple anomalies were previously suggested to be indicative of anaerobic soil conditions associated with hydrocarbon microseepage. In vitro studies support the view that anomalous distributions of native tree species tolerant of anaerobic soil conditions may be useful indicators of methane microseepage in heavily vegetated areas of the United States characterized by deciduous forest cover. Remote sensing systems which allow discrimination and mapping of native tree species and/or species associations will provide the exploration community with a means of identifying vegetation distributional anomalies indicative of microseepage.

  12. Accumulating pyramid spatial-spectral collaborative coding divergence for hyperspectral anomaly detection

    NASA Astrophysics Data System (ADS)

    Sun, Hao; Zou, Huanxin; Zhou, Shilin

    2016-03-01

    Detection of anomalous targets of various sizes in hyperspectral data has received a lot of attention in reconnaissance and surveillance applications. Many anomaly detectors have been proposed in the literature. However, current methods are susceptible to anomalies within the processing window range and often make critical assumptions about the distribution of the background data. Motivated by the fact that anomaly pixels are often distinctive from their local background, in this letter we propose a novel hyperspectral anomaly detection framework for real-time remote sensing applications. The proposed framework consists of four major components: sparse feature learning, pyramid grid window selection, joint spatial-spectral collaborative coding, and multi-level divergence fusion. It exploits the collaborative representation difference in the feature space to locate potential anomalies and is totally unsupervised, without any prior assumptions. Experimental results on airborne hyperspectral data demonstrate that the proposed method adapts to anomalies over a large range of sizes and is well suited for parallel processing.

  13. Detecting Distributed Network Traffic Anomaly with Network-Wide Correlation Analysis

    NASA Astrophysics Data System (ADS)

    Zonglin, Li; Guangmin, Hu; Xingmiao, Yao; Dan, Yang

    2008-12-01

    A distributed network traffic anomaly is an abnormal traffic behavior that involves many links of a network and is caused by the same source (e.g., a DDoS attack or worm propagation). The anomaly transiting a single link might be unnoticeable and hard to detect, while the aggregate over many links can be prominent and does more harm to the network. Exploiting the similar features that a distributed anomaly exhibits on many links, this paper proposes a network-wide detection method that performs correlation analysis of the instantaneous parameters of traffic signals. In our method, the instantaneous parameters of the traffic signals are first computed, and their network-wide anomalous space is then extracted via traffic prediction. Finally, an anomaly is detected using a global correlation coefficient of the anomalous space. Our evaluation using Abilene traffic traces demonstrates the excellent performance of this approach for distributed traffic anomaly detection.

  14. Anomalies in the detection of change: When changes in sample size are mistaken for changes in proportions.

    PubMed

    Fiedler, Klaus; Kareev, Yaakov; Avrahami, Judith; Beier, Susanne; Kutzner, Florian; Hütter, Mandy

    2016-01-01

    Detecting changes, in performance, sales, markets, risks, social relations, or public opinions, constitutes an important adaptive function. In a sequential paradigm devised to investigate detection of change, every trial provides a sample of binary outcomes (e.g., correct vs. incorrect student responses). Participants have to decide whether the proportion of a focal feature (e.g., correct responses) in the population from which the sample is drawn has decreased, remained constant, or increased. Strong and persistent anomalies in change detection arise when changes in proportional quantities vary orthogonally to changes in absolute sample size. Proportional increases are readily detected and nonchanges are erroneously perceived as increases when absolute sample size increases. Conversely, decreasing sample size facilitates the correct detection of proportional decreases and the erroneous perception of nonchanges as decreases. These anomalies are however confined to experienced samples of elementary raw events from which proportions have to be inferred inductively. They disappear when sample proportions are described as percentages in a normalized probability format. To explain these challenging findings, it is essential to understand the inductive-learning constraints imposed on decisions from experience. PMID:26179055

  15. Inverse sequential detection of parameter changes in developing time series

    NASA Technical Reports Server (NTRS)

    Radok, Uwe; Brown, Timothy J.

    1992-01-01

    Progressive values of two probabilities are obtained for parameter estimates derived from an existing set of values and from the same set enlarged by one or more new values, respectively. One probability is that of erroneously preferring the second of these estimates for the existing data ('type 1 error'), while the second probability is that of erroneously accepting the estimates for the enlarged set ('type 2 error'). A more stable combined 'no change' probability, which always falls between 0.5 and 0, is derived from the (logarithmic) width of the uncertainty region of an equivalent 'inverted' sequential probability ratio test (SPRT, Wald 1945) in which the error probabilities are calculated rather than prescribed. A parameter change is indicated when the combined probability undergoes a progressive decrease. The test is explicitly formulated and exemplified for Gaussian samples.

  16. Gaussian Process Regression-Based Video Anomaly Detection and Localization With Hierarchical Feature Representation.

    PubMed

    Cheng, Kai-Wen; Chen, Yie-Tarng; Fang, Wen-Hsien

    2015-12-01

    This paper presents a hierarchical framework for detecting local and global anomalies via hierarchical feature representation and Gaussian process regression (GPR) which is fully non-parametric and robust to the noisy training data, and supports sparse features. While most research on anomaly detection has focused more on detecting local anomalies, we are more interested in global anomalies that involve multiple normal events interacting in an unusual manner, such as car accidents. To simultaneously detect local and global anomalies, we cast the extraction of normal interactions from the training videos as a problem of finding the frequent geometric relations of the nearby sparse spatio-temporal interest points (STIPs). A codebook of interaction templates is then constructed and modeled using the GPR, based on which a novel inference method for computing the likelihood of an observed interaction is also developed. Thereafter, these local likelihood scores are integrated into globally consistent anomaly masks, from which anomalies can be succinctly identified. To the best of our knowledge, it is the first time GPR is employed to model the relationship of the nearby STIPs for anomaly detection. Simulations based on four widespread datasets show that the new method outperforms the main state-of-the-art methods with lower computational burden. PMID:26394423

  17. Numerical study on the sequential Bayesian approach for radioactive materials detection

    NASA Astrophysics Data System (ADS)

    Qingpei, Xiang; Dongfeng, Tian; Jianyu, Zhu; Fanhua, Hao; Ge, Ding; Jun, Zeng

    2013-01-01

    A new detection method, based on the sequential Bayesian approach proposed by Candy et al., offers new horizons for research on radioactive material detection. Compared with the commonly adopted detection methods based on classical statistical theory, the sequential Bayesian approach offers the advantage of a shorter verification time when analysing spectra that contain low total counts, especially for complex radionuclide compositions. In this paper, a simulation experiment platform implementing the methodology of the sequential Bayesian approach was developed. Event sequences of γ-rays associated with the true parameters of a LaBr3(Ce) detector were obtained from an event-sequence generator based on Monte Carlo sampling to study the performance of the sequential Bayesian approach. The numerical experimental results are in accordance with those of Candy. Moreover, the relationship between the detection model and the event generator, represented respectively by the expected detection rate (Am) and the tested detection rate (Gm) parameters, is investigated. To achieve optimal performance for this processor, the interval of the tested detection rate as a function of the expected detection rate is also presented.

  18. Aircraft Anomaly Detection Using Performance Models Trained on Fleet Data

    NASA Technical Reports Server (NTRS)

    Gorinevsky, Dimitry; Matthews, Bryan L.; Martin, Rodney

    2012-01-01

    This paper describes an application of data mining technology called Distributed Fleet Monitoring (DFM) to Flight Operational Quality Assurance (FOQA) data collected from a fleet of commercial aircraft. DFM transforms the data into aircraft performance models, flight-to-flight trends, and individual flight anomalies by fitting a multi-level regression model to the data. The model represents aircraft flight performance and takes into account fixed effects: flight-to-flight and vehicle-to-vehicle variability. The regression parameters include aerodynamic coefficients and other aircraft performance parameters that are usually identified by aircraft manufacturers in flight tests. Using DFM, a multi-terabyte FOQA data set with half a million flights was processed in a few hours. The anomalies found include wrong values of computed variables (e.g., aircraft weight), sensor failures and biases, and failures, biases, and trends in flight actuators. These anomalies were missed by the existing airline monitoring of FOQA data exceedances.
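
    The multi-level regression idea can be caricatured in a few lines: fit a pooled performance regression with per-aircraft offsets and flag flights whose residuals are extreme. This is illustrative only and not NASA's DFM; the variables, model form and threshold are assumptions.

```python
# Toy fleet-level regression: per-aircraft offsets plus shared covariates,
# flights flagged when the studentised residual is extreme (threshold assumed).
import numpy as np
import pandas as pd

rng = np.random.default_rng(4)
n = 400
df = pd.DataFrame({
    "aircraft": rng.integers(0, 8, n).astype(str),
    "weight": rng.normal(60_000, 5_000, n),
    "mach": rng.normal(0.78, 0.02, n),
})
df["fuel_flow"] = 0.02 * df["weight"] + 900 * df["mach"] + rng.normal(0, 30, n)
df.loc[5, "fuel_flow"] += 400                      # injected anomalous flight

X = pd.get_dummies(df[["aircraft"]]).join(df[["weight", "mach"]]).to_numpy(float)
beta, *_ = np.linalg.lstsq(X, df["fuel_flow"].to_numpy(), rcond=None)
resid = df["fuel_flow"].to_numpy() - X @ beta
z = (resid - resid.mean()) / resid.std()
print("anomalous flight indices:", np.flatnonzero(np.abs(z) > 4))
```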

  19. Detecting Anomaly Regions in Satellite Image Time Series Based on Seasonal Autocorrelation Analysis

    NASA Astrophysics Data System (ADS)

    Zhou, Z.-G.; Tang, P.; Zhou, M.

    2016-06-01

    Anomaly regions in satellite images can reflect unexpected changes of land cover caused by floods, fires, landslides, etc. Detecting anomaly regions in satellite image time series is important for studying the dynamic processes of land cover change as well as for disaster monitoring. Although several methods have been developed to detect land cover changes using satellite image time series, they are generally designed for detecting inter-annual or abrupt land cover changes, not for detecting spatial-temporal changes in continuous images. In order to identify the spatial-temporal dynamic processes of unexpected land cover changes, this study proposes a method for detecting anomaly regions in each image of a satellite image time series based on seasonal autocorrelation analysis. The method was validated with a case study detecting the spatial-temporal process of a severe flood using Terra/MODIS image time series. Experiments demonstrated the advantages of the method: (1) it can effectively detect anomaly regions in each image of a satellite image time series, showing the spatially and temporally varying extent of the anomaly regions; (2) it is flexible enough to meet different requirements (e.g., z-value or significance level) on detection accuracy, with overall accuracy up to 89% and precision above 90%; and (3) it does not need time series smoothing and can detect anomaly regions in noisy satellite images with high reliability.
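
    A minimal sketch of the underlying seasonal-comparison idea (not the authors' autocorrelation algorithm): each pixel in a given month is compared against the mean and standard deviation of the same calendar month in the other years, and pixels with |z| above an assumed threshold are flagged.

```python
# Per-pixel, same-calendar-month z-score over a stack of years; a flood scar
# is injected in one month and flagged when |z| exceeds an assumed threshold.
import numpy as np

rng = np.random.default_rng(5)
stack = rng.normal(0.4, 0.05, size=(10, 12, 50, 50))   # years x months x rows x cols
stack[7, 6, 20:30, 20:30] = 0.05                        # simulated flood in July, year 7

year, month = 7, 6
others = np.delete(stack[:, month], year, axis=0)       # same month, other years
z = (stack[year, month] - others.mean(axis=0)) / others.std(axis=0)
anomaly_mask = np.abs(z) > 3.0
print("flagged pixels:", int(anomaly_mask.sum()))       # roughly the 100 flood pixels
```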

  20. Energy Detection Based on Undecimated Discrete Wavelet Transform and Its Application in Magnetic Anomaly Detection

    PubMed Central

    Nie, Xinhua; Pan, Zhongming; Zhang, Dasha; Zhou, Han; Chen, Min; Zhang, Wenna

    2014-01-01

    Magnetic anomaly detection (MAD) is a passive approach for the detection of a ferromagnetic target, and its performance is often limited by external noise. Considering that one major noise source is fractal noise (also called 1/f noise) with a power spectral density of the form 1/f^α (0 < α < 2), an energy detection method based on the undecimated discrete wavelet transform (UDWT) is proposed in this paper. First, the foundations of magnetic anomaly detection and the UDWT are briefly introduced, and a possible detection system based on a giant magneto-impedance (GMI) magnetic sensor is also described. Then the proposed UDWT-based energy detection is described in detail, and the theoretical probabilities of false alarm and detection for a given detection threshold are presented. Notably, no a priori assumptions regarding the ferromagnetic target or the magnetic noise probability are necessary for our method, and, unlike the discrete wavelet transform (DWT), the UDWT is shift invariant. Finally, simulations are performed and the results show that the detection performance of the proposed detector is better than that of the conventional energy detector even in Gaussian white noise, especially when the spectral parameter α is less than 1.0. In addition, a real-world experiment was done to demonstrate the advantages of the proposed method. PMID:25343484
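
    Assuming PyWavelets is available, the sketch below illustrates the core idea: decompose the signal with the stationary (undecimated) wavelet transform, sum the detail-band energy in a sliding window, and compare it against a threshold estimated from target-free data. The signal, noise stand-in, wavelet and threshold are assumptions, not the paper's detector or its GMI sensor model.

```python
# Energy detection on stationary-wavelet (undecimated DWT) detail bands,
# using PyWavelets; signal, noise stand-in and threshold are illustrative.
import numpy as np
import pywt

rng = np.random.default_rng(6)
n = 1024
noise = 0.02 * np.cumsum(rng.normal(0, 1, n))        # crude coloured-noise stand-in
signal = noise.copy()
t = np.arange(40)
signal[500:540] += 0.5 * np.sin(2 * np.pi * t / 8)   # simulated target transient

def detail_energy(x, wavelet="db4", level=3, win=64):
    coeffs = pywt.swt(x, wavelet, level=level)       # undecimated DWT, shift invariant
    detail = np.sum([cd ** 2 for _, cd in coeffs], axis=0)
    return np.convolve(detail, np.ones(win) / win, mode="same")

e_noise = detail_energy(noise)                       # target-free reference
threshold = e_noise.mean() + 5.0 * e_noise.std()     # assumed false-alarm setting
hits = np.flatnonzero(detail_energy(signal) > threshold)
print("detection window:", hits.min(), "-", hits.max())
```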

  1. Lunar magnetic anomalies detected by the Apollo subsatellite magnetometers

    NASA Technical Reports Server (NTRS)

    Hood, L. L.; Coleman, P. J., Jr.; Russell, C. T.; Wilhelms, D. E.

    1979-01-01

    Properties of lunar crustal magnetization thus far deduced from Apollo subsatellite magnetometer data are reviewed using two of the most accurate available magnetic anomaly maps, one covering a portion of the lunar near side and the other a part of the far side. The largest single anomaly found within the region of coverage on the near-side map correlates exactly with a conspicuous light-colored marking in western Oceanus Procellarum called Reiner Gamma. This feature is interpreted as an unusual deposit of ejecta from secondary craters of the large nearby primary impact crater Cavalerius. The mean altitude of the far-side anomaly gap is much higher than that of the near side map and the surface geology is more complex; individual anomaly sources have therefore not yet been identified. The mechanism of magnetization and the origin of the magnetizing field remain unresolved, but the uniformity with which the Reiner Gamma deposit is apparently magnetized, and the north-south depletion of magnetization intensity across a substantial portion of the far side, seem to require the existence of an ambient field, perhaps of global or larger extent.

  2. A novel approach for detection of anomalies using measurement data of the Ironton-Russell bridge

    NASA Astrophysics Data System (ADS)

    Zhang, Fan; Norouzi, Mehdi; Hunt, Victor; Helmicki, Arthur

    2015-04-01

    Data models have been used increasingly in recent years to document the normal behavior of structures and hence detect and classify anomalies. A large number of machine learning algorithms have been proposed by various researchers to model operational and functional changes in structures; however, only a limited number of studies have been applied to actual measurement data, due to limited access to long-term measurement data of structures and lack of access to the damaged states of structures. By monitoring the structure during construction and reviewing the effect of construction events on the measurement data, this study introduces a new approach to detect, and eventually classify, anomalies during and after construction. First, the implementation of the sensor network, which develops as the bridge is built, and its current status are detailed. Second, the proposed anomaly detection algorithm is applied to the collected data, and finally, the detected anomalies are validated against archived construction events.

  3. Detection of nucleic acids by multiple sequential invasive cleavages

    DOEpatents

    Hall, Jeff G.; Lyamichev, Victor I.; Mast, Andrea L.; Brow, Mary Ann D.

    1999-01-01

    The present invention relates to means for the detection and characterization of nucleic acid sequences, as well as variations in nucleic acid sequences. The present invention also relates to methods for forming a nucleic acid cleavage structure on a target sequence and cleaving the nucleic acid cleavage structure in a site-specific manner. The structure-specific nuclease activity of a variety of enzymes is used to cleave the target-dependent cleavage structure, thereby indicating the presence of specific nucleic acid sequences or specific variations thereof. The present invention further relates to methods and devices for the separation of nucleic acid molecules based on charge. The present invention also provides methods for the detection of non-target cleavage products via the formation of a complete and activated protein binding region. The invention further provides sensitive and specific methods for the detection of human cytomegalovirus nucleic acid in a sample.

  4. Detection of nucleic acids by multiple sequential invasive cleavages 02

    DOEpatents

    Hall, Jeff G.; Lyamichev, Victor I.; Mast, Andrea L.; Brow, Mary Ann D.

    2002-01-01

    The present invention relates to means for the detection and characterization of nucleic acid sequences, as well as variations in nucleic acid sequences. The present invention also relates to methods for forming a nucleic acid cleavage structure on a target sequence and cleaving the nucleic acid cleavage structure in a site-specific manner. The structure-specific nuclease activity of a variety of enzymes is used to cleave the target-dependent cleavage structure, thereby indicating the presence of specific nucleic acid sequences or specific variations thereof. The present invention further relates to methods and devices for the separation of nucleic acid molecules based on charge. The present invention also provides methods for the detection of non-target cleavage products via the formation of a complete and activated protein binding region. The invention further provides sensitive and specific methods for the detection of human cytomegalovirus nucleic acid in a sample.

  5. Detection of nucleic acids by multiple sequential invasive cleavages

    DOEpatents

    Hall, Jeff G; Lyamichev, Victor I; Mast, Andrea L; Brow, Mary Ann D

    2012-10-16

    The present invention relates to means for the detection and characterization of nucleic acid sequences, as well as variations in nucleic acid sequences. The present invention also relates to methods for forming a nucleic acid cleavage structure on a target sequence and cleaving the nucleic acid cleavage structure in a site-specific manner. The structure-specific nuclease activity of a variety of enzymes is used to cleave the target-dependent cleavage structure, thereby indicating the presence of specific nucleic acid sequences or specific variations thereof. The present invention further relates to methods and devices for the separation of nucleic acid molecules based on charge. The present invention also provides methods for the detection of non-target cleavage products via the formation of a complete and activated protein binding region. The invention further provides sensitive and specific methods for the detection of human cytomegalovirus nucleic acid in a sample.

  6. Off-line experiments on radionuclide detection based on the sequential Bayesian approach

    NASA Astrophysics Data System (ADS)

    Qingpei, Xiang; Dongfeng, Tian; Fanhua, Hao; Ge, Ding; Jun, Zeng; Fei, Luo

    2013-11-01

    The sequential Bayesian approach proposed by Candy et al. for radioactive materials detection has aroused increasing interest in radiation detection research and is potentially a useful tool for prevention of the transportation of radioactive materials by terrorists. In our previous work, the performance of the sequential Bayesian approach was studied numerically through a simulation experiment platform. In this paper, a sequential Bayesian processor incorporating a LaBr3(Ce) detector, and using the energy, decay rate and emission probability of the radionuclide, is fully developed. Off-line experiments for the performance of the sequential Bayesian approach in radionuclide detection are developed by placing 60Co, 137Cs, 133Ba and 152Eu at various distances from the front face of the LaBr3(Ce) detector. The off-line experiment results agree well with the results of previous numerical experiments. The maximum detection distance is introduced to evaluate the processor's ability to detect radionuclides with a specific level of activity.

  7. Anomaly Detection in Multiple Scale for Insider Threat Analysis

    SciTech Connect

    Kim, Yoohwan; Sheldon, Frederick T; Hively, Lee M

    2012-01-01

    We propose a method to quantify malicious insider activity with statistical and graph-based analysis aided by semantic scoring rules. Different types of personal activities or interactions are monitored to form a set of directed weighted graphs. The semantic scoring rules assign higher scores to events that are more significant and suspicious. We then build personal activity profiles in the form of score tables. Profiles are created at multiple scales, where low-level profiles are aggregated toward more stable higher-level profiles within the subject or object hierarchy. Further, the profiles are created at different time scales such as day, week, or month. During operation, the insider's current activity profile is compared to the historical profiles to produce an anomaly score. For each subject with a high anomaly score, a subgraph of connected subjects is extracted to look for any related score movement. Finally, the subjects are ranked by their anomaly scores to help analysts focus on high-scoring subjects. The threat-ranking component supports the interaction between the User Dashboard and the Insider Threat Knowledge Base portal. The portal includes a repository for historical results, i.e., adjudicated cases containing all of the information first presented to the user and including any additional insights to help the analysts. In this paper we present the framework of the proposed system and the operational algorithms.

  8. Software Tool Support to Specify and Verify Scientific Sensor Data Properties to Improve Anomaly Detection

    NASA Astrophysics Data System (ADS)

    Gallegos, I.; Gates, A. Q.; Tweedie, C.; Cybershare

    2010-12-01

    Advancements in scientific sensor data acquisition technologies, such as wireless sensor networks and robotic trams equipped with sensors, are increasing the amount of data being collected at field sites. This elevates the challenge of verifying the quality of streamed data and monitoring the correct operation of the instrumentation. Without the ability to evaluate the data collection process in near real-time, scientists can lose valuable time and data. In addition, scientists have to rely on their knowledge and experience in the field to evaluate data quality. Such knowledge is rarely shared or reused by other scientists, mostly because of the lack of a well-defined methodology and tool support. Numerous scientific projects address anomaly detection, mostly as part of the verification system's source code; however, anomaly detection properties, which often are embedded or hard-coded in the source code, are difficult to refine. In addition, a software developer is required to modify the source code every time a new anomaly detection property or a modification to an existing property is needed. This poster describes the tool support that has been developed, based on software engineering techniques, to address these challenges. The overall tool support allows scientists to specify and reuse anomaly detection properties generated using the specification tool and to use the specified properties to conduct automated anomaly detection in near real-time. The anomaly detection mechanism is independent of the system used to collect the sensor data. With guidance provided by a classification and categorization of anomaly detection properties, the user specifies properties on scientific sensor data. The properties, which can be associated with particular field sites or instrumentation, document knowledge about data anomalies that otherwise would have limited availability to the scientific community.

  9. Lunar magnetic anomalies detected by the Apollo subsatellite magnetometers

    USGS Publications Warehouse

    Hood, L.L.; Coleman, P.J., Jr.; Russell, C.T.; Wilhelms, D.E.

    1979-01-01

    Properties of lunar crustal magnetization thus far deduced from Apollo subsatellite magnetometer data are reviewed using two of the most accurate presently available magnetic anomaly maps - one covering a portion of the lunar near side and the other a part of the far side. The largest single anomaly found within the region of coverage on the near-side map correlates exactly with a conspicuous, light-colored marking in western Oceanus Procellarum called Reiner Gamma. This feature is interpreted as an unusual deposit of ejecta from secondary craters of the large nearby primary impact crater Cavalerius. An age for Cavalerius (and, by implication, for Reiner Gamma) of 3.2 ± 0.2 × 10⁹ y is estimated. The main (30 × 60 km) Reiner Gamma deposit is nearly uniformly magnetized in a single direction, with a minimum mean magnetization intensity of ~7 × 10⁻² G cm³/g (assuming a density of 3 g/cm³), or about 700 times the stable magnetization component of the most magnetic returned samples. Additional medium-amplitude anomalies exist over the Fra Mauro Formation (Imbrium basin ejecta emplaced ~3.9 × 10⁹ y ago) where it has not been flooded by mare basalt flows, but are nearly absent over the maria and over the craters Copernicus, Kepler, and Reiner and their encircling ejecta mantles. The mean altitude of the far-side anomaly gap is much higher than that of the near-side map and the surface geology is more complex, so individual anomaly sources have not yet been identified. However, it is clear that a concentration of especially strong sources exists in the vicinity of the craters Van de Graaff and Aitken. Numerical modeling of the associated fields reveals that the source locations do not correspond with the larger primary impact craters of the region and, by analogy with Reiner Gamma, may be less conspicuous secondary crater ejecta deposits. The reason for a special concentration of strong sources in the Van de Graaff-Aitken region is unknown, but may be indirectly

  10. Sequential capillary electrophoresis analysis using optically gated sample injection and UV/vis detection.

    PubMed

    Liu, Xiaoxia; Tian, Miaomiao; Camara, Mohamed Amara; Guo, Liping; Yang, Li

    2015-10-01

    We present sequential CE analysis of amino acids and an L-asparaginase-catalyzed enzyme reaction, combining on-line derivatization, optically gated (OG) injection and commercially available UV-Vis detection. Various experimental conditions for sequential OG-UV/vis CE analysis were investigated and optimized by analyzing a standard mixture of amino acids. High reproducibility of the sequential CE analysis was demonstrated, with RSD values (n = 20) of 2.23, 2.57, and 0.70% for peak heights, peak areas, and migration times, respectively, and LODs of 5.0 μM (for asparagine) and 2.0 μM (for aspartic acid) were obtained. With the application of the OG-UV/vis CE analysis, a sequential online CE enzyme assay of the L-asparaginase-catalyzed reaction was carried out by automatically and continuously monitoring the substrate consumption and product formation every 12 s from the beginning to the end of the reaction. The Michaelis constants for the reaction were obtained and found to be in good agreement with the results of traditional off-line enzyme assays. The study demonstrates the feasibility and reliability of integrating OG injection with UV/vis detection for sequential online CE analysis, which could be of potential value for online monitoring of various chemical reactions and bioprocesses. PMID:26040711

  11. A hyperspectral imagery anomaly detection algorithm based on local three-dimensional orthogonal subspace projection

    NASA Astrophysics Data System (ADS)

    Zhang, Xing; Wen, Gongjian

    2015-10-01

    Anomaly detection (AD) is becoming increasingly important in hyperspectral imagery analysis, with many practical applications. The local orthogonal subspace projection (LOSP) detector is a popular anomaly detector which exploits local endmembers/eigenvectors around the pixel under test (PUT) to construct a background subspace. However, this subspace only takes advantage of the spectral information, while the spatial correlation of the background clutter is neglected, which makes the anomaly detection result sensitive to the accuracy of the estimated subspace. In this paper, a local three-dimensional orthogonal subspace projection (3D-LOSP) algorithm is proposed. First, jointly using both spectral and spatial information, three directional background subspaces are created along the image height direction, the image width direction and the spectral direction, respectively. Then, the three corresponding orthogonal subspaces are calculated. After that, each vector along the three directions of the local cube is projected onto the corresponding orthogonal subspace. Finally, a composite score is given through the three direction operators. In 3D-LOSP, anomalies are redefined as targets that are not only spectrally different from the background but also spatially distinct. Thanks to the addition of spatial information, the robustness of the anomaly detection result is greatly improved by the proposed 3D-LOSP algorithm. It is noteworthy that the proposed algorithm is an extension of LOSP and this idea can inspire many other spectral-based anomaly detection methods. Experiments with real hyperspectral images have proved the stability of the detection result.
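
    As a point of reference for the projection step, the sketch below computes a plain orthogonal subspace projection (OSP) residual on a local background window; the three-directional (height, width, spectral) extension of 3D-LOSP is omitted. Window size, subspace rank and the synthetic cube are assumptions.

```python
# Plain OSP residual on a local window: project out the top background
# principal directions and score the remaining energy of the test pixel.
import numpy as np

def osp_score(cube, row, col, rank=3, half=5):
    """Residual energy of pixel (row, col) after removing local background PCs."""
    rows, cols, bands = cube.shape
    win = cube[max(row - half, 0):row + half + 1,
               max(col - half, 0):col + half + 1, :].reshape(-1, bands)
    mean = win.mean(axis=0)
    _, _, vt = np.linalg.svd(win - mean, full_matrices=False)
    B = vt[:rank].T                                  # local background subspace
    P = np.eye(bands) - B @ B.T                      # orthogonal projector
    x = cube[row, col, :] - mean
    return float(x @ P @ x)

rng = np.random.default_rng(7)
cube = rng.normal(0, 1, (60, 60, 30)) * np.linspace(1.0, 2.0, 30)
cube[30, 30, :] += np.linspace(3.0, 0.0, 30)         # implanted spectral anomaly
print("anomaly pixel:", round(osp_score(cube, 30, 30), 1),
      "background pixel:", round(osp_score(cube, 10, 10), 1))
```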

  12. Multi-Level Anomaly Detection on Time-Varying Graph Data

    SciTech Connect

    Bridges, Robert A; Collins, John P; Ferragut, Erik M; Laska, Jason A; Sullivan, Blair D

    2015-01-01

    This work presents a novel modeling and analysis framework for graph sequences which addresses the challenge of detecting and contextualizing anomalies in labelled, streaming graph data. We introduce a generalization of the BTER model of Seshadhri et al. by adding flexibility to community structure, and use this model to perform multi-scale graph anomaly detection. Specifically, probability models describing coarse subgraphs are built by aggregating probabilities at finer levels, and these closely related hierarchical models simultaneously detect deviations from expectation. This technique provides insight into a graph's structure and internal context that may shed light on a detected event. Additionally, this multi-scale analysis facilitates intuitive visualizations by allowing users to narrow focus from an anomalous graph to particular subgraphs or nodes causing the anomaly. For evaluation, two hierarchical anomaly detectors are tested against a baseline Gaussian method on a series of sampled graphs. We demonstrate that our graph statistics-based approach outperforms both a distribution-based detector and the baseline in a labeled setting with community structure, and it accurately detects anomalies in synthetic and real-world datasets at the node, subgraph, and graph levels. To illustrate the accessibility of information made possible via this technique, the anomaly detector and an associated interactive visualization tool are tested on NCAA football data, where teams and conferences that moved within the league are identified with perfect recall, and precision greater than 0.786.

  13. Implementation of a General Real-Time Visual Anomaly Detection System Via Soft Computing

    NASA Technical Reports Server (NTRS)

    Dominguez, Jesus A.; Klinko, Steve; Ferrell, Bob; Steinrock, Todd (Technical Monitor)

    2001-01-01

    The intelligent visual system detects anomalies or defects in real time under normal lighting operating conditions. The application is basically a learning machine that integrates fuzzy logic (FL), artificial neural network (ANN), and genetic algorithm (GA) schemes to process the image, run the learning process, and finally detect the anomalies or defects. The system acquires the image, performs segmentation to separate the object being tested from the background, preprocesses the image using fuzzy reasoning, performs the final segmentation using fuzzy reasoning techniques to retrieve regions with potential anomalies or defects, and finally retrieves them using a learning model built via ANN and GA techniques. FL provides a powerful framework for knowledge representation and overcomes the uncertainty and vagueness typically found in image analysis. ANN provides learning capabilities, and GA leads to robust learning results. An application prototype currently runs on a regular PC under Windows NT, and preliminary work has been performed to build an embedded version with multiple image processors. The application prototype is being tested at the Kennedy Space Center (KSC), Florida, to visually detect anomalies along slide basket cables utilized by the astronauts to evacuate the NASA Shuttle launch pad in an emergency. The potential applications of this anomaly detection system in an open environment are quite wide. Another current, potentially viable application at NASA is in detecting anomalies of the NASA Space Shuttle Orbiter's radiator panels.

  14. Analyzing Global Climate System Using Graph Based Anomaly Detection

    NASA Astrophysics Data System (ADS)

    Das, K.; Agrawal, S.; Atluri, G.; Liess, S.; Steinbach, M.; Kumar, V.

    2014-12-01

    Climate networks have been studied for understanding complex relationships between different spatial locations, such as community structures and teleconnections. Analysis of time-evolving climate networks reveals changes that occur in those relationships over time and can provide insights for discovering new and complex climate phenomena. We have recently developed a novel data mining technique to discover anomalous relationships from dynamic climate networks. The algorithm efficiently identifies anomalous changes in relationships that cause significant structural changes in the climate network from one time instance to the next. Using this technique we investigated the presence of anomalies in precipitation networks that were constructed based on monthly averages of precipitation recorded at 0.5 degree resolution during the time period 1982 to 2002. The precipitation network consisted of 10-nearest-neighbor graphs for every month's data. Preliminary results on this data set indicate that we were able to discover several anomalies that have been verified to be related to, or the outcome of, well known climate phenomena. For instance, one such set of anomalies corresponds to the transition from January 1994 (normal conditions) to January 1995 (El Niño conditions) and includes events like the worst droughts of the 20th century in the Australian Plains, very high rainfall in the southeast Asian islands, and drought-like conditions in Peru, Chile, and eastern equatorial Africa during that time period. We plan to further apply our technique to networks constructed from different climate variables such as sea-level pressure, surface air temperature, wind velocity, 500 hPa geopotential height, etc., at different resolutions. Using this method we hope to develop deeper insights regarding the interactions of multiple climate variables globally over time, which might lead to the discovery of previously unknown climate phenomena involving heterogeneous data sources.

  15. Sequential detection of temporal communities by estrangement confinement.

    PubMed

    Kawadia, Vikas; Sreenivasan, Sameet

    2012-01-01

    Temporal communities are the result of a consistent partitioning of nodes across multiple snapshots of an evolving network, and they provide insights into how dense clusters in a network emerge, combine, split and decay over time. To reliably detect temporal communities we need to not only find a good community partition in a given snapshot but also ensure that it bears some similarity to the partition(s) found in the previous snapshot(s), a particularly difficult task given the extreme sensitivity of community structure yielded by current methods to changes in the network structure. Here, motivated by the inertia of inter-node relationships, we present a new measure of partition distance called estrangement, and show that constraining estrangement enables one to find meaningful temporal communities at various degrees of temporal smoothness in diverse real-world datasets. Estrangement confinement thus provides a principled approach to uncovering temporal communities in evolving networks. PMID:23145317

  16. Detection of anomaly in human retina using Laplacian Eigenmaps and vectorized matched filtering

    NASA Astrophysics Data System (ADS)

    Yacoubou Djima, Karamatou A.; Simonelli, Lucia D.; Cunningham, Denise; Czaja, Wojciech

    2015-03-01

    We present a novel method for automated anomaly detection on autofluorescence data provided by the National Institutes of Health (NIH). This work is motivated by the need for new tools to improve the capability of diagnosing macular degeneration in its early stages, tracking its progression over time, and testing the effectiveness of new treatment methods. In previous work, macular anomalies have been detected automatically through multiscale analysis procedures such as wavelet analysis, or through dimensionality reduction algorithms followed by a classification algorithm, e.g., a Support Vector Machine. The method that we propose is a Vectorized Matched Filtering (VMF) algorithm combined with Laplacian Eigenmaps (LE), a nonlinear dimensionality reduction algorithm with locality-preserving properties. By applying LE, we are able to represent the data in the form of eigenimages, some of which accentuate the visibility of anomalies. We pick significant eigenimages and proceed with the VMF algorithm, which classifies anomalies across all of these eigenimages simultaneously. To evaluate our performance, we compare our method to two other schemes: a matched filtering algorithm based on anomaly detection on single images, and a combination of PCA and VMF. LE combined with VMF performs best, yielding a high rate of accurate anomaly detection. This shows the advantage of using a nonlinear approach to represent the data and the effectiveness of VMF, which operates on the images as a data cube rather than as individual images.
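
    A rough sketch of the two stages, assuming scikit-learn's SpectralEmbedding as a stand-in for Laplacian Eigenmaps: patches are embedded nonlinearly and then scored against a matched-filter template. It is not the NIH retina pipeline; here the template is simply taken from known example patches for illustration.

```python
# Laplacian-Eigenmaps-style embedding (SpectralEmbedding) followed by a
# matched filter in the embedded space; data, template and sizes are made up.
import numpy as np
from sklearn.manifold import SpectralEmbedding

rng = np.random.default_rng(8)
patches = rng.normal(0, 1, (300, 64))              # flattened 8x8 image patches
patches[:15] += np.linspace(0.0, 2.0, 64)          # 15 "anomalous" patches

embedding = SpectralEmbedding(n_components=5, n_neighbors=10).fit_transform(patches)

template = embedding[:15].mean(axis=0)              # matched-filter template
template /= np.linalg.norm(template)
scores = embedding @ template                       # correlation with the template
print("top-scoring patch indices:", np.argsort(scores)[::-1][:5])
```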

  17. Effective Sensor Selection and Data Anomaly Detection for Condition Monitoring of Aircraft Engines.

    PubMed

    Liu, Liansheng; Liu, Datong; Zhang, Yujie; Peng, Yu

    2016-01-01

    In a complex system, condition monitoring (CM) can collect information on the system's working status. The condition is mainly sensed by sensors pre-deployed in or on the system. Most existing works study how to utilize the condition information to predict upcoming anomalies, faults, or failures. There is also some research that focuses on faults or anomalies of the sensing element (i.e., the sensor) to enhance system reliability. However, existing approaches ignore the correlation between the sensor selection strategy and data anomaly detection, which can also improve system reliability. To address this issue, we study a new scheme which includes a sensor selection strategy and data anomaly detection, utilizing information theory and Gaussian Process Regression (GPR). The sensors that are more appropriate for the system CM are first selected. Then, mutual information is utilized to weight the correlation among different sensors. The anomaly detection is carried out by using the correlation of sensor data. The sensor data sets used for the evaluation are provided by the National Aeronautics and Space Administration (NASA) Ames Research Center and were used as the Prognostics and Health Management (PHM) challenge data in 2008. By comparing two different sensor selection strategies, the effectiveness of the selection method for data anomaly detection is demonstrated. PMID:27136561

  18. Effective Sensor Selection and Data Anomaly Detection for Condition Monitoring of Aircraft Engines

    PubMed Central

    Liu, Liansheng; Liu, Datong; Zhang, Yujie; Peng, Yu

    2016-01-01

    In a complex system, condition monitoring (CM) can collect information on the system's working status. The condition is mainly sensed by sensors pre-deployed in or on the system. Most existing works study how to utilize the condition information to predict upcoming anomalies, faults, or failures. There is also some research that focuses on faults or anomalies of the sensing element (i.e., the sensor) to enhance system reliability. However, existing approaches ignore the correlation between the sensor selection strategy and data anomaly detection, which can also improve system reliability. To address this issue, we study a new scheme which includes a sensor selection strategy and data anomaly detection, utilizing information theory and Gaussian Process Regression (GPR). The sensors that are more appropriate for the system CM are first selected. Then, mutual information is utilized to weight the correlation among different sensors. The anomaly detection is carried out by using the correlation of sensor data. The sensor data sets used for the evaluation are provided by the National Aeronautics and Space Administration (NASA) Ames Research Center and were used as the Prognostics and Health Management (PHM) challenge data in 2008. By comparing two different sensor selection strategies, the effectiveness of the selection method for data anomaly detection is demonstrated. PMID:27136561
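
    A reduced sketch of the two ingredients, not the paper's full scheme: mutual information ranks candidate sensors, and a Gaussian Process Regression of one selected sensor on another flags samples whose residual leaves the predictive band. The synthetic sensors, kernel defaults and the 4-sigma rule are assumptions.

```python
# Mutual-information sensor ranking followed by GPR-based residual checking;
# synthetic degradation data, kernel defaults and the 4-sigma rule are assumed.
import numpy as np
from sklearn.feature_selection import mutual_info_regression
from sklearn.gaussian_process import GaussianProcessRegressor

rng = np.random.default_rng(9)
n = 300
health = np.linspace(0.0, 1.0, n)                      # latent degradation index
sensors = np.column_stack([
    health + rng.normal(0, 0.05, n),                   # informative
    health ** 2 + rng.normal(0, 0.05, n),              # informative
    rng.normal(0, 1, n),                               # uninformative
])

mi = mutual_info_regression(sensors, health, random_state=0)
selected = np.argsort(mi)[::-1][:2]                    # keep the two best sensors
print("selected sensors:", selected)

# regress one selected sensor on the other and flag large residuals
X = sensors[:, [selected[1]]]
y = sensors[:, selected[0]]
gpr = GaussianProcessRegressor(alpha=1e-2, normalize_y=True).fit(X, y)

y_test = y.copy()
y_test[250] += 1.0                                     # injected sensor anomaly
mean, std = gpr.predict(X, return_std=True)
print("anomalous samples:", np.flatnonzero(np.abs(y_test - mean) > 4 * std))
```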

  19. Addressing the Challenges of Anomaly Detection for Cyber Physical Energy Grid Systems

    SciTech Connect

    Ferragut, Erik M; Laska, Jason A; Melin, Alexander M; Czejdo, Bogdan

    2013-01-01

    The consolidation of cyber communications networks and physical control systems within the energy smart grid introduces a number of new risks. Unfortunately, these risks are largely unknown and poorly understood, yet they include very high impact losses from attacks and component failures. One important aspect of risk management is the detection of anomalies and changes. However, anomaly detection within cyber security remains a difficult, open problem, with special challenges in dealing with false alert rates and heterogeneous data. Furthermore, the integration of cyber and physical dynamics is often intractable. And, because of their broad scope, energy grid cyber-physical systems must be analyzed at multiple scales, from individual components up to network-level dynamics. We describe an improved approach to anomaly detection that combines three important aspects. First, system dynamics are modeled using a reduced-order model for greater computational tractability. Second, a probabilistic and principled approach to anomaly detection is adopted that allows for regulation of false alerts and comparison of anomalies across heterogeneous data sources. Third, a hierarchy of aggregations is constructed to support interactive and automated analyses of anomalies at multiple scales.

  20. Physics-based, Bayesian sequential detection method and system for radioactive contraband

    DOEpatents

    Candy, James V; Axelrod, Michael C; Breitfeller, Eric F; Chambers, David H; Guidry, Brian L; Manatt, Douglas R; Meyer, Alan W; Sale, Kenneth E

    2014-03-18

    A distributed sequential method and system for detecting and identifying radioactive contraband from highly uncertain (noisy) low-count, radionuclide measurements, i.e. an event mode sequence (EMS), using a statistical approach based on Bayesian inference and physics-model-based signal processing based on the representation of a radionuclide as a monoenergetic decomposition of monoenergetic sources. For a given photon event of the EMS, the appropriate monoenergy processing channel is determined using a confidence interval condition-based discriminator for the energy amplitude and interarrival time and parameter estimates are used to update a measured probability density function estimate for a target radionuclide. A sequential likelihood ratio test is then used to determine one of two threshold conditions signifying that the EMS is either identified as the target radionuclide or not, and if not, then repeating the process for the next sequential photon event of the EMS until one of the two threshold conditions is satisfied.
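
    One ingredient of such a processor can be sketched in isolation: a sequential probability ratio test on photon interarrival times, comparing a background-only Poisson rate against background plus a source. The rates, error probabilities and the single-channel simplification below are assumptions, not the patented multi-channel, physics-model-based processor.

```python
# Sequential probability ratio test on photon interarrival times: background
# Poisson rate lam0 vs background-plus-source rate lam1 (values assumed).
import numpy as np

def sprt_interarrival(times, lam0=5.0, lam1=9.0, alpha=1e-3, beta=1e-3):
    """times: interarrival times in seconds; returns decision and event count."""
    low, high = np.log(beta / (1 - alpha)), np.log((1 - beta) / alpha)
    llr = 0.0
    for k, dt in enumerate(times, 1):
        llr += np.log(lam1 / lam0) - (lam1 - lam0) * dt   # exponential-density LLR
        if llr >= high:
            return "target present", k
        if llr <= low:
            return "background only", k
    return "undecided", len(times)

rng = np.random.default_rng(3)
print(sprt_interarrival(rng.exponential(1 / 9.0, 500)))   # source present
print(sprt_interarrival(rng.exponential(1 / 5.0, 500)))   # background only
```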

  1. A Distance Measure for Attention Focusing and Anomaly Detection in Systems Monitoring

    NASA Technical Reports Server (NTRS)

    Doyle, R.

    1994-01-01

    Any attempt to introduce automation into the monitoring of complex physical systems must start from a robust anomaly detection capability. This task is far from straightforward, for a single definition of what constitutes an anomaly is difficult to come by. In addition, to make the monitoring process efficient, and to avoid the potential for information overload on human operators, attention focusing must also be addressed. When an anomaly occurs, more often than not several sensors are affected, and the partially redundant information they provide can be confusing, particularly in a crisis situation where a response is needed quickly. Previous results on extending traditional anomaly detection techniques are summarized. The focus of this paper is a new technique for attention focusing.

  2. Dual-Functional Nanoparticles for In Situ Sequential Detection and Imaging of ATP and H2O2.

    PubMed

    Ren, Hong; Long, Zi; Cui, Mengchao; Shao, Kang; Zhou, Kaixiang; Ouyang, Jin; Na, Na

    2016-08-01

    Within a complex biological sample, in situ sequential detection of multiple molecules without any interference is greatly desirable. Dual-functional nanoparticles with enzyme-based core-shell structures are constructed for the in situ sequential detection of ATP and H2O2 within the same biological system. PMID:27337683

  3. Security inspection in ports by anomaly detection using hyperspectral imaging technology

    NASA Astrophysics Data System (ADS)

    Rivera, Javier; Valverde, Fernando; Saldaña, Manuel; Manian, Vidya

    2013-05-01

    Applying hyperspectral imaging technology to port security is crucial for the detection of possible threats or illegal activities. One of the most common problems that cargo suffers is tampering. This represents a danger to society because it creates a channel for smuggling illegal and hazardous products. If a cargo is altered, security inspections of that cargo should reveal anomalies that indicate the nature of the tampering. Hyperspectral images can detect anomalies by gathering information across multiple electromagnetic bands. The spectra extracted from these bands can be used to detect surface anomalies from different materials. Based on this technology, a scenario was built in which a hyperspectral camera was used to inspect cargo for surface anomalies and a user interface shows the results. The spectra of items altered by different materials that can be used to conceal illegal products are analyzed and classified in order to provide information about the tampered cargo. The image is analyzed with a variety of techniques, such as multiple feature extraction algorithms, autonomous anomaly detection, and target spectrum detection. The results are exported to a workstation or mobile device and shown in an easy-to-use interface. This process could enhance the current capabilities of security systems that are already implemented, providing a more complete approach to detecting threats and illegal cargo.

  4. Advancements of data anomaly detection research in wireless sensor networks: a survey and open issues.

    PubMed

    Rassam, Murad A; Zainal, Anazida; Maarof, Mohd Aizaini

    2013-01-01

    Wireless Sensor Networks (WSNs) are important and necessary platforms for the future as the concept "Internet of Things" has emerged lately. They are used for monitoring, tracking, or controlling of many applications in industry, health care, habitat, and military. However, the quality of data collected by sensor nodes is affected by anomalies that occur due to various reasons, such as node failures, reading errors, unusual events, and malicious attacks. Therefore, anomaly detection is a necessary process to ensure the quality of sensor data before it is utilized for making decisions. In this review, we present the challenges of anomaly detection in WSNs and state the requirements to design efficient and effective anomaly detection models. We then review the latest advancements of data anomaly detection research in WSNs and classify current detection approaches in five main classes based on the detection methods used to design these approaches. Varieties of the state-of-the-art models for each class are covered and their limitations are highlighted to provide ideas for potential future works. Furthermore, the reviewed approaches are compared and evaluated based on how well they meet the stated requirements. Finally, the general limitations of current approaches are mentioned and further research opportunities are suggested and discussed. PMID:23966182

  5. Advancements of Data Anomaly Detection Research in Wireless Sensor Networks: A Survey and Open Issues

    PubMed Central

    Rassam, Murad A.; Zainal, Anazida; Maarof, Mohd Aizaini

    2013-01-01

    Wireless Sensor Networks (WSNs) are important and necessary platforms for the future as the concept “Internet of Things” has emerged lately. They are used for monitoring, tracking, or controlling of many applications in industry, health care, habitat, and military. However, the quality of data collected by sensor nodes is affected by anomalies that occur due to various reasons, such as node failures, reading errors, unusual events, and malicious attacks. Therefore, anomaly detection is a necessary process to ensure the quality of sensor data before it is utilized for making decisions. In this review, we present the challenges of anomaly detection in WSNs and state the requirements to design efficient and effective anomaly detection models. We then review the latest advancements of data anomaly detection research in WSNs and classify current detection approaches in five main classes based on the detection methods used to design these approaches. Varieties of the state-of-the-art models for each class are covered and their limitations are highlighted to provide ideas for potential future works. Furthermore, the reviewed approaches are compared and evaluated based on how well they meet the stated requirements. Finally, the general limitations of current approaches are mentioned and further research opportunities are suggested and discussed. PMID:23966182

  6. Improving Cyber-Security of Smart Grid Systems via Anomaly Detection and Linguistic Domain Knowledge

    SciTech Connect

    Ondrej Linda; Todd Vollmer; Milos Manic

    2012-08-01

    The planned large scale deployment of smart grid network devices will generate a large amount of information exchanged over various types of communication networks. The implementation of these critical systems will require appropriate cyber-security measures. A network anomaly detection solution is considered in this work. In common network architectures, multiple communication streams are simultaneously present, making it difficult to build an anomaly detection solution for the entire system. In addition, common anomaly detection algorithms require specification of a sensitivity threshold, which inevitably leads to a tradeoff between false positive and false negative rates. In order to alleviate these issues, this paper proposes a novel anomaly detection architecture. The designed system applies the previously developed network security cyber-sensor method to individual selected communication streams, allowing accurate normal network behavior models to be learned. Furthermore, the developed system dynamically adjusts the sensitivity threshold of each anomaly detection algorithm based on domain knowledge about the specific network system. It is proposed to model this domain knowledge using Interval Type-2 Fuzzy Logic rules, which linguistically describe the relationship between various features of the network communication and the possibility of a cyber attack. The proposed method was tested on an experimental smart grid system, demonstrating enhanced cyber-security.

  7. Sequential Model Selection based Segmentation to Detect DNA Copy Number Variation

    PubMed Central

    Hu, Jianhua; Zhang, Liwen; Wang, Huixia Judy

    2016-01-01

    Array-based CGH experiments are designed to detect genomic aberrations or regions of DNA copy-number variation that are associated with an outcome, typically a state of disease. Most existing statistical methods target the detection of DNA copy-number variations in a single sample or array. We focus on the detection of group effect variation through simultaneous study of multiple samples from multiple groups. Rather than using direct segmentation or smoothing techniques, as commonly seen in existing detection methods, we develop a sequential model selection procedure that is guided by a modified Bayesian information criterion. This approach improves detection accuracy by cumulatively utilizing information across contiguous clones, and has a computational advantage over existing popular detection methods. Our empirical investigation suggests that the performance of the proposed method is superior to that of the existing detection methods, particularly in detecting small segments or separating neighboring segments with differential degrees of copy-number variation. PMID:26954760

  8. Incremental classification learning for anomaly detection in medical images

    NASA Astrophysics Data System (ADS)

    Giritharan, Balathasan; Yuan, Xiaohui; Liu, Jianguo

    2009-02-01

    Computer-aided diagnosis usually screens thousands of instances to find only a few positive cases that indicate probable presence of disease. The amount of patient data increases consistently all the time. In diagnosis of new instances, disagreement occurs between a CAD system and physicians, which suggests inaccurate classifiers. Intuitively, misclassified instances and the previously acquired data should be used to retrain the classifier. This, however, is very time consuming and, in some cases where the dataset is too large, becomes infeasible. In addition, among the patient data, only a small percentile shows positive signs, which is known as imbalanced data. We present an incremental Support Vector Machine (SVM) as a solution for the class imbalance problem in classification of anomalies in medical images. The support vectors provide a concise representation of the distribution of the training data. Here we use bootstrapping to identify potential candidate support vectors for future iterations. Experiments were conducted using images from endoscopy videos, and the sensitivity and specificity were close to that of an SVM trained using all samples available at a given incremental step, with significantly improved efficiency in training the classifier.

  9. Fuzzy neural networks for classification and detection of anomalies.

    PubMed

    Meneganti, M; Saviello, F S; Tagliaferri, R

    1998-01-01

    In this paper, a new learning algorithm for Simpson's fuzzy min-max neural network is presented. It overcomes some undesired properties of Simpson's model: specifically, it has neither thresholds that bound the dimension of the hyperboxes nor sensitivity parameters. Our new algorithm improves the network performance: in fact, the classification result does not depend on the presentation order of the patterns in the training set, and at each step the classification error in the training set cannot increase. The new neural model is particularly useful in classification problems, as shown by comparison with several fuzzy neural nets cited in the literature (Simpson's min-max model, fuzzy ARTMAP proposed by Carpenter, Grossberg et al. in 1992, adaptive fuzzy systems as introduced by Wang in his book) and the classical multilayer perceptron neural network with the backpropagation learning algorithm. The tests were executed on three different classification problems: the first one with two-dimensional synthetic data, the second one with realistic data generated by a simulator to find anomalies in the cooling system of a blast furnace, and the third one with real data for industrial diagnosis. The experiments followed recent evaluation criteria known in the literature and used the Microsoft Visual C++ development environment on personal computers. PMID:18255771

  10. Anomaly detection of turbopump vibration in Space Shuttle Main Engine using statistics and neural networks

    NASA Astrophysics Data System (ADS)

    Lo, C. F.; Wu, K.; Whitehead, B. A.

    1993-06-01

    Statistical and neural network methods have been applied to investigate the feasibility of detecting anomalies in turbopump vibration of the SSME. The anomalies are detected based on the amplitude of peaks of fundamental and harmonic frequencies in the power spectral density. These data are reduced to the proper format from sensor data measured by strain gauges and accelerometers. Both methods are capable of detecting the vibration anomalies. The statistical method requires sufficient data points to establish a reasonable statistical distribution data bank; this method is applicable for on-line operation. The neural network method likewise needs a sufficient data base to train the networks. The testing procedure can be utilized at any time so long as the characteristics of the components remain unchanged.
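
    As a minimal illustration of the feature-extraction step described above, the following Python sketch estimates the power spectral density of a vibration signal with Welch's method and reads off peak amplitudes at the fundamental and harmonic frequencies. The signal, sampling rate, and frequencies are invented for the example and are not taken from the paper.

      import numpy as np
      from scipy.signal import welch

      fs = 10240.0                        # sampling rate in Hz (assumed)
      t = np.arange(0, 2.0, 1.0 / fs)
      fundamental = 600.0                 # synchronous (1N) frequency, illustrative
      signal = (np.sin(2 * np.pi * fundamental * t)
                + 0.4 * np.sin(2 * np.pi * 2 * fundamental * t)
                + 0.1 * np.random.randn(t.size))

      # Welch estimate of the power spectral density
      freqs, psd = welch(signal, fs=fs, nperseg=4096)

      def peak_amplitude(target_hz, tol_hz=10.0):
          """Largest PSD value within +/- tol_hz of the target frequency."""
          band = (freqs >= target_hz - tol_hz) & (freqs <= target_hz + tol_hz)
          return psd[band].max()

      # Features for anomaly screening: peaks at 1N, 2N and 3N
      features = [peak_amplitude(k * fundamental) for k in (1, 2, 3)]
      print("PSD peak features (1N, 2N, 3N):", features)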

  11. Anomaly detection of turbopump vibration in Space Shuttle Main Engine using statistics and neural networks

    NASA Technical Reports Server (NTRS)

    Lo, C. F.; Wu, K.; Whitehead, B. A.

    1993-01-01

    Statistical and neural network methods have been applied to investigate the feasibility of detecting anomalies in turbopump vibration of the SSME. The anomalies are detected based on the amplitude of peaks of fundamental and harmonic frequencies in the power spectral density. These data are reduced to the proper format from sensor data measured by strain gauges and accelerometers. Both methods are capable of detecting the vibration anomalies. The statistical method requires sufficient data points to establish a reasonable statistical distribution data bank; this method is applicable for on-line operation. The neural network method likewise needs a sufficient data base to train the networks. The testing procedure can be utilized at any time so long as the characteristics of the components remain unchanged.

  12. Adaptive sequential algorithms for detecting targets in a heavy IR clutter

    NASA Astrophysics Data System (ADS)

    Tartakovsky, Alexander G.; Kligys, Skirmantas; Petrov, Anton

    1999-10-01

    Cruise missiles over land and sea cluttered backgrounds are serious threats to search and track systems. In general, these threats are stealthy in both the IR and radio frequency bands; that is, their thermal IR signature and their radar cross section can be quite small. This paper discusses adaptive sequential detection methods which exploit 'track-before-detect' technology for detecting low-SNR targets in IR search and track (IRST) systems. Although we focus on an IRST against cruise missiles over land and sea cluttered backgrounds, the results are applicable to other sensors and other kinds of targets.

  13. Apparatus and method for detecting a magnetic anomaly contiguous to remote location by SQUID gradiometer and magnetometer systems

    DOEpatents

    Overton, W.C. Jr.; Steyert, W.A. Jr.

    1981-05-22

    A superconducting quantum interference device (SQUID) magnetic detection apparatus detects magnetic fields, signals, and anomalies at remote locations. Two remotely rotatable SQUID gradiometers may be housed in a cryogenic environment to search for and unambiguously locate magnetic anomalies. The SQUID magnetic detection apparatus can be used to determine the azimuth of a hydrofracture by first flooding the hydrofracture with a ferrofluid to create an artificial magnetic anomaly therein.

  14. A novel anomaly detection approach based on clustering and decision-level fusion

    NASA Astrophysics Data System (ADS)

    Zhong, Shengwei; Zhang, Ye

    2015-09-01

    In hyperspectral image processing, anomaly detection is a valuable way of searching for targets whose spectral characteristics are not known, and the estimation of background signals is the key procedure. On account of the high dimensionality and complexity of hyperspectral images, dimensionality reduction and background suppression are necessary. In addition, the complementarity of different anomaly detection algorithms can be utilized to improve the effectiveness of anomaly detection. In this paper, we propose a novel method of anomaly detection based on clustering with an optimized k-means algorithm and decision-level fusion. In our proposed method, pixels with similar features are first clustered using an optimized k-means method. Second, dimensionality reduction is conducted using principal component analysis to reduce the amount of calculation. Then, to increase the accuracy of detection and decrease the false-alarm rate, both the Reed-Xiaoli (RX) and Kernel RX algorithms are applied to the processed image. Lastly, decision-level fusion is performed on the detection results. A simulated hyperspectral image and a real hyperspectral one are both used to evaluate the performance of our proposed method. Visual analysis and quantitative analysis of receiver operating characteristic (ROC) curves show that our algorithm can achieve better performance when compared with other classic approaches and state-of-the-art approaches.
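
    As a rough illustration of the Reed-Xiaoli (RX) stage used in the fusion scheme above, the Python sketch below computes the global RX score, i.e. the Mahalanobis distance of each pixel spectrum from the scene mean, on a synthetic data cube. The cube, its dimensions, and the regularization term are assumptions made for the example, not details from the paper.

      import numpy as np

      rng = np.random.default_rng(0)
      rows, cols, bands = 100, 100, 50
      cube = rng.normal(size=(rows, cols, bands))          # synthetic background
      cube[40, 60] += 5.0                                  # implanted anomalous pixel

      pixels = cube.reshape(-1, bands)
      mu = pixels.mean(axis=0)
      cov = np.cov(pixels, rowvar=False)
      cov_inv = np.linalg.inv(cov + 1e-6 * np.eye(bands))  # regularized inverse

      # RX score: squared Mahalanobis distance of each pixel from the background mean
      diff = pixels - mu
      rx_score = np.einsum('ij,jk,ik->i', diff, cov_inv, diff).reshape(rows, cols)
      print("max RX score at", np.unravel_index(rx_score.argmax(), rx_score.shape))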

  15. Low frequency of Y anomaly detected in Australian Brahman cow-herds.

    PubMed

    de Camargo, Gregório M F; Porto-Neto, Laercio R; Fortes, Marina R S; Bunch, Rowan J; Tonhati, Humberto; Reverter, Antonio; Moore, Stephen S; Lehnert, Sigrid A

    2015-02-01

    Indicine cattle have lower reproductive performance in comparison to taurine cattle. A chromosomal anomaly characterized by the presence of Y markers in females has been reported and associated with infertility in cattle. The aim of this study was to investigate the occurrence of the anomaly in Brahman cows. Brahman cows (n = 929) were genotyped for a Y chromosome-specific region using real-time PCR. Only six of the 929 cows carried the anomaly (0.6%). The anomaly frequency was much lower in Brahman cows than in the crossbred population in which it was first detected. The anomaly also does not appear to affect pregnancy in the population. Due to the low frequency, association analyses could not be executed. Further, the SNP signal of the pseudoautosomal boundary region of the Y chromosome was investigated using an HD SNP chip. Pooled DNA of "non-pregnant" and "pregnant" cows was compared and no difference in SNP allele frequency was observed. Results suggest that the anomaly had a very low frequency in this Australian Brahman population and had no effect on reproduction. Further studies comparing pregnant cows and cows that failed to conceive should be executed after better assembly and annotation of the Y chromosome in cattle. PMID:25750859

  16. Low frequency of Y anomaly detected in Australian Brahman cow-herds

    PubMed Central

    de Camargo, Gregório M.F.; Porto-Neto, Laercio R.; Fortes, Marina R.S.; Bunch, Rowan J.; Tonhati, Humberto; Reverter, Antonio; Moore, Stephen S.; Lehnert, Sigrid A.

    2015-01-01

    Indicine cattle have lower reproductive performance in comparison to taurine cattle. A chromosomal anomaly characterized by the presence of Y markers in females has been reported and associated with infertility in cattle. The aim of this study was to investigate the occurrence of the anomaly in Brahman cows. Brahman cows (n = 929) were genotyped for a Y chromosome-specific region using real-time PCR. Only six of the 929 cows carried the anomaly (0.6%). The anomaly frequency was much lower in Brahman cows than in the crossbred population in which it was first detected. The anomaly also does not appear to affect pregnancy in the population. Due to the low frequency, association analyses could not be executed. Further, the SNP signal of the pseudoautosomal boundary region of the Y chromosome was investigated using an HD SNP chip. Pooled DNA of “non-pregnant” and “pregnant” cows was compared and no difference in SNP allele frequency was observed. Results suggest that the anomaly had a very low frequency in this Australian Brahman population and had no effect on reproduction. Further studies comparing pregnant cows and cows that failed to conceive should be executed after better assembly and annotation of the Y chromosome in cattle. PMID:25750859

  17. Time series analysis of infrared satellite data for detecting thermal anomalies: a hybrid approach

    NASA Astrophysics Data System (ADS)

    Koeppen, W. C.; Pilger, E.; Wright, R.

    2011-07-01

    We developed and tested an automated algorithm that analyzes thermal infrared satellite time series data to detect and quantify the excess energy radiated from thermal anomalies such as active volcanoes. Our algorithm enhances the previously developed MODVOLC approach, a simple point operation, by adding a more complex time series component based on the methods of the Robust Satellite Techniques (RST) algorithm. Using test sites at Anatahan and Kīlauea volcanoes, the hybrid time series approach detected ~15% more thermal anomalies than MODVOLC with very few, if any, known false detections. We also tested gas flares in the Cantarell oil field in the Gulf of Mexico as an end-member scenario representing very persistent thermal anomalies. At Cantarell, the hybrid algorithm showed only a slight improvement, but it did identify flares that were undetected by MODVOLC. We estimate that at least 80 MODIS images for each calendar month are required to create good reference images necessary for the time series analysis of the hybrid algorithm. The improved performance of the new algorithm over MODVOLC will result in the detection of low temperature thermal anomalies that will be useful in improving our ability to document Earth's volcanic eruptions, as well as detecting low temperature thermal precursors to larger eruptions.

  18. [A Hyperspectral Imagery Anomaly Detection Algorithm Based on Gauss-Markov Model].

    PubMed

    Gao, Kun; Liu, Ying; Wang, Li-jing; Zhu, Zhen-yu; Cheng, Hao-bo

    2015-10-01

    With the development of spectral imaging technology, hyperspectral anomaly detection is becoming more and more widely used in remote sensing imagery processing. The traditional RX anomaly detection algorithm neglects the spatial correlation of images and does not effectively reduce the data dimension, which costs too much processing time and limits its validity on hyperspectral data. Hyperspectral images follow a Gauss-Markov Random Field (GMRF) model in the spatial and spectral dimensions. The inverse of the covariance matrix can be calculated directly from the Gauss-Markov parameters, which avoids the huge computation over the hyperspectral data. This paper proposes an improved RX anomaly detection algorithm based on a three-dimensional GMRF. The hyperspectral imagery data are modeled with the GMRF, and the GMRF parameters are estimated with the approximated maximum likelihood method. The detection operator is constructed from the estimated GMRF parameters. The pixel under test is taken as the centre of a local optimization window, called the GMRF detection window. The abnormality degree is calculated from the mean vector and the inverse covariance matrix, both computed within the window. The image is processed pixel by pixel as the GMRF window moves. The traditional RX detection algorithm, a regional hypothesis detection algorithm based on GMRF, and the algorithm proposed in this paper are tested on AVIRIS hyperspectral data. Simulation results show that the proposed anomaly detection method improves detection efficiency and reduces the false alarm rate. Operation time statistics for the three algorithms were collected in the same computing environment; the results show that the proposed algorithm improves the operation time by 45.2%, demonstrating good computational efficiency. PMID:26904830

  19. Using new edges for anomaly detection in computer networks

    DOEpatents

    Neil, Joshua Charles

    2015-05-19

    Creation of new edges in a network may be used as an indication of a potential attack on the network. Historical data on the frequency with which nodes in a network create and receive new edges may be analyzed. Baseline models of behavior among the edges in the network may be established based on the analysis of the historical data. A new edge that deviates from its respective baseline model by more than a predetermined threshold during a time window may be detected. The new edge may be flagged as potentially anomalous when the deviation from the respective baseline model is detected. Probabilities for both new and existing edges may be obtained for all edges in a path or other subgraph. The probabilities may then be combined to obtain a score for the path or other subgraph. A threshold may be obtained by calculating an empirical distribution of the scores under historical conditions.
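
    The patent abstract above can be read as a scoring scheme for new communication edges. The following Python sketch is only a simplified interpretation of that idea, not the patented method: per-node new-edge rates stand in for the baseline models, a new edge is scored by its negative log-probability, and path scores are compared against a threshold drawn from historical scores. All rates, node names, and the threshold are invented.

      from collections import defaultdict
      import math

      # Assumed historical rates at which each node initiates / receives new edges
      init_rate = defaultdict(lambda: 0.01, {'A': 0.20, 'B': 0.05})
      recv_rate = defaultdict(lambda: 0.01, {'B': 0.15, 'C': 0.02})

      def new_edge_score(src, dst):
          """Negative log-probability that src initiates and dst receives a new edge."""
          return -math.log(init_rate[src] * recv_rate[dst])

      def path_score(path_edges):
          """Combine per-edge scores over a path or other subgraph by summation."""
          return sum(new_edge_score(s, d) for s, d in path_edges)

      # Threshold taken from the empirical distribution of scores under
      # historical (attack-free) conditions; the value here is illustrative.
      threshold = 12.0
      observed_path = [('A', 'B'), ('B', 'C')]
      score = path_score(observed_path)
      print(score, "anomalous" if score > threshold else "normal")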

  20. Detection of anomalies in radio tomography of asteroids: Source count and forward errors

    NASA Astrophysics Data System (ADS)

    Pursiainen, S.; Kaasalainen, M.

    2014-09-01

    The purpose of this study was to advance numerical methods for radio tomography in which an asteroid's internal electric permittivity distribution is to be recovered from radio frequency data gathered by an orbiter. The focus was on signal generation via multiple sources (transponders), providing one potential, or even essential, scenario to be implemented in a challenging in situ measurement environment and within tight payload limits. As a novel feature, the effects of forward errors, including noise and a priori uncertainty of the forward (data) simulation, were examined through a combination of the iterative alternating sequential (IAS) inverse algorithm and finite-difference time-domain (FDTD) simulation of time evolution data. Single and multiple source scenarios were compared in two-dimensional localization of permittivity anomalies. Three different anomaly strengths and four levels of total noise were tested. Results suggest, among other things, that multiple sources can be necessary to obtain appropriate results, for example, to distinguish three separate anomalies with permittivity less than or equal to half of the background value, which is relevant in the recovery of internal cavities.

  1. Radiation detection method and system using the sequential probability ratio test

    DOEpatents

    Nelson, Karl E.; Valentine, John D.; Beauchamp, Brock R.

    2007-07-17

    A method and system using the Sequential Probability Ratio Test to enhance the detection of an elevated level of radiation, by determining whether a set of observations is consistent with a specified model within given bounds of statistical significance. In particular, the SPRT is used in the present invention to maximize the range of detection by providing processing mechanisms for estimating the dynamic background radiation, adjusting the models to reflect the amount of background knowledge at the current point in time, analyzing the current sample using the models to determine statistical significance, and determining when the sample has returned to the expected background conditions.
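
    As a minimal sketch of a sequential probability ratio test in a radiation-counting setting like the one described above, the Python fragment below tests a background Poisson rate against an elevated rate using Wald's thresholds. The rates, error targets, and count stream are illustrative assumptions, not values from the patent.

      import math

      lambda0 = 5.0        # expected background counts per interval (assumed)
      lambda1 = 8.0        # elevated-rate alternative (assumed)
      alpha, beta = 1e-3, 1e-2                 # false-alarm and missed-detection targets
      upper = math.log((1 - beta) / alpha)     # crossing => declare elevated radiation
      lower = math.log(beta / (1 - alpha))     # crossing => declare background

      def sprt(counts):
          llr = 0.0
          for n, k in enumerate(counts, start=1):
              # log-likelihood ratio increment for a Poisson observation k
              llr += k * math.log(lambda1 / lambda0) - (lambda1 - lambda0)
              if llr >= upper:
                  return "alarm", n
              if llr <= lower:
                  return "background", n
          return "undecided", len(counts)

      print(sprt([6, 7, 9, 10, 8, 11]))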

  2. Dynamic analysis methods for detecting anomalies in asynchronously interacting systems

    SciTech Connect

    Kumar, Akshat; Solis, John Hector; Matschke, Benjamin

    2014-01-01

    Detecting modifications to digital system designs, whether malicious or benign, is problematic due to the complexity of the systems being analyzed. Moreover, static analysis techniques and tools can only be used during the initial design and implementation phases to verify safety and liveness properties. It is computationally intractable to guarantee that any previously verified properties still hold after a system, or even a single component, has been produced by a third-party manufacturer. In this paper we explore new approaches for creating a robust system design by investigating highly-structured computational models that simplify verification and analysis. Our approach avoids the need to fully reconstruct the implemented system by incorporating a small verification component that dynamically detects deviations from the design specification at run-time. The first approach encodes information extracted from the original system design algebraically into a verification component. During run-time, this component randomly queries the implementation for trace information and verifies that no design-level properties have been violated. If any deviation is detected, then a pre-specified fail-safe or notification behavior is triggered. Our second approach utilizes a partitioning methodology to view liveness and safety properties as a distributed decision task and the implementation as a proposed protocol that solves this task. Thus the problem of verifying safety and liveness properties is translated to that of verifying that the implementation solves the associated decision task. We build upon results from distributed systems and algebraic topology to construct a learning mechanism for verifying safety and liveness properties from samples of run-time executions.

  3. A comparison of algorithms for anomaly detection in safeguards and computer security systems using neural networks

    SciTech Connect

    Howell, J.A.; Whiteson, R.

    1992-08-01

    Detection of anomalies in nuclear safeguards and computer security systems is a tedious and time-consuming task. It typically requires the examination of large amounts of data for unusual patterns of activity. Neural networks provide a flexible pattern-recognition capability that can easily be adapted for these purposes. In this paper, we discuss architectures for accomplishing this task.

  4. A comparison of algorithms for anomaly detection in safeguards and computer security systems using neural networks

    SciTech Connect

    Howell, J.A.; Whiteson, R.

    1992-01-01

    Detection of anomalies in nuclear safeguards and computer security systems is a tedious and time-consuming task. It typically requires the examination of large amounts of data for unusual patterns of activity. Neural networks provide a flexible pattern-recognition capability that can easily be adapted for these purposes. In this paper, we discuss architectures for accomplishing this task.

  5. Dual Use Corrosion Inhibitor and Penetrant for Anomaly Detection in Neutron/X Radiography

    NASA Technical Reports Server (NTRS)

    Hall, Phillip B. (Inventor); Novak, Howard L. (Inventor)

    2004-01-01

    A dual purpose corrosion inhibitor and penetrant composition sensitive to radiography interrogation is provided. The corrosion inhibitor mitigates or eliminates corrosion on the surface of a substrate upon which the corrosion inhibitor is applied. In addition, the corrosion inhibitor provides for the attenuation of a signal used during radiography interrogation thereby providing for detection of anomalies on the surface of the substrate.

  6. Anomaly Detection in the Right Hemisphere: The Influence of Visuospatial Factors

    ERIC Educational Resources Information Center

    Smith, Stephen D.; Dixon, Michael J.; Tays, William J.; Bulman-Fleming, M. Barbara

    2004-01-01

    Previous research with both brain-damaged and neurologically intact populations has demonstrated that the right cerebral hemisphere (RH) is superior to the left cerebral hemisphere (LH) at detecting anomalies (or incongruities) in objects (Ramachandran, 1995; Smith, Tays, Dixon, & Bulman-Fleming, 2002). The current research assesses whether the RH…

  7. Damage diagnosis algorithm using a sequential change point detection method with an unknown distribution for damage

    NASA Astrophysics Data System (ADS)

    Noh, Hae Young; Rajagopal, Ram; Kiremidjian, Anne S.

    2012-04-01

    This paper introduces a damage diagnosis algorithm for civil structures that uses a sequential change point detection method for the cases where the post-damage feature distribution is unknown a priori. This algorithm extracts features from structural vibration data using time-series analysis and then declares damage using the change point detection method. The change point detection method asymptotically minimizes detection delay for a given false alarm rate. The conventional method uses the known pre- and post-damage feature distributions to perform a sequential hypothesis test. In practice, however, the post-damage distribution is unlikely to be known a priori. Therefore, our algorithm estimates and updates this distribution as data are collected using the maximum likelihood and the Bayesian methods. We also applied an approximate method to reduce the computation load and memory requirement associated with the estimation. The algorithm is validated using multiple sets of simulated data and a set of experimental data collected from a four-story steel special moment-resisting frame. Our algorithm was able to estimate the post-damage distribution consistently and resulted in detection delays only a few seconds longer than the delays from the conventional method that assumes we know the post-damage feature distribution. We confirmed that the Bayesian method is particularly efficient in declaring damage with minimal memory requirement, but the maximum likelihood method provides an insightful heuristic approach.
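
    A minimal sketch of the kind of sequential test described above is given below: a CUSUM-style statistic on a Gaussian damage-sensitive feature in which the unknown post-damage mean is re-estimated by maximum likelihood from a recent window of samples. The feature stream, threshold, and window length are invented for illustration, and the estimation step is much cruder than the authors' procedure.

      import numpy as np

      rng = np.random.default_rng(1)
      pre_mu, sigma = 0.0, 1.0                       # known pre-damage feature distribution
      feature = np.concatenate([rng.normal(pre_mu, sigma, 200),
                                rng.normal(1.5, sigma, 100)])   # damage begins at t = 200

      h = 10.0          # detection threshold, controls the false alarm rate (assumed)
      window = 20       # samples used to estimate the post-damage mean
      stat, history = 0.0, []

      for t, x in enumerate(feature):
          history.append(x)
          # crude maximum-likelihood estimate of the post-damage mean (upward shifts only)
          post_mu = max(np.mean(history[-window:]), pre_mu + 1e-6)
          # log-likelihood ratio increment of the post- vs. pre-damage Gaussian
          llr = (x - pre_mu) * (post_mu - pre_mu) / sigma**2 \
                - (post_mu - pre_mu) ** 2 / (2 * sigma**2)
          stat = max(0.0, stat + llr)
          if stat > h:
              print("damage declared at sample", t)
              break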

  8. Magnetic anomaly detection (MAD) of ferromagnetic pipelines using principal component analysis (PCA)

    NASA Astrophysics Data System (ADS)

    Sheinker, Arie; Moldwin, Mark B.

    2016-04-01

    The magnetic anomaly detection (MAD) method is used for detection of visually obscured ferromagnetic objects. The method exploits the magnetic field originating from the ferromagnetic object, which constitutes an anomaly in the ambient earth’s magnetic field. Traditionally, MAD is used to detect objects with a magnetic field of a dipole structure, where far from the object it can be considered as a point source. In the present work, we expand MAD to the case of a non-dipole source, i.e. a ferromagnetic pipeline. We use principal component analysis (PCA) to calculate the principal components, which are then employed to construct an effective detector. Experiments conducted in our lab with real-world data validate the above analysis. The simplicity, low computational complexity, and the high detection rate make the proposed detector attractive for real-time, low power applications.

  9. A New Curb Detection Method for Unmanned Ground Vehicles Using 2D Sequential Laser Data

    PubMed Central

    Liu, Zhao; Wang, Jinling; Liu, Daxue

    2013-01-01

    Curb detection is an important research topic in environment perception, which is an essential part of unmanned ground vehicle (UGV) operations. In this paper, a new curb detection method using a 2D laser range finder in a semi-structured environment is presented. In the proposed method, firstly, a local Digital Elevation Map (DEM) is built using 2D sequential laser rangefinder data and vehicle state data in a dynamic environment and a probabilistic moving object deletion approach is proposed to cope with the effect of moving objects. Secondly, the curb candidate points are extracted based on the moving direction of the vehicle in the local DEM. Finally, the straight and curved curbs are detected by the Hough transform and the multi-model RANSAC algorithm, respectively. The proposed method can detect the curbs robustly in both static and typical dynamic environments. The proposed method has been verified in real vehicle experiments. PMID:23325170

  10. A new curb detection method for unmanned ground vehicles using 2D sequential laser data.

    PubMed

    Liu, Zhao; Wang, Jinling; Liu, Daxue

    2013-01-01

    Curb detection is an important research topic in environment perception, which is an essential part of unmanned ground vehicle (UGV) operations. In this paper, a new curb detection method using a 2D laser range finder in a semi-structured environment is presented. In the proposed method, firstly, a local Digital Elevation Map (DEM) is built using 2D sequential laser rangefinder data and vehicle state data in a dynamic environment and a probabilistic moving object deletion approach is proposed to cope with the effect of moving objects. Secondly, the curb candidate points are extracted based on the moving direction of the vehicle in the local DEM. Finally, the straight and curved curbs are detected by the Hough transform and the multi-model RANSAC algorithm, respectively. The proposed method can detect the curbs robustly in both static and typical dynamic environments. The proposed method has been verified in real vehicle experiments. PMID:23325170

  11. Anomaly Detection in Large Sets of High-Dimensional Symbol Sequences

    NASA Technical Reports Server (NTRS)

    Budalakoti, Suratna; Srivastava, Ashok N.; Akella, Ram; Turkov, Eugene

    2006-01-01

    This paper addresses the problem of detecting and describing anomalies in large sets of high-dimensional symbol sequences. The approach taken uses unsupervised clustering of sequences using the normalized longest common subsequence (LCS) as a similarity measure, followed by detailed analysis of outliers to detect anomalies. As the LCS measure is expensive to compute, the first part of the paper discusses existing algorithms, such as the Hunt-Szymanski algorithm, that have low time-complexity. We then discuss why these algorithms often do not work well in practice and present a new hybrid algorithm for computing the LCS that, in our tests, outperforms the Hunt-Szymanski algorithm by a factor of five. The second part of the paper presents new algorithms for outlier analysis that provide comprehensible indicators as to why a particular sequence was deemed to be an outlier. The algorithms provide a coherent description to an analyst of the anomalies in the sequence, compared to more normal sequences. The algorithms we present are general and domain-independent, so we discuss applications in related areas such as anomaly detection.
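
    The similarity measure named above is easy to state: the longest common subsequence (LCS) length normalized by the sequence lengths. The Python sketch below uses the standard quadratic dynamic program and one common normalization (the geometric mean of the lengths); it is not the faster hybrid algorithm developed in the paper.

      import math

      def lcs_length(a, b):
          """Length of the longest common subsequence of two symbol sequences."""
          dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
          for i, x in enumerate(a, 1):
              for j, y in enumerate(b, 1):
                  dp[i][j] = dp[i - 1][j - 1] + 1 if x == y else max(dp[i - 1][j], dp[i][j - 1])
          return dp[len(a)][len(b)]

      def normalized_lcs(a, b):
          """LCS length normalized by the geometric mean of the two lengths."""
          return lcs_length(a, b) / math.sqrt(len(a) * len(b))

      print(normalized_lcs("ABCDEF", "ABDF"))   # 4 / sqrt(24) ~= 0.816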

  12. Towards spatial localisation of harmful algal blooms; statistics-based spatial anomaly detection

    NASA Astrophysics Data System (ADS)

    Shutler, J. D.; Grant, M. G.; Miller, P. I.

    2005-10-01

    Harmful algal blooms are believed to be increasing in occurrence, and their toxins can be concentrated by filter-feeding shellfish and cause amnesia or paralysis when ingested. As a result, fisheries and beaches in the vicinity of blooms may need to be closed and the local population informed. For this avoidance planning, timely information on the existence of a bloom, its species, and an accurate map of its extent would be prudent. Current research to detect these blooms from space has mainly concentrated on spectral approaches to determining species. We present a novel statistics-based background-subtraction technique that produces improved descriptions of an anomaly's extent from remotely sensed ocean colour data. This is achieved by extracting bulk information from a background model, complemented by a computer-vision ramp-filtering technique to specifically detect the perimeter of the anomaly. The complete extraction technique uses temporal-variance estimates which control the subtraction of the scene of interest from the time-weighted background estimate, producing confidence maps of anomaly extent. Through the variance estimates the method learns the noise present in the data sequence, providing robustness and allowing generic application. Further, the use of the median for the background model reduces the effects of anomalies that appear within the time sequence used to generate it, allowing seasonal variations in the background levels to be closely followed. To illustrate the detection algorithm's application, it has been applied to two spectrally different oceanic regions.

  13. HPNAIDM: The High-Performance Network Anomaly/Intrusion Detection and Mitigation System

    SciTech Connect

    Chen, Yan

    2013-12-05

    Identifying traffic anomalies and attacks rapidly and accurately is critical for large network operators. With the rapid growth of network bandwidth, such as the next generation DOE UltraScience Network, and the fast emergence of new attacks/viruses/worms, existing network intrusion detection systems (IDS) are insufficient because they: • Are mostly host-based and not scalable to high-performance networks; • Are mostly signature-based and unable to adaptively recognize flow-level unknown attacks; • Cannot differentiate malicious events from unintentional anomalies. To address these challenges, we proposed and developed a new paradigm called the high-performance network anomaly/intrusion detection and mitigation (HPNAIDM) system. The new paradigm is significantly different from existing IDSes with the following features (research thrusts): • Online traffic recording and analysis on high-speed networks; • Online adaptive flow-level anomaly/intrusion detection and mitigation; • Integrated approach for false positive reduction. Our research prototype and evaluation demonstrate that the HPNAIDM system is highly effective and economically feasible. Beyond satisfying the pre-set goals, we exceeded them significantly (see more details in the next section). Overall, our project harvested 23 publications (2 book chapters, 6 journal papers and 15 peer-reviewed conference/workshop papers). In addition, we built a website for technique dissemination, which hosts two system prototype releases for the research community. We also filed a patent application and developed strong international and domestic collaborations which span both academia and industry.

  14. Detection of Surface Temperature Anomalies in the Coso Geothermal Field Using Thermal Infrared Remote Sensing

    NASA Astrophysics Data System (ADS)

    Coolbaugh, M.; Eneva, M.; Bjornstad, S.; Combs, J.

    2007-12-01

    We use thermal infrared (TIR) data from the spaceborne ASTER instrument to detect surface temperature anomalies in the Coso geothermal field in eastern California. The identification of such anomalies in a known geothermal area serves as an incentive to search for similar markers to areas of unknown geothermal potential. We carried out field measurements concurrently with the collection of ASTER images. The field data included reflectance, subsurface and surface temperatures, and radiosonde atmospheric profiles. We apply techniques specifically targeted to correct for thermal artifacts caused by topography, albedo, and thermal inertia. This approach has the potential to reduce data noise and to reveal thermal anomalies which are not distinguishable in the uncorrected imagery. The combination of remote sensing and field data can be used to evaluate the performance of TIR remote sensing as a cost-effective geothermal exploration tool.

  15. Structural Anomaly Detection Using Fiber Optic Sensors and Inverse Finite Element Method

    NASA Technical Reports Server (NTRS)

    Quach, Cuong C.; Vazquez, Sixto L.; Tessler, Alex; Moore, Jason P.; Cooper, Eric G.; Spangler, Jan. L.

    2005-01-01

    NASA Langley Research Center is investigating a variety of techniques for mitigating aircraft accidents due to structural component failure. One technique under consideration combines distributed fiber optic strain sensing with an inverse finite element method for detecting and characterizing structural anomalies that may provide early indication of airframe structure degradation. The technique identifies structural anomalies that result in observable changes in localized strain but do not impact the overall surface shape. Surface shape information is provided by an inverse finite element method that computes full-field displacements and internal loads using strain data from in-situ fiber optic sensors. This paper describes a prototype of such a system and reports results from a series of laboratory tests conducted on a test coupon subjected to increasing levels of damage.

  16. Unsupervised Anomaly Detection Based on Clustering and Multiple One-Class SVM

    NASA Astrophysics Data System (ADS)

    Song, Jungsuk; Takakura, Hiroki; Okabe, Yasuo; Kwon, Yongjin

    Intrusion detection systems (IDS) have played an important role as devices to defend our networks from cyber attacks. However, since they are unable to detect unknown attacks, i.e., 0-day attacks, the ultimate challenge in the intrusion detection field is how to exactly identify such attacks in an automated manner. Over the past few years, several studies on solving these problems have been made on anomaly detection using unsupervised learning techniques such as clustering, one-class support vector machines (SVM), etc. Although they enable one to construct intrusion detection models at low cost and effort, and have the capability to detect unforeseen attacks, they still have two main problems in intrusion detection: a low detection rate and a high false positive rate. In this paper, we propose a new anomaly detection method based on clustering and multiple one-class SVMs in order to improve the detection rate while maintaining a low false positive rate. We evaluated our method using the KDD Cup 1999 data set. Evaluation results show that our approach outperforms the existing algorithms reported in the literature, especially in the detection of unknown attacks.
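
    A minimal sketch of the two-stage idea summarized above: cluster the unlabeled training traffic, fit one one-class SVM per cluster, and flag test points rejected by the SVM of their nearest cluster. The data, cluster count, and SVM parameters are illustrative assumptions, and the sketch omits the authors' full training procedure and evaluation.

      import numpy as np
      from sklearn.cluster import KMeans
      from sklearn.svm import OneClassSVM

      rng = np.random.default_rng(0)
      train = np.vstack([rng.normal(0, 1, (300, 4)), rng.normal(5, 1, (300, 4))])
      test = np.vstack([rng.normal(0, 1, (10, 4)), rng.normal(10, 1, (5, 4))])  # last 5 anomalous

      # Stage 1: cluster the training data
      kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(train)

      # Stage 2: one one-class SVM per cluster
      svms = {c: OneClassSVM(kernel="rbf", nu=0.05, gamma="scale")
                   .fit(train[kmeans.labels_ == c])
              for c in range(kmeans.n_clusters)}

      clusters = kmeans.predict(test)
      pred = np.array([svms[c].predict(x.reshape(1, -1))[0] for c, x in zip(clusters, test)])
      print("anomalies flagged at test indices:", np.where(pred == -1)[0])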

  17. Sequential detection of influenza epidemics by the Kolmogorov-Smirnov test

    PubMed Central

    2012-01-01

    Background: Influenza is a well known and common human respiratory infection, causing significant morbidity and mortality every year. Despite influenza variability, fast and reliable outbreak detection is required for health resource planning. Clinical health records, as published by the Diagnosticat database in Catalonia, host useful data for probabilistic detection of influenza outbreaks. Methods: This paper proposes a statistical method to detect influenza epidemic activity. Non-epidemic incidence rates are modeled against the exponential distribution, and the maximum likelihood estimate for the decaying factor λ is calculated. The sequential detection algorithm updates the parameter as new data become available. Binary epidemic detection of weekly incidence rates is assessed by the Kolmogorov-Smirnov test on the absolute difference between the empirical and the cumulative density function of the estimated exponential distribution, with significance level 0 ≤ α ≤ 1. Results: The main advantage with respect to other approaches is the adoption of a statistically meaningful test, which provides an indicator of epidemic activity with an associated probability. The detection algorithm was initiated with parameter λ0 = 3.8617, estimated from the training sequence (corresponding to non-epidemic incidence rates of the 2008-2009 influenza season), and sequentially updated. The Kolmogorov-Smirnov test detected the following weeks as epidemic for each influenza season: weeks 50−10 (2008-2009 season), weeks 38−50 (2009-2010 season), weeks 50−9 (2010-2011 season) and weeks 3 to 12 of the current 2011-2012 season. Conclusions: Real medical data was used to assess the validity of the approach, as well as to construct a realistic statistical model of weekly influenza incidence rates in non-epidemic periods. For the tested data, the results confirmed the ability of the algorithm to detect the start and the end of epidemic periods. In general, the proposed test could be applied to other data
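
    A minimal sketch of the detection rule described above, under the stated exponential model: fit the rate parameter to non-epidemic weekly incidence rates by maximum likelihood, then apply a one-sample Kolmogorov-Smirnov test to new observations. The incidence numbers below are invented for illustration and are not the Diagnosticat data.

      import numpy as np
      from scipy import stats

      non_epidemic = np.array([0.21, 0.35, 0.18, 0.27, 0.40, 0.22, 0.31])  # training weeks (invented)
      lam = 1.0 / non_epidemic.mean()          # maximum likelihood estimate of the rate

      def is_epidemic(week_rates, alpha=0.05):
          """KS test of observed rates against Exp(lam); rejection suggests epidemic activity."""
          _, p_value = stats.kstest(week_rates, "expon", args=(0.0, 1.0 / lam))
          return p_value < alpha

      print(is_epidemic(np.array([0.25, 0.30, 0.19])))   # consistent with background
      print(is_epidemic(np.array([2.6, 3.1, 2.9])))      # clearly elevated incidence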

  18. Application of Artificial Bee Colony algorithm in TEC seismo-ionospheric anomalies detection

    NASA Astrophysics Data System (ADS)

    Akhoondzadeh, M.

    2015-09-01

    In this study, the efficiency of the Artificial Bee Colony (ABC) algorithm is investigated for detecting TEC (Total Electron Content) seismo-ionospheric anomalies around the time of several strong earthquakes, including Chile (27 February 2010; 01 April 2014), Varzeghan (11 August 2012), Saravan (16 April 2013) and Papua New Guinea (29 March 2015). In comparison with other anomaly detection algorithms, ABC has a number of advantages, which can be enumerated as (1) detection of discordant patterns in large non-linear data sets during a short time, (2) simplicity, (3) fewer control parameters and (4) efficiency in solving multimodal and multidimensional optimization problems. The results of this study also support the TEC time series as a robust earthquake precursor.

  19. A new approach for structural health monitoring by applying anomaly detection on strain sensor data

    NASA Astrophysics Data System (ADS)

    Trichias, Konstantinos; Pijpers, Richard; Meeuwissen, Erik

    2014-03-01

    Structural Health Monitoring (SHM) systems help to monitor critical infrastructures (bridges, tunnels, etc.) remotely and provide up-to-date information about their physical condition. In addition, they help to predict the structure's life and required maintenance in a cost-efficient way. Typically, inspection data gives insight into the structural health. The global structural behavior, and predominantly the structural loading, is generally measured with vibration and strain sensors. Acoustic emission sensors are increasingly used for measuring global crack activity near critical locations. In this paper, we present a procedure for local structural health monitoring by applying Anomaly Detection (AD) to strain sensor data for sensors placed in the expected crack path. Sensor data are analyzed by automatic anomaly detection in order to find crack activity at an early stage. This approach targets the monitoring of critical structural locations, such as welds, near which strain sensors can be applied during construction, and/or locations with limited inspection possibilities during structural operation. We investigate several anomaly detection techniques to detect changes in statistical properties that indicate structural degradation. The most effective one is a novel polynomial fitting technique, which tracks slow changes in sensor data. Our approach has been tested on a representative test structure (bridge deck) in a lab environment, under constant and variable amplitude fatigue loading. In both cases, the evolving cracks at the monitored locations were successfully detected, autonomously, by our AD monitoring tool.
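
    The polynomial-fitting idea mentioned above can be illustrated with a few lines of Python: fit a low-order polynomial to a sliding window of strain measurements and flag windows whose fitted trend departs from the baseline. The strain trace, window length, and slope limit are invented for the example and do not reproduce the authors' tuned technique.

      import numpy as np

      rng = np.random.default_rng(2)
      t = np.arange(1000)
      strain = 100 + 0.001 * t + rng.normal(0, 0.5, t.size)
      strain[700:] += 0.02 * (t[700:] - 700)        # slow drift, e.g. from crack growth

      window, order, slope_limit = 100, 1, 0.01     # illustrative settings

      for start in range(0, t.size - window, window):
          seg_t = t[start:start + window]
          seg_y = strain[start:start + window]
          coeffs = np.polyfit(seg_t, seg_y, order)  # coeffs[0] is the fitted slope
          if abs(coeffs[0]) > slope_limit:
              print(f"anomalous trend in window starting at t={start}, slope={coeffs[0]:.4f}")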

  20. Sequential structural damage diagnosis algorithm using a change point detection method

    NASA Astrophysics Data System (ADS)

    Noh, H.; Rajagopal, R.; Kiremidjian, A. S.

    2013-11-01

    This paper introduces a damage diagnosis algorithm for civil structures that uses a sequential change point detection method. The general change point detection method uses the known pre- and post-damage feature distributions to perform a sequential hypothesis test. In practice, however, the post-damage distribution is unlikely to be known a priori, unless we are looking for a known specific type of damage. Therefore, we introduce an additional algorithm that estimates and updates this distribution as data are collected using the maximum likelihood and the Bayesian methods. We also applied an approximate method to reduce the computation load and memory requirement associated with the estimation. The algorithm is validated using a set of experimental data collected from a four-story steel special moment-resisting frame and multiple sets of simulated data. Various features of different dimensions have been explored, and the algorithm was able to identify damage, particularly when it uses multidimensional damage sensitive features and lower false alarm rates, with a known post-damage feature distribution. For unknown feature distribution cases, the post-damage distribution was consistently estimated and the detection delays were only a few time steps longer than the delays from the general method that assumes we know the post-damage feature distribution. We confirmed that the Bayesian method is particularly efficient in declaring damage with minimal memory requirement, but the maximum likelihood method provides an insightful heuristic approach.

  1. Detecting errors and anomalies in computerized materials control and accountability databases

    SciTech Connect

    Whiteson, R.; Hench, K.; Yarbro, T.; Baumgart, C.

    1998-12-31

    The Automated MC and A Database Assessment project is aimed at improving anomaly and error detection in materials control and accountability (MC and A) databases and increasing confidence in the data that they contain. Anomalous data resulting in poor categorization of nuclear material inventories greatly reduces the value of the database information to users. Therefore it is essential that MC and A data be assessed periodically for anomalies or errors. Anomaly detection can identify errors in databases and thus provide assurance of the integrity of data. An expert system has been developed at Los Alamos National Laboratory that examines these large databases for anomalous or erroneous data. For several years, MC and A subject matter experts at Los Alamos have been using this automated system to examine the large amounts of accountability data that the Los Alamos Plutonium Facility generates. These data are collected and managed by the Material Accountability and Safeguards System, a near-real-time computerized nuclear material accountability and safeguards system. This year they have expanded the user base, customizing the anomaly detector for the varying requirements of different groups of users. This paper describes the progress in customizing the expert systems to the needs of the users of the data and reports on their results.

  2. Millimeter Wave Detection of Localized Anomalies in the Space Shuttle External Fuel Tank Insulating Foam

    NASA Technical Reports Server (NTRS)

    Kharkovsky, S.; Case, J. T.; Abou-Khousa, M. A.; Zoughi, R.; Hepburn, F.

    2006-01-01

    The Space Shuttle Columbia's catastrophic accident emphasizes the growing need for developing and applying effective, robust and life-cycle oriented nondestructive testing (NDT) methods for inspecting the shuttle external fuel tank spray-on foam insulation (SOFI). Millimeter wave NDT techniques were one of the methods chosen for evaluating their potential for inspecting these structures. Several panels with embedded anomalies (mainly voids) were produced and tested for this purpose. Near-field and far-field millimeter wave NDT methods were used for producing images of the anomalies in these panels. This paper presents the results of an investigation for the purpose of detecting localized anomalies in several SOFI panels. To this end, reflectometers at a relatively wide range of frequencies (Ka-band (26.5 - 40 GHz) to W-band (75 - 110 GHz)) and utilizing different types of radiators were employed. The resulting raw images revealed a significant amount of information about the interior of these panels. However, using simple image processing techniques the results were improved, in particular as it relates to detecting the smaller anomalies. This paper presents the results of this investigation and a discussion of these results.

  3. Conformal prediction for anomaly detection and collision alert in space surveillance

    NASA Astrophysics Data System (ADS)

    Chen, Huimin; Chen, Genshe; Blasch, Erik; Pham, Khanh

    2013-05-01

    Anomaly detection has been considered as an important technique for detecting critical events in a wide range of data rich applications where a majority of the data is inconsequential and/or uninteresting. We study the detection of anomalous behaviors among space objects using the theory of conformal prediction for distribution-independent on-line learning to provide collision alerts with a desirable confidence level. We exploit the fact that conformal predictors provide valid forecasted sets at specified confidence levels under the relatively weak assumption that the normal training data, together with the normal testing data, are generated from the same distribution. If the actual observation is not included in the conformal prediction set, it is classified as anomalous at the corresponding significance level. Interpreting the significance level as an upper bound of the probability that a normal observation is mistakenly classified as anomalous, we can conveniently adjust the sensitivity to anomalies while controlling the false alarm rate without having to find the application specific threshold. The proposed conformal prediction method was evaluated for a space surveillance application using the open source North American Aerospace Defense Command (NORAD) catalog data. The validity of the prediction sets is justified by the empirical error rate that matches the significance level. In addition, experiments with simulated anomalous data indicate that anomaly detection sensitivity with conformal prediction is superior to that of the existing methods in declaring potential collision events.
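
    A minimal sketch of conformal anomaly detection in the spirit of the abstract above: use a nonconformity score (here, distance to the k-th nearest training example), convert it to a conformal p-value against the training scores, and flag observations whose p-value falls below the chosen significance level. The two-dimensional data and the choice of score are assumptions for the example, not the paper's space-surveillance features.

      import numpy as np

      rng = np.random.default_rng(3)
      train = rng.normal(0, 1, (500, 2))      # "normal" behaviour observations

      def knn_score(x, data, k=5):
          """Nonconformity score: distance to the k-th nearest neighbour in data."""
          return np.sort(np.linalg.norm(data - x, axis=1))[k]

      # Leave-one-out calibration scores for the training set
      train_scores = np.array([knn_score(x, np.delete(train, i, axis=0))
                               for i, x in enumerate(train)])

      def conformal_p_value(x):
          s = knn_score(x, train)
          return (np.sum(train_scores >= s) + 1) / (len(train_scores) + 1)

      significance = 0.05                     # upper bound on the false-alarm rate
      for obs in (np.array([0.2, -0.1]), np.array([6.0, 6.0])):
          p = conformal_p_value(obs)
          print(obs, "anomalous" if p <= significance else "normal", f"(p={p:.3f})")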

  4. [Multi-DSP parallel processing technique of hyperspectral RX anomaly detection].

    PubMed

    Guo, Wen-Ji; Zeng, Xiao-Ru; Zhao, Bao-Wei; Ming, Xing; Zhang, Gui-Feng; Lü, Qun-Bo

    2014-05-01

    To satisfy the requirements of high speed, real-time operation and mass data storage for RX anomaly detection of hyperspectral image data, this paper proposes a multi-DSP parallel processing system for hyperspectral images based on the CPCI Express standard bus architecture. The hardware topology of the system combines the tight coupling of four DSPs sharing a data bus and memory unit with the interconnection of Link ports. On this hardware platform, by assigning a parallel processing task to each DSP in accordance with the spectral RX anomaly detection algorithm and the 3D structure of the spectral image data, a 4-DSP parallel processing technique is proposed which computes the mean and covariance matrix of the whole image by spatially partitioning it. The experimental results show that, for equivalent detection performance, the proposed 4-DSP parallel implementation of the RX anomaly detection algorithm is four times faster than processing on a single DSP. This overcomes the constraint that the DSP's internal storage capacity places on processing such large image data, while meeting the real-time processing demands of the spectral data. PMID:25095443

  5. A sequential enzymatic microreactor system for ethanol detection of gasohol mixtures.

    PubMed

    Alhadeff, Eliana M; Salgado, Andréa M; Pereira, Nei; Valdman, Belkis

    2005-01-01

    A sequential enzymatic double microreactor system with dilution line was developed for quantifying ethanol from gasohol mixtures, using a colorimetric detection method, as a new proposal to the single micro reactor system used in previous work. Alcohol oxidase (AOD) and horseradish peroxidase (HRP) immobilized on glass beads, one in each microreactor, were used with phenol and 4-aminophenazone and the red-colored product was detected with a spectrophotometer at 555 nm. Good results were obtained with the immobilization technique used for both AOD and HRP enzymes, with best retention efficiencies of 95.3 +/- 2.3% and 63.2 +/- 7.0%, respectively. The two microreactors were used to analyze extracted ethanol from gasohol blends in the range 1-30 % v/v (10.0-238.9 g ethanol/L), with and without an on-line dilution sampling line. A calibration curve was obtained in the range 0.0034-0.087 g ethanol/L working with the on-line dilution integrated to the biosensor FIA system proposed. The diluted sample concentrations were also determined by gas chromatography (GC) and high-pressure liquid chromatography (HPLC) methods and the results compared with the proposed sequential system measurements. The effect of the number of analysis performed with the same system was also investigated. PMID:15917613

  6. Complexity-Measure-Based Sequential Hypothesis Testing for Real-Time Detection of Lethal Cardiac Arrhythmias

    NASA Astrophysics Data System (ADS)

    Chen, Szi-Wen

    2006-12-01

    A novel approach that employs a complexity-based sequential hypothesis testing (SHT) technique for real-time detection of ventricular fibrillation (VF) and ventricular tachycardia (VT) is presented. A dataset consisting of a number of VF and VT electrocardiogram (ECG) recordings drawn from the MIT-BIH database was adopted for such an analysis. It was split into two smaller datasets for algorithm training and testing, respectively. Each ECG recording was measured in a 10-second interval. For each recording, a number of overlapping windowed ECG data segments were obtained by shifting a 5-second window by a step of 1 second. During the windowing process, the complexity measure (CM) value was calculated for each windowed segment and the task of pattern recognition was then sequentially performed by the SHT procedure. A preliminary test conducted using the database produced an optimal overall predictive accuracy (the exact figure is given as an inline equation in the full text). The algorithm was also implemented on a commercial embedded DSP controller, permitting a hardware realization of real-time ventricular arrhythmia detection.

  7. Shape anomaly detection under strong measurement noise: An analytical approach to adaptive thresholding

    NASA Astrophysics Data System (ADS)

    Krasichkov, Alexander S.; Grigoriev, Eugene B.; Bogachev, Mikhail I.; Nifontov, Eugene M.

    2015-10-01

    We suggest an analytical approach to the adaptive thresholding in a shape anomaly detection problem. We find an analytical expression for the distribution of the cosine similarity score between a reference shape and an observational shape hindered by strong measurement noise that depends solely on the noise level and is independent of the particular shape analyzed. The analytical treatment is also confirmed by computer simulations and shows nearly perfect agreement. Using this analytical solution, we suggest an improved shape anomaly detection approach based on adaptive thresholding. We validate the noise robustness of our approach using typical shapes of normal and pathological electrocardiogram cycles hindered by additive white noise. We show explicitly that under high noise levels our approach considerably outperforms the conventional tactic that does not take into account variations in the noise level.
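
    The similarity score at the heart of the approach above is the cosine similarity between a reference shape and an observed, noise-corrupted shape. The short Python sketch below computes it for an invented cycle shape; the fixed threshold stands in for the paper's noise-adaptive threshold and is purely illustrative.

      import numpy as np

      t = np.linspace(0, 1, 200)
      reference = np.exp(-((t - 0.5) ** 2) / 0.005)               # idealized cycle shape
      observed = reference + 0.2 * np.random.default_rng(4).normal(size=t.size)

      def cosine_similarity(a, b):
          return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

      score = cosine_similarity(reference, observed)
      # The paper adapts this threshold to the estimated noise level; 0.7 is a stand-in.
      print(score, "anomalous shape" if score < 0.7 else "matches reference")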

  8. Capacitance probe for detection of anomalies in non-metallic plastic pipe

    DOEpatents

    Mathur, Mahendra P.; Spenik, James L.; Condon, Christopher M.; Anderson, Rodney; Driscoll, Daniel J.; Fincham, Jr., William L.; Monazam, Esmail R.

    2010-11-23

    The disclosure relates to analysis of materials using a capacitive sensor to detect anomalies through comparison of measured capacitances. The capacitive sensor is used in conjunction with a capacitance measurement device, a location device, and a processor in order to generate a capacitance versus location output which may be inspected for the detection and localization of anomalies within the material under test. The components may be carried as payload on an inspection vehicle which may traverse through a pipe interior, allowing evaluation of nonmetallic or plastic pipes when the piping exterior is not accessible. In an embodiment, supporting components are solid-state devices powered by a low voltage on-board power supply, providing for use in environments where voltage levels may be restricted.

  9. GraphPrints: Towards a Graph Analytic Method for Network Anomaly Detection

    SciTech Connect

    Harshaw, Chris R; Bridges, Robert A; Iannacone, Michael D; Reed, Joel W; Goodall, John R

    2016-01-01

    This paper introduces a novel graph-analytic approach for detecting anomalies in network flow data called GraphPrints. Building on foundational network-mining techniques, our method represents time slices of traffic as a graph, then counts graphlets, small induced subgraphs that describe local topology. By performing outlier detection on the sequence of graphlet counts, anomalous intervals of traffic are identified, and furthermore, individual IPs experiencing abnormal behavior are singled out. Initial testing of GraphPrints is performed on real network data with an implanted anomaly. Evaluation shows false positive rates bounded by 2.84% at the time-interval level, and 0.05% at the IP level, with 100% true positive rates at both.
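
    As a toy illustration of a graphlet-count-then-outlier-detect pipeline (not the GraphPrints implementation itself), one could compute a few small-subgraph statistics per time slice with networkx and score each slice with a robust z-score; the chosen statistics, the MAD-based scoring, and the assumption of undirected graphs are all simplifications.

        import networkx as nx
        import numpy as np

        def graphlet_features(G):
            # crude per-slice features: edge, triangle, and wedge (open 2-path) counts
            triangles = sum(nx.triangles(G).values()) // 3
            wedges = sum(d * (d - 1) // 2 for _, d in G.degree()) - 3 * triangles
            return np.array([G.number_of_edges(), triangles, wedges], dtype=float)

        def slice_outlier_scores(graphs):
            # robust z-scores of each time slice's feature vector over the whole sequence
            F = np.vstack([graphlet_features(G) for G in graphs])
            med = np.median(F, axis=0)
            mad = np.median(np.abs(F - med), axis=0) + 1e-12
            return np.abs(F - med) / mad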

  10. Parallel implementation of RX anomaly detection on multi-core processors: impact of data partitioning strategies

    NASA Astrophysics Data System (ADS)

    Molero, Jose M.; Garzón, Ester M.; García, Inmaculada; Plaza, Antonio

    2011-11-01

    Anomaly detection is an important task for remotely sensed hyperspectral data exploitation. One of the most widely used and successful algorithms for anomaly detection in hyperspectral images is the Reed-Xiaoli (RX) algorithm. Despite its wide acceptance and high computational complexity when applied to real hyperspectral scenes, few documented parallel implementations of this algorithm exist, in particular for multi-core processors. The advantage of multi-core platforms over other specialized parallel architectures is that they are a low-power, inexpensive, widely available and well-known technology. A critical issue in the parallel implementation of RX is the sample covariance matrix calculation, which can be approached in global or local fashion. This aspect is crucial for the RX implementation since the consideration of a local or global strategy for the computation of the sample covariance matrix is expected to affect both the scalability of the parallel solution and the anomaly detection results. In this paper, we develop new parallel implementations of the RX in multi-core processors and specifically investigate the impact of different data partitioning strategies when parallelizing its computations. For this purpose, we consider both global and local data partitioning strategies in the spatial domain of the scene, and further analyze their scalability in different multi-core platforms. The numerical effectiveness of the considered solutions is evaluated using receiver operating characteristics (ROC) curves, analyzing their capacity to detect thermal hot spots (anomalies) in hyperspectral data collected by NASA's Airborne Visible Infra-Red Imaging Spectrometer system over the World Trade Center in New York, five days after the terrorist attacks of September 11th, 2001.
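
    For reference, the RX statistic is the Mahalanobis distance of each pixel spectrum from the background mean; a serial numpy sketch of the global-covariance variant discussed above is given below (the local variant would recompute the mean and covariance inside a sliding window, and a data-partitioned parallel version would split pixel rows among cores). The array layout and the use of a pseudo-inverse are assumptions.

        import numpy as np

        def rx_global(cube):
            # global RX: Mahalanobis distance of every pixel from the scene-wide background
            rows, cols, bands = cube.shape
            X = cube.reshape(-1, bands).astype(float)
            mu = X.mean(axis=0)
            cov_inv = np.linalg.pinv(np.cov(X, rowvar=False))
            diff = X - mu
            scores = np.einsum('ij,jk,ik->i', diff, cov_inv, diff)
            return scores.reshape(rows, cols)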

  11. A new morphological anomaly detection algorithm for hyperspectral images and its GPU implementation

    NASA Astrophysics Data System (ADS)

    Paz, Abel; Plaza, Antonio

    2011-10-01

    Anomaly detection is considered a very important task for hyperspectral data exploitation. It is now routinely applied in many application domains, including defence and intelligence, public safety, precision agriculture, geology, or forestry. Many of these applications require timely responses for swift decisions which depend upon high computing performance of algorithm analysis. However, with the recent explosion in the amount and dimensionality of hyperspectral imagery, this problem calls for the incorporation of parallel computing techniques. In the past, clusters of computers have offered an attractive solution for fast anomaly detection in hyperspectral data sets already transmitted to Earth. However, these systems are expensive and difficult to adapt to on-board data processing scenarios, in which low-weight and low-power integrated components are essential to reduce mission payload and obtain analysis results in (near) real-time, i.e., at the same time as the data is collected by the sensor. An exciting new development in the field of commodity computing is the emergence of commodity graphics processing units (GPUs), which can now bridge the gap towards on-board processing of remotely sensed hyperspectral data. In this paper, we develop a new morphological algorithm for anomaly detection in hyperspectral images along with an efficient GPU implementation of the algorithm. The algorithm is implemented on latest-generation GPU architectures, and evaluated with regard to other anomaly detection algorithms using hyperspectral data collected by NASA's Airborne Visible Infra-Red Imaging Spectrometer (AVIRIS) over the World Trade Center (WTC) in New York, five days after the terrorist attacks that collapsed the two main towers in the WTC complex. The proposed GPU implementation achieves real-time performance in the considered case study.

  12. Can we detect regional methane anomalies? A comparison between three observing systems

    NASA Astrophysics Data System (ADS)

    Cressot, Cindy; Pison, Isabelle; Rayner, Peter J.; Bousquet, Philippe; Fortems-Cheiney, Audrey; Chevallier, Frédéric

    2016-07-01

    A Bayesian inversion system is used to evaluate the capability of the current global surface network and of the space-borne GOSAT/TANSO-FTS and IASI instruments to quantify surface flux anomalies of methane at various spatial (global, semi-hemispheric and regional) and time (seasonal, yearly, 3-yearly) scales. The evaluation is based on a signal-to-noise ratio analysis, the signal being the methane fluxes inferred from the surface-based inversion from 2000 to 2011 and the noise (i.e., precision) of each of the three observing systems being computed from the Bayesian equation. At the global and semi-hemispheric scales, all observing systems detect flux anomalies at most of the tested timescales. At the regional scale, some seasonal flux anomalies are detected by the three observing systems, but year-to-year anomalies and longer-term trends are only poorly detected. Moreover, reliably detected regions depend on the reference surface-based inversion used as the signal. Indeed, tropical flux inter-annual variability, for instance, can be attributed mostly to Africa in the reference inversion or spread between tropical regions in Africa and America. Our results show that inter-annual analyses of methane emissions inferred by atmospheric inversions should always include an uncertainty assessment and that the attribution of current trends in atmospheric methane to particular regions needs increased effort, for instance, gathering more observations (in the future) and improving transport models. At all scales, GOSAT generally shows the best performance of the three observing systems.

  13. A Comparative Study of Unsupervised Anomaly Detection Techniques Using Honeypot Data

    NASA Astrophysics Data System (ADS)

    Song, Jungsuk; Takakura, Hiroki; Okabe, Yasuo; Inoue, Daisuke; Eto, Masashi; Nakao, Koji

    Intrusion Detection Systems (IDS) have received considerable attention among network security researchers as one of the most promising countermeasures to defend our crucial computer systems or networks against attackers on the Internet. Over the past few years, many machine learning techniques have been applied to IDSs so as to improve their performance and to construct them with low cost and effort. Especially, unsupervised anomaly detection techniques have a significant advantage in their capability to identify unforeseen attacks, i.e., 0-day attacks, and to build intrusion detection models without any labeled (i.e., pre-classified) training data in an automated manner. In this paper, we conduct a set of experiments to evaluate and analyze performance of the major unsupervised anomaly detection techniques using real traffic data which are obtained at our honeypots deployed inside and outside of the campus network of Kyoto University, and using various evaluation criteria, i.e., performance evaluation by similarity measurements and the size of training data, overall performance, detection ability for unknown attacks, and time complexity. Our experimental results give some practical and useful guidelines to IDS researchers and operators, so that they can acquire insight to apply these techniques to the area of intrusion detection, and devise more effective intrusion detection models.

  14. Developing a new, passive diffusion sampling array to detect helium anomalies associated with volcanic unrest

    USGS Publications Warehouse

    Dame, Brittany E; Solomon, D Kip; Evans, William C.; Ingebritsen, Steven E.

    2015-01-01

    Helium (He) concentration and 3He/4He anomalies in soil gas and spring water are potentially powerful tools for investigating hydrothermal circulation associated with volcanism and could perhaps serve as part of a hazards warning system. However, in operational practice, He and other gases are often sampled only after volcanic unrest is detected by other means. A new passive diffusion sampler suite, intended to be collected after the onset of unrest, has been developed and tested as a relatively low-cost method of determining He-isotope composition pre- and post-unrest. The samplers, each with a distinct equilibration time, passively record He concentration and isotope ratio in springs and soil gas. Once collected and analyzed, the He concentrations in the samplers are used to deconvolve the time history of the He concentration and the 3He/4He ratio at the collection site. The current suite consisting of three samplers is sufficient to deconvolve both the magnitude and the timing of a step change in in situ concentration if the suite is collected within 100 h of the change. The effects of temperature and prolonged deployment on the suite's capability of recording He anomalies have also been evaluated. The suite has captured a significant 3He/4He soil gas anomaly at Horseshoe Lake near Mammoth Lakes, California. The passive diffusion sampler suite appears to be an accurate and affordable alternative for determining He anomalies associated with volcanic unrest.

  15. Clusters versus GPUs for Parallel Target and Anomaly Detection in Hyperspectral Images

    NASA Astrophysics Data System (ADS)

    Paz, Abel; Plaza, Antonio

    2010-12-01

    Remotely sensed hyperspectral sensors provide image data containing rich information in both the spatial and the spectral domain, and this information can be used to address detection tasks in many applications. In many surveillance applications, the size of the objects (targets) searched for constitutes a very small fraction of the total search area and the spectral signatures associated with the targets are generally different from those of the background, hence the targets can be seen as anomalies. In hyperspectral imaging, many algorithms have been proposed for automatic target and anomaly detection. Given the dimensionality of hyperspectral scenes, these techniques can be time-consuming and difficult to apply in applications requiring real-time performance. In this paper, we develop several new parallel implementations of automatic target and anomaly detection algorithms. The proposed parallel algorithms are quantitatively evaluated using hyperspectral data collected by NASA's Airborne Visible Infra-Red Imaging Spectrometer (AVIRIS) system over the World Trade Center (WTC) in New York, five days after the terrorist attacks that collapsed the two main towers in the WTC complex.

  16. A Model-Based Anomaly Detection Approach for Analyzing Streaming Aircraft Engine Measurement Data

    NASA Technical Reports Server (NTRS)

    Simon, Donald L.; Rinehart, Aidan Walker

    2015-01-01

    This paper presents a model-based anomaly detection architecture designed for analyzing streaming transient aircraft engine measurement data. The technique calculates and monitors residuals between sensed engine outputs and model predicted outputs for anomaly detection purposes. Pivotal to the performance of this technique is the ability to construct a model that accurately reflects the nominal operating performance of the engine. The dynamic model applied in the architecture is a piecewise linear design comprising steady-state trim points and dynamic state space matrices. A simple curve-fitting technique for updating the model trim point information based on steady-state information extracted from available nominal engine measurement data is presented. Results from the application of the model-based approach for processing actual engine test data are shown. These include both nominal fault-free test case data and seeded fault test case data. The results indicate that the updates applied to improve the model trim point information also improve anomaly detection performance. Recommendations for follow-on enhancements to the technique are also presented and discussed.

  17. A Model-Based Anomaly Detection Approach for Analyzing Streaming Aircraft Engine Measurement Data

    NASA Technical Reports Server (NTRS)

    Simon, Donald L.; Rinehart, Aidan W.

    2014-01-01

    This paper presents a model-based anomaly detection architecture designed for analyzing streaming transient aircraft engine measurement data. The technique calculates and monitors residuals between sensed engine outputs and model predicted outputs for anomaly detection purposes. Pivotal to the performance of this technique is the ability to construct a model that accurately reflects the nominal operating performance of the engine. The dynamic model applied in the architecture is a piecewise linear design comprising steady-state trim points and dynamic state space matrices. A simple curve-fitting technique for updating the model trim point information based on steady-state information extracted from available nominal engine measurement data is presented. Results from the application of the model-based approach for processing actual engine test data are shown. These include both nominal fault-free test case data and seeded fault test case data. The results indicate that the updates applied to improve the model trim point information also improve anomaly detection performance. Recommendations for follow-on enhancements to the technique are also presented and discussed.
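
    A minimal sketch of the residual-monitoring idea described in these two abstracts is shown below: the nominal output at a given operating point is read off a piecewise-linear table of steady-state trim points, and a sample is flagged when its residual exceeds a multiple of the nominal residual spread. The interpolation variable, threshold multiple, and function names are assumptions rather than the NASA architecture.

        import numpy as np

        def nominal_prediction(operating_point, trim_points, trim_values):
            # piecewise-linear interpolation between steady-state trim points
            return np.interp(operating_point, trim_points, trim_values)

        def residual_flags(sensed, predicted, sigma_nominal, k=3.0):
            # flag samples whose residual exceeds k times the nominal residual spread
            residual = np.asarray(sensed, dtype=float) - np.asarray(predicted, dtype=float)
            return np.abs(residual) > k * sigma_nominal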

  18. Detection and Origin of Hydrocarbon Seepage Anomalies in the Barents Sea

    NASA Astrophysics Data System (ADS)

    Polteau, Stephane; Planke, Sverre; Stolze, Lina; Kjølhamar, Bent E.; Myklebust, Reidun

    2016-04-01

    We have collected more than 450 gravity cores in the Barents Sea to detect hydrocarbon seepage anomalies and for seismic-stratigraphic tie. The cores are from the Hoop Area (125 samples) and from the Barents Sea SE (293 samples). In addition, we have collected cores near seven exploration wells. The samples were analyzed using three different analytical methods: (1) the standard organic geochemical analyses of Applied Petroleum Technologies (APT), (2) the Amplified Geochemical Imaging (AGI) method, and (3) the Microbial Prospecting for Oil and Gas (MPOG) method. These analytical approaches can detect trace amounts of thermogenic hydrocarbons in the sediment samples, and may provide additional information about the fluid phases and the depositional environment, maturation, and age of the source rocks. However, hydrocarbon anomalies in seabed sediments may also be related to shallow sources, such as biogenic gas or reworked source rocks in the sediments. To better understand the origin of the hydrocarbon anomalies in the Barents Sea we have studied 35 samples collected approximately 200 m away from seven exploration wells. The wells included three boreholes associated with oil discoveries, two with gas discoveries, one dry well with gas shows, and one dry well. In general, the results of this case study reveal that the oil wells have an oil signature, gas wells show a gas signature, and dry wells have a background signature. However, differences in results from the three methods may occur and have largely been explained in terms of analytical measurement ranges, method sensitivities, and bio-geochemical processes in the seabed sediments. The standard geochemical method applied by APT relies on measuring the abundance of compounds from C1 to C5 in the headspace gas and from C11 to C36 in the sediment extracts. The anomalies detected in the sediment samples from this study were in the C16 to C30 range. Since the organic matter yields were mostly very low, the

  19. Gaussian mixture model based approach to anomaly detection in multi/hyperspectral images

    NASA Astrophysics Data System (ADS)

    Acito, N.; Diani, M.; Corsini, G.

    2005-10-01

    Anomaly detectors reveal the presence of objects/materials in a multi/hyperspectral image by simply searching for those pixels whose spectrum differs from the background one (anomalies). This procedure can be applied directly to the radiance at the sensor level and has the great advantage of avoiding the difficult step of atmospheric correction. The most popular anomaly detector is the RX algorithm derived by Yu and Reed. It is based on the assumption that the pixels, in a region around the one under test, follow a single multivariate Gaussian distribution. Unfortunately, such a hypothesis is generally not met in actual scenarios and a large number of false alarms is usually experienced when the RX algorithm is applied in practice. In this paper, a more general approach to anomaly detection is considered based on the assumption that the background contains different terrain types (clusters) each of them Gaussian distributed. In this approach the parameters of each cluster are estimated and used in the detection process. Two detectors are considered: the SEM-RX and the K-means RX. Both algorithms follow two steps: 1) the parameters of the background clusters are estimated; then 2) a detection rule based on the RX test is applied. The SEM-RX stems from the GMM and employs the SEM algorithm to estimate the clusters' parameters, whereas the K-means RX resorts to the well-known K-means algorithm to obtain the background clusters. An automatic procedure is defined, for both the detectors, to select the number of clusters and a novel criterion is proposed to set the test threshold. The performances of the two detectors are also evaluated on an experimental data set and compared to the ones of the RX algorithm. The comparative analysis is carried out in terms of experimental Receiver Operating Characteristics.
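
    A compact sketch of the K-means RX flavor described above might cluster the pixels with K-means and then compute an RX-style Mahalanobis score for each pixel against its own cluster's statistics; the cluster count and scoring choice are assumptions, and the SEM-RX variant would instead estimate Gaussian-mixture parameters with the SEM algorithm.

        import numpy as np
        from sklearn.cluster import KMeans

        def kmeans_rx(cube, n_clusters=3):
            # cluster the background, then score each pixel against its own cluster
            rows, cols, bands = cube.shape
            X = cube.reshape(-1, bands).astype(float)
            labels = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit_predict(X)
            scores = np.empty(len(X))
            for k in range(n_clusters):
                members = labels == k
                Xk = X[members]
                mu = Xk.mean(axis=0)
                cov_inv = np.linalg.pinv(np.cov(Xk, rowvar=False))
                diff = Xk - mu
                scores[members] = np.einsum('ij,jk,ik->i', diff, cov_inv, diff)
            return scores.reshape(rows, cols)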

  20. Small sample training and test selection method for optimized anomaly detection algorithms in hyperspectral imagery

    NASA Astrophysics Data System (ADS)

    Mindrup, Frank M.; Friend, Mark A.; Bauer, Kenneth W.

    2012-01-01

    There are numerous anomaly detection algorithms proposed for hyperspectral imagery. Robust parameter design (RPD) techniques provide an avenue to select robust settings capable of operating consistently across a large variety of image scenes. Many researchers in this area are faced with a paucity of data. Unfortunately, there are no data splitting methods for model validation of datasets with small sample sizes. Typically, training and test sets of hyperspectral images are chosen randomly. Previous research has developed a framework for optimizing anomaly detection in HSI by considering specific image characteristics as noise variables within the context of RPD; these characteristics include the Fisher score, the ratio of target pixels, and the number of clusters. We have developed a method for selecting hyperspectral image training and test subsets that yields consistent RPD results based on these noise features. These subsets are not necessarily orthogonal, but still provide improvements over random training and test subset assignments by maximizing the volume and average distance between image noise characteristics. The small sample training and test selection method is contrasted with randomly selected training sets as well as training sets chosen from the CADEX and DUPLEX algorithms for the well-known Reed-Xiaoli anomaly detector.

  1. Wavelet-RX anomaly detection for dual-band forward-looking infrared imagery.

    PubMed

    Mehmood, Asif; Nasrabadi, Nasser M

    2010-08-20

    This paper describes a new wavelet-based anomaly detection technique for a dual-band forward-looking infrared (FLIR) sensor consisting of coregistered longwave (LW) and midwave (MW) sensors. The proposed approach, called the wavelet-RX (Reed-Xiaoli) algorithm, consists of a combination of a two-dimensional (2D) wavelet transform and a well-known multivariate anomaly detector called the RX algorithm. In our wavelet-RX algorithm, a 2D wavelet transform is first applied to decompose the input image into uniform subbands. A subband-image cube is formed by concatenating together a number of significant subbands (high-energy subbands). The RX algorithm is then applied to the subband-image cube obtained from a wavelet decomposition of the LW or MW sensor data. In the case of the dual band, the RX algorithm is applied to a subband-image cube constructed by concatenating together the high-energy subbands of the LW and MW subband-image cubes. Experimental results are presented for the proposed wavelet-RX and the classical constant false alarm rate (CFAR) algorithm for detecting anomalies (targets) in a single broadband FLIR (LW or MW) or in a coregistered dual-band FLIR sensor. The results show that the proposed wavelet-RX algorithm outperforms the classical CFAR detector for both single-band and dual-band FLIR sensors. PMID:20733634
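
    A stripped-down sketch of the single-level, single-band case could use PyWavelets to form the subband-image cube and then apply an RX score to it (reusing the rx_global sketch given earlier in this list); the wavelet choice and the use of all three detail subbands, rather than an energy-based selection, are assumptions.

        import numpy as np
        import pywt

        def wavelet_rx(image, wavelet='db2'):
            # single-level 2-D DWT; the three detail subbands have identical shapes
            _, (cH, cV, cD) = pywt.dwt2(np.asarray(image, dtype=float), wavelet)
            subband_cube = np.stack([cH, cV, cD], axis=-1)
            return rx_global(subband_cube)  # rx_global as sketched for the RX entry above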

  2. Anomaly detection in hyperspectral imagery based on low-rank and sparse decomposition

    NASA Astrophysics Data System (ADS)

    Cui, Xiaoguang; Tian, Yuan; Weng, Lubin; Yang, Yiping

    2014-01-01

    This paper presents a novel low-rank and sparse decomposition (LSD) based model for anomaly detection in hyperspectral images. In our model, a local image region is represented as a low-rank matrix plus sparse noise in the spectral space, where the background can be explained by the low-rank matrix, and the anomalies are indicated by the sparse noise. The detection of anomalies in local image regions is formulated as a constrained LSD problem, which can be solved efficiently and robustly with a modified "Go Decomposition" (GoDec) method. To enhance the validity of this model, we adapt a "simple linear iterative clustering" (SLIC) superpixel algorithm to efficiently generate homogeneous local image regions, i.e., superpixels, in hyperspectral imagery, thus ensuring that the background in local image regions satisfies the condition of low rank. Experimental results on real hyperspectral data demonstrate that, compared with several known local detectors including the RX detector, the kernel RX detector, and the SVDD detector, the proposed model achieves better performance within satisfactory computation time.
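
    As a rough sketch of the low-rank-plus-sparse decomposition idea (a simplified GoDec-style alternation, not the authors' modified solver or the SLIC superpixel step), each local data matrix can be split by alternating a truncated-SVD low-rank update with a hard-thresholded sparse update; the rank, cardinality, and iteration count are assumptions.

        import numpy as np

        def lsd_godec_like(M, rank=2, card=200, iters=20):
            # alternate: L = rank-r approximation of M - S; S = largest-magnitude residual entries
            L = np.zeros_like(M, dtype=float)
            S = np.zeros_like(M, dtype=float)
            for _ in range(iters):
                U, s, Vt = np.linalg.svd(M - S, full_matrices=False)
                L = (U[:, :rank] * s[:rank]) @ Vt[:rank]
                R = M - L
                thresh = np.partition(np.abs(R).ravel(), -card)[-card]
                S = np.where(np.abs(R) >= thresh, R, 0.0)
            return L, S  # anomaly evidence lives in the sparse part S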

  3. On-road anomaly detection by multimodal sensor analysis and multimedia processing

    NASA Astrophysics Data System (ADS)

    Orhan, Fatih; Eren, P. E.

    2014-03-01

    The use of smartphones in Intelligent Transportation Systems is gaining popularity, yet many challenges exist in developing functional applications. Due to the dynamic nature of transportation, vehicular social applications face complexities such as developing robust sensor management, performing signal and image processing tasks, and sharing information among users. This study utilizes a multimodal sensor analysis framework which enables the analysis of sensors in a multimodal fashion. It also provides plugin-based analysis interfaces for developing sensor- and image-processing-based applications, and connects its users via a centralized application as well as to social networks to facilitate communication and socialization. Using this framework, an on-road anomaly detector is being developed and tested. The detector utilizes the sensors of a mobile device and is able to identify anomalies such as hard braking, pothole crossings, and speed bump crossings. Upon such detection, the video portion containing the anomaly is automatically extracted in order to enable further image processing analysis. The detection results are shared on a central portal application for online traffic condition monitoring.

  4. Small-scale anomaly detection in panoramic imaging using neural models of low-level vision

    NASA Astrophysics Data System (ADS)

    Casey, Matthew C.; Hickman, Duncan L.; Pavlou, Athanasios; Sadler, James R. E.

    2011-06-01

    Our understanding of sensory processing in animals has reached the stage where we can exploit neurobiological principles in commercial systems. In human vision, one brain structure that offers insight into how we might detect anomalies in real-time imaging is the superior colliculus (SC). The SC is a small structure that rapidly orients our eyes to a movement, sound or touch that it detects, even when the stimulus may be on a small-scale; think of a camouflaged movement or the rustle of leaves. This automatic orientation allows us to prioritize the use of our eyes to raise awareness of a potential threat, such as a predator approaching stealthily. In this paper we describe the application of a neural network model of the SC to the detection of anomalies in panoramic imaging. The neural approach consists of a mosaic of topographic maps that are each trained using competitive Hebbian learning to rapidly detect image features of a pre-defined shape and scale. What makes this approach interesting is the ability of the competition between neurons to automatically filter noise, yet with the capability of generalizing the desired shape and scale. We will present the results of this technique applied to the real-time detection of obscured targets in visible-band panoramic CCTV images. Using background subtraction to highlight potential movement, the technique is able to correctly identify targets which span as little as 3 pixels wide while filtering small-scale noise.

  5. Item Anomaly Detection Based on Dynamic Partition for Time Series in Recommender Systems

    PubMed Central

    Gao, Min; Tian, Renli; Wen, Junhao; Xiong, Qingyu; Ling, Bin; Yang, Linda

    2015-01-01

    In recent years, recommender systems have become an effective method to process information overload. However, recommendation technology still suffers from many problems. One of these problems is shilling attacks: attackers inject spam user profiles to disturb the list of recommended items. There are two characteristics of all types of shilling attacks: 1) Item abnormality: The rating of target items is always maximum or minimum; and 2) Attack promptness: It takes only a very short period of time to inject attack profiles. Some papers have proposed item anomaly detection methods based on these two characteristics, but their detection rate, false alarm rate, and universality need to be further improved. To solve these problems, this paper proposes an item anomaly detection method based on dynamic partitioning for time series. This method first dynamically partitions item-rating time series based on important points. Then, we use the chi-square distribution (χ2) to detect abnormal intervals. The experimental results on MovieLens 100K and 1M indicate that this approach has a high detection rate and a low false alarm rate and is stable across different attack models and filler sizes. PMID:26267477

  6. Item Anomaly Detection Based on Dynamic Partition for Time Series in Recommender Systems.

    PubMed

    Gao, Min; Tian, Renli; Wen, Junhao; Xiong, Qingyu; Ling, Bin; Yang, Linda

    2015-01-01

    In recent years, recommender systems have become an effective method to process information overload. However, recommendation technology still suffers from many problems. One of these problems is shilling attacks: attackers inject spam user profiles to disturb the list of recommended items. There are two characteristics of all types of shilling attacks: 1) Item abnormality: The rating of target items is always maximum or minimum; and 2) Attack promptness: It takes only a very short period of time to inject attack profiles. Some papers have proposed item anomaly detection methods based on these two characteristics, but their detection rate, false alarm rate, and universality need to be further improved. To solve these problems, this paper proposes an item anomaly detection method based on dynamic partitioning for time series. This method first dynamically partitions item-rating time series based on important points. Then, we use the chi-square distribution (χ2) to detect abnormal intervals. The experimental results on MovieLens 100K and 1M indicate that this approach has a high detection rate and a low false alarm rate and is stable across different attack models and filler sizes. PMID:26267477
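
    A heavily simplified sketch of the interval-testing step (not the paper's important-point partitioning) is given below: within each interval of an item's rating series, the count of extreme (minimum or maximum) ratings is compared to its expected count under the item's overall rate with a one-degree-of-freedom chi-square goodness-of-fit statistic. The partition boundaries are taken as given and the significance level is an assumption.

        import numpy as np
        from scipy.stats import chi2

        def abnormal_intervals(ratings, boundaries, alpha=0.01):
            # ratings: an item's ratings in time order; boundaries: indices delimiting intervals
            r = np.asarray(ratings, dtype=float)
            extreme = (r == r.min()) | (r == r.max())
            p = extreme.mean()  # overall rate of extreme ratings
            flags = []
            for start, end in zip(boundaries[:-1], boundaries[1:]):
                n = end - start
                obs = extreme[start:end].sum()
                exp_e, exp_n = max(n * p, 1e-9), max(n * (1 - p), 1e-9)
                stat = (obs - exp_e) ** 2 / exp_e + ((n - obs) - exp_n) ** 2 / exp_n
                flags.append(bool(stat > chi2.ppf(1 - alpha, df=1)))
            return flags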

  7. Data-Driven Anomaly Detection Performance for the Ares I-X Ground Diagnostic Prototype

    NASA Technical Reports Server (NTRS)

    Martin, Rodney A.; Schwabacher, Mark A.; Matthews, Bryan L.

    2010-01-01

    In this paper, we will assess the performance of a data-driven anomaly detection algorithm, the Inductive Monitoring System (IMS), which can be used to detect simulated Thrust Vector Control (TVC) system failures. However, the ability of IMS to detect these failures in a true operational setting may be related to the realistic nature of how they are simulated. As such, we will investigate both a low fidelity and high fidelity approach to simulating such failures, with the latter based upon the underlying physics. Furthermore, the ability of IMS to detect anomalies that were previously unknown and not previously simulated will be studied in earnest, as well as apparent deficiencies or misapplications that result from using the data-driven paradigm. Our conclusions indicate that robust detection performance of simulated failures using IMS is not appreciably affected by the use of a high fidelity simulation. However, we have found that the inclusion of a data-driven algorithm such as IMS into a suite of deployable health management technologies does add significant value.

  8. Radiation anomaly detection algorithms for field-acquired gamma energy spectra

    NASA Astrophysics Data System (ADS)

    Mukhopadhyay, Sanjoy; Maurer, Richard; Wolff, Ron; Guss, Paul; Mitchell, Stephen

    2015-08-01

    The Remote Sensing Laboratory (RSL) is developing a tactical, networked radiation detection system that will be agile, reconfigurable, and capable of rapid threat assessment with a high degree of fidelity and certainty. Our design is driven by the needs of users such as law enforcement personnel who must make decisions by evaluating threat signatures in urban settings. The most efficient tool available to identify the nature of the threat object is real-time gamma spectroscopic analysis, as it is fast and has a very low probability of producing false positive alarm conditions. Urban radiological searches are inherently challenged by the rapid and large spatial variation of background gamma radiation, the presence of benign radioactive materials in terms of naturally occurring radioactive materials (NORM), and shielded and/or masked threat sources. Multiple spectral anomaly detection algorithms have been developed by national laboratories and commercial vendors. For example, the Gamma Detector Response and Analysis Software (GADRAS), a one-dimensional deterministic radiation transport code capable of calculating gamma ray spectra using physics-based detector response functions, was developed at Sandia National Laboratories. The nuisance-rejection spectral comparison ratio anomaly detection algorithm (or NSCRAD), developed at Pacific Northwest National Laboratory, uses spectral comparison ratios to detect deviations from benign medical and NORM radiation sources and can work in spite of a strong presence of NORM and/or medical sources. RSL has developed its own wavelet-based gamma energy spectral anomaly detection algorithm called WAVRAD. Test results and relative merits of these different algorithms will be discussed and demonstrated.

  9. Motivating Complex Dependence Structures in Data Mining: A Case Study with Anomaly Detection in Climate

    SciTech Connect

    Kao, Shih-Chieh; Ganguly, Auroop R; Steinhaeuser, Karsten J K

    2009-01-01

    While data mining aims to identify hidden knowledge from massive and high dimensional datasets, the importance of dependence structure among time, space, and between different variables is less emphasized. Analogous to the use of probability density functions in modeling individual variables, it is now possible to characterize the complete dependence space mathematically through the application of copulas. By adopting copulas, the multivariate joint probability distribution can be constructed without constraint to specific types of marginal distributions. Some common assumptions, like normality and independence between variables, can also be relaxed. This study provides a fundamental introduction to, and illustration of, dependence structure, aimed at the potential applicability of copulas in general data mining. The case study in hydro-climatic anomaly detection shows that the frequency of multivariate anomalies is affected by the dependence level between variables. The appropriate multivariate thresholds can be determined through a copula-based approach.
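
    As a minimal illustration of a copula-based multivariate threshold (a Gaussian copula with empirical margins, which is only one of many possible choices and not necessarily the one used in the study), the joint probability that two variables simultaneously exceed their marginal thresholds can be estimated as follows; the Monte Carlo sample size and the Gaussian-copula assumption are illustrative.

        import numpy as np
        from scipy.stats import norm, rankdata, multivariate_normal

        def joint_exceedance_prob(x, y, qx, qy, n_mc=100000, seed=0):
            # Gaussian copula fitted to empirical margins; estimates P(X > qx and Y > qy)
            u = rankdata(x) / (len(x) + 1)
            v = rankdata(y) / (len(y) + 1)
            z = np.column_stack([norm.ppf(u), norm.ppf(v)])
            rho = np.corrcoef(z, rowvar=False)[0, 1]
            zx, zy = norm.ppf(np.mean(x <= qx)), norm.ppf(np.mean(y <= qy))
            mvn = multivariate_normal(mean=[0.0, 0.0], cov=[[1.0, rho], [rho, 1.0]])
            samples = mvn.rvs(size=n_mc, random_state=seed)
            return float(np.mean((samples[:, 0] > zx) & (samples[:, 1] > zy)))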

  10. Using the sequential regression (SER) algorithm for long-term signal processing. [Intrusion detection

    SciTech Connect

    Soldan, D. L.; Ahmed, N.; Stearns, S. D.

    1980-01-01

    The use of the sequential regression (SER) algorithm (Electron. Lett., 14, 118(1978); 13, 446(1977)) for long-term processing applications is limited by two problems that can occur when an SER predictor has more weights than required to predict the input signal. First, computational difficulties related to updating the autocorrelation matrix inverse could arise, since no unique least-squares solution exists. Second, the predictor strives to remove very low-level components in the input, and hence could implement a gain function that is essentially zero over the entire passband. The predictor would then tend to become a no-pass filter which is undesirable in certain applications, e.g., intrusion detection (SAND--78-1032). Modifications to the SER algorithm that overcome the above problems are presented, which enable its use for long-term signal processing applications. 3 figures.

  11. Automated determinations of selenium in thermal power plant wastewater by sequential hydride generation and chemiluminescence detection.

    PubMed

    Ezoe, Kentaro; Ohyama, Seiichi; Hashem, Md Abul; Ohira, Shin-Ichi; Toda, Kei

    2016-02-01

    After the Fukushima disaster, power generation from nuclear power plants in Japan was completely stopped and old coal-based power plants were re-commissioned to compensate for the decrease in power generation capacity. Although coal is a relatively inexpensive fuel for power generation, it contains high levels (mg kg⁻¹) of selenium, which could contaminate the wastewater from thermal power plants. In this work, an automated selenium monitoring system was developed based on sequential hydride generation and chemiluminescence detection. This method could be applied to control of wastewater contamination. In this method, selenium is vaporized as H2Se, which reacts with ozone to produce chemiluminescence. However, interference from arsenic is of concern because the ozone-induced chemiluminescence intensity of H2Se is much lower than that of AsH3. This problem was successfully addressed by vaporizing arsenic and selenium individually in a sequential procedure using a syringe pump equipped with an eight-port selection valve and hot and cold reactors. Oxidative decomposition of organoselenium compounds and pre-reduction of the selenium were performed in the hot reactor, and vapor generation of arsenic and selenium were performed separately in the cold reactor. Sample transfers between the reactors were carried out by a pneumatic air operation by switching with three-way solenoid valves. The detection limit for selenium was 0.008 mg L⁻¹ and the calibration curve was linear up to 1.0 mg L⁻¹, which provided suitable performance for controlling selenium in wastewater to around the allowable limit (0.1 mg L⁻¹). This system consumes few chemicals and is stable for more than a month without any maintenance. Wastewater samples from thermal power plants were collected, and data obtained by the proposed method were compared with those from batchwise water treatment followed by hydride generation-atomic fluorescence spectrometry. PMID:26653491

  12. A function approximation approach to anomaly detection in propulsion system test data

    NASA Astrophysics Data System (ADS)

    Whitehead, Bruce A.; Hoyt, W. A.

    1993-06-01

    Ground test data from propulsion systems such as the Space Shuttle Main Engine (SSME) can be automatically screened for anomalies by a neural network. The neural network screens data after being trained with nominal data only. Given the values of 14 measurements reflecting external influences on the SSME at a given time, the neural network predicts the expected nominal value of a desired engine parameter at that time. We compared the ability of three different function-approximation techniques to perform this nominal value prediction: a novel neural network architecture based on Gaussian bar basis functions, a conventional back propagation neural network, and linear regression. These three techniques were tested with real data from six SSME ground tests containing two anomalies. The basis function network trained more rapidly than back propagation. It yielded nominal predictions with a tight enough confidence interval to distinguish anomalous deviations from the nominal fluctuations in an engine parameter. Since the function-approximation approach requires nominal training data only, it is capable of detecting unknown classes of anomalies for which training data is not available.
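
    The basis-function network itself is not reproduced here; as a much simpler stand-in for the same workflow (train on nominal data only, predict the expected engine parameter from the 14 external-influence measurements, and flag deviations outside a confidence band), a linear-regression sketch is shown below. The regressor choice and the 3-sigma band are assumptions.

        import numpy as np
        from sklearn.linear_model import LinearRegression

        def train_nominal_model(X_nominal, y_nominal):
            # fit on nominal data only and record the nominal residual spread
            model = LinearRegression().fit(X_nominal, y_nominal)
            sigma = float(np.std(y_nominal - model.predict(X_nominal)))
            return model, sigma

        def anomalous_samples(model, sigma, X, y, k=3.0):
            # flag samples whose deviation from the nominal prediction exceeds k * sigma
            return np.abs(np.asarray(y) - model.predict(X)) > k * sigma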

  13. Advanced Unsupervised Classification Methods to Detect Anomalies on Earthen Levees Using Polarimetric SAR Imagery.

    PubMed

    Marapareddy, Ramakalavathi; Aanstoos, James V; Younan, Nicolas H

    2016-01-01

    Fully polarimetric Synthetic Aperture Radar (polSAR) data analysis has wide applications for terrain and ground cover classification. The dynamics of surface and subsurface water events can lead to slope instability resulting in slough slides on earthen levees. Early detection of these anomalies by a remote sensing approach could save time versus direct assessment. We used L-band Synthetic Aperture Radar (SAR) to screen levees for anomalies. SAR technology, due to its high spatial resolution and soil penetration capability, is a good choice for identifying problematic areas on earthen levees. Using the parameters entropy (H), anisotropy (A), alpha (α), and eigenvalues (λ, λ₁, λ₂, and λ₃), we implemented several unsupervised classification algorithms for the identification of anomalies on the levee. The classification techniques applied are H/α, H/A, A/α, Wishart H/α, Wishart H/A/α, and H/α/λ classification algorithms. In this work, the effectiveness of the algorithms was demonstrated using quad-polarimetric L-band SAR imagery from the NASA Jet Propulsion Laboratory's (JPL's) Uninhabited Aerial Vehicle Synthetic Aperture Radar (UAVSAR). The study area is a section of the lower Mississippi River valley in the Southern USA, where earthen flood control levees are maintained by the US Army Corps of Engineers. PMID:27322270

  14. Automatic, Real-Time Algorithms for Anomaly Detection in High Resolution Satellite Imagery

    NASA Astrophysics Data System (ADS)

    Srivastava, A. N.; Nemani, R. R.; Votava, P.

    2008-12-01

    Earth observing satellites are generating data at an unprecedented rate, surpassing almost all other data intensive applications. However, most of the data that arrives from the satellites is not analyzed directly. Rather, multiple scientific teams analyze only a small fraction of the total data available in the data stream. Although there are many reasons for this situation, one paramount concern is developing algorithms and methods that can analyze the vast, high dimensional, streaming satellite images. This paper describes a new set of methods that are among the fastest available algorithms for real-time anomaly detection. These algorithms were built to maximize accuracy and speed for a variety of applications in fields outside of the earth sciences. However, our studies indicate that with appropriate modifications, these algorithms can be extremely valuable for identifying anomalies rapidly using only modest computational power. We review two algorithms used as benchmarks in the field, Orca and One-Class Support Vector Machines, and discuss the anomalies that are discovered in MODIS data taken over the Central California region. We are especially interested in automatic identification of disturbances within the ecosystems (e.g., wildfires, droughts, floods, insect/pest damage, wind damage, logging). We show the scalability of the algorithms and demonstrate that with appropriately adapted technology, the dream of real-time analysis can be made a reality.
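
    For orientation, a one-class SVM (one of the two benchmark families named above) can be fit on pixel- or tile-level feature vectors with scikit-learn in a few lines; the feature construction, the random placeholder data, and the nu/gamma settings below are assumptions.

        import numpy as np
        from sklearn.svm import OneClassSVM

        # rows = pixels or image tiles, columns = spectral/temporal features (placeholder data)
        X = np.random.default_rng(0).normal(size=(1000, 7))
        detector = OneClassSVM(kernel='rbf', nu=0.01, gamma='scale').fit(X)
        labels = detector.predict(X)            # -1 marks candidate anomalies
        scores = detector.decision_function(X)  # lower scores = more anomalous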

  15. Advanced Unsupervised Classification Methods to Detect Anomalies on Earthen Levees Using Polarimetric SAR Imagery

    PubMed Central

    Marapareddy, Ramakalavathi; Aanstoos, James V.; Younan, Nicolas H.

    2016-01-01

    Fully polarimetric Synthetic Aperture Radar (polSAR) data analysis has wide applications for terrain and ground cover classification. The dynamics of surface and subsurface water events can lead to slope instability resulting in slough slides on earthen levees. Early detection of these anomalies by a remote sensing approach could save time versus direct assessment. We used L-band Synthetic Aperture Radar (SAR) to screen levees for anomalies. SAR technology, due to its high spatial resolution and soil penetration capability, is a good choice for identifying problematic areas on earthen levees. Using the parameters entropy (H), anisotropy (A), alpha (α), and eigenvalues (λ, λ1, λ2, and λ3), we implemented several unsupervised classification algorithms for the identification of anomalies on the levee. The classification techniques applied are H/α, H/A, A/α, Wishart H/α, Wishart H/A/α, and H/α/λ classification algorithms. In this work, the effectiveness of the algorithms was demonstrated using quad-polarimetric L-band SAR imagery from the NASA Jet Propulsion Laboratory’s (JPL’s) Uninhabited Aerial Vehicle Synthetic Aperture Radar (UAVSAR). The study area is a section of the lower Mississippi River valley in the Southern USA, where earthen flood control levees are maintained by the US Army Corps of Engineers. PMID:27322270

  16. Volcanic activity and satellite-detected thermal anomalies at Central American volcanoes

    NASA Technical Reports Server (NTRS)

    Stoiber, R. E. (Principal Investigator); Rose, W. I., Jr.

    1973-01-01

    The author has identified the following significant results. A large nuee ardente eruption occurred at Santiaguito volcano, within the test area on 16 September 1973. Through a system of local observers, the eruption has been described and reported to the international scientific community, the extent of the affected area mapped, and the new ash sampled. A more extensive report on this event will be prepared. The eruption is an excellent example of the kind of volcanic situation in which satellite thermal imagery might be useful. The Santiaguito dome is a complex mass with a whole series of historically active vents. Its location makes access difficult, yet its activity is of great concern to large agricultural populations who live downslope. Santiaguito has produced a number of large eruptions with little apparent warning. In the earlier ground survey, large thermal anomalies were identified at Santiaguito. There is no way of knowing whether satellite monitoring could have detected changes in thermal anomaly patterns related to this recent event, but the position of thermal anomalies on Santiaguito and any changes in their character would be relevant information.

  17. Ferromagnetic eddy current probe having eccentric magnetization for detecting anomalies in a tube

    SciTech Connect

    Cecco, V.S.; Carter, J.R.

    1993-08-17

    An eddy current probe is described for detecting anomalies in a tube made of a ferromagnetic material, comprising: a probe housing made of a non-ferromagnetic material and shaped to be introduced into the tube for inspection, said housing having a central axis substantially coinciding with the axis of the tube to be inspected when the probe is in use; at least two eddy current measuring assemblies provided in said housing, each said assembly including magnetization means for generating a magnetic field in the tube under inspection to magnetize said tube, said magnetization means producing a maximum magnetization at an area of said tube and a minimum magnetization at a diametrically opposite area of said tube and at least one eddy current measuring coil associated with said magnetization means to measure the eddy current generated in the said tube and which has a relatively high sensitivity to an anomaly at said maximum magnetization area; and said eddy current measuring assemblies being spaced apart axially within said housing and rotated about said central axis from each other by a predetermined angle so that each assembly is sensitive to anomalies differently depending upon their location in said housing.

  18. A function approximation approach to anomaly detection in propulsion system test data

    NASA Technical Reports Server (NTRS)

    Whitehead, Bruce A.; Hoyt, W. A.

    1993-01-01

    Ground test data from propulsion systems such as the Space Shuttle Main Engine (SSME) can be automatically screened for anomalies by a neural network. The neural network screens data after being trained with nominal data only. Given the values of 14 measurements reflecting external influences on the SSME at a given time, the neural network predicts the expected nominal value of a desired engine parameter at that time. We compared the ability of three different function-approximation techniques to perform this nominal value prediction: a novel neural network architecture based on Gaussian bar basis functions, a conventional back propagation neural network, and linear regression. These three techniques were tested with real data from six SSME ground tests containing two anomalies. The basis function network trained more rapidly than back propagation. It yielded nominal predictions with a tight enough confidence interval to distinguish anomalous deviations from the nominal fluctuations in an engine parameter. Since the function-approximation approach requires nominal training data only, it is capable of detecting unknown classes of anomalies for which training data is not available.

  19. GPU implementation of target and anomaly detection algorithms for remotely sensed hyperspectral image analysis

    NASA Astrophysics Data System (ADS)

    Paz, Abel; Plaza, Antonio

    2010-08-01

    Automatic target and anomaly detection are considered very important tasks for hyperspectral data exploitation. These techniques are now routinely applied in many application domains, including defence and intelligence, public safety, precision agriculture, geology, or forestry. Many of these applications require timely responses for swift decisions which depend upon high computing performance of algorithm analysis. However, with the recent explosion in the amount and dimensionality of hyperspectral imagery, this problem calls for the incorporation of parallel computing techniques. In the past, clusters of computers have offered an attractive solution for fast anomaly and target detection in hyperspectral data sets already transmitted to Earth. However, these systems are expensive and difficult to adapt to on-board data processing scenarios, in which low-weight and low-power integrated components are essential to reduce mission payload and obtain analysis results in (near) real-time, i.e., at the same time as the data is collected by the sensor. An exciting new development in the field of commodity computing is the emergence of commodity graphics processing units (GPUs), which can now bridge the gap towards on-board processing of remotely sensed hyperspectral data. In this paper, we describe several new GPU-based implementations of target and anomaly detection algorithms for hyperspectral data exploitation. The parallel algorithms are implemented on latest-generation Tesla C1060 GPU architectures, and quantitatively evaluated using hyperspectral data collected by NASA's AVIRIS system over the World Trade Center (WTC) in New York, five days after the terrorist attacks that collapsed the two main towers in the WTC complex.

  20. Multiple Kernel Learning for Heterogeneous Anomaly Detection: Algorithm and Aviation Safety Case Study

    NASA Technical Reports Server (NTRS)

    Das, Santanu; Srivastava, Ashok N.; Matthews, Bryan L.; Oza, Nikunj C.

    2010-01-01

    The world-wide aviation system is one of the most complex dynamical systems ever developed and is generating data at an extremely rapid rate. Most modern commercial aircraft record several hundred flight parameters including information from the guidance, navigation, and control systems, the avionics and propulsion systems, and the pilot inputs into the aircraft. These parameters may be continuous measurements or binary or categorical measurements recorded at one-second intervals for the duration of the flight. Currently, most approaches to aviation safety are reactive, meaning that they are designed to react to an aviation safety incident or accident. In this paper, we discuss a novel approach based on the theory of multiple kernel learning to detect potential safety anomalies in very large databases of discrete and continuous data from world-wide operations of commercial fleets. We pose a general anomaly detection problem which includes both discrete and continuous data streams, where we assume that the discrete streams have a causal influence on the continuous streams. We also assume that atypical sequences of events in the discrete streams can lead to off-nominal system performance. We discuss the application domain, novel algorithms, and also discuss results on real-world data sets. Our algorithm uncovers operationally significant events in high dimensional data streams in the aviation industry which are not detectable using state-of-the-art methods.
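
    The paper's learning of kernel weights is not reproduced here; a simplified sketch of the underlying idea (one kernel per data modality, combined and fed to a one-class SVM with a precomputed kernel) is shown below, with a fixed combination weight standing in for the learned one. The toy data, the overlap kernel for discrete streams, and all parameter values are assumptions.

        import numpy as np
        from sklearn.metrics.pairwise import rbf_kernel
        from sklearn.svm import OneClassSVM

        def combined_kernel(X_cont, X_disc, w=0.5):
            # convex combination of a continuous-data kernel and a discrete-data kernel
            K_cont = rbf_kernel(X_cont, gamma=0.1)
            K_disc = (X_disc[:, None, :] == X_disc[None, :, :]).mean(axis=2)  # simple overlap kernel
            return w * K_cont + (1.0 - w) * K_disc

        rng = np.random.default_rng(1)
        Xc = rng.normal(size=(200, 5))           # continuous flight parameters (placeholder)
        Xd = rng.integers(0, 2, size=(200, 8))   # discrete switch/event indicators (placeholder)
        K = combined_kernel(Xc, Xd)
        model = OneClassSVM(kernel='precomputed', nu=0.05).fit(K)
        scores = model.decision_function(K)      # lower scores = more anomalous flights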

  1. Molecular Detection of Human Cytomegalovirus (HCMV) Among Infants with Congenital Anomalies in Khartoum State, Sudan

    PubMed Central

    Ebrahim, Maha G.; Ali, Aisha S.; Mustafa, Mohamed O.; Musa, Dalal F.; El Hussein, Abdel Rahim M.; Elkhidir, Isam M.; Enan, Khalid A.

    2015-01-01

    Human Cytomegalovirus (HCMV) infection still represents the most common potentially serious viral complication in humans and is a major cause of congenital anomalies in infants. This study aimed to detect HCMV in infants with congenital anomalies. Study subjects consisted of infants born with neural tube defects, hydrocephalus and microcephaly. Fifty serum specimens (20 males, 30 females) were collected from different hospitals in Khartoum State. The sera were investigated for cytomegalovirus-specific immunoglobulin M (IgM) antibodies using enzyme-linked immunosorbent assay (ELISA), and for Cytomegalovirus DNA using polymerase chain reaction (PCR). Out of the 50 sera tested, one patient's sample (2%) showed HCMV IgM but no detectable DNA, while another 4 (8.2%) sera were positive for HCMV DNA but had no detectable IgM. Various diagnostic techniques should be considered to evaluate HCMV disease and routine screening for HCMV should be introduced for pregnant women in this setting. It is vital to initiate further research work with many samples from different areas to assess prevalence and characterize HCMV and evaluate its maternal health implications. PMID:26862356

  2. Fiber Optic Bragg Grating Sensors for Thermographic Detection of Subsurface Anomalies

    NASA Technical Reports Server (NTRS)

    Allison, Sidney G.; Winfree, William P.; Wu, Meng-Chou

    2009-01-01

    Conventional thermography with an infrared imager has been shown to be an extremely viable technique for nondestructively detecting subsurface anomalies such as thickness variations due to corrosion. A recently developed technique using fiber optic sensors to measure temperature holds potential for performing similar inspections without requiring an infrared imager. The structure is heated using a heat source such as a quartz lamp with fiber Bragg grating (FBG) sensors at the surface of the structure to detect temperature. Investigated structures include a stainless steel plate with thickness variations simulated by small platelets attached to the back side using thermal grease. A relationship is shown between the FBG sensor thermal response and variations in material thickness. For comparison, finite element modeling was performed and found to agree closely with the fiber optic thermography results. This technique shows potential for applications where FBG sensors are already bonded to structures for Integrated Vehicle Health Monitoring (IVHM) strain measurements and can serve dual-use by also performing thermographic detection of subsurface anomalies.

  3. The Frog-Boiling Attack: Limitations of Anomaly Detection for Secure Network Coordinate Systems

    NASA Astrophysics Data System (ADS)

    Chan-Tin, Eric; Feldman, Daniel; Hopper, Nicholas; Kim, Yongdae

    A network coordinate system assigns Euclidean “virtual” coordinates to every node in a network to allow easy estimation of network latency between pairs of nodes that have never contacted each other. These systems have been implemented in a variety of applications, most notably the popular Azureus/Vuze BitTorrent client. Zage and Nita-Rotaru (CCS 2007) and independently, Kaafar et al. (SIGCOMM 2007), demonstrated that several widely-cited network coordinate systems are prone to simple attacks, and proposed mechanisms to defeat these attacks using outlier detection to filter out adversarial inputs. We propose a new attack, Frog-Boiling, that defeats anomaly-detection based defenses in the context of network coordinate systems, and demonstrate empirically that Frog-Boiling is more disruptive than the previously known attacks. Our results suggest that a new approach is needed to solve this problem: outlier detection alone cannot be used to secure network coordinate systems.

  4. Particle Filtering for Model-Based Anomaly Detection in Sensor Networks

    NASA Technical Reports Server (NTRS)

    Solano, Wanda; Banerjee, Bikramjit; Kraemer, Landon

    2012-01-01

    A novel technique has been developed for anomaly detection of rocket engine test stand (RETS) data. The objective was to develop a system that postprocesses a csv file containing the sensor readings and activities (time-series) from a rocket engine test, and detects any anomalies that might have occurred during the test. The output consists of the names of the sensors that show anomalous behavior, and the start and end time of each anomaly. In order to reduce the involvement of domain experts significantly, several data-driven approaches have been proposed where models are automatically acquired from the data, thus bypassing the cost and effort of building system models. Many supervised learning methods can efficiently learn operational and fault models, given large amounts of both nominal and fault data. However, for domains such as RETS data, the amount of anomalous data that is actually available is relatively small, making most supervised learning methods rather ineffective, and in general met with limited success in anomaly detection. The fundamental problem with existing approaches is that they assume that the data are iid, i.e., independent and identically distributed, which is violated in typical RETS data. None of these techniques naturally exploit the temporal information inherent in time series data from the sensor networks. There are correlations among the sensor readings, not only at the same time, but also across time. However, these approaches have not explicitly identified and exploited such correlations. Given these limitations of model-free methods, there has been renewed interest in model-based methods, specifically graphical methods that explicitly reason temporally. The Gaussian Mixture Model (GMM) in a Linear Dynamic System approach assumes that the multi-dimensional test data is a mixture of multi-variate Gaussians, and fits a given number of Gaussian clusters with the help of the well-known Expectation Maximization (EM) algorithm. The
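
    The GMM step mentioned above can be sketched with scikit-learn as fitting a mixture to nominal multi-sensor samples and flagging low-density samples; the placeholder data, component count, and 1% density cutoff are assumptions, and the full approach in the abstract embeds the mixture in a linear dynamic system rather than treating samples independently.

        import numpy as np
        from sklearn.mixture import GaussianMixture

        # rows = time samples, columns = sensor channels (placeholder nominal data)
        X_nominal = np.random.default_rng(2).normal(size=(5000, 6))
        gmm = GaussianMixture(n_components=4, covariance_type='full', random_state=0).fit(X_nominal)
        log_density = gmm.score_samples(X_nominal)
        threshold = np.percentile(log_density, 1)  # flag the lowest-density 1% as candidate anomalies
        anomalous = log_density < threshold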

  5. Feasibility of anomaly detection and characterization using trans-admittance mammography with 60 × 60 electrode array

    NASA Astrophysics Data System (ADS)

    Zhao, Mingkang; Wi, Hun; Lee, Eun Jung; Woo, Eung Je; In Oh, Tong

    2014-10-01

    Electrical impedance imaging has the potential to detect an early stage of breast cancer due to the higher admittivity values of tumors compared with those of normal breast tissues. The tumor size and extent of axillary lymph node involvement are important parameters for evaluating the breast cancer survival rate. Additionally, anomaly characterization is required to distinguish a malignant tumor from a benign tumor. In order to overcome the limitations of breast cancer detection using impedance measurement probes, we developed a high-density trans-admittance mammography (TAM) system with a 60 × 60 electrode array and produced trans-admittance maps obtained at several frequency pairs. We applied the anomaly detection algorithm to the high-density TAM system to estimate the volume and position of a breast tumor. We tested four different anomaly sizes with three different conductivity contrasts at four different depths. From multifrequency trans-admittance maps, the transversal position of an anomaly can be readily observed and its volume and depth estimated. In particular, the depth estimates obtained with the new formula based on the Laplacian of the trans-admittance map were accurate and independent of anomaly size and conductivity contrast. The volume estimation was dependent on the conductivity contrast between the anomaly and the background in the breast phantom. We characterized two test anomalies using frequency-difference trans-admittance data to eliminate the dependency on anomaly position and size. We confirmed the anomaly detection and characterization algorithm with the high-density TAM system on bovine breast tissue. Both results showed the feasibility of detecting the size and position of an anomaly and of characterizing tissue for breast cancer screening.
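
    The role of the Laplacian can be illustrated with a small sketch (the paper's actual depth formula is not reproduced here): a slowly varying background contributes little to the Laplacian of the trans-admittance map, so a localized anomaly stands out and its transversal position can be read off as the extremum.

      import numpy as np
      from scipy.ndimage import gaussian_filter, laplace

      # Synthetic "trans-admittance map": smooth background plus a local anomaly.
      x, y = np.meshgrid(np.linspace(-1, 1, 60), np.linspace(-1, 1, 60))
      background = 0.2 * x + 0.1 * y
      anomaly = np.exp(-((x - 0.3) ** 2 + (y + 0.2) ** 2) / 0.01)
      tmap = gaussian_filter(background + 0.5 * anomaly, sigma=1)

      lap = laplace(tmap)                                  # Laplacian of the map
      row, col = np.unravel_index(np.argmax(np.abs(lap)), lap.shape)
      print("estimated transversal position (pixel):", row, col)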

  6. Sparsity divergence index based on locally linear embedding for hyperspectral anomaly detection

    NASA Astrophysics Data System (ADS)

    Zhang, Lili; Zhao, Chunhui

    2016-04-01

    Hyperspectral imagery (HSI) has high spectral and spatial resolutions, which are essential for anomaly detection (AD). Many anomaly detectors assume that the spectrum signature of HSI pixels can be modeled with a Gaussian distribution, which is actually not accurate and often leads to many false alarms. Therefore, a sparsity model without any distribution hypothesis is usually employed. Dimensionality reduction (DR) as a preprocessing step for HSI is important. Principal component analysis, a conventional DR method, is a linear projection and cannot exploit the nonlinear properties in hyperspectral data, whereas locally linear embedding (LLE), a local, nonlinear manifold learning algorithm, works well for DR of HSI. A modified algorithm of sparsity divergence index based on locally linear embedding (SDI-LLE) is thus proposed. First, kernel collaborative representation detection is adopted to calculate the sparse dictionary matrix of local reconstruction weights in LLE. Then, SDI is obtained in both the spectral and spatial domains, where spatial SDI is computed after DR by LLE. Finally, a joint SDI, combining spectral SDI and spatial SDI, is computed, and AD is performed on the optimal SDI. Experimental results demonstrate that the proposed algorithm significantly improves detection performance compared with its counterparts.
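
    A minimal sketch of the dimensionality-reduction step is shown below (the band count, neighbour count, and crude centroid-distance score are illustrative assumptions; the actual SDI-LLE combines kernel collaborative representation with spectral and spatial SDI):

      import numpy as np
      from sklearn.manifold import LocallyLinearEmbedding

      rng = np.random.default_rng(1)
      pixels = rng.normal(size=(500, 100))        # 500 pixels x 100 spectral bands
      pixels[:5] += 3.0                           # a few anomalous spectra

      # LLE: local, nonlinear manifold learning for dimensionality reduction.
      lle = LocallyLinearEmbedding(n_neighbors=10, n_components=5)
      embedded = lle.fit_transform(pixels)

      # Crude stand-in for a divergence index: distance from the embedded centroid.
      score = np.linalg.norm(embedded - embedded.mean(axis=0), axis=1)
      print(np.argsort(score)[-5:])               # likely includes the anomalous pixels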

  7. A MLP neural network as an investigator of TEC time series to detect seismo-ionospheric anomalies

    NASA Astrophysics Data System (ADS)

    Akhoondzadeh, M.

    2013-06-01

    Anomaly detection is extremely important for estimating earthquake parameters. In this paper, an application of Artificial Neural Networks (ANNs) in the earthquake precursor domain has been developed. This study investigates Total Electron Content (TEC) time series using a Multi-Layer Perceptron (MLP) neural network to detect seismo-ionospheric anomalous variations induced by the powerful Tohoku earthquake of March 11, 2011. The TEC time series spans 120 days at a time resolution of 2 h. The results show that the MLP detects anomalies better than conventional reference methods such as the Auto-Regressive Integrated Moving Average (ARIMA) technique. The TEC anomalies detected using the proposed method are also compared with previous results (Akhoondzadeh, 2012) obtained by applying mean, median, wavelet and Kalman filter methods. The anomalies detected by the MLP are similar to those detected using the previous methods applied to the same case study. The results indicate that an MLP feed-forward neural network can be a suitable non-parametric method for detecting changes in a nonlinear time series such as variations of earthquake precursors.
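
    The general idea, an MLP predictor whose large residuals mark anomalous TEC samples, can be sketched as follows (the synthetic data, window length, and mean + 2*sigma threshold are assumptions for illustration, not the paper's exact setup):

      import numpy as np
      from sklearn.neural_network import MLPRegressor

      rng = np.random.default_rng(0)
      t = np.arange(120 * 12)                     # 120 days at 2-h resolution
      tec = 20 + 5 * np.sin(2 * np.pi * t / 12) + rng.normal(0, 0.5, t.size)
      tec[1000:1005] += 8                         # injected "anomalous" enhancement

      window = 12                                 # predict the next value from one day of data
      X = np.array([tec[i:i + window] for i in range(len(tec) - window)])
      y = tec[window:]

      mlp = MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000, random_state=0)
      mlp.fit(X[:800], y[:800])                   # train on the quiet, earlier portion

      resid = np.abs(y - mlp.predict(X))
      thr = resid[:800].mean() + 2 * resid[:800].std()
      print(np.where(resid > thr)[0] + window)    # samples near index 1000 stand out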

  8. Anomaly Detection Techniques with Real Test Data from a Spinning Turbine Engine-Like Rotor

    NASA Technical Reports Server (NTRS)

    Abdul-Aziz, Ali; Woike, Mark R.; Oza, Nikunj C.; Matthews, Bryan L.

    2012-01-01

    Online detection techniques to monitor the health of rotating engine components are becoming increasingly attractive to aircraft engine manufacturers as a way to increase operational safety and lower maintenance costs. Health monitoring remains challenging to implement, especially in the presence of scattered loading conditions and variations in crack size, component geometry, and material properties. The current trend, however, is to utilize noninvasive health monitoring or nondestructive techniques to detect hidden flaws and mini-cracks before any catastrophic event occurs. These techniques go further, evaluating material discontinuities and other anomalies that have grown to the level of critical defects that can lead to failure. Generally, health monitoring is highly dependent on sensor systems capable of performing in various engine environmental conditions and able to transmit a signal upon a predetermined crack length, while having a neutral effect on the overall performance of the engine system.

  9. System and method for the detection of anomalies in an image

    DOEpatents

    Prasad, Lakshman; Swaminarayan, Sriram

    2013-09-03

    Preferred aspects of the present invention can include receiving a digital image at a processor; segmenting the digital image into a hierarchy of feature layers comprising one or more fine-scale features defining a foreground object embedded in one or more coarser-scale features defining a background to the one or more fine-scale features in the segmentation hierarchy; detecting a first fine-scale foreground feature as an anomaly with respect to a first background feature within which it is embedded; and constructing an anomalous feature layer by synthesizing spatially contiguous anomalous fine-scale features. Additional preferred aspects of the present invention can include detecting non-pervasive changes between sets of images in response at least in part to one or more difference images between the sets of images.
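
    A greatly simplified sketch of the idea (not the patented segmentation hierarchy) is shown below: a large median filter stands in for the coarse-scale background layer, pixels that deviate strongly from it are treated as fine-scale anomalies, and contiguous anomalous pixels are grouped into an anomalous feature layer.

      import numpy as np
      from scipy.ndimage import label, median_filter

      rng = np.random.default_rng(0)
      image = rng.normal(100, 5, size=(128, 128))
      image[40:44, 60:64] += 40                        # small bright foreground object

      background = median_filter(image, size=15)       # coarse-scale background estimate
      residual = image - background
      mask = np.abs(residual) > 4 * residual.std()     # fine-scale anomaly mask

      features, n = label(mask)                        # contiguous anomalous feature layer
      print("anomalous features found:", n)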

  10. MedMon: securing medical devices through wireless monitoring and anomaly detection.

    PubMed

    Zhang, Meng; Raghunathan, Anand; Jha, Niraj K

    2013-12-01

    Rapid advances in personal healthcare systems based on implantable and wearable medical devices promise to greatly improve the quality of diagnosis and treatment for a range of medical conditions. However, the increasing programmability and wireless connectivity of medical devices also open up opportunities for malicious attackers. Unfortunately, implantable/wearable medical devices come with extreme size and power constraints, and unique usage models, making it infeasible to simply borrow conventional security solutions such as cryptography. We propose a general framework for securing medical devices based on wireless channel monitoring and anomaly detection. Our proposal is based on a medical security monitor (MedMon) that snoops on all the radio-frequency wireless communications to/from medical devices and uses multi-layered anomaly detection to identify potentially malicious transactions. Upon detection of a malicious transaction, MedMon takes appropriate response actions, which could range from passive (notifying the user) to active (jamming the packets so that they do not reach the medical device). A key benefit of MedMon is that it is applicable to existing medical devices that are in use by patients, with no hardware or software modifications to them. Consequently, it also leads to zero power overheads on these devices. We demonstrate the feasibility of our proposal by developing a prototype implementation for an insulin delivery system using off-the-shelf components (USRP software-defined radio). We evaluate its effectiveness under several attack scenarios. Our results show that MedMon can detect virtually all naive attacks and a large fraction of more sophisticated attacks, suggesting that it is an effective approach to enhancing the security of medical devices. PMID:24473551
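
    As a purely illustrative analogue of the multi-layered checks described above (field names, thresholds, and rules below are hypothetical and not taken from MedMon), a monitor might combine physical-layer plausibility with behavioural limits on the commands it observes:

      from dataclasses import dataclass

      @dataclass
      class Packet:
          timestamp: float      # seconds
          rssi_dbm: float       # received signal strength of the sniffed transmission
          dose_units: float     # commanded insulin dose carried in the packet

      MAX_DOSE = 10.0           # behavioural bound: no single command above this
      MIN_INTERVAL = 60.0       # behavioural bound: commands at least a minute apart
      RSSI_RANGE = (-70.0, -30.0)   # physical-layer bound for a legitimate nearby programmer

      def is_anomalous(pkt, last_command_time):
          """Multi-layered checks: physical-layer plausibility plus behavioural limits."""
          if not (RSSI_RANGE[0] <= pkt.rssi_dbm <= RSSI_RANGE[1]):
              return True       # signal strength outside the expected range
          if pkt.dose_units > MAX_DOSE:
              return True       # implausibly large dose command
          if pkt.timestamp - last_command_time < MIN_INTERVAL:
              return True       # commands arriving too quickly
          return False

      print(is_anomalous(Packet(100.0, -90.0, 2.0), last_command_time=0.0))   # True
      print(is_anomalous(Packet(100.0, -50.0, 2.0), last_command_time=0.0))   # False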