Sample records for anomaly detection algorithm

  1. Quantum machine learning for quantum anomaly detection

    NASA Astrophysics Data System (ADS)

    Liu, Nana; Rebentrost, Patrick

    2018-04-01

    Anomaly detection is used for identifying data that deviate from "normal" data patterns. Its usage on classical data finds diverse applications in many important areas such as finance, fraud detection, medical diagnoses, data cleaning, and surveillance. With the advent of quantum technologies, anomaly detection of quantum data, in the form of quantum states, may become an important component of quantum applications. Machine-learning algorithms are playing pivotal roles in anomaly detection using classical data. Two widely used algorithms are the kernel principal component analysis and the one-class support vector machine. We find corresponding quantum algorithms to detect anomalies in quantum states. We show that these two quantum algorithms can be performed using resources that are logarithmic in the dimensionality of quantum states. For pure quantum states, these resources can also be logarithmic in the number of quantum states used for training the machine-learning algorithm. This makes these algorithms potentially applicable to big quantum data applications.
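
    The quantum speedups above have no short classical demonstration, but the two classical algorithms the paper starts from are easy to sketch. Below is a minimal illustration, assuming scikit-learn and synthetic data, of one-class SVM detection and a simple kernel-PCA-based anomaly score; the scoring heuristic and all parameters are our own choices, not the paper's.

    ```python
    # A minimal sketch, assuming scikit-learn: the two classical algorithms the
    # paper builds on, used to flag states that deviate from the training set.
    # gamma, nu, and the kernel-PCA scoring rule are illustrative assumptions.
    import numpy as np
    from sklearn.decomposition import KernelPCA
    from sklearn.svm import OneClassSVM

    rng = np.random.default_rng(0)
    normal = rng.normal(0.0, 1.0, size=(200, 8))            # "normal" training data
    test = np.vstack([rng.normal(0.0, 1.0, size=(5, 8)),    # 5 normal samples
                      rng.normal(4.0, 1.0, size=(5, 8))])   # 5 anomalous samples

    # One-class SVM: learns a boundary enclosing the normal data.
    ocsvm = OneClassSVM(kernel="rbf", gamma=0.1, nu=0.05).fit(normal)
    print("one-class SVM (+1 normal / -1 anomaly):", ocsvm.predict(test))

    # Kernel PCA: points far from the training data project weakly onto the
    # leading components, so low projection energy is scored as anomalous.
    kpca = KernelPCA(n_components=4, kernel="rbf", gamma=0.1).fit(normal)
    score = -np.linalg.norm(kpca.transform(test), axis=1)   # higher = more anomalous
    print("kernel PCA anomaly scores:", np.round(score, 2))
    ```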

  2. A Comparative Evaluation of Unsupervised Anomaly Detection Algorithms for Multivariate Data.

    PubMed

    Goldstein, Markus; Uchida, Seiichi

    2016-01-01

    Anomaly detection is the process of identifying unexpected items or events in datasets which differ from the norm. In contrast to standard classification tasks, anomaly detection is often applied to unlabeled data, taking only the internal structure of the dataset into account. This challenge is known as unsupervised anomaly detection and is addressed in many practical applications, for example in network intrusion detection, fraud detection, and in the life science and medical domains. Dozens of algorithms have been proposed in this area, but unfortunately the research community still lacks a comparative universal evaluation as well as common publicly available datasets. These shortcomings are addressed in this study, where 19 different unsupervised anomaly detection algorithms are evaluated on 10 different datasets from multiple application domains. By publishing the source code and the datasets, this paper aims to be a new well-founded basis for unsupervised anomaly detection research. Additionally, this evaluation reveals the strengths and weaknesses of the different approaches for the first time. Besides the anomaly detection performance, the computational effort, the impact of parameter settings, and the global/local anomaly detection behavior are outlined. In conclusion, we give advice on algorithm selection for typical real-world tasks.
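
    As a toy illustration of this kind of comparative study, the sketch below scores three common unsupervised detectors (a k-NN distance score, LOF, and a one-class SVM) with ROC AUC on one synthetic labeled dataset. The detector choices and parameters are assumptions for the demo, not the paper's 19-algorithm protocol.

    ```python
    # A toy comparative evaluation, assuming scikit-learn: three unsupervised
    # detectors scored with ROC AUC on one synthetic labeled dataset.
    import numpy as np
    from sklearn.metrics import roc_auc_score
    from sklearn.neighbors import LocalOutlierFactor, NearestNeighbors
    from sklearn.svm import OneClassSVM

    rng = np.random.default_rng(1)
    X = np.vstack([rng.normal(0, 1, (300, 5)), rng.normal(5, 1, (15, 5))])
    y = np.r_[np.zeros(300), np.ones(15)]                   # 1 = anomaly

    k = 10
    dist, _ = NearestNeighbors(n_neighbors=k + 1).fit(X).kneighbors(X)
    knn_score = dist[:, 1:].mean(axis=1)                    # mean distance to k neighbors

    lof = LocalOutlierFactor(n_neighbors=k).fit(X)
    lof_score = -lof.negative_outlier_factor_               # higher = more anomalous

    svm_score = -OneClassSVM(gamma="scale", nu=0.1).fit(X).decision_function(X)

    for name, s in [("k-NN", knn_score), ("LOF", lof_score), ("1-class SVM", svm_score)]:
        print(f"{name:12s} ROC AUC = {roc_auc_score(y, s):.3f}")
    ```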

  3. Adiabatic Quantum Anomaly Detection and Machine Learning

    NASA Astrophysics Data System (ADS)

    Pudenz, Kristen; Lidar, Daniel

    2012-02-01

    We present methods of anomaly detection and machine learning using adiabatic quantum computing. The machine learning algorithm is a boosting approach which seeks to optimally combine somewhat accurate classification functions to create a unified classifier which is much more accurate than its components. This algorithm then becomes the first part of the larger anomaly detection algorithm. In the anomaly detection routine, we first use adiabatic quantum computing to train two classifiers which detect two sets, the overlap of which forms the anomaly class. We call this the learning phase. Then, in the testing phase, the two learned classification functions are combined to form the final Hamiltonian for an adiabatic quantum computation, the low energy states of which represent the anomalies in a binary vector space.

  4. Evaluation schemes for video and image anomaly detection algorithms

    NASA Astrophysics Data System (ADS)

    Parameswaran, Shibin; Harguess, Josh; Barngrover, Christopher; Shafer, Scott; Reese, Michael

    2016-05-01

    Video anomaly detection is a critical research area in computer vision, and a natural first step before applying object recognition algorithms. Many algorithms that detect anomalies (outliers) in videos and images have been introduced in recent years. However, these algorithms behave and perform differently depending on the domains and tasks to which they are applied. In order to better understand the strengths and weaknesses of outlier algorithms and their applicability to a particular domain or task of interest, it is important to measure and quantify their performance using appropriate evaluation metrics. Many evaluation metrics have been used in the literature, such as precision curves, precision-recall curves, and receiver operating characteristic (ROC) curves. In order to construct these different metrics, it is also important to choose an appropriate evaluation scheme that decides when a proposed detection is considered a true or a false detection. Choosing the right evaluation metric and the right scheme is critical, since the choice can introduce positive or negative bias in the measuring criterion and may favor (or work against) a particular algorithm or task. In this paper, we review evaluation metrics and popular evaluation schemes that are used to measure the performance of anomaly detection algorithms on videos and imagery with one or more anomalies. We analyze the biases introduced by these schemes by measuring the performance of an existing anomaly detection algorithm.

  5. A Comparative Evaluation of Unsupervised Anomaly Detection Algorithms for Multivariate Data

    PubMed Central

    Goldstein, Markus; Uchida, Seiichi

    2016-01-01

    Anomaly detection is the process of identifying unexpected items or events in datasets which differ from the norm. In contrast to standard classification tasks, anomaly detection is often applied to unlabeled data, taking only the internal structure of the dataset into account. This challenge is known as unsupervised anomaly detection and is addressed in many practical applications, for example in network intrusion detection, fraud detection, and in the life science and medical domains. Dozens of algorithms have been proposed in this area, but unfortunately the research community still lacks a comparative universal evaluation as well as common publicly available datasets. These shortcomings are addressed in this study, where 19 different unsupervised anomaly detection algorithms are evaluated on 10 different datasets from multiple application domains. By publishing the source code and the datasets, this paper aims to be a new well-founded basis for unsupervised anomaly detection research. Additionally, this evaluation reveals the strengths and weaknesses of the different approaches for the first time. Besides the anomaly detection performance, the computational effort, the impact of parameter settings, and the global/local anomaly detection behavior are outlined. In conclusion, we give advice on algorithm selection for typical real-world tasks. PMID:27093601

  6. An incremental anomaly detection model for virtual machines.

    PubMed

    Zhang, Hancui; Chen, Shuyu; Liu, Jun; Zhou, Zhen; Wu, Tianshu

    2017-01-01

    The Self-Organizing Map (SOM) algorithm, an unsupervised learning method, has been applied in anomaly detection due to its capabilities of self-organization and automatic anomaly prediction. However, because the algorithm is initialized randomly, it takes a long time to train a detection model. Moreover, cloud platforms with large numbers of virtual machines are prone to performance anomalies due to their highly dynamic and resource-sharing nature, which leaves the algorithm with low accuracy and low scalability. To address these problems, an Improved Incremental Self-Organizing Map (IISOM) model is proposed for anomaly detection of virtual machines. In this model, a heuristic-based initialization algorithm and a Weighted Euclidean Distance (WED) algorithm are introduced into SOM to speed up the training process and improve model quality. Meanwhile, a neighborhood-based searching algorithm is presented to accelerate detection by taking into account the large scale and highly dynamic features of virtual machines on cloud platforms. To demonstrate the effectiveness, experiments on the common benchmark KDD Cup dataset and a real dataset have been performed. Results suggest that IISOM has advantages in accuracy and convergence velocity of anomaly detection for virtual machines on cloud platforms.
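
    A minimal SOM with a Weighted Euclidean Distance can make the WED idea concrete. The sketch below, in plain NumPy, initializes nodes from data samples and scores new points by their distance to the best-matching unit; the paper's heuristic initialization and neighborhood-based search are not reproduced, and the weights and schedules are assumptions.

    ```python
    # A minimal SOM training loop with a Weighted Euclidean Distance (WED).
    # This sketches the WED idea only; IISOM's heuristic initialization and
    # neighborhood-based search are not reproduced here.
    import numpy as np

    def train_som(X, grid=(5, 5), weights=None, epochs=20, lr0=0.5, sigma0=2.0, seed=0):
        rng = np.random.default_rng(seed)
        n, d = X.shape
        w = np.ones(d) if weights is None else weights        # per-feature WED weights
        nodes = X[rng.choice(n, grid[0] * grid[1])].copy()    # init from data samples
        coords = np.array([(i, j) for i in range(grid[0])
                           for j in range(grid[1])], float)
        for t in range(epochs):
            lr = lr0 * np.exp(-t / epochs)                    # decaying learning rate
            sigma = sigma0 * np.exp(-t / epochs)              # shrinking neighborhood
            for x in X[rng.permutation(n)]:
                # Weighted Euclidean Distance to every node; winner = closest node.
                bmu = np.argmin(((nodes - x) ** 2 * w).sum(axis=1))
                h = np.exp(-((coords - coords[bmu]) ** 2).sum(axis=1)
                           / (2 * sigma ** 2))
                nodes += lr * h[:, None] * (x - nodes)        # pull neighborhood toward x
        return nodes, w

    def anomaly_score(x, nodes, w):
        """WED to the best-matching unit; large values suggest an anomaly."""
        return np.sqrt((((nodes - x) ** 2) * w).sum(axis=1).min())
    ```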

  7. An incremental anomaly detection model for virtual machines

    PubMed Central

    Zhang, Hancui; Chen, Shuyu; Liu, Jun; Zhou, Zhen; Wu, Tianshu

    2017-01-01

    The Self-Organizing Map (SOM) algorithm, an unsupervised learning method, has been applied in anomaly detection due to its capabilities of self-organization and automatic anomaly prediction. However, because the algorithm is initialized randomly, it takes a long time to train a detection model. Moreover, cloud platforms with large numbers of virtual machines are prone to performance anomalies due to their highly dynamic and resource-sharing nature, which leaves the algorithm with low accuracy and low scalability. To address these problems, an Improved Incremental Self-Organizing Map (IISOM) model is proposed for anomaly detection of virtual machines. In this model, a heuristic-based initialization algorithm and a Weighted Euclidean Distance (WED) algorithm are introduced into SOM to speed up the training process and improve model quality. Meanwhile, a neighborhood-based searching algorithm is presented to accelerate detection by taking into account the large scale and highly dynamic features of virtual machines on cloud platforms. To demonstrate the effectiveness, experiments on the common benchmark KDD Cup dataset and a real dataset have been performed. Results suggest that IISOM has advantages in accuracy and convergence velocity of anomaly detection for virtual machines on cloud platforms. PMID:29117245

  8. Anomaly detection in hyperspectral imagery: statistics vs. graph-based algorithms

    NASA Astrophysics Data System (ADS)

    Berkson, Emily E.; Messinger, David W.

    2016-05-01

    Anomaly detection (AD) algorithms are frequently applied to hyperspectral imagery, but different algorithms produce different outlier results depending on the image scene content and the assumed background model. This work provides the first comparison of anomaly score distributions between common statistics-based anomaly detection algorithms (RX and subspace-RX) and the graph-based Topological Anomaly Detector (TAD). Anomaly scores in statistical AD algorithms should theoretically approximate a chi-squared distribution; however, this is rarely the case with real hyperspectral imagery. The expected distribution of scores found with graph-based methods remains unclear. We also look for general trends in algorithm performance with varied scene content. Three separate scenes were extracted from the hyperspectral MegaScene image taken over downtown Rochester, NY with the VIS-NIR-SWIR ProSpecTIR instrument. In order of most to least cluttered, we study an urban, suburban, and rural scene. The three AD algorithms were applied to each scene, and the distributions of the most anomalous 5% of pixels were compared. We find that subspace-RX performs better than RX, because the data becomes more normal when the highest variance principal components are removed. We also see that compared to statistical detectors, anomalies detected by TAD are easier to separate from the background. Due to their different underlying assumptions, the statistical and graph-based algorithms highlighted different anomalies within the urban scene. These results will lead to a deeper understanding of these algorithms and their applicability across different types of imagery.
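
    The statistical baseline in this comparison, the RX detector, is compact enough to sketch: it scores each pixel by its Mahalanobis distance from the scene mean. The implementation below is a generic global RX in NumPy, not the authors' code; subspace-RX would additionally project out the leading principal components first.

    ```python
    # A compact global RX detector: Mahalanobis distance of each pixel spectrum
    # from the scene mean. This is the generic statistical baseline, not the
    # authors' implementation.
    import numpy as np

    def rx(cube):
        """cube: (rows, cols, bands) hyperspectral image -> (rows, cols) RX scores."""
        h, w, b = cube.shape
        X = cube.reshape(-1, b).astype(float)
        mu = X.mean(axis=0)
        cov = np.cov(X, rowvar=False) + 1e-6 * np.eye(b)   # regularize for stability
        ic = np.linalg.inv(cov)
        d = X - mu
        scores = np.einsum("ij,jk,ik->i", d, ic, d)        # squared Mahalanobis distance
        return scores.reshape(h, w)

    # Subspace-RX (sketch): first project the data onto the complement of the
    # leading eigenvectors of cov, suppressing high-variance background structure,
    # then compute the same distance in that subspace.
    ```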

  9. Anomaly Detection in Large Sets of High-Dimensional Symbol Sequences

    NASA Technical Reports Server (NTRS)

    Budalakoti, Suratna; Srivastava, Ashok N.; Akella, Ram; Turkov, Eugene

    2006-01-01

    This paper addresses the problem of detecting and describing anomalies in large sets of high-dimensional symbol sequences. The approach taken uses unsupervised clustering of sequences using the normalized longest common subsequence (LCS) as a similarity measure, followed by detailed analysis of outliers to detect anomalies. As the LCS measure is expensive to compute, the first part of the paper discusses existing algorithms, such as the Hunt-Szymanski algorithm, that have low time-complexity. We then discuss why these algorithms often do not work well in practice and present a new hybrid algorithm for computing the LCS that, in our tests, outperforms the Hunt-Szymanski algorithm by a factor of five. The second part of the paper presents new algorithms for outlier analysis that provide comprehensible indicators as to why a particular sequence was deemed to be an outlier. The algorithms provide a coherent description to an analyst of the anomalies in the sequence, compared to more normal sequences. The algorithms we present are general and domain-independent, so we discuss applications in related areas such as anomaly detection.
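
    The similarity measure at the core of this approach is easy to state in code. The sketch below is the standard quadratic-time LCS dynamic program with one common normalization (dividing by the geometric mean of the lengths); it is not the faster hybrid algorithm the paper develops.

    ```python
    # Normalized longest common subsequence (LCS) similarity. This is the
    # standard O(len(a)*len(b)) dynamic program, not the paper's faster hybrid.
    def lcs_length(a, b):
        m, n = len(a), len(b)
        dp = [[0] * (n + 1) for _ in range(m + 1)]
        for i in range(1, m + 1):
            for j in range(1, n + 1):
                if a[i - 1] == b[j - 1]:
                    dp[i][j] = dp[i - 1][j - 1] + 1
                else:
                    dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
        return dp[m][n]

    def normalized_lcs(a, b):
        """Similarity in [0, 1]; here normalized by sqrt(len(a) * len(b))."""
        return lcs_length(a, b) / (len(a) * len(b)) ** 0.5

    print(normalized_lcs("ABCBDAB", "BDCABA"))   # LCS length 4 -> 4 / sqrt(42)
    ```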

  10. Detection of anomaly in human retina using Laplacian Eigenmaps and vectorized matched filtering

    NASA Astrophysics Data System (ADS)

    Yacoubou Djima, Karamatou A.; Simonelli, Lucia D.; Cunningham, Denise; Czaja, Wojciech

    2015-03-01

    We present a novel method for automated anomaly detection on autofluorescence data provided by the National Institutes of Health (NIH). This is motivated by the need for new tools to improve the capability of diagnosing macular degeneration in its early stages, tracking its progression over time, and testing the effectiveness of new treatment methods. In previous work, macular anomalies have been detected automatically through multiscale analysis procedures such as wavelet analysis or dimensionality reduction algorithms, followed by a classification algorithm, e.g., a Support Vector Machine. The method that we propose is a Vectorized Matched Filtering (VMF) algorithm combined with Laplacian Eigenmaps (LE), a nonlinear dimensionality reduction algorithm with locality-preserving properties. By applying LE, we are able to represent the data in the form of eigenimages, some of which accentuate the visibility of anomalies. We pick significant eigenimages and proceed with the VMF algorithm, which classifies anomalies across all of these eigenimages simultaneously. To evaluate our performance, we compare our method to two other schemes: a matched filtering algorithm based on anomaly detection on single images, and a combination of PCA and VMF. LE combined with VMF performs best, yielding a high rate of accurate anomaly detection. This shows the advantage of using a nonlinear approach to represent the data and the effectiveness of VMF, which operates on the images as a data cube rather than as individual images.

  11. Lidar detection algorithm for time and range anomalies.

    PubMed

    Ben-David, Avishai; Davidson, Charles E; Vanderbeek, Richard G

    2007-10-10

    A new detection algorithm for lidar applications has been developed. The detection is based on hyperspectral anomaly detection that is implemented for time anomaly, where the question "is a target (aerosol cloud) present at range R within time t1 to t2" is addressed, and for range anomaly, where the question "is a target present at time t within ranges R1 and R2" is addressed. A detection score significantly different in magnitude from the detection scores for background measurements suggests that an anomaly (interpreted as the presence of a target signal in space/time) exists. The algorithm employs an option for a preprocessing stage where undesired oscillations and artifacts are filtered out with a low-rank orthogonal projection technique. The filtering technique adaptively removes the one-over-range-squared dependence of the background contribution of the lidar signal and also aids visualization of features in the data when the signal-to-noise ratio is low. A Gaussian-mixture probability model for two hypotheses (anomaly present or absent) is computed with an expectation-maximization algorithm to produce a detection threshold and probabilities of detection and false alarm. Results of the algorithm for CO2 lidar measurements of the bioaerosol clouds Bacillus atrophaeus (formerly known as Bacillus subtilis niger, BG) and Pantoea agglomerans, Pa (formerly known as Erwinia herbicola, Eh) are shown and discussed.
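
    The two-hypothesis step can be sketched with an off-the-shelf EM implementation: fit a two-component Gaussian mixture to detection scores and read off a threshold where the "anomaly" component becomes the more probable one. The scores below are synthetic stand-ins for the paper's lidar detection scores.

    ```python
    # Two-hypothesis thresholding via a Gaussian mixture fitted by EM, assuming
    # scikit-learn. The scores are synthetic stand-ins for lidar detection scores.
    import numpy as np
    from sklearn.mixture import GaussianMixture

    rng = np.random.default_rng(2)
    scores = np.r_[rng.normal(0, 1, 500),        # background scores
                   rng.normal(4, 1, 30)].reshape(-1, 1)   # anomaly scores

    gm = GaussianMixture(n_components=2, random_state=0).fit(scores)
    hi = np.argmax(gm.means_.ravel())            # component modeling "target present"
    grid = np.linspace(scores.min(), scores.max(), 2000).reshape(-1, 1)
    post = gm.predict_proba(grid)[:, hi]
    threshold = grid[np.argmax(post >= 0.5)][0]  # first point where P(target) >= 0.5
    print(f"detection threshold = {threshold:.2f}")
    # P_d and P_fa then follow from the two fitted components above the threshold.
    ```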

  12. Hyperspectral anomaly detection using Sony PlayStation 3

    NASA Astrophysics Data System (ADS)

    Rosario, Dalton; Romano, João; Sepulveda, Rene

    2009-05-01

    We present a proof-of-principle demonstration using Sony's IBM Cell processor-based PlayStation 3 (PS3) to run-in near real-time-a hyperspectral anomaly detection algorithm (HADA) on real hyperspectral (HS) long-wave infrared imagery. The PS3 console proved to be ideal for doing precisely the kind of heavy computational lifting HS based algorithms require, and the fact that it is a relatively open platform makes programming scientific applications feasible. The PS3 HADA is a unique parallel-random sampling based anomaly detection approach that does not require prior spectra of the clutter background. The PS3 HADA is designed to handle known underlying difficulties (e.g., target shape/scale uncertainties) often ignored in the development of autonomous anomaly detection algorithms. The effort is part of an ongoing cooperative contribution between the Army Research Laboratory and the Army's Armament, Research, Development and Engineering Center, which aims at demonstrating performance of innovative algorithmic approaches for applications requiring autonomous anomaly detection using passive sensors.

  13. Detecting an atomic clock frequency anomaly using an adaptive Kalman filter algorithm

    NASA Astrophysics Data System (ADS)

    Song, Huijie; Dong, Shaowu; Wu, Wenjun; Jiang, Meng; Wang, Weixiong

    2018-06-01

    The abnormal frequency behaviors of an atomic clock mainly include frequency jumps and frequency drift jumps. Atomic clock frequency anomaly detection is a key technique in time-keeping. The Kalman filter algorithm, as a linear optimal algorithm, has been widely used in real-time detection of abnormal frequency. In order to obtain an optimal state estimate, the observation model and dynamic model of the Kalman filter algorithm should satisfy Gaussian white noise conditions; the detection performance is degraded if anomalies affect either model. The idea of the adaptive Kalman filter algorithm, applied to clock frequency anomaly detection, is to use the residuals given by the prediction to build an adaptive factor; the predicted state covariance matrix is then corrected in real time by this adaptive factor. The results show that the model error is reduced and the detection performance is improved. The effectiveness of the algorithm is verified, using the chi-square test, on simulated frequency jumps, simulated frequency drift jumps, and measured atomic clock data.
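
    A one-dimensional sketch of the residual-driven adaptation is given below: when the innovation is large relative to its predicted spread, the predicted covariance is inflated. The specific form of the adaptive factor here is an assumption for illustration; the paper defines its own.

    ```python
    # A 1-D Kalman filter with a residual-based adaptive factor. The form of the
    # adaptive factor (inflating by (ratio/c)^2) is an illustrative assumption.
    import numpy as np

    def adaptive_kalman(z, q=1e-4, r=1e-2, c=3.0):
        """z: measurement sequence -> list of (estimate, anomaly_flag)."""
        x, p = z[0], 1.0
        out = []
        for zk in z[1:]:
            xp, pp = x, p + q                  # predict (random-walk model)
            s = pp + r                         # innovation variance
            ratio = abs(zk - xp) / np.sqrt(s)  # normalized residual
            if ratio > c:                      # anomaly-sized residual:
                pp *= (ratio / c) ** 2         # inflate covariance (adaptive factor)
            k = pp / (pp + r)                  # Kalman gain
            x = xp + k * (zk - xp)             # update state estimate
            p = (1 - k) * pp
            out.append((x, ratio > c))         # estimate + anomaly flag
        return out
    ```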

  14. Time series analysis of infrared satellite data for detecting thermal anomalies: a hybrid approach

    NASA Astrophysics Data System (ADS)

    Koeppen, W. C.; Pilger, E.; Wright, R.

    2011-07-01

    We developed and tested an automated algorithm that analyzes thermal infrared satellite time series data to detect and quantify the excess energy radiated from thermal anomalies such as active volcanoes. Our algorithm enhances the previously developed MODVOLC approach, a simple point operation, by adding a more complex time series component based on the methods of the Robust Satellite Techniques (RST) algorithm. Using test sites at Anatahan and Kīlauea volcanoes, the hybrid time series approach detected ~15% more thermal anomalies than MODVOLC with very few, if any, known false detections. We also tested gas flares in the Cantarell oil field in the Gulf of Mexico as an end-member scenario representing very persistent thermal anomalies. At Cantarell, the hybrid algorithm showed only a slight improvement, but it did identify flares that were undetected by MODVOLC. We estimate that at least 80 MODIS images for each calendar month are required to create good reference images necessary for the time series analysis of the hybrid algorithm. The improved performance of the new algorithm over MODVOLC will result in the detection of low temperature thermal anomalies that will be useful in improving our ability to document Earth's volcanic eruptions, as well as detecting low temperature thermal precursors to larger eruptions.

  15. DeepAnomaly: Combining Background Subtraction and Deep Learning for Detecting Obstacles and Anomalies in an Agricultural Field.

    PubMed

    Christiansen, Peter; Nielsen, Lars N; Steen, Kim A; Jørgensen, Rasmus N; Karstoft, Henrik

    2016-11-11

    Convolutional neural network (CNN)-based systems are increasingly used in autonomous vehicles for detecting obstacles. CNN-based object detection and per-pixel classification (semantic segmentation) algorithms are trained for detecting and classifying a predefined set of object types. These algorithms have difficulties in detecting distant and heavily occluded objects and are, by definition, not capable of detecting unknown object types or unusual scenarios. The visual characteristics of an agricultural field are homogeneous, and obstacles such as people and animals occur rarely and are of distinct appearance compared to the field. This paper introduces DeepAnomaly, an algorithm combining deep learning and anomaly detection to exploit the homogeneous characteristics of a field to perform anomaly detection. We demonstrate DeepAnomaly as a fast state-of-the-art detector for obstacles that are distant, heavily occluded and unknown. DeepAnomaly is compared to state-of-the-art obstacle detectors including "Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks" (RCNN). In a human detector test case, we demonstrate that DeepAnomaly detects humans at longer ranges (45-90 m) than RCNN. RCNN has a similar performance at short range (0-30 m). However, DeepAnomaly has far fewer model parameters and (182 ms/25 ms =) a 7.28-times faster processing time per image. Unlike most CNN-based methods, its high accuracy, low computation time and low memory footprint make it suitable for a real-time system running on an embedded GPU (Graphics Processing Unit).

  16. DeepAnomaly: Combining Background Subtraction and Deep Learning for Detecting Obstacles and Anomalies in an Agricultural Field

    PubMed Central

    Christiansen, Peter; Nielsen, Lars N.; Steen, Kim A.; Jørgensen, Rasmus N.; Karstoft, Henrik

    2016-01-01

    Convolutional neural network (CNN)-based systems are increasingly used in autonomous vehicles for detecting obstacles. CNN-based object detection and per-pixel classification (semantic segmentation) algorithms are trained for detecting and classifying a predefined set of object types. These algorithms have difficulties in detecting distant and heavily occluded objects and are, by definition, not capable of detecting unknown object types or unusual scenarios. The visual characteristics of an agricultural field are homogeneous, and obstacles such as people and animals occur rarely and are of distinct appearance compared to the field. This paper introduces DeepAnomaly, an algorithm combining deep learning and anomaly detection to exploit the homogeneous characteristics of a field to perform anomaly detection. We demonstrate DeepAnomaly as a fast state-of-the-art detector for obstacles that are distant, heavily occluded and unknown. DeepAnomaly is compared to state-of-the-art obstacle detectors including "Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks" (RCNN). In a human detector test case, we demonstrate that DeepAnomaly detects humans at longer ranges (45-90 m) than RCNN. RCNN has a similar performance at short range (0-30 m). However, DeepAnomaly has far fewer model parameters and (182 ms/25 ms =) a 7.28-times faster processing time per image. Unlike most CNN-based methods, its high accuracy, low computation time and low memory footprint make it suitable for a real-time system running on an embedded GPU (Graphics Processing Unit). PMID:27845717

  17. An Integrated Intrusion Detection Model of Cluster-Based Wireless Sensor Network

    PubMed Central

    Sun, Xuemei; Yan, Bo; Zhang, Xinzhong; Rong, Chuitian

    2015-01-01

    Considering wireless sensor network characteristics, this paper combines anomaly and misuse detection and proposes an integrated intrusion detection model for cluster-based wireless sensor networks, aiming at enhancing the detection rate and reducing the false-alarm rate. An AdaBoost algorithm with hierarchical structures is used for anomaly detection at sensor nodes, cluster-head nodes and sink nodes. A Back Propagation network optimized by a Cultural Algorithm and an Artificial Fish Swarm Algorithm is applied to misuse detection at the sink node. Extensive simulations demonstrate that this integrated model has strong intrusion detection performance. PMID:26447696

  18. An Integrated Intrusion Detection Model of Cluster-Based Wireless Sensor Network.

    PubMed

    Sun, Xuemei; Yan, Bo; Zhang, Xinzhong; Rong, Chuitian

    2015-01-01

    Considering wireless sensor network characteristics, this paper combines anomaly and misuse detection and proposes an integrated intrusion detection model for cluster-based wireless sensor networks, aiming at enhancing the detection rate and reducing the false-alarm rate. An AdaBoost algorithm with hierarchical structures is used for anomaly detection at sensor nodes, cluster-head nodes and sink nodes. A Back Propagation network optimized by a Cultural Algorithm and an Artificial Fish Swarm Algorithm is applied to misuse detection at the sink node. Extensive simulations demonstrate that this integrated model has strong intrusion detection performance.

  19. A hyperspectral imagery anomaly detection algorithm based on local three-dimensional orthogonal subspace projection

    NASA Astrophysics Data System (ADS)

    Zhang, Xing; Wen, Gongjian

    2015-10-01

    Anomaly detection (AD) is becoming increasingly important in hyperspectral imagery analysis, with many practical applications. The local orthogonal subspace projection (LOSP) detector is a popular anomaly detector which exploits local endmembers/eigenvectors around the pixel under test (PUT) to construct a background subspace. However, this subspace only takes advantage of the spectral information, while the spatial correlation of the background clutter is neglected, which makes the anomaly detection result sensitive to the accuracy of the estimated subspace. In this paper, a local three-dimensional orthogonal subspace projection (3D-LOSP) algorithm is proposed. Firstly, using both spectral and spatial information jointly, three directional background subspaces are created along the image height direction, the image width direction and the spectral direction, respectively. Then, the three corresponding orthogonal subspaces are calculated. After that, each vector along the three directions of the local cube is projected onto the corresponding orthogonal subspace. Finally, a composite score is given through the three directional operators. In 3D-LOSP, anomalies are redefined as targets that are not only spectrally different from the background but also spatially distinct. Thanks to the addition of spatial information, the robustness of the anomaly detection result is greatly improved by the proposed 3D-LOSP algorithm. It is noteworthy that the proposed algorithm is an extension of LOSP, and this idea can inspire many other spectral-based anomaly detection methods. Experiments with real hyperspectral images have demonstrated the stability of the detection result.

  20. Intrusion-aware alert validation algorithm for cooperative distributed intrusion detection schemes of wireless sensor networks.

    PubMed

    Shaikh, Riaz Ahmed; Jameel, Hassan; d'Auriol, Brian J; Lee, Heejo; Lee, Sungyoung; Song, Young-Jae

    2009-01-01

    Existing anomaly and intrusion detection schemes for wireless sensor networks have mainly focused on the detection of intrusions. Once an intrusion is detected, alerts or claims will be generated. However, any unidentified malicious nodes in the network could send faulty anomaly and intrusion claims about the legitimate nodes to the other nodes. Verifying the validity of such claims is a critical and challenging issue that is not considered in the existing cooperative-based distributed anomaly and intrusion detection schemes for wireless sensor networks. In this paper, we propose a validation algorithm that addresses this problem. This algorithm utilizes the concept of intrusion-aware reliability, which helps to provide adequate reliability at a modest communication cost. In this paper, we also provide a security resiliency analysis of the proposed intrusion-aware alert validation algorithm.

  21. Intrusion-Aware Alert Validation Algorithm for Cooperative Distributed Intrusion Detection Schemes of Wireless Sensor Networks

    PubMed Central

    Shaikh, Riaz Ahmed; Jameel, Hassan; d’Auriol, Brian J.; Lee, Heejo; Lee, Sungyoung; Song, Young-Jae

    2009-01-01

    Existing anomaly and intrusion detection schemes for wireless sensor networks have mainly focused on the detection of intrusions. Once an intrusion is detected, alerts or claims will be generated. However, any unidentified malicious nodes in the network could send faulty anomaly and intrusion claims about the legitimate nodes to the other nodes. Verifying the validity of such claims is a critical and challenging issue that is not considered in the existing cooperative-based distributed anomaly and intrusion detection schemes for wireless sensor networks. In this paper, we propose a validation algorithm that addresses this problem. This algorithm utilizes the concept of intrusion-aware reliability, which helps to provide adequate reliability at a modest communication cost. In this paper, we also provide a security resiliency analysis of the proposed intrusion-aware alert validation algorithm. PMID:22454568

  22. Detecting Anomalies in Process Control Networks

    NASA Astrophysics Data System (ADS)

    Rrushi, Julian; Kang, Kyoung-Don

    This paper presents the estimation-inspection algorithm, a statistical algorithm for anomaly detection in process control networks. The algorithm determines if the payload of a network packet that is about to be processed by a control system is normal or abnormal based on the effect that the packet will have on a variable stored in control system memory. The estimation part of the algorithm uses logistic regression integrated with maximum likelihood estimation in an inductive machine learning process to estimate a series of statistical parameters; these parameters are used in conjunction with logistic regression formulas to form a probability mass function for each variable stored in control system memory. The inspection part of the algorithm uses the probability mass functions to estimate the normalcy probability of a specific value that a network packet writes to a variable. Experimental results demonstrate that the algorithm is very effective at detecting anomalies in process control networks.
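
    A toy version of the inspection idea can be written with ordinary logistic regression: learn, per control-system variable, how probable each written value is under normal traffic, then flag writes with low normalcy probability. The quadratic features, data, and thresholding below are illustrative assumptions, not the paper's estimation procedure.

    ```python
    # A toy normalcy-probability model for one control-system variable, assuming
    # scikit-learn. Quadratic features let the fitted "normal" region be an
    # interval of values rather than a one-sided boundary.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import PolynomialFeatures, StandardScaler

    rng = np.random.default_rng(3)
    normal_writes = rng.normal(50, 3, 500)      # values seen in normal traffic
    background = rng.uniform(0, 100, 500)       # off-distribution reference samples
    X = np.r_[normal_writes, background].reshape(-1, 1)
    y = np.r_[np.ones(500), np.zeros(500)]      # 1 = observed under normal operation

    model = make_pipeline(PolynomialFeatures(2), StandardScaler(), LogisticRegression())
    model.fit(X, y)

    def normalcy(value):
        """Estimated probability that writing `value` to this variable is normal."""
        return model.predict_proba([[value]])[0, 1]

    print(f"P(normal | 52) = {normalcy(52):.2f}")   # near the operating point -> high
    print(f"P(normal | 95) = {normalcy(95):.2f}")   # far from it -> low
    ```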

  23. A system for learning statistical motion patterns.

    PubMed

    Hu, Weiming; Xiao, Xuejuan; Fu, Zhouyu; Xie, Dan; Tan, Tieniu; Maybank, Steve

    2006-09-01

    Analysis of motion patterns is an effective approach for anomaly detection and behavior prediction. Current approaches for the analysis of motion patterns depend on known scenes, where objects move in predefined ways. It is highly desirable to automatically construct object motion patterns which reflect the knowledge of the scene. In this paper, we present a system for automatically learning motion patterns for anomaly detection and behavior prediction based on a proposed algorithm for robustly tracking multiple objects. In the tracking algorithm, foreground pixels are clustered using a fast accurate fuzzy K-means algorithm. Growing and prediction of the cluster centroids of foreground pixels ensure that each cluster centroid is associated with a moving object in the scene. In the algorithm for learning motion patterns, trajectories are clustered hierarchically using spatial and temporal information and then each motion pattern is represented with a chain of Gaussian distributions. Based on the learned statistical motion patterns, statistical methods are used to detect anomalies and predict behaviors. Our system is tested using image sequences acquired, respectively, from a crowded real traffic scene and a model traffic scene. Experimental results show the robustness of the tracking algorithm, the efficiency of the algorithm for learning motion patterns, and the encouraging performance of algorithms for anomaly detection and behavior prediction.

  24. Bio-Inspired Distributed Decision Algorithms for Anomaly Detection

    DTIC Science & Technology

    2017-03-01

    [DTIC report front matter; the abstract was not recovered in extraction.] Keywords: DIAMoND, Local Anomaly Detector, Total Impact Estimation, Threat Level Estimator. The recoverable table-of-contents fragments refer to the performance of the DIAMoND algorithm for DNS-server-level attack detection and mitigation, and to a hierarchical two-level topology.

  25. Enhanced detection and visualization of anomalies in spectral imagery

    NASA Astrophysics Data System (ADS)

    Basener, William F.; Messinger, David W.

    2009-05-01

    Anomaly detection algorithms applied to hyperspectral imagery are able to reliably identify man-made objects in a natural environment based on statistical/geometric likelihood. The process is more robust than target identification, which requires precise prior knowledge of the object of interest, but has an inherently higher false alarm rate. Standard anomaly detection algorithms measure the deviation of pixel spectra from a parametric model (either statistical or linear mixing) estimating the image background. The topological anomaly detector (TAD) creates a fully non-parametric, graph-theory-based, topological model of the image background and measures deviation from this background using codensity. In this paper we present a large-scale comparative test of TAD against 80+ targets in four full HYDICE images, using the entire canonical target set for generation of ROC curves. TAD is compared against several statistics-based detectors, including local RX and subspace RX. Even a perfect anomaly detection algorithm would have a high practical false alarm rate in most scenes, simply because the user/analyst is not interested in every anomalous object. To assist the analyst in identifying and sorting objects of interest, we investigate coloring the anomalies with principal component projections using statistics computed from the anomalies. This yields a very useful colorization of anomalies in which objects of similar material tend to have the same color, enabling an analyst to quickly sort and identify the anomalies of highest interest.

  26. A new comparison of hyperspectral anomaly detection algorithms for real-time applications

    NASA Astrophysics Data System (ADS)

    Díaz, María.; López, Sebastián.; Sarmiento, Roberto

    2016-10-01

    Due to the high spectral resolution that remotely sensed hyperspectral images provide, there has been increasing interest in anomaly detection. The aim of anomaly detection is to single out pixels whose spectral signatures differ significantly from the background spectra. Basically, anomaly detectors mark pixels with a certain score, considering as anomalies those whose scores are higher than a threshold. Receiver Operating Characteristic (ROC) curves have been widely used as an assessment measure in order to compare the performance of different algorithms. ROC curves are graphical plots which illustrate the trade-off between false positive and true positive rates. However, they are of limited use for deep comparisons, because they discard relevant factors required in real-time applications such as run times, costs of misclassification, and the ability to mark anomalies with high scores. This last factor is fundamental in anomaly detection, allowing anomalies to be distinguished easily from the background without any posterior processing. An extensive set of simulations has been performed using different anomaly detection algorithms, comparing their performance and efficiency using several extra metrics in order to complement ROC curve analysis. The results support our proposal and demonstrate that ROC curves by themselves do not provide a good visualization of detection performance. Moreover, a figure of merit is proposed in this paper which encompasses in a single global metric all the measures yielded by the proposed additional metrics. This figure, named Detection Efficiency (DE), takes into account several crucial types of performance assessment that ROC curves do not consider. The results demonstrate that algorithms with the best detection performance according to ROC curves do not have the highest DE values. Consequently, the recommendation to use extra measures to properly evaluate performance is supported and justified by the conclusions drawn from the simulations.

  27. Seismic data fusion anomaly detection

    NASA Astrophysics Data System (ADS)

    Harrity, Kyle; Blasch, Erik; Alford, Mark; Ezekiel, Soundararajan; Ferris, David

    2014-06-01

    Detecting anomalies in non-stationary signals has valuable applications in many fields, including medicine and meteorology, such as identifying possible heart conditions from electrocardiography (ECG) signals or predicting earthquakes via seismographic data. Given the many available anomaly detection algorithms, it is important to compare possible methods. In this paper, we examine and compare two approaches to anomaly detection and see how data fusion methods may improve performance. The first approach involves using an artificial neural network (ANN) to detect anomalies in a wavelet de-noised signal. The other method uses a perspective neural network (PNN) to analyze an arbitrary number of "perspectives" or transformations of the observed signal for anomalies. Possible perspectives may include wavelet de-noising, the Fourier transform, peak-filtering, etc. In order to evaluate these techniques via signal fusion metrics, we apply signal preprocessing techniques such as de-noising to the original signal and then use a neural network to find anomalies in the generated signal. From this secondary result it is possible to use data fusion techniques that can be evaluated via existing data fusion metrics for single and multiple perspectives. The results show which anomaly detection method, according to the metrics, is better suited overall for anomaly detection applications. The method used in this study could be applied to compare other signal processing algorithms.

  28. Firefly Algorithm in detection of TEC seismo-ionospheric anomalies

    NASA Astrophysics Data System (ADS)

    Akhoondzadeh, Mehdi

    2015-07-01

    Anomaly detection in time series of different earthquake precursors is an essential step toward an early warning system with an acceptable uncertainty. Since these time series are often nonlinear, complex and massive, the applied prediction method should be able to detect discord patterns in a large amount of data in a short time. This study puts forward the Firefly Algorithm (FA) as a simple and robust predictor to detect TEC (Total Electron Content) seismo-ionospheric anomalies around the times of some powerful earthquakes, including Chile (27 February 2010), Varzeghan (11 August 2012) and Saravan (16 April 2013). Outstanding anomalies were observed 7 and 5 days before the Chile and Varzeghan earthquakes, respectively, and 3 and 8 days prior to the Saravan earthquake.

  29. Development of anomaly detection models for deep subsurface monitoring

    NASA Astrophysics Data System (ADS)

    Sun, A. Y.

    2017-12-01

    Deep subsurface repositories are used for waste disposal and carbon sequestration. Monitoring deep subsurface repositories for potential anomalies is challenging, not only because the number of sensor networks and the quality of data are often limited, but also because of the lack of labeled data needed to train and validate machine learning (ML) algorithms. Although physical simulation models may be applied to predict anomalies (or the system's nominal state, for that matter), the accuracy of such predictions may be limited by inherent conceptual and parameter uncertainties. The main objective of this study was to demonstrate the potential of data-driven models for leakage detection in carbon sequestration repositories. Monitoring data collected during an artificial CO2 release test at a carbon sequestration repository were used, including both scalar time series (pressure) and vector time series (distributed temperature sensing). For each type of data, separate online anomaly detection algorithms were developed using the baseline experiment data (no leak) and then tested on the leak experiment data. The performance of a number of different online algorithms was compared. Results show the importance of including contextual information in the dataset to mitigate the impact of reservoir noise and reduce the false positive rate. The developed algorithms were integrated into a generic Web-based platform for real-time anomaly detection.
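
    One of the simplest online detectors of the kind such a study would compare is a running z-score: learn the mean and variance from the baseline (no-leak) record, then flag monitoring samples that deviate by more than k sigma. The sketch below uses Welford's online update; the threshold and data are illustrative.

    ```python
    # A minimal online anomaly detector: running statistics learned from the
    # baseline record, k-sigma flagging during monitoring. Threshold and data
    # are illustrative assumptions.
    import numpy as np

    class OnlineZScore:
        def __init__(self, k=4.0):
            self.n, self.mean, self.m2, self.k = 0, 0.0, 0.0, k

        def update_baseline(self, x):
            """Welford's online update of mean and variance."""
            self.n += 1
            d = x - self.mean
            self.mean += d / self.n
            self.m2 += d * (x - self.mean)

        def is_anomaly(self, x):
            std = (self.m2 / max(self.n - 1, 1)) ** 0.5
            return abs(x - self.mean) > self.k * std

    rng = np.random.default_rng(4)
    det = OnlineZScore()
    for v in rng.normal(10.0, 0.5, 1000):     # baseline (no-leak) pressure record
        det.update_baseline(v)
    print(det.is_anomaly(10.3), det.is_anomaly(14.0))   # False, True
    ```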

  30. Locality-constrained anomaly detection for hyperspectral imagery

    NASA Astrophysics Data System (ADS)

    Liu, Jiabin; Li, Wei; Du, Qian; Liu, Kui

    2015-12-01

    Detecting a target with a low occurrence probability against an unknown background in a hyperspectral image, namely anomaly detection, is of practical significance. The Reed-Xiaoli (RX) algorithm is considered a classic anomaly detector; it calculates the Mahalanobis distance between the local background and the pixel under test. Local RX, an adaptive RX detector, employs a dual-window strategy that treats pixels within the frame between the inner and outer windows as the local background. However, the detector is sensitive if such a local region contains anomalous pixels (i.e., outliers). In this paper, a locality-constrained anomaly detector is proposed to remove outliers in the local background region before employing the RX algorithm. Specifically, a local linear representation is designed to exploit the internal relationship between linearly correlated pixels in the local background region and the pixel under test and its neighbors. Experimental results demonstrate that the proposed detector improves the original local RX algorithm.

  31. First trimester PAPP-A in the detection of non-Down syndrome aneuploidy.

    PubMed

    Ochshorn, Y; Kupferminc, M J; Wolman, I; Orr-Urtreger, A; Jaffa, A J; Yaron, Y

    2001-07-01

    Combined first trimester screening using pregnancy-associated plasma protein-A (PAPP-A), free beta-human chorionic gonadotrophin, and nuchal translucency (NT) is currently accepted as probably the best combination for the detection of Down syndrome (DS). Current first trimester algorithms provide computed risks only for DS. However, low PAPP-A is also associated with other chromosome anomalies such as trisomy 13, trisomy 18, and sex chromosome aneuploidy. Thus, using currently available algorithms, some chromosome anomalies may not be detected. The purpose of the present study was to establish a low-end cut-off value for PAPP-A that would increase the detection rates for non-DS chromosome anomalies. The study included 1408 patients who underwent combined first trimester screening. To determine a low-end cut-off value for PAPP-A, a receiver operating characteristic (ROC) curve analysis was performed. In the entire study group there were 18 cases of chromosome anomalies (trisomy 21, 13, 18, and sex chromosome anomalies), 14 of which were among screen-positive patients, a detection rate of 77.7% for all chromosome anomalies (95% CI: 55.7-99.7%). ROC curve analysis detected a statistically significant cut-off for PAPP-A at 0.25 MoM. If the definition of screen-positive were to also include patients with PAPP-A < 0.25 MoM, the detection rate would increase to 88.8% for all chromosome anomalies (95% CI: 71.6-106%). This low cut-off value may be used until specific algorithms are implemented for non-Down syndrome aneuploidy.

  32. MUSIC algorithm for location searching of dielectric anomalies from S-parameters using microwave imaging

    NASA Astrophysics Data System (ADS)

    Park, Won-Kwang; Kim, Hwa Pyung; Lee, Kwang-Jae; Son, Seong-Ho

    2017-11-01

    Motivated by the biomedical engineering used in early-stage breast cancer detection, we investigated the use of MUltiple SIgnal Classification (MUSIC) algorithm for location searching of small anomalies using S-parameters. We considered the application of MUSIC to functional imaging where a small number of dipole antennas are used. Our approach is based on the application of Born approximation or physical factorization. We analyzed cases in which the anomaly is respectively small and large in relation to the wavelength, and the structure of the left-singular vectors is linked to the nonzero singular values of a Multi-Static Response (MSR) matrix whose elements are the S-parameters. Using simulations, we demonstrated the strengths and weaknesses of the MUSIC algorithm in detecting both small and extended anomalies.

  33. Anomaly Detection and Life Pattern Estimation for the Elderly Based on Categorization of Accumulated Data

    NASA Astrophysics Data System (ADS)

    Mori, Taketoshi; Ishino, Takahito; Noguchi, Hiroshi; Shimosaka, Masamichi; Sato, Tomomasa

    2011-06-01

    We propose a life pattern estimation method and an anomaly detection method for elderly people living alone. In our observation system for such people, we deploy pyroelectric sensors in the house and measure the person's activities at all times in order to grasp the person's life pattern. The data are transferred successively to the operation center and displayed precisely to the nurses in the center. The nurses then decide whether the data represent an anomaly or not. In the system, people whose life patterns resemble each other are categorized into the same group. Anomalies that occurred in the past are shared within the group and utilized in the anomaly detection algorithm. This algorithm is based on an "anomaly score," which is computed from the activeness of the person. This activeness is approximately proportional to the frequency of the sensor response in a minute. The "anomaly score" is calculated from the difference between the activeness in the present and the long-term average of the past activeness. Thus, the score is positive if the present activeness is higher than the past average, and negative if it is lower. If the score exceeds a certain threshold, an anomaly event has occurred. Moreover, we developed an activity estimation algorithm that estimates the residents' basic activities such as getting up, going out, and so on. The estimate is shown to the nurses together with the "anomaly score" of the residents. The nurses can understand the residents' health conditions by combining these two pieces of information.
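
    The anomaly score described here admits a direct reading in code: the difference between the current activeness and its long-term average, thresholded. The sketch below implements that reading; the window length and threshold are assumptions.

    ```python
    # A direct reading of the abstract's "anomaly score": present activeness
    # minus its long-term average. Window length and threshold are assumptions.
    import numpy as np

    def anomaly_scores(activeness, long_term_days=30, threshold=2.0):
        """activeness: one value per day (e.g., mean sensor responses/minute)."""
        activeness = np.asarray(activeness, float)
        scores, flags = [], []
        for t in range(len(activeness)):
            past = activeness[max(0, t - long_term_days):t]
            if len(past) == 0:
                scores.append(0.0)
                flags.append(False)
                continue
            score = activeness[t] - past.mean()   # positive: more active than usual
            scores.append(score)
            flags.append(abs(score) > threshold)  # either direction may be anomalous
        return np.array(scores), np.array(flags)
    ```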

  34. A novel approach for detection of anomalies using measurement data of the Ironton-Russell bridge

    NASA Astrophysics Data System (ADS)

    Zhang, Fan; Norouzi, Mehdi; Hunt, Victor; Helmicki, Arthur

    2015-04-01

    Data models have been increasingly used in recent years for documenting the normal behavior of structures and hence detecting and classifying anomalies. Large numbers of machine learning algorithms have been proposed by various researchers to model operational and functional changes in structures; however, only a limited number of studies have been applied to actual measurement data, owing to limited access to long-term measurement data of structures and lack of access to their damaged states. By monitoring the structure during construction and reviewing the effect of construction events on the measurement data, this study introduces a new approach to detect, and eventually classify, anomalies during and after construction. First, the implementation of the sensor network, which grew as the bridge was being built, and its current status are detailed. Second, the proposed anomaly detection algorithm is applied to the collected data, and finally, detected anomalies are validated against the archived construction events.

  35. Performances of Machine Learning Algorithms for Binary Classification of Network Anomaly Detection System

    NASA Astrophysics Data System (ADS)

    Nawir, Mukrimah; Amir, Amiza; Lynn, Ong Bi; Yaakob, Naimah; Badlishah Ahmad, R.

    2018-05-01

    The rapid growth of networked technologies exposes them to various network attacks, owing to the nature of the data, which are frequently exchanged over the Internet, and to the large-scale data that must be handled. Moreover, network anomaly detection using machine learning faces difficulty because few labelled network datasets are publicly available, which has caused many researchers to keep using the most common network dataset (KDDCup99), a dataset no longer well suited for employing machine learning (ML) algorithms for classification. Several issues regarding these available labelled network datasets are discussed in this paper. The aim of this paper is to build a network anomaly detection system using machine learning algorithms that is efficient, effective and fast. The findings show that the AODE algorithm performs well in terms of accuracy and processing time for binary classification on the UNSW-NB15 dataset.

  36. CHAMP: a locally adaptive unmixing-based hyperspectral anomaly detection algorithm

    NASA Astrophysics Data System (ADS)

    Crist, Eric P.; Thelen, Brian J.; Carrara, David A.

    1998-10-01

    Anomaly detection offers a means by which to identify potentially important objects in a scene without prior knowledge of their spectral signatures. As such, this approach is less sensitive to variations in target class composition, atmospheric and illumination conditions, and sensor gain settings than would be a spectral matched filter or similar algorithm. The best existing anomaly detectors generally fall into one of two categories: those based on local Gaussian statistics, and those based on linear mixing models. Unmixing-based approaches better represent the real distribution of data in a scene, but are typically derived and applied on a global or scene-wide basis. Locally adaptive approaches allow detection of more subtle anomalies by accommodating the spatial non-homogeneity of background classes in a typical scene, but provide a poorer representation of the true underlying background distribution. The CHAMP algorithm combines the best attributes of both approaches, applying a linear-mixing-model approach in a spatially adaptive manner. The algorithm itself, and test results on simulated and actual hyperspectral image data, are presented in this paper.

  37. An Investigation of State-Space Model Fidelity for SSME Data

    NASA Technical Reports Server (NTRS)

    Martin, Rodney Alexander

    2008-01-01

    In previous studies, a variety of unsupervised anomaly detection techniques were applied to SSME (Space Shuttle Main Engine) data. The observed results indicated that the identification of certain anomalies was specific to the algorithmic method under consideration. This is the reason why one of the follow-on goals of these previous investigations was to build an architecture to support the best capabilities of all algorithms. We appeal to that goal here by investigating a cascade, serial architecture for the best performing and most suitable candidates from previous studies. As a precursor to a formal ROC (Receiver Operating Characteristic) curve analysis for validation of the resulting anomaly detection algorithms, our primary focus here is to investigate model fidelity as measured by variants of the AIC (Akaike Information Criterion) for state-space based models. We show that placing constraints on a state-space model during or after the training of the model introduces a modest level of suboptimality. Furthermore, we compare the fidelity of all candidate models, including those embodying the cascade, serial architecture. We make recommendations on the most suitable candidates for application to subsequent anomaly detection studies as measured by AIC-based criteria.

  38. Radiation anomaly detection algorithms for field-acquired gamma energy spectra

    NASA Astrophysics Data System (ADS)

    Mukhopadhyay, Sanjoy; Maurer, Richard; Wolff, Ron; Guss, Paul; Mitchell, Stephen

    2015-08-01

    The Remote Sensing Laboratory (RSL) is developing a tactical, networked radiation detection system that will be agile, reconfigurable, and capable of rapid threat assessment with a high degree of fidelity and certainty. Our design is driven by the needs of users such as law enforcement personnel who must make decisions by evaluating threat signatures in urban settings. The most efficient tool available to identify the nature of the threat object is real-time gamma spectroscopic analysis, as it is fast and has a very low probability of producing false positive alarm conditions. Urban radiological searches are inherently challenged by the rapid and large spatial variation of background gamma radiation, the presence of benign radioactive materials in the form of naturally occurring radioactive materials (NORM), and shielded and/or masked threat sources. Multiple spectral anomaly detection algorithms have been developed by national laboratories and commercial vendors. For example, the Gamma Detector Response and Analysis Software (GADRAS), a one-dimensional deterministic radiation transport code capable of calculating gamma-ray spectra using physics-based detector response functions, was developed at Sandia National Laboratories. The nuisance-rejection spectral comparison ratio anomaly detection algorithm (NSCRAD), developed at Pacific Northwest National Laboratory, uses spectral comparison ratios to detect deviations from benign medical and NORM radiation sources and can work despite a strong presence of NORM and/or medical sources. RSL has developed its own wavelet-based gamma energy spectral anomaly detection algorithm called WAVRAD. Test results and the relative merits of these different algorithms will be discussed and demonstrated.

  39. Model selection for anomaly detection

    NASA Astrophysics Data System (ADS)

    Burnaev, E.; Erofeev, P.; Smolyakov, D.

    2015-12-01

    Anomaly detection based on one-class classification algorithms is broadly used in many applied domains, such as image processing (e.g., detecting whether a patient is "cancerous" or "healthy" from a mammography image) and network intrusion detection. The performance of an anomaly detection algorithm crucially depends on the kernel used to measure similarity in a feature space. The standard approaches to kernel selection used in two-class classification problems (e.g., cross-validation) cannot be used directly due to the specific nature of the data (the absence of data from a second, abnormal class). In this paper we generalize several kernel selection methods from the binary-class case to the case of one-class classification and perform an extensive comparison of these approaches using both synthetic and real-world data.
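
    One concrete way to select a kernel with only normal-class data, in the spirit of the problem posed here, is to hold out part of the normal data and pick the RBF width whose held-out rejection rate best matches the target nu. This matching criterion is an illustrative heuristic, not necessarily one of the paper's generalized methods.

    ```python
    # Kernel-width selection for a one-class SVM without abnormal-class data:
    # pick the gamma whose held-out false-alarm rate best matches the target nu.
    # This criterion is an illustrative heuristic, assuming scikit-learn.
    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.svm import OneClassSVM

    rng = np.random.default_rng(5)
    X = rng.normal(0, 1, (400, 6))                        # normal-class data only
    X_tr, X_val = train_test_split(X, test_size=0.25, random_state=0)

    nu = 0.05
    best = None
    for gamma in [0.001, 0.01, 0.1, 1.0, 10.0]:
        model = OneClassSVM(kernel="rbf", gamma=gamma, nu=nu).fit(X_tr)
        rej = np.mean(model.predict(X_val) == -1)         # held-out rejection rate
        gap = abs(rej - nu)                               # mismatch with target nu
        if best is None or gap < best[0]:
            best = (gap, gamma)
    print(f"selected gamma = {best[1]} (held-out rejection gap {best[0]:.3f})")
    ```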

  40. A hybrid approach for efficient anomaly detection using metaheuristic methods

    PubMed Central

    Ghanem, Tamer F.; Elkilani, Wail S.; Abdul-kader, Hatem M.

    2014-01-01

    Network intrusion detection based on anomaly detection techniques has a significant role in protecting networks and systems against harmful activities. Different metaheuristic techniques have been used for anomaly detector generation. Yet, the reported literature has not studied the use of the multi-start metaheuristic method for detector generation. This paper proposes a hybrid approach for anomaly detection in large-scale datasets using detectors generated by a multi-start metaheuristic method and genetic algorithms. The proposed approach takes some inspiration from negative selection-based detector generation. The evaluation of this approach is performed using the NSL-KDD dataset, a modified version of the widely used KDD CUP 99 dataset. The results show its effectiveness in generating a suitable number of detectors with an accuracy of 96.1%, compared with competing machine learning algorithms. PMID:26199752

  1. A hybrid approach for efficient anomaly detection using metaheuristic methods.

    PubMed

    Ghanem, Tamer F; Elkilani, Wail S; Abdul-Kader, Hatem M

    2015-07-01

    Network intrusion detection based on anomaly detection techniques has a significant role in protecting networks and systems against harmful activities. Different metaheuristic techniques have been used for anomaly detector generation. Yet, reported literature has not studied the use of the multi-start metaheuristic method for detector generation. This paper proposes a hybrid approach for anomaly detection in large scale datasets using detectors generated based on multi-start metaheuristic method and genetic algorithms. The proposed approach has taken some inspiration of negative selection-based detector generation. The evaluation of this approach is performed using NSL-KDD dataset which is a modified version of the widely used KDD CUP 99 dataset. The results show its effectiveness in generating a suitable number of detectors with an accuracy of 96.1% compared to other competitors of machine learning algorithms.

  2. A new method of real-time detection of changes in periodic data stream

    NASA Astrophysics Data System (ADS)

    Lyu, Chen; Lu, Guoliang; Cheng, Bin; Zheng, Xiangwei

    2017-07-01

    Change-point detection in periodic time series is much desired in many practical settings. We present a novel algorithm for this task, which includes two phases: (1) anomaly measurement: on the basis of a typical regression model, we propose a new computation method to measure anomalies in a time series that does not require any reference data from other measurements; (2) change detection: we introduce a new martingale test for detection that can operate in an unsupervised and nonparametric way. We have conducted extensive experiments to systematically test our algorithm. The results suggest that our algorithm is directly applicable in many real-world change-point-detection applications.
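
    The abstract does not spell out the exact martingale test, so the sketch below uses the standard randomized power martingale over p-values of a nonconformity score, which alarms without supervision or distributional parameters. The epsilon value, alarm threshold, and residual-based nonconformity score are assumptions.

        import numpy as np

        def power_martingale(scores, eps=0.92, threshold=20.0, rng=None):
            """Randomized power martingale over p-values of nonconformity scores.
            Returns the index at which the martingale exceeds `threshold`."""
            rng = rng or np.random.default_rng(0)
            M, past = 1.0, []
            for i, s in enumerate(scores):
                past.append(s)
                a = np.array(past)
                # randomized p-value: rank of s among all scores seen so far
                p = ((a > s).sum() + rng.uniform() * (a == s).sum()) / len(a)
                p = max(p, 1e-12)                   # numerical guard
                M *= eps * p ** (eps - 1.0)         # grows on strange observations
                if M > threshold:
                    return i, M
            return None, M

        # Periodic signal whose amplitude changes at t = 300 (illustrative)
        t = np.arange(600)
        x = np.sin(2 * np.pi * t / 50) + 0.1 * np.random.default_rng(2).normal(size=600)
        x[300:] *= 1.8
        resid = np.abs(x - np.sin(2 * np.pi * t / 50))  # nonconformity vs. assumed model
        print(power_martingale(resid))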

  3. A robust background regression based score estimation algorithm for hyperspectral anomaly detection

    NASA Astrophysics Data System (ADS)

    Zhao, Rui; Du, Bo; Zhang, Liangpei; Zhang, Lefei

    2016-12-01

    Anomaly detection has become a hot topic in the hyperspectral image analysis and processing fields in recent years. The most important issue for hyperspectral anomaly detection is background estimation and suppression. Unreasonable or non-robust background estimation usually leads to unsatisfactory anomaly detection results. Furthermore, the inherent nonlinearity of hyperspectral images may obscure the intrinsic data structure in anomaly detection. In order to implement robust background estimation, as well as to explore the intrinsic data structure of the hyperspectral image, we propose a robust background regression based score estimation algorithm (RBRSE) for hyperspectral anomaly detection. The Robust Background Regression (RBR) is a label assignment procedure which segments the hyperspectral data into a robust background dataset and a potential anomaly dataset with an intersection boundary. In the RBR, a kernel expansion technique, which explores the nonlinear structure of the hyperspectral data in a reproducing kernel Hilbert space, is utilized to formulate the data as a density feature representation. A minimum squared loss relationship is constructed between the data density feature and the corresponding assigned labels of the hyperspectral data to form the foundation of the regression. Furthermore, a manifold regularization term, which exploits the manifold smoothness of the hyperspectral data, and a maximization term of the robust background average density, which suppresses the bias caused by the potential anomalies, are jointly appended in the RBR procedure. After this, a paired-dataset based k-NN score estimation method is applied to the robust background and potential anomaly datasets to produce the detection output. The experimental results show that RBRSE achieves superior ROC curves, AUC values, and background-anomaly separation compared with some other state-of-the-art anomaly detection methods, and is easy to implement in practice.

  4. Clustering and Recurring Anomaly Identification: Recurring Anomaly Detection System (ReADS)

    NASA Technical Reports Server (NTRS)

    McIntosh, Dawn

    2006-01-01

    This viewgraph presentation reviews the Recurring Anomaly Detection System (ReADS), a tool to analyze text reports such as aviation reports and maintenance records: (1) text clustering algorithms group large quantities of reports and documents, reducing human error and fatigue; (2) the system identifies interconnected reports, automating the discovery of possible recurring anomalies; (3) it provides a visualization of the clusters and recurring anomalies. We have illustrated our techniques on data from Shuttle and ISS discrepancy reports, as well as ASRS data. ReADS has been integrated with a secure online search

  5. Recombinant Temporal Aberration Detection Algorithms for Enhanced Biosurveillance

    PubMed Central

    Murphy, Sean Patrick; Burkom, Howard

    2008-01-01

    Objective: Broadly, this research aims to improve the outbreak detection performance and, therefore, the cost effectiveness of automated syndromic surveillance systems by building novel, recombinant temporal aberration detection algorithms from components of previously developed detectors. Methods: This study decomposes existing temporal aberration detection algorithms into two sequential stages and investigates the individual impact of each stage on outbreak detection performance. The data forecasting stage (Stage 1) generates predictions of time series values a certain number of time steps in the future based on historical data. The anomaly measure stage (Stage 2) compares features of this prediction to corresponding features of the actual time series to compute a statistical anomaly measure. A Monte Carlo simulation procedure is then used to examine the recombinant algorithms’ ability to detect synthetic aberrations injected into authentic syndromic time series. Results: New methods obtained with procedural components of published, sometimes widely used, algorithms were compared to the known methods using authentic datasets with plausible stochastic injected signals. Performance improvements were found for some of the recombinant methods, and these improvements were consistent over a range of data types, outbreak types, and outbreak sizes. For gradual outbreaks, the WEWD MovAvg7+WEWD Z-Score recombinant algorithm performed best; for sudden outbreaks, the HW+WEWD Z-Score performed best. Conclusion: This decomposition was found not only to yield valuable insight into the effects of the aberration detection algorithms but also to produce novel combinations of data forecasters and anomaly measures with enhanced detection performance. PMID:17947614
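
    The two-stage decomposition can be made concrete with a small sketch: Stage 1 below is a 7-day moving-average forecaster (in the spirit of MovAvg7) and Stage 2 a Z-score anomaly measure on the forecast residuals. The WEWD (weekday/weekend-stratified) variants named in the abstract are not reproduced; the window lengths, alarm threshold, and synthetic counts are assumptions.

        import numpy as np

        def movavg7_forecast(series, t):
            """Stage 1: predict day t as the mean of the previous 7 days."""
            return series[t - 7:t].mean()

        def zscore_measure(series, t, window=28):
            """Stage 2: standardize the day-t residual against recent residuals."""
            resid = np.array([series[k] - movavg7_forecast(series, k)
                              for k in range(t - window, t + 1)])
            return (resid[-1] - resid[:-1].mean()) / (resid[:-1].std() + 1e-9)

        rng = np.random.default_rng(3)
        counts = rng.poisson(30, size=120).astype(float)   # daily syndromic counts
        counts[100:110] += np.linspace(5, 25, 10)          # injected gradual outbreak
        alarms = [t for t in range(40, 120) if zscore_measure(counts, t) > 3.0]
        print(alarms)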

  6. Detection of anomalous events

    DOEpatents

    Ferragut, Erik M.; Laska, Jason A.; Bridges, Robert A.

    2016-06-07

    A system is described for receiving a stream of events and scoring the events based on anomalousness and maliciousness (or other classification). The system can include a plurality of anomaly detectors that together implement an algorithm to identify low-probability events and detect atypical traffic patterns. The anomaly detector provides for comparability of disparate sources of data (e.g., network flow data and firewall logs). Additionally, the anomaly detector allows for regulatability, meaning that the algorithm can be user-configured to adjust the number of false alerts. The anomaly detector can be used with a variety of probability density functions, including normal Gaussian distributions and irregular distributions, as well as functions associated with continuous or discrete variables.

  7. SmartMal: a service-oriented behavioral malware detection framework for mobile devices.

    PubMed

    Wang, Chao; Wu, Zhizhong; Li, Xi; Zhou, Xuehai; Wang, Aili; Hung, Patrick C K

    2014-01-01

    This paper presents SmartMal--a novel service-oriented behavioral malware detection framework for vehicular and mobile devices. The highlight of SmartMal is to introduce service-oriented architecture (SOA) concepts and behavior analysis into malware detection paradigms. The proposed framework relies on a client-server architecture: the client continuously extracts various features and transfers them to the server, whose main task is to detect anomalies using state-of-the-art detection algorithms. Multiple distributed servers simultaneously analyze the feature vector using various detectors, and information fusion is used to combine the detectors' results. We also propose a cycle-based statistical approach for mobile device anomaly detection, which we accomplish by analyzing the users' regular usage patterns. Empirical results suggest that the proposed framework and the novel anomaly detection algorithm are highly effective in detecting malware on Android devices.

  8. SmartMal: A Service-Oriented Behavioral Malware Detection Framework for Mobile Devices

    PubMed Central

    Wu, Zhizhong; Li, Xi; Zhou, Xuehai; Wang, Aili; Hung, Patrick C. K.

    2014-01-01

    This paper presents SmartMal—a novel service-oriented behavioral malware detection framework for vehicular and mobile devices. The highlight of SmartMal is to introduce service-oriented architecture (SOA) concepts and behavior analysis into malware detection paradigms. The proposed framework relies on a client-server architecture: the client continuously extracts various features and transfers them to the server, whose main task is to detect anomalies using state-of-the-art detection algorithms. Multiple distributed servers simultaneously analyze the feature vector using various detectors, and information fusion is used to combine the detectors' results. We also propose a cycle-based statistical approach for mobile device anomaly detection, which we accomplish by analyzing the users' regular usage patterns. Empirical results suggest that the proposed framework and the novel anomaly detection algorithm are highly effective in detecting malware on Android devices. PMID:25165729

  9. Anomaly detection using temporal data mining in a smart home environment.

    PubMed

    Jakkula, V; Cook, D J

    2008-01-01

    To many people, home is a sanctuary. With the maturing of smart home technologies, many people with cognitive and physical disabilities can lead independent lives in their own homes for extended periods of time. In this paper, we investigate the design of machine learning algorithms that support this goal. We hypothesize that machine learning algorithms can be designed to automatically learn models of resident behavior in a smart home, and that the results can be used to perform automated health monitoring and to detect anomalies. Specifically, our algorithms draw upon the temporal nature of sensor data collected in a smart home to build a model of expected activities and to detect unexpected, and possibly health-critical, events in the home. We validate our algorithms using synthetic data and real activity data collected from volunteers in an automated smart environment. The results from our experiments support our hypothesis that a model can be learned from observed smart home data and used to report anomalies, as they occur, in a smart home.

  10. RS-Forest: A Rapid Density Estimator for Streaming Anomaly Detection.

    PubMed

    Wu, Ke; Zhang, Kun; Fan, Wei; Edwards, Andrea; Yu, Philip S

    Anomaly detection in streaming data is of high interest in numerous application domains. In this paper, we propose a novel one-class semi-supervised algorithm to detect anomalies in streaming data. Underlying the algorithm is a fast and accurate density estimator implemented by multiple fully randomized space trees (RS-Trees), named RS-Forest. The piecewise constant density estimate of each RS-tree is defined on the tree node into which an instance falls. Each incoming instance in a data stream is scored by the density estimates averaged over all trees in the forest. Two strategies, statistical attribute range estimation of high probability guarantee and dual node profiles for rapid model update, are seamlessly integrated into RS-Forest to systematically address the ever-evolving nature of data streams. We derive the theoretical upper bound for the proposed algorithm and analyze its asymptotic properties via bias-variance decomposition. Empirical comparisons to the state-of-the-art methods on multiple benchmark datasets demonstrate that the proposed method features high detection rate, fast response, and insensitivity to most of the parameter settings. Algorithm implementations and datasets are available upon request.

  11. RS-Forest: A Rapid Density Estimator for Streaming Anomaly Detection

    PubMed Central

    Wu, Ke; Zhang, Kun; Fan, Wei; Edwards, Andrea; Yu, Philip S.

    2015-01-01

    Anomaly detection in streaming data is of high interest in numerous application domains. In this paper, we propose a novel one-class semi-supervised algorithm to detect anomalies in streaming data. Underlying the algorithm is a fast and accurate density estimator implemented by multiple fully randomized space trees (RS-Trees), named RS-Forest. The piecewise constant density estimate of each RS-tree is defined on the tree node into which an instance falls. Each incoming instance in a data stream is scored by the density estimates averaged over all trees in the forest. Two strategies, statistical attribute range estimation of high probability guarantee and dual node profiles for rapid model update, are seamlessly integrated into RS-Forest to systematically address the ever-evolving nature of data streams. We derive the theoretical upper bound for the proposed algorithm and analyze its asymptotic properties via bias-variance decomposition. Empirical comparisons to the state-of-the-art methods on multiple benchmark datasets demonstrate that the proposed method features high detection rate, fast response, and insensitivity to most of the parameter settings. Algorithm implementations and datasets are available upon request. PMID:25685112
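
    A drastically simplified, batch-mode sketch of the density idea behind the RS-Forest entries above: random axis-aligned splits define piecewise-constant cells, a cell's density estimate is its data mass divided by its volume, and scores are averaged over trees. The paper's streaming updates, attribute-range estimation, and dual node profiles are deliberately omitted, and the depth, tree count, and data are assumptions.

        import numpy as np

        def build_tree(lo, hi, depth, rng):
            """Randomly split an axis-aligned box to a fixed depth (data-independent)."""
            if depth == 0:
                return {"lo": lo, "hi": hi, "count": 0}
            d = rng.integers(len(lo))
            cut = rng.uniform(lo[d], hi[d])
            left_hi, right_lo = hi.copy(), lo.copy()
            left_hi[d], right_lo[d] = cut, cut
            return {"dim": d, "cut": cut,
                    "left": build_tree(lo, left_hi, depth - 1, rng),
                    "right": build_tree(right_lo, hi, depth - 1, rng)}

        def leaf(node, x):
            while "dim" in node:
                node = node["left"] if x[node["dim"]] <= node["cut"] else node["right"]
            return node

        def score(forest, x, n):
            """Average piecewise-constant density estimate over all trees."""
            dens = []
            for tree in forest:
                lf = leaf(tree, x)
                vol = np.prod(lf["hi"] - lf["lo"])
                dens.append(lf["count"] / (n * vol))
            return np.mean(dens)                    # low density -> likely anomaly

        rng = np.random.default_rng(4)
        X = rng.normal(size=(1000, 2))
        lo, hi = X.min(axis=0), X.max(axis=0)
        forest = [build_tree(lo.copy(), hi.copy(), depth=6, rng=rng) for _ in range(25)]
        for x in X:                                 # fill leaf counts with reference data
            for tree in forest:
                leaf(tree, x)["count"] += 1
        print(score(forest, np.array([0.0, 0.0]), len(X)),
              score(forest, np.array([6.0, 6.0]), len(X)))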

  12. Improving Cyber-Security of Smart Grid Systems via Anomaly Detection and Linguistic Domain Knowledge

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ondrej Linda; Todd Vollmer; Milos Manic

    The planned large-scale deployment of smart grid network devices will generate a large amount of information exchanged over various types of communication networks. The implementation of these critical systems will require appropriate cyber-security measures. A network anomaly detection solution is considered in this work. In common network architectures, multiple communication streams are simultaneously present, making it difficult to build an anomaly detection solution for the entire system. In addition, common anomaly detection algorithms require specification of a sensitivity threshold, which inevitably leads to a tradeoff between the false positive and false negative rates. In order to alleviate these issues, this paper proposes a novel anomaly detection architecture. The designed system applies the previously developed network security cyber-sensor method to individual selected communication streams, allowing accurate models of normal network behavior to be learned. Furthermore, the developed system dynamically adjusts the sensitivity threshold of each anomaly detection algorithm based on domain knowledge about the specific network system. It is proposed to model this domain knowledge using Interval Type-2 Fuzzy Logic rules, which linguistically describe the relationship between various features of the network communication and the possibility of a cyber attack. The proposed method was tested on an experimental smart grid system, demonstrating enhanced cyber-security.

  13. Very Large Graphs for Information Extraction (VLG) Detection and Inference in the Presence of Uncertainty

    DTIC Science & Technology

    2015-09-21

    this framework, MIT LL carried out a one-year proof-of-concept study to determine the capabilities and challenges in the detection of anomalies in... extremely large graphs [5]. Under this effort, two real datasets were considered, and algorithms for data modeling and anomaly detection were developed... is required in a well-defined experimental framework for the detection of anomalies in very large graphs. This study is intended to inform future

  14. Feasibility of anomaly detection and characterization using trans-admittance mammography with 60 × 60 electrode array

    NASA Astrophysics Data System (ADS)

    Zhao, Mingkang; Wi, Hun; Lee, Eun Jung; Woo, Eung Je; In Oh, Tong

    2014-10-01

    Electrical impedance imaging has the potential to detect early-stage breast cancer because of the higher admittivity values of tumors compared with those of normal breast tissues. The tumor size and the extent of axillary lymph node involvement are important parameters for evaluating the breast cancer survival rate. Additionally, anomaly characterization is required to distinguish a malignant tumor from a benign tumor. In order to overcome the limitations of breast cancer detection using impedance measurement probes, we developed a high-density trans-admittance mammography (TAM) system with a 60 × 60 electrode array and produced trans-admittance maps at several frequency pairs. We applied the anomaly detection algorithm to the high-density TAM system to estimate the volume and position of breast tumors. We tested four different anomaly sizes with three different conductivity contrasts at four different depths. From the multifrequency trans-admittance maps, we can readily observe the transversal position and estimate the volume and depth. In particular, the depth estimates were accurate and independent of the size and conductivity contrast when applying the new formula using the Laplacian of the trans-admittance map. The volume estimation depended on the conductivity contrast between the anomaly and the background in the breast phantom. We characterized two test anomalies using frequency-difference trans-admittance data to eliminate the dependency on anomaly position and size. We confirmed the anomaly detection and characterization algorithm with the high-density TAM system on bovine breast tissue. Both results show the feasibility of detecting the size and position of an anomaly and of tissue characterization for breast cancer screening.

  15. Research on Abnormal Detection Based on Improved Combination of K - means and SVDD

    NASA Astrophysics Data System (ADS)

    Hao, Xiaohong; Zhang, Xiaofeng

    2018-01-01

    In order to improve the efficiency of network intrusion detection and reduce the false alarm rate, this paper proposes an anomaly detection algorithm based on improved K-means clustering and SVDD. The algorithm first uses the improved K-means algorithm to cluster the training samples of each class, so that each class is compact and well separated; then the SVDD algorithm is used to construct a minimum enclosing hypersphere around each cluster of training samples. The class membership of a test sample is determined by computing its distance to the center of each hypersphere constructed by SVDD: if this distance is smaller than the corresponding radius, the test sample belongs to that class; otherwise it does not. After comparing against all hyperspheres, the test sample is finally classified. We use the KDD CUP 99 data set to evaluate the proposed anomaly detection algorithm. The results show that the algorithm achieves a high detection rate and a low false alarm rate, making it an effective network security protection method.
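
    A minimal sketch of this pipeline, with two stated substitutions: scikit-learn's standard KMeans stands in for the paper's improved K-means, and an RBF OneClassSVM stands in for SVDD (the two formulations coincide for kernels with constant self-similarity such as the RBF kernel). The cluster count, gamma, nu, and synthetic data are assumptions.

        import numpy as np
        from sklearn.cluster import KMeans
        from sklearn.svm import OneClassSVM

        rng = np.random.default_rng(5)
        normal = np.vstack([rng.normal(loc=c, scale=0.3, size=(200, 2))
                            for c in ([0, 0], [4, 4], [0, 4])])

        # Step 1: partition the normal traffic into compact clusters
        km = KMeans(n_clusters=3, n_init=10, random_state=5).fit(normal)

        # Step 2: fit one hypersphere-like boundary per cluster
        models = [OneClassSVM(kernel="rbf", gamma=2.0, nu=0.05)
                      .fit(normal[km.labels_ == k]) for k in range(3)]

        def is_normal(x):
            """A sample is benign if any per-cluster boundary accepts it."""
            return any(m.predict(x.reshape(1, -1))[0] == 1 for m in models)

        print(is_normal(np.array([0.1, 0.1])), is_normal(np.array([8.0, -3.0])))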

  16. Integrated System Health Management (ISHM) for Test Stand and J-2X Engine: Core Implementation

    NASA Technical Reports Server (NTRS)

    Figueroa, Jorge F.; Schmalzel, John L.; Aguilar, Robert; Shwabacher, Mark; Morris, Jon

    2008-01-01

    ISHM capability enables a system to detect anomalies, determine causes and effects, predict future anomalies, and provide an integrated awareness of the health of the system to users (operators, customers, management, etc.). NASA Stennis Space Center, NASA Ames Research Center, and Pratt & Whitney Rocketdyne have implemented a core ISHM capability that encompasses the A1 Test Stand and the J-2X Engine. The implementation incorporates all aspects of ISHM, from anomaly detection (e.g. leaks), to root-cause analysis based on failure mode and effects analysis (FMEA), to a user interface for an integrated visualization of the health of the system (test stand and engine). The implementation provides a low functional capability level (FCL), in that it is populated with few algorithms and approaches for anomaly detection, and with root-cause trees from a limited FMEA effort. However, it demonstrates a credible ISHM capability, and it is inherently designed for continuous and systematic augmentation. The ISHM capability is grounded on an integrating software environment used to create an ISHM model of the system. The ISHM model follows an object-oriented approach: it includes all elements of the system (from schematics) and provides for compartmentalized storage of information associated with each element. For instance, a sensor object contains a transducer electronic data sheet (TEDS) with information that might be used by algorithms and approaches for anomaly detection, diagnostics, etc. Similarly, a component, such as a tank, contains a Component Electronic Data Sheet (CEDS). Each element also includes a Health Electronic Data Sheet (HEDS) that contains health-related information such as anomalies and health state. Some practical aspects of the implementation include: (1) near real-time data flow from the test stand data acquisition system through the ISHM model, for near real-time detection of anomalies and diagnostics; (2) insertion of the J-2X predictive model, providing predicted sensor values for comparison with measured values and for use in anomaly detection and diagnostics; and (3) insertion of third-party anomaly detection algorithms into the integrated ISHM model.

  17. Artificial intelligence techniques for ground test monitoring of rocket engines

    NASA Technical Reports Server (NTRS)

    Ali, Moonis; Gupta, U. K.

    1990-01-01

    An expert system is being developed which can detect anomalies in Space Shuttle Main Engine (SSME) sensor data significantly earlier than the redline algorithms currently in use. The training of this expert system focuses on two approaches, based on low-frequency and high-frequency analyses of sensor data. Both approaches are being tested on data from SSME tests, and their results are compared with the findings of NASA and Rocketdyne experts. Prototype implementations have detected the presence of anomalies earlier than the current redline algorithms. It therefore appears that these approaches have the potential of detecting anomalies early enough to shut down the engine or take other corrective action before severe damage to the engine occurs.

  18. GDPC: Gravitation-based Density Peaks Clustering algorithm

    NASA Astrophysics Data System (ADS)

    Jiang, Jianhua; Hao, Dehao; Chen, Yujun; Parmar, Milan; Li, Keqin

    2018-07-01

    The Density Peaks Clustering algorithm, which we refer to as DPC, is a novel and efficient density-based clustering approach that was published in Science in 2014. DPC has the advantage of discovering clusters of varying sizes and densities, but it has limitations in determining the number of clusters and in identifying anomalies. We develop an enhanced algorithm, GDPC, with an alternative decision graph based on gravitation theory and nearby distance to identify centroids and anomalies accurately. We apply our method to UCI and synthetic data sets, and report comparative clustering performance using the F-Measure and 2-dimensional visualization. We also compare our method to other clustering algorithms, such as K-Means, Affinity Propagation (AP), and DPC, presenting F-Measure scores and clustering accuracies of GDPC against K-Means, AP, and DPC on different data sets. We show that GDPC has superior performance in its capability of: (1) reliably determining the number of clusters; (2) efficiently aggregating clusters of varying sizes and densities; (3) accurately identifying anomalies.
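
    For reference, the decision-graph quantities at the heart of DPC can be sketched as follows. This shows the original rho (local density) and delta (distance to the nearest point of higher density) construction on synthetic data, not GDPC's gravitation-based replacement; the cutoff dc and the centroid/anomaly selection rules are illustrative assumptions.

        import numpy as np
        from scipy.spatial.distance import cdist

        def decision_graph(X, dc):
            """Compute DPC's rho (neighbors within cutoff dc) and delta (distance
            to the nearest point of higher density) for each sample."""
            D = cdist(X, X)
            rho = (D < dc).sum(axis=1) - 1          # exclude the point itself
            delta = np.empty(len(X))
            for i in range(len(X)):
                higher = np.where(rho > rho[i])[0]
                delta[i] = D[i, higher].min() if len(higher) else D[i].max()
            return rho, delta

        rng = np.random.default_rng(6)
        X = np.vstack([rng.normal([0, 0], 0.2, (100, 2)),
                       rng.normal([3, 3], 0.2, (100, 2))])
        rho, delta = decision_graph(X, dc=0.3)
        centroids = np.argsort(rho * delta)[-2:]            # large rho*delta -> centroids
        anomalies = np.where((rho < 3) & (delta > 1.0))[0]  # low rho, large delta
        print(centroids, anomalies)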

  19. Robust and Accurate Anomaly Detection in ECG Artifacts Using Time Series Motif Discovery

    PubMed Central

    Sivaraks, Haemwaan

    2015-01-01

    Electrocardiogram (ECG) anomaly detection is an important technique for detecting dissimilar heartbeats, which helps identify abnormal ECGs before the diagnosis process. Currently available ECG anomaly detection methods, ranging from academic research to commercial ECG machines, still suffer from a high false alarm rate because these methods are not able to differentiate ECG artifacts from the real ECG signal, especially for ECG artifacts that are similar to ECG signals in terms of shape and/or frequency. The problem leads to high vigilance demands on physicians and misinterpretation risk for nonspecialists. Therefore, this work proposes a novel anomaly detection technique that is highly robust and accurate in the presence of ECG artifacts, which can effectively reduce the false alarm rate. Expert knowledge from cardiologists and a motif discovery technique are utilized in our design. In addition, every step of the algorithm conforms to the interpretation of cardiologists. Our method can be applied to both single-lead and multilead ECGs. Our experimental results on real ECG datasets were interpreted and evaluated by cardiologists. Our proposed algorithm can mostly achieve 100% accuracy on detection (AoD), sensitivity, specificity, and positive predictive value with a 0% false alarm rate. The results demonstrate that our proposed method is highly accurate and robust to artifacts, compared with competitive anomaly detection methods. PMID:25688284

  20. Rocketdyne Safety Algorithm: Space Shuttle Main Engine Fault Detection

    NASA Technical Reports Server (NTRS)

    Norman, Arnold M., Jr.

    1994-01-01

    The Rocketdyne Safety Algorithm (RSA) has been developed to the point of use on the TTBE at MSFC under Task 4 of LeRC contract NAS3-25884. This document contains a description of the work performed, the results of the nominal tests of the major anomaly test cases with a table of the resulting cutoff times, a plot of the RSA value vs. time for each anomaly case, a logic flow description of the algorithm, the algorithm code, and a development plan for future efforts.

  1. Topological anomaly detection performance with multispectral polarimetric imagery

    NASA Astrophysics Data System (ADS)

    Gartley, M. G.; Basener, W.

    2009-05-01

    Polarimetric imaging has demonstrated utility for increasing the contrast of manmade targets above natural background clutter. Manual detection of manmade targets in multispectral polarimetric imagery can be a challenging and subjective process for large datasets. Analyst exploitation may be improved by utilizing conventional anomaly detection algorithms such as RX. In this paper we examine the performance of a relatively new approach to anomaly detection, which leverages topology theory, applied to spectral polarimetric imagery. Detection results for manmade targets embedded in a complex natural background are presented for both the RX and Topological Anomaly Detection (TAD) approaches. We also present detailed results examining detection sensitivities relative to: (1) the number of spectral bands, (2) utilization of Stokes images versus intensity images, and (3) airborne versus spaceborne measurements.
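
    The RX baseline mentioned above scores each pixel by its Mahalanobis distance from the scene's mean spectrum under the scene covariance. A minimal global-RX sketch on synthetic data (local-window variants and real imagery are out of scope here) might look like this:

        import numpy as np

        def rx_scores(cube):
            """Global RX detector: Mahalanobis distance of every pixel spectrum
            from the scene mean, under the scene covariance."""
            h, w, b = cube.shape
            X = cube.reshape(-1, b).astype(float)
            mu = X.mean(axis=0)
            cov_inv = np.linalg.inv(np.cov(X, rowvar=False))
            d = X - mu
            scores = np.einsum("ij,jk,ik->i", d, cov_inv, d)  # quadratic form per pixel
            return scores.reshape(h, w)

        rng = np.random.default_rng(7)
        cube = rng.normal(size=(64, 64, 8))          # synthetic 8-band image
        cube[32, 32] += 5.0                          # implanted anomalous pixel
        scores = rx_scores(cube)
        print(np.unravel_index(scores.argmax(), scores.shape))  # -> (32, 32)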

  2. Anomaly clustering in hyperspectral images

    NASA Astrophysics Data System (ADS)

    Doster, Timothy J.; Ross, David S.; Messinger, David W.; Basener, William F.

    2009-05-01

    The topological anomaly detection algorithm (TAD) differs from other anomaly detection algorithms in that it uses a topological/graph-theoretic model for the image background instead of modeling the image with a Gaussian normal distribution. In constructing the model, TAD produces a hard threshold separating anomalous pixels from background in the image. We build on this feature of TAD by extending the algorithm so that it gives a measure of the number of anomalous objects, rather than the number of anomalous pixels, in a hyperspectral image. This is done by identifying and integrating clusters of anomalous pixels via a graph-theoretical method combining spatial and spectral information. The method is applied to a cluttered HyMap image and combines small groups of pixels containing like materials, such as those corresponding to rooftops and cars, into individual clusters. This improves visualization and interpretation of objects.

  3. Detecting Pulsing Denial-of-Service Attacks with Nondeterministic Attack Intervals

    NASA Astrophysics Data System (ADS)

    Luo, Xiapu; Chan, Edmond W. W.; Chang, Rocky K. C.

    2009-12-01

    This paper addresses the important problem of detecting pulsing denial of service (PDoS) attacks which send a sequence of attack pulses to reduce TCP throughput. Unlike previous works which focused on a restricted form of attacks, we consider a very broad class of attacks. In particular, our attack model admits any attack interval between two adjacent pulses, whether deterministic or not. It also includes the traditional flooding-based attacks as a limiting case (i.e., zero attack interval). Our main contribution is Vanguard, a new anomaly-based detection scheme for this class of PDoS attacks. The Vanguard detection is based on three traffic anomalies induced by the attacks, and it detects them using a CUSUM algorithm. We have prototyped Vanguard and evaluated it on a testbed. The experiment results show that Vanguard is more effective than the previous methods that are based on other traffic anomalies (after a transformation using wavelet transform, Fourier transform, and autocorrelation) and detection algorithms (e.g., dynamic time warping).
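
    The CUSUM principle used for detection can be illustrated in a few lines: a one-sided cumulative sum accumulates downward deviations of a traffic statistic (here, a toy throughput series) from its expected level and alarms once the sum crosses a threshold. This is a generic CUSUM sketch, not Vanguard's specific three-anomaly formulation; the target, drift, and threshold are assumptions.

        import numpy as np

        def cusum_alarm(x, target, drift=0.5, threshold=5.0):
            """One-sided CUSUM: accumulate shortfalls of x below its expected
            level and raise an alarm once the sum crosses a threshold."""
            s = 0.0
            for i, v in enumerate(x):
                s = max(0.0, s + (target - v) - drift)  # grows while x sags below target
                if s > threshold:
                    return i                            # first alarm index
            return None

        rng = np.random.default_rng(8)
        throughput = rng.normal(10.0, 1.0, 400)
        throughput[200:260] -= 2.0     # sustained degradation caused by attack pulses
        print(cusum_alarm(throughput, target=10.0))     # alarms shortly after t = 200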

  4. A Comparative Evaluation of Anomaly Detection Algorithms for Maritime Video Surveillance

    DTIC Science & Technology

    2011-01-01

    of k-means clustering and the k-NN Localized p-value Estimator (KNN-LPE). K-means is a popular distance-based clustering algorithm while KNN-LPE... implemented the sparse cluster identification rule we described in Section 3.1. 2. k-NN Localized p-value Estimator (KNN-LPE): We implemented this using... Average Density (KNN-NAD): This was implemented as described in Section 3.4. Algorithm Parameter Settings: The global and local density-based anomaly

  5. Genetic algorithm for TEC seismo-ionospheric anomalies detection around the time of the Solomon (Mw = 8.0) earthquake of 06 February 2013

    NASA Astrophysics Data System (ADS)

    Akhoondzadeh, M.

    2013-08-01

    On 6 February 2013, at 12:12:27 local time (01:12:27 UTC), a seismic event registering Mw 8.0 struck the Solomon Islands, located at the boundary of the Australian and Pacific tectonic plates. Time series prediction is an important and widely studied topic in the research of earthquake precursors. This paper describes a new computational intelligence approach to detect unusual variations of the total electron content (TEC) seismo-ionospheric anomalies induced by the powerful Solomon earthquake, using a genetic algorithm (GA). The GA detected a considerable number of anomalous occurrences on the earthquake day and also 7 and 8 days prior to the earthquake, in a period of high geomagnetic activity. In this study, the TEC anomalies detected using the proposed method are also compared to the anomalies observed by applying the mean, median, wavelet, Kalman filter, ARIMA, neural network, and support vector machine methods. The accordance in the final results of all eight methods is a convincing indication of the efficiency of the GA method. It indicates that GA can be an appropriate non-parametric tool for anomaly detection in a nonlinear time series showing seismo-ionospheric precursor variations.

  6. Spatially-Aware Temporal Anomaly Mapping of Gamma Spectra

    NASA Astrophysics Data System (ADS)

    Reinhart, Alex; Athey, Alex; Biegalski, Steven

    2014-06-01

    For security, environmental, and regulatory purposes it is useful to continuously monitor wide areas for unexpected changes in radioactivity. We report on a temporal anomaly detection algorithm which uses mobile detectors to build a spatial map of background spectra, allowing sensitive detection of any anomalies through many days or months of monitoring. We adapt previously-developed anomaly detection methods, which compare spectral shape rather than count rate, to function with limited background data, allowing sensitive detection of small changes in spectral shape from day to day. To demonstrate this technique we collected daily observations over the period of six weeks on a 0.33 square mile research campus and performed source injection simulations.

  7. Anomaly Detection Based on Sensor Data in Petroleum Industry Applications

    PubMed Central

    Martí, Luis; Sanchez-Pi, Nayat; Molina, José Manuel; Garcia, Ana Cristina Bicharra

    2015-01-01

    Anomaly detection is the problem of finding patterns in data that do not conform to an a priori expected behavior. It is related to the problem in which some samples are distant, in terms of a given metric, from the rest of the dataset, where these anomalous samples are referred to as outliers. Anomaly detection has recently attracted the attention of the research community because of its relevance in real-world applications, like intrusion detection, fraud detection, fault detection and system health monitoring, among many others. Anomalies themselves can have a positive or negative nature, depending on their context and interpretation. However, in either case, it is important for decision makers to be able to detect them in order to take appropriate actions. The petroleum industry is one of the application contexts where these problems are present. The correct detection of such types of unusual information empowers the decision maker with the capacity to act on the system in order to correctly avoid, correct or react to the situations associated with them. In that application context, heavy extraction machines for pumping and generation operations, like turbomachines, are intensively monitored by hundreds of sensors each, which send measurements with a high frequency for damage prevention. In this paper, we propose a combination of yet another segmentation algorithm (YASA), a novel fast and high-quality segmentation algorithm, with a one-class support vector machine approach for efficient anomaly detection in turbomachines. The proposal is meant for dealing with the aforementioned task and for coping with the lack of labeled training data. We perform a series of empirical studies comparing our approach to other methods applied to benchmark problems and to a real-life application related to oil platform turbomachinery anomaly detection. PMID:25633599

  8. Anomaly Detection in Moving-Camera Video Sequences Using Principal Subspace Analysis

    DOE PAGES

    Thomaz, Lucas A.; Jardim, Eric; da Silva, Allan F.; ...

    2017-10-16

    This study presents a family of algorithms based on sparse decompositions that detect anomalies in video sequences obtained from slow-moving cameras. These algorithms start by computing the union of subspaces that best represents all the frames from a reference (anomaly-free) video as a low-rank projection plus a sparse residue. Then, they perform a low-rank representation of a target (possibly anomalous) video by taking advantage of both the union of subspaces and the sparse residue computed from the reference video. Such algorithms provide good detection results while at the same time obviating the need for previous video synchronization. However, this is obtained at the cost of a large computational complexity, which hinders their applicability. Another contribution of this paper approaches this problem by using intrinsic properties of the obtained data representation in order to restrict the search space to the most relevant subspaces, providing computational complexity gains of up to two orders of magnitude. The developed algorithms are shown to cope well with videos acquired in challenging scenarios, as verified by the analysis of 59 videos from the VDAO database, which comprises videos with abandoned objects in a cluttered industrial scenario.

  9. Anomaly Detection in Moving-Camera Video Sequences Using Principal Subspace Analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Thomaz, Lucas A.; Jardim, Eric; da Silva, Allan F.

    This study presents a family of algorithms based on sparse decompositions that detect anomalies in video sequences obtained from slow-moving cameras. These algorithms start by computing the union of subspaces that best represents all the frames from a reference (anomaly-free) video as a low-rank projection plus a sparse residue. Then, they perform a low-rank representation of a target (possibly anomalous) video by taking advantage of both the union of subspaces and the sparse residue computed from the reference video. Such algorithms provide good detection results while at the same time obviating the need for previous video synchronization. However, this is obtained at the cost of a large computational complexity, which hinders their applicability. Another contribution of this paper approaches this problem by using intrinsic properties of the obtained data representation in order to restrict the search space to the most relevant subspaces, providing computational complexity gains of up to two orders of magnitude. The developed algorithms are shown to cope well with videos acquired in challenging scenarios, as verified by the analysis of 59 videos from the VDAO database, which comprises videos with abandoned objects in a cluttered industrial scenario.

  10. Estimation of anomaly location and size using electrical impedance tomography.

    PubMed

    Kwon, Ohin; Yoon, Jeong Rock; Seo, Jin Keun; Woo, Eung Je; Cho, Young Gu

    2003-01-01

    We developed a new algorithm that estimates the locations and sizes of anomalies in an electrically conducting medium based on the electrical impedance tomography (EIT) technique. When only boundary current and voltage measurements are available, it is not practically feasible to reconstruct accurate high-resolution cross-sectional conductivity or resistivity images of a subject. In this paper, we focus our attention on the estimation of the locations and sizes of anomalies with conductivity values different from those of the background tissues. We show the performance of the algorithm with experimental results using a 32-channel EIT system and a saline phantom. With about 1.73% measurement error in the boundary current-voltage data, we found that the minimal size (area) of a detectable anomaly is about 0.72% of the size (area) of the phantom. Potential applications include the monitoring of impedance-related physiological events and bubble detection in two-phase flow. Since this new algorithm requires neither a forward solver nor a time-consuming minimization process, it is fast enough for various real-time applications in medicine and nondestructive testing.

  11. System for Anomaly and Failure Detection (SAFD) system development

    NASA Technical Reports Server (NTRS)

    Oreilly, D.

    1992-01-01

    This task specified developing the hardware and software necessary to implement the System for Anomaly and Failure Detection (SAFD) algorithm, developed under Technology Test Bed (TTB) Task 21, on the TTB engine stand. This effort involved building two units: one to be installed in the Block II Space Shuttle Main Engine (SSME) Hardware Simulation Lab (HSL) at Marshall Space Flight Center (MSFC), and one to be installed at the TTB engine stand. Rocketdyne personnel from the HSL performed the task. The SAFD algorithm was developed as an improvement over the current redline system used in the Space Shuttle Main Engine Controller (SSMEC). Simulation tests and execution against previous hot-fire tests demonstrated that the SAFD algorithm can detect engine failure as much as tens of seconds before the redline system recognizes the failure. Although the current algorithm only operates during steady-state conditions (engine not throttling), work is underway to expand the algorithm to operate during transient conditions.

  12. OceanXtremes: Scalable Anomaly Detection in Oceanographic Time-Series

    NASA Astrophysics Data System (ADS)

    Wilson, B. D.; Armstrong, E. M.; Chin, T. M.; Gill, K. M.; Greguska, F. R., III; Huang, T.; Jacob, J. C.; Quach, N.

    2016-12-01

    The oceanographic community must meet the challenge to rapidly identify features and anomalies in complex and voluminous observations to further science and improve decision support. Given this data-intensive reality, we are developing an anomaly detection system, called OceanXtremes, powered by an intelligent, elastic cloud-based analytic service backend that enables execution of domain-specific, multi-scale anomaly and feature detection algorithms across the entire archive of 15- to 30-year ocean science datasets. Our parallel analytics engine extends the NEXUS system and exploits multiple open-source technologies: Apache Cassandra as a distributed spatial "tile" cache, Apache Spark for in-memory parallel computation, and Apache Solr for spatial search and for storing pre-computed tile statistics and other metadata. OceanXtremes provides these key capabilities: parallel generation (Spark on a compute cluster) of 15- to 30-year ocean climatologies (e.g. sea surface temperature, or SST) in hours or overnight, using simple pixel averages or customizable Gaussian-weighted "smoothing" over latitude, longitude, and time; parallel pre-computation, tiling, and caching of anomaly fields (daily variables minus a chosen climatology) with pre-computed tile statistics; parallel detection (over the time series of tiles) of anomalies or phenomena by regional area-averages exceeding a specified threshold (e.g. high SST in El Nino or SST "blob" regions), or by more complex, custom data mining algorithms; shared discovery and exploration of ocean phenomena and anomalies (facet search using Solr), along with unexpected correlations between key measured variables; and scalable execution of all capabilities on a hybrid cloud, using our on-premise OpenStack cluster or Amazon. The key idea is that the parallel data-mining operations will be run "near" the ocean data archives (a local network hop) so that we can efficiently access the thousands of files making up a three-decade time series. The presentation will cover the architecture of OceanXtremes, parallelization of the climatology computation and anomaly detection algorithms using Spark, example results for SST and other time series, and parallel performance metrics.
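
    The thresholded area-average detection step can be illustrated in a few lines, with toy NumPy arrays standing in for the Spark/NEXUS tile pipeline; the region bounds, the 2 degree threshold, and the synthetic data are assumptions.

        import numpy as np

        def detect_events(sst_series, clim_series, region, threshold=2.0):
            """Flag time steps where the region-averaged SST anomaly
            (observation minus climatology) exceeds a threshold."""
            r0, r1, c0, c1 = region
            anom = sst_series[:, r0:r1, c0:c1] - clim_series[:, r0:r1, c0:c1]
            area_mean = anom.mean(axis=(1, 2))
            return np.where(area_mean > threshold)[0], area_mean

        rng = np.random.default_rng(11)
        clim = np.full((365, 90, 180), 15.0)                 # toy daily climatology
        sst = clim + rng.normal(0, 0.5, clim.shape)
        sst[200:230, 40:60, 100:130] += 3.0                  # injected warm "blob"
        days, series = detect_events(sst, clim, region=(40, 60, 100, 130))
        print(days[:5])                                      # -> first flagged days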

  13. Department of Defense Fiscal Year (FY) 2005 Budget Estimates. Research, Development, Test and Evaluation, Defense-Wide. Volume 1 - Defense Advanced Research Projects Agency

    DTIC Science & Technology

    2004-02-01

    UNCLASSIFIED − Conducted experiments to determine the usability of general-purpose anomaly detection algorithms to monitor a large, complex military... reaction and detection modules to perform tailored analysis sequences to monitor environmental conditions, health hazards and physiological states... scalability of lab-proven anomaly detection techniques for intrusion detection in real-world, high-volume environments. Narrative Title FY 2003

  14. Detecting Anomalies from End-to-End Internet Performance Measurements (PingER) Using Cluster Based Local Outlier Factor

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ali, Saqib; Wang, Guojun; Cottrell, Roger Leslie

    PingER (Ping End-to-End Reporting) is a worldwide end-to-end Internet performance measurement framework. It was developed by the SLAC National Accelerator Laboratory, Stanford, USA, and has been running for the last 20 years. It has more than 700 monitoring agents and remote sites which monitor the performance of Internet links in around 170 countries of the world. At present, the size of the compressed PingER data set is about 60 GB, comprising 100,000 flat files. The data is publicly available for valuable Internet performance analyses. However, the data sets suffer from missing values and anomalies due to congestion, bottleneck links, queuing overflow, network software misconfiguration, hardware failure, cable cuts, and social upheavals. Therefore, the objective of this paper is to detect such performance drops or spikes, labeled as anomalies or outliers, in the PingER data set. In the proposed approach, the raw text files of the data set are transformed into a PingER dimensional model. The missing values are imputed using the k-NN algorithm. The data is partitioned into similar instances using the k-means clustering algorithm. Afterward, clustering is integrated with the Local Outlier Factor (LOF) using the Cluster-Based Local Outlier Factor (CBLOF) algorithm to detect the anomalies or outliers in the PingER data. Lastly, the anomalies are further analyzed to identify the time frames and locations of the hosts generating the major percentage of the anomalies in the PingER data set, which ranges from 1998 to 2016.

  15. Detecting Anomalies from End-to-End Internet Performance Measurements (PingER) Using Cluster Based Local Outlier Factor

    DOE PAGES

    Ali, Saqib; Wang, Guojun; Cottrell, Roger Leslie; ...

    2018-05-28

    PingER (Ping End-to-End Reporting) is a worldwide end-to-end Internet performance measurement framework. It was developed by the SLAC National Accelerator Laboratory, Stanford, USA, and has been running for the last 20 years. It has more than 700 monitoring agents and remote sites which monitor the performance of Internet links in around 170 countries of the world. At present, the size of the compressed PingER data set is about 60 GB, comprising 100,000 flat files. The data is publicly available for valuable Internet performance analyses. However, the data sets suffer from missing values and anomalies due to congestion, bottleneck links, queuing overflow, network software misconfiguration, hardware failure, cable cuts, and social upheavals. Therefore, the objective of this paper is to detect such performance drops or spikes, labeled as anomalies or outliers, in the PingER data set. In the proposed approach, the raw text files of the data set are transformed into a PingER dimensional model. The missing values are imputed using the k-NN algorithm. The data is partitioned into similar instances using the k-means clustering algorithm. Afterward, clustering is integrated with the Local Outlier Factor (LOF) using the Cluster-Based Local Outlier Factor (CBLOF) algorithm to detect the anomalies or outliers in the PingER data. Lastly, the anomalies are further analyzed to identify the time frames and locations of the hosts generating the major percentage of the anomalies in the PingER data set, which ranges from 1998 to 2016.
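
    A simplified sketch of the clustering-plus-CBLOF stage (the dimensional modeling and k-NN imputation steps are omitted): clusters are split into "large" and "small" by a coverage fraction alpha, and each point is scored by a size-weighted distance to its own centroid (large clusters) or to the nearest large-cluster centroid (small clusters). The cluster count, alpha, and the toy round-trip-time data are assumptions.

        import numpy as np
        from sklearn.cluster import KMeans

        def cblof_scores(X, n_clusters=8, alpha=0.9):
            """Simplified Cluster-Based Local Outlier Factor: points in small
            clusters are scored by their distance to the nearest large cluster."""
            km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(X)
            sizes = np.bincount(km.labels_, minlength=n_clusters)
            big, covered = [], 0
            for c in np.argsort(sizes)[::-1]:       # "large" clusters cover alpha of data
                big.append(c)
                covered += sizes[c]
                if covered >= alpha * len(X):
                    break
            big_centers = km.cluster_centers_[big]
            scores = np.empty(len(X))
            for i, x in enumerate(X):
                c = km.labels_[i]
                if c in big:
                    scores[i] = sizes[c] * np.linalg.norm(x - km.cluster_centers_[c])
                else:
                    scores[i] = sizes[c] * np.linalg.norm(x - big_centers, axis=1).min()
            return scores                            # high score -> likely anomaly

        rng = np.random.default_rng(9)
        rtt = np.vstack([rng.normal(100, 5, (500, 1)),   # normal round-trip times
                         rng.normal(400, 5, (5, 1))])    # a handful of outliers
        print(cblof_scores(rtt).argsort()[-5:])          # indices of the top outliers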

  16. MODVOLC2: A Hybrid Time Series Analysis for Detecting Thermal Anomalies Applied to Thermal Infrared Satellite Data

    NASA Astrophysics Data System (ADS)

    Koeppen, W. C.; Wright, R.; Pilger, E.

    2009-12-01

    We developed and tested a new, automated algorithm, MODVOLC2, which analyzes thermal infrared satellite time series data to detect and quantify the excess energy radiated from thermal anomalies such as active volcanoes, fires, and gas flares. MODVOLC2 combines two previously developed algorithms, a simple point operation algorithm (MODVOLC) and a more complex time series analysis (Robust AVHRR Techniques, or RAT) to overcome the limitations of using each approach alone. MODVOLC2 has four main steps: (1) it uses the original MODVOLC algorithm to process the satellite data on a pixel-by-pixel basis and remove thermal outliers, (2) it uses the remaining data to calculate reference and variability images for each calendar month, (3) it compares the original satellite data and any newly acquired data to the reference images normalized by their variability, and it detects pixels that fall outside the envelope of normal thermal behavior, (4) it adds any pixels detected by MODVOLC to those detected in the time series analysis. Using test sites at Anatahan and Kilauea volcanoes, we show that MODVOLC2 was able to detect ~15% more thermal anomalies than using MODVOLC alone, with very few, if any, known false detections. Using gas flares from the Cantarell oil field in the Gulf of Mexico, we show that MODVOLC2 provided results that were unattainable using a time series-only approach. Some thermal anomalies (e.g., Cantarell oil field flares) are so persistent that an additional, semi-automated 12-µm correction must be applied in order to correctly estimate both the number of anomalies and the total excess radiance being emitted by them. Although all available data should be included to make the best possible reference and variability images necessary for the MODVOLC2, we estimate that at least 80 images per calendar month are required to generate relatively good statistics from which to run MODVOLC2, a condition now globally met by a decade of MODIS observations. We also found that MODVOLC2 achieved good results on multiple sensors (MODIS and GOES), which provides confidence that MODVOLC2 can be run on future instruments regardless of their spatial and temporal resolutions. The improved performance of MODVOLC2 over MODVOLC makes possible the detection of lower temperature thermal anomalies that will be useful in improving our ability to document Earth’s volcanic eruptions as well as detect possible low temperature thermal precursors to larger eruptions.
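
    Steps 2 and 3 of the algorithm, building per-calendar-month reference and variability images and flagging pixels outside the normalized envelope, can be sketched as follows on synthetic data. The k = 4 sigma envelope is an assumption, and step 1's MODVOLC pre-screening and step 4's union of detections are omitted.

        import numpy as np

        def monthly_reference(stack, months):
            """Per-pixel mean and standard deviation for each calendar month."""
            ref, var = {}, {}
            for m in range(1, 13):
                sel = stack[months == m]
                ref[m], var[m] = sel.mean(axis=0), sel.std(axis=0)
            return ref, var

        def detect(image, month, ref, var, k=4.0):
            """Flag pixels whose radiance exceeds the monthly envelope by k sigma."""
            return (image - ref[month]) > k * (var[month] + 1e-6)

        rng = np.random.default_rng(10)
        months = np.tile(np.arange(1, 13), 10)              # ten years of monthly scenes
        stack = rng.normal(300, 2, size=(len(months), 32, 32))
        ref, var = monthly_reference(stack, months)
        scene = rng.normal(300, 2, size=(32, 32))
        scene[5, 5] = 340                                   # injected thermal anomaly
        print(detect(scene, month=7, ref=ref, var=var).sum())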

  17. A novel approach for pilot error detection using Dynamic Bayesian Networks.

    PubMed

    Saada, Mohamad; Meng, Qinggang; Huang, Tingwen

    2014-06-01

    In the last decade, Dynamic Bayesian Networks (DBNs) have become one of the most attractive probabilistic modelling framework extensions of Bayesian Networks (BNs) for working under uncertainty from a temporal perspective. Despite this popularity, not many researchers have attempted to study the use of these networks in anomaly detection, or the implications of data anomalies on the outcome of such models. An abnormal change in the modelled environment's data at a given time will cause a trailing chain effect on the data of all related environment variables in current and consecutive time slices. Albeit this effect fades with time, it can still have an ill effect on the outcomes of such models. In this paper, we propose an algorithm for pilot error detection using DBNs as the modelling framework for learning and detecting anomalous data. We base our experiments on the actions of an aircraft pilot, and a flight simulator was created for running the experiments. The proposed anomaly detection algorithm achieved good results in detecting pilot errors and their effects on the whole system.

  18. Data-Driven Anomaly Detection Performance for the Ares I-X Ground Diagnostic Prototype

    NASA Technical Reports Server (NTRS)

    Martin, Rodney A.; Schwabacher, Mark A.; Matthews, Bryan L.

    2010-01-01

    In this paper, we will assess the performance of a data-driven anomaly detection algorithm, the Inductive Monitoring System (IMS), which can be used to detect simulated Thrust Vector Control (TVC) system failures. However, the ability of IMS to detect these failures in a true operational setting may be related to the realistic nature of how they are simulated. As such, we will investigate both a low fidelity and high fidelity approach to simulating such failures, with the latter based upon the underlying physics. Furthermore, the ability of IMS to detect anomalies that were previously unknown and not previously simulated will be studied in earnest, as well as apparent deficiencies or misapplications that result from using the data-driven paradigm. Our conclusions indicate that robust detection performance of simulated failures using IMS is not appreciably affected by the use of a high fidelity simulation. However, we have found that the inclusion of a data-driven algorithm such as IMS into a suite of deployable health management technologies does add significant value.

  19. Anomaly Detection in Test Equipment via Sliding Mode Observers

    NASA Technical Reports Server (NTRS)

    Solano, Wanda M.; Drakunov, Sergey V.

    2012-01-01

    Nonlinear observers were originally developed based on the ideas of variable structure control, and for the purpose of detecting disturbances in complex systems. In this anomaly detection application, these observers were designed for estimating the distributed state of fluid flow in a pipe described by a class of advection equations. The observer algorithm uses collected data in a piping system to estimate the distributed system state (pressure and velocity along a pipe containing liquid gas propellant flow) using only boundary measurements. These estimates are then used to further estimate and localize possible anomalies such as leaks or foreign objects, and instrumentation metering problems such as incorrect flow meter orifice plate size. The observer algorithm has the following parts: a mathematical model of the fluid flow, observer control algorithm, and an anomaly identification algorithm. The main functional operation of the algorithm is in creating the sliding mode in the observer system implemented as software. Once the sliding mode starts in the system, the equivalent value of the discontinuous function in sliding mode can be obtained by filtering out the high-frequency chattering component. In control theory, "observers" are dynamic algorithms for the online estimation of the current state of a dynamic system by measurements of an output of the system. Classical linear observers can provide optimal estimates of a system state in case of uncertainty modeled by white noise. For nonlinear cases, the theory of nonlinear observers has been developed and its success is mainly due to the sliding mode approach. Using the mathematical theory of variable structure systems with sliding modes, the observer algorithm is designed in such a way that it steers the output of the model to the output of the system obtained via a variety of sensors, in spite of possible mismatches between the assumed model and actual system. The unique properties of sliding mode control allow not only control of the model internal states to the states of the real-life system, but also identification of the disturbance or anomaly that may occur.

  20. SU-G-JeP4-03: Anomaly Detection of Respiratory Motion by Use of Singular Spectrum Analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kotoku, J; Kumagai, S; Nakabayashi, S

    Purpose: The implementation and realization of automatic anomaly detection of respiratory motion is a very important technique to prevent accidental damage during radiation therapy. Here, we propose an automatic anomaly detection method using singular spectrum analysis. Methods: The anomaly detection procedure consists of four parts: 1) measurement of normal respiratory motion data of a patient; 2) calculation of a trajectory matrix representing normal time-series features; 3) real-time monitoring and calculation of a trajectory matrix of the real-time data; 4) calculation of an anomaly score from the similarity of the two feature matrices. Patient motion was observed by a marker-less tracking system using a depth camera. Results: Two types of motion, coughing and a sudden stop of breathing, were successfully detected in our real-time application. Conclusion: Automatic anomaly detection of respiratory motion using singular spectrum analysis was successful for coughing and sudden stops of breathing. The clinical use of this algorithm is very promising. This work was supported by JSPS KAKENHI Grant Number 15K08703.
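
    A minimal sketch of the four-part procedure: a trajectory (Hankel) matrix is built from normal breathing, its dominant left singular subspace summarizes the normal time-series features, and the anomaly score of a live window is the fraction of its energy falling outside that subspace. The window length, rank, and score definition are assumptions, since the abstract does not give them.

        import numpy as np

        def trajectory_matrix(x, L):
            """Hankel (trajectory) matrix with length-L windows as columns."""
            K = len(x) - L + 1
            return np.column_stack([x[i:i + L] for i in range(K)])

        def ssa_subspace(x, L=40, r=3):
            """Dominant r-dimensional left singular subspace of the trajectory matrix."""
            U, _, _ = np.linalg.svd(trajectory_matrix(x, L), full_matrices=False)
            return U[:, :r]

        def anomaly_score(window, U):
            """Fraction of the window's energy outside the normal subspace."""
            resid = window - U @ (U.T @ window)
            return np.linalg.norm(resid) / (np.linalg.norm(window) + 1e-12)

        t = np.arange(2000)
        normal = np.sin(2 * np.pi * t / 80)                 # regular breathing trace
        U = ssa_subspace(normal, L=40, r=3)
        calm = normal[1000:1040]
        cough = calm + np.r_[np.zeros(20), 0.8 * np.ones(20)]   # injected disturbance
        print(anomaly_score(calm, U), anomaly_score(cough, U))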

  1. Thermal and TEC anomalies detection using an intelligent hybrid system around the time of the Saravan, Iran, (Mw = 7.7) earthquake of 16 April 2013

    NASA Astrophysics Data System (ADS)

    Akhoondzadeh, M.

    2014-02-01

    A powerful earthquake of Mw = 7.7 struck the Saravan region (28.107° N, 62.053° E) in Iran on 16 April 2013. The choice of an automated anomaly detection method for nonlinear time series of earthquake precursors has long been an attractive and challenging task. Artificial Neural Networks (ANN) and Particle Swarm Optimization (PSO) have revealed strong potential for accurate time series prediction. This paper presents the first study integrating the ANN and PSO methods in earthquake precursor research to detect the unusual variations of the thermal and total electron content (TEC) seismo-ionospheric anomalies induced by the strong Saravan earthquake. In this study, to overcome stagnation in local minima during ANN training, PSO is used as the optimization method instead of traditional training algorithms. The proposed hybrid method detected a considerable number of anomalies 4 and 8 days preceding the earthquake. Since, in this case study, ionospheric TEC anomalies induced by seismic activity are easily confused with background fluctuations due to solar activity, a multi-resolution time series processing technique based on the wavelet transform was applied to the TEC signal variations. Given that agreement among the final results of several robust methods is a convincing indication of a method's efficiency, the thermal and TEC anomalies detected using the ANN + PSO method were compared with the anomalies observed by the mean, median, Wavelet, Kalman filter, Auto-Regressive Integrated Moving Average (ARIMA), Support Vector Machine (SVM) and Genetic Algorithm (GA) methods. The results indicate that the ANN + PSO method is quite promising and deserves serious attention as a new tool for detecting thermal and TEC seismo-ionospheric anomalies.
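
    The sketch below illustrates the ANN + PSO idea on a synthetic series: a particle swarm searches the weight space of a tiny one-hidden-layer predictor in place of gradient training, and anomalies are flagged where the prediction error exceeds a 3-sigma threshold. The architecture, swarm settings, and threshold rule are illustrative assumptions, not the paper's configuration; samples neighboring the injected spike may also be flagged.

```python
import numpy as np

rng = np.random.default_rng(0)

def predict(w, X, h=5):
    """Tiny one-hidden-layer network with weights packed into vector w."""
    d = X.shape[1]
    W1, b1 = w[:d * h].reshape(d, h), w[d * h:d * h + h]
    W2, b2 = w[-h - 1:-1], w[-1]
    return np.tanh(X @ W1 + b1) @ W2 + b2

def pso(loss, dim, n=30, iters=200, inertia=0.7, c1=1.5, c2=1.5):
    """Basic global-best particle swarm optimization over R^dim."""
    pos = rng.normal(size=(n, dim))
    vel = np.zeros((n, dim))
    pbest, pcost = pos.copy(), np.array([loss(p) for p in pos])
    g = pbest[pcost.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2, n, dim))
        vel = inertia * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (g - pos)
        pos = pos + vel
        cost = np.array([loss(p) for p in pos])
        better = cost < pcost
        pbest[better], pcost[better] = pos[better], cost[better]
        g = pbest[pcost.argmin()].copy()
    return g

# Lagged series: predict x[t] from the previous 4 samples.
t = np.arange(400)
x = np.sin(0.1 * t) + 0.05 * rng.normal(size=400)
x[350] += 1.5                                  # injected anomaly
X = np.column_stack([x[i:i - 4] for i in range(4)])
y = x[4:]
dim = 4 * 5 + 5 + 5 + 1                        # W1 + b1 + W2 + b2
w = pso(lambda w: np.mean((predict(w, X[:300]) - y[:300]) ** 2), dim)
err = np.abs(predict(w, X) - y)
thr = err[:300].mean() + 3 * err[:300].std()   # 3-sigma rule on training error
print("flagged sample indices:", np.where(err > thr)[0] + 4)
```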

  2. Assessing the impact of background spectral graph construction techniques on the topological anomaly detection algorithm

    NASA Astrophysics Data System (ADS)

    Ziemann, Amanda K.; Messinger, David W.; Albano, James A.; Basener, William F.

    2012-06-01

    Anomaly detection algorithms have historically been applied to hyperspectral imagery in order to identify pixels whose material content is incongruous with the background material in the scene. Typically, the application involves extracting man-made objects from natural and agricultural surroundings. A large challenge in designing these algorithms is determining which pixels initially constitute the background material within an image. The topological anomaly detection (TAD) algorithm constructs a graph theory-based, fully non-parametric topological model of the background in the image scene, and uses codensity to measure deviation from this background. In TAD, the initial graph theory structure of the image data is created by connecting an edge between any two pixel vertices x and y if the Euclidean distance between them is less than some resolution r. While this type of proximity graph is among the most well-known approaches to building a geometric graph based on a given set of data, there is a wide variety of different geometrically-based techniques. In this paper, we present a comparative test of the performance of TAD across four different constructs of the initial graph: mutual k-nearest neighbor graph, sigma-local graph for two different values of σ > 1, and the proximity graph originally implemented in TAD.
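
    A minimal sketch of the proximity-graph construction step described above: connect two pixel vectors when their Euclidean distance is below a resolution r, then treat the largest connected component as background. The percentile heuristic for r and the synthetic spectra are assumptions; TAD's own resolution selection and codensity ranking are not reproduced here.

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import connected_components
from scipy.spatial.distance import pdist, squareform

rng = np.random.default_rng(1)
pixels = np.vstack([rng.normal(0, 1, (200, 5)),     # background spectra
                    rng.normal(6, 0.5, (5, 5))])    # anomalous spectra

D = squareform(pdist(pixels))              # Euclidean distances
r = np.percentile(D[D > 0], 10)            # resolution parameter (heuristic)
A = csr_matrix((D > 0) & (D < r))          # edge where distance < r
n_comp, labels = connected_components(A, directed=False)

sizes = np.bincount(labels)
background = sizes.argmax()                # largest component = background
print("components:", n_comp, "| background size:", sizes[background])
print("candidate anomalies:", np.where(labels != background)[0])
```

    A mutual k-nearest-neighbor variant, one of the alternatives the paper compares, would keep an edge only when each vertex is among the other's k nearest neighbors.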

  3. Residual Error Based Anomaly Detection Using Auto-Encoder in SMD Machine Sound.

    PubMed

    Oh, Dong Yul; Yun, Il Dong

    2018-04-24

    Detecting an anomaly or an abnormal situation from given noise is highly useful in environments where constantly verifying and monitoring a machine is required. As deep learning algorithms are further developed, recent studies have focused on this problem. However, there are too many variables to define anomalies, and class-level human annotation of a large collection of abnormal data is very labor-intensive. In this paper, we propose to detect abnormal operation sounds or outliers in a very complex machine, while reducing the annotation cost. The architecture of the proposed model is based on an auto-encoder, and it uses the residual error, which represents reconstruction quality, to identify the anomaly. We assess our model using Surface-Mounted Device (SMD) machine sound, which is very complex, as experimental data, and state-of-the-art performance is successfully achieved for anomaly detection.
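
    A compact sketch of residual-error scoring with an auto-encoder, here approximated by a bottlenecked scikit-learn MLP trained to reconstruct its input; the synthetic "frame" features, network size, and the 99th-percentile threshold are illustrative assumptions.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
# Correlated synthetic "frames" standing in for machine-sound features.
normal = (rng.normal(size=(500, 20)) @ rng.normal(size=(20, 20)) * 0.1
          + np.sin(np.linspace(0, 3, 20)))

ae = MLPRegressor(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
ae.fit(normal, normal)                       # train to reconstruct the input

def residual(batch):
    """Reconstruction error per sample = anomaly score."""
    return np.linalg.norm(ae.predict(batch) - batch, axis=1)

threshold = np.percentile(residual(normal), 99)
test = np.vstack([normal[:3], rng.uniform(-2, 2, (3, 20))])   # 3 outliers
print("scores   :", residual(test).round(2))
print("threshold:", round(threshold, 2))
```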

  4. Active Learning with Rationales for Identifying Operationally Significant Anomalies in Aviation

    NASA Technical Reports Server (NTRS)

    Sharma, Manali; Das, Kamalika; Bilgic, Mustafa; Matthews, Bryan; Nielsen, David Lynn; Oza, Nikunj C.

    2016-01-01

    A major focus of the commercial aviation community is discovery of unknown safety events in flight operations data. Data-driven unsupervised anomaly detection methods are better at capturing unknown safety events compared to rule-based methods which only look for known violations. However, not all statistical anomalies that are discovered by these unsupervised anomaly detection methods are operationally significant (e.g., represent a safety concern). Subject Matter Experts (SMEs) have to spend significant time reviewing these statistical anomalies individually to identify a few operationally significant ones. In this paper we propose an active learning algorithm that incorporates SME feedback in the form of rationales to build a classifier that can distinguish between uninteresting and operationally significant anomalies. Experimental evaluation on real aviation data shows that our approach improves detection of operationally significant events by as much as 75% compared to the state-of-the-art. The learnt classifier also generalizes well to additional validation data sets.

  5. Pre-seismic anomalies from optical satellite observations: a review

    NASA Astrophysics Data System (ADS)

    Jiao, Zhong-Hu; Zhao, Jing; Shan, Xinjian

    2018-04-01

    Detecting various anomalies using optical satellite data prior to strong earthquakes is key to understanding and forecasting earthquake activities because of its recognition of thermal-radiation-related phenomena in seismic preparation phases. Data from satellite observations serve as a powerful tool in monitoring earthquake preparation areas at a global scale and in a nearly real-time manner. Over the past several decades, many new different data sources have been utilized in this field, and progressive anomaly detection approaches have been developed. This paper reviews the progress and development of pre-seismic anomaly detection technology in this decade. First, precursor parameters, including parameters from the top of the atmosphere, in the atmosphere, and on the Earth's surface, are stated and discussed. Second, different anomaly detection methods, which are used to extract anomalous signals that probably indicate future seismic events, are presented. Finally, certain critical problems with the current research are highlighted, and new developing trends and perspectives for future work are discussed. The development of Earth observation satellites and anomaly detection algorithms can enrich available information sources, provide advanced tools for multilevel earthquake monitoring, and improve short- and medium-term forecasting, which play a large and growing role in pre-seismic anomaly detection research.

  6. ISHM Anomaly Lexicon for Rocket Test

    NASA Technical Reports Server (NTRS)

    Schmalzel, John L.; Buchanan, Aubri; Hensarling, Paula L.; Morris, Jonathan; Turowski, Mark; Figueroa, Jorge F.

    2007-01-01

    Integrated Systems Health Management (ISHM) is a comprehensive capability. An ISHM system must detect anomalies, identify causes of such anomalies, predict future anomalies, and help identify consequences of anomalies, for example by suggesting mitigation steps. The system should also provide users with appropriate navigation tools to facilitate the flow of information into and out of the ISHM system. Central to the ability of the ISHM to detect anomalies is a clearly defined catalog of anomalies. Further, this lexicon of anomalies must be organized in ways that make it accessible to a suite of tools used to manage the data, information and knowledge (DIaK) associated with a system. In particular, it is critical to ensure that there is optimal mapping between target anomalies and the algorithms associated with their detection. During the early development of our ISHM architecture and approach, it became clear that a lexicon of anomalies would be important to the development of critical anomaly detection algorithms. In our work in the rocket engine test environment at John C. Stennis Space Center, we have access to a repository of discrepancy reports (DRs) that are generated in response to squawks identified during post-test data analysis. The DR is the tool used to document anomalies and the methods used to resolve the issue. These DRs have been generated for many different tests and for all test stands. The result is that they represent a comprehensive summary of the anomalies associated with rocket engine testing. Fig. 1 illustrates some of the data that can be extracted from a DR. Such information includes affected transducer channels, a narrative description of the observed anomaly, and the steps used to correct the problem. The primary goal of the anomaly lexicon development efforts we have undertaken is to create a lexicon that could be used in support of an associated health assessment database system (HADS) co-development effort. There are a number of significant byproducts of the anomaly lexicon compilation effort: (1) it allows determination of the frequency distribution of anomalies, to help identify those with the potential for high return on investment if included in automated detection as part of an ISHM system; (2) the availability of a regular lexicon could provide the base anomaly name choices to help maintain consistency in the DR collection process; and (3) although developed for the rocket engine test environment, most of the anomalies are not specific to rocket testing and can thus be reused in other applications.

  7. Spectral anomaly methods for aerial detection using KUT nuisance rejection

    NASA Astrophysics Data System (ADS)

    Detwiler, R. S.; Pfund, D. M.; Myjak, M. J.; Kulisek, J. A.; Seifert, C. E.

    2015-06-01

    This work discusses the application and optimization of a spectral anomaly method for the real-time detection of gamma radiation sources from an aerial helicopter platform. Aerial detection presents several key challenges over ground-based detection. For one, larger and more rapid background fluctuations are typical, due to higher speeds, a larger field of view, and geographically induced background changes. In addition, large variations in altitude or stand-off distance cause significant steps in background count rate, as well as spectral changes due to increased gamma-ray scatter when detecting at higher altitudes. The work here details the adaptation and optimization of the PNNL-developed algorithm Nuisance-Rejecting Spectral Comparison Ratios for Anomaly Detection (NSCRAD), a spectral anomaly method previously developed for ground-based applications, for an aerial platform. The algorithm has been optimized for two multi-detector systems: a NaI(Tl)-detector-based system and a CsI detector array. The optimization here details the adaptation of the spectral windows to aerial detection for a particular set of target sources, and the tailoring for the specific detectors. The methodology and results for background rejection methods optimized for aerial gamma-ray detection using Potassium, Uranium and Thorium (KUT) nuisance rejection are also shown. Results indicate that use of realistic KUT nuisance rejection may eliminate metric rises due to background magnitude and spectral steps encountered in aerial detection from altitude changes and geographically induced steps, such as at land-water interfaces.

  8. Methods for computational disease surveillance in infection prevention and control: Statistical process control versus Twitter's anomaly and breakout detection algorithms.

    PubMed

    Wiemken, Timothy L; Furmanek, Stephen P; Mattingly, William A; Wright, Marc-Oliver; Persaud, Annuradha K; Guinn, Brian E; Carrico, Ruth M; Arnold, Forest W; Ramirez, Julio A

    2018-02-01

    Although not all health care-associated infections (HAIs) are preventable, reducing HAIs through targeted intervention is key to a successful infection prevention program. To identify areas in need of targeted intervention, robust statistical methods must be used when analyzing surveillance data. The objective of this study was to compare and contrast statistical process control (SPC) charts with Twitter's anomaly and breakout detection algorithms. SPC and anomaly/breakout detection (ABD) charts were created for vancomycin-resistant Enterococcus, Acinetobacter baumannii, catheter-associated urinary tract infection, and central line-associated bloodstream infection data. Both SPC and ABD charts detected similar data points as anomalous/out of control on most charts. The vancomycin-resistant Enterococcus ABD chart detected an extra anomalous point that appeared to be higher than the same time period in prior years. Using a small subset of the central line-associated bloodstream infection data, the ABD chart was able to detect anomalies where the SPC chart was not. SPC charts and ABD charts both performed well, although ABD charts appeared to work better in the context of seasonal variation and autocorrelation. Because they account for common statistical issues in HAI data, ABD charts may be useful for practitioners for analysis of HAI surveillance data. Copyright © 2018 Association for Professionals in Infection Control and Epidemiology, Inc. Published by Elsevier Inc. All rights reserved.
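
    For the SPC side of this comparison, a minimal c-chart sketch for monthly counts is shown below (Twitter's anomaly and breakout detectors are R packages and are not reproduced here). The baseline length, the Poisson-based 3-sigma limit, and the simulated counts are assumptions for illustration; as the abstract notes, such charts do not account for seasonality or autocorrelation.

```python
import numpy as np

rng = np.random.default_rng(2)
counts = rng.poisson(4, 36).astype(float)   # 3 years of monthly HAI counts
counts[30] = 14                             # simulated outbreak month

baseline = counts[:24]                      # first two years as baseline
center = baseline.mean()
ucl = center + 3 * np.sqrt(center)          # c-chart 3-sigma upper limit
flagged = np.where(counts > ucl)[0]
print(f"center = {center:.2f}, UCL = {ucl:.2f}, flagged months: {flagged}")
```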

  9. Target detection using the background model from the topological anomaly detection algorithm

    NASA Astrophysics Data System (ADS)

    Dorado Munoz, Leidy P.; Messinger, David W.; Ziemann, Amanda K.

    2013-05-01

    The Topological Anomaly Detection (TAD) algorithm has been used as an anomaly detector in hyperspectral and multispectral images. TAD is an algorithm based on graph theory that constructs a topological model of the background in a scene and computes an anomalousness ranking for all of the pixels in the image with respect to the background, in order to identify pixels with uncommon or strange spectral signatures. The pixels that are modeled as background are clustered into groups or connected components, which can be representative of spectral signatures of materials present in the background. Therefore, the idea of using the background components given by TAD for target detection is explored in this paper. These connected components are characterized in three different approaches, where the mean signature and endmembers of each component are calculated and used as background basis vectors in Orthogonal Subspace Projection (OSP) and the Adaptive Subspace Detector (ASD). Likewise, the covariance matrix of those connected components is estimated and used in the Constrained Energy Minimization (CEM) and Adaptive Coherence Estimator (ACE) detectors. The performance of these approaches and the different detectors is compared with a global approach, where the background characterization is derived directly from the image. Experiments and results using the self-test data set provided as part of the RIT blind-test target detection project are shown.
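
    A minimal sketch of the ACE detector referenced above, with the background mean and covariance estimated from a pixel subset (standing in for a TAD background component); the scene, target signature, and implanted pixel are synthetic placeholders.

```python
import numpy as np

rng = np.random.default_rng(3)
bands, n = 30, 2000
background = rng.normal(0, 1, (n, bands)) * rng.uniform(0.7, 1.5, bands)
target_sig = rng.uniform(0, 1, bands)       # assumed target signature

mu = background.mean(axis=0)
Sigma_inv = np.linalg.inv(np.cov(background, rowvar=False))
s = target_sig - mu

def ace(x):
    """ACE: squared cosine between x and s in whitened coordinates."""
    xc = x - mu
    return (s @ Sigma_inv @ xc) ** 2 / ((s @ Sigma_inv @ s) * (xc @ Sigma_inv @ xc))

clutter = background[0]
implant = mu + 0.5 * (target_sig - mu) + 0.2 * rng.normal(size=bands)
print("clutter ACE:", round(ace(clutter), 3), "| implant ACE:", round(ace(implant), 3))
```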

  10. Using Statistical Process Control for detecting anomalies in multivariate spatiotemporal Earth Observations

    NASA Astrophysics Data System (ADS)

    Flach, Milan; Mahecha, Miguel; Gans, Fabian; Rodner, Erik; Bodesheim, Paul; Guanche-Garcia, Yanira; Brenning, Alexander; Denzler, Joachim; Reichstein, Markus

    2016-04-01

    The number of available Earth observations (EOs) is currently increasing substantially. Detecting anomalous patterns in these multivariate time series is an important step in identifying changes in the underlying dynamical system. Likewise, data quality issues might result in anomalous multivariate data constellations and have to be identified before they corrupt subsequent analyses. In industrial applications, a common strategy is to monitor production chains with several sensors coupled to some statistical process control (SPC) algorithm. The basic idea is to raise an alarm when these sensor data depict some anomalous pattern according to the SPC, i.e. the production chain is considered 'out of control'. In fact, the industrial applications are conceptually similar to the on-line monitoring of EOs. However, algorithms used in the context of SPC or process monitoring are rarely considered for supervising multivariate spatio-temporal Earth observations. The objective of this study is to exploit the potential and transferability of SPC concepts to Earth system applications. We compare a range of different algorithms typically applied by SPC systems and evaluate their capability to detect e.g. known extreme events in land surface processes. Specifically, two main issues are addressed: (1) identifying the most suitable combination of data pre-processing and detection algorithm for a specific type of event, and (2) analyzing the limits of the individual approaches with respect to the magnitude and spatio-temporal size of the event, as well as the data's signal-to-noise ratio. Extensive artificial data sets that represent the typical properties of Earth observations are used in this study. Our results show that the majority of the algorithms used can be considered for the detection of multivariate spatiotemporal events and directly transferred to real Earth observation data as currently assembled in different projects at the European scale, e.g. http://baci-h2020.eu/index.php/ and http://earthsystemdatacube.net/. Known anomalies such as the Russian heatwave are detected, as well as anomalies which are not detectable with univariate methods.

  11. Glint-induced false alarm reduction in signature adaptive target detection

    NASA Astrophysics Data System (ADS)

    Crosby, Frank J.

    2002-07-01

    The signature adaptive target detection algorithm developed by Crosby and Riley uses target geometry to discern anomalies in local backgrounds. Detection is not restricted to specific target signatures. The robustness of the algorithm is limited by an increased false alarm potential. The base algorithm is extended here to eliminate one common source of false alarms in a littoral environment: glint reflected on the surface of water. The spectral and spatial transience of glint prevents straightforward characterization and complicates exclusion. However, the statistical basis of the detection algorithm and its inherent computations allow for glint discernment and the removal of its influence.

  12. Advanced Unsupervised Classification Methods to Detect Anomalies on Earthen Levees Using Polarimetric SAR Imagery

    PubMed Central

    Marapareddy, Ramakalavathi; Aanstoos, James V.; Younan, Nicolas H.

    2016-01-01

    Fully polarimetric Synthetic Aperture Radar (polSAR) data analysis has wide applications for terrain and ground cover classification. The dynamics of surface and subsurface water events can lead to slope instability resulting in slough slides on earthen levees. Early detection of these anomalies by a remote sensing approach could save time versus direct assessment. We used L-band Synthetic Aperture Radar (SAR) to screen levees for anomalies. SAR technology, due to its high spatial resolution and soil penetration capability, is a good choice for identifying problematic areas on earthen levees. Using the parameters entropy (H), anisotropy (A), alpha (α), and eigenvalues (λ, λ1, λ2, and λ3), we implemented several unsupervised classification algorithms for the identification of anomalies on the levee. The classification techniques applied are H/α, H/A, A/α, Wishart H/α, Wishart H/A/α, and H/α/λ classification algorithms. In this work, the effectiveness of the algorithms was demonstrated using quad-polarimetric L-band SAR imagery from the NASA Jet Propulsion Laboratory’s (JPL’s) Uninhabited Aerial Vehicle Synthetic Aperture Radar (UAVSAR). The study area is a section of the lower Mississippi River valley in the Southern USA, where earthen flood control levees are maintained by the US Army Corps of Engineers. PMID:27322270
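
    A small sketch of the H/alpha quantities underlying these classifiers: eigen-decompose a pixel's 3x3 Hermitian coherency matrix T, then compute the entropy H (log base 3) from the eigenvalue pseudo-probabilities and the mean alpha angle from the first components of the eigenvectors. The random T below stands in for a real polSAR pixel.

```python
import numpy as np

rng = np.random.default_rng(12)

def h_alpha(T):
    """Entropy H and mean alpha angle of a coherency matrix T."""
    w, V = np.linalg.eigh(T)                 # eigenvalues / eigenvectors
    p = w / w.sum()                          # pseudo-probabilities
    H = -np.sum(p * np.log(p)) / np.log(3)   # entropy, log base 3
    alphas = np.degrees(np.arccos(np.abs(V[0, :])))
    return H, np.sum(p * alphas)             # mean alpha in degrees

A = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
T = A @ A.conj().T                           # Hermitian positive semidefinite
H, alpha = h_alpha(T)
print(f"H = {H:.3f}, alpha = {alpha:.1f} deg")
```

    Classification then amounts to binning each pixel's (H, alpha) pair into the standard zones of the H/alpha plane, with the Wishart variants iterating cluster assignments from those initial bins.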

  13. Semi-Supervised Novelty Detection with Adaptive Eigenbases, and Application to Radio Transients

    NASA Technical Reports Server (NTRS)

    Thompson, David R.; Majid, Walid A.; Reed, Colorado J.; Wagstaff, Kiri L.

    2011-01-01

    We present a semi-supervised online method for novelty detection and evaluate its performance for radio astronomy time series data. Our approach uses adaptive eigenbases to combine 1) prior knowledge about uninteresting signals with 2) online estimation of the current data properties to enable highly sensitive and precise detection of novel signals. We apply the method to the problem of detecting fast transient radio anomalies and compare it to current alternative algorithms. Tests based on observations from the Parkes Multibeam Survey show both effective detection of interesting rare events and robustness to known false alarm anomalies.

  14. Multiple Kernel Learning for Heterogeneous Anomaly Detection: Algorithm and Aviation Safety Case Study

    NASA Technical Reports Server (NTRS)

    Das, Santanu; Srivastava, Ashok N.; Matthews, Bryan L.; Oza, Nikunj C.

    2010-01-01

    The world-wide aviation system is one of the most complex dynamical systems ever developed and is generating data at an extremely rapid rate. Most modern commercial aircraft record several hundred flight parameters including information from the guidance, navigation, and control systems, the avionics and propulsion systems, and the pilot inputs into the aircraft. These parameters may be continuous measurements or binary or categorical measurements recorded in one-second intervals for the duration of the flight. Currently, most approaches to aviation safety are reactive, meaning that they are designed to react to an aviation safety incident or accident. In this paper, we discuss a novel approach based on the theory of multiple kernel learning to detect potential safety anomalies in very large databases of discrete and continuous data from world-wide operations of commercial fleets. We pose a general anomaly detection problem which includes both discrete and continuous data streams, where we assume that the discrete streams have a causal influence on the continuous streams. We also assume that atypical sequences of events in the discrete streams can lead to off-nominal system performance. We discuss the application domain and novel algorithms, and present results on real-world data sets. Our algorithm uncovers operationally significant events in high-dimensional data streams in the aviation industry which are not detectable using state-of-the-art methods.
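
    A rough sketch of the multiple-kernel construction suggested by the abstract: one kernel for continuous parameters, one for discrete event indicators, combined as a convex sum and fed to a one-class SVM with a precomputed kernel. The fixed mixing weight and toy features are assumptions; a true MKL procedure would learn the weight rather than fix it.

```python
import numpy as np
from sklearn.metrics.pairwise import rbf_kernel
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(11)
cont = rng.normal(0, 1, (300, 6))                    # continuous parameters
disc = (rng.random((300, 10)) < 0.2).astype(float)   # binary event indicators

def combined_kernel(Xc, Xd, Yc, Yd, eta=0.5):
    """Convex combination of an RBF kernel and a normalized overlap kernel."""
    return eta * rbf_kernel(Xc, Yc, gamma=0.1) + (1 - eta) * (Xd @ Yd.T) / 10.0

K = combined_kernel(cont, disc, cont, disc)
oc = OneClassSVM(kernel="precomputed", nu=0.05).fit(K)

test_c = np.vstack([cont[:1], rng.normal(3, 1, (1, 6))])            # one odd flight
test_d = np.vstack([disc[:1], (rng.random((1, 10)) < 0.9).astype(float)])
print(oc.predict(combined_kernel(test_c, test_d, cont, disc)))      # expect [ 1 -1 ]
```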

  15. Final report for LDRD project 11-0029 : high-interest event detection in large-scale multi-modal data sets : proof of concept.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rohrer, Brandon Robinson

    2011-09-01

    Events of interest to data analysts are sometimes difficult to characterize in detail. Rather, they consist of anomalies: events that are unpredicted, unusual, or otherwise incongruent. The purpose of this LDRD was to test the hypothesis that a biologically-inspired anomaly detection algorithm could be used to detect contextual, multi-modal anomalies. There currently is no other solution to this problem, but the existence of a solution would have a great national security impact. The technical focus of this research was the application of a brain-emulating cognition and control architecture (BECCA) to the problem of anomaly detection. One aspect of BECCA in particular was discovered to be critical to improved anomaly detection capabilities: its feature creator. During the course of this project the feature creator was developed and tested against multiple data types. Development direction was drawn from psychological and neurophysiological measurements. Major technical achievements include the creation of hierarchical feature sets created from both audio and imagery data.

  16. Fuzzy Kernel k-Medoids algorithm for anomaly detection problems

    NASA Astrophysics Data System (ADS)

    Rustam, Z.; Talita, A. S.

    2017-07-01

    Intrusion Detection Systems (IDS) are an essential part of security systems for strengthening the security of information systems. IDS can be used to detect abuse by intruders who try to get into the network system in order to access and utilize the available data sources in the system. There are two approaches to IDS: Misuse Detection and Anomaly Detection (behavior-based intrusion detection). Fuzzy clustering-based methods have been widely used to solve Anomaly Detection problems. Besides using the fuzzy membership concept to assign objects to clusters, other approaches combine fuzzy and possibilistic membership or use feature-weighted based methods. We propose Fuzzy Kernel k-Medoids, which combines fuzzy and possibilistic membership, as a powerful method for solving the anomaly detection problem; in numerical experiments it is able to classify IDS benchmark data into five different classes simultaneously. We classified the KDDCup'99 IDS benchmark data set into five different classes simultaneously; the best performance was achieved using 30% of the data for training, with clustering accuracy reaching 90.28%.

  17. A new prior for bayesian anomaly detection: application to biosurveillance.

    PubMed

    Shen, Y; Cooper, G F

    2010-01-01

    Bayesian anomaly detection computes posterior probabilities of anomalous events by combining prior beliefs and evidence from data. However, the specification of prior probabilities can be challenging. This paper describes a Bayesian prior in the context of disease outbreak detection. The goal is to provide a meaningful, easy-to-use prior that yields a posterior probability of an outbreak that performs at least as well as a standard frequentist approach. If this goal is achieved, the resulting posterior could be usefully incorporated into a decision analysis about how to act in light of a possible disease outbreak. This paper describes a Bayesian method for anomaly detection that combines learning from data with a semi-informative prior probability over patterns of anomalous events. A univariate version of the algorithm is presented here for ease of illustration of the essential ideas. The paper describes the algorithm in the context of disease-outbreak detection, but it is general and can be used in other anomaly detection applications. For this application, the semi-informative prior specifies that an increased count over baseline is expected for the variable being monitored, such as the number of respiratory chief complaints per day at a given emergency department. The semi-informative prior is derived from the baseline prior, which is estimated from historical data. The evaluation reported here used semi-synthetic data to evaluate the detection performance of the proposed Bayesian method and a control chart method, which is a standard frequentist algorithm that is closest to the Bayesian method in terms of the type of data it uses. The disease-outbreak detection performance of the Bayesian method was statistically significantly better than that of the control chart method when proper baseline periods were used to estimate the baseline behavior and avoid seasonal effects. When using longer baseline periods, the Bayesian method performed as well as the control chart method. The time complexity of the Bayesian algorithm is linear in the number of observed events being monitored, due to a novel, closed-form derivation introduced in the paper. This paper introduces a novel prior probability for Bayesian outbreak detection that is expressive, easy to apply, computationally efficient, and performs as well as or better than a standard frequentist method.
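
    A toy illustration of the Bayesian scoring idea: compare a baseline Poisson model of daily counts against an elevated-rate "outbreak" model under a prior outbreak probability. The two-point model and all numbers are deliberate simplifications of the paper's semi-informative prior.

```python
import numpy as np
from scipy.stats import poisson

history = np.array([12, 9, 14, 11, 10, 13, 12, 11])   # normal daily counts
lam0 = history.mean()          # baseline rate
lam1 = 1.5 * lam0              # assumed outbreak-level rate
prior = 0.01                   # prior probability of an outbreak day

def posterior_outbreak(count):
    """Posterior P(outbreak | count) for a two-hypothesis Poisson model."""
    l0 = poisson.pmf(count, lam0) * (1 - prior)
    l1 = poisson.pmf(count, lam1) * prior
    return l1 / (l0 + l1)

for c in (12, 18, 25):
    print(c, "->", round(posterior_outbreak(c), 4))
```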

  18. Comparison of outliers and novelty detection to identify ionospheric TEC irregularities during geomagnetic storm and substorm

    NASA Astrophysics Data System (ADS)

    Pattisahusiwa, Asis; Houw Liong, The; Purqon, Acep

    2016-08-01

    In this study, we compare two learning mechanisms, outlier detection and novelty detection, for identifying ionospheric TEC disturbances caused by the November 2004 geomagnetic storm and the January 2005 substorm. The mechanisms are applied using the ν-SVR learning algorithm, which is a regression version of SVM. Our results show that both mechanisms are quite accurate in learning the TEC data. However, novelty detection is more accurate than outlier detection in extracting anomalies related to geomagnetic events. The anomalies found by outlier detection are mostly related to trends in the data, while those found by novelty detection are associated with geomagnetic events. Novelty detection also shows evidence of LSTID during geomagnetic events.
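
    The sketch below contrasts the two mechanisms on a synthetic TEC-like series: (a) outlier detection via residuals from a ν-SVR fit of the whole series, and (b) novelty detection via a one-class SVM trained only on quiet-time samples. The data, features, and thresholds are illustrative assumptions, not the study's processing chain.

```python
import numpy as np
from sklearn.svm import NuSVR, OneClassSVM

rng = np.random.default_rng(4)
t = np.linspace(0, 10, 400)[:, None]
tec = 20 + 5 * np.sin(t.ravel()) + rng.normal(0, 0.4, 400)
tec[300:310] += 6                              # storm-time disturbance

# (a) Outlier detection: residuals from a nu-SVR fit of the full series.
fit = NuSVR(nu=0.5, C=10, gamma=0.1).fit(t, tec).predict(t)
resid = np.abs(tec - fit)
outliers = np.where(resid > 3 * resid[:250].std())[0]

# (b) Novelty detection: one-class SVM trained on quiet samples only.
quiet = np.column_stack([t[:250].ravel() % (2 * np.pi), tec[:250]])
all_pts = np.column_stack([t.ravel() % (2 * np.pi), tec])
novel = np.where(OneClassSVM(nu=0.05, gamma=0.5).fit(quiet).predict(all_pts) == -1)[0]

print("outliers:", outliers)
print("novelties after training period:", novel[novel >= 250])
```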

  19. Implementation of a General Real-Time Visual Anomaly Detection System Via Soft Computing

    NASA Technical Reports Server (NTRS)

    Dominguez, Jesus A.; Klinko, Steve; Ferrell, Bob; Steinrock, Todd (Technical Monitor)

    2001-01-01

    The intelligent visual system detects anomalies or defects in real time under normal lighting operating conditions. The application is basically a learning machine that integrates fuzzy logic (FL), artificial neural network (ANN), and genetic algorithm (GA) schemes to process the image, run the learning process, and finally detect the anomalies or defects. The system acquires the image, performs segmentation to separate the object being tested from the background, preprocesses the image using fuzzy reasoning, performs the final segmentation using fuzzy reasoning techniques to retrieve regions with potential anomalies or defects, and finally retrieves them using a learning model built via ANN and GA techniques. FL provides a powerful framework for knowledge representation and overcomes the uncertainty and vagueness typically found in image analysis. ANN provides learning capabilities, and GA leads to robust learning results. An application prototype currently runs on a regular PC under Windows NT, and preliminary work has been performed to build an embedded version with multiple image processors. The application prototype is being tested at the Kennedy Space Center (KSC), Florida, to visually detect anomalies along slide basket cables utilized by the astronauts to evacuate the NASA Shuttle launch pad in an emergency. The potential applications of this anomaly detection system in an open environment are quite wide. Another current, potentially viable application at NASA is in detecting anomalies in the NASA Space Shuttle Orbiter's radiator panels.

  20. An Unsupervised Anomalous Event Detection and Interactive Analysis Framework for Large-scale Satellite Data

    NASA Astrophysics Data System (ADS)

    LIU, Q.; Lv, Q.; Klucik, R.; Chen, C.; Gallaher, D. W.; Grant, G.; Shang, L.

    2016-12-01

    Due to the high volume and complexity of satellite data, computer-aided tools for fast quality assessments and scientific discovery are indispensable for scientists in the era of Big Data. In this work, we have developed a framework for automated anomalous event detection in massive satellite data. The framework consists of a clustering-based anomaly detection algorithm and a cloud-based tool for interactive analysis of detected anomalies. The algorithm is unsupervised and requires no prior knowledge of the data (e.g., expected normal pattern or known anomalies). As such, it works for diverse data sets, and performs well even in the presence of missing and noisy data. The cloud-based tool provides an intuitive mapping interface that allows users to interactively analyze anomalies using multiple features. As a whole, our framework can (1) identify outliers in a spatio-temporal context, (2) recognize and distinguish meaningful anomalous events from individual outliers, (3) rank those events based on "interestingness" (e.g., rareness or total number of outliers) defined by users, and (4) enable interactive querying, exploration, and analysis of those anomalous events. In this presentation, we will demonstrate the effectiveness and efficiency of our framework in the application of detecting data quality issues and unusual natural events using two satellite datasets. The techniques and tools developed in this project are applicable to a diverse set of satellite data and will be made publicly available for scientists in early 2017.

  1. Unsupervised, low latency anomaly detection of algorithmically generated domain names by generative probabilistic modeling.

    PubMed

    Raghuram, Jayaram; Miller, David J; Kesidis, George

    2014-07-01

    We propose a method for detecting anomalous domain names, with focus on algorithmically generated domain names which are frequently associated with malicious activities such as fast flux service networks, particularly for bot networks (or botnets), malware, and phishing. Our method is based on learning a (null hypothesis) probability model based on a large set of domain names that have been white listed by some reliable authority. Since these names are mostly assigned by humans, they are pronounceable, and tend to have a distribution of characters, words, word lengths, and number of words that are typical of some language (mostly English), and often consist of words drawn from a known lexicon. On the other hand, in the present day scenario, algorithmically generated domain names typically have distributions that are quite different from that of human-created domain names. We propose a fully generative model for the probability distribution of benign (white listed) domain names which can be used in an anomaly detection setting for identifying putative algorithmically generated domain names. Unlike other methods, our approach can make detections without considering any additional (latency producing) information sources, often used to detect fast flux activity. Experiments on a publicly available, large data set of domain names associated with fast flux service networks show encouraging results, relative to several baseline methods, with higher detection rates and low false positive rates.
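
    A toy version of the generative null-model idea: learn character bigram statistics from a small whitelist and score new names by their normalized log-likelihood, so algorithmically generated strings score low. The tiny training list, smoothing constant, and threshold-free output are stand-ins for the paper's much richer model of words, lengths, and lexicon membership.

```python
import numpy as np
from collections import Counter

whitelist = ["google", "wikipedia", "amazon", "weather", "university",
             "national", "library", "research", "science", "network"]

bigrams = Counter()
for name in whitelist:
    padded = "^" + name + "$"
    bigrams.update(zip(padded, padded[1:]))

ALPHABET = 28                                  # a-z plus the ^ and $ markers

def logprob(name, alpha=0.5):
    """Per-character log-likelihood under a smoothed bigram null model."""
    padded = "^" + name + "$"
    lp = 0.0
    for a, b in zip(padded, padded[1:]):
        ctx = sum(c for (x, _), c in bigrams.items() if x == a)
        lp += np.log((bigrams[(a, b)] + alpha) / (ctx + alpha * ALPHABET))
    return lp / (len(padded) - 1)

for name in ["weathernetwork", "xkqzjvgplm"]:   # benign-looking vs generated
    print(f"{name:>15s}: {logprob(name):.2f}")
```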

  2. Unsupervised, low latency anomaly detection of algorithmically generated domain names by generative probabilistic modeling

    PubMed Central

    Raghuram, Jayaram; Miller, David J.; Kesidis, George

    2014-01-01

    We propose a method for detecting anomalous domain names, with focus on algorithmically generated domain names which are frequently associated with malicious activities such as fast flux service networks, particularly for bot networks (or botnets), malware, and phishing. Our method is based on learning a (null hypothesis) probability model based on a large set of domain names that have been white listed by some reliable authority. Since these names are mostly assigned by humans, they are pronounceable, and tend to have a distribution of characters, words, word lengths, and number of words that are typical of some language (mostly English), and often consist of words drawn from a known lexicon. On the other hand, in the present day scenario, algorithmically generated domain names typically have distributions that are quite different from that of human-created domain names. We propose a fully generative model for the probability distribution of benign (white listed) domain names which can be used in an anomaly detection setting for identifying putative algorithmically generated domain names. Unlike other methods, our approach can make detections without considering any additional (latency producing) information sources, often used to detect fast flux activity. Experiments on a publicly available, large data set of domain names associated with fast flux service networks show encouraging results, relative to several baseline methods, with higher detection rates and low false positive rates. PMID:25685511

  3. Real-time failure control (SAFD)

    NASA Technical Reports Server (NTRS)

    Panossian, Hagop V.; Kemp, Victoria R.; Eckerling, Sherry J.

    1990-01-01

    The Real Time Failure Control program involves the development of a failure detection algorithm, referred to as the System for Failure and Anomaly Detection (SAFD), for the Space Shuttle Main Engine (SSME). This failure detection approach is signal-based: it entails monitoring SSME measurement signals against predetermined and computed mean values and standard deviations. Twenty-four engine measurements are included in the algorithm, and provisions are made to add more parameters if needed. Six major sections of research are presented: (1) SAFD algorithm development; (2) SAFD simulations; (3) Digital Transient Model failure simulation; (4) closed-loop simulation; (5) current SAFD limitations; and (6) planned enhancements.
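
    A minimal sketch of signal-based monitoring in the SAFD spirit: compare a measurement against a band derived from its nominal mean and standard deviation, and raise an alarm only after several consecutive exceedances. All parameters and the simulated degradation are illustrative.

```python
import numpy as np

rng = np.random.default_rng(13)
nominal_mean, nominal_std = 3200.0, 15.0    # e.g., a pressure channel
k, persistence = 4.0, 3                     # band width and vote length

signal = rng.normal(nominal_mean, nominal_std, 200)
signal[120:] -= 120.0                       # simulated degradation

outside = np.abs(signal - nominal_mean) > k * nominal_std
run = 0
for i, flag in enumerate(outside):
    run = run + 1 if flag else 0            # consecutive-exceedance counter
    if run >= persistence:
        print("alarm at sample", i)
        break
```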

  4. Security inspection in ports by anomaly detection using hyperspectral imaging technology

    NASA Astrophysics Data System (ADS)

    Rivera, Javier; Valverde, Fernando; Saldaña, Manuel; Manian, Vidya

    2013-05-01

    Applying hyperspectral imaging technology in port security is crucial for the detection of possible threats or illegal activities. One of the most common problems that cargo suffers is tampering. This represents a danger to society because it creates a channel to smuggle illegal and hazardous products. If a cargo is altered, security inspections on that cargo should contain anomalies that reveal the nature of the tampering. Hyperspectral images can detect anomalies by gathering information through multiple electromagnetic bands. The spectrums extracted from these bands can be used to detect surface anomalies from different materials. Based on this technology, a scenario was built in which a hyperspectral camera was used to inspect the cargo for any surface anomalies and a user interface shows the results. The spectrum of items, altered by different materials that can be used to conceal illegal products, is analyzed and classified in order to provide information about the tampered cargo. The image is analyzed with a variety of techniques such as multiple features extracting algorithms, autonomous anomaly detection, and target spectrum detection. The results will be exported to a workstation or mobile device in order to show them in an easy -to-use interface. This process could enhance the current capabilities of security systems that are already implemented, providing a more complete approach to detect threats and illegal cargo.

  5. Multiscale Anomaly Detection and Image Registration Algorithms for Airborne Landmine Detection

    DTIC Science & Technology

    2008-05-01

    with the sensed image. The two-dimensional correlation coefficient $r$ for two matrices $A$ and $B$, both of size $M \times N$, is given by $r = \sum_m \sum_n (A_{mn} - \bar{A})(B_{mn} - \bar{B}) \big/ \sqrt{\left(\sum_m \sum_n (A_{mn} - \bar{A})^2\right) \left(\sum_m \sum_n (B_{mn} - \bar{B})^2\right)}$ ... correlation-based method by matching features in a high-dimensional feature space. The current implementation of the SIFT algorithm uses a brute-force ... by repeatedly convolving the image with a Gaussian kernel. Each plane of the scale ...

  6. Data Mining for Anomaly Detection

    NASA Technical Reports Server (NTRS)

    Biswas, Gautam; Mack, Daniel; Mylaraswamy, Dinkar; Bharadwaj, Raj

    2013-01-01

    The Vehicle Integrated Prognostics Reasoner (VIPR) program describes methods for enhanced diagnostics as well as a prognostic extension to the current state-of-the-art Aircraft Diagnostic and Maintenance System (ADMS). VIPR introduced a new anomaly detection function for discovering previously undetected and undocumented situations, where there are clear deviations from nominal behavior. Once a baseline (nominal model of operations) is established, the detection and analysis are split between on-aircraft outlier generation and off-aircraft expert analysis to characterize and classify events that may not have been anticipated by individual system providers. Offline expert analysis is supported by data curation and data mining algorithms that can be applied in the contexts of supervised and unsupervised learning. In this report, we discuss efficient methods to implement the Kolmogorov complexity measure using compression algorithms, and run a systematic empirical analysis to determine the best compression measure. Our experiments established that the combination of the DZIP compression algorithm and the CiDM distance measure provides the best results for capturing relevant properties of time series data encountered in aircraft operations. This combination was used as the basis for developing an unsupervised learning algorithm to define "nominal" flight segments using historical flight segments.
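
    The compression-based similarity idea can be illustrated with the normalized compression distance (NCD), shown below with zlib as a stand-in for the DZIP/CiDM combination selected in the report; the flight-segment strings are invented placeholders.

```python
import zlib

def ncd(a: bytes, b: bytes) -> float:
    """Normalized compression distance: ~0 for similar, ~1 for unrelated."""
    C = lambda s: len(zlib.compress(s, 9))
    return (C(a + b) - min(C(a), C(b))) / max(C(a), C(b))

x = bytes(100 * "climb-cruise-descend-", "ascii")
y = bytes(100 * "climb-cruise-descend-", "ascii")
z = bytes(100 * "cruise-stall-recover-", "ascii")
print("similar  :", round(ncd(x, y), 3))
print("different:", round(ncd(x, z), 3))
```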

  7. Evaluation of Anomaly Detection Capability for Ground-Based Pre-Launch Shuttle Operations. Chapter 8

    NASA Technical Reports Server (NTRS)

    Martin, Rodney Alexander

    2010-01-01

    This chapter will provide a thorough end-to-end description of the process for evaluating three different data-driven algorithms for anomaly detection, in order to select the best candidate for deployment as part of a suite of IVHM (Integrated Vehicle Health Management) technologies. These algorithms were deemed sufficiently mature to be considered viable candidates for deployment in support of the maiden launch of Ares I-X, the successor to the Space Shuttle for NASA's Constellation program. Data-driven algorithms are just one of three different types being deployed. The other two types of algorithms being deployed include a "rule-based" expert system and a "model-based" system. Within these two categories, the deployable candidates have already been selected based upon qualitative factors such as flight heritage. For the rule-based system, SHINE (Spacecraft High-speed Inference Engine) has been selected for deployment; it is a component of BEAM (Beacon-based Exception Analysis for Multimissions), a patented technology developed at NASA's JPL (Jet Propulsion Laboratory), and serves to aid in the management and identification of operational modes. For the "model-based" system, a commercially available package developed by QSI (Qualtech Systems, Inc.), TEAMS (Testability Engineering and Maintenance System), has been selected for deployment to aid in diagnosis. In the context of this particular deployment, distinctions among the use of the terms "data-driven," "rule-based," and "model-based" can be found in the literature. Although there are three different categories of algorithms selected for deployment, our main focus in this chapter will be on the evaluation of the three candidates for data-driven anomaly detection. These algorithms will be evaluated upon their capability for robustly detecting incipient faults or failures in the ground-based phase of pre-launch space shuttle operations, rather than based on heritage as in previous studies. Robust detection will allow for the achievement of pre-specified minimum false alarm and/or missed detection rates in the selection of alert thresholds. All algorithms will also be optimized with respect to an aggregation of these same criteria. Our study relies upon the use of Shuttle data to act as a proxy for, and in preparation for application to, Ares I-X data, which uses a very similar hardware platform for the subsystems being targeted (the TVC (Thrust Vector Control) subsystem for the SRB (Solid Rocket Booster)).

  8. Unsupervised algorithms for intrusion detection and identification in wireless ad hoc sensor networks

    NASA Astrophysics Data System (ADS)

    Hortos, William S.

    2009-05-01

    In previous work by the author, parameters across network protocol layers were selected as features in supervised algorithms that detect and identify certain intrusion attacks on wireless ad hoc sensor networks (WSNs) carrying multisensor data. The algorithms improved the residual performance of the intrusion prevention measures provided by any dynamic key-management schemes and trust models implemented among network nodes. The approach of this paper does not train algorithms on the signature of known attack traffic, but, instead, the approach is based on unsupervised anomaly detection techniques that learn the signature of normal network traffic. Unsupervised learning does not require the data to be labeled or to be purely of one type, i.e., normal or attack traffic. The approach can be augmented to add any security attributes and quantified trust levels, established during data exchanges among nodes, to the set of cross-layer features from the WSN protocols. A two-stage framework is introduced for the security algorithms to overcome the problems of input size and resource constraints. The first stage is an unsupervised clustering algorithm which reduces the payload of network data packets to a tractable size. The second stage is a traditional anomaly detection algorithm based on a variation of support vector machines (SVMs), whose efficiency is improved by the availability of data in the packet payload. In the first stage, selected algorithms are adapted to WSN platforms to meet system requirements for simple parallel distributed computation, distributed storage and data robustness. A set of mobile software agents, acting like an ant colony in securing the WSN, are distributed at the nodes to implement the algorithms. The agents move among the layers involved in the network response to the intrusions at each active node and trustworthy neighborhood, collecting parametric values and executing assigned decision tasks. This minimizes the need to move large amounts of audit-log data through resource-limited nodes and locates routines closer to that data. Performance of the unsupervised algorithms is evaluated against the network intrusions of black hole, flooding, Sybil and other denial-of-service attacks in simulations of published scenarios. Results for scenarios with intentionally malfunctioning sensors show the robustness of the two-stage approach to intrusion anomalies.
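
    A compact sketch of the two-stage framework described above: stage 1 vector-quantizes raw traffic features into a small codebook (reducing what resource-limited nodes must transmit), and stage 2 runs a one-class SVM on batch-level codebook histograms. The synthetic features, codebook size, and batch length are assumptions for illustration.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(5)
normal = rng.normal(0, 1, (1000, 8))            # normal traffic features
attack = rng.normal(4, 1, (30, 8))              # e.g., flooding bursts

# Stage 1: vector quantization of packets into a 16-symbol codebook.
codebook = KMeans(n_clusters=16, n_init=10, random_state=0).fit(normal)

def summarize(batch):
    """Histogram of codebook symbols over a batch of packets."""
    h = np.bincount(codebook.predict(batch), minlength=16).astype(float)
    return h / h.sum()

# Stage 2: one-class SVM over batch-level histograms.
train = np.array([summarize(normal[i:i + 50]) for i in range(0, 900, 50)])
ocsvm = OneClassSVM(nu=0.1, gamma="scale").fit(train)

print("normal batch:", ocsvm.predict([summarize(normal[900:950])])[0])
print("attack batch:", ocsvm.predict([summarize(np.vstack([normal[950:], attack]))])[0])
```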

  9. A multi-level anomaly detection algorithm for time-varying graph data with interactive visualization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bridges, Robert A.; Collins, John P.; Ferragut, Erik M.

    This work presents a novel modeling and analysis framework for graph sequences which addresses the challenge of detecting and contextualizing anomalies in labelled, streaming graph data. We introduce a generalization of the BTER model of Seshadhri et al. by adding flexibility to community structure, and use this model to perform multi-scale graph anomaly detection. Specifically, probability models describing coarse subgraphs are built by aggregating node probabilities, and these related hierarchical models simultaneously detect deviations from expectation. This technique provides insight into a graph's structure and internal context that may shed light on a detected event. Additionally, this multi-scale analysis facilitates intuitive visualizations by allowing users to narrow focus from an anomalous graph to particular subgraphs or nodes causing the anomaly. For evaluation, two hierarchical anomaly detectors are tested against a baseline Gaussian method on a series of sampled graphs. We demonstrate that our graph statistics-based approach outperforms both a distribution-based detector and the baseline in a labeled setting with community structure, and it accurately detects anomalies in synthetic and real-world datasets at the node, subgraph, and graph levels. Furthermore, to illustrate the accessibility of information made possible via this technique, the anomaly detector and an associated interactive visualization tool are tested on NCAA football data, where teams and conferences that moved within the league are identified with perfect recall, and precision greater than 0.786.

  10. A multi-level anomaly detection algorithm for time-varying graph data with interactive visualization

    DOE PAGES

    Bridges, Robert A.; Collins, John P.; Ferragut, Erik M.; ...

    2016-01-01

    This work presents a novel modeling and analysis framework for graph sequences which addresses the challenge of detecting and contextualizing anomalies in labelled, streaming graph data. We introduce a generalization of the BTER model of Seshadhri et al. by adding flexibility to community structure, and use this model to perform multi-scale graph anomaly detection. Specifically, probability models describing coarse subgraphs are built by aggregating node probabilities, and these related hierarchical models simultaneously detect deviations from expectation. This technique provides insight into a graph's structure and internal context that may shed light on a detected event. Additionally, this multi-scale analysis facilitates intuitive visualizations by allowing users to narrow focus from an anomalous graph to particular subgraphs or nodes causing the anomaly. For evaluation, two hierarchical anomaly detectors are tested against a baseline Gaussian method on a series of sampled graphs. We demonstrate that our graph statistics-based approach outperforms both a distribution-based detector and the baseline in a labeled setting with community structure, and it accurately detects anomalies in synthetic and real-world datasets at the node, subgraph, and graph levels. Furthermore, to illustrate the accessibility of information made possible via this technique, the anomaly detector and an associated interactive visualization tool are tested on NCAA football data, where teams and conferences that moved within the league are identified with perfect recall, and precision greater than 0.786.
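
    A toy sketch of multi-scale scoring in this spirit: model each node's activity as Poisson, convert the latest observations to upper-tail p-values, and aggregate them to community and graph level with Fisher's method (standing in for the paper's hierarchical BTER-based models). Rates, communities, and data are synthetic assumptions.

```python
import numpy as np
from scipy.stats import chi2, poisson

rng = np.random.default_rng(6)
n_nodes, T = 20, 100
rates = rng.uniform(1, 5, n_nodes)            # per-node edge rates
counts = rng.poisson(rates, (T, n_nodes))     # edge counts per time step
counts[-1, :5] += 8                           # anomaly inside community {0..4}

# Node level: upper-tail p-value of the latest observation per node.
pvals = poisson.sf(counts[-1] - 1, rates)

def fisher(p):
    """Combine p-values; a small result means jointly surprising."""
    return chi2.sf(-2 * np.sum(np.log(p)), df=2 * len(p))

communities = {"A": list(range(0, 5)), "B": list(range(5, 20))}
for name, idx in communities.items():
    print("community", name, "p =", round(fisher(pvals[idx]), 6))
print("whole graph p =", round(fisher(pvals), 6))
```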

  11. Processing the Bouguer anomaly map of Biga and the surrounding area by the cellular neural network: application to the southwestern Marmara region

    NASA Astrophysics Data System (ADS)

    Aydogan, D.

    2007-04-01

    An image processing technique called the cellular neural network (CNN) approach is used in this study to locate geological features giving rise to gravity anomalies, such as faults or the boundary of two geologic zones. CNN is a stochastic image processing technique based on template optimization using the neighborhood relationships of cells. These cells can be characterized by a functional block diagram that is typical of neural network theory. The functionality of CNN is described in its entirety by a number of small matrices (A, B and I) called the cloning template; CNN can also be considered a nonlinear convolution of these matrices. The template describes the strength of the nearest-neighbor interconnections in the network. The recurrent perceptron learning algorithm (RPLA) is used to optimize the cloning template. The CNN and standard Canny algorithms were first tested on two sets of synthetic gravity data with the aim of checking the reliability of the proposed approach. The CNN method was compared with classical derivative techniques by applying the cross-correlation (CC) method to the same anomaly map, as this latter approach can detect some features that are difficult to identify on Bouguer anomaly maps. This approach was then applied to the Bouguer anomaly map of Biga and its surrounding area in Turkey. Structural features in the area between Bandirma, Biga, Yenice and Gonen in the southwest Marmara region are investigated by applying the CNN and CC to the Bouguer anomaly map. Faults identified by these algorithms are generally in accordance with previously mapped surface faults. These examples show that geologic boundaries can be detected from Bouguer anomaly maps using the cloning template approach. A visual evaluation of the outputs of the CNN and CC approaches is carried out, and the results are compared with each other. This approach provides quantitative solutions based on just a few assumptions, which makes the method more powerful than the classical methods.
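
    A minimal CNN iteration is sketched below with a textbook edge-extraction cloning template (A, B, I), applied to a synthetic two-zone map; the paper instead optimizes its template with RPLA, so the fixed template here only illustrates the dynamics of the state equation.

```python
import numpy as np
from scipy.signal import convolve2d

def cnn(u, A, B, I, steps=80, dt=0.1):
    """Euler integration of dx/dt = -x + A*y + B*u + I with saturating output y."""
    x = u.copy()
    for _ in range(steps):
        y = 0.5 * (np.abs(x + 1) - np.abs(x - 1))
        x = x + dt * (-x + convolve2d(y, A, mode="same")
                      + convolve2d(u, B, mode="same") + I)
    return 0.5 * (np.abs(x + 1) - np.abs(x - 1))

u = np.ones((40, 40))
u[:, 20:] = -1.0                               # two constant "geologic zones"
A = np.array([[0, 0, 0], [0, 1, 0], [0, 0, 0]], float)
B = np.array([[-1, -1, -1], [-1, 8, -1], [-1, -1, -1]], float)

edges = cnn(u, A, B, I=-1.0)
interior = edges[2:-2, 2:-2] > 0               # crop image-border artifacts
print("boundary detected near column:", np.argmax(interior.mean(axis=0)) + 2)
```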

  12. Identifying High-Risk Patients without Labeled Training Data: Anomaly Detection Methodologies to Predict Adverse Outcomes

    PubMed Central

    Syed, Zeeshan; Saeed, Mohammed; Rubinfeld, Ilan

    2010-01-01

    For many clinical conditions, only a small number of patients experience adverse outcomes. Developing risk stratification algorithms for these conditions typically requires collecting large volumes of data to capture enough positive and negative examples for training. This process is slow, expensive, and may not be appropriate for new phenomena. In this paper, we explore different anomaly detection approaches to identify high-risk patients as cases that lie in sparse regions of the feature space. We study three broad categories of anomaly detection methods: classification-based, nearest neighbor-based, and clustering-based techniques. When evaluated on data from the National Surgical Quality Improvement Program (NSQIP), these methods were able to successfully identify patients at an elevated risk of mortality and rare morbidities following inpatient surgical procedures. PMID:21347083
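
    A minimal sketch of the nearest-neighbor category discussed above: score each case by the distance to its k-th nearest neighbor, so patients in sparse regions of the feature space receive high scores. The value of k and the synthetic features are illustrative.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(14)
X = rng.normal(0, 1, (500, 8))               # typical patient features
X[:3] += 6                                   # a few cases in sparse regions

k = 10
nn = NearestNeighbors(n_neighbors=k + 1).fit(X)
dist, _ = nn.kneighbors(X)                   # column 0 is the point itself
scores = dist[:, k]                          # distance to the k-th neighbor
print("highest-risk indices:", np.argsort(scores)[-3:])
```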

  13. Abnormal global and local event detection in compressive sensing domain

    NASA Astrophysics Data System (ADS)

    Wang, Tian; Qiao, Meina; Chen, Jie; Wang, Chuanyun; Zhang, Wenjia; Snoussi, Hichem

    2018-05-01

    Abnormal event detection, also known as anomaly detection, is a challenging task in security video surveillance. It is important to develop effective and robust movement representation models for global and local abnormal event detection in order to cope with factors such as occlusion and illumination change. In this paper, a new algorithm is proposed that can locate abnormal events within a frame and detect globally abnormal frames. The proposed algorithm employs a sparse measurement matrix designed to represent the movement feature based on optical flow efficiently. The abnormal detection task is then formulated as a one-class classification problem, learning only from normal training samples. Experiments demonstrate that our algorithm performs well on the benchmark abnormal detection datasets against state-of-the-art methods.
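
    A rough sketch of the pipeline shape described here: compress high-dimensional motion features with a sparse random measurement matrix, then train a one-class classifier on normal samples only. The matrix design, dimensions, and data are assumptions; the paper's optical-flow feature construction is not reproduced.

```python
import numpy as np
from scipy.sparse import random as sparse_random
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(7)
d, m = 2000, 64                                 # ambient / compressed dims
Phi = sparse_random(m, d, density=0.05, random_state=0).toarray()
Phi[Phi > 0] = 1.0 / np.sqrt(m)                 # sparse binary measurement matrix

structure = np.linspace(1, 0, d)                # normal energy concentrates early
normal = rng.normal(0, 1, (300, d)) * structure
clf = OneClassSVM(nu=0.05, gamma="scale").fit(normal @ Phi.T)

test_normal = rng.normal(0, 1, (1, d)) * structure
test_abnormal = rng.normal(0, 1, (1, d))        # energy in unusual components
print("normal  :", clf.predict(test_normal @ Phi.T)[0])
print("abnormal:", clf.predict(test_abnormal @ Phi.T)[0])
```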

  14. Unsupervised Anomaly Detection Based on Clustering and Multiple One-Class SVM

    NASA Astrophysics Data System (ADS)

    Song, Jungsuk; Takakura, Hiroki; Okabe, Yasuo; Kwon, Yongjin

    Intrusion detection systems (IDS) have played an important role as devices that defend our networks from cyber attacks. However, since an IDS is unable to detect unknown attacks, i.e., 0-day attacks, the ultimate challenge in the intrusion detection field is how to identify such attacks exactly, in an automated manner. Over the past few years, several studies on solving these problems have been made on anomaly detection using unsupervised learning techniques such as clustering, one-class support vector machines (SVM), etc. Although they enable one to construct intrusion detection models at low cost and effort, and have the capability to detect unforeseen attacks, they still suffer from two main problems in intrusion detection: a low detection rate and a high false positive rate. In this paper, we propose a new anomaly detection method based on clustering and multiple one-class SVMs in order to improve the detection rate while maintaining a low false positive rate. We evaluated our method using the KDD Cup 1999 data set. Evaluation results show that our approach outperforms the existing algorithms reported in the literature, especially in the detection of unknown attacks.
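
    A minimal sketch of the clustering-plus-multiple-one-class-SVM idea: partition normal traffic into k clusters, fit one one-class SVM per cluster, and flag a test point only if the model of its nearest cluster rejects it. The values of k and nu and the synthetic data are illustrative choices, not the paper's configuration.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(8)
normal = np.vstack([rng.normal(c, 0.3, (200, 4)) for c in (-2, 0, 2)])

km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(normal)
models = [OneClassSVM(nu=0.05, gamma="scale").fit(normal[km.labels_ == i])
          for i in range(3)]

def is_anomaly(x):
    """Route x to its nearest cluster and ask that cluster's model."""
    i = km.predict(x.reshape(1, -1))[0]
    return models[i].predict(x.reshape(1, -1))[0] == -1

print("normal point :", is_anomaly(np.zeros(4)))
print("attack point :", is_anomaly(np.full(4, 5.0)))
```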

  15. Theory and experiments in model-based space system anomaly management

    NASA Astrophysics Data System (ADS)

    Kitts, Christopher Adam

    This research program consists of an experimental study of model-based reasoning methods for detecting, diagnosing and resolving anomalies that occur when operating a comprehensive space system. Using a first principles approach, several extensions were made to the existing field of model-based fault detection and diagnosis in order to develop a general theory of model-based anomaly management. Based on this theory, a suite of algorithms were developed and computationally implemented in order to detect, diagnose and identify resolutions for anomalous conditions occurring within an engineering system. The theory and software suite were experimentally verified and validated in the context of a simple but comprehensive, student-developed, end-to-end space system, which was developed specifically to support such demonstrations. This space system consisted of the Sapphire microsatellite which was launched in 2001, several geographically distributed and Internet-enabled communication ground stations, and a centralized mission control complex located in the Space Technology Center in the NASA Ames Research Park. Results of both ground-based and on-board experiments demonstrate the speed, accuracy, and value of the algorithms compared to human operators, and they highlight future improvements required to mature this technology.

  16. Semi-supervised anomaly detection - towards model-independent searches of new physics

    NASA Astrophysics Data System (ADS)

    Kuusela, Mikael; Vatanen, Tommi; Malmi, Eric; Raiko, Tapani; Aaltonen, Timo; Nagai, Yoshikazu

    2012-06-01

    Most classification algorithms used in high energy physics fall under the category of supervised machine learning. Such methods require a training set containing both signal and background events and are prone to classification errors should this training data be systematically inaccurate, for example due to the assumed MC model. To complement such model-dependent searches, we propose an algorithm based on semi-supervised anomaly detection techniques, which does not require an MC training sample for the signal data. We first model the background using a multivariate Gaussian mixture model. We then search for deviations from this model by fitting to the observations a mixture of the background model and a number of additional Gaussians. This allows us to perform pattern recognition of any anomalous excess over the background. We show by a comparison to neural network classifiers that such an approach is considerably more robust against misspecification of the signal MC than supervised classification. In cases where there is an unexpected signal, a neural network might fail to identify it correctly, while anomaly detection does not suffer from such a limitation. On the other hand, when there are no systematic errors in the training data, both methods perform comparably.
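
    A simplified stand-in for the scheme sketched above, assuming scikit-learn: fit a Gaussian mixture to background-only data, then re-fit a mixture with one extra component to the observed events and read off its weight. Unlike the paper's method, nothing here constrains the background part to stay fixed during the second fit.

    ```python
    import numpy as np
    from sklearn.mixture import GaussianMixture

    rng = np.random.default_rng(2)
    background = rng.normal(0.0, 1.0, size=(2000, 4))
    signal = rng.normal(3.0, 0.3, size=(100, 4))        # hypothetical anomalous excess
    observed = np.vstack([background[:1000], signal])

    bg_model = GaussianMixture(n_components=3, random_state=2).fit(background)

    # Warm-start the extra component at the observation least compatible
    # with the background model.
    worst = observed[np.argmin(bg_model.score_samples(observed))]
    full = GaussianMixture(n_components=4, random_state=2,
                           means_init=np.vstack([bg_model.means_, worst[None]]))
    full.fit(observed)
    print(full.weights_)   # the added component's weight roughly tracks the excess fraction
    ```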

  17. Randomized subspace-based robust principal component analysis for hyperspectral anomaly detection

    NASA Astrophysics Data System (ADS)

    Sun, Weiwei; Yang, Gang; Li, Jialin; Zhang, Dianfa

    2018-01-01

    A randomized subspace-based robust principal component analysis (RSRPCA) method for anomaly detection in hyperspectral imagery (HSI) is proposed. The RSRPCA combines the advantages of randomized column subspace sampling and robust principal component analysis (RPCA). It assumes that the background has low-rank properties and that the anomalies are sparse and do not lie in the column subspace of the background. First, RSRPCA implements random sampling to sketch the original HSI dataset from columns and to construct a randomized column subspace of the background. Structured random projections are also adopted to sketch the HSI dataset from rows. Sketching from columns and rows greatly reduces the computational requirements of RSRPCA. Second, RSRPCA adopts the columnwise RPCA (CWRPCA) to eliminate the negative effects of sampled anomaly pixels, purifying the previous randomized column subspace by removing sampled anomaly columns. The CWRPCA decomposes the submatrix of the HSI data into a low-rank matrix (i.e., background component), a noisy matrix (i.e., noise component), and a sparse anomaly matrix (i.e., anomaly component) with only a small proportion of nonzero columns. The inexact augmented Lagrange multiplier algorithm is utilized to optimize the CWRPCA problem and estimate the sparse matrix. Nonzero columns of the sparse anomaly matrix point to sampled anomaly columns in the submatrix. Third, all the pixels are projected onto the complementary subspace of the purified randomized column subspace of the background, and the anomaly pixels in the original HSI data are finally located exactly. Several experiments on three real hyperspectral images are carefully designed to investigate the detection performance of RSRPCA, and the results are compared with four state-of-the-art methods. Experimental results show that the proposed RSRPCA outperforms the four comparison methods in both detection performance and computational time.
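
    A compact sketch of robust PCA via the inexact augmented Lagrange multiplier method named above, splitting a matrix D into a low-rank part L and a sparse part S. This is the classical entrywise-sparse RPCA with common default parameters, not the columnwise (CWRPCA) variant the paper optimizes.

    ```python
    import numpy as np

    def shrink(X, tau):                                # soft-thresholding operator
        return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

    def rpca_ialm(D, max_iter=200, tol=1e-6):
        m, n = D.shape
        lam = 1.0 / np.sqrt(max(m, n))
        mu = 1.25 / np.linalg.norm(D, 2)
        Y = D / max(np.linalg.norm(D, 2), np.abs(D).max() / lam)
        L, S = np.zeros_like(D), np.zeros_like(D)
        for _ in range(max_iter):
            # singular value thresholding gives the low-rank update
            U, sig, Vt = np.linalg.svd(D - S + Y / mu, full_matrices=False)
            L = (U * shrink(sig, 1.0 / mu)) @ Vt
            S = shrink(D - L + Y / mu, lam / mu)       # sparse (anomaly) update
            resid = D - L - S
            Y += mu * resid
            mu *= 1.5
            if np.linalg.norm(resid) / np.linalg.norm(D) < tol:
                break
        return L, S

    rng = np.random.default_rng(0)
    base = rng.normal(size=(60, 5)) @ rng.normal(size=(5, 40))   # rank-5 "background"
    spikes = np.zeros_like(base); spikes[10, 3] = 10.0           # sparse "anomaly"
    L, S = rpca_ialm(base + spikes)
    print(np.unravel_index(np.abs(S).argmax(), S.shape))         # -> (10, 3)
    ```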

  18. A Doubly Stochastic Change Point Detection Algorithm for Noisy Biological Signals.

    PubMed

    Gold, Nathan; Frasch, Martin G; Herry, Christophe L; Richardson, Bryan S; Wang, Xiaogang

    2017-01-01

    Experimentally and clinically collected time series data are often contaminated with significant confounding noise, creating short, noisy time series. This noise, due to natural variability and measurement error, poses a challenge to conventional change point detection methods. We propose a novel and robust statistical method for change point detection in noisy biological time sequences. Our method is a significant improvement over traditional change point detection methods, which only examine a potential anomaly at a single time point. In contrast, our method considers all suspected anomaly points and considers the joint probability distribution of the number of change points and the elapsed time between two consecutive anomalies. We validate our method on three simulated time series, a widely accepted benchmark data set, two geological time series, a data set of ECG recordings, and a physiological data set of heart rate variability measurements from a fetal sheep model of human labor, comparing it to three existing methods. Our method demonstrates significantly improved performance over the existing point-wise detection methods.

  19. Using Geostationary Communications Satellites as a Sensor: Telemetry Search Algorithms

    NASA Astrophysics Data System (ADS)

    Cahoy, K.; Carlton, A.; Lohmeyer, W. Q.

    2014-12-01

    For decades, operators and manufacturers have collected large amounts of telemetry from geostationary (GEO) communications satellites to monitor system health and performance, yet this data is rarely mined for scientific purposes. The goal of this work is to mine data archives acquired from commercial operators using new algorithms that can detect when a space weather (or non-space weather) event of interest has occurred or is in progress. We have developed algorithms to statistically analyze power amplifier current and temperature telemetry and identify deviations from nominal operations or other trends of interest. We then examine space weather data to see what role, if any, it might have played. We also closely examine both long and short periods of time before an anomaly to determine whether or not the anomaly could have been predicted.
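
    A minimal sketch of one way to flag deviations from nominal telemetry, in the spirit of the statistical screening described above: a rolling z-score against a trailing baseline. The window length and threshold are illustrative, not the authors' values.

    ```python
    import numpy as np

    def rolling_anomalies(x, window=288, z_thresh=4.0):
        """Indices where a sample deviates strongly from its trailing baseline."""
        flags = []
        for t in range(window, len(x)):
            base = x[t - window:t]
            z = (x[t] - base.mean()) / (base.std() + 1e-9)
            if abs(z) > z_thresh:
                flags.append(t)
        return flags

    rng = np.random.default_rng(3)
    current = np.sin(np.linspace(0, 6, 2000)) + rng.normal(0, 0.05, 2000)
    current[1500] += 2.0                       # injected step, e.g. an amplifier anomaly
    print(rolling_anomalies(current))          # -> [1500]
    ```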

  20. Fuzzy Logic Based Anomaly Detection for Embedded Network Security Cyber Sensor

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ondrej Linda; Todd Vollmer; Jason Wright

    Resiliency and security of critical infrastructure control systems are a relevant concern in the modern world of cyber terrorism. Developing a network security system specifically tailored to the requirements of such critical assets is of primary importance. This paper proposes a novel learning algorithm for an anomaly-based network security cyber sensor together with its hardware implementation. The presented learning algorithm constructs a fuzzy logic rule-based model of normal network behavior. Individual fuzzy rules are extracted directly from the stream of incoming packets using an online clustering algorithm. This learning algorithm was specifically developed to comply with the constrained computational requirements of low-cost embedded network security cyber sensors. The performance of the system was evaluated on a set of network data recorded from an experimental test-bed mimicking the environment of a critical infrastructure control system.

  1. Discovering System Health Anomalies Using Data Mining Techniques

    NASA Technical Reports Server (NTRS)

    Srivastava, Ashok N.

    2005-01-01

    We present a data mining framework for the analysis and discovery of anomalies in high-dimensional time series of sensor measurements that would be found in an Integrated System Health Monitoring system. We specifically treat the problem of discovering anomalous features in the time series that may be indicative of a system anomaly, or, in the case of a manned system, an anomaly due to the human. Identification of these anomalies is crucial to building stable, reusable, and cost-efficient systems. The framework consists of an analysis platform and new algorithms that can scale to thousands of sensor streams to discover temporal anomalies. We discuss the mathematical framework that underlies the system and also describe in detail how this framework is general enough to encompass both discrete and continuous sensor measurements. We also describe a new set of data mining algorithms based on kernel methods and hidden Markov models that allow for the rapid assimilation, analysis, and discovery of system anomalies. We then describe the performance of the system on a real-world problem in the aircraft domain where we analyze the cockpit data from aircraft as well as data from the aircraft propulsion, control, and guidance systems. These data are discrete and continuous sensor measurements and are dealt with seamlessly in order to discover anomalous flights. We conclude with recommendations that describe the tradeoffs in building an integrated scalable platform for robust anomaly detection in ISHM applications.

  2. Hyperspectral target detection using heavy-tailed distributions

    NASA Astrophysics Data System (ADS)

    Willis, Chris J.

    2009-09-01

    One promising approach to target detection in hyperspectral imagery exploits a statistical mixture model to represent scene content at a pixel level. The process then goes on to look for pixels which are rare, when judged against the model, and marks them as anomalies. It is assumed that military targets will themselves be rare and therefore likely to be detected amongst these anomalies. For the typical assumption of multivariate Gaussianity for the mixture components, the presence of the anomalous pixels within the training data will have a deleterious effect on the quality of the model. In particular, the derivation process itself is adversely affected by the attempt to accommodate the anomalies within the mixture components. This will bias the statistics of at least some of the components away from their true values and towards the anomalies. In many cases this will result in a reduction in the detection performance and an increased false alarm rate. This paper considers the use of heavy-tailed statistical distributions within the mixture model. Such distributions are better able to account for anomalies in the training data within the tails of their distributions, and the balance of the pixels within their central masses. This means that an improved model of the majority of the pixels in the scene may be produced, ultimately leading to a better anomaly detection result. The anomaly detection techniques are examined using both synthetic data and hyperspectral imagery with injected anomalous pixels. A range of results is presented for the baseline Gaussian mixture model and for models accommodating heavy-tailed distributions, for different parameterizations of the algorithms. These include scene understanding results, anomalous pixel maps at given significance levels and Receiver Operating Characteristic curves.
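
    A minimal illustration of the heavy-tailed idea above, assuming SciPy >= 1.6 for multivariate_t: score a contaminated sample under a Gaussian and under a Student-t with the same fitted center and scatter. The degrees of freedom and the moment-based fit are illustrative simplifications.

    ```python
    import numpy as np
    from scipy.stats import multivariate_normal, multivariate_t

    rng = np.random.default_rng(4)
    clean = rng.normal(size=(980, 3))
    outliers = rng.normal(6.0, 1.0, size=(20, 3))     # anomalous "pixels" in training data
    X = np.vstack([clean, outliers])

    mu, cov = X.mean(axis=0), np.cov(X, rowvar=False)
    g_scores = -multivariate_normal(mu, cov).logpdf(X)
    t_scores = -multivariate_t(loc=mu, shape=cov, df=3).logpdf(X)

    # Indices >= 980 are the injected outliers; the t model's heavy tails absorb
    # them, so the fit to the central mass of the data is less distorted.
    print(np.argsort(g_scores)[-5:], np.argsort(t_scores)[-5:])
    ```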

  3. Data cleaning in the energy domain

    NASA Astrophysics Data System (ADS)

    Akouemo Kengmo Kenfack, Hermine N.

    This dissertation addresses the problem of data cleaning in the energy domain, especially for natural gas and electric time series. The detection and imputation of anomalies improve the performance of the forecasting models needed to lower purchasing and storage costs for utilities and to plan for peak energy loads or distribution shortages. There are various types of anomalies, each induced by diverse causes and sources depending on the field of study, and the definition of false positives also depends on the context. The analysis focuses on energy data because of the availability of data and information to make a theoretical and practical contribution to the field. A probabilistic approach based on hypothesis testing is developed to decide whether a data point is anomalous at a given level of significance. Furthermore, the probabilistic approach is combined with statistical regression models to handle time series data. Domain knowledge of energy data and a survey of the causes and sources of anomalies in energy are incorporated into the data cleaning algorithm to improve the accuracy of the results. The data cleaning method is evaluated on simulated data sets in which anomalies were artificially inserted and on natural gas and electric data sets. In the simulation study, the performance of the method is evaluated for both detection and imputation on all identified causes of anomalies in energy data. The testing on utilities' data evaluates the percentage improvement in forecasting accuracy brought by data cleaning. A cross-validation study is also performed to demonstrate the performance of the data cleaning algorithm on smaller data sets and to calculate a confidence interval for the results. The data cleaning algorithm successfully identifies anomalies in energy time series, and replacing those anomalies improves forecasting accuracy. The process is automatic, which is important because many data cleaning processes require human input and become impractical for very large data sets. The techniques are also applicable to other fields, such as econometrics and finance, but the exogenous factors of the time series data need to be well defined.
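
    A minimal sketch of the hypothesis-testing idea described above: fit a seasonal regression to an energy series, test standardized residuals at a chosen significance level, and impute flagged points from the model. The model form and significance level are illustrative, not the dissertation's exact design.

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(5)
    t = np.arange(730)                                       # two years of daily load
    load = 100 + 20 * np.sin(2 * np.pi * t / 365) + rng.normal(0, 2, t.size)
    load[400] = 35.0                                         # injected anomaly

    # Regression on seasonal harmonics
    X = np.column_stack([np.ones_like(t), np.sin(2 * np.pi * t / 365),
                         np.cos(2 * np.pi * t / 365)])
    beta, *_ = np.linalg.lstsq(X, load, rcond=None)
    fitted = X @ beta
    resid = load - fitted
    z = (resid - resid.mean()) / resid.std()

    alpha = 0.001
    flagged = np.abs(z) > stats.norm.ppf(1 - alpha / 2)      # two-sided test
    cleaned = np.where(flagged, fitted, load)                # impute from the model
    print(np.flatnonzero(flagged))                           # -> [400]
    ```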

  4. Rotor Smoothing and Vibration Monitoring Results for the US Army VMEP

    DTIC Science & Technology

    2009-06-01

    Only fragments of this report's description survived extraction. The recoverable content covers individual component CI detection thresholds; development of models for diagnostics, prognostics, and anomaly detection; the need for large amounts of data (collection, monitoring, manipulation) to support the development of automated systems; and continuous updating of algorithms to improve detection, classification, and prognostic performance.

  5. Enabling the Discovery of Recurring Anomalies in Aerospace System Problem Reports using High-Dimensional Clustering Techniques

    NASA Technical Reports Server (NTRS)

    Srivastava, Ashok, N.; Akella, Ram; Diev, Vesselin; Kumaresan, Sakthi Preethi; McIntosh, Dawn M.; Pontikakis, Emmanuel D.; Xu, Zuobing; Zhang, Yi

    2006-01-01

    This paper describes the results of a significant research and development effort conducted at NASA Ames Research Center to develop new text mining techniques to discover anomalies in free-text reports regarding system health and safety of two aerospace systems. We discuss two problems of significant importance in the aviation industry. The first problem is that of automatic anomaly discovery about an aerospace system through the analysis of tens of thousands of free-text problem reports that are written about the system. The second problem that we address is that of automatic discovery of recurring anomalies, i.e., anomalies that may be described in different ways by different authors, at varying times and under varying conditions, but that are truly about the same part of the system. The intent of recurring anomaly identification is to determine project or system weaknesses or high-risk issues. The discovery of recurring anomalies is a key goal in building safe, reliable, and cost-effective aerospace systems. We address the anomaly discovery problem on thousands of free-text reports using two strategies: (1) as an unsupervised learning problem where an algorithm takes free-text reports as input and automatically groups them into different bins, where each bin corresponds to a different unknown anomaly category; and (2) as a supervised learning problem where the algorithm classifies the free-text reports into one of a number of known anomaly categories. We then discuss the application of these methods to the problem of discovering recurring anomalies. In fact, the special nature of recurring anomalies (very small cluster sizes) requires incorporating new methods and measures to enhance the original approach for anomaly detection.

  6. Anomaly Detection in Gamma-Ray Vehicle Spectra with Principal Components Analysis and Mahalanobis Distances

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tardiff, Mark F.; Runkle, Robert C.; Anderson, K. K.

    2006-01-23

    The goal of primary radiation monitoring in support of routine screening and emergency response is to detect characteristics in vehicle radiation signatures that indicate the presence of potential threats. Two conceptual approaches to analyzing gamma-ray spectra for threat detection are isotope identification and anomaly detection. While isotope identification is the time-honored method, an emerging technique is anomaly detection, which uses benign vehicle gamma-ray signatures to define an expectation of the radiation signature for vehicles that do not pose a threat. Newly acquired spectra are then compared to this expectation using statistical criteria that reflect acceptable false alarm rates and probabilities of detection. The gamma-ray spectra analyzed here were collected at a U.S. land Port of Entry (POE) using a NaI-based radiation portal monitor (RPM). The raw data were analyzed to develop a benign vehicle expectation by decimating the original pulse-height channels to 35 energy bins, extracting composite variables via principal components analysis (PCA), and estimating statistically weighted distances from the mean vehicle spectrum with the Mahalanobis distance (MD) metric. This paper reviews the methods used to establish the anomaly identification criteria and presents a systematic analysis of the response of the combined PCA and MD algorithm to modeled mono-energetic gamma-ray sources.
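
    A minimal sketch of the two-stage screen described above, assuming scikit-learn and SciPy: compress binned spectra with PCA, then score new spectra by squared Mahalanobis distance in the component space against a chi-square alarm threshold. The bin count, component count, and quantile are illustrative, not the deployed values.

    ```python
    import numpy as np
    from scipy.stats import chi2
    from sklearn.decomposition import PCA

    rng = np.random.default_rng(6)
    benign = rng.poisson(50.0, size=(5000, 35)).astype(float)   # 35-bin benign spectra

    pca = PCA(n_components=5).fit(benign)
    scores = pca.transform(benign)
    center = scores.mean(axis=0)
    cov_inv = np.linalg.inv(np.cov(scores, rowvar=False))

    def mahalanobis_sq(spectrum):
        d = pca.transform(spectrum.reshape(1, -1))[0] - center
        return float(d @ cov_inv @ d)

    threshold = chi2.ppf(0.9999, df=5)          # alarm threshold on squared distance
    test = rng.poisson(50.0, size=35).astype(float)
    test[20:25] += 400.0                        # spectral bump from a hypothetical source
    print(mahalanobis_sq(test) > threshold)     # -> True
    ```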

  7. Intelligent agent-based intrusion detection system using enhanced multiclass SVM.

    PubMed

    Ganapathy, S; Yogesh, P; Kannan, A

    2012-01-01

    Intrusion detection systems were used in the past along with various techniques to detect intrusions in networks effectively. However, most of these systems are able to detect the intruders only with high false alarm rate. In this paper, we propose a new intelligent agent-based intrusion detection model for mobile ad hoc networks using a combination of attribute selection, outlier detection, and enhanced multiclass SVM classification methods. For this purpose, an effective preprocessing technique is proposed that improves the detection accuracy and reduces the processing time. Moreover, two new algorithms, namely, an Intelligent Agent Weighted Distance Outlier Detection algorithm and an Intelligent Agent-based Enhanced Multiclass Support Vector Machine algorithm are proposed for detecting the intruders in a distributed database environment that uses intelligent agents for trust management and coordination in transaction processing. The experimental results of the proposed model show that this system detects anomalies with low false alarm rate and high-detection rate when tested with KDD Cup 99 data set.

  9. Novel data visualizations of X-ray data for aviation security applications using the Open Threat Assessment Platform (OTAP)

    NASA Astrophysics Data System (ADS)

    Gittinger, Jaxon M.; Jimenez, Edward S.; Holswade, Erica A.; Nunna, Rahul S.

    2017-02-01

    This work demonstrates the implementation of traditional and non-traditional visualizations of x-ray images for aviation security applications that become feasible with open system architecture initiatives such as the Open Threat Assessment Platform (OTAP). Anomalies of interest to aviation security are fluid: their characteristic signals can evolve rapidly. OTAP is a limited-scope, open-architecture baggage screening prototype intended to allow third-party vendors to develop and easily implement, integrate, and deploy detection algorithms and specialized hardware on a field-deployable screening technology [13]. In this study, stereoscopic images were created using an unmodified, field-deployed system and rendered on the Oculus Rift, a commercial virtual reality video gaming headset. The example described in this work is not dependent on the Oculus Rift and is possible with any comparable hardware configuration capable of rendering stereoscopic images. The depth information provided by viewing the images will aid in the detection of characteristic signals from anomalies of interest. If successful, OTAP has the potential to allow aviation security to adapt more fluidly to the evolution of anomalies of interest. This work demonstrates one example that is easily implemented on the OTAP platform and that could inform the next generation of ATR algorithms and data visualization approaches.

  10. Verification of Minimum Detectable Activity for Radiological Threat Source Search

    NASA Astrophysics Data System (ADS)

    Gardiner, Hannah; Myjak, Mitchell; Baciak, James; Detwiler, Rebecca; Seifert, Carolyn

    2015-10-01

    The Department of Homeland Security's Domestic Nuclear Detection Office is working to develop advanced technologies that will improve the ability to detect, localize, and identify radiological and nuclear sources from airborne platforms. The Airborne Radiological Enhanced-sensor System (ARES) program is developing advanced data fusion algorithms for analyzing data from a helicopter-mounted radiation detector. This detector platform provides a rapid, wide-area assessment of radiological conditions at ground level. The NSCRAD (Nuisance-rejection Spectral Comparison Ratios for Anomaly Detection) algorithm was developed to distinguish low-count sources of interest from benign naturally occurring radiation and irrelevant nuisance sources. It uses a number of broad, overlapping regions of interest to statistically compare each newly measured spectrum with the current estimate for the background to identify anomalies. We recently developed a method to estimate the minimum detectable activity (MDA) of NSCRAD in real time. We present this method here and report on the MDA verification using both laboratory measurements and simulated injects on measured backgrounds at or near the detection limits. This work is supported by the US Department of Homeland Security, Domestic Nuclear Detection Office, under competitively awarded contract/IAA HSHQDC-12-X-00376. This support does not constitute an express or implied endorsement on the part of the Gov't.

  11. Road detection and buried object detection in elevated EO/IR imagery

    NASA Astrophysics Data System (ADS)

    Kennedy, Levi; Kolba, Mark P.; Walters, Joshua R.

    2012-06-01

    To assist the warfighter in visually identifying potentially dangerous roadside objects, the U.S. Army RDECOM CERDEC Night Vision and Electronic Sensors Directorate (NVESD) has developed an elevated video sensor system testbed for data collection. This system provides color and mid-wave infrared (MWIR) imagery. Signal Innovations Group (SIG) has developed an automated processing capability that detects the road within the sensor field of view and identifies potentially threatening buried objects within the detected road. The road detection algorithm leverages system metadata to project the collected imagery onto a flat ground plane, allowing for more accurate detection of the road as well as the direct specification of realistic physical constraints in the shape of the detected road. Once the road has been detected in an image frame, a buried object detection algorithm is applied to search for threatening objects within the detected road space. The buried object detection algorithm leverages textural and pixel intensity-based features to detect potential anomalies and then classifies them as threatening or non-threatening objects. Both the road detection and the buried object detection algorithms have been developed to facilitate their implementation in real-time in the NVESD system.

  12. Methodology of automated ionosphere front velocity estimation for ground-based augmentation of GNSS

    NASA Astrophysics Data System (ADS)

    Bang, Eugene; Lee, Jiyun

    2013-11-01

    Ionospheric anomalies occurring during severe ionospheric storms can pose integrity threats to Global Navigation Satellite System (GNSS) Ground-Based Augmentation Systems (GBAS). Ionospheric anomaly threat models for each region of operation need to be developed to analyze the potential impact of these anomalies on GBAS users and develop mitigation strategies. Along with the magnitude of ionospheric gradients, the speed of the ionosphere "fronts" in which these gradients are embedded is an important parameter for simulation-based GBAS integrity analysis. This paper presents a methodology for automated ionosphere front velocity estimation which will be used to analyze a vast amount of ionospheric data, build ionospheric anomaly threat models for different regions, and monitor ionospheric anomalies continuously going forward. This procedure automatically selects stations that show a similar trend of ionospheric delays, computes the orientation of detected fronts using a three-station-based trigonometric method, and estimates speeds for the front using a two-station-based method. It also includes fine-tuning methods to make the estimation robust against faulty measurements and modeling errors. The paper demonstrates the performance of the algorithm by comparing the results of automated speed estimation to those computed manually in previous work. All speed estimates from the automated algorithm fall within error bars of ± 30% of the manually computed speeds. In addition, this algorithm is used to populate the current threat space with newly generated threat points. A larger number of velocity estimates helps us to better understand the behavior of ionospheric gradients under geomagnetic storm conditions.

  13. Automated novelty detection in the WISE survey with one-class support vector machines

    NASA Astrophysics Data System (ADS)

    Solarz, A.; Bilicki, M.; Gromadzki, M.; Pollo, A.; Durkalec, A.; Wypych, M.

    2017-10-01

    Wide-angle photometric surveys of previously uncharted sky areas or wavelength regimes will always bring in unexpected sources - novelties or even anomalies - whose existence and properties cannot be easily predicted from earlier observations. Such objects can be efficiently located with novelty detection algorithms. Here we present an application of such a method, called one-class support vector machines (OCSVM), to search for anomalous patterns among sources preselected from the mid-infrared AllWISE catalogue covering the whole sky. To create a model of expected data we train the algorithm on a set of objects with spectroscopic identifications from the SDSS DR13 database, present also in AllWISE. The OCSVM method detects as anomalous those sources whose patterns - WISE photometric measurements in this case - are inconsistent with the model. Among the detected anomalies we find artefacts, such as objects with spurious photometry due to blending, but more importantly also real sources of genuine astrophysical interest. Among the latter, OCSVM has identified a sample of heavily reddened AGN/quasar candidates distributed uniformly over the sky and in a large part absent from other WISE-based AGN catalogues. It also allowed us to find a specific group of sources of mixed types, mostly stars and compact galaxies. By combining the semi-supervised OCSVM algorithm with standard classification methods it will be possible to improve the latter by accounting for sources which are not present in the training sample, but are otherwise well-represented in the target set. Anomaly detection adds flexibility to automated source separation procedures and helps verify the reliability and representativeness of the training samples. It should be thus considered as an essential step in supervised classification schemes to ensure completeness and purity of produced catalogues. The catalogues of outlier data are only available at the CDS via anonymous ftp to http://cdsarc.u-strasbg.fr (http://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/606/A39

  14. Hypergraph-based anomaly detection of high-dimensional co-occurrences.

    PubMed

    Silva, Jorge; Willett, Rebecca

    2009-03-01

    This paper addresses the problem of detecting anomalous multivariate co-occurrences using a limited number of unlabeled training observations. A novel method based on using a hypergraph representation of the data is proposed to deal with this very high-dimensional problem. Hypergraphs constitute an important extension of graphs which allow edges to connect more than two vertices simultaneously. A variational Expectation-Maximization algorithm for detecting anomalies directly on the hypergraph domain without any feature selection or dimensionality reduction is presented. The resulting estimate can be used to calculate a measure of anomalousness based on the False Discovery Rate. The algorithm has O(np) computational complexity, where n is the number of training observations and p is the number of potential participants in each co-occurrence event. This efficiency makes the method ideally suited for very high-dimensional settings, and requires no tuning, bandwidth or regularization parameters. The proposed approach is validated on both high-dimensional synthetic data and the Enron email database, where p > 75,000, and it is shown that it can outperform other state-of-the-art methods.

  15. A Testbed for Data Fusion for Helicopter Diagnostics and Prognostics

    DTIC Science & Technology

    2003-03-01

    Only fragments of this report's description survived extraction. The recoverable content concerns data fusion and algorithm design and tuning to develop advanced diagnostic and prognostic techniques for aircraft health monitoring; development of models for diagnostics, prognostics, and anomaly detection; detections and prognostic prediction time horizons; and a note that the VMEP system, in particular its web component, is well suited to performing data collection.

  16. Ellipsoids for anomaly detection in remote sensing imagery

    NASA Astrophysics Data System (ADS)

    Grosklos, Guenchik; Theiler, James

    2015-05-01

    For many target and anomaly detection algorithms, a key step is the estimation of a centroid (relatively easy) and a covariance matrix (somewhat harder) that characterize the background clutter. For a background that can be modeled as a multivariate Gaussian, the centroid and covariance lead to an explicit probability density function that can be used in likelihood ratio tests for optimal detection statistics. But ellipsoidal contours can characterize a much larger class of multivariate density functions, and the ellipsoids that characterize the outer periphery of the distribution are most appropriate for detection in the low false alarm rate regime. Traditionally the sample mean and sample covariance are used to estimate ellipsoid location and shape, but these quantities are confounded both by large lever-arm outliers and by non-Gaussian distributions within the ellipsoid of interest. This paper compares a variety of centroid and covariance estimation schemes with the aim of characterizing the periphery of the background distribution. In particular, we consider a robust variant of the Khachiyan algorithm for the minimum-volume enclosing ellipsoid. The performance of these different approaches is evaluated on multispectral and hyperspectral remote sensing imagery using coverage plots of ellipsoid volume versus false alarm rate.
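
    A minimal sketch contrasting the classical estimates with one robust alternative of the kind discussed above, assuming scikit-learn: the Minimum Covariance Determinant fit resists the lever-arm outliers that stretch the sample covariance. MCD is a stand-in here; the paper's robust minimum-volume-ellipsoid variant differs.

    ```python
    import numpy as np
    from sklearn.covariance import EmpiricalCovariance, MinCovDet

    rng = np.random.default_rng(7)
    background = rng.normal(size=(1000, 6))
    outliers = rng.normal(10.0, 1.0, size=(30, 6))    # lever-arm contamination
    X = np.vstack([background, outliers])

    classic = EmpiricalCovariance().fit(X)
    robust = MinCovDet(random_state=7).fit(X)

    # Squared Mahalanobis distances of the contaminating points: the classical
    # ellipsoid is stretched toward them, so they look deceptively close; the
    # robust ellipsoid keeps them far outside the background periphery.
    print(classic.mahalanobis(outliers).mean(), robust.mahalanobis(outliers).mean())
    ```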

  17. INSPIRE Project (IoNospheric Sounding for Pre-seismic anomalies Identification REsearch): Main Results and Future Prospects

    NASA Astrophysics Data System (ADS)

    Pulinets, S. A.; Andrzej, K.; Hernandez-Pajares, M.; Cherniak, I.; Zakharenkova, I.; Rothkaehl, H.; Davidenko, D.

    2017-12-01

    The INSPIRE project is dedicated to the study of physical processes and their effects in the ionosphere that could serve as earthquake precursors, together with a detailed description of the methodology for defining ionospheric pre-seismic anomalies. It was initiated by ESA and carried out by an international consortium. The physical mechanisms of ionospheric pre-seismic anomaly generation, from the ground up to ionospheric altitudes, were formulated within the framework of the Lithosphere-Atmosphere-Ionosphere-Magnetosphere Coupling (LAIMC) model (Pulinets et al., 2015). A general algorithm for the identification of ionospheric precursors was formalized which also takes into account the external space weather factors able to generate false alarms. The importance of a special stable pattern called the "precursor mask," based on the self-similarity of pre-seismic ionospheric variations, was highlighted. The role of expert decision-making in interpreting pre-seismic anomalies when generating a seismic warning is important as well. The performance of the LAIMC seismo-ionospheric effect detection module was demonstrated using the L'Aquila 2009 earthquake as a case study. The results of the INSPIRE project demonstrate that ionospheric anomalies registered before strong earthquakes could be used as reliable precursors. A detailed classification of pre-seismic anomalies in different regions of the ionosphere was presented, and the signatures of pre-seismic anomalies as detected by ground- and satellite-based instruments were described, clarifying the methodology for identifying precursors from multi-instrument ionospheric measurements. A configuration for a dedicated multi-observation experiment and satellite payload was proposed for future implementation of the INSPIRE project results. In this regard, the multi-instrument set can be divided into two groups: space equipment and ground-based support, which could be used for real-time monitoring. Alongside the scientific and technical tasks, a set of political, logistical, and administrative problems (including certification of the approaches by the seismological community and juridical procedures by governmental authorities) must be resolved before real earthquake forecasting can be put into effect.

  18. A Locally Optimal Algorithm for Estimating a Generating Partition from an Observed Time Series and Its Application to Anomaly Detection.

    PubMed

    Ghalyan, Najah F; Miller, David J; Ray, Asok

    2018-06-12

    Estimation of a generating partition is critical for symbolization of measurements from discrete-time dynamical systems, where a sequence of symbols from a (finite-cardinality) alphabet may uniquely specify the underlying time series. Such symbolization is useful for computing measures (e.g., Kolmogorov-Sinai entropy) to identify or characterize the (possibly unknown) dynamical system. It is also useful for time series classification and anomaly detection. The seminal work of Hirata, Judd, and Kilminster (2004) derives a novel objective function, akin to a clustering objective, that measures the discrepancy between a set of reconstruction values and the points from the time series. They cast estimation of a generating partition as the minimization of their objective function. Unfortunately, their proposed algorithm is nonconvergent, with no guarantee of finding even locally optimal solutions with respect to their objective. The difficulty is a heuristic nearest-neighbor symbol assignment step. Alternatively, we develop a novel, locally optimal algorithm for their objective. We apply iterative nearest-neighbor symbol assignments with guaranteed discrepancy descent, by which joint, locally optimal symbolization of the entire time series is achieved. While most previous approaches frame generating partition estimation as a state-space partitioning problem, we recognize that minimizing the Hirata et al. (2004) objective function does not induce an explicit partitioning of the state space, but rather of the space consisting of the entire time series (effectively, clustering in a (countably) infinite-dimensional space). Our approach also amounts to a novel type of sliding-block lossy source coding. Improvement, with respect to several measures, is demonstrated over popular methods for symbolizing chaotic maps. We also apply our approach to time-series anomaly detection, considering both chaotic maps and a failure application in a polycrystalline alloy material.

  19. Revisiting negative selection algorithms.

    PubMed

    Ji, Zhou; Dasgupta, Dipankar

    2007-01-01

    This paper reviews the progress of negative selection algorithms, an anomaly/change detection approach in Artificial Immune Systems (AIS). Following its initial model, we try to identify the fundamental characteristics of this family of algorithms and summarize their diversities. There exist various elements in this method, including data representation, coverage estimate, affinity measure, and matching rules, which are discussed for different variations. The various negative selection algorithms are categorized by different criteria as well. The relationship and possible combinations with other AIS or other machine learning methods are discussed. Prospective development and applicability of negative selection algorithms and their influence on related areas are then considered in light of this discussion.
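
    A minimal sketch of a real-valued negative selection scheme of the family reviewed above: random candidate detectors are censored if they match "self" (normal) samples, and surviving detectors flag non-self data. The radii, counts, and unit-square domain are illustrative choices, not those of any specific variant.

    ```python
    import numpy as np

    rng = np.random.default_rng(8)
    self_samples = rng.uniform(0.3, 0.7, size=(200, 2))   # the normal ("self") region

    self_radius, n_detectors = 0.05, 500
    detectors = []
    while len(detectors) < n_detectors:
        d = rng.uniform(0.0, 1.0, size=2)
        # censor candidates that match the self set
        if np.min(np.linalg.norm(self_samples - d, axis=1)) > self_radius:
            detectors.append(d)
    detectors = np.array(detectors)

    def is_anomaly(x, r=0.05):
        return bool(np.any(np.linalg.norm(detectors - x, axis=1) < r))

    print(is_anomaly(np.array([0.5, 0.5])),   # inside the self region -> likely False
          is_anomaly(np.array([0.05, 0.9])))  # far from self          -> likely True
    ```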

  20. Hyperspectral data collection for the assessment of target detection algorithms: the Viareggio 2013 trial

    NASA Astrophysics Data System (ADS)

    Rossi, Alessandro; Acito, Nicola; Diani, Marco; Corsini, Giovanni; De Ceglie, Sergio Ugo; Riccobono, Aldo; Chiarantini, Leandro

    2014-10-01

    Airborne hyperspectral imagery is valuable for military and civilian applications, such as target identification, detection of anomalies and changes within multiple acquisitions. In target detection (TD) applications, the performance assessment of different algorithms is an important and critical issue. In this context, the small number of public available hyperspectral data motivated us to perform an extensive measurement campaign including various operating scenarios. The campaign was organized by CISAM in cooperation with University of Pisa, Selex ES and CSSN-ITE, and it was conducted in Viareggio, Italy in May, 2013. The Selex ES airborne hyperspectral sensor SIM.GA was mounted on board of an airplane to collect images over different sites in the morning and afternoon of two subsequent days. This paper describes the hyperspectral data collection of the trial. Four different sites were set up, representing a complex urban scenario, two parking lots and a rural area. Targets with dimensions comparable to the sensor ground resolution were deployed in the sites to reproduce different operating situations. An extensive ground truth documentation completes the data collection. Experiments to test anomalous change detection techniques were set up changing the position of the deployed targets. Search and rescue scenarios were simulated to evaluate the performance of anomaly detection algorithms. Moreover, the reflectance signatures of the targets were measured on the ground to perform spectral matching in varying atmospheric and illumination conditions. The paper presents some preliminary results that show the effectiveness of hyperspectral data exploitation for the object detection tasks of interest in this work.

  1. Optimize the Coverage Probability of Prediction Interval for Anomaly Detection of Sensor-Based Monitoring Series

    PubMed Central

    Liu, Datong; Peng, Yu; Peng, Xiyuan

    2018-01-01

    Effective anomaly detection of sensing data is essential for identifying potential system failures. Because they require no prior knowledge or accumulated labels and provide estimates of uncertainty, probability prediction methods (e.g., Gaussian process regression (GPR) and the relevance vector machine (RVM)) are especially well suited to anomaly detection for sensing series. Generally, one key parameter of such prediction models is the coverage probability (CP), which controls the judging threshold for the testing sample and is generally set to a default value (e.g., 90% or 95%). There are few criteria for determining the optimal CP for anomaly detection. Therefore, this paper designs a graphic indicator, the receiver operating characteristic curve of prediction interval (ROC-PI), based on the definition of the ROC curve, which depicts the trade-off between the PI width and the PI coverage probability across a series of cut-off points. Furthermore, the Youden index is modified to assess the performance of different CPs; the optimal CP is derived by minimizing this index with the simulated annealing (SA) algorithm. Experiments conducted on two simulation datasets demonstrate the validity of the proposed method. In particular, an actual case study on sensing series from an on-orbit satellite illustrates its significant performance in practical application. PMID:29587372
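
    A minimal sketch of prediction-interval anomaly detection with GPR as described above, assuming scikit-learn: a test point is flagged when it falls outside the interval implied by a chosen coverage probability CP. The ROC-PI/Youden search over CP values would sit on top of this; the kernel and the CP value here are illustrative.

    ```python
    import numpy as np
    from scipy.stats import norm
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF, WhiteKernel

    rng = np.random.default_rng(9)
    t = np.linspace(0, 10, 200).reshape(-1, 1)
    y = np.sin(t).ravel() + rng.normal(0, 0.1, 200)

    gpr = GaussianProcessRegressor(kernel=RBF() + WhiteKernel()).fit(t, y)

    def flag(t_new, y_new, cp=0.95):
        mu, sd = gpr.predict(np.array([[t_new]]), return_std=True)
        half_width = norm.ppf(0.5 + cp / 2.0) * sd[0]   # CP sets the judging threshold
        return abs(y_new - mu[0]) > half_width

    print(flag(5.0, np.sin(5.0)), flag(5.0, np.sin(5.0) + 1.0))  # False, True
    ```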

  2. Detection of Low Temperature Volcanogenic Thermal Anomalies with ASTER

    NASA Astrophysics Data System (ADS)

    Pieri, D. C.; Baxter, S.

    2009-12-01

    Predicting volcanic eruptions is a thorny problem, as volcanoes typically exhibit idiosyncratic waxing and/or waning pre-eruption emission, geodetic, and seismic behavior. It is no surprise that increasing our accuracy and precision in eruption prediction depends on assessing the time-progressions of all relevant precursor geophysical, geochemical, and geological phenomena, and on more frequently observing volcanoes when they become restless. The ASTER instrument on the NASA Terra Earth Observing System satellite in low earth orbit provides important capabilities in the area of detection of volcanogenic anomalies such as thermal precursors and increased passive gas emissions. Its unique high spatial resolution multi-spectral thermal IR imaging data (90m/pixel; 5 bands in the 8-12um region), bore-sighted with visible and near-IR imaging data, combined with off-nadir pointing and stereo-photogrammetric capabilities, make ASTER a potentially important volcanic precursor detection tool. We are utilizing the JPL ASTER Volcano Archive (http://ava.jpl.nasa.gov) to systematically examine 80,000+ ASTER volcano images to analyze (a) thermal emission baseline behavior for over 1500 volcanoes worldwide, (b) the form and magnitude of time-dependent thermal emission variability for these volcanoes, and (c) the spatio-temporal limits of detection of pre-eruption temporal changes in thermal emission in the context of eruption precursor behavior. We are creating and analyzing a catalog of the magnitude, frequency, and distribution of volcano thermal signatures worldwide as observed from ASTER since 2000 at 90m/pixel. Of particular interest as eruption precursors are small low contrast thermal anomalies of low apparent absolute temperature (e.g., melt-water lakes, fumaroles, geysers, grossly sub-pixel hotspots), for which the signal-to-noise ratio may be marginal (e.g., scene confusion due to clouds, water and water vapor, fumarolic emissions, variegated ground emissivity, and their combinations). To systematically detect such intrinsically difficult anomalies within our large archive, we are exploring a four-step approach: (a) the recursive application of a GPU-accelerated, edge-preserving bilateral filter prepares a thermal image by removing noise and fine detail; (b) the resulting stylized filtered image is segmented by a path-independent region-growing algorithm; (c) the resulting segments are fused based on thermal affinity; and (d) fused segments are subjected to thermal and geographical tests for hotspot detection and classification, to eliminate false alarms or non-volcanogenic anomalies. We will discuss our progress in creating the general thermal anomaly catalog, as well as our algorithmic approach and results. This work was carried out at the Jet Propulsion Laboratory of the California Institute of Technology under contract to NASA.

  3. Passenger baggage object database (PBOD)

    NASA Astrophysics Data System (ADS)

    Gittinger, Jaxon M.; Suknot, April N.; Jimenez, Edward S.; Spaulding, Terry W.; Wenrich, Steve A.

    2018-04-01

    Detection of anomalies of interest in x-ray images is an ever-evolving problem that requires the rapid development of automatic detection algorithms. Automatic detection algorithms are developed using machine learning techniques, which would require developers to obtain the x-ray machine that was used to create the images being trained on, and compile all associated metadata for those images by hand. The Passenger Baggage Object Database (PBOD) and data acquisition application were designed and developed for acquiring and persisting 2-D and 3-D x-ray image data and associated metadata. PBOD was specifically created to capture simulated airline passenger "stream of commerce" luggage data, but could be applied to other areas of x-ray imaging to utilize machine-learning methods.

  4. Model-based approach for cyber-physical attack detection in water distribution systems.

    PubMed

    Housh, Mashor; Ohar, Ziv

    2018-08-01

    Modern Water Distribution Systems (WDSs) are often controlled by Supervisory Control and Data Acquisition (SCADA) systems and Programmable Logic Controllers (PLCs) which manage their operation and maintain a reliable water supply. As such, and with the cyber layer becoming a central component of WDS operations, these systems are at a greater risk of being subjected to cyberattacks. This paper offers a model-based methodology based on a detailed hydraulic understanding of WDSs combined with an anomaly detection algorithm for the identification of complex cyberattacks that cannot be fully identified by hydraulically based rules alone. The results show that the proposed algorithm is capable of achieving the best-known performance when tested on the data published in the BATtle of the Attack Detection ALgorithms (BATADAL) competition (http://www.batadal.net).

  5. Visual analytics of anomaly detection in large data streams

    NASA Astrophysics Data System (ADS)

    Hao, Ming C.; Dayal, Umeshwar; Keim, Daniel A.; Sharma, Ratnesh K.; Mehta, Abhay

    2009-01-01

    Most data streams usually are multi-dimensional, high-speed, and contain massive volumes of continuous information. They are seen in daily applications, such as telephone calls, retail sales, data center performance, and oil production operations. Many analysts want insight into the behavior of this data. They want to catch the exceptions in flight to reveal the causes of the anomalies and to take immediate action. To guide the user in finding the anomalies in the large data stream quickly, we derive a new automated neighborhood threshold marking technique, called AnomalyMarker. This technique is built on cell-based data streams and user-defined thresholds. We extend the scope of the data points around the threshold to include the surrounding areas. The idea is to define a focus area (marked area) which enables users to (1) visually group the interesting data points related to the anomalies (i.e., problems that occur persistently or occasionally) for observing their behavior; (2) discover the factors related to the anomaly by visualizing the correlations between the problem attribute with the attributes of the nearby data items from the entire multi-dimensional data stream. Mining results are quickly presented in graphical representations (i.e., tooltip) for the user to zoom into the problem regions. Different algorithms are introduced which try to optimize the size and extent of the anomaly markers. We have successfully applied this technique to detect data stream anomalies in large real-world enterprise server performance and data center energy management.

  6. Using Multiple Robust Parameter Design Techniques to Improve Hyperspectral Anomaly Detection Algorithm Performance

    DTIC Science & Technology

    2009-03-01

    Only fragments of this report's description survived extraction: a MATLAB-style preprocessing snippet that sets negative pixel values to zero to remove bad pixels, and an unrelated sentence fragment about applications ranging from packaging toothpaste to high-speed fluid dynamics.

  7. AnRAD: A Neuromorphic Anomaly Detection Framework for Massive Concurrent Data Streams.

    PubMed

    Chen, Qiuwen; Luley, Ryan; Wu, Qing; Bishop, Morgan; Linderman, Richard W; Qiu, Qinru

    2018-05-01

    The evolution of high performance computing technologies has enabled the large-scale implementation of neuromorphic models and pushed the research in computational intelligence into a new era. Among machine learning applications, unsupervised detection of anomalous streams is especially challenging due to the requirements of detection accuracy and real-time performance. Designing a computing framework that harnesses the growing computing power of multicore systems while maintaining high sensitivity and specificity to anomalies is an urgent research topic. In this paper, we propose anomaly recognition and detection (AnRAD), a bioinspired detection framework that performs probabilistic inferences. We analyze the feature dependency and develop a self-structuring method that learns an efficient confabulation network using unlabeled data. This network is capable of fast incremental learning, which continuously refines the knowledge base using streaming data. Compared with several existing anomaly detection approaches, our method provides competitive detection quality. Furthermore, we exploit the massively parallel structure of the AnRAD framework. Our implementations of the detection algorithm on the graphics processing unit and the Xeon Phi coprocessor both obtain substantial speedups over the sequential implementation on a general-purpose microprocessor. The framework provides real-time service to concurrent data streams within diversified knowledge contexts, and can be applied to large problems with multiple local patterns. Experimental results demonstrate high computing performance and memory efficiency. For vehicle behavior detection, the framework is able to monitor up to 16000 vehicles (data streams) and their interactions in real time with a single commodity coprocessor, and uses less than 0.2 ms per testing subject. Finally, the detection network is ported to our spiking neural network simulator to show the potential of adapting to emerging neuromorphic architectures.

  8. Using Deep Learning Algorithm to Enhance Image-review Software for Surveillance Cameras

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cui, Yonggang; Thomas, Maikael A.

    We propose the development of proven deep learning algorithms to flag objects and events of interest in Next Generation Surveillance System (NGSS) surveillance to make IAEA image review more efficient. Video surveillance is one of the core monitoring technologies used by the IAEA Department of Safeguards when implementing safeguards at nuclear facilities worldwide. The current image review software GARS has limited automated functions, such as scene-change detection, black image detection and missing scene analysis, but struggles with highly cluttered backgrounds. A cutting-edge algorithm to be developed in this project will enable efficient and effective searches in images and video streams by identifying and tracking safeguards-relevant objects and detecting anomalies in their vicinity. In this project, we will develop the algorithm, test it with the IAEA surveillance cameras and data sets collected at simulated nuclear facilities at BNL and SNL, and implement it in a software program for potential integration into the IAEA's IRAP (Integrated Review and Analysis Program).

  9. Freezing of Gait Detection in Parkinson's Disease: A Subject-Independent Detector Using Anomaly Scores.

    PubMed

    Pham, Thuy T; Moore, Steven T; Lewis, Simon John Geoffrey; Nguyen, Diep N; Dutkiewicz, Eryk; Fuglevand, Andrew J; McEwan, Alistair L; Leong, Philip H W

    2017-11-01

    Freezing of gait (FoG) is common in Parkinsonian gait and strongly relates to falls. Current clinical FoG assessments are patients' self-report diaries and experts' manual video analysis. Both are subjective and yield moderate reliability. Existing detection algorithms have been predominantly designed in subject-dependent settings. In this paper, we aim to develop an automated FoG detector for subject-independent settings. After extracting highly relevant features, we apply anomaly detection techniques to detect FoG events. Specifically, feature selection is performed using correlation and clusterability metrics. From a list of 244 feature candidates, 36 candidates were selected using saliency and robustness criteria. We develop an anomaly score detector with adaptive thresholding to identify FoG events. Then, using accuracy metrics, we reduce the feature list to seven candidates. Our novel multichannel freezing index was the most selective across all window sizes, achieving sensitivity (specificity) of (). On the other hand, the freezing index from the vertical axis was the best choice for a single input, achieving sensitivity (specificity) of () for ankle and () for back sensors. Our subject-independent method is not only significantly more accurate than those previously reported, but also uses a much smaller window (e.g., versus ) and/or lower tolerance (e.g., versus ).
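
    A minimal sketch of a single-channel freezing index of the kind referenced above: the ratio of accelerometer power in a "freeze" band (3-8 Hz) to a locomotor band (0.5-3 Hz), following the band convention commonly attributed to Moore et al.; thresholding this score per window is one simple FoG detector. The sampling rate and band edges are illustrative.

    ```python
    import numpy as np
    from scipy.signal import welch

    def freezing_index(window, fs=100.0):
        f, pxx = welch(window, fs=fs, nperseg=min(256, len(window)))
        freeze = pxx[(f >= 3.0) & (f < 8.0)].sum()        # trembling-in-place band
        locomotor = pxx[(f >= 0.5) & (f < 3.0)].sum()     # normal gait band
        return freeze / (locomotor + 1e-12)

    rng = np.random.default_rng(10)
    t = np.arange(0, 4, 1 / 100.0)
    walking = np.sin(2 * np.pi * 1.5 * t) + 0.05 * rng.normal(size=t.size)
    trembling = np.sin(2 * np.pi * 6.0 * t) + 0.05 * rng.normal(size=t.size)
    print(freezing_index(walking), freezing_index(trembling))   # low vs. high
    ```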

  10. Revolution in nuclear detection affairs

    NASA Astrophysics Data System (ADS)

    Stern, Warren M.

    2014-05-01

    The detection of nuclear or radioactive materials for homeland or national security purposes is inherently difficult. This is one reason detection efforts must be seen as just one part of an overall nuclear defense strategy which includes, inter alia, material security, detection, interdiction, consequence management and recovery. Nevertheless, one could argue that there has been a revolution in detection affairs in the past several decades as the innovative application of new technology has changed the character and conduct of detection operations. This revolution will likely be most effectively reinforced in the coming decades with the networking of detectors and innovative application of anomaly detection algorithms.

  11. Deep learning algorithms for detecting explosive hazards in ground penetrating radar data

    NASA Astrophysics Data System (ADS)

    Besaw, Lance E.; Stimac, Philip J.

    2014-05-01

    Buried explosive hazards (BEHs) have been, and continue to be, one of the deadliest threats in modern conflicts. Current handheld sensors rely on a highly trained operator to be effective in detecting BEHs. New algorithms are needed to reduce the burden on the operator and improve the performance of handheld BEH detectors. Traditional anomaly detection and discrimination algorithms use "hand-engineered" feature extraction techniques to characterize and classify threats. In this work we use a Deep Belief Network (DBN) to transcend the traditional approaches of BEH detection (e.g., principal component analysis and real-time novelty detection techniques). DBNs are pretrained using an unsupervised learning algorithm to generate compressed representations of unlabeled input data and form feature detectors. They are then fine-tuned using a supervised learning algorithm to form a predictive model. Using ground penetrating radar (GPR) data collected by a robotic cart swinging a handheld detector, our research demonstrates that relatively small DBNs can learn to model GPR background signals and detect BEHs with an acceptable false alarm rate (FAR). In this work, our DBNs achieved 91% probability of detection (Pd) with 1.4 false alarms per square meter when evaluated on anti-tank and anti-personnel targets at temperate and arid test sites. This research demonstrates that DBNs are a viable approach to detect and classify BEHs.
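
    The record does not include code, and scikit-learn's BernoulliRBM stacked with a logistic-regression output layer is only a shallow stand-in for a fine-tuned DBN, but it captures the pretrain-then-supervise recipe described above. All data below are synthetic placeholders for normalized GPR feature vectors.

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.neural_network import BernoulliRBM
    from sklearn.pipeline import Pipeline

    rng = np.random.default_rng(0)

    # Synthetic stand-ins for normalized GPR features in [0, 1]:
    # class 0 = background clutter, class 1 = buried-target response.
    X_bg = rng.beta(2, 5, size=(500, 64))
    X_tgt = np.clip(rng.beta(2, 5, size=(500, 64)) + 0.25, 0.0, 1.0)
    X = np.vstack([X_bg, X_tgt])
    y = np.array([0] * 500 + [1] * 500)

    # Unsupervised pretraining (RBM feature detectors) followed by a
    # supervised output layer; true DBN fine-tuning would also adjust the
    # RBM weights, which this sketch omits.
    model = Pipeline([
        ("rbm", BernoulliRBM(n_components=32, learning_rate=0.05,
                             n_iter=20, random_state=0)),
        ("clf", LogisticRegression(max_iter=1000)),
    ])
    model.fit(X, y)
    print("training accuracy:", model.score(X, y))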

  12. Automatic detection of multiple UXO-like targets using magnetic anomaly inversion and self-adaptive fuzzy c-means clustering

    NASA Astrophysics Data System (ADS)

    Yin, Gang; Zhang, Yingtang; Fan, Hongbo; Ren, Guoquan; Li, Zhining

    2017-12-01

    We have developed a method for automatically detecting UXO-like targets based on magnetic anomaly inversion and self-adaptive fuzzy c-means clustering. Magnetic anomaly inversion methods are used to estimate the initial locations of multiple UXO-like sources. Although these initial locations have some errors with respect to the real positions, they form dense clouds around the actual positions of the magnetic sources. We then use the self-adaptive fuzzy c-means clustering algorithm to cluster these initial locations. The estimated number of cluster centroids represents the number of targets, and the cluster centroids are regarded as the locations of the magnetic targets. The effectiveness of the method has been demonstrated using synthetic datasets. Computational results show that the proposed method can be applied to the case of several UXO-like targets that are randomly scattered within a confined, shallow subsurface volume. A field test was carried out to test the validity of the proposed method, and the experimental results show that the prearranged magnets can be detected unambiguously and located precisely.
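
    The self-adaptive variant in the paper also chooses the number of clusters automatically; the plain fuzzy c-means sketch below, with the cluster count supplied by hand, illustrates only the core step of collapsing noisy "initial location" clouds onto source positions. All names and numbers are illustrative.

    import numpy as np

    def fuzzy_c_means(points, n_clusters, m=2.0, n_iter=100, seed=0):
        """Plain fuzzy c-means (Bezdek): alternate between weighted centroid
        updates and membership updates u_ik ~ d_ik^(-2/(m-1))."""
        rng = np.random.default_rng(seed)
        u = rng.random((len(points), n_clusters))
        u /= u.sum(axis=1, keepdims=True)            # fuzzy memberships
        for _ in range(n_iter):
            w = u ** m
            centers = (w.T @ points) / w.sum(axis=0)[:, None]
            d = np.linalg.norm(points[:, None, :] - centers[None, :, :], axis=2)
            d = np.maximum(d, 1e-12)
            u = d ** (-2.0 / (m - 1.0))
            u /= u.sum(axis=1, keepdims=True)
        return centers, u

    # Noisy inversion estimates scattered around two hypothetical UXO sources.
    rng = np.random.default_rng(1)
    cloud_a = rng.normal([10.0, 5.0, -1.5], 0.3, size=(40, 3))
    cloud_b = rng.normal([25.0, 12.0, -2.0], 0.3, size=(40, 3))
    centers, _ = fuzzy_c_means(np.vstack([cloud_a, cloud_b]), n_clusters=2)
    print(np.round(centers, 2))     # centroids ~ the two source locations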

  13. Automated vehicle detection in forward-looking infrared imagery.

    PubMed

    Der, Sandor; Chan, Alex; Nasrabadi, Nasser; Kwon, Heesung

    2004-01-10

    We describe an algorithm for the detection and clutter rejection of military vehicles in forward-looking infrared (FLIR) imagery. The detection algorithm is designed to be a prescreener that selects regions for further analysis and uses a spatial anomaly approach that looks for target-sized regions of the image that differ in texture, brightness, edge strength, or other spatial characteristics. The features are linearly combined to form a confidence image that is thresholded to find likely target locations. The clutter rejection portion uses target-specific information extracted from training samples to reduce the false alarms of the detector. The outputs of the clutter rejecter and detector are combined by a higher-level evidence integrator to improve performance over simple concatenation of the detector and clutter rejecter. The algorithm has been applied to a large number of FLIR imagery sets, and some of these results are presented here.
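
    As a toy illustration of the prescreener idea -- per-pixel features measuring how a target-sized region differs from its surroundings, linearly combined into a confidence image and thresholded -- here is a sketch with two features only; the real detector uses more features (texture, etc.) and trained weights, and every parameter below is made up.

    import numpy as np
    from scipy import ndimage

    def prescreen(image, target_size=9, weights=(1.0, 1.0), threshold=2.5):
        """Confidence image = weighted sum of standardized feature maps."""
        local_mean = ndimage.uniform_filter(image, target_size)
        background = ndimage.uniform_filter(image, 4 * target_size)
        brightness = np.abs(local_mean - background)      # local contrast

        grad = ndimage.gaussian_gradient_magnitude(image, sigma=1.0)
        edge_strength = ndimage.uniform_filter(grad, target_size)

        def z(x):                     # standardize before combining
            return (x - x.mean()) / (x.std() + 1e-12)

        confidence = weights[0] * z(brightness) + weights[1] * z(edge_strength)
        return confidence, confidence > threshold

    rng = np.random.default_rng(0)
    scene = rng.normal(0.0, 1.0, (128, 128))
    scene[60:69, 60:69] += 3.0                 # bright target-sized blob
    confidence, hits = prescreen(scene)
    print("candidate pixels flagged:", int(hits.sum()))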

  14. Event Detection in Aerospace Systems using Centralized Sensor Networks: A Comparative Study of Several Methodologies

    NASA Technical Reports Server (NTRS)

    Mehr, Ali Farhang; Sauvageon, Julien; Agogino, Alice M.; Tumer, Irem Y.

    2006-01-01

    Recent advances in micro electromechanical systems technology, digital electronics, and wireless communications have enabled development of low-cost, low-power, multifunctional miniature smart sensors. These sensors can be deployed throughout a region in an aerospace vehicle to build a network for measurement, detection and surveillance applications. Event detection using such centralized sensor networks is often regarded as one of the most promising health management technologies in aerospace applications where timely detection of local anomalies has a great impact on the safety of the mission. In this paper, we propose to conduct a qualitative comparison of several local event detection algorithms for centralized redundant sensor networks. The algorithms are compared with respect to their ability to locate and evaluate an event in the presence of noise and sensor failures for various node geometries and densities.

  15. Volcanic hotspots of the central and southern Andes as seen from space by ASTER and MODVOLC between the years 2000-2011

    NASA Astrophysics Data System (ADS)

    Jay, J.; Pritchard, M. E.; Mares, P. J.; Mnich, M. E.; Welch, M. D.; Melkonian, A. K.; Aguilera, F.; Naranjo, J.; Sunagua, M.; Clavero, J. E.

    2011-12-01

    We examine 153 volcanoes and geothermal areas in the central, southern, and austral Andes for temperature anomalies between 2000-2011 from two different spaceborne sensors: 1) those automatically detected by the MODVOLC algorithm (Wright et al., 2004) from MODIS and 2) manually identified hotspots in nighttime images from ASTER. Based on previous work, we expected to find 8 thermal anomalies (volcanoes: Ubinas, Villarrica, Copahue, Láscar, Llaima, Chaitén, Puyehue-Cordón Caulle, Chiliques). We document 31 volcanic areas with pixel integrated temperatures of 4 to more than 100 K above background in at least two images, and another 29 areas that have questionable hotspots with either smaller anomalies or a hotspot in only one image. Most of the thermal anomalies are related to known activity (lava and pyroclastic flows, growing lava domes, fumaroles, and lakes) while others are of unknown origin or reflect activity at volcanoes that were not thought to be active. A handful of volcanoes exhibit temporal variations in the magnitude and location of their temperature anomaly that can be related to both documented and undocumented pulses of activity. Our survey reveals that low amplitude volcanic hotspots detectable from space are more common than expected (based on lower resolution data) and that these features could be more widely used to monitor changes in the activity of remote volcanoes. We find that the shape, size, magnitude, and location on the volcano of the thermal anomaly vary significantly from volcano to volcano, and these variations should be considered when developing algorithms for hotspot identification and detection. We compare our thermal results to satellite InSAR measurements of volcanic deformation and find that there is no simple relationship between deformation and thermal anomalies - while 31 volcanoes have continuous hotspots, at least 17 volcanoes in the same area have exhibited deformation, and these lists do not completely overlap. In order to investigate the relationship between seismic and thermal volcanic activity, we examine seismic data for 5 of the volcanoes (Uturuncu, Olca-Paruma, Ollague, Irruputuncu, and Sol de Mañana) as well as seismological reports from the Chilean geological survey SERNAGEOMIN for 11 additional volcanoes. Although there were 7 earthquakes with Mw > 7 in our study area from 2000-2010, there is essentially no evidence from ASTER or MODVOLC that the thermal anomalies were affected by seismic shaking.

  16. Advanced Health Management Algorithms for Crew Exploration Applications

    NASA Technical Reports Server (NTRS)

    Davidson, Matt; Stephens, John; Jones, Judit

    2005-01-01

    Achieving the goals of the President's Vision for Exploration will require new and innovative ways to achieve reliability increases of key systems and sub-systems. The most prominent approach used in current systems is to maintain hardware redundancy. This imposes constraints on the system and utilizes weight that could be used for payload for extended lunar, Martian, or other deep space missions. A technique to improve reliability while reducing system weight and constraints is the use of an Advanced Health Management System (AHMS). This system contains diagnostic algorithms and decision logic to mitigate or minimize the impact of system anomalies on propulsion system performance throughout the powered flight regime. The purposes of the AHMS are to increase the probability of successfully placing the vehicle into the intended orbit (Earth, Lunar, or Martian escape trajectory), increase the probability of being able to safely execute an abort after the vehicle has developed anomalous performance during the launch or ascent phases of the mission, and to minimize or mitigate anomalies during the cruise portion of the mission. This is accomplished by improving knowledge of the state of propulsion system operation at any given time through advanced algorithms, including turbomachinery vibration protection logic and an overall system analysis algorithm that utilizes an underlying physical model and a wide array of engine system operational parameters to detect and mitigate predefined engine anomalies. These algorithms are generic enough to be utilized on any propulsion system yet can be easily tailored to each application by changing input data and engine-specific parameters. The key to the advancement of such a system is the verification of the algorithms. These algorithms will be validated through the use of a database of nominal and anomalous performance from a large propulsion system where data exists for catastrophic and noncatastrophic propulsion system failures.

  17. Constraining Mass Anomalies Using Trans-dimensional Gravity Inversions

    NASA Astrophysics Data System (ADS)

    Izquierdo, K.; Montesi, L.; Lekic, V.

    2016-12-01

    The density structure of planetary interiors constitutes a key constraint on their composition, temperature, and dynamics. This has motivated the development of non-invasive methods to infer the 3D distribution of density anomalies within a planet's interior using gravity observations made from the surface or orbit. On Earth, this information can be supplemented by seismic and electromagnetic observations, but such data are generally not available on other planets and inferences must be made from gravity observations alone. Unfortunately, inferences of density anomalies from gravity are non-unique and even the dimensionality of the problem - i.e., the number of density anomalies detectable in the planetary interior - is unknown. In this project, we use the Reversible Jump Markov chain Monte Carlo (RJMCMC) algorithm to approach gravity inversions in a trans-dimensional way, that is, considering the magnitude of the mass, the latitude, longitude, depth and the number of anomalies itself as unknowns to be constrained by the observed gravity field at the surface of a planet. Our approach builds upon previous work using trans-dimensional gravity inversions in which the density contrast between the anomaly and the surrounding material is known. We validate the algorithm by analyzing a synthetic gravity field produced by a known density structure and comparing the retrieved and input density structures. We find excellent agreement between the input and retrieved structure when working in 1D and 2D domains. However, in 3D domains, comprehensive exploration of the much larger space of possible models makes search efficiency a key ingredient in successful gravity inversion. We find that with a sufficiently long RJMCMC run, it is possible to use statistical information to recover a predicted model that matches the real model. We argue that even for more complex problems, such as those constrained by real gravity acceleration data for a planet, our trans-dimensional gravity inversion algorithm provides a good option for overcoming the problem of non-uniqueness while achieving parsimony in gravity inversions.
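
    The following toy sampler sketches the trans-dimensional idea on a 1D profile with point-mass anomalies (not the authors' parameterization). With a uniform prior on the number of anomalies and birth parameters drawn from the prior, the reversible-jump acceptance ratio for birth/death moves reduces to the likelihood ratio, which is the simplification used here; a production sampler needs the full prior/proposal bookkeeping.

    import numpy as np

    rng = np.random.default_rng(0)
    xs = np.linspace(-50, 50, 101)             # observation points

    def forward(model):
        """Vertical gravity of point-mass anomalies, sum_i m_i z_i / r_i^3,
        with physical constants absorbed into the mass term."""
        g = np.zeros_like(xs)
        for x0, z0, m in model:
            g += m * z0 / ((xs - x0) ** 2 + z0 ** 2) ** 1.5
        return g

    true_model = [(-15.0, 8.0, 400.0), (20.0, 5.0, 250.0)]
    data = forward(true_model) + rng.normal(0, 0.1, xs.size)

    def log_like(model, sigma=0.1):
        r = data - forward(model)
        return -0.5 * np.sum((r / sigma) ** 2)

    def draw_anomaly():                        # draw from the (uniform) prior
        return (rng.uniform(-50, 50), rng.uniform(1, 20), rng.uniform(0, 500))

    model = [draw_anomaly()]
    ll, counts = log_like(model), []
    for _ in range(20000):
        move = rng.choice(["birth", "death", "perturb"])
        prop = list(model)
        if move == "birth":
            prop.append(draw_anomaly())
        elif move == "death" and len(prop) > 1:
            prop.pop(rng.integers(len(prop)))
        else:                                  # perturb one anomaly in place
            i = rng.integers(len(prop))
            x0, z0, m = prop[i]
            prop[i] = (x0 + rng.normal(0, 1.0), abs(z0 + rng.normal(0, 0.5)),
                       abs(m + rng.normal(0, 10.0)))
        ll_prop = log_like(prop)
        if np.log(rng.random()) < ll_prop - ll:    # simplified acceptance
            model, ll = prop, ll_prop
        counts.append(len(model))

    # The most visited anomaly count should settle near the true value of 2.
    print("most visited number of anomalies:", np.bincount(counts).argmax())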

  18. Towards the Mitigation of Correlation Effects in the Analysis of Hyperspectral Imagery with Extensions to Robust Parameter Design

    DTIC Science & Technology

    2012-08-01

    Anomaly detection algorithms are contrasted and implemented, and the use of the Normalized Difference Vegetation Index (NDVI) in post-processing is explained; atmospheric compensation is also addressed.

  19. Space Shuttle Main Engine: Advanced Health Monitoring System

    NASA Technical Reports Server (NTRS)

    Singer, Chris

    1999-01-01

    The main goal of the Space Shuttle Main Engine (SSME) Advanced Health Management system is to improve flight safety. To this end the new SSME has robust new components to improve the operating margin and operability. The features of the current SSME health monitoring system include automated checkouts, a closed-loop redundant control system, catastrophic failure mitigation, fail-operational/fail-safe algorithms, and post-flight data and inspection trend analysis. The features of the advanced health monitoring system include a real-time vibration monitoring system, a linear engine model, and an optical plume anomaly detection system. Since vibration is a fundamental measure of SSME turbopump health, it stands to reason that monitoring the vibration will give some idea of the health of the turbopumps. The challenge, however, is to avoid shutdowns that are not necessary. A sensor algorithm has been developed and exposed to over 400 test cases in order to evaluate its logic. The optical plume anomaly detection (OPAD) system has been developed to be a sensitive monitor of engine wear, erosion, and breakage.

  20. From Signature-Based Towards Behaviour-Based Anomaly Detection (Extended Abstract)

    DTIC Science & Technology

    2010-11-01

    data acquisition can serve as sensors. The de-facto standard for IP flow monitoring is the NetFlow format. Although NetFlow was originally developed by Cisco... packets with some common properties that pass through a network device. These collected flows are exported to an external device, the NetFlow... Thanks to the network-based approach using NetFlow data, the detection algorithm is host independent and highly scalable. Deep Packet Inspection

  1. Statistical Algorithms Accounting for Background Density in the Detection of UXO Target Areas at DoD Munitions Sites

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Matzke, Brett D.; Wilson, John E.; Hathaway, J.

    2008-02-12

    Statistically defensible methods are presented for developing geophysical detector sampling plans and analyzing data for munitions response sites where unexploded ordnance (UXO) may exist. Detection methods for identifying areas of elevated anomaly density relative to background density are shown. Additionally, methods are described which aid in the choice of transect pattern and spacing to assure, with a specified degree of confidence, that a target area (TA) of specific size, shape, and anomaly density will be identified using the detection methods. Methods for evaluating the sensitivity of designs to variation in certain parameters are also discussed. The methods presented have been incorporated into the Visual Sample Plan (VSP) software (free at http://dqo.pnl.gov/vsp) and demonstrated at multiple sites in the United States. Application examples from actual transect designs and surveys from the previous two years are presented.

  2. Unsupervised Spatial Event Detection in Targeted Domains with Applications to Civil Unrest Modeling

    PubMed Central

    Zhao, Liang; Chen, Feng; Dai, Jing; Hua, Ting; Lu, Chang-Tien; Ramakrishnan, Naren

    2014-01-01

    Twitter has become a popular data source as a surrogate for monitoring and detecting events. Targeted domains such as crime, election, and social unrest require the creation of algorithms capable of detecting events pertinent to these domains. Due to the unstructured language, short-length messages, dynamics, and heterogeneity typical of Twitter data streams, it is technically difficult and labor-intensive to develop and maintain supervised learning systems. We present a novel unsupervised approach for detecting spatial events in targeted domains and illustrate this approach using one specific domain, viz. civil unrest modeling. Given a targeted domain, we propose a dynamic query expansion algorithm to iteratively expand domain-related terms, and generate a tweet homogeneous graph. An anomaly identification method is utilized to detect spatial events over this graph by jointly maximizing local modularity and spatial scan statistics. Extensive experiments conducted in 10 Latin American countries demonstrate the effectiveness of the proposed approach. PMID:25350136

  3. Algorithms Based on CWT and Classifiers to Control Cardiac Alterations and Stress Using an ECG and a SCR

    PubMed Central

    Villarejo, María Viqueira; Zapirain, Begoña García; Zorrilla, Amaia Méndez

    2013-01-01

    This paper presents the results of using a commercial pulsimeter as an electrocardiogram (ECG) for wireless detection of cardiac alterations and stress levels for home control. For these purposes, signal processing techniques (Continuous Wavelet Transform (CWT) and J48) have been used, respectively. The designed algorithm analyses the ECG signal and is able to detect the heart rate (99.42%), arrhythmia (93.48%) and extrasystoles (99.29%). The detection of stress level is complemented with the Skin Conductance Response (SCR), whose success rate is 94.02%. Heart rate variability does not add value to the stress detection in this case. With this pulsimeter, it is possible to prevent and detect anomalies in a non-intrusive way as part of a telemedicine system. It is also possible to use it during physical activity, because the CWT minimizes motion artifacts. PMID:23666135

  4. Algorithms based on CWT and classifiers to control cardiac alterations and stress using an ECG and a SCR.

    PubMed

    Villarejo, María Viqueira; Zapirain, Begoña García; Zorrilla, Amaia Méndez

    2013-05-10

    This paper presents the results of using a commercial pulsimeter as an electrocardiogram (ECG) for wireless detection of cardiac alterations and stress levels for home control. For these purposes, signal processing techniques (Continuous Wavelet Transform (CWT) and J48) have been used, respectively. The designed algorithm analyses the ECG signal and is able to detect the heart rate (99.42%), arrhythmia (93.48%) and extrasystoles (99.29%). The detection of stress level is complemented with the Skin Conductance Response (SCR), whose success rate is 94.02%. Heart rate variability does not add value to the stress detection in this case. With this pulsimeter, it is possible to prevent and detect anomalies in a non-intrusive way as part of a telemedicine system. It is also possible to use it during physical activity, because the CWT minimizes motion artifacts.

  5. Amplitude inversion of the 2D analytic signal of magnetic anomalies through the differential evolution algorithm

    NASA Astrophysics Data System (ADS)

    Ekinci, Yunus Levent; Özyalın, Şenol; Sındırgı, Petek; Balkaya, Çağlayan; Göktürkler, Gökhan

    2017-12-01

    In this work, analytic signal amplitude (ASA) inversion of total field magnetic anomalies has been achieved by differential evolution (DE), a population-based evolutionary metaheuristic algorithm. Using an elitist strategy, the applicability and effectiveness of the proposed inversion algorithm have been evaluated through the anomalies due to both hypothetical model bodies and real isolated geological structures. Some parameter tuning studies, relying mainly on choosing the optimum control parameters of the algorithm, have also been performed to enhance the performance of the proposed metaheuristic. Since ASAs of magnetic anomalies are independent of both the ambient field direction and the direction of magnetization of the causative sources in a two-dimensional (2D) case, inversions of synthetic noise-free and noisy single model anomalies have produced satisfactory solutions showing the practical applicability of the algorithm. Moreover, hypothetical studies using multiple model bodies have clearly shown that the DE algorithm is able to cope with complicated anomalies and some interferences from neighbouring sources. The proposed algorithm has then been used to invert small- (120 m) and large-scale (40 km) magnetic profile anomalies of an iron deposit (Kesikköprü-Bala, Turkey) and a deep-seated magnetized structure (Sea of Marmara, Turkey), respectively, to determine depths, geometries and exact origins of the source bodies. Inversion studies have yielded geologically reasonable solutions which are also in good accordance with the results of normalized full gradient and Euler deconvolution techniques. Thus, we propose the use of DE not only for the amplitude inversion of 2D analytic signals of magnetic profile anomalies having induced or remanent magnetization effects but also for other low-dimensional data inversions in geophysics. A part of this paper was presented as an abstract at the 2nd International Conference on Civil and Environmental Engineering, 8-10 May 2017, Cappadocia-Nevşehir (Turkey).
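
    SciPy ships a general-purpose DE optimizer, so the inversion loop itself is easy to sketch. The closed-form ASA model below, A(x) = k((x - x0)^2 + z0^2)^(-q) with a source-type-dependent shape factor q, is a common idealized 2D form and stands in for the paper's forward model; every number here is synthetic.

    import numpy as np
    from scipy.optimize import differential_evolution

    x = np.linspace(-100, 100, 201)            # profile coordinate (m)

    def asa(params):
        """Idealized 2D analytic signal amplitude: k (scale), x0 (location),
        z0 (depth), q (shape factor, e.g. ~1.5 for a horizontal cylinder)."""
        k, x0, z0, q = params
        return k * ((x - x0) ** 2 + z0 ** 2) ** (-q)

    rng = np.random.default_rng(0)
    observed = asa([5.0e6, 12.0, 25.0, 1.5]) + rng.normal(0, 0.5, x.size)

    def misfit(params):                        # RMS data misfit
        return np.sqrt(np.mean((observed - asa(params)) ** 2))

    bounds = [(1e5, 1e8), (-50, 50), (1, 60), (0.5, 2.5)]
    result = differential_evolution(misfit, bounds, seed=0)
    print("k, x0, z0, q =", np.round(result.x, 2))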

  6. Revolution in Detection Affairs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stern W.

    The detection of nuclear or radioactive materials for homeland or national security purposes is inherently difficult. This is one reason detection efforts must be seen as just one part of an overall nuclear defense strategy which includes, inter alia, material security, detection, interdiction, consequence management and recovery. Nevertheless, one could argue that there has been a revolution in detection affairs in the past several decades as the innovative application of new technology has changed the character and conduct of detection operations. This revolution will likely be most effectively reinforced in the coming decades with the networking of detectors and innovative application of anomaly detection algorithms.

  7. Revolution in nuclear detection affairs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stern, Warren M.

    The detection of nuclear or radioactive materials for homeland or national security purposes is inherently difficult. This is one reason detection efforts must be seen as just one part of an overall nuclear defense strategy which includes, inter alia, material security, detection, interdiction, consequence management and recovery. Nevertheless, one could argue that there has been a revolution in detection affairs in the past several decades as the innovative application of new technology has changed the character and conduct of detection operations. This revolution will likely be most effectively reinforced in the coming decades with the networking of detectors and innovative application of anomaly detection algorithms.

  8. Intrusion detection using rough set classification.

    PubMed

    Zhang, Lian-hua; Zhang, Guan-hua; Zhang, Jie; Bai, Ying-cai

    2004-09-01

    Recently, machine learning-based intrusion detection approaches have been the subject of extensive research because they can detect both misuse and anomalies. In this paper, rough set classification (RSC), a modern learning algorithm, is used to rank the features extracted for detecting intrusions and to generate intrusion detection models. Feature ranking is a very critical step when building the model. RSC performs feature ranking before generating rules, and converts the feature ranking to a minimal hitting set problem addressed using a genetic algorithm (GA). In classical approaches using the Support Vector Machine (SVM), this is done by executing many iterations, each of which removes one useless feature. Compared with those methods, our method can avoid many iterations. In addition, a hybrid genetic algorithm is proposed to increase the convergence speed and decrease the training time of RSC. The models generated by RSC take the form of "IF-THEN" rules, which have the advantage of explainability. Tests and comparison of RSC with SVM on DARPA benchmark data showed that for Probe and DoS attacks both RSC and SVM yielded highly accurate results (greater than 99% accuracy on the testing set).

  9. Using Satellite Data to Characterize the Temporal Thermal Behavior of an Active Volcano: Mount St. Helens, WA

    NASA Technical Reports Server (NTRS)

    Vaughan, R. Greg; Hook, Simon J.

    2006-01-01

    ASTER thermal infrared data over Mt. St Helens were used to characterize its thermal behavior from Jun 2000 to Feb 2006. Prior to the Oct 2004 eruption, the average crater temperature varied seasonally between -12 and 6 C. After the eruption, maximum single-pixel temperature increased from 10 C (Oct 2004) to 96 C (Aug 2005), then showed a decrease to Feb 2006. The initial increase in temperature was correlated with dome morphology and growth rate and the subsequent decrease was interpreted to relate to both seasonal trends and a decreased growth rate/increased cooling rate, possibly suggesting a significant change in the volcanic system. A single-pixel ASTER thermal anomaly first appeared on Oct 1, 2004, eleven hours after the first eruption - 10 days before new lava was exposed at the surface. By contrast, an automated algorithm for detecting thermal anomalies in MODIS data did not trigger an alert until Dec 18. However, a single-pixel thermal anomaly first appeared in MODIS channel 23 (4 um) on Oct 13, 12 days after the first eruption - 2 days after lava was exposed. The earlier thermal anomaly detected with ASTER data is attributed to the higher spatial resolution (90 m) compared with MODIS (1 km), and the earlier visual observation of anomalous pixels compared to the automated detection method suggests that local spatial statistics and background radiance data could improve automated detection methods.

  10. #FluxFlow: Visual Analysis of Anomalous Information Spreading on Social Media.

    PubMed

    Zhao, Jian; Cao, Nan; Wen, Zhen; Song, Yale; Lin, Yu-Ru; Collins, Christopher

    2014-12-01

    We present FluxFlow, an interactive visual analysis system for revealing and analyzing anomalous information spreading in social media. Every day, millions of messages are created, commented on, and shared by people on social media websites such as Twitter and Facebook. This provides valuable data for researchers and practitioners in many application domains, such as marketing, to inform decision-making. Distilling valuable social signals from the huge crowd's messages, however, is challenging, due to heterogeneous and dynamic crowd behaviors. The challenge is rooted in data analysts' capability of discerning anomalous information behaviors, such as the spreading of rumors or misinformation, from more conventional patterns, such as popular topics and newsworthy events, in a timely fashion. FluxFlow incorporates advanced machine learning algorithms to detect anomalies, and offers a set of novel visualization designs for presenting the detected threads for deeper analysis. We evaluated FluxFlow with real datasets containing the Twitter feeds captured during significant events such as Hurricane Sandy. Through quantitative measurements of the algorithmic performance and qualitative interviews with domain experts, the results show that the back-end anomaly detection model is effective in identifying anomalous retweeting threads, and its front-end interactive visualizations are intuitive and useful for analysts to discover insights in data and comprehend the underlying analytical model.

  11. Hot spots of multivariate extreme anomalies in Earth observations

    NASA Astrophysics Data System (ADS)

    Flach, M.; Sippel, S.; Bodesheim, P.; Brenning, A.; Denzler, J.; Gans, F.; Guanche, Y.; Reichstein, M.; Rodner, E.; Mahecha, M. D.

    2016-12-01

    Anomalies in Earth observations might indicate data quality issues, extremes, or the change of underlying processes within a highly multivariate system. Thus, considering the multivariate constellation of variables for extreme detection yields crucial additional information over conventional univariate approaches. We highlight areas in which multivariate extreme anomalies are more likely to occur, i.e. hot spots of extremes in global atmospheric Earth observations that impact the Biosphere. In addition, we present the year of the most unusual multivariate extreme between 2001 and 2013 and show that these coincide with well-known high-impact extremes. Technically speaking, we account for multivariate extremes by using three sophisticated algorithms adapted from computer science applications. Namely, an ensemble of the k-nearest-neighbours mean distance, kernel density estimation, and an approach based on recurrences are used. However, the impact of atmosphere extremes on the Biosphere might largely depend on what is considered to be normal, i.e. the shape of the mean seasonal cycle and its inter-annual variability. We identify regions with similar mean seasonality by means of dimensionality reduction in order to estimate in each region both the `normal' variance and robust thresholds for detecting the extremes. In addition, we account for challenges like heteroscedasticity in Northern latitudes. Apart from hot spot areas, anomalies in the atmosphere time series that can only be detected by a multivariate approach, but not by a simple univariate approach, are of particular interest. Such an anomalous constellation of atmosphere variables is of interest if it impacts the Biosphere. The multivariate constellation of such an anomalous part of a time series is shown in one case study, indicating that multivariate anomaly detection can provide novel insights into Earth observations.
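
    Of the three detectors named above, the k-nearest-neighbours mean distance is the simplest to sketch. The example below scores synthetic multivariate observations by their mean distance to the k nearest neighbours, so records that are jointly unusual stand out even when each variable alone looks unremarkable; k and the flagging quantile are arbitrary choices.

    import numpy as np
    from sklearn.neighbors import NearestNeighbors

    rng = np.random.default_rng(0)
    normal = rng.multivariate_normal([0, 0, 0], np.eye(3), size=1000)
    extremes = rng.multivariate_normal([4, 4, 4], np.eye(3), size=10)
    X = np.vstack([normal, extremes])          # stand-in observations

    k = 20
    nn = NearestNeighbors(n_neighbors=k + 1).fit(X)
    dist, _ = nn.kneighbors(X)
    scores = dist[:, 1:].mean(axis=1)          # drop self-distance (col 0)

    threshold = np.quantile(scores, 0.99)
    print("flagged observation indices:", np.where(scores > threshold)[0])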

  12. Big Data Analysis of Manufacturing Processes

    NASA Astrophysics Data System (ADS)

    Windmann, Stefan; Maier, Alexander; Niggemann, Oliver; Frey, Christian; Bernardi, Ansgar; Gu, Ying; Pfrommer, Holger; Steckel, Thilo; Krüger, Michael; Kraus, Robert

    2015-11-01

    The high complexity of manufacturing processes and the continuously growing amount of data lead to excessive demands on the users with respect to process monitoring, data analysis and fault detection. For these reasons, problems and faults are often detected too late, maintenance intervals are chosen too short and optimization potential for higher output and increased energy efficiency is not sufficiently used. A possibility to cope with these challenges is the development of self-learning assistance systems, which identify relevant relationships by observation of complex manufacturing processes so that failures, anomalies and need for optimization are automatically detected. The assistance system developed in the present work accomplishes data acquisition, process monitoring and anomaly detection in industrial and agricultural processes. The assistance system is evaluated in three application cases: Large distillation columns, agricultural harvesting processes and large-scale sorting plants. In this paper, the developed infrastructures for data acquisition in these application cases are described as well as the developed algorithms and initial evaluation results.

  13. Optical Detection of Degraded Therapeutic Proteins.

    PubMed

    Herrington, William F; Singh, Gajendra P; Wu, Di; Barone, Paul W; Hancock, William; Ram, Rajeev J

    2018-03-23

    The quality of therapeutic proteins such as hormones, subunit and conjugate vaccines, and antibodies is critical to the safety and efficacy of modern medicine. Identifying malformed proteins at the point-of-care can prevent adverse immune reactions in patients; this is of special concern when there is an insecure supply chain resulting in the delivery of degraded, or even counterfeit, drug product. Identification of degraded protein, for example human growth hormone, is demonstrated by applying automated anomaly detection algorithms. Detection of the degraded protein differs from previous applications of machine-learning and classification to spectral analysis: only example spectra of genuine, high-quality drug products are used to construct the classifier. The algorithm is tested on Raman spectra acquired on protein dilutions typical of formulated drug product and at sample volumes of 25 µL, below the typical overfill (waste) volumes present in vials of injectable drug product. The algorithm is demonstrated to correctly classify anomalous recombinant human growth hormone (rhGH) with 92% sensitivity and 98% specificity even when the algorithm has only previously encountered high-quality drug product.
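
    The record does not name the classifier, but the setting -- train only on genuine, high-quality spectra and flag departures -- is exactly a one-class problem. A generic sketch with scikit-learn's OneClassSVM on toy "spectra" (all shapes, shifts, and noise levels invented):

    import numpy as np
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import OneClassSVM

    rng = np.random.default_rng(0)
    wavenumbers = np.linspace(0.0, 1.0, 200)

    def spectrum(shift=0.0):
        """Toy two-peak stand-in for a Raman spectrum; `shift` mimics a
        degradation-induced band change."""
        s = (np.exp(-((wavenumbers - 0.3 - shift) ** 2) / 2e-3)
             + 0.6 * np.exp(-((wavenumbers - 0.7) ** 2) / 2e-3))
        return s + rng.normal(0, 0.02, wavenumbers.size)

    # Train ONLY on genuine product, as in the paper's setting.
    X_good = np.array([spectrum() for _ in range(200)])
    clf = make_pipeline(StandardScaler(), OneClassSVM(nu=0.05, gamma="scale"))
    clf.fit(X_good)

    print(clf.predict(spectrum().reshape(1, -1)))         # +1: looks genuine
    print(clf.predict(spectrum(0.05).reshape(1, -1)))     # -1: flagged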

  14. Networked gamma radiation detection system for tactical deployment

    NASA Astrophysics Data System (ADS)

    Mukhopadhyay, Sanjoy; Maurer, Richard; Wolff, Ronald; Smith, Ethan; Guss, Paul; Mitchell, Stephen

    2015-08-01

    A networked gamma radiation detection system with directional sensitivity and energy spectral data acquisition capability is being developed by the National Security Technologies, LLC, Remote Sensing Laboratory to support the close and intense tactical engagement of law enforcement who carry out counterterrorism missions. In the proposed design, three clusters of 2″ × 4″ × 16″ sodium iodide crystals (4 each) with digiBASE-E (for list mode data collection) would be placed on the passenger side of a minivan. To enhance localization and facilitate rapid identification of isotopes, advanced smart real-time localization and radioisotope identification algorithms like WAVRAD (wavelet-assisted variance reduction for anomaly detection) and NSCRAD (nuisance-rejection spectral comparison ratio anomaly detection) will be incorporated. We will test a collection of algorithms and analyses that center on the problem of radiation detection with a distributed sensor network. We will study the basic characteristics of a radiation sensor network and focus on the trade-offs between false positive alarm rates, true positive alarm rates, and time to detect multiple radiation sources in a large area. Empirical and simulation analyses of critical system parameters, such as the number of sensors, sensor placement, and sensor response functions, will be carried out. This networked system will provide an integrated radiation detection architecture and framework with (i) a large nationally recognized search database equivalent that would help generate a common operational picture in a major radiological crisis; (ii) robust reach-back connectivity for search data to be evaluated by home teams; and, finally, (iii) the possibility of integrating search data from multi-agency responders.

  15. Nonlinear Classification of AVO Attributes Using SVM

    NASA Astrophysics Data System (ADS)

    Zhao, B.; Zhou, H.

    2005-05-01

    A key research topic in reservoir characterization is the detection of the presence of fluids using seismic and well-log data. In particular, partial gas discrimination is very challenging because low and high gas saturation can result in similar anomalies in terms of Amplitude Variation with Offset (AVO), bright spot, and velocity sag. Hence, successful fluid detection will require a good understanding of the seismic signatures of the fluids, high-quality data, and a good detection methodology. Traditional attempts at partial gas discrimination employ neural network algorithms. A new approach is to use the Support Vector Machine (SVM) (Vapnik, 1995; Liu and Sacchi, 2003). While the potential of the SVM has not been fully explored for reservoir fluid detection, the current nonlinear methods classify seismic attributes without the use of rock physics constraints. The objective of this study is to improve the capability of distinguishing a fizz-water reservoir from a commercial gas reservoir by developing a new detection method using AVO attributes and rock physics constraints. This study will first test the SVM classification with synthetic data, and then apply the algorithm to field data from the King-Kong and Lisa-Anne fields in the Gulf of Mexico. While both field areas have high-amplitude seismic anomalies, the King-Kong field produces commercial gas but the Lisa-Anne field does not. We expect that the new SVM-based nonlinear classification of AVO attributes may be able to separate commercial gas from fizz-water in these two fields.
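
    A minimal sketch of the classification step -- an RBF-kernel SVM separating two overlapping clusters of AVO intercept/gradient attributes -- is below. The cluster means and spreads are invented for illustration and are not calibrated rock physics, and the rock-physics constraints proposed in the study are not modeled.

    import numpy as np
    from sklearn.model_selection import cross_val_score
    from sklearn.svm import SVC

    rng = np.random.default_rng(0)
    cov = [[4e-4, 0.0], [0.0, 9e-4]]
    gas = rng.multivariate_normal([-0.15, -0.30], cov, size=300)
    fizz = rng.multivariate_normal([-0.11, -0.22], cov, size=300)
    X = np.vstack([gas, fizz])                 # [intercept, gradient] pairs
    y = np.array([1] * 300 + [0] * 300)        # 1 = commercial gas, 0 = fizz

    clf = SVC(kernel="rbf", C=10.0, gamma="scale")
    print("5-fold accuracy:", cross_val_score(clf, X, y, cv=5).mean())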

  16. A compressed sensing based 3D resistivity inversion algorithm for hydrogeological applications

    NASA Astrophysics Data System (ADS)

    Ranjan, Shashi; Kambhammettu, B. V. N. P.; Peddinti, Srinivasa Rao; Adinarayana, J.

    2018-04-01

    Image reconstruction from discrete electrical responses poses a number of computational and mathematical challenges. Application of smoothness-constrained regularized inversion from limited measurements may fail to detect resistivity anomalies and sharp interfaces separated by hydrostratigraphic units. Under favourable conditions, compressed sensing (CS) can be thought of as an alternative for reconstructing image features by finding sparse solutions to highly underdetermined linear systems. This paper deals with the development of a CS-assisted, 3-D resistivity inversion algorithm for use by hydrogeologists and groundwater scientists. A CS-based l1-regularized least-squares algorithm was applied to solve the resistivity inversion problem. Sparseness in the model update vector is introduced through block-oriented discrete cosine transformation, with recovery of the signal achieved through convex optimization. The equivalent quadratic program was solved using a primal-dual interior-point method. Applicability of the proposed algorithm was demonstrated using synthetic and field examples drawn from hydrogeology. The proposed algorithm outperformed the conventional (smoothness-constrained) least-squares method in recovering the model parameters with far fewer data, yet preserving the sharp resistivity fronts separated by geologic layers. Resistivity anomalies represented by discrete homogeneous blocks embedded in contrasting geologic layers were better imaged using the proposed algorithm. In comparison to the conventional algorithm, CS resulted in an efficient (an increase in R2 from 0.62 to 0.78; a decrease in RMSE from 125.14 Ω-m to 72.46 Ω-m), reliable, and fast-converging (run time decreased by about 25%) solution.
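
    The paper solves its l1-regularized problem with a primal-dual interior-point method; as a simpler stand-in for the same objective, min_x 0.5*||Ax - b||^2 + lam*||x||_1, here is plain iterative soft-thresholding (ISTA) recovering a sparse model from an underdetermined system. Matrix sizes and the regularization weight are arbitrary.

    import numpy as np

    def ista(A, b, lam, n_iter=500):
        """ISTA for min_x 0.5*||Ax - b||^2 + lam*||x||_1."""
        L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the
        x = np.zeros(A.shape[1])               # smooth part's gradient
        for _ in range(n_iter):
            z = x - A.T @ (A @ x - b) / L      # gradient step
            x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # shrink
        return x

    rng = np.random.default_rng(0)
    A = rng.normal(size=(80, 200))             # underdetermined system
    x_true = np.zeros(200)
    x_true[[10, 50, 120]] = [3.0, -2.0, 1.5]   # sparse model (e.g. DCT coeffs)
    b = A @ x_true + rng.normal(0, 0.01, 80)

    x_hat = ista(A, b, lam=0.1)
    print("recovered support:", np.nonzero(np.abs(x_hat) > 0.5)[0])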

  17. Evaluation of multilayer perceptron algorithms for an analysis of network flow data

    NASA Astrophysics Data System (ADS)

    Bieniasz, Jedrzej; Rawski, Mariusz; Skowron, Krzysztof; Trzepiński, Mateusz

    2016-09-01

    The volume of information exchanged through IP networks is larger than ever and still growing. It creates a space for both benign and malicious activities. The latter raises security concerns for network devices, as well as for the network infrastructure and the system as a whole. One of the basic tools to prevent cyber attacks is the Network Intrusion Detection System (NIDS). A NIDS can be realized as a signature-based detector or an anomaly-based one. In the last few years the emphasis has been placed on the latter type, because of the possibility of applying smart and intelligent solutions. An ideal next-generation NIDS should be composed of self-learning algorithms that can react to both known and unknown malicious network activities. In this paper we evaluated a machine learning approach for detection of anomalies in IP network data represented as NetFlow records. We considered the Multilayer Perceptron (MLP) as the classifier and we used two types of learning algorithms - Backpropagation (BP) and Particle Swarm Optimization (PSO). This paper includes a comprehensive survey on determining the optimal MLP learning algorithm for the classification problem in application to network flow data. The performance, training time and convergence of the BP and PSO methods were compared. The results show that the PSO algorithm implemented by the authors outperformed other solutions when classification accuracy is considered. The major disadvantage of PSO is training time, which may be unacceptable for larger data sets or in real network applications. Finally, we compared some key findings with the results from other papers to show that in all cases the results from this study outperformed them.
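
    The backpropagation baseline is straightforward to reproduce with scikit-learn; the PSO-trained variant the authors implemented is not shown. Feature names and all numbers below are invented stand-ins for NetFlow-derived attributes.

    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import MLPClassifier
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(0)
    # Synthetic flow records: [duration, packets, bytes, dst-port entropy].
    benign = rng.normal([5, 50, 4e4, 1.0], [2, 20, 2e4, 0.2], size=(1000, 4))
    attack = rng.normal([1, 300, 1e4, 3.0], [0.5, 80, 5e3, 0.4], size=(1000, 4))
    X = np.vstack([benign, attack])
    y = np.array([0] * 1000 + [1] * 1000)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

    mlp = make_pipeline(
        StandardScaler(),
        MLPClassifier(hidden_layer_sizes=(16,), max_iter=500, random_state=0),
    )
    mlp.fit(X_tr, y_tr)                        # gradient-based BP training
    print("test accuracy:", mlp.score(X_te, y_te))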

  18. Multilayer Statistical Intrusion Detection in Wireless Networks

    NASA Astrophysics Data System (ADS)

    Hamdi, Mohamed; Meddeb-Makhlouf, Amel; Boudriga, Noureddine

    2008-12-01

    The rapid proliferation of mobile applications and services has introduced new vulnerabilities that do not exist in fixed wired networks. Traditional security mechanisms, such as access control and encryption, turn out to be inefficient in modern wireless networks. Given the shortcomings of these protection mechanisms, an important research focus is intrusion detection systems (IDSs). This paper proposes a multilayer statistical intrusion detection framework for wireless networks. The architecture is well suited to wireless networks because the underlying detection models rely on radio parameters and traffic models. Accurate correlation between radio and traffic anomalies makes it possible to enhance the efficiency of the IDS. A radio signal fingerprinting technique based on the maximal overlap discrete wavelet transform (MODWT) is developed. Moreover, a geometric clustering algorithm is presented. Depending on the characteristics of the fingerprinting technique, the clustering algorithm permits control of the false-positive and false-negative rates. Finally, simulation experiments have been carried out to validate the proposed IDS.

  19. Traffic Pattern Detection Using the Hough Transformation for Anomaly Detection to Improve Maritime Domain Awareness

    DTIC Science & Technology

    2013-12-01

    Programming code in the Python language used in AIS data preprocessing is contained in Appendix A. The MATLAB programming code used to apply the Hough... described in Chapter III is applied to archived AIS data in this chapter. The implementation of the method, including programming techniques used, is... is contained in the second. To provide a proof of concept for the algorithm described in Chapter III, the Python programming language was used for
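
    Only fragments of this abstract survive, but the core technique named in the title is standard: rasterize vessel positions and let the Hough transform pick out dominant straight shipping lanes, against which off-lane tracks can be scored as anomalous. A sketch with scikit-image on synthetic positions (lane geometries invented):

    import numpy as np
    from skimage.transform import hough_line, hough_line_peaks

    rng = np.random.default_rng(0)
    grid = np.zeros((200, 200), dtype=bool)    # rasterized AIS positions
    t = rng.uniform(0, 200, 400)
    for r, c in zip(t, 0.5 * t + 30):          # lane 1: r = 2c - 60
        grid[int(r) % 200, int(c) % 200] = True
    for r, c in zip(t, 200 - 0.8 * t):         # lane 2
        grid[int(r) % 200, int(c) % 200] = True
    grid[rng.integers(0, 200, 50), rng.integers(0, 200, 50)] = True  # clutter

    # Each (angle, distance) peak in Hough space is a dominant traffic lane.
    hspace, angles, dists = hough_line(grid)
    _, peak_angles, peak_dists = hough_line_peaks(hspace, angles, dists,
                                                  num_peaks=2)
    for a, d in zip(peak_angles, peak_dists):
        print(f"lane: angle={np.degrees(a):.1f} deg, offset={d:.1f} px")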

  20. Toward the detection of abnormal chest radiographs the way radiologists do it

    NASA Astrophysics Data System (ADS)

    Alzubaidi, Mohammad; Patel, Ameet; Panchanathan, Sethuraman; Black, John A., Jr.

    2011-03-01

    Computer Aided Detection (CADe) and Computer Aided Diagnosis (CADx) are relatively recent areas of research that attempt to employ feature extraction, pattern recognition, and machine learning algorithms to aid radiologists in detecting and diagnosing abnormalities in medical images. However, these computational methods are based on the assumption that there are distinct classes of abnormalities, and that each class has some distinguishing features that set it apart from other classes. In practice, abnormalities in chest radiographs tend to be very heterogeneous. The literature suggests that thoracic (chest) radiologists develop their ability to detect abnormalities by developing a sense of what is normal, so that anything that is abnormal attracts their attention. This paper discusses an approach to CADe that is based on a technique called anomaly detection (which aims to detect outliers in data sets) for the purpose of detecting atypical regions in chest radiographs. However, in order to apply anomaly detection to chest radiographs, it is necessary to develop a basis for extracting features from corresponding anatomical locations in different chest radiographs. This paper proposes a method for doing this, and describes how it can be used to support CADe.

  1. Detection and clustering of features in aerial images by neuron network-based algorithm

    NASA Astrophysics Data System (ADS)

    Vozenilek, Vit

    2015-12-01

    The paper presents an algorithm for the detection and clustering of features in aerial photographs based on artificial neural networks. The presented approach is not focused on the detection of specific topographic features, but on the combination of general feature analysis and its use for clustering and backward projection of clusters to the aerial image. The basis of the algorithm is a calculation of the total error of the network and a change of the network weights to minimize the error. A classic bipolar sigmoid was used for the activation function of the neurons and the basic method of backpropagation was used for learning. To verify that a set of features is able to represent the image content from the user's perspective, a web application was built (ASP.NET on the Microsoft .NET platform). The main achievements include the finding that man-made objects in aerial images can be successfully identified by detection of shapes and anomalies. It was also found that an appropriate combination of comprehensive features that describe the colors and selected shapes of individual areas can be useful for image analysis.

  2. Anomaly-specified virtual dimensionality

    NASA Astrophysics Data System (ADS)

    Chen, Shih-Yu; Paylor, Drew; Chang, Chein-I.

    2013-09-01

    Virtual dimensionality (VD) has received considerable interest, where VD is used to estimate the number of spectrally distinct signatures, denoted by p. Unfortunately, no specific definition is provided by VD for what a spectrally distinct signature is. As a result, various types of spectrally distinct signatures determine different values of VD. There is no one-size-fits-all value for VD. To address this issue, this paper presents a new concept, referred to as anomaly-specified VD (AS-VD), which determines the number of anomalies of interest present in the data. Specifically, two types of anomaly detection algorithms are of particular interest: the sample covariance matrix K-based anomaly detector developed by Reed and Yu, referred to as K-RXD, and the sample correlation matrix R-based RXD, referred to as R-RXD. Since K-RXD is determined only by second-order statistics, whereas R-RXD is specified by statistics of the first two orders (including the sample mean as the first-order statistics), the values determined by K-RXD and R-RXD will be different. Experiments are conducted in comparison with widely used eigen-based approaches.
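
    Both detectors named above share one formula, the RX score delta(x) = (x - mu)^T K^{-1} (x - mu); the R-RXD variant skips mean removal and uses the (non-centered) correlation matrix R in place of K. A NumPy sketch on synthetic pixels:

    import numpy as np

    def rx_scores(pixels, use_correlation=False):
        """Global RX anomaly scores. use_correlation=False -> K-RXD
        (covariance, mean removed); True -> R-RXD (second-moment matrix,
        mean kept), mirroring the distinction drawn in the record."""
        if use_correlation:
            centered = pixels
            second_moment = pixels.T @ pixels / len(pixels)      # R
        else:
            centered = pixels - pixels.mean(axis=0)
            second_moment = np.cov(pixels, rowvar=False)         # K
        inv = np.linalg.inv(second_moment)
        return np.einsum("ij,jk,ik->i", centered, inv, centered)

    rng = np.random.default_rng(0)
    background = rng.multivariate_normal(np.full(10, 0.3),
                                         0.01 * np.eye(10), size=5000)
    cube = np.vstack([background, np.full((1, 10), 0.9)])  # one odd pixel

    scores = rx_scores(cube)                   # K-RXD scores
    print("most anomalous pixel index:", scores.argmax())  # -> 5000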

  3. Spectral methods to detect surface mines

    NASA Astrophysics Data System (ADS)

    Winter, Edwin M.; Schatten Silvious, Miranda

    2008-04-01

    Over the past five years, advances have been made in the spectral detection of surface mines under minefield detection programs at the U. S. Army RDECOM CERDEC Night Vision and Electronic Sensors Directorate (NVESD). The problem of detecting surface land mines ranges from the relatively simple, the detection of large anti-vehicle mines on bare soil, to the very difficult, the detection of anti-personnel mines in thick vegetation. While spatial and spectral approaches can be applied to the detection of surface mines, spatial-only detection requires many pixels-on-target such that the mine is actually imaged and shape-based features can be exploited. This method is unreliable in vegetated areas because only part of the mine may be exposed, while spectral detection is possible without the mine being resolved. At NVESD, hyperspectral and multi-spectral sensors throughout the reflection and thermal spectral regimes have been applied to the mine detection problem. Data has been collected on mines in forest and desert regions and algorithms have been developed both to detect the mines as anomalies and to detect the mines based on their spectral signature. In addition to the detection of individual mines, algorithms have been developed to exploit the similarities of mines in a minefield to improve their detection probability. In this paper, the types of spectral data collected over the past five years will be summarized along with the advances in algorithm development.

  4. nu-Anomica: A Fast Support Vector Based Novelty Detection Technique

    NASA Technical Reports Server (NTRS)

    Das, Santanu; Bhaduri, Kanishka; Oza, Nikunj C.; Srivastava, Ashok N.

    2009-01-01

    In this paper we propose nu-Anomica, a novel anomaly detection technique that can be trained on huge data sets with much reduced running time compared to the benchmark one-class Support Vector Machines algorithm. In nu-Anomica, the idea is to train the machine such that it can provide a close approximation to the exact decision plane using fewer training points and without losing much of the generalization performance of the classical approach. We have tested the proposed algorithm on a variety of continuous data sets under different conditions. We show that under all test conditions the developed procedure closely preserves the accuracy of standard one-class Support Vector Machines while reducing both the training time and the test time by a factor of 5-20.

  5. Finding Cardinality Heavy-Hitters in Massive Traffic Data and Its Application to Anomaly Detection

    NASA Astrophysics Data System (ADS)

    Ishibashi, Keisuke; Mori, Tatsuya; Kawahara, Ryoichi; Hirokawa, Yutaka; Kobayashi, Atsushi; Yamamoto, Kimihiro; Sakamoto, Hitoaki; Asano, Shoichiro

    We propose an algorithm for finding heavy hitters in terms of cardinality (the number of distinct items in a set) in massive traffic data using a small amount of memory. Examples of such cardinality heavy-hitters are hosts that send large numbers of flows, or hosts that communicate with large numbers of other hosts. Finding these hosts is crucial to the provision of good communication quality because they significantly affect the communications of other hosts via either malicious activities such as worm scans, spam distribution, or botnet control or normal activities such as being a member of a flash crowd or performing peer-to-peer (P2P) communication. To precisely determine the cardinality of a host we need tables of previously seen items for each host (e.g., flow tables for every host), and this may be infeasible for a high-speed environment with a massive amount of traffic. In this paper, we use a cardinality estimation algorithm that does not require these tables but needs only a small amount of information called the cardinality summary. This is made possible by relaxing the goal from exact counting to estimation of cardinality. In addition, we propose an algorithm that does not need to maintain the cardinality summary for each host, but only for partitioned addresses of a host. As a result, the required number of tables can be significantly decreased. We evaluated our algorithm using actual backbone traffic data to find the heavy-hitters in the number of flows and estimate the number of these flows. We found that while the accuracy degraded when estimating for hosts with few flows, the algorithm could accurately find the top-100 hosts in terms of the number of flows using limited memory. In addition, we found that the number of tables required to achieve a pre-defined accuracy increased logarithmically with respect to the total number of hosts, which indicates that our method is applicable to large traffic data with a very large number of hosts. We also introduce an application of our algorithm to anomaly detection. With actual traffic data, our method could successfully detect a sudden network scan.
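
    The paper's own sketch structure is not reproduced here, but the core trick -- replacing per-host tables of seen items with a tiny probabilistic "cardinality summary" -- can be illustrated with a classic Flajolet-Martin estimator: track only the maximum number of trailing zero bits among hashed items and invert that into a distinct count.

    import hashlib

    def trailing_zeros(v):
        return (v & -v).bit_length() - 1 if v else 32

    def fm_estimate(items, n_sketches=64):
        """Flajolet-Martin cardinality estimate: the summary is just one
        small integer per hash function, however many items are seen."""
        maxima = [0] * n_sketches
        for item in items:
            for i in range(n_sketches):
                h = hashlib.blake2b(f"{i}:{item}".encode(), digest_size=4)
                v = int.from_bytes(h.digest(), "big")
                maxima[i] = max(maxima[i], trailing_zeros(v))
        mean_r = sum(maxima) / n_sketches      # average the exponents,
        return 2 ** mean_r / 0.77351           # then correct the known bias

    flows = [f"10.0.0.1->198.51.100.{i % 250}:{i}" for i in range(5000)]
    print("distinct flows ~", int(fm_estimate(flows)))   # close to 5000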

  6. RIDES: Robust Intrusion Detection System for IP-Based Ubiquitous Sensor Networks

    PubMed Central

    Amin, Syed Obaid; Siddiqui, Muhammad Shoaib; Hong, Choong Seon; Lee, Sungwon

    2009-01-01

    The IP-based Ubiquitous Sensor Network (IP-USN) is an effort to build the “Internet of things”. By utilizing IP for low power networks, we can benefit from existing well established tools and technologies of IP networks. Along with many other unresolved issues, securing IP-USN is of great concern for researchers so that future market satisfaction and demands can be met. Without proper security measures, both reactive and proactive, it is hard to envisage an IP-USN realm. In this paper we present a design of an IDS (Intrusion Detection System) called RIDES (Robust Intrusion DEtection System) for IP-USN. RIDES is a hybrid intrusion detection system, which incorporates both Signature and Anomaly based intrusion detection components. For signature based intrusion detection this paper only discusses the implementation of distributed pattern matching algorithm with the help of signature-code, a dynamically created attack-signature identifier. Other aspects, such as creation of rules are not discussed. On the other hand, for anomaly based detection we propose a scoring classifier based on the SPC (Statistical Process Control) technique called CUSUM charts. We also investigate the settings and their effects on the performance of related parameters for both of the components. PMID:22412321

  7. RIDES: Robust Intrusion Detection System for IP-Based Ubiquitous Sensor Networks.

    PubMed

    Amin, Syed Obaid; Siddiqui, Muhammad Shoaib; Hong, Choong Seon; Lee, Sungwon

    2009-01-01

    The IP-based Ubiquitous Sensor Network (IP-USN) is an effort to build the "Internet of things". By utilizing IP for low power networks, we can benefit from existing well established tools and technologies of IP networks. Along with many other unresolved issues, securing IP-USN is of great concern for researchers so that future market satisfaction and demands can be met. Without proper security measures, both reactive and proactive, it is hard to envisage an IP-USN realm. In this paper we present a design of an IDS (Intrusion Detection System) called RIDES (Robust Intrusion DEtection System) for IP-USN. RIDES is a hybrid intrusion detection system, which incorporates both Signature and Anomaly based intrusion detection components. For signature based intrusion detection this paper only discusses the implementation of distributed pattern matching algorithm with the help of signature-code, a dynamically created attack-signature identifier. Other aspects, such as creation of rules are not discussed. On the other hand, for anomaly based detection we propose a scoring classifier based on the SPC (Statistical Process Control) technique called CUSUM charts. We also investigate the settings and their effects on the performance of related parameters for both of the components.
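
    The anomaly-based component scores traffic with CUSUM charts; a one-sided CUSUM is a few lines, shown below on a synthetic rate series with a small sustained shift (baseline mean, allowance k, and threshold h are all invented).

    import numpy as np

    def cusum_alarms(samples, mu0, k=0.5, h=8.0):
        """One-sided CUSUM: accumulate evidence that the mean exceeds mu0
        by more than the allowance k; alarm when the statistic passes h."""
        s, alarms = 0.0, []
        for t, x in enumerate(samples):
            s = max(0.0, s + (x - mu0) - k)
            if s > h:
                alarms.append(t)
                s = 0.0                        # reset after signalling
        return alarms

    rng = np.random.default_rng(0)
    baseline = rng.normal(10.0, 1.0, 300)      # normal traffic rate
    attack = rng.normal(11.5, 1.0, 100)        # sustained upward shift
    alarms = cusum_alarms(np.concatenate([baseline, attack]), mu0=10.0)
    print("alarms at samples:", alarms)        # expected shortly after t=300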

  8. Passive (Micro-) Seismic Event Detection by Identifying Embedded "Event" Anomalies Within Statistically Describable Background Noise

    NASA Astrophysics Data System (ADS)

    Baziw, Erick; Verbeek, Gerald

    2012-12-01

    Among engineers there is considerable interest in the real-time identification of "events" within time series data with a low signal to noise ratio. This is especially true for acoustic emission analysis, which is utilized to assess the integrity and safety of many structures and is also applied in the field of passive seismic monitoring (PSM). Here an array of seismic receivers are used to acquire acoustic signals to monitor locations where seismic activity is expected: underground excavations, deep open pits and quarries, reservoirs into which fluids are injected or from which fluids are produced, permeable subsurface formations, or sites of large underground explosions. The most important element of PSM is event detection: the monitoring of seismic acoustic emissions is a continuous, real-time process which typically runs 24 h a day, 7 days a week, and therefore a PSM system with poor event detection can easily acquire terabytes of useless data as it does not identify crucial acoustic events. This paper outlines a new algorithm developed for this application, the so-called SEED™ (Signal Enhancement and Event Detection) algorithm. The SEED™ algorithm uses real-time Bayesian recursive estimation digital filtering techniques for PSM signal enhancement and event detection.

  9. Searching for Faint Companions to Nearby Stars with the Hubble Space Telescope

    NASA Technical Reports Server (NTRS)

    Schroeder, Daniel J.; Golimowski, David A.

    1996-01-01

    A search for faint companions (FCs) to selected stars within 5 pc of the Sun using the Hubble Space Telescope's Planetary Camera (PC) has been initiated. To assess the PC's ability to detect FCs, we have constructed both model and laboratory-simulated images and compared them with actual PC images. We find that the PC's point-spread function (PSF) is 3-4 times brighter over the angular range 2-5 arcsec than the PSF expected for a perfect optical system. Azimuthal variations of the PC's PSF are 10-20 times larger than expected for a perfect PSF. These variations suggest that light is scattered nonuniformly from the surface of the detector. Because the anomalies in the PC's PSF cannot be precisely simulated, subtracting a reference PSF from the PC image is problematic. We have developed a computer algorithm that identifies local brightness anomalies within the PSF as potential FCs. We find that this search algorithm will successfully locate FCs anywhere within the circumstellar field provided that the average pixel signal from the FC is at least 10 sigma above the local background. This detection limit suggests that a comprehensive search for extrasolar Jovian planets with the PC is impractical. However, the PC is useful for detecting other types of substellar objects. With a stellar signal of 10(exp 9) e(-), for example, we may detect brown dwarfs as faint as M(sub I) = 16.7 separated by 1 arcsec from alpha Cen A.
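
    The stated criterion, an average pixel signal at least 10 sigma above the local background, maps onto a simple robust sliding-window test. A hedged sketch, not the authors' algorithm: the window size, the median/MAD background estimate and the function name are all assumptions made for illustration.

    ```python
    import numpy as np
    from scipy.ndimage import median_filter

    def find_candidates(image, win=15, nsigma=10.0):
        """Flag pixels whose signal exceeds the local background by nsigma.

        The local background is a running median; the local scatter is a
        scaled median absolute deviation, which stays robust even when a
        candidate companion sits inside the window.
        """
        background = median_filter(image, size=win)
        mad = median_filter(np.abs(image - background), size=win)
        sigma = 1.4826 * mad + 1e-12   # MAD -> sigma under Gaussian noise
        return (image - background) > nsigma * sigma
    ```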

  10. Comparative performance between compressed and uncompressed airborne imagery

    NASA Astrophysics Data System (ADS)

    Phan, Chung; Rupp, Ronald; Agarwal, Sanjeev; Trang, Anh; Nair, Sumesh

    2008-04-01

    The US Army's RDECOM CERDEC Night Vision and Electronic Sensors Directorate (NVESD), Countermine Division is evaluating the compressibility of airborne multi-spectral imagery for mine and minefield detection applications. Of particular interest is assessing the highest image-data compression rate that can be afforded without loss of image quality for war fighters in the loop and without degrading the performance of near-real-time mine detection algorithms. The JPEG-2000 compression standard is used to perform data compression. Both lossless and lossy compressions are considered. A multi-spectral anomaly detector such as RX (Reed & Xiaoli), which is widely used as a core baseline algorithm in airborne mine and minefield detection on different mine types, minefields, and terrains to identify potential individual targets, is used to compare mine detection performance. This paper presents the compression scheme and compares detection performance between compressed and uncompressed imagery for various levels of compression. The compression efficiency is evaluated, and its dependence upon different backgrounds and other factors is documented and presented using multi-spectral data.
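
    The RX (Reed-Xiaoli) detector mentioned here has a compact standard form: the Mahalanobis distance of each pixel spectrum from the background statistics. A minimal global-RX sketch (the operational version is likely windowed/local, and the regularization term here is an added assumption):

    ```python
    import numpy as np

    def rx_detector(cube):
        """Global RX anomaly detector for a (rows, cols, bands) cube.

        Returns the squared Mahalanobis distance of every pixel spectrum
        from the scene-wide mean; large values indicate anomalies.
        """
        h, w, b = cube.shape
        X = cube.reshape(-1, b).astype(float)
        mu = X.mean(axis=0)
        cov = np.cov(X, rowvar=False) + 1e-6 * np.eye(b)  # regularize
        Xc = X - mu
        d2 = np.einsum('ij,jk,ik->i', Xc, np.linalg.inv(cov), Xc)
        return d2.reshape(h, w)
    ```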

  11. Data based abnormality detection

    NASA Astrophysics Data System (ADS)

    Purwar, Yashasvi

    Data-based abnormality detection is a growing research field focused on extracting information from feature-rich data. Such methods are considered non-intrusive and non-destructive in nature, which gives them a clear advantage over conventional methods. In this study, we explore different streams of data-based anomaly detection. We propose extensions and revisions to an existing valve-stiction detection algorithm, supported by an industrial case study. We also explore the area of image analysis and propose a complete solution for malaria diagnosis. The proposed method is tested on images provided by the pathology laboratory at Alberta Health Service. We also address the robustness and practicality of the proposed solution.

  12. The LSST Data Mining Research Agenda

    NASA Astrophysics Data System (ADS)

    Borne, K.; Becla, J.; Davidson, I.; Szalay, A.; Tyson, J. A.

    2008-12-01

    We describe features of the LSST science database that are amenable to scientific data mining, object classification, outlier identification, anomaly detection, image quality assurance, and survey science validation. The data mining research agenda includes: scalability (at petabyte scales) of existing machine learning and data mining algorithms; development of grid-enabled parallel data mining algorithms; design of a robust system for brokering classifications from the LSST event pipeline (which may produce 10,000 or more event alerts per night); multi-resolution methods for exploration of petascale databases; indexing of multi-attribute, multi-dimensional astronomical databases (beyond spatial indexing) for rapid querying of petabyte databases; and more.

  13. Domain Anomaly Detection in Machine Perception: A System Architecture and Taxonomy.

    PubMed

    Kittler, Josef; Christmas, William; de Campos, Teófilo; Windridge, David; Yan, Fei; Illingworth, John; Osman, Magda

    2014-05-01

    We address the problem of anomaly detection in machine perception. The concept of domain anomaly is introduced as distinct from the conventional notion of anomaly used in the literature. We propose a unified framework for anomaly detection which exposes the multifaceted nature of anomalies and suggest effective mechanisms for identifying and distinguishing each facet as instruments for domain anomaly detection. The framework draws on the Bayesian probabilistic reasoning apparatus which clearly defines concepts such as outlier, noise, distribution drift, novelty detection (object, object primitive), rare events, and unexpected events. Based on these concepts we provide a taxonomy of domain anomaly events. One of the mechanisms helping to pinpoint the nature of an anomaly is based on detecting incongruence between contextual and noncontextual sensor(y) data interpretation. The proposed methodology has wide applicability. It underpins in a unified way the anomaly detection applications found in the literature. To illustrate some of its distinguishing features, here the domain anomaly detection methodology is applied to the problem of anomaly detection for a video annotation system.

  14. A Comparison of Hybrid Approaches for Turbofan Engine Gas Path Fault Diagnosis

    NASA Astrophysics Data System (ADS)

    Lu, Feng; Wang, Yafan; Huang, Jinquan; Wang, Qihang

    2016-09-01

    A hybrid diagnostic method utilizing the Extended Kalman Filter (EKF) and an Adaptive Genetic Algorithm (AGA) is presented for performance degradation estimation and sensor anomaly detection in turbofan engines. The EKF is used to estimate engine component performance degradation for gas path fault diagnosis. The AGA is introduced into the integrated architecture and applied for sensor bias detection. The contribution of this work is the comparison of Kalman Filter (KF)-AGA algorithms and Neural Network (NN)-AGA algorithms within a unified framework for gas path fault diagnosis. The NN needs to be trained off-line with a large amount of prior fault-mode data, and when a new fault mode occurs, the estimation accuracy of the NN evidently decreases. The application of the Linearized Kalman Filter (LKF) and the EKF is not restricted in such cases. The crossover factor and the mutation factor are adapted to the fitness function at each generation in the AGA, and it consumes less time to search for the optimal sensor bias value compared to a standard Genetic Algorithm (GA). We conclude that the hybrid EKF-AGA algorithm is the best choice among the algorithms discussed for gas path fault diagnosis of turbofan engines.

  15. Feature Selection and Classifier Development for Radio Frequency Device Identification

    DTIC Science & Technology

    2015-12-01

    adds important background knowledge for this research. ... Four leading RF-based device identification methods have been proposed: Radio ... appropriate level of dimensionality. Both qualitative and quantitative DRA dimensionality assessment methods are possible. Prior RF-DNA DRA research, e.g. ... Employing experimental designs to find optimal algorithm settings has been seen in hyperspectral anomaly detection research, cf. [513-520], but not ...

  16. Adaptive Gaussian mixture models for pre-screening in GPR data

    NASA Astrophysics Data System (ADS)

    Torrione, Peter; Morton, Kenneth, Jr.; Besaw, Lance E.

    2011-06-01

    Due to the large amount of data generated by vehicle-mounted ground penetrating radar (GPR) antenna arrays, advanced feature extraction and classification can only be performed on a small subset of the data during real-time operation. As a result, most GPR-based landmine detection systems implement "pre-screening" algorithms to process all of the data generated by the antenna array and identify locations with anomalous signatures for more advanced processing. These pre-screening algorithms must be computationally efficient and obtain a high probability of detection, but they can permit a false alarm rate higher than the total system requirement. Many approaches to pre-screening have previously been proposed, including linear prediction coefficients, the LMS algorithm, and CFAR-based approaches. Similar pre-screening techniques have also been developed in the field of video processing to identify anomalous behavior or anomalous objects. One such algorithm, an online k-means approximation to an adaptive Gaussian mixture model (GMM), is particularly well suited to pre-screening in GPR data due to its computational efficiency, its non-linear nature, and the relevance of the logic underlying the algorithm to GPR processing. In this work we explore the application of this adaptive GMM-based anomaly detection approach from the video processing literature to pre-screening in GPR data. Results with the ARA Nemesis landmine detection system demonstrate significant pre-screening performance improvements compared to alternative approaches, and indicate that the proposed algorithm is a complementary technique to existing methods.
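
    The online k-means approximation to an adaptive GMM follows the familiar pattern from video background modeling: a sample either matches an existing component, which is nudged toward it, or replaces the weakest component, and samples not explained by a well-supported component are flagged. A one-dimensional sketch under those assumptions (all constants illustrative, not taken from the paper):

    ```python
    import numpy as np

    class OnlineGMM1D:
        """Online k-means style approximation to an adaptive Gaussian mixture."""

        def __init__(self, k=3, alpha=0.05, match_sigmas=2.5):
            self.w = np.full(k, 1.0 / k)     # component weights
            self.mu = np.linspace(-1.0, 1.0, k)
            self.var = np.ones(k)
            self.alpha = alpha               # learning rate
            self.match_sigmas = match_sigmas

        def update(self, x):
            """Adapt the model to sample x; return True if x is anomalous."""
            d = np.abs(x - self.mu) / np.sqrt(self.var)
            j = int(np.argmin(d))
            matched = d[j] < self.match_sigmas
            self.w *= 1.0 - self.alpha
            if matched:                      # nudge the matched component
                self.w[j] += self.alpha
                self.mu[j] += self.alpha * (x - self.mu[j])
                self.var[j] += self.alpha * ((x - self.mu[j]) ** 2 - self.var[j])
            else:                            # replace the weakest component
                j = int(np.argmin(self.w))
                self.mu[j], self.var[j], self.w[j] = x, 1.0, self.alpha
            self.w /= self.w.sum()
            return (not matched) or self.w[j] < 0.1
    ```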

  17. Disk Crack Detection for Seeded Fault Engine Test

    NASA Technical Reports Server (NTRS)

    Luo, Huageng; Rodriguez, Hector; Hallman, Darren; Corbly, Dennis; Lewicki, David G. (Technical Monitor)

    2004-01-01

    Work was performed to develop and demonstrate vibration diagnostic techniques for the on-line detection of engine rotor disk cracks and other anomalies through a real engine test. An existing single-degree-of-freedom, non-resonance-based vibration algorithm was extended to a multi-degree-of-freedom model. In addition, a resonance-based algorithm was proposed for the case of one or more resonances. The algorithms were integrated into a diagnostic system using state-of-the-art commercial analysis equipment. The system required only signals from non-rotating sensors, such as accelerometers and proximity probes, plus the rotor shaft 1/rev signal to conduct the health monitoring. Before the engine test, the integrated system was tested in the laboratory using a small rotor with controlled mass unbalances. The laboratory tests verified the system integration and both the non-resonance- and resonance-based algorithm implementations. In the engine test, the system concluded that after two weeks of cycling, the seeded fan disk flaw had not propagated to a large enough size to be detected by changes in the synchronous vibration. The unbalance induced by mass shifting during start-up and coast-down was still the dominant response in the synchronous vibration.

  18. Quantum adiabatic machine learning

    NASA Astrophysics Data System (ADS)

    Pudenz, Kristen L.; Lidar, Daniel A.

    2013-05-01

    We develop an approach to machine learning and anomaly detection via quantum adiabatic evolution. This approach consists of two quantum phases, with some amount of classical preprocessing to set up the quantum problems. In the training phase we identify an optimal set of weak classifiers, to form a single strong classifier. In the testing phase we adiabatically evolve one or more strong classifiers on a superposition of inputs in order to find certain anomalous elements in the classification space. Both the training and testing phases are executed via quantum adiabatic evolution. All quantum processing is strictly limited to two-qubit interactions so as to ensure physical feasibility. We apply and illustrate this approach in detail to the problem of software verification and validation, with a specific example of the learning phase applied to a problem of interest in flight control systems. Beyond this example, the algorithm can be used to attack a broad class of anomaly detection problems.

  19. Processing Ocean Images to Detect Large Drift Nets

    NASA Technical Reports Server (NTRS)

    Veenstra, Tim

    2009-01-01

    A computer program processes the digitized outputs of a set of downward-looking video cameras aboard an aircraft flying over the ocean. The purpose of this software is to facilitate the detection of large drift nets that have been lost, abandoned, or jettisoned. The development of this software and of the associated imaging hardware is part of a larger effort to develop means of detecting and removing large drift nets before they cause further environmental damage to the ocean and to the shores on which they sometimes impinge. The software is capable of near-real-time processing of as many as three video feeds at a rate of 30 frames per second. After a user sets the parameters of an adjustable algorithm, the software analyzes each video stream, detects any anomaly, issues a command to point a high-resolution camera toward the location of the anomaly, and, once the camera has been so aimed, issues a command to trigger the camera shutter. The resulting high-resolution image is digitized, and the resulting data are automatically uploaded to the operator's computer for analysis.

  20. Satellite Remote Sensing Tools at the Alaska Volcano Observatory

    NASA Astrophysics Data System (ADS)

    Dehn, J.; Dean, K.; Webley, P.; Bailey, J.; Valcic, L.

    2008-12-01

    Volcanoes rarely conform to schedules or convenience. This is even more the case for remote volcanoes that still have an impact on local infrastructure and air traffic. With well over 100 eruptions in the North Pacific over 20 years, the Alaska Volcano Observatory has developed a series of web-based tools to rapidly assess satellite imagery of volcanic eruptions from virtually anywhere. These range from automated alarm systems that detect thermal anomalies and ash plumes at volcanoes, to efficient image processing that can be done at a moment's notice from any computer linked to the internet. The thermal anomaly detection algorithm looks for warm pixels several standard deviations above the background, as well as pixels that show stronger mid-infrared (3-5 micron) signals relative to the available thermal channels (10-12 microns). The ash algorithm primarily uses the brightness temperature difference of two thermal bands, but also considers cloud shape and performs noise elimination. The automated algorithms are far from perfect, with 60-70% success rates, but improve with each eruption. All of the data are available to the community online in a variety of forms which provide rudimentary processing. The website, avo-animate.images.alaska.edu, is designed for use by AVO's partners and "customers" to provide quick synoptic views of volcanic activity. These tools have also been essential in AVO's efforts in recent years and provide a model for rapid response to eruptions at distant volcanoes anywhere in the world.
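
    The two thermal tests described (warm pixels several standard deviations above the scene background, plus an elevated mid-IR minus thermal-IR brightness-temperature difference) reduce to a few array operations; a sketch with invented threshold values, since the abstract does not give AVO's actual settings:

    ```python
    import numpy as np

    def thermal_alarm(bt_mir, bt_tir, nsigma=3.0, dt_thresh=10.0):
        """Flag candidate hot-spot pixels in co-registered brightness-
        temperature images (bt_mir: 3-5 um band, bt_tir: 10-12 um band)."""
        mu, sigma = np.nanmean(bt_mir), np.nanstd(bt_mir)
        warm = bt_mir > mu + nsigma * sigma         # hotter than background
        mir_excess = (bt_mir - bt_tir) > dt_thresh  # strong MIR vs TIR signal
        return warm & mir_excess
    ```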

  1. Inter-satellite links for satellite autonomous integrity monitoring

    NASA Astrophysics Data System (ADS)

    Rodríguez-Pérez, Irma; García-Serrano, Cristina; Catalán Catalán, Carlos; García, Alvaro Mozo; Tavella, Patrizia; Galleani, Lorenzo; Amarillo, Francisco

    2011-01-01

    A new integrity monitoring mechanism to be implemented on board a GNSS, taking advantage of inter-satellite links, is introduced. It is based on accurate range and Doppler measurements affected neither by atmospheric delays nor by local ground degradation (multipath and interference). By a linear combination of the inter-satellite link observables, appropriate observables for both satellite orbit and clock monitoring are obtained, and with the proposed algorithms it is possible to reduce the time-to-alarm and the probability of undetected satellite anomalies. Several test cases have been run to assess the performance of the new orbit and clock monitoring algorithms against a complete scenario (satellite-to-satellite and satellite-to-ground links) and a satellite-only scenario. The results of this experimentation campaign demonstrate that the orbit monitoring algorithm is able to detect orbital feared events while the position error at the worst user location is still under acceptable limits. For instance, an unplanned manoeuvre in the along-track direction is detected (with a probability of false alarm equal to 5 × 10^-9) when the position error at the worst user location is 18 cm. The experimentation also reveals that the clock monitoring algorithm is able to detect phase jumps, frequency jumps and instability degradation in the clocks, but the latency of detection as well as the detection performance strongly depend on the noise added by the clock measurement system.

  2. Algorithm-based arterial blood sampling recognition increasing safety in point-of-care diagnostics.

    PubMed

    Peter, Jörg; Klingert, Wilfried; Klingert, Kathrin; Thiel, Karolin; Wulff, Daniel; Königsrainer, Alfred; Rosenstiel, Wolfgang; Schenk, Martin

    2017-08-04

    The aim was to detect blood withdrawal in patients with arterial blood pressure monitoring, in order to increase patient safety and provide better sample dating. Blood pressure information obtained from a patient monitor was fed as a real-time data stream to an experimental medical framework. This framework was connected to an analytical application which observes changes in systolic, diastolic and mean pressure to determine anomalies in the continuous data stream. Detection was based on an increased mean blood pressure caused by the closing of the withdrawal three-way tap and an absence of systolic and diastolic measurements during this manipulation. For evaluation of the proposed algorithm, measured data from animal studies in healthy pigs were used. Using this novel approach for processing real-time measurement data from arterial pressure monitoring, the exact time of blood withdrawal could be successfully detected both retrospectively and in real time. The algorithm was able to detect 422 of 434 (97%) blood withdrawals for blood gas analysis in the retrospective analysis of 7 study trials. Additionally, 64 sampling events for other procedures, such as laboratory and activated clotting time analyses, were detected. The proposed algorithm achieved a sensitivity of 0.97, a precision of 0.96 and an F1 score of 0.97. Arterial blood pressure monitoring data can thus be used to accurately identify individual blood samplings in order to reduce sample mix-ups and thereby increase patient safety.
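
    The detection rule is stated explicitly: mean pressure rises when the three-way tap closes while the systolic/diastolic excursions vanish. A hedged sketch of that rule over windowed pressure samples (a NumPy array input is assumed; the window length and both thresholds are invented, not the study's values):

    ```python
    import numpy as np

    def withdrawal_events(pressure, fs=100, win_s=5,
                          pulse_thresh=5.0, rise_thresh=10.0):
        """Scan an arterial-pressure trace for withdrawal signatures.

        A window is flagged when the pulse amplitude (systolic-diastolic
        spread) collapses while the mean pressure rises relative to the
        preceding window; returns event times in seconds.
        """
        win = int(fs * win_s)
        events, prev_mean = [], None
        for start in range(0, len(pressure) - win, win):
            seg = pressure[start:start + win]
            spread = seg.max() - seg.min()   # proxy for pulse amplitude
            mean = seg.mean()
            if (prev_mean is not None and spread < pulse_thresh
                    and mean - prev_mean > rise_thresh):
                events.append(start / fs)
            prev_mean = mean
        return events
    ```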

  3. Round Robin evaluation of soil moisture retrieval models for the MetOp-A ASCAT Instrument

    NASA Astrophysics Data System (ADS)

    Gruber, Alexander; Paloscia, Simonetta; Santi, Emanuele; Notarnicola, Claudia; Pasolli, Luca; Smolander, Tuomo; Pulliainen, Jouni; Mittelbach, Heidi; Dorigo, Wouter; Wagner, Wolfgang

    2014-05-01

    Global soil moisture observations are crucial to understand hydrologic processes, earth-atmosphere interactions and climate variability. ESA's Climate Change Initiative (CCI) project aims to create a globally consistent long-term soil moisture data set based on the merging of the best available active and passive satellite-based microwave sensors and retrieval algorithms. Within the CCI, a Round Robin evaluation of existing retrieval algorithms for both active and passive instruments was carried out. In this study we present the comparison of five different retrieval algorithms, covering three different modelling principles, applied to active MetOp-A ASCAT L1 backscatter data. These models include statistical models (Bayesian Regression and Support Vector Regression, provided by the Institute for Applied Remote Sensing, Eurac Research, Italy, and an Artificial Neural Network, provided by the Institute of Applied Physics, CNR-IFAC, Italy), a semi-empirical model (provided by the Finnish Meteorological Institute), and a change detection model (provided by the Vienna University of Technology). The algorithms were applied to L1 backscatter data within the period 2007-2011, resampled to a 12.5 km grid. The evaluation was performed over 75 globally distributed, quality-controlled in situ stations drawn from the International Soil Moisture Network (ISMN), using surface soil moisture data from the Global Land Data Assimilation System (GLDAS) Noah land surface model as a second independent reference. The temporal correlation between the data sets was analyzed and the random errors of the different algorithms were estimated using the triple collocation method. Absolute soil moisture values as well as soil moisture anomalies were considered, including both long-term anomalies from the mean seasonal cycle and short-term anomalies from a five-week moving-average window. Results show very high agreement between all five algorithms for most stations. A slight vegetation dependency of the errors and a spatial decorrelation of the performance patterns of the different algorithms were found. We conclude that future research should focus on understanding, combining and exploiting the advantages of all available modelling approaches rather than trying to optimize one approach to fit every possible condition.
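
    Triple collocation, used here for the random-error estimates, has a standard closed form: with three collocated series whose errors are mutually independent, each error variance follows from the pairwise covariances. A sketch of that textbook form (not the authors' implementation):

    ```python
    import numpy as np

    def triple_collocation(x, y, z):
        """Classical triple collocation error variances.

        x, y, z: collocated soil-moisture series with independent errors.
        Returns the error variance attributed to each series.
        """
        c = np.cov(np.vstack([x, y, z]))
        ex2 = c[0, 0] - c[0, 1] * c[0, 2] / c[1, 2]
        ey2 = c[1, 1] - c[0, 1] * c[1, 2] / c[0, 2]
        ez2 = c[2, 2] - c[0, 2] * c[1, 2] / c[0, 1]
        return ex2, ey2, ez2
    ```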

  4. An Algorithm of Association Rule Mining for Microbial Energy Prospection

    PubMed Central

    Shaheen, Muhammad; Shahbaz, Muhammad

    2017-01-01

    The presence of hydrocarbons beneath the earth's surface produces microbiological anomalies in soils and sediments. The detection of such microbial populations involves purely biochemical processes which are specialized, expensive and time consuming. This paper proposes a new algorithm for context-based association rule mining on non-spatial data. The algorithm is a modified form of an already developed algorithm that was for spatial databases only. The algorithm is applied to mine context-based association rules from a microbial database in order to extract interesting and useful associations of microbial attributes with the existence of hydrocarbon reserves. The surface and soil manifestations caused by the presence of hydrocarbon-oxidizing microbes are selected from the existing literature and stored in a shared database. The algorithm is applied to this database to generate direct and indirect associations among the stored microbial indicators. These associations are then correlated with the probability of hydrocarbon existence. The numerical evaluation shows better accuracy for non-spatial data as compared to conventional algorithms at generating reliable and robust rules. PMID:28393846

  5. Aircraft Engine On-Line Diagnostics Through Dual-Channel Sensor Measurements: Development of an Enhanced System

    NASA Technical Reports Server (NTRS)

    Kobayashi, Takahisa; Simon, Donald L.

    2008-01-01

    In this paper, an enhanced on-line diagnostic system which utilizes dual-channel sensor measurements is developed for the aircraft engine application. The enhanced system is composed of a nonlinear on-board engine model (NOBEM), the hybrid Kalman filter (HKF) algorithm, and fault detection and isolation (FDI) logic. The NOBEM provides the analytical third channel against which the dual-channel measurements are compared. The NOBEM is further utilized as part of the HKF algorithm which estimates measured engine parameters. Engine parameters obtained from the dual-channel measurements, the NOBEM, and the HKF are compared against each other. When the discrepancy among the signals exceeds a tolerance level, the FDI logic determines the cause of discrepancy. Through this approach, the enhanced system achieves the following objectives: 1) anomaly detection, 2) component fault detection, and 3) sensor fault detection and isolation. The performance of the enhanced system is evaluated in a simulation environment using faults in sensors and components, and it is compared to an existing baseline system.
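
    At its core the FDI logic is a three-way consistency vote among channel A, channel B and the analytical third channel supplied by the NOBEM. The toy sketch below shows only that voting idea; the tolerance handling, naming and return labels are illustrative assumptions, not the paper's logic tables.

    ```python
    def isolate_fault(ch_a, ch_b, model_est, tol):
        """Vote among dual-channel measurements and a model estimate.

        Returns which source disagrees with the other two, or 'none'
        when all three agree within tolerance.
        """
        ab = abs(ch_a - ch_b) <= tol
        am = abs(ch_a - model_est) <= tol
        bm = abs(ch_b - model_est) <= tol
        if ab and am and bm:
            return 'none'
        if bm and not ab and not am:
            return 'channel A sensor fault'   # A is the odd one out
        if am and not ab and not bm:
            return 'channel B sensor fault'
        if ab and not am and not bm:
            return 'component anomaly'        # sensors agree, model differs
        return 'undetermined'
    ```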

  6. Approach to explosive hazard detection using sensor fusion and multiple kernel learning with downward-looking GPR and EMI sensor data

    NASA Astrophysics Data System (ADS)

    Pinar, Anthony; Masarik, Matthew; Havens, Timothy C.; Burns, Joseph; Thelen, Brian; Becker, John

    2015-05-01

    This paper explores the effectiveness of an anomaly detection algorithm for downward-looking ground penetrating radar (GPR) and electromagnetic inductance (EMI) data. Threat detection with GPR is challenged by high responses to non-target/clutter objects, leading to a large number of false alarms (FAs), and since the responses of target and clutter signatures are so similar, classifier design is not trivial. We suggest a method based on a Run Packing (RP) algorithm to fuse GPR and EMI data into a composite confidence map to improve detection as measured by the area-under-ROC (NAUC) metric. We examine the value of a multiple kernel learning (MKL) support vector machine (SVM) classifier using image features such as histogram of oriented gradients (HOG), local binary patterns (LBP), and local statistics. Experimental results on government furnished data show that use of our proposed fusion and classification methods improves the NAUC when compared with the results from individual sensors and a single kernel SVM classifier.

  7. Modeling inter-signal arrival times for accurate detection of CAN bus signal injection attacks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Moore, Michael Roy; Bridges, Robert A; Combs, Frank L

    Modern vehicles rely on hundreds of on-board electronic control units (ECUs) communicating over in-vehicle networks. As external interfaces to the car control networks (such as the on-board diagnostic (OBD) port, auxiliary media ports, etc.) become common, and with vehicle-to-vehicle / vehicle-to-infrastructure technology in the near future, the attack surface for vehicles grows, exposing control networks to potentially life-critical attacks. This paper addresses the need for securing the CAN bus by detecting anomalous traffic patterns via unusual refresh rates of certain commands. While previous works have identified signal frequency as an important feature for CAN bus intrusion detection, this paper provides the first such algorithm with experiments on five attack scenarios. Our data-driven anomaly detection algorithm requires only five seconds of training time (on normal data) and achieves true positive / false discovery rates of 0.9998/0.00298, respectively (micro-averaged across the five experimental tests).
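
    The feature named in the abstract is the per-ID refresh rate; below is a minimal sketch of an inter-arrival-time model trained on a short window of normal traffic. The five-second training window matches the abstract, but the Gaussian model, the 4-sigma test and all names are assumptions, not the paper's algorithm.

    ```python
    import numpy as np
    from collections import defaultdict

    def train_timing_model(messages):
        """messages: iterable of (timestamp, can_id) from normal traffic.
        Returns per-ID mean and std of inter-arrival times."""
        last, gaps = {}, defaultdict(list)
        for t, cid in messages:
            if cid in last:
                gaps[cid].append(t - last[cid])
            last[cid] = t
        return {cid: (np.mean(g), np.std(g) + 1e-9)
                for cid, g in gaps.items() if len(g) >= 10}

    def is_anomalous(model, cid, gap, nsigma=4.0):
        """Flag a message whose inter-arrival gap deviates from its ID's norm."""
        if cid not in model:
            return True                  # an unseen ID is itself suspicious
        mu, sd = model[cid]
        return abs(gap - mu) > nsigma * sd
    ```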

  8. Compressive Hyperspectral Imaging and Anomaly Detection

    DTIC Science & Technology

    2010-02-01

    ...were obtained from a simple algorithm, namely, the atoms in the trained image were very similar to the simple-cell receptive fields in early vision ... Field, "Emergence of simple-cell receptive field properties by learning a sparse code for natural images," Nature 381(6583), pp. 607-609, 1996.

  9. Stochastic Control of Multi-Scale Networks: Modeling, Analysis and Algorithms

    DTIC Science & Technology

    2014-10-20

    Swapna, B. T.; Eryilmaz, Atilla; Shroff, Ness B., "Throughput-Delay Analysis of Random Linear Network Coding for Wireless ..." ... Baras, John S.; Zheng, Shanshan, "Sequential Anomaly Detection in Wireless Sensor Networks and Effects of Long-Range Dependent Data," Sequential Analysis (10 2012), doi: 10.1080/07474946.2012.719435.

  10. Summary of Progress on SIG Ft. Ord ESTCP DemVal

    DTIC Science & Technology

    2007-04-01

    We report on progress under an ESTCP demonstration plan dedicated to demonstrating active-learning-based UXO detection on an actual former UXO site (Ft. Ord), using EMI data. In addition to describing the details of the active-learning algorithm, we discuss techniques that were required when ... terms of two dipole-moment magnitudes and two resonant frequencies. Information-theoretic active learning is then conducted on all anomalies to ...

  11. Cellular telephone-based radiation sensor and wide-area detection network

    DOEpatents

    Craig, William W [Pittsburg, CA; Labov, Simon E [Berkeley, CA

    2006-12-12

    A network of radiation detection instruments, each having a small solid-state radiation sensor module integrated into a cellular phone, providing radiation detection data and analysis directly to a user. The sensor module includes a solid-state crystal bonded to an ASIC readout, providing a low-cost, low-power, lightweight, compact instrument to detect and measure radiation energies in the local ambient radiation field. In particular, the photon energy, the time of the event, and the location of the detection instrument at the time of detection are recorded for real-time transmission to a central data collection/analysis system. The collected data from the entire network of radiation detection instruments are combined by intelligent correlation/analysis algorithms which map the background radiation and detect, identify and track radiation anomalies in the region.

  12. Cellular telephone-based radiation detection instrument

    DOEpatents

    Craig, William W [Pittsburg, CA; Labov, Simon E [Berkeley, CA

    2011-06-14

    A network of radiation detection instruments, each having a small solid-state radiation sensor module integrated into a cellular phone, providing radiation detection data and analysis directly to a user. The sensor module includes a solid-state crystal bonded to an ASIC readout, providing a low-cost, low-power, lightweight, compact instrument to detect and measure radiation energies in the local ambient radiation field. In particular, the photon energy, the time of the event, and the location of the detection instrument at the time of detection are recorded for real-time transmission to a central data collection/analysis system. The collected data from the entire network of radiation detection instruments are combined by intelligent correlation/analysis algorithms which map the background radiation and detect, identify and track radiation anomalies in the region.

  13. Cellular telephone-based wide-area radiation detection network

    DOEpatents

    Craig, William W [Pittsburg, CA; Labov, Simon E [Berkeley, CA

    2009-06-09

    A network of radiation detection instruments, each having a small solid-state radiation sensor module integrated into a cellular phone, providing radiation detection data and analysis directly to a user. The sensor module includes a solid-state crystal bonded to an ASIC readout, providing a low-cost, low-power, lightweight, compact instrument to detect and measure radiation energies in the local ambient radiation field. In particular, the photon energy, the time of the event, and the location of the detection instrument at the time of detection are recorded for real-time transmission to a central data collection/analysis system. The collected data from the entire network of radiation detection instruments are combined by intelligent correlation/analysis algorithms which map the background radiation and detect, identify and track radiation anomalies in the region.

  14. A real negative selection algorithm with evolutionary preference for anomaly detection

    NASA Astrophysics Data System (ADS)

    Yang, Tao; Chen, Wen; Li, Tao

    2017-04-01

    Traditional real negative selection algorithms (RNSAs) adopt the estimated coverage (c0) as the algorithm's termination threshold and generate detectors randomly. With increasing dimensionality, the data samples may reside in a low-dimensional subspace, so that the traditional detectors cannot effectively distinguish these samples. Furthermore, in a high-dimensional feature space, c0 cannot exactly reflect the detector set's coverage of the nonself space, and this can lead the algorithm to terminate unexpectedly when the number of detectors is insufficient. These shortcomings make traditional RNSAs perform poorly in high-dimensional feature spaces. Based upon the "evolutionary preference" theory in immunology, this paper presents a real negative selection algorithm with evolutionary preference (RNSAP). RNSAP utilizes the "unknown nonself space", the "low-dimensional target subspace" and "known nonself features" as the evolutionary preference to guide the generation of detectors, thus ensuring that the detectors cover the nonself space more effectively. Moreover, RNSAP uses redundancy to replace c0 as the termination threshold; in this way RNSAP can generate an adequate set of detectors at a proper convergence rate. Theoretical analysis and experimental results demonstrate that, compared to the classical RNSA (V-detector), RNSAP achieves a higher detection rate with fewer detectors and less computing cost.
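
    For orientation, the classical real-valued negative selection baseline that RNSAP improves on (the V-detector family) generates random candidates and keeps those falling outside the self region, with each detector's radius set by its distance to the nearest self sample. The sketch below shows that baseline only; RNSAP's evolutionary preferences and redundancy criterion are not reproduced here.

    ```python
    import numpy as np

    def generate_detectors(self_samples, self_radius=0.05,
                           n_detectors=200, rng=None):
        """Random detector generation censored against self samples.

        Each detector is a point in [0, 1]^d; candidates inside the self
        region are discarded, the rest keep a radius reaching up to it.
        """
        rng = rng or np.random.default_rng()
        d = self_samples.shape[1]
        detectors = []
        while len(detectors) < n_detectors:
            c = rng.random(d)
            dist = np.min(np.linalg.norm(self_samples - c, axis=1))
            if dist > self_radius:       # candidate lies in nonself space
                detectors.append((c, dist - self_radius))
        return detectors

    def is_nonself(x, detectors):
        """x is flagged anomalous if any detector covers it."""
        return any(np.linalg.norm(x - c) < r for c, r in detectors)
    ```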

  15. Artificial immune system algorithm in VLSI circuit configuration

    NASA Astrophysics Data System (ADS)

    Mansor, Mohd. Asyraf; Sathasivam, Saratha; Kasihmuddin, Mohd Shareduwan Mohd

    2017-08-01

    In artificial intelligence, the artificial immune system is a robust bio-inspired heuristic method, extensively used for solving constraint optimization problems, anomaly detection, and pattern recognition. This paper discusses the implementation and performance of an artificial immune system (AIS) algorithm integrated with Hopfield neural networks for VLSI circuit configuration based on 3-Satisfiability problems. Specifically, we emphasize the clonal selection technique in our binary artificial immune system algorithm. We restrict our logic construction to 3-Satisfiability (3-SAT) clauses in order to fit the transistor configuration in the VLSI circuit. The core impetus of this research is to find an ideal hybrid model to assist in VLSI circuit configuration. In this paper, we compare the artificial immune system algorithm (HNN-3SATAIS) with a brute-force algorithm incorporated with a Hopfield neural network (HNN-3SATBF). Microsoft Visual C++ 2013 was used as the platform for training, simulating and validating the performance of the proposed network. The results show that HNN-3SATAIS outperformed HNN-3SATBF in terms of circuit accuracy and CPU time. Thus, HNN-3SATAIS can be used to detect early errors in VLSI circuit design.

  16. Analysis Spectrum of ECG Signal and QRS Detection during Running on Treadmill

    NASA Astrophysics Data System (ADS)

    Agung Suhendra, M.; Ilham R., M.; Simbolon, Artha I.; Faizal A., M.; Munandar, A.

    2018-03-01

    The heart is a vital organ that drives circulation and oxygen delivery in our metabolism. Exercise, for example on a treadmill, helps maintain heart health, and electrocardiography (ECG) is used to investigate and diagnose anomalies of the heart. In this paper, we analyze ECG signals recorded while running on a treadmill at various speeds. Two analyses of the ECG signals are performed: QRS detection and power spectral density (PSD) estimation. The PSD results show that subject 3 had the highest values of all subjects, and QRS detection using the Pan-Tompkins algorithm achieved a failed-detection percentage approaching 0% for all subjects.
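
    The Pan-Tompkins pipeline referenced above is standard: band-pass filter, differentiate, square, moving-window integrate, then threshold with a refractory period. A condensed sketch of those stages (the full published algorithm uses adaptive dual thresholds; the fixed threshold here is a simplification):

    ```python
    import numpy as np
    from scipy.signal import butter, filtfilt

    def pan_tompkins_peaks(ecg, fs=250):
        """Condensed Pan-Tompkins front end; returns candidate R-peak indices."""
        # 1) band-pass 5-15 Hz to emphasize the QRS complex
        b, a = butter(2, [5 / (fs / 2), 15 / (fs / 2)], btype='band')
        filt = filtfilt(b, a, ecg)
        # 2) derivative, 3) squaring, 4) ~150 ms moving-window integration
        sq = np.gradient(filt) ** 2
        win = int(0.15 * fs)
        mwi = np.convolve(sq, np.ones(win) / win, mode='same')
        # 5) fixed threshold plus a 200 ms refractory period
        thresh = 0.5 * mwi.max()
        peaks, last = [], -fs
        for i in range(1, len(mwi) - 1):
            if mwi[i] > thresh and mwi[i - 1] <= mwi[i] >= mwi[i + 1]:
                if i - last > 0.2 * fs:
                    peaks.append(i)
                    last = i
        return peaks
    ```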

  17. Detection of anomalies in radio tomography of asteroids: Source count and forward errors

    NASA Astrophysics Data System (ADS)

    Pursiainen, S.; Kaasalainen, M.

    2014-09-01

    The purpose of this study was to advance numerical methods for radio tomography, in which an asteroid's internal electric permittivity distribution is to be recovered from radio frequency data gathered by an orbiter. The focus was on signal generation via multiple sources (transponders), one potential, or even essential, scenario to be implemented in a challenging in situ measurement environment and within tight payload limits. As a novel feature, the effects of forward errors, including noise and a priori uncertainty of the forward (data) simulation, were examined through a combination of the iterative alternating sequential (IAS) inverse algorithm and finite-difference time-domain (FDTD) simulation of time evolution data. Single and multiple source scenarios were compared in two-dimensional localization of permittivity anomalies. Three different anomaly strengths and four levels of total noise were tested. Results suggest, among other things, that multiple sources can be necessary to obtain appropriate results, for example, to distinguish three separate anomalies with permittivity less than or equal to half of the background value, which is relevant to the recovery of internal cavities.

  18. Condition monitoring of 3G cellular networks through competitive neural models.

    PubMed

    Barreto, Guilherme A; Mota, João C M; Souza, Luis G M; Frota, Rewbenio A; Aguayo, Leonardo

    2005-09-01

    We develop an unsupervised approach to condition monitoring of cellular networks using competitive neural algorithms. Training is carried out with state vectors representing the normal functioning of a simulated CDMA2000 network. Once training is completed, global and local normality profiles (NPs) are built from the distribution of quantization errors of the training state vectors and their components, respectively. The global NP is used to evaluate the overall condition of the cellular system. If abnormal behavior is detected, local NPs are used in a component-wise fashion to find abnormal state variables. Anomaly detection tests are performed via percentile-based confidence intervals computed over the global and local NPs. We compared the performance of four competitive algorithms [winner-take-all (WTA), frequency-sensitive competitive learning (FSCL), self-organizing map (SOM), and neural-gas algorithm (NGA)] and the results suggest that the joint use of global and local NPs is more efficient and more robust than current single-threshold methods.
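
    The percentile-based normality profile reduces to a threshold on quantization errors. In the sketch below a plain codebook of prototypes stands in for whichever competitive network (WTA, FSCL, SOM or NGA) produced them; the 99th-percentile bound is an illustrative choice, not the paper's.

    ```python
    import numpy as np

    def build_profile(train_states, codebook, pct=99.0):
        """Global normality profile: a percentile of the quantization
        errors that normal training state vectors incur."""
        err = np.min(np.linalg.norm(train_states[:, None, :] - codebook[None],
                                    axis=2), axis=1)
        return np.percentile(err, pct)

    def is_abnormal(state, codebook, threshold):
        """A state vector is abnormal if its distance to the nearest
        prototype exceeds the bound learned on normal traffic."""
        return np.min(np.linalg.norm(codebook - state, axis=1)) > threshold
    ```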

  19. Using total precipitable water anomaly as a forecast aid for heavy precipitation events

    NASA Astrophysics Data System (ADS)

    VandenBoogart, Lance M.

    Heavy precipitation events are of interest to weather forecasters, local government officials, and the Department of Defense. These events can cause flooding which endangers lives and property. Military concerns include decreased trafficability for military vehicles, which hinders both war- and peace-time missions. Even in data-rich areas such as the United States, it is difficult to determine when and where a heavy precipitation event will occur. The challenges are compounded in data-denied regions. The hypothesis that total precipitable water anomaly (TPWA) will be positive and increasing preceding heavy precipitation events is tested in order to establish an understanding of TPWA evolution. Results are then used to create a precipitation forecast aid. The operational, 16 km-gridded, 6-hourly TPWA product developed at the Cooperative Institute for Research in the Atmosphere (CIRA) compares a blended TPW product with a TPW climatology to give a percent of normal TPWA value. TPWA evolution is examined for 84 heavy precipitation events which occurred between August 2010 and November 2011. An algorithm which uses various TPWA thresholds derived from the 84 events is then developed and tested using dichotomous contingency table verification statistics to determine the extent to which satellite-based TPWA might be used to aid in forecasting precipitation over mesoscale domains. The hypothesis of positive and increasing TPWA preceding heavy precipitation events is supported by the analysis. Event-average TPWA rises for 36 hours and peaks at 154% of normal at the event time. The average precipitation event detected by the forecast algorithm is not of sufficient magnitude to be termed a "heavy" precipitation event; however, the algorithm adds skill to a climatological precipitation forecast. Probability of detection is low and false alarm ratios are large, thus qualifying the algorithm's current use as an aid rather than a deterministic forecast tool. The algorithm's ability to be easily modified and quickly run gives it potential for future use in precipitation forecasting.

  20. CNNEDGEPOT: CNN based edge detection of 2D near surface potential field data

    NASA Astrophysics Data System (ADS)

    Aydogan, D.

    2012-09-01

    All anomalies are important in the interpretation of gravity and magnetic data because they indicate important structural features. One advantage of using gravity or magnetic data to search for contacts is that buried structures whose signs cannot be seen on the surface can be detected. In this paper, a general view of the cellular neural network (CNN) method, a large-scale nonlinear circuit, is presented with a focus on its image processing applications. The proposed CNN model is used consecutively to extract bodies and body edges. The algorithm is a stochastic image processing method based on the close-neighborhood relationships of the cells and on optimization of the A, B and I matrices, known as cloning template operators. Setting up a CNN (a continuous-time cellular neural network (CTCNN) or a discrete-time cellular neural network (DTCNN)) for a particular task requires a proper selection of the cloning templates, which determine the dynamics of the method. The proposed algorithm is used for image enhancement and edge detection. The method is applied to synthetic and field data generated for edge detection of near-surface geological bodies that mask each other at various depths and dimensions. The program, named CNNEDGEPOT, is a set of functions written in MATLAB. The GUI helps the user to easily change all the required CNN model parameters. A visual evaluation of the DTCNN and CTCNN outputs is carried out and the results are compared with each other. These examples demonstrate that the CNN model can be used for visual interpretation of near-surface gravity or magnetic anomaly maps when detecting geological features.

  1. Detecting Visually Observable Disease Symptoms from Faces.

    PubMed

    Wang, Kuan; Luo, Jiebo

    2016-12-01

    Recent years have witnessed an increasing interest in the application of machine learning to clinical informatics and healthcare systems. A significant amount of research has been done on healthcare systems based on supervised learning. In this study, we present a generalized solution to detect visually observable symptoms on faces using semi-supervised anomaly detection combined with machine vision algorithms. We rely on disease-related statistical facts to detect abnormalities and classify them into multiple categories to narrow down the possible medical causes of what is detected. Our method is in contrast with most existing approaches, which are limited by the availability of the labeled training data required for supervised learning, and therefore offers the major advantage of flagging any unusual and visually observable symptoms.

  2. Using Machine Learning for Advanced Anomaly Detection and Classification

    NASA Astrophysics Data System (ADS)

    Lane, B.; Poole, M.; Camp, M.; Murray-Krezan, J.

    2016-09-01

    Machine Learning (ML) techniques have successfully been used in a wide variety of applications to automatically detect, and potentially classify, changes in activity or a series of activities by utilizing large amounts of data, sometimes even seemingly unrelated data. The amount of data being collected, processed, and stored in the Space Situational Awareness (SSA) domain has grown at an exponential rate and is now better suited for ML. This paper describes the development of advanced algorithms to deliver significant improvements in the characterization of deep space objects and in indication and warning (I&W) using a global network of telescopes that collect photometric data on a multitude of space-based objects. The Phase II Air Force Research Laboratory (AFRL) Small Business Innovative Research (SBIR) project Autonomous Characterization Algorithms for Change Detection and Characterization (ACDC), contracted to ExoAnalytic Solutions Inc., provides the ability to detect and identify photometric signature changes due to potential space object changes (e.g. stability, tumble rate, aspect ratio) and to correlate observed changes with potential behavioral changes using a variety of techniques, including supervised learning. Furthermore, these algorithms run in real time on data being collected and processed by the ExoAnalytic Space Operations Center (EspOC), providing timely alerts and warnings while dynamically creating collection requirements for the EspOC for the algorithms that generate higher-fidelity I&W. This paper discusses the recently implemented ACDC algorithms, including the general design approach and results to date. The use of supervised algorithms, such as Support Vector Machines, Neural Networks, and k-Nearest Neighbors, and of unsupervised algorithms, for example k-means, Principal Component Analysis, and Hierarchical Clustering, and the implementations of these algorithms are explored. Results of applying these algorithms to EspOC data, both in an off-line "pattern of life" analysis and on-line in real time (that is, as data is collected), are presented. Finally, future work in applying ML for SSA is discussed.

  3. Conditional anomaly detection methods for patient-management alert systems.

    PubMed Central

    Valko, Michal; Cooper, Gregory; Seybert, Amy; Visweswaran, Shyam; Saul, Melissa; Hauskrecht, Milos

    2010-01-01

    Anomaly detection methods can be very useful in identifying unusual or interesting patterns in data. A recently proposed conditional anomaly detection framework extends anomaly detection to the problem of identifying anomalous patterns on a subset of attributes in the data. The anomaly always depends on (is conditioned on) the value of the remaining attributes. The work presented in this paper focuses on instance-based methods for detecting conditional anomalies. The methods rely on a distance metric to identify the examples in the dataset that are most critical for detecting the anomaly. We investigate various metrics and metric learning methods to optimize the performance of the instance-based anomaly detection methods. We show the benefits of the instance-based methods on two real-world detection problems: detection of unusual admission decisions for patients with community-acquired pneumonia, and detection of unusual orders of an HPF4 test, which is used to confirm heparin-induced thrombocytopenia, a life-threatening condition caused by heparin therapy. PMID:25392850

  4. Effects of sea ice cover on satellite-detected primary production in the Arctic Ocean

    PubMed Central

    Lee, Zhongping; Mitchell, B. Greg; Nevison, Cynthia D.

    2016-01-01

    The influence of decreasing Arctic sea ice on net primary production (NPP) in the Arctic Ocean has been considered in multiple publications but is not well constrained owing to the potentially large errors in satellite algorithms. In particular, the Arctic Ocean is rich in coloured dissolved organic matter (CDOM) that interferes in the detection of chlorophyll a concentration of the standard algorithm, which is the primary input to NPP models. We used the quasi-analytic algorithm (Lee et al. 2002 Appl. Opt. 41, 5755-5772. (doi:10.1364/AO.41.005755)) that separates absorption by phytoplankton from absorption by CDOM and detrital matter. We merged satellite data from multiple satellite sensors and created a 19 year time series (1997-2015) of NPP. During this period, both the estimated annual total and the summer monthly maximum pan-Arctic NPP increased by about 47%. Positive monthly anomalies in NPP are highly correlated with positive anomalies in open water area during the summer months. Following the earlier ice retreat, the start of the high-productivity season has become earlier, e.g. at a mean rate of -3.0 d yr^-1 in the northern Barents Sea, and the length of the high-productivity period has increased from 15 days in 1998 to 62 days in 2015. While in some areas the termination of the productive season has been extended, owing to delayed ice formation, the termination has also become earlier in other areas, likely owing to limited nutrients. PMID:27881759

  5. A rapid local singularity analysis algorithm with applications

    NASA Astrophysics Data System (ADS)

    Chen, Zhijun; Cheng, Qiuming; Agterberg, Frits

    2015-04-01

    The local singularity model developed by Cheng is fast gaining popularity in characterizing mineralization and detecting anomalies in geochemical, geophysical and remote sensing data. However, the conventional algorithm, which involves computing moving-average values at different scales, is time-consuming, especially when analyzing a large dataset. The summed area table (SAT), also called an integral image, is a fast algorithm used within the Viola-Jones object detection framework in computer vision. Historically, the principle of the SAT is well known in the study of multi-dimensional probability distribution functions, namely in computing 2D (or ND) probabilities (the area under the probability distribution) from the respective cumulative distribution functions. In this study we introduce the SAT and its variation, the rotated summed area table, into isotropic, anisotropic and directional local singularity mapping. Once the SAT has been computed, any rectangular sum can be evaluated at any scale or location in constant time: the sum over any rectangular region in the image can be computed using only 4 array accesses, independently of the size of the region, effectively reducing the time complexity from O(n) to O(1). New programs in Python, Julia, MATLAB and C++ were implemented to serve different applications, especially big-data analysis. Several large geochemical and remote sensing datasets were tested. A wide variety of scale changes (linear or logarithmic spacing) for the non-iterative and iterative approaches were adopted to calculate the singularity index values and compare the results. The results indicate that local singularity analysis with the SAT is more robust than, and superior to, the traditional approach in identifying anomalies.
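
    The SAT trick is easy to state concretely: after one cumulative pass over the image, the sum of any rectangle costs four array accesses, so moving averages at every scale come almost for free. A minimal sketch (generic integral-image code, not taken from the authors' programs):

    ```python
    import numpy as np

    def summed_area_table(img):
        """Integral image with zero padding: sat[i, j] = img[:i, :j].sum()."""
        sat = np.zeros((img.shape[0] + 1, img.shape[1] + 1))
        sat[1:, 1:] = img.cumsum(axis=0).cumsum(axis=1)
        return sat

    def window_sum(sat, r0, c0, r1, c1):
        """Sum of img[r0:r1, c0:c1] in O(1) using four array accesses."""
        return sat[r1, c1] - sat[r0, c1] - sat[r1, c0] + sat[r0, c0]

    img = np.arange(16.0).reshape(4, 4)
    sat = summed_area_table(img)
    assert window_sum(sat, 1, 1, 3, 3) == img[1:3, 1:3].sum()
    ```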

  6. Effects of sea ice cover on satellite-detected primary production in the Arctic Ocean.

    PubMed

    Kahru, Mati; Lee, Zhongping; Mitchell, B Greg; Nevison, Cynthia D

    2016-11-01

    The influence of decreasing Arctic sea ice on net primary production (NPP) in the Arctic Ocean has been considered in multiple publications but is not well constrained owing to the potentially large errors in satellite algorithms. In particular, the Arctic Ocean is rich in coloured dissolved organic matter (CDOM) that interferes in the detection of chlorophyll a concentration of the standard algorithm, which is the primary input to NPP models. We used the quasi-analytic algorithm (Lee et al. 2002 Appl. Opt. 41, 5755-5772. (doi:10.1364/AO.41.005755)) that separates absorption by phytoplankton from absorption by CDOM and detrital matter. We merged satellite data from multiple satellite sensors and created a 19 year time series (1997-2015) of NPP. During this period, both the estimated annual total and the summer monthly maximum pan-Arctic NPP increased by about 47%. Positive monthly anomalies in NPP are highly correlated with positive anomalies in open water area during the summer months. Following the earlier ice retreat, the start of the high-productivity season has become earlier, e.g. at a mean rate of -3.0 d yr^-1 in the northern Barents Sea, and the length of the high-productivity period has increased from 15 days in 1998 to 62 days in 2015. While in some areas the termination of the productive season has been extended, owing to delayed ice formation, the termination has also become earlier in other areas, likely owing to limited nutrients. © 2016 The Author(s).

  7. An algorithm for monitoring the traffic on a less-travelled road using multi-modal sensor suite

    NASA Astrophysics Data System (ADS)

    Damarla, Thyagaraju; Chatters, Gary; Liss, Brian; Vu, Hao; Sabatier, James M.

    2014-06-01

    We conducted an experiment to correlate the information gathered by a suite of hard sensors with information on social networks such as Twitter, Facebook, etc. The experiment consisted of monitoring traffic on a well-traveled road and on a road inside a facility. The selected sensor suite mainly consists of sensors that require low power for operation and last a long time. The output of each sensor is analyzed to classify the targets as ground vehicles, humans, or airborne targets. The algorithm is also used to count the number of targets of each type so the sensor can store the information for anomaly detection. In this paper, we describe the classifier algorithms used for acoustic, seismic, and passive infrared (PIR) sensor data.

  8. Hardware-software and algorithmic provision of multipoint systems for long-term monitoring of dynamic processes

    NASA Astrophysics Data System (ADS)

    Yakunin, A. G.; Hussein, H. M.

    2017-08-01

    We present an example of an information-measuring system for climate monitoring and for operational control of the energy resource consumption of a university campus, which has been in operation at the Altai State Technical University since 2009. The advantages of using such systems for studying various physical processes are discussed. General principles for constructing similar systems, and their software, hardware and algorithmic support, are considered. It is shown that their fundamental difference from traditional SCADA systems is the use of databases with a specialized data structure for storing the observation results, together with preprocessing of the input signal for compression. Another difference is the absence of clear criteria for detecting anomalies in the time series of the observed process. Examples of algorithms that solve this problem are given.

  9. Real-time detection and classification of anomalous events in streaming data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ferragut, Erik M.; Goodall, John R.; Iannacone, Michael D.

    2016-04-19

    A system is described for receiving a stream of events and scoring the events based on anomalousness and maliciousness (or other classification). The events can be displayed to a user in user-defined groupings in an animated fashion. The system can include a plurality of anomaly detectors that together implement an algorithm to identify low probability events and detect atypical traffic patterns. The atypical traffic patterns can then be classified as being of interest or not. In one particular example, in a network environment, the classification can be whether the network traffic is malicious or not.

  10. Time Series Discord Detection in Medical Data using a Parallel Relational Database

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Woodbridge, Diane; Rintoul, Mark Daniel; Wilson, Andrew T.

    Recent advances in sensor technology have made continuous real-time health monitoring available in both hospital and non-hospital settings. Since data collected from high-frequency medical sensors is voluminous, storing and processing continuous medical data is an emerging big-data area. Detecting anomalies in real time is especially important for patient emergency detection and prevention. A time series discord is a subsequence that has the maximum distance to the rest of the time series subsequences, meaning that it exhibits abnormal or unusual data trends. In this study, we implemented two versions of time series discord detection algorithms on a high-performance parallel database management system (DBMS) and applied them to 240 Hz waveform data collected from 9,723 patients. The initial brute-force version of the discord detection algorithm takes each possible subsequence and calculates the distance to its nearest non-self match to find the biggest discords in the time series. For the heuristic version of the algorithm, a combination of an array and a trie structure was applied to order the time series data for greater time efficiency. The study results showed efficient data loading, decoding and discord searches over a large amount of data, benefiting from the time series discord detection algorithm and from the architectural characteristics of the parallel DBMS, including data compression, data pipelining, and task scheduling.
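
    The brute-force version described above is quadratic but simple to write down: for each subsequence, find the distance to its nearest non-overlapping match and return the subsequence that maximizes it. A direct sketch (Euclidean distance and exclusion of trivial self-matches assumed, as is conventional for discord discovery):

    ```python
    import numpy as np

    def brute_force_discord(ts, m):
        """Return (index, distance) of the top discord of length m in ts.

        The discord is the subsequence whose nearest non-self match is
        farthest away; overlapping windows are excluded. O(n^2 * m) time.
        """
        n = len(ts) - m + 1
        subs = np.array([ts[i:i + m] for i in range(n)])
        best_idx, best_dist = -1, -np.inf
        for i in range(n):
            d = np.linalg.norm(subs - subs[i], axis=1)
            d[max(0, i - m + 1):i + m] = np.inf   # mask self-matches
            if d.min() > best_dist:
                best_idx, best_dist = i, d.min()
        return best_idx, best_dist
    ```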

  11. Time Series Discord Detection in Medical Data using a Parallel Relational Database [PowerPoint]

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Woodbridge, Diane; Wilson, Andrew T.; Rintoul, Mark Daniel

    Recent advances in sensor technology have made continuous real-time health monitoring available in both hospital and non-hospital settings. Because high-frequency medical sensors generate enormous volumes of data, storing and processing continuous medical data is an emerging big-data area, and detecting anomalies in real time is especially important for detecting and preventing patient emergencies. A time series discord is the subsequence that differs most from all other subsequences of the time series, indicating abnormal or unusual data trends. In this study, we implemented two versions of a time series discord detection algorithm on a high-performance parallel database management system (DBMS) and applied them to 240 Hz waveform data collected from 9,723 patients. The initial brute-force version of the algorithm takes each possible subsequence and computes the distance to its nearest non-self match to find the largest discords in the time series. The heuristic version combines an array with a trie structure to order the time series data and improve time efficiency. The results showed efficient data loading, decoding, and discord searching over a large amount of data, benefiting from the discord detection algorithm and the architectural characteristics of the parallel DBMS, including data compression, data pipelining, and task scheduling.

  12. A study on efficient detection of network-based IP spoofing DDoS and malware-infected Systems.

    PubMed

    Seo, Jung Woo; Lee, Sang Jin

    2016-01-01

    Large-scale network environments require effective detection of and response to DDoS attacks. With the advancement of IT infrastructure such as servers and network equipment, DDoS attack traffic arising from even a few malware-infected systems can cripple an organization's internal network and has become a significant threat. This study calculates the frequency of network-based packet attributes and analyzes anomalies in those attributes in order to detect IP-spoofed DDoS attacks. A method is also proposed for effectively detecting the malware-infected systems that trigger IP-spoofed DDoS attacks on an edge network. Detection accuracy and performance were analyzed by applying the proposed algorithm to real-time traffic collected on a core network, and a prototype was developed to evaluate the algorithm's performance. As a result, DDoS attacks on the internal network were detected in real time, and whether IP addresses were spoofed was confirmed. Detecting malware-infected hosts in real time allowed intrusion responses to be executed before large-scale attack traffic could bring down the internal network.

  13. Systems Modeling to Implement Integrated System Health Management Capability

    NASA Technical Reports Server (NTRS)

    Figueroa, Jorge F.; Walker, Mark; Morris, Jonathan; Smith, Harvey; Schmalzel, John

    2007-01-01

    ISHM capability includes: detection of anomalies, diagnosis of causes of anomalies, prediction of future anomalies, and user interfaces that enable integrated awareness (past, present, and future) by users. This is achieved by focused management of data, information and knowledge (DIaK) that will likely be distributed across networks. Management of DIaK implies storage, sharing (timely availability), maintaining, evolving, and processing. Processing of DIaK encapsulates strategies, methodologies, algorithms, etc. focused on achieving a high ISHM Functional Capability Level (FCL). High FCL means a high degree of success in detecting anomalies, diagnosing causes, predicting future anomalies, and enabling integrated health awareness by the user. A model that enables ISHM capability, and hence DIaK management, is denominated the ISHM Model of the System (IMS). We describe aspects of the IMS that focus on processing of DIaK. Strategies, methodologies, and algorithms require proper context. We describe an approach to define and use contexts, its implementation in an object-oriented software environment (G2), and validation using actual test data from a methane thruster test program at NASA SSC. Context is linked to the existence of relationships among elements of a system. For example, the context for using a leak-detection strategy is to identify closed subsystems (e.g. bounded by closed valves and by tanks) that include pressure sensors, and to check whether the pressure is changing. We call these subsystems Pressurizable Subsystems. If pressure changes are detected, then all members of the closed subsystem become suspects for leakage. In this case, the context is defined by identifying a subsystem that is suitable for applying a strategy. Contexts are defined in many ways. Often, a context is defined by relationships of function (e.g. liquid flow, maintaining pressure, etc.), form (e.g. part of the same component, connected to other components, etc.), or space (e.g. physically close, touching the same common element, etc.). The context might be defined dynamically (if conditions for the context appear and disappear dynamically) or statically. Although this approach is akin to case-based reasoning, we implement it using a software environment that embodies tools to define and manage relationships (of any nature) among objects in a very intuitive manner. Contexts for higher level inferences (that use detected anomalies or events), primarily for diagnosis and prognosis, are related to causal relationships. This is useful for developing root-cause analysis (RCA) trees showing an event linked to its possible causes and effects. The innovation pertaining to RCA trees encompasses the use of previously defined subsystems as well as individual elements in the tree. This approach allows more powerful implementations of RCA capability in object-oriented environments. For example, if a pressurizable subsystem is leaking, its root-cause representation within an RCA tree will show that the cause is that all elements of that subsystem are suspected of leaking. Such a tree would apply to all instances of leak events detected and all elements in all pressurizable subsystems in the system. Example subsystems in our environment for building an IMS include: Pressurizable Subsystem, Fluid-Fill Subsystem, Flow-Thru-Valve Subsystem, and Fluid Supply Subsystem. The software environment for the IMS is designed to potentially allow definition of any relationship suitable for creating a context to achieve ISHM capability.

  14. A study of malware detection on smart mobile devices

    NASA Astrophysics Data System (ADS)

    Yu, Wei; Zhang, Hanlin; Xu, Guobin

    2013-05-01

    The growing use of smart mobile devices for everyday applications has stimulated the spread of mobile malware, especially on popular mobile platforms. As a consequence, malware detection becomes ever more critical to sustaining the mobile market and providing a better user experience. In this paper, we review existing malware and detection schemes. Using real-world malware samples with known signatures, we evaluate four popular commercial anti-virus tools, and our data show that these tools achieve high detection accuracy. To deal with new malware with unknown signatures, we study anomaly-based detection using a decision tree algorithm. We evaluate the effectiveness of our detection scheme using malware and legitimate software samples. Our data show that the decision-tree detection scheme can achieve a detection rate of up to 90% and a false positive rate as low as 10%.
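
    As a rough illustration of such an anomaly-based scheme, the sketch below trains a decision tree on a synthetic feature matrix (hypothetical runtime features such as API-call counts; not the paper's dataset) and reports the detection and false positive rates:

    ```python
    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.tree import DecisionTreeClassifier
    from sklearn.metrics import confusion_matrix

    rng = np.random.default_rng(42)
    X = np.vstack([rng.normal(0.0, 1.0, (500, 10)),    # benign samples
                   rng.normal(1.0, 1.0, (100, 10))])   # malware samples
    y = np.array([0] * 500 + [1] * 100)

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
    clf = DecisionTreeClassifier(max_depth=5, random_state=0).fit(X_tr, y_tr)

    tn, fp, fn, tp = confusion_matrix(y_te, clf.predict(X_te)).ravel()
    print(f"detection rate: {tp / (tp + fn):.2f}, "
          f"false positive rate: {fp / (fp + tn):.2f}")
    ```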

  15. Statistical Techniques For Real-time Anomaly Detection Using Spark Over Multi-source VMware Performance Data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Solaimani, Mohiuddin; Iftekhar, Mohammed; Khan, Latifur

    Anomaly detection refers to the identification of an irregular or unusual pattern that deviates from what is standard, normal, or expected. Such deviant patterns typically correspond to samples of interest and are assigned different labels in different domains, such as outliers, anomalies, exceptions, or malware. Detecting anomalies in fast, voluminous streams of data is a formidable challenge. This paper presents a novel, generic, real-time distributed anomaly detection framework for heterogeneous streaming data in which anomalies appear as a group. We have developed a distributed statistical approach to build a model and later use it to detect anomalies. As a case study, we investigate group anomaly detection for a VMware-based cloud data center, which maintains a large number of virtual machines (VMs). We have built our framework using Apache Spark to obtain higher throughput and lower data processing time on streaming data. We developed a window-based statistical anomaly detection technique to detect anomalies that appear sporadically, and we then relaxed this constraint, with higher accuracy, by implementing a cluster-based technique that detects both sporadic and continuous anomalies. We conclude that our cluster-based technique outperforms the other statistical techniques, with higher accuracy and lower processing time.
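
    A single-machine sketch of the window-based statistical idea (the paper's version is distributed over Spark and handles multi-source data; the threshold rule here, k standard errors of the window mean, is an assumption for illustration):

    ```python
    import numpy as np
    from collections import deque

    class WindowAnomalyDetector:
        """Flag a window whose mean drifts more than k standard errors
        from a model fitted on nominal training data."""
        def __init__(self, train, window=50, k=3.0):
            self.mu, self.sigma = np.mean(train), np.std(train)
            self.window, self.k = window, k
            self.buf = deque(maxlen=window)

        def update(self, x):
            self.buf.append(x)
            if len(self.buf) < self.window:
                return False
            stderr = self.sigma / np.sqrt(self.window)
            return abs(np.mean(self.buf) - self.mu) > self.k * stderr

    rng = np.random.default_rng(0)
    det = WindowAnomalyDetector(rng.normal(0, 1, 10_000))
    stream = np.r_[rng.normal(0, 1, 500), rng.normal(1, 1, 100)]  # drift at t=500
    print([i for i, x in enumerate(stream) if det.update(x)][:3])
    ```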

  16. Developing an Automated Machine Learning Marine Oil Spill Detection System with Synthetic Aperture Radar

    NASA Astrophysics Data System (ADS)

    Pinales, J. C.; Graber, H. C.; Hargrove, J. T.; Caruso, M. J.

    2016-02-01

    Previous studies have demonstrated the ability to detect and classify marine hydrocarbon films with spaceborne synthetic aperture radar (SAR) imagery. The dampening effect of hydrocarbon discharges on small surface capillary-gravity waves renders the ocean surface "radar dark" compared with standard wind-roughened ocean surfaces. Given the scope and impact of events like the Deepwater Horizon oil spill, the need for improved, automated, and expedient monitoring of hydrocarbon-related marine anomalies has become a pressing and complex issue for governments and the extraction industry. The research presented here describes the development, training, and utilization of an algorithm that detects marine oil spills in an automated, semi-supervised manner, utilizing X-, C-, or L-band SAR data as the primary input. Ancillary datasets include related radar-borne variables (incidence angle, etc.), environmental data (wind speed, etc.), and textural descriptors. Shapefiles produced by an experienced human analyst served as targets (validation) during the training portion of the investigation. Training and testing datasets were chosen to develop and assess algorithm effectiveness as well as optimal conditions for oil detection in SAR data. The algorithm detects oil spills by following a three-step methodology: object detection, feature extraction, and classification. Previous oil spill detection and classification methodologies such as machine learning algorithms, artificial neural networks (ANN), and multivariate classification methods like partial least squares-discriminant analysis (PLS-DA) are evaluated and compared. Statistical, transform, and model-based image texture techniques, commonly used for object mapping directly or as inputs for more complex methodologies, are explored to determine optimal textures for an oil spill detection system. The influence of the ancillary variables is explored, with a particular focus on the role of strong vs. weak wind forcing.

  17. Towards Resilient Critical Infrastructures: Application of Type-2 Fuzzy Logic in Embedded Network Security Cyber Sensor

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ondrej Linda; Todd Vollmer; Jim Alves-Foss

    2011-08-01

    Resiliency and cyber security of modern critical infrastructures are becoming increasingly important with the growing number of threats in the cyber environment. This paper proposes an extension to a previously developed fuzzy logic based anomaly detection network security cyber sensor by incorporating Type-2 Fuzzy Logic (T2 FL). In general, fuzzy logic provides a framework for system modeling in linguistic form capable of coping with imprecise and vague meanings of words. T2 FL is an extension of Type-1 FL which has proved successful in modeling and minimizing the effects of various kinds of dynamic uncertainties. In this paper, T2 FL provides a basis for robust anomaly detection and cyber security state awareness. In addition, the proposed algorithm was specifically developed to comply with the constrained computational requirements of low-cost embedded network security cyber sensors. The performance of the system was evaluated on a set of network data recorded from an experimental cyber-security test-bed.

  18. Model parameter estimations from residual gravity anomalies due to simple-shaped sources using Differential Evolution Algorithm

    NASA Astrophysics Data System (ADS)

    Ekinci, Yunus Levent; Balkaya, Çağlayan; Göktürkler, Gökhan; Turan, Seçil

    2016-06-01

    An efficient approach to estimating model parameters from residual gravity data based on differential evolution (DE), a stochastic vector-based metaheuristic algorithm, is presented. We show the applicability and effectiveness of this algorithm on both synthetic and field anomalies. To our knowledge, this is the first attempt at applying DE to parameter estimation from residual gravity anomalies due to isolated causative sources embedded in the subsurface. The model parameters dealt with here are the amplitude coefficient (A), the depth and exact origin of the causative source (zo and xo, respectively), and the shape factors (q and ƞ). The error energy maps generated for some parameter pairs successfully reveal the nature of the parameter estimation problem under consideration. Noise-free and noisy synthetic single gravity anomalies were evaluated with success via DE/best/1/bin, a widely used strategy in DE. Additionally, some complicated gravity anomalies caused by multiple source bodies were considered, and the results demonstrate the efficiency of the algorithm. Using the strategy applied in the synthetic examples, field anomalies observed in various mineral explorations, including a chromite deposit (Camaguey district, Cuba), a manganese deposit (Nagpur, India), and a base metal sulphide deposit (Quebec, Canada), were then considered to estimate the model parameters of the ore bodies. The applications show that the obtained results, such as the depths and shapes of the ore bodies, are quite consistent with those published in the literature. Uncertainty in the solutions obtained by the DE algorithm was also investigated with a Metropolis-Hastings (M-H) sampling algorithm based on simulated annealing without a cooling schedule. Based on the resulting histogram reconstructions of both the synthetic and field data examples, the algorithm provides reliable parameter estimates within the sampling limits of the M-H sampler. Although it is not yet a common inversion technique in geophysics, the DE algorithm deserves more interest for parameter estimation from potential field data, considering its good accuracy, low computational cost (in the present problem), and the fact that a well-constructed initial guess is not required to reach the global minimum.
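
    The same workflow can be reproduced in miniature with SciPy's stock differential evolution optimizer. The sketch below inverts a synthetic residual anomaly using the common simple-body forward model g(x) = A·z0 / ((x − x0)² + z0²)^q (made-up true parameters; the paper's parameterization and strategy tuning are more involved):

    ```python
    import numpy as np
    from scipy.optimize import differential_evolution

    def forward(p, x):
        A, x0, z0, q = p
        return A * z0 / ((x - x0) ** 2 + z0 ** 2) ** q

    x = np.linspace(-100, 100, 201)
    true_p = (1200.0, 5.0, 12.0, 1.5)              # q = 1.5: sphere-like source
    observed = forward(true_p, x) + np.random.default_rng(1).normal(0, 0.05, x.size)

    def misfit(p):                                  # "error energy" to minimize
        return np.mean((observed - forward(p, x)) ** 2)

    bounds = [(100, 5000), (-50, 50), (1, 50), (0.5, 2.5)]  # A, x0, z0, q
    result = differential_evolution(misfit, bounds, strategy="best1bin", seed=0)
    print(result.x)                                 # close to true_p
    ```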

  19. Scalable Parallel Density-based Clustering and Applications

    NASA Astrophysics Data System (ADS)

    Patwary, Mostofa Ali

    2014-04-01

    Recently, density-based clustering algorithms (DBSCAN and OPTICS) have received significant attention from the scientific community due to their unique capability of discovering arbitrarily shaped clusters and eliminating noise. These algorithms have several applications requiring high performance computing, including finding halos and subhalos (clusters) in massive cosmology data in astrophysics, analyzing satellite images, X-ray crystallography, and anomaly detection. However, parallelizing these algorithms is extremely challenging, as they exhibit an inherently sequential data access order and unbalanced workloads, resulting in low parallel efficiency. To break the sequential data access and achieve high parallelism, we develop new parallel algorithms for both DBSCAN and OPTICS, designed using graph algorithmic techniques. For example, our parallel DBSCAN algorithm exploits the similarities between DBSCAN and computing connected components. Using datasets containing up to a billion floating point numbers, we show that our parallel density-based clustering algorithms significantly outperform the existing algorithms, achieving speedups of up to 27.5 on 40 cores on a shared memory architecture and up to 5,765 using 8,192 cores on a distributed memory architecture. In our experiments, we found that while achieving this scalability, our algorithms produce clustering results with quality comparable to the classical algorithms.
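
    For comparison with such parallel implementations, a shared-memory baseline is readily available in scikit-learn, where n_jobs parallelizes only the neighborhood queries (a much weaker form of parallelism than the paper's graph-based algorithms):

    ```python
    import numpy as np
    from sklearn.cluster import DBSCAN
    from sklearn.datasets import make_moons

    X, _ = make_moons(n_samples=5000, noise=0.05, random_state=0)
    db = DBSCAN(eps=0.1, min_samples=10, n_jobs=-1).fit(X)

    n_clusters = len(set(db.labels_)) - (1 if -1 in db.labels_ else 0)
    print(n_clusters, "clusters,", int(np.sum(db.labels_ == -1)), "noise points")
    ```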

  20. FRaC: a feature-modeling approach for semi-supervised and unsupervised anomaly detection.

    PubMed

    Noto, Keith; Brodley, Carla; Slonim, Donna

    2012-01-01

    Anomaly detection involves identifying rare data instances (anomalies) that come from a different class or distribution than the majority (which are simply called "normal" instances). Given a training set of only normal data, the semi-supervised anomaly detection task is to identify anomalies in the future. Good solutions to this task have applications in fraud and intrusion detection. The unsupervised anomaly detection task is different: Given unlabeled, mostly-normal data, identify the anomalies among them. Many real-world machine learning tasks, including many fraud and intrusion detection tasks, are unsupervised because it is impractical (or impossible) to verify all of the training data. We recently presented FRaC, a new approach for semi-supervised anomaly detection. FRaC is based on using normal instances to build an ensemble of feature models, and then identifying instances that disagree with those models as anomalous. In this paper, we investigate the behavior of FRaC experimentally and explain why FRaC is so successful. We also show that FRaC is a superior approach for the unsupervised as well as the semi-supervised anomaly detection task, compared to well-known state-of-the-art anomaly detection methods, LOF and one-class support vector machines, and to an existing feature-modeling approach.
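
    The core FRaC idea, predict each feature from the others and score instances by how much they disagree with those predictions, can be sketched in a few lines. This toy version uses per-feature linear models and a variance-normalized squared error (the published method uses richer model ensembles and an information-theoretic surprisal score):

    ```python
    import numpy as np
    from sklearn.linear_model import LinearRegression

    def frac_scores(train, test):
        """Higher score = stronger disagreement with the feature models."""
        n_features = train.shape[1]
        scores = np.zeros(len(test))
        for j in range(n_features):
            rest = [k for k in range(n_features) if k != j]
            model = LinearRegression().fit(train[:, rest], train[:, j])
            resid_var = np.var(train[:, j] - model.predict(train[:, rest])) + 1e-9
            scores += (test[:, j] - model.predict(test[:, rest])) ** 2 / resid_var
        return scores

    rng = np.random.default_rng(0)
    cov = [[1, .8, 0], [.8, 1, 0], [0, 0, 1]]
    normal = rng.multivariate_normal([0, 0, 0], cov, 500)
    test = np.vstack([normal[:5], [[3.0, -3.0, 0.0]]])  # last row breaks the correlation
    print(frac_scores(normal, test).round(1))           # last score is largest
    ```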

  1. FRaC: a feature-modeling approach for semi-supervised and unsupervised anomaly detection

    PubMed Central

    Brodley, Carla; Slonim, Donna

    2011-01-01

    Anomaly detection involves identifying rare data instances (anomalies) that come from a different class or distribution than the majority (which are simply called “normal” instances). Given a training set of only normal data, the semi-supervised anomaly detection task is to identify anomalies in the future. Good solutions to this task have applications in fraud and intrusion detection. The unsupervised anomaly detection task is different: Given unlabeled, mostly-normal data, identify the anomalies among them. Many real-world machine learning tasks, including many fraud and intrusion detection tasks, are unsupervised because it is impractical (or impossible) to verify all of the training data. We recently presented FRaC, a new approach for semi-supervised anomaly detection. FRaC is based on using normal instances to build an ensemble of feature models, and then identifying instances that disagree with those models as anomalous. In this paper, we investigate the behavior of FRaC experimentally and explain why FRaC is so successful. We also show that FRaC is a superior approach for the unsupervised as well as the semi-supervised anomaly detection task, compared to well-known state-of-the-art anomaly detection methods, LOF and one-class support vector machines, and to an existing feature-modeling approach. PMID:22639542

  2. Graph-based structural change detection for rotating machinery monitoring

    NASA Astrophysics Data System (ADS)

    Lu, Guoliang; Liu, Jie; Yan, Peng

    2018-01-01

    Detection of structural changes is critically important in the operational monitoring of a rotating machine. This paper presents a novel framework for this purpose, in which a graph model is adopted to represent and capture the statistical dynamics of machine operations. We also develop a numerical method for computing temporal anomalies in the constructed graphs. The martingale test is employed for change detection when deciding on possible structural changes, demonstrating excellent performance that outperforms existing methods such as the autoregressive integrated moving average (ARIMA) model. Comprehensive experimental results indicate the good potential of the proposed algorithm in various engineering applications. This work is an extension of a recent result (Lu et al., 2017).

  3. Monitoring System for Storm Readiness and Recovery of Test Facilities: Integrated System Health Management (ISHM) Approach

    NASA Technical Reports Server (NTRS)

    Figueroa, Fernando; Morris, Jon; Turowski, Mark; Franzl, Richard; Walker, Mark; Kapadia, Ravi; Venkatesh, Meera; Schmalzel, John

    2010-01-01

    Severe weather events are likely occurrences on the Mississippi Gulf Coast. It is important to rapidly diagnose and mitigate the effects of storms on Stennis Space Center's rocket engine test complex to avoid delays to critical test article programs, reduce costs, and maintain safety. An Integrated Systems Health Management (ISHM) approach and technologies are employed to integrate environmental (weather) monitoring, structural modeling, and the suite of available facility instrumentation to provide information for readiness before storms, for rapid initial damage assessment to guide mitigation planning, for on-going assurance as repairs are effected, and finally for recertification. The system is denominated the Katrina Storm Monitoring System (KStorMS). ISHM describes a comprehensive set of capabilities that provide insight into the behavior and health of a system. Knowing the status of a system allows decision makers to effectively plan and execute their mission. For example, early insight into component degradation and impending failures provides more time to develop workaround strategies and more effectively plan for maintenance. Failures of system elements generally occur over time. Information extracted from sensor data, combined with system-wide knowledge bases and methods for information extraction and fusion, inference, and decision making, can be used to detect incipient failures. If failures do occur, it is critical to detect and isolate them, and to suggest an appropriate course of action. ISHM enables determining the condition (health) of every element in a complex system-of-systems or SoS (detect anomalies, diagnose causes, predict future anomalies), and providing data, information, and knowledge (DIaK) to control systems for safe and effective operation. ISHM capability is achieved using a wide range of technologies that enable anomaly detection, diagnostics, prognostics, and advice for control: (1) anomaly detection algorithms and strategies, (2) fusion of DIaK for anomaly detection (model-based, numerical, statistical, empirical, expert-based, qualitative, etc.), (3) diagnostic/prognostic strategies and methods, (4) user interfaces, (5) advanced control strategies, (6) integration architectures/frameworks, and (7) embedding of intelligence. Many of these technologies are mature, and they are being used in the KStorMS. The paper describes the design, implementation, and operation of the KStorMS, and discusses further evolution to support other needs such as condition-based maintenance (CBM).

  4. Simulation-Based Evaluation of the Performances of an Algorithm for Detecting Abnormal Disease-Related Features in Cattle Mortality Records.

    PubMed

    Perrin, Jean-Baptiste; Durand, Benoît; Gay, Emilie; Ducrot, Christian; Hendrikx, Pascal; Calavas, Didier; Hénaux, Viviane

    2015-01-01

    We performed a simulation study to evaluate the performance of an anomaly detection algorithm considered in the framework of an automated surveillance system for cattle mortality. The method consists of a combination of temporal regression and spatial cluster detection, which allows identifying, for a given week, clusters of spatial units showing an excess of deaths in comparison with their own historical fluctuations. First, we simulated 1,000 outbreaks of a disease causing extra deaths in the French cattle population (about 200,000 herds and 20 million cattle) according to a model mimicking the spreading patterns of an infectious disease, and injected these disease-related extra deaths into an authentic mortality dataset spanning January 2005 to January 2010. Second, we applied our algorithm to each of the 1,000 semi-synthetic datasets to identify clusters of spatial units showing an excess of deaths relative to their own historical fluctuations. Third, we verified whether the clusters identified by the algorithm contained simulated extra deaths, in order to evaluate the algorithm's ability to identify unusual mortality clusters caused by an outbreak. Among the 1,000 simulations, the median duration of simulated outbreaks was 8 weeks, with a median of 5,627 simulated deaths and 441 infected herds. Within the 12-week trial period, 73% of the simulated outbreaks were detected, with a median timeliness of 1 week and a mean of 1.4 weeks. The proportion of outbreak weeks flagged by an alarm was 61% (i.e. sensitivity), whereas one in three alarms was a true alarm (i.e. positive predictive value). The performance of the detection algorithm was also evaluated for alternative combinations of epidemiologic parameters. The results of our study confirm that, under certain conditions, automated algorithms can help identify abnormal increases in cattle mortality possibly related to unidentified health events.

  5. Simulation-Based Evaluation of the Performances of an Algorithm for Detecting Abnormal Disease-Related Features in Cattle Mortality Records

    PubMed Central

    Perrin, Jean-Baptiste; Durand, Benoît; Gay, Emilie; Ducrot, Christian; Hendrikx, Pascal; Calavas, Didier; Hénaux, Viviane

    2015-01-01

    We performed a simulation study to evaluate the performance of an anomaly detection algorithm considered in the framework of an automated surveillance system for cattle mortality. The method consists of a combination of temporal regression and spatial cluster detection, which allows identifying, for a given week, clusters of spatial units showing an excess of deaths in comparison with their own historical fluctuations. First, we simulated 1,000 outbreaks of a disease causing extra deaths in the French cattle population (about 200,000 herds and 20 million cattle) according to a model mimicking the spreading patterns of an infectious disease, and injected these disease-related extra deaths into an authentic mortality dataset spanning January 2005 to January 2010. Second, we applied our algorithm to each of the 1,000 semi-synthetic datasets to identify clusters of spatial units showing an excess of deaths relative to their own historical fluctuations. Third, we verified whether the clusters identified by the algorithm contained simulated extra deaths, in order to evaluate the algorithm's ability to identify unusual mortality clusters caused by an outbreak. Among the 1,000 simulations, the median duration of simulated outbreaks was 8 weeks, with a median of 5,627 simulated deaths and 441 infected herds. Within the 12-week trial period, 73% of the simulated outbreaks were detected, with a median timeliness of 1 week and a mean of 1.4 weeks. The proportion of outbreak weeks flagged by an alarm was 61% (i.e. sensitivity), whereas one in three alarms was a true alarm (i.e. positive predictive value). The performance of the detection algorithm was also evaluated for alternative combinations of epidemiologic parameters. The results of our study confirm that, under certain conditions, automated algorithms can help identify abnormal increases in cattle mortality possibly related to unidentified health events. PMID:26536596

  6. Real-time implementation of a multispectral mine target detection algorithm

    NASA Astrophysics Data System (ADS)

    Samson, Joseph W.; Witter, Lester J.; Kenton, Arthur C.; Holloway, John H., Jr.

    2003-09-01

    Spatial-spectral anomaly detection (the "RX Algorithm") has been exploited on the USMC's Coastal Battlefield Reconnaissance and Analysis (COBRA) Advanced Technology Demonstration (ATD) and several associated technology base studies, and has been found to be a useful method for the automated detection of surface-emplaced antitank land mines in airborne multispectral imagery. RX is a complex image processing algorithm that involves the direct spatial convolution of a target/background mask template over each multispectral image, coupled with a spatially variant background spectral covariance matrix estimation and inversion. The RX throughput on the ATD was about 38X real time using a single Sun UltraSparc system. An effort to demonstrate RX in real time began in FY01. We now report the development and demonstration of a Field Programmable Gate Array (FPGA) solution that achieves a real-time implementation of the RX algorithm at video rates using COBRA ATD data. The approach uses an Annapolis Microsystems Firebird PMC card containing a Xilinx XCV2000E FPGA with over 2,500,000 logic gates and 18 MBytes of memory. A prototype system was configured using a Tek Microsystems VME board with dual PowerPC G4 processors and two PMC slots. The RX algorithm was translated from its C programming implementation into the VHDL language and synthesized into gates that were loaded into the FPGA. The VHDL/synthesizer approach allows key RX parameters to be changed quickly and a new implementation to be generated automatically. Reprogramming the FPGA is done rapidly and in-circuit. Implementation of the RX algorithm in a single FPGA is a major first step toward achieving real-time land mine detection.
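
    The core RX computation is the Mahalanobis distance of each pixel spectrum from the background mean under the background covariance. A global (non-adaptive) sketch, as opposed to the spatially variant masked version described above:

    ```python
    import numpy as np

    def rx(cube):
        """cube: (rows, cols, bands) image -> (rows, cols) RX anomaly scores."""
        h, w, b = cube.shape
        X = cube.reshape(-1, b).astype(float)
        d = X - X.mean(axis=0)
        cov_inv = np.linalg.inv(np.cov(X, rowvar=False))
        scores = np.einsum("ij,jk,ik->i", d, cov_inv, d)   # squared Mahalanobis
        return scores.reshape(h, w)

    rng = np.random.default_rng(0)
    img = rng.normal(0, 1, (64, 64, 6))
    img[30, 30] += 5.0                                     # implanted anomaly
    print(np.unravel_index(np.argmax(rx(img)), (64, 64)))  # -> (30, 30)
    ```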

  7. Automated detection of records in biological sequence databases that are inconsistent with the literature.

    PubMed

    Bouadjenek, Mohamed Reda; Verspoor, Karin; Zobel, Justin

    2017-07-01

    We investigate and analyse the data quality of nucleotide sequence databases with the objective of automatically detecting data anomalies and suspicious records. Specifically, we demonstrate that the published literature associated with each data record can be used to automatically evaluate its quality, by cross-checking the consistency of the key content of the database record with the referenced publications. Focusing on GenBank, we describe a set of quality indicators based on the relevance paradigm of information retrieval (IR). We then use these quality indicators to train an anomaly detection algorithm to classify records as "confident" or "suspicious". Our experiments on the PubMed Central collection show that assessing the coherence between the literature and database records through our algorithms is an effective mechanism for assisting curators with data cleansing. Although fewer than 0.25% of the records in our data set are known to be faulty, we expect that there are many more in GenBank that have not yet been identified. By automated comparison with the literature, they can be identified with a precision of up to 10% and a recall of up to 30%, while strongly outperforming several baselines. While these results leave substantial room for improvement, they reflect both the very imbalanced nature of the data and the limited explicitly labelled data available. Overall, the obtained results show promise for the development of a new kind of approach to detecting low-quality and suspicious sequence records based on literature analysis and consistency. From a practical point of view, this will greatly help curators identify inconsistent records in large-scale sequence databases by highlighting records that are likely to be inconsistent with the literature. Copyright © 2017 Elsevier Inc. All rights reserved.

  8. Non-supervised method for early forest fire detection and rapid mapping

    NASA Astrophysics Data System (ADS)

    Artés, Tomás; Boca, Roberto; Liberta, Giorgio; San-Miguel, Jesús

    2017-09-01

    Natural hazards are a challenge for society, and the scientific community has substantially increased its efforts in prevention and damage mitigation. The most important elements in minimizing natural hazard damage are monitoring and prevention. This work focuses on forest fires, a phenomenon that depends on small-scale factors, with fire behavior strongly related to local weather. Forecasting forest fire spread is a complex task because of the scale of the phenomenon, input data uncertainty, and time constraints in forest fire monitoring. Forest fire simulators have been improved with calibration techniques that mitigate data uncertainty and take complex factors such as the atmosphere into account. Such techniques dramatically increase the computational cost in a context where the time available to provide a forecast is a hard constraint. Furthermore, early mapping of the fire is crucial for assessing it. In this work, a non-supervised method for early forest fire detection and mapping is proposed. As its main sources, the method uses daily thermal anomalies from MODIS and VIIRS combined with a land cover map to identify and monitor forest fires with very few resources. The method relies on a clustering technique (the DBSCAN algorithm) applied to filtered thermal anomalies to detect forest fires. In addition, a concave hull (alpha shape algorithm) is applied to obtain a rapid, coarse mapping of the fire area. The method therefore points toward a potential use in high-resolution forest fire rapid mapping based on satellite imagery, using the extent of each early fire detection, and shows the way to automatic rapid mapping of fires at high resolution while processing as little data as possible.

  9. Data mining of atmospheric parameters associated with coastal earthquakes

    NASA Astrophysics Data System (ADS)

    Cervone, Guido

    Earthquakes are natural hazards that pose a serious threat to society and the environment. A single earthquake can claim thousands of lives, cause billions of dollars in damage, destroy natural landmarks, and render large territories uninhabitable. Studying earthquakes and the processes that govern their occurrence is of fundamental importance to protecting lives, property, and the environment. Recent studies have shown that anomalous changes in land, ocean, and atmospheric parameters occur prior to earthquakes. This dissertation introduces an innovative methodology, and its implementation, for identifying anomalous changes in atmospheric parameters associated with large coastal earthquakes. Possible geophysical mechanisms are discussed in view of the close interaction between the lithosphere, the hydrosphere, and the atmosphere. The proposed methodology is a multi-strategy data mining approach which combines wavelet transformations, evolutionary algorithms, and statistical analysis of atmospheric data to analyze possible precursory signals. One-dimensional wavelet transformations and statistical tests are employed to identify significant singularities in the data, which may correspond to anomalous peaks due to earthquake preparatory processes. Evolutionary algorithms and other localized search strategies are used to analyze the spatial and temporal continuity of the anomalies detected over a large area (about 2000 km2), to discriminate signals most likely associated with earthquakes from those due to other, mostly atmospheric, phenomena. Only statistically significant singularities occurring within a very short time of each other, and which track a rigorous geometrical path related to the geological properties of the epicentral area, are considered to be associated with a seismic event. A program called CQuake was developed to implement and validate the proposed methodology. CQuake is a fully automated, real-time, semi-operational system developed to study precursory signals associated with earthquakes. CQuake can be used for retrospective analysis of past earthquakes and for detecting early warning information about impending events. Using CQuake, more than 300 earthquakes have been analyzed. For coastal earthquakes with magnitude larger than 5.0, prominent anomalies are found up to two weeks prior to the main event. For earthquakes occurring away from the coast, no strong anomaly is detected. The identified anomalies provide a potentially reliable means to mitigate earthquake risks in the future and can be used to develop a fully operational forecasting system.

  10. 3D non-linear inversion of magnetic anomalies caused by prismatic bodies using differential evolution algorithm

    NASA Astrophysics Data System (ADS)

    Balkaya, Çağlayan; Ekinci, Yunus Levent; Göktürkler, Gökhan; Turan, Seçil

    2017-01-01

    3D non-linear inversion of total field magnetic anomalies caused by vertical-sided prismatic bodies has been achieved by differential evolution (DE), one of the population-based evolutionary algorithms. We demonstrate the efficiency of the algorithm on both synthetic and field magnetic anomalies by estimating horizontal distances from the origin in both north and east directions, depths to the top and bottom of the bodies, inclination and declination angles of the magnetization, and intensity of magnetization of the causative bodies. In the synthetic anomaly case, we considered both noise-free and noisy data sets due to two vertical-sided prismatic bodies in a non-magnetic medium. For the field case, airborne magnetic anomalies originating from intrusive granitoids in the eastern part of the Biga Peninsula (NW Turkey), which is composed of various kinds of sedimentary, metamorphic, and igneous rocks, were inverted and interpreted. Since the granitoids are the outcropping rocks in the field, estimates of the top depths of the two prisms representing the magnetic bodies were excluded during the inversion studies. The estimated bottom depths are in good agreement with those obtained by a different approach based on 3D modelling of pseudogravity anomalies. The accuracy of the parameters estimated in both cases was also investigated via probability density functions. Based on the tests in the present study, it can be concluded that DE is a useful tool for estimating source-body parameters from magnetic anomalies.

  11. Particle Filtering for Model-Based Anomaly Detection in Sensor Networks

    NASA Technical Reports Server (NTRS)

    Solano, Wanda; Banerjee, Bikramjit; Kraemer, Landon

    2012-01-01

    A novel technique has been developed for anomaly detection in rocket engine test stand (RETS) data. The objective was to develop a system that postprocesses a CSV file containing the sensor readings and activities (time series) from a rocket engine test and detects any anomalies that might have occurred during the test. The output consists of the names of the sensors that show anomalous behavior and the start and end time of each anomaly. In order to significantly reduce the involvement of domain experts, several data-driven approaches have been proposed in which models are automatically acquired from the data, thus bypassing the cost and effort of building system models. Many supervised learning methods can efficiently learn operational and fault models, given large amounts of both nominal and fault data. However, for domains such as RETS data, the amount of anomalous data actually available is relatively small, making most supervised learning methods rather ineffective and, in general, of limited success in anomaly detection. The fundamental problem with existing approaches is that they assume the data are i.i.d., i.e., independent and identically distributed, which is violated in typical RETS data. None of these techniques naturally exploits the temporal information inherent in time series data from sensor networks. There are correlations among the sensor readings, not only at the same time, but also across time. However, these approaches have not explicitly identified and exploited such correlations. Given these limitations of model-free methods, there has been renewed interest in model-based methods, specifically graphical methods that explicitly reason temporally. The Gaussian Mixture Model (GMM) in a Linear Dynamic System approach assumes that the multi-dimensional test data are a mixture of multivariate Gaussians, and fits a given number of Gaussian clusters with the help of the well-known Expectation Maximization (EM) algorithm. The parameters thus learned are used for calculating the joint distribution of the observations. However, the GMM assumption is essentially an approximation, which signals the potential viability of non-parametric density estimators. This is the key idea underlying the new approach.
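
    For reference, the GMM baseline the abstract describes can be set up in a few lines: fit a mixture to nominal data with EM, then flag test points whose log-likelihood falls below a threshold (the percentile cut here is an arbitrary illustrative choice):

    ```python
    import numpy as np
    from sklearn.mixture import GaussianMixture

    rng = np.random.default_rng(0)
    nominal = np.vstack([rng.normal(0, 1, (500, 4)),
                         rng.normal(4, 1, (500, 4))])
    gmm = GaussianMixture(n_components=2, random_state=0).fit(nominal)

    threshold = np.percentile(gmm.score_samples(nominal), 1)  # 1st-percentile cut
    test = np.vstack([nominal[:3], [[10, 10, 10, 10]]])
    print(gmm.score_samples(test) < threshold)                # last point flagged
    ```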

  12. A sonographic approach to prenatal classification of congenital spine anomalies

    PubMed Central

    Robertson, Meiri; Sia, Sock Bee

    2015-01-01

    Objective: To develop a classification system for congenital spine anomalies detected by prenatal ultrasound. Methods: Data were collected from fetuses with spine abnormalities diagnosed in our institution over a five-year period between June 2005 and June 2010. The ultrasound images were analysed to determine which features were associated with different congenital spine anomalies. Findings from the prenatal ultrasound images were correlated with other prenatal imaging, post mortem findings, post mortem imaging, neonatal imaging, karyotype, and other genetic workup. Data from published case reports of prenatal diagnosis of rare congenital spine anomalies were analysed to provide a comprehensive work. Results: During the study period, eighteen cases of spine abnormalities were diagnosed in 7819 women. The mean gestational age at diagnosis was 18.8 weeks ± 2.2 SD. While most cases represented open neural tube defects, a spectrum of vertebral abnormalities was diagnosed prenatally, including hemivertebrae, block vertebrae, cleft or butterfly vertebrae, sacral agenesis, and a lipomeningocele. The most sensitive features for diagnosing a spine abnormality were flaring of the vertebral arch ossification centres, abnormal spine curvature, and short spine length. While reported findings at the time of diagnosis were often conservative, retrospective analysis revealed good correlation with radiographic imaging. 3D imaging was found to be a valuable tool in many settings. Conclusions: Analysis of the study findings showed that prenatal ultrasound allows detection of disruption to the normal appearance of the fetal spine. Using the three features of flaring of the vertebral arch ossification centres, abnormal spine curvature, and short spine length, an algorithm was devised to aid those who perform and report prenatal ultrasound in the diagnosis of spine anomalies. PMID:28191204

  13. Hyperspectral processing in graphical processing units

    NASA Astrophysics Data System (ADS)

    Winter, Michael E.; Winter, Edwin M.

    2011-06-01

    With the advent of the commercial 3D video card in the mid 1990s, we have seen an order of magnitude performance increase with each generation of new video cards. While these cards were designed primarily for visualization and video games, it became apparent after a short while that they could be used for scientific purposes. These Graphical Processing Units (GPUs) are rapidly being incorporated into data processing tasks usually reserved for general purpose computers. Many image processing problems have been found to scale well to modern GPU systems. We have implemented four popular hyperspectral processing algorithms (N-FINDR, linear unmixing, Principal Components, and the RX anomaly detection algorithm). These algorithms show an across-the-board speedup of at least a factor of 10, with some special cases showing extreme speedups of a hundred times or more.

  14. Parameter estimation by Differential Search Algorithm from horizontal loop electromagnetic (HLEM) data

    NASA Astrophysics Data System (ADS)

    Alkan, Hilal; Balkaya, Çağlayan

    2018-02-01

    We present an efficient inversion tool for parameter estimation from horizontal loop electromagnetic (HLEM) data using the Differential Search Algorithm (DSA), a recently proposed swarm-intelligence-based metaheuristic. The parameters estimated by the HLEM method, commonly known as Slingram, are the depth, dip, and origin of a thin subsurface conductor causing the anomaly. The applicability of the developed scheme was first tested on two synthetically generated anomalies, with and without noise. Two control parameters affecting the algorithm's convergence characteristics were tuned for these anomalies, which include one and two conductive bodies, respectively. The tuned control parameters yielded better statistical results than the parameter pairs widely used in DSA applications. Two field anomalies measured over a dipping graphitic shale in Northern Australia were then considered, and the algorithm provided depth estimates in good agreement with previous studies and drilling information. Furthermore, the efficiency and reliability of the results were investigated via probability density functions. Considering the results obtained, we conclude that DSA, characterized by a simple algorithmic structure, is an efficient and promising metaheuristic for other relatively low-dimensional geophysical inverse problems. Finally, since the developed scheme is easy to use and flexible, researchers familiar with it can easily modify and extend it for their own optimization problems.

  15. A global algorithm for estimating Absolute Salinity

    NASA Astrophysics Data System (ADS)

    McDougall, T. J.; Jackett, D. R.; Millero, F. J.; Pawlowicz, R.; Barker, P. M.

    2012-12-01

    The International Thermodynamic Equation of Seawater - 2010 has defined the thermodynamic properties of seawater in terms of a new salinity variable, Absolute Salinity, which takes into account the spatial variation of the composition of seawater. Absolute Salinity more accurately reflects the effects of the dissolved material in seawater on the thermodynamic properties (particularly density) than does Practical Salinity. When a seawater sample has standard composition (i.e. the ratios of the constituents of sea salt are the same as those of surface water of the North Atlantic), Practical Salinity can be used to accurately evaluate the thermodynamic properties of seawater. When seawater is not of standard composition, Practical Salinity alone is not sufficient and the Absolute Salinity Anomaly needs to be estimated; this anomaly is as large as 0.025 g kg-1 in the northernmost North Pacific. Here we provide an algorithm for estimating the Absolute Salinity Anomaly at any location (x, y, p) in the world ocean. To develop this algorithm, we used the Absolute Salinity Anomaly found by comparing the density calculated from Practical Salinity with the density measured in the laboratory. These estimates of Absolute Salinity Anomaly, however, are limited by the number of available observations (namely 811). In order to provide a practical method that can be used at any location in the world ocean, we take advantage of approximate relationships between the Absolute Salinity Anomaly and silicate concentrations (which are available globally).
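
    This algorithm is distributed as part of the TEOS-10 Gibbs SeaWater (GSW) toolbox; assuming the Python `gsw` package is installed, the lookup is a one-liner (the sample values below are arbitrary):

    ```python
    import gsw

    # Practical Salinity 35.5 at 2000 dbar, 188E 4N (North Pacific)
    SA = gsw.SA_from_SP(35.5, 2000, 188, 4)
    print(SA)   # Absolute Salinity in g/kg, including the estimated anomaly
    ```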

  16. A model for anomaly classification in intrusion detection systems

    NASA Astrophysics Data System (ADS)

    Ferreira, V. O.; Galhardi, V. V.; Gonçalves, L. B. L.; Silva, R. C.; Cansian, A. M.

    2015-09-01

    Intrusion Detection Systems (IDS) are traditionally divided into two types according to the detection methods they employ, namely (i) misuse detection and (ii) anomaly detection. Anomaly detection has been widely used, and its main advantage is the ability to detect new attacks. However, analyzing the anomalies generated can become expensive, since they often carry no clear information about the malicious events they represent. In this context, this paper presents a model for automated classification of alerts generated by an anomaly-based IDS. The main goal is either to classify detected anomalies into well-defined attack taxonomies or to identify whether an alert is a false positive misclassified by the IDS. Some common attacks on computer networks were considered, and we achieved important results that can equip security analysts with better resources for their analyses.

  17. Robust feature extraction for rapid classification of damage in composites

    NASA Astrophysics Data System (ADS)

    Coelho, Clyde K.; Reynolds, Whitney; Chattopadhyay, Aditi

    2009-03-01

    The ability to detect anomalies in signals from sensors is imperative for structural health monitoring (SHM) applications. Many of the candidate algorithms for these applications either require a lot of training examples or are very computationally inefficient for large sample sizes. The damage detection framework presented in this paper uses a combination of Linear Discriminant Analysis (LDA) along with Support Vector Machines (SVM) to obtain a computationally efficient classification scheme for rapid damage state determination. LDA was used for feature extraction of damage signals from piezoelectric sensors on a composite plate and these features were used to train the SVM algorithm in parts, reducing the computational intensity associated with the quadratic optimization problem that needs to be solved during training. SVM classifiers were organized into a binary tree structure to speed up classification, which also reduces the total training time required. This framework was validated on composite plates that were impacted at various locations. The results show that the algorithm was able to correctly predict the different impact damage cases in composite laminates using less than 21 percent of the total available training data after data reduction.
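
    A compact sketch of the LDA-then-SVM chain on synthetic "damage state" features (hypothetical data; scikit-learn's built-in one-vs-one multiclass SVC stands in for the paper's binary tree of SVM classifiers):

    ```python
    import numpy as np
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.model_selection import cross_val_score
    from sklearn.pipeline import make_pipeline
    from sklearn.svm import SVC

    rng = np.random.default_rng(0)
    # four hypothetical damage classes, 200 sensor-feature vectors each
    X = np.vstack([rng.normal(i, 1.0, (200, 20)) for i in range(4)])
    y = np.repeat(np.arange(4), 200)

    clf = make_pipeline(LinearDiscriminantAnalysis(n_components=3),
                        SVC(kernel="rbf"))
    print(cross_val_score(clf, X, y, cv=5).mean())
    ```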

  18. Knee point search using cascading top-k sorting with minimized time complexity.

    PubMed

    Wang, Zheng; Tseng, Shian-Shyong

    2013-01-01

    Anomaly detection systems and many other applications are frequently confronted with the problem of finding the largest knee point in the sorted curve of a set of unsorted points. This paper proposes an efficient knee point search algorithm with minimized time complexity using cascading top-k sorting when an a priori probability distribution of the knee point is known. First, a top-k sort algorithm is proposed based on a quicksort variation. We divide the knee point search problem into multiple steps, and in each step an optimization problem for the selection number k is solved, where the objective function is defined as the expected time cost. Because the expected time cost of one step depends on that of the subsequent steps, we simplify the optimization problem by minimizing the maximum expected time cost. The posterior probability of the largest knee point's distribution and the other parameters are updated before solving the optimization problem in each step. An example of source detection for DNS DoS flooding attacks is provided to illustrate the application of the proposed algorithm.
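
    The paper's cascading top-k search is specialized to a known prior; as a plain baseline for the same task, the widely used max-distance-to-chord heuristic locates the knee of a sorted score curve directly (a different, simpler technique than the one proposed above):

    ```python
    import numpy as np

    def knee_point(sorted_vals):
        """Index of the point farthest from the chord joining the endpoints
        of the (normalized) sorted curve."""
        y = np.asarray(sorted_vals, dtype=float)
        x = np.linspace(0.0, 1.0, len(y))
        y = (y - y.min()) / (y.max() - y.min() + 1e-12)
        # vertical deviation from the chord y0 + (yL - y0) * x, up to a constant
        return int(np.argmax(np.abs((y[-1] - y[0]) * x - y + y[0])))

    rng = np.random.default_rng(0)
    scores = np.sort(np.r_[rng.exponential(1.0, 990),
                           rng.exponential(10.0, 10) + 5])  # 10 big outliers
    print(knee_point(scores))                               # knee near index 990
    ```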

  19. Detecting Abnormal Machine Characteristics in Cloud Infrastructures

    NASA Technical Reports Server (NTRS)

    Bhaduri, Kanishka; Das, Kamalika; Matthews, Bryan L.

    2011-01-01

    In the cloud computing environment, resources are accessed as services rather than as products. Monitoring this system for performance is crucial because of the typical pay-per-use packages bought by users for their jobs. With the huge number of machines currently in cloud systems, it is often extremely difficult for system administrators to keep track of all machines using distributed monitoring programs such as Ganglia, which lack system health assessment and summarization capabilities. To overcome this problem, we propose a technique for automated anomaly detection using machine performance data in the cloud. Our algorithm is entirely distributed and runs locally on each computing machine in the cloud in order to rank the machines by their anomalous behavior for given jobs. There is no need to centralize any of the performance data for the analysis, and at the end of the analysis our algorithm generates error reports, thereby allowing system administrators to take corrective actions. Experiments performed on real data sets collected for different jobs validate that our algorithm has a low overhead for tracking anomalous machines in a cloud infrastructure.

  20. An immunity-based anomaly detection system with sensor agents.

    PubMed

    Okamoto, Takeshi; Ishida, Yoshiteru

    2009-01-01

    This paper proposes an immunity-based anomaly detection system with sensor agents based on the specificity and diversity of the immune system. Each agent is specialized to react to the behavior of a specific user. Multiple diverse agents decide whether the behavior is normal or abnormal. Conventional systems have used only a single sensor to detect anomalies, whereas the immunity-based system makes use of multiple sensors, which leads to improvements in detection accuracy. In addition, we propose an evaluation framework for the anomaly detection system, which is capable of evaluating the differences in detection accuracy between internal and external anomalies. This paper focuses on anomaly detection in users' command sequences on UNIX-like systems. In experiments, the immunity-based system outperformed some of the best conventional systems.

  1. Benchmarking Diagnostic Algorithms on an Electrical Power System Testbed

    NASA Technical Reports Server (NTRS)

    Kurtoglu, Tolga; Narasimhan, Sriram; Poll, Scott; Garcia, David; Wright, Stephanie

    2009-01-01

    Diagnostic algorithms (DAs) are key to enabling automated health management. These algorithms are designed to detect and isolate anomalies of either a component or the whole system based on observations received from sensors. In recent years a wide range of algorithms, both model-based and data-driven, have been developed to increase autonomy and improve system reliability and affordability. However, the lack of support to perform systematic benchmarking of these algorithms continues to create barriers for effective development and deployment of diagnostic technologies. In this paper, we present our efforts to benchmark a set of DAs on a common platform using a framework that was developed to evaluate and compare various performance metrics for diagnostic technologies. The diagnosed system is an electrical power system, namely the Advanced Diagnostics and Prognostics Testbed (ADAPT) developed and located at the NASA Ames Research Center. The paper presents the fundamentals of the benchmarking framework, the ADAPT system, description of faults and data sets, the metrics used for evaluation, and an in-depth analysis of benchmarking results obtained from testing ten diagnostic algorithms on the ADAPT electrical power system testbed.

  2. A topology visualization early warning distribution algorithm for large-scale network security incidents.

    PubMed

    He, Hui; Fan, Guotao; Ye, Jianwei; Zhang, Weizhe

    2013-01-01

    It is of great significance to research early warning systems for large-scale network security incidents. Such systems can improve a network's emergency response capabilities, alleviate the damage of cyber attacks, and strengthen the system's counterattack ability. A comprehensive early warning system is presented in this paper, which combines active measurement and anomaly detection. The key visualization algorithms and technology of the system are discussed in detail. Plane visualization of the large-scale network system is realized based on a divide-and-conquer approach. First, the topology of the large-scale network is divided into several small-scale networks by the MLkP/CR algorithm. Second, the subgraph plane visualization algorithm is applied to each small-scale network. Finally, the small-scale networks' topologies are combined into a single topology using an automatic distribution algorithm based on force analysis. As the algorithm transforms the large-scale network topology plane visualization problem into a series of small-scale network topology plane visualization and distribution problems, it has higher parallelism and is able to handle the display of ultra-large-scale network topologies.

  3. Euclidean commute time distance embedding and its application to spectral anomaly detection

    NASA Astrophysics Data System (ADS)

    Albano, James A.; Messinger, David W.

    2012-06-01

    Spectral image analysis problems often begin with a preprocessing step that applies a transformation generating an alternative representation of the spectral data. In this paper, a transformation based on a Markov-chain model of a random walk on a graph is introduced. More precisely, we quantify the random walk using a quantity known as the average commute time distance and find a nonlinear transformation that embeds the nodes of a graph in a Euclidean space where the separation between them is equal to the square root of this quantity. This has been referred to as the Commute Time Distance (CTD) transformation, and it has the important characteristic of increasing when the number of paths between two nodes decreases and/or the lengths of those paths increase. Remarkably, a closed-form solution exists for computing the average commute time distance that avoids running an iterative process: it is found by simply performing an eigendecomposition on the graph Laplacian matrix. This paper contains a discussion of the particular graph constructed on the spectral data from which the commute time distance is then calculated, an introduction of some important properties of the graph Laplacian matrix, and a subspace projection that approximately preserves the maximal variance of the square-root commute time distance. Finally, the RX anomaly detection and Topological Anomaly Detection (TAD) algorithms are applied to the CTD subspace, followed by a discussion of their results.
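
    As a rough illustration of the closed-form computation mentioned above, the sketch below embeds graph nodes via an eigendecomposition of the graph Laplacian so that pairwise Euclidean distances equal the square root of the average commute time; the toy adjacency matrix is an assumption.

    ```python
    # Commute Time Distance embedding from the graph Laplacian (toy example).
    import numpy as np

    def ctd_embedding(W):
        """Embed graph nodes so distances are sqrt(average commute time)."""
        d = W.sum(axis=1)
        L = np.diag(d) - W              # graph Laplacian
        vol = d.sum()                   # volume of the graph
        vals, vecs = np.linalg.eigh(L)  # closed form: no iterative walk needed
        # Pseudo-inverse of the spectrum (the zero eigenvalue is dropped).
        inv = np.where(vals > 1e-10, 1.0 / np.maximum(vals, 1e-10), 0.0)
        return np.sqrt(vol) * vecs * np.sqrt(inv)

    W = np.array([[0, 1, 1, 0],
                  [1, 0, 1, 0],
                  [1, 1, 0, 1],
                  [0, 0, 1, 0]], dtype=float)
    X = ctd_embedding(W)
    # Verify against the pseudo-inverse Laplacian directly:
    Lp = np.linalg.pinv(np.diag(W.sum(1)) - W)
    vol = W.sum()
    i, j = 0, 3
    ct = vol * (Lp[i, i] + Lp[j, j] - 2 * Lp[i, j])
    print(np.linalg.norm(X[i] - X[j])**2, ct)  # the two values should match
    ```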

  4. A semi-supervised classification algorithm using the TAD-derived background as training data

    NASA Astrophysics Data System (ADS)

    Fan, Lei; Ambeau, Brittany; Messinger, David W.

    2013-05-01

    In general, spectral image classification algorithms fall into one of two categories: supervised and unsupervised. In unsupervised approaches, the algorithm automatically identifies clusters in the data without a priori information about those clusters (except perhaps the expected number of them). Supervised approaches require an analyst to identify training data to learn the characteristics of the clusters, so that all other pixels can then be classified into one of the pre-defined groups. The classification algorithm presented here is a semi-supervised approach based on the Topological Anomaly Detection (TAD) algorithm. The TAD algorithm defines background components based on a mutual k-nearest-neighbor graph model of the data, along with a spectral connected-components analysis. Here, the largest components produced by TAD are used as regions of interest (ROIs), or training data, for a supervised classification scheme. By combining those ROIs with a Gaussian Maximum Likelihood (GML) or a Minimum Distance to the Mean (MDM) algorithm, we achieve a semi-supervised classification method. We test this classification algorithm against data collected by the HyMAP sensor over the Cooke City, MT, area and the University of Pavia scene.

  5. Using multi-scale entropy and principal component analysis to monitor gears degradation via the motor current signature analysis

    NASA Astrophysics Data System (ADS)

    Aouabdi, Salim; Taibi, Mahmoud; Bouras, Slimane; Boutasseta, Nadir

    2017-06-01

    This paper describes an approach for identifying localized gear tooth defects, such as pitting, using phase currents measured from an induction machine driving the gearbox. A new anomaly detection tool is based on the multi-scale entropy (MSE) algorithm SampEn, which allows correlations in signals to be identified over multiple time scales. Motor current signature analysis (MCSA) is used in conjunction with principal component analysis (PCA), and observed values are compared with those predicted from a model built using nominally healthy data. Simulation results show that the proposed method is able to detect gear tooth pitting in current signals.
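
    A minimal sketch of a multi-scale entropy computation in the spirit of the approach above, assuming the common defaults m = 2 and r = 0.2 times the signal's standard deviation; it is not the authors' implementation.

    ```python
    # Multi-scale entropy: coarse-grain the signal, then compute SampEn per scale.
    import numpy as np

    def sample_entropy(x, m=2, r=None):
        """SampEn: -log of the ratio of (m+1)- to m-length template matches."""
        x = np.asarray(x, dtype=float)
        if r is None:
            r = 0.2 * x.std()
        def count_matches(mm):
            templates = np.array([x[i:i + mm] for i in range(len(x) - mm)])
            # Chebyshev distance between all template pairs (i < j).
            d = np.abs(templates[:, None, :] - templates[None, :, :]).max(axis=2)
            n = len(templates)
            return (d[np.triu_indices(n, k=1)] <= r).sum()
        b, a = count_matches(m), count_matches(m + 1)
        return -np.log(a / b) if a > 0 and b > 0 else np.inf

    def multiscale_entropy(x, scales=(1, 2, 3, 4, 5)):
        """Coarse-grain by non-overlapping averaging, then apply SampEn."""
        x = np.asarray(x, dtype=float)
        out = []
        for s in scales:
            n = len(x) // s
            coarse = x[:n * s].reshape(n, s).mean(axis=1)
            out.append(sample_entropy(coarse, r=0.2 * x.std()))
        return out

    rng = np.random.default_rng(0)
    print(multiscale_entropy(rng.normal(size=600)))
    ```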

  6. Value Focused Thinking Applications to Supervised Pattern Classification With Extensions to Hyperspectral Anomaly Detection Algorithms

    DTIC Science & Technology

    2015-03-26

    performing. All reasonable permutations of factors will be used to develop a multitude of unique combinations. These combinations are considered different... The impurity measures are given below (Duda et al., 2001). Entropy impurity: $i(N) = -\sum_j P(\omega_j) \log_2 P(\omega_j)$ (9). Gini impurity: $i(N) = \sum_{i \neq j} P(\omega_i) P(\omega_j) = \tfrac{1}{2}\bigl[1 - \sum_j P^2(\omega_j)\bigr]$... as the proportion of one class to another approaches 0.5, the impurity measure reaches its maximum, which for Entropy is 1.0, while it is 0.5 for Gini
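
    As a quick check of the reconstructed impurity formulas, the sketch below evaluates both at an even two-class split. Note that the stated maxima (1.0 for entropy, 0.5 for Gini) correspond to the common 1 − Σp² convention for Gini; the ½-scaled pairwise form shown above peaks at half that value.

    ```python
    # Impurity measures at an even two-class split (1 - sum(p^2) Gini convention).
    import math

    def entropy_impurity(p):
        return -sum(q * math.log2(q) for q in p if q > 0)

    def gini_impurity(p):
        return 1.0 - sum(q * q for q in p)

    print(entropy_impurity([0.5, 0.5]))  # 1.0
    print(gini_impurity([0.5, 0.5]))     # 0.5
    ```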

  7. Statistical Traffic Anomaly Detection in Time-Varying Communication Networks

    DTIC Science & Technology

    2015-02-01

    our methods perform better than their vanilla counterparts, which assume that normal traffic is stationary... Index Terms—Statistical anomaly detection... anomaly detection but also for understanding the normal traffic in time-varying networks. C. Comparison with vanilla stochastic methods

  9. A Semi-Vectorization Algorithm to Synthesis of Gravitational Anomaly Quantities on the Earth

    NASA Astrophysics Data System (ADS)

    Abdollahzadeh, M.; Eshagh, M.; Najafi Alamdari, M.

    2009-04-01

    The Earth's gravitational potential can be expressed by the well-known spherical harmonic expansion. The computational time of summing up this expansion is an important practical issue, which can be reduced by an efficient numerical algorithm. This paper proposes such a method for block-wise synthesis of the anomaly quantities on the Earth's surface using vectorization. Full vectorization means transforming the summations into simple matrix and vector products, which is not practical for matrices with large dimensions. Here a semi-vectorization algorithm is proposed to avoid working with large vectors and matrices: it speeds up the computations by using one loop for the summation, either on degrees or on orders. The former is a good option for synthesizing the anomaly quantities on the Earth's surface considering a digital elevation model (DEM). This approach is more efficient than the two-step method, which computes the quantities on the reference ellipsoid and continues them upward to the Earth's surface. The algorithm has been coded in MATLAB; it synthesizes a global 5′ × 5′ grid (about 9 million points) of gravity anomaly or geoid height using a geopotential model to degree 360 in 10000 seconds on an ordinary computer with 2 GB RAM.
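
    The sketch below illustrates the semi-vectorization idea on a toy problem: a single explicit loop over orders, with the summation over degrees done as a vector product. Unnormalized Legendre functions, random coefficients, and a low maximum degree are assumptions; real geopotential synthesis uses fully normalized functions to high degree.

    ```python
    # Semi-vectorized spherical harmonic synthesis: one loop over orders m,
    # vectorized summation over degrees n (toy, unnormalized illustration).
    import numpy as np
    from scipy.special import lpmv

    def synthesize(C, S, lat, lon, nmax):
        """Sum_{n,m} P_nm(sin lat) [C_nm cos(m lon) + S_nm sin(m lon)]."""
        t = np.sin(lat)
        total = 0.0
        for m in range(nmax + 1):          # single loop: over orders
            n = np.arange(m, nmax + 1)     # all degrees for this order at once
            Pnm = lpmv(m, n, t)            # Legendre values, vectorized over n
            total += Pnm @ (C[n, m] * np.cos(m * lon) + S[n, m] * np.sin(m * lon))
        return total

    nmax = 20
    rng = np.random.default_rng(1)
    C = rng.normal(size=(nmax + 1, nmax + 1)) * 1e-3   # toy coefficients
    S = rng.normal(size=(nmax + 1, nmax + 1)) * 1e-3
    print(synthesize(C, S, lat=np.deg2rad(45.0), lon=np.deg2rad(30.0), nmax=nmax))
    ```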

  10. Geothermal area detection using Landsat ETM+ thermal infrared data and its mechanistic analysis—A case study in Tengchong, China

    NASA Astrophysics Data System (ADS)

    Qin, Qiming; Zhang, Ning; Nan, Peng; Chai, Leilei

    2011-08-01

    Thermal infrared (TIR) remote sensing is an important technique in the exploration of geothermal resources. In this study, a geothermal survey is conducted in the Tengchong area of Yunnan province in China using TIR data from the Landsat-7 Enhanced Thematic Mapper Plus (ETM+) sensor. Based on radiometric calibration, atmospheric correction, and emissivity calculation, a simple but efficient single-channel algorithm with acceptable precision is applied to retrieve the land surface temperature (LST) of the study area. Anomalous LST areas with temperatures about 4-10 K higher than the background are discovered. Four geothermal areas are identified through discussion of the geothermal mechanism and further analysis of the regional geologic structure. The research reveals that the distribution of geothermal areas is consistent with the fault development in the study area. Magmatism contributes an abundant thermal source to the study area, and the faults provide channels for heat transfer from the Earth's interior to the land surface, facilitating the presence of geothermal anomalies. Finally, we conclude that TIR remote sensing is a cost-effective technique to detect LST anomalies, and that combining TIR remote sensing with geological analysis and an understanding of the geothermal mechanism is an accurate and efficient approach to geothermal area detection.

  11. A Survey on Anomaly Based Host Intrusion Detection System

    NASA Astrophysics Data System (ADS)

    Jose, Shijoe; Malathi, D.; Reddy, Bharath; Jayaseeli, Dorathi

    2018-04-01

    An intrusion detection system (IDS) is hardware, software, or a combination of the two that monitors network or system activities to detect malicious signs. In computer security, designing a robust intrusion detection system is one of the most fundamental and important problems. The primary function of such a system is to detect intrusions and issue alerts in a timely manner; when the IDS finds an intrusion, it sends an alert message to the system administrator. Anomaly detection is an important problem that has been researched within diverse research areas and application domains. This survey tries to provide a structured and comprehensive overview of the research on anomaly detection. Each existing anomaly detection technique has relative strengths and weaknesses. The current state of experimental practice in the field of anomaly-based intrusion detection is reviewed, and recent studies are surveyed. This survey provides a study of existing anomaly detection techniques and of how the techniques used in one area can be applied in another application domain.

  12. Unattended Sensor System With CLYC Detectors

    NASA Astrophysics Data System (ADS)

    Myjak, Mitchell J.; Becker, Eric M.; Gilbert, Andrew J.; Hoff, Jonathan E.; Knudson, Christa K.; Landgren, Peter C.; Lee, Samantha F.; McDonald, Benjamin S.; Pfund, David M.; Redding, Rebecca L.; Smart, John E.; Taubman, Matthew S.; Torres-Torres, Carlos R.; Wiseman, Clinton G.

    2016-06-01

    We have developed an unattended sensor for detecting anomalous radiation sources. The system combines several technologies to reduce size and weight, increase battery lifetime, and improve decision-making capabilities. Sixteen Cs2LiYCl6:Ce (CLYC) scintillators allow for gamma-ray spectroscopy and neutron detection in the same volume. Low-power electronics for readout, high voltage bias, and digital processing reduce the total operating power to 1.7 W. Computationally efficient analysis algorithms perform spectral anomaly detection and isotope identification. When an alarm occurs, the system transmits alarm information over a cellular modem. In this paper, we describe the overall design of the unattended sensor, present characterization results, and compare the performance to stock NaI:Tl and 3He detectors.

  13. An artificial bioindicator system for network intrusion detection.

    PubMed

    Blum, Christian; Lozano, José A; Davidson, Pedro Pinacho

    An artificial bioindicator system is developed in order to solve a network intrusion detection problem. The system, inspired by an ecological approach to biological immune systems, evolves a population of agents that learn to survive in their environment. An adaptation process allows the transformation of the agent population into a bioindicator that is capable of reacting to system anomalies. Two characteristics stand out in our proposal. On the one hand, it is able to discover new, previously unseen attacks, and on the other hand, contrary to most of the existing systems for network intrusion detection, it does not need any previous training. We experimentally compare our proposal with three state-of-the-art algorithms and show that it outperforms the competing approaches on widely used benchmark data.

  14. Unattended Sensor System With CLYC Detectors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Myjak, Mitchell J.; Becker, Eric M.; Gilbert, Andrew J.

    2016-06-01

    We have developed a next-generation unattended sensor for detecting anomalous radiation sources. The system combines several technologies to reduce size and weight, increase battery lifetime, and improve decision-making capabilities. Sixteen Cs2LiYCl6:Ce (CLYC) scintillators allow for gamma-ray spectroscopy and neutron detection in the same volume. Low-power electronics for readout, high voltage bias, and digital processing reduce the total operating power to 1.3 W. Computationally efficient analysis algorithms perform spectral anomaly detection and isotope identification. When an alarm occurs, the system transmits alarm information over a cellular modem. In this paper, we describe the overall design of the unattended sensor, present characterization results, and compare the performance to stock NaI:Tl and 3He detectors.

  15. Thermal remote sensing as a part of Exupéry volcano fast response system

    NASA Astrophysics Data System (ADS)

    Zakšek, Klemen; Hort, Matthias

    2010-05-01

    In order to understand the eruptive potential of a volcanic system, one has to characterize its actual state of stress, which requires proper monitoring strategies. As several volcanoes in highly populated areas, especially in south-east Asia, are still nearly unmonitored, a mobile volcano monitoring system is currently being developed in Germany. One of the major novelties of this mobile volcano fast response system, called Exupéry, is the direct inclusion of satellite-based observations. Remote sensing data are introduced together with ground-based field measurements into the GIS database, where the statistical properties of all recorded data are estimated. Using physical modelling and statistical methods, we hope to constrain the probability of future eruptions. The emphasis of this contribution is on using thermal remote sensing as a tool for monitoring active volcanoes. One can detect thermal anomalies originating from a volcano by comparing signals in the mid- and thermal-infrared spectra. A reliable and effective thermal anomaly detection algorithm, based on thresholding the so-called normalized thermal index (NTI), was developed by Wright (2002) for the MODIS sensor. This is the method we use in Exupéry, where we characterize each detected thermal anomaly by temperature, area, heat flux, and effusion rate. Recent work has shown that radiant flux is the most robust parameter for this characterization. Its derivation depends on the atmosphere, the satellite viewing angle, and sensor characteristics. Some of these influences are easy to correct using standard remote sensing pre-processing techniques; however, some noise still remains in the data. In addition, satellites in polar orbits have long revisit times and thus might fail to follow a fast-evolving volcanic crisis. We are therefore currently testing a Kalman filter on the simultaneous use of MODIS and AVHRR data to improve the thermal anomaly characterization. The advantage of this technique is that it increases the temporal resolution by using images from different satellites with different resolution and sensitivity. The algorithm has been tested on an eruption at Mt. Etna (2002) and successfully captures more details of the eruption evolution than would be seen using only one satellite source. At the moment, only MODIS (a sensor aboard NASA's Terra and Aqua satellites) data are used operationally in Exupéry. As MODIS is a meteorological sensor, it is also suitable for producing general overview images of the crisis area. Therefore, for each processed MODIS image we also produce an RGB image in which some basic meteorological features are classified, e.g. clouds, volcanic ash plumes, and ocean. In the case of a detected hotspot, an additional image is created containing the original measured radiances of the selected channels for the crisis area. All anomaly and processing parameters are additionally written into an XML file. The results are available in the web GIS at worst two hours after NASA provides level 1b data online.
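
    A minimal sketch of NTI-based hotspot screening in the spirit of Wright's MODVOLC approach: compute the normalized thermal index from mid-infrared and thermal-infrared radiances and threshold it. The −0.8 cutoff and the toy radiance values are illustrative assumptions.

    ```python
    # Normalized thermal index (NTI) hotspot screening on toy radiances.
    import numpy as np

    def nti(mir_radiance, tir_radiance):
        """Normalized thermal index: (MIR - TIR) / (MIR + TIR)."""
        return (mir_radiance - tir_radiance) / (mir_radiance + tir_radiance)

    mir = np.array([0.6, 0.7, 4.5, 0.5])   # mid-infrared radiances (toy values)
    tir = np.array([8.0, 8.2, 9.0, 7.9])   # thermal-infrared radiances (toy values)
    hotspot = nti(mir, tir) > -0.8         # pixels flagged as thermal anomalies
    print(nti(mir, tir).round(3), hotspot)
    ```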

  16. Setup Instructions for the Applied Anomaly Detection Tool (AADT) Web Server

    DTIC Science & Technology

    2016-09-01

    ARL-TR-7798 ● SEP 2016 ● US Army Research Laboratory. Setup Instructions for the Applied Anomaly Detection Tool (AADT) Web Server, by Christian D Schlesiger, Computational and Information Sciences Directorate, ARL.

  17. Flight-Tested Prototype of BEAM Software

    NASA Technical Reports Server (NTRS)

    Mackey, Ryan; Tikidjian, Raffi; James, Mark; Wang, David

    2006-01-01

    Researchers at JPL have completed a software prototype of BEAM (Beacon-based Exception Analysis for Multi-missions) and successfully tested its operation in flight onboard a NASA research aircraft. BEAM (see NASA Tech Briefs, Vol. 26, No. 9; and Vol. 27, No. 3) is an ISHM (Integrated Systems Health Management) technology that automatically analyzes sensor data and classifies system behavior as either nominal or anomalous, and further characterizes anomalies according to strength, duration, and affected signals. BEAM can be used to monitor a wide variety of physical systems and sensor types in real time. In this series of tests, BEAM monitored the engines of a Dryden Flight Research Center F-18 aircraft and performed onboard, unattended analysis of 26 engine sensors from engine startup to shutdown. The BEAM algorithm can detect anomalies based solely on the sensor data, including but not limited to sensor failure, performance degradation, incorrect operation such as unplanned engine shutdown or flameout in this example, and major system faults. BEAM was tested on an F-18 simulator, in static engine tests, and on 25 individual flights totaling approximately 60 hours of flight time. During these tests, BEAM successfully identified planned anomalies (in-flight shutdowns of one engine) as well as minor unplanned anomalies (e.g., transient oil- and fuel-pressure drops), with no false alarms or suspected false-negative results for the period tested. BEAM also detected previously unknown behavior in the F-18 compressor section during several flights. This result, confirmed by direct analysis of the raw data, serves as a significant test of BEAM's capability.

  18. Detrending Algorithms in Large Time Series: Application to TFRM-PSES Data

    NASA Astrophysics Data System (ADS)

    del Ser, D.; Fors, O.; Núñez, J.; Voss, H.; Rosich, A.; Kouprianov, V.

    2015-07-01

    Certain instrumental effects and data reduction anomalies introduce systematic errors in photometric time series. Detrending algorithms such as the Trend Filtering Algorithm (TFA; Kovács et al. 2004) have played a key role in minimizing the effects caused by these systematics. Here we present the results obtained after applying the TFA and Savitzky-Golay (1964) detrending algorithms, and the Box Least Squares phase-folding algorithm (Kovács et al. 2002), to the TFRM-PSES data (Fors et al. 2013). Tests performed on these data show that applying the two filtering methods together improves the photometric RMS on average by a factor of 3-4, with better efficiency towards brighter magnitudes, while applying TFA alone yields an improvement of a factor of 1-2. As a result of this improvement, we are able to detect and analyze a large number of stars per TFRM-PSES field that present some kind of variability. Also, after porting these algorithms to Python and parallelizing them, we have improved the computational performance of the overall detrending+BLS algorithm by a factor of ~10 with respect to Kovács et al. (2004), even for large data samples.

  19. A Local Scalable Distributed Expectation Maximization Algorithm for Large Peer-to-Peer Networks

    NASA Technical Reports Server (NTRS)

    Bhaduri, Kanishka; Srivastava, Ashok N.

    2009-01-01

    This paper offers a local distributed algorithm for expectation maximization in large peer-to-peer environments. The algorithm can be used for a variety of well-known data mining tasks in a distributed environment, such as clustering, anomaly detection, and target tracking, to name a few. This technology is crucial for many emerging peer-to-peer applications in bioinformatics, astronomy, social networking, sensor networks, and web mining. Centralizing all or some of the data for building global models is impractical in such peer-to-peer environments because of the large number of data sources, the asynchronous nature of the peer-to-peer networks, and the dynamic nature of the data and network. The distributed algorithm we have developed in this paper is provably correct, i.e., it converges to the same result as a similar centralized algorithm, and can automatically adapt to changes in the data and the network. We show that the communication overhead of the algorithm is very low due to its local nature. This monitoring algorithm is then used as a feedback loop to sample data from the network and rebuild the model when it is outdated. We present thorough experimental results to verify our theoretical claims.

  20. Automatic Multi-sensor Data Quality Checking and Event Detection for Environmental Sensing

    NASA Astrophysics Data System (ADS)

    LIU, Q.; Zhang, Y.; Zhao, Y.; Gao, D.; Gallaher, D. W.; Lv, Q.; Shang, L.

    2017-12-01

    With the advances in sensing technologies, large-scale environmental sensing infrastructures are pervasively deployed to continuously collect data for various research and application fields, such as air quality study and weather condition monitoring. In such infrastructures, many sensor nodes are distributed over a specific area, and each individual node can measure several parameters (e.g., humidity, temperature, and pressure), providing massive data for natural event detection and analysis. However, due to the dynamics of the ambient environment, sensor data can be contaminated by errors or noise. Thus, data quality is still a primary concern for scientists before drawing any reliable scientific conclusions. To help researchers identify potential data quality issues and detect meaningful natural events, this work proposes a novel algorithm to automatically identify and rank anomalous time windows from multiple sensor data streams. More specifically, the algorithm (1) adaptively learns the characteristics of normal evolving time series and (2) models the spatial-temporal relationship among multiple sensor nodes to infer the anomaly likelihood of a time-series window for a particular parameter in a sensor node. Case studies using different data sets are presented, and the experimental results demonstrate that the proposed algorithm can effectively identify anomalous time windows, which may result from data quality issues or natural events.
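
    A minimal, hypothetical sketch of ranking anomalous windows across sensors: each window is scored by how far one node's readings deviate from the spatial consensus of its peers. The scoring rule (median consensus, mean z-score) and window size are assumptions standing in for the paper's learned models.

    ```python
    # Rank (sensor, window) pairs by deviation from the spatial consensus.
    import numpy as np

    def rank_windows(readings, window=24):
        """readings: array of shape (n_sensors, n_samples) for one parameter."""
        n_sensors, n_samples = readings.shape
        scores = []
        for start in range(0, n_samples - window + 1, window):
            seg = readings[:, start:start + window]
            neighbor_median = np.median(seg, axis=0)      # spatial consensus
            spread = seg.std(axis=0) + 1e-9
            for s in range(n_sensors):
                z = np.abs(seg[s] - neighbor_median) / spread
                scores.append((z.mean(), s, start))       # anomaly likelihood
        return sorted(scores, reverse=True)               # most anomalous first

    rng = np.random.default_rng(2)
    data = rng.normal(20.0, 0.5, size=(5, 240))           # e.g., temperature
    data[3, 96:120] += 6.0                                # inject a faulty window
    print(rank_windows(data)[:3])                         # top-ranked anomalies
    ```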

  1. The use of Compton scattering in detecting anomaly in soil-possible use in pyromaterial detection

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Abedin, Ahmad Firdaus Zainal; Ibrahim, Noorddin; Zabidi, Noriza Ahmad

    Compton scattering is able to determine the signature of a land mine based on the dependency between density anomalies and the energy change of scattered photons. In this study, 4.43 MeV gamma rays from an Am-Be source were used to perform Compton scattering. Two detectors of 1.9 cm radius were placed at a distance of 8 cm from the source. Thallium-doped sodium iodide NaI(Tl) detectors were used for detecting gamma rays. Nine anomalies were used in this simulation. Each anomaly is a cylinder with a radius of 10 cm and a height of 8.9 cm, buried 5 cm deep in a bed of soil measuring 80 cm in radius and 53.5 cm in height. Monte Carlo methods indicated that the scattering of photons is directly proportional to the density of the anomalies. The difference between the detector response with and without an anomaly, namely the contrast ratio, is in a linear relationship with the density of the anomaly. Anomalies of air, wood, and water give positive contrast ratio values, whereas explosive, sand, concrete, graphite, limestone, and polyethylene give negative values. Overall, the contrast ratio values are greater than 2% for all anomalies. The strong contrast ratios result in good detection capability and distinction between anomalies.

  2. Data Mining: The Art of Automated Knowledge Extraction

    NASA Astrophysics Data System (ADS)

    Karimabadi, H.; Sipes, T.

    2012-12-01

    Data mining algorithms are used routinely in a wide variety of fields, and they are gaining adoption in the sciences. The realities of real-world data analysis are that (a) data have flaws and (b) the models and assumptions that we bring to the data are inevitably flawed, biased, or misspecified in some way. Data mining can improve data analysis by detecting anomalies in the data, checking the consistency of the user's model assumptions, and deciphering complex patterns and relationships that would not be accessible otherwise. The common form of data collected from in situ spacecraft measurements is the multivariate time series, which represents one of the most challenging problems in data mining. We have successfully developed algorithms to deal with such data and have extended the algorithms to handle streaming data. In this talk, we illustrate the utility of our algorithms through several examples, including automated detection of reconnection exhausts in the solar wind and flux ropes in the magnetotail. We also show examples from successful applications of our technique to the analysis of 3D kinetic simulations. With an eye to the future, we provide an overview of our upcoming plans, which include collaborative data mining, expert outsourcing data mining, and computer vision for image analysis, among others. Finally, we discuss the integration of data mining algorithms with web-based services such as VxOs and other Heliophysics data centers and the resulting capabilities it would enable.

  3. Spent nuclear fuel assembly inspection using neutron computed tomography

    NASA Astrophysics Data System (ADS)

    Pope, Chad Lee

    The research presented here focuses on spent nuclear fuel assembly inspection using neutron computed tomography. Experimental measurements involving neutron beam transmission through a spent nuclear fuel assembly serve as benchmark measurements for an MCNP simulation model. Comparison of measured results to simulation results shows good agreement. Generation of tomography images from MCNP tally results was accomplished using adapted versions of built in MATLAB algorithms. Multiple fuel assembly models were examined to provide a broad set of conclusions. Tomography images revealing assembly geometric information including the fuel element lattice structure and missing elements can be obtained using high energy neutrons. A projection difference technique was developed which reveals the substitution of unirradiated fuel elements for irradiated fuel elements, using high energy neutrons. More subtle material differences such as altering the burnup of individual elements can be identified with lower energy neutrons provided the scattered neutron contribution to the image is limited. The research results show that neutron computed tomography can be used to inspect spent nuclear fuel assemblies for the purpose of identifying anomalies such as missing elements or substituted elements. The ability to identify anomalies in spent fuel assemblies can be used to deter diversion of material by increasing the risk of early detection as well as improve reprocessing facility operations by confirming the spent fuel configuration is as expected or allowing segregation if anomalies are detected.

  4. RoboTAP: Target priorities for robotic microlensing observations

    NASA Astrophysics Data System (ADS)

    Hundertmark, M.; Street, R. A.; Tsapras, Y.; Bachelet, E.; Dominik, M.; Horne, K.; Bozza, V.; Bramich, D. M.; Cassan, A.; D'Ago, G.; Figuera Jaimes, R.; Kains, N.; Ranc, C.; Schmidt, R. W.; Snodgrass, C.; Wambsganss, J.; Steele, I. A.; Mao, S.; Ment, K.; Menzies, J.; Li, Z.; Cross, S.; Maoz, D.; Shvartzvald, Y.

    2018-01-01

    Context. The ability to automatically select scientifically-important transient events from an alert stream of many such events, and to conduct follow-up observations in response, will become increasingly important in astronomy. With wide-angle time domain surveys pushing to fainter limiting magnitudes, the capability to follow-up on transient alerts far exceeds our follow-up telescope resources, and effective target prioritization becomes essential. The RoboNet-II microlensing program is a pathfinder project, which has developed an automated target selection process (RoboTAP) for gravitational microlensing events, which are observed in real time using the Las Cumbres Observatory telescope network. Aims: Follow-up telescopes typically have a much smaller field of view compared to surveys, therefore the most promising microlensing events must be automatically selected at any given time from an annual sample exceeding 2000 events. The main challenge is to select between events with a high planet detection sensitivity, with the aim of detecting many planets and characterizing planetary anomalies. Methods: Our target selection algorithm is a hybrid system based on estimates of the planet detection zones around a microlens. It follows automatic anomaly alerts and respects the expected survey coverage of specific events. Results: We introduce the RoboTAP algorithm, whose purpose is to select and prioritize microlensing events with high sensitivity to planetary companions. In this work, we determine the planet sensitivity of the RoboNet follow-up program and provide a working example of how a broker can be designed for a real-life transient science program conducting follow-up observations in response to alerts; we explore the issues that will confront similar programs being developed for the Large Synoptic Survey Telescope (LSST) and other time domain surveys.

  5. Data Mining Citizen Science Results

    NASA Astrophysics Data System (ADS)

    Borne, K. D.

    2012-12-01

    Scientific discovery from big data is enabled through multiple channels, including data mining (through the application of machine learning algorithms) and human computation (commonly implemented through citizen science tasks). We will describe the results of new data mining experiments on the results from citizen science activities. Discovering patterns, trends, and anomalies in data are among the powerful contributions of citizen science. Establishing scientific algorithms that can subsequently re-discover the same types of patterns, trends, and anomalies in automatic data processing pipelines will ultimately result from the transformation of those human algorithms into computer algorithms, which can then be applied to much larger data collections. Scientific discovery from big data is thus greatly amplified through the marriage of data mining with citizen science.

  6. Detecting anomalies in astronomical signals using machine learning algorithms embedded in an FPGA

    NASA Astrophysics Data System (ADS)

    Saez, Alejandro F.; Herrera, Daniel E.

    2016-07-01

    In a large interferometer for radio astronomy such as the ALMA telescope, the number of stations (50 in the case of ALMA's main array, which can extend to 64 antennas) produces an enormous amount of data in a short period of time: visibilities can be produced every 16 ms, and total power information every 1 ms (up to 2016 baselines). As a consequence, it is becoming more difficult to detect problems in the signal produced by each antenna in a timely manner (one antenna produces 4 × 2 GHz spectral windows × 2 polarizations, i.e., a 16 GHz bandwidth signal that is digitized using 3-bit samplers). This work presents an approach based on machine learning algorithms for detecting problems in the already digitized signal produced by the active antennas (the set of antennas being used in an observation). The aim of this work is to detect unsuitable, or totally corrupted, signals. In addition, this development provides an almost real-time warning, which helps stop and investigate the problem in order to avoid collecting useless information.

  7. A Tensor-Based Structural Damage Identification and Severity Assessment

    PubMed Central

    Anaissi, Ali; Makki Alamdari, Mehrisadat; Rakotoarivelo, Thierry; Khoa, Nguyen Lu Dang

    2018-01-01

    Early damage detection is critical for a large set of global ageing infrastructure. Structural Health Monitoring systems provide a sensor-based quantitative and objective approach to continuously monitor these structures, as opposed to traditional engineering visual inspection. Analysing these sensed data is one of the major Structural Health Monitoring (SHM) challenges. This paper presents a novel algorithm to detect and assess damage in structures such as bridges. This method applies tensor analysis for data fusion and feature extraction, and further uses one-class support vector machine on this feature to detect anomalies, i.e., structural damage. To evaluate this approach, we collected acceleration data from a sensor-based SHM system, which we deployed on a real bridge and on a laboratory specimen. The results show that our tensor method outperforms a state-of-the-art approach using the wavelet energy spectrum of the measured data. In the specimen case, our approach succeeded in detecting 92.5% of induced damage cases, as opposed to 61.1% for the wavelet-based approach. While our method was applied to bridges, its algorithm and computation can be used on other structures or sensor-data analysis problems, which involve large series of correlated data from multiple sensors. PMID:29301314

  8. Improved gravity anomaly fields from retracked multimission satellite radar altimetry observations over the Persian Gulf and the Caspian Sea

    NASA Astrophysics Data System (ADS)

    Khaki, M.; Forootan, E.; Sharifi, M. A.; Awange, J.; Kuhn, M.

    2015-09-01

    Satellite radar altimetry observations are used to derive short-wavelength gravity anomaly fields over the Persian Gulf and the Caspian Sea, where in situ and ship-borne gravity measurements have limited spatial coverage. In this study the retracking algorithm `Extrema Retracking' (ExtR) was employed to improve sea surface height (SSH) measurements, which are highly biased in the study regions due to land contamination in the footprints of the satellite altimetry observations. ExtR was applied to the waveforms sampled by five satellite radar altimetry missions: TOPEX/POSEIDON, JASON-1, JASON-2, GFO, and ERS-1. Along-track slopes were estimated from the improved SSH measurements and used in an iterative process to estimate deflections of the vertical and, subsequently, the desired gravity anomalies. The main steps of the gravity anomaly computation involve estimating improved SSH using the ExtR technique, computing deflections of the vertical from SSHs interpolated on a regular grid using a biharmonic spline interpolation, and finally estimating gridded gravity anomalies. A remove-compute-restore algorithm, based on the fast Fourier transform, was applied to convert deflections of the vertical into gravity anomalies. Finally, spline interpolation was used to estimate regular gravity anomaly grids over the two study regions. Results were evaluated by comparing the estimated altimetry-derived gravity anomalies (with and without the ExtR algorithm) with ship-borne free-air gravity anomaly observations and free-air gravity anomalies from the Earth Gravitational Model 2008 (EGM2008). The comparison indicates a range of 3-5 mGal in the residuals, computed as the differences between the retracked altimetry-derived gravity anomalies and the ship-borne data. The comparison with ship-borne data indicates a root-mean-square error (RMSE) between approximately 1.8 and 4.4 mGal and a bias between 0.4062 and 2.1413 mGal over different areas, with a maximum RMSE of 4.4069 mGal and a mean value of 0.7615 mGal in the residuals. An average improvement of 5.2746 mGal in the RMSE of the altimetry-derived gravity anomalies, corresponding to 89.9 per cent, was obtained after applying the ExtR post-processing.

  9. Non-seismic tsunamis: filling the forecast gap

    NASA Astrophysics Data System (ADS)

    Moore, C. W.; Titov, V. V.; Spillane, M. C.

    2015-12-01

    Earthquakes are the generation mechanism of over 85% of tsunamis. However, non-seismic tsunamis, including those generated by meteorological events, landslides, volcanoes, and asteroid impacts, can inundate significant areas and have large far-field effects. The current National Oceanic and Atmospheric Administration (NOAA) tsunami forecast system falls short in detecting these phenomena. This study attempts to classify the range of effects possible from these non-seismic threats and to investigate detection methods appropriate for use in a forecast system. Typical observation platforms are assessed, including DART bottom pressure recorders and tide gauges. Other detection paths include atmospheric pressure anomaly algorithms for detecting meteotsunamis and the early identification of asteroids large enough to produce a regional hazard. Real-time assessment of observations for forecast use can provide guidance to mitigate the effects of a non-seismic tsunami.

  10. A fuzzy case based reasoning tool for model based approach to rocket engine health monitoring

    NASA Technical Reports Server (NTRS)

    Krovvidy, Srinivas; Nolan, Adam; Hu, Yong-Lin; Wee, William G.

    1992-01-01

    In this system, we develop a fuzzy case-based reasoner that builds a case representation for several past detected anomalies, and we develop case retrieval methods that use fuzzy sets to index a relevant case when a new problem (case) is presented. The choice of fuzzy sets is justified by the uncertainty in the data. The new problem can then be solved using knowledge of the model along with the old cases. The system can generalize knowledge from previous cases and use this generalization to refine the existing model definition, which in turn helps detect failures using the model-based algorithms.

  11. Mining Building Energy Management System Data Using Fuzzy Anomaly Detection and Linguistic Descriptions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dumidu Wijayasekara; Ondrej Linda; Milos Manic

    Building Energy Management Systems (BEMSs) are essential components of modern buildings that utilize digital control technologies to minimize energy consumption while maintaining high levels of occupant comfort. However, BEMSs can only achieve these energy savings when properly tuned and controlled. Since the indoor environment depends on uncertain criteria such as weather, occupancy, and thermal state, BEMS performance can be sub-optimal at times. Unfortunately, the complexity of the BEMS control mechanism, the large amount of data available, and the inter-relations between the data can make identifying these sub-optimal behaviors difficult. This paper proposes a novel Fuzzy Anomaly Detection and Linguistic Description (Fuzzy-ADLD) based method for improving the understandability of BEMS behavior for improved state-awareness. The presented method is composed of two main parts: (1) detection of anomalous BEMS behavior and (2) linguistic representation of BEMS behavior. The first part utilizes a modified nearest-neighbor clustering algorithm and a fuzzy logic rule extraction technique to build a model of normal BEMS behavior. The second part computes the most relevant linguistic description of the identified anomalies. The presented Fuzzy-ADLD method was applied to a real-world BEMS and compared against a traditional alarm-based BEMS. In six different scenarios, the Fuzzy-ADLD method identified anomalous behavior either as fast as or faster (by an hour or more) than the alarm-based BEMS. In addition, the Fuzzy-ADLD method identified cases that were missed by the alarm-based system, demonstrating potential for increased state-awareness of abnormal building behavior.

  12. Implementing Operational Analytics using Big Data Technologies to Detect and Predict Sensor Anomalies

    NASA Astrophysics Data System (ADS)

    Coughlin, J.; Mital, R.; Nittur, S.; SanNicolas, B.; Wolf, C.; Jusufi, R.

    2016-09-01

    Operational analytics, when combined with Big Data technologies and predictive techniques, has been shown to be valuable in detecting mission-critical sensor anomalies that might be missed by conventional analytical techniques. Our approach helps analysts and leaders make informed and rapid decisions by analyzing large volumes of complex data in near real time and presenting it in a manner that facilitates decision making. It provides cost savings by being able to alert and predict when sensor degradations pass a critical threshold and impact mission operations. Operational analytics, which uses Big Data tools and technologies, can process very large data sets containing a variety of data types to uncover hidden patterns, unknown correlations, and other relevant information. When combined with predictive techniques, it provides a mechanism to monitor and visualize these data sets and gain insight into degradations encountered in large sensor systems such as the space surveillance network. In this study, data from a notional sensor are simulated, and we use Big Data technologies, predictive algorithms, and operational analytics to process the data and predict sensor degradations. The study uses data products that would commonly be analyzed at a site and builds on a big data architecture that has previously been proven valuable in detecting anomalies. This paper outlines our methodology for implementing an operational analytic solution through data discovery, learning and training of data modeling and predictive techniques, and deployment. Through this methodology, we implement a functional architecture focused on exploring available big data sets and determining practical analytic, visualization, and predictive technologies.

  13. Apparatus for detecting a magnetic anomaly contiguous to remote location by squid gradiometer and magnetometer systems

    DOEpatents

    Overton, Jr., William C.; Steyert, Jr., William A.

    1984-01-01

    A superconducting quantum interference device (SQUID) magnetic detection apparatus detects magnetic fields, signals, and anomalies at remote locations. Two remotely rotatable SQUID gradiometers may be housed in a cryogenic environment to search for and locate unambiguously magnetic anomalies. The SQUID magnetic detection apparatus can be used to determine the azimuth of a hydrofracture by first flooding the hydrofracture with a ferrofluid to create an artificial magnetic anomaly therein.

  14. Apparatus and method for detecting a magnetic anomaly contiguous to remote location by SQUID gradiometer and magnetometer systems

    DOEpatents

    Overton, W.C. Jr.; Steyert, W.A. Jr.

    1981-05-22

    A superconducting quantum interference device (SQUID) magnetic detection apparatus detects magnetic fields, signals, and anomalies at remote locations. Two remotely rotatable SQUID gradiometers may be housed in a cryogenic environment to search for and locate unambiguously magnetic anomalies. The SQUID magnetic detection apparatus can be used to determine the azimuth of a hydrofracture by first flooding the hydrofracture with a ferrofluid to create an artificial magnetic anomaly therein.

  15. Explosive hazard detection using MIMO forward-looking ground penetrating radar

    NASA Astrophysics Data System (ADS)

    Shaw, Darren; Ho, K. C.; Stone, Kevin; Keller, James M.; Popescu, Mihail; Anderson, Derek T.; Luke, Robert H.; Burns, Brian

    2015-05-01

    This paper proposes a machine learning algorithm for subsurface object detection with multiple-input multiple-output (MIMO) forward-looking ground-penetrating radar (FLGPR). By detecting hazards with FLGPR, standoff distances of up to tens of meters can be achieved, but at the cost of degraded performance due to high false alarm rates. The proposed system utilizes an anomaly detection prescreener to identify potential object locations. At each alarm location, multiple one-dimensional (1D) spectral features, two-dimensional (2D) spectral features, and log-Gabor statistic features are extracted. The ability of these features to reduce the number of false alarms and increase the probability of detection is evaluated for both co-polarizations present in the Akela MIMO array. Classification is performed by a support vector machine (SVM) with lane-based cross-validation for training and testing. Class imbalance and optimized SVM kernel parameters are considered during classifier training.
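
    The sketch below illustrates lane-based cross-validation with a class-imbalance-aware SVM, assuming scikit-learn's GroupKFold as the grouping mechanism; the random features and lane labels are stand-ins for the extracted spectral and log-Gabor features.

    ```python
    # Lane-based cross-validation for an imbalanced SVM classifier (toy data).
    import numpy as np
    from sklearn.model_selection import GroupKFold
    from sklearn.svm import SVC

    rng = np.random.default_rng(3)
    X = rng.normal(size=(300, 12))           # stand-in alarm features
    y = (rng.random(300) < 0.1).astype(int)  # ~10% targets: imbalanced classes
    lanes = rng.integers(0, 6, size=300)     # which lane each alarm came from

    # GroupKFold keeps all alarms from a lane in the same fold, so training and
    # testing never share a lane; class_weight="balanced" handles the imbalance.
    cv = GroupKFold(n_splits=6)
    for train_idx, test_idx in cv.split(X, y, groups=lanes):
        clf = SVC(kernel="rbf", C=1.0, gamma="scale", class_weight="balanced")
        clf.fit(X[train_idx], y[train_idx])
        print("held-out lane accuracy:", clf.score(X[test_idx], y[test_idx]))
    ```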

  16. Glial brain tumor detection by using symmetry analysis

    NASA Astrophysics Data System (ADS)

    Pedoia, Valentina; Binaghi, Elisabetta; Balbi, Sergio; De Benedictis, Alessandro; Monti, Emanuele; Minotto, Renzo

    2012-02-01

    In this work, a fully automatic algorithm to detect brain tumors using symmetry analysis is proposed. In recent years, a great deal of research in medical imaging has focused on brain tumor segmentation. Quantitative analysis of MRI brain tumors yields useful key indicators of disease progression. The complex problem of segmenting tumors in MRI can be successfully addressed by considering modular and multi-step approaches that mimic the human visual inspection process. Tumor detection is often an essential preliminary phase for solving the segmentation problem successfully. In visual analysis of MRI, the first step of the expert's cognitive process is the detection of an anomaly with respect to normal tissue, whatever its nature. A healthy brain has a strong sagittal symmetry that is weakened by the presence of a tumor. The comparison between the healthy and ill hemispheres, considering that tumors are generally not symmetrically placed in both hemispheres, is used to detect the anomaly. A clustering method based on energy minimization through Graph-Cut is applied to the volume computed as the difference between the left hemisphere and the right hemisphere mirrored across the symmetry plane. Differential analysis involves losing knowledge of which side the tumor is on; through histogram analysis, the ill hemisphere is recognized. Many experiments were performed to assess the performance of the detection strategy on MRI volumes with tumors varying in shape, position, and intensity level. The experiments showed good results also in complex situations.
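
    A minimal sketch of the core symmetry idea: mirror one hemisphere across the sagittal midplane, difference it against the other, and locate the strongest asymmetry. A 2D slice with a synthetic bright lesion stands in for an MRI volume; the real method adds Graph-Cut clustering and histogram analysis.

    ```python
    # Sagittal symmetry analysis on a synthetic 2D "slice" with a bright lesion.
    import numpy as np

    rng = np.random.default_rng(4)
    slice_ = rng.normal(100.0, 5.0, size=(128, 128))  # roughly symmetric tissue
    slice_[40:56, 20:36] += 60.0                      # lesion in left hemisphere

    left, right = slice_[:, :64], slice_[:, 64:]
    diff = np.abs(left - right[:, ::-1])              # mirror across midline
    mask = diff > diff.mean() + 3 * diff.std()        # simple asymmetry threshold
    ys, xs = np.nonzero(mask)
    print("asymmetric pixels:", mask.sum(),
          "centroid:", (ys.mean().round(1), xs.mean().round(1)))
    ```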

  17. Seismic waveform classification using deep learning

    NASA Astrophysics Data System (ADS)

    Kong, Q.; Allen, R. M.

    2017-12-01

    MyShake is a global smartphone seismic network that harnesses the power of crowdsourcing. It has an artificial neural network (ANN) algorithm running on the phone to distinguish earthquake motion from human activities recorded by the on-board accelerometer. Once the ANN detects earthquake-like motion, it sends a 5-min chunk of acceleration data back to the server for further analysis. The time-series data collected contain both earthquake data and human-activity data that the ANN confused. In this presentation, we show the convolutional neural network (CNN) we built, under the umbrella of supervised learning, to pick out the earthquake waveforms. The waveforms of the recorded motion can easily be treated as images, and by taking advantage of the power of CNNs for processing images, we achieved a very high success rate in selecting the earthquake waveforms. Since there are many more non-earthquake waveforms than earthquake waveforms, we also built an anomaly detection algorithm using the CNN. Both methods can easily be extended to other waveform classification problems.

  18. OPAD-EDIFIS Real-Time Processing

    NASA Technical Reports Server (NTRS)

    Katsinis, Constantine

    1997-01-01

    The Optical Plume Anomaly Detection (OPAD) system detects engine hardware degradation in flight vehicles through identification and quantification of elemental species found in the plume, by analyzing the plume emission spectra in real time. Real-time performance of OPAD relies on extensive software, which must report metal amounts in the plume faster than once every 0.5 s. OPAD software previously written by NASA scientists performed most necessary functions at speeds far below what is needed for real-time operation. The research presented in this report improved the execution speed of the software by optimizing the code without changing the algorithms and converting it into a parallelized form executed on a shared-memory multiprocessor system. The resulting code was subjected to extensive timing analysis. The report also provides suggestions for further performance improvement by (1) identifying areas of algorithm optimization, (2) recommending commercially available multiprocessor architectures and operating systems to support real-time execution, and (3) presenting an initial study of fault-tolerance requirements.

  19. Assessment of the Broadleaf Crops Leaf Area Index Product from the Terra MODIS Instrument

    NASA Technical Reports Server (NTRS)

    Tan, Bin; Hu, Jiannan; Huang, Dong; Yang, Wenze; Zhang, Ping; Shabanov, Nikolay V.; Knyazikhin, Yuri; Nemani, Ramakrishna R.; Myneni, Ranga B.

    2005-01-01

    The first significant processing of Terra MODIS data, called Collection 3, covered the period from November 2000 to December 2002. The Collection 3 leaf area index (LAI) and fraction of photosynthetically active radiation absorbed by vegetation (FPAR) products for broadleaf crops exhibited three anomalies: (a) high LAI values during the peak growing season, (b) differences in LAI seasonality between the radiative-transfer-based main algorithm and the vegetation-index-based back-up algorithm, and (c) too few retrievals from the main algorithm during the summer period when the crops are at full flush. The cause of these anomalies is a mismatch between reflectances modeled by the algorithm and MODIS measurements. Therefore, the Look-Up Tables accompanying the algorithm were revised and implemented in Collection 4 processing. The main algorithm with the revised Look-Up Tables generated retrievals for over 80% of the pixels with valid data. Retrievals from the back-up algorithm, although few, should be used with caution, as they are generated from surface reflectances with high uncertainties.

  20. Machine learning for the automatic detection of anomalous events

    NASA Astrophysics Data System (ADS)

    Fisher, Wendy D.

    In this dissertation, we describe our research contributions for a novel approach to the application of machine learning for the automatic detection of anomalous events. We work in two different domains to ensure a robust data-driven workflow that could be generalized for monitoring other systems. Specifically, in our first domain, we begin with the identification of internal erosion events in earth dams and levees (EDLs) using geophysical data collected from sensors located on the surface of the levee. As EDLs across the globe reach the end of their design lives, effectively monitoring their structural integrity is of critical importance. The second domain of interest is related to mobile telecommunications, where we investigate a system for automatically detecting non-commercial base station routers (BSRs) operating in protected frequency space. The presence of non-commercial BSRs can disrupt the connectivity of end users, cause service issues for the commercial providers, and introduce significant security concerns. We provide our motivation, experimentation, and results from investigating a generalized novel data-driven workflow using several machine learning techniques. In Chapter 2, we present results from our performance study that uses popular unsupervised clustering algorithms to gain insights to our real-world problems, and evaluate our results using internal and external validation techniques. Using EDL passive seismic data from an experimental laboratory earth embankment, results consistently show a clear separation of events from non-events in four of the five clustering algorithms applied. Chapter 3 uses a multivariate Gaussian machine learning model to identify anomalies in our experimental data sets. For the EDL work, we used experimental data from two different laboratory earth embankments. Additionally, we explore five wavelet transform methods for signal denoising. The best performance is achieved with the Haar wavelets. We achieve up to 97.3% overall accuracy and less than 1.4% false negatives in anomaly detection. In Chapter 4, we research using two-class and one-class support vector machines (SVMs) for an effective anomaly detection system. We again use the two different EDL data sets from experimental laboratory earth embankments (each having approximately 80% normal and 20% anomalies) to ensure our workflow is robust enough to work with multiple data sets and different types of anomalous events (e.g., cracks and piping). We apply Haar wavelet-denoising techniques and extract nine spectral features from decomposed segments of the time series data. The two-class SVM with 10-fold cross validation achieved over 94% overall accuracy and 96% F1-score. Our approach provides a means for automatically identifying anomalous events using various machine learning techniques. Detecting internal erosion events in aging EDLs, earlier than is currently possible, can allow more time to prevent or mitigate catastrophic failures. Results show that we can successfully separate normal from anomalous data observations in passive seismic data, and provide a step towards techniques for continuous real-time monitoring of EDL health. Our lightweight non-commercial BSR detection system also has promise in separating commercial from non-commercial BSR scans without the need for prior geographic location information, extensive time-lapse surveys, or a database of known commercial carriers. (Abstract shortened by ProQuest.).

  1. Network anomaly detection system with optimized DS evidence theory.

    PubMed

    Liu, Yuan; Wang, Xiaofeng; Liu, Kaiyu

    2014-01-01

    Network anomaly detection has attracted increasing attention with the rapid development of computer networks. Some researchers have applied fusion methods and DS evidence theory to network anomaly detection, but with low performance, and they did not consider the complicated and varied features of networks. To achieve a high detection rate, we present a novel network anomaly detection system with optimized Dempster-Shafer evidence theory (ODS) and a regression basic probability assignment (RBPA) function. In this model, we add a weight for each sensor to optimize DS evidence theory according to its previous prediction accuracy, and RBPA employs the sensor's regression ability to address complex networks. Through four kinds of experiments, we find that our novel network anomaly detection model has a better detection rate, and that RBPA as well as the ODS optimization method can improve system performance significantly.
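
    A minimal sketch of weighted evidence fusion in the spirit of the system above: each sensor's basic probability assignment is discounted by a weight reflecting its past accuracy, and the discounted masses are fused with Dempster's rule over the frame {normal, anomaly}. The discounting scheme and toy masses are illustrative assumptions, not the exact ODS/RBPA formulation.

    ```python
    # Weighted Dempster-Shafer fusion over the frame {normal, anomaly}.
    def discount(mass, w):
        """Shift (1 - w) of a sensor's belief onto the ignorance set 'either'."""
        return {"normal": w * mass["normal"],
                "anomaly": w * mass["anomaly"],
                "either": 1.0 - w * (mass["normal"] + mass["anomaly"])}

    def dempster(m1, m2):
        """Dempster's rule for two BPAs over {normal, anomaly, either}."""
        conflict = m1["normal"] * m2["anomaly"] + m1["anomaly"] * m2["normal"]
        k = 1.0 - conflict                      # normalization constant
        out = {}
        out["normal"] = (m1["normal"] * m2["normal"]
                         + m1["normal"] * m2["either"]
                         + m1["either"] * m2["normal"]) / k
        out["anomaly"] = (m1["anomaly"] * m2["anomaly"]
                          + m1["anomaly"] * m2["either"]
                          + m1["either"] * m2["anomaly"]) / k
        out["either"] = m1["either"] * m2["either"] / k
        return out

    sensors = [({"normal": 0.2, "anomaly": 0.7, "either": 0.1}, 0.9),  # accurate
               ({"normal": 0.6, "anomaly": 0.3, "either": 0.1}, 0.4)]  # weak
    combined = discount(*sensors[0])
    for mass, w in sensors[1:]:
        combined = dempster(combined, discount(mass, w))
    print(combined)  # fused belief; "anomaly" should dominate here
    ```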

  2. Network Anomaly Detection System with Optimized DS Evidence Theory

    PubMed Central

    Liu, Yuan; Wang, Xiaofeng; Liu, Kaiyu

    2014-01-01

    Network anomaly detection has attracted increasing attention with the rapid development of computer networks. Some researchers have applied fusion methods and DS evidence theory to network anomaly detection, but with low performance, and they did not consider the complicated and varied features of networks. To achieve a high detection rate, we present a novel network anomaly detection system with optimized Dempster-Shafer evidence theory (ODS) and a regression basic probability assignment (RBPA) function. In this model, we add a weight for each sensor to optimize DS evidence theory according to its previous prediction accuracy, and RBPA employs the sensor's regression ability to address complex networks. Through four kinds of experiments, we find that our novel network anomaly detection model has a better detection rate, and that RBPA as well as the ODS optimization method can improve system performance significantly. PMID:25254258

  3. Evaluation of Anomaly Detection Method Based on Pattern Recognition

    NASA Astrophysics Data System (ADS)

    Fontugne, Romain; Himura, Yosuke; Fukuda, Kensuke

    The number of threats on the Internet is rapidly increasing, and anomaly detection has become of increasing importance. High-speed backbone traffic is particularly affected, but its analysis is a complicated task due to the volume of data, the lack of payload data, asymmetric routing, and the use of sampling techniques. Most anomaly detection schemes focus on the statistical properties of network traffic and highlight anomalous traffic through its singularities. In this paper, we concentrate on unusual traffic distributions, which are easily identifiable in temporal-spatial space (e.g., time/address or port). We present an anomaly detection method that uses a pattern recognition technique to identify anomalies in pictures representing traffic. The main advantage of this method is its ability to detect attacks involving mice flows. We evaluate the parameter set and the effectiveness of this approach by analyzing six years of Internet traffic collected from a trans-Pacific link. We show several examples of detected anomalies and compare our results with those of two other methods. The comparison indicates that the anomalies detected only by the pattern-recognition-based method are mainly malicious traffic with a few packets.

  4. The Inverse Bagging Algorithm: Anomaly Detection by Inverse Bootstrap Aggregating

    NASA Astrophysics Data System (ADS)

    Vischia, Pietro; Dorigo, Tommaso

    2017-03-01

    For data sets populated by a very well modeled process and by another process of unknown probability density function (PDF), a desirable feature when manipulating the fraction of the unknown process (either enhancing or suppressing it) is to avoid modifying the kinematic distributions of the well modeled one. A bootstrap technique is used to identify sub-samples rich in the well modeled process, and to classify each event according to how frequently it appears in such sub-samples. Comparisons with general MVA algorithms are shown, as well as a study of the asymptotic properties of the method, making use of a public domain data set that models a typical search for new physics as performed at hadronic colliders such as the Large Hadron Collider (LHC).

  5. Real-time anomaly detection for very short-term load forecasting

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Luo, Jian; Hong, Tao; Yue, Meng

    Although recent load information is critical to very short-term load forecasting (VSTLF), power companies often have difficulties in collecting the most recent load values accurately and in a timely manner for VSTLF applications. This paper tackles the problem of real-time anomaly detection in the most recent load information used by VSTLF. It proposes a model-based anomaly detection method that consists of two components: a dynamic regression model and an adaptive anomaly threshold. The case study is developed using data from ISO New England. The paper demonstrates that the proposed method significantly outperforms three other anomaly detection methods, including two methods commonly used in the field and one state-of-the-art method used by a winning team of the Global Energy Forecasting Competition 2014. Lastly, a general anomaly detection framework is proposed for future research.
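
    The two components the abstract names, a regression forecaster and an adaptive residual threshold, might be sketched roughly as below; the lag structure, window length, and threshold multiplier k are assumptions, not the paper's settings.

        # Lagged regression forecast with an adaptive residual threshold (sketch).
        import numpy as np
        from sklearn.linear_model import LinearRegression

        def detect_load_anomalies(load, n_lags=24, window=168, k=4.0):
            X = np.column_stack([load[i:len(load) - n_lags + i] for i in range(n_lags)])
            y = load[n_lags:]
            model = LinearRegression().fit(X, y)
            resid = y - model.predict(X)
            flags = np.zeros(len(y), dtype=bool)
            for t in range(window, len(y)):
                sigma = resid[t - window:t].std()     # adaptive: trailing residual spread
                flags[t] = abs(resid[t]) > k * sigma  # large residual => suspect reading
            return flags

        # hourly_load = np.loadtxt('load.csv')  # hypothetical input series
        # print(detect_load_anomalies(hourly_load).sum(), 'suspect observations')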

  6. Real-time anomaly detection for very short-term load forecasting

    DOE PAGES

    Luo, Jian; Hong, Tao; Yue, Meng

    2018-01-06

    Although recent load information is critical to very short-term load forecasting (VSTLF), power companies often have difficulties in collecting the most recent load values accurately and in a timely manner for VSTLF applications. This paper tackles the problem of real-time anomaly detection in the most recent load information used by VSTLF. It proposes a model-based anomaly detection method that consists of two components: a dynamic regression model and an adaptive anomaly threshold. The case study is developed using data from ISO New England. The paper demonstrates that the proposed method significantly outperforms three other anomaly detection methods, including two methods commonly used in the field and one state-of-the-art method used by a winning team of the Global Energy Forecasting Competition 2014. Lastly, a general anomaly detection framework is proposed for future research.

  7. Using statistical anomaly detection models to find clinical decision support malfunctions.

    PubMed

    Ray, Soumi; McEvoy, Dustin S; Aaron, Skye; Hickman, Thu-Trang; Wright, Adam

    2018-05-11

    Malfunctions in Clinical Decision Support (CDS) systems occur for a multitude of reasons and often go unnoticed, leading to potentially poor outcomes. Our goal was to identify malfunctions within CDS systems. We evaluated 6 anomaly detection models: (1) Poisson Changepoint Model, (2) Autoregressive Integrated Moving Average (ARIMA) Model, (3) Hierarchical Divisive Changepoint (HDC) Model, (4) Bayesian Changepoint Model, (5) Seasonal Hybrid Extreme Studentized Deviate (SHESD) Model, and (6) E-Divisive with Median (EDM) Model, and characterized their ability to find known anomalies. We analyzed 4 CDS alerts with known malfunctions from the Longitudinal Medical Record (LMR) and Epic® (Epic Systems Corporation, Madison, WI, USA) at Brigham and Women's Hospital, Boston, MA. The 4 rules recommend lead testing in children, aspirin therapy in patients with coronary artery disease, pneumococcal vaccination in immunocompromised adults, and thyroid testing in patients taking amiodarone. The Poisson changepoint, ARIMA, HDC, Bayesian changepoint, and SHESD models were able to detect anomalies in an alert for lead screening in children and in an alert for pneumococcal conjugate vaccine in immunocompromised adults. EDM was able to detect anomalies in an alert for monitoring thyroid function in patients on amiodarone. Malfunctions/anomalies occur frequently in CDS alert systems, and it is important to detect them promptly. Anomaly detection models are useful tools to aid such detection.
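
    As a rough illustration of the simplest of these models, a single Poisson changepoint can be located by a likelihood scan, as sketched below; the toy counts and the use of a likelihood-ratio statistic are illustrative assumptions, not the study's implementation.

        # Single Poisson changepoint via a maximum-likelihood scan (sketch).
        import numpy as np
        from scipy.stats import poisson

        def poisson_changepoint(counts):
            counts = np.asarray(counts)
            best_tau, best_ll = None, -np.inf
            for tau in range(2, len(counts) - 2):
                left, right = counts[:tau], counts[tau:]
                ll = (poisson.logpmf(left, left.mean()).sum()
                      + poisson.logpmf(right, right.mean()).sum())
                if ll > best_ll:
                    best_tau, best_ll = tau, ll
            null_ll = poisson.logpmf(counts, counts.mean()).sum()
            return best_tau, 2 * (best_ll - null_ll)   # likelihood-ratio statistic

        # Toy daily alert-firing counts with an abrupt drop (e.g., a broken rule):
        daily_firings = [52, 49, 55, 51, 48, 50, 12, 9, 11, 10, 8, 13]
        tau, lr = poisson_changepoint(daily_firings)
        print(tau, lr)   # a large lr suggests the firing rate changed at index tau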

  8. 22nd Annual Logistics Conference and Exhibition

    DTIC Science & Technology

    2006-04-20

    Fragmentary conference slides on Prognostics & Health Management at GE (Dr. Piero P. Bonissone, Industrial AI Lab, GE Global Research), covering anomaly detection model selection and results, anomaly detection from event-log data, failure-mode histograms, and diagnostics/prognostics for failure monitoring and assessment in tactical C4ISR sense-and-respond applications.

  9. Temporal Data-Driven Sleep Scheduling and Spatial Data-Driven Anomaly Detection for Clustered Wireless Sensor Networks

    PubMed Central

    Li, Gang; He, Bin; Huang, Hongwei; Tang, Limin

    2016-01-01

    The spatial–temporal correlation is an important feature of sensor data in wireless sensor networks (WSNs). Most of the existing works based on the spatial–temporal correlation can be divided into two parts: redundancy reduction and anomaly detection. These two parts are pursued separately in existing works. In this work, the combination of temporal data-driven sleep scheduling (TDSS) and spatial data-driven anomaly detection is proposed, where TDSS can reduce data redundancy. The TDSS model is inspired by transmission control protocol (TCP) congestion control. Based on long and linear cluster structure in the tunnel monitoring system, cooperative TDSS and spatial data-driven anomaly detection are then proposed. To realize synchronous acquisition in the same ring for analyzing the situation of every ring, TDSS is implemented in a cooperative way in the cluster. To keep the precision of sensor data, spatial data-driven anomaly detection based on the spatial correlation and Kriging method is realized to generate an anomaly indicator. The experiment results show that cooperative TDSS can realize non-uniform sensing effectively to reduce the energy consumption. In addition, spatial data-driven anomaly detection is quite significant for maintaining and improving the precision of sensor data. PMID:27690035

  10. WE-H-BRC-06: A Unified Machine-Learning Based Probabilistic Model for Automated Anomaly Detection in the Treatment Plan Data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chang, X; Liu, S; Kalet, A

    Purpose: The purpose of this work was to investigate the ability of a machine-learning based probabilistic approach to detect radiotherapy treatment plan anomalies given initial disease classes information. Methods In total we obtained 1112 unique treatment plans with five plan parameters and disease information from a Mosaiq treatment management system database for use in the study. The plan parameters include prescription dose, fractions, fields, modality and techniques. The disease information includes disease site, and T, M and N disease stages. A Bayesian network method was employed to model the probabilistic relationships between tumor disease information, plan parameters and an anomalymore » flag. A Bayesian learning method with Dirichlet prior was useed to learn the joint probabilities between dependent variables in error-free plan data and data with artificially induced anomalies. In the study, we randomly sampled data with anomaly in a specified anomaly space.We tested the approach with three groups of plan anomalies – improper concurrence of values of all five plan parameters and values of any two out of five parameters, and all single plan parameter value anomalies. Totally, 16 types of plan anomalies were covered by the study. For each type, we trained an individual Bayesian network. Results: We found that the true positive rate (recall) and positive predictive value (precision) to detect concurrence anomalies of five plan parameters in new patient cases were 94.45±0.26% and 93.76±0.39% respectively. To detect other 15 types of plan anomalies, the average recall and precision were 93.61±2.57% and 93.78±3.54% respectively. The computation time to detect the plan anomaly of each type in a new plan is ∼0.08 seconds. Conclusion: The proposed method for treatment plan anomaly detection was found effective in the initial tests. The results suggest that this type of models could be applied to develop plan anomaly detection tools to assist manual and automated plan checks. The senior author received research grants from ViewRay Inc. and Varian Medical System.« less

  11. Automated Network Anomaly Detection with Learning, Control and Mitigation

    ERIC Educational Resources Information Center

    Ippoliti, Dennis

    2014-01-01

    Anomaly detection is a challenging problem that has been researched within a variety of application domains. In network intrusion detection, anomaly based techniques are particularly attractive because of their ability to identify previously unknown attacks without the need to be programmed with the specific signatures of every possible attack.…

  12. Systematic Screening for Subtelomeric Anomalies in a Clinical Sample of Autism

    ERIC Educational Resources Information Center

    Wassink, Thomas H.; Losh, Molly; Piven, Joseph; Sheffield, Val C.; Ashley, Elizabeth; Westin, Erik R.; Patil, Shivanand R.

    2007-01-01

    High-resolution karyotyping detects cytogenetic anomalies in 5-10% of cases of autism. Karyotyping, however, may fail to detect abnormalities of chromosome subtelomeres, which are gene-rich regions prone to anomalies. We assessed whether panels of FISH probes targeted for subtelomeres could detect abnormalities beyond those identified by…

  13. Comparison result of inversion of gravity data of a fault by particle swarm optimization and Levenberg-Marquardt methods.

    PubMed

    Toushmalani, Reza

    2013-01-01

    The purpose of this study was to compare the performance of two methods for gravity inversion of a fault. The first method, particle swarm optimization (PSO), is a heuristic global optimization algorithm based on swarm intelligence, inspired by research on the movement behavior of bird and fish flocks. The second, the Levenberg-Marquardt (LM) algorithm, is an approximation to Newton's method that is also used for training artificial neural networks (ANNs). In this paper we first discuss the gravity field of a fault, then describe the PSO and LM algorithms and present their application to solving the inverse problem of a fault. Most importantly, the parameters for the algorithms are given for the individual tests. The inverse solution reveals that the fault model parameters agree well with the known results. Better agreement between the predicted model anomaly and the observed gravity anomaly was found with the PSO method than with the LM method.
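
    A generic PSO of the kind described can be sketched as follows; the toy misfit kernel stands in for the actual fault forward model, and all swarm settings are illustrative assumptions.

        # Particle swarm optimization minimizing a gravity-misfit objective (sketch).
        import numpy as np

        rng = np.random.default_rng(0)

        def misfit(params, x_obs, g_obs):
            depth, density = params
            g_model = density * depth / (x_obs**2 + depth**2)   # toy anomaly kernel
            return np.sum((g_obs - g_model) ** 2)

        def pso(objective, bounds, n_particles=30, n_iter=200, w=0.7, c1=1.5, c2=1.5):
            lo, hi = np.array(bounds).T
            pos = rng.uniform(lo, hi, (n_particles, len(lo)))
            vel = np.zeros_like(pos)
            pbest, pbest_val = pos.copy(), np.array([objective(p) for p in pos])
            gbest = pbest[pbest_val.argmin()]
            for _ in range(n_iter):
                r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
                vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
                pos = np.clip(pos + vel, lo, hi)
                vals = np.array([objective(p) for p in pos])
                improved = vals < pbest_val
                pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
                gbest = pbest[pbest_val.argmin()]
            return gbest, pbest_val.min()

        # Synthetic "observed" anomaly, then inversion for (depth, density):
        x = np.linspace(-10, 10, 50)
        g = 2.5 * 3.0 / (x**2 + 3.0**2)
        best, err = pso(lambda p: misfit(p, x, g), bounds=[(0.5, 10.0), (0.1, 5.0)])
        print(best, err)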

  14. Inverse Problems in Geodynamics Using Machine Learning Algorithms

    NASA Astrophysics Data System (ADS)

    Shahnas, M. H.; Yuen, D. A.; Pysklywec, R. N.

    2018-01-01

    During the past few decades, numerical studies have been widely employed to explore the style of circulation and mixing in the mantle of Earth and other planets. However, in geodynamical studies these numerical models involve many properties from mineral physics, geochemistry, and petrology. Machine learning, as a computational statistics-related technique and a subfield of artificial intelligence, has rapidly emerged in many fields of science and engineering. We focus here on the application of supervised machine learning (SML) algorithms to predictions of mantle flow processes. Specifically, we emphasize estimating mantle properties by employing machine learning techniques to solve an inverse problem. Using snapshots of numerical convection models as training samples, we enable machine learning models to determine the magnitude of the spin transition-induced density anomalies that can cause flow stagnation at mid-mantle depths. Employing support vector machine algorithms, we show that SML techniques can successfully predict the magnitude of mantle density anomalies and can also be used to characterize mantle flow patterns. The technique can be extended to more complex geodynamic problems in mantle dynamics by employing deep learning algorithms to put constraints on properties such as viscosity, elastic parameters, and the nature of thermal and chemical anomalies.

  15. Algorithms for Spectral Decomposition with Applications to Optical Plume Anomaly Detection

    NASA Technical Reports Server (NTRS)

    Srivastava, Ashok N.; Matthews, Bryan; Das, Santanu

    2008-01-01

    The analysis of spectral signals for features that represent physical phenomena is ubiquitous in the science and engineering communities. There are two main approaches for extracting relevant features from these high-dimensional data streams. The first relies on extracting features using a physics-based paradigm, where the underlying physical mechanism that generates the spectra is used to infer the most important features in the data stream. We focus on a complementary methodology that uses a data-driven technique that is informed by the underlying physics but also has the ability to adapt to unmodeled system attributes and dynamics. We discuss the following four algorithms: the Spectral Decomposition Algorithm (SDA), Non-Negative Matrix Factorization (NMF), Independent Component Analysis (ICA), and Principal Components Analysis (PCA), and compare their performance on a spectral emulator that we use to generate artificial data with known statistical properties. This spectral emulator mimics the real-world phenomena arising from the plume of the space shuttle main engine; it can be used to validate the results of various spectral decomposition algorithms and is very useful for situations where real-world systems have very low probabilities of fault or failure. Our results indicate that methods like SDA and NMF provide a straightforward way of incorporating prior physical knowledge, while NMF with a tuning mechanism can give superior performance on some tests. We demonstrate these algorithms by detecting potential system-health issues in data from a spectral emulator with tunable health parameters.
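
    As one concrete example of these decompositions, an NMF-based residual check might look like the sketch below, where the synthetic "emulator" spectra and the 3-sigma threshold are purely illustrative.

        # NMF decomposition of nonnegative spectra; reconstruction error as a
        # crude health signal (illustrative synthetic data).
        import numpy as np
        from sklearn.decomposition import NMF

        rng = np.random.default_rng(1)
        wavelengths = np.linspace(300, 800, 200)
        centers = np.array([[400.0], [550.0], [700.0]])
        lines = np.exp(-0.5 * ((wavelengths[None, :] - centers) / 8.0) ** 2)
        abund = rng.gamma(2.0, 1.0, (500, 3))
        spectra = abund @ lines + 0.01 * rng.random((500, 200))  # nominal plume spectra

        model = NMF(n_components=3, init='nndsvda', max_iter=500)
        W = model.fit_transform(spectra)                          # abundances
        recon_err = np.linalg.norm(spectra - W @ model.components_, axis=1)
        threshold = recon_err.mean() + 3 * recon_err.std()
        # Spectra whose reconstruction error exceeds the threshold are candidate
        # "unmodeled" events, e.g., an unexpected emission line in the plume.
        print(np.where(recon_err > threshold)[0])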

  16. Interpretation of magnetic anomalies using a genetic algorithm

    NASA Astrophysics Data System (ADS)

    Kaftan, İlknur

    2017-08-01

    A genetic algorithm (GA) is an artificial intelligence method used for optimization. We applied a GA to the inversion of magnetic anomalies over a thick dike. Inversion of nonlinear geophysical problems using a GA has advantages because it does not require model gradients or well-defined initial model parameters. The evolution process consists of selection, crossover, and mutation genetic operators that look for the best fit to the observed data and a solution consisting of plausible compact sources. The efficiency of the GA is demonstrated on both synthetic and real magnetic anomalies of dikes by estimating model parameters such as the depth to the top of the dike (H), the half-width of the dike (B), the distance from the origin to the reference point (D), the dip of the thick dike (δ), and the susceptibility contrast (k). For the synthetic anomaly case, both noise-free and noisy magnetic data are considered. In the real case, the vertical magnetic anomaly from the Pima copper mine in Arizona, USA, and the vertical magnetic anomaly in the Bayburt-Sarıhan skarn zone in northeastern Turkey have been inverted and interpreted. We compared the estimated parameters with the results of conventional inversion methods used in previous studies. We conclude that the GA method used in this study is a useful tool for evaluating magnetic anomalies for dike models.
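
    A real-coded GA with the three operators named above can be sketched as follows; dike_forward is a hypothetical placeholder for the thick-dike forward model, and the GA operators and settings are illustrative assumptions.

        # Real-coded genetic algorithm minimizing a misfit objective (sketch).
        import numpy as np

        rng = np.random.default_rng(2)

        def ga_minimize(objective, bounds, pop_size=60, n_gen=150, p_mut=0.1):
            lo, hi = np.array(bounds).T
            pop = rng.uniform(lo, hi, (pop_size, len(lo)))
            for _ in range(n_gen):
                fitness = np.array([objective(ind) for ind in pop])
                order = fitness.argsort()
                parents = pop[order[:pop_size // 2]]        # selection: keep best half
                alpha = rng.random((pop_size // 2, len(lo)))
                kids = alpha * parents + (1 - alpha) * parents[::-1]  # blend crossover
                mutate = rng.random(kids.shape) < p_mut
                kids[mutate] += rng.normal(0, 0.1 * (hi - lo), kids.shape)[mutate]
                pop = np.clip(np.vstack([parents, kids]), lo, hi)
            fitness = np.array([objective(ind) for ind in pop])
            return pop[fitness.argmin()], fitness.min()

        # Usage sketch, with dike_forward(p) a hypothetical forward model for
        # the parameter vector p = (H, B, D, dip, k):
        # best, err = ga_minimize(lambda p: np.sum((observed - dike_forward(p))**2),
        #                         bounds=[(1, 50), (1, 30), (-20, 20), (0, 180),
        #                                 (0.001, 0.1)])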

  17. Intelligent Elements for ISHM

    NASA Technical Reports Server (NTRS)

    Schmalzel, John L.; Morris, Jon; Turowski, Mark; Figueroa, Fernando; Oostdyk, Rebecca

    2008-01-01

    There are a number of architecture models for implementing Integrated Systems Health Management (ISHM) capabilities, for example, approaches based on the OSA-CBM and OSA-EAI models, or specific architectures developed in response to local needs. NASA's John C. Stennis Space Center (SSC) has developed one such version of an extensible architecture in support of rocket engine testing that integrates a palette of functions in order to achieve an ISHM capability. Among the functional capabilities supported by the framework are prognostic models, anomaly detection, a database of supporting health information, root cause analysis, intelligent elements, and integrated awareness. This paper focuses on the role that intelligent elements can play in ISHM architectures. We define an intelligent element as a smart element with sufficient computing capacity to support anomaly detection or other algorithms in support of ISHM functions. A smart element has the capabilities of supporting networked implementations of IEEE 1451.x smart sensor and actuator protocols. The ISHM group at SSC has been actively developing intelligent elements in conjunction with several partners at other Centers, universities, and companies as part of our ISHM approach for better supporting rocket engine testing, and we have developed several implementations. Among the key features of these intelligent sensors are support for IEEE 1451.1 and the incorporation of a suite of algorithms for determining sensor health. Regardless of the potential advantages that can be achieved using intelligent sensors, existing large-scale systems are still based on conventional sensors and data acquisition systems. In order to bring the benefits of intelligent sensors to these environments, we have also developed virtual implementations of intelligent sensors.

  18. A new non-iterative reconstruction method for the electrical impedance tomography problem

    NASA Astrophysics Data System (ADS)

    Ferreira, A. D.; Novotny, A. A.

    2017-03-01

    The electrical impedance tomography (EIT) problem consists in determining the distribution of the electrical conductivity of a medium subject to a set of current fluxes, from measurements of the corresponding electrical potentials on its boundary. EIT is probably the most studied inverse problem since the fundamental works by Calderón from the 1980s. It has many relevant applications in medicine (detection of tumors), geophysics (localization of mineral deposits) and engineering (detection of corrosion in structures). In this work, we are interested in reconstructing a number of anomalies with different electrical conductivity from the background. Since the EIT problem is written in the form of an overdetermined boundary value problem, the idea is to rewrite it as a topology optimization problem. In particular, a shape functional measuring the misfit between the boundary measurements and the electrical potentials obtained from the model is minimized with respect to a set of ball-shaped anomalies by using the concept of topological derivatives. It means that the objective functional is expanded and then truncated up to the second order term, leading to a quadratic and strictly convex form with respect to the parameters under consideration. Thus, a trivial optimization step leads to a non-iterative second order reconstruction algorithm. As a result, the reconstruction process becomes very robust with respect to noisy data and independent of any initial guess. Finally, in order to show the effectiveness of the devised reconstruction algorithm, some numerical experiments into two spatial dimensions are presented, taking into account total and partial boundary measurements.

  19. Tensor Spectral Clustering for Partitioning Higher-order Network Structures.

    PubMed

    Benson, Austin R; Gleich, David F; Leskovec, Jure

    2015-01-01

    Spectral graph theory-based methods represent an important class of tools for studying the structure of networks. Spectral methods are based on a first-order Markov chain derived from a random walk on the graph and thus they cannot take advantage of important higher-order network substructures such as triangles, cycles, and feed-forward loops. Here we propose a Tensor Spectral Clustering (TSC) algorithm that allows for modeling higher-order network structures in a graph partitioning framework. Our TSC algorithm allows the user to specify which higher-order network structures (cycles, feed-forward loops, etc.) should be preserved by the network clustering. Higher-order network structures of interest are represented using a tensor, which we then partition by developing a multilinear spectral method. Our framework can be applied to discovering layered flows in networks as well as graph anomaly detection, which we illustrate on synthetic networks. In directed networks, a higher-order structure of particular interest is the directed 3-cycle, which captures feedback loops in networks. We demonstrate that our TSC algorithm produces large partitions that cut fewer directed 3-cycles than standard spectral clustering algorithms.

  20. Tensor Spectral Clustering for Partitioning Higher-order Network Structures

    PubMed Central

    Benson, Austin R.; Gleich, David F.; Leskovec, Jure

    2016-01-01

    Spectral graph theory-based methods represent an important class of tools for studying the structure of networks. Spectral methods are based on a first-order Markov chain derived from a random walk on the graph and thus they cannot take advantage of important higher-order network substructures such as triangles, cycles, and feed-forward loops. Here we propose a Tensor Spectral Clustering (TSC) algorithm that allows for modeling higher-order network structures in a graph partitioning framework. Our TSC algorithm allows the user to specify which higher-order network structures (cycles, feed-forward loops, etc.) should be preserved by the network clustering. Higher-order network structures of interest are represented using a tensor, which we then partition by developing a multilinear spectral method. Our framework can be applied to discovering layered flows in networks as well as graph anomaly detection, which we illustrate on synthetic networks. In directed networks, a higher-order structure of particular interest is the directed 3-cycle, which captures feedback loops in networks. We demonstrate that our TSC algorithm produces large partitions that cut fewer directed 3-cycles than standard spectral clustering algorithms. PMID:27812399

  1. Systematic review and meta-analysis of isolated posterior fossa malformations on prenatal ultrasound imaging (part 1): nomenclature, diagnostic accuracy and associated anomalies.

    PubMed

    D'Antonio, F; Khalil, A; Garel, C; Pilu, G; Rizzo, G; Lerman-Sagie, T; Bhide, A; Thilaganathan, B; Manzoli, L; Papageorghiou, A T

    2016-06-01

    To explore the outcome in fetuses with a prenatal diagnosis of posterior fossa anomalies apparently isolated on ultrasound imaging. MEDLINE and EMBASE were searched electronically utilizing combinations of relevant medical subject headings for 'posterior fossa' and 'outcome'. The posterior fossa anomalies analyzed were Dandy-Walker malformation (DWM), mega cisterna magna (MCM), Blake's pouch cyst (BPC) and vermian hypoplasia (VH). The outcomes observed were the rate of chromosomal abnormalities, additional anomalies detected at prenatal magnetic resonance imaging (MRI), additional anomalies detected at postnatal imaging, and concordance between prenatal and postnatal diagnoses. Only isolated cases of posterior fossa anomalies - defined as having no additional cerebral or extracerebral anomalies detected on ultrasound examination - were included in the analysis. Quality assessment of the included studies was performed using the Newcastle-Ottawa Scale for cohort studies. We used meta-analyses of proportions to combine data and fixed- or random-effects models according to the heterogeneity of the results. Twenty-two studies including 531 fetuses with posterior fossa anomalies were included in this systematic review. The prevalence of chromosomal abnormalities in fetuses with isolated DWM was 16.3% (95% CI, 8.7-25.7%). The prevalence of additional central nervous system (CNS) abnormalities missed at ultrasound examination and detected only at prenatal MRI was 13.7% (95% CI, 0.2-42.6%), and the prevalence of additional CNS anomalies missed at prenatal imaging and detected only after birth was 18.2% (95% CI, 6.2-34.6%). Prenatal diagnosis was not confirmed after birth in 28.2% (95% CI, 8.5-53.9%) of cases. MCM was not significantly associated with additional anomalies detected at prenatal MRI or after birth. Prenatal diagnosis was not confirmed postnatally in 7.1% (95% CI, 2.3-14.5%) of cases. The rate of chromosomal anomalies in fetuses with isolated BPC was 5.2% (95% CI, 0.9-12.7%), and there was no associated CNS anomaly detected at prenatal MRI or only after birth. Prenatal diagnosis of BPC was not confirmed after birth in 9.8% (95% CI, 2.9-20.1%) of cases. The rate of chromosomal anomalies in fetuses with isolated VH was 6.5% (95% CI, 0.8-17.1%), and there were no additional anomalies detected at prenatal MRI (0% (95% CI, 0.0-45.9%)). The proportion of cerebral anomalies detected only after birth was 14.2% (95% CI, 2.9-31.9%). Prenatal diagnosis was not confirmed after birth in 32.4% (95% CI, 18.3-48.4%) of cases. DWM apparently isolated on ultrasound imaging is a condition with a high risk of chromosomal and associated structural anomalies. Isolated MCM and BPC have a low risk of aneuploidy or associated structural anomalies. The small number of cases with isolated VH prevents robust conclusions regarding their management from being drawn. Copyright © 2015 ISUOG. Published by John Wiley & Sons Ltd.

  2. Activity Learning as a Foundation for Security Monitoring in Smart Homes.

    PubMed

    Dahmen, Jessamyn; Thomas, Brian L; Cook, Diane J; Wang, Xiaobo

    2017-03-31

    Smart environment technology has matured to the point where it is regularly used in everyday homes as well as research labs. With this maturation of the technology, we can consider using smart homes as a practical mechanism for improving home security. In this paper, we introduce an activity-aware approach to security monitoring and threat detection in smart homes. We describe our approach using the CASAS smart home framework and activity learning algorithms. By monitoring for activity-based anomalies we can detect possible threats and take appropriate action. We evaluate our proposed method using data collected in CASAS smart homes and demonstrate the partnership between activity-aware smart homes and biometric devices in the context of the CASAS on-campus smart apartment testbed.

  3. Vector Graph Assisted Pedestrian Dead Reckoning Using an Unconstrained Smartphone

    PubMed Central

    Qian, Jiuchao; Pei, Ling; Ma, Jiabin; Ying, Rendong; Liu, Peilin

    2015-01-01

    The paper presents a hybrid indoor positioning solution based on a pedestrian dead reckoning (PDR) approach using built-in sensors on a smartphone. To address the challenges of flexible and complex contexts of carrying a phone while walking, a robust step detection algorithm based on motion-awareness has been proposed. Given the fact that step length is influenced by different motion states, an adaptive step length estimation algorithm based on motion recognition is developed. Heading estimation is carried out by an attitude acquisition algorithm, which contains a two-phase filter to mitigate the distortion of magnetic anomalies. In order to estimate the heading for an unconstrained smartphone, principal component analysis (PCA) of acceleration is applied to determine the offset between the orientation of smartphone and the actual heading of a pedestrian. Moreover, a particle filter with vector graph assisted particle weighting is introduced to correct the deviation in step length and heading estimation. Extensive field tests, including four contexts of carrying a phone, have been conducted in an office building to verify the performance of the proposed algorithm. Test results show that the proposed algorithm can achieve sub-meter mean error in all contexts. PMID:25738763

  4. [The microbiological aspects of orthodontic treatment of children with dental maxillary anomalies].

    PubMed

    Chesnokov, V A; Chesnokova, M G; Leontiev, V K; Mironov, A Yu; Lomiashvili, L M; Kriga, A S

    2015-03-01

    Pre-nosological diagnosis and effective treatment of diseases of the oral cavity are pressing issues in dentistry. Long-duration orthodontic treatment of patients with dentoalveolar anomalies using fixed (non-removable) appliances is accompanied by such negative consequences as enamel demineralization and caries, registered both during treatment and after removal of the appliances. The quantitative level of oral streptococci was analyzed, and the dental status of children with dentoalveolar anomalies was evaluated during treatment with fixed appliances. Caries and inflammation of the periodontium were detected most often in children with high streptococcal levels. Across the different study periods, a firm tendency toward increasing concentrations of Streptococcus mutans and S. sanguis in the dental plaque of the oral cavity was established. The index indicators of the patients' dental status testify to the intensity of carious damage, the level of poor oral hygiene, and the development of moderately severe inflammation of the periodontium. The results substantiate the involvement of streptococci, associates of the dental plaque microbiota in children, in the development of caries. The micro-ecological characteristics of dental plaque can be used to evaluate the cariogenic situation and as a basis for constructing a diagnostic algorithm for monitoring patients with dentoalveolar anomalies, with the purpose of planning and implementing effective orthodontic treatment.

  5. A modified anomaly detection method for capsule endoscopy images using non-linear color conversion and Higher-order Local Auto-Correlation (HLAC).

    PubMed

    Hu, Erzhong; Nosato, Hirokazu; Sakanashi, Hidenori; Murakawa, Masahiro

    2013-01-01

    Capsule endoscopy is a patient-friendly endoscopy broadly utilized in gastrointestinal examination. However, the efficacy of diagnosis is restricted by the large quantity of images. This paper presents a modified anomaly detection method by which both known and unknown anomalies in capsule endoscopy images of the small intestine are expected to be detected. To achieve this goal, the paper introduces feature extraction using a non-linear color conversion and Higher-order Local Auto-Correlation (HLAC) features, and makes use of image partitioning and a subspace method for anomaly detection. Experiments were conducted on several major anomaly types with combinations of the proposed techniques. As a result, the proposed method achieved 91.7% and 100% detection accuracy for swelling and bleeding, respectively, demonstrating its effectiveness.

  6. Parallel and Scalable Clustering and Classification for Big Data in Geosciences

    NASA Astrophysics Data System (ADS)

    Riedel, M.

    2015-12-01

    Machine learning, data mining, and statistical computing are common techniques for performing analysis in the earth sciences. This contribution focuses on two concrete and widely used data analytics methods suitable for analysing 'big data' in the context of geoscience use cases: clustering and classification. From the broad class of available clustering methods we focus on the density-based spatial clustering of applications with noise (DBSCAN) algorithm, which enables the identification of outliers or interesting anomalies. A new open-source parallel and scalable DBSCAN implementation is discussed in the light of a scientific use case that detects water mixing events in the Koljoefjords. The second technique we cover is classification, with a focus on the support vector machine (SVM) algorithm, one of the best out-of-the-box classification algorithms. A parallel and scalable SVM implementation is discussed in the light of a scientific use case in the field of remote sensing with 52 different classes of land cover types.
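
    A minimal DBSCAN-based outlier check, of the kind that could flag events such as the water-mixing anomalies mentioned, might look like the sketch below; the stand-in data, eps, and min_samples values are illustrative and would need tuning (e.g., via a k-distance plot) for real sensor data.

        # DBSCAN labels low-density points as noise (label -1), readable as anomalies.
        import numpy as np
        from sklearn.cluster import DBSCAN
        from sklearn.preprocessing import StandardScaler

        # Stand-in data: 500 nominal 4-feature readings plus two planted outliers.
        X = np.vstack([np.random.default_rng(3).normal(0, 1, (500, 4)),
                       [[6, 6, 6, 6], [-7, 5, 0, 9]]])

        labels = DBSCAN(eps=0.9, min_samples=10).fit_predict(
            StandardScaler().fit_transform(X))
        print('anomalous rows:', np.where(labels == -1)[0])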

  7. Recursive least squares background prediction of univariate syndromic surveillance data

    PubMed Central

    2009-01-01

    Background: Surveillance of univariate syndromic data as a potential indicator of developing public health conditions has been used extensively. This paper aims to improve the performance of detecting outbreaks by using a background forecasting algorithm based on the adaptive recursive least squares method combined with a novel treatment of the day-of-the-week effect. Methods: Previous work by the first author has suggested that univariate recursive least squares analysis of syndromic data can be used to characterize the background upon which a prediction and detection component of a biosurveillance system may be built. An adaptive implementation is used to deal with data non-stationarity. In this paper we develop and implement the RLS method for background estimation of univariate data. The distinctly dissimilar distribution of data for different days of the week, however, can affect filter implementations adversely, and so a novel procedure based on linear transformations of the sorted values of the daily counts is introduced. Seven-day-ahead daily predicted counts are used as background estimates. A signal injection procedure is used to examine the integrated algorithm's ability to detect synthetic anomalies in real syndromic time series. We compare the method to a baseline CDC forecasting algorithm known as the W2 method. Results: We present detection results in the form of receiver operating characteristic curve values for four different injected signal-to-noise ratios using 16 sets of syndromic data. We find improvements in the false alarm probabilities when compared to the baseline W2 background forecasts. Conclusion: The current paper introduces a prediction approach for city-level biosurveillance data streams such as time series of outpatient clinic visits and sales of over-the-counter remedies. This approach uses RLS filters modified by a correction for the weekly patterns often seen in these data series, and a threshold detection algorithm applied to the residuals of the RLS forecasts. We compare the detection performance of this algorithm to the W2 method recently implemented at CDC. The modified RLS method gives consistently better sensitivity at multiple background alert rates, and we recommend that it be considered for routine application in biosurveillance systems. PMID:19149886

  8. Recursive least squares background prediction of univariate syndromic surveillance data.

    PubMed

    Najmi, Amir-Homayoon; Burkom, Howard

    2009-01-16

    Surveillance of univariate syndromic data as a potential indicator of developing public health conditions has been used extensively. This paper aims to improve the performance of detecting outbreaks by using a background forecasting algorithm based on the adaptive recursive least squares method combined with a novel treatment of the day-of-the-week effect. Previous work by the first author has suggested that univariate recursive least squares analysis of syndromic data can be used to characterize the background upon which a prediction and detection component of a biosurveillance system may be built. An adaptive implementation is used to deal with data non-stationarity. In this paper we develop and implement the RLS method for background estimation of univariate data. The distinctly dissimilar distribution of data for different days of the week, however, can affect filter implementations adversely, and so a novel procedure based on linear transformations of the sorted values of the daily counts is introduced. Seven-day-ahead daily predicted counts are used as background estimates. A signal injection procedure is used to examine the integrated algorithm's ability to detect synthetic anomalies in real syndromic time series. We compare the method to a baseline CDC forecasting algorithm known as the W2 method. We present detection results in the form of receiver operating characteristic curve values for four different injected signal-to-noise ratios using 16 sets of syndromic data. We find improvements in the false alarm probabilities when compared to the baseline W2 background forecasts. The current paper introduces a prediction approach for city-level biosurveillance data streams such as time series of outpatient clinic visits and sales of over-the-counter remedies. This approach uses RLS filters modified by a correction for the weekly patterns often seen in these data series, and a threshold detection algorithm applied to the residuals of the RLS forecasts. We compare the detection performance of this algorithm to the W2 method recently implemented at CDC. The modified RLS method gives consistently better sensitivity at multiple background alert rates, and we recommend that it be considered for routine application in biosurveillance systems.
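
    The core RLS recursion the two records above describe can be sketched as follows; the forgetting factor, model order, and alert rule are illustrative assumptions, and the day-of-week transformation from the paper is omitted.

        # Adaptive recursive least squares background predictor (sketch).
        import numpy as np

        class RLSPredictor:
            def __init__(self, order=7, lam=0.98, delta=100.0):
                self.w = np.zeros(order)          # filter coefficients
                self.P = np.eye(order) * delta    # inverse correlation estimate
                self.lam = lam                    # forgetting factor

            def update(self, x, y):
                """x: last `order` counts (newest last); y: today's observed count."""
                y_hat = self.w @ x
                k = self.P @ x / (self.lam + x @ self.P @ x)   # gain vector
                self.w += k * (y - y_hat)
                self.P = (self.P - np.outer(k, x @ self.P)) / self.lam
                return y_hat

        # Usage sketch on a hypothetical daily count series:
        # counts = np.loadtxt('syndromic_daily.txt')
        # rls, resid = RLSPredictor(), []
        # for t in range(7, len(counts)):
        #     resid.append(counts[t] - rls.update(counts[t-7:t], counts[t]))
        # flags = np.abs(resid) > 3 * np.std(resid)   # crude residual-based alerts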

  9. Application of Ground-Penetrating Radar for Detecting Internal Anomalies in Tree Trunks with Irregular Contours.

    PubMed

    Li, Weilin; Wen, Jian; Xiao, Zhongliang; Xu, Shengxia

    2018-02-22

    To assess the health condition of tree trunks, it is necessary to estimate the layers and anomalies of their internal structure. The main objective of this paper is to investigate the internal part of tree trunks while accounting for their irregular contours. We used ground-penetrating radar (GPR) for non-invasive detection of defects and deterioration in living tree trunks. The Hilbert transform algorithm and the reflection amplitudes were used to estimate the relative dielectric constant, and a point cloud technique was applied to extract the irregular contours of the trunks. The feasibility and accuracy of the methods were examined through numerical simulations and laboratory and field measurements. The results demonstrated that the applied methodology allowed accurate characterization of the internal inhomogeneity. Furthermore, the point cloud technique resolved the trunk well by providing high-precision coordinate information. This study also demonstrated that cross-section tomography provided images with high resolution and accuracy. These integrated techniques thus proved to be promising for observing tree trunks and other cylindrical objects, and the applied approaches offer great promise for future 3D reconstruction of tomographic images with radar waves.

  10. A Comparative Study of Anomaly Detection Techniques for Smart City Wireless Sensor Networks.

    PubMed

    Garcia-Font, Victor; Garrigues, Carles; Rifà-Pous, Helena

    2016-06-13

    In many countries around the world, smart cities are becoming a reality. These cities contribute to improving citizens' quality of life by providing services that are normally based on data extracted from wireless sensor networks (WSN) and other elements of the Internet of Things. Additionally, public administration uses these smart city data to increase its efficiency, to reduce costs and to provide additional services. However, the information received at smart city data centers is not always accurate, because WSNs are sometimes prone to error and are exposed to physical and computer attacks. In this article, we use real data from the smart city of Barcelona to simulate WSNs and implement typical attacks. Then, we compare frequently used anomaly detection techniques to disclose these attacks. We evaluate the algorithms under different requirements on the available network status information. As a result of this study, we conclude that the one-class Support Vector Machine is the most appropriate technique. We achieve a true positive rate at least 56% higher than the rates achieved with the other compared techniques in a scenario with a maximum false positive rate of 5%, and at least 26% higher in a scenario with a false positive rate of 15%.
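
    The study's best-performing technique can be approximated with scikit-learn as in the sketch below; the feature layout, nu and gamma values, and stand-in data are assumptions, not the study's configuration.

        # One-class SVM trained on normal WSN readings, used to flag attacks (sketch).
        import numpy as np
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler
        from sklearn.svm import OneClassSVM

        # Stand-in data: rows of sensor-reading features from normal operation,
        # plus a test set with five injected attack-like rows.
        X_normal = np.random.default_rng(4).normal(0, 1, (1000, 6))
        X_test = np.vstack([np.random.default_rng(5).normal(0, 1, (50, 6)),
                            np.full((5, 6), 8.0)])

        clf = make_pipeline(StandardScaler(), OneClassSVM(nu=0.05, gamma='scale'))
        clf.fit(X_normal)
        pred = clf.predict(X_test)            # +1 = normal, -1 = anomalous
        print('flagged rows:', np.where(pred == -1)[0])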

  11. A Comparative Study of Anomaly Detection Techniques for Smart City Wireless Sensor Networks

    PubMed Central

    Garcia-Font, Victor; Garrigues, Carles; Rifà-Pous, Helena

    2016-01-01

    In many countries around the world, smart cities are becoming a reality. These cities contribute to improving citizens' quality of life by providing services that are normally based on data extracted from wireless sensor networks (WSN) and other elements of the Internet of Things. Additionally, public administration uses these smart city data to increase its efficiency, to reduce costs and to provide additional services. However, the information received at smart city data centers is not always accurate, because WSNs are sometimes prone to error and are exposed to physical and computer attacks. In this article, we use real data from the smart city of Barcelona to simulate WSNs and implement typical attacks. Then, we compare frequently used anomaly detection techniques to disclose these attacks. We evaluate the algorithms under different requirements on the available network status information. As a result of this study, we conclude that the one-class Support Vector Machine is the most appropriate technique. We achieve a true positive rate at least 56% higher than the rates achieved with the other compared techniques in a scenario with a maximum false positive rate of 5%, and at least 26% higher in a scenario with a false positive rate of 15%. PMID:27304957

  12. Unsupervised Ensemble Anomaly Detection Using Time-Periodic Packet Sampling

    NASA Astrophysics Data System (ADS)

    Uchida, Masato; Nawata, Shuichi; Gu, Yu; Tsuru, Masato; Oie, Yuji

    We propose an anomaly detection method for finding patterns in network traffic that do not conform to legitimate (i.e., normal) behavior. The proposed method trains a baseline model describing the normal behavior of network traffic without using manually labeled traffic data. The trained baseline model is used as the basis for comparison with the audit network traffic. This anomaly detection works in an unsupervised manner through the use of time-periodic packet sampling, which is used in a manner that differs from its intended purpose — the lossy nature of packet sampling is used to extract normal packets from the unlabeled original traffic data. Evaluation using actual traffic traces showed that the proposed method has false positive and false negative rates in the detection of anomalies regarding TCP SYN packets comparable to those of a conventional method that uses manually labeled traffic data to train the baseline model. Performance variation due to the probabilistic nature of sampled traffic data is mitigated by using ensemble anomaly detection that collectively exploits multiple baseline models in parallel. Alarm sensitivity is adjusted for the intended use by using maximum- and minimum-based anomaly detection that effectively take advantage of the performance variations among the multiple baseline models. Testing using actual traffic traces showed that the proposed anomaly detection method performs as well as one using manually labeled traffic data and better than one using randomly sampled (unlabeled) traffic data.

  13. Winter Precipitation Forecast in the European and Mediterranean Regions Using Cluster Analysis

    NASA Astrophysics Data System (ADS)

    Totz, Sonja; Tziperman, Eli; Coumou, Dim; Pfeiffer, Karl; Cohen, Judah

    2017-12-01

    The European climate is changing under global warming, and especially the Mediterranean region has been identified as a hot spot for climate change with climate models projecting a reduction in winter rainfall and a very pronounced increase in summertime heat waves. These trends are already detectable over the historic period. Hence, it is beneficial to forecast seasonal droughts well in advance so that water managers and stakeholders can prepare to mitigate deleterious impacts. We developed a new cluster-based empirical forecast method to predict precipitation anomalies in winter. This algorithm considers not only the strength but also the pattern of the precursors. We compare our algorithm with dynamic forecast models and a canonical correlation analysis-based prediction method demonstrating that our prediction method performs better in terms of time and pattern correlation in the Mediterranean and European regions.

  14. A Hybrid Semi-Supervised Anomaly Detection Model for High-Dimensional Data.

    PubMed

    Song, Hongchao; Jiang, Zhuqing; Men, Aidong; Yang, Bo

    2017-01-01

    Anomaly detection, which aims to identify observations that deviate from a nominal sample, is a challenging task for high-dimensional data. Traditional distance-based anomaly detection methods compute the neighborhood distance between each observation and suffer from the curse of dimensionality in high-dimensional space; for example, the distances between any pair of samples are similar and each sample may perform like an outlier. In this paper, we propose a hybrid semi-supervised anomaly detection model for high-dimensional data that consists of two parts: a deep autoencoder (DAE) and an ensemble k-nearest neighbor graph (K-NNG) based anomaly detector. Benefiting from the ability of nonlinear mapping, the DAE is first trained to learn the intrinsic features of a high-dimensional dataset to represent the high-dimensional data in a more compact subspace. Several nonparametric KNN-based anomaly detectors are then built from different subsets that are randomly sampled from the whole dataset. The final prediction is made by all the anomaly detectors. The performance of the proposed method is evaluated on several real-life datasets, and the results confirm that the proposed hybrid model improves the detection accuracy and reduces the computational complexity.

  15. A Hybrid Semi-Supervised Anomaly Detection Model for High-Dimensional Data

    PubMed Central

    Jiang, Zhuqing; Men, Aidong; Yang, Bo

    2017-01-01

    Anomaly detection, which aims to identify observations that deviate from a nominal sample, is a challenging task for high-dimensional data. Traditional distance-based anomaly detection methods compute the neighborhood distance between each observation and suffer from the curse of dimensionality in high-dimensional space; for example, the distances between any pair of samples are similar and each sample may perform like an outlier. In this paper, we propose a hybrid semi-supervised anomaly detection model for high-dimensional data that consists of two parts: a deep autoencoder (DAE) and an ensemble k-nearest neighbor graph (K-NNG) based anomaly detector. Benefiting from the ability of nonlinear mapping, the DAE is first trained to learn the intrinsic features of a high-dimensional dataset to represent the high-dimensional data in a more compact subspace. Several nonparametric KNN-based anomaly detectors are then built from different subsets that are randomly sampled from the whole dataset. The final prediction is made by all the anomaly detectors. The performance of the proposed method is evaluated on several real-life datasets, and the results confirm that the proposed hybrid model improves the detection accuracy and reduces the computational complexity. PMID:29270197

  16. Anomaly Detection in Power Quality at Data Centers

    NASA Technical Reports Server (NTRS)

    Grichine, Art; Solano, Wanda M.

    2015-01-01

    The goal during my internship at the National Center for Critical Information Processing and Storage (NCCIPS) was to implement an anomaly detection method through the StruxureWare SCADA Power Monitoring system. The anomaly detection mechanism provides the capability to detect and anticipate equipment degradation by monitoring power quality prior to equipment failure. First, a study is conducted that examines existing techniques of power quality management. Based on these findings, and on the capabilities of the existing SCADA resources, recommendations are presented for implementing effective anomaly detection. Since voltage, current, and total harmonic distortion follow Gaussian distributions, effective set-points are computed using this model while maintaining a low false-positive count.
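
    The Gaussian set-point computation the abstract outlines reduces to a few lines; the 3-sigma width, channel names, and sample values below are illustrative assumptions.

        # Gaussian set-points per power-quality channel (illustrative values).
        import numpy as np

        def gaussian_setpoints(history, n_sigma=3.0):
            mu, sigma = history.mean(axis=0), history.std(axis=0)
            return mu - n_sigma * sigma, mu + n_sigma * sigma

        # History columns: [voltage, current, thd], sampled during healthy operation.
        history = np.random.default_rng(6).normal([480.0, 120.0, 2.0],
                                                  [2.0, 5.0, 0.3], (10000, 3))
        low, high = gaussian_setpoints(history)
        reading = np.array([487.2, 138.0, 2.1])
        # Channels 0 (voltage) and 1 (current) exceed their upper set-points here.
        print('alarm channels:', np.where((reading < low) | (reading > high))[0])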

  17. Community Seismic Network (CSN)

    NASA Astrophysics Data System (ADS)

    Clayton, R. W.; Heaton, T. H.; Kohler, M. D.; Cheng, M.; Guy, R.; Chandy, M.; Krause, A.; Bunn, J.; Olson, M.; Faulkner, M.; Liu, A.; Strand, L.

    2012-12-01

    We report on developments in sensor connectivity, architecture, and data fusion algorithms executed in Cloud computing systems in the Community Seismic Network (CSN), a network of low-cost sensors housed in homes and offices by volunteers in the Pasadena, CA area. The network has over 200 sensors continuously reporting anomalies in local acceleration through the Internet to a Cloud computing service (the Google App Engine) that continually fuses sensor data to rapidly detect shaking from earthquakes. The Cloud computing system consists of data centers geographically distributed across the continent and is likely to be resilient even during earthquakes and other local disasters. The region of Southern California is partitioned in a multi-grid style into sets of telescoping cells called geocells. Data streams from sensors within a geocell are fused to detect anomalous shaking across the geocell, and temporal-spatial patterns across geocells are used to detect anomalies across regions. The challenge is to detect earthquakes rapidly with an extremely low false positive rate. We report on two data fusion algorithms: one that tessellates the surface so as to fuse data from a large region around Pasadena, and one that uses a standard tessellation of equal-sized cells. Since September 2011, the network has successfully detected earthquakes of magnitude 2.5 or higher within 40 km of Pasadena. In addition to the standard USB device, which connects to the host's computer, we have developed a stand-alone sensor that connects directly to the Internet via Ethernet or Wi-Fi. This bypasses security concerns that some companies have with the USB-connected devices, and allows for 24/7 monitoring at sites that would otherwise shut down their computers after working hours. In buildings, we use the sensors to model the behavior of the structures during weak events in order to understand how they will perform during strong events. Visualization models of instrumented buildings ranging between five and 22 stories tall have been constructed using Google SketchUp. Ambient vibration records are used to identify the first set of horizontal vibrational modal frequencies of the buildings. These frequencies are used to compute the response on every floor of the building, given either observed data or scenario ground motion input at the building's base.

  18. Road Anomalies Detection System Evaluation.

    PubMed

    Silva, Nuno; Shah, Vaibhav; Soares, João; Rodrigues, Helena

    2018-06-21

    Anomalies on road pavement cause discomfort to drivers and passengers, and may cause mechanical failure or even accidents. Governments spend millions of Euros every year on road maintenance, often causing traffic jams and congestion on urban roads on a daily basis. This paper analyses the difference between deploying a road anomaly detection and identification system in a “conditioned” setup and in a real-world setup, where the system performed worse than in the “conditioned” setup. It also presents a system performance analysis based on the analysis of the training data sets; on the analysis of attribute complexity, through the application of PCA techniques; and on the analysis of the attributes in the context of each anomaly type, using acceleration standard deviation attributes to observe how different anomaly classes are distributed in the Cartesian coordinate system. Overall, we describe the main insights on road anomaly detection challenges to support the design and deployment of a new iteration of our system, towards a road anomaly detection service that provides information about road conditions to drivers and government entities.

  19. Visualizing Uncertainty for Data Fusion Graphics: Review of Selected Literature and Industry Approaches

    DTIC Science & Technology

    2015-06-09

    Fragmentary report text on anomaly detection, generally considered part of high-level information fusion (HLIF) involving temporal-geospatial data as well as metadata. Anomaly detection in the maritime defence and security domain typically focuses on identifying vessels behaving in an unusual manner compared with lawful vessels operating in the area, an applied case of target detection among distractors; anomaly detection is a complex problem.

  20. The Monitoring, Detection, Isolation and Assessment of Information Warfare Attacks Through Multi-Level, Multi-Scale System Modeling and Model Based Technology

    DTIC Science & Technology

    2004-01-01

    Fragmentary report text on host-based intrusion detection: monitored attributes include the login identity under which a system call is executed and the parameters of the system call execution (e.g., file names including full paths). The excerpt contrasts systems such as COAST-EIMDT (distributed on target hosts) and EMERALD (distributed on target hosts and security servers), which combine signature recognition and anomaly detection; one system uses a centralized architecture and employs an anomaly detection technique for intrusion detection, while the EMERALD project [80] proposes a distributed one.

  1. SeaWiFS Technical Report Series. Volume 38; SeaWiFS Calibration and Validation Quality Control Procedures

    NASA Technical Reports Server (NTRS)

    Hooker, Stanford B. (Editor); Firestone, Elaine R. (Editor); McClain, Charles R.; Darzi, Michael; Barnes, Robert A.; Eplee, Robert E.; Firestone, James K.; Patt, Frederick S.; Robinson, Wayne D.; Schieber, Brian D.

    1996-01-01

    This document provides five brief reports that address several quality control procedures under the auspices of the Calibration and Validation Element (CVE) within the Sea-viewing Wide Field-of-view Sensor (SeaWiFS) Project. Chapter 1 describes analyses of the 32 sensor engineering telemetry streams; anomalies in any of the values may impact sensor performance in direct or indirect ways. The analyses are primarily examinations of parameter time series combined with statistical methods such as auto- and cross-correlation functions. Chapter 2 describes how the various onboard (solar and lunar) and vicarious (in situ) calibration data will be analyzed to quantify sensor degradation, if present. The analyses also include methods for detecting the influence of charged particles on sensor performance, such as might be expected in the South Atlantic Anomaly (SAA). Chapter 3 discusses the quality control of the ancillary environmental data that are routinely received from other agencies or projects and used in the atmospheric correction algorithm (total ozone, surface wind velocity, and surface pressure; surface relative humidity is also obtained, but is not used in the initial operational algorithm). Chapter 4 explains the procedures for screening level-1, level-2, and level-3 products. These quality control operations incorporate both automated and interactive procedures which check for file format errors (all levels), navigation offsets (level-1), mask and flag performance (level-2), and product anomalies (all levels). Finally, Chapter 5 discusses the match-up data set development for comparing SeaWiFS level-2 derived products with in situ observations, as well as the subsequent outlier analyses that will be used for evaluating error sources.

  2. Post-processing for improving hyperspectral anomaly detection accuracy

    NASA Astrophysics Data System (ADS)

    Wu, Jee-Cheng; Jiang, Chi-Ming; Huang, Chen-Liang

    2015-10-01

    Anomaly detection is an important topic in the exploitation of hyperspectral data. Based on the Reed-Xiaoli (RX) detector and a morphology operator, this research proposes a novel technique for improving the accuracy of hyperspectral anomaly detection. Firstly, the RX-based detector is used to process a given input scene. Then, a post-processing scheme using a morphology operator is employed to detect those pixels around high-scoring anomaly pixels. Tests were conducted using two real hyperspectral images with ground truth information, and the results, based on receiver operating characteristic curves, illustrate that the proposed method reduces the false alarm rate of the RX-based detector.
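
    A minimal sketch of this two-stage idea in Python, assuming a hyperspectral cube of shape (rows, cols, bands); the global-covariance RX variant, the threshold, and the 3x3 dilation structure are illustrative assumptions rather than the paper's exact configuration:

```python
import numpy as np
from scipy.ndimage import binary_dilation

def rx_scores(hsi):
    """Global Reed-Xiaoli (RX) scores: Mahalanobis distance of each
    pixel spectrum from the scene mean."""
    rows, cols, bands = hsi.shape
    X = hsi.reshape(-1, bands).astype(float)
    d = X - X.mean(axis=0)
    cov_inv = np.linalg.pinv(np.cov(X, rowvar=False))
    scores = np.einsum('ij,jk,ik->i', d, cov_inv, d)
    return scores.reshape(rows, cols)

def post_process(scores, thresh, structure=np.ones((3, 3))):
    """Morphological post-processing: dilate the thresholded RX map so
    pixels adjacent to high-scoring anomalies are also flagged."""
    return binary_dilation(scores > thresh, structure=structure)

rng = np.random.default_rng(0)
hsi = rng.normal(size=(64, 64, 20))
hsi[30, 30] += 5.0                         # injected anomalous spectrum
mask = post_process(rx_scores(hsi), thresh=60.0)
```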

  3. Novel Hyperspectral Anomaly Detection Methods Based on Unsupervised Nearest Regularized Subspace

    NASA Astrophysics Data System (ADS)

    Hou, Z.; Chen, Y.; Tan, K.; Du, P.

    2018-04-01

    Anomaly detection has been of great interest in hyperspectral imagery analysis. Most conventional anomaly detectors merely take advantage of spectral and spatial information within neighboring pixels. In this paper, two methods, the Unsupervised Nearest Regularized Subspace with Outlier Removal Anomaly Detector (UNRSORAD) and the Local Summation UNRSORAD (LSUNRSORAD), are proposed. Both are based on the concept that each background pixel can be approximately represented by its spatial neighborhood, while anomalies cannot. Using a dual window, each test pixel is approximated as a linear combination of the surrounding data. Because outliers in the dual window degrade detection accuracy, the proposed detectors remove outlier pixels that differ significantly from the majority. To make full use of the various local spatial distributions among the pixels neighboring the pixel under test, a local summation dual-window sliding strategy is adopted. The residual image is obtained by subtracting the predicted background from the original hyperspectral imagery, and anomalies are detected in this residual image. Experimental results show that the proposed methods greatly improve detection accuracy compared with other traditional detection methods.
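
    The core representation step is compact enough to sketch. Assuming `background` holds the dual-window neighbor spectra as columns, a ridge-regularized linear combination approximates the test pixel and the residual norm serves as the anomaly score; the regularization value and synthetic data are assumptions, and the outlier-removal and local-summation steps are omitted:

```python
import numpy as np

def collaborative_score(pixel, background, lam=0.01):
    """Represent the test pixel as a regularized linear combination of
    its dual-window neighbors; a large residual suggests an anomaly."""
    G = background.T @ background + lam * np.eye(background.shape[1])
    w = np.linalg.solve(G, background.T @ pixel)   # ridge weights
    return np.linalg.norm(pixel - background @ w)

rng = np.random.default_rng(0)
background = rng.normal(size=(30, 24))               # 30 bands, 24 neighbors
normal_px = background @ rng.dirichlet(np.ones(24))  # in the neighbor span
anomaly_px = 3.0 * rng.normal(size=30)
print(collaborative_score(normal_px, background),
      collaborative_score(anomaly_px, background))
```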

  4. Comparison of algorithms for blood stain detection applied to forensic hyperspectral imagery

    NASA Astrophysics Data System (ADS)

    Yang, Jie; Messinger, David W.; Mathew, Jobin J.; Dube, Roger R.

    2016-05-01

    Blood stains are among the most important types of evidence for forensic investigation. They contain valuable DNA information, and the pattern of the stains can suggest specifics about the nature of the violence that transpired at the scene. Early detection of blood stains is particularly important since blood reacts physically and chemically with air and materials over time. Accurate identification of blood remnants, including regions that might have been intentionally cleaned, is an important aspect of forensic investigation. Hyperspectral imaging is a potential method for detecting blood stains because it is non-contact and provides substantial spectral information that can be used to identify regions in a scene with trace amounts of blood. Such scenes can be highly complex when the range of material types and conditions on which blood stains may be found at a crime scene is considered. Some stains are hard to detect by the unaided eye, especially if a conscious effort to clean the scene has occurred (we refer to these as "latent" blood stains). In this paper we present the initial results of a study of the use of hyperspectral imaging algorithms for blood detection in complex scenes. We describe a hyperspectral imaging system which generates images covering the 400-700 nm visible range with a spectral resolution of 10 nm. Three image sets of 31 wavelength bands were generated using this camera for a simulated indoor crime scene in which blood stains were placed on a T-shirt and walls. To detect blood stains in the scene, Principal Component Analysis (PCA), Subspace Reed-Xiaoli Detection (SRXD), and Topological Anomaly Detection (TAD) algorithms were used. Comparison of the three hyperspectral image analysis techniques shows that TAD is most suitable for detecting blood stains and discovering latent blood stains.

  5. Advancements of Data Anomaly Detection Research in Wireless Sensor Networks: A Survey and Open Issues

    PubMed Central

    Rassam, Murad A.; Zainal, Anazida; Maarof, Mohd Aizaini

    2013-01-01

    Wireless Sensor Networks (WSNs) are important and necessary platforms for the future, as the concept of the "Internet of Things" has emerged lately. They are used for monitoring, tracking, or control in many applications in industry, health care, habitat monitoring, and the military. However, the quality of data collected by sensor nodes is affected by anomalies that occur due to various reasons, such as node failures, reading errors, unusual events, and malicious attacks. Therefore, anomaly detection is a necessary process to ensure the quality of sensor data before it is utilized for making decisions. In this review, we present the challenges of anomaly detection in WSNs and state the requirements for designing efficient and effective anomaly detection models. We then review the latest advancements of data anomaly detection research in WSNs and classify current detection approaches into five main classes based on the detection methods used to design these approaches. Varieties of the state-of-the-art models for each class are covered and their limitations are highlighted to provide ideas for potential future work. Furthermore, the reviewed approaches are compared and evaluated based on how well they meet the stated requirements. Finally, the general limitations of current approaches are mentioned and further research opportunities are suggested and discussed. PMID:23966182

  6. Intelligent transient transitions detection of LRE test bed

    NASA Astrophysics Data System (ADS)

    Zhu, Fengyu; Shen, Zhengguang; Wang, Qi

    2013-01-01

    Health monitoring systems implement monitoring strategies for complex systems, avoiding catastrophic failure, extending life, and leading to improved asset management. A health monitoring system generally encompasses intelligence at many levels and sub-systems, including sensors, actuators, and devices. In this paper, a smart sensor is studied that is used to detect transient transitions on a liquid-propellant rocket engine test bed. To cope with dramatic changes in operating conditions, wavelet decomposition is applied in real time. In contrast to the traditional Fourier transform method, the major advantage of adding wavelet analysis is the ability to detect transient transitions, as well as to obtain the frequency content from a much smaller data set. Historically, transient transitions were only detected by offline analysis of the data. The methods proposed in this paper provide an opportunity to detect transient transitions automatically, along with many additional data anomalies, and provide improved data-correction and sensor health diagnostic abilities. The developed algorithms have been tested on actual rocket test data.
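
    A minimal sketch of wavelet-based transient detection in this spirit, using the PyWavelets library; the wavelet choice, decomposition level, and energy threshold are assumptions rather than the paper's values, and the flags index downsampled wavelet coefficients rather than raw samples:

```python
import numpy as np
import pywt

def transient_flags(signal, wavelet="db4", level=3, k=20.0):
    """Flag positions where finest-scale detail energy spikes,
    indicating a transient transition rather than steady operation."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    energy = coeffs[-1] ** 2              # finest-scale detail energy
    return energy > k * energy.mean()     # assumed threshold factor

rng = np.random.default_rng(0)
sig = rng.normal(scale=0.1, size=4096)
sig[2000:2010] += 2.0                     # injected transient
print(np.flatnonzero(transient_flags(sig)))
```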

  7. Multi-Level Anomaly Detection on Time-Varying Graph Data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bridges, Robert A; Collins, John P; Ferragut, Erik M

    This work presents a novel modeling and analysis framework for graph sequences which addresses the challenge of detecting and contextualizing anomalies in labelled, streaming graph data. We introduce a generalization of the BTER model of Seshadhri et al. by adding flexibility to community structure, and use this model to perform multi-scale graph anomaly detection. Specifically, probability models describing coarse subgraphs are built by aggregating probabilities at finer levels, and these closely related hierarchical models simultaneously detect deviations from expectation. This technique provides insight into a graph's structure and internal context that may shed light on a detected event. Additionally, this multi-scale analysis facilitates intuitive visualizations by allowing users to narrow focus from an anomalous graph to particular subgraphs or nodes causing the anomaly. For evaluation, two hierarchical anomaly detectors are tested against a baseline Gaussian method on a series of sampled graphs. We demonstrate that our graph statistics-based approach outperforms both a distribution-based detector and the baseline in a labeled setting with community structure, and it accurately detects anomalies in synthetic and real-world datasets at the node, subgraph, and graph levels. To illustrate the accessibility of information made possible via this technique, the anomaly detector and an associated interactive visualization tool are tested on NCAA football data, where teams and conferences that moved within the league are identified with perfect recall, and precision greater than 0.786.

  8. On estimating gravity anomalies - A comparison of least squares collocation with conventional least squares techniques

    NASA Technical Reports Server (NTRS)

    Argentiero, P.; Lowrey, B.

    1977-01-01

    The least squares collocation algorithm for estimating gravity anomalies from geodetic data is shown to be an application of the well-known regression equations which provide the mean and covariance of a random vector (gravity anomalies) given a realization of a correlated random vector (geodetic data). It is also shown that the collocation solution for gravity anomalies is equivalent to the conventional least-squares Stokes' function solution when the conventional solution utilizes properly weighted zero a priori estimates. The mathematical and physical assumptions underlying the least squares collocation estimator are described.
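
    In generic notation (signal vector s, data vector t, noise covariance C_nn; not the paper's own symbols), the collocation estimator and its error covariance take the standard closed form:

```latex
% Least squares collocation: regression of the signal on correlated data.
\hat{s} = C_{st}\left(C_{tt} + C_{nn}\right)^{-1} t,
\qquad
E_{\hat{s}} = C_{ss} - C_{st}\left(C_{tt} + C_{nn}\right)^{-1} C_{ts}
```

    Choosing properly weighted zero a priori estimates in the conventional least-squares Stokes' solution reproduces the same estimate, which is the equivalence the paper establishes.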

  9. An Adaptive Network-based Fuzzy Inference System for the detection of thermal and TEC anomalies around the time of the Varzeghan, Iran, (Mw = 6.4) earthquake of 11 August 2012

    NASA Astrophysics Data System (ADS)

    Akhoondzadeh, M.

    2013-09-01

    Anomaly detection is extremely important for forecasting the date, location and magnitude of an impending earthquake. In this paper, an Adaptive Network-based Fuzzy Inference System (ANFIS) is proposed to detect the thermal and Total Electron Content (TEC) anomalies around the time of the Varzeghan, Iran, (Mw = 6.4) earthquake that jolted NW Iran on 11 August 2012. ANFIS is a well-known hybrid neuro-fuzzy network for modeling non-linear complex systems. The anomalies detected using the proposed method are also compared to those observed by applying classical and intelligent methods, including the Interquartile, Auto-Regressive Integrated Moving Average (ARIMA), Artificial Neural Network (ANN) and Support Vector Machine (SVM) methods. The dataset, comprised of Aqua-MODIS Land Surface Temperature (LST) night-time snapshot images and Global Ionospheric Maps (GIM), spans 62 days. If the difference between the value predicted by the ANFIS method and the observed value exceeds a pre-defined threshold, the observed value, in the absence of non-seismic effective parameters, can be regarded as a precursory anomaly. For the two precursors, LST and TEC, the ANFIS method shows very good agreement with the other implemented classical and intelligent methods, indicating that ANFIS is capable of detecting earthquake anomalies. The applied methods detected anomalous occurrences 1 and 2 days before the earthquake. The detection of the thermal and TEC anomalies derives its credibility from the overall efficiency and potential of the five integrated methods.
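
    The detection rule itself is simple to state in code. A minimal sketch, assuming aligned arrays of observed and predicted daily values and an illustrative k = 2 residual-sigma threshold (the ANFIS model itself and the paper's threshold calibration are not reproduced):

```python
import numpy as np

rng = np.random.default_rng(0)
predicted = np.sin(np.linspace(0, 6, 62))        # e.g., 62 daily values
observed = predicted + 0.05 * rng.normal(size=62)
observed[45] += 0.6                              # injected precursory jump

residual = observed - predicted
flags = np.abs(residual) > 2.0 * residual.std()  # assumed threshold
print(np.flatnonzero(flags))                     # -> [45]
```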

  10. Effective Sensor Selection and Data Anomaly Detection for Condition Monitoring of Aircraft Engines

    PubMed Central

    Liu, Liansheng; Liu, Datong; Zhang, Yujie; Peng, Yu

    2016-01-01

    In a complex system, condition monitoring (CM) can collect the system working status. The condition is mainly sensed by the pre-deployed sensors in/on the system. Most existing works study how to utilize the condition information to predict the upcoming anomalies, faults, or failures. There is also some research which focuses on the faults or anomalies of the sensing element (i.e., sensor) to enhance the system reliability. However, existing approaches ignore the correlation between the sensor selection strategy and data anomaly detection, which can also improve the system reliability. To address this issue, we study a new scheme which includes a sensor selection strategy and data anomaly detection by utilizing information theory and Gaussian Process Regression (GPR). The sensors that are more appropriate for the system CM are first selected. Then, mutual information is utilized to weight the correlation among different sensors. The anomaly detection is carried out by using the correlation of sensor data. The sensor data sets used in the evaluation are provided by the National Aeronautics and Space Administration (NASA) Ames Research Center and were used as Prognostics and Health Management (PHM) challenge data in 2008. By comparing the two different sensor selection strategies, the effectiveness of the selection method for data anomaly detection is demonstrated. PMID:27136561

  11. Effective Sensor Selection and Data Anomaly Detection for Condition Monitoring of Aircraft Engines.

    PubMed

    Liu, Liansheng; Liu, Datong; Zhang, Yujie; Peng, Yu

    2016-04-29

    In a complex system, condition monitoring (CM) can collect the system working status. The condition is mainly sensed by the pre-deployed sensors in/on the system. Most existing works study how to utilize the condition information to predict the upcoming anomalies, faults, or failures. There is also some research which focuses on the faults or anomalies of the sensing element (i.e., sensor) to enhance the system reliability. However, existing approaches ignore the correlation between the sensor selection strategy and data anomaly detection, which can also improve the system reliability. To address this issue, we study a new scheme which includes a sensor selection strategy and data anomaly detection by utilizing information theory and Gaussian Process Regression (GPR). The sensors that are more appropriate for the system CM are first selected. Then, mutual information is utilized to weight the correlation among different sensors. The anomaly detection is carried out by using the correlation of sensor data. The sensor data sets used in the evaluation are provided by the National Aeronautics and Space Administration (NASA) Ames Research Center and were used as Prognostics and Health Management (PHM) challenge data in 2008. By comparing the two different sensor selection strategies, the effectiveness of the selection method for data anomaly detection is demonstrated.
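
    A minimal sketch of the scheme's two ingredients using scikit-learn, assuming a readings matrix of shape (samples, sensors); the sensor pairing, default GPR kernel, and 3-sigma flag are illustrative assumptions rather than the paper's configuration:

```python
import numpy as np
from sklearn.feature_selection import mutual_info_regression
from sklearn.gaussian_process import GaussianProcessRegressor

def mi_weights(readings, target):
    """Weight each remaining sensor by its mutual information with
    the target sensor."""
    X = np.delete(readings, target, axis=1)
    return mutual_info_regression(X, readings[:, target])

def gpr_residual_flags(readings, target, k=3.0):
    """Predict one sensor from the others with GPR; flag samples whose
    residual exceeds k standard deviations."""
    X = np.delete(readings, target, axis=1)
    y = readings[:, target]
    resid = y - GaussianProcessRegressor().fit(X, y).predict(X)
    return np.abs(resid) > k * resid.std()

rng = np.random.default_rng(0)
readings = rng.normal(size=(200, 5))
readings[:, 0] = 0.8 * readings[:, 1] + 0.1 * rng.normal(size=200)
print(mi_weights(readings, 0))   # sensor 1 should carry the most weight
```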

  12. Variable Discretisation for Anomaly Detection using Bayesian Networks

    DTIC Science & Technology

    2017-01-01

    Bayesian network implementations usually require each variable to take on a finite number of mutually... Anomaly detection is the process by which low probability events are automatically found against a... (Variable Discretisation for Anomaly Detection using Bayesian Networks; Jonathan Legg; National Security and ISR Division, Defence Science and Technology Group; DST-Group-TR-3328)

  13. Pediatric tinnitus: Incidence of imaging anomalies and the impact of hearing loss.

    PubMed

    Kerr, Rhorie; Kang, Elise; Hopkins, Brandon; Anne, Samantha

    2017-12-01

    Guidelines exist for the evaluation and management of tinnitus in adults; however, a lack of evidence in children limits the applicability of these guidelines to pediatric patients. The objective of this study is to determine the incidence of inner ear anomalies detected on imaging studies within the pediatric population with tinnitus, and to evaluate whether the presence of hearing loss increases the rate of detection of anomalies in comparison to normal-hearing patients. Retrospective review of all children with a diagnosis of tinnitus from 2010 to 2015 at a tertiary care academic center. 102 pediatric patients with tinnitus were identified. Overall, 53 patients had imaging studies, with 6 abnormal findings (11.3%). 51/102 patients had hearing loss, of whom 33 had imaging studies, in which 6 inner ear anomalies were detected. This is an incidence of 18.2% for inner ear anomalies identified in patients with hearing loss (95% confidence interval (CI) of 7.0-35.5%). 4 of these 6 inner ear anomalies were vestibular aqueduct abnormalities; the other two were cochlear hypoplasia and bilateral semicircular canal dysmorphism. 51 patients had no hearing loss, and of these patients, 20 had imaging studies with no inner ear abnormalities detected. There was no statistically significant difference in the incidence of abnormal imaging findings in patients with and without hearing loss (Fisher's exact test, p = 0.072). CONCLUSION: There is a high incidence of anomalies detected in imaging studies done in pediatric patients with tinnitus, especially in the presence of hearing loss. Copyright © 2017 Elsevier B.V. All rights reserved.

  14. Efficient dynamic events discrimination technique for fiber distributed Brillouin sensors.

    PubMed

    Galindez, Carlos A; Madruga, Francisco J; Lopez-Higuera, Jose M

    2011-09-26

    A technique to detect real-time variations of temperature or strain in Brillouin-based distributed fiber sensors is proposed and investigated in this paper. The technique is based on anomaly detection methods such as the RX algorithm. Detection and isolation of dynamic events from static ones are demonstrated by proper processing of the Brillouin gain values obtained using a standard BOTDA system. Results also suggest that better signal-to-noise ratio, dynamic range and spatial resolution can be obtained. For a pump pulse of 5 ns, the spatial resolution is enhanced (from 0.541 m, obtained by direct gain measurement, to 0.418 m, obtained with the technique presented here), since the analysis concentrates on the variation of the Brillouin gain and not only on the averaging of the signal over time. © 2011 Optical Society of America

  15. Activity Learning as a Foundation for Security Monitoring in Smart Homes

    PubMed Central

    Dahmen, Jessamyn; Thomas, Brian L.; Cook, Diane J.; Wang, Xiaobo

    2017-01-01

    Smart environment technology has matured to the point where it is regularly used in everyday homes as well as research labs. With this maturation of the technology, we can consider using smart homes as a practical mechanism for improving home security. In this paper, we introduce an activity-aware approach to security monitoring and threat detection in smart homes. We describe our approach using the CASAS smart home framework and activity learning algorithms. By monitoring for activity-based anomalies we can detect possible threats and take appropriate action. We evaluate our proposed method using data collected in CASAS smart homes and demonstrate the partnership between activity-aware smart homes and biometric devices in the context of the CASAS on-campus smart apartment testbed. PMID:28362342

  16. Accumulating pyramid spatial-spectral collaborative coding divergence for hyperspectral anomaly detection

    NASA Astrophysics Data System (ADS)

    Sun, Hao; Zou, Huanxin; Zhou, Shilin

    2016-03-01

    Detection of anomalous targets of various sizes in hyperspectral data has received a lot of attention in reconnaissance and surveillance applications. Many anomaly detectors have been proposed in the literature. However, current methods are susceptible to anomalies in the processing window range and often make critical assumptions about the distribution of the background data. Motivated by the fact that anomaly pixels are often distinctive from their local background, we propose a novel hyperspectral anomaly detection framework for real-time remote sensing applications. The proposed framework consists of four major components: sparse feature learning, pyramid grid window selection, joint spatial-spectral collaborative coding, and multi-level divergence fusion. It exploits the collaborative representation difference in the feature space to locate potential anomalies and is totally unsupervised, without any prior assumptions. Experimental results on airborne hyperspectral data demonstrate that the proposed method adapts to anomalies over a large range of sizes and is well suited for parallel processing.

  17. A Distance Measure for Attention Focusing and Anomaly Detection in Systems Monitoring

    NASA Technical Reports Server (NTRS)

    Doyle, R.

    1994-01-01

    Any attempt to introduce automation into the monitoring of complex physical systems must start from a robust anomaly detection capability. This task is far from straightforward, for a single definition of what constitutes an anomaly is difficult to come by. In addition, to make the monitoring process efficient, and to avoid the potential for information overload on human operators, attention focusing must also be addressed. When an anomaly occurs, more often than not several sensors are affected, and the partially redundant information they provide can be confusing, particularly in a crisis situation where a response is needed quickly. Previous results on extending traditional anomaly detection techniques are summarized. The focus of this paper is a new technique for attention focusing.

  18. Detection of admittivity anomaly on high-contrast heterogeneous backgrounds using frequency difference EIT.

    PubMed

    Jang, J; Seo, J K

    2015-06-01

    This paper describes a multiple background subtraction method in frequency-difference electrical impedance tomography (fdEIT) to detect an admittivity anomaly in a high-contrast background conductivity distribution. The proposed method expands the use of the conventional weighted frequency-difference EIT method, which has previously been limited to detecting admittivity anomalies in a roughly homogeneous background. The proposed method can be viewed as multiple weighted difference imaging in fdEIT. Although the spatial resolution of the output images from fdEIT is very low due to the inherent ill-posedness, numerical simulations and phantom experiments demonstrate the method's feasibility for detecting anomalies. It has potential application in stroke detection in a head model, which is highly heterogeneous due to the skull.

  19. Multi-Level Modeling of Complex Socio-Technical Systems - Phase 1

    DTIC Science & Technology

    2013-06-06

    is to detect anomalous organizational outcomes, diagnose the causes of these anomalies, and decide upon appropriate compensation schemes. All of... monitor process outcomes. The purpose of this monitoring is to detect anomalous process outcomes, diagnose the causes of these anomalies, and decide upon... monitor work outcomes in terms of performance. The purpose of this monitoring is to detect anomalous work outcomes, diagnose the causes of these anomalies

  20. Statistical Algorithms for Designing Geophysical Surveys to Detect UXO Target Areas

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    O'Brien, Robert F.; Carlson, Deborah K.; Gilbert, Richard O.

    2005-07-29

    The U.S. Department of Defense is in the process of assessing and remediating closed, transferred, and transferring military training ranges across the United States. Many of these sites have areas that are known to contain unexploded ordnance (UXO). Other sites or portions of sites are not expected to contain UXO, but some verification of this expectation using geophysical surveys is needed. Many sites are so large that it is often impractical and/or cost prohibitive to perform surveys over 100% of the site. In that case, it is particularly important to be explicit about the performance required of the survey. This article presents the statistical algorithms developed to support the design of geophysical surveys along transects (swaths) to find target areas (TAs) of anomalous geophysical readings that may indicate the presence of UXO. The algorithms described here determine 1) the spacing between transects that should be used for the surveys to achieve a specified probability of traversing the TA, 2) the probability of both traversing and detecting a TA of anomalous geophysical readings when the spatial density of anomalies within the TA is either uniform (unchanging over space) or has a bivariate normal distribution, and 3) the probability that a TA exists when it was not found by surveying along transects. These algorithms have been implemented in the Visual Sample Plan (VSP) software to develop cost-effective transect survey designs that meet performance objectives.

  1. Statistical Algorithms for Designing Geophysical Surveys to Detect UXO Target Areas

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    O'Brien, Robert F.; Carlson, Deborah K.; Gilbert, Richard O.

    2005-07-28

    The U.S. Department of Defense is in the process of assessing and remediating closed, transferred, and transferring military training ranges across the United States. Many of these sites have areas that are known to contain unexploded ordnance (UXO). Other sites or portions of sites are not expected to contain UXO, but some verification of this expectation using geophysical surveys is needed. Many sites are so large that it is often impractical and/or cost prohibitive to perform surveys over 100% of the site. In such cases, it is particularly important to be explicit about the performance required of the surveys. This article presents the statistical algorithms developed to support the design of geophysical surveys along transects (swaths) to find target areas (TAs) of anomalous geophysical readings that may indicate the presence of UXO. The algorithms described here determine (1) the spacing between transects that should be used for the surveys to achieve a specified probability of traversing the TA, (2) the probability of both traversing and detecting a TA of anomalous geophysical readings when the spatial density of anomalies within the TA is either uniform (unchanging over space) or has a bivariate normal distribution, and (3) the probability that a TA exists when it was not found by surveying along transects. These algorithms have been implemented in the Visual Sample Plan (VSP) software to develop cost-effective transect survey designs that meet performance objectives.
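
    As a toy illustration of the first quantity: assuming parallel transects with a uniformly random lateral offset and a circular TA of radius R, the TA is traversed exactly when a transect falls within its 2R-wide footprint, so the probability is min(1, 2R/spacing). This simplified geometry illustrates, rather than reproduces, the VSP algorithms:

```python
def traversal_probability(radius_m: float, spacing_m: float) -> float:
    """P(a parallel-transect survey with random offset crosses a
    circular target area of the given radius)."""
    return min(1.0, 2.0 * radius_m / spacing_m)

# Example: a 20 m radius target area surveyed at 100 m transect spacing.
print(traversal_probability(20.0, 100.0))  # 0.4
```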

  2. An Optimized Method to Detect BDS Satellites' Orbit Maneuvering and Anomalies in Real-Time.

    PubMed

    Huang, Guanwen; Qin, Zhiwei; Zhang, Qin; Wang, Le; Yan, Xingyuan; Wang, Xiaolei

    2018-02-28

    The orbital maneuvers of Global Navigation Satellite System (GNSS) constellations decrease the performance and accuracy of positioning, navigation, and timing (PNT). Because satellites in the Chinese BeiDou Navigation Satellite System (BDS) are in Geostationary Orbit (GEO) and Inclined Geosynchronous Orbit (IGSO), maneuvers occur more frequently. Moreover, the precise start moment of a BDS satellite's orbit maneuver cannot be obtained by common users. This paper presents an improved real-time detection method for BDS satellites' orbit maneuvering and anomalies with higher timeliness and higher accuracy. The main contributions are as follows: (1) instead of the previous two-step method, a new one-step method with higher accuracy is proposed to determine the start moment and the pseudo random noise code (PRN) of the maneuvering satellite; (2) BDS Medium Earth Orbit (MEO) orbital maneuvers are detected for the first time, using the proposed station selection strategy; and (3) classified non-maneuvering anomalies are detected by a new median-based robust method using weak and strong anomaly detection factors. Data from the Multi-GNSS Experiment (MGEX) in 2017 were used for the experimental analysis. The results show that the start moment of orbital maneuvers and the period of non-maneuvering anomalies can be determined more accurately in real time. When orbital maneuvers and anomalies occurred, the proposed method improved data utilization by 91 and 95 min in 2017.

  3. An Optimized Method to Detect BDS Satellites’ Orbit Maneuvering and Anomalies in Real-Time

    PubMed Central

    Huang, Guanwen; Qin, Zhiwei; Zhang, Qin; Wang, Le; Yan, Xingyuan; Wang, Xiaolei

    2018-01-01

    The orbital maneuvers of Global Navigation Satellite System (GNSS) constellations decrease the performance and accuracy of positioning, navigation, and timing (PNT). Because satellites in the Chinese BeiDou Navigation Satellite System (BDS) are in Geostationary Orbit (GEO) and Inclined Geosynchronous Orbit (IGSO), maneuvers occur more frequently. Moreover, the precise start moment of a BDS satellite's orbit maneuver cannot be obtained by common users. This paper presents an improved real-time detection method for BDS satellites' orbit maneuvering and anomalies with higher timeliness and higher accuracy. The main contributions are as follows: (1) instead of the previous two-step method, a new one-step method with higher accuracy is proposed to determine the start moment and the pseudo random noise code (PRN) of the maneuvering satellite; (2) BDS Medium Earth Orbit (MEO) orbital maneuvers are detected for the first time, using the proposed station selection strategy; and (3) classified non-maneuvering anomalies are detected by a new median-based robust method using weak and strong anomaly detection factors. Data from the Multi-GNSS Experiment (MGEX) in 2017 were used for the experimental analysis. The results show that the start moment of orbital maneuvers and the period of non-maneuvering anomalies can be determined more accurately in real time. When orbital maneuvers and anomalies occurred, the proposed method improved data utilization by 91 and 95 min in 2017. PMID:29495638
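
    A minimal sketch of a median-robust detector in the spirit of the weak and strong anomaly detection factors described above; the factor thresholds (3 and 5) and the synthetic residual series are assumptions, not the paper's calibrated values:

```python
import numpy as np

def mad_factors(series):
    """Deviation from the median scaled by the median absolute
    deviation (MAD); robust to the anomalies being sought."""
    med = np.median(series)
    mad = 1.4826 * np.median(np.abs(series - med))  # ~sigma if Gaussian
    return np.abs(series - med) / mad

rng = np.random.default_rng(0)
residuals = rng.normal(size=500)
residuals[100] = 8.0                      # injected anomaly
factors = mad_factors(residuals)
weak = factors > 3.0                      # assumed "weak" factor
strong = factors > 5.0                    # assumed "strong" factor
print(np.flatnonzero(strong))             # -> [100]
```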

  4. Atmospheric circulation patterns and phenological anomalies of grapevine in Italy

    NASA Astrophysics Data System (ADS)

    Cola, Gabriele; Alilla, Roberta; Dal Monte, Giovanni; Epifani, Chiara; Mariani, Luigi; Parisi, Simone Gabriele

    2014-05-01

    Grapevine (Vitis vinifera L.) is a fundamental crop for Italian agriculture, as testified by Italy's first place in the world producers' ranking. This justifies the importance of quantitative analyses of this crucial crop aimed at quantifying the meteorological resources and limitations affecting development and production. Phenological rhythms of grapevine are strongly affected by surface fields of air temperature, which in turn are affected by synoptic circulation. This evidence highlights the importance of an approach based on dynamic climatology to detect and explain phenological anomalies that can have relevant effects on the quantity and quality of grapevine production. In this context, this research studies the relation between 850 hPa circulation patterns over the Euro-Mediterranean area, from the NOAA NCEP dataset, and grapevine phenological fields for Italy over the period 2006-2013, highlighting the main phenological anomalies and analyzing their synoptic determinants. This work is based on phenological fields with a standard pixel of 2 km, routinely produced since 2006 by the Iphen project (Italian Phenological Network) on the basis of phenological observations spatialized by means of a specific algorithm based on cumulated thermal resources expressed as Normal Heat Hours (NHH). Anomalies have been evaluated with reference to normal phenological fields defined for the Italian area on the basis of phenological observations and the Iphen model. Results show that the relevant phenological anomalies observed over the reference period are primarily associated with long-lasting blocking systems driving cold (Arctic or Polar-Continental) or hot (Sub-Tropical) air masses towards the Italian area. Specific cases are presented for some years, such as 2007 and 2011.

  5. Application of the Augmented Operator Function Model for Developing Cognitive Metrics in Persistent Surveillance

    DTIC Science & Technology

    2013-09-26

    vehicle-lengths between frames. The low specificity of object detectors in WAMI means all vehicle detections are treated equally. Motion clutter... timing of the anomaly. If an anomaly was detected, recent activity would have priority over older activity. This is due to the reasoning that if the... this could be a potential anomaly detected. Other baseline activities include normal work hours, religious observance times and interactions between

  6. Critical Infrastructure Protection and Resilience Literature Survey: Modeling and Simulation

    DTIC Science & Technology

    2014-11-01

    Below the yellow set is a purple cluster bringing together detection, anomaly, intrusion, sensors, monitoring and alerting (early... hazards and threats to security [56]. Water: ADWICE, PSS®SINCAL; ADWICE for real-time anomaly detection in water management systems [57]. One tool that... Systems. Cybernetics and Information Technologies. 2008;8(4):57-68. 57. Raciti M, Cucurull J, Nadjm-Tehrani S. Anomaly detection in water management

  7. Symbolic Time-Series Analysis for Anomaly Detection in Mechanical Systems

    DTIC Science & Technology

    2006-08-01

    Amol Khatkhate, Asok Ray, Fellow, IEEE, Eric Keller, Shalabh Gupta, and Shin C. Chin. Abstract—This paper examines the efficacy of a novel method for... recognition. Asok Ray (F'02) received graduate degrees in electrical... anomaly detection has been proposed by Ray [6], where the underlying information on the dynamical behavior of complex systems is derived based on

  8. Autonomous detection of crowd anomalies in multiple-camera surveillance feeds

    NASA Astrophysics Data System (ADS)

    Nordlöf, Jonas; Andersson, Maria

    2016-10-01

    A novel approach for autonomous detection of anomalies in crowded environments is presented in this paper. The proposed model uses a Gaussian mixture probability hypothesis density (GM-PHD) filter as a feature extractor in conjunction with different Gaussian mixture hidden Markov models (GM-HMMs). Results, based on both simulated and recorded data, indicate that this method can track and detect anomalies on-line in individual crowds through multiple camera feeds in a crowded environment.
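
    A minimal sketch of the HMM side of the approach, assuming per-frame crowd feature vectors (in the paper, extracted by the GM-PHD filter) and the hmmlearn library; a plain Gaussian HMM stands in for the GM-HMMs, and the log-likelihood margin is an assumption:

```python
import numpy as np
from hmmlearn.hmm import GaussianHMM

rng = np.random.default_rng(1)
train_feats = rng.normal(size=(1000, 4))      # normal-crowd features
test_window = rng.normal(size=(50, 4)) + 3.0  # a deviating window

model = GaussianHMM(n_components=3, covariance_type="diag")
model.fit(train_feats)

# Low per-frame log-likelihood relative to training signals an anomaly.
ll = model.score(test_window) / len(test_window)
baseline = model.score(train_feats) / len(train_feats)
print("anomalous" if ll < baseline - 2.0 else "normal")  # assumed margin
```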

  9. Deep learning on temporal-spectral data for anomaly detection

    NASA Astrophysics Data System (ADS)

    Ma, King; Leung, Henry; Jalilian, Ehsan; Huang, Daniel

    2017-05-01

    Detecting anomalies is important for continuous monitoring of sensor systems. One significant challenge is to use sensor data to autonomously detect changes that cause different conditions to occur. Using deep learning methods, we are able to monitor and detect changes resulting from some disturbance in the system. We utilize deep neural networks for sequence analysis of time series and use a multi-step method for anomaly detection. We train the network to learn spectral and temporal features from the acoustic time series, and we test our method using fiber-optic acoustic data from a pipeline.

  10. Latent Space Tracking from Heterogeneous Data with an Application for Anomaly Detection

    DTIC Science & Technology

    2015-11-01

    specific, if the anomaly behaves as a sudden outlier after which the data stream goes back to a normal state, then the anomalous data point should be... introduced three types of anomalies, all of them sudden outliers... Latent Space Tracking from Heterogeneous Data with an Application for Anomaly Detection. Jiaji Huang and Xia Ning, Department of Electrical

  11. Defect detection around rebars in concrete using focused ultrasound and reverse time migration.

    PubMed

    Beniwal, Surendra; Ganguli, Abhijit

    2015-09-01

    Experimental and numerical investigations have been performed to assess the feasibility of damage detection around rebars in concrete using focused ultrasound and a Reverse Time Migration (RTM) based subsurface imaging algorithm. Since concrete is heterogeneous, an unfocused ultrasonic field is randomly scattered by the aggregates, thereby masking information about damage. A focused ultrasonic field, on the other hand, increases the possibility of detecting an anomaly, owing to the enhanced amplitude of the incident field in the focal region. Further, RTM-based reconstruction using scattered focused-field data is capable of creating clear images of the inspected region of interest. Since scattering of a focused field by a damaged rebar differs qualitatively from that of an undamaged rebar, distinct images of damaged and undamaged situations are obtained in the RTM-generated images. This is demonstrated with both numerical and experimental investigations. The total scattered field, acquired on the surface of the concrete medium, is used as input to the RTM algorithm to generate the subsurface image that helps to identify the damage. The proposed technique therefore offers an advantage, since knowledge of the undamaged scenario for the concrete medium is not necessary to assess its integrity. Copyright © 2015 Elsevier B.V. All rights reserved.

  12. Studies of atmospheric refraction effects on laser data

    NASA Technical Reports Server (NTRS)

    Dunn, P. J.; Pearce, W. A.; Johnson, T. S.

    1982-01-01

    The refraction effect was considered from three perspectives. The first priority was an analysis of the axioms on which the accepted correction algorithms are based. The integrity of the meteorological measurements on which the correction model is based was also considered, and a large quantity of laser observations was processed in an effort to detect any serious anomalies in them. The third element of study focused on the effect of refraction errors on geodetic parameters estimated from laser data using the most recent analysis procedures. The results concentrate on refraction errors, which were found to be critical in the eventual use of the data for measurements of crustal dynamics.

  13. A Differential Evolution Based Approach to Estimate the Shape and Size of Complex Shaped Anomalies Using EIT Measurements

    NASA Astrophysics Data System (ADS)

    Rashid, Ahmar; Khambampati, Anil Kumar; Kim, Bong Seok; Liu, Dong; Kim, Sin; Kim, Kyung Youn

    EIT image reconstruction is an ill-posed problem: the spatial resolution of the estimated conductivity distribution is usually poor, and the external voltage measurements are subject to variable noise. Therefore, raw EIT conductivity estimates cannot be used to correctly determine the shape and size of complex-shaped regional anomalies; an efficient algorithm employing a shape-based estimation scheme is needed. The performance of traditional inverse algorithms, such as the Newton-Raphson method, used for this purpose is below par and depends upon the initial guess and the gradient of the cost functional. This paper presents the application of the differential evolution (DE) algorithm to estimate complex-shaped region boundaries, expressed as coefficients of a truncated Fourier series, using EIT. DE is a simple yet powerful population-based heuristic algorithm with the desired features for solving global optimization problems under realistic conditions. The performance of the algorithm has been tested through numerical simulations, comparing its results with those of the traditional modified Newton-Raphson (mNR) method.
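
    A minimal sketch of the estimation loop using SciPy's differential evolution; a random linear map stands in for the actual EIT forward solver, and the coefficient count and bounds are assumptions:

```python
import numpy as np
from scipy.optimize import differential_evolution

rng = np.random.default_rng(0)
n_coeffs = 9                          # truncated Fourier series terms
A = rng.normal(size=(32, n_coeffs))   # toy stand-in for the forward model
true_coeffs = rng.uniform(-0.5, 0.5, n_coeffs)
measured = A @ true_coeffs + 0.01 * rng.normal(size=32)

def cost(coeffs):
    # Misfit between simulated and measured boundary voltages.
    return np.sum((A @ coeffs - measured) ** 2)

result = differential_evolution(cost, bounds=[(-1.0, 1.0)] * n_coeffs,
                                maxiter=300, seed=1)
print(np.round(result.x - true_coeffs, 3))  # recovery error
```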

  14. Thinned crustal structure and tectonic boundary of the Nansha Block, southern South China Sea

    NASA Astrophysics Data System (ADS)

    Dong, Miao; Wu, Shi-Guo; Zhang, Jian

    2016-12-01

    The southern South China Sea margin consists of the thinned crustal Nansha Block and a compressional collision zone. The Nansha Block's deep structure and tectonic evolution contain critical information about the South China Sea's rifting. Multiple geophysical data sets, including regional magnetic, gravity and reflection seismic data, reveal the deep structure and rifting processes. Curie point depth (CPD), estimated from magnetic anomalies using a windowed wavenumber-domain algorithm, enables us to image thermal structures. To derive a 3D Moho topography and crustal thickness model, we apply the Oldenburg algorithm to the gravity anomaly, which was extracted from the observed free-air gravity anomaly data after removing the gravity effect of density variations in the sediments and of temperature and pressure variations in the lithospheric mantle. We found that the Moho depth (20 km) is shallower than the CPD (24 km) in the Northwest Borneo Trough, possibly caused by thinned crust, low heat flow and a low vertical geothermal gradient. The Nansha Block's northern boundary is a narrow continent-ocean transition zone constrained by magnetic anomalies, reflection seismic data, gravity anomalies and an interpretation of Moho depth (about 13 km). The block extends southward beneath a gravity-driven deformed sediment wedge caused by uplift on land after a collision, with a contribution from deep crustal flow. Its southwestern boundary is close to the Lupar Line, defined by a significant negative reduced-to-the-pole (RTP) magnetic anomaly and a short-length-scale variation in crustal thickness, increasing from 18 to 26 km.

  15. Network Inference via the Time-Varying Graphical Lasso

    PubMed Central

    Hallac, David; Park, Youngsuk; Boyd, Stephen; Leskovec, Jure

    2018-01-01

    Many important problems can be modeled as a system of interconnected entities, where each entity is recording time-dependent observations or measurements. In order to spot trends, detect anomalies, and interpret the temporal dynamics of such data, it is essential to understand the relationships between the different entities and how these relationships evolve over time. In this paper, we introduce the time-varying graphical lasso (TVGL), a method of inferring time-varying networks from raw time series data. We cast the problem in terms of estimating a sparse time-varying inverse covariance matrix, which reveals a dynamic network of interdependencies between the entities. Since dynamic network inference is a computationally expensive task, we derive a scalable message-passing algorithm based on the Alternating Direction Method of Multipliers (ADMM) to solve this problem in an efficient way. We also discuss several extensions, including a streaming algorithm to update the model and incorporate new observations in real time. Finally, we evaluate our TVGL algorithm on both real and synthetic datasets, obtaining interpretable results and outperforming state-of-the-art baselines in terms of both accuracy and scalability. PMID:29770256
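
    A simplified per-window sketch of the underlying idea using scikit-learn's graphical lasso; the full TVGL couples adjacent windows through an ADMM penalty, which this stand-in omits, and the window length and alpha are assumptions:

```python
import numpy as np
from sklearn.covariance import GraphicalLasso

rng = np.random.default_rng(0)
series = rng.normal(size=(600, 5))    # 600 time steps, 5 entities

window = 100
networks = []
for start in range(0, series.shape[0] - window + 1, window):
    gl = GraphicalLasso(alpha=0.1).fit(series[start:start + window])
    networks.append(gl.precision_)    # sparse dependency structure

# A jump in the estimated network between windows can flag an anomaly.
shifts = [np.abs(b - a).sum() for a, b in zip(networks, networks[1:])]
print(np.round(shifts, 2))
```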

  16. Characterization of normality of chaotic systems including prediction and detection of anomalies

    NASA Astrophysics Data System (ADS)

    Engler, Joseph John

    Accurate prediction and control pervades domains such as engineering, physics, chemistry, and biology. Often, it is discovered that the systems under consideration cannot be well represented by linear, periodic, or random data. It has been shown that these systems exhibit deterministic chaos behavior. Deterministic chaos describes systems which are governed by deterministic rules but whose data appear to be random or quasi-periodic distributions. Deterministically chaotic systems characteristically exhibit sensitive dependence upon initial conditions manifested through rapid divergence of states initially close to one another. Due to this characterization, it has been deemed impossible to accurately predict future states of these systems for longer time scales. Fortunately, the deterministic nature of these systems allows for accurate short term predictions, given the dynamics of the system are well understood. This fact has been exploited in the research community and has resulted in various algorithms for short term predictions. Detection of normality in deterministically chaotic systems is critical in understanding the system sufficiently to be able to predict future states. Due to the sensitivity to initial conditions, the detection of normal operational states for a deterministically chaotic system can be challenging. The addition of small perturbations to the system, which may result in bifurcation of the normal states, further complicates the problem. The detection of anomalies and prediction of future states of the chaotic system allows for greater understanding of these systems. The goal of this research is to produce methodologies for determining states of normality for deterministically chaotic systems, detection of anomalous behavior, and the more accurate prediction of future states of the system. Additionally, the ability to detect subtle system state changes is discussed. The dissertation addresses these goals by proposing new representational techniques and novel prediction methodologies. The value and efficiency of these methods are explored in various case studies. Presented is an overview of chaotic systems with examples taken from the real world. A representation schema for rapid understanding of the various states of deterministically chaotic systems is presented. This schema is then used to detect anomalies and system state changes. Additionally, a novel prediction methodology which utilizes Lyapunov exponents to facilitate longer term prediction accuracy is presented and compared with other nonlinear prediction methodologies. These novel methodologies are then demonstrated on applications such as wind energy, cyber security and classification of social networks.
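
    Sensitive dependence and the resulting prediction horizon can be made concrete with the logistic map, whose largest Lyapunov exponent is the orbit average of log|f'(x)|; this generic illustration is not taken from the dissertation:

```python
import numpy as np

def lyapunov_logistic(r, x0=0.4, n_transient=1000, n_iter=100_000):
    """Largest Lyapunov exponent of x -> r*x*(1-x), estimated as the
    orbit average of log|f'(x)| = log|r*(1-2x)|."""
    x = x0
    for _ in range(n_transient):          # discard transient behavior
        x = r * x * (1 - x)
    total = 0.0
    for _ in range(n_iter):
        x = r * x * (1 - x)
        total += np.log(abs(r * (1 - 2 * x)))
    return total / n_iter

print(lyapunov_logistic(4.0))  # ~ln 2 > 0: chaotic, nearby states diverge
print(lyapunov_logistic(3.2))  # < 0: periodic, long-term predictable
```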

  17. Behavioral pattern identification for structural health monitoring in complex systems

    NASA Astrophysics Data System (ADS)

    Gupta, Shalabh

    Estimation of structural damage and quantification of structural integrity are critical for safe and reliable operation of human-engineered complex systems, such as electromechanical, thermofluid, and petrochemical systems. Damage due to fatigue crack is one of the most commonly encountered sources of structural degradation in mechanical systems. Early detection of fatigue damage is essential because the resulting structural degradation could potentially cause catastrophic failures, leading to loss of expensive equipment and human life. Therefore, for reliable operation and enhanced availability, it is necessary to develop capabilities for prognosis and estimation of impending failures, such as the onset of wide-spread fatigue crack damage in mechanical structures. This dissertation presents information-based online sensing of fatigue damage using the analytical tools of symbolic time series analysis (STSA). Anomaly detection using STSA is a pattern recognition method that has been recently developed based upon a fixed-structure, fixed-order Markov chain. The analysis procedure is built upon the principles of Symbolic Dynamics, Information Theory and Statistical Pattern Recognition. The dissertation demonstrates real-time fatigue damage monitoring based on time series data of ultrasonic signals. Statistical pattern changes are measured using STSA to monitor the evolution of fatigue damage. Real-time anomaly detection is presented as a solution to the forward (analysis) problem and the inverse (synthesis) problem. (1) The forward problem: the primary objective is identification of the statistical changes in the time series data of ultrasonic signals due to gradual evolution of fatigue damage. (2) The inverse problem: the objective is to infer the anomalies from the observed time series data in real time, based on the statistical information generated during the forward problem. A computer-controlled special-purpose fatigue test apparatus, equipped with multiple sensing devices (e.g., ultrasonics and optical microscope) for damage analysis, has been used to experimentally validate the STSA method for early detection of anomalous behavior. The sensor information is integrated with a software module consisting of the STSA algorithm for real-time monitoring of fatigue damage. Experiments have been conducted under different loading conditions on specimens constructed from the ductile aluminium alloy 7075-T6. The dissertation has also investigated the application of the STSA method for early detection of anomalies in other engineering disciplines. Two primary applications include combustion instability in a generic thermal pulse combustor model and the whirling phenomenon in a typical misaligned shaft.
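
    A minimal sketch of the STSA pattern-change measure described above: symbolize the signal with a partition fixed on nominal data, estimate the symbol-state probabilities, and track their drift as damage evolves. The uniform partition and Euclidean distance are assumptions; the full method uses the transition matrix of a fixed-structure, fixed-order Markov chain:

```python
import numpy as np

def state_probabilities(signal, edges, n_symbols):
    """Empirical probability vector of the symbolized signal."""
    symbols = np.digitize(signal, edges)
    counts = np.bincount(symbols, minlength=n_symbols).astype(float)
    return counts / counts.sum()

def anomaly_measure(reference, current, n_symbols=8):
    """Distance between nominal and current state probability vectors."""
    edges = np.linspace(reference.min(), reference.max(),
                        n_symbols + 1)[1:-1]   # partition from nominal data
    p = state_probabilities(reference, edges, n_symbols)
    q = state_probabilities(current, edges, n_symbols)
    return np.linalg.norm(p - q)

rng = np.random.default_rng(0)
t = np.linspace(0, 60, 3000)
nominal = np.sin(t) + 0.1 * rng.normal(size=t.size)
degraded = np.sin(t) ** 3 + 0.1 * rng.normal(size=t.size)
print(anomaly_measure(nominal, degraded))
```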

  18. Disparity: scalable anomaly detection for clusters.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Desai, N.; Bradshaw, R.; Lusk, E.

    2008-01-01

    In this paper, we describe disparity, a tool that performs parallel, scalable anomaly detection for clusters. Disparity uses basic statistical methods and scalable reduction operations to perform data reduction on client nodes and uses these results to locate node anomalies. We discuss the implementation of disparity and present results of its use on a SiCortex SC5832 system.

  19. Robust and efficient anomaly detection using heterogeneous representations

    NASA Astrophysics Data System (ADS)

    Hu, Xing; Hu, Shiqiang; Xie, Jinhua; Zheng, Shiyou

    2015-05-01

    Various approaches have been proposed for video anomaly detection. Yet these approaches typically suffer from one or more limitations: they often characterize the pattern using its internal information, but ignore its external relationship which is important for local anomaly detection. Moreover, the high-dimensionality and the lack of robustness of pattern representation may lead to problems, including overfitting, increased computational cost and memory requirements, and high false alarm rate. We propose a video anomaly detection framework which relies on a heterogeneous representation to account for both the pattern's internal information and external relationship. The internal information is characterized by slow features learned by slow feature analysis from low-level representations, and the external relationship is characterized by the spatial contextual distances. The heterogeneous representation is compact, robust, efficient, and discriminative for anomaly detection. Moreover, both the pattern's internal information and external relationship can be taken into account in the proposed framework. Extensive experiments demonstrate the robustness and efficiency of our approach by comparison with the state-of-the-art approaches on the widely used benchmark datasets.

  20. Anomaly detection of turbopump vibration in Space Shuttle Main Engine using statistics and neural networks

    NASA Technical Reports Server (NTRS)

    Lo, C. F.; Wu, K.; Whitehead, B. A.

    1993-01-01

    Statistical and neural network methods have been applied to investigate the feasibility of detecting anomalies in turbopump vibration of the SSME. The anomalies are detected based on the amplitude of peaks of fundamental and harmonic frequencies in the power spectral density. These data are reduced to the proper format from sensor data measured by strain gauges and accelerometers. Both methods proved feasible for detecting the vibration anomalies. The statistical method requires sufficient data points to establish a reasonable statistical distribution data bank, and is applicable for on-line operation. The neural network method also needs enough data to train the neural networks. The testing procedure can be utilized at any time, so long as the characteristics of the components remain unchanged.
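
    A minimal sketch of the statistical variant, assuming Welch PSD estimation and a baseline mean and standard deviation built from nominal runs; the fundamental frequency, harmonic count, and 3-sigma rule are assumptions:

```python
import numpy as np
from scipy.signal import welch

def harmonic_peaks(x, fs, f0, n_harmonics=3):
    """PSD amplitudes at the fundamental f0 and its first harmonics."""
    freqs, psd = welch(x, fs=fs, nperseg=1024)
    return np.array([psd[np.argmin(np.abs(freqs - k * f0))]
                     for k in range(1, n_harmonics + 1)])

def is_anomalous(peaks, baseline_mean, baseline_std, k=3.0):
    """Compare peak amplitudes against the statistical data bank."""
    return bool(np.any(np.abs(peaks - baseline_mean) > k * baseline_std))

fs = 2000.0
t = np.arange(0, 2.0, 1 / fs)
x = np.sin(2 * np.pi * 50 * t) + 0.5 * np.sin(2 * np.pi * 100 * t)
print(harmonic_peaks(x, fs, f0=50.0))
```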

  1. An algorithm for automated ROI definition in water or epoxy-filled NEMA NU-2 image quality phantoms.

    PubMed

    Pierce, Larry A; Byrd, Darrin W; Elston, Brian F; Karp, Joel S; Sunderland, John J; Kinahan, Paul E

    2016-01-08

    Drawing regions of interest (ROIs) in positron emission tomography/computed tomography (PET/CT) scans of the National Electrical Manufacturers Association (NEMA) NU-2 Image Quality (IQ) phantom is a time-consuming process that allows for interuser variability in the measurements. In order to reduce operator effort and allow batch processing of IQ phantom images, we propose a fast, robust, automated algorithm for performing IQ phantom sphere localization and analysis. The algorithm is easily altered to accommodate different configurations of the IQ phantom. The proposed algorithm uses information from both the PET and CT image volumes in order to overcome the challenges of detecting the smallest spheres in the PET volume. This algorithm has been released as an open-source plug-in to the Osirix medical image viewing software package. We test the algorithm under various noise conditions, positions within the scanner, air bubbles in the phantom spheres, and scanner misalignment conditions. The proposed algorithm shows run-times between 3 and 4 min and has proven to be robust under all tested conditions, with expected sphere localization deviations of less than 0.2 mm and variations of PET ROI mean and maximum values on the order of 0.5% and 2%, respectively, over multiple PET acquisitions. We conclude that the proposed algorithm is stable when challenged with a variety of physical and imaging anomalies, and that the algorithm can be a valuable tool for those who use the NEMA NU-2 IQ phantom for PET/CT scanner acceptance testing and QA/QC.

  2. Machine Learning and Inverse Problem in Geodynamics

    NASA Astrophysics Data System (ADS)

    Shahnas, M. H.; Yuen, D. A.; Pysklywec, R.

    2017-12-01

    During the past few decades, numerical modeling and traditional HPC have been widely deployed in many diverse fields for problem solving. However, in recent years the rapid emergence of machine learning (ML), a subfield of artificial intelligence (AI), in many fields of science, engineering, and finance seems to mark a turning point in the replacement of traditional modeling procedures with artificial intelligence-based techniques. The study of the circulation in the interior of Earth relies on the study of high-pressure mineral physics, geochemistry, and petrology, where the number of mantle parameters is large and the thermoelastic parameters are highly pressure- and temperature-dependent. More complexity arises from the fact that many of the parameters incorporated in the numerical models as inputs are not yet well established. In such complex systems the application of machine learning algorithms can play a valuable role. Our focus in this study is the application of supervised machine learning (SML) algorithms in predicting mantle properties, with emphasis on SML techniques for solving the inverse problem. As a sample problem we focus on the spin transition in ferropericlase and perovskite that may cause slab and plume stagnation at mid-mantle depths. The degree of stagnation depends on the degree of negative density anomaly at the spin transition zone. The training and testing samples for the machine learning models are produced by numerical convection models with known magnitudes of density anomaly (as the class labels of the samples). The volume fractions of the stagnated slabs and plumes, which can be considered measures of the degree of stagnation, are assigned as sample features. The machine learning models can determine the magnitude of the spin transition-induced density anomalies that can cause flow stagnation at mid-mantle depths. Employing support vector machine (SVM) algorithms, we show that SML techniques can successfully predict the magnitude of the mantle density anomalies and can also be used in characterizing mantle flow patterns. The technique can be extended to more complex problems in mantle dynamics by employing deep learning algorithms for estimation of mantle properties such as viscosity, elastic parameters, and thermal and chemical anomalies.

  3. Integrated System Health Management (ISHM) Implementation in Rocket Engine Testing

    NASA Technical Reports Server (NTRS)

    Figueroa, Fernando; Morris, Jon; Turowski, Mark; Franzl, Richard; Walker, Mark; Kapadia, Ravi; Venkatesh, Meera

    2010-01-01

A pilot operational ISHM capability has been implemented for the E-2 Rocket Engine Test Stand (RETS) and a Chemical Steam Generator (CSG) test article at NASA Stennis Space Center. The implementation currently includes an ISHM computer and a large display in the control room. The paper will address the overall approach, tools, and requirements. It will also address the infrastructure and architecture. Specific anomaly detection algorithms will be discussed regarding leak detection and diagnostics, valve validation, and sensor validation. It will also describe the development and use of a Health Assessment Database System (HADS) as a repository for measurements, health, configuration, and knowledge related to a system with ISHM capability. It will conclude with a discussion of user interfaces and a description of the operation of the ISHM system prior to, during, and after testing.

  4. Reliable detection of fluence anomalies in EPID-based IMRT pretreatment quality assurance using pixel intensity deviations

    PubMed Central

    Gordon, J. J.; Gardner, J. K.; Wang, S.; Siebers, J. V.

    2012-01-01

    Purpose: This work uses repeat images of intensity modulated radiation therapy (IMRT) fields to quantify fluence anomalies (i.e., delivery errors) that can be reliably detected in electronic portal images used for IMRT pretreatment quality assurance. Methods: Repeat images of 11 clinical IMRT fields are acquired on a Varian Trilogy linear accelerator at energies of 6 MV and 18 MV. Acquired images are corrected for output variations and registered to minimize the impact of linear accelerator and electronic portal imaging device (EPID) positioning deviations. Detection studies are performed in which rectangular anomalies of various sizes are inserted into the images. The performance of detection strategies based on pixel intensity deviations (PIDs) and gamma indices is evaluated using receiver operating characteristic analysis. Results: Residual differences between registered images are due to interfraction positional deviations of jaws and multileaf collimator leaves, plus imager noise. Positional deviations produce large intensity differences that degrade anomaly detection. Gradient effects are suppressed in PIDs using gradient scaling. Background noise is suppressed using median filtering. In the majority of images, PID-based detection strategies can reliably detect fluence anomalies of ≥5% in ∼1 mm2 areas and ≥2% in ∼20 mm2 areas. Conclusions: The ability to detect small dose differences (≤2%) depends strongly on the level of background noise. This in turn depends on the accuracy of image registration, the quality of the reference image, and field properties. The longer term aim of this work is to develop accurate and reliable methods of detecting IMRT delivery errors and variations. The ability to resolve small anomalies will allow the accuracy of advanced treatment techniques, such as image guided, adaptive, and arc therapies, to be quantified. PMID:22894421
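
    A minimal sketch of the PID step under stated assumptions: the threshold, window size, and normalization below are illustrative, and the paper's gradient scaling and image registration are omitted.

    ```python
    # Sketch: flag fluence anomalies from pixel intensity deviations (PIDs)
    # between a test EPID image and a registered reference image, with
    # median filtering to suppress background noise.
    import numpy as np
    from scipy.ndimage import median_filter

    def detect_pid_anomalies(test_img, ref_img, threshold=0.02, window=3):
        """Boolean mask of pixels whose relative deviation from the
        reference exceeds `threshold` after median filtering."""
        pid = (test_img - ref_img) / np.maximum(ref_img, 1e-6)  # relative PID
        pid = median_filter(pid, size=window)                   # noise suppression
        return np.abs(pid) > threshold

    ref = np.ones((128, 128))
    test = ref.copy()
    test[60:64, 60:68] *= 1.05            # inserted 5% rectangular anomaly
    mask = detect_pid_anomalies(test, ref)
    print("anomalous pixels flagged:", int(mask.sum()))
    ```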

  5. Implementing Classification on a Munitions Response Project

    DTIC Science & Technology

    2011-12-01

Slide excerpts; recoverable details:
    ● Seed emplacement and an EM61-MK2 detection survey with RTK GPS positioning
    ● Anomalies selected for further investigation using a 5.2 mV threshold in channel 2 (938 anomalies selected)
    ● Cued data collected using the MetalMapper sensor
    ● All QC seeds detected at this threshold, some just inside the 60-cm halo
    ● Instrument verification strip (IVS) reproducibility checks

  6. Anomaly Detection in Nanofibrous Materials by CNN-Based Self-Similarity.

    PubMed

    Napoletano, Paolo; Piccoli, Flavio; Schettini, Raimondo

    2018-01-12

    Automatic detection and localization of anomalies in nanofibrous materials help to reduce the cost of the production process and the time of the post-production visual inspection process. Amongst all the monitoring methods, those exploiting Scanning Electron Microscope (SEM) imaging are the most effective. In this paper, we propose a region-based method for the detection and localization of anomalies in SEM images, based on Convolutional Neural Networks (CNNs) and self-similarity. The method evaluates the degree of abnormality of each subregion of an image under consideration by computing a CNN-based visual similarity with respect to a dictionary of anomaly-free subregions belonging to a training set. The proposed method outperforms the state of the art.
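
    A minimal sketch of the self-similarity scoring under stated assumptions: a generic pretrained ResNet-18 stands in for the authors' CNN feature extractor, and random tensors stand in for SEM patches (requires a recent torchvision).

    ```python
    # Sketch: score patches by maximum CNN-feature similarity to a dictionary
    # of anomaly-free patches; low similarity to every normal patch = anomaly.
    import torch
    import torch.nn.functional as F
    import torchvision.models as models

    backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
    backbone.fc = torch.nn.Identity()      # keep the 512-d pooled features
    backbone.eval()

    @torch.no_grad()
    def features(patches):                 # patches: (N, 3, 224, 224) tensor
        return F.normalize(backbone(patches), dim=1)

    # Hypothetical data: anomaly-free dictionary patches and test patches.
    normal_patches = torch.rand(16, 3, 224, 224)
    test_patches = torch.rand(4, 3, 224, 224)

    dictionary = features(normal_patches)            # (16, 512)
    queries = features(test_patches)                 # (4, 512)
    similarity = queries @ dictionary.T              # cosine similarities
    anomaly_score = 1.0 - similarity.max(dim=1).values
    print(anomaly_score)                             # higher = more anomalous
    ```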

  7. PSO (Particle Swarm Optimization) for Interpretation of Magnetic Anomalies Caused by Simple Geometrical Structures

    NASA Astrophysics Data System (ADS)

    Essa, Khalid S.; Elhussein, Mahmoud

    2018-04-01

A new, efficient approach for estimating the parameters that control source dimensions from magnetic anomaly profile data, based on the particle swarm optimization (PSO) algorithm, is presented. The PSO algorithm is applied to interpret magnetic anomaly profiles using a new formula for isolated sources embedded in the subsurface. The model parameters interpreted here are the depth of the body, the amplitude coefficient, the angle of effective magnetization, the shape factor, and the horizontal coordinates of the source. The model parameters estimated by the present technique, in particular the depths of the buried structures, were found to be in excellent agreement with the true parameters. The root mean square (RMS) error is used as the criterion for the misfit between the observed and computed anomalies. Inversion of noise-free synthetic data, noisy synthetic data containing different levels of random noise (5, 10, 15, and 20%) as well as multiple structures, and in addition two real field datasets from the USA and Egypt demonstrates the viability of the approach. The final estimates of the different parameters match those given in the published literature and inferred from geologic results.
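
    A minimal sketch of the inversion loop under stated assumptions: the forward model below is an illustrative placeholder (amplitude A, depth z, center x0, shape factor q), not the paper's formula, and the PSO constants are generic defaults.

    ```python
    # Sketch: global-best PSO minimizing the RMS misfit between observed and
    # modeled anomaly profiles for a simple parameterized source.
    import numpy as np

    rng = np.random.default_rng(1)
    x = np.linspace(-50.0, 50.0, 101)

    def forward(p):                          # p = [A, z, x0, q] (placeholder)
        A, z, x0, q = p
        return A * z / ((x - x0) ** 2 + z ** 2) ** q

    true_params = np.array([500.0, 10.0, 5.0, 1.0])
    observed = forward(true_params) + rng.normal(0, 0.05, x.size)

    def rms(p):
        return np.sqrt(np.mean((observed - forward(p)) ** 2))

    n_particles, n_iter = 40, 300
    lo = np.array([1.0, 1.0, -20.0, 0.5])    # search-space bounds
    hi = np.array([1000.0, 30.0, 20.0, 2.0])
    pos = rng.uniform(lo, hi, (n_particles, 4))
    vel = np.zeros_like(pos)
    pbest, pbest_val = pos.copy(), np.array([rms(p) for p in pos])
    gbest = pbest[pbest_val.argmin()].copy()

    for _ in range(n_iter):
        r1, r2 = rng.random((2, n_particles, 4))
        vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, lo, hi)
        vals = np.array([rms(p) for p in pos])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
        gbest = pbest[pbest_val.argmin()].copy()

    print("recovered [A, z, x0, q]:", np.round(gbest, 2))
    ```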

  8. Modeling And Detecting Anomalies In Scada Systems

    NASA Astrophysics Data System (ADS)

    Svendsen, Nils; Wolthusen, Stephen

    The detection of attacks and intrusions based on anomalies is hampered by the limits of specificity underlying the detection techniques. However, in the case of many critical infrastructure systems, domain-specific knowledge and models can impose constraints that potentially reduce error rates. At the same time, attackers can use their knowledge of system behavior to mask their manipulations, causing adverse effects to observed only after a significant period of time. This paper describes elementary statistical techniques that can be applied to detect anomalies in critical infrastructure networks. A SCADA system employed in liquefied natural gas (LNG) production is used as a case study.

  9. On estimating gravity anomalies: A comparison of least squares collocation with least squares techniques

    NASA Technical Reports Server (NTRS)

    Argentiero, P.; Lowrey, B.

    1976-01-01

    The least squares collocation algorithm for estimating gravity anomalies from geodetic data is shown to be an application of the well known regression equations which provide the mean and covariance of a random vector (gravity anomalies) given a realization of a correlated random vector (geodetic data). It is also shown that the collocation solution for gravity anomalies is equivalent to the conventional least-squares-Stokes' function solution when the conventional solution utilizes properly weighted zero a priori estimates. The mathematical and physical assumptions underlying the least squares collocation estimator are described, and its numerical properties are compared with the numerical properties of the conventional least squares estimator.
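
    For orientation, the collocation estimate can be written compactly as a Gaussian conditional mean; the notation below is standard, not copied from the paper.

    ```latex
    % Jointly Gaussian, zero-mean model: g = gravity anomalies, d = geodetic data.
    % Collocation estimate (conditional mean of g given d):
    \hat{g} = C_{gd}\left(C_{dd} + C_{nn}\right)^{-1} d
    % Error covariance of the estimate:
    E = C_{gg} - C_{gd}\left(C_{dd} + C_{nn}\right)^{-1} C_{dg}
    % C_{gd}: cross-covariance of anomalies and data; C_{dd}: data signal
    % covariance; C_{nn}: observation-noise covariance.
    ```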

  10. First and second trimester screening for fetal structural anomalies.

    PubMed

    Edwards, Lindsay; Hui, Lisa

    2018-04-01

Fetal structural anomalies are found in up to 3% of all pregnancies and ultrasound-based screening has been an integral part of routine prenatal care for decades. The prenatal detection of fetal anomalies allows for optimal perinatal management, providing expectant parents with opportunities for additional imaging, genetic testing, and the provision of information regarding prognosis and management options. Approximately one-half of all major structural anomalies can now be detected in the first trimester, including acrania/anencephaly, abdominal wall defects, holoprosencephaly and cystic hygromata. Due to the ongoing development of some organ systems, however, some anomalies will not be evident until later in the pregnancy. To this end, the second trimester anatomy scan is recommended by professional societies as the standard investigation for the detection of fetal structural anomalies. The reported detection rates of structural anomalies vary according to the organ system being examined, and are also dependent upon factors such as the equipment settings and sonographer experience. Technological advances over the past two decades continue to support the role of ultrasound as the primary imaging modality in pregnancy, and the safety of ultrasound for the developing fetus is well established. With increasing capabilities and experience, detailed examination of the central nervous system and cardiovascular system is possible, with dedicated examinations such as the fetal neurosonogram and the fetal echocardiogram now widely performed in tertiary centers. Magnetic resonance imaging (MRI) is well recognized for its role in the assessment of fetal brain anomalies; other potential indications for fetal MRI include lung volume measurement (in cases of congenital diaphragmatic hernia), and pre-surgical planning prior to fetal spina bifida repair. When a major structural abnormality is detected prenatally, genetic testing with chromosomal microarray is recommended over routine karyotype due to its higher genomic resolution. Copyright © 2017 Elsevier Ltd. All rights reserved.

  11. Identifying Threats Using Graph-based Anomaly Detection

    NASA Astrophysics Data System (ADS)

    Eberle, William; Holder, Lawrence; Cook, Diane

Much of the data collected during the monitoring of cyber and other infrastructures is structural in nature, consisting of various types of entities and relationships between them. The detection of threatening anomalies in such data is crucial to protecting these infrastructures. We present an approach to detecting anomalies in a graph-based representation of such data that explicitly represents these entities and relationships. The approach consists of first finding normative patterns in the data using graph-based data mining and then searching for small, unexpected deviations from these normative patterns, assuming illicit behavior tries to mimic legitimate, normative behavior. The approach is evaluated using several synthetic and real-world datasets. Results show that the approach has high true-positive rates, low false-positive rates, and is capable of detecting complex structural anomalies in real-world domains including email communications, cellphone calls, and network traffic.

  12. A comparative study of linear and nonlinear anomaly detectors for hyperspectral imagery

    NASA Astrophysics Data System (ADS)

    Goldberg, Hirsh; Nasrabadi, Nasser M.

    2007-04-01

In this paper we implement various linear and nonlinear subspace-based anomaly detectors for hyperspectral imagery. First, a dual-window technique is used to separate the local area around each pixel into two regions - an inner-window region (IWR) and an outer-window region (OWR). Pixel spectra from each region are projected onto a subspace which is defined by projection bases that can be generated in several ways. Here we use three common pattern classification techniques (Principal Component Analysis (PCA), Fisher Linear Discriminant (FLD) Analysis, and the Eigenspace Separation Transform (EST)) to generate projection vectors. In addition to these three algorithms, the well-known Reed-Xiaoli (RX) anomaly detector is also implemented. Each of the four linear methods is then implicitly defined in a high- (possibly infinite-) dimensional feature space by using a nonlinear mapping associated with a kernel function. Using a common machine-learning technique known as the kernel trick, all dot products in the feature space are replaced with a Mercer kernel function defined in terms of the original input data space. To determine how anomalous a given pixel is, we then project the current test pixel spectrum and the spectral mean vector of the OWR onto the linear and nonlinear projection vectors in order to exploit the statistical differences between the IWR and OWR pixels. Anomalies are detected if the separation between the projections of the current test pixel spectrum and the OWR mean spectrum is greater than a certain threshold. Comparisons are made using receiver operating characteristic (ROC) curves.
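
    As a concrete reference point, a sketch of the RX detector in its global-background form; the dual-window variant in the paper would estimate the mean and covariance from the OWR only, a detail simplified here.

    ```python
    # Sketch: global RX anomaly detector - the Mahalanobis distance of each
    # pixel spectrum from the background mean under the background covariance.
    import numpy as np

    def rx_detector(cube):
        """cube: (rows, cols, bands) hyperspectral image.
        Returns a (rows, cols) map of RX anomaly scores."""
        rows, cols, bands = cube.shape
        X = cube.reshape(-1, bands)
        mu = X.mean(axis=0)
        cov = np.cov(X, rowvar=False) + 1e-6 * np.eye(bands)  # regularized
        cov_inv = np.linalg.inv(cov)
        diff = X - mu
        scores = np.einsum("ij,jk,ik->i", diff, cov_inv, diff)
        return scores.reshape(rows, cols)

    rng = np.random.default_rng(0)
    cube = rng.normal(size=(64, 64, 10))
    cube[32, 32] += 5.0                    # implanted spectral anomaly
    scores = rx_detector(cube)
    print("max score at:", np.unravel_index(scores.argmax(), scores.shape))
    ```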

  13. An Analysis of Drop Outs and Unusual Behavior from Primary and Secondary Radar

    NASA Astrophysics Data System (ADS)

    Allen, Nicholas J.

An evaluation of the radar systems in the Red River Valley of North Dakota (ND) and surrounding areas was performed to assess their ability to provide Detect and Avoid (DAA) capabilities for manned and unmanned aircraft systems (UAS). Additionally, the data were analyzed for their feasibility for use in autonomous Air Traffic Control (ATC) systems in the future. With the almost certain increase in airspace congestion over the coming years, the need for a robust and accurate radar system is crucial. This study focused on the Airport Surveillance Radar (ASR) at Fargo, ND and the Air Route Surveillance Radar at Finley, ND. Each of these radar sites contains primary and secondary radars. It was found that both locations exhibit data anomalies, such as drop outs, altitude outliers, prolonged altitude failures, repeated data, and multiple aircraft with the same identification (ID) number. Four weeks of data provided by Harris Corporation throughout the year were analyzed using a MATLAB algorithm developed to identify the data anomalies. The results showed Fargo intercepts on average 450 aircraft, while Finley intercepts 1274 aircraft. Of these aircraft, an average of 34% experienced drop outs at Fargo and 69% at Finley. With average drop out durations of 23.58 seconds at Fargo and 42.45 seconds at Finley, and several lasting more than a few minutes, these data anomalies can persist for extended periods. Between 1% and 26% of aircraft experienced the other data anomalies, depending on the type of data anomaly and location. The largest proportion of data anomalies occurred when aircraft were near airports or the edge of the effective radar radius. It was also discovered that drop outs, altitude outliers, and repeated data are radar-induced errors, while prolonged altitude failures and multiple aircraft with the same ID are transponder-induced errors. A risk assessment for each data anomaly, based on the severity and occurrence of the event, was also produced. The findings from this report will provide meaningful data and likely influence the development of UAS DAA logic and the logic behind autonomous ATC systems.

  14. Real-time Bayesian anomaly detection in streaming environmental data

    NASA Astrophysics Data System (ADS)

    Hill, David J.; Minsker, Barbara S.; Amir, Eyal

    2009-04-01

    With large volumes of data arriving in near real time from environmental sensors, there is a need for automated detection of anomalous data caused by sensor or transmission errors or by infrequent system behaviors. This study develops and evaluates three automated anomaly detection methods using dynamic Bayesian networks (DBNs), which perform fast, incremental evaluation of data as they become available, scale to large quantities of data, and require no a priori information regarding process variables or types of anomalies that may be encountered. This study investigates these methods' abilities to identify anomalies in eight meteorological data streams from Corpus Christi, Texas. The results indicate that DBN-based detectors, using either robust Kalman filtering or Rao-Blackwellized particle filtering, outperform a DBN-based detector using Kalman filtering, with the former having false positive/negative rates of less than 2%. These methods were successful at identifying data anomalies caused by two real events: a sensor failure and a large storm.
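
    A minimal sketch of the filtering idea under a simplifying assumption: a scalar random-walk state with a plain Kalman filter, rather than the paper's DBN-based robust Kalman and Rao-Blackwellized particle filters; the anomaly flag fires when the normalized innovation is improbably large.

    ```python
    # Sketch: Kalman-filter innovation test for anomalies in a sensor stream.
    import numpy as np

    def kalman_anomalies(y, q=1e-3, r=0.1, z_crit=4.0):
        x, p = y[0], 1.0
        flags = np.zeros(y.size, dtype=bool)
        for t in range(1, y.size):
            p = p + q                         # predict (random-walk state)
            s = p + r                         # innovation variance
            innov = y[t] - x
            flags[t] = abs(innov) / np.sqrt(s) > z_crit
            if not flags[t]:                  # update only with trusted data
                k = p / s
                x = x + k * innov
                p = (1.0 - k) * p
        return flags

    rng = np.random.default_rng(2)
    y = np.cumsum(rng.normal(0, 0.03, 500)) + rng.normal(0, 0.3, 500)
    y[250] += 5.0                             # injected sensor spike
    print("flagged indices:", np.flatnonzero(kalman_anomalies(y)))
    ```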

  15. Conditional Anomaly Detection with Soft Harmonic Functions

    PubMed Central

    Valko, Michal; Kveton, Branislav; Valizadegan, Hamed; Cooper, Gregory F.; Hauskrecht, Milos

    2012-01-01

    In this paper, we consider the problem of conditional anomaly detection that aims to identify data instances with an unusual response or a class label. We develop a new non-parametric approach for conditional anomaly detection based on the soft harmonic solution, with which we estimate the confidence of the label to detect anomalous mislabeling. We further regularize the solution to avoid the detection of isolated examples and examples on the boundary of the distribution support. We demonstrate the efficacy of the proposed method on several synthetic and UCI ML datasets in detecting unusual labels when compared to several baseline approaches. We also evaluate the performance of our method on a real-world electronic health record dataset where we seek to identify unusual patient-management decisions. PMID:25309142

  16. Conditional Anomaly Detection with Soft Harmonic Functions.

    PubMed

    Valko, Michal; Kveton, Branislav; Valizadegan, Hamed; Cooper, Gregory F; Hauskrecht, Milos

    2011-01-01

    In this paper, we consider the problem of conditional anomaly detection that aims to identify data instances with an unusual response or a class label. We develop a new non-parametric approach for conditional anomaly detection based on the soft harmonic solution, with which we estimate the confidence of the label to detect anomalous mislabeling. We further regularize the solution to avoid the detection of isolated examples and examples on the boundary of the distribution support. We demonstrate the efficacy of the proposed method on several synthetic and UCI ML datasets in detecting unusual labels when compared to several baseline approaches. We also evaluate the performance of our method on a real-world electronic health record dataset where we seek to identify unusual patient-management decisions.

  17. Anomaly Detection in Dynamic Networks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Turcotte, Melissa

    2014-10-14

Anomaly detection in dynamic communication networks has many important security applications. These networks can be extremely large and so detecting any changes in their structure can be computationally challenging; hence, computationally fast, parallelisable methods for monitoring the network are paramount. For this reason the methods presented here use independent node and edge based models to detect locally anomalous substructures within communication networks. As a first stage, the aim is to detect changes in the data streams arising from node or edge communications. Throughout the thesis simple, conjugate Bayesian models for counting processes are used to model these data streams. A second stage of analysis can then be performed on a much reduced subset of the network comprising nodes and edges which have been identified as potentially anomalous in the first stage. The first method assumes communications in a network arise from an inhomogeneous Poisson process with piecewise constant intensity. Anomaly detection is then treated as a changepoint problem on the intensities. The changepoint model is extended to incorporate seasonal behavior inherent in communication networks. This seasonal behavior is also viewed as a changepoint problem acting on a piecewise constant Poisson process. In a static time frame, inference is made on this extended model via a Gibbs sampling strategy. In a sequential time frame, where the data arrive as a stream, a novel, fast Sequential Monte Carlo (SMC) algorithm is introduced to sample from the sequence of posterior distributions of the change points over time. A second method is considered for monitoring communications in a large scale computer network. The usage patterns in these types of networks are very bursty in nature and do not fit a Poisson process model. For tractable inference, discrete time models are considered, where the data are aggregated into discrete time periods and probability models are fitted to the communication counts. In a sequential analysis, anomalous behavior is then identified from outlying behavior with respect to the fitted predictive probability models. Seasonality is again incorporated into the model and is treated as a changepoint model on the transition probabilities of a discrete time Markov process. Second stage analytics are then developed which combine anomalous edges to identify anomalous substructures in the network.
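
    A minimal sketch of the first-stage idea under simplifying assumptions: a single edge's count stream with conjugate Gamma-Poisson updating, rather than the thesis's full changepoint and SMC machinery; each new count is scored by its posterior-predictive tail probability.

    ```python
    # Sketch: sequential predictive p-values for Poisson counts under a
    # conjugate Gamma(a, b) prior on the rate (posterior predictive is
    # negative binomial with r = a, p = b / (b + 1)).
    import numpy as np
    from scipy.stats import nbinom

    def monitor_counts(counts, a=1.0, b=1.0, alpha=1e-3):
        flags = []
        for x in counts:
            r, p = a, b / (b + 1.0)          # current posterior predictive
            p_tail = nbinom.sf(x - 1, r, p)  # P(X >= x)
            flags.append(p_tail < alpha)
            a, b = a + x, b + 1.0            # conjugate update
        return np.array(flags)

    rng = np.random.default_rng(3)
    counts = rng.poisson(5.0, 200)
    counts[150] = 40                         # burst of communications
    print("flagged time bins:", np.flatnonzero(monitor_counts(counts)))
    ```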

  18. Contemporaneous disequilibrium of bio-optical properties in the Southern Ocean

    NASA Astrophysics Data System (ADS)

    Kahru, Mati; Lee, Zhongping; Mitchell, B. Greg

    2017-03-01

Significant changes in satellite-detected net primary production (NPP, mg C m⁻² d⁻¹) were observed in the Southern Ocean during 2011-2016: an increase in the Pacific sector and a decrease in the Atlantic sector. While no clear physical forcing was identified, we hypothesize that the changes in NPP were associated with changes in the phytoplankton community and reflected in the concomitant bio-optical properties. Satellite algorithms for chlorophyll a concentration (Chl a, mg m⁻³) use a combination of estimates of the remote sensing reflectance Rrs(λ) that are statistically fitted to a global reference data set. In any particular region or point in space/time the estimate produced by the global "mean" algorithm can deviate from the true value. The reflectance anomaly (RA) is intended to remove the first-order variability in Rrs(λ) associated with Chl a and reveal bio-optical properties that are due to the composition of phytoplankton and associated materials. Time series of RA showed variability at multiple scales, including the life span of the sensor, multiyear, and annual scales. Models of plankton functional types using estimated Chl a as input cannot be expected to correctly resolve regional and seasonal anomalies due to biases in the Chl a estimate on which they are based. While a statistical model using RA(λ) time series can predict the time series of NPP with high accuracy (R² = 0.82) in both the Pacific and Atlantic regions, the underlying mechanisms in terms of phytoplankton groups and the associated materials remain elusive.

  19. Assessing Space and Satellite Environment and System Security

    NASA Astrophysics Data System (ADS)

    Haith, G.; Upton, S.

Satellites and other spacecraft are key assets and critical vulnerabilities in our communications, surveillance and defense infrastructure. Despite their strategic importance, there are significant gaps in our real-time knowledge of satellite security. One reason is the lack of infrastructure and applications to filter and process the overwhelming amounts of relevant data. Some efforts are addressing this challenge by fusing the data gathered from ground-, air- and space-based sensors to detect and categorize anomalous situations. The aim is to provide decision support for Space Situational Awareness (SSA) and Defensive Counterspace (DCS). Most results have not yielded estimates of the impact and cost of a given situation or suggested courses of action (level 3 data fusion). This paper describes an effort to provide high-level data fusion for SSA/DCS through two complementary thrusts: threat scenario simulation with Automatic Red Teaming (ART), and historical data warehousing and mining. ART uses stochastic search algorithms (e.g., evolutionary algorithms) to evolve strategies in agent-based simulations. ART provides techniques to formally specify anomalous condition scenarios envisioned by subject matter experts and to explore alternative scenarios. The simulation data can then support impact estimates and course of action evaluations. The data mining thrust has focused on finding correlations between subsystem anomalies on MightySat II and publicly available space weather data. This paper describes the ART approach, some potential correlations discovered between satellite subsystem anomalies and space weather events, and future work planned on the project.

  20. Anomaly-Related Pathologic Atlantoaxial Displacement in Pediatric Patients.

    PubMed

    Pavlova, Olga M; Ryabykh, Sergey O; Burcev, Alexander V; Gubin, Alexander V

    2018-06-01

To analyze clinical and radiologic features of pathologic atlantoaxial displacement (PAAD) in pediatric patients and to compose a treatment algorithm for anomaly-related PAAD. Criteria for different types of PAAD and treatment algorithms have been widely reported in the literature but are difficult to apply to patients with odontoid abnormalities, C2-C3 block, spina bifida C1, and children. We evaluated the results of treatment of 29 pediatric patients with PAAD caused by congenital anomalies of the craniovertebral junction (CVJ), treated at the Ilizarov Center in 2009-2017, including 20 patients with atlantoaxial displacement (AAD) and 9 patients with atlantoaxial rotatory fixation. There were 14 males (48.3%) and 15 females (51.7%). We singled out 3 groups of patients: nonsyndromic (6 patients, 20.7%), Klippel-Feil syndrome (KFS; 13 patients, 44.8%), and syndromic (10 patients, 34.5%). Odontoid abnormalities and C1 dysplasia were widely represented in the syndromic group. Local symptoms predominated in the nonsyndromic and KFS groups. In the syndromic group, all patients had AAD and myelopathy. A pronounced decrease in the space available for the cord at C1 and an increase in the anterior atlantodental interval were noted compared with the other groups. We present a unified treatment algorithm for pediatric anomaly-related PAAD. Syndromic AAD is often accompanied by anterior and central dislocation, myelopathy, and atlantooccipital dissociation. These patients require early aggressive surgical treatment. Nonsyndromic and KFS-associated AAD, atlantoaxial subluxation, and atlantoaxial rotatory fixation often manifest with local symptoms and require elimination of CVJ instability. Existing classifications of symptomatic atlantoaxial displacement are not always suitable for patients with CVJ abnormalities. Copyright © 2018 Elsevier Inc. All rights reserved.

  1. Rate based failure detection

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Johnson, Brett Emery Trabun; Gamage, Thoshitha Thanushka; Bakken, David Edward

This disclosure describes, in part, a system management component and failure detection component for use in a power grid data network to identify anomalies within the network and systematically adjust the quality of service of data published by publishers and subscribed to by subscribers within the network. In one implementation, subscribers may identify a desired data rate, a minimum acceptable data rate, desired latency, minimum acceptable latency and a priority for each subscription. The failure detection component may identify an anomaly within the network and a source of the anomaly. Based on the identified anomaly, data rates and/or data paths may be adjusted in real-time to ensure that the power grid data network does not become overloaded and/or fail.

  2. Anomaly Detection in Nanofibrous Materials by CNN-Based Self-Similarity

    PubMed Central

    Schettini, Raimondo

    2018-01-01

    Automatic detection and localization of anomalies in nanofibrous materials help to reduce the cost of the production process and the time of the post-production visual inspection process. Amongst all the monitoring methods, those exploiting Scanning Electron Microscope (SEM) imaging are the most effective. In this paper, we propose a region-based method for the detection and localization of anomalies in SEM images, based on Convolutional Neural Networks (CNNs) and self-similarity. The method evaluates the degree of abnormality of each subregion of an image under consideration by computing a CNN-based visual similarity with respect to a dictionary of anomaly-free subregions belonging to a training set. The proposed method outperforms the state of the art. PMID:29329268

  3. Paternal psychological response after ultrasonographic detection of structural fetal anomalies with a comparison to maternal response: a cohort study.

    PubMed

    Kaasen, Anne; Helbig, Anne; Malt, Ulrik Fredrik; Naes, Tormod; Skari, Hans; Haugen, Guttorm Nils

    2013-07-12

In Norway almost all pregnant women attend one routine ultrasound examination. Detection of fetal structural anomalies triggers psychological stress responses in the women affected. Despite the frequent use of ultrasound examination in pregnancy, little attention has been devoted to the psychological response of the expectant father following the detection of fetal anomalies. This is important for later fatherhood and the psychological interaction within the couple. We aimed to describe paternal psychological responses shortly after detection of structural fetal anomalies by ultrasonography, and to compare paternal and maternal responses within the same couple. A prospective observational study was performed at a tertiary referral centre for fetal medicine. Pregnant women with a structural fetal anomaly detected by ultrasound and their partners (study group, n=155) and 100 with normal ultrasound findings (comparison group) were included shortly after sonographic examination (inclusion period: May 2006-February 2009). Gestational age was >12 weeks. We used psychometric questionnaires to assess self-reported social dysfunction, health perception, and psychological distress (intrusion, avoidance, arousal, anxiety, and depression): the Impact of Event Scale, the General Health Questionnaire, and the Edinburgh Postnatal Depression Scale. Fetal anomalies were classified according to severity and diagnostic or prognostic ambiguity at the time of assessment. Median (range) gestational age at inclusion in the study and comparison groups was 19 (12-38) and 19 (13-22) weeks, respectively. Men and women in the study group had significantly higher levels of psychological distress than men and women in the comparison group on all psychometric endpoints. The lowest level of distress in the study group was associated with the least severe anomalies with no diagnostic or prognostic ambiguity (p < 0.033). Men had lower scores than women on all psychometric outcome variables. The correlation in distress scores between men and women was high in the fetal anomaly group (p < 0.001), but non-significant in the comparison group. Severity of the anomaly, including ambiguity, significantly influenced the paternal response. Men reported lower scores on all psychometric outcomes than women. This knowledge may facilitate support for both expectant parents to reduce strain within the family after detection of a fetal anomaly.

  4. Using ADOPT Algorithm and Operational Data to Discover Precursors to Aviation Adverse Events

    NASA Technical Reports Server (NTRS)

    Janakiraman, Vijay; Matthews, Bryan; Oza, Nikunj

    2018-01-01

The US National Airspace System (NAS) is making its transition to the NextGen system, and assuring safety is one of the top priorities in NextGen. At present, safety is managed reactively (corrected after the occurrence of an unsafe event). While this strategy works for current operations, it may soon become ineffective for future airspace designs and high-density operations. There is a need for proactive management of safety risks by identifying hidden and "unknown" risks and evaluating the impacts on future operations. To this end, NASA Ames has developed data mining algorithms that find anomalies and precursors (high-risk states) to safety issues in the NAS. In this paper, we describe a recently developed algorithm called ADOPT that analyzes large volumes of data and automatically identifies precursors from real-world data. Precursors help in detecting safety risks early so that the operator can mitigate the risk in time. In addition, precursors also help identify causal factors and help predict the safety incident. The ADOPT algorithm scales well to large data sets and multidimensional time series, significantly reduces analyst time, and quantifies multiple safety risks, giving a holistic view of safety, among other benefits. This paper details the algorithm and includes several case studies to demonstrate its application to discover the "known" and "unknown" safety precursors in aviation operations.

  5. Apollo-Soyuz pamphlet no. 4: Gravitational field. [experimental design

    NASA Technical Reports Server (NTRS)

    Page, L. W.; From, T. P.

    1977-01-01

    Two Apollo Soyuz experiments designed to detect gravity anomalies from spacecraft motion are described. The geodynamics experiment (MA-128) measured large-scale gravity anomalies by detecting small accelerations of Apollo in the 222 km orbit, using Doppler tracking from the ATS-6 satellite. Experiment MA-089 measured 300 km anomalies on the earth's surface by detecting minute changes in the separation between Apollo and the docking module. Topics discussed in relation to these experiments include the Doppler effect, gravimeters, and the discovery of mascons on the moon.

  6. Thermal wake/vessel detection technique

    DOEpatents

    Roskovensky, John K [Albuquerque, NM; Nandy, Prabal [Albuquerque, NM; Post, Brian N [Albuquerque, NM

    2012-01-10

    A computer-automated method for detecting a vessel in water based on an image of a portion of Earth includes generating a thermal anomaly mask. The thermal anomaly mask flags each pixel of the image initially deemed to be a wake pixel based on a comparison of a thermal value of each pixel against other thermal values of other pixels localized about each pixel. Contiguous pixels flagged by the thermal anomaly mask are grouped into pixel clusters. A shape of each of the pixel clusters is analyzed to determine whether each of the pixel clusters represents a possible vessel detection event. The possible vessel detection events are represented visually within the image.
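
    A minimal sketch in the spirit of the claims, not the patented implementation itself: the thresholds, window size, and elongation test below are illustrative assumptions standing in for the patent's thermal-value comparison and shape analysis.

    ```python
    # Sketch: flag pixels anomalously warm relative to their local
    # neighborhood, group contiguous flags into clusters, and keep
    # elongated clusters as candidate wake/vessel detections.
    import numpy as np
    from scipy.ndimage import uniform_filter, label, find_objects

    def thermal_anomaly_mask(img, window=15, k=3.0):
        # Flag pixels more than k local standard deviations above the
        # local mean over a window x window neighborhood.
        local_mean = uniform_filter(img, size=window)
        local_sq = uniform_filter(img ** 2, size=window)
        local_std = np.sqrt(np.maximum(local_sq - local_mean ** 2, 1e-12))
        return img > local_mean + k * local_std

    def candidate_vessels(mask, min_pixels=5, min_aspect=2.0):
        # Crude shape test: keep sufficiently large, elongated clusters.
        labels, _ = label(mask)
        hits = []
        for i, sl in enumerate(find_objects(labels), start=1):
            h = sl[0].stop - sl[0].start
            w = sl[1].stop - sl[1].start
            size = int((labels[sl] == i).sum())
            if size >= min_pixels and max(h, w) / min(h, w) >= min_aspect:
                hits.append(sl)
        return hits

    rng = np.random.default_rng(4)
    sst = rng.normal(290.0, 0.05, (200, 200))  # synthetic thermal image (K)
    sst[100, 80:120] += 1.0                    # thin warm streak (wake-like)
    print("candidates:", candidate_vessels(thermal_anomaly_mask(sst)))
    ```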

  7. Anomaly Monitoring Method for Key Components of Satellite

    PubMed Central

    Fan, Linjun; Xiao, Weidong; Tang, Jun

    2014-01-01

This paper presents a fault diagnosis method for key components of a satellite, called the Anomaly Monitoring Method (AMM), which is made up of state estimation based on Multivariate State Estimation Techniques (MSET) and anomaly detection based on the Sequential Probability Ratio Test (SPRT). On the basis of a failure analysis of lithium-ion batteries (LIBs), we divided LIB failures into internal failure, external failure, and thermal runaway, and selected the electrolyte resistance (R_e) and the charge transfer resistance (R_ct) as the key parameters for state estimation. Then, using actual in-orbit telemetry data for the key LIB parameters, we obtained the actual residual value (R_X) and healthy residual value (R_L) of the LIBs from the MSET state estimation, and from these residual values we detected anomalous states with the SPRT. Lastly, we conducted a worked example of AMM for LIBs and validated its feasibility and effectiveness by comparing its results with those of a threshold detection method (TDM). PMID:24587703
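
    A minimal sketch of the SPRT stage under a Gaussian-residual assumption; the mean shift, noise level, and error rates below are illustrative, not the paper's values.

    ```python
    # Sketch: SPRT on residuals - accumulate the log-likelihood ratio of a
    # "degraded" mean-shift model versus the healthy model, and alarm when
    # it crosses the Wald threshold.
    import numpy as np

    def sprt(residuals, mu1=1.0, sigma=0.5, alpha=0.01, beta=0.01):
        upper = np.log((1 - beta) / alpha)    # accept "degraded": alarm
        lower = np.log(beta / (1 - alpha))    # accept "healthy": reset
        llr = 0.0
        for t, r in enumerate(residuals):
            # log[N(r; mu1, sigma) / N(r; 0, sigma)]
            llr += (mu1 / sigma ** 2) * (r - mu1 / 2.0)
            if llr >= upper:
                return t                      # alarm index
            if llr <= lower:
                llr = 0.0                     # restart the test
        return None

    rng = np.random.default_rng(5)
    healthy = rng.normal(0.0, 0.5, 300)
    faulty = rng.normal(1.0, 0.5, 50)          # mean shift after sample 300
    print("alarm at sample:", sprt(np.concatenate([healthy, faulty])))
    ```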

  8. Enhanced flyby science with onboard computer vision: Tracking and surface feature detection at small bodies

    NASA Astrophysics Data System (ADS)

    Fuchs, Thomas J.; Thompson, David R.; Bue, Brian D.; Castillo-Rogez, Julie; Chien, Steve A.; Gharibian, Dero; Wagstaff, Kiri L.

    2015-10-01

Spacecraft autonomy is crucial to increase the science return of optical remote sensing observations at distant primitive bodies. To date, most small-body exploration has involved short-timescale flybys that execute prescripted data collection sequences. Light-time delay means that the spacecraft must operate completely autonomously without direct control from the ground, but in most cases the physical properties and morphologies of prospective targets are unknown before the flyby. Surface features of interest are highly localized, and successful observations must account for geometry and illumination constraints. Under these circumstances onboard computer vision can improve science yield by responding immediately to collected imagery. It can reacquire bad data or identify features of opportunity for additional targeted measurements. We present a comprehensive framework for onboard computer vision for flyby missions at small bodies. We introduce novel algorithms for target tracking, target segmentation, surface feature detection, and anomaly detection. The performance and generalization power are evaluated in detail using expert annotations on data sets from previous encounters with primitive bodies.

  9. Prevalence and distribution of dental anomalies in orthodontic patients.

    PubMed

    Montasser, Mona A; Taha, Mahasen

    2012-01-01

To study the prevalence and distribution of dental anomalies in a sample of orthodontic patients. The dental casts, intraoral photographs, and panoramic and lateral cephalometric radiographs of 509 Egyptian orthodontic patients were studied. Patients were examined for dental anomalies in number, size, shape, position, and structure. The prevalence of each dental anomaly was calculated and compared between sexes. Of the total study sample, 32.6% of the patients had at least one dental anomaly other than agenesis of third molars; 32.1% of females and 33.5% of males had at least one dental anomaly other than agenesis of third molars. The most commonly detected dental anomalies were impaction (12.8%) and ectopic eruption (10.8%). The total prevalence of hypodontia (excluding third molars) and hyperdontia was 2.4% and 2.8%, respectively, with similar distributions in females and males. Gemination and accessory roots were reported in this study; each of these anomalies was detected in 0.2% of patients. In addition to genetic and racial factors, environmental factors could have a more important influence on the prevalence of dental anomalies in every population. Impaction, ectopic eruption, hyperdontia, hypodontia, and microdontia were the most common dental anomalies, while fusion and dentinogenesis imperfecta were absent.

  10. Stochastic subset selection for learning with kernel machines.

    PubMed

    Rhinelander, Jason; Liu, Xiaoping P

    2012-06-01

Kernel machines have gained much popularity in applications of machine learning. Support vector machines (SVMs) are a subset of kernel machines and generalize well for classification, regression, and anomaly detection tasks. The training procedure for traditional SVMs involves solving a quadratic programming (QP) problem. The QP problem scales superlinearly in computational effort with the number of training samples and is often used for the offline batch processing of data. Kernel machines operate by retaining a subset of observed data during training. The data vectors contained within this subset are referred to as support vectors (SVs). The work presented in this paper introduces a subset selection method for the use of kernel machines in online, changing environments. Our algorithm works by using a stochastic indexing technique when selecting a subset of SVs for computing the kernel expansion. The work described here is novel because it separates the selection of kernel basis functions from the training algorithm used. The subset selection algorithm presented here can be used in conjunction with any online training technique. It is important for online kernel machines to be computationally efficient due to the real-time requirements of online environments. Our algorithm is an important contribution because it scales linearly with the number of training samples and is compatible with current training techniques. Our algorithm outperforms standard techniques in terms of computational efficiency and provides increased recognition accuracy in our experiments. We provide results from experiments using both simulated and real-world data sets to verify our algorithm.

  11. Constraining mass anomalies in the interior of spherical bodies using Trans-dimensional Bayesian Hierarchical inference.

    NASA Astrophysics Data System (ADS)

    Izquierdo, K.; Lekic, V.; Montesi, L.

    2017-12-01

Gravity inversions are especially important for planetary applications since measurements of the variations in gravitational acceleration are often the only constraint available to map out lateral density variations in the interiors of planets and other Solar system objects. Currently, global gravity data are available for the terrestrial planets and the Moon. Although several methods for inverting these data have been developed and applied, the non-uniqueness of global density models that fit the data has not yet been fully characterized. We make use of Bayesian inference and a Reversible Jump Markov Chain Monte Carlo (RJMCMC) approach to develop a Trans-dimensional Hierarchical Bayesian (THB) inversion algorithm that yields a large sample of models that fit a gravity field. From this group of models, we can determine the most likely values of the parameters of a global density model and a measure of the non-uniqueness of each parameter when the number of anomalies describing the gravity field is not fixed a priori. We explore the use of a parallel tempering algorithm and the fast multipole method to reduce the number of iterations and the computing time needed. We applied this method to a synthetic gravity field of the Moon and a long-wavelength synthetic model of density anomalies in the Earth's lower mantle. We obtained a good match between the given gravity field and the gravity field produced by the most likely model in each inversion. The number of anomalies in the recovered models demonstrated the parsimony of the algorithm, the value of the noise variance of the input data was retrieved, and the non-uniqueness of the models was quantified. Our results show that the ability to constrain the latitude and longitude of density anomalies, which is excellent at shallow locations (<200 km), decreases with increasing depth. With higher computational resources, this THB method for gravity inversion could give new information about the overall density distribution of celestial bodies even when no other geophysical data are available.

  12. Geomagnetic Navigation of Autonomous Underwater Vehicle Based on Multi-objective Evolutionary Algorithm.

    PubMed

    Li, Hong; Liu, Mingyong; Zhang, Feihu

    2017-01-01

This paper presents a multi-objective evolutionary algorithm for bio-inspired geomagnetic navigation of an Autonomous Underwater Vehicle (AUV). Inspired by biological navigation behavior, the proposed solution requires no a priori map information, relying simply on magnetotaxis searching. However, geomagnetic anomalies significantly influence the geomagnetic navigation system by disrupting the distribution of the geomagnetic field. An extreme-value region may easily appear in anomalous areas, which can cause the AUV to become lost during navigation. This paper proposes an improved bio-inspired algorithm with behavior constraints that allows the AUV to escape from the anomalous region. First, the navigation problem is formulated as an optimization problem. Second, an environmental monitoring operator is introduced to determine whether the algorithm has fallen into a geomagnetic anomaly region. Then, a behavior-constraint operator is employed to escape the anomalous region. Finally, the termination condition is triggered. Compared to the state of the art, the proposed approach effectively overcomes the disturbance of geomagnetic anomalies. The simulation results demonstrate the reliability and feasibility of the proposed approach in complex environments.

  13. Geomagnetic Navigation of Autonomous Underwater Vehicle Based on Multi-objective Evolutionary Algorithm

    PubMed Central

    Li, Hong; Liu, Mingyong; Zhang, Feihu

    2017-01-01

This paper presents a multi-objective evolutionary algorithm for bio-inspired geomagnetic navigation of an Autonomous Underwater Vehicle (AUV). Inspired by biological navigation behavior, the proposed solution requires no a priori map information, relying simply on magnetotaxis searching. However, geomagnetic anomalies significantly influence the geomagnetic navigation system by disrupting the distribution of the geomagnetic field. An extreme-value region may easily appear in anomalous areas, which can cause the AUV to become lost during navigation. This paper proposes an improved bio-inspired algorithm with behavior constraints that allows the AUV to escape from the anomalous region. First, the navigation problem is formulated as an optimization problem. Second, an environmental monitoring operator is introduced to determine whether the algorithm has fallen into a geomagnetic anomaly region. Then, a behavior-constraint operator is employed to escape the anomalous region. Finally, the termination condition is triggered. Compared to the state of the art, the proposed approach effectively overcomes the disturbance of geomagnetic anomalies. The simulation results demonstrate the reliability and feasibility of the proposed approach in complex environments. PMID:28747884

  14. Performance Analysis of Automatic Dependent Surveillance-Broadcast (ADS-B) and Breakdown of Anomalies

    NASA Astrophysics Data System (ADS)

    Tabassum, Asma

This thesis work analyzes the performance of Automatic Dependent Surveillance-Broadcast (ADS-B) data received from Grand Forks International Airport, detects anomalies in the data, and quantifies the associated potential risk. This work also assesses the severity of anomalous data for Detect and Avoid (DAA) in Unmanned Aircraft Systems (UAS). The received data were raw and archived in GDL-90 format. A Python module was developed to parse the raw data into a readable .csv file. The anomaly detection algorithm is based on the Federal Aviation Administration's (FAA) ADS-B performance assessment report. An extensive study was carried out on two main types of anomalies, namely dropouts and altitude deviations. A dropout is declared when the update interval exceeds three seconds. Dropouts have different durations and pose different levels of risk depending on how long ADS-B is unavailable as the surveillance source. Altitude deviation refers to the deviation between barometric and geometric altitude. Deviations ranging from 25 feet to 600 feet have been observed. As of now, barometric altitude has been used for separation and surveillance, while geometric altitude can be used in cases where barometric altitude is not available. Many UAS might not have both sensors installed on board due to size and weight constraints. Vertical separation may be misinterpreted, especially when flying in the National Airspace System (NAS), if the ownship UAS and an intruder manned aircraft use two different altitude sources for the separation standard. The characteristics of, and agreement between, the two altitude sources are investigated with a regression-based approach. Multiple risk matrices are established based on the severity of the DAA well-clear violation. ADS-B has been called the backbone of the FAA's Next Generation Air Transportation System (NextGen). NextGen is the series of inter-linked programs, systems, and policies that implement advanced technologies and capabilities. ADS-B utilizes satellite-based Global Positioning System (GPS) technology to provide the pilot and Air Traffic Control (ATC) with more information, enabling efficient navigation of aircraft in increasingly congested airspace. The FAA mandated that all aircraft, both manned and unmanned, be equipped with ADS-B Out by the year 2020 to fly within most controlled airspace. As ADS-B is a fundamental component of NextGen, it is crucial to understand the behavior of and potential risks with ADS-B systems.
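
    A minimal sketch of the dropout check under stated assumptions: the column names `icao` and `t` are hypothetical, and GDL-90 parsing is omitted; per aircraft ID, an update gap longer than three seconds counts as a dropout, and its duration is the gap length.

    ```python
    # Sketch: find ADS-B dropouts as per-aircraft update gaps > 3 seconds.
    import pandas as pd

    def find_dropouts(df, gap_s=3.0):
        """df: columns ['icao', 't'] with t in seconds; returns dropout rows."""
        df = df.sort_values(["icao", "t"])
        df["gap"] = df.groupby("icao")["t"].diff()
        return df[df["gap"] > gap_s][["icao", "t", "gap"]]

    reports = pd.DataFrame({
        "icao": ["A1"] * 6 + ["B2"] * 4,
        "t":    [0, 1, 2, 30, 31, 32, 0, 1, 2, 3],  # A1 drops out for 28 s
    })
    print(find_dropouts(reports))
    ```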

  15. Global Anomaly Detection in Two-Dimensional Symmetry-Protected Topological Phases

    NASA Astrophysics Data System (ADS)

    Bultinck, Nick; Vanhove, Robijn; Haegeman, Jutho; Verstraete, Frank

    2018-04-01

    Edge theories of symmetry-protected topological phases are well known to possess global symmetry anomalies. In this Letter we focus on two-dimensional bosonic phases protected by an on-site symmetry and analyze the corresponding edge anomalies in more detail. Physical interpretations of the anomaly in terms of an obstruction to orbifolding and constructing symmetry-preserving boundaries are connected to the cohomology classification of symmetry-protected phases in two dimensions. Using the tensor network and matrix product state formalism we numerically illustrate our arguments and discuss computational detection schemes to identify symmetry-protected order in a ground state wave function.

  16. EMPACT 3D: an advanced EMI discrimination sensor for CONUS and OCONUS applications

    NASA Astrophysics Data System (ADS)

    Keranen, Joe; Miller, Jonathan S.; Schultz, Gregory; Sander-Olhoeft, Morgan; Laudato, Stephen

    2018-04-01

    We recently developed a new, man-portable, electromagnetic induction (EMI) sensor designed to detect and classify small, unexploded sub-munitions and discriminate them from non-hazardous debris. The ability to distinguish innocuous metal clutter from potentially hazardous unexploded ordnance (UXO) and other explosive remnants of war (ERW) before excavation can significantly accelerate land reclamation efforts by eliminating time spent removing harmless scrap metal. The EMI sensor employs a multi-axis transmitter and receiver configuration to produce data sufficient for anomaly discrimination. A real-time data inversion routine produces intrinsic and extrinsic anomaly features describing the polarizability, location, and orientation of the anomaly under test. We discuss data acquisition and post-processing software development, and results from laboratory and field tests demonstrating the discrimination capability of the system. Data acquisition and real-time processing emphasize ease-of-use, quality control (QC), and display of discrimination results. Integration of the QC and discrimination methods into the data acquisition software reduces the time required between sensor data collection and the final anomaly discrimination result. The system supports multiple concepts of operations (CONOPs) including: 1) a non-GPS cued configuration in which detected anomalies are discriminated and excavated immediately following the anomaly survey; 2) GPS integration to survey multiple anomalies to produce a prioritized dig list with global anomaly locations; and 3) a dynamic mapping configuration supporting detection followed by discrimination and excavation of targets of interest.

  17. Accelerometer and Camera-Based Strategy for Improved Human Fall Detection.

    PubMed

    Zerrouki, Nabil; Harrou, Fouzi; Sun, Ying; Houacine, Amrane

    2016-12-01

In this paper, we address the problem of detecting human falls using anomaly detection. Detection and classification of falls are based on accelerometric data and variations in human silhouette shape. First, we use the exponentially weighted moving average (EWMA) monitoring scheme to detect a potential fall in the accelerometric data. We used the EWMA to identify features that correspond with a particular type of fall, allowing us to classify falls. Only features corresponding to detected falls were used in the classification phase. Using a subset of the original data to design classification models minimizes training time and simplifies the models. Based on features corresponding to detected falls, we used the support vector machine (SVM) algorithm to distinguish between true falls and fall-like events. We apply this strategy to the publicly available fall detection database from the University of Rzeszow. Results indicated that our strategy accurately detected and classified fall events, suggesting its potential application to early alert mechanisms in the event of fall situations and its capability for classification of detected falls. Comparison of the classification results of the EWMA-based SVM classifier with those achieved using three commonly used machine learning classifiers (neural network, K-nearest neighbor, and naïve Bayes) demonstrated the superiority of our model.
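
    A minimal sketch of the EWMA stage under illustrative assumptions: the smoothing weight and control-limit width below are not the paper's tuning, the data are synthetic, and the downstream SVM stage is omitted.

    ```python
    # Sketch: EWMA control chart on an accelerometer-magnitude stream; samples
    # driving the EWMA outside its control limits are flagged as potential
    # falls for a subsequent classifier.
    import numpy as np

    def ewma_alarms(x, lam=0.2, L=3.0):
        mu, sigma = x.mean(), x.std()
        # Asymptotic EWMA control limits: mu +/- L*sigma*sqrt(lam/(2-lam)).
        limit = L * sigma * np.sqrt(lam / (2.0 - lam))
        z, alarms = mu, []
        for t, xt in enumerate(x):
            z = lam * xt + (1.0 - lam) * z
            if abs(z - mu) > limit:
                alarms.append(t)
        return alarms

    rng = np.random.default_rng(6)
    accel = rng.normal(1.0, 0.05, 1000)         # magnitude in g, at rest ~1 g
    accel[600:605] = [2.8, 0.2, 0.4, 2.5, 1.0]  # fall-like spike pattern
    print("alarm samples:", ewma_alarms(accel))
    ```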

  18. Anomaly Detection in Host Signaling Pathways for the Early Prognosis of Acute Infection.

    PubMed

    Wang, Kun; Langevin, Stanley; O'Hern, Corey S; Shattuck, Mark D; Ogle, Serenity; Forero, Adriana; Morrison, Juliet; Slayden, Richard; Katze, Michael G; Kirby, Michael

    2016-01-01

Clinical diagnosis of acute infectious diseases during the early stages of infection is critical to administering the appropriate treatment to improve the disease outcome. We present a data-driven analysis of the human cellular response to respiratory viruses, including influenza, respiratory syncytial virus, and human rhinovirus, and compare this with the response to the bacterial endotoxin, lipopolysaccharide (LPS). Using an anomaly detection framework we identified pathways that clearly distinguish between asymptomatic and symptomatic patients infected with the four different respiratory viruses and that accurately diagnosed patients exposed to a bacterial infection. Connectivity pathway analysis comparing the viral and bacterial diagnostic signatures identified host cellular pathways that were unique to patients exposed to LPS endotoxin, indicating this type of analysis could be used to identify host biomarkers that can differentiate clinical etiologies of acute infection. We applied the Multivariate State Estimation Technique (MSET) to two human influenza (H1N1 and H3N2) gene expression data sets to define host networks perturbed in the asymptomatic phase of infection. Our analysis identified pathways in the respiratory virus diagnostic signature as prognostic biomarkers that triggered prior to the clinical presentation of acute symptoms. These early warning pathways correctly predicted that almost half of the subjects would become symptomatic in less than forty hours post-infection and that three of the 18 subjects would become symptomatic after only 8 hours. These results provide a proof of concept for the utility of anomaly detection algorithms to classify host pathway signatures that can identify presymptomatic signatures of acute diseases and differentiate between etiologies of infection. On a global scale, acute respiratory infections cause a significant proportion of human co-morbidities and account for 4.25 million deaths annually. The development of clinical diagnostic tools to distinguish between acute viral and bacterial respiratory infections is critical to improve patient care and limit the overuse of antibiotics in the medical community. The identification of prognostic respiratory virus biomarkers provides an early warning system capable of predicting which subjects will become symptomatic, expanding our medical diagnostic capabilities and treatment options for acute infectious diseases. The host response to acute infection may be viewed as a deterministic signaling network responsible for maintaining the health of the host organism. We identify pathway signatures that reflect the very earliest perturbations in the host response to acute infection. These pathways provide a means to monitor the health state of the host using anomaly detection to quantify and predict health outcomes to pathogens.

  19. Anomaly Detection in Host Signaling Pathways for the Early Prognosis of Acute Infection

    PubMed Central

    O’Hern, Corey S.; Shattuck, Mark D.; Ogle, Serenity; Forero, Adriana; Morrison, Juliet; Slayden, Richard; Katze, Michael G.

    2016-01-01

Clinical diagnosis of acute infectious diseases during the early stages of infection is critical to administering the appropriate treatment to improve the disease outcome. We present a data-driven analysis of the human cellular response to respiratory viruses, including influenza, respiratory syncytial virus, and human rhinovirus, and compare this with the response to the bacterial endotoxin, lipopolysaccharide (LPS). Using an anomaly detection framework we identified pathways that clearly distinguish between asymptomatic and symptomatic patients infected with the four different respiratory viruses and that accurately diagnosed patients exposed to a bacterial infection. Connectivity pathway analysis comparing the viral and bacterial diagnostic signatures identified host cellular pathways that were unique to patients exposed to LPS endotoxin, indicating this type of analysis could be used to identify host biomarkers that can differentiate clinical etiologies of acute infection. We applied the Multivariate State Estimation Technique (MSET) to two human influenza (H1N1 and H3N2) gene expression data sets to define host networks perturbed in the asymptomatic phase of infection. Our analysis identified pathways in the respiratory virus diagnostic signature as prognostic biomarkers that triggered prior to the clinical presentation of acute symptoms. These early warning pathways correctly predicted that almost half of the subjects would become symptomatic in less than forty hours post-infection and that three of the 18 subjects would become symptomatic after only 8 hours. These results provide a proof of concept for the utility of anomaly detection algorithms to classify host pathway signatures that can identify presymptomatic signatures of acute diseases and differentiate between etiologies of infection. On a global scale, acute respiratory infections cause a significant proportion of human co-morbidities and account for 4.25 million deaths annually. The development of clinical diagnostic tools to distinguish between acute viral and bacterial respiratory infections is critical to improve patient care and limit the overuse of antibiotics in the medical community. The identification of prognostic respiratory virus biomarkers provides an early warning system capable of predicting which subjects will become symptomatic, expanding our medical diagnostic capabilities and treatment options for acute infectious diseases. The host response to acute infection may be viewed as a deterministic signaling network responsible for maintaining the health of the host organism. We identify pathway signatures that reflect the very earliest perturbations in the host response to acute infection. These pathways provide a means to monitor the health state of the host using anomaly detection to quantify and predict health outcomes to pathogens. PMID:27532264

  20. Detection of sinkholes or anomalies using full seismic wave fields.

    DOT National Transportation Integrated Search

    2013-04-01

    This research presents an application of two-dimensional (2-D) time-domain waveform tomography for detection of embedded sinkholes and anomalies. The measured seismic surface wave fields were inverted using a full waveform inversion (FWI) technique, ...

  1. Integrated System Health Management: Foundational Concepts, Approach, and Implementation

    NASA Technical Reports Server (NTRS)

    Figueroa, Fernando

    2009-01-01

    A sound basis to guide the community in the conception and implementation of ISHM (Integrated System Health Management) capability in operational systems was provided. The concept of an "ISHM Model of a System" and a related architecture, defined as a unique Data, Information, and Knowledge (DIaK) architecture, were described. The ISHM architecture is independent of the typical system architecture, which is based on grouping physical elements that are assembled to make up a subsystem, with subsystems combining to form systems, and so on. It was emphasized that ISHM capability needs to be implemented first at a low functional capability level (FCL), i.e., with limited ability to detect anomalies, diagnose, determine consequences, etc. As algorithms and tools to augment or improve the FCL are identified, they should be incorporated into the system. This means that the architecture, DIaK management, and software must be modular and standards-based, in order to enable systematic augmentation of FCL (no ad-hoc modifications). A set of technologies (and tools) needed to implement ISHM was described. One essential tool is a software environment to create the ISHM Model. The software environment encapsulates DIaK and an infrastructure to focus DIaK on determining health (detect anomalies, determine causes, determine effects, and provide integrated awareness of the system to the operator). The environment includes gateways to communicate in accordance with standards, especially the IEEE 1451.1 Standard for Smart Sensors and Actuators.

  2. Construction of an Overhauser magnetic gradiometer and the applications in geomagnetic observation and ferromagnetic target localization

    NASA Astrophysics Data System (ADS)

    Liu, H.; Dong, H.; Liu, Z.; Ge, J.; Bai, B.; Zhang, C.

    2017-10-01

    The proton precession magnetometer with a single sensor is commonly used in geomagnetic observation and magnetic anomaly detection. Due to technological limitations, the measurement accuracy is restricted by several factors such as sensor performance, frequency measurement precision, and instability of the polarization module. Aiming to improve anti-interference ability, an Overhauser magnetic gradiometer with a dual-sensor structure was designed. An alternative design of a geomagnetic sensor with a differential dual-coil structure was presented. A multi-channel frequency measurement algorithm was proposed to increase the measurement accuracy. A silicon oscillator was adopted to resolve the instability of the polarization system. This paper briefly discusses the design and development of the gradiometer and compares the data recorded by this instrument with a commonly used commercial Overhauser magnetometer. The proposed gradiometer records the Earth's magnetic field over 24 hours with a measurement accuracy of ±0.3 nT and a sampling interval of 3 seconds. The quality of the data recorded is excellent and consistent with the commercial instrument. In addition, experiments on ferromagnetic target localization were conducted. The gradiometer shows a strong ability in magnetic anomaly detection and localization. To sum up, it has the advantages of convenient operation, high precision, and strong anti-interference, which proves the effectiveness of the dual-sensor Overhauser magnetic gradiometer.
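
    For readers unfamiliar with proton precession instruments, the measurement chain is: estimate the precession frequency at each sensor, convert frequency to total field through the proton gyromagnetic ratio (about 0.042576 Hz/nT), and difference the two sensors so that common-mode interference cancels. The sketch below illustrates this chain under stated assumptions; a naive FFT peak pick stands in for the paper's multi-channel frequency measurement algorithm, which achieves far finer resolution.

      # Hedged sketch of the dual-sensor gradiometer measurement chain.
      import numpy as np

      GAMMA_P = 0.042576            # proton precession frequency per field, Hz/nT

      def field_from_frequency(f_hz):
          return f_hz / GAMMA_P     # total magnetic field in nT

      def estimate_frequency(signal, fs):
          # Simple FFT peak pick (illustrative only; real instruments use
          # multi-channel period-counting schemes for sub-mHz precision).
          spec = np.abs(np.fft.rfft(signal))
          freqs = np.fft.rfftfreq(len(signal), 1.0 / fs)
          return freqs[np.argmax(spec[1:]) + 1]   # skip the DC bin

      fs = 10000.0
      t = np.arange(0, 1.0, 1.0 / fs)
      f1, f2 = 2128.0, 2130.0                     # ~50,000 nT ambient field
      s1 = np.sin(2 * np.pi * f1 * t)             # sensor 1 precession signal
      s2 = np.sin(2 * np.pi * f2 * t)             # sensor 2, slightly different field
      b1 = field_from_frequency(estimate_frequency(s1, fs))
      b2 = field_from_frequency(estimate_frequency(s2, fs))
      print(b2 - b1)   # gradiometer reading (~47 nT); common-mode field cancels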

  3. A Testbed for Data Fusion for Engine Diagnostics and Prognostics1

    DTIC Science & Technology

    2002-03-01

    detected; too late to be useful for prognostics development. Table 1. Table of acronyms (ACRONYM, MEANING): AD, Anomaly detector ... strictly defined points. Determining where we are on the engine health curve is the first step in prognostics. Fault detection/diagnostic reasoning ... Detection: As described above, the ability of the monitoring system to detect an anomaly is especially important for knowledge-based systems, i.e.,

  4. Integrated Multivariate Health Monitoring System for Helicopters Main Rotor Drives: Development and Validation with In-Service Data

    DTIC Science & Technology

    2014-10-02

    potential advantages of using multivariate classification/discrimination/anomaly detection methods on real-world accelerometric condition monitoring ... case of false anomaly reports. A possible explanation of this phenomenon could be given ... of those helicopters. 1. Anomaly detection by means of a self-learning Shewhart control chart. A problem highlighted by the experts of AgustaWestland

  5. Encke-Beta Predictor for Orion Burn Targeting and Guidance

    NASA Technical Reports Server (NTRS)

    Robinson, Shane; Scarritt, Sara; Goodman, John L.

    2016-01-01

    The state vector prediction algorithm selected for Orion on-board targeting and guidance is known as the Encke-Beta method. Encke-Beta uses a universal anomaly (beta) as the independent variable, valid for circular, elliptical, parabolic, and hyperbolic orbits. The variable, related to the change in eccentric anomaly, results in integration steps that cover smaller arcs of the trajectory at or near perigee, when velocity is higher. Some burns in the EM-1 and EM-2 mission plans are much longer than burns executed with the Apollo and Space Shuttle vehicles. Burn length, as well as hyperbolic trajectories, has driven the use of the Encke-Beta numerical predictor by the predictor/corrector guidance algorithm in place of legacy analytic thrust and gravity integrals.
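
    For context, a universal-anomaly propagator solves one Kepler-type equation for all conic types via Stumpff functions and Newton iteration. The following is a generic universal-variable sketch in the style of standard astrodynamics texts, not the Orion Encke-Beta flight algorithm; the symbol chi plays the role the abstract assigns to beta, and the initial guess shown is the usual elliptical one.

      # Generic universal-variable Kepler propagation sketch (after common
      # textbook formulations); illustrative only, not flight software.
      import numpy as np

      def stumpff_C(z):
          if z > 1e-8:  return (1 - np.cos(np.sqrt(z))) / z
          if z < -1e-8: return (np.cosh(np.sqrt(-z)) - 1) / (-z)
          return 0.5
      def stumpff_S(z):
          if z > 1e-8:  return (np.sqrt(z) - np.sin(np.sqrt(z))) / np.sqrt(z) ** 3
          if z < -1e-8: return (np.sinh(np.sqrt(-z)) - np.sqrt(-z)) / np.sqrt(-z) ** 3
          return 1.0 / 6.0

      def kepler_universal(r0_vec, v0_vec, dt, mu=398600.4418):
          r0 = np.linalg.norm(r0_vec)
          vr0 = r0_vec @ v0_vec / r0
          alpha = 2.0 / r0 - v0_vec @ v0_vec / mu   # 1/a; sign selects the conic
          chi = np.sqrt(mu) * abs(alpha) * dt       # elliptical initial guess
          for _ in range(50):                       # Newton iteration on chi
              z = alpha * chi ** 2
              C, S = stumpff_C(z), stumpff_S(z)
              F = (r0 * vr0 / np.sqrt(mu) * chi ** 2 * C
                   + (1 - alpha * r0) * chi ** 3 * S + r0 * chi - np.sqrt(mu) * dt)
              dF = (r0 * vr0 / np.sqrt(mu) * chi * (1 - z * S)
                    + (1 - alpha * r0) * chi ** 2 * C + r0)
              step = F / dF
              chi -= step
              if abs(step) < 1e-10:
                  break
          z = alpha * chi ** 2
          f = 1 - chi ** 2 / r0 * stumpff_C(z)      # Lagrange coefficients
          g = dt - chi ** 3 / np.sqrt(mu) * stumpff_S(z)
          return f * r0_vec + g * v0_vec            # propagated position, km

      # ~500 km circular LEO propagated a quarter period (~1419 s):
      r0 = np.array([6878.0, 0.0, 0.0])
      v0 = np.array([0.0, np.sqrt(398600.4418 / 6878.0), 0.0])
      print(kepler_universal(r0, v0, 1419.0))       # ~ (0, 6878, 0) km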

  6. New sensors and techniques for the structural health monitoring of propulsion systems.

    PubMed

    Woike, Mark; Abdul-Aziz, Ali; Oza, Nikunj; Matthews, Bryan

    2013-01-01

    The ability to monitor the structural health of rotating components, especially in the hot sections of turbine engines, is of major interest to the aero community in improving engine safety and reliability. The use of instrumentation for these applications remains very challenging. It requires sensors and techniques that are highly accurate, are able to operate in a high-temperature environment, and can detect minute changes and hidden flaws before catastrophic events occur. The National Aeronautics and Space Administration (NASA), through the Aviation Safety Program (AVSP), has taken a lead role in the development of new sensor technologies and techniques for the in situ structural health monitoring of gas turbine engines. This paper presents a summary of key results and findings obtained from three different structural health monitoring approaches that have been investigated: evaluating the performance of a novel microwave blade tip clearance sensor; a vibration-based crack detection technique using an externally mounted capacitive blade tip clearance sensor; and lastly the results of using data-driven anomaly detection algorithms for detecting cracks in a rotating disk.

  7. New Sensors and Techniques for the Structural Health Monitoring of Propulsion Systems

    PubMed Central

    2013-01-01

    The ability to monitor the structural health of rotating components, especially in the hot sections of turbine engines, is of major interest to the aero community in improving engine safety and reliability. The use of instrumentation for these applications remains very challenging. It requires sensors and techniques that are highly accurate, are able to operate in a high-temperature environment, and can detect minute changes and hidden flaws before catastrophic events occur. The National Aeronautics and Space Administration (NASA), through the Aviation Safety Program (AVSP), has taken a lead role in the development of new sensor technologies and techniques for the in situ structural health monitoring of gas turbine engines. This paper presents a summary of key results and findings obtained from three different structural health monitoring approaches that have been investigated: evaluating the performance of a novel microwave blade tip clearance sensor; a vibration-based crack detection technique using an externally mounted capacitive blade tip clearance sensor; and lastly the results of using data-driven anomaly detection algorithms for detecting cracks in a rotating disk. PMID:23935425

  8. Detecting ship targets in spaceborne infrared image based on modeling radiation anomalies

    NASA Astrophysics Data System (ADS)

    Wang, Haibo; Zou, Zhengxia; Shi, Zhenwei; Li, Bo

    2017-09-01

    Using infrared imaging sensors to detect ship targets in the ocean environment has many advantages compared to other sensor modalities, such as better thermal sensitivity and all-weather detection capability. We propose a new ship detection method for spaceborne infrared images based on modeling radiation anomalies. The proposed method can be decomposed into two stages: in the first stage, a test infrared image is densely divided into a set of image patches and the radiation anomaly of each patch is estimated by a Gaussian Mixture Model (GMM), so that target candidates are obtained from anomalous image patches. In the second stage, target candidates are further checked against a more discriminative criterion to obtain the final detection result. The main innovation of the proposed method is inspired by the biological mechanism that human eyes are sensitive to unusual and anomalous patches among a complex background. Experimental results on the short-wavelength infrared band (1.560 - 2.300 μm) and long-wavelength infrared band (10.30 - 12.50 μm) of the Landsat-8 satellite show that the proposed method achieves the desired ship detection accuracy with higher recall than other classical ship detection methods.
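
    A sketch of the first-stage scoring, assuming scikit-learn: a Gaussian Mixture Model is fit to background patch features, and patches with low log-likelihood become target candidates. The feature vectors and the threshold below are synthetic placeholders, not the paper's features.

      # First-stage GMM anomaly scoring sketch (synthetic patch features).
      import numpy as np
      from sklearn.mixture import GaussianMixture

      rng = np.random.default_rng(1)
      background = rng.normal(300.0, 2.0, size=(5000, 4))   # e.g., patch radiance stats
      ship_like = rng.normal(320.0, 2.0, size=(5, 4))       # hot anomalous patches

      gmm = GaussianMixture(n_components=3, random_state=0).fit(background)
      # Flag patches whose log-likelihood falls below a low percentile of training scores.
      threshold = np.percentile(gmm.score_samples(background), 0.5)
      scores = gmm.score_samples(np.vstack([background[:5], ship_like]))
      print(scores < threshold)      # True marks target candidates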

  9. Analysis of SSME Sensor Data Using BEAM

    NASA Technical Reports Server (NTRS)

    Zak, Michail; Park, Han; James, Mark

    2004-01-01

    A report describes analysis of space shuttle main engine (SSME) sensor data using Beacon-based Exception Analysis for Multimissions (BEAM) [NASA Tech Briefs articles, the two most relevant being Beacon-Based Exception Analysis for Multimissions (NPO-20827), Vol. 26, No. 9 (September 2002), page 32 and Integrated Formulation of Beacon-Based Exception Analysis for Multimissions (NPO-21126), Vol. 27, No. 3 (March 2003), page 74] for automated detection of anomalies. A specific implementation of BEAM, using the Dynamical Invariant Anomaly Detector (DIAD), is used to find anomalies commonly encountered during SSME ground test firings. The DIAD detects anomalies by computing coefficients of an autoregressive model and comparing them to expected values extracted from previous training data. The DIAD was trained using nominal SSME test-firing data. DIAD detected all the major anomalies including blade failures, frozen sense lines, and deactivated sensors. The DIAD was particularly sensitive to anomalies caused by faulty sensors and unexpected transients. The system offers a way to reduce SSME analysis time and cost by automatically indicating specific time periods, signals, and features contributing to each anomaly. The software described here executes on a standard workstation and delivers analyses in seconds, a computing time comparable to or faster than the test duration itself, offering potential for real-time analysis.
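
    The DIAD idea, reduced to essentials: fit autoregressive coefficients to windows of a sensor signal and flag windows whose coefficients drift far from a nominal baseline learned in training. A hedged numpy sketch, not the BEAM code:

      # DIAD-style detector sketch: compare AR coefficients against a baseline.
      import numpy as np

      def ar_coeffs(x, p=4):
          # Least-squares fit of x[t] = a1*x[t-1] + ... + ap*x[t-p]
          X = np.column_stack([x[p - k - 1:len(x) - k - 1] for k in range(p)])
          y = x[p:]
          a, *_ = np.linalg.lstsq(X, y, rcond=None)
          return a

      rng = np.random.default_rng(2)
      def ar2(n, a1, a2):
          # Synthetic AR(2) "sensor" signal standing in for test-firing data.
          x = np.zeros(n)
          for t in range(2, n):
              x[t] = a1 * x[t - 1] + a2 * x[t - 2] + rng.normal()
          return x

      nominal = ar_coeffs(ar2(2000, 0.6, -0.2))       # training baseline
      for a1 in (0.6, 0.9):                           # 0.9 simulates a fault
          test = ar_coeffs(ar2(2000, a1, -0.2))
          print(a1, np.linalg.norm(test - nominal))   # large distance => anomaly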

  10. Intelligent system for a remote diagnosis of a photovoltaic solar power plant

    NASA Astrophysics Data System (ADS)

    Sanz-Bobi, M. A.; Muñoz San Roque, A.; de Marcos, A.; Bada, M.

    2012-05-01

    Usually small and mid-sized photovoltaic solar power plants are located in rural areas and typically operate unattended. Technicians are in charge of the supervision of these plants and, if an alarm is automatically issued, they try to investigate the problem and correct it. Sometimes these anomalies are detected hours or days after they begin, and the analysis of the causes once the anomaly is detected can take additional time. These factors motivated the development of a methodology able to perform continuous and automatic monitoring of the basic parameters of a photovoltaic solar power plant in order to detect anomalies as soon as possible, to diagnose their causes, and to immediately inform the personnel in charge of the plant. The proposed methodology starts from a study of the most significant failure modes of a photovoltaic plant through an FMEA (failure mode and effects analysis); using this information, the plant's typical performance is characterized by the creation of normal behaviour models. These are used to detect the presence of a failure in incipient or developed form. Once an anomaly is detected, an automatic and intelligent diagnosis process is started in order to investigate the possible causes. The paper describes the main features of a software tool able to detect anomalies in a photovoltaic solar power plant and to diagnose them.
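
    A normal behaviour model of the kind described can be as simple as a regression of expected power on irradiance and module temperature, with alarms raised on persistent negative residuals. A minimal sketch, assuming scikit-learn and synthetic healthy-operation data (the coefficients are invented for illustration):

      # Normal-behaviour-model residual sketch for a PV plant.
      import numpy as np
      from sklearn.linear_model import LinearRegression

      rng = np.random.default_rng(3)
      irr = rng.uniform(100, 1000, 500)               # irradiance, W/m^2
      temp = rng.uniform(10, 60, 500)                 # module temperature, C
      power = 0.18 * irr - 0.5 * (temp - 25) + rng.normal(0, 5, 500)  # healthy kW

      model = LinearRegression().fit(np.column_stack([irr, temp]), power)

      def residual(irr_now, temp_now, power_now):
          expected = model.predict([[irr_now, temp_now]])[0]
          return power_now - expected                 # persistent negatives => alarm

      healthy_kw = 0.18 * 800 - 0.5 * 15
      print(residual(800, 40, healthy_kw))            # ~0: healthy operation
      print(residual(800, 40, 0.7 * healthy_kw))      # large negative: incipient fault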

  11. Method and system for monitoring environmental conditions

    DOEpatents

    Kulesz, James J [Oak Ridge, TN; Lee, Ronald W [Oak Ridge, TN

    2010-11-16

    A system for detecting the occurrence of anomalies includes a plurality of spaced apart nodes, with each node having adjacent nodes, each of the nodes having one or more sensors associated with the node and capable of detecting anomalies, and each of the nodes having a controller connected to the sensors associated with the node. The system also includes communication links between adjacent nodes, whereby the nodes form a network. At least one software agent is capable of changing the operation of at least one of the controllers in response to the detection of an anomaly by a sensor.

  12. Method for locating underground anomalies by diffraction of electromagnetic waves passing between spaced boreholes

    DOEpatents

    Lytle, R. Jeffrey; Lager, Darrel L.; Laine, Edwin F.; Davis, Donald T.

    1979-01-01

    Underground anomalies or discontinuities, such as holes, tunnels, and caverns, are located by lowering an electromagnetic signal transmitting antenna down one borehole and a receiving antenna down another, the ground to be surveyed for anomalies being situated between the boreholes. Electronic transmitting and receiving equipment associated with the antennas is activated and the antennas are lowered in unison at the same rate down their respective boreholes a plurality of times, each time with the receiving antenna at a different level with respect to the transmitting antenna. The transmitted electromagnetic waves diffract at each edge of an anomaly. This causes minimal signal reception at the receiving antenna. Triangulation of the straight lines between the antennas for the depths at which the signal minimums are detected precisely locates the anomaly. Alternatively, phase shifts of the transmitted waves may be detected to locate an anomaly, the phase shift being distinctive for the waves directed at the anomaly.

  13. A study of the effect of seasonal climatic factors on the electrical resistivity response of three experimental graves

    NASA Astrophysics Data System (ADS)

    Jervis, John R.; Pringle, Jamie K.

    2014-09-01

    Electrical resistivity surveys have proven useful for locating clandestine graves in a number of forensic searches. However, some aspects of grave detection with resistivity surveys remain imperfectly understood. One such aspect is the effect of seasonal changes in climate on the resistivity response of graves. In this study, resistivity survey data collected over three years at three simulated graves were analysed in order to assess how the graves' resistivity anomalies varied seasonally and when they could most easily be detected. Thresholds were used to identify anomalies, and the 'residual volume' of grave-related anomalies was calculated as the area bounded by the relevant thresholds multiplied by the anomaly's average value above the threshold. The residual volume of a resistivity anomaly associated with a buried pig cadaver showed evidence of repeating annual patterns and was moderately correlated with the soil moisture budget. This anomaly was easiest to detect between January and April each year, after prolonged periods of high net gain in soil moisture. The resistivity response of a wrapped cadaver was more complex, although it also showed evidence of seasonal variation during the third year after burial. We suggest that the observed variation in the graves' resistivity anomalies was caused by seasonal change in survey data noise levels, which was in turn influenced by the soil moisture budget. It is possible that similar variations occur elsewhere for sites with seasonal climate variations, and this could affect successful detection of other subsurface features. Further research to investigate how different climates and soil types affect seasonal variation in grave-related resistivity anomalies would be useful.
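
    The 'residual volume' measure, as described, is the thresholded anomaly area multiplied by the anomaly's average value above the threshold. A short numpy sketch of one reading of that definition; the grid spacing, threshold, and synthetic anomaly are assumptions:

      # 'Residual volume' of a thresholded gridded anomaly (one interpretation).
      import numpy as np

      def residual_volume(grid, threshold, cell_area=0.25):   # m^2 per grid cell
          exceed = grid > threshold
          if not exceed.any():
              return 0.0
          area = exceed.sum() * cell_area                     # area above threshold
          avg_above = (grid[exceed] - threshold).mean()       # average exceedance
          return area * avg_above

      x, y = np.meshgrid(np.linspace(-5, 5, 40), np.linspace(-5, 5, 40))
      anomaly = 12.0 * np.exp(-(x**2 + y**2) / 2.0)   # synthetic grave-like anomaly
      print(residual_volume(anomaly, threshold=3.0))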

  14. Gravity anomaly detection: Apollo/Soyuz

    NASA Technical Reports Server (NTRS)

    Vonbun, F. O.; Kahn, W. D.; Bryan, J. W.; Schmid, P. E.; Wells, W. T.; Conrad, D. T.

    1976-01-01

    The Goddard Apollo-Soyuz Geodynamics Experiment is described. It was performed to demonstrate the feasibility of tracking and recovering high-frequency components of the Earth's gravity field by utilizing a synchronous orbiting tracking station such as ATS-6. Gravity anomalies of 5 mGal or larger having wavelengths of 300 to 1000 kilometers on the Earth's surface are important for geologic studies of the upper layers of the Earth's crust. Short-wavelength gravity anomalies of the Earth were detected from space. Two prime areas of data collection were selected for the experiment: (1) the center of the African continent and (2) the Indian Ocean Depression centered at 5° north latitude and 75° east longitude. Preliminary results show that the detectability objective of the experiment was met in both areas as well as at several additional anomalous areas around the globe. Gravity anomalies of the Karakoram and Himalayan mountain ranges and ocean trenches, as well as the Diamantina Depth, can be seen. Maps outlining the anomalies discovered are shown.

  15. Imbalanced learning for pattern recognition: an empirical study

    NASA Astrophysics Data System (ADS)

    He, Haibo; Chen, Sheng; Man, Hong; Desai, Sachi; Quoraishee, Shafik

    2010-10-01

    The imbalanced learning problem (learning from imbalanced data) presents a significant new challenge to the pattern recognition and machine learning community because in most instances real-world data is imbalanced. When considering military applications, the imbalanced learning problem becomes much more critical because such skewed distributions normally carry the most interesting and critical information. This critical information is necessary to support the decision-making process in battlefield scenarios, such as anomaly or intrusion detection. The fundamental issue with imbalanced learning is the ability of imbalanced data to compromise the performance of standard learning algorithms, which assume balanced class distributions or equal misclassification penalty costs. Therefore, when presented with complex imbalanced data sets, these algorithms may not be able to properly represent the distributive characteristics of the data. In this paper we present an empirical study of several popular imbalanced learning algorithms on an Army-relevant data set. Specifically, we conduct experiments with the SMOTE (Synthetic Minority Over-sampling Technique), ADASYN (Adaptive Synthetic Sampling), SMOTEBoost (Synthetic Minority Over-sampling in Boosting), and AdaCost (Misclassification Cost-Sensitive Boosting) schemes. Detailed experimental settings and simulation results are presented in this work, along with a brief discussion of future research opportunities and challenges.
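
    Of the schemes listed, SMOTE is the simplest to demonstrate: synthetic minority examples are interpolated between existing minority neighbours to rebalance the classes before a standard classifier is trained. A sketch assuming the imbalanced-learn package:

      # SMOTE rebalancing sketch, assuming the imbalanced-learn package.
      import numpy as np
      from imblearn.over_sampling import SMOTE

      rng = np.random.default_rng(4)
      X = np.vstack([rng.normal(0, 1, (950, 2)),      # majority class
                     rng.normal(3, 1, (50, 2))])      # rare minority class
      y = np.array([0] * 950 + [1] * 50)              # 19:1 imbalance

      X_res, y_res = SMOTE(random_state=0).fit_resample(X, y)
      print(np.bincount(y), np.bincount(y_res))       # -> [950 50] [950 950]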

  16. Exploiting spectral content for image segmentation in GPR data

    NASA Astrophysics Data System (ADS)

    Wang, Patrick K.; Morton, Kenneth D., Jr.; Collins, Leslie M.; Torrione, Peter A.

    2011-06-01

    Ground-penetrating radar (GPR) sensors provide an effective means for detecting changes in the sub-surface electrical properties of soils, such as changes indicative of landmines or other buried threats. However, most GPR-based pre-screening algorithms only localize target responses along the surface of the earth, and do not provide information regarding an object's position in depth. As a result, feature extraction algorithms are forced to process data from entire cubes of data around pre-screener alarms, which can reduce feature fidelity and hamper performance. In this work, spectral analysis is investigated as a method for locating subsurface anomalies in GPR data. In particular, a 2-D spatial/frequency decomposition is applied to pre-screener flagged GPR B-scans. Analysis of these spatial/frequency regions suggests that aspects (e.g. moments, maxima, mode) of the frequency distribution of GPR energy can be indicative of the presence of target responses. After translating a GPR image to a function of the spatial/frequency distributions at each pixel, several image segmentation approaches can be applied to perform segmentation in this new transformed feature space. To illustrate the efficacy of the approach, a performance comparison between feature processing with and without the image segmentation algorithm is provided.

  17. A Stochastic-entropic Approach to Detect Persistent Low-temperature Volcanogenic Thermal Anomalies

    NASA Astrophysics Data System (ADS)

    Pieri, D. C.; Baxter, S.

    2011-12-01

    Eruption prediction is a chancy, idiosyncratic affair, as volcanoes often manifest waxing and/or waning pre-eruption emission, geodetic, and seismic behavior that is unsystematic. Thus, fundamental to increased prediction accuracy and precision are good and frequent assessments of the time-series behavior of relevant precursor geophysical, geochemical, and geological phenomena, especially when volcanoes become restless. The Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER), in orbit since 1999 on the NASA Terra Earth Observing System satellite, is an important capability for detection of thermal eruption precursors (even subtle ones) and increased passive gas emissions. The unique combination of ASTER's high-spatial-resolution multispectral thermal IR imaging data (90 m/pixel; 5 bands in the 8-12 μm region) with simultaneous visible and near-IR imaging data and stereo-photogrammetric capabilities makes it a useful precursor detection tool, especially for thermal precursors. The JPL ASTER Volcano Archive, consisting of 80,000+ ASTER volcano images, allows systematic analysis of (a) baseline thermal emissions for 1550+ volcanoes, (b) important aspects of the time-dependent thermal variability, and (c) the limits of detection of temporal dynamics of eruption precursors. We are analyzing a catalog of the magnitude, frequency, and distribution of ASTER-documented volcano thermal signatures, compiled from 2000 onward, at 90 m/pixel. Low-contrast thermal anomalies of relatively low apparent absolute temperature (e.g., summit lakes, fumarolically altered areas, geysers, very small sub-pixel hotspots), for which the signal-to-noise ratio may be marginal (e.g., scene confusion due to clouds, water and water vapor, fumarolic emissions, variegated ground emissivity, and their combinations), are particularly important to discern and monitor. We have developed a technique to detect persistent hotspots that takes into account in-scene observed pixel joint frequency distributions over time, temperature contrast, and Shannon entropy. Preliminary analyses of Fogo Volcano and Yellowstone hotspots, among others, indicate that this is a very sensitive technique with good potential to be applied over the entire ASTER global night-time archive. We will discuss our progress in creating the global thermal anomaly catalog as well as our algorithmic approach and results. This work was carried out at the Jet Propulsion Laboratory of the California Institute of Technology under contract to NASA.
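
    One plausible reading of the stochastic-entropic screen (ours, not JPL's algorithm) is: bin each pixel's temperature time series on a common scale, compute its Shannon entropy, and keep pixels that are both persistently warm and unusually stable. A numpy sketch under those assumptions:

      # Persistent low-entropy hotspot screen: an illustrative interpretation.
      import numpy as np

      def shannon_entropy(series, edges):
          counts, _ = np.histogram(series, bins=edges)
          probs = counts[counts > 0] / counts.sum()
          return -(probs * np.log2(probs)).sum()

      rng = np.random.default_rng(5)
      scene = rng.normal(280, 5, size=(64, 64, 40))            # 40 synthetic scenes
      scene[30:33, 30:33, :] = rng.normal(295, 0.5, (3, 3, 40))  # persistent hotspot

      edges = np.linspace(scene.min(), scene.max(), 17)        # common binning
      contrast = scene.mean(axis=2) - np.median(scene)         # persistent warmth
      entropy = np.apply_along_axis(lambda s: shannon_entropy(s, edges), 2, scene)
      # Warm AND stable (low entropy) pixels survive the screen:
      hotspot = (contrast > 10) & (entropy < np.percentile(entropy, 5))
      print(np.argwhere(hotspot))                              # the 3x3 block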

  18. Network Anomaly Detection Based on Wavelet Analysis

    NASA Astrophysics Data System (ADS)

    Lu, Wei; Ghorbani, Ali A.

    2008-12-01

    Signal processing techniques have recently been applied to analyzing and detecting network anomalies due to their potential to find novel or unknown intrusions. In this paper, we propose a new network signal modelling technique for detecting network anomalies, combining wavelet approximation and system identification theory. In order to characterize network traffic behaviors, we present fifteen features and use them as the input signals in our system. We then evaluate our approach with the 1999 DARPA intrusion detection dataset and conduct a comprehensive analysis of the intrusions in the dataset. Evaluation results show that the approach achieves high detection rates in terms of both attack instances and attack types. Furthermore, we conduct a full day's evaluation in a real large-scale WiFi ISP network, where five attack types are successfully detected from over 30 million flows.
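
    The wavelet-approximation idea can be demonstrated on a single traffic feature: a coarse approximation models normal behaviour, and bursts in the residual mark candidate anomalies. A sketch assuming the PyWavelets package (the paper's fifteen features and system identification step are not reproduced):

      # Wavelet baseline-plus-residual anomaly sketch for one traffic feature.
      import numpy as np
      import pywt

      rng = np.random.default_rng(6)
      t = np.arange(1024)
      traffic = 100 + 20 * np.sin(2 * np.pi * t / 256) + rng.normal(0, 2, 1024)
      traffic[600:610] += 40                     # injected attack burst

      coeffs = pywt.wavedec(traffic, "db4", level=5)
      coeffs[1:] = [np.zeros_like(c) for c in coeffs[1:]]   # keep approximation only
      baseline = pywt.waverec(coeffs, "db4")[: len(traffic)]
      residual = np.abs(traffic - baseline)
      print(np.where(residual > 5 * residual.std())[0])     # indices near 600-610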

  19. Applications and error correction for adiabatic quantum optimization

    NASA Astrophysics Data System (ADS)

    Pudenz, Kristen

    Adiabatic quantum optimization (AQO) is a fast-developing subfield of quantum information processing which holds great promise in the relatively near future. Here we develop an application, quantum anomaly detection, and an error correction code, Quantum Annealing Correction (QAC), for use with AQO. The motivation for the anomaly detection algorithm is the problematic nature of classical software verification and validation (V&V). The number of lines of code written for safety-critical applications such as cars and aircraft increases each year, and with it the cost of finding errors grows exponentially (the cost of overlooking errors, which can be measured in human safety, is arguably even higher). We approach the V&V problem by using a quantum machine learning algorithm to identify characteristics of software operations that are implemented outside of specifications, then define an AQO to return these anomalous operations as its result. Our error correction work is the first large-scale experimental demonstration of quantum error correcting codes. We develop QAC and apply it to USC's equipment, the first and second generations of commercially available D-Wave AQO processors. We first show comprehensive experimental results for the code's performance on antiferromagnetic chains, scaling the problem size up to 86 logical qubits (344 physical qubits) and recovering significant encoded success rates even when the unencoded success rates drop to almost nothing. A broader set of randomized benchmarking problems is then introduced, for which we observe similar behavior to the antiferromagnetic chain, specifically that the use of QAC is almost always advantageous for problems of sufficient size and difficulty. Along the way, we develop problem-specific optimizations for the code and gain insight into the various on-chip error mechanisms (most prominently thermal noise, since the hardware operates at finite temperature) and the ways QAC counteracts them. We finish by showing that the scheme is robust to qubit loss on-chip, a significant benefit when considering an implemented system.

  20. Skeletonization of Gridded Potential-Field Images

    NASA Astrophysics Data System (ADS)

    Gao, L.; Morozov, I. B.

    2012-12-01

    A new approach to skeletonization was developed for gridded potential-field data. Generally, skeletonization is a pattern-recognition technique allowing automatic recognition of near-linear features in images, measurement of their parameters, and analysis of their similarities. Our approach decomposes the images into arbitrarily oriented "wavelets" characterized by positive or negative amplitudes, orientation angles, spatial dimensions, polarities, and other attributes. Orientations of the wavelets are obtained by scanning the azimuths to detect the strike direction of each anomaly. The wavelets are connected according to the similarities of these attributes, which leads to a "skeleton" map of the potential-field data. In addition, 2-D filtering is conducted concurrently with the wavelet-identification process, which allows extraction of background-trend parameters and reduces the adverse effects of low-frequency background (which is often strong in potential-field maps) on skeletonization. By correlating the neighboring wavelets, linear anomalies are identified and characterized. The advantages of this algorithm are the generality and isotropy of feature detection, as well as being specifically designed for gridded data. With several options for background-trend extraction, the stability of lineament identification is improved and optimized. The algorithm is also integrated in a powerful processing system which allows combining it with numerous other tools, such as filtering, computation of the analytical signal, empirical mode decomposition, and various types of plotting. The method is applied to potential-field data for the Western Canada Sedimentary Basin, in a study area which extends from southern Saskatchewan into southwestern Manitoba. The target is the structure of the crystalline basement beneath Phanerozoic sediments. The examples illustrate that skeletonization aids in the interpretation of complex structures at different scale lengths. The results indicate that this method is useful for identifying structures in complex geophysical images and for automatic extraction of their attributes, as well as for quantitative characterization and analysis of potential-field images. Skeletonized potential-field images should also be useful for inversion.

  1. Temperature anomalies in the plumes of the August, 18 and August, 29, 2000 eruptions of Miyake Jima volcano (Japan) inferred from delays of GPS waves crossing them.

    NASA Astrophysics Data System (ADS)

    Houlié, N.; Nercessian, A.; Briole, P.; Murakami, M.

    2003-12-01

    Using the GAMIT software, we processed seventy days of GPS data (30 s sampling rate) collected by the GSI at four sites on Miyake Jima volcanic island (Japan) between June 27, 2000 and September 5, 2000. This period includes a large seismic swarm (June 27, 2000 - July 8, 2000) followed by several major paroxysms at the volcano crater (July 9, 10, 14, 15, August 29) producing a 1 km wide caldera. The medium-term velocity of the station coordinates, already published elsewhere, is maximum during the seismic swarm and corresponds to a large dyke intrusion mostly offshore west of the volcano. No anomalies are observed in the time series of the daily GPS coordinates for the days of the paroxysms. An epoch-by-epoch processing of those days, using kinematic software, shows that there is no deformation during the paroxysms themselves. We then examined epoch by epoch the path delay residuals of the GPS phases at each GPS station during the events. Those delays exceed 200 mm in some cases. As they cannot be explained by a temporal change of the station coordinates, we conclude that the cause of these delays is the presence of the hot volcanic plume not modeled by the GPS data processing, which assumes a homogeneous troposphere. We used a classical seismic tomography algorithm (modified to handle 3D + time) to map the path delay anomaly in the plume as a function of time. We interpret the anomalous delays as temperature anomalies in the plume, assuming normal pressure and a plume saturated in humidity. The maximum average temperature anomaly is 20 °C, a low value compared to what is currently proposed in the literature. Higher temperatures should exist in the inner part of the plume, but the horizontal extension of this hot zone cannot be more than 50-100 m, otherwise the GPS data would detect it.

  2. Urbanization and climate change implications in flood risk management: Developing an efficient decision support system for flood susceptibility mapping.

    PubMed

    Mahmoud, Shereif H; Gan, Thian Yew

    2018-04-26

    The effects of urbanization and climate change on the flood risk of two governorates in Egypt were analyzed. Non-parametric change point and trend detection algorithms were applied to the annual rainfall, rainfall anomaly, and temperature anomaly series of both study sites. Next, change points and trends in the annual and monthly surface runoff data generated by the Curve Number method over 1948-2014 were also analyzed to detect the effects of urbanization on surface runoff. Lastly, a GIS decision support system was developed to delineate flood susceptibility zones for the two governorates. The significant decline in annual rainfall and rainfall anomaly after 1994, at 8.96 and 15.3 mm/decade respectively, was likely due to climate change impacts, especially a significant warming trend since 1976 at 0.16 °C/decade, though it could partly be attributed to rapid urbanization. Since 1970, the effects of urbanization on flood risk are clear: despite a decline in rainfall, the annual surface runoff and runoff anomaly show positive trends of 12.7 and 14.39 mm/decade, respectively. Eleven flood-contributing factors were identified and used in mapping the flood susceptibility zones of both sites. In the El-Beheira governorate, 9.2%, 17.9%, 32.3%, 28.3% and 12.3% of the area are categorized as of very high, high, moderate, low and very low susceptibility to flooding, respectively. Similarly, in the Alexandria governorate, 15.9%, 33.5%, 41%, 8.8% and 0.8% of the area are categorized as of very high, high, moderate, low and very low susceptibility to flooding, respectively. Very high and high susceptibility zones are located in the northern, northwestern and northeastern parts of the El-Beheira governorate, and in the northeastern and northwestern parts of Alexandria. The flood-related information obtained in this study will be useful in mitigating potential flood damage and in future land-use planning for both governorates.
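
    The non-parametric trend detection mentioned above is commonly performed with the Mann-Kendall test; that this exact test was used here is an assumption. A minimal implementation, ignoring tie corrections:

      # Mann-Kendall trend test sketch (no tie correction) on synthetic rainfall.
      import numpy as np
      from scipy.stats import norm

      def mann_kendall(x):
          n = len(x)
          s = sum(np.sign(x[j] - x[i])
                  for i in range(n - 1) for j in range(i + 1, n))
          var_s = n * (n - 1) * (2 * n + 5) / 18.0          # variance of S
          z = (s - np.sign(s)) / np.sqrt(var_s) if s != 0 else 0.0
          return z, 2 * (1 - norm.cdf(abs(z)))              # z-score, two-sided p

      rng = np.random.default_rng(7)
      years = np.arange(1948, 2015)
      rainfall = 200 - 0.9 * (years - 1948) + rng.normal(0, 10, len(years))
      print(mann_kendall(rainfall))   # strongly negative z => declining trend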

  3. Advanced Health Management System for the Space Shuttle Main Engine

    NASA Technical Reports Server (NTRS)

    Davidson, Matt; Stephens, John

    2004-01-01

    Boeing-Canoga Park (BCP) and NASA-Marshall Space Flight Center (NASA-MSFC) are developing an Advanced Health Management System (AHMS) for use on the Space Shuttle Main Engine (SSME) that will improve Shuttle safety by reducing the probability of catastrophic engine failures during the powered ascent phase of a Shuttle mission. This is a phased approach that consists of an upgrade to the current Space Shuttle Main Engine Controller (SSMEC) to add turbomachinery synchronous vibration protection and the addition of a separate Health Management Computer (HMC) that will utilize advanced algorithms to detect and mitigate predefined engine anomalies. The purpose of the Shuttle AHMS is twofold: one is to increase the probability of successfully placing the Orbiter into the intended orbit, and the other is to increase the probability of being able to safely execute an abort of a Space Transportation System (STS) launch. Both objectives are achieved by increasing the useful work envelope of a Space Shuttle Main Engine after it has developed anomalous performance during launch and the ascent phase of the mission. This increase in work envelope will be the result of two new anomaly mitigation options, in addition to existing engine shutdown, that were previously unavailable. The added anomaly mitigation options include engine throttle-down and performance correction (adjustment of the engine oxidizer-to-fuel ratio), as well as enhanced sensor disqualification capability. The HMC is intended to provide the computing power necessary to diagnose selected anomalous engine behaviors and to make recommendations to the engine controller for anomaly mitigation. Independent auditors have assessed the reduction in Shuttle ascent risk at roughly 40% with the combined system, along with a threefold improvement in mission success.

  4. [Management of occult malformations at the lateral skull base].

    PubMed

    Bryson, E; Draf, W; Hofmann, E; Bockmühl, U

    2005-12-01

    Occult malformations of the lateral skull base are rare anomalies but can cause severe complications such as recurrent meningitis. Therefore, they need to be precisely delineated, and sufficient surgical closure is mandatory. Between 1986 and 2004, twenty patients (10 children and 10 adults) with occult malformations at the lateral skull base were treated surgically at the ENT Department of the Hospital Fulda gAG. Among these, 3 Mondini malformations, 11 defects of the tegmen tympani or the mastoid roof, 2 dural lesions of the posterior fossa, and 4 malformations within the pyramidal apex were found. Four patients had multiple anomalies. The presenting symptom was in all cases at least one previous episode of meningitis. Radiological diagnostics included high-resolution computed tomography (CT) and magnetic resonance imaging (MRI) as well as CT or MR cisternography. Depending on the type and localisation of the defect, the following surgical algorithm was carried out: the transmastoid approach was used in all cases of Mondini malformation (including obliteration of the ear), in cases of lesions of the posterior fossa, and partly in anomalies of the tegmen tympani and mastoid roof, respectively. Defects of the pyramidal apex should be explored via the transmastoid route if the lesion is located caudal to the internal auditory canal (IAC), whereas the transtemporal approach should be used if the lesion is situated ventral to the IAC and dorsomedial to the internal carotid artery (ICA). The transtemporal approach was also performed in large defects of the tegmen tympani and mastoid roof as well as in recurrences. In all cases of recurrent meningitis caused by agents of the upper airway tract, the basic principle should be to search for occult skull base malformations both radiologically and by sodium fluorescein endoscopy until the anomaly is detected.

  5. ASTER Thermal Anomalies in Western Colorado

    DOE Data Explorer

    Richard E. Zehner

    2013-01-01

    This layer contains the areas identified as having anomalous surface temperature from ASTER satellite imagery. The temperature is calculated using the Emissivity Normalization Algorithm, which separates temperature from emissivity. Areas with temperatures greater than 2σ were considered ASTER-modeled very warm surface exposures, and areas with temperatures between 1σ and 2σ warm surface exposures (thermal anomalies), respectively.

  6. A Distance Measure for Attention Focusing and Anomaly Detection in Systems Monitoring

    NASA Technical Reports Server (NTRS)

    Doyle, R. J.

    1994-01-01

    Any attempt to introduce automation into the monitoring of complex physical systems must start from a robust anomaly detection capability. This task is far from straightforward, for a single definition of what constitutes an anomaly is difficult to come by.

  7. Detection and characterization of buried lunar craters with GRAIL data

    NASA Astrophysics Data System (ADS)

    Sood, Rohan; Chappaz, Loic; Melosh, Henry J.; Howell, Kathleen C.; Milbury, Colleen; Blair, David M.; Zuber, Maria T.

    2017-06-01

    We used gravity mapping observations from NASA's Gravity Recovery and Interior Laboratory (GRAIL) to detect, characterize, and validate the presence of large impact craters buried beneath the lunar maria. In this paper we focus on two prominent anomalies detected in the GRAIL data using the gravity gradiometry technique. Our detection strategy is applied to both free-air and Bouguer gravity field observations to identify gravitational signatures that are similar to those observed over buried craters. The presence of buried craters is further supported by individual analysis of regional free-air gravity anomalies, Bouguer gravity anomaly maps, and forward modeling. Our best candidate, for which we propose the informal name of Earhart Crater, is approximately 200 km in diameter and forms part of the northwestern rim of Lacus Somniorum. The other candidate, for which we propose the informal name of Ashoka Anomaly, is approximately 160 km in diameter and lies completely buried beneath Mare Tranquillitatis. Other large, still unrecognized craters undoubtedly underlie other portions of the Moon's vast mare lavas.

  8. Creating realistic models and resolution assessment in tomographic inversion of wide-angle active seismic profiling data

    NASA Astrophysics Data System (ADS)

    Stupina, T.; Koulakov, I.; Kopp, H.

    2009-04-01

    We consider questions of creating structural models and resolution assessment in tomographic inversion of wide-angle active seismic profiling data. For our investigations, we use the PROFIT (Profile Forward and Inverse Tomographic modeling) algorithm which was tested earlier with different datasets. Here we consider offshore seismic profiling data from three areas (Chile, Java and Central Pacific). Two of the study areas are characterized by subduction zones whereas the third data set covers a seamount province. We have explored different algorithmic issues concerning the quality of the solution, such as (1) resolution assessment using different sizes and complexity of synthetic anomalies; (2) grid spacing effects; (3) amplitude damping and smoothing; (4) criteria for rejection of outliers; (5) quantitative criteria for comparing models. Having determined optimal algorithmic parameters for the observed seismic profiling data we have created structural synthetic models which reproduce the results of the observed data inversion. For the Chilean and Java subduction zones our results show similar patterns: a relatively thin sediment layer on the oceanic plate, thicker inhomogeneous sediments in the overlying plate and a large area of very strong low velocity anomalies in the accretionary wedge. For two seamounts in the Pacific we observe high velocity anomalies in the crust which can be interpreted as frozen channels inside the dormant volcano cones. Along both profiles we obtain considerable crustal thickening beneath the seamounts.

  9. Anomaly Detection in Additively Manufactured Parts Using Laser Doppler Vibrometry

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hernandez, Carlos A.

    Additively manufactured parts are susceptible to non-uniform structure caused by the unique manufacturing process. This can lead to structural weakness or catastrophic failure. Using laser Doppler vibrometry and frequency response analysis, non-contact detection of anomalies in additively manufactured parts may be possible. Preliminary tests show promise for small-scale detection, but further work is necessary.

  10. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Coble, Jamie; Orton, Christopher; Schwantes, Jon

    The Multi-Isotope Process (MIP) Monitor provides an efficient approach to monitoring the process conditions in used nuclear fuel reprocessing facilities to support process verification and validation. The MIP Monitor applies multivariate analysis to gamma spectroscopy of reprocessing streams in order to detect small changes in the gamma spectrum, which may indicate changes in process conditions. This research extends the MIP Monitor by characterizing a used fuel sample after initial dissolution according to the type of reactor of origin (pressurized or boiling water reactor), initial enrichment, burn up, and cooling time. Simulated gamma spectra were used to develop and test three fuel characterization algorithms. The classification and estimation models employed are based on the partial least squares regression (PLS) algorithm. A PLS discriminant analysis model was developed which perfectly classified reactor type. Locally weighted PLS models were fitted on-the-fly to estimate continuous fuel characteristics. Burn up was predicted within 0.1% root mean squared percent error (RMSPE) and both cooling time and initial enrichment within approximately 2% RMSPE. This automated fuel characterization can be used to independently verify operator declarations of used fuel characteristics and inform the MIP Monitor anomaly detection routines at later stages of the fuel reprocessing stream to improve sensitivity to changes in operational parameters and material diversions.
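
    The PLS discriminant analysis step can be sketched, assuming scikit-learn, by regressing a 0/1 reactor-type label on spectral features and thresholding the prediction at 0.5; the synthetic "spectra" below merely stand in for simulated gamma spectra.

      # PLS discriminant analysis sketch on synthetic stand-in spectra.
      import numpy as np
      from sklearn.cross_decomposition import PLSRegression

      rng = np.random.default_rng(8)
      pwr = rng.normal(0.0, 1.0, (40, 30)) + np.linspace(0, 1, 30)   # "PWR" class
      bwr = rng.normal(0.0, 1.0, (40, 30)) - np.linspace(0, 1, 30)   # "BWR" class
      X = np.vstack([pwr, bwr])
      y = np.array([1.0] * 40 + [0.0] * 40)       # 0/1 reactor-type label

      pls = PLSRegression(n_components=3).fit(X, y)
      pred = (pls.predict(X).ravel() > 0.5).astype(int)   # threshold the regression
      print((pred == y).mean())                   # training accuracy of the discriminant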

  11. Automatic Brain Tumor Detection in T2-weighted Magnetic Resonance Images

    NASA Astrophysics Data System (ADS)

    Dvořák, P.; Kropatsch, W. G.; Bartušek, K.

    2013-10-01

    This work focuses on fully automatic detection of brain tumors. The first aim is to determine whether the image contains a brain with a tumor and, if it does, to localize it. The goal of this work is not the exact segmentation of tumors, but the localization of their approximate position. The test database contains 203 T2-weighted images, of which 131 are images of healthy brains and the remaining 72 contain brains with a pathological area. The estimation of whether the image shows an afflicted brain, and where the pathological area is, is done by multiresolution symmetry analysis. The first goal was tested by a five-fold cross-validation technique with 100 repetitions to avoid dependence of the results on sample order. This part of the proposed method reaches a true positive rate of 87.52% and a true negative rate of 93.14% for afflicted-brain detection. The evaluation of the second part of the algorithm was carried out by comparing the estimated location to the true tumor location. The detection of the tumor location reaches a rate of 95.83% correct anomaly detection and a rate of 87.5% correct tumor localization.
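
    A toy illustration of symmetry-based detection (a simplification, not the authors' multiresolution method): compare intensity histograms of the mirrored brain halves, so that a unilateral lesion yields a high asymmetry score.

      # Left-right histogram asymmetry sketch on a synthetic "brain" image.
      import numpy as np

      def asymmetry(image, bins=32):
          mid = image.shape[1] // 2
          left, right = image[:, :mid], image[:, mid:][:, ::-1]   # mirror right half
          lo, hi = image.min(), image.max()
          h_l, _ = np.histogram(left, bins=bins, range=(lo, hi), density=True)
          h_r, _ = np.histogram(right, bins=bins, range=(lo, hi), density=True)
          return np.abs(h_l - h_r).sum()           # 0 for perfect symmetry

      rng = np.random.default_rng(9)
      healthy = rng.normal(100, 10, (128, 128))
      tumor = healthy.copy()
      tumor[40:70, 20:50] += 60                    # bright lesion in the left half
      print(asymmetry(healthy), asymmetry(tumor))  # tumor image scores much higher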

  12. Confabulation Based Real-time Anomaly Detection for Wide-area Surveillance Using Heterogeneous High Performance Computing Architecture

    DTIC Science & Technology

    2015-06-01

    system accuracy. The AnRAD system was also generalized for the additional application of network intrusion detection. A self-structuring technique ... to Host-based Intrusion Detection Systems using Contiguous and Discontiguous System Call Patterns," IEEE Transactions on Computers, 63(4), pp. 807 ... square kilometer areas. The anomaly recognition and detection (AnRAD) system was built as a cogent confabulation network. It represented road

  13. Tectonics of the Philippines and ambient regions from geophysical inversions

    NASA Astrophysics Data System (ADS)

    Liu, W.; Li, C.; Zhou, Z.; Fairhead, J. D.

    2012-12-01

    Geological study of the Philippines and ambient regions has so far been relatively limited, owing to the rather scanty data and the complex geological structure; it is therefore a challenge to carry out research with such limited data. In this paper, an investigation of the Philippines and the surrounding area has been carried out using regional magnetic and gravity anomalies. Owing to the difficulties and limitations of reduction to the pole at low latitudes, analytical signal amplitudes of magnetic anomalies are calculated as an equivalent substitute. Application of the Parker-Oldenburg algorithm to Bouguer gravity anomalies yields a 3D Moho topography. Curie-point depths are estimated from the magnetic anomalies using a windowed wavenumber-domain algorithm. This paper aims to reveal the structure of the Manila subduction zone accurately and, moreover, to clarify the interplay between magmatism and subduction in the Manila Trench and East Luzon Trough. On the basis of the Bouguer gravity anomaly and the AS (analytical signal) of the magnetic anomaly, the positions of hydrated mantle wedges in the subduction zones of this area are identified in areas characterized by the juxtaposition of high and low values of the Bouguer gravity anomaly or by parallel high values of the Bouguer gravity anomaly and AS. Using our inversion results together with other published information, the boundaries of the Palawan Block, the Philippine Mobile Belt, and the Sulu-Celebes Block are defined, and the collision history of the PCB (Palawan continental block) with the PMB (Philippine mobile belt) and of the PCB with the Sulu Sea is also discussed. A "seismic gap" near 14° N on the Manila Trench, mentioned in previous studies, is thought to be induced by slab melting and plastic behavior due to the relatively high geothermal gradient. In the central Philippines, it is likely that an incipient collision-related rifting is proceeding. Furthermore, a possible new evolution model of the Sulu Sea is presented, in which the Cagayan Ridge area is thought to be the palaeo-subduction zone and volcanic arc, and the Palawan Trough is supposed to be a foredeep rather than an extinct trench. In addition, the extent of mantle serpentinization in this area is estimated from the Curie-point and Moho depths.

  14. Routine screening for fetal anomalies: expectations.

    PubMed

    Goldberg, James D

    2004-03-01

    Ultrasound has become a routine part of prenatal care. Despite this, the sensitivity and specificity of the procedure are unclear to many patients and healthcare providers. In a small study from Canada, 54.9% of women reported that they had received no information about ultrasound before their examination. In addition, 37.2% of women indicated that they were unaware of any fetal problems that ultrasound could not detect. Most centers that perform ultrasound do not have their own statistics regarding sensitivity and specificity; it is necessary to rely on large collaborative studies. Unfortunately, wide variations exist among these studies, with detection rates for fetal anomalies between 13.3% and 82.4%. The Eurofetus study is the largest prospective study performed to date, and because of the time and expense involved in this type of study, a similar study is not likely to be repeated. The overall detection rate for anomalous fetuses was 64.1%. It is important to note that in this study, ultrasounds were performed in tertiary centers with significant experience in detecting fetal malformations. The RADIUS study also demonstrated a significantly higher detection rate of anomalies before 24 weeks in tertiary versus community centers (35% versus 13%). Two concepts emerge from reviewing these data. First, patients must be made aware of the limitations of ultrasound in detecting fetal anomalies. This information is critical to allow them to make informed decisions on whether to undergo ultrasound examination and to prepare them for potential outcomes. Second, to achieve the detection rates reported in the Eurofetus study, ultrasound examination must be performed in centers that have extensive experience in the detection of fetal anomalies.

  15. Transient ice mass variations over Greenland detected by the combination of GPS and GRACE data

    NASA Astrophysics Data System (ADS)

    Zhang, B.; Liu, L.; Khan, S. A.; van Dam, T. M.; Zhang, E.

    2017-12-01

    Over the past decade, the Greenland Ice Sheet (GrIS) has been undergoing significant warming and ice mass loss. This mass loss has not been a steady process but has shown substantial temporal and spatial variability. Here we apply multichannel singular spectrum analysis to crustal deformation time series measured at about 50 Global Positioning System (GPS) stations mounted on bedrock around the Greenland coast, and to mass changes inferred from the Gravity Recovery and Climate Experiment (GRACE), to detect transient changes in ice mass balance over the GrIS. We detect two transient anomalies: one is a negative melting anomaly (Anomaly 1) that peaked around 2010; the other is a positive melting anomaly (Anomaly 2) that peaked between 2012 and 2013. The GRACE data show that both anomalies caused significant mass changes south of 74°N but negligible changes north of 74°N. Both anomalies caused the maximum mass change in the southeast GrIS, followed by the west GrIS near Jakobshavn. Our results also show that the mass change caused by Anomaly 1 first reached its maximum in late 2009 in the southeast GrIS and then migrated to the west GrIS. In Anomaly 2, however, the southeast GrIS was the last region to reach the maximum mass change, in early 2013, and the west GrIS near Jakobshavn was the second to last. Most of the GPS data show spatiotemporal patterns similar to those obtained from the GRACE data. However, some GPS time series show discrepancies in either space or time because of data gaps and different sensitivities to mass loading changes. Namely, loading deformation measured by GPS can be significantly affected by local dynamic mass changes, which have little impact on GRACE observations.

  16. Attention focusing and anomaly detection in systems monitoring

    NASA Technical Reports Server (NTRS)

    Doyle, Richard J.

    1994-01-01

    Any attempt to introduce automation into the monitoring of complex physical systems must start from a robust anomaly detection capability. This task is far from straightforward, for a single definition of what constitutes an anomaly is difficult to come by. In addition, to make the monitoring process efficient, and to avoid the potential for information overload on human operators, attention focusing must also be addressed. When an anomaly occurs, more often than not several sensors are affected, and the partially redundant information they provide can be confusing, particularly in a crisis situation where a response is needed quickly. The focus of this paper is a new technique for attention focusing. The technique involves reasoning about the distance between two frequency distributions, and is used to detect both anomalous system parameters and 'broken' causal dependencies. These two forms of information together isolate the locus of anomalous behavior in the system being monitored.
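
    The paper's particular distance measure is not reproduced here, but the idea of comparing two frequency distributions can be illustrated with the Jensen-Shannon distance from SciPy applied to histograms of a sensor's nominal and current behaviour:

      # Frequency-distribution distance sketch for flagging an anomalous sensor.
      import numpy as np
      from scipy.spatial.distance import jensenshannon

      rng = np.random.default_rng(10)
      nominal = rng.normal(50.0, 2.0, 5000)        # training-period sensor values
      current = rng.normal(55.0, 4.0, 500)         # shifted, noisier behaviour

      edges = np.linspace(30, 80, 41)              # common histogram binning
      p, _ = np.histogram(nominal, bins=edges, density=True)
      q, _ = np.histogram(current, bins=edges, density=True)
      # Near 0 for matching behaviour; grows as the distributions diverge.
      print(jensenshannon(p, q))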

  17. An Assessment Methodology to Evaluate In-Flight Engine Health Management Effectiveness

    NASA Astrophysics Data System (ADS)

    Maggio, Gaspare; Belyeu, Rebecca; Pelaccio, Dennis G.

    2002-01-01

    This paper presents an assessment methodology for evaluating the in-flight effectiveness of candidate engine health management system concepts. A next-generation engine health management system will be required to be both reliable and robust in terms of anomaly detection capability. The system must be able to operate successfully in the hostile, high-stress engine system environment. This implies that its system components, such as the instrumentation, process and control, and vehicle interface and support subsystems, must be highly reliable. Additionally, the system must be able to address a vast range of possible engine operation anomalies through a host of different types of measurements supported by a fast algorithm/architecture processing capability that can identify "true" (real) engine operation anomalies. False anomaly condition reports from such a system must be essentially eliminated. The accuracy of identifying only real anomaly conditions has been an issue with the Space Shuttle Main Engine (SSME) in the past. Much improvement in many of the technologies addressing these areas is required. The objectives of this study were to identify and demonstrate a consistent assessment methodology that can evaluate the capability of next-generation engine health management system concepts to respond in a correct, timely manner to alleviate an operational engine anomaly condition during flight. Science Applications International Corporation (SAIC), with support from NASA Marshall Space Flight Center, identified a probabilistic modeling approach, built on deterministic anomaly-time event assessment, that can be applied in the engine preliminary design stage to assess engine health management system concept effectiveness. Much of the discussion in this paper focuses on the formulation and application of this assessment, including detailed discussion of key modeling assumptions, the overall assessment methodology, and the key supporting engine health management system concept design/operation and fault-mode information required to utilize it. The paper concludes with a demonstration benchmark study that applied this methodology to the current SSME health management system; a summary of study results and lessons learned is provided, and recommendations for future work in this area are also identified.

  18. Quantifying Performance Bias in Label Fusion

    DTIC Science & Technology

    2012-08-21

    detect ), may provide the end-user with the means to appropriately adjust the performance and optimal thresholds for performance by fusing legacy systems...boolean combination of classification systems in ROC space: An application to anomaly detection with HMMs. Pattern Recognition, 43(8), 2732-2752. 10...Shamsuddin, S. (2009). An overview of neural networks use in anomaly intrusion detection systems. Paper presented at the Research and Development (SCOReD

  19. Detection of cracks on concrete surfaces by hyperspectral image processing

    NASA Astrophysics Data System (ADS)

    Santos, Bruno O.; Valença, Jonatas; Júlio, Eduardo

    2017-06-01

    All large infrastructures worldwide must have a suitable monitoring and maintenance plan, aiming to evaluate their behaviour and schedule timely interventions. In the particular case of concrete infrastructures, the detection and characterization of crack patterns is a major indicator of their structural response. In this scope, methods based on image processing have been applied, usually focusing on image binarization followed by mathematical morphology to identify cracks on the concrete surface. In most cases, publications focus on restricted areas of concrete surfaces and on a single crack. On-site, the methods and algorithms have to deal with several factors that interfere with the results, namely dirt and biological colonization. Thus, the automation of a procedure for on-site characterization of crack patterns is of great interest; it may result in an effective tool to support maintenance strategies and intervention planning. This paper presents research based on the analysis and processing of hyperspectral images for detection and classification of cracks on concrete structures. The objective of the study is to evaluate the applicability of several wavelengths of the electromagnetic spectrum for classification of cracks in concrete surfaces. An image survey considering highly discretized wavelengths between 425 nm and 950 nm, with bandwidths of 25 nm, was performed on concrete specimens. The specimens were produced with a crack pattern induced by applying a load under displacement control, and the tests were conducted to simulate usual on-site drawbacks; in this context, the surface of the specimen was subjected to biological colonization (leaves and moss). To evaluate the results and enhance crack patterns, a clustering method, the k-means algorithm, is applied. The research conducted establishes the suitability of combining the k-means clustering algorithm with highly discretized hyperspectral images for crack detection on concrete surfaces, even when cracking is combined with the most common concrete anomalies, namely biological colonization.
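
    A minimal sketch of the clustering step, assuming per-pixel spectra from the 22-band (425-950 nm, 25 nm step) survey are grouped with scikit-learn's KMeans; the synthetic cube and the lowest-reflectance heuristic for picking the crack cluster are illustrative, not the paper's stated rule:

      import numpy as np
      from sklearn.cluster import KMeans

      # Stand-in for a real capture: (rows, cols, 22 bands), one band every
      # 25 nm from 425 to 950 nm as in the survey described above.
      rng = np.random.default_rng(1)
      cube = rng.random((120, 160, 22))
      h, w, b = cube.shape
      pixels = cube.reshape(-1, b)

      km = KMeans(n_clusters=4, n_init=10, random_state=0).fit(pixels)
      labels = km.labels_.reshape(h, w)

      # Heuristic (not the paper's stated rule): cracks are usually the
      # darkest cluster, so take the centroid with the lowest mean reflectance.
      crack_mask = labels == km.cluster_centers_.mean(axis=1).argmin()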

  20. Anomaly detection for machine learning redshifts applied to SDSS galaxies

    NASA Astrophysics Data System (ADS)

    Hoyle, Ben; Rau, Markus Michael; Paech, Kerstin; Bonnett, Christopher; Seitz, Stella; Weller, Jochen

    2015-10-01

    We present an analysis of anomaly detection for machine learning redshift estimation. Anomaly detection allows the removal of poor training examples, which can adversely influence redshift estimates. Anomalous training examples may be photometric galaxies with incorrect spectroscopic redshifts, or galaxies with one or more poorly measured photometric quantities. We select 2.5 million `clean' SDSS DR12 galaxies with reliable spectroscopic redshifts, and 6730 `anomalous' galaxies with spectroscopic redshift measurements which are flagged as unreliable. We contaminate the clean base galaxy sample with galaxies with unreliable redshifts and attempt to recover the contaminating galaxies using the Elliptical Envelope technique. We then train four machine learning architectures for redshift analysis on both the contaminated sample and on the preprocessed `anomaly-removed' sample and measure redshift statistics on a clean validation sample generated without any preprocessing. We find an improvement on all measured statistics of up to 80 per cent when training on the anomaly-removed sample as compared with training on the contaminated sample for each of the machine learning routines explored. We further describe a method to estimate the contamination fraction of a base data sample.
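
    The record names the Elliptical Envelope technique; a minimal sketch with scikit-learn's EllipticEnvelope on stand-in photometric features (the contamination value and the feature set are illustrative, not the paper's):

      import numpy as np
      from sklearn.covariance import EllipticEnvelope

      # Stand-in photometric features per galaxy (e.g. magnitudes, colours).
      rng = np.random.default_rng(2)
      X = np.vstack([rng.normal(size=(5000, 5)),           # clean galaxies
                     rng.normal(4.0, 1.0, (50, 5))])       # contaminants

      env = EllipticEnvelope(contamination=0.01, random_state=0).fit(X)
      keep = env.predict(X) == 1                  # +1 inlier, -1 anomaly
      X_anomaly_removed = X[keep]                 # training sample after removal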

  1. Road Traffic Anomaly Detection via Collaborative Path Inference from GPS Snippets

    PubMed Central

    Wang, Hongtao; Wen, Hui; Yi, Feng; Zhu, Hongsong; Sun, Limin

    2017-01-01

    Road traffic anomaly denotes a road segment that is anomalous in terms of the traffic flow of vehicles. Detecting road traffic anomalies from GPS (Global Positioning System) snippet data is becoming critical in urban computing, since such anomalies often suggest underlying events. However, the noisy and sparse nature of GPS snippet data introduces multiple problems that make the detection of road traffic anomalies very challenging. To address these issues, we propose a two-stage solution which consists of two components: a Collaborative Path Inference (CPI) model and a Road Anomaly Test (RAT) model. The CPI model performs path inference, incorporating both static and dynamic features into a Conditional Random Field (CRF); dynamic context features are learned collaboratively from large GPS snippets via a tensor decomposition technique. The RAT model then calculates the anomalous degree for each road segment from the inferred fine-grained trajectories in given time intervals. We evaluated our method using a large-scale real-world dataset, which includes one month of GPS location data from more than eight thousand taxicabs in Beijing. The evaluation results show the advantages of our method over other baseline techniques. PMID:28282948
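
    The RAT model itself is not specified in this record; as a hedged stand-in, the snippet below scores a road segment's anomalous degree as a robust z-score of its current flow against historical flows for the same time interval:

      import numpy as np

      def anomaly_degree(current_flow, historical_flows):
          """Robust z-score of a segment's current flow against its history
          for the same time-of-day interval (a stand-in for the RAT model,
          which is not specified in this record)."""
          med = np.median(historical_flows)
          mad = np.median(np.abs(historical_flows - med)) or 1.0
          return abs(current_flow - med) / (1.4826 * mad)

      history = np.array([120, 130, 118, 125, 140, 122, 128])  # vehicles/interval
      print(anomaly_degree(40, history))          # sharp drop -> large degree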

  2. Anomaly Detection in the Right Hemisphere: The Influence of Visuospatial Factors

    ERIC Educational Resources Information Center

    Smith, Stephen D.; Dixon, Michael J.; Tays, William J.; Bulman-Fleming, M. Barbara

    2004-01-01

    Previous research with both brain-damaged and neurologically intact populations has demonstrated that the right cerebral hemisphere (RH) is superior to the left cerebral hemisphere (LH) at detecting anomalies (or incongruities) in objects (Ramachandran, 1995; Smith, Tays, Dixon, & Bulman-Fleming, 2002). The current research assesses whether the RH…

  3. A Semiparametric Model for Hyperspectral Anomaly Detection

    DTIC Science & Technology

    2012-01-01

    treeline ) in the presence of natural background clutter (e.g., trees, dirt roads, grasses). Each target consists of about 7 × 4 pixels, and each pixel...vehicles near the treeline in Cube 1 (Figure 1) constitutes the target set, but, since anomaly detectors are not designed to detect a particular target

  4. Experimental investigations on airborne gravimetry based on compressed sensing.

    PubMed

    Yang, Yapeng; Wu, Meiping; Wang, Jinling; Zhang, Kaidong; Cao, Juliang; Cai, Shaokun

    2014-03-18

    Gravity surveys are an important research topic in geophysics and geodynamics. This paper investigates a method for high accuracy large scale gravity anomaly data reconstruction. Based on the airborne gravimetry technology, a flight test was carried out in China with the strap-down airborne gravimeter (SGA-WZ) developed by the Laboratory of Inertial Technology of the National University of Defense Technology. Taking into account the sparsity of airborne gravimetry by the discrete Fourier transform (DFT), this paper proposes a method for gravity anomaly data reconstruction using the theory of compressed sensing (CS). The gravity anomaly data reconstruction is an ill-posed inverse problem, which can be transformed into a sparse optimization problem. This paper uses the zero-norm as the objective function and presents a greedy algorithm called Orthogonal Matching Pursuit (OMP) to solve the corresponding minimization problem. The test results have revealed that the compressed sampling rate is approximately 14%, the standard deviation of the reconstruction error by OMP is 0.03 mGal and the signal-to-noise ratio (SNR) is 56.48 dB. In contrast, the standard deviation of the reconstruction error by the existing nearest-interpolation method (NIPM) is 0.15 mGal and the SNR is 42.29 dB. These results have shown that the OMP algorithm can reconstruct the gravity anomaly data with higher accuracy and fewer measurements.
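
    A minimal sketch of the reconstruction idea using scikit-learn's OrthogonalMatchingPursuit, with a real sine dictionary standing in for the paper's DFT sparsity basis and roughly the reported 14% sampling rate (signal, sizes, and the sparsity level are illustrative):

      import numpy as np
      from sklearn.linear_model import OrthogonalMatchingPursuit

      n = 512
      t = np.arange(n)
      x = (np.sin(2 * np.pi * 7 * t / n) + 0.5 * np.sin(2 * np.pi * 23 * t / n)
           + 0.2 * np.sin(2 * np.pi * 41 * t / n))       # sparse in frequency

      rng = np.random.default_rng(3)
      m = int(0.14 * n)                                  # ~14% sampling rate
      rows = np.sort(rng.choice(n, size=m, replace=False))
      y = x[rows]                                        # retained measurements

      freqs = np.arange(1, n // 2)
      Psi = np.sin(2 * np.pi * np.outer(t, freqs) / n)   # sine dictionary
      A = Psi[rows, :]                                   # sensing matrix

      omp = OrthogonalMatchingPursuit(n_nonzero_coefs=10,
                                      fit_intercept=False).fit(A, y)
      x_hat = Psi @ omp.coef_
      print("RMS reconstruction error:", np.sqrt(np.mean((x_hat - x) ** 2)))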

  5. Experimental Investigations on Airborne Gravimetry Based on Compressed Sensing

    PubMed Central

    Yang, Yapeng; Wu, Meiping; Wang, Jinling; Zhang, Kaidong; Cao, Juliang; Cai, Shaokun

    2014-01-01

    Gravity surveys are an important research topic in geophysics and geodynamics. This paper investigates a method for high accuracy large scale gravity anomaly data reconstruction. Based on the airborne gravimetry technology, a flight test was carried out in China with the strap-down airborne gravimeter (SGA-WZ) developed by the Laboratory of Inertial Technology of the National University of Defense Technology. Taking into account the sparsity of airborne gravimetry by the discrete Fourier transform (DFT), this paper proposes a method for gravity anomaly data reconstruction using the theory of compressed sensing (CS). The gravity anomaly data reconstruction is an ill-posed inverse problem, which can be transformed into a sparse optimization problem. This paper uses the zero-norm as the objective function and presents a greedy algorithm called Orthogonal Matching Pursuit (OMP) to solve the corresponding minimization problem. The test results have revealed that the compressed sampling rate is approximately 14%, the standard deviation of the reconstruction error by OMP is 0.03 mGal and the signal-to-noise ratio (SNR) is 56.48 dB. In contrast, the standard deviation of the reconstruction error by the existing nearest-interpolation method (NIPM) is 0.15 mGal and the SNR is 42.29 dB. These results have shown that the OMP algorithm can reconstruct the gravity anomaly data with higher accuracy and fewer measurements. PMID:24647125

  6. A case study to detect the leakage of underground pressureless cement sewage water pipe using GPR, electrical, and chemical data.

    PubMed

    Liu, Guanqun; Jia, Yonggang; Liu, Hongjun; Qiu, Hanxue; Qiu, Dongling; Shan, Hongxian

    2002-03-01

    The exploration and determination of leakage from underground pressureless nonmetallic pipes is difficult to deal with. This paper introduces a comprehensive method combining Ground Penetrating Radar (GPR), electric potential survey, and geochemical survey for leakage detection of an underground pressureless nonmetallic sewage pipe. Theoretically, within the influencing scope of a leakage spot, the obvious changes in the electromagnetic properties and the physical-chemical properties of the underground media will be reflected as anomalies in GPR and electrical survey plots. GPR and electrical surveys have the advantage of being fast and accurate in delimiting the anomaly scope, and in-situ analysis of the geophysical surveys can guide the geochemical survey. Water and soil sampling and analysis can then provide the evidence for judging whether an anomaly is caused by pipe leakage. On the basis of previous tests and practical surveys, the GPR waveforms, electric potential curves, contour maps, and chemical survey results are all classified into three types according to the extent or indexes of the anomalies, in order to find the leakage spots. When all three survey methods show type I anomalies at an anomalous spot, that spot is considered the most probable leakage location; otherwise, it is downgraded to a lower-rank suspect point. The suspect leakage spots should be confirmed by referring to the site conditions, because some anomalies are caused by other factors. The excavation afterward proved that the method of determining suspected locations by anomaly type is effective and economic. The comprehensive method combining GPR, electric potential survey, and geochemical survey is thus an effective, fast, and accurate approach to leakage detection for underground nonmetallic pressureless pipes.

  7. Cost Analysis of Following Up Incomplete Low-Risk Fetal Anatomy Ultrasounds.

    PubMed

    O'Brien, Karen; Shainker, Scott A; Modest, Anna M; Spiel, Melissa H; Resetkova, Nina; Shah, Neel; Hacker, Michele R

    2017-03-01

    To examine the clinical utility and cost of follow-up ultrasounds performed as a result of suboptimal views at the time of initial second-trimester ultrasound in a cohort of low-risk pregnant women. We conducted a retrospective cohort study of women at low risk for fetal structural anomalies who had second-trimester ultrasounds at 16 to less than 24 weeks of gestation from 2011 to 2013. We determined the probability of women having follow-up ultrasounds as a result of suboptimal views at the time of the initial second-trimester ultrasound, and calculated the probability of detecting an anomaly on follow-up ultrasound. These probabilities were used to estimate the national cost of our current ultrasound practice, and the cost to identify one fetal anomaly on follow-up ultrasound. During the study period, 1,752 women met inclusion criteria. Four fetuses (0.23% [95% CI 0.06-0.58]) were found to have anomalies at the initial ultrasound. Because of suboptimal views, 205 women (11.7%) returned for a follow-up ultrasound, and one (0.49% [95% CI 0.01-2.7]) anomaly was detected. Two women (0.11%) still had suboptimal views and returned for an additional follow-up ultrasound, with no anomalies detected. When the incidence of incomplete ultrasounds was applied to a similar low-risk national cohort, the annual cost of these follow-up scans was estimated at $85,457,160. In our cohort, the cost to detect an anomaly on follow-up ultrasound was approximately $55,000. The clinical yield of performing follow-up ultrasounds because of suboptimal views on low-risk second-trimester ultrasounds is low. Since so few fetal abnormalities were identified on follow-up scans, this added cost and patient burden may not be warranted. © 2016 Wiley Periodicals, Inc.
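
    A short worked calculation, under an assumed per-scan cost (the paper's actual fee schedule is not quoted in this record), reproduces the shape of the cost-per-detected-anomaly estimate:

      # Hypothetical per-scan cost; the paper's actual fee schedule is not
      # quoted in this record.
      cost_per_followup_usd = 268.0                # assumed
      followups, anomalies_found = 205, 1
      print(followups * cost_per_followup_usd / anomalies_found)   # ~55,000 USD per anomaly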

  8. Machine intelligence-based decision-making (MIND) for automatic anomaly detection

    NASA Astrophysics Data System (ADS)

    Prasad, Nadipuram R.; King, Jason C.; Lu, Thomas

    2007-04-01

    Any event deemed as being out-of-the-ordinary may be called an anomaly. Anomalies, by definition, are events that occur spontaneously with no prior indication of their existence or appearance. The effects of anomalies are typically unknown until they actually occur, and they aggregate over time to show a noticeable change from the original behavior. Such an evolved behavior would in general be very difficult to correct unless the anomalous event that caused it can be detected early and any consequence attributed to the specific anomaly. Substantial time and effort are required to back-track the cause of abnormal behavior and to recreate the event sequence leading to it. There is therefore a critical need to automatically detect anomalous behavior as and when it occurs, and to do so with the operator in the loop. Human-machine interaction results in better machine learning and a better decision-support mechanism. This is the fundamental concept of intelligent control, where machine learning is enhanced by interaction with human operators, and vice versa. The paper discusses a revolutionary framework for the characterization, detection, identification, learning, and modeling of anomalous behavior in observed phenomena arising from a large class of unknown and uncertain dynamical systems.

  9. Cross-linguistic variation in the neurophysiological response to semantic processing: Evidence from anomalies at the borderline of awareness

    PubMed Central

    Tune, Sarah; Schlesewsky, Matthias; Small, Steven L.; Sanford, Anthony J.; Bohan, Jason; Sassenhagen, Jona; Bornkessel-Schlesewsky, Ina

    2014-01-01

    The N400 event-related brain potential (ERP) has played a major role in the examination of how the human brain processes meaning. For current theories of the N400, classes of semantic inconsistencies which do not elicit N400 effects have proven particularly influential. Semantic anomalies that are difficult to detect are a case in point (“borderline anomalies”, e.g. “After an air crash, where should the survivors be buried?”), engendering a late positive ERP response but no N400 effect in English (Sanford, Leuthold, Bohan, & Sanford, 2011). In three auditory ERP experiments, we demonstrate that this result is subject to cross-linguistic variation. In a German version of Sanford and colleagues' experiment (Experiment 1), detected borderline anomalies elicited both N400 and late positivity effects compared to control stimuli or to missed borderline anomalies. Classic easy-to-detect semantic (non-borderline) anomalies showed the same pattern as in English (N400 plus late positivity). The cross-linguistic difference in the response to borderline anomalies was replicated in two additional studies with a slightly modified task (Experiment 2a: German; Experiment 2b: English), with a reliable LANGUAGE × ANOMALY interaction for the borderline anomalies confirming that the N400 effect is subject to systematic cross-linguistic variation. We argue that this variation results from differences in the language-specific default weighting of top-down and bottom-up information, concluding that N400 amplitude reflects the interaction between the two information sources in the form-to-meaning mapping. PMID:24447768

  10. Sub-surface defects detection of by using active thermography and advanced image edge detection

    NASA Astrophysics Data System (ADS)

    Tse, Peter W.; Wang, Gaochao

    2017-05-01

    Active or pulsed thermography is a popular non-destructive testing (NDT) tool for inspecting the integrity and anomalies of industrial equipment. One of the recent research trends in using active thermography is to automate the process of detecting hidden defects. To date, human effort has still been needed to adjust the temperature intensity of the thermo-camera in order to visually observe the difference in cooling rates between a normal target and one with a sub-surface crack inside it. To avoid tedious human-visual inspection and minimize human-induced error, this paper reports the design of an automatic method that is capable of detecting subsurface defects. The method combines active thermography, edge detection from machine vision, and a smart algorithm. An infrared thermo-camera was used to capture a series of temporal pictures after slightly heating up the inspected target with flash lamps. The Canny edge detector was then employed to automatically extract defect-related images from the captured pictures: the temporal pictures were preprocessed with the Canny edge detector, and a smart algorithm was used to reconstruct the whole sequence of image signals. During this process, noise and irrelevant backgrounds in the pictures were removed; consequently, the contrast of the edges of defective areas was highlighted. The designed automatic method was verified on real pipe specimens that contain sub-surface cracks. With this automatic method, the edges of cracks can be revealed visually without manual adjustment of the thermo-camera settings, avoiding the tedious process of manually adjusting the colour contrast and pixel intensity to reveal defects.
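
    A hedged sketch of the edge-detection step with OpenCV: difference each post-flash thermogram against the pre-heating reference, run Canny on the smoothed difference, and keep only edges that persist across the sequence. The synthetic frames, thresholds, and persistence fraction are all illustrative:

      import cv2
      import numpy as np

      # Synthetic thermogram sequence: uniform cool-down plus a strip that
      # cools more slowly (the sub-surface defect); values are illustrative.
      rng = np.random.default_rng(5)
      ref = rng.normal(100, 2, (240, 320)).clip(0, 255).astype(np.uint8)
      acc = np.zeros(ref.shape, np.float32)
      n_frames = 40
      for k in range(1, n_frames):
          f = ref.astype(np.float32) + 30 * np.exp(-k / 10)   # global cooling
          f[100:104, 60:200] += 60 * np.exp(-k / 60)          # defect cools slower
          frame = np.clip(f, 0, 255).astype(np.uint8)

          diff = cv2.absdiff(frame, ref)                # cooling-rate contrast
          diff = cv2.GaussianBlur(diff, (5, 5), 0)      # suppress noise
          acc += cv2.Canny(diff, 30, 90).astype(np.float32) / 255.0

      # Edges that persist across the sequence are kept; transient noise is not.
      defect_edges = ((acc > 0.3 * n_frames) * 255).astype(np.uint8)
      cv2.imwrite("defect_edges.png", defect_edges)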

  11. Determination of Gradients of the Earth's Magnetic Field from the Measurements of the Satellites and Inversion of the Kursk Magnetic Anomaly

    NASA Technical Reports Server (NTRS)

    Karoly, Kis; Taylor, Patrick T.; Geza, Wittmann

    2014-01-01

    We computed magnetic field gradients at satellite altitude over Europe, with emphasis on the Kursk Magnetic Anomaly (KMA). They were calculated using the CHAMP satellite total magnetic anomalies. Our computations were done to determine how the magnetic anomaly data from the new ESA/Swarm satellites could be utilized to determine the structure of the magnetization of the Earth's crust, especially in the region of the KMA; the ten years of CHAMP data could be used to simulate the Swarm data. An initial East magnetic anomaly gradient map of Europe was computed, and subsequently the North, East, and Vertical magnetic gradients for the KMA region were calculated. The vertical gradient of the KMA was determined using Hilbert transforms. Inversion of the total KMA was performed using Simplex and Simulated Annealing algorithms. Our resulting inversion depth model is a horizontal quadrangle with upper boundaries at 300-329 km and lower boundaries at 331-339 km.

  12. Discrepancy of cytogenetic analysis in Western and eastern Taiwan.

    PubMed

    Chang, Yu-Hsun; Chen, Pui-Yi; Li, Tzu-Ying; Yeh, Chung-Nan; Li, Yi-Shian; Chu, Shao-Yin; Lee, Ming-Liang

    2013-06-01

    This study aimed at investigating the results of second-trimester amniocyte karyotyping in western and eastern Taiwan, and identifying any regional differences in the prevalence of fetal chromosomal anomalies. From 2004 to 2009, pregnant women who underwent amniocentesis in their second trimester at three hospitals in western Taiwan and at four hospitals in eastern Taiwan were included. All the cytogenetic analyses of cultured amniocytes were performed in the cytogenetics laboratory of the Genetic Counseling Center of Hualien Buddhist Tzu Chi General Hospital. We used the chi-square test, Student t test, and Mann-Whitney U test to evaluate the variants of clinical indications, amniocyte karyotyping results, and prevalence and types of chromosomal anomalies in western and eastern Taiwan. During the study period, 3573 samples, 1990 (55.7%) from western Taiwan and 1583 (44.3%) from eastern Taiwan, were collected and analyzed. The main indication for amniocyte karyotyping was advanced maternal age (69.0% in western Taiwan, 67.1% in eastern Taiwan). The detection rates of chromosomal anomalies by amniocyte karyotyping in eastern Taiwan (45/1582, 2.8%) did not differ significantly from that in western Taiwan (42/1989, 2.1%) (p = 1.58). Mothers who had abnormal ultrasound findings and histories of familial hereditary diseases or chromosomal anomalies had higher detection rates of chromosomal anomalies (9.3% and 7.2%, respectively). The detection rate of autosomal anomalies was higher in eastern Taiwan (93.3% vs. 78.6%, p = 0.046), but the detection rate of sex-linked chromosomal anomalies was higher in western Taiwan (21.4% vs. 6.7%, p = 0.046). We demonstrated regional differences in second-trimester amniocyte karyotyping results and established a database of common chromosomal anomalies that could be useful for genetic counseling, especially in eastern Taiwan. Copyright © 2012. Published by Elsevier B.V.

  13. An investigation of thermal anomalies in the Central American volcanic chain and evaluation of the utility of thermal anomaly monitoring in the prediction of volcanic eruptions. [Central America

    NASA Technical Reports Server (NTRS)

    Stoiber, R. E. (Principal Investigator); Rose, W. I., Jr.

    1975-01-01

    The author has identified the following significant results. Ground truth data collection proves that significant anomalies exist at 13 volcanoes within the test site of Central America. The dimensions and temperature contrast of these anomalies are large enough to be detected by the Skylab 192 instrument. The dimensions and intensity of thermal anomalies have changed at most of these volcanoes during the Skylab mission.

  14. System and method for anomaly detection

    DOEpatents

    Scherrer, Chad

    2010-06-15

    A system and method for detecting one or more anomalies in a plurality of observations is provided. In one illustrative embodiment, the observations are real-time network observations collected from a stream of network traffic. The method includes performing a discrete decomposition of the observations, and introducing derived variables to increase storage and query efficiencies. A mathematical model, such as a conditional independence model, is then generated from the formatted data. The formatted data is also used to construct frequency tables which maintain an accurate count of specific variable occurrence as indicated by the model generation process. The formatted data is then applied to the mathematical model to generate scored data. The scored data is then analyzed to detect anomalies.
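
    The patent's exact model is not given in this record; a hedged sketch of the frequency-table idea scores each observation by its log-probability under a conditional-independence (naive-Bayes-style) model, so rare variable combinations score low:

      from collections import Counter
      import math

      def build_tables(observations):
          """One frequency table per categorical field."""
          tables = [Counter() for _ in observations[0]]
          for obs in observations:
              for table, value in zip(tables, obs):
                  table[value] += 1
          return tables, len(observations)

      def score(obs, tables, n):
          """Log-probability under independence; low scores = anomalous."""
          return sum(math.log((t[v] + 1) / (n + len(t)))   # Laplace smoothing
                     for t, v in zip(tables, obs))

      traffic = [("tcp", 80, "S"), ("tcp", 80, "S"), ("udp", 53, "-"),
                 ("tcp", 443, "S"), ("tcp", 31337, "R")]   # toy observations
      tables, n = build_tables(traffic)
      for obs in traffic:
          print(obs, round(score(obs, tables, n), 2))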

  15. Survey of Anomaly Detection Methods

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ng, B

    This survey defines the problem of anomaly detection and provides an overview of existing methods. The methods are categorized into two general classes: generative and discriminative. A generative approach involves building a model that represents the joint distribution of the input features and the output labels of system behavior (e.g., normal or anomalous), then applies the model to formulate a decision rule for detecting anomalies. On the other hand, a discriminative approach aims directly to find the decision rule, with the smallest error rate, that distinguishes between normal and anomalous behavior. For each approach, we give an overview of popular techniques and provide references to state-of-the-art applications.

  16. A primitive study on unsupervised anomaly detection with an autoencoder in emergency head CT volumes

    NASA Astrophysics Data System (ADS)

    Sato, Daisuke; Hanaoka, Shouhei; Nomura, Yukihiro; Takenaga, Tomomi; Miki, Soichiro; Yoshikawa, Takeharu; Hayashi, Naoto; Abe, Osamu

    2018-02-01

    Purpose: The target disorders of emergency head CT are wide-ranging. Therefore, people working in an emergency department desire a computer-aided detection system for general disorders. In this study, we proposed an unsupervised anomaly detection method for emergency head CT using an autoencoder and evaluated its anomaly detection performance. Methods: We used a 3D convolutional autoencoder (3D-CAE), which contains 11 layers in the convolution block and 6 layers in the deconvolution block. In the training phase, we trained the 3D-CAE using 10,000 3D patches extracted from 50 normal cases. In the test phase, we calculated the abnormality of each voxel in 38 emergency head CT volumes (22 abnormal cases and 16 normal cases) and evaluated the likelihood of lesion existence. Results: Our method achieved a sensitivity of 68% and a specificity of 88%, with an area under the receiver operating characteristic curve of 0.87, showing that the method is moderately accurate in distinguishing normal CT cases from abnormal ones. Conclusion: Our method has potential for anomaly detection in emergency head CT.
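
    As a compact stand-in for the paper's 3D convolutional autoencoder, the sketch below trains a small MLP autoencoder (a scikit-learn regressor fit on X -> X) on stand-in "normal" patches and uses reconstruction error as the abnormality score; architecture and data are illustrative:

      import numpy as np
      from sklearn.neural_network import MLPRegressor

      rng = np.random.default_rng(3)
      normal = rng.normal(size=(5000, 64))          # stand-in "normal" patches

      # Autoencoder: a bottlenecked regressor trained to reproduce its input.
      ae = MLPRegressor(hidden_layer_sizes=(32, 8, 32), max_iter=300,
                        random_state=0).fit(normal, normal)

      def abnormality(x):
          """Reconstruction error; large for inputs unlike the training data."""
          return np.mean((ae.predict(x) - x) ** 2, axis=1)

      print(abnormality(rng.normal(size=(3, 64))))         # small-ish
      print(abnormality(rng.normal(3.0, 1.0, (3, 64))))    # larger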

  17. Effects of Sampling and Spatio/Temporal Granularity in Traffic Monitoring on Anomaly Detectability

    NASA Astrophysics Data System (ADS)

    Ishibashi, Keisuke; Kawahara, Ryoichi; Mori, Tatsuya; Kondoh, Tsuyoshi; Asano, Shoichiro

    We quantitatively evaluate how sampling and spatio/temporal granularity in traffic monitoring affect the detectability of anomalous traffic. Those parameters also affect the monitoring burden, so network operators face a trade-off between the monitoring burden and detectability and need to know the optimal parameter values. We derive equations to calculate the false positive ratio and false negative ratio for given values of the sampling rate, granularity, statistics of normal traffic, and volume of anomalies to be detected. Specifically, assuming that the normal traffic has a Gaussian distribution, which is parameterized by its mean and standard deviation, we analyze how sampling and monitoring granularity change these distribution parameters. This analysis is based on observation of the backbone traffic, which exhibits spatially uncorrelated and temporally long-range dependence. Then we derive the equations for detectability. With those equations, we can answer practical questions that arise in actual network operations: what sampling rate to set to find a given volume of anomaly, or, if that sampling rate is too high for actual operation, what granularity is optimal for finding the anomaly at a given lower limit of the sampling rate.
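
    The paper's equations are not reproduced in this record; a hedged numeric sketch of the same trade-off treats packet sampling at rate p as binomial thinning of a Gaussian volume count and reads the false-positive and false-negative ratios off the normal tails (a common approximation, not the paper's exact derivation):

      import numpy as np
      from scipy.stats import norm

      def rates(mu, sigma, anomaly, p, theta):
          """FPR/FNR for a volume threshold theta applied to counts sampled
          at rate p; normal traffic per interval is N(mu, sigma^2), and
          thinning scales the mean by p and adds binomial variance."""
          m0 = p * mu
          s0 = np.sqrt(p**2 * sigma**2 + p * (1 - p) * mu)
          m1 = p * (mu + anomaly)
          s1 = np.sqrt(p**2 * sigma**2 + p * (1 - p) * (mu + anomaly))
          fpr = norm.sf(theta, m0, s0)      # normal traffic exceeds threshold
          fnr = norm.cdf(theta, m1, s1)     # anomalous traffic stays below it
          return fpr, fnr

      for p in (1.0, 0.1, 0.01):            # detectability degrades as p drops
          print(p, rates(mu=1e5, sigma=5e3, anomaly=3e4, p=p, theta=p * 1.2e5))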

  18. A machine independent expert system for diagnosing environmentally induced spacecraft anomalies

    NASA Technical Reports Server (NTRS)

    Rolincik, Mark J.

    1991-01-01

    A new rule-based, machine-independent analytical tool for diagnosing spacecraft anomalies, the EnviroNET expert system, was developed. Expert systems provide an effective method for storing knowledge, allow computers to sift through large amounts of data pinpointing significant parts, and, most importantly, use heuristics in addition to algorithms, which allow approximate reasoning and inference and the ability to attack problems that are not rigidly defined. The EnviroNET expert system knowledge base currently contains over two hundred rules, and links to databases which include past environmental data, satellite data, and previously known anomalies. The environmental causes considered are bulk charging, single event upsets (SEU), surface charging, and total radiation dose.

  19. Problematic projection to the in-sample subspace for a kernelized anomaly detector

    DOE PAGES

    Theiler, James; Grosklos, Guen

    2016-03-07

    We examine the properties and performance of kernelized anomaly detectors, with an emphasis on the Mahalanobis-distance-based kernel RX (KRX) algorithm. Although the detector generally performs well for high-bandwidth Gaussian kernels, it exhibits problematic (in some cases, catastrophic) performance for distances that are large compared to the bandwidth. By comparing KRX to two other anomaly detectors, we can trace the problem to a projection in feature space, which arises when a pseudoinverse is used on the covariance matrix in that feature space. Here, we show that a regularized variant of KRX overcomes this difficulty and achieves superior performance over a wide range of bandwidths.
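
    A hedged sketch of a regularized kernel-RX score, following the usual construction (centered kernel matrix, Mahalanobis-like quadratic form) with a ridge term in place of the pseudoinverse; the paper's precise regularized variant may differ:

      import numpy as np

      def rbf(X, Y, bw):
          d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
          return np.exp(-d2 / (2 * bw ** 2))

      def krx_scores(background, test, bw=2.0, lam=1e-3):
          """Regularized kernel RX: score = k~^T (Kc + lam*I)^-2 k~,
          with Kc the centered kernel matrix of the background."""
          n = len(background)
          K = rbf(background, background, bw)
          H = np.eye(n) - np.ones((n, n)) / n          # centering matrix
          Kc = H @ K @ H
          kx = rbf(background, test, bw)               # n x m test kernel maps
          k_cent = kx - kx.mean(axis=0) - K.mean(axis=1, keepdims=True) + K.mean()
          M = np.linalg.inv(Kc + lam * np.eye(n))      # ridge, not pseudoinverse
          return np.einsum("ij,ij->j", k_cent, M @ (M @ k_cent))

      rng = np.random.default_rng(6)
      bg = rng.normal(size=(200, 5))
      probes = np.vstack([rng.normal(size=(3, 5)), rng.normal(2.0, 1.0, (3, 5))])
      print(krx_scores(bg, probes))    # anomalous rows typically score higher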

  20. Method for Real-Time Model Based Structural Anomaly Detection

    NASA Technical Reports Server (NTRS)

    Urnes, James M., Sr. (Inventor); Smith, Timothy A. (Inventor); Reichenbach, Eric Y. (Inventor)

    2015-01-01

    A system and methods for real-time, model-based vehicle structural anomaly detection are disclosed. A real-time measurement corresponding to a location on a vehicle structure during operation of the vehicle is received, and the measurement is compared to expected operation data for the location to provide a modeling error signal. The statistical significance of the modeling error signal is calculated to provide an error significance, and the persistence of the error significance is determined. A structural anomaly is indicated if the persistence exceeds a persistence threshold value.
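
    A minimal sketch of the detection logic described above, a residual significance test followed by a persistence threshold (the z-score and persistence values are illustrative, not the patent's):

      import numpy as np

      def detect(measured, predicted, sigma, z_crit=3.0, persist=5):
          """Return the first sample index at which the modeling error has
          stayed significant for `persist` consecutive samples, else None."""
          z = np.abs(measured - predicted) / sigma      # error significance
          run = 0
          for i, significant in enumerate(z > z_crit):
              run = run + 1 if significant else 0
              if run >= persist:                        # persistence threshold
                  return i
          return None

      rng = np.random.default_rng(9)
      pred = np.zeros(200)
      meas = rng.normal(0, 1.0, 200)
      meas[120:] += 5.0                                 # structural change
      print(detect(meas, pred, sigma=1.0))              # ~124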

  1. Formal Methods for Information Protection Technology. Task 2: Mathematical Foundations, Architecture and Principles of Implementation of Multi-Agent Learning Components for Attack Detection in Computer Networks. Part 2

    DTIC Science & Technology

    2003-11-01

    Lafayette, IN 47907. [Lane et al-97b] T. Lane and C . E. Brodley. Sequence matching and learning in anomaly detection for computer security. Proceedings of...Mining, pp 259-263. 1998. [Lane et al-98b] T. Lane and C . E. Brodley. Temporal sequence learning and data reduction for anomaly detection ...W. Lee, C . Park, and S. Stolfo. Towards Automatic Intrusion Detection using NFR. 1st USENIX Workshop on Intrusion Detection and Network Monitoring

  2. SCADA Protocol Anomaly Detection Utilizing Compression (SPADUC) 2013

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gordon Rueff; Lyle Roybal; Denis Vollmer

    2013-01-01

    There is a significant need to protect the nation’s energy infrastructures from malicious actors using cyber methods. Supervisory, Control, and Data Acquisition (SCADA) systems may be vulnerable due to the insufficient security implemented during the design and deployment of these control systems. This is particularly true in older legacy SCADA systems that are still commonly in use. The purpose of INL’s research on the SCADA Protocol Anomaly Detection Utilizing Compression (SPADUC) project was to determine if and how data compression techniques could be used to identify and protect SCADA systems from cyber attacks. Initially, the concept was centered on how to train a compression algorithm to recognize normal control system traffic versus hostile network traffic. Because large portions of the TCP/IP message traffic (called packets) are repetitive, the concept of using compression techniques to differentiate “non-normal” traffic was proposed. In this manner, malicious SCADA traffic could be identified at the packet level prior to completing its payload. Previous research has shown that SCADA network traffic has traits desirable for compression analysis. This work investigated three different approaches to identify malicious SCADA network traffic using compression techniques. The preliminary analyses and results presented herein are clearly able to differentiate normal from malicious network traffic at the packet level at a very high confidence level for the conditions tested. Additionally, the master dictionary approach used in this research appears to initially provide a meaningful way to categorize and compare packets within a communication channel.
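
    A toy version of the dictionary idea using Python's zlib preset-dictionary support: payloads resembling the "normal" traffic used to prime the dictionary compress well, while unfamiliar payloads do not (the payloads themselves are illustrative, not real SCADA traffic):

      import os
      import zlib

      def ratio(payload: bytes, zdict: bytes) -> float:
          """Compressed/original size using raw deflate primed with a
          dictionary of normal traffic; familiar payloads compress well."""
          c = zlib.compressobj(level=9, wbits=-15, zdict=zdict)
          out = c.compress(payload) + c.flush()
          return len(out) / len(payload)

      normal = b"READ COILS 0001 0010 " * 200       # toy SCADA-like traffic
      zdict = normal[-32768:]                       # "master dictionary" (toy)

      print(ratio(b"READ COILS 0003 0008 " * 5, zdict))   # low: looks normal
      print(ratio(os.urandom(100), zdict))                # near 1: suspicious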

  3. PLAT: An Automated Fault and Behavioural Anomaly Detection Tool for PLC Controlled Manufacturing Systems.

    PubMed

    Ghosh, Arup; Qin, Shiming; Lee, Jooyeoun; Wang, Gi-Nam

    2016-01-01

    Operational faults and behavioural anomalies associated with PLC control processes often take place in a manufacturing system, and their real-time identification is necessary in the manufacturing industry. In this paper, we present an automated tool, called the PLC Log-Data Analysis Tool (PLAT), that can detect them by using log-data records of the PLC signals. PLAT automatically creates a nominal model of the PLC control process and employs a novel hash-table-based indexing and searching scheme for this purpose. Our experiments show that PLAT is significantly fast, provides real-time identification of operational faults and behavioural anomalies, and can execute within a small memory footprint. In addition, PLAT can easily handle a large manufacturing system with a reasonable computing configuration and can be installed in parallel with the data-logging system to identify operational faults and behavioural anomalies effectively.
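
    A toy version of the hash-table idea, assuming nominal behaviour is indexed as a set of observed (state, signal-change) pairs and any unseen transition is reported; the keys and windowing are illustrative only:

      # Hash every (state, signal-change) pair seen in nominal PLC logs; at
      # run time, any transition missing from the table is reported.
      nominal_log = [("IDLE", "START"), ("RUN", "CLAMP_ON"), ("RUN", "CLAMP_OFF"),
                     ("RUN", "STOP"), ("IDLE", "START")]
      nominal = set(nominal_log)            # hash-table index of normal behaviour

      def check(live_events):
          return [e for e in live_events if e not in nominal]

      print(check([("RUN", "CLAMP_ON"), ("RUN", "ESTOP")]))  # -> [('RUN', 'ESTOP')]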

  4. PLAT: An Automated Fault and Behavioural Anomaly Detection Tool for PLC Controlled Manufacturing Systems

    PubMed Central

    Ghosh, Arup; Qin, Shiming; Lee, Jooyeoun

    2016-01-01

    Operational faults and behavioural anomalies associated with PLC control processes often take place in a manufacturing system, and their real-time identification is necessary in the manufacturing industry. In this paper, we present an automated tool, called the PLC Log-Data Analysis Tool (PLAT), that can detect them by using log-data records of the PLC signals. PLAT automatically creates a nominal model of the PLC control process and employs a novel hash-table-based indexing and searching scheme for this purpose. Our experiments show that PLAT is significantly fast, provides real-time identification of operational faults and behavioural anomalies, and can execute within a small memory footprint. In addition, PLAT can easily handle a large manufacturing system with a reasonable computing configuration and can be installed in parallel with the data-logging system to identify operational faults and behavioural anomalies effectively. PMID:27974882

  5. Congenital aplastic-hypoplastic lumbar pedicle in infants and young children

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yousefzadeh, D.K.; El-Khoury, G.Y.; Lupetin, A.R.

    1982-01-01

    Nine cases of congenital aplastic-hypoplastic lumbar pedicle (mean age 27 months) are described. Their data are compared to those of 18 other reported cases (mean age 24.7 years) and the following conclusions are made: (1) Almost exclusively, the pedicular defect in infants and young children is due to developmental anomaly rather than destruction by malignancy or infectious processes. (2) This anomaly, we think, is more common than it is believed to be. (3) Unlike adults, infants and young children rarely develop hypertrophy and/or sclerosis of the contralateral pedicle. (4) Detection of pedicular anomaly is more than satisfying a radiographic curiosity and may lead to discovery of other coexisting anomalies. (5) Ultrasonic screening of the patients with congenital pedicular defects may detect the associated genitourinary anomalies, if present, and justify further studies in a selected group of patients.

  6. Machine Learning in Intrusion Detection

    DTIC Science & Technology

    2005-07-01

    machine learning tasks. Anomaly detection provides the core technology for a broad spectrum of security-centric applications. In this dissertation, we examine various aspects of anomaly based intrusion detection in computer security. First, we present a new approach to learn program behavior for intrusion detection. Text categorization techniques are adopted to convert each process to a vector and calculate the similarity between two program activities. Then the k-nearest neighbor classifier is employed to classify program behavior as normal or intrusive. We demonstrate
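
    A minimal sketch of the text-categorization analogy described above: each process trace becomes a TF-IDF vector and a k-nearest-neighbor classifier labels it as normal or intrusive (toy traces and labels, scikit-learn for both steps):

      from sklearn.feature_extraction.text import TfidfVectorizer
      from sklearn.neighbors import KNeighborsClassifier

      # Each "document" is a process trace: its system calls joined as text
      # (toy data, purely illustrative).
      traces = ["open read read write close", "open read write close",
                "open mmap exec socket connect", "fork exec socket send"]
      labels = ["normal", "normal", "intrusive", "intrusive"]

      vec = TfidfVectorizer().fit(traces)
      knn = KNeighborsClassifier(n_neighbors=1).fit(vec.transform(traces), labels)
      print(knn.predict(vec.transform(["open read write write close"])))  # ['normal']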

  7. Observed TEC Anomalies by GNSS Sites Preceding the Aegean Sea Earthquake of 2014

    NASA Astrophysics Data System (ADS)

    Ulukavak, Mustafa; Yalçınkaya, Mualla

    2016-11-01

    In recent years, Total Electron Content (TEC) data obtained from Global Navigation Satellite Systems (GNSS) receivers have been widely used to detect seismo-ionospheric anomalies. In this study, Global Positioning System - Total Electron Content (GPS-TEC) data were used to investigate anomalous ionospheric behavior prior to the 2014 Aegean Sea earthquake (40.305°N 25.453°E, 24 May 2014, 09:25:03 UT, Mw 6.9). Data obtained from three Continuously Operating Reference Stations in Turkey (CORS-TR) and two International GNSS Service (IGS) sites near the epicenter of the earthquake were used to detect ionospheric anomalies before the earthquake. The solar activity index (F10.7) and geomagnetic activity index (Dst), which are both related to space weather conditions, were used to analyze these pre-earthquake ionospheric anomalies. An examination of these indices indicated high solar activity between May 8 and 15, 2014. The first significant increase (positive anomaly) in Vertical Total Electron Content (VTEC) was detected on May 14, 2014, 10 days before the earthquake; this positive anomaly can be attributed to the high solar activity. The indices do not imply high solar or geomagnetic activity after May 15, 2014. Abnormal ionospheric TEC changes (a negative anomaly) were observed at all stations one day before the earthquake. These changes were below the lower bound by approximately 10-20 TEC units (TECU) and may be considered an ionospheric precursor of the 2014 Aegean Sea earthquake.
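
    A hedged sketch of one common bound construction for such TEC studies (the paper's exact bounds are not given in this record): flag days whose VTEC leaves a mean +/- k-sigma envelope computed over the preceding window:

      import numpy as np

      def tec_anomalies(vtec, window=15, k=2.0):
          """Days whose VTEC leaves the mean +/- k*sigma envelope of the
          preceding `window` days (one common bound construction)."""
          hits = []
          for i in range(window, len(vtec)):
              ref = vtec[i - window:i]
              lo, hi = ref.mean() - k * ref.std(), ref.mean() + k * ref.std()
              if not lo <= vtec[i] <= hi:
                  hits.append((i, float(vtec[i] - (hi if vtec[i] > hi else lo))))
          return hits   # (day index, TECU beyond the violated bound)

      rng = np.random.default_rng(8)
      vtec = rng.normal(30.0, 2.0, 40)              # quiet-time VTEC, in TECU
      vtec[25] -= 15.0                              # imposed negative anomaly
      print(tec_anomalies(vtec))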

  8. Eddy-Current Inspection of Ball Bearings

    NASA Technical Reports Server (NTRS)

    Bankston, B.

    1985-01-01

    Custom eddy-current probe locates surface anomalies. Low friction air cushion within cone allows ball to roll easily. Eddy current probe reliably detects surface and near-surface cracks, voids, and material anomalies in bearing balls or other spherical objects. Defects in ball surface detected by probe displayed on CRT and recorded on strip-chart recorder.

  9. Anomaly Detection Techniques for Ad Hoc Networks

    ERIC Educational Resources Information Center

    Cai, Chaoli

    2009-01-01

    Anomaly detection is an important and indispensable aspect of any computer security mechanism. Ad hoc and mobile networks consist of a number of peer mobile nodes that are capable of communicating with each other absent a fixed infrastructure. Arbitrary node movements and lack of centralized control make them vulnerable to a wide variety of…

  10. A Model-Based Anomaly Detection Approach for Analyzing Streaming Aircraft Engine Measurement Data

    NASA Technical Reports Server (NTRS)

    Simon, Donald L.; Rinehart, Aidan W.

    2014-01-01

    This paper presents a model-based anomaly detection architecture designed for analyzing streaming transient aircraft engine measurement data. The technique calculates and monitors residuals between sensed engine outputs and model predicted outputs for anomaly detection purposes. Pivotal to the performance of this technique is the ability to construct a model that accurately reflects the nominal operating performance of the engine. The dynamic model applied in the architecture is a piecewise linear design comprising steady-state trim points and dynamic state space matrices. A simple curve-fitting technique for updating the model trim point information based on steady-state information extracted from available nominal engine measurement data is presented. Results from the application of the model-based approach for processing actual engine test data are shown. These include both nominal fault-free test case data and seeded fault test case data. The results indicate that the updates applied to improve the model trim point information also improve anomaly detection performance. Recommendations for follow-on enhancements to the technique are also presented and discussed.

  11. A Model-Based Anomaly Detection Approach for Analyzing Streaming Aircraft Engine Measurement Data

    NASA Technical Reports Server (NTRS)

    Simon, Donald L.; Rinehart, Aidan Walker

    2015-01-01

    This paper presents a model-based anomaly detection architecture designed for analyzing streaming transient aircraft engine measurement data. The technique calculates and monitors residuals between sensed engine outputs and model predicted outputs for anomaly detection purposes. Pivotal to the performance of this technique is the ability to construct a model that accurately reflects the nominal operating performance of the engine. The dynamic model applied in the architecture is a piecewise linear design comprising steady-state trim points and dynamic state space matrices. A simple curve-fitting technique for updating the model trim point information based on steady-state information extracted from available nominal engine measurement data is presented. Results from the application of the model-based approach for processing actual engine test data are shown. These include both nominal fault-free test case data and seeded fault test case data. The results indicate that the updates applied to improve the model trim point information also improve anomaly detection performance. Recommendations for follow-on enhancements to the technique are also presented and discussed.

  12. On the impact of different volcanic hot spot detection methods on eruption energy quantification

    NASA Astrophysics Data System (ADS)

    Pergola, Nicola; Coviello, Irina; Falconieri, Alfredo; Lacava, Teodosio; Marchese, Francesco; Tramutoli, Valerio

    2016-04-01

    Several studies have shown that sensors like the Advanced Very High Resolution Radiometer (AVHRR) and the Moderate Resolution Imaging Spectroradiometer (MODIS) may be effectively used to identify volcanic hotspots. These sensors in fact offer spectral channels in the Medium Infrared (MIR) and Thermal Infrared (TIR) bands, together with a good compromise between spatial and temporal resolution, suited to studying and monitoring thermal volcanic activity. Many algorithms have been developed to identify volcanic thermal anomalies from space, some of which have been extensively tested in very different geographic areas. In this work, we analyze the volcanic radiative power (VRP), one of the parameters of major interest to volcanologists that may be estimated by satellite. In particular, we compare the radiative power estimations driven by some well-established, state-of-the-art hotspot detection methods (e.g. RSTVOLC, MODVOLC, HOTSAT). Differences in radiative power estimations during recent Mt. Etna (Italy) eruptions are evaluated, assessing how much the VRP retrieved during effusive eruptions is affected by the sensitivity of the hotspot detection methods.
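
    For orientation, one widely used MIR-radiance approximation of radiative power (after Wooster et al., 2003) multiplies the pixel area by roughly 18.9 sr um and the above-background MIR spectral radiance; the values below are illustrative, not from the paper:

      SIGMA_OVER_A = 18.9      # sr um; Stefan-Boltzmann over the MIR power-law constant
      A_pix = 1.0e6            # m^2 (roughly a 1 km MODIS pixel), illustrative
      dL = 2.4 - 0.9           # hotspot minus background MIR radiance, W m-2 sr-1 um-1
      vrp_MW = SIGMA_OVER_A * A_pix * dL / 1e6
      print(f"{vrp_MW:.0f} MW")   # ~28 MW for this toy pixel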

  13. Integrated System for Autonomous Science

    NASA Technical Reports Server (NTRS)

    Chien, Steve; Sherwood, Robert; Tran, Daniel; Cichy, Benjamin; Davies, Ashley; Castano, Rebecca; Rabideau, Gregg; Frye, Stuart; Trout, Bruce; Shulman, Seth

    2006-01-01

    The New Millennium Program Space Technology 6 Project Autonomous Sciencecraft software implements an integrated system for autonomous planning and execution of scientific, engineering, and spacecraft-coordination actions. A prior version of this software was reported in "The TechSat 21 Autonomous Sciencecraft Experiment" (NPO-30784), NASA Tech Briefs, Vol. 28, No. 3 (March 2004), page 33. This software is now in continuous use aboard the Earth Orbiter 1 (EO-1) spacecraft mission and is being adapted for use in the Mars Odyssey and Mars Exploration Rovers missions. This software enables EO-1 to detect and respond to such events of scientific interest as volcanic activity, flooding, and freezing and thawing of water. It uses classification algorithms to analyze imagery onboard to detect changes, including events of scientific interest. Detection of such events triggers acquisition of follow-up imagery. The mission-planning component of the software develops a response plan that accounts for visibility of targets and operational constraints. The plan is then executed under control by a task-execution component of the software that is capable of responding to anomalies.

  14. DELTACON: A Principled Massive-Graph Similarity Function with Attribution

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Koutra, Danai; Shah, Neil; Vogelstein, Joshua T.

    How much did a network change since yesterday? How different is the wiring between Bob's brain (a left-handed male) and Alice's brain (a right-handed female)? Graph similarity with known node correspondence, i.e. the detection of changes in the connectivity of graphs, arises in numerous settings. In this work, we formally state the axioms and desired properties of the graph similarity functions, and evaluate when state-of-the-art methods fail to detect crucial connectivity changes in graphs. We propose DeltaCon, a principled, intuitive, and scalable algorithm that assesses the similarity between two graphs on the same nodes (e.g. employees of a company, customers of a mobile carrier). In our experiments on various synthetic and real graphs we showcase the advantages of our method over existing similarity measures. We also employ DeltaCon to real applications: (a) we classify people to groups of high and low creativity based on their brain connectivity graphs, and (b) do temporal anomaly detection in the who-emails-whom Enron graph.
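
    A hedged sketch of the naive DELTACON0 variant (not the scalable grouped version): compare fast-belief-propagation affinity matrices with a Matusita-style root Euclidean distance:

      import numpy as np

      def affinity(A, eps=0.1):
          """DeltaCon's fast-belief-propagation node affinities:
          S = (I + eps^2 D - eps A)^(-1)."""
          D = np.diag(A.sum(axis=1))
          return np.linalg.inv(np.eye(len(A)) + eps**2 * D - eps * A)

      def deltacon0(A1, A2, eps=0.1):
          """Similarity in (0, 1]; 1 means identical connectivity."""
          S1, S2 = affinity(A1, eps), affinity(A2, eps)
          d = np.sqrt(((np.sqrt(np.abs(S1)) - np.sqrt(np.abs(S2))) ** 2).sum())
          return 1.0 / (1.0 + d)                 # abs guards tiny negatives

      A = np.zeros((4, 4))
      for i, j in [(0, 1), (1, 2), (2, 3)]:      # a 4-node path graph
          A[i, j] = A[j, i] = 1
      B = A.copy(); B[0, 3] = B[3, 0] = 1        # one edge changes
      print(deltacon0(A, A), deltacon0(A, B))    # 1.0, then < 1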

  15. DELTACON: A Principled Massive-Graph Similarity Function with Attribution

    DOE PAGES

    Koutra, Danai; Shah, Neil; Vogelstein, Joshua T.; ...

    2014-05-22

    How much did a network change since yesterday? How different is the wiring between Bob's brain (a left-handed male) and Alice's brain (a right-handed female)? Graph similarity with known node correspondence, i.e. the detection of changes in the connectivity of graphs, arises in numerous settings. In this work, we formally state the axioms and desired properties of the graph similarity functions, and evaluate when state-of-the-art methods fail to detect crucial connectivity changes in graphs. We propose DeltaCon, a principled, intuitive, and scalable algorithm that assesses the similarity between two graphs on the same nodes (e.g. employees of a company, customers of a mobile carrier). In our experiments on various synthetic and real graphs we showcase the advantages of our method over existing similarity measures. We also employ DeltaCon to real applications: (a) we classify people to groups of high and low creativity based on their brain connectivity graphs, and (b) do temporal anomaly detection in the who-emails-whom Enron graph.

  16. Archean Isotope Anomalies as a Window into the Differentiation History of the Earth

    NASA Astrophysics Data System (ADS)

    Wainwright, A. N.; Debaille, V.; Zincone, S. A.

    2018-05-01

    No resolvable µ142Nd anomaly was detected in Paleo-Mesoarchean rocks of the São Francisco and West African cratons. The lack of µ142Nd anomalies outside of North America and Greenland implies the Earth differentiated into at least two distinct domains.

  17. Prototype global burnt area algorithm using the AVHRR-LTDR time series

    NASA Astrophysics Data System (ADS)

    López-Saldaña, Gerardo; Pereira, José Miguel; Aires, Filipe

    2013-04-01

    One of the main limitations of products derived from remotely-sensed data is the length of the data records available for climate studies. The Advanced Very High Resolution Radiometer (AVHRR) long-term data record (LTDR) comprises a daily global atmospherically-corrected surface reflectance dataset at 0.05° spatial resolution and is available for the 1981-1999 time period. Fire is a strong cause of land surface change and greenhouse-gas emissions around the globe, and a global, long-term identification of areas affected by fire is needed to analyze trends and fire-climate relationships. A burnt area algorithm can be seen as solving a change-point detection problem, where biomass burning causes an abrupt change in the surface reflectance. Using the AVHRR-LTDR dataset, a time series of bidirectional reflectance distribution function (BRDF) corrected surface reflectance was generated using the daily observations and constraining the BRDF model inversion with a climatology of BRDF parameters derived from 12 years of MODIS data. The identification of burnt areas was performed using a t-test on the pre- and post-fire reflectance values and a change-point detection algorithm; spectral constraints were then applied to flag changes caused by natural land processes like vegetation seasonality or flooding, and additional temporal constraints were applied focusing on the persistence of the affected areas. Initial results for the year 1998, which was selected because of a positive fire anomaly, show spatio-temporal coherence, but further analysis is required, and a formal, rigorous validation will be performed using burn scars identified from high-resolution datasets.
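
    A minimal sketch of the per-pixel change test, assuming a two-sample t-test on pre- and post-fire reflectance composites plus a drop requirement; the spectral and persistence screens described above would follow (the significance level and data are illustrative):

      import numpy as np
      from scipy.stats import ttest_ind

      def burnt_candidate(pre, post, alpha=0.01):
          """Two-sample t-test on pre- vs post-fire reflectance for one
          pixel; a significant *drop* marks a burn candidate."""
          t, p = ttest_ind(pre, post, equal_var=False)
          return p < alpha and post.mean() < pre.mean()

      pre  = np.array([0.23, 0.25, 0.24, 0.26, 0.25])   # reflectance before
      post = np.array([0.12, 0.11, 0.13, 0.12, 0.12])   # sharp post-fire drop
      print(burnt_candidate(pre, post))                 # True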

  18. Aircraft Fault Detection and Classification Using Multi-Level Immune Learning Detection

    NASA Technical Reports Server (NTRS)

    Wong, Derek; Poll, Scott; KrishnaKumar, Kalmanje

    2005-01-01

    This work is an extension of a recently developed software tool called MILD (Multi-level Immune Learning Detection), which implements a negative selection algorithm for anomaly and fault detection that is inspired by the human immune system. The immunity-based approach can detect a broad spectrum of known and unforeseen faults. We extend MILD by applying a neural network classifier to identify the pattern of fault detectors that are activated during fault detection. Consequently, MILD now performs fault detection and identification of the system under investigation. This paper describes the application of MILD to detect and classify faults of a generic transport aircraft augmented with an intelligent flight controller. The intelligent control architecture is designed to accommodate faults without the need to explicitly identify them. Adding knowledge about the existence and type of a fault will improve the handling qualities of a degraded aircraft and impact tactical and strategic maneuvering decisions. In addition, providing fault information to the pilot is important for maintaining situational awareness so that he can avoid performing an action that might lead to unexpected behavior - e.g., an action that exceeds the remaining control authority of the damaged aircraft. We discuss the detection and classification results of simulated failures of the aircraft's control system and show that MILD is effective at determining the problem with low false alarm and misclassification rates.
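
    A hedged sketch of the negative-selection idea behind MILD: candidate detectors are kept only if they do not match the "self" set of nominal sensor vectors, and a sample is flagged when any surviving detector covers it (radii, counts, and data are illustrative):

      import numpy as np

      rng = np.random.default_rng(4)
      self_set = rng.normal(size=(500, 3))       # nominal sensor vectors ("self")

      r = 0.9                                    # detector radius (illustrative)
      detectors = []
      while len(detectors) < 500:
          cand = rng.uniform(-4, 4, size=3)
          # negative selection: keep only candidates that do NOT match self
          if np.linalg.norm(self_set - cand, axis=1).min() > r:
              detectors.append(cand)
      detectors = np.array(detectors)

      def is_fault(x):
          """A sample is flagged when some detector covers it."""
          return bool(np.linalg.norm(detectors - x, axis=1).min() < r)

      print(is_fault(np.zeros(3)), is_fault(np.array([3.0, -3.0, 3.0])))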

  19. GBAS Ionospheric Anomaly Monitoring Based on a Two-Step Approach

    PubMed Central

    Zhao, Lin; Yang, Fuxin; Li, Liang; Ding, Jicheng; Zhao, Yuxin

    2016-01-01

    As one significant component of space weather, the ionosphere has to be monitored using Global Positioning System (GPS) receivers for the Ground-Based Augmentation System (GBAS). This is because an ionospheric anomaly can pose a potential threat to GBAS support of safety-critical services. The traditional code-carrier divergence (CCD) methods, which have been widely used to detect variations of the ionospheric gradient for GBAS, adopt a linear time-invariant low-pass filter to suppress the effect of high-frequency noise on the detection of the ionospheric anomaly. However, there is a trade-off between response time and estimation accuracy due to the fixed time constants. In order to overcome this limitation, a two-step approach (TSA) is proposed by integrating the cascaded linear time-invariant low-pass filters with an adaptive Kalman filter to detect the ionospheric gradient anomaly. The performance of the proposed method is tested using simulated and real-world data, respectively. The simulation results show that the TSA can detect ionospheric gradient anomalies quickly, even when the noise is more severe. Compared to the traditional CCD methods, the experiments on real-world GPS data indicate that the average estimation accuracy of the ionospheric gradient improves by more than 31.3%, and the average response time to an ionospheric gradient at a rate of 0.018 m/s improves by more than 59.3%, which demonstrates the ability of TSA to detect a small ionospheric gradient more rapidly. PMID:27240367
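
    A minimal sketch of the first, linear time-invariant stage (the paper's TSA adds an adaptive Kalman stage on top of such filters): low-pass filter the code-minus-carrier divergence rate and compare it against a limit; the constants and the simulated ramp are illustrative:

      import numpy as np

      def ccd_monitor(code, carrier, dt=0.5, tau=25.0, limit=0.02):
          """Low-pass filtered code-minus-carrier divergence rate, flagged
          against a limit (time constant and limit are illustrative)."""
          z = code - carrier                      # ~ twice the iono delay + noise
          rate = np.diff(z) / dt
          a = dt / (tau + dt)                     # first-order IIR coefficient
          est, flags = 0.0, []
          for r in rate:
              est = (1 - a) * est + a * r
              flags.append(abs(est) > limit)
          return np.array(flags)

      rng = np.random.default_rng(7)
      t = np.arange(0, 600, 0.5)
      iono = np.maximum(0.0, t - 400) * 0.025     # delay ramping at 0.025 m/s
      code = 2 * iono + rng.normal(0, 0.02, t.size)
      carrier = rng.normal(0, 0.002, t.size)
      flags = ccd_monitor(code, carrier)
      hits = np.flatnonzero(flags)
      print("first flag near t =", t[hits[0] + 1] if hits.size else None, "s")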

  20. Advances in soil gas geochemical exploration for natural resources: Some current examples and practices

    NASA Astrophysics Data System (ADS)

    McCarthy, J. Howard, Jr.; Reimer, G. Michael

    1986-11-01

    Field studies have demonstrated that gas anomalies are found over buried mineral deposits. Abnormally high concentrations of sulfur gases and carbon dioxide and abnormally low concentrations of oxygen are commonly found over sulfide ore deposits. Helium anomalies are commonly associated with uranium deposits and geothermal areas. Helium and hydrocarbon gas anomalies have been detected over oil and gas deposits. Gases are sampled by extracting them from the pore space of soil, by degassing soil or rock, or by adsorbing them on artificial collectors. The two most widely used techniques for gas analysis are gas chromatography and mass spectrometry. The detection of gas anomalies at or near the surface may be an effective method to locate buried mineral deposits.
