An immunity-based anomaly detection system with sensor agents.
Okamoto, Takeshi; Ishida, Yoshiteru
2009-01-01
This paper proposes an immunity-based anomaly detection system with sensor agents based on the specificity and diversity of the immune system. Each agent is specialized to react to the behavior of a specific user. Multiple diverse agents decide whether the behavior is normal or abnormal. Conventional systems have used only a single sensor to detect anomalies, while the immunity-based system makes use of multiple sensors, which leads to improvements in detection accuracy. In addition, we propose an evaluation framework for the anomaly detection system, which is capable of evaluating the differences in detection accuracy between internal and external anomalies. This paper focuses on anomaly detection in users' command sequences on UNIX-like systems. In experiments, the immunity-based system outperformed some of the best conventional systems.
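The multi-agent voting idea above can be sketched as follows. This is a minimal illustration, not the paper's actual algorithm: each "agent" holds a frequency profile for one user and flags command sequences that look rare for that user, and a committee of agents votes. The command histories (`alice`, `bob`) and thresholds are hypothetical.

```python
from collections import Counter

def train_agent(sequences):
    """Build a per-user profile: relative command frequencies."""
    counts = Counter(cmd for seq in sequences for cmd in seq)
    total = sum(counts.values())
    return {cmd: n / total for cmd, n in counts.items()}

def agent_flags(profile, seq, rare_threshold=0.01):
    """An agent flags a sequence when most of its commands are rare
    for the user this agent is specialized to."""
    rare = sum(1 for cmd in seq if profile.get(cmd, 0.0) < rare_threshold)
    return rare / len(seq) > 0.5

def committee_decision(profiles, seq):
    """Multiple diverse agents vote; a majority of flags means anomalous."""
    votes = sum(agent_flags(p, seq) for p in profiles)
    return votes > len(profiles) / 2

# Hypothetical command histories for two users.
alice = [["ls", "cd", "vim", "ls"], ["cd", "ls", "git", "vim"]]
bob = [["gcc", "make", "gdb"], ["make", "gcc", "ls"]]
profiles = [train_agent(alice), train_agent(bob)]

normal_seq = ["ls", "cd", "vim"]       # familiar to at least one agent
attack_seq = ["nc", "chmod", "wget"]   # rare for every agent
is_normal_flagged = committee_decision(profiles, normal_seq)
is_attack_flagged = committee_decision(profiles, attack_seq)
```

A sequence is declared anomalous only when it looks rare to a majority of the diverse agents, which is one way multiple sensors can outvote a single noisy one.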
Domain Anomaly Detection in Machine Perception: A System Architecture and Taxonomy.
Kittler, Josef; Christmas, William; de Campos, Teófilo; Windridge, David; Yan, Fei; Illingworth, John; Osman, Magda
2014-05-01
We address the problem of anomaly detection in machine perception. The concept of domain anomaly is introduced as distinct from the conventional notion of anomaly used in the literature. We propose a unified framework for anomaly detection which exposes the multifaceted nature of anomalies, and we suggest effective mechanisms for identifying and distinguishing each facet as instruments for domain anomaly detection. The framework draws on the Bayesian probabilistic reasoning apparatus, which clearly defines concepts such as outlier, noise, distribution drift, novelty detection (object, object primitive), rare events, and unexpected events. Based on these concepts we provide a taxonomy of domain anomaly events. One of the mechanisms helping to pinpoint the nature of an anomaly is based on detecting incongruence between contextual and noncontextual sensor(y) data interpretation. The proposed methodology has wide applicability. It underpins in a unified way the anomaly detection applications found in the literature. To illustrate some of its distinguishing features, here the domain anomaly detection methodology is applied to the problem of anomaly detection for a video annotation system.
A Survey on Anomaly Based Host Intrusion Detection System
NASA Astrophysics Data System (ADS)
Jose, Shijoe; Malathi, D.; Reddy, Bharath; Jayaseeli, Dorathi
2018-04-01
An intrusion detection system (IDS) is hardware, software, or a combination of the two that monitors network or system activities for signs of malicious behavior. In computer security, designing a robust intrusion detection system is one of the most fundamental and important problems. The primary function of an IDS is to detect intrusions and raise alerts in a timely manner; when an intrusion is detected, the IDS sends an alert message to the system administrator. Anomaly detection is an important problem that has been researched within diverse research areas and application domains. This survey provides a structured and comprehensive overview of the research on anomaly detection. Each existing anomaly detection technique has relative strengths and weaknesses. The current state of practice in the field of anomaly-based intrusion detection is reviewed, and recent studies are surveyed. This survey presents existing anomaly detection techniques and discusses how techniques used in one area can be applied in another application domain.
Clustering and Recurring Anomaly Identification: Recurring Anomaly Detection System (ReADS)
NASA Technical Reports Server (NTRS)
McIntosh, Dawn
2006-01-01
This viewgraph presentation reviews the Recurring Anomaly Detection System (ReADS), a tool to analyze text reports, such as aviation reports and maintenance records: (1) text clustering algorithms group large quantities of reports and documents, reducing human error and fatigue; (2) it identifies interconnected reports, automating the discovery of possible recurring anomalies; and (3) it provides a visualization of the clusters and recurring anomalies. We have illustrated our techniques on data from Shuttle and ISS discrepancy reports, as well as ASRS data. ReADS has been integrated with a secure online search
A model for anomaly classification in intrusion detection systems
NASA Astrophysics Data System (ADS)
Ferreira, V. O.; Galhardi, V. V.; Gonçalves, L. B. L.; Silva, R. C.; Cansian, A. M.
2015-09-01
Intrusion Detection Systems (IDS) are traditionally divided into two types according to the detection methods they employ, namely (i) misuse detection and (ii) anomaly detection. Anomaly detection has been widely used, and its main advantage is the ability to detect new attacks. However, the analysis of the anomalies generated can become expensive, since they often carry no clear information about the malicious events they represent. In this context, this paper presents a model for automated classification of alerts generated by an anomaly-based IDS. The main goal is either to classify the detected anomalies into well-defined taxonomies of attacks or to identify whether an alert is a false positive misclassified by the IDS. Some common attacks on computer networks were considered, and we achieved important results that can equip security analysts with better resources for their analyses.
Network anomaly detection system with optimized DS evidence theory.
Liu, Yuan; Wang, Xiaofeng; Liu, Kaiyu
2014-01-01
Network anomaly detection has attracted increasing attention with the rapid development of computer networks. Some researchers have applied fusion methods and DS evidence theory to network anomaly detection, but with low performance, and they did not account for the complicated and varied nature of network features. To achieve a high detection rate, we present a novel network anomaly detection system with optimized Dempster-Shafer evidence theory (ODS) and a regression basic probability assignment (RBPA) function. In this model, we add a weight for each sensor to optimize DS evidence theory according to its previous prediction accuracy, and RBPA employs each sensor's regression ability to handle complex networks. Through four kinds of experiments, we find that our novel network anomaly detection model has a better detection rate, and that the RBPA and ODS optimization methods can improve system performance significantly.
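The core fusion step, Dempster's rule of combination with accuracy-based sensor weights, can be sketched as below. This is a generic illustration of DS fusion with Shafer discounting, not the paper's ODS/RBPA implementation; the sensor basic probability assignments (BPAs) and weights are hypothetical.

```python
def dempster_combine(m1, m2):
    """Dempster's rule of combination for BPAs over {'normal', 'anomaly'}.
    Masses are dicts keyed by frozensets of hypotheses."""
    combined = {}
    conflict = 0.0
    for a, ma in m1.items():
        for b, mb in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + ma * mb
            else:
                conflict += ma * mb
    k = 1.0 - conflict  # normalization: discard conflicting mass
    return {s: v / k for s, v in combined.items()}

def discount(m, weight):
    """Weight a sensor by its past accuracy (Shafer discounting): mass not
    retained is transferred to the full frame, i.e. ignorance."""
    theta = frozenset(["normal", "anomaly"])
    out = {s: v * weight for s, v in m.items() if s != theta}
    out[theta] = m.get(theta, 0.0) * weight + (1.0 - weight)
    return out

N, A = frozenset(["normal"]), frozenset(["anomaly"])
T = frozenset(["normal", "anomaly"])
sensor1 = {A: 0.7, N: 0.1, T: 0.2}  # hypothetical per-sensor BPAs
sensor2 = {A: 0.6, N: 0.2, T: 0.2}
fused = dempster_combine(discount(sensor1, 0.9), discount(sensor2, 0.8))
```

Discounting lets a historically less accurate sensor contribute more of its mass to ignorance rather than to a definite verdict, which is the role the per-sensor weights play in the paper's optimization.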
Road Anomalies Detection System Evaluation.
Silva, Nuno; Shah, Vaibhav; Soares, João; Rodrigues, Helena
2018-06-21
Anomalies on road pavement cause discomfort to drivers and passengers, and may cause mechanical failure or even accidents. Governments spend millions of Euros every year on road maintenance, often causing traffic jams and congestion on urban roads on a daily basis. This paper analyses the difference between deploying a road anomaly detection and identification system in a “conditioned” setup and in a real-world setup, where the system performed worse than in the “conditioned” setup. It also presents a system performance analysis based on the training data sets; on the complexity of the attributes, through the application of PCA techniques; and on the attributes in the context of each anomaly type, using acceleration standard deviation attributes to observe how different anomaly classes are distributed in the Cartesian coordinate system. Overall, we describe the main insights on road anomaly detection challenges to support the design and deployment of a new iteration of our system towards a road anomaly detection service that provides information about road conditions to drivers and government entities.
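The acceleration standard deviation attributes mentioned above can be computed as in this minimal sketch; the two simulated windows (smooth asphalt vs. a rough, pothole-like segment) are hypothetical synthetic data, not the paper's data set.

```python
import random

def std_features(window):
    """Standard deviation of acceleration per axis over a sensing window."""
    feats = []
    for axis in zip(*window):  # iterate over x, y, z columns
        m = sum(axis) / len(axis)
        feats.append((sum((a - m) ** 2 for a in axis) / len(axis)) ** 0.5)
    return feats

random.seed(1)
# Hypothetical (x, y, z) accelerometer samples: gravity on z, noise elsewhere.
smooth = [(random.gauss(0, 0.05), random.gauss(0, 0.05),
           random.gauss(9.8, 0.05)) for _ in range(100)]
pothole = [(random.gauss(0, 0.5), random.gauss(0, 0.5),
            random.gauss(9.8, 2.0)) for _ in range(100)]
f_smooth = std_features(smooth)
f_pothole = std_features(pothole)
```

Windows over rough segments have markedly larger per-axis standard deviations, which is why such attributes separate anomaly classes in the coordinate space described above.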
Using statistical anomaly detection models to find clinical decision support malfunctions.
Ray, Soumi; McEvoy, Dustin S; Aaron, Skye; Hickman, Thu-Trang; Wright, Adam
2018-05-11
Malfunctions in Clinical Decision Support (CDS) systems occur due to a multitude of reasons, and often go unnoticed, leading to potentially poor outcomes. Our goal was to identify malfunctions within CDS systems. We evaluated 6 anomaly detection models: (1) Poisson Changepoint Model, (2) Autoregressive Integrated Moving Average (ARIMA) Model, (3) Hierarchical Divisive Changepoint (HDC) Model, (4) Bayesian Changepoint Model, (5) Seasonal Hybrid Extreme Studentized Deviate (SHESD) Model, and (6) E-Divisive with Median (EDM) Model and characterized their ability to find known anomalies. We analyzed 4 CDS alerts with known malfunctions from the Longitudinal Medical Record (LMR) and Epic® (Epic Systems Corporation, Madison, WI, USA) at Brigham and Women's Hospital, Boston, MA. The 4 rules recommend lead testing in children, aspirin therapy in patients with coronary artery disease, pneumococcal vaccination in immunocompromised adults and thyroid testing in patients taking amiodarone. Poisson changepoint, ARIMA, HDC, Bayesian changepoint and the SHESD model were able to detect anomalies in an alert for lead screening in children and in an alert for pneumococcal conjugate vaccine in immunocompromised adults. EDM was able to detect anomalies in an alert for monitoring thyroid function in patients on amiodarone. Malfunctions/anomalies occur frequently in CDS alert systems. It is important to be able to detect such anomalies promptly. Anomaly detection models are useful tools to aid such detections.
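Of the models listed above, the simplest to illustrate is a Poisson changepoint model on daily alert firing counts. The sketch below is a generic two-segment likelihood scan, not the specific implementation evaluated in the paper, and the count series is hypothetical.

```python
import math

def poisson_loglik(counts, lam):
    # Poisson log-likelihood, dropping the log(k!) terms, which cancel below.
    return sum(k * math.log(lam) - lam for k in counts)

def best_changepoint(counts):
    """Scan all split points and return the one maximizing the gain of a
    two-rate Poisson model over a single-rate model."""
    n = len(counts)
    base = poisson_loglik(counts, sum(counts) / n)
    best_t, best_gain = None, 0.0
    for t in range(1, n):
        left, right = counts[:t], counts[t:]
        l1 = sum(left) / len(left) or 1e-9   # guard against a zero rate
        l2 = sum(right) / len(right) or 1e-9
        gain = poisson_loglik(left, l1) + poisson_loglik(right, l2) - base
        if gain > best_gain:
            best_t, best_gain = t, gain
    return best_t, best_gain

# Hypothetical daily firing counts: the alert silently breaks on day 10.
daily_counts = [20, 22, 19, 21, 20, 23, 18, 21, 22, 20,
                2, 1, 0, 2, 1, 3, 0, 1]
t, gain = best_changepoint(daily_counts)
```

A large likelihood gain at a split point flags the day the alert's firing rate changed, which is the signature of a silent CDS malfunction.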
2004-01-01
Monitored attributes compare the login identity to the one under which the system call is executed, and include the parameters of the system call execution, such as file names with full paths. COAST-EIMDT is distributed on target hosts and performs anomaly detection; EMERALD is distributed on target hosts and security servers and combines signature recognition with anomaly detection. One system uses a centralized architecture and employs an anomaly detection technique for intrusion detection. The EMERALD project [80] proposes a
Effective Sensor Selection and Data Anomaly Detection for Condition Monitoring of Aircraft Engines
Liu, Liansheng; Liu, Datong; Zhang, Yujie; Peng, Yu
2016-01-01
In a complex system, condition monitoring (CM) can collect the system working status. The condition is mainly sensed by the pre-deployed sensors in/on the system. Most existing works study how to utilize the condition information to predict the upcoming anomalies, faults, or failures. There is also some research which focuses on the faults or anomalies of the sensing element (i.e., sensor) to enhance the system reliability. However, existing approaches ignore the correlation between sensor selecting strategy and data anomaly detection, which can also improve the system reliability. To address this issue, we study a new scheme which includes sensor selection strategy and data anomaly detection by utilizing information theory and Gaussian Process Regression (GPR). The sensors that are more appropriate for the system CM are first selected. Then, mutual information is utilized to weight the correlation among different sensors. The anomaly detection is carried out by using the correlation of sensor data. The sensor data sets that are utilized to carry out the evaluation are provided by National Aeronautics and Space Administration (NASA) Ames Research Center and have been used as Prognostics and Health Management (PHM) challenge data in 2008. By comparing the two different sensor selection strategies, the effectiveness of selection method on data anomaly detection is proved. PMID:27136561
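The mutual-information weighting step described above can be illustrated with a plug-in histogram estimator. This is a sketch under assumed data, not the paper's NASA PHM data or its GPR stage: two hypothetical engine channels, one of which tracks the other, are compared against an unrelated noise channel.

```python
import math
import random
from collections import Counter

def mutual_information(x, y, bins=5):
    """Plug-in (histogram) estimate of mutual information between two
    sensor streams, after equal-width discretization."""
    def disc(v):
        lo, hi = min(v), max(v)
        w = (hi - lo) / bins or 1.0
        return [min(int((s - lo) / w), bins - 1) for s in v]
    dx, dy = disc(x), disc(y)
    n = len(x)
    px, py, pxy = Counter(dx), Counter(dy), Counter(zip(dx, dy))
    mi = 0.0
    for (a, b), c in pxy.items():
        p = c / n
        mi += p * math.log(p / ((px[a] / n) * (py[b] / n)))
    return mi

# Hypothetical engine channels: temperature t2 tracks t1; noise does not.
random.seed(0)
t1 = [20 + 0.1 * i for i in range(200)]
t2 = [1.5 * v + random.gauss(0, 0.2) for v in t1]
noise = [random.gauss(0, 1) for _ in range(200)]
mi_related = mutual_information(t1, t2)
mi_unrelated = mutual_information(t1, noise)
```

Channels with high mutual information are good candidates for cross-checking: a reading that breaks an otherwise strong dependency is evidence of a data anomaly rather than a genuine system change.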
Improving Cyber-Security of Smart Grid Systems via Anomaly Detection and Linguistic Domain Knowledge
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ondrej Linda; Todd Vollmer; Milos Manic
The planned large scale deployment of smart grid network devices will generate a large amount of information exchanged over various types of communication networks. The implementation of these critical systems will require appropriate cyber-security measures. A network anomaly detection solution is considered in this work. In common network architectures, multiple communication streams are simultaneously present, making it difficult to build an anomaly detection solution for the entire system. In addition, common anomaly detection algorithms require specification of a sensitivity threshold, which inevitably leads to a tradeoff between false positive and false negative rates. In order to alleviate these issues, this paper proposes a novel anomaly detection architecture. The designed system applies the previously developed network security cyber-sensor method to individual selected communication streams, allowing accurate normal network behavior models to be learned. Furthermore, the developed system dynamically adjusts the sensitivity threshold of each anomaly detection algorithm based on domain knowledge about the specific network system. It is proposed to model this domain knowledge using Interval Type-2 Fuzzy Logic rules, which linguistically describe the relationship between various features of the network communication and the possibility of a cyber attack. The proposed method was tested on an experimental smart grid system, demonstrating enhanced cyber-security.
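The dynamic threshold adjustment can be sketched with ordinary (type-1) fuzzy rules; the paper itself uses Interval Type-2 sets, and the rule table, membership shapes, and "traffic level" feature here are all hypothetical simplifications.

```python
def tri(x, a, b, c):
    """Triangular fuzzy membership function on [a, c] with peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def adjust_threshold(base, traffic_level):
    """Hypothetical linguistic rules: low traffic -> lower the anomaly
    threshold (more sensitive), high traffic -> raise it (fewer false
    positives). traffic_level is assumed normalized to [0, 1]."""
    low = tri(traffic_level, -0.5, 0.0, 0.5)
    med = tri(traffic_level, 0.0, 0.5, 1.0)
    high = tri(traffic_level, 0.5, 1.0, 1.5)
    # centroid-style defuzzification over the rule consequents
    return base * (0.8 * low + 1.0 * med + 1.3 * high) / (low + med + high)

quiet_hours = adjust_threshold(10.0, 0.0)
busy_hours = adjust_threshold(10.0, 1.0)
```

The point of the linguistic encoding is that an operator can state "when traffic is high, be less sensitive" without hand-tuning a numeric threshold per stream.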
ISHM Anomaly Lexicon for Rocket Test
NASA Technical Reports Server (NTRS)
Schmalzel, John L.; Buchanan, Aubri; Hensarling, Paula L.; Morris, Jonathan; Turowski, Mark; Figueroa, Jorge F.
2007-01-01
Integrated Systems Health Management (ISHM) is a comprehensive capability. An ISHM system must detect anomalies, identify their causes, predict future anomalies, and help identify the consequences of anomalies, for example by suggesting mitigation steps. The system should also provide users with appropriate navigation tools to facilitate the flow of information into and out of the ISHM system. Central to the ability of the ISHM to detect anomalies is a clearly defined catalog of anomalies. Further, this lexicon of anomalies must be organized in ways that make it accessible to a suite of tools used to manage the data, information and knowledge (DIaK) associated with a system. In particular, it is critical to ensure optimal mapping between target anomalies and the algorithms associated with their detection. During the early development of our ISHM architecture and approach, it became clear that a lexicon of anomalies would be important to the development of critical anomaly detection algorithms. In our work in the rocket engine test environment at John C. Stennis Space Center, we have access to a repository of discrepancy reports (DRs) that are generated in response to squawks identified during post-test data analysis. The DR is the tool used to document anomalies and the methods used to resolve the issue. These DRs have been generated for many different tests and for all test stands. As a result, they represent a comprehensive summary of the anomalies associated with rocket engine testing. Fig. 1 illustrates some of the data that can be extracted from a DR. Such information includes affected transducer channels, a narrative description of the observed anomaly, and the steps used to correct the problem. The primary goal of the anomaly lexicon development efforts we have undertaken is to create a lexicon that could be used in support of an associated health assessment database system (HADS) co-development effort.
The anomaly lexicon compilation effort has a number of significant byproducts: (1) it allows determination of the frequency distribution of anomalies, helping to identify those with the potential for high return on investment if included in automated detection as part of an ISHM system; (2) the availability of a regular lexicon could provide base anomaly name choices to help maintain consistency in the DR collection process; and (3) although developed for the rocket engine test environment, most of the anomalies are not specific to rocket testing and can thus be reused in other applications.
Conditional anomaly detection methods for patient-management alert systems
Valko, Michal; Cooper, Gregory; Seybert, Amy; Visweswaran, Shyam; Saul, Melissa; Hauskrecht, Milos
2010-01-01
Anomaly detection methods can be very useful in identifying unusual or interesting patterns in data. A recently proposed conditional anomaly detection framework extends anomaly detection to the problem of identifying anomalous patterns on a subset of attributes in the data. The anomaly always depends on (is conditioned by) the value of the remaining attributes. The work presented in this paper focuses on instance-based methods for detecting conditional anomalies. The methods rely on a distance metric to identify the examples in the dataset that are most critical for detecting the anomaly. We investigate various metrics and metric learning methods to optimize the performance of the instance-based anomaly detection methods. We show the benefits of the instance-based methods on two real-world detection problems: detection of unusual admission decisions for patients with community-acquired pneumonia, and detection of unusual orders of an HPF4 test used to confirm heparin-induced thrombocytopenia, a life-threatening condition caused by heparin therapy. PMID:25392850
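An instance-based conditional anomaly score can be sketched with a plain nearest-neighbor scheme: find the training cases whose *context* attributes are closest, and measure how much the *target* attribute disagrees with theirs. This is a simplified illustration with plain Euclidean distance and hypothetical records, not the paper's learned metrics or clinical data.

```python
def conditional_anomaly_score(train, context, target, k=3):
    """Score how anomalous `target` is given `context`: find the k training
    cases with the closest context and measure target disagreement.
    `train` is a list of (context_vector, target_label) pairs."""
    def dist(u, v):
        return sum((a - b) ** 2 for a, b in zip(u, v)) ** 0.5
    nearest = sorted(train, key=lambda item: dist(item[0], context))[:k]
    disagree = sum(1 for _, label in nearest if label != target)
    return disagree / k

# Hypothetical records: context = (age, severity score), target = admitted?
train = [((70, 9), 1), ((65, 8), 1), ((72, 9), 1),
         ((25, 2), 0), ((30, 1), 0), ((28, 2), 0)]
usual_score = conditional_anomaly_score(train, (68, 9), 1)
unusual_score = conditional_anomaly_score(train, (69, 9), 0)
```

A decision is flagged not because it is rare overall, but because it disagrees with the decisions made for the most similar contexts, which is the essence of conditioning on the remaining attributes.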
Implementation of a General Real-Time Visual Anomaly Detection System Via Soft Computing
NASA Technical Reports Server (NTRS)
Dominguez, Jesus A.; Klinko, Steve; Ferrell, Bob; Steinrock, Todd (Technical Monitor)
2001-01-01
The intelligent visual system detects anomalies or defects in real time under normal lighting operating conditions. The application is essentially a learning machine that integrates fuzzy logic (FL), artificial neural network (ANN), and genetic algorithm (GA) schemes to process the image, run the learning process, and finally detect the anomalies or defects. The system acquires the image, performs segmentation to separate the object being tested from the background, preprocesses the image using fuzzy reasoning, performs the final segmentation using fuzzy reasoning techniques to retrieve regions with potential anomalies or defects, and finally retrieves them using a learning model built via ANN and GA techniques. FL provides a powerful framework for knowledge representation and overcomes the uncertainty and vagueness typically found in image analysis; ANN provides learning capabilities; and GA leads to robust learning results. An application prototype currently runs on a regular PC under Windows NT, and preliminary work has been performed to build an embedded version with multiple image processors. The application prototype is being tested at the Kennedy Space Center (KSC), Florida, to visually detect anomalies along slide basket cables utilized by the astronauts to evacuate the NASA Shuttle launch pad in an emergency. The potential applications of this anomaly detection system in an open environment are quite wide. Another current, potentially viable application at NASA is in detecting anomalies on the NASA Space Shuttle Orbiter's radiator panels.
Modeling And Detecting Anomalies In Scada Systems
NASA Astrophysics Data System (ADS)
Svendsen, Nils; Wolthusen, Stephen
The detection of attacks and intrusions based on anomalies is hampered by the limits of specificity underlying the detection techniques. However, in the case of many critical infrastructure systems, domain-specific knowledge and models can impose constraints that potentially reduce error rates. At the same time, attackers can use their knowledge of system behavior to mask their manipulations, causing adverse effects to be observed only after a significant period of time. This paper describes elementary statistical techniques that can be applied to detect anomalies in critical infrastructure networks. A SCADA system employed in liquefied natural gas (LNG) production is used as a case study.
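One elementary statistical technique of the kind described above is an EWMA control chart over a process variable. The sketch below is generic and the "LNG flow" readings are synthetic; it is not the paper's case-study data.

```python
def ewma_alarms(readings, alpha=0.2, k=3.0):
    """EWMA control chart: alarm when a reading deviates from the smoothed
    estimate by more than k sigma, with the variance tracked online."""
    mean = readings[0]
    var = 1.0  # assumed initial variance scale
    alarms = []
    for i, x in enumerate(readings[1:], start=1):
        resid = x - mean
        if abs(resid) > k * var ** 0.5:
            alarms.append(i)
        mean = alpha * x + (1 - alpha) * mean
        var = alpha * resid ** 2 + (1 - alpha) * var
    return alarms

# Hypothetical LNG flow readings with an injected fault at index 30.
flow = [50.0 + 0.5 * ((i * 7) % 5 - 2) for i in range(60)]
flow[30] = 80.0
alarms = ewma_alarms(flow)
```

Because both the mean and the variance are tracked online, the chart adapts to the normal periodic variation of the process while still flagging a sudden excursion.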
Fuzzy Kernel k-Medoids algorithm for anomaly detection problems
NASA Astrophysics Data System (ADS)
Rustam, Z.; Talita, A. S.
2017-07-01
An Intrusion Detection System (IDS) is an essential part of security systems for strengthening the security of information systems. An IDS can be used to detect abuse by intruders who try to get into the network system in order to access and exploit the available data sources. There are two approaches to IDS: misuse detection and anomaly detection (behavior-based intrusion detection). Fuzzy clustering-based methods have been widely used to solve anomaly detection problems. Besides using the fuzzy membership concept to assign objects to clusters, other approaches combine fuzzy and possibilistic membership or use feature-weighted methods. We propose Fuzzy Kernel k-Medoids, which combines fuzzy and possibilistic membership, as a powerful method for solving the anomaly detection problem, since in numerical experiments it is able to classify IDS benchmark data into five different classes simultaneously. We classify the KDDCup'99 IDS benchmark data set into five different classes simultaneously; the best performance was achieved using 30% of the data for training, with clustering accuracy reaching 90.28%.
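The kernel k-medoids backbone of the method can be sketched as below. This is a crisp (hard-assignment) version with an RBF kernel on toy two-cluster data; the paper's fuzzy and possibilistic memberships, and the KDDCup'99 features, are omitted.

```python
import math

def rbf(u, v, gamma=0.5):
    return math.exp(-gamma * sum((a - b) ** 2 for a, b in zip(u, v)))

def kernel_dist(u, v):
    # squared distance in RBF feature space: k(u,u) - 2k(u,v) + k(v,v)
    return 2.0 - 2.0 * rbf(u, v)

def kernel_kmedoids(points, k=2, iters=10):
    medoids = list(range(k))  # initialize with the first k points
    for _ in range(iters):
        # assignment step: each point joins its nearest medoid's cluster
        clusters = [[] for _ in range(k)]
        for i, p in enumerate(points):
            j = min(range(k), key=lambda c: kernel_dist(p, points[medoids[c]]))
            clusters[j].append(i)
        # update step: pick the member minimizing total in-cluster distance
        new = [min(members, key=lambda m: sum(
                   kernel_dist(points[m], points[i]) for i in members))
               for members in clusters]
        if new == medoids:
            break
        medoids = new
    return medoids, clusters

# Hypothetical traffic features: two well-separated behavior classes.
pts = [(0.0, 0.1), (0.2, 0.0), (0.1, 0.2),
       (5.0, 5.1), (5.2, 5.0), (5.1, 4.9)]
medoids, clusters = kernel_kmedoids(pts, k=2)
```

Using medoids rather than means keeps every cluster prototype an actual observation, which matters when distances are defined only through a kernel.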
Integrated System Health Management (ISHM) for Test Stand and J-2X Engine: Core Implementation
NASA Technical Reports Server (NTRS)
Figueroa, Jorge F.; Schmalzel, John L.; Aguilar, Robert; Shwabacher, Mark; Morris, Jon
2008-01-01
ISHM capability enables a system to detect anomalies, determine causes and effects, predict future anomalies, and provides an integrated awareness of the health of the system to users (operators, customers, management, etc.). NASA Stennis Space Center, NASA Ames Research Center, and Pratt & Whitney Rocketdyne have implemented a core ISHM capability that encompasses the A1 Test Stand and the J-2X Engine. The implementation incorporates all aspects of ISHM; from anomaly detection (e.g. leaks) to root-cause-analysis based on failure mode and effects analysis (FMEA), to a user interface for an integrated visualization of the health of the system (Test Stand and Engine). The implementation provides a low functional capability level (FCL) in that it is populated with few algorithms and approaches for anomaly detection, and root-cause trees from a limited FMEA effort. However, it is a demonstration of a credible ISHM capability, and it is inherently designed for continuous and systematic augmentation of the capability. The ISHM capability is grounded on an integrating software environment used to create an ISHM model of the system. The ISHM model follows an object-oriented approach: includes all elements of the system (from schematics) and provides for compartmentalized storage of information associated with each element. For instance, a sensor object contains a transducer electronic data sheet (TEDS) with information that might be used by algorithms and approaches for anomaly detection, diagnostics, etc. Similarly, a component, such as a tank, contains a Component Electronic Data Sheet (CEDS). Each element also includes a Health Electronic Data Sheet (HEDS) that contains health-related information such as anomalies and health state. 
Some practical aspects of the implementation include: (1) near real-time data flow from the test stand data acquisition system through the ISHM model, for near real-time detection of anomalies and diagnostics, (2) insertion of the J-2X predictive model providing predicted sensor values for comparison with measured values and use in anomaly detection and diagnostics, and (3) insertion of third-party anomaly detection algorithms into the integrated ISHM model.
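The object-oriented ISHM model with per-element electronic data sheets can be sketched as below. The class and sensor names (`HealthEDS`, `Sensor`, "PT-101") are hypothetical illustrations of the TEDS/HEDS idea, not the actual Stennis implementation.

```python
from dataclasses import dataclass, field

@dataclass
class HealthEDS:
    """Health Electronic Data Sheet: health state plus recorded anomalies."""
    state: str = "nominal"
    anomalies: list = field(default_factory=list)

@dataclass
class Sensor:
    """A system element carrying its own data sheets, as in the ISHM model."""
    name: str
    teds: dict                              # transducer data (units, range, ...)
    heds: HealthEDS = field(default_factory=HealthEDS)

    def check(self, value):
        """A trivial anomaly-detection hook driven by TEDS information."""
        lo, hi = self.teds["range"]
        if not lo <= value <= hi:
            self.heds.state = "anomalous"
            self.heds.anomalies.append(("out_of_range", value))

tank_pressure = Sensor("PT-101", teds={"units": "psi", "range": (0, 500)})
tank_pressure.check(250.0)
tank_pressure.check(612.0)   # out of range: an anomaly is recorded
```

Compartmentalizing the TEDS and HEDS inside each element is what lets detection and diagnostic algorithms query any element uniformly, and lets the capability be augmented element by element.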
Christiansen, Peter; Nielsen, Lars N; Steen, Kim A; Jørgensen, Rasmus N; Karstoft, Henrik
2016-11-11
Convolutional neural network (CNN)-based systems are increasingly used in autonomous vehicles for detecting obstacles. CNN-based object detection and per-pixel classification (semantic segmentation) algorithms are trained to detect and classify a predefined set of object types. These algorithms have difficulties detecting distant and heavily occluded objects and are, by definition, not capable of detecting unknown object types or unusual scenarios. The visual characteristics of an agricultural field are homogeneous, and obstacles, such as people and animals, occur rarely and are of distinct appearance compared to the field. This paper introduces DeepAnomaly, an algorithm combining deep learning and anomaly detection to exploit the homogeneous characteristics of a field to perform anomaly detection. We demonstrate DeepAnomaly as a fast state-of-the-art detector for obstacles that are distant, heavily occluded and unknown. DeepAnomaly is compared to state-of-the-art obstacle detectors including "Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks" (RCNN). In a human detector test case, we demonstrate that DeepAnomaly detects humans at longer ranges (45-90 m) than RCNN, while RCNN has similar performance at short range (0-30 m). However, DeepAnomaly has far fewer model parameters and (182 ms / 25 ms =) a 7.28-times faster processing time per image. Unlike most CNN-based methods, its high accuracy, low computation time and low memory footprint make it suitable for a real-time system running on an embedded GPU (Graphics Processing Unit).
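The underlying idea, modeling the homogeneous field as "background" and scoring deviations from it, can be illustrated without a CNN. The sketch below fits per-dimension statistics to hypothetical feature vectors and scores new patches by normalized distance; DeepAnomaly's actual features come from a deep network, which is omitted here.

```python
def fit_background(features):
    """Per-dimension mean/std of background (normal field) feature vectors."""
    d, n = len(features[0]), len(features)
    mean = [sum(f[k] for f in features) / n for k in range(d)]
    std = [(sum((f[k] - mean[k]) ** 2 for f in features) / n) ** 0.5 or 1.0
           for k in range(d)]
    return mean, std

def anomaly_score(f, mean, std):
    """Normalized distance from the background model; high = obstacle-like."""
    return (sum(((a - m) / s) ** 2 for a, m, s in zip(f, mean, std))
            / len(f)) ** 0.5

# Hypothetical patch features: homogeneous field patches vs. a person patch.
field = [[0.90, 0.10, 0.05], [0.85, 0.12, 0.04],
         [0.92, 0.09, 0.06], [0.88, 0.11, 0.05]]
mean, std = fit_background(field)
s_field = anomaly_score([0.90, 0.10, 0.05], mean, std)
s_person = anomaly_score([0.10, 0.90, 0.70], mean, std)
```

Because the field is visually homogeneous, the background model is tight, so even unknown obstacle types score far outside it; this is what frees the detector from a predefined class list.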
Overton, Jr., William C.; Steyert, Jr., William A.
1984-01-01
A superconducting quantum interference device (SQUID) magnetic detection apparatus detects magnetic fields, signals, and anomalies at remote locations. Two remotely rotatable SQUID gradiometers may be housed in a cryogenic environment to search for and locate unambiguously magnetic anomalies. The SQUID magnetic detection apparatus can be used to determine the azimuth of a hydrofracture by first flooding the hydrofracture with a ferrofluid to create an artificial magnetic anomaly therein.
Overton, W.C. Jr.; Steyert, W.A. Jr.
1981-05-22
A superconducting quantum interference device (SQUID) magnetic detection apparatus detects magnetic fields, signals, and anomalies at remote locations. Two remotely rotatable SQUID gradiometers may be housed in a cryogenic environment to search for and locate unambiguously magnetic anomalies. The SQUID magnetic detection apparatus can be used to determine the azimuth of a hydrofracture by first flooding the hydrofracture with a ferrofluid to create an artificial magnetic anomaly therein.
Critical Infrastructure Protection and Resilience Literature Survey: Modeling and Simulation
2014-11-01
Below the yellow set is a purple cluster bringing together detection, anomaly, intrusion, sensors, monitoring, and alerting (early warning of hazards and threats to security). For the water sector, the tools ADWICE and PSS®SINCAL are noted, with ADWICE used for real-time anomaly detection in water management systems (Raciti M, Cucurull J, Nadjm-Tehrani S. Anomaly detection in water management systems; Cybernetics and Information Technologies. 2008;8(4):57-68).
Symbolic Time-Series Analysis for Anomaly Detection in Mechanical Systems
2006-08-01
Khatkhate, Amol; Ray, Asok; Keller, Eric; Gupta, Shalabh; Chin, Shin C.
This paper examines the efficacy of a novel method for anomaly detection proposed by Ray [6], in which the underlying information on the dynamical behavior of complex systems is derived from symbolic time-series analysis.
NASA Astrophysics Data System (ADS)
Zhao, Mingkang; Wi, Hun; Lee, Eun Jung; Woo, Eung Je; In Oh, Tong
2014-10-01
Electrical impedance imaging has the potential to detect an early stage of breast cancer, because cancerous tissue has higher admittivity values than normal breast tissue. The tumor size and the extent of axillary lymph node involvement are important parameters for evaluating the breast cancer survival rate. Additionally, anomaly characterization is required to distinguish a malignant tumor from a benign one. In order to overcome the limitations of breast cancer detection using impedance measurement probes, we developed a high density trans-admittance mammography (TAM) system with a 60 × 60 electrode array and produced trans-admittance maps obtained at several frequency pairs. We applied the anomaly detection algorithm to the high density TAM system for estimating the volume and position of breast tumors. We tested four different sizes of anomaly with three different conductivity contrasts at four different depths. From multifrequency trans-admittance maps, we can readily observe the transversal position and estimate the volume and depth of an anomaly. In particular, the depth estimates were accurate and independent of anomaly size and conductivity contrast when applying the new formula using the Laplacian of the trans-admittance map. The volume estimation was dependent on the conductivity contrast between the anomaly and the background in the breast phantom. We characterized two test anomalies using frequency-difference trans-admittance data to eliminate the dependency on anomaly position and size. We confirmed the anomaly detection and characterization algorithm with the high density TAM system on bovine breast tissue. Both results showed the feasibility of detecting the size and position of an anomaly and of tissue characterization for breast cancer screening.
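The Laplacian operation mentioned above can be illustrated on a synthetic map: a discrete 5-point Laplacian applied to a smooth 2-D field peaks (most negatively) over a localized bump. The Gaussian "trans-admittance map" below is hypothetical, and the paper's actual depth formula is not reproduced.

```python
import math

def laplacian(grid):
    """Discrete 5-point Laplacian of a 2-D map (interior points only)."""
    n, m = len(grid), len(grid[0])
    out = [[0.0] * m for _ in range(n)]
    for i in range(1, n - 1):
        for j in range(1, m - 1):
            out[i][j] = (grid[i + 1][j] + grid[i - 1][j] +
                         grid[i][j + 1] + grid[i][j - 1] - 4 * grid[i][j])
    return out

# Hypothetical trans-admittance map: a smooth local bump over an anomaly.
n = 21
grid = [[math.exp(-((i - 10) ** 2 + (j - 10) ** 2) / 4.0) for j in range(n)]
        for i in range(n)]
lap = laplacian(grid)
peak = min((lap[i][j], (i, j)) for i in range(n) for j in range(n))
```

The extremum of the Laplacian localizes the transversal position of the anomaly, and its sharpness carries the depth information the paper's formula exploits.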
2015-06-01
system accuracy. The AnRAD system was also generalized for the additional application of network intrusion detection. A self-structuring technique...to Host-based Intrusion Detection Systems using Contiguous and Discontiguous System Call Patterns," IEEE Transactions on Computers, 63(4), pp. 807...square-kilometer areas. The anomaly recognition and detection (AnRAD) system was built as a cogent confabulation network. It represented road
Deep learning on temporal-spectral data for anomaly detection
NASA Astrophysics Data System (ADS)
Ma, King; Leung, Henry; Jalilian, Ehsan; Huang, Daniel
2017-05-01
Detecting anomalies is important for continuous monitoring of sensor systems. One significant challenge is to use sensor data to autonomously detect changes that cause different conditions to occur. Using deep learning methods, we are able to monitor and detect changes resulting from some disturbance in the system. We utilize deep neural networks for sequence analysis of time series, using a multi-step method for anomaly detection. We train the network to learn spectral and temporal features from the acoustic time series, and we test our method using fiber-optic acoustic data from a pipeline.
A Distance Measure for Attention Focusing and Anomaly Detection in Systems Monitoring
NASA Technical Reports Server (NTRS)
Doyle, R.
1994-01-01
Any attempt to introduce automation into the monitoring of complex physical systems must start from a robust anomaly detection capability. This task is far from straightforward, for a single definition of what constitutes an anomaly is difficult to come by. In addition, to make the monitoring process efficient, and to avoid the potential for information overload on human operators, attention focusing must also be addressed. When an anomaly occurs, more often than not several sensors are affected, and the partially redundant information they provide can be confusing, particularly in a crisis situation where a response is needed quickly. Previous results on extending traditional anomaly detection techniques are summarized. The focus of this paper is a new technique for attention focusing.
NASA Astrophysics Data System (ADS)
Akhoondzadeh, M.
2013-09-01
Anomaly detection is extremely important for forecasting the date, location and magnitude of an impending earthquake. In this paper, an Adaptive Network-based Fuzzy Inference System (ANFIS) is proposed to detect thermal and Total Electron Content (TEC) anomalies around the time of the Varzeghan, Iran, (Mw = 6.4) earthquake, which struck NW Iran on 11 August 2012. ANFIS is a well-known hybrid neuro-fuzzy network for modeling non-linear complex systems. The anomalies detected by the proposed method are also compared with those observed by classical and intelligent methods, including Interquartile, Auto-Regressive Integrated Moving Average (ARIMA), Artificial Neural Network (ANN) and Support Vector Machine (SVM) methods. The dataset, comprising Aqua-MODIS Land Surface Temperature (LST) night-time snapshot images and Global Ionospheric Maps (GIM), spans 62 days. If the difference between the value predicted by the ANFIS method and the observed value exceeds a pre-defined threshold, the observed precursor value, in the absence of non-seismic effective parameters, can be regarded as a precursory anomaly. For both the LST and TEC precursors, the ANFIS method agrees closely with the other classical and intelligent methods, indicating that ANFIS is capable of detecting earthquake anomalies. The applied methods detected anomalous occurrences 1 and 2 days before the earthquake. The detection of the thermal and TEC anomalies derives its credibility from the overall efficiency and potential of the five integrated methods.
Method and system for monitoring environmental conditions
Kulesz, James J [Oak Ridge, TN; Lee, Ronald W [Oak Ridge, TN
2010-11-16
A system for detecting the occurrence of anomalies includes a plurality of spaced apart nodes, with each node having adjacent nodes, each of the nodes having one or more sensors associated with the node and capable of detecting anomalies, and each of the nodes having a controller connected to the sensors associated with the node. The system also includes communication links between adjacent nodes, whereby the nodes form a network. At least one software agent is capable of changing the operation of at least one of the controllers in response to the detection of an anomaly by a sensor.
First and second trimester screening for fetal structural anomalies.
Edwards, Lindsay; Hui, Lisa
2018-04-01
Fetal structural anomalies are found in up to 3% of all pregnancies, and ultrasound-based screening has been an integral part of routine prenatal care for decades. The prenatal detection of fetal anomalies allows for optimal perinatal management, providing expectant parents with opportunities for additional imaging, genetic testing, and the provision of information regarding prognosis and management options. Approximately one-half of all major structural anomalies can now be detected in the first trimester, including acrania/anencephaly, abdominal wall defects, holoprosencephaly and cystic hygromata. However, due to the ongoing development of some organ systems, some anomalies will not be evident until later in the pregnancy. For this reason, the second trimester anatomy scan is recommended by professional societies as the standard investigation for the detection of fetal structural anomalies. Reported detection rates of structural anomalies vary according to the organ system being examined and also depend on factors such as equipment settings and sonographer experience. Technological advances over the past two decades continue to support the role of ultrasound as the primary imaging modality in pregnancy, and the safety of ultrasound for the developing fetus is well established. With increasing capabilities and experience, detailed examination of the central nervous system and cardiovascular system is possible, with dedicated examinations such as the fetal neurosonogram and the fetal echocardiogram now widely performed in tertiary centers. Magnetic resonance imaging (MRI) is well recognized for its role in the assessment of fetal brain anomalies; other potential indications for fetal MRI include lung volume measurement (in cases of congenital diaphragmatic hernia) and pre-surgical planning prior to fetal spina bifida repair.
When a major structural abnormality is detected prenatally, genetic testing with chromosomal microarray is recommended over routine karyotype due to its higher genomic resolution.
Quantifying Performance Bias in Label Fusion
2012-08-21
detect), may provide the end-user with the means to appropriately adjust the performance and optimal thresholds for performance by fusing legacy systems...Boolean combination of classification systems in ROC space: an application to anomaly detection with HMMs. Pattern Recognition, 43(8), 2732-2752. 10...Shamsuddin, S. (2009). An overview of neural networks use in anomaly intrusion detection systems. Paper presented at the Research and Development (SCOReD
A Testbed for Data Fusion for Engine Diagnostics and Prognostics
2002-03-01
detected; too late to be useful for prognostics development. Table 1. Table of acronyms ACRONYM MEANING AD Anomaly detector...strictly defined points. Determining where we are on the engine health curve is the first step in prognostics. Fault detection/diagnostic reasoning...Detection As described above, the ability of the monitoring system to detect an anomaly is especially important for knowledge-based systems, i.e.,
Attention focusing and anomaly detection in systems monitoring
NASA Technical Reports Server (NTRS)
Doyle, Richard J.
1994-01-01
Any attempt to introduce automation into the monitoring of complex physical systems must start from a robust anomaly detection capability. This task is far from straightforward, for a single definition of what constitutes an anomaly is difficult to come by. In addition, to make the monitoring process efficient, and to avoid the potential for information overload on human operators, attention focusing must also be addressed. When an anomaly occurs, more often than not several sensors are affected, and the partially redundant information they provide can be confusing, particularly in a crisis situation where a response is needed quickly. The focus of this paper is a new technique for attention focusing. The technique involves reasoning about the distance between two frequency distributions, and is used to detect both anomalous system parameters and 'broken' causal dependencies. These two forms of information together isolate the locus of anomalous behavior in the system being monitored.
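The distance-based comparison of frequency distributions described above can be sketched as follows. The Jensen-Shannon distance stands in for whatever measure Doyle's technique actually uses, and the alarm threshold is purely illustrative:

```python
import numpy as np

def js_distance(p, q, eps=1e-12):
    """Jensen-Shannon distance between two frequency distributions (histograms)."""
    p = np.asarray(p, float) / np.sum(p)
    q = np.asarray(q, float) / np.sum(q)
    m = 0.5 * (p + q)

    def kl(a, b):
        # Kullback-Leibler divergence with a small epsilon for empty bins.
        return np.sum(a * np.log((a + eps) / (b + eps)))

    return np.sqrt(0.5 * kl(p, m) + 0.5 * kl(q, m))

def anomalous(nominal_hist, current_hist, threshold=0.2):
    """Flag a monitored parameter whose current histogram drifted from nominal."""
    return js_distance(nominal_hist, current_hist) > threshold
```

The same distance, applied to pairs of parameters, could also flag a "broken" causal dependency when the joint distribution drifts while the marginals do not.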
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chang, X; Liu, S; Kalet, A
Purpose: The purpose of this work was to investigate the ability of a machine-learning based probabilistic approach to detect radiotherapy treatment plan anomalies given initial disease class information. Methods: In total, we obtained 1112 unique treatment plans, with five plan parameters and disease information, from a Mosaiq treatment management system database for use in the study. The plan parameters include prescription dose, fractions, fields, modality and technique. The disease information includes disease site and T, M and N disease stages. A Bayesian network method was employed to model the probabilistic relationships between tumor disease information, plan parameters and an anomaly flag. A Bayesian learning method with a Dirichlet prior was used to learn the joint probabilities between dependent variables in error-free plan data and in data with artificially induced anomalies, sampled randomly from a specified anomaly space. We tested the approach with three groups of plan anomalies: improper concurrence of the values of all five plan parameters, of any two of the five parameters, and of any single plan parameter. In total, 16 types of plan anomalies were covered by the study, and an individual Bayesian network was trained for each type. Results: We found that the true positive rate (recall) and positive predictive value (precision) for detecting concurrence anomalies of the five plan parameters in new patient cases were 94.45±0.26% and 93.76±0.39%, respectively. For the other 15 types of plan anomalies, the average recall and precision were 93.61±2.57% and 93.78±3.54%, respectively. The computation time to detect each type of plan anomaly in a new plan is ∼0.08 seconds. Conclusion: The proposed method for treatment plan anomaly detection was found effective in the initial tests.
The results suggest that models of this type could be applied to develop plan anomaly detection tools to assist manual and automated plan checks. The senior author received research grants from ViewRay Inc. and Varian Medical Systems.
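A minimal sketch of the Dirichlet-smoothed probability idea, reduced to one plan parameter conditioned on the disease site. The paper's Bayesian networks relate far more variables; the class name, example values and the 0.05 threshold here are invented for illustration:

```python
from collections import Counter

class PlanChecker:
    """Toy plan-anomaly check: Dirichlet-smoothed (add-alpha) conditional
    probability of a plan parameter given the disease site."""

    def __init__(self, alpha=1.0):
        self.alpha = alpha
        self.joint = Counter()   # counts of (site, dose) pairs
        self.site = Counter()    # counts of sites
        self.values = set()      # distinct dose values seen in training

    def fit(self, plans):
        """plans: iterable of (site, dose) pairs from error-free data."""
        for site, dose in plans:
            self.joint[(site, dose)] += 1
            self.site[site] += 1
            self.values.add(dose)

    def prob(self, site, dose):
        k = len(self.values) or 1
        return (self.joint[(site, dose)] + self.alpha) / (self.site[site] + self.alpha * k)

    def is_anomaly(self, site, dose, threshold=0.05):
        """Flag a plan whose parameter is improbable for its disease site."""
        return self.prob(site, dose) < threshold
```

The Dirichlet prior keeps unseen combinations at a small but nonzero probability, so the flag degrades gracefully instead of dividing by zero on novel plans.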
Anomaly-based intrusion detection for SCADA systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yang, D.; Usynin, A.; Hines, J. W.
2006-07-01
Most critical infrastructure, such as chemical processing plants, electrical generation and distribution networks, and gas distribution, is monitored and controlled by Supervisory Control and Data Acquisition (SCADA) systems. These systems have been the focus of increased security concerns, including fears that they could be targeted by international terrorists. With the constantly growing number of internet-related computer attacks, there is evidence that our critical infrastructure may also be vulnerable. Researchers estimated that malicious online actions would cause $75 billion in damage in 2007. One of the interesting countermeasures for enhancing information system security is intrusion detection. This paper briefly discusses the history of research in intrusion detection techniques and introduces the two basic detection approaches: signature detection and anomaly detection. Finally, it presents the application of techniques developed for monitoring critical process systems, such as nuclear power plants, to anomaly intrusion detection. The method uses an auto-associative kernel regression (AAKR) model coupled with the sequential probability ratio test (SPRT) and is applied to a simulated SCADA system. The results show that these methods can be generally used to detect a variety of common attacks. (authors)
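The AAKR-plus-SPRT pipeline can be sketched roughly as follows. The kernel bandwidth, the alternative-hypothesis mean and the error rates are illustrative placeholders, not values from the paper:

```python
import numpy as np

def aakr_predict(memory, query, h=1.0):
    """Auto-associative kernel regression: a corrected estimate of the query
    vector as a Gaussian-kernel weighted average of nominal memory vectors."""
    d2 = np.sum((memory - query) ** 2, axis=1)
    w = np.exp(-d2 / (2.0 * h ** 2))
    w = w / w.sum()
    return w @ memory

def sprt(residuals, sigma=1.0, m1=1.0, a=0.01, b=0.01):
    """Sequential probability ratio test on the residual stream:
    H0 Gaussian(0, sigma) vs H1 Gaussian(m1, sigma).
    Returns 'H0' (normal), 'H1' (fault) or 'continue' (undecided)."""
    upper = np.log((1 - b) / a)   # crossing up -> accept H1
    lower = np.log(b / (1 - a))   # crossing down -> accept H0
    llr = 0.0
    for r in residuals:
        # Log-likelihood ratio increment for one Gaussian sample.
        llr += (m1 * r - m1 ** 2 / 2.0) / sigma ** 2
        if llr >= upper:
            return "H1"
        if llr <= lower:
            return "H0"
    return "continue"
```

In use, residuals are the differences between observed sensor vectors and their AAKR corrections; the SPRT then accumulates evidence until one hypothesis is accepted.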
Multi-Level Modeling of Complex Socio-Technical Systems - Phase 1
2013-06-06
is to detect anomalous organizational outcomes, diagnose the causes of these anomalies, and decide upon appropriate compensation schemes. All of...monitor process outcomes. The purpose of this monitoring is to detect anomalous process outcomes, diagnose the causes of these anomalies, and decide upon...monitor work outcomes in terms of performance. The purpose of this monitoring is to detect anomalous work outcomes, diagnose the causes of these anomalies
Anomaly Detection for Next-Generation Space Launch Ground Operations
NASA Technical Reports Server (NTRS)
Spirkovska, Lilly; Iverson, David L.; Hall, David R.; Taylor, William M.; Patterson-Hine, Ann; Brown, Barbara; Ferrell, Bob A.; Waterman, Robert D.
2010-01-01
NASA is developing new capabilities that will enable future human exploration missions while reducing mission risk and cost. The Fault Detection, Isolation, and Recovery (FDIR) project aims to demonstrate the utility of integrated vehicle health management (IVHM) tools in the domain of ground support equipment (GSE) to be used for the next generation launch vehicles. In addition to demonstrating the utility of IVHM tools for GSE, FDIR aims to mature promising tools for use on future missions and document the level of effort - and hence cost - required to implement an application with each selected tool. One of the FDIR capabilities is anomaly detection, i.e., detecting off-nominal behavior. The tool we selected for this task uses a data-driven approach. Unlike rule-based and model-based systems that require manual extraction of system knowledge, data-driven systems take a radically different approach to reasoning. At the basic level, they start with data that represent nominal functioning of the system and automatically learn expected system behavior. The behavior is encoded in a knowledge base that represents "in-family" system operations. During real-time system monitoring or during post-flight analysis, incoming data is compared to that nominal system operating behavior knowledge base; a distance representing deviation from nominal is computed, providing a measure of how far "out of family" current behavior is. We describe the selected tool for FDIR anomaly detection - Inductive Monitoring System (IMS), how it fits into the FDIR architecture, the operations concept for the GSE anomaly monitoring, and some preliminary results of applying IMS to a Space Shuttle GSE anomaly.
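The data-driven idea behind IMS, learn nominal behavior and report how far "out of family" new data is, can be caricatured in a few lines. IMS proper clusters nominal vectors into value ranges; a nearest-neighbour table keeps this sketch short, and all names are my own:

```python
import numpy as np

class NominalMonitor:
    """Simplified IMS-style monitor: store a knowledge base of nominal system
    vectors and score incoming data by distance to the nearest one."""

    def __init__(self):
        self.kb = None

    def train(self, nominal):
        """Learn expected behavior from nominal operating data (no model needed)."""
        self.kb = np.asarray(nominal, float)

    def score(self, x):
        """Deviation from nominal: 0 means in-family, large means out-of-family."""
        d = np.linalg.norm(self.kb - np.asarray(x, float), axis=1)
        return float(d.min())
```

The appeal, as the abstract notes, is that no rules or physical models have to be written by hand; the knowledge base is built automatically from archived nominal runs.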
Anomaly Detection in Power Quality at Data Centers
NASA Technical Reports Server (NTRS)
Grichine, Art; Solano, Wanda M.
2015-01-01
The goal during my internship at the National Center for Critical Information Processing and Storage (NCCIPS) is to implement an anomaly detection method through the StruxureWare SCADA Power Monitoring system. The benefit of the anomaly detection mechanism is to provide the capability to detect and anticipate equipment degradation by monitoring power quality prior to equipment failure. First, a study is conducted that examines the existing techniques of power quality management. Based on these findings, and the capabilities of the existing SCADA resources, recommendations are presented for implementing effective anomaly detection. Since voltage, current, and total harmonic distortion demonstrate Gaussian distributions, effective set-points are computed using this model, while maintaining a low false positive count.
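Gaussian set-points of the kind described can be computed as the mean plus or minus k standard deviations; k = 3 is a conventional choice that keeps the false-positive rate near 0.3%, not necessarily the value used at NCCIPS:

```python
import numpy as np

def gaussian_setpoints(samples, k=3.0):
    """Low/high alarm set-points for a metric (voltage, current, THD) that is
    approximately Gaussian: mean +/- k standard deviations."""
    mu = float(np.mean(samples))
    sigma = float(np.std(samples))
    return mu - k * sigma, mu + k * sigma

def out_of_family(value, setpoints):
    """True if a new reading falls outside the learned alarm band."""
    lo, hi = setpoints
    return value < lo or value > hi
```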
Ghosh, Arup; Qin, Shiming; Lee, Jooyeoun; Wang, Gi-Nam
2016-01-01
Operational faults and behavioural anomalies associated with PLC control processes often occur in manufacturing systems, and their real-time identification is necessary in the manufacturing industry. In this paper, we present an automated tool, called the PLC Log-Data Analysis Tool (PLAT), that can detect them by using log-data records of the PLC signals. PLAT automatically creates a nominal model of the PLC control process and employs a novel hash-table-based indexing and searching scheme for this purpose. Our experiments show that PLAT is significantly fast, provides real-time identification of operational faults and behavioural anomalies, and can execute within a small memory footprint. In addition, PLAT can easily handle a large manufacturing system with a reasonable computing configuration and can be installed in parallel with the data logging system to identify operational faults and behavioural anomalies effectively.
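The abstract does not spell out PLAT's indexing scheme, but the flavor of a hash-table nominal model can be sketched with a plain dictionary keyed by signal transitions; the labels, the slack factor and the timing rule are invented for illustration:

```python
class PLCNominalModel:
    """Toy hash-table nominal model for PLC log data: keys are (signal, state)
    transitions, values the range of durations seen in nominal logs."""

    def __init__(self):
        self.table = {}

    def train(self, log):
        """log: iterable of (signal, state, duration) records from nominal runs."""
        for sig, state, dur in log:
            lo, hi = self.table.get((sig, state), (dur, dur))
            self.table[(sig, state)] = (min(lo, dur), max(hi, dur))

    def check(self, sig, state, dur, slack=0.1):
        """Constant-time lookup: classify one incoming log record."""
        key = (sig, state)
        if key not in self.table:
            return "behavioural anomaly"   # transition never seen in nominal runs
        lo, hi = self.table[key]
        if dur < lo * (1 - slack) or dur > hi * (1 + slack):
            return "operational fault"     # known transition, abnormal timing
        return "normal"
```

The hash lookup is what keeps identification real-time: each record costs O(1) regardless of how large the nominal log corpus was.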
Disparity: scalable anomaly detection for clusters.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Desai, N.; Bradshaw, R.; Lusk, E.
2008-01-01
In this paper, we describe disparity, a tool that performs parallel, scalable anomaly detection for clusters. Disparity uses basic statistical methods and scalable reduction operations to perform data reduction on client nodes and uses the results to locate node anomalies. We discuss the implementation of disparity and present results of its use on a SiCortex SC5832 system.
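The reduce-then-compare pattern disparity uses can be sketched as below; the per-node mean reduction and the 2-sigma outlier rule are illustrative simplifications of whatever statistics the tool actually computes:

```python
import numpy as np

def node_summaries(node_metrics):
    """Per-node data reduction: each node reduces its local metric samples to a
    scalar summary (on a real cluster this is a parallel reduction)."""
    return {node: float(np.mean(v)) for node, v in node_metrics.items()}

def anomalous_nodes(summaries, k=2.0):
    """Compare the reduced summaries across nodes and flag statistical outliers."""
    vals = np.array(list(summaries.values()))
    mu, sigma = vals.mean(), vals.std()
    if sigma == 0:
        return []
    return [n for n, v in summaries.items() if abs(v - mu) > k * sigma]
```

Because only one scalar per node crosses the network, the comparison step scales to thousands of nodes.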
Model-Based Anomaly Detection for a Transparent Optical Transmission System
NASA Astrophysics Data System (ADS)
Bengtsson, Thomas; Salamon, Todd; Ho, Tin Kam; White, Christopher A.
In this chapter, we present an approach for anomaly detection at the physical layer of networks where detailed knowledge about the devices and their operations is available. The approach combines physics-based process models with observational data models to characterize the uncertainties and derive the alarm decision rules. We formulate and apply three different methods based on this approach for a well-defined problem in optical network monitoring that features many typical challenges for this methodology. Specifically, we address the problem of monitoring optically transparent transmission systems that use dynamically controlled Raman amplification systems. We use models of amplifier physics together with statistical estimation to derive alarm decision rules and use these rules to automatically discriminate between measurement errors, anomalous losses, and pump failures. Our approach has led to an efficient tool for systematically detecting anomalies in the system behavior of a deployed network, where pro-active measures to address such anomalies are key to preventing unnecessary disturbances to the system's continuous operation.
Using Physical Models for Anomaly Detection in Control Systems
NASA Astrophysics Data System (ADS)
Svendsen, Nils; Wolthusen, Stephen
Supervisory control and data acquisition (SCADA) systems are increasingly used to operate critical infrastructure assets. However, the inclusion of advanced information technology and communications components and elaborate control strategies in SCADA systems increases the threat surface for external and subversion-type attacks. The problems are exacerbated by site-specific properties of SCADA environments that make subversion detection impractical, and by sensor noise and feedback characteristics that degrade conventional anomaly detection systems. Moreover, potential attack mechanisms are ill-defined and may include both physical and logical aspects.
An Optimized Method to Detect BDS Satellites' Orbit Maneuvering and Anomalies in Real-Time.
Huang, Guanwen; Qin, Zhiwei; Zhang, Qin; Wang, Le; Yan, Xingyuan; Wang, Xiaolei
2018-02-28
The orbital maneuvers of Global Navigation Satellite System (GNSS) constellations decrease the performance and accuracy of positioning, navigation, and timing (PNT). Because the satellites in the Chinese BeiDou Navigation Satellite System (BDS) are in Geostationary Orbit (GEO) and Inclined Geosynchronous Orbit (IGSO), maneuvers occur more frequently. Moreover, the precise start moment of a BDS satellite's orbit maneuver is not available to common users. This paper presents an improved real-time method for detecting BDS satellite orbit maneuvers and anomalies with higher timeliness and accuracy. The main contributions are as follows: (1) instead of the previous two-step method, a new one-step method with higher accuracy is proposed to determine the start moment and the pseudo random noise code (PRN) of the maneuvering satellite; (2) BDS Medium Earth Orbit (MEO) orbital maneuvers are detected for the first time, using the proposed station selection strategy; and (3) classified non-maneuvering anomalies are detected by a new median robust method using weak and strong anomaly detection factors. Data from the Multi-GNSS Experiment (MGEX) in 2017 were used for experimental analysis. The results show that the start moment of orbital maneuvers and the period of non-maneuver anomalies can be determined more accurately in real time. When orbital maneuvers and anomalies occurred, the proposed method improved data utilization by 91 and 95 min, respectively, in 2017.
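A median robust screen with weak and strong factors might look like the following MAD-based sketch; the factor values of 3 and 6 are illustrative, not the paper's calibrated ones:

```python
import numpy as np

def classify_anomalies(series, weak=3.0, strong=6.0):
    """Median-robust screening: scale each residual by the median absolute
    deviation (MAD) and grade it normal / weak anomaly / strong anomaly.
    Median and MAD are insensitive to the anomalies being hunted."""
    x = np.asarray(series, float)
    med = np.median(x)
    mad = np.median(np.abs(x - med)) or 1e-12   # guard against zero MAD
    score = np.abs(x - med) / mad
    return ["strong" if s > strong else "weak" if s > weak else "normal"
            for s in score]
```

Using the median rather than the mean matters here: a single maneuver epoch with a huge residual would drag a mean-based threshold toward itself and mask smaller anomalies.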
Li, Gang; He, Bin; Huang, Hongwei; Tang, Limin
2016-01-01
The spatial–temporal correlation is an important feature of sensor data in wireless sensor networks (WSNs). Most of the existing works based on the spatial–temporal correlation can be divided into two parts: redundancy reduction and anomaly detection. These two parts are pursued separately in existing works. In this work, the combination of temporal data-driven sleep scheduling (TDSS) and spatial data-driven anomaly detection is proposed, where TDSS can reduce data redundancy. The TDSS model is inspired by transmission control protocol (TCP) congestion control. Based on long and linear cluster structure in the tunnel monitoring system, cooperative TDSS and spatial data-driven anomaly detection are then proposed. To realize synchronous acquisition in the same ring for analyzing the situation of every ring, TDSS is implemented in a cooperative way in the cluster. To keep the precision of sensor data, spatial data-driven anomaly detection based on the spatial correlation and Kriging method is realized to generate an anomaly indicator. The experiment results show that cooperative TDSS can realize non-uniform sensing effectively to reduce the energy consumption. In addition, spatial data-driven anomaly detection is quite significant for maintaining and improving the precision of sensor data. PMID:27690035
Ranking Causal Anomalies via Temporal and Dynamical Analysis on Vanishing Correlations.
Cheng, Wei; Zhang, Kai; Chen, Haifeng; Jiang, Guofei; Chen, Zhengzhang; Wang, Wei
2016-08-01
The modern world has witnessed a dramatic increase in our ability to collect, transmit and distribute real-time monitoring and surveillance data from large-scale information systems and cyber-physical systems. Detecting system anomalies thus attracts significant interest in many fields, such as security, fault management, and industrial optimization. Recently, invariant networks have proven to be a powerful way of characterizing complex system behaviours. In an invariant network, a node represents a system component and an edge indicates a stable, significant interaction between two components. The structure and evolution of the invariant network, in particular the vanishing correlations, can shed important light on locating causal anomalies and performing diagnosis. However, existing approaches to detecting causal anomalies with the invariant network often use the percentage of vanishing correlations to rank possible causal components, which has several limitations: 1) fault propagation in the network is ignored; 2) the root causal anomalies may not always be the nodes with a high percentage of vanishing correlations; 3) temporal patterns of vanishing correlations are not exploited for robust detection. To address these limitations, in this paper we propose a network-diffusion-based framework to identify significant causal anomalies and rank them. Our approach can effectively model fault propagation over the entire invariant network and can perform joint inference on both the structural and the time-evolving broken invariance patterns. As a result, it can locate high-confidence anomalies that are truly responsible for the vanishing correlations and can compensate for unstructured measurement noise in the system. Extensive experiments on synthetic datasets, bank information system datasets, and coal plant cyber-physical system datasets demonstrate the effectiveness of our approach.
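The diffusion idea can be caricatured as random-walk smoothing of per-node broken-invariant scores over the invariant network; this is a heavy simplification of the paper's joint structural and temporal inference, with an arbitrary damping factor:

```python
import numpy as np

def rank_causal_anomalies(adj, broken, alpha=0.5, iters=100):
    """Rank nodes of an invariant network as causal-anomaly candidates.
    adj:    adjacency matrix of the invariant network
    broken: per-node score, e.g. fraction of incident invariants that vanished
    The score diffuses over the network, so a node that also explains its
    neighbours' broken edges ranks higher than its raw score alone suggests."""
    A = np.asarray(adj, float)
    deg = A.sum(axis=1, keepdims=True)
    P = A / np.where(deg == 0, 1, deg)        # row-normalised propagation matrix
    b = np.asarray(broken, float)
    r = b.copy()
    for _ in range(iters):
        r = (1 - alpha) * b + alpha * P @ r   # personalized-PageRank-style update
    return np.argsort(-r)                     # node indices, most causal first
```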
SU-G-JeP4-03: Anomaly Detection of Respiratory Motion by Use of Singular Spectrum Analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kotoku, J; Kumagai, S; Nakabayashi, S
Purpose: The implementation and realization of automatic anomaly detection of respiratory motion is a very important technique for preventing accidental damage during radiation therapy. Here, we propose an automatic anomaly detection method using singular spectrum analysis. Methods: The anomaly detection procedure consists of four parts: 1) measurement of normal respiratory motion data of a patient; 2) calculation of a trajectory matrix representing normal time-series features; 3) real-time monitoring and calculation of a trajectory matrix of real-time data; and 4) calculation of an anomaly score from the similarity of the two feature matrices. Patient motion was observed by a marker-less tracking system using a depth camera. Results: Two types of motion, coughing and sudden cessation of breathing, were successfully detected in our real-time application. Conclusion: Automatic anomaly detection of respiratory motion using singular spectrum analysis succeeded for coughing and sudden cessation of breathing, and the clinical use of this algorithm appears promising. This work was supported by JSPS KAKENHI Grant Number 15K08703.
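Steps 2 to 4 of the procedure (trajectory matrix, nominal subspace, anomaly score) can be sketched with a plain SVD; the window length and subspace rank below are arbitrary, and the score definition is one common SSA choice rather than necessarily the authors':

```python
import numpy as np

def trajectory_matrix(x, L):
    """Hankel trajectory matrix of a 1-D series with window length L."""
    x = np.asarray(x, float)
    return np.column_stack([x[i:i + L] for i in range(len(x) - L + 1)])

def ssa_subspace(x, L, r):
    """Leading r left singular vectors of the nominal trajectory matrix."""
    U, _, _ = np.linalg.svd(trajectory_matrix(x, L), full_matrices=False)
    return U[:, :r]

def anomaly_score(window, U):
    """Fraction of the test window's energy outside the nominal SSA subspace
    (0 = fully nominal, values near 1 = very dissimilar motion)."""
    w = np.asarray(window, float)
    resid = w - U @ (U.T @ w)
    return float(np.dot(resid, resid) / np.dot(w, w))
```

A cough or a breathing stop changes the local waveform shape, so its windows no longer lie in the subspace learned from normal breathing and the score jumps.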
Network Anomaly Detection Based on Wavelet Analysis
NASA Astrophysics Data System (ADS)
Lu, Wei; Ghorbani, Ali A.
2008-12-01
Signal processing techniques have recently been applied to analyzing and detecting network anomalies due to their potential to find novel or unknown intrusions. In this paper, we propose a new network signal modelling technique for detecting network anomalies, combining wavelet approximation and system identification theory. To characterize network traffic behaviors, we present fifteen features and use them as the input signals to our system. We then evaluate our approach with the 1999 DARPA intrusion detection dataset and conduct a comprehensive analysis of the intrusions in the dataset. Evaluation results show that the approach achieves high detection rates in terms of both attack instances and attack types. Furthermore, we conduct a full day's evaluation in a real large-scale WiFi ISP network, where five attack types are successfully detected from over 30 million flows.
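The wavelet side of such an approach can be illustrated with a first-level Haar detail-energy detector in pure numpy. The paper combines richer wavelet approximations with system identification; the window size and threshold here are arbitrary:

```python
import numpy as np

def haar_detail_energy(x):
    """Energy of the first-level Haar wavelet detail coefficients: a crude
    measure of the high-frequency content of a traffic feature signal."""
    x = np.asarray(x, float)
    if len(x) % 2:
        x = x[:-1]
    detail = (x[0::2] - x[1::2]) / np.sqrt(2.0)
    return float(np.sum(detail ** 2))

def flag_windows(series, win=32, threshold=1.0):
    """Start indices of windows whose detail energy exceeds the threshold."""
    series = np.asarray(series, float)
    return [i for i in range(0, len(series) - win + 1, win)
            if haar_detail_energy(series[i:i + win]) > threshold]
```

Smooth traffic trends contribute almost nothing to the detail band, so a burst of abrupt changes in a feature signal stands out sharply in this energy.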
Anomaly Detection Based on Sensor Data in Petroleum Industry Applications
Martí, Luis; Sanchez-Pi, Nayat; Molina, José Manuel; Garcia, Ana Cristina Bicharra
2015-01-01
Anomaly detection is the problem of finding patterns in data that do not conform to a priori expected behavior. It is related to the problem in which some samples are distant, in terms of a given metric, from the rest of the dataset; such anomalous samples are called outliers. Anomaly detection has recently attracted the attention of the research community because of its relevance to real-world applications like intrusion detection, fraud detection, fault detection and system health monitoring, among many others. Anomalies themselves can have a positive or negative nature, depending on their context and interpretation, but in either case it is important for decision makers to be able to detect them in order to take appropriate actions. The petroleum industry is one application context where these problems are present: the correct detection of such unusual information empowers the decision maker to act on the system in order to avoid, correct or react to the associated situations. In this context, heavy extraction machines for pumping and generation operations, such as turbomachines, are intensively monitored by hundreds of sensors each, which send measurements at high frequency for damage prevention. In this paper, we propose a combination of yet another segmentation algorithm (YASA), a novel fast and high-quality segmentation algorithm, with a one-class support vector machine approach for efficient anomaly detection in turbomachines. The proposal is meant to deal with this task while coping with the lack of labeled training data. We perform a series of empirical studies comparing our approach to other methods on benchmark problems and on a real-life application involving oil platform turbomachinery anomaly detection. PMID:25633599
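A rough sketch of the segment-then-score pipeline: a greedy mean-tracking segmenter stands in for YASA, and a centroid-plus-quantile envelope stands in for the one-class SVM, so only the shape of the pipeline, not either algorithm, is faithful to the paper:

```python
import numpy as np

def segment(series, max_err=0.5):
    """Greedy segmentation (stand-in for YASA): start a new segment whenever
    the running mean no longer explains the incoming sample."""
    segs, start = [], 0
    for i in range(1, len(series)):
        if abs(series[i] - np.mean(series[start:i])) > max_err:
            segs.append((start, i))
            start = i
    segs.append((start, len(series)))
    return segs

def fit_envelope(features, q=0.95):
    """One-class boundary (stand-in for the one-class SVM): centroid of the
    nominal segment features plus the q-quantile of training distances."""
    F = np.asarray(features, float)
    c = F.mean(axis=0)
    d = np.linalg.norm(F - c, axis=1)
    return c, float(np.quantile(d, q))

def is_anomalous(feature, model):
    """A segment whose feature vector leaves the envelope is an anomaly."""
    c, radius = model
    return float(np.linalg.norm(np.asarray(feature) - c)) > radius
```

In the full method each segment would be summarized by a feature vector (e.g. mean and variance of the sensor within the segment) before scoring, which is why only unlabeled nominal data is needed for training.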
Ferragut, Erik M.; Laska, Jason A.; Bridges, Robert A.
2016-06-07
A system is described for receiving a stream of events and scoring the events based on anomalousness and maliciousness (or other classification). The system can include a plurality of anomaly detectors that together implement an algorithm to identify low-probability events and detect atypical traffic patterns. The anomaly detector provides for comparability of disparate sources of data (e.g., network flow data and firewall logs). Additionally, the anomaly detector allows for regulatability, meaning that the algorithm is user-configurable to adjust the number of false alerts. The anomaly detector can be used with a variety of probability density functions, including normal Gaussian distributions, irregular distributions, and functions associated with continuous or discrete variables.
Accurate mobile malware detection and classification in the cloud.
Wang, Xiaolei; Yang, Yuexiang; Zeng, Yingzhi
2015-01-01
As the dominant smartphone operating system, Android has attracted the attention of malware authors and researchers alike. The number of Android malware variants is increasing rapidly despite the considerable number of proposed malware analysis systems. In this paper, taking advantage of the low false-positive rate of misuse detection and the ability of anomaly detection to detect zero-day malware, we propose a novel hybrid detection system based on a new open-source framework, CuckooDroid, which enables the use of Cuckoo Sandbox's features to analyze Android malware through dynamic and static analysis. Our proposed system consists of two main parts: an anomaly detection engine that detects abnormal apps through dynamic analysis, and a signature detection engine that detects and classifies known malware by combining static and dynamic analysis. We evaluate our system using 5,560 malware samples and 6,000 benign samples. Experiments show that our anomaly detection engine with dynamic analysis is capable of detecting zero-day malware with a low false negative rate (1.16%) and an acceptable false positive rate (1.30%); notably, our signature detection engine with hybrid analysis can accurately classify malware samples with an average true positive rate of 98.94%. Given the intensive computing resources required by static and dynamic analysis, our proposed detection system should be deployed off-device, such as in the cloud, where app store markets and ordinary users can access it for malware detection as a cloud service.
Detecting errors and anomalies in computerized materials control and accountability databases
DOE Office of Scientific and Technical Information (OSTI.GOV)
Whiteson, R.; Hench, K.; Yarbro, T.
The Automated MC and A Database Assessment project is aimed at improving anomaly and error detection in materials control and accountability (MC and A) databases and increasing confidence in the data that they contain. Anomalous data resulting in poor categorization of nuclear material inventories greatly reduces the value of the database information to users. Therefore it is essential that MC and A data be assessed periodically for anomalies or errors. Anomaly detection can identify errors in databases and thus provide assurance of the integrity of data. An expert system has been developed at Los Alamos National Laboratory that examines these large databases for anomalous or erroneous data. For several years, MC and A subject matter experts at Los Alamos have been using this automated system to examine the large amounts of accountability data that the Los Alamos Plutonium Facility generates. These data are collected and managed by the Material Accountability and Safeguards System, a near-real-time computerized nuclear material accountability and safeguards system. This year they have expanded the user base, customizing the anomaly detector for the varying requirements of different groups of users. This paper describes the progress in customizing the expert systems to the needs of the users of the data and reports on their results.
Firefly Algorithm in detection of TEC seismo-ionospheric anomalies
NASA Astrophysics Data System (ADS)
Akhoondzadeh, Mehdi
2015-07-01
Anomaly detection in time series of different earthquake precursors is an essential step toward creating an early warning system with acceptable uncertainty. Since these time series are often nonlinear, complex and massive, the predictor applied should be able to detect discord patterns in large data sets in a short time. This study presents the Firefly Algorithm (FA) as a simple and robust predictor to detect TEC (Total Electron Content) seismo-ionospheric anomalies around the times of some powerful earthquakes, including Chile (27 February 2010), Varzeghan (11 August 2012) and Saravan (16 April 2013). Outstanding anomalies were observed 7 and 5 days before the Chile and Varzeghan earthquakes, respectively, and 3 and 8 days prior to the Saravan earthquake.
Infrasonic Influences of Tornados and Cyclonic Weather Systems
NASA Astrophysics Data System (ADS)
Cook, Tessa
2014-03-01
Infrasound waves travel through the air at approximately 340 m/s at sea level while experiencing little friction, allowing them to travel over large distances. When seismic waves travel through unconsolidated soil, they slow to approximately 340 m/s. Because the wave speeds in air and ground are similar, a more effective transfer of energy from the atmosphere to the ground can occur. Large ring lasers can detect sources of infrasound traveling through the ground by measuring anomalies in the frequency difference between their two counter-rotating beams. Sources of infrasound include tornados and other cyclonic weather systems. How these systems create waves that transfer to the ground is not yet understood and will be examined in further research; this research has focused on isolating the times at which the ring laser detected anomalies, in order to investigate whether those anomalies can be attributed to isolatable weather systems. Furthermore, this research analyzed the frequencies detected in each anomaly and compared them with various characteristics of each weather system, such as tornado width, wind speeds, and system development. This research may be beneficial for monitoring gravity waves and weather systems.
System and method for anomaly detection
Scherrer, Chad
2010-06-15
A system and method for detecting one or more anomalies in a plurality of observations is provided. In one illustrative embodiment, the observations are real-time network observations collected from a stream of network traffic. The method includes performing a discrete decomposition of the observations, and introducing derived variables to increase storage and query efficiencies. A mathematical model, such as a conditional independence model, is then generated from the formatted data. The formatted data is also used to construct frequency tables which maintain an accurate count of specific variable occurrence as indicated by the model generation process. The formatted data is then applied to the mathematical model to generate scored data. The scored data is then analyzed to detect anomalies.
HPNAIDM: The High-Performance Network Anomaly/Intrusion Detection and Mitigation System
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Yan
Identifying traffic anomalies and attacks rapidly and accurately is critical for large network operators. With the rapid growth of network bandwidth, such as the next generation DOE UltraScience Network, and the fast emergence of new attacks/viruses/worms, existing network intrusion detection systems (IDS) are insufficient because they: • Are mostly host-based and not scalable to high-performance networks; • Are mostly signature-based and unable to adaptively recognize flow-level unknown attacks; • Cannot differentiate malicious events from unintentional anomalies. To address these challenges, we proposed and developed a new paradigm called the high-performance network anomaly/intrusion detection and mitigation (HPNAIDM) system. The new paradigm is significantly different from existing IDSes, with the following features (research thrusts): • Online traffic recording and analysis on high-speed networks; • Online adaptive flow-level anomaly/intrusion detection and mitigation; • An integrated approach for false positive reduction. Our research prototype and evaluation demonstrate that the HPNAIDM system is highly effective and economically feasible. Beyond satisfying the pre-set goals, we exceeded them significantly (see details in the next section). Overall, our project produced 23 publications (2 book chapters, 6 journal papers and 15 peer-reviewed conference/workshop papers). We also built a website for technique dissemination, which hosts two system prototype releases for the research community, filed a patent application, and developed strong international and domestic collaborations spanning both academia and industry.
Integrity Verification for SCADA Devices Using Bloom Filters and Deep Packet Inspection
2014-03-27
…prevent intrusions in smart grids [PK12]. Parthasarathy proposed an anomaly detection based IDS that takes into account system state. In his implementation… [LMV12] O. Linda, M. Manic, and T. Vollmer. Improving cyber-security of smart grid systems via anomaly detection and… [PK12] S. Parthasarathy and D. Kundur. Bloom filter based intrusion detection for smart grid SCADA. In Electrical & Computer Engineering.
2013-01-01
The extra-cranial venous system is complex and not well studied in comparison to the peripheral venous system. A newly proposed vascular condition, named chronic cerebrospinal venous insufficiency (CCSVI), described initially in patients with multiple sclerosis (MS), has triggered intense interest in better understanding the role of extra-cranial venous anomalies and developmental variants. So far, there is no established diagnostic imaging modality, non-invasive or invasive, that can serve as the "gold standard" for detection of these venous anomalies. However, consensus guidelines and standardized imaging protocols are emerging. Most likely, a multimodal imaging approach will ultimately be the most comprehensive means for screening, diagnostic and monitoring purposes. Further research is needed to determine the spectrum of extra-cranial venous pathology and to compare the imaging findings with pathological examinations. The ability to define and reliably detect these anomalies noninvasively is an essential step toward establishing their incidence and prevalence. The role of these anomalies in causing significant hemodynamic consequences for the intra-cranial venous drainage in MS patients and other neurologic disorders, and in aging, remains unproven. PMID:23806142
EMPACT 3D: an advanced EMI discrimination sensor for CONUS and OCONUS applications
NASA Astrophysics Data System (ADS)
Keranen, Joe; Miller, Jonathan S.; Schultz, Gregory; Sander-Olhoeft, Morgan; Laudato, Stephen
2018-04-01
We recently developed a new, man-portable, electromagnetic induction (EMI) sensor designed to detect and classify small, unexploded sub-munitions and discriminate them from non-hazardous debris. The ability to distinguish innocuous metal clutter from potentially hazardous unexploded ordnance (UXO) and other explosive remnants of war (ERW) before excavation can significantly accelerate land reclamation efforts by eliminating time spent removing harmless scrap metal. The EMI sensor employs a multi-axis transmitter and receiver configuration to produce data sufficient for anomaly discrimination. A real-time data inversion routine produces intrinsic and extrinsic anomaly features describing the polarizability, location, and orientation of the anomaly under test. We discuss data acquisition and post-processing software development, and results from laboratory and field tests demonstrating the discrimination capability of the system. Data acquisition and real-time processing emphasize ease-of-use, quality control (QC), and display of discrimination results. Integration of the QC and discrimination methods into the data acquisition software reduces the time required between sensor data collection and the final anomaly discrimination result. The system supports multiple concepts of operations (CONOPs) including: 1) a non-GPS cued configuration in which detected anomalies are discriminated and excavated immediately following the anomaly survey; 2) GPS integration to survey multiple anomalies to produce a prioritized dig list with global anomaly locations; and 3) a dynamic mapping configuration supporting detection followed by discrimination and excavation of targets of interest.
2014-10-02
potential advantages of using multivariate classification/discrimination/anomaly detection methods on real world accelerometric condition monitoring… case of false anomaly reports. A possible explanation of this phenomenon could be given… ANNUAL CONFERENCE OF THE PROGNOSTICS AND HEALTH MANAGEMENT… of those helicopters. 1. Anomaly detection by means of a self-learning Shewhart control chart. A problem highlighted by the experts of AgustaWestland
Extending TOPS: Ontology-driven Anomaly Detection and Analysis System
NASA Astrophysics Data System (ADS)
Votava, P.; Nemani, R. R.; Michaelis, A.
2010-12-01
Terrestrial Observation and Prediction System (TOPS) is a flexible modeling software system that integrates ecosystem models with frequent satellite and surface weather observations to produce ecosystem nowcasts (assessments of current conditions) and forecasts useful in natural resources management, public health and disaster management. We have been extending TOPS to include a capability for automated anomaly detection and analysis of both on-line (streaming) and off-line data. In order to best capture the knowledge about data hierarchies, Earth science models and implied dependencies between anomalies and occurrences of observable events such as urbanization, deforestation, or fires, we have developed an ontology to serve as a knowledge base. We can query the knowledge base and answer questions about dataset compatibilities, similarities and dependencies so that we can, for example, automatically analyze similar datasets in order to verify a given anomaly occurrence in multiple data sources. We are further extending the system to go beyond anomaly detection towards reasoning about possible causes of anomalies that are also encoded in the knowledge base as either learned or implied knowledge. This enables us to scale up the analysis by eliminating a large number of anomalies early during processing, either through failure to verify them from other sources or by matching them directly with other observable events, without having to perform an extensive and time-consuming exploration and analysis. The knowledge is captured using the OWL ontology language, where connections are defined in a schema that is later extended by including specific instances of datasets and models. The information is stored using a Sesame server and is accessible through both a Java API and web services using the SeRQL and SPARQL query languages. Inference is provided by the OWLIM component integrated with Sesame.
Communications and tracking expert systems study
NASA Technical Reports Server (NTRS)
Leibfried, T. F.; Feagin, Terry; Overland, David
1987-01-01
The original objectives of the study consisted of five broad areas of investigation: criteria and issues for explanation of communication and tracking system anomaly detection, isolation, and recovery; data storage simplification issues for fault detection expert systems; data selection procedures for decision tree pruning and optimization to enhance the abstraction of pertinent information for clear explanation; criteria for establishing levels of explanation suited to needs; and analysis of expert system interaction and modularization. Progress was made in all areas, but to a lesser extent in the criteria for establishing levels of explanation suited to needs. Among the types of expert systems studied were those related to anomaly or fault detection, isolation, and recovery.
NASA Astrophysics Data System (ADS)
Brax, Christoffer; Niklasson, Lars
2009-05-01
Maritime Domain Awareness (MDA) is important for both civilian and military applications. An important part of MDA is the detection of unusual vessel activities such as piracy, smuggling, poaching, and collisions. Today's interconnected sensor systems provide huge amounts of information over large geographical areas, which can push operators past their cognitive capacity and cause them to miss important events. We propose an agent-based situation management system that automatically analyses sensor information to detect unusual activity and anomalies. The system combines knowledge-based detection with data-driven anomaly detection, and is evaluated using information from both radar and AIS sensors.
Artificial intelligence techniques for ground test monitoring of rocket engines
NASA Technical Reports Server (NTRS)
Ali, Moonis; Gupta, U. K.
1990-01-01
An expert system is being developed which can detect anomalies in Space Shuttle Main Engine (SSME) sensor data significantly earlier than the redline algorithm currently in use. The training of this expert system focuses on two approaches, based on low-frequency and high-frequency analyses of sensor data. Both approaches are being tested on data from SSME tests and their results compared with the findings of NASA and Rocketdyne experts. Prototype implementations have detected the presence of anomalies earlier than the redline algorithms currently in use. It therefore appears that these approaches have the potential of detecting anomalies early enough to shut down the engine or take other corrective action before severe damage to the engine occurs.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Johnson, Brett Emery Trabun; Gamage, Thoshitha Thanushka; Bakken, David Edward
This disclosure describes, in part, a system management component and failure detection component for use in a power grid data network to identify anomalies within the network and systematically adjust the quality of service of data published by publishers and subscribed to by subscribers within the network. In one implementation, subscribers may identify a desired data rate, a minimum acceptable data rate, desired latency, minimum acceptable latency and a priority for each subscription. The failure detection component may identify an anomaly within the network and a source of the anomaly. Based on the identified anomaly, data rates and/or data paths may be adjusted in real-time to ensure that the power grid data network does not become overloaded and/or fail.
Integrated System Health Management Development Toolkit
NASA Technical Reports Server (NTRS)
Figueroa, Jorge; Smith, Harvey; Morris, Jon
2009-01-01
This software toolkit is designed to model complex systems for the implementation of embedded Integrated System Health Management (ISHM) capability, which focuses on determining the condition (health) of every element in a complex system (detect anomalies, diagnose causes, and predict future anomalies), and to provide data, information, and knowledge (DIaK) to control systems for safe and effective operation.
A hybrid approach for efficient anomaly detection using metaheuristic methods
Ghanem, Tamer F.; Elkilani, Wail S.; Abdul-kader, Hatem M.
2014-01-01
Network intrusion detection based on anomaly detection techniques has a significant role in protecting networks and systems against harmful activities. Different metaheuristic techniques have been used for anomaly detector generation; yet, the reported literature has not studied the use of the multi-start metaheuristic method for detector generation. This paper proposes a hybrid approach for anomaly detection in large-scale datasets using detectors generated with the multi-start metaheuristic method and genetic algorithms. The proposed approach takes inspiration from negative selection-based detector generation. The approach is evaluated using the NSL-KDD dataset, a modified version of the widely used KDD CUP 99 dataset. The results show its effectiveness in generating a suitable number of detectors, with an accuracy of 96.1% compared to competing machine learning algorithms. PMID:26199752
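The negative-selection idea this approach draws on can be sketched in a few lines: random candidate detectors are kept only if they do not cover any "self" (normal) sample, and a point is flagged when some detector covers it. This is a minimal illustrative sketch, not the paper's multi-start/genetic hybrid; the feature space, radius, and detector count are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def generate_detectors(self_samples, n_detectors=200, radius=0.15):
    """Negative selection: keep random candidate detectors that do NOT
    cover any 'self' (normal) sample within the given radius."""
    detectors = []
    while len(detectors) < n_detectors:
        cand = rng.random(self_samples.shape[1])
        if np.linalg.norm(self_samples - cand, axis=1).min() > radius:
            detectors.append(cand)
    return np.array(detectors)

def is_anomalous(x, detectors, radius=0.15):
    """A point is flagged as anomalous if any detector covers it."""
    return bool(np.linalg.norm(detectors - x, axis=1).min() <= radius)

# Normal traffic clusters in one corner of a unit feature square.
self_samples = rng.random((500, 2)) * 0.4
detectors = generate_detectors(self_samples)
```

Because detectors are censored against self samples, the covered region approximates the complement of normal behavior, which is why no labeled attack data is needed to generate them.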
Dataset of anomalies and malicious acts in a cyber-physical subsystem.
Laso, Pedro Merino; Brosset, David; Puentes, John
2017-10-01
This article presents a dataset produced to investigate how data and information quality estimations enable the detection of anomalies and malicious acts in cyber-physical systems. Data were acquired using a cyber-physical subsystem consisting of liquid containers for fuel or water, along with its automated control and data acquisition infrastructure. The data consist of temporal series representing five operational scenarios - normal, anomalies, breakdown, sabotages, and cyber-attacks - corresponding to 15 different real situations. The dataset is publicly available in the .zip file published with the article, to allow faulty-operation detection and characterization methods for cyber-physical systems to be investigated and compared.
Real-time Bayesian anomaly detection in streaming environmental data
NASA Astrophysics Data System (ADS)
Hill, David J.; Minsker, Barbara S.; Amir, Eyal
2009-04-01
With large volumes of data arriving in near real time from environmental sensors, there is a need for automated detection of anomalous data caused by sensor or transmission errors or by infrequent system behaviors. This study develops and evaluates three automated anomaly detection methods using dynamic Bayesian networks (DBNs), which perform fast, incremental evaluation of data as they become available, scale to large quantities of data, and require no a priori information regarding process variables or types of anomalies that may be encountered. This study investigates these methods' abilities to identify anomalies in eight meteorological data streams from Corpus Christi, Texas. The results indicate that DBN-based detectors, using either robust Kalman filtering or Rao-Blackwellized particle filtering, outperform a DBN-based detector using Kalman filtering, with the former having false positive/negative rates of less than 2%. These methods were successful at identifying data anomalies caused by two real events: a sensor failure and a large storm.
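The residual-testing idea behind these detectors can be sketched with a scalar random-walk Kalman filter: each new reading is compared against the filter's prediction, and readings whose innovation is improbably large are flagged and rejected rather than folded into the state estimate. This is a minimal sketch of the general technique, not the study's DBN or Rao-Blackwellized particle filtering implementation; the noise parameters and threshold are illustrative assumptions.

```python
import numpy as np

def kalman_anomalies(obs, q=1e-3, r=0.1, threshold=4.0):
    """Scalar random-walk Kalman filter that flags observations whose
    innovation exceeds `threshold` standard deviations; flagged points
    are rejected instead of being folded into the state estimate."""
    x, p = obs[0], 1.0                 # state estimate and its variance
    flags = [False]
    for z in obs[1:]:
        p += q                         # predict step: add process noise
        innov, s = z - x, p + r        # innovation and its variance
        if abs(innov) / np.sqrt(s) > threshold:
            flags.append(True)         # improbable reading: reject it
        else:
            flags.append(False)
            k = p / s                  # Kalman gain
            x += k * innov             # fold the reading into the state
            p *= 1 - k
    return np.array(flags)

rng = np.random.default_rng(2)
temps = 20 + 0.1 * rng.standard_normal(300)   # steady sensor stream
temps[150] = 35.0                             # simulated transmission error
flags = kalman_anomalies(temps)
```

The evaluation is incremental, one reading at a time, which is what makes this family of methods suitable for streaming environmental data.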
Inductive System Monitors Tasks
NASA Technical Reports Server (NTRS)
2008-01-01
The Inductive Monitoring System (IMS) software developed at Ames Research Center uses artificial intelligence and data mining techniques to build system-monitoring knowledge bases from archived or simulated sensor data: nominal data are grouped into classes, which are then used to build the monitoring knowledge base. This information is used to detect unusual or anomalous behavior that may indicate an impending system failure. IMS currently helps analyze data from systems that help fly and maintain the space shuttle and the International Space Station (ISS). In real time, IMS performs monitoring functions: determining and displaying the degree of deviation from nominal performance. IMS trend analyses can detect conditions that may indicate a failure or required system maintenance. The development of IMS was motivated by the difficulty of producing detailed diagnostic models of some system components, due to complexity or unavailability of design information. Successful applications have ranged from real-time monitoring of aircraft engine and control systems to anomaly detection in space shuttle and ISS data. IMS was used on shuttle missions STS-121, STS-115, and STS-116 to search the Wing Leading Edge Impact Detection System (WLEIDS) data for signs of possibly damaging impacts during launch. It independently verified the findings of the WLEIDS Mission Evaluation Room (MER) analysts and indicated additional points of interest that were subsequently investigated by the MER team. In support of the Exploration Systems Mission Directorate, IMS is being deployed as an anomaly detection tool on ISS mission control consoles in the Johnson Space Center Mission Operations Directorate. IMS has been trained to detect faults in the ISS Control Moment Gyroscope (CMG) systems, and in laboratory tests it has already detected several minor anomalies in real-time CMG data.
When tested on archived data, IMS was able to detect precursors of the CMG1 failure nearly 15 hours in advance of the actual failure event. In the Aeronautics Research Mission Directorate, IMS successfully performed real-time engine health analysis. IMS was able to detect simulated failures and actual engine anomalies in an F/A-18 aircraft during the course of 25 test flights. IMS is also being used in colla
Security inspection in ports by anomaly detection using hyperspectral imaging technology
NASA Astrophysics Data System (ADS)
Rivera, Javier; Valverde, Fernando; Saldaña, Manuel; Manian, Vidya
2013-05-01
Applying hyperspectral imaging technology to port security is crucial for the detection of possible threats or illegal activities. One of the most common problems cargo suffers is tampering, which is a danger to society because it creates a channel for smuggling illegal and hazardous products. If a cargo is altered, security inspections of that cargo should reveal anomalies indicating the nature of the tampering. Hyperspectral images can detect anomalies by gathering information across multiple electromagnetic bands; the spectra extracted from these bands can be used to detect surface anomalies from different materials. Based on this technology, a scenario was built in which a hyperspectral camera inspects cargo for surface anomalies and a user interface shows the results. The spectra of items altered by different materials that can be used to conceal illegal products are analyzed and classified in order to provide information about the tampered cargo. The image is analyzed with a variety of techniques, including multiple feature-extraction algorithms, autonomous anomaly detection, and target spectrum detection. The results are exported to a workstation or mobile device and shown in an easy-to-use interface. This process could enhance the capabilities of security systems already in place, providing a more complete approach to detecting threats and illegal cargo.
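A common baseline for the kind of autonomous anomaly detection mentioned above is the global RX detector: each pixel's spectrum is scored by its Mahalanobis distance from the scene's mean spectrum, so surface materials unlike the background stand out. This is a hedged sketch of that standard technique on synthetic data, not necessarily the algorithm used in the paper.

```python
import numpy as np

def rx_scores(cube):
    """Global RX anomaly detector: Mahalanobis distance of each pixel's
    spectrum from the scene mean. `cube` has shape (rows, cols, bands)."""
    h, w, b = cube.shape
    pixels = cube.reshape(-1, b)
    mu = pixels.mean(axis=0)
    cov_inv = np.linalg.inv(np.cov(pixels, rowvar=False))
    diff = pixels - mu
    # d2[i] = diff[i] @ cov_inv @ diff[i] for every pixel i
    d2 = np.einsum("ij,jk,ik->i", diff, cov_inv, diff)
    return d2.reshape(h, w)

rng = np.random.default_rng(3)
cube = rng.normal(1.0, 0.05, size=(32, 32, 10))  # uniform cargo surface
cube[16, 16] += 0.5                              # one tampered-surface pixel
scores = rx_scores(cube)                         # peak at the tampered pixel
```

Because the score depends only on the scene's own statistics, no target spectrum library is needed, which is what makes the detection "autonomous".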
Detection of Anomalous Insiders in Collaborative Environments via Relational Analysis of Access Logs
Chen, You; Malin, Bradley
2014-01-01
Collaborative information systems (CIS) are deployed within a diverse array of environments, ranging from the Internet to intelligence agencies to healthcare. It is increasingly the case that such systems are applied to manage sensitive information, making them targets for malicious insiders. While sophisticated security mechanisms have been developed to detect insider threats in various file systems, they are neither designed to model nor to monitor collaborative environments in which users function in dynamic teams with complex behavior. In this paper, we introduce a community-based anomaly detection system (CADS), an unsupervised learning framework to detect insider threats based on information recorded in the access logs of collaborative environments. CADS is based on the observation that typical users tend to form community structures, such that users with low affinity to such communities are indicative of anomalous and potentially illicit behavior. The model consists of two primary components: relational pattern extraction and anomaly detection. For relational pattern extraction, CADS infers community structures from CIS access logs, and subsequently derives communities, which serve as the CADS pattern core. CADS then uses a formal statistical model to measure the deviation of users from the inferred communities to predict which users are anomalies. To empirically evaluate the threat detection model, we perform an analysis with six months of access logs from a real electronic health record system in a large medical center, as well as a publicly-available dataset for replication purposes. The results illustrate that CADS can distinguish simulated anomalous users in the context of real user behavior with a high degree of certainty and with significant performance gains in comparison to several competing anomaly detection models. PMID:25485309
NASA Astrophysics Data System (ADS)
Mori, Taketoshi; Ishino, Takahito; Noguchi, Hiroshi; Shimosaka, Masamichi; Sato, Tomomasa
2011-06-01
We propose a life pattern estimation method and an anomaly detection method for elderly people living alone. In our observation system, pyroelectric sensors are deployed in the house to measure the person's activities continuously and capture their life pattern. The data are transferred successively to the operation center and displayed precisely to the nurses there, who decide whether the data indicate an anomaly. In the system, people with similar life patterns are grouped together; anomalies that occurred in the past are shared within the group and used by the anomaly detection algorithm. The algorithm is based on an "anomaly score" computed from the person's activeness, which is approximately proportional to the frequency of sensor responses per minute. The "anomaly score" is the difference between the present activeness and its long-term average: the score is positive if the present activeness is higher than the past average, and negative if it is lower. If the score exceeds a certain threshold, an anomaly event has occurred. Moreover, we developed an activity estimation algorithm that estimates residents' basic activities, such as rising and going out. The estimates are shown to the nurses along with the residents' "anomaly scores", so the nurses can assess residents' health conditions by combining the two kinds of information.
Detecting Anomalies in Process Control Networks
NASA Astrophysics Data System (ADS)
Rrushi, Julian; Kang, Kyoung-Don
This paper presents the estimation-inspection algorithm, a statistical algorithm for anomaly detection in process control networks. The algorithm determines if the payload of a network packet that is about to be processed by a control system is normal or abnormal based on the effect that the packet will have on a variable stored in control system memory. The estimation part of the algorithm uses logistic regression integrated with maximum likelihood estimation in an inductive machine learning process to estimate a series of statistical parameters; these parameters are used in conjunction with logistic regression formulas to form a probability mass function for each variable stored in control system memory. The inspection part of the algorithm uses the probability mass functions to estimate the normalcy probability of a specific value that a network packet writes to a variable. Experimental results demonstrate that the algorithm is very effective at detecting anomalies in process control networks.
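The estimation/inspection split can be sketched as follows: an estimation phase fits a logistic regression model (by maximum likelihood) to historical writes to one control variable, and an inspection phase scores the value a new packet would write and passes it only if it is probably normal. This is a minimal sketch using scikit-learn on synthetic, labeled writes; the paper instead derives a probability mass function per memory variable, and the variable, value ranges, and cutoff here are assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Estimation: learn from historical writes to one control variable
# (e.g. a hypothetical tank-level setpoint) how probable a value is
# under normal operation.
rng = np.random.default_rng(4)
normal_vals = rng.normal(50, 3, 300)       # setpoints seen in normal operation
abnormal_vals = rng.uniform(80, 100, 300)  # values injected during attacks

X = np.concatenate([normal_vals, abnormal_vals]).reshape(-1, 1)
y = np.concatenate([np.ones(300), np.zeros(300)])  # 1 = normal write

model = LogisticRegression().fit(X, y)     # maximum-likelihood fit

def inspect(packet_value, cutoff=0.5):
    """Inspection: accept the packet only if the value it writes
    is probably normal for this variable."""
    p_normal = model.predict_proba([[packet_value]])[0, 1]
    return bool(p_normal >= cutoff)
```

Scoring the effect on the control variable, rather than matching byte signatures, is what lets this style of detector catch packets that are syntactically valid but semantically dangerous.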
NASA Technical Reports Server (NTRS)
Figueroa, Fernando; Morris, Jon; Turowski, Mark; Franzl, Richard; Walker, Mark; Kapadia, Ravi; Venkatesh, Meera; Schmalzel, John
2010-01-01
Severe weather events are likely occurrences on the Mississippi Gulf Coast. It is important to rapidly diagnose and mitigate the effects of storms on Stennis Space Center's rocket engine test complex to avoid delays to critical test article programs, reduce costs, and maintain safety. An Integrated Systems Health Management (ISHM) approach and technologies are employed to integrate environmental (weather) monitoring, structural modeling, and the suite of available facility instrumentation to provide information for readiness before storms, rapid initial damage assessment to guide mitigation planning, on-going assurance as repairs are effected, and finally support for recertification. The system is denominated the Katrina Storm Monitoring System (KStorMS). ISHM describes a comprehensive set of capabilities that provide insight into the behavior and health of a system. Knowing the status of a system allows decision makers to effectively plan and execute their mission. For example, early insight into component degradation and impending failures provides more time to develop workaround strategies and to plan maintenance more effectively. Failures of system elements generally develop over time. Information extracted from sensor data, combined with system-wide knowledge bases and methods for information extraction and fusion, inference, and decision making, can be used to detect incipient failures. If failures do occur, it is critical to detect and isolate them, and to suggest an appropriate course of action. ISHM enables determining the condition (health) of every element in a complex system-of-systems (SoS) by detecting anomalies, diagnosing causes, and predicting future anomalies, and it provides data, information, and knowledge (DIaK) to control systems for safe and effective operation.
ISHM capability is achieved by using a wide range of technologies that enable anomaly detection, diagnostics, prognostics, and advice for control: (1) anomaly detection algorithms and strategies, (2) fusion of DIaK for anomaly detection (model-based, numerical, statistical, empirical, expert-based, qualitative, etc.), (3) diagnostic/prognostic strategies and methods, (4) user interfaces, (5) advanced control strategies, (6) integration architectures/frameworks, and (7) embedding of intelligence. Many of these technologies are mature, and they are being used in the KStorMS. The paper describes the design, implementation, and operation of the KStorMS, and discusses its further evolution to support other needs such as condition-based maintenance (CBM).
A system for learning statistical motion patterns.
Hu, Weiming; Xiao, Xuejuan; Fu, Zhouyu; Xie, Dan; Tan, Tieniu; Maybank, Steve
2006-09-01
Analysis of motion patterns is an effective approach for anomaly detection and behavior prediction. Current approaches for the analysis of motion patterns depend on known scenes, where objects move in predefined ways. It is highly desirable to automatically construct object motion patterns that reflect the knowledge of the scene. In this paper, we present a system for automatically learning motion patterns for anomaly detection and behavior prediction, based on a proposed algorithm for robustly tracking multiple objects. In the tracking algorithm, foreground pixels are clustered using a fast, accurate fuzzy K-means algorithm. Growing and prediction of the cluster centroids of foreground pixels ensure that each cluster centroid is associated with a moving object in the scene. In the algorithm for learning motion patterns, trajectories are clustered hierarchically using spatial and temporal information, and each motion pattern is then represented by a chain of Gaussian distributions. Based on the learned statistical motion patterns, statistical methods are used to detect anomalies and predict behaviors. Our system is tested using image sequences acquired, respectively, from a crowded real traffic scene and a model traffic scene. Experimental results show the robustness of the tracking algorithm, the efficiency of the algorithm for learning motion patterns, and the encouraging performance of the algorithms for anomaly detection and behavior prediction.
Identification and detection of anomalies through SSME data analysis
NASA Technical Reports Server (NTRS)
Pereira, Lisa; Ali, Moonis
1990-01-01
The goal of the ongoing research described in this paper is to analyze real-time ground test data in order to identify patterns associated with anomalous engine behavior and, on the basis of this analysis, to develop an expert system that detects anomalous engine behavior in the early stages of fault development. A prototype of the expert system has been developed and tested on the high-frequency data of two SSME tests, namely Test #901-0516 and Test #904-044. The comparison of our results with the post-test analyses indicates that the expert system detected the presence of the anomalies at a significantly early stage of fault development.
NASA Astrophysics Data System (ADS)
Yakunin, A. G.; Hussein, H. M.
2018-01-01
The article shows how known statistical methods, widely used in solving financial problems and in a number of other fields of science and technology, can be effectively applied, after minor modification, to climate and environmental monitoring systems for tasks such as detecting anomalies in the form of abrupt changes in signal level, the occurrence of positive and negative outliers, and violations of the cycle form in periodic processes.
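The article does not enumerate its exact methods here, but a classical representative of this family, borrowed from statistical process control, is a CUSUM-style detector for abrupt level shifts. The parameters and data below are illustrative assumptions.

```python
# CUSUM-style detection of an abrupt upward shift in a monitored signal:
# s_t = max(0, s_{t-1} + (x_t - target_mean - slack)); alarm when s_t > threshold.

def cusum_upward(signal, target_mean, slack=0.5, threshold=4.0):
    """Return the index at which an upward level shift is detected, or None."""
    s = 0.0
    for i, x in enumerate(signal):
        s = max(0.0, s + (x - target_mean - slack))
        if s > threshold:
            return i
    return None

# Temperature-like climate series with a level shift starting at index 6
series = [10.1, 9.8, 10.2, 10.0, 9.9, 10.1, 13.0, 13.2, 12.9, 13.1]
alarm_at = cusum_upward(series, target_mean=10.0)
```

A symmetric detector on the negated signal catches downward shifts; isolated outliers can be screened separately before the cumulative sum is updated.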
Spectral anomaly methods for aerial detection using KUT nuisance rejection
NASA Astrophysics Data System (ADS)
Detwiler, R. S.; Pfund, D. M.; Myjak, M. J.; Kulisek, J. A.; Seifert, C. E.
2015-06-01
This work discusses the application and optimization of a spectral anomaly method for the real-time detection of gamma radiation sources from an aerial helicopter platform. Aerial detection presents several key challenges over ground-based detection. For one, larger and more rapid background fluctuations are typical, due to higher speeds, a larger field of view, and geographically induced background changes. In addition, large variations in altitude or stand-off distance cause significant steps in background count rate, as well as spectral changes due to increased gamma-ray scatter at higher altitudes. This work details the adaptation and optimization for an aerial platform of the PNNL-developed algorithm Nuisance-Rejecting Spectral Comparison Ratios for Anomaly Detection (NSCRAD), a spectral anomaly method previously developed for ground-based applications. The algorithm has been optimized for two multi-detector systems: a NaI(Tl)-detector-based system and a CsI detector array. The optimization covers the adaptation of the spectral windows for a particular set of target sources to aerial detection, and the tailoring to the specific detectors. The methodology and results are also shown for background rejection optimized for aerial gamma-ray detection using potassium, uranium, and thorium (KUT) nuisance rejection. Results indicate that realistic KUT nuisance rejection may eliminate metric rises due to background magnitude and spectral steps encountered in aerial detection from altitude changes and geographically induced steps, such as at land-water interfaces.
Observed TEC Anomalies by GNSS Sites Preceding the Aegean Sea Earthquake of 2014
NASA Astrophysics Data System (ADS)
Ulukavak, Mustafa; Yalçınkaya, Mualla
2016-11-01
In recent years, Total Electron Content (TEC) data obtained from Global Navigation Satellite System (GNSS) receivers have been widely used to detect seismo-ionospheric anomalies. In this study, Global Positioning System Total Electron Content (GPS-TEC) data were used to investigate abnormal ionospheric behavior prior to the 2014 Aegean Sea earthquake (40.305°N 25.453°E, 24 May 2014, 09:25:03 UT, Mw 6.9). Data obtained from three Continuously Operating Reference Stations in Turkey (CORS-TR) and two International GNSS Service (IGS) sites near the epicenter of the earthquake were used to detect ionospheric anomalies before the event. The solar activity index (F10.7) and the geomagnetic activity index (Dst), both related to space weather conditions, were used to analyze these pre-earthquake ionospheric anomalies. An examination of these indices indicated high solar activity between May 8 and 15, 2014. The first significant increase (positive anomaly) in Vertical Total Electron Content (VTEC) was detected on May 14, 2014, 10 days before the earthquake; this positive anomaly can be attributed to the high solar activity. The indices do not imply high solar or geomagnetic activity after May 15, 2014. Abnormal ionospheric TEC changes (negative anomalies) were observed at all stations one day before the earthquake. These changes were below the lower bound by approximately 10-20 TEC units (TECU) and may be considered an ionospheric precursor of the 2014 Aegean Sea earthquake.
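The abstract refers to values falling outside upper and lower bounds. A common choice in the TEC-precursor literature (an assumption here, since this paper does not state its statistic) is a sliding window over the preceding days with bounds at the mean plus or minus two standard deviations:

```python
import statistics

# Classify a VTEC value against bounds derived from the preceding days.
# Window length, bound width (2 sigma), and data are illustrative.

def tec_bounds(previous_days):
    m = statistics.mean(previous_days)
    s = statistics.pstdev(previous_days)
    return m - 2 * s, m + 2 * s

def classify_tec(value, previous_days):
    """'negative'/'positive' anomaly if outside the bounds, else 'normal'."""
    lo, hi = tec_bounds(previous_days)
    if value < lo:
        return "negative"
    if value > hi:
        return "positive"
    return "normal"

window = [30.0, 32.0, 31.0, 29.0, 30.5, 31.5, 30.0]  # VTEC in TECU
```

In a real analysis, days with high F10.7 or disturbed Dst would be excluded first, so that space-weather effects are not mistaken for seismo-ionospheric anomalies.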
Method for Real-Time Model Based Structural Anomaly Detection
NASA Technical Reports Server (NTRS)
Urnes, James M., Sr. (Inventor); Smith, Timothy A. (Inventor); Reichenbach, Eric Y. (Inventor)
2015-01-01
A system and methods for real-time model-based vehicle structural anomaly detection are disclosed. A real-time measurement corresponding to a location on a vehicle structure during operation of the vehicle is received, and the real-time measurement is compared to expected operation data for the location to provide a modeling error signal. The statistical significance of the modeling error signal is calculated to provide an error significance, and the persistence of the error significance is determined. A structural anomaly is indicated if the persistence exceeds a persistence threshold value.
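The detection chain (modeling error, significance, persistence) can be sketched as follows. This is an illustration under stated assumptions, not the patented implementation: significance is taken as a z-score against nominal error scatter, and the thresholds are arbitrary.

```python
# Structural anomaly is declared only when the modeling error stays
# statistically significant for enough consecutive samples.

def detect_structural_anomaly(measurements, expected, error_std,
                              z_threshold=3.0, persistence_threshold=3):
    """Return True if |z| exceeds z_threshold for persistence_threshold samples in a row."""
    persistence = 0
    for meas, exp in zip(measurements, expected):
        z = (meas - exp) / error_std          # modeling error significance
        if abs(z) > z_threshold:
            persistence += 1                  # significance persists
            if persistence >= persistence_threshold:
                return True
        else:
            persistence = 0                   # transient noise resets the count
    return False

expected = [100.0] * 8                         # model-predicted response, say strain
healthy  = [100.2, 99.8, 100.1, 99.9, 100.3, 99.7, 100.0, 100.1]
damaged  = [100.2, 99.8, 104.0, 104.5, 104.2, 104.8, 105.0, 104.9]
```

The persistence requirement is what separates a genuine structural change from isolated sensor glitches, which produce significant but non-persistent errors.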
GBAS Ionospheric Anomaly Monitoring Based on a Two-Step Approach
Zhao, Lin; Yang, Fuxin; Li, Liang; Ding, Jicheng; Zhao, Yuxin
2016-01-01
As one significant component of space environmental weather, the ionosphere has to be monitored using Global Positioning System (GPS) receivers for the Ground-Based Augmentation System (GBAS), because an ionospheric anomaly can pose a potential threat to GBAS support of safety-critical services. The traditional code-carrier divergence (CCD) methods, which have been widely used to detect variations of the ionospheric gradient for GBAS, adopt a linear time-invariant low-pass filter to suppress the effect of high-frequency noise on the detection of the ionospheric anomaly. However, their fixed time constants impose a trade-off between response time and estimation accuracy. To relax this limitation, a two-step approach (TSA) is proposed that integrates cascaded linear time-invariant low-pass filters with an adaptive Kalman filter to detect ionospheric gradient anomalies. The performance of the proposed method is tested using simulated and real-world data. The simulation results show that the TSA can detect ionospheric gradient anomalies quickly, even under severe noise. Compared to the traditional CCD methods, experiments on real-world GPS data indicate that the average estimation accuracy of the ionospheric gradient improves by more than 31.3%, and the average response time to an ionospheric gradient at a rate of 0.018 m/s improves by more than 59.3%, demonstrating the ability of the TSA to detect a small ionospheric gradient more rapidly. PMID:27240367
NASA Technical Reports Server (NTRS)
Srivastava, Ashok N.; Akella, Ram; Diev, Vesselin; Kumaresan, Sakthi Preethi; McIntosh, Dawn M.; Pontikakis, Emmanuel D.; Xu, Zuobing; Zhang, Yi
2006-01-01
This paper describes the results of a significant research and development effort conducted at NASA Ames Research Center to develop new text mining techniques to discover anomalies in free-text reports regarding system health and safety of two aerospace systems. We discuss two problems of significant importance in the aviation industry. The first problem is that of automatic anomaly discovery about an aerospace system through the analysis of tens of thousands of free-text problem reports that are written about the system. The second problem that we address is that of automatic discovery of recurring anomalies, i.e., anomalies that may be described in different ways by different authors, at varying times and under varying conditions, but that are truly about the same part of the system. The intent of recurring anomaly identification is to determine project or system weaknesses or high-risk issues. The discovery of recurring anomalies is a key goal in building safe, reliable, and cost-effective aerospace systems. We address the anomaly discovery problem on thousands of free-text reports using two strategies: (1) as an unsupervised learning problem, where an algorithm takes free-text reports as input and automatically groups them into different bins, each bin corresponding to a different unknown anomaly category; and (2) as a supervised learning problem, where the algorithm classifies the free-text reports into one of a number of known anomaly categories. We then discuss the application of these methods to the problem of discovering recurring anomalies. In fact, the special nature of recurring anomalies (very small cluster sizes) requires incorporating new methods and measures to enhance the original approach for anomaly detection.
D'Antonio, F; Khalil, A; Garel, C; Pilu, G; Rizzo, G; Lerman-Sagie, T; Bhide, A; Thilaganathan, B; Manzoli, L; Papageorghiou, A T
2016-06-01
To explore the outcome in fetuses with prenatal diagnosis of posterior fossa anomalies apparently isolated on ultrasound imaging. MEDLINE and EMBASE were searched electronically utilizing combinations of relevant medical subject headings for 'posterior fossa' and 'outcome'. The posterior fossa anomalies analyzed were Dandy-Walker malformation (DWM), mega cisterna magna (MCM), Blake's pouch cyst (BPC) and vermian hypoplasia (VH). The outcomes observed were rate of chromosomal abnormalities, additional anomalies detected at prenatal magnetic resonance imaging (MRI), additional anomalies detected at postnatal imaging and concordance between prenatal and postnatal diagnoses. Only isolated cases of posterior fossa anomalies - defined as having no cerebral or extracerebral additional anomalies detected on ultrasound examination - were included in the analysis. Quality assessment of the included studies was performed using the Newcastle-Ottawa Scale for cohort studies. We used meta-analyses of proportions to combine data and fixed- or random-effects models according to the heterogeneity of the results. Twenty-two studies including 531 fetuses with posterior fossa anomalies were included in this systematic review. The prevalence of chromosomal abnormalities in fetuses with isolated DWM was 16.3% (95% CI, 8.7-25.7%). The prevalence of additional central nervous system (CNS) abnormalities that were missed at ultrasound examination and detected only at prenatal MRI was 13.7% (95% CI, 0.2-42.6%), and the prevalence of additional CNS anomalies that were missed at prenatal imaging and detected only after birth was 18.2% (95% CI, 6.2-34.6%). Prenatal diagnosis was not confirmed after birth in 28.2% (95% CI, 8.5-53.9%) of cases. MCM was not significantly associated with additional anomalies detected at prenatal MRI or detected after birth. Prenatal diagnosis was not confirmed postnatally in 7.1% (95% CI, 2.3-14.5%) of cases. 
The rate of chromosomal anomalies in fetuses with isolated BPC was 5.2% (95% CI, 0.9-12.7%), and there were no associated CNS anomalies detected at prenatal MRI or only after birth. Prenatal diagnosis of BPC was not confirmed after birth in 9.8% (95% CI, 2.9-20.1%) of cases. The rate of chromosomal anomalies in fetuses with isolated VH was 6.5% (95% CI, 0.8-17.1%) and there were no additional anomalies detected at prenatal MRI (0% (95% CI, 0.0-45.9%)). The proportion of cerebral anomalies detected only after birth was 14.2% (95% CI, 2.9-31.9%). Prenatal diagnosis was not confirmed after birth in 32.4% (95% CI, 18.3-48.4%) of cases. DWM apparently isolated on ultrasound imaging is a condition with a high risk of chromosomal and associated structural anomalies. Isolated MCM and BPC carry a low risk of aneuploidy or associated structural anomalies. The small number of cases with isolated VH prevents robust conclusions regarding their management from being drawn. Copyright © 2015 ISUOG. Published by John Wiley & Sons Ltd.
Data-Driven Anomaly Detection Performance for the Ares I-X Ground Diagnostic Prototype
NASA Technical Reports Server (NTRS)
Martin, Rodney A.; Schwabacher, Mark A.; Matthews, Bryan L.
2010-01-01
In this paper, we will assess the performance of a data-driven anomaly detection algorithm, the Inductive Monitoring System (IMS), which can be used to detect simulated Thrust Vector Control (TVC) system failures. However, the ability of IMS to detect these failures in a true operational setting may be related to the realistic nature of how they are simulated. As such, we will investigate both a low fidelity and high fidelity approach to simulating such failures, with the latter based upon the underlying physics. Furthermore, the ability of IMS to detect anomalies that were previously unknown and not previously simulated will be studied in earnest, as well as apparent deficiencies or misapplications that result from using the data-driven paradigm. Our conclusions indicate that robust detection performance of simulated failures using IMS is not appreciably affected by the use of a high fidelity simulation. However, we have found that the inclusion of a data-driven algorithm such as IMS into a suite of deployable health management technologies does add significant value.
Discovering System Health Anomalies Using Data Mining Techniques
NASA Technical Reports Server (NTRS)
Srivastava, Ashok N.
2005-01-01
We present a data mining framework for the analysis and discovery of anomalies in high-dimensional time series of sensor measurements that would be found in an Integrated System Health Monitoring system. We specifically treat the problem of discovering anomalous features in the time series that may be indicative of a system anomaly or, in the case of a manned system, an anomaly due to the human operator. Identification of these anomalies is crucial to building stable, reusable, and cost-efficient systems. The framework consists of an analysis platform and new algorithms that can scale to thousands of sensor streams to discover temporal anomalies. We discuss the mathematical framework that underlies the system and also describe in detail how this framework is general enough to encompass both discrete and continuous sensor measurements. We also describe a new set of data mining algorithms based on kernel methods and hidden Markov models that allow for the rapid assimilation, analysis, and discovery of system anomalies. We then describe the performance of the system on a real-world problem in the aircraft domain, where we analyze cockpit data as well as data from the aircraft propulsion, control, and guidance systems. These data comprise discrete and continuous sensor measurements and are dealt with seamlessly in order to discover anomalous flights. We conclude with recommendations that describe the tradeoffs in building an integrated, scalable platform for robust anomaly detection in ISHM applications.
Survey of Anomaly Detection Methods
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ng, B
This survey defines the problem of anomaly detection and provides an overview of existing methods. The methods are categorized into two general classes: generative and discriminative. A generative approach involves building a model that represents the joint distribution of the input features and the output labels of system behavior (e.g., normal or anomalous) and then applying the model to formulate a decision rule for detecting anomalies. A discriminative approach, on the other hand, aims to directly find the decision rule, with the smallest error rate, that distinguishes between normal and anomalous behavior. For each approach, we give an overview of popular techniques and provide references to state-of-the-art applications.
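The generative/discriminative distinction can be made concrete with a deliberately tiny example. This is an illustrative sketch (the survey's techniques are far richer): the generative side models the distribution of normal behavior and thresholds it; the discriminative side learns only a decision boundary from labelled samples. All data and thresholds are assumptions.

```python
import statistics

# Toy contrast of the two classes of anomaly detection methods.
normal_train = [9.8, 10.1, 10.0, 9.9, 10.2, 10.0, 9.7, 10.3]
anom_train   = [15.0, 15.5, 14.8]

# Generative: fit a Gaussian to normal data, flag low-likelihood points.
mu = statistics.mean(normal_train)
sigma = statistics.pstdev(normal_train)

def generative_is_anomaly(x, k=3.0):
    """Equivalent to thresholding the Gaussian likelihood at mu +/- k*sigma."""
    return abs(x - mu) > k * sigma

# Discriminative: learn a boundary directly, here the midpoint of class means.
boundary = (statistics.mean(normal_train) + statistics.mean(anom_train)) / 2

def discriminative_is_anomaly(x):
    return x > boundary
```

Note the practical difference: the generative detector needs no anomalous examples at training time, while the discriminative one requires labels for both classes.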
NASA Astrophysics Data System (ADS)
Nawir, Mukrimah; Amir, Amiza; Lynn, Ong Bi; Yaakob, Naimah; Badlishah Ahmad, R.
2018-05-01
The rapid growth of network technologies exposes systems to various network attacks, owing to the large volumes of data frequently exchanged over the Internet and the scale at which they must be handled. Moreover, network anomaly detection using machine learning is hampered by the scarcity of publicly available labelled network datasets, which has led many researchers to keep using the most common dataset (KDDCup99), even though it is no longer well suited to evaluating machine learning (ML) classification algorithms. Several issues regarding these available labelled network datasets are discussed in this paper. The aim of this paper is to build a network anomaly detection system using machine learning algorithms that is efficient, effective, and fast. The findings show that the AODE algorithm performs well in terms of accuracy and processing time for binary classification on the UNSW-NB15 dataset.
Fault Detection and Diagnosis of Railway Point Machines by Sound Analysis
Lee, Jonguk; Choi, Heesu; Park, Daihee; Chung, Yongwha; Kim, Hee-Young; Yoon, Sukhan
2016-01-01
Railway point devices act as actuators that provide different routes to trains by driving switchblades from the current position to the opposite one. Point failure can significantly affect railway operations, with potentially disastrous consequences. Therefore, early detection of anomalies is critical for monitoring and managing the condition of rail infrastructure. We present a data mining solution that utilizes audio data to efficiently detect and diagnose faults in railway condition monitoring systems. The system enables extracting mel-frequency cepstrum coefficients (MFCCs) from audio data with reduced feature dimensions using attribute subset selection, and employs support vector machines (SVMs) for early detection and classification of anomalies. Experimental results show that the system enables cost-effective detection and diagnosis of faults using a cheap microphone, with accuracy exceeding 94.1% whether used alone or in combination with other known methods. PMID:27092509
OceanXtremes: Scalable Anomaly Detection in Oceanographic Time-Series
NASA Astrophysics Data System (ADS)
Wilson, B. D.; Armstrong, E. M.; Chin, T. M.; Gill, K. M.; Greguska, F. R., III; Huang, T.; Jacob, J. C.; Quach, N.
2016-12-01
The oceanographic community must meet the challenge to rapidly identify features and anomalies in complex and voluminous observations to further science and improve decision support. Given this data-intensive reality, we are developing an anomaly detection system, called OceanXtremes, powered by an intelligent, elastic Cloud-based analytic service backend that enables execution of domain-specific, multi-scale anomaly and feature detection algorithms across the entire archive of 15 to 30-year ocean science datasets. Our parallel analytics engine is extending the NEXUS system and exploits multiple open-source technologies: Apache Cassandra as a distributed spatial "tile" cache, Apache Spark for in-memory parallel computation, and Apache Solr for spatial search and storing pre-computed tile statistics and other metadata. OceanXtremes provides these key capabilities: Parallel generation (Spark on a compute cluster) of 15 to 30-year Ocean Climatologies (e.g. sea surface temperature or SST) in hours or overnight, using simple pixel averages or customizable Gaussian-weighted "smoothing" over latitude, longitude, and time; Parallel pre-computation, tiling, and caching of anomaly fields (daily variables minus a chosen climatology) with pre-computed tile statistics; Parallel detection (over the time-series of tiles) of anomalies or phenomena by regional area-averages exceeding a specified threshold (e.g. high SST in El Nino or SST "blob" regions), or more complex, custom data mining algorithms; Shared discovery and exploration of ocean phenomena and anomalies (facet search using Solr), along with unexpected correlations between key measured variables; Scalable execution for all capabilities on a hybrid Cloud, using our on-premise OpenStack Cloud cluster or at Amazon.
The key idea is that the parallel data-mining operations will be run "near" the ocean data archives (a local "network" hop) so that we can efficiently access the thousands of files making up a three-decade time-series. The presentation will cover the architecture of OceanXtremes, parallelization of the climatology computation and anomaly detection algorithms using Spark, example results for SST and other time-series, and parallel performance metrics.
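The core computation (anomaly field as daily value minus climatology, with an area-average threshold) can be sketched serially. This is a minimal illustration, not the OceanXtremes code, which runs this in parallel with Spark over tiled archives; all data values and the threshold are assumptions.

```python
# Climatology -> anomaly field -> regional threshold check, for one small tile.

def climatology(years_of_fields):
    """Pixelwise mean over years yields the climatology field."""
    n = len(years_of_fields)
    return [sum(year[i] for year in years_of_fields) / n
            for i in range(len(years_of_fields[0]))]

def anomaly_field(daily, clim):
    """Daily variable minus climatology, pixel by pixel."""
    return [d - c for d, c in zip(daily, clim)]

def region_exceeds(anom, threshold):
    """Flag when the area-averaged anomaly exceeds the threshold (e.g. El Nino SST)."""
    return sum(anom) / len(anom) > threshold

# Three "years" of a 4-pixel SST tile, then one anomalously warm day
history = [[20.0, 21.0, 19.5, 20.5],
           [20.2, 20.8, 19.7, 20.3],
           [19.8, 21.2, 19.8, 20.2]]
clim = climatology(history)
warm_day = [22.5, 23.0, 22.0, 22.8]
anom = anomaly_field(warm_day, clim)
```

In the real system the per-tile statistics are pre-computed and cached, so the threshold check over a long time-series touches only small summaries rather than the raw fields.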
Detection of Landmines by Neutron Backscattering: Effects of Soil Moisture on the Detection System
DOE Office of Scientific and Technical Information (OSTI.GOV)
Baysoy, D. Y.; Subasi, M.
2010-01-21
Detection of buried landmines using the neutron backscattering (NBS) technique is a well-established method. It depends on detecting a hydrogen anomaly in dry soil: since a landmine and its plastic casing contain many more hydrogen atoms than dry soil, this anomaly can be detected by observing a rise in the number of neutrons moderated to thermal or epithermal energies. However, the presence of moisture in the soil limits the effectiveness of the measurements. In this work, a landmine detection system using the NBS technique was designed. A series of Monte Carlo calculations was carried out to determine the limits of the system due to the moisture content of the soil. In the simulations, an isotropic fast neutron source (²⁵²Cf, 100 µg) and a neutron detection system consisting of five ³He detectors were used in a practicable geometry. To see the effects of soil moisture on the efficiency of the detection system, soils with different water contents were tested.
SSME Post Test Diagnostic System: Systems Section
NASA Technical Reports Server (NTRS)
Bickmore, Timothy
1995-01-01
An assessment of engine and component health is routinely made after each test firing or flight firing of a Space Shuttle Main Engine (SSME). Currently, this health assessment is done by teams of engineers who manually review sensor data, performance data, and engine and component operating histories. Based on review of information from these various sources, an evaluation is made as to the health of each component of the SSME and the preparedness of the engine for another test or flight. The objective of this project - the SSME Post Test Diagnostic System (PTDS) - is to develop a computer program which automates the analysis of test data from the SSME in order to detect and diagnose anomalies. This report primarily covers work on the Systems Section of the PTDS, which automates the analyses performed by the systems/performance group at the Propulsion Branch of NASA Marshall Space Flight Center (MSFC). This group is responsible for assessing the overall health and performance of the engine, and detecting and diagnosing anomalies which involve multiple components (other groups are responsible for analyzing the behavior of specific components). The PTDS utilizes several advanced software technologies to perform its analyses. Raw test data is analyzed using signal processing routines which detect features in the data, such as spikes, shifts, peaks, and drifts. Component analyses are performed by expert systems, which use 'rules-of-thumb' obtained from interviews with the MSFC data analysts to detect and diagnose anomalies. The systems analysis is performed using case-based reasoning. Results of all analyses are stored in a relational database and displayed via an X-window-based graphical user interface which provides ranked lists of anomalies and observations by engine component, along with supporting data plots for each.
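One of the signal-processing features mentioned above, spike detection, can be sketched simply. This is an illustrative stand-in, not the PTDS routines: a point counts as a spike when it deviates from the median of its neighbors by more than a threshold, and the window size and threshold are assumptions.

```python
# Detect spike features in a sensor trace by comparison with the
# median of neighboring samples.

def detect_spikes(samples, window=2, threshold=5.0):
    """Return indices of spike features in a sensor trace."""
    spikes = []
    for i in range(window, len(samples) - window):
        neighbors = samples[i - window:i] + samples[i + 1:i + 1 + window]
        neighbors = sorted(neighbors)
        median = neighbors[len(neighbors) // 2]
        if abs(samples[i] - median) > threshold:
            spikes.append(i)
    return spikes

# A pressure-like trace with one spike at index 4
trace = [50.0, 50.2, 49.9, 50.1, 75.0, 50.0, 49.8, 50.1, 50.0]
```

Shift and drift detectors follow the same pattern with different statistics (a step in the running mean, a sustained slope), and the detected features then feed the expert-system rules described above.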
Autonomous software: Myth or magic?
NASA Astrophysics Data System (ADS)
Allan, A.; Naylor, T.; Saunders, E. S.
2008-03-01
We discuss work by the eSTAR project which demonstrates a fully closed-loop autonomous system for the follow-up of possible micro-lensing anomalies. Not only are the initial micro-lensing detections followed up in real time, but ongoing events are prioritised and continually monitored, with the returned data being analysed automatically. If the "smart software" running the observing campaign detects a planet-like anomaly, further follow-up will be scheduled autonomously and other telescopes and telescope networks alerted to the possible planetary detection. We further discuss the implications of this, and how such projects can be used to build more general autonomous observing and control systems.
Analysis of SSME Sensor Data Using BEAM
NASA Technical Reports Server (NTRS)
Zak, Michail; Park, Han; James, Mark
2004-01-01
A report describes analysis of space shuttle main engine (SSME) sensor data using Beacon-based Exception Analysis for Multimissions (BEAM) [NASA Tech Briefs articles, the two most relevant being Beacon-Based Exception Analysis for Multimissions (NPO- 20827), Vol. 26, No.9 (September 2002), page 32 and Integrated Formulation of Beacon-Based Exception Analysis for Multimissions (NPO- 21126), Vol. 27, No. 3 (March 2003), page 74] for automated detection of anomalies. A specific implementation of BEAM, using the Dynamical Invariant Anomaly Detector (DIAD), is used to find anomalies commonly encountered during SSME ground test firings. The DIAD detects anomalies by computing coefficients of an autoregressive model and comparing them to expected values extracted from previous training data. The DIAD was trained using nominal SSME test-firing data. DIAD detected all the major anomalies including blade failures, frozen sense lines, and deactivated sensors. The DIAD was particularly sensitive to anomalies caused by faulty sensors and unexpected transients. The system offers a way to reduce SSME analysis time and cost by automatically indicating specific time periods, signals, and features contributing to each anomaly. The software described here executes on a standard workstation and delivers analyses in seconds, a computing time comparable to or faster than the test duration itself, offering potential for real-time analysis.
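The DIAD idea (compare autoregressive coefficients against values learned from nominal data) can be sketched with a first-order model. This is a simplification for illustration, not the BEAM implementation: DIAD uses richer dynamical invariants, and the tolerance here is an assumption.

```python
# Estimate an AR(1) coefficient from a signal window and flag the window
# when the coefficient deviates from the nominal (trained) value.

def ar1_coefficient(x):
    """Least-squares estimate of a in x[t] = a * x[t-1] + noise."""
    num = sum(x[t - 1] * x[t] for t in range(1, len(x)))
    den = sum(x[t - 1] ** 2 for t in range(1, len(x)))
    return num / den

def diad_flag(window, nominal_coeff, tolerance=0.1):
    """Anomalous if the estimated dynamics differ from the trained dynamics."""
    return abs(ar1_coefficient(window) - nominal_coeff) > tolerance

# Nominal dynamics decay geometrically (a ~ 0.8); a frozen sense line
# produces a constant reading, whose AR(1) coefficient is 1.0.
nominal = [1.0 * 0.8 ** t for t in range(12)]
frozen  = [0.73] * 12
a_nom = ar1_coefficient(nominal)
```

This also shows why the approach is sensitive to faulty sensors: a frozen or deactivated sensor changes the signal's dynamics (its AR coefficients) even when its value stays within normal range.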
Unsupervised Anomaly Detection Based on Clustering and Multiple One-Class SVM
NASA Astrophysics Data System (ADS)
Song, Jungsuk; Takakura, Hiroki; Okabe, Yasuo; Kwon, Yongjin
Intrusion detection systems (IDS) have played an important role as devices to defend our networks from cyber attacks. However, since they are unable to detect unknown attacks, i.e., 0-day attacks, the ultimate challenge in the intrusion detection field is how to exactly identify such attacks in an automated manner. Over the past few years, several studies on these problems have addressed anomaly detection using unsupervised learning techniques such as clustering, one-class support vector machines (SVM), etc. Although they enable one to construct intrusion detection models at low cost and effort, and have the capability to detect unforeseen attacks, they still have two main problems in intrusion detection: a low detection rate and a high false positive rate. In this paper, we propose a new anomaly detection method based on clustering and multiple one-class SVMs in order to improve the detection rate while maintaining a low false positive rate. We evaluated our method using the KDD Cup 1999 data set. Evaluation results show that our approach outperforms the existing algorithms reported in the literature, especially in the detection of unknown attacks.
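The structure of the pipeline (cluster the unlabelled traffic, then train one detector per cluster) can be sketched as follows. This shows the shape of the method only: a simple 1-D k-means plus a per-cluster distance threshold stands in for the paper's multiple one-class SVMs, and all data and parameters are assumptions.

```python
import statistics

# Cluster normal traffic, fit one per-cluster detector, and flag points
# that no cluster's detector accepts.

def kmeans_1d(points, k=2, iters=20):
    """Plain k-means on scalars; initial centers spread over the sorted data."""
    centers = sorted(points)[::max(1, len(points) // k)][:k]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda c: abs(p - centers[c]))
            clusters[nearest].append(p)
        centers = [statistics.mean(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return centers, clusters

def fit_detectors(clusters, centers, slack=1.0):
    """Per cluster: accept points within (max training distance + slack)."""
    return [max(abs(p - centers[i]) for p in c) + slack
            for i, c in enumerate(clusters)]

def is_attack(p, centers, radii):
    """Anomalous if every per-cluster detector rejects the point."""
    return all(abs(p - centers[i]) > radii[i] for i in range(len(centers)))

traffic = [10, 11, 9, 10, 12, 50, 51, 49, 52, 48]   # two normal traffic modes
centers, clusters = kmeans_1d(traffic)
radii = fit_detectors(clusters, centers)
```

Training one detector per cluster is what lets the method cover multimodal normal traffic tightly, which is how it improves the detection rate without inflating false positives.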
Intelligent system for a remote diagnosis of a photovoltaic solar power plant
NASA Astrophysics Data System (ADS)
Sanz-Bobi, M. A.; Muñoz San Roque, A.; de Marcos, A.; Bada, M.
2012-05-01
Usually small and mid-sized photovoltaic solar power plants are located in rural areas and typically operate unattended. Technicians are in charge of the supervision of these plants and, if an alarm is automatically issued, they try to investigate the problem and correct it. Sometimes these anomalies are detected hours or days after they begin, and the analysis of their causes once an anomaly is detected can take additional time. These factors motivated the development of a methodology able to perform continuous and automatic monitoring of the basic parameters of a photovoltaic solar power plant in order to detect anomalies as soon as possible, to diagnose their causes, and to immediately inform the personnel in charge of the plant. The proposed methodology starts from a study of the most significant failure modes of a photovoltaic plant through an FMEA; using this information, the plant's typical performance is characterized by creating normal behaviour models, which are used to detect the presence of a failure in an incipient or current form. Once an anomaly is detected, an automatic and intelligent diagnosis process is started in order to investigate the possible causes. The paper describes the main features of a software tool able to detect anomalies and to diagnose them in a photovoltaic solar power plant.
Fault Management Technology Maturation for NASA's Constellation Program
NASA Technical Reports Server (NTRS)
Waterman, Robert D.
2010-01-01
This slide presentation reviews the maturation of fault management technology in preparation for the Constellation Program. There is a review of the Space Shuttle Main Engine (SSME) and a discussion of a couple of incidents with the shuttle main engine and tanking that indicated the necessity for predictive maintenance. Included is a review of the planned Ares I-X Ground Diagnostic Prototype (GDP) and further information about detection and isolation of faults using the Testability Engineering and Maintenance System (TEAMS). Another system being readied for use to detect anomalies is the Inductive Monitoring System (IMS). The IMS automatically learns how the system behaves and alerts operators if the current behavior is anomalous. The comparison of STS-83 and STS-107 (i.e., the Columbia accident) is shown as an example of the anomaly detection capabilities.
Item Anomaly Detection Based on Dynamic Partition for Time Series in Recommender Systems
Gao, Min; Tian, Renli; Wen, Junhao; Xiong, Qingyu; Ling, Bin; Yang, Linda
2015-01-01
In recent years, recommender systems have become an effective method to cope with information overload. However, recommendation technology still suffers from many problems, one of which is shilling attacks: attackers inject spam user profiles to disturb the list of recommended items. Two characteristics are shared by all types of shilling attacks: 1) item abnormality, i.e., the rating of target items is always the maximum or minimum; and 2) attack promptness, i.e., it takes only a very short period of time to inject attack profiles. Some papers have proposed item anomaly detection methods based on these two characteristics, but their detection rate, false alarm rate, and universality need to be further improved. To solve these problems, this paper proposes an item anomaly detection method based on dynamic partitioning for time series. This method first dynamically partitions item-rating time series based on important points. Then, we use the chi-square distribution (χ²) to detect abnormal intervals. The experimental results on MovieLens 100K and 1M indicate that this approach has a high detection rate and a low false alarm rate and is stable toward different attack models and filler sizes. PMID:26267477
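The chi-square interval test can be illustrated in a few lines: given rating counts per interval and an expected rating distribution, flag intervals whose Pearson statistic exceeds a critical value. The paper's dynamic partitioning at important points is replaced here by pre-given intervals; the uniform expected distribution and the df=4, 95% critical value (9.488) in the usage are assumptions.

```python
def chi_square_stat(observed, expected):
    """Pearson chi-square statistic between observed and expected counts."""
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected) if e > 0)

def abnormal_intervals(interval_counts, expected_probs, threshold):
    """Flag rating-count intervals whose distribution deviates from expectation."""
    flags = []
    for counts in interval_counts:
        n = sum(counts)
        expected = [p * n for p in expected_probs]
        flags.append(chi_square_stat(counts, expected) > threshold)
    return flags
```

An interval of ordinary mixed ratings passes, while a push-attack interval consisting solely of maximum ratings is flagged.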
DOE Office of Scientific and Technical Information (OSTI.GOV)
Solaimani, Mohiuddin; Iftekhar, Mohammed; Khan, Latifur
Anomaly detection refers to the identification of an irregular or unusual pattern which deviates from what is standard, normal, or expected. Such deviated patterns typically correspond to samples of interest and are assigned different labels in different domains, such as outliers, anomalies, exceptions, or malware. Detecting anomalies in fast, voluminous streams of data is a formidable challenge. This paper presents a novel, generic, real-time distributed anomaly detection framework for heterogeneous streaming data where anomalies appear as a group. We have developed a distributed statistical approach to build a model and later use it to detect anomalies. As a case study, we investigate group anomaly detection for a VMware-based cloud data center, which maintains a large number of virtual machines (VMs). We have built our framework using Apache Spark to get higher throughput and lower data processing time on streaming data. We have developed a window-based statistical anomaly detection technique to detect anomalies that appear sporadically. We then relaxed this constraint with higher accuracy by implementing a cluster-based technique to detect sporadic and continuous anomalies. We conclude that our cluster-based technique outperforms other statistical techniques with higher accuracy and lower processing time.
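The window-based statistical technique can be sketched without Spark: learn the distribution of window means from baseline data, then flag windows whose mean deviates by more than k standard deviations. The window size, k, and the Gaussian baseline in the usage are illustrative assumptions, not the authors' Spark implementation.

```python
import statistics

class WindowDetector:
    """Window-based statistical detector: flag windows whose mean deviates
    more than k sigma from the baseline distribution of window means."""

    def __init__(self, baseline, w=10, k=3.0):
        means = [statistics.fmean(baseline[i:i + w])
                 for i in range(0, len(baseline) - w + 1, w)]
        self.mu = statistics.fmean(means)
        self.sigma = statistics.stdev(means)
        self.k = k

    def is_anomalous(self, window):
        return abs(statistics.fmean(window) - self.mu) > self.k * self.sigma
```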
NASA Astrophysics Data System (ADS)
Akhoondzadeh, M.
2014-02-01
A powerful earthquake of Mw = 7.7 struck the Saravan region (28.107° N, 62.053° E) in Iran on 16 April 2013. The design of an automated anomaly detection method for nonlinear time series of earthquake precursors has remained an attractive and challenging task. Artificial Neural Networks (ANN) and Particle Swarm Optimization (PSO) have revealed strong potential for accurate time series prediction. This paper presents the first study integrating the ANN and PSO methods in earthquake precursor research to detect unusual variations of the thermal and total electron content (TEC) seismo-ionospheric anomalies induced by the strong Saravan earthquake. In this study, to overcome stagnation in local minima during ANN training, PSO is used as the optimization method instead of traditional algorithms for training the ANN. The proposed hybrid method detected a considerable number of anomalies 4 and 8 days preceding the earthquake. Since, in this case study, ionospheric TEC anomalies induced by seismic activity can be confounded with background fluctuations due to solar activity, a multi-resolution time series processing technique based on the wavelet transform was applied to TEC signal variations. Because agreement among the final results of several robust methods is a convincing indication of a method's efficiency, the thermal and TEC anomalies detected using the ANN + PSO method were compared with the anomalies observed by implementing the mean, median, wavelet, Kalman filter, Auto-Regressive Integrated Moving Average (ARIMA), Support Vector Machine (SVM), and Genetic Algorithm (GA) methods. The results indicate that the ANN + PSO method is quite promising and deserves serious attention as a new tool for detecting thermal and TEC seismo-ionospheric anomalies.
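A minimal sketch of the PSO component: a global-best particle swarm minimizing an objective, which in the paper would be the ANN training loss. A toy quadratic stands in for that loss here, and all hyperparameters (inertia, acceleration coefficients, swarm size) are illustrative assumptions.

```python
import random

def pso(f, dim, n_particles=20, iters=100, bounds=(-5.0, 5.0), seed=0):
    """Minimal global-best PSO minimizing f over R^dim."""
    rng = random.Random(seed)
    lo, hi = bounds
    pos = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=pbest_val.__getitem__)
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    w, c1, c2 = 0.7, 1.5, 1.5  # inertia and acceleration coefficients
    for _ in range(iters):
        for i in range(n_particles):
            for j in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][j] = (w * vel[i][j]
                             + c1 * r1 * (pbest[i][j] - pos[i][j])
                             + c2 * r2 * (gbest[j] - pos[i][j]))
                pos[i][j] += vel[i][j]
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val
```

In the hybrid method, each particle's position would encode the ANN's weight vector and `f` its training error.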
A Comparative Study of Unsupervised Anomaly Detection Techniques Using Honeypot Data
NASA Astrophysics Data System (ADS)
Song, Jungsuk; Takakura, Hiroki; Okabe, Yasuo; Inoue, Daisuke; Eto, Masashi; Nakao, Koji
Intrusion Detection Systems (IDS) have received considerable attention among network security researchers as one of the most promising countermeasures to defend crucial computer systems and networks against attackers on the Internet. Over the past few years, many machine learning techniques have been applied to IDSs so as to improve their performance and to construct them with low cost and effort. In particular, unsupervised anomaly detection techniques have a significant advantage in their capability to identify unforeseen attacks, i.e., 0-day attacks, and to build intrusion detection models without any labeled (i.e., pre-classified) training data in an automated manner. In this paper, we conduct a set of experiments to evaluate and analyze the performance of the major unsupervised anomaly detection techniques using real traffic data obtained at our honeypots deployed inside and outside the campus network of Kyoto University, and using various evaluation criteria, i.e., performance evaluation by similarity measurements and the size of training data, overall performance, detection ability for unknown attacks, and time complexity. Our experimental results give some practical and useful guidelines to IDS researchers and operators, so that they can acquire insight into applying these techniques to the area of intrusion detection, and devise more effective intrusion detection models.
Theory and experiments in model-based space system anomaly management
NASA Astrophysics Data System (ADS)
Kitts, Christopher Adam
This research program consists of an experimental study of model-based reasoning methods for detecting, diagnosing and resolving anomalies that occur when operating a comprehensive space system. Using a first principles approach, several extensions were made to the existing field of model-based fault detection and diagnosis in order to develop a general theory of model-based anomaly management. Based on this theory, a suite of algorithms were developed and computationally implemented in order to detect, diagnose and identify resolutions for anomalous conditions occurring within an engineering system. The theory and software suite were experimentally verified and validated in the context of a simple but comprehensive, student-developed, end-to-end space system, which was developed specifically to support such demonstrations. This space system consisted of the Sapphire microsatellite which was launched in 2001, several geographically distributed and Internet-enabled communication ground stations, and a centralized mission control complex located in the Space Technology Center in the NASA Ames Research Park. Results of both ground-based and on-board experiments demonstrate the speed, accuracy, and value of the algorithms compared to human operators, and they highlight future improvements required to mature this technology.
Analysis of a SCADA System Anomaly Detection Model Based on Information Entropy
2014-03-27
This thesis reviews literature concerning the focus areas of this research: SCADA vulnerabilities, information theory, and intrusion detection.
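The entropy-based idea behind such SCADA models can be sketched simply: compute the Shannon entropy of a traffic feature (e.g., destination addresses) over a window and flag windows whose entropy shifts away from the baseline. The feature choice and the `delta` threshold are assumptions, not the thesis's model.

```python
import math
from collections import Counter

def shannon_entropy(values):
    """Shannon entropy (bits) of the empirical distribution of `values`."""
    counts = Counter(values)
    n = len(values)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def entropy_anomalous(baseline_values, window_values, delta=0.5):
    """Flag a traffic window whose feature entropy shifts more than `delta` bits."""
    return abs(shannon_entropy(window_values)
               - shannon_entropy(baseline_values)) > delta
```

A scan or flooding event typically collapses (or inflates) the entropy of such features relative to normal operation.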
Road Traffic Anomaly Detection via Collaborative Path Inference from GPS Snippets
Wang, Hongtao; Wen, Hui; Yi, Feng; Zhu, Hongsong; Sun, Limin
2017-01-01
Road traffic anomaly denotes a road segment that is anomalous in terms of the traffic flow of vehicles. Detecting road traffic anomalies from GPS (Global Positioning System) snippet data is becoming critical in urban computing since such anomalies often suggest underlying events. However, the noisy and sparse nature of GPS snippet data raises multiple problems that make the detection of road traffic anomalies very challenging. To address these issues, we propose a two-stage solution which consists of two components: a Collaborative Path Inference (CPI) model and a Road Anomaly Test (RAT) model. The CPI model performs path inference incorporating both static and dynamic features into a Conditional Random Field (CRF). Dynamic context features are learned collaboratively from large GPS snippets via a tensor decomposition technique. Then RAT calculates the anomalous degree for each road segment from the inferred fine-grained trajectories in given time intervals. We evaluated our method using a large-scale real-world dataset, which includes one month of GPS location data from more than eight thousand taxicabs in Beijing. The evaluation results show the advantages of our method beyond other baseline techniques. PMID:28282948
FRaC: a feature-modeling approach for semi-supervised and unsupervised anomaly detection.
Noto, Keith; Brodley, Carla; Slonim, Donna
2012-01-01
Anomaly detection involves identifying rare data instances (anomalies) that come from a different class or distribution than the majority (which are simply called "normal" instances). Given a training set of only normal data, the semi-supervised anomaly detection task is to identify anomalies in the future. Good solutions to this task have applications in fraud and intrusion detection. The unsupervised anomaly detection task is different: Given unlabeled, mostly-normal data, identify the anomalies among them. Many real-world machine learning tasks, including many fraud and intrusion detection tasks, are unsupervised because it is impractical (or impossible) to verify all of the training data. We recently presented FRaC, a new approach for semi-supervised anomaly detection. FRaC is based on using normal instances to build an ensemble of feature models, and then identifying instances that disagree with those models as anomalous. In this paper, we investigate the behavior of FRaC experimentally and explain why FRaC is so successful. We also show that FRaC is a superior approach for the unsupervised as well as the semi-supervised anomaly detection task, compared to well-known state-of-the-art anomaly detection methods, LOF and one-class support vector machines, and to an existing feature-modeling approach.
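A much-simplified sketch of the FRaC idea: build one model per feature that predicts it from the remaining features, then score an instance by how strongly it disagrees with those models. Plain least-squares predictors and summed squared standardized residuals stand in here for FRaC's learner ensembles and information-theoretic surprisal.

```python
import numpy as np

def fit_feature_models(X):
    """For each feature j, fit a linear model predicting X[:, j] from the rest."""
    models = []
    for j in range(X.shape[1]):
        A = np.delete(X, j, axis=1)
        A1 = np.column_stack([A, np.ones(len(A))])   # add intercept column
        w, *_ = np.linalg.lstsq(A1, X[:, j], rcond=None)
        resid = X[:, j] - A1 @ w
        models.append((w, max(resid.std(), 1e-6)))   # keep residual scale
    return models

def frac_score(x, models):
    """Anomaly score: sum of squared standardized residuals over feature models."""
    x = np.asarray(x, dtype=float)
    score = 0.0
    for j, (w, s) in enumerate(models):
        a1 = np.append(np.delete(x, j), 1.0)
        score += ((x[j] - a1 @ w) / s) ** 2
    return float(score)
```

An instance that violates the learned inter-feature relationship scores far higher than one that respects it, even when each feature value is individually in range.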
Spatiotemporal Detection of Unusual Human Population Behavior Using Mobile Phone Data
Dobra, Adrian; Williams, Nathalie E.; Eagle, Nathan
2015-01-01
With the aim to contribute to humanitarian response to disasters and violent events, scientists have proposed the development of analytical tools that could identify emergency events in real-time, using mobile phone data. The assumption is that dramatic and discrete changes in behavior, measured with mobile phone data, will indicate extreme events. In this study, we propose an efficient system for spatiotemporal detection of behavioral anomalies from mobile phone data and compare sites with behavioral anomalies to an extensive database of emergency and non-emergency events in Rwanda. Our methodology successfully captures anomalous behavioral patterns associated with a broad range of events, from religious and official holidays to earthquakes, floods, violence against civilians and protests. Our results suggest that human behavioral responses to extreme events are complex and multi-dimensional, including extreme increases and decreases in both calling and movement behaviors. We also find significant temporal and spatial variance in responses to extreme events. Our behavioral anomaly detection system and extensive discussion of results are a significant contribution to the long-term project of creating an effective real-time event detection system with mobile phone data and we discuss the implications of our findings for future research to this end. PMID:25806954
Steganography anomaly detection using simple one-class classification
NASA Astrophysics Data System (ADS)
Rodriguez, Benjamin M.; Peterson, Gilbert L.; Agaian, Sos S.
2007-04-01
There are several security issues tied to multimedia when implementing the various applications in the cellular phone and wireless industry. One primary concern is the potential ease of implementing a steganography system. Traditionally, the only mechanism to embed information into a media file has been with a desktop computer. However, as the cellular phone and wireless industry matures, it becomes much simpler for the same techniques to be performed using a cell phone. In this paper, two methods are compared that classify cell phone images as either an anomaly or clean, where a clean image is one in which no alterations have been made and an anomalous image is one in which information has been hidden within the image. An image in which information has been hidden is known as a stego image. The main concern in detecting steganographic content with machine learning using cell phone images is in training specific embedding procedures to determine if the method has been used to generate a stego image. This leads to a possible flaw in the system when the learned model of stego is faced with a new stego method which doesn't match the existing model. The proposed solution to this problem is to develop systems that detect steganography as anomalies, making the embedding method irrelevant in detection. Two applicable classification methods for solving the anomaly detection of steganographic content problem are single class support vector machines (SVM) and Parzen-window. Empirical comparison of the two approaches shows that Parzen-window outperforms the single class SVM most likely due to the fact that Parzen-window generalizes less.
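A one-dimensional sketch of the Parzen-window classifier: estimate the density of clean-image features with a Gaussian kernel and declare low-density points anomalous (possible stego). The bandwidth `h` and density `threshold` are illustrative assumptions, not values from the paper.

```python
import math

def parzen_density(x, train, h=0.5):
    """Gaussian Parzen-window density estimate at x from clean-image features."""
    n = len(train)
    norm = n * h * math.sqrt(2.0 * math.pi)
    return sum(math.exp(-0.5 * ((x - t) / h) ** 2) for t in train) / norm

def is_stego_anomaly(x, train, h=0.5, threshold=1e-3):
    """One-class decision: anomalous if estimated density falls below threshold."""
    return parzen_density(x, train, h) < threshold
```

Because the boundary is defined only by the clean class, the embedding method that produced an anomalous feature is irrelevant to the decision, which is the property the paper exploits.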
Structural Anomaly Detection Using Fiber Optic Sensors and Inverse Finite Element Method
NASA Technical Reports Server (NTRS)
Quach, Cuong C.; Vazquez, Sixto L.; Tessler, Alex; Moore, Jason P.; Cooper, Eric G.; Spangler, Jan. L.
2005-01-01
NASA Langley Research Center is investigating a variety of techniques for mitigating aircraft accidents due to structural component failure. One technique under consideration combines distributed fiber optic strain sensing with an inverse finite element method for detecting and characterizing structural anomalies that may provide early indication of airframe structure degradation. The technique identifies structural anomalies that result in observable changes in localized strain but do not impact the overall surface shape. Surface shape information is provided by an inverse finite element method that computes full-field displacements and internal loads using strain data from in-situ fiber optic sensors. This paper describes a prototype of such a system and reports results from a series of laboratory tests conducted on a test coupon subjected to increasing levels of damage.
Min-max hyperellipsoidal clustering for anomaly detection in network security.
Sarasamma, Suseela T; Zhu, Qiuming A
2006-08-01
A novel hyperellipsoidal clustering technique is presented for an intrusion-detection system in network security. Hyperellipsoidal clusters with maximum intracluster similarity and minimum intercluster similarity are generated from training data sets. The novelty of the technique lies in the fact that the parameters needed to construct higher order data models in general multivariate Gaussian functions are incrementally derived from the data sets using accretive processes. The technique is implemented in a feedforward neural network that uses a Gaussian radial basis function as the model generator. An evaluation based on the inclusiveness and exclusiveness of samples with respect to specific criteria is applied to accretively learn the output clusters of the neural network. One significant advantage of this is its ability to detect individual anomaly types that are hard to detect with other anomaly-detection schemes. Applying this technique, several feature subsets of the tcptrace network-connection records were identified that give above 95% detection at false-positive rates below 5%.
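The hyperellipsoidal idea can be illustrated with the Mahalanobis distance, whose constant-distance contours are exactly hyperellipsoids fitted to the training data; the paper's accretive RBF-network construction is not reproduced here, and the toy data in the usage is an assumption.

```python
import numpy as np

def fit_ellipsoid(X):
    """Estimate the center and inverse covariance of a hyperellipsoidal cluster."""
    mu = X.mean(axis=0)
    cov_inv = np.linalg.inv(np.cov(X, rowvar=False))
    return mu, cov_inv

def mahalanobis(x, mu, cov_inv):
    """Distance whose level sets are hyperellipsoids shaped by the data covariance."""
    d = np.asarray(x, dtype=float) - mu
    return float(np.sqrt(d @ cov_inv @ d))
```

A point that is modest in Euclidean terms but lies along a low-variance direction of the cluster scores as far more anomalous than a larger excursion along a high-variance direction.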
A Comparative Evaluation of Unsupervised Anomaly Detection Algorithms for Multivariate Data.
Goldstein, Markus; Uchida, Seiichi
2016-01-01
Anomaly detection is the process of identifying unexpected items or events in datasets, which differ from the norm. In contrast to standard classification tasks, anomaly detection is often applied on unlabeled data, taking only the internal structure of the dataset into account. This challenge is known as unsupervised anomaly detection and is addressed in many practical applications, for example in network intrusion detection, fraud detection as well as in the life science and medical domain. Dozens of algorithms have been proposed in this area, but unfortunately the research community still lacks a comparative universal evaluation as well as common publicly available datasets. These shortcomings are addressed in this study, where 19 different unsupervised anomaly detection algorithms are evaluated on 10 different datasets from multiple application domains. By publishing the source code and the datasets, this paper aims to be a new well-founded basis for unsupervised anomaly detection research. Additionally, this evaluation reveals the strengths and weaknesses of the different approaches for the first time. Besides the anomaly detection performance, computational effort, the impact of parameter settings as well as the global/local anomaly detection behavior are outlined. As a conclusion, we give advice on algorithm selection for typical real-world tasks.
NASA Technical Reports Server (NTRS)
Gage, Mark; Dehoff, Ronald
1991-01-01
This system architecture task (1) analyzed the current process used to make an assessment of engine and component health after each test or flight firing of an SSME, (2) developed an approach and a specific set of objectives and requirements for automated diagnostics during post fire health assessment, and (3) listed and described the software applications required to implement this system. The diagnostic system described is a distributed system with a database management system to store diagnostic information and test data, a CAE package for visual data analysis and preparation of plots of hot-fire data, a set of procedural applications for routine anomaly detection, and an expert system for the advanced anomaly detection and evaluation.
Wamsley, Paula R.; Weimer, Carl S.; Nelson, Loren D.; O'Brien, Martin J.
2003-01-01
An oil and gas exploration system and method for land and airborne operations, the system and method used for locating subsurface hydrocarbon deposits based upon a remote detection of trace amounts of gases in the atmosphere. The detection of one or more target gases in the atmosphere is used to indicate a possible subsurface oil and gas deposit. By mapping a plurality of gas targets over a selected survey area, the survey area can be analyzed for measurable concentration anomalies. The anomalies are interpreted along with other exploration data to evaluate the value of an underground deposit. The system includes a differential absorption lidar (DIAL) system with a spectroscopic grade laser light and a light detector. The laser light is continuously tunable in a mid-infrared range, 2 to 5 micrometers, for choosing appropriate wavelengths to measure different gases and avoid absorption bands of interference gases. The laser light has sufficient optical energy to measure atmospheric concentrations of a gas over a path as long as a mile and greater. The detection of the gas is based on optical absorption measurements at specific wavelengths in the open atmosphere. Light that is detected using the light detector contains an absorption signature acquired as the light travels through the atmosphere from the laser source and back to the light detector. The absorption signature of each gas is processed and then analyzed to determine if a potential anomaly exists.
A primitive study on unsupervised anomaly detection with an autoencoder in emergency head CT volumes
NASA Astrophysics Data System (ADS)
Sato, Daisuke; Hanaoka, Shouhei; Nomura, Yukihiro; Takenaga, Tomomi; Miki, Soichiro; Yoshikawa, Takeharu; Hayashi, Naoto; Abe, Osamu
2018-02-01
Purpose: The target disorders of emergency head CT are wide-ranging. Therefore, people working in an emergency department desire a computer-aided detection system for general disorders. In this study, we proposed an unsupervised anomaly detection method for emergency head CT using an autoencoder and evaluated its anomaly detection performance. Methods: We used a 3D convolutional autoencoder (3D-CAE), which contains 11 layers in the convolution block and 6 layers in the deconvolution block. In the training phase, we trained the 3D-CAE using 10,000 3D patches extracted from 50 normal cases. In the test phase, we calculated abnormalities of each voxel in 38 emergency head CT volumes (22 abnormal cases and 16 normal cases) for evaluation and evaluated the likelihood of lesion existence. Results: Our method achieved a sensitivity of 68% and a specificity of 88%, with an area under the receiver operating characteristic curve of 0.87. This shows that the method has moderate accuracy in distinguishing normal CT cases from abnormal ones. Conclusion: Our method shows potential for anomaly detection in emergency head CT.
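As a dependency-free stand-in for the paper's 3D convolutional autoencoder, a linear (PCA) autoencoder illustrates the same reconstruction-error principle: train only on normal data, then score a case by how poorly the autoencoder reconstructs it. The toy 2-D data and the single component in the usage are assumptions.

```python
import numpy as np

def fit_linear_autoencoder(X, n_components=1):
    """'Encoder' = projection onto top principal components; 'decoder' = back-projection."""
    mu = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
    return mu, Vt[:n_components]

def reconstruction_error(x, mu, components):
    """Anomaly score: norm of the residual after encode/decode."""
    d = np.asarray(x, dtype=float) - mu
    recon = (d @ components.T) @ components
    return float(np.linalg.norm(d - recon))
```

Normal cases lie near the learned subspace and reconstruct almost perfectly; cases off that subspace (abnormal) incur a large residual, analogous to the voxel-wise abnormality the 3D-CAE produces.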
NASA Technical Reports Server (NTRS)
Das, Santanu; Srivastava, Ashok N.; Matthews, Bryan L.; Oza, Nikunj C.
2010-01-01
The world-wide aviation system is one of the most complex dynamical systems ever developed and is generating data at an extremely rapid rate. Most modern commercial aircraft record several hundred flight parameters including information from the guidance, navigation, and control systems, the avionics and propulsion systems, and the pilot inputs into the aircraft. These parameters may be continuous measurements or binary or categorical measurements recorded in one-second intervals for the duration of the flight. Currently, most approaches to aviation safety are reactive, meaning that they are designed to react to an aviation safety incident or accident. In this paper, we discuss a novel approach based on the theory of multiple kernel learning to detect potential safety anomalies in very large databases of discrete and continuous data from world-wide operations of commercial fleets. We pose a general anomaly detection problem which includes both discrete and continuous data streams, where we assume that the discrete streams have a causal influence on the continuous streams. We also assume that atypical sequences of events in the discrete streams can lead to off-nominal system performance. We discuss the application domain, novel algorithms, and also discuss results on real-world data sets. Our algorithm uncovers operationally significant events in high-dimensional data streams in the aviation industry which are not detectable using state-of-the-art methods.
Development of anomaly detection models for deep subsurface monitoring
NASA Astrophysics Data System (ADS)
Sun, A. Y.
2017-12-01
Deep subsurface repositories are used for waste disposal and carbon sequestration. Monitoring deep subsurface repositories for potential anomalies is challenging, not only because the number of sensor networks and the quality of data are often limited, but also because of the lack of labeled data needed to train and validate machine learning (ML) algorithms. Although physical simulation models may be applied to predict anomalies (or, for that matter, the system's nominal state), the accuracy of such predictions may be limited by inherent conceptual and parameter uncertainties. The main objective of this study was to demonstrate the potential of data-driven models for leakage detection in carbon sequestration repositories. Monitoring data collected during an artificial CO2 release test at a carbon sequestration repository were used, which include both scalar time series (pressure) and vector time series (distributed temperature sensing). For each type of data, separate online anomaly detection algorithms were developed using the baseline experiment data (no leak) and then tested on the leak experiment data. Performance of a number of different online algorithms was compared. Results show the importance of including contextual information in the dataset to mitigate the impact of reservoir noise and reduce the false positive rate. The developed algorithms were integrated into a generic Web-based platform for real-time anomaly detection.
Machine intelligence-based decision-making (MIND) for automatic anomaly detection
NASA Astrophysics Data System (ADS)
Prasad, Nadipuram R.; King, Jason C.; Lu, Thomas
2007-04-01
Any event deemed as being out-of-the-ordinary may be called an anomaly. Anomalies by virtue of their definition are events that occur spontaneously with no prior indication of their existence or appearance. Effects of anomalies are typically unknown until they actually occur, and their effects aggregate in time to show noticeable change from the original behavior. An evolved behavior would in general be very difficult to correct unless the anomalous event that caused such behavior can be detected early, and any consequence attributed to the specific anomaly. Substantial time and effort is required to back-track the cause for abnormal behavior and to recreate the event sequence leading to abnormal behavior. There is a critical need therefore to automatically detect anomalous behavior as and when they may occur, and to do so with the operator in the loop. Human-machine interaction results in better machine learning and a better decision-support mechanism. This is the fundamental concept of intelligent control where machine learning is enhanced by interaction with human operators, and vice versa. The paper discusses a revolutionary framework for the characterization, detection, identification, learning, and modeling of anomalous behavior in observed phenomena arising from a large class of unknown and uncertain dynamical systems.
Anomaly Detection in Test Equipment via Sliding Mode Observers
NASA Technical Reports Server (NTRS)
Solano, Wanda M.; Drakunov, Sergey V.
2012-01-01
Nonlinear observers were originally developed based on the ideas of variable structure control, and for the purpose of detecting disturbances in complex systems. In this anomaly detection application, these observers were designed for estimating the distributed state of fluid flow in a pipe described by a class of advection equations. The observer algorithm uses collected data in a piping system to estimate the distributed system state (pressure and velocity along a pipe containing liquid gas propellant flow) using only boundary measurements. These estimates are then used to further estimate and localize possible anomalies such as leaks or foreign objects, and instrumentation metering problems such as incorrect flow meter orifice plate size. The observer algorithm has the following parts: a mathematical model of the fluid flow, observer control algorithm, and an anomaly identification algorithm. The main functional operation of the algorithm is in creating the sliding mode in the observer system implemented as software. Once the sliding mode starts in the system, the equivalent value of the discontinuous function in sliding mode can be obtained by filtering out the high-frequency chattering component. In control theory, "observers" are dynamic algorithms for the online estimation of the current state of a dynamic system by measurements of an output of the system. Classical linear observers can provide optimal estimates of a system state in case of uncertainty modeled by white noise. For nonlinear cases, the theory of nonlinear observers has been developed and its success is mainly due to the sliding mode approach. Using the mathematical theory of variable structure systems with sliding modes, the observer algorithm is designed in such a way that it steers the output of the model to the output of the system obtained via a variety of sensors, in spite of possible mismatches between the assumed model and actual system. 
The unique properties of sliding mode control allow not only control of the model internal states to the states of the real-life system, but also identification of the disturbance or anomaly that may occur.
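The observer idea in this entry can be illustrated with a toy scalar example rather than the paper's distributed pipe-flow model. The sketch below is a minimal sliding mode observer under assumed first-order dynamics and gains: the discontinuous injection drives the estimate onto the measurement, and low-pass filtering that injection (the "equivalent value" mentioned above) recovers the unknown disturbance, i.e. the anomaly. All constants are illustrative:

```python
import math

# Plant: x' = -a*x + d, measured y = x, with unknown disturbance d.
# Observer: x_hat' = -a*x_hat + L*sign(y - x_hat), with L > |d|.
a, L, dt = 1.0, 5.0, 1e-3
d_true = 2.0                       # the unknown "anomaly" to be identified
x, x_hat, d_est = 0.0, 0.5, 0.0
alpha = 0.001                      # low-pass filter coefficient per step

for _ in range(20000):
    x += dt * (-a * x + d_true)                # real system (Euler step)
    inj = L * math.copysign(1.0, x - x_hat)    # discontinuous injection
    x_hat += dt * (-a * x_hat + inj)           # observer
    d_est += alpha * (inj - d_est)             # filter out chattering

print(round(d_est, 1))  # close to d_true once sliding mode is established
```

Once the estimation error reaches the sliding surface, the filtered injection equals the disturbance on average, which is exactly the identification mechanism the entry describes.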
A Comparative Evaluation of Unsupervised Anomaly Detection Algorithms for Multivariate Data
Goldstein, Markus; Uchida, Seiichi
2016-01-01
Anomaly detection is the process of identifying unexpected items or events in datasets which differ from the norm. In contrast to standard classification tasks, anomaly detection is often applied to unlabeled data, taking only the internal structure of the dataset into account. This challenge is known as unsupervised anomaly detection and is addressed in many practical applications, for example in network intrusion detection, fraud detection, and the life science and medical domains. Dozens of algorithms have been proposed in this area, but unfortunately the research community still lacks a comparative universal evaluation as well as common publicly available datasets. These shortcomings are addressed in this study, where 19 different unsupervised anomaly detection algorithms are evaluated on 10 different datasets from multiple application domains. By publishing the source code and the datasets, this paper aims to be a new well-founded basis for unsupervised anomaly detection research. Additionally, this evaluation reveals the strengths and weaknesses of the different approaches for the first time. Besides the anomaly detection performance, the computational effort, the impact of parameter settings, and the global/local anomaly detection behavior are outlined. In conclusion, we offer advice on algorithm selection for typical real-world tasks. PMID:27093601
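One of the simplest algorithms included in comparisons of this kind is the classic global k-NN anomaly score: a point's score is its mean distance to its k nearest neighbors, computed on unlabeled data with no training step. A minimal sketch (not the study's benchmark code):

```python
def knn_scores(points, k=2):
    """Global k-NN anomaly score: mean distance to the k nearest neighbors."""
    scores = []
    for i, p in enumerate(points):
        dists = sorted(
            sum((a - b) ** 2 for a, b in zip(p, q)) ** 0.5
            for j, q in enumerate(points) if j != i
        )
        scores.append(sum(dists[:k]) / k)
    return scores

data = [(0, 0), (0, 1), (1, 0), (1, 1), (10, 10)]  # last point is the outlier
scores = knn_scores(data, k=2)
print(scores.index(max(scores)))  # → 4
```

Global methods like this score each point against the whole dataset; the local methods contrasted in such evaluations (e.g. LOF-style scores) instead normalize by the density of each point's neighborhood.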
NASA Astrophysics Data System (ADS)
Grunskaya, L. V.; Isakevich, V. V.; Isakevich, D. V.
2018-05-01
A system is constructed which, on the basis of extensive experimental material and the use of eigenoscopy, has allowed us to detect anomalies in the spectra of uncorrelated components localized near the rotation frequencies, and at twice the rotation frequencies, of relativistic binary star systems, with a vanishingly low probability of false alarm not exceeding 10^-17.
Road Traffic Anomaly Detection via Collaborative Path Inference from GPS Snippets.
Wang, Hongtao; Wen, Hui; Yi, Feng; Zhu, Hongsong; Sun, Limin
2017-03-09
Road traffic anomaly denotes a road segment that is anomalous in terms of the traffic flow of vehicles. Detecting road traffic anomalies from GPS (Global Positioning System) snippet data is becoming critical in urban computing, since such anomalies often suggest underlying events. However, the noisy and sparse nature of GPS snippet data introduces multiple problems that make the detection of road traffic anomalies very challenging. To address these issues, we propose a two-stage solution which consists of two components: a Collaborative Path Inference (CPI) model and a Road Anomaly Test (RAT) model. The CPI model performs path inference, incorporating both static and dynamic features into a Conditional Random Field (CRF). Dynamic context features are learned collaboratively from large GPS snippets via a tensor decomposition technique. The RAT model then calculates the anomalous degree for each road segment from the inferred fine-grained trajectories in given time intervals. We evaluated our method using a large-scale real-world dataset, which includes one month of GPS location data from more than eight thousand taxi cabs in Beijing. The evaluation results show the advantages of our method beyond other baseline techniques.
Statistical Traffic Anomaly Detection in Time-Varying Communication Networks
2015-02-01
Our methods perform better than their vanilla counterparts, which assume that normal traffic is stationary. The approach is useful not only for anomaly detection but also for understanding the normal traffic in time-varying networks.
BEARS: a multi-mission anomaly response system
NASA Astrophysics Data System (ADS)
Roberts, Bryce A.
2009-05-01
The Mission Operations Group at UC Berkeley's Space Sciences Laboratory operates a highly automated ground station and presently a fleet of seven satellites, each with its own associated command and control console. However, the requirement for prompt anomaly detection and resolution is shared commonly between the ground segment and all spacecraft. The efficient, low-cost operation and "lights-out" staffing of the Mission Operations Group require that controllers and engineers be notified of spacecraft and ground system problems around the clock. The Berkeley Emergency Anomaly and Response System (BEARS) is an in-house developed web- and paging-based software system that meets this need. BEARS was developed as a replacement for an existing emergency reporting software system that was too closed-source, platform-specific, expensive, and antiquated to expand or maintain. To avoid these limitations, the new system design leverages cross-platform, open-source software products such as MySQL, PHP, and Qt. Anomaly notifications and responses make use of the two-way paging capabilities of modern smart phones.
Effectiveness of an Expert System for Astronaut Assistance on a Sleep Experiment
NASA Technical Reports Server (NTRS)
Heher, Dennis; Callini, Gianluca; Essig, Susanne M.; Young, Laurence R.; Koga, Dennis (Technical Monitor)
1998-01-01
Principal Investigator-in-a-Box ([PI]) is an expert system designed to train and assist astronauts with the performance of an experiment outside their field of expertise, particularly when contact with the Principal Investigators on the ground is limited or impossible. In the current case, [PI] was designed to assist with the calibration and troubleshooting procedures of the Neurolab Sleep and Respiration Experiment during the pre-sleep period of no ground contact. It displays physiological signals in real time during the pre-sleep instrumentation period, and alerts the astronauts when a poor signal quality is detected. Results of the first study indicated a beneficial effect of [PI] and training in reducing anomaly detection time and the number of undetected anomalies. For the in-flight performance, excluding the saturated signals, the expert system had an 84.2% detection accuracy, and the questionnaires filled out by the astronauts showed positive crew reactions to the expert system.
Transient ice mass variations over Greenland detected by the combination of GPS and GRACE data
NASA Astrophysics Data System (ADS)
Zhang, B.; Liu, L.; Khan, S. A.; van Dam, T. M.; Zhang, E.
2017-12-01
Over the past decade, the Greenland Ice Sheet (GrIS) has been undergoing significant warming and ice mass loss. Such mass loss was not always a steady process but had substantial temporal and spatial variability. Here we apply multi-channel singular spectrum analysis to crustal deformation time series measured at about 50 Global Positioning System (GPS) stations mounted on bedrock around the Greenland coast, and to mass changes inferred from the Gravity Recovery and Climate Experiment (GRACE), to detect transient changes in ice mass balance over the GrIS. We detect two transient anomalies: one is a negative melting anomaly (Anomaly 1) that peaked around 2010; the other is a positive melting anomaly (Anomaly 2) that peaked between 2012 and 2013. The GRACE data show that both anomalies caused significant mass changes south of 74°N but negligible changes north of 74°N. Both anomalies caused the maximum mass change in southeast GrIS, followed by west GrIS near Jakobshavn. Our results also show that the mass change caused by Anomaly 1 first reached its maximum in late 2009 in southeast GrIS and then migrated to west GrIS. In Anomaly 2, however, southeast GrIS was the last place to reach the maximum mass change, in early 2013, and west GrIS near Jakobshavn was the second-to-last. Most of the GPS data show spatiotemporal patterns similar to those obtained from the GRACE data. However, some GPS time series show discrepancies in either space or time, because of data gaps and different sensitivities to mass loading change. Namely, loading deformation measured by GPS can be significantly affected by local dynamic mass changes, which have little impact on GRACE observations.
Ristivojević, Andjelka; Djokić, Petra Lukić; Katanić, Dragan; Dobanovacki, Dušanka; Privrodski, Jadranka Jovanović
2016-05-01
According to the World Health Organization (WHO) definition, congenital anomalies are all disorders of the organs or tissues, regardless of whether they are visible at birth or manifest later in life, and are registered in the International Classification of Diseases. The aim of this study was to compare the incidence and structure of prenatally detected and clinically manifested congenital anomalies in newborns in the region of Novi Sad (Province of Vojvodina, Serbia) in two distant years (1996 and 2006). This retrospective cohort study included all the children born at the Clinic for Gynecology and Obstetrics (Clinical Center of Vojvodina) in Novi Sad during 1996 and 2006. The incidence and the structure of congenital anomalies were analyzed. During 1996 there were 6,099 births and major congenital anomalies were found in 215 infants, representing 3.5%. In 2006 there were 6,628 births and major congenital anomalies were noted in 201 newborns, which is 3%. During 1996 there were more children with anomalies of the musculoskeletal system and urogenital tract, and with anomalies of the central nervous system and chromosomal abnormalities. During 2006 there were more children with cardiovascular anomalies, followed by urogenital anomalies, with a significant decline in musculoskeletal anomalies. The distribution of the newborns with major congenital anomalies regarding perinatal outcome showed a difference between the studied years. In 2006 an increasing number of children required further investigation and treatment. There is no national registry of congenital anomalies in Serbia, so the aim of this study was also to shed light on this topic. In the span of ten years, covering the period of the NATO campaign in Novi Sad and Serbia, the frequency of major congenital anomalies in newborns did not increase. The most frequent anomalies observed during both years involved the musculoskeletal, cardiovascular, urogenital and central nervous systems.
In 2006 there was a significant increase in cardiovascular anomalies and a significant decrease in musculoskeletal anomalies, chromosomal abnormalities and central nervous system anomalies, while the number of urogenital anomalies declined compared to 1996.
The collection of Intelligence, Surveillance, and Reconnaissance (ISR) Full Motion Video (FMV) is growing at an exponential rate, and the manual... intelligence for the warfighter. This paper will address the question of how automatic pattern extraction, based on computer vision, can extract anomalies in
A function approximation approach to anomaly detection in propulsion system test data
NASA Technical Reports Server (NTRS)
Whitehead, Bruce A.; Hoyt, W. A.
1993-01-01
Ground test data from propulsion systems such as the Space Shuttle Main Engine (SSME) can be automatically screened for anomalies by a neural network. The neural network screens data after being trained with nominal data only. Given the values of 14 measurements reflecting external influences on the SSME at a given time, the neural network predicts the expected nominal value of a desired engine parameter at that time. We compared the ability of three different function-approximation techniques to perform this nominal value prediction: a novel neural network architecture based on Gaussian bar basis functions, a conventional back-propagation neural network, and linear regression. These three techniques were tested with real data from six SSME ground tests containing two anomalies. The basis function network trained more rapidly than back propagation. It yielded nominal predictions with a tight enough confidence interval to distinguish anomalous deviations from the nominal fluctuations in an engine parameter. Since the function-approximation approach requires nominal training data only, it is capable of detecting unknown classes of anomalies for which training data are not available.
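The screening approach above can be sketched in three steps: fit a nominal model on anomaly-free data, estimate the residual spread, and flag readings outside a confidence band around the prediction. Plain one-variable linear regression stands in here for the paper's Gaussian-bar basis network, and all data are invented:

```python
def fit_line(xs, ys):
    """Least-squares fit y = slope*x + intercept on nominal data."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
            sum((x - mx) ** 2 for x in xs)
    return slope, my - slope * mx

# Nominal training data: engine parameter vs. one external influence.
xs = [1.0, 2.0, 3.0, 4.0, 5.0]
ys = [3.0, 5.1, 6.9, 9.0, 11.0]
slope, intercept = fit_line(xs, ys)

# Spread of nominal fluctuations around the fitted model.
residuals = [y - (slope * x + intercept) for x, y in zip(xs, ys)]
sigma = (sum(r * r for r in residuals) / len(residuals)) ** 0.5

def is_anomalous(x, y, k=3.0):
    """Flag a reading outside a k-sigma band around the nominal prediction."""
    return abs(y - (slope * x + intercept)) > k * max(sigma, 1e-9)

print(is_anomalous(6.0, 13.0))  # → False (near the nominal line)
print(is_anomalous(6.0, 20.0))  # → True  (outside the confidence band)
```

The key property the entry highlights carries over: because only nominal data are used for fitting, any sufficiently large deviation is flagged, including anomaly classes never seen before.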
Setup Instructions for the Applied Anomaly Detection Tool (AADT) Web Server
2016-09-01
ARL-TR-7798 ● SEP 2016 ● US Army Research Laboratory. Setup Instructions for the Applied Anomaly Detection Tool (AADT) Web Server, by Christian D Schlesiger, Computational and Information Sciences Directorate, ARL.
Seismic data fusion anomaly detection
NASA Astrophysics Data System (ADS)
Harrity, Kyle; Blasch, Erik; Alford, Mark; Ezekiel, Soundararajan; Ferris, David
2014-06-01
Detecting anomalies in non-stationary signals has valuable applications in many fields, including medicine and meteorology, such as identifying possible heart conditions from electrocardiography (ECG) signals or predicting earthquakes via seismographic data. Given the many choices of anomaly detection algorithms, it is important to compare possible methods. In this paper, we examine and compare two approaches to anomaly detection and see how data fusion methods may improve performance. The first approach involves using an artificial neural network (ANN) to detect anomalies in a wavelet de-noised signal. The other method uses a perspective neural network (PNN) to analyze an arbitrary number of "perspectives" or transformations of the observed signal for anomalies. Possible perspectives may include wavelet de-noising, Fourier transform, peak-filtering, etc. In order to evaluate these techniques via signal fusion metrics, we apply signal preprocessing techniques such as de-noising to the original signal and then use a neural network to find anomalies in the generated signal. From this secondary result it is possible to use data fusion techniques that can be evaluated via existing data fusion metrics for single and multiple perspectives. The result will show which anomaly detection method, according to the metrics, is better suited overall for anomaly detection applications. The method used in this study could be applied to compare other signal processing algorithms.
Personal Electronic Devices and Their Interference with Aircraft Systems
NASA Technical Reports Server (NTRS)
Ross, Elden; Ely, Jay J. (Technical Monitor)
2001-01-01
This report compiles data on personal electronic devices (PEDs) attributed to having created anomalies with aircraft systems. Charts and tables display 14 years of incidents reported by pilots to the Aviation Safety Reporting System (ASRS). Affected systems, incident severity, sources of anomaly detection, and the most frequently identified PEDs are some of the more significant data. Several reports describe incidents in which aircraft were off course while all systems indicated on course, and critical events that occurred during landings and takeoffs. Additionally, PEDs that should receive priority in testing are identified.
Adiabatic Quantum Anomaly Detection and Machine Learning
NASA Astrophysics Data System (ADS)
Pudenz, Kristen; Lidar, Daniel
2012-02-01
We present methods of anomaly detection and machine learning using adiabatic quantum computing. The machine learning algorithm is a boosting approach which seeks to optimally combine somewhat accurate classification functions to create a unified classifier which is much more accurate than its components. This algorithm then becomes the first part of the larger anomaly detection algorithm. In the anomaly detection routine, we first use adiabatic quantum computing to train two classifiers which detect two sets, the overlap of which forms the anomaly class. We call this the learning phase. Then, in the testing phase, the two learned classification functions are combined to form the final Hamiltonian for an adiabatic quantum computation, the low energy states of which represent the anomalies in a binary vector space.
A mathematical model of extremely low frequency ocean induced electromagnetic noise
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dautta, Manik, E-mail: manik.dautta@anyeshan.com; Faruque, Rumana Binte, E-mail: rumana.faruque@anyeshan.com; Islam, Rakibul, E-mail: rakibul.islam@anyeshan.com
2016-07-12
Magnetic Anomaly Detection (MAD) systems use the principle that ferromagnetic objects disturb the magnetic lines of force of the earth. These lines of force are able to pass through both water and air in similar manners. A MAD system, usually mounted on an aerial vehicle, is thus often employed to confirm the detection and accomplish localization of large ferromagnetic objects submerged in a sea-water environment. However, the total magnetic signal encountered by a MAD system includes contributions from a myriad of low to Extremely Low Frequency (ELF) sources. The goal of the MAD system is to detect small anomaly signals in the midst of these low-frequency interfering signals. Both the Range of Detection (R_d) and the Probability of Detection (P_d) are limited by the ratio of anomaly signal strength to the interfering magnetic noise. In this paper, we report a generic mathematical model to estimate the signal-to-noise ratio, or SNR. Since time-variant electromagnetic signals are affected by conduction losses due to sea-water conductivity and the presence of the air-water interface, we employ the general formulation of dipole-induced electromagnetic field propagation in stratified media [1]. As a first step we employ a volumetric distribution of isolated elementary magnetic dipoles, each having its own dipole strength and orientation, to estimate the magnetic noise observed by a MAD system. Numerical results are presented for a few realizations out of an ensemble of possible realizations of elementary dipole source distributions.
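The first modeling step described above, superposing the fields of isolated elementary magnetic dipoles at a sensor location, can be sketched directly from the free-space point-dipole formula B = (mu_0/4pi)(3(m·r̂)r̂ − m)/r³. This ignores the stratified-media propagation the paper actually uses, and the dipole positions and moments below are made-up illustrations:

```python
MU0_4PI = 1e-7  # mu_0 / (4*pi) in SI units

def dipole_field(m, r):
    """Free-space field of a point dipole with moment m at displacement r."""
    rmag = sum(c * c for c in r) ** 0.5
    rhat = [c / rmag for c in r]
    mdotr = sum(a * b for a, b in zip(m, rhat))
    return [MU0_4PI * (3 * mdotr * rh - mc) / rmag ** 3
            for rh, mc in zip(rhat, m)]

def total_field(dipoles, sensor):
    """Superpose the contributions of all elementary dipoles at the sensor."""
    b = [0.0, 0.0, 0.0]
    for pos, m in dipoles:
        r = [s - p for s, p in zip(sensor, pos)]
        b = [a + c for a, c in zip(b, dipole_field(m, r))]
    return b

# Two illustrative submerged dipole sources and an airborne sensor.
dipoles = [((0.0, 0.0, -50.0), (1e4, 0.0, 0.0)),
           ((30.0, 0.0, -80.0), (0.0, 1e4, 0.0))]
print(total_field(dipoles, (0.0, 0.0, 100.0)))
```

Summing such contributions over a volumetric distribution of dipoles gives a noise-field estimate against which an anomaly signal's SNR can be judged.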
Development of a Global Agricultural Hotspot Detection and Early Warning System
NASA Astrophysics Data System (ADS)
Lemoine, G.; Rembold, F.; Urbano, F.; Csak, G.
2015-12-01
The number of web-based platforms for crop monitoring has grown rapidly over recent years, and anomaly maps and time profiles of remote sensing derived indicators can be accessed online thanks to a number of web-based portals. However, while these systems make a large amount of crop monitoring data available to agriculture and food security analysts, there is no global platform which provides agricultural production hotspot warnings in a highly automatic and timely manner. Therefore a web-based system providing timely warning evidence as maps and short narratives is currently under development by the Joint Research Centre. The system (called "HotSpot Detection System of Agriculture Production Anomalies", HSDS) will focus on water-limited agricultural systems worldwide. The automatic analysis of relevant meteorological and vegetation indicators at selected administrative units (GAUL 1 level) will trigger warning messages for the areas where anomalous conditions are observed. The level of warning (ranging from "watch" to "alert") will depend on the nature and number of indicators for which an anomaly is detected. Information regarding the extent of the agricultural areas concerned by the anomaly and the progress of the agricultural season will complement the warning label. In addition, we are testing supplementary detailed information from other sources for the areas triggering a warning. These include the automatic web-based and food security-tailored analysis of media (using the JRC Media Monitor semantic search engine) and the automatic detection of active crop area using Sentinel-1, upcoming Sentinel-2 and Landsat 8 imagery processed in Google Earth Engine. The basic processing will be fully automated and updated every 10 days, exploiting low resolution rainfall estimates and satellite vegetation indices. Maps, trend graphs and statistics, accompanied by short narratives edited by a team of crop monitoring experts, will be made available on the website on a monthly basis.
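The warning logic the entry describes, a level that grows with the number of indicators flagged as anomalous in an administrative unit, can be sketched as a simple mapping. The "watch" and "alert" names come from the abstract; the thresholds and the intermediate level are assumptions for illustration:

```python
def warning_level(anomalous_indicators):
    """Map the set of anomalous indicators for a unit to a warning level.

    Thresholds and the intermediate "warning" level are assumed, not HSDS's.
    """
    n = len(anomalous_indicators)
    if n == 0:
        return None       # no warning triggered
    if n == 1:
        return "watch"
    if n == 2:
        return "warning"
    return "alert"

print(warning_level([]))                                   # → None
print(warning_level(["rainfall"]))                         # → watch
print(warning_level(["rainfall", "NDVI", "temperature"]))  # → alert
```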
RIDES: Robust Intrusion Detection System for IP-Based Ubiquitous Sensor Networks
Amin, Syed Obaid; Siddiqui, Muhammad Shoaib; Hong, Choong Seon; Lee, Sungwon
2009-01-01
The IP-based Ubiquitous Sensor Network (IP-USN) is an effort to build the “Internet of things”. By utilizing IP for low power networks, we can benefit from existing well established tools and technologies of IP networks. Along with many other unresolved issues, securing IP-USN is of great concern for researchers so that future market satisfaction and demands can be met. Without proper security measures, both reactive and proactive, it is hard to envisage an IP-USN realm. In this paper we present a design of an IDS (Intrusion Detection System) called RIDES (Robust Intrusion DEtection System) for IP-USN. RIDES is a hybrid intrusion detection system, which incorporates both Signature and Anomaly based intrusion detection components. For signature based intrusion detection this paper only discusses the implementation of distributed pattern matching algorithm with the help of signature-code, a dynamically created attack-signature identifier. Other aspects, such as creation of rules are not discussed. On the other hand, for anomaly based detection we propose a scoring classifier based on the SPC (Statistical Process Control) technique called CUSUM charts. We also investigate the settings and their effects on the performance of related parameters for both of the components. PMID:22412321
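The CUSUM chart mentioned for the anomaly-based component can be sketched as a standard one-sided cumulative sum: deviations above a reference value accumulate, and an alarm fires when the sum crosses a decision threshold. The target, slack, and threshold values below are illustrative, not the paper's settings:

```python
def cusum_alarm(samples, target, slack, threshold):
    """One-sided CUSUM: return the index where the alarm fires, else None."""
    s = 0.0
    for t, x in enumerate(samples):
        # Accumulate deviations above (target + slack); clamp at zero.
        s = max(0.0, s + (x - target - slack))
        if s > threshold:
            return t
    return None

traffic = [5, 6, 4, 5, 5, 9, 10, 11, 12]   # packet rate drifts upward
print(cusum_alarm(traffic, target=5.0, slack=0.5, threshold=8.0))  # → 7
```

The slack parameter makes the chart insensitive to small nominal fluctuations, while persistent shifts, the signature of many attacks on sensor networks, accumulate quickly.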
On Modeling of Adversary Behavior and Defense for Survivability of Military MANET Applications
2015-01-01
...anomaly detection technique. b) A system-level majority-voting-based intrusion detection system, with m being the number of verifiers used to perform... pp. 1254-1263. [5] R. Mitchell and I.R. Chen, "Adaptive Intrusion Detection for Unmanned Aircraft Systems Based on Behavior Rule Specification"... and adaptively trigger the best attack strategies while avoiding detection and eviction. The second step is to model the defense behavior of defenders...
Rocket Testing and Integrated System Health Management
NASA Technical Reports Server (NTRS)
Figueroa, Fernando; Schmalzel, John
2005-01-01
Integrated System Health Management (ISHM) describes a set of system capabilities that in aggregate perform: determination of condition for each system element, detection of anomalies, diagnosis of causes for anomalies, and prognostics for future anomalies and system behavior. The ISHM should also provide operators with situational awareness of the system by integrating contextual and timely data, information, and knowledge (DIaK) as needed. ISHM capabilities can be implemented using a variety of technologies and tools. This chapter provides an overview of ISHM contributing technologies and describes in further detail a novel implementation architecture along with associated taxonomy, ontology, and standards. The operational ISHM testbed is based on a subsystem of a rocket engine test stand. Such test stands contain many elements that are common to manufacturing systems, and thereby serve to illustrate the potential benefits and methodologies of the ISHM approach for intelligent manufacturing.
A novel approach for pilot error detection using Dynamic Bayesian Networks.
Saada, Mohamad; Meng, Qinggang; Huang, Tingwen
2014-06-01
In the last decade, Dynamic Bayesian Networks (DBNs) have become one of the most attractive probabilistic extensions of Bayesian Networks (BNs) for modelling uncertainty from a temporal perspective. Despite this popularity, few researchers have attempted to study the use of these networks in anomaly detection or the implications of data anomalies for the outcome of such models. An abnormal change in the modelled environment's data at a given time will cause a trailing chain effect on the data of all related environment variables in the current and consecutive time slices. Although this effect fades with time, it can still have an ill effect on the outcome of such models. In this paper we propose an algorithm for pilot error detection, using DBNs as the modelling framework for learning and detecting anomalous data. We base our experiments on the actions of an aircraft pilot, and a flight simulator was created for running the experiments. The proposed anomaly detection algorithm has achieved good results in detecting pilot errors and their effects on the whole system.
A robust background regression based score estimation algorithm for hyperspectral anomaly detection
NASA Astrophysics Data System (ADS)
Zhao, Rui; Du, Bo; Zhang, Liangpei; Zhang, Lefei
2016-12-01
Anomaly detection has become a hot topic in the hyperspectral image analysis and processing fields in recent years. The most important issue for hyperspectral anomaly detection is the background estimation and suppression. Unreasonable or non-robust background estimation usually leads to unsatisfactory anomaly detection results. Furthermore, the inherent nonlinearity of hyperspectral images may cover up the intrinsic data structure in the anomaly detection. In order to implement robust background estimation, as well as to explore the intrinsic data structure of the hyperspectral image, we propose a robust background regression based score estimation algorithm (RBRSE) for hyperspectral anomaly detection. The Robust Background Regression (RBR) is actually a label assignment procedure which segments the hyperspectral data into a robust background dataset and a potential anomaly dataset with an intersection boundary. In the RBR, a kernel expansion technique, which explores the nonlinear structure of the hyperspectral data in a reproducing kernel Hilbert space, is utilized to formulate the data as a density feature representation. A minimum squared loss relationship is constructed between the data density feature and the corresponding assigned labels of the hyperspectral data, to formulate the foundation of the regression. Furthermore, a manifold regularization term which explores the manifold smoothness of the hyperspectral data, and a maximization term of the robust background average density, which suppresses the bias caused by the potential anomalies, are jointly appended in the RBR procedure. After this, a paired-dataset based k-nn score estimation method is undertaken on the robust background and potential anomaly datasets, to implement the detection output. 
The experimental results show that RBRSE achieves better ROC curves, higher AUC values, and clearer background-anomaly separation than some of the other state-of-the-art anomaly detection methods, and is easy to implement in practice.
Rule-based expert system for maritime anomaly detection
NASA Astrophysics Data System (ADS)
Roy, Jean
2010-04-01
Maritime domain operators/analysts have a mandate to be aware of all that is happening within their areas of responsibility. This mandate derives from the needs to defend sovereignty, protect infrastructures, counter terrorism, detect illegal activities, etc., and it has become more challenging in the past decade, as commercial shipping turned into a potential threat. In particular, a huge portion of the data and information made available to the operators/analysts is mundane, from maritime platforms going about normal, legitimate activities, and it is very challenging for them to detect and identify the non-mundane. To achieve such anomaly detection, they must establish numerous relevant situational facts from a variety of sensor data streams. Unfortunately, many of the facts of interest just cannot be observed; the operators/analysts thus use their knowledge of the maritime domain and their reasoning faculties to infer these facts. As they are often overwhelmed by the large amount of data and information, automated reasoning tools could be used to support them by inferring the necessary facts, ultimately providing indications and warning on a small number of anomalous events worthy of their attention. Along this line of thought, this paper describes a proof-of-concept prototype of a rule-based expert system implementing automated rule-based reasoning in support of maritime anomaly detection.
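Rule-based reasoning of the kind prototyped above reduces to evaluating predicates over situational facts established from sensor data. A toy sketch with invented rules and track fields (not the prototype's actual rule base):

```python
# Each rule pairs an indication label with a predicate over situational facts.
# Rules and field names are illustrative examples, not the prototype's.
RULES = [
    ("AIS transponder off in coverage area",
     lambda v: not v["ais_on"] and v["in_coverage"]),
    ("loitering near critical infrastructure",
     lambda v: v["speed_knots"] < 1.0 and v["near_infrastructure"]),
    ("rendezvous at sea",
     lambda v: v["speed_knots"] < 1.0 and v["nearby_vessels"] > 0),
]

def evaluate(vessel):
    """Return the indications whose rules fire for this vessel track."""
    return [name for name, rule in RULES if rule(vessel)]

track = {"ais_on": False, "in_coverage": True, "speed_knots": 0.4,
         "near_infrastructure": False, "nearby_vessels": 1}
print(evaluate(track))
```

A production expert system layers inference on top of this (chaining inferred facts back into the fact base), but the mundane/non-mundane filtering the entry describes is exactly this kind of predicate evaluation scaled up.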
The use of Compton scattering in detecting anomaly in soil-possible use in pyromaterial detection
DOE Office of Scientific and Technical Information (OSTI.GOV)
Abedin, Ahmad Firdaus Zainal; Ibrahim, Noorddin; Zabidi, Noriza Ahmad
Compton scattering can provide a signature for land mine detection based on the dependence of the scattered photon flux on density anomalies and on the energy change of the scattered photons. In this study, 4.43 MeV gamma rays from an Am-Be source were used to perform Compton scattering. Two thallium-doped sodium iodide NaI(Tl) detectors, each of radius 1.9 cm, were placed 8 cm from the source to detect the gamma rays. Nine anomalies were used in this simulation, each a cylinder of radius 10 cm and height 8.9 cm, buried 5 cm deep in a soil bed measuring 80 cm in radius and 53.5 cm in height. Monte Carlo methods indicated that the scattering of photons is directly proportional to the density of the anomalies. The contrast ratio, defined as the difference between the detector response with and without an anomaly, varies linearly with anomaly density. Anomalies of air, wood, and water give positive contrast ratio values, whereas explosive, sand, concrete, graphite, limestone, and polyethylene give negative values. Overall, the contrast ratio values are greater than 2% for all anomalies. The strong contrast ratios result in good detection capability and distinction between anomalies.
SSME HPOTP post-test diagnostic system enhancement project
NASA Technical Reports Server (NTRS)
Bickmore, Timothy W.
1995-01-01
An assessment of engine and component health is routinely made after each test or flight firing of a space shuttle main engine (SSME). Currently, this health assessment is done by teams of engineers who manually review sensor data, performance data, and engine and component operating histories. Based on review of information from these various sources, an evaluation is made as to the health of each component of the SSME and the preparedness of the engine for another test or flight. The objective of this project is to further develop a computer program which automates the analysis of test data from the SSME high-pressure oxidizer turbopump (HPOTP) in order to detect and diagnose anomalies. This program fits into a larger system, the SSME Post-Test Diagnostic System (PTDS), which will eventually be extended to assess the health and status of most SSME components on the basis of test data analysis. The HPOTP module is an expert system, which uses 'rules of thumb' obtained from interviews with experts from NASA Marshall Space Flight Center (MSFC) to detect and diagnose anomalies. Analyses of the raw test data are first performed using pattern recognition techniques, which result in features such as spikes, shifts, peaks, and drifts being detected and written to a database. The HPOTP module then looks for combinations of these features which are indicative of known anomalies, using the rules gathered from the turbomachinery experts. Results of this analysis are then displayed via a graphical user interface which provides ranked lists of anomalies and observations by engine component, along with supporting data plots for each.
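The feature-to-anomaly rule matching described above can be illustrated with a minimal sketch. The feature names and rule contents here are invented for illustration and are not the actual MSFC expert knowledge base.

```python
# Hypothetical rule format: each rule maps a set of detected signal
# features (written to the database by the pattern recognition step)
# to a named anomaly.
RULES = [
    ({"spike:pump_speed", "shift:discharge_pressure"}, "bearing degradation"),
    ({"drift:turbine_temp"}, "sensor drift"),
]

def diagnose(detected_features):
    """Return the anomalies whose required feature combinations
    are all present among the detected features."""
    found = set(detected_features)
    return [name for required, name in RULES if required <= found]
```

A ranked list for the user interface could then be produced by, e.g., sorting matches by how many of their required features were observed.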
Image Analysis via Soft Computing: Prototype Applications at NASA KSC and Product Commercialization
NASA Technical Reports Server (NTRS)
Dominguez, Jesus A.; Klinko, Steve
2011-01-01
This slide presentation reviews the use of "soft computing," which differs from "hard computing" in that it is more tolerant of imprecision, partial truth, uncertainty, and approximation, and its use in image analysis. Soft computing provides flexible information processing to handle real-life ambiguous situations and achieve tractability, robustness, low solution cost, and a closer resemblance to human decision making. Several systems are or have been developed: Fuzzy Reasoning Edge Detection (FRED), Fuzzy Reasoning Adaptive Thresholding (FRAT), image enhancement techniques, and visual/pattern recognition. These systems are compared with examples that show the effectiveness of each. NASA applications that are reviewed are: Real-Time (RT) Anomaly Detection, Real-Time (RT) Moving Debris Detection, and the Columbia investigation. The RT anomaly detection application reviewed the case of a damaged cable for the emergency egress system. The use of these techniques is further illustrated in the Columbia investigation with the location and detection of foam debris. There are several applications in commercial usage: image enhancement, human screening and privacy protection, visual inspection, 3D heart visualization, tumor detection, and X-ray image enhancement.
On-line condition monitoring applications in nuclear power plants
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hashemian, H. M.; Feltus, M. A.
2006-07-01
Existing signals from process instruments in nuclear power plants can be sampled while the plant is operating and analyzed to verify the static and dynamic performance of process sensors, identify process-to-sensor problems, detect instrument anomalies such as venturi fouling, measure the vibration of the reactor vessel and its internals, or detect thermal-hydraulic anomalies within the reactor coolant system. These applications are important in nuclear plants to satisfy a variety of objectives, such as: 1) meeting the plant technical specification requirements; 2) complying with regulatory requirements; 3) guarding against equipment and process degradation; 4) providing a means for incipient failure detection and predictive maintenance; or 5) identifying the root cause of anomalies in equipment and plant processes. The technologies that are used to achieve these objectives are collectively referred to as 'on-line condition monitoring.' This paper presents a review of key elements of these technologies, provides examples of their use in nuclear power plants, and illustrates how they can be integrated into an on-line condition monitoring system for nuclear power plants. (authors)
The architecture of a network level intrusion detection system
DOE Office of Scientific and Technical Information (OSTI.GOV)
Heady, R.; Luger, G.; Maccabe, A.
1990-08-15
This paper presents the preliminary architecture of a network level intrusion detection system. The proposed system will monitor base level information in network packets (source, destination, packet size, and time), learning the normal patterns and announcing anomalies as they occur. The goal of this research is to determine the applicability of current intrusion detection technology to the detection of network level intrusions. In particular, the authors are investigating the possibility of using this technology to detect and react to worm programs.
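A toy version of the proposed monitor can be sketched as follows, assuming a simple frequency-and-range model of "normal." The feature set (source, destination, packet size) follows the paper; the class name, thresholding rule, and parameters are illustrative assumptions.

```python
from collections import Counter

class PacketAnomalyMonitor:
    """Toy network-level monitor: learn which (source, destination)
    pairs and packet sizes are normal during a training window,
    then flag packets that deviate."""

    def __init__(self, min_count=2, size_slack=1.5):
        self.pairs = Counter()   # observed (src, dst) frequencies
        self.sizes = []          # observed packet sizes
        self.min_count = min_count
        self.size_slack = size_slack

    def train(self, packets):
        for src, dst, size in packets:
            self.pairs[(src, dst)] += 1
            self.sizes.append(size)

    def is_anomalous(self, packet):
        src, dst, size = packet
        unseen_pair = self.pairs[(src, dst)] < self.min_count
        lo, hi = min(self.sizes), max(self.sizes)
        out_of_range = size > hi * self.size_slack or size < lo / self.size_slack
        return unseen_pair or out_of_range
```

A worm propagating through the network would surface here as a burst of previously unseen (source, destination) pairs, which is the reaction scenario the authors mention.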
On-road anomaly detection by multimodal sensor analysis and multimedia processing
NASA Astrophysics Data System (ADS)
Orhan, Fatih; Eren, P. E.
2014-03-01
The use of smartphones in Intelligent Transportation Systems is gaining popularity, yet many challenges exist in developing functional applications. Due to the dynamic nature of transportation, vehicular social applications face complexities such as developing robust sensor management, performing signal and image processing tasks, and sharing information among users. This study utilizes a framework that enables multimodal analysis of sensor data. It also provides plugin-based analysis interfaces for developing sensor and image processing based applications, and connects its users via a centralized application as well as to social networks to facilitate communication and socialization. Using this framework, an on-road anomaly detector is being developed and tested. The detector utilizes the sensors of a mobile device and is able to identify anomalies such as hard braking, pothole crossing, and speed bump crossing. Upon such detection, the video portion containing the anomaly is automatically extracted in order to enable further image processing analysis. The detection results are shared on a central portal application for online traffic condition monitoring.
Stand-off thermal IR minefield survey: system concept and experimental results
NASA Astrophysics Data System (ADS)
Cremer, Frank; Nguyen, Thanh T.; Yang, Lixin; Sahli, Hichem
2005-06-01
A detailed description of the CLEARFAST system for thermal IR stand-off minefield survey is given. The system allows (i) a stand-off diurnal observation of hazardous area, (ii) detecting anomalies, i.e. locating and searching for targets which are thermally and spectrally distinct from their surroundings, (iii) estimating the physical parameters, i.e. depth and thermal diffusivity, of the detected anomalies, and (iv) providing panoramic (mosaic) images indicating the locations of suspect objects and known markers. The CLEARFAST demonstrator has been successfully deployed and operated, in November 2004, in a real minefield within the United Nations Buffer Zone in Cyprus. The paper describes the main principles of the system and illustrates the processing chain on a set of real minefield images, together with qualitative and quantitative results.
Experiments on Adaptive Techniques for Host-Based Intrusion Detection
DOE Office of Scientific and Technical Information (OSTI.GOV)
DRAELOS, TIMOTHY J.; COLLINS, MICHAEL J.; DUGGAN, DAVID P.
2001-09-01
This research explores four experiments with adaptive host-based intrusion detection (ID) techniques in an attempt to develop systems that can detect novel exploits. The technique considered to have the most potential is adaptive critic designs (ACDs) because of their utilization of reinforcement learning, which allows learning exploits that are difficult to pinpoint in sensor data. Preliminary results of ID using an ACD, an Elman recurrent neural network, and a statistical anomaly detection technique demonstrate an ability to learn to distinguish between clean and exploit data. We used the Solaris Basic Security Module (BSM) as a data source and performed considerable preprocessing on the raw data. A detection approach called generalized signature-based ID is recommended as a middle ground between signature-based ID, which has an inability to detect novel exploits, and anomaly detection, which detects too many events, including events that are not exploits. The primary results of the ID experiments demonstrate the use of custom data for generalized signature-based intrusion detection and the ability of neural network-based systems to learn in this application environment.
Congenital anomalies of the left brachiocephalic vein detected in adults on computed tomography.
Yamamuro, Hiroshi; Ichikawa, Tamaki; Hashimoto, Jun; Ono, Shun; Nagata, Yoshimi; Kawada, Shuichi; Kobayashi, Makiko; Koizumi, Jun; Shibata, Takeo; Imai, Yutaka
2017-10-01
Anomalous left brachiocephalic vein (BCV) is a rare and little-known systemic venous anomaly. We evaluated congenital anomalies of the left BCV in adults detected during computed tomography (CT) examinations. This retrospective study included 81,425 patients without congenital heart disease who underwent chest CT. We reviewed the recorded reports and CT images for congenital anomalies of the left BCV, including aberrant and supernumerary BCVs. The associated congenital aortic anomalies were assessed. Among 73,407 cases at a university hospital, 22 (16 males, 6 females; mean age, 59 years) with aberrant left BCVs were found by keyword search of the recorded reports (0.03%). Among 8018 cases at the branch hospital, 5 (4 males, 1 female; mean age, 67 years) with aberrant left BCVs were found by CT image review (0.062%). There were no significant differences in the incidence of aberrant left BCV between the two groups. Two cases had double left BCVs. Eleven cases showed high aortic arches. Two cases had the right aortic arch, one case had an incomplete double aortic arch, and one case was associated with coarctation. Aberrant left BCV on CT examination in adults was extremely rare. Some cases were associated with aortic arch anomalies.
System for Anomaly and Failure Detection (SAFD) system development
NASA Technical Reports Server (NTRS)
Oreilly, D.
1992-01-01
This task specified developing the hardware and software necessary to implement the System for Anomaly and Failure Detection (SAFD) algorithm, developed under Technology Test Bed (TTB) Task 21, on the TTB engine stand. This effort involved building two units: one unit to be installed in the Block II Space Shuttle Main Engine (SSME) Hardware Simulation Lab (HSL) at Marshall Space Flight Center (MSFC), and one unit to be installed at the TTB engine stand. Rocketdyne personnel from the HSL performed the task. The SAFD algorithm was developed as an improvement over the current redline system used in the Space Shuttle Main Engine Controller (SSMEC). Simulation tests and execution against previous hot fire tests demonstrated that the SAFD algorithm can detect engine failure as much as tens of seconds before the redline system recognized the failure. Although the current algorithm only operates during steady-state conditions (engine not throttling), work is underway to expand the algorithm to work during transient conditions.
Rotor Smoothing and Vibration Monitoring Results for the US Army VMEP
2009-06-01
individual component CI detection thresholds, and development of models for diagnostics, prognostics, and anomaly detection. Figure 16 VMEP Server...and prognostics are of current interest. Development of those systems requires large amounts of data (collection, monitoring, manipulation) to capture...development of automated systems and for continuous updating of algorithms to improve detection, classification, and prognostic performance. A test
Continental and oceanic magnetic anomalies: Enhancement through GRM
NASA Technical Reports Server (NTRS)
von Frese, R. R. B.; Hinze, W. J.
1985-01-01
In contrast to the POGO and MAGSAT satellites, the Geopotential Research Mission (GRM) satellite system will orbit at a minimum elevation to provide significantly better resolved lithospheric magnetic anomalies for more detailed and improved geologic analysis. In addition, GRM will measure corresponding gravity anomalies to enhance our understanding of the gravity field for vast regions of the Earth which are largely inaccessible to more conventional surface mapping. Crustal studies will greatly benefit from the dual data sets as modeling has shown that lithospheric sources of long wavelength magnetic anomalies frequently involve density variations which may produce detectable gravity anomalies at satellite elevations. Furthermore, GRM will provide an important replication of lithospheric magnetic anomalies as an aid to identifying and extracting these anomalies from satellite magnetic measurements. The potential benefits to the study of the origin and characterization of the continents and oceans, that may result from the increased GRM resolution are examined.
Mobile In Vivo Infrared Data Collection and Diagnoses Comparison System
NASA Technical Reports Server (NTRS)
Mintz, Frederick W. (Inventor); Gunapala, Sarath D. (Inventor); Moynihan, Philip I. (Inventor)
2013-01-01
Described is a mobile in vivo infrared brain scan and analysis system. The system includes a data collection subsystem and a data analysis subsystem. The data collection subsystem is a helmet with a plurality of infrared (IR) thermometer probes. Each of the IR thermometer probes includes an IR photodetector capable of detecting IR radiation generated by evoked potentials within a user's skull. The helmet is formed to collect brain data that is reflective of firing neurons in a mobile subject and transmit the brain data to the data analysis subsystem. The data analysis subsystem is configured to generate and display a three-dimensional image that depicts a location of the firing neurons. The data analysis subsystem is also configured to compare the brain data against a library of brain data to detect an anomaly in the brain data, and notify a user of any detected anomaly in the brain data.
Big Data Analysis of Manufacturing Processes
NASA Astrophysics Data System (ADS)
Windmann, Stefan; Maier, Alexander; Niggemann, Oliver; Frey, Christian; Bernardi, Ansgar; Gu, Ying; Pfrommer, Holger; Steckel, Thilo; Krüger, Michael; Kraus, Robert
2015-11-01
The high complexity of manufacturing processes and the continuously growing amount of data lead to excessive demands on the users with respect to process monitoring, data analysis and fault detection. For these reasons, problems and faults are often detected too late, maintenance intervals are chosen too short and optimization potential for higher output and increased energy efficiency is not sufficiently used. A possibility to cope with these challenges is the development of self-learning assistance systems, which identify relevant relationships by observation of complex manufacturing processes so that failures, anomalies and need for optimization are automatically detected. The assistance system developed in the present work accomplishes data acquisition, process monitoring and anomaly detection in industrial and agricultural processes. The assistance system is evaluated in three application cases: Large distillation columns, agricultural harvesting processes and large-scale sorting plants. In this paper, the developed infrastructures for data acquisition in these application cases are described as well as the developed algorithms and initial evaluation results.
TargetVue: Visual Analysis of Anomalous User Behaviors in Online Communication Systems.
Cao, Nan; Shi, Conglei; Lin, Sabrina; Lu, Jie; Lin, Yu-Ru; Lin, Ching-Yung
2016-01-01
Users with anomalous behaviors in online communication systems (e.g. email and social media platforms) are potential threats to society. Automated anomaly detection based on advanced machine learning techniques has been developed to combat this issue; challenges remain, though, due to the difficulty of obtaining proper ground truth for model training and evaluation. Therefore, substantial human judgment on the automated analysis results is often required to better adjust the performance of anomaly detection. Unfortunately, techniques that allow users to understand the analysis results more efficiently, to make a confident judgment about anomalies, and to explore data in their context, are still lacking. In this paper, we propose a novel visual analysis system, TargetVue, which detects anomalous users via an unsupervised learning model and visualizes the behaviors of suspicious users in behavior-rich context through novel visualization designs and multiple coordinated contextual views. In particular, TargetVue incorporates three new ego-centric glyphs to visually summarize a user's behaviors, which effectively present the user's communication activities, features, and social interactions. An efficient layout method is proposed to place these glyphs on a triangle grid, which captures similarities among users and facilitates comparisons of behaviors of different users. We demonstrate the power of TargetVue through its application in a social bot detection challenge using Twitter data, a case study based on email records, and an interview with expert users. Our evaluation shows that TargetVue is beneficial to the detection of users with anomalous communication behaviors.
Building Intrusion Detection with a Wireless Sensor Network
NASA Astrophysics Data System (ADS)
Wälchli, Markus; Braun, Torsten
This paper addresses the detection and reporting of abnormal building access with a wireless sensor network. A common office room, offering space for two working persons, has been monitored with ten sensor nodes and a base station. The task of the system is to report suspicious office occupation such as office searching by thieves. On the other hand, normal office occupation should not trigger alarms. In order to save energy for communication, the system provides all nodes with some adaptive short-term memory. Thus, a set of sensor activation patterns can be temporarily learned. The local memory is implemented as an Adaptive Resonance Theory (ART) neural network. Unknown event patterns detected on sensor node level are reported to the base station, where the system-wide anomaly detection is performed. The anomaly detector is lightweight and completely self-learning. The system can be run autonomously, or it could be used as a triggering system to turn on an additional high-resolution system on demand. Our building monitoring system has proven to work reliably in different evaluated scenarios. Up to 90% of communication costs could be saved compared to a threshold-based approach without local memory.
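The node-local short-term memory can be approximated with a bounded pattern store. This sketch substitutes a cosine-similarity match for the ART network described above; the capacity, vigilance threshold, and class name are illustrative assumptions.

```python
import numpy as np

class ShortTermPatternMemory:
    """Bounded memory of recent sensor activation patterns.

    A fixed-capacity FIFO with a similarity test stands in for the
    per-node ART network: known patterns are matched locally (no radio
    traffic); unknown ones would be reported to the base station.
    """

    def __init__(self, capacity=8, vigilance=0.9):
        self.capacity = capacity
        self.vigilance = vigilance
        self.patterns = []

    def observe(self, pattern):
        """Return True if the pattern is novel (i.e., should be reported)."""
        p = np.asarray(pattern, dtype=float)
        for q in self.patterns:
            sim = np.dot(p, q) / (np.linalg.norm(p) * np.linalg.norm(q) + 1e-12)
            if sim >= self.vigilance:
                return False          # recognized: suppress transmission
        self.patterns.append(p)       # learn it and report once
        if len(self.patterns) > self.capacity:
            self.patterns.pop(0)      # short-term: forget oldest pattern
        return True
```

The communication saving reported in the abstract comes from exactly this suppression: only patterns the local memory does not recognize cost a radio transmission.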
Evaluation of Anomaly Detection Method Based on Pattern Recognition
NASA Astrophysics Data System (ADS)
Fontugne, Romain; Himura, Yosuke; Fukuda, Kensuke
The number of threats on the Internet is rapidly increasing, and anomaly detection has become of increasing importance. High-speed backbone traffic is particularly affected, but its analysis is a complicated task due to the amount of data, the lack of payload data, asymmetric routing, and the use of sampling techniques. Most anomaly detection schemes focus on the statistical properties of network traffic and highlight anomalous traffic through its singularities. In this paper, we concentrate on unusual traffic distributions, which are easily identifiable in temporal-spatial space (e.g., time/address or port). We present an anomaly detection method that uses a pattern recognition technique to identify anomalies in pictures representing traffic. The main advantage of this method is its ability to detect attacks involving mice flows. We evaluate the parameter set and the effectiveness of this approach by analyzing six years of Internet traffic collected from a trans-Pacific link. We show several examples of detected anomalies and compare our results with those of two other methods. The comparison indicates that the anomalies detected only by the pattern-recognition-based method are mainly malicious traffic with a small number of packets.
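A minimal sketch of the picture-based idea: render traffic as a time-by-port histogram and flag unusually dense cells. The robust z-score rule here stands in for the paper's actual pattern recognition technique; the function name, bins, and threshold are assumptions.

```python
import numpy as np

def traffic_picture_anomalies(times, ports, bins=(20, 20), z=4.0):
    """Render traffic events as a time x port 2D histogram and flag
    cells whose counts stand far above the robust background level
    (median + z * MAD over nonzero cells)."""
    hist, _, _ = np.histogram2d(times, ports, bins=bins)
    nonzero = hist[hist > 0]
    med = np.median(nonzero)
    mad = np.median(np.abs(nonzero - med)) + 1e-9
    return hist > med + z * mad
```

A concentrated burst of packets toward one port range within a short interval (e.g., a scan) shows up as a single bright cell in this "picture," even if each flow carries only a few packets.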
Characterization of normality of chaotic systems including prediction and detection of anomalies
NASA Astrophysics Data System (ADS)
Engler, Joseph John
Accurate prediction and control pervades domains such as engineering, physics, chemistry, and biology. Often, it is discovered that the systems under consideration cannot be well represented by linear, periodic, or random data. It has been shown that these systems exhibit deterministic chaos behavior. Deterministic chaos describes systems which are governed by deterministic rules but whose data appear to be random or quasi-periodic distributions. Deterministically chaotic systems characteristically exhibit sensitive dependence upon initial conditions, manifested through rapid divergence of states initially close to one another. Due to this characterization, it has been deemed impossible to accurately predict future states of these systems over longer time scales. Fortunately, the deterministic nature of these systems allows for accurate short-term predictions, given that the dynamics of the system are well understood. This fact has been exploited in the research community and has resulted in various algorithms for short-term prediction. Detection of normality in deterministically chaotic systems is critical to understanding the system sufficiently well to be able to predict future states. Due to the sensitivity to initial conditions, the detection of normal operational states for a deterministically chaotic system can be challenging. The addition of small perturbations to the system, which may result in bifurcation of the normal states, further complicates the problem. The detection of anomalies and prediction of future states of the chaotic system allow for greater understanding of these systems. The goal of this research is to produce methodologies for determining states of normality for deterministically chaotic systems, detecting anomalous behavior, and more accurately predicting future states of the system. Additionally, the ability to detect subtle system state changes is discussed.
The dissertation addresses these goals by proposing new representational techniques and novel prediction methodologies. The value and efficiency of these methods are explored in various case studies. Presented is an overview of chaotic systems with examples taken from the real world. A representation schema for rapid understanding of the various states of deterministically chaotic systems is presented. This schema is then used to detect anomalies and system state changes. Additionally, a novel prediction methodology which utilizes Lyapunov exponents to facilitate longer term prediction accuracy is presented and compared with other nonlinear prediction methodologies. These novel methodologies are then demonstrated on applications such as wind energy, cyber security and classification of social networks.
Detecting Biosphere anomalies hotspots
NASA Astrophysics Data System (ADS)
Guanche-Garcia, Yanira; Mahecha, Miguel; Flach, Milan; Denzler, Joachim
2017-04-01
The current amount of satellite remote sensing measurements available allows for applying data-driven methods to investigate environmental processes. The detection of anomalies or abnormal events is crucial to monitor the Earth system and to analyze their impacts on ecosystems and society. By means of a combination of statistical methods, this study proposes an intuitive and efficient methodology to detect those areas that present hotspots of anomalies, i.e. higher levels of abnormal or extreme events or more severe phases during our historical records. Biosphere variables from a preliminary version of the Earth System Data Cube developed within the CAB-LAB project (http://earthsystemdatacube.net/) have been used in this study. This database comprises several atmosphere and biosphere variables spanning 11 years (2001-2011) with 8-day temporal resolution and 0.25° global spatial resolution. In this study, we have used 10 variables that measure the biosphere. The methodology applied to detect abnormal events follows the intuitive idea that anomalies are assumed to be time steps that are not well represented by a previously estimated statistical model [1]. We combine the use of Autoregressive Moving Average (ARMA) models with a distance metric such as the Mahalanobis distance to detect abnormal events in multiple biosphere variables. In a first step, we pre-treat the variables by removing the seasonality and normalizing them locally (μ=0, σ=1). Additionally, we have regionalized the area of study into subregions of similar climate conditions, using the Köppen climate classification. For each climate region and variable we have selected the best ARMA parameters by means of a Bayesian criterion. Then we have obtained the residuals by comparing the fitted models with the original data.
To detect the extreme residuals from the 10 variables, we have computed the Mahalanobis distance to the data's mean (Hotelling's T^2), which considers the covariance matrix of the joint distribution. The proposed methodology has been applied to different areas around the globe. The results show that the method is able to detect historic events and also provides a useful tool to define sensitive regions. This method and these results have been developed within the framework of the project BACI (http://baci-h2020.eu/), which aims to integrate Earth observation data to monitor the Earth system and assess the impacts of terrestrial changes. [1] V. Chandola, A. Banerjee and V. Kumar. Anomaly detection: a survey. ACM Computing Surveys (CSUR), vol. 41, no. 3, 2009. [2] P. Mahalanobis. On the generalised distance in statistics. Proceedings National Institute of Science, vol. 2, pp. 49-55, 1936.
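The residual-plus-Mahalanobis pipeline can be sketched with a plain least-squares AR fit standing in for the ARMA models. This is a simplification: parameter selection by a Bayesian criterion, deseasonalization, and the Köppen regionalization are omitted, and the function names are illustrative.

```python
import numpy as np

def ar_residuals(x, p=2):
    """Least-squares AR(p) fit of a series; returns one-step-ahead
    residuals (a stand-in for the ARMA residuals in the abstract)."""
    X = np.column_stack([x[i:len(x) - p + i] for i in range(p)])
    y = x[p:]
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return y - X @ coef

def mahalanobis_scores(residual_matrix):
    """Mahalanobis distance of each time step's residual vector
    (one column per variable) from the residual mean."""
    mu = residual_matrix.mean(axis=0)
    cov = np.atleast_2d(np.cov(residual_matrix, rowvar=False))
    inv = np.linalg.pinv(cov)
    diff = residual_matrix - mu
    return np.sqrt(np.einsum('ij,jk,ik->i', diff, inv, diff))
```

Time steps with large scores are the candidate abnormal events; aggregating them spatially yields the anomaly "hotspots" of the title.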
Real-time anomaly detection for very short-term load forecasting
DOE Office of Scientific and Technical Information (OSTI.GOV)
Luo, Jian; Hong, Tao; Yue, Meng
Although recent load information is critical to very short-term load forecasting (VSTLF), power companies often have difficulty collecting the most recent load values accurately and in a timely manner for VSTLF applications. This paper tackles the problem of real-time anomaly detection in the most recent load information used by VSTLF. It proposes a model-based anomaly detection method that consists of two components: a dynamic regression model and an adaptive anomaly threshold. The case study is developed using data from ISO New England. This paper demonstrates that the proposed method significantly outperforms three other anomaly detection methods, including two methods commonly used in the field and one state-of-the-art method used by a winning team of the Global Energy Forecasting Competition 2014. Lastly, a general anomaly detection framework is proposed for future research.
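The two-component idea can be sketched minimally, with a moving-average forecast standing in for the paper's dynamic regression model; the window length, the multiplier k, and the function name are illustrative assumptions.

```python
import numpy as np

def detect_load_anomalies(load, window=24, k=3.0):
    """Flag anomalous load readings with a rolling forecast model and
    an adaptive threshold.

    The forecast here is a simple moving average (a stand-in for the
    dynamic regression component); the threshold adapts as k times the
    rolling standard deviation of past forecast residuals.
    """
    flags = np.zeros(len(load), dtype=bool)
    residuals = []
    for t in range(window, len(load)):
        forecast = np.mean(load[t - window:t])
        resid = load[t] - forecast
        if len(residuals) >= window:
            thresh = k * np.std(residuals[-window:])
            flags[t] = abs(resid) > thresh
        residuals.append(resid)
    return flags
```

Because the threshold is recomputed from recent residuals, it adapts to changing load volatility instead of relying on a fixed cutoff.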
Using scan statistics for congenital anomalies surveillance: the EUROCAT methodology.
Teljeur, Conor; Kelly, Alan; Loane, Maria; Densem, James; Dolk, Helen
2015-11-01
Scan statistics have been used extensively to identify temporal clusters of health events. We describe the temporal cluster detection methodology adopted by the EUROCAT (European Surveillance of Congenital Anomalies) monitoring system. Since 2001, EUROCAT has implemented a variable-window-width scan statistic for detecting unusual temporal aggregations of congenital anomaly cases. The scan windows are based on numbers of cases rather than being defined by time. The methodology is embedded in the EUROCAT Central Database for annual application to centrally held registry data, and was incrementally adapted to improve its utility and to address statistical issues. Simulation exercises were used to determine the power of the methodology to identify periods of raised risk (of 1-18 months). In order to operationalize the scan methodology, a number of adaptations were needed, including: estimating date of conception as the unit of time; deciding the maximum length (in time) and recency of clusters of interest; reporting of multiple and overlapping significant clusters; replacing the Monte Carlo simulation with a lookup table to reduce computation time; and placing a threshold on underlying population change and estimating the false positive rate by simulation. Exploration of power found that raised-risk periods lasting 1 month are unlikely to be detected except when the relative risk and case counts are high. The variable-window-width scan statistic is a useful tool for the surveillance of congenital anomalies. Numerous adaptations have improved the utility of the original methodology in the context of temporal cluster detection in congenital anomalies.
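The case-count-based scan window can be sketched as follows. The uniform null hypothesis, the Monte Carlo p-value, and all parameter names are simplifying assumptions rather than the exact EUROCAT implementation (which, as noted above, replaces simulation with a lookup table).

```python
import numpy as np

def min_window_duration(case_times, n_cases):
    """Shortest time span containing any run of n consecutive cases
    (window defined by case count, not by time)."""
    t = np.sort(np.asarray(case_times, dtype=float))
    return (t[n_cases - 1:] - t[:len(t) - n_cases + 1]).min()

def scan_p_value(case_times, n_cases, period, n_sim=199, seed=0):
    """Monte Carlo p-value for the observed tightest n-case window,
    under the null that case times are uniform over [0, period]."""
    rng = np.random.default_rng(seed)
    observed = min_window_duration(case_times, n_cases)
    sims = [min_window_duration(rng.uniform(0, period, len(case_times)), n_cases)
            for _ in range(n_sim)]
    exceed = sum(s <= observed for s in sims)
    return (exceed + 1) / (n_sim + 1)
```

A temporal aggregation of cases shows up as an unusually short n-case window, and the p-value quantifies how surprising that tightness is under the null.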
Anomaly detection applied to a materials control and accounting database
DOE Office of Scientific and Technical Information (OSTI.GOV)
Whiteson, R.; Spanks, L.; Yarbro, T.
An important component of the national mission of reducing the nuclear danger includes accurate recording of the processing and transportation of nuclear materials. Nuclear material storage facilities, nuclear chemical processing plants, and nuclear fuel fabrication facilities collect and store large amounts of data describing transactions that involve nuclear materials. To maintain confidence in the integrity of these data, it is essential to identify anomalies in the databases. Anomalous data could indicate error, theft, or diversion of material. Yet, because of the complex and diverse nature of the data, analysis and evaluation are extremely tedious. This paper describes the authors' work in the development of analysis tools to automate the anomaly detection process for the Material Accountability and Safeguards System (MASS) that tracks and records the activities associated with accountable quantities of nuclear material at Los Alamos National Laboratory. Using existing guidelines that describe valid transactions, the authors have created an expert system that identifies transactions that do not conform to the guidelines. Thus, this expert system can be used to focus the attention of the expert or inspector directly on significant phenomena.
Liu, Datong; Peng, Yu; Peng, Xiyuan
2018-01-01
Effective anomaly detection of sensing data is essential for identifying potential system failures. Because they require no prior knowledge or accumulated labels, and provide a representation of uncertainty, probability prediction methods (e.g., Gaussian process regression (GPR) and the relevance vector machine (RVM)) are especially well suited to anomaly detection for sensing series. Generally, one key parameter of prediction models is the coverage probability (CP), which controls the judging threshold for the testing sample and is generally set to a default value (e.g., 90% or 95%). There are few criteria for determining the optimal CP for anomaly detection. Therefore, this paper designs a graphic indicator, the receiver operating characteristic curve of the prediction interval (ROC-PI), based on the definition of the ROC curve, which can depict the trade-off between the PI width and the PI coverage probability across a series of cut-off points. Furthermore, the Youden index is modified to assess the performance of different CPs; by minimizing it, the optimal CP is derived using the simulated annealing (SA) algorithm. Experiments conducted on two simulation datasets demonstrate the validity of the proposed method. In particular, an actual case study on a sensing series from an on-orbit satellite illustrates its significant performance in practical application. PMID:29587372
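The CP-selection idea can be sketched as follows. This is an illustrative stand-in: a plain grid search replaces the paper's simulated annealing, the unmodified Youden index J = TPR − FPR replaces the modified one, and a Gaussian prediction interval ±z·σ with a small hard-coded quantile table stands in for the model's intervals.

```python
import numpy as np

# Two-sided standard-normal quantiles for a few candidate CPs.
Z = {0.80: 1.282, 0.90: 1.645, 0.95: 1.960, 0.99: 2.576}

def best_coverage_probability(residuals, sigma, labels,
                              cps=(0.80, 0.90, 0.95, 0.99)):
    """Grid-search stand-in for CP optimisation: for each candidate
    coverage probability, a sample is flagged when its residual leaves
    the interval +/- z*sigma, and the Youden index J = TPR - FPR
    scores that CP against labelled anomalies."""
    residuals = np.abs(np.asarray(residuals, dtype=float))
    labels = np.asarray(labels, dtype=bool)
    best_cp, best_j = None, -np.inf
    for cp in cps:
        flagged = residuals > Z[cp] * sigma
        tpr = (flagged & labels).sum() / max(labels.sum(), 1)
        fpr = (flagged & ~labels).sum() / max((~labels).sum(), 1)
        if tpr - fpr > best_j:
            best_cp, best_j = cp, tpr - fpr
    return best_cp, best_j
```

A CP that is too low flags benign fluctuations (high FPR); the search settles on the narrowest interval that still separates the labelled anomalies.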
NASA Astrophysics Data System (ADS)
Singh, Gurmeet; Naikan, V. N. A.
2017-12-01
Thermography has been widely used as a technique for anomaly detection in induction motors. The International Electrical Testing Association (NETA) has proposed guidelines for thermographic inspection of electrical systems and rotating equipment. These guidelines help in detecting anomalies and estimating their severity. However, they focus only on the location of the hotspot rather than on diagnosing the fault. This paper addresses two such faults, the inter-turn fault and failure of the cooling system, both of which result in an increase of the stator temperature. The present paper proposes two thermal profile indicators based on thermal analysis of IRT images. These indicators comply with the NETA standard and help in correctly distinguishing an inter-turn fault from a failure of the cooling system. The work has been experimentally validated for induction motors in healthy and seeded-fault scenarios.
22nd Annual Logistics Conference and Exhibition
2006-04-20
Presentation on Prognostics & Health Management at GE (Dr. Piero P. Bonissone, Industrial AI Lab, GE Global Research), covering anomaly detection from event-log data, failure-mode histograms, and diagnostics, prognostics, and health management for failure monitoring and assessment.
Riboni, Daniele; Bettini, Claudio; Civitarese, Gabriele; Janjua, Zaffar Haider; Helaoui, Rim
2016-02-01
In an ageing world population, more citizens are at risk of cognitive impairment, with negative consequences on their ability to live independently, their quality of life, and the sustainability of healthcare systems. Cognitive neuroscience researchers have identified behavioral anomalies that are significant indicators of cognitive decline. A general goal is the design of innovative methods and tools for continuously monitoring the functional abilities of seniors at risk and reporting behavioral anomalies to clinicians. SmartFABER is a pervasive system targeting this objective. A non-intrusive sensor network continuously acquires data about the interaction of the senior with the home environment during daily activities. A novel hybrid statistical and knowledge-based technique is used to analyze these data and detect the behavioral anomalies, whose history is presented to the clinicians through a dashboard. Unlike related works, SmartFABER can detect abnormal behaviors at a fine-grained level. We have fully implemented the system and evaluated it using real datasets, partly generated by performing activities in a smart home laboratory, and partly acquired during several months of monitoring of the instrumented home of a senior diagnosed with MCI. Experimental results, including comparisons with other activity recognition techniques, show the effectiveness of SmartFABER in terms of recognition rates.
AnRAD: A Neuromorphic Anomaly Detection Framework for Massive Concurrent Data Streams.
Chen, Qiuwen; Luley, Ryan; Wu, Qing; Bishop, Morgan; Linderman, Richard W; Qiu, Qinru
2018-05-01
The evolution of high performance computing technologies has enabled the large-scale implementation of neuromorphic models and pushed research in computational intelligence into a new era. Among machine learning applications, unsupervised detection of anomalous streams is especially challenging due to the requirements of detection accuracy and real-time performance. Designing a computing framework that harnesses the growing computing power of multicore systems while maintaining high sensitivity and specificity to anomalies is an urgent research topic. In this paper, we propose anomaly recognition and detection (AnRAD), a bioinspired detection framework that performs probabilistic inferences. We analyze the feature dependency and develop a self-structuring method that learns an efficient confabulation network using unlabeled data. This network is capable of fast incremental learning, which continuously refines the knowledge base using streaming data. Compared with several existing anomaly detection approaches, our method provides competitive detection quality. Furthermore, we exploit the massively parallel structure of the AnRAD framework. Our implementations of the detection algorithm on the graphics processing unit and the Xeon Phi coprocessor both obtain substantial speedups over the sequential implementation on a general-purpose microprocessor. The framework provides real-time service to concurrent data streams within diversified knowledge contexts, and can be applied to large problems with multiple local patterns. Experimental results demonstrate high computing performance and memory efficiency. For vehicle behavior detection, the framework is able to monitor up to 16000 vehicles (data streams) and their interactions in real time with a single commodity coprocessor, and uses less than 0.2 ms for one testing subject.
Finally, the detection network is ported to our spiking neural network simulator to show the potential of adapting to the emerging neuromorphic architectures.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Heady, R.; Luger, G.F.; Maccabe, A.B.
1991-05-15
This paper presents the implementation of a prototype network level intrusion detection system. The prototype system monitors base level information in network packets (source, destination, packet size, time, and network protocol), learning the normal patterns and announcing anomalies as they occur. The goal of this research is to determine the applicability of current intrusion detection technology to the detection of network level intrusions. In particular, the authors are investigating the possibility of using this technology to detect and react to worm programs.
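The learn-normal-patterns idea in the prototype can be sketched as a simple frequency profile over coarse packet features. This is an illustrative toy, not the prototype's implementation: `min_freq` is an assumed threshold, and packets are represented as (source, destination, protocol, size-bucket) tuples.

```python
from collections import Counter

class PacketProfile:
    """Toy frequency-based profile of packet features: learn how often
    each (source, destination, protocol, size-bucket) tuple occurs in
    a training window, then flag tuples that are rare or unseen."""

    def __init__(self, min_freq=0.05):
        self.counts = Counter()
        self.total = 0
        self.min_freq = min_freq

    def train(self, packets):
        for p in packets:
            self.counts[p] += 1
            self.total += 1

    def is_anomalous(self, packet):
        freq = self.counts[packet] / self.total if self.total else 0.0
        return freq < self.min_freq
```

A worm scanning new destinations would generate feature tuples never seen in the training window, which such a profile flags immediately.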
CPAD: Cyber-Physical Attack Detection
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ferragut, Erik M; Laska, Jason A
The CPAD technology relates to anomaly detection and more specifically to cyber physical attack detection. It infers underlying physical relationships between components by analyzing the sensor measurements of a system. It then uses these measurements to detect signs of a non-physically realizable state, which is indicative of an integrity attack on the system. CPAD can be used on any highly-instrumented cyber-physical system to detect integrity attacks and identify the component or components compromised. It has applications to power transmission and distribution, nuclear and industrial plants, and complex vehicles.
A new approach for structural health monitoring by applying anomaly detection on strain sensor data
NASA Astrophysics Data System (ADS)
Trichias, Konstantinos; Pijpers, Richard; Meeuwissen, Erik
2014-03-01
Structural Health Monitoring (SHM) systems help to monitor critical infrastructures (bridges, tunnels, etc.) remotely and provide up-to-date information about their physical condition. In addition, they help to predict the structure's life and required maintenance in a cost-efficient way. Typically, inspection data give insight into the structural health. The global structural behavior, and predominantly the structural loading, is generally measured with vibration and strain sensors. Acoustic emission sensors are increasingly used for measuring global crack activity near critical locations. In this paper, we present a procedure for local structural health monitoring by applying Anomaly Detection (AD) to data from strain sensors placed along expected crack paths. Sensor data are analyzed by automatic anomaly detection in order to find crack activity at an early stage. This approach targets the monitoring of critical structural locations, such as welds, near which strain sensors can be applied during construction, and/or locations with limited inspection possibilities during structural operation. We investigate several anomaly detection techniques to detect changes in statistical properties, indicating structural degradation. The most effective one is a novel polynomial fitting technique, which tracks slow changes in sensor data. Our approach has been tested on a representative test structure (bridge deck) in a lab environment, under constant and variable amplitude fatigue loading. In both cases, the evolving cracks at the monitored locations were successfully detected, autonomously, by our AD monitoring tool.
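The polynomial-fitting idea of tracking slow changes can be sketched as follows. This is an illustrative sketch in the spirit of the technique, not the paper's algorithm: a low-order polynomial is fitted per window of strain data, and the drift of the fitted level relative to a baseline window is reported (`window` and `degree` are assumed parameters).

```python
import numpy as np

def drift_scores(signal, window=50, degree=2):
    """Fit a low-order polynomial to each non-overlapping window of
    sensor data and report the change of the fitted window mean
    relative to the first (baseline) window; slowly growing scores
    suggest structural degradation."""
    signal = np.asarray(signal, dtype=float)
    x = np.arange(window)
    fits = []
    for start in range(0, len(signal) - window + 1, window):
        coeffs = np.polyfit(x, signal[start:start + window], degree)
        fits.append(np.polyval(coeffs, x).mean())
    baseline = fits[0]
    return [f - baseline for f in fits]
```

A stationary signal keeps all scores near zero, while a slow upward drift (as from an evolving crack redistributing strain) produces steadily growing scores.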
Radiation anomaly detection algorithms for field-acquired gamma energy spectra
NASA Astrophysics Data System (ADS)
Mukhopadhyay, Sanjoy; Maurer, Richard; Wolff, Ron; Guss, Paul; Mitchell, Stephen
2015-08-01
The Remote Sensing Laboratory (RSL) is developing a tactical, networked radiation detection system that will be agile, reconfigurable, and capable of rapid threat assessment with a high degree of fidelity and certainty. Our design is driven by the needs of users such as law enforcement personnel who must make decisions by evaluating threat signatures in urban settings. The most efficient tool available to identify the nature of the threat object is real-time gamma spectroscopic analysis, as it is fast and has a very low probability of producing false positive alarm conditions. Urban radiological searches are inherently challenged by the rapid and large spatial variation of background gamma radiation, the presence of benign naturally occurring radioactive materials (NORM), and shielded and/or masked threat sources. Multiple spectral anomaly detection algorithms have been developed by national laboratories and commercial vendors. For example, the Gamma Detector Response and Analysis Software (GADRAS), a one-dimensional deterministic radiation transport code capable of calculating gamma-ray spectra using physics-based detector response functions, was developed at Sandia National Laboratories. The nuisance-rejection spectral comparison ratio anomaly detection algorithm (NSCRAD), developed at Pacific Northwest National Laboratory, uses spectral comparison ratios to detect deviations from benign medical and NORM radiation sources and can work despite a strong presence of NORM and/or medical sources. RSL has developed its own wavelet-based gamma energy spectral anomaly detection algorithm called WAVRAD. Test results and relative merits of these different algorithms will be discussed and demonstrated.
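A spectral-comparison-ratio statistic of the general kind mentioned above can be sketched for two energy windows. This is a simplified illustration, not the NSCRAD implementation: window-A counts are predicted from window-B counts via a background-derived ratio, and the deviation is expressed in Poisson standard deviations.

```python
import math

def scr_statistic(counts_a, counts_b, bg_a, bg_b):
    """Two-window spectral comparison ratio sketch: compare the counts
    in energy window A with the counts predicted from window B using
    the background ratio, in units of the Poisson standard deviation
    of the difference."""
    k = bg_a / bg_b                    # ratio learned from background
    expected_a = k * counts_b          # window-A counts predicted from B
    variance = counts_a + k * k * counts_b
    return (counts_a - expected_a) / math.sqrt(variance)
```

A benign measurement that matches the background shape scores near zero, while a source adding counts to one window pushes the statistic to many sigma, regardless of overall count rate.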
Pre-seismic anomalies from optical satellite observations: a review
NASA Astrophysics Data System (ADS)
Jiao, Zhong-Hu; Zhao, Jing; Shan, Xinjian
2018-04-01
Detecting anomalies in optical satellite data prior to strong earthquakes is key to understanding and forecasting earthquake activity because such data capture thermal-radiation-related phenomena during seismic preparation phases. Satellite observations serve as a powerful tool for monitoring earthquake preparation areas at a global scale and in a nearly real-time manner. Over the past several decades, many new data sources have been utilized in this field, and progressive anomaly detection approaches have been developed. This paper reviews the progress and development of pre-seismic anomaly detection technology in this decade. First, precursor parameters, including parameters from the top of the atmosphere, in the atmosphere, and on the Earth's surface, are stated and discussed. Second, different anomaly detection methods, which are used to extract anomalous signals that probably indicate future seismic events, are presented. Finally, certain critical problems with the current research are highlighted, and new developing trends and perspectives for future work are discussed. The development of Earth observation satellites and anomaly detection algorithms can enrich available information sources, provide advanced tools for multilevel earthquake monitoring, and improve short- and medium-term forecasting, which play a large and growing role in pre-seismic anomaly detection research.
Investigation of Axial Electric Field Measurements with Grounded-Wire TEM Surveys
NASA Astrophysics Data System (ADS)
Zhou, Nan-nan; Xue, Guo-qiang; Li, Hai; Hou, Dong-yang
2018-01-01
Grounded-wire transient electromagnetic (TEM) surveying is often performed along the equatorial direction, with the observation lines parallel to the transmitting wire at a certain transmitter-receiver distance. However, this method takes into account only the equatorial component of the electromagnetic field, and little effort has been made to incorporate the other major component along the transmitting wire, here denoted the axial field. To obtain a comprehensive understanding of its fundamental characteristics and to guide the design of a corresponding observation system for reliable anomaly detection, this study investigates the axial electric field for the first time from three crucial aspects, namely its decay curve, plane distribution, and anomaly sensitivity, through both synthetic modeling and a real application to a major coal field in China. The results demonstrate a higher sensitivity of the axial electric field to both high- and low-resistivity anomalies and confirm its great potential for robust anomaly detection in the subsurface.
A Likely Detection of a Two-planet System in a Low-magnification Microlensing Event
NASA Astrophysics Data System (ADS)
Suzuki, D.; Bennett, D. P.; Udalski, A.; Bond, I. A.; Sumi, T.; Han, C.; Kim, Ho-il.; Abe, F.; Asakura, Y.; Barry, R. K.; Bhattacharya, A.; Donachie, M.; Freeman, M.; Fukui, A.; Hirao, Y.; Itow, Y.; Koshimoto, N.; Li, M. C. A.; Ling, C. H.; Masuda, K.; Matsubara, Y.; Muraki, Y.; Nagakane, M.; Onishi, K.; Oyokawa, H.; Ranc, C.; Rattenbury, N. J.; Saito, To.; Sharan, A.; Sullivan, D. J.; Tristram, P. J.; Yonehara, A.; MOA Collaboration; Poleski, R.; Mróz, P.; Skowron, J.; Szymański, M. K.; Soszyński, I.; Kozłowski, S.; Pietrukowicz, P.; Wyrzykowski, Ł.; Ulaczyk, K.; OGLE Collaboration
2018-06-01
We report on the analysis of a microlensing event, OGLE-2014-BLG-1722, that showed two distinct short-term anomalies. The best-fit model to the observed light curves shows that the two anomalies are explained by two planetary-mass-ratio companions to the primary lens. Although a binary-source model is also able to explain the second anomaly, it is marginally ruled out at 3.1σ. The two-planet model indicates that the first anomaly was caused by planet "b" with a mass ratio of q = (4.5 +0.7/−0.6) × 10⁻⁴ and a projected separation in units of the Einstein radius of s = 0.753 ± 0.004. The second anomaly reveals planet "c" with a mass ratio of q₂ = (7.0 +2.3/−1.7) × 10⁻⁴, with Δχ² ≈ 170 compared to the single-planet model. Its separation has two degenerate solutions: the separation of planet c is s₂ = 0.84 ± 0.03 and 1.37 ± 0.04 for the close and wide models, respectively. Unfortunately, this event does not show clear finite-source or microlensing parallax effects; thus, we estimated the physical parameters of the lens system from a Bayesian analysis. This gives the masses of planets b and c as m_b = 56 +51/−33 M⊕ and m_c = 85 +86/−51 M⊕, respectively; they orbit a late-type star with a mass of M_host = 0.40 +0.36/−0.24 M☉ located at D_L = 6.4 +1.3/−1.8 kpc from us. The projected distances between the host and the planets are r⊥,b = 1.5 ± 0.6 au for planet b, and r⊥,c = 1.7 +0.7/−0.6 au and 2.7 +1.1/−1.0 au for the close and wide models of planet c. If the two-planet model is true, then this is the third multiple-planet system detected using the microlensing method, and the first multiple-planet system detected in low-magnification events, which are dominant in the microlensing survey data.
The occurrence rate of multiple cold gas giant systems is estimated using the two such detections and a simple extrapolation of the survey sensitivity of the 6 yr MOA microlensing survey combined with the 4 yr μFUN detection efficiency. It is estimated that 6% ± 2% of stars host two cold giant planets.
Automated Network Anomaly Detection with Learning, Control and Mitigation
ERIC Educational Resources Information Center
Ippoliti, Dennis
2014-01-01
Anomaly detection is a challenging problem that has been researched within a variety of application domains. In network intrusion detection, anomaly based techniques are particularly attractive because of their ability to identify previously unknown attacks without the need to be programmed with the specific signatures of every possible attack.…
Systematic Screening for Subtelomeric Anomalies in a Clinical Sample of Autism
ERIC Educational Resources Information Center
Wassink, Thomas H.; Losh, Molly; Piven, Joseph; Sheffield, Val C.; Ashley, Elizabeth; Westin, Erik R.; Patil, Shivanand R.
2007-01-01
High-resolution karyotyping detects cytogenetic anomalies in 5-10% of cases of autism. Karyotyping, however, may fail to detect abnormalities of chromosome subtelomeres, which are gene rich regions prone to anomalies. We assessed whether panels of FISH probes targeted for subtelomeres could detect abnormalities beyond those identified by…
Microgravity and Electrical Resistivity Techniques for Detection of Caves and Clandestine Tunnels
NASA Astrophysics Data System (ADS)
Crawford, N. C.; Croft, L. A.; Cesin, G. L.; Wilson, S.
2006-05-01
The Center for Cave and Karst Studies, CCKS, has been using microgravity to locate caves from the ground's surface since 1985. The geophysical subsurface investigations began during a period when explosive and toxic vapors were rising from the karst aquifer under Bowling Green into homes, businesses, and schools. The USEPA provided the funding for this Superfund Emergency, and the CCKS was able to drill numerous wells into low-gravity anomalies to confirm and even map the route of caves in the underlying limestone bedrock. In every case, a low-gravity anomaly indicated a bedrock cave, a cave with a collapsed roof or locations where a bedrock cave had collapsed and filled with alluvium. At numerous locations, several wells were cored into microgravity anomalies and in every case, additional wells were drilled on both sides of the anomalies to confirm that the technique was in fact reliable. The wells cored on both sides of the anomalies did not intersect caves but instead intersected virtually solid limestone. Microgravity also easily detected storm sewers and even sanitary sewers, sometimes six meters (twenty feet) beneath the surface. Microgravity has also been used on many occasions to investigate sinkhole collapses. It identified potential collapse areas by detecting voids in the unconsolidated material above bedrock. The system will soon be tested over known tunnels and then during a blind test along a section of the U.S. border at Nogales, Arizona. The CCKS has experimented with other geophysical techniques, particularly ground penetrating radar, seismic and electrical resistivity. In the late 1990s the CCKS started using the Swift/Sting resistivity meter to perform karst geophysical subsurface investigations. The system provides good depth to bedrock data, but it is often difficult to interpret bedrock caves from the modeled data. 
The system now typically used by the CCKS for karst subsurface investigations consists of electrical resistivity traverses followed by microgravity over suspect areas identified in the modeled resistivity data. Some areas of high resistivity indicate caves, but others simply indicate pockets of dry limestone, and the signatures look virtually identical. Therefore, the CCKS performs microgravity over all suspect areas along the resistivity traverses. A low-gravity anomaly that corresponds with a high-resistivity anomaly indicates a cave location. A high-resistivity anomaly that does not also have a low-gravity anomaly indicates a pocket of dry limestone. Numerous cored wells have been drilled both into the anomalies and on both sides to confirm the cave locations and to establish that the technique is accurate. The September 11, 2001 World Trade Center catastrophe was the catalyst for the formation of a program within the CCKS to use these techniques for locating bedrock caves and voids in unconsolidated materials for search and rescue and for locating clandestine tunnels. We are now in our third year of a grant from the Kentucky Science and Technology Center to develop a robot that will measure microgravity and apply other geophysical techniques. The robot has the potential for detecting clandestine tunnels under the U.S. border as well as for military applications.
Koopman Operator Framework for Time Series Modeling and Analysis
NASA Astrophysics Data System (ADS)
Surana, Amit
2018-01-01
We propose an interdisciplinary framework for time series classification, forecasting, and anomaly detection by combining concepts from Koopman operator theory, machine learning, and linear systems and control theory. At the core of this framework is nonlinear dynamic generative modeling of time series using the Koopman operator, which is an infinite-dimensional but linear operator. Rather than working with the underlying nonlinear model, we propose two simpler linear representations, or model forms, based on Koopman spectral properties. We show that these model forms are invariants of the generative model and can be identified directly from data using techniques for computing Koopman spectral properties, without requiring explicit knowledge of the generative model. We also introduce different notions of distances on the space of such model forms, which is essential for model comparison and clustering. We employ the space of Koopman model forms equipped with distance, in conjunction with classical machine learning techniques, to develop a framework for automatic feature generation for time series classification. The forecasting/anomaly detection framework is based on using Koopman model forms along with classical linear systems and control approaches. We demonstrate the proposed framework for human activity classification, and for time series forecasting/anomaly detection in a power grid application.
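Koopman spectral properties can be estimated directly from data via Dynamic Mode Decomposition (DMD), one standard technique for the role the abstract describes; the sketch below is illustrative rather than the paper's specific model forms. The eigenvalues of the best-fit linear map taking each snapshot to its successor approximate the dominant Koopman eigenvalues.

```python
import numpy as np

def dmd_eigenvalues(X, rank=None):
    """Exact DMD on a snapshot matrix X (states in columns, ordered in
    time): SVD-project the one-step linear map onto the leading `rank`
    modes and return its eigenvalues, which estimate Koopman spectral
    properties of the underlying dynamics."""
    X = np.asarray(X, dtype=float)
    X1, X2 = X[:, :-1], X[:, 1:]
    U, s, Vh = np.linalg.svd(X1, full_matrices=False)
    r = rank or len(s)
    U, s, Vh = U[:, :r], s[:r], Vh[:r]
    A_tilde = U.T @ X2 @ Vh.T @ np.diag(1.0 / s)
    return np.linalg.eigvals(A_tilde)
```

For a trajectory generated by a known linear system, the recovered eigenvalues match the system's, which is what makes them usable as invariant features for classification or as a forecasting model.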
Topological anomaly detection performance with multispectral polarimetric imagery
NASA Astrophysics Data System (ADS)
Gartley, M. G.; Basener, W.
2009-05-01
Polarimetric imaging has demonstrated utility for increasing the contrast of manmade targets above natural background clutter. Manual detection of manmade targets in multispectral polarimetric imagery can be challenging and is a subjective process for large datasets. Analyst exploitation may be improved by utilizing conventional anomaly detection algorithms such as RX. In this paper we examine the performance of a relatively new approach to anomaly detection, which leverages topology theory, applied to spectral polarimetric imagery. Detection results for manmade targets embedded in a complex natural background are presented for both the RX and Topological Anomaly Detection (TAD) approaches. We also present detailed results examining detection sensitivities relative to: (1) the number of spectral bands, (2) utilization of Stokes images versus intensity images, and (3) airborne versus spaceborne measurements.
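The conventional RX detector mentioned above scores each pixel's band vector by its squared Mahalanobis distance from the scene statistics. A minimal global-RX sketch (scene mean and covariance estimated from all pixels; real detectors often use local windows):

```python
import numpy as np

def rx_scores(pixels):
    """Global RX anomaly detector: score each pixel's band vector by
    its squared Mahalanobis distance from the scene mean, using the
    scene covariance estimated over all pixels."""
    X = np.asarray(pixels, dtype=float)      # shape (n_pixels, n_bands)
    mu = X.mean(axis=0)
    cov = np.atleast_2d(np.cov(X, rowvar=False))
    cov_inv = np.linalg.inv(cov)
    diff = X - mu
    # Quadratic form diff_i^T cov_inv diff_i for every pixel i.
    return np.einsum('ij,jk,ik->i', diff, cov_inv, diff)
```

Pixels consistent with the background distribution get low scores; a spectrally distinct manmade target stands out with a score many times larger.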
Quantum machine learning for quantum anomaly detection
NASA Astrophysics Data System (ADS)
Liu, Nana; Rebentrost, Patrick
2018-04-01
Anomaly detection is used for identifying data that deviate from "normal" data patterns. Its usage on classical data finds diverse applications in many important areas such as finance, fraud detection, medical diagnoses, data cleaning, and surveillance. With the advent of quantum technologies, anomaly detection of quantum data, in the form of quantum states, may become an important component of quantum applications. Machine-learning algorithms are playing pivotal roles in anomaly detection using classical data. Two widely used algorithms are the kernel principal component analysis and the one-class support vector machine. We find corresponding quantum algorithms to detect anomalies in quantum states. We show that these two quantum algorithms can be performed using resources that are logarithmic in the dimensionality of quantum states. For pure quantum states, these resources can also be logarithmic in the number of quantum states used for training the machine-learning algorithm. This makes these algorithms potentially applicable to big quantum data applications.
Particle Filtering for Model-Based Anomaly Detection in Sensor Networks
NASA Technical Reports Server (NTRS)
Solano, Wanda; Banerjee, Bikramjit; Kraemer, Landon
2012-01-01
A novel technique has been developed for anomaly detection of rocket engine test stand (RETS) data. The objective was to develop a system that postprocesses a CSV file containing the sensor readings and activities (time series) from a rocket engine test, and detects any anomalies that might have occurred during the test. The output consists of the names of the sensors that show anomalous behavior, and the start and end time of each anomaly. In order to reduce the involvement of domain experts significantly, several data-driven approaches have been proposed where models are automatically acquired from the data, thus bypassing the cost and effort of building system models. Many supervised learning methods can efficiently learn operational and fault models, given large amounts of both nominal and fault data. However, for domains such as RETS data, the amount of anomalous data actually available is relatively small, making most supervised learning methods rather ineffective; in general they have met with limited success in anomaly detection. The fundamental problem with existing approaches is that they assume that the data are iid, i.e., independent and identically distributed, which is violated in typical RETS data. None of these techniques naturally exploit the temporal information inherent in time series data from the sensor networks. There are correlations among the sensor readings, not only at the same time, but also across time. However, these approaches have not explicitly identified and exploited such correlations. Given these limitations of model-free methods, there has been renewed interest in model-based methods, specifically graphical methods that reason explicitly over time. The Gaussian Mixture Model (GMM) in a Linear Dynamic System approach assumes that the multi-dimensional test data are a mixture of multivariate Gaussians, and fits a given number of Gaussian clusters with the help of the well-known Expectation-Maximization (EM) algorithm.
The parameters thus learned are used for calculating the joint distribution of the observations. However, this GMM assumption is essentially an approximation and signals the potential viability of non-parametric density estimators. This is the key idea underlying the new approach.
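The GMM-plus-EM step can be sketched in one dimension. This is an illustrative, unoptimised version, not the RETS system's implementation: means are initialised at the data extremes for determinism, and anomalies are the samples with low log-likelihood under the fitted mixture.

```python
import numpy as np

def fit_gmm_1d(x, k=2, iters=100):
    """Minimal EM fit of a 1-D Gaussian mixture: alternate computing
    component responsibilities (E-step) and re-estimating weights,
    means, and variances (M-step)."""
    x = np.asarray(x, dtype=float)
    mu = np.linspace(x.min(), x.max(), k)   # deterministic initialisation
    var = np.full(k, x.var())
    pi = np.full(k, 1.0 / k)
    for _ in range(iters):
        # E-step: responsibility of each component for each sample.
        dens = pi * np.exp(-0.5 * (x[:, None] - mu) ** 2 / var) \
               / np.sqrt(2 * np.pi * var)
        resp = dens / dens.sum(axis=1, keepdims=True)
        # M-step: re-estimate mixture weights, means, and variances.
        nk = resp.sum(axis=0)
        pi, mu = nk / len(x), (resp * x[:, None]).sum(axis=0) / nk
        var = (resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk + 1e-9
    return pi, mu, var

def gmm_log_likelihood(x, pi, mu, var):
    """Per-sample log-likelihood; unusually low values flag anomalies."""
    x = np.asarray(x, dtype=float)
    dens = pi * np.exp(-0.5 * (x[:, None] - mu) ** 2 / var) \
           / np.sqrt(2 * np.pi * var)
    return np.log(dens.sum(axis=1))
```

A reading between the learned operating modes receives a far lower log-likelihood than one near a mode, which is the anomaly signal this family of methods exploits.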
Occurrence and Detectability of Thermal Anomalies on Europa
NASA Astrophysics Data System (ADS)
Hayne, Paul O.; Christensen, Philip R.; Spencer, John R.; Abramov, Oleg; Howett, Carly; Mellon, Michael; Nimmo, Francis; Piqueux, Sylvain; Rathbun, Julie A.
2017-10-01
Endogenic activity is likely on Europa, given its young surface age and ongoing tidal heating by Jupiter. Temperature is a fundamental signature of activity, as witnessed on Enceladus, where plumes emanate from vents with strongly elevated temperatures. Recent observations suggest the presence of similar water plumes at Europa. Even if plumes are uncommon, resurfacing may produce elevated surface temperatures, perhaps due to near-surface liquid water. Detecting endogenic activity on Europa is one of the primary mission objectives of NASA's planned Europa Clipper flyby mission. Here, we use a probabilistic model to assess the likelihood of detectable thermal anomalies on the surface of Europa. The Europa Thermal Emission Imaging System (E-THEMIS) investigation is designed to characterize Europa's thermal behavior and identify any thermal anomalies due to recent or ongoing activity. We define "detectability" on the basis of expected E-THEMIS measurements, which include multi-spectral infrared emission, both day and night. Thermal anomalies on Europa may take a variety of forms, depending on the resurfacing style, frequency, and duration of events: 1) subsurface melting due to hot spots, 2) shear heating on faults, and 3) eruptions of liquid water or warm ice on the surface. We use numerical and analytical models to estimate temperatures for these features. Once activity ceases, lifetimes of thermal anomalies are estimated to be 100-1000 yr. On average, Europa's 10-100 Myr surface age implies a resurfacing rate of ~3-30 km2/yr. The typical size of resurfacing features determines their frequency of occurrence. For example, if ~100 km2 chaos features dominate recent resurfacing, we expect one event every few years to decades. Smaller features, such as double-ridges, may be active much more frequently. We model each feature type as a statistically independent event, with probabilities weighted by their observed coverage of Europa's surface.
Our results show that if Europa is resurfaced continuously by the processes considered, there is a >99% chance that E-THEMIS will detect a thermal anomaly due to endogenic activity. Therefore, if no anomalies are detected, these models can be ruled out or revised.
Hu, Erzhong; Nosato, Hirokazu; Sakanashi, Hidenori; Murakawa, Masahiro
2013-01-01
Capsule endoscopy is a patient-friendly endoscopic technique widely used in gastrointestinal examination. However, the efficacy of diagnosis is restricted by the large quantity of images. This paper presents a modified anomaly detection method by which both known and unknown anomalies in capsule endoscopy images of the small intestine are expected to be detected. To achieve this goal, the paper introduces feature extraction using a non-linear color conversion and Higher-order Local Auto-Correlation (HLAC) features, and makes use of image partitioning and a subspace method for anomaly detection. Experiments were conducted on several major anomalies with combinations of the proposed techniques. The proposed method achieved 91.7% and 100% detection accuracy for swelling and bleeding, respectively, demonstrating its effectiveness.
ISHM Implementation for Constellation Systems
NASA Technical Reports Server (NTRS)
Figueroa, Fernando; Holland, Randy; Schmalzel, John; Duncavage, Dan; Crocker, Alan; Alena, Rick
2006-01-01
Integrated System Health Management (ISHM) is a capability focused on determining the condition (health) of every element in a complex system (detecting anomalies, diagnosing causes, and predicting future anomalies) and on providing data, information, and knowledge (DIaK), not just data, to control systems for safe and effective operation. This capability is currently provided by large teams of people, primarily on the ground, but needs to be embedded in on-board systems to a greater degree to enable NASA's new Exploration Mission (long-term travel and stay in space), while increasing safety and decreasing the life-cycle costs of systems (vehicles; platforms; bases or outposts; and ground test, launch, and processing operations). This viewgraph presentation reviews the use of ISHM for the Constellation system.
Anomaly Detection Techniques with Real Test Data from a Spinning Turbine Engine-Like Rotor
NASA Technical Reports Server (NTRS)
Abdul-Aziz, Ali; Woike, Mark R.; Oza, Nikunj C.; Matthews, Bryan L.
2012-01-01
Online detection techniques to monitor the health of rotating engine components are becoming increasingly attractive to aircraft engine manufacturers as a means of increasing operational safety and lowering maintenance costs. Health monitoring remains challenging to implement, especially in the presence of scattered loading conditions, crack sizes, component geometries, and material properties. The current trend, however, is to utilize noninvasive health monitoring or nondestructive techniques to detect hidden flaws and mini-cracks before any catastrophic event occurs. These techniques also evaluate material discontinuities and other anomalies that have grown to the level of critical defects that can lead to failure. Generally, health monitoring is highly dependent on sensor systems capable of performing in various engine environmental conditions and able to transmit a signal upon a predetermined crack length, while having a neutral effect on the overall performance of the engine system.
Imaging the fetal central nervous system
De Keersmaecker, B.; Claus, F.; De Catte, L.
2011-01-01
The low prevalence of fetal central nervous system anomalies results in a restricted level of exposure and limited experience for most obstetricians involved in prenatal ultrasound. Sonographic guidelines for screening the fetal brain in a systematic way will probably increase the detection rate and enhance correct referral to a tertiary care center, offering the patient a multidisciplinary approach to the condition. This paper aims to elaborate on the prenatal sonographic and magnetic resonance imaging (MRI) diagnosis and outcome of various central nervous system malformations. Detailed neurosonographic investigation has become available through high-resolution vaginal ultrasound probes and the development of a variety of 3D ultrasound modalities, e.g., ultrasound tomographic imaging. In addition, fetal MRI is particularly helpful in the detection of gyration and neurulation anomalies and disorders of the gray and white matter. PMID:24753859
Unsupervised Ensemble Anomaly Detection Using Time-Periodic Packet Sampling
NASA Astrophysics Data System (ADS)
Uchida, Masato; Nawata, Shuichi; Gu, Yu; Tsuru, Masato; Oie, Yuji
We propose an anomaly detection method for finding patterns in network traffic that do not conform to legitimate (i.e., normal) behavior. The proposed method trains a baseline model describing the normal behavior of network traffic without using manually labeled traffic data. The trained baseline model is used as the basis for comparison with the audit network traffic. This anomaly detection works in an unsupervised manner through the use of time-periodic packet sampling, which is used in a manner that differs from its intended purpose — the lossy nature of packet sampling is used to extract normal packets from the unlabeled original traffic data. Evaluation using actual traffic traces showed that the proposed method has false positive and false negative rates in the detection of anomalies regarding TCP SYN packets comparable to those of a conventional method that uses manually labeled traffic data to train the baseline model. Performance variation due to the probabilistic nature of sampled traffic data is mitigated by using ensemble anomaly detection that collectively exploits multiple baseline models in parallel. Alarm sensitivity is adjusted for the intended use by using maximum- and minimum-based anomaly detection that effectively take advantage of the performance variations among the multiple baseline models. Testing using actual traffic traces showed that the proposed anomaly detection method performs as well as one using manually labeled traffic data and better than one using randomly sampled (unlabeled) traffic data.
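The ensemble step described above, combining alarms from multiple baseline models with maximum- or minimum-based decisions, can be illustrated independently of the packet-sampling machinery. The following NumPy sketch is an illustrative toy (names, thresholds, and score distributions are assumptions, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(5)

# Illustrative anomaly scores from 5 baseline models for 100 observations;
# each row plays the role of a model trained on a different sampled baseline.
scores = rng.normal(0.0, 1.0, size=(5, 100))
scores[:, :3] += 5.0  # three genuinely anomalous observations

def max_based(scores, threshold):
    """Sensitive setting: alarm if ANY baseline model exceeds the threshold."""
    return scores.max(axis=0) > threshold

def min_based(scores, threshold):
    """Conservative setting: alarm only if ALL baseline models exceed it."""
    return scores.min(axis=0) > threshold

sensitive = max_based(scores, 2.0)
conservative = min_based(scores, 2.0)
```

The maximum-based rule trades extra false positives for higher sensitivity; the minimum-based rule does the reverse, which is the alarm-sensitivity adjustment the abstract refers to.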
A Hybrid Semi-Supervised Anomaly Detection Model for High-Dimensional Data.
Song, Hongchao; Jiang, Zhuqing; Men, Aidong; Yang, Bo
2017-01-01
Anomaly detection, which aims to identify observations that deviate from a nominal sample, is a challenging task for high-dimensional data. Traditional distance-based anomaly detection methods compute neighborhood distances between observations and suffer from the curse of dimensionality in high-dimensional space; for example, the distances between any pair of samples become similar and each sample may appear to be an outlier. In this paper, we propose a hybrid semi-supervised anomaly detection model for high-dimensional data that consists of two parts: a deep autoencoder (DAE) and an ensemble k-nearest-neighbor-graph- (K-NNG-) based anomaly detector. Benefiting from its capacity for nonlinear mapping, the DAE is first trained to learn the intrinsic features of a high-dimensional dataset so as to represent the data in a more compact subspace. Several nonparametric KNN-based anomaly detectors are then built from different subsets that are randomly sampled from the whole dataset. The final prediction is made by all the anomaly detectors. The performance of the proposed method is evaluated on several real-life datasets, and the results confirm that the proposed hybrid model improves detection accuracy and reduces computational complexity.
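The second stage of such a hybrid model, the ensemble of k-NN detectors built on random subsets, can be sketched on its own. The NumPy sketch below operates directly on already-compressed features rather than training a DAE, and all function names and parameters are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def knn_score(train, x, k=3):
    """Anomaly score of x: mean distance to its k nearest training points."""
    d = np.linalg.norm(train - x, axis=1)
    return np.sort(d)[:k].mean()

def ensemble_score(train, x, n_detectors=5, subset_frac=0.6, k=3):
    """Average the k-NN score over detectors built on random subsets,
    mimicking an ensemble of KNN-based detectors."""
    n = len(train)
    m = max(k, int(subset_frac * n))
    scores = [knn_score(train[rng.choice(n, m, replace=False)], x, k)
              for _ in range(n_detectors)]
    return float(np.mean(scores))

# Toy "compact subspace" features: the nominal sample clusters near the origin.
nominal = rng.normal(0.0, 1.0, size=(200, 3))
normal_score = ensemble_score(nominal, np.zeros(3))
outlier_score = ensemble_score(nominal, np.full(3, 8.0))
```

A point far from the nominal cluster receives a much larger ensemble score than a point inside it, which is the signal the final prediction aggregates.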
A Database of Computer Attacks for the Evaluation of Intrusion Detection Systems
1999-06-01
administrator whenever a system binary file (such as the ps, login, or ls program) is modified. Normal users have no legitimate reason to alter these files...development of EMERALD [46], which combines statistical anomaly detection from NIDES with signature verification. Specification-based intrusion detection...the creation of a single host that can act as many hosts. Daemons that provide network services—including telnetd, ftpd, and login — display banners
2009-07-01
nonferrous metallic objects. The applicability of the instrument for ordnance and explosives (OE) detection has been widely demonstrated at sites...was cleared of all metallic items. This clearing of the metallic anomalies from the 2 acre Active Response Demonstration Site was broken into three...with their Multiple Towed Array Detection System (MTADS). This system is known for its effectiveness and ability to detect metallic items. Once the
Contrast enhancing solution for use in confocal microscopy
Tannous, Zeina; Torres, Abel; Gonzalez, Salvador
2006-10-31
A method of optically detecting a tumor during surgery. The method includes imaging at least one test point defined on the tumor using a first optical imaging system to provide a first tumor image. The method further includes excising a first predetermined layer of the tumor for forming an in-vivo defect area. A predetermined contrast enhancing solution is disposed on the in-vivo defect area, which is adapted to interact with at least one cell anomaly, such as basal cell carcinoma, located on the in-vivo defect area for optically enhancing the cell anomaly. Thereafter the defect area can be optically imaged to provide a clear and bright representation of the cell anomaly to aid a surgeon while surgically removing the cell anomaly.
Operational field evaluation of the PAC-MAG man-portable magnetometer array
NASA Astrophysics Data System (ADS)
Keranen, Joe; Topolosky, Zeke; Schultz, Gregory; Miller, Jonathan
2013-06-01
Detection and discrimination of unexploded ordnance (UXO) in areas of prior conflict is of high importance to the international community and the United States government. For humanitarian applications, sensors and processing methods need to be robust, reliable, and easy to train and implement using indigenous UXO removal personnel. This paper describes system characterization, system testing, and a continental United States (CONUS) Operational Field Evaluations (OFE) of the PAC-MAG man-portable UXO detection system. System testing occurred at a government test facility in June 2010 and December 2011, and the OFE occurred at the same location in June 2012. NVESD and White River Technologies personnel were present for all testing and evaluation. The PAC-MAG system is a man-portable magnetometer array for the detection and characterization of ferrous UXO. System hardware includes four Cesium vapor magnetometers for detection, a Real-time Kinematic Global Positioning System (RTK-GPS) for sensor positioning, an electronics module for merging array data and WiFi communications, and a tablet computer for transmitting and logging data. An odometer, or "hipchain" encoder, provides position information in GPS-denied areas. System software elements include data logging software and post-processing software for detection and characterization of ferrous anomalies. The output of the post-processing software is a dig list containing locations of potential UXO(s), formatted for import into the system GPS equipment for reacquisition of anomalies. Results from system characterization and the OFE will be described.
Automated synthesis, insertion and detection of polyps for CT colonography
NASA Astrophysics Data System (ADS)
Sezille, Nicolas; Sadleir, Robert J. T.; Whelan, Paul F.
2003-03-01
CT Colonography (CTC) is a new non-invasive colon imaging technique which has the potential to replace conventional colonoscopy for colorectal cancer screening. A novel system which facilitates automated detection of colorectal polyps at CTC is introduced. As exhaustive testing of such a system using real patient data is not feasible, more complete testing is achieved through the synthesis of artificial polyps and their insertion into real datasets. The polyp insertion is semi-automatic: candidate points are manually selected using a custom GUI, and suitable points are determined automatically from an analysis of the local neighborhood surrounding each candidate point. Local density and orientation information are used to generate polyps based on an elliptical model. Anomalies are identified from the modified dataset by analyzing the axial images. Detected anomalies are classified as potential polyps or natural features using 3D morphological techniques. The final results are flagged for review. The system was evaluated using 15 scenarios. The sensitivity of the system was found to be 65% with 34% false positive detections. Automated diagnosis at CTC is possible, and thorough testing is facilitated by augmenting real patient data with computer-generated polyps. Ultimately, automated diagnosis will enhance standard CTC and increase performance.
Hierarchical Kohonenen net for anomaly detection in network security.
Sarasamma, Suseela T; Zhu, Qiuming A; Huff, Julie
2005-04-01
A novel multilevel hierarchical Kohonen Net (K-Map) for an intrusion detection system is presented. Each level of the hierarchical map is modeled as a simple winner-take-all K-Map. One significant advantage of this multilevel hierarchical K-Map is its computational efficiency. Unlike other statistical anomaly detection methods such as nearest neighbor approach, K-means clustering or probabilistic analysis that employ distance computation in the feature space to identify the outliers, our approach does not involve costly point-to-point computation in organizing the data into clusters. Another advantage is the reduced network size. We use the classification capability of the K-Map on selected dimensions of data set in detecting anomalies. Randomly selected subsets that contain both attacks and normal records from the KDD Cup 1999 benchmark data are used to train the hierarchical net. We use a confidence measure to label the clusters. Then we use the test set from the same KDD Cup 1999 benchmark to test the hierarchical net. We show that a hierarchical K-Map in which each layer operates on a small subset of the feature space is superior to a single-layer K-Map operating on the whole feature space in detecting a variety of attacks in terms of detection rate as well as false positive rate.
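The building block described above, a winner-take-all K-Map whose clusters are labeled by a confidence measure, can be sketched at a single level. This is a pure-NumPy toy with illustrative names and data, not the authors' multilevel implementation or the KDD Cup feature set:

```python
import numpy as np

rng = np.random.default_rng(1)

def train_kmap(data, n_units=4, epochs=20, lr=0.5):
    """Single-level winner-take-all K-Map: only the winning unit
    moves toward each input (no neighborhood function)."""
    units = data[rng.choice(len(data), n_units, replace=False)].copy()
    for epoch in range(epochs):
        rate = lr * (1 - epoch / epochs)  # decaying learning rate
        for x in data:
            w = np.argmin(np.linalg.norm(units - x, axis=1))
            units[w] += rate * (x - units[w])
    return units

def label_clusters(units, data, labels):
    """Confidence measure: fraction of attack records mapped to each unit."""
    conf = np.zeros(len(units))
    counts = np.zeros(len(units))
    for x, y in zip(data, labels):
        w = np.argmin(np.linalg.norm(units - x, axis=1))
        conf[w] += y
        counts[w] += 1
    return conf / np.maximum(counts, 1)

# Toy 2-D records: a normal cluster and a well-separated attack cluster.
normal = rng.normal(0.0, 0.5, size=(100, 2))
attack = rng.normal(5.0, 0.5, size=(100, 2))
data = np.vstack([normal, attack])
labels = np.array([0] * 100 + [1] * 100)

units = train_kmap(data)
conf = label_clusters(units, data, labels)
```

Units capturing the attack cluster end up with high confidence and units in the normal cluster with low confidence, which is how labeled clusters can then classify test records.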
Low-Cost Ultra-Wideband EM Sensor for UXO Detection and Classification
2012-04-01
Mechanical System MOTU Mark of the Unicorn – a brand name for audio equipment MR Magneto-Resistance NIST National Institute of Standards and...magnetic surveys are able to detect ferrous bodies at a greater distance than EMI measurements. Consequently, the magnetic anomaly is broader
2015-06-09
anomaly detection, which is generally considered part of high-level information fusion (HLIF) involving temporal-geospatial data as well as meta-data... Anomaly detection in the Maritime defence and security domain typically focusses on trying to identify vessels that are behaving in an unusual...manner compared with lawful vessels operating in the area – an applied case of target detection among distractors. Anomaly detection is a complex problem
NASA Astrophysics Data System (ADS)
Doll, William E.; Bell, David T.; Gamey, T. Jeffrey; Beard, Les P.; Sheehan, Jacob R.; Norton, Jeannemarie
2010-04-01
Over the past decade, notable progress has been made in the performance of airborne geophysical systems for mapping and detection of unexploded ordnance in terrestrial and shallow marine environments. For magnetometer systems, the most significant improvements include development of denser magnetometer arrays and vertical gradiometer configurations. In prototype analyses and recent Environmental Security Technology Certification Program (ESTCP) assessments using new production systems the greatest sensitivity has been achieved with a vertical gradiometer configuration, despite model-based survey design results which suggest that dense total-field arrays would be superior. As effective as magnetometer systems have proven to be at many sites, they are inadequate at sites where basalts and other ferrous geologic formations or soils produce anomalies that approach or exceed those of target ordnance items. Additionally, magnetometer systems are ineffective where detection of non-ferrous ordnance items is of primary concern. Recent completion of the Battelle TEM-8 airborne time-domain electromagnetic system represents the culmination of nearly nine years of assessment and development of airborne electromagnetic systems for UXO mapping and detection. A recent ESTCP demonstration of this system in New Mexico showed that it was able to detect 99% of blind-seeded ordnance items, 81 mm and larger, and that it could be used to map in detail a bombing target on a basalt flow where previous airborne magnetometer surveys had failed. The probability of detection for the TEM-8 in the blind-seeded study area was better than that reported for a dense-array total-field magnetometer demonstration of the same blind-seeded site, and the TEM-8 system successfully detected these items with less than half as many anomaly picks as the dense-array total-field magnetometer system.
Post-processing for improving hyperspectral anomaly detection accuracy
NASA Astrophysics Data System (ADS)
Wu, Jee-Cheng; Jiang, Chi-Ming; Huang, Chen-Liang
2015-10-01
Anomaly detection is an important topic in the exploitation of hyperspectral data. Based on the Reed-Xiaoli (RX) detector and a morphology operator, this research proposes a novel technique for improving the accuracy of hyperspectral anomaly detection. Firstly, the RX-based detector is used to process a given input scene. Then, a post-processing scheme using a morphology operator is employed to detect those pixels around high-scoring anomaly pixels. Tests were conducted using two real hyperspectral images with ground-truth information, and the results, based on receiver operating characteristic curves, illustrated that the proposed method reduced the false alarm rates of the RX-based detector.
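The two-stage idea, an RX detection pass followed by a morphological pass that picks up pixels around high-scoring ones, can be sketched in NumPy. Here a global RX detector and a 3x3 binary dilation stand in for the paper's detector and morphology operator; the scene, threshold, and sizes are illustrative assumptions:

```python
import numpy as np

def rx_scores(cube):
    """Global RX detector: squared Mahalanobis distance of each pixel's
    spectrum from the scene mean."""
    h, w, b = cube.shape
    X = cube.reshape(-1, b)
    mu = X.mean(axis=0)
    cov_inv = np.linalg.inv(np.cov(X, rowvar=False))
    d = X - mu
    return np.einsum('ij,jk,ik->i', d, cov_inv, d).reshape(h, w)

def dilate(mask):
    """3x3 binary dilation (the post-processing step): flag pixels
    adjacent to high-scoring anomaly pixels. Uses np.roll, so edges wrap."""
    out = mask.copy()
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            out |= np.roll(np.roll(mask, dy, axis=0), dx, axis=1)
    return out

rng = np.random.default_rng(2)
cube = rng.normal(0.0, 1.0, size=(16, 16, 5))  # toy 5-band scene
cube[8, 8] += 10.0                             # implanted anomaly
mask = rx_scores(cube) > 30.0                  # threshold chosen for this toy
grown = dilate(mask)
```

Dilation only ever adds pixels around detections; in the paper's setting this recovers anomaly pixels whose RX score fell just below threshold.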
An incremental anomaly detection model for virtual machines.
Zhang, Hancui; Chen, Shuyu; Liu, Jun; Zhou, Zhen; Wu, Tianshu
2017-01-01
The Self-Organizing Map (SOM) algorithm, an unsupervised learning method, has been applied in anomaly detection due to its capabilities of self-organization and automatic anomaly prediction. However, because the algorithm is initialized randomly, it takes a long time to train a detection model. Besides, cloud platforms with large-scale virtual machines are prone to performance anomalies due to their highly dynamic and resource-sharing characteristics, which leaves the algorithm with low accuracy and low scalability. To address these problems, an Improved Incremental Self-Organizing Map (IISOM) model is proposed for anomaly detection of virtual machines. In this model, a heuristic-based initialization algorithm and a Weighted Euclidean Distance (WED) algorithm are introduced into SOM to speed up the training process and improve model quality. Meanwhile, a neighborhood-based searching algorithm is presented to accelerate detection by taking into account the large scale and highly dynamic nature of virtual machines on cloud platforms. To demonstrate its effectiveness, experiments were performed on the common benchmark KDD Cup dataset and on a real dataset. Results suggest that IISOM has advantages in the accuracy and convergence velocity of anomaly detection for virtual machines on cloud platforms.
Novel Hyperspectral Anomaly Detection Methods Based on Unsupervised Nearest Regularized Subspace
NASA Astrophysics Data System (ADS)
Hou, Z.; Chen, Y.; Tan, K.; Du, P.
2018-04-01
Anomaly detection has been of great interest in hyperspectral imagery analysis. Most conventional anomaly detectors merely take advantage of the spectral and spatial information within neighboring pixels. In this paper, two methods, the Unsupervised Nearest Regularized Subspace-based with Outlier Removal Anomaly Detector (UNRSORAD) and Local Summation UNRSORAD (LSUNRSORAD), are proposed, based on the concept that each pixel in the background can be approximately represented by its spatial neighborhood, while anomalies cannot. Using a dual window, an approximation of each testing pixel is obtained as a linear combination of the surrounding data. The existence of outliers in the dual window will affect detection accuracy, so the proposed detectors remove outlier pixels that are significantly different from the majority of pixels. To make full use of the local spatial distribution information in the pixels neighboring the pixel under test, we adopt a local-summation dual-window sliding strategy. The residual image is constituted by subtracting the predicted background from the original hyperspectral imagery, and anomalies can be detected in the residual image. Experimental results show that the proposed methods greatly improve detection accuracy compared with other traditional detection methods.
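The core dual-window idea can be sketched as follows: represent the pixel under test as a least-squares linear combination of its outer-window neighbors, with the inner window excluded as a guard region, and use the residual norm as the anomaly score. This NumPy sketch omits the outlier-removal and local-summation refinements, and the scene and window sizes are illustrative assumptions:

```python
import numpy as np

def dual_window_score(cube, y, x, inner=1, outer=3):
    """Residual norm after representing the test pixel as a least-squares
    linear combination of its outer-window neighbors (guard band excluded)."""
    h, w, b = cube.shape
    nbrs = []
    for dy in range(-outer, outer + 1):
        for dx in range(-outer, outer + 1):
            if max(abs(dy), abs(dx)) <= inner:
                continue  # skip the inner guard window (and the pixel itself)
            yy, xx = y + dy, x + dx
            if 0 <= yy < h and 0 <= xx < w:
                nbrs.append(cube[yy, xx])
    A = np.array(nbrs).T               # bands x neighbors
    target = cube[y, x]
    coef, *_ = np.linalg.lstsq(A, target, rcond=None)
    return float(np.linalg.norm(target - A @ coef))

rng = np.random.default_rng(3)
base = rng.normal(0.0, 1.0, size=50)   # 50-band background spectrum
cube = np.tile(base, (16, 16, 1)) + rng.normal(0.0, 0.01, size=(16, 16, 50))
cube[8, 8] += 5.0                      # implanted anomaly
anomaly_score = dual_window_score(cube, 8, 8)
background_score = dual_window_score(cube, 3, 3)
```

Background pixels are reproduced almost exactly by their neighborhood, so their residual is near zero, while the implanted anomaly cannot be represented and receives a large residual.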
Rassam, Murad A.; Zainal, Anazida; Maarof, Mohd Aizaini
2013-01-01
Wireless Sensor Networks (WSNs) are important and necessary platforms for the future as the concept of the “Internet of Things” has emerged lately. They are used for monitoring, tracking, or control in many applications in industry, health care, habitat monitoring, and the military. However, the quality of data collected by sensor nodes is affected by anomalies that occur due to various reasons, such as node failures, reading errors, unusual events, and malicious attacks. Therefore, anomaly detection is a necessary process to ensure the quality of sensor data before it is utilized for making decisions. In this review, we present the challenges of anomaly detection in WSNs and state the requirements to design efficient and effective anomaly detection models. We then review the latest advancements of data anomaly detection research in WSNs and classify current detection approaches in five main classes based on the detection methods used to design these approaches. Varieties of the state-of-the-art models for each class are covered and their limitations are highlighted to provide ideas for potential future works. Furthermore, the reviewed approaches are compared and evaluated based on how well they meet the stated requirements. Finally, the general limitations of current approaches are mentioned and further research opportunities are suggested and discussed. PMID:23966182
NASA Astrophysics Data System (ADS)
Pacaud, F.; Micoulaut, M.
2015-08-01
The thermodynamic, dynamic, structural, and rigidity properties of densified liquid germania (GeO2) have been investigated using classical molecular dynamics simulation. We construct from a thermodynamic framework an analytical equation of state for the liquid allowing the possible detection of thermodynamic precursors (extrema of the derivatives of the free energy), which usually indicate the possibility of a liquid-liquid transition. It is found that for the present germania system, such precursors and the possible underlying liquid-liquid transition are hidden by the slowing down of the dynamics with decreasing temperature. In this respect, germania behaves quite differently when compared to parent tetrahedral systems such as silica or water. We then detect a diffusivity anomaly (a maximum of diffusion with changing density/volume) that is strongly correlated with changes in coordinated species, and the softening of bond-bending (BB) topological constraints that decrease the liquid rigidity and enhance transport. The diffusivity anomaly is finally substantiated from a Rosenfeld-type scaling law linked to the pair correlation entropy, and to structural relaxation.
Multi-Level Anomaly Detection on Time-Varying Graph Data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bridges, Robert A; Collins, John P; Ferragut, Erik M
This work presents a novel modeling and analysis framework for graph sequences which addresses the challenge of detecting and contextualizing anomalies in labelled, streaming graph data. We introduce a generalization of the BTER model of Seshadhri et al. by adding flexibility to community structure, and use this model to perform multi-scale graph anomaly detection. Specifically, probability models describing coarse subgraphs are built by aggregating probabilities at finer levels, and these closely related hierarchical models simultaneously detect deviations from expectation. This technique provides insight into a graph's structure and internal context that may shed light on a detected event. Additionally, this multi-scale analysis facilitates intuitive visualizations by allowing users to narrow focus from an anomalous graph to particular subgraphs or nodes causing the anomaly. For evaluation, two hierarchical anomaly detectors are tested against a baseline Gaussian method on a series of sampled graphs. We demonstrate that our graph statistics-based approach outperforms both a distribution-based detector and the baseline in a labeled setting with community structure, and it accurately detects anomalies in synthetic and real-world datasets at the node, subgraph, and graph levels. To illustrate the accessibility of information made possible via this technique, the anomaly detector and an associated interactive visualization tool are tested on NCAA football data, where teams and conferences that moved within the league are identified with perfect recall, and precision greater than 0.786.
Volcanic activity and satellite-detected thermal anomalies at Central American volcanoes
NASA Technical Reports Server (NTRS)
Stoiber, R. E. (Principal Investigator); Rose, W. I., Jr.
1973-01-01
The author has identified the following significant results. A large nuee ardente eruption occurred at Santiaguito volcano, within the test area, on 16 September 1973. Through a system of local observers, the eruption has been described, reported to the international scientific community, the extent of the affected area mapped, and the new ash sampled. A more extensive report on this event will be prepared. The eruption is an excellent example of the kind of volcanic situation in which satellite thermal imagery might be useful. The Santiaguito dome is a complex mass with a whole series of historically active vents. Its location makes access difficult, yet its activity is of great concern to the large agricultural populations who live downslope. Santiaguito has produced a number of large eruptions with little apparent warning. In the earlier ground survey, large thermal anomalies were identified at Santiaguito. There is no way of knowing whether satellite monitoring could have detected changes in thermal anomaly patterns related to this recent event, but the position of thermal anomalies on Santiaguito and any changes in their character would be relevant information.
Estimation of anomaly location and size using electrical impedance tomography.
Kwon, Ohin; Yoon, Jeong Rock; Seo, Jin Keun; Woo, Eung Je; Cho, Young Gu
2003-01-01
We developed a new algorithm that estimates the locations and sizes of anomalies in an electrically conducting medium based on the electrical impedance tomography (EIT) technique. When only boundary current and voltage measurements are available, it is not practically feasible to reconstruct accurate high-resolution cross-sectional conductivity or resistivity images of a subject. In this paper, we focus our attention on the estimation of the locations and sizes of anomalies with different conductivity values compared with the background tissues. We showed the performance of the algorithm in experimental results using a 32-channel EIT system and a saline phantom. With about 1.73% measurement error in the boundary current-voltage data, we found that the minimal size (area) of a detectable anomaly is about 0.72% of the size (area) of the phantom. Potential applications include the monitoring of impedance-related physiological events and bubble detection in two-phase flow. Since this new algorithm requires neither a forward solver nor a time-consuming minimization process, it is fast enough for various real-time applications in medicine and nondestructive testing.
Remote detection of geobotanical anomalies associated with hydrocarbon microseepage
NASA Technical Reports Server (NTRS)
Rock, B. N.
1985-01-01
As part of the continuing study of the Lost River, West Virginia NASA/Geosat Test Case Site, an extensive soil gas survey of the site was conducted during the summer of 1983. This soil gas survey has identified an order of magnitude methane, ethane, propane, and butane anomaly that is precisely coincident with the linear maple anomaly reported previously. This and other maple anomalies were previously suggested to be indicative of anaerobic soil conditions associated with hydrocarbon microseepage. In vitro studies support the view that anomalous distributions of native tree species tolerant of anaerobic soil conditions may be useful indicators of methane microseepage in heavily vegetated areas of the United States characterized by deciduous forest cover. Remote sensing systems which allow discrimination and mapping of native tree species and/or species associations will provide the exploration community with a means of identifying vegetation distributional anomalies indicative of microseepage.
2015-09-21
... this framework, MIT LL carried out a one-year proof-of-concept study to determine the capabilities and challenges in the detection of anomalies in ... extremely large graphs [5]. Under this effort, two real datasets were considered, and algorithms for data modeling and anomaly detection were developed ... is required in a well-defined experimental framework for the detection of anomalies in very large graphs. This study is intended to inform future ...
Lidar detection algorithm for time and range anomalies.
Ben-David, Avishai; Davidson, Charles E; Vanderbeek, Richard G
2007-10-10
A new detection algorithm for lidar applications has been developed. The detection is based on hyperspectral anomaly detection, implemented for time anomaly, where the question "is a target (aerosol cloud) present at range R within time t1 to t2?" is addressed, and for range anomaly, where the question "is a target present at time t within ranges R1 and R2?" is addressed. A detection score significantly different in magnitude from the detection scores for background measurements suggests that an anomaly (interpreted as the presence of a target signal in space/time) exists. The algorithm employs an optional preprocessing stage in which undesired oscillations and artifacts are filtered out with a low-rank orthogonal projection technique. The filtering technique adaptively removes the one-over-range-squared dependence of the background contribution of the lidar signal and also aids visualization of features in the data when the signal-to-noise ratio is low. A Gaussian-mixture probability model for two hypotheses (anomaly present or absent) is computed with an expectation-maximization algorithm to produce a detection threshold and probabilities of detection and false alarm. Results of the algorithm for CO2 lidar measurements of clouds of the bioaerosols Bacillus atrophaeus (formerly known as Bacillus subtilis niger, BG) and Pantoea agglomerans, Pa (formerly known as Erwinia herbicola, Eh) are shown and discussed.
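As a rough, self-contained sketch of the two-hypothesis idea described above (not the authors' implementation), the snippet below fits a two-component Gaussian mixture to synthetic detection scores with a hand-rolled EM loop and derives a detection threshold plus detection and false-alarm probabilities; all data, function names, and parameter values are illustrative assumptions.

```python
import numpy as np

def em_two_gaussians(x, iters=200):
    """Fit a two-component 1-D Gaussian mixture by EM.

    Returns (weights, means, stds); components are initialized at the
    10th/90th percentiles so they start on opposite sides of the data."""
    x = np.asarray(x, dtype=float)
    mu = np.array([np.percentile(x, 10), np.percentile(x, 90)])
    sd = np.array([x.std(), x.std()]) + 1e-9
    w = np.array([0.5, 0.5])
    for _ in range(iters):
        # E-step: posterior responsibility of each component per sample
        pdf = w * np.exp(-0.5 * ((x[:, None] - mu) / sd) ** 2) / (sd * np.sqrt(2 * np.pi))
        r = pdf / pdf.sum(axis=1, keepdims=True)
        # M-step: re-estimate weights, means, and standard deviations
        n = r.sum(axis=0)
        w = n / len(x)
        mu = (r * x[:, None]).sum(axis=0) / n
        sd = np.sqrt((r * (x[:, None] - mu) ** 2).sum(axis=0) / n) + 1e-9
    return w, mu, sd

rng = np.random.default_rng(0)
scores = np.concatenate([rng.normal(0.0, 1.0, 900),   # background detection scores
                         rng.normal(6.0, 1.0, 100)])  # anomaly (target) scores
w, mu, sd = em_two_gaussians(scores)
lo, hi = int(np.argmin(mu)), int(np.argmax(mu))

# Threshold: where the two weighted component densities cross, found
# numerically on a grid between the two fitted means.
grid = np.linspace(mu[lo], mu[hi], 10000)
dens_lo = w[lo] / sd[lo] * np.exp(-0.5 * ((grid - mu[lo]) / sd[lo]) ** 2)
dens_hi = w[hi] / sd[hi] * np.exp(-0.5 * ((grid - mu[hi]) / sd[hi]) ** 2)
threshold = grid[np.argmin(np.abs(dens_lo - dens_hi))]

p_detect = (rng.normal(6.0, 1.0, 10000) > threshold).mean()  # detection probability
p_false = (rng.normal(0.0, 1.0, 10000) > threshold).mean()   # false-alarm probability
```

With well-separated score populations, the crossing point of the weighted densities serves as the detection threshold, from which both error probabilities follow directly.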
Recent Results on "Approximations to Optimal Alarm Systems for Anomaly Detection"
NASA Technical Reports Server (NTRS)
Martin, Rodney Alexander
2009-01-01
An optimal alarm system and its approximations may use Kalman filtering for univariate linear dynamic systems driven by Gaussian noise to provide a layer of predictive capability. Kalman-filter predictions of future process values, together with a fixed critical threshold, can be used to construct a candidate level-crossing event over a predetermined prediction window. In this scenario, an optimal alarm system can be designed to elicit the fewest false alarms for a fixed detection probability.
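A minimal sketch of the idea, under simplifying assumptions (a scalar linear-Gaussian system with hypothetical parameters, not the paper's model): run a Kalman filter on measurements, propagate the state estimate d steps ahead, and evaluate the probability that the Gaussian predictive density exceeds a fixed critical level.

```python
import numpy as np
from math import erf, sqrt

# Hypothetical scalar linear-Gaussian system:
#   x[k+1] = a * x[k] + w,  w ~ N(0, q);   y[k] = x[k] + v,  v ~ N(0, r)
a, q, r = 0.95, 0.1, 0.5
rng = np.random.default_rng(1)

def kalman_step(m, P, y):
    """One predict-plus-update step of the scalar Kalman filter."""
    m_pred, P_pred = a * m, a * a * P + q
    K = P_pred / (P_pred + r)          # Kalman gain
    return m_pred + K * (y - m_pred), (1 - K) * P_pred

def exceedance_prob(m, P, d, L):
    """P(x[k+d] > L) under the d-step-ahead Gaussian predictive density."""
    for _ in range(d):
        m, P = a * m, a * a * P + q    # propagate mean and variance forward
    return 0.5 * (1 - erf((L - m) / sqrt(2 * P)))

# Run the filter on simulated measurements, then score a level-crossing alarm.
x, m, P = 0.0, 0.0, 1.0
for _ in range(200):
    x = a * x + rng.normal(0.0, sqrt(q))                    # true state
    m, P = kalman_step(m, P, x + rng.normal(0.0, sqrt(r)))  # noisy measurement

p_alarm = exceedance_prob(m, P, d=5, L=3.0)  # prob. the 5-step-ahead value exceeds L
```

An optimal alarm would consider the crossing event anywhere in the prediction window rather than a single future step; this sketch shows only the per-step exceedance probability that such designs build on.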
System for evaluating weld quality using eddy currents
Todorov, Evgueni I.; Hay, Jacob
2017-12-12
Electromagnetic and eddy current techniques for fast automated real-time and near-real-time inspection and monitoring systems for high-production-rate joining processes. An eddy current system, array, and method for the fast examination of welds to detect anomalies such as missed seam (MS) and lack of penetration (LOP), with the system, array, and method capable of detecting and sizing surface and slightly subsurface flaws at various orientations in connection with at least the first and second weld passes.
2016-08-10
... enable JCS managers to detect advanced cyber attacks, mitigate the effects of those attacks, and recover their networks following an attack. It also ... managers of ICS networks to Detect, Mitigate, and Recover from nation-state-level cyber attacks (strategic, deliberate, well-trained, and funded) ... Successful Detection of cyber anomalies is best achieved when IT and ICS managers remain in close coordination. The Integrity Checks Table ...
Advanced Ground Systems Maintenance Enterprise Architecture Project
NASA Technical Reports Server (NTRS)
Harp, Janicce Leshay
2014-01-01
The project implements an architecture for delivery of integrated health management capabilities for the 21st Century launch complex. Capabilities include anomaly detection, fault isolation, prognostics and physics-based diagnostics.
More About Software for No-Loss Computing
NASA Technical Reports Server (NTRS)
Edmonds, Iarina
2007-01-01
A document presents some additional information on the subject matter of "Integrated Hardware and Software for No-Loss Computing" (NPO-42554), which appears elsewhere in this issue of NASA Tech Briefs. To recapitulate: The hardware and software designs of a developmental parallel computing system are integrated to effectuate a concept of no-loss computing (NLC). The system is designed to reconfigure an application program such that it can be monitored in real time and further reconfigured to continue a computation in the event of failure of one of the computers. The design provides for (1) a distributed class of NLC computation agents, denoted introspection agents, that effects hierarchical detection of anomalies; (2) enhancement of the compiler of the parallel computing system to cause generation of state vectors that can be used to continue a computation in the event of a failure; and (3) activation of a recovery component when an anomaly is detected.
Flight-Tested Prototype of BEAM Software
NASA Technical Reports Server (NTRS)
Mackey, Ryan; Tikidjian, Raffi; James, Mark; Wang, David
2006-01-01
Researchers at JPL have completed a software prototype of BEAM (Beacon-based Exception Analysis for Multi-missions) and successfully tested its operation in flight onboard a NASA research aircraft. BEAM (see NASA Tech Briefs, Vol. 26, No. 9; and Vol. 27, No. 3) is an ISHM (Integrated Systems Health Management) technology that automatically analyzes sensor data and classifies system behavior as either nominal or anomalous, and further characterizes anomalies according to strength, duration, and affected signals. BEAM (see figure) can be used to monitor a wide variety of physical systems and sensor types in real time. In this series of tests, BEAM monitored the engines of a Dryden Flight Research Center F-18 aircraft, and performed onboard, unattended analysis of 26 engine sensors from engine startup to shutdown. The BEAM algorithm can detect anomalies based solely on the sensor data, which includes but is not limited to sensor failure, performance degradation, incorrect operation such as unplanned engine shutdown or flameout in this example, and major system faults. BEAM was tested on an F-18 simulator, static engine tests, and 25 individual flights totaling approximately 60 hours of flight time. During these tests, BEAM successfully identified planned anomalies (in-flight shutdowns of one engine) as well as minor unplanned anomalies (e.g., transient oil- and fuel-pressure drops), with no false alarms or suspected false-negative results for the period tested. BEAM also detected previously unknown behavior in the F-18 compressor section during several flights. This result, confirmed by direct analysis of the raw data, serves as a significant test of BEAM's capability.
Detecting Anomalous Insiders in Collaborative Information Systems
Chen, You; Nyemba, Steve; Malin, Bradley
2012-01-01
Collaborative information systems (CISs) are deployed within a diverse array of environments that manage sensitive information. Current security mechanisms detect insider threats, but they are ill-suited to monitoring systems in which users function in dynamic teams. In this paper, we introduce the community anomaly detection system (CADS), an unsupervised learning framework to detect insider threats based on the access logs of collaborative environments. The framework is based on the observation that typical CIS users tend to form community structures based on the subjects accessed (e.g., patients' records viewed by healthcare providers). CADS consists of two components: 1) relational pattern extraction, which derives community structures, and 2) anomaly prediction, which leverages a statistical model to determine when users have sufficiently deviated from communities. We further extend CADS into MetaCADS to account for the semantics of subjects (e.g., patients' diagnoses). To empirically evaluate the framework, we perform an assessment with three months of access logs from a real electronic health record (EHR) system in a large medical center. The results illustrate that our models exhibit significant performance gains over state-of-the-art competitors. When the number of illicit users is low, MetaCADS is the best model, but as the number grows, commonly accessed semantics lead to "hiding in a crowd," such that CADS is the more prudent choice. PMID:24489520
Variable Discretisation for Anomaly Detection using Bayesian Networks
2017-01-01
Jonathan Legg, National Security and ISR Division, Defence Science and Technology Group, DST-Group-TR-3328. ABSTRACT: Anomaly detection is the process by which low probability events are automatically found against a ... Introduction: Bayesian network implementations usually require each variable to take on a finite number of mutually ...
Enhanced detection and visualization of anomalies in spectral imagery
NASA Astrophysics Data System (ADS)
Basener, William F.; Messinger, David W.
2009-05-01
Anomaly detection algorithms applied to hyperspectral imagery are able to reliably identify man-made objects in a natural environment based on statistical/geometric likelihood. The process is more robust than target identification, which requires precise prior knowledge of the object of interest, but it has an inherently higher false alarm rate. Standard anomaly detection algorithms measure the deviation of pixel spectra from a parametric model (either statistical or linear mixing) estimating the image background. The topological anomaly detector (TAD) creates a fully non-parametric, graph-theory-based, topological model of the image background and measures deviation from this background using codensity. In this paper we present a large-scale comparative test of TAD against more than 80 targets in four full HYDICE images, using the entire canonical target set for generation of ROC curves. TAD is compared against several statistics-based detectors, including local RX and subspace RX. Even a perfect anomaly detection algorithm would have a high practical false alarm rate in most scenes, simply because the user/analyst is not interested in every anomalous object. To assist the analyst in identifying and sorting objects of interest, we investigate coloring the anomalies with principal components projections using statistics computed from the anomalies. This gives a very useful colorization of anomalies in which objects of similar material tend to have the same color, enabling an analyst to quickly sort and identify the anomalies of highest interest.
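For context, the global RX detector (one of the statistics-based baselines mentioned above) can be sketched in a few lines: score each pixel by its Mahalanobis distance from the scene mean under the background covariance. The scene, band count, and target spectrum below are synthetic assumptions.

```python
import numpy as np

def rx_scores(cube):
    """Global RX detector: Mahalanobis distance of every pixel spectrum
    from the scene mean under the (regularized) background covariance."""
    h, w, b = cube.shape
    X = cube.reshape(-1, b).astype(float)
    mu = X.mean(axis=0)
    cov = np.cov(X, rowvar=False) + 1e-6 * np.eye(b)
    inv = np.linalg.inv(cov)
    D = X - mu
    return np.einsum("ij,jk,ik->i", D, inv, D).reshape(h, w)

# Synthetic 32x32x8 scene: correlated background plus one anomalous pixel
# whose spectrum fights the background correlation structure.
rng = np.random.default_rng(2)
cube = rng.multivariate_normal(np.zeros(8), 0.5 * np.eye(8) + 0.5, size=(32, 32))
cube[5, 7] += np.array([3.0, -3.0] * 4)   # spectrally distinct target
scores = rx_scores(cube)
iy, ix = np.unravel_index(scores.argmax(), scores.shape)
```

Local RX uses a sliding-window background estimate instead of the whole scene, and TAD replaces the parametric covariance model entirely with a graph-based background model.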
Siddesh, Anjurani; Gupta, Geetika; Sharan, Ram; Agarwal, Meenal; Phadke, Shubha R
2017-04-01
Prenatal diagnosis of malformations is an important method of prevention and control of congenital anomalies with poor prognosis, and central nervous system (CNS) malformations are the most common among these. Information about the prevalence and spectrum of prenatally detected malformations is crucial for genetic counselling and for policymaking for population-based preventive programmes. The objective of this study was to examine the spectrum of prenatally detected CNS malformations and their association with chromosomal abnormalities and autopsy findings. This retrospective study was conducted in a tertiary care hospital in north India from January 2007 to December 2013. The details of cases with prenatally detected CNS malformations were collected and correlated with foetal chromosomal analysis and autopsy findings. Among the 6044 prenatal ultrasonographic examinations performed, 768 (12.7%) revealed structural malformations, of which 243 (31.6%) were CNS malformations. Neural tube defects (NTDs) accounted for 52.3 per cent of CNS malformations and 16.5 per cent of all malformations. The other major groups of prenatally detected CNS malformations were ventriculomegaly and midline anomalies. Chromosomal abnormalities were detected in 8.2 per cent of the 73 cases studied. Foetal autopsy findings were available for 48 foetuses; autopsy identified additional findings in eight foetuses, and in two of them (4.2%) the aetiological diagnosis changed. Among prenatally detected malformations, CNS malformations were common. NTDs, which are largely preventable anomalies, continued to be the most common group. Moreover, 60 per cent of malformations were diagnosed after 20 weeks, posing legal issues. Chromosomal analysis and foetal autopsy are essential for genetic counselling based on aetiological diagnosis.
Pediatric tinnitus: Incidence of imaging anomalies and the impact of hearing loss.
Kerr, Rhorie; Kang, Elise; Hopkins, Brandon; Anne, Samantha
2017-12-01
Guidelines exist for the evaluation and management of tinnitus in adults; however, a lack of evidence in children limits the applicability of these guidelines to pediatric patients. The objective of this study was to determine the incidence of inner ear anomalies detected on imaging studies within the pediatric population with tinnitus and to evaluate whether the presence of hearing loss increases the rate of detection of anomalies in comparison with normal-hearing patients. Retrospective review of all children with a diagnosis of tinnitus from 2010 to 2015 at a tertiary care academic center. 102 pediatric patients with tinnitus were identified. Overall, 53 patients had imaging studies, with 6 abnormal findings (11.3%). 51/102 patients had hearing loss, of whom 33 had imaging studies, demonstrating 6 inner ear anomalies. This is an incidence of 18.2% for inner ear anomalies identified in patients with hearing loss (95% confidence interval (CI) 7.0-35.5%). 4 of these 6 inner ear anomalies were vestibular aqueduct abnormalities; the other two were cochlear hypoplasia and bilateral semicircular canal dysmorphism. 51 patients had no hearing loss, and of these patients, 20 had imaging studies, with no inner ear abnormalities detected. There was no statistical difference in the incidence of abnormal imaging findings between patients with and without hearing loss (Fisher's exact test, p = 0.072). CONCLUSION: There is a high incidence of anomalies detected in imaging studies done in pediatric patients with tinnitus, especially in the presence of hearing loss. Copyright © 2017 Elsevier B.V. All rights reserved.
Complementary role of magnetic resonance imaging in the study of the fetal urinary system.
Gómez Huertas, M; Culiañez Casas, M; Molina García, F S; Carrillo Badillo, M P; Pastor Pons, E
2016-01-01
Urinary system birth defects represent the abnormality most often detected in prenatal studies, accounting for 30% to 50% of all structural anomalies present at birth. The most common disorders are urinary tract dilation, developmental variants, cystic kidney diseases, kidney tumors, and bladder defects. These anomalies can present in isolation or in association with various syndromes. They are normally evaluated with sonography, and the use of magnetic resonance imaging (MRI) is considered only in inconclusive cases. In this article, we show the potential of fetal MRI as a technique to complement sonography in the study of fetal urinary system anomalies. We show the additional information that MRI can provide in each entity, especially in the evaluation of kidney function through diffusion-weighted sequences. Copyright © 2016 SERAM. Published by Elsevier España, S.L.U. All rights reserved.
1998-01-01
... such as central processing unit (CPU) usage, disk input/output (I/O), memory usage, user activity, and number of logins attempted. The statistics ... EMERALD: commercial; anomaly detection, system monitoring; SRI; porras@csl.sri.com; www.csl.sri.com/emerald/index.html. Gabriel: commercial system ... sensors, it starts to protect the network with minimal configuration and maximum intelligence. EMERALD (Event Monitoring ...
Toward Continuous GPS Carrier-Phase Time Transfer: Eliminating the Time Discontinuity at an Anomaly
Yao, Jian; Levine, Judah; Weiss, Marc
2015-01-01
The wide application of Global Positioning System (GPS) carrier-phase (CP) time transfer is limited by the problem of boundary discontinuity (BD). The discontinuity has two categories. One is "day boundary discontinuity," which has been studied extensively and can be solved by multiple methods [1-8]. The other category, called "anomaly boundary discontinuity (anomaly-BD)," comes from a GPS data anomaly. The anomaly can be a data gap (i.e., missing data), a GPS measurement error (i.e., bad data), or a cycle slip. An initial study of the anomaly-BD showed that we can fix the discontinuity if the anomaly lasts no more than 20 min, using a polynomial curve-fitting strategy to repair the anomaly [9]. However, the data anomaly sometimes lasts longer than 20 min, so a better curve-fitting strategy is needed. In addition, a cycle slip, as another type of data anomaly, can occur and lead to an anomaly-BD. To solve these problems, this paper proposes a new strategy: satellite-clock-aided curve fitting with cycle slip detection. Basically, the new strategy applies the satellite clock correction to the GPS data and then fits polynomial curves to the code and phase data, as before. Our study shows that the phase-data residual is only ~3 mm for all GPS satellites. The new strategy also detects and counts cycle slips by searching for the minimum curve-fitting residual. Extensive examples show that this new strategy enables us to repair up to a 40-min GPS data anomaly, regardless of whether the anomaly is due to a data gap, a cycle slip, or a combination of the two. We also find that interference with the GPS signal, known as "jamming," can lead to a time-transfer error, and that the new strategy can compensate for jamming outages, eliminating the impact of jamming on time transfer. As a whole, we greatly improve the robustness of GPS CP time transfer. PMID:26958451
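The basic polynomial curve-fitting repair referenced above can be sketched as follows; the data, polynomial degree, and noise level are illustrative assumptions, and the satellite-clock correction step of the proposed strategy is deliberately omitted.

```python
import numpy as np

def repair_gap(t_ok, y_ok, t_gap, degree=3):
    """Bridge a data gap by fitting a polynomial to the surviving samples
    and evaluating it at the missing epochs (the basic curve-fitting
    repair; the proposed strategy additionally removes the satellite
    clock term before fitting)."""
    coeffs = np.polyfit(t_ok, y_ok, degree)
    return np.polyval(coeffs, t_gap)

# Simulated smooth clock-like series (epochs in minutes, two per minute)
# with a 40-min gap, i.e. 80 missing samples.
t_all = np.arange(0.0, 240.0, 0.5)
truth = 1e-3 * t_all**2 - 0.05 * t_all + 2.0
gap = (t_all >= 100.0) & (t_all < 140.0)
rng = np.random.default_rng(3)
y_ok = truth[~gap] + rng.normal(0.0, 0.01, (~gap).sum())  # surviving, noisy data

filled = repair_gap(t_all[~gap], y_ok, t_all[gap])
max_err = np.abs(filled - truth[gap]).max()   # worst-case repair error
```

The repair works only while the underlying series stays smooth across the gap; this is why long gaps, and cycle slips that break smoothness, require the paper's additional machinery.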
Hot spots of multivariate extreme anomalies in Earth observations
NASA Astrophysics Data System (ADS)
Flach, M.; Sippel, S.; Bodesheim, P.; Brenning, A.; Denzler, J.; Gans, F.; Guanche, Y.; Reichstein, M.; Rodner, E.; Mahecha, M. D.
2016-12-01
Anomalies in Earth observations might indicate data quality issues, extremes, or a change in the underlying processes within a highly multivariate system. Thus, considering the multivariate constellation of variables for extreme detection yields crucial additional information over conventional univariate approaches. We highlight areas in which multivariate extreme anomalies are more likely to occur, i.e. hot spots of extremes in global atmospheric Earth observations that impact the biosphere. In addition, we present the year of the most unusual multivariate extreme between 2001 and 2013 and show that these coincide with well-known high-impact extremes. Technically speaking, we account for multivariate extremes by using three sophisticated algorithms adapted from computer science applications: an ensemble of the k-nearest-neighbours mean distance, kernel density estimation, and a recurrence-based approach. However, the impact of atmospheric extremes on the biosphere might largely depend on what is considered to be normal, i.e. the shape of the mean seasonal cycle and its inter-annual variability. We identify regions with similar mean seasonality by means of dimensionality reduction in order to estimate, in each region, both the 'normal' variance and robust thresholds for detecting extremes. In addition, we account for challenges such as heteroscedasticity in northern latitudes. Apart from hot-spot areas, anomalies that can only be detected by a multivariate approach, and not by a simple univariate approach, are of particular interest. Such an anomalous constellation of atmospheric variables is of interest if it impacts the biosphere. The multivariate constellation of such an anomalous part of a time series is shown in one case study, indicating that multivariate anomaly detection can provide novel insights into Earth observations.
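One of the three detectors named above, the k-nearest-neighbours mean distance, admits a compact sketch; the two-variable synthetic data below illustrate why a multivariate score can flag a constellation whose individual values look normal. Everything here (sample sizes, k, the injected point) is an assumption for illustration.

```python
import numpy as np

def knn_mean_distance(X, k=5):
    """Anomaly score: mean Euclidean distance to the k nearest
    neighbours; large scores flag multivariate outliers."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=-1)
    np.fill_diagonal(d2, np.inf)              # exclude self-distance
    knn = np.sort(np.sqrt(d2), axis=1)[:, :k]
    return knn.mean(axis=1)

# Two strongly coupled "atmospheric" variables, plus one sample whose
# values are individually plausible but jointly extreme.
rng = np.random.default_rng(4)
cov = np.array([[1.0, 0.9], [0.9, 1.0]])
X = rng.multivariate_normal([0.0, 0.0], cov, 300)
X = np.vstack([X, [2.5, -2.5]])               # anomalous constellation
scores = knn_mean_distance(X)
top = int(np.argmax(scores))                  # index of the flagged sample
```

Each coordinate of the injected point is within the range of the background data, so no univariate threshold would flag it; its distance from the correlated cloud is what makes it stand out.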
A Healthcare Utilization Analysis Framework for Hot Spotting and Contextual Anomaly Detection
Hu, Jianying; Wang, Fei; Sun, Jimeng; Sorrentino, Robert; Ebadollahi, Shahram
2012-01-01
Patient medical records today contain vast amounts of information regarding patient conditions, along with treatment and procedure records. Systematic healthcare resource utilization analysis leveraging such observational data can provide critical insights to guide resource planning and improve the quality of care delivery while reducing cost. Of particular interest to providers are hot spotting, the ability to identify in a timely manner heavy users of the system and their patterns of utilization so that targeted intervention programs can be instituted, and anomaly detection, the ability to identify anomalous utilization cases in which patients incur levels of utilization that are unexpected given their clinical characteristics and that may require corrective actions. Past work on medical utilization pattern analysis has focused on disease-specific studies. We present a framework for utilization analysis that can be easily applied to any patient population. The framework includes two main components: utilization profiling and hot spotting, in which we use a vector space model to represent patient utilization profiles and apply clustering techniques to identify utilization groups within a given population and isolate high utilizers of different types; and contextual anomaly detection for utilization, in which models that map a patient's clinical characteristics to the utilization level are built in order to quantify the deviation between the expected and actual utilization levels and identify anomalies. We demonstrate the effectiveness of the framework using claims data collected from a population of 7667 diabetes patients. Our analysis demonstrates the usefulness of the proposed approaches in identifying clinically meaningful instances for both hot spotting and anomaly detection.
In future work we plan to incorporate additional sources of observational data including EMRs and disease registries, and develop analytics models to leverage temporal relationships among medical encounters to provide more in-depth insights. PMID:23304306
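A minimal stand-in for the contextual anomaly detection component described above, assuming a plain least-squares mapping from clinical features to utilization level (the paper's actual models are richer); the cohort, coefficients, and injected heavy user are synthetic.

```python
import numpy as np

def contextual_anomaly_scores(features, utilization):
    """Standardized residual between actual utilization and the level
    predicted from clinical features by ordinary least squares."""
    A = np.column_stack([features, np.ones(len(features))])  # add intercept
    coef, *_ = np.linalg.lstsq(A, utilization, rcond=None)
    resid = utilization - A @ coef
    return (resid - resid.mean()) / resid.std()

# Synthetic cohort: utilization driven by two clinical features, plus
# one patient whose utilization far exceeds the expected level.
rng = np.random.default_rng(5)
feats = rng.normal(size=(400, 2))
util = 3.0 + 2.0 * feats[:, 0] + 1.0 * feats[:, 1] + rng.normal(0.0, 0.5, 400)
util[42] += 6.0                               # unexpectedly heavy user
z = contextual_anomaly_scores(feats, util)
flagged = int(np.argmax(z))
```

The point of the contextual formulation is visible here: the flagged patient's raw utilization may be unremarkable in the population, but it is extreme relative to what the patient's own characteristics predict.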
NASA Astrophysics Data System (ADS)
Flach, Milan; Mahecha, Miguel; Gans, Fabian; Rodner, Erik; Bodesheim, Paul; Guanche-Garcia, Yanira; Brenning, Alexander; Denzler, Joachim; Reichstein, Markus
2016-04-01
The number of available Earth observations (EOs) is currently increasing substantially. Detecting anomalous patterns in these multivariate time series is an important step in identifying changes in the underlying dynamical system. Likewise, data quality issues might result in anomalous multivariate data constellations, which have to be identified before they corrupt subsequent analyses. In industrial applications, a common strategy is to monitor production chains with several sensors coupled to a statistical process control (SPC) algorithm. The basic idea is to raise an alarm when the sensor data depict an anomalous pattern according to the SPC, i.e. the production chain is considered 'out of control'. In fact, these industrial applications are conceptually similar to the on-line monitoring of EOs. However, algorithms used in the context of SPC or process monitoring are rarely considered for supervising multivariate spatio-temporal Earth observations. The objective of this study is to exploit the potential and transferability of SPC concepts to Earth system applications. We compare a range of algorithms typically applied in SPC systems and evaluate their capability to detect, e.g., known extreme events in land surface processes. Specifically, two main issues are addressed: (1) identifying the most suitable combination of data pre-processing and detection algorithm for a specific type of event, and (2) analyzing the limits of the individual approaches with respect to the magnitude and spatio-temporal size of the event, as well as the data's signal-to-noise ratio. Extensive artificial data sets that represent the typical properties of Earth observations are used in this study. Our results show that the majority of the algorithms used can be considered for the detection of multivariate spatiotemporal events and directly transferred to real Earth observation data as currently assembled in different projects at the European scale, e.g. http://baci-h2020.eu/index.php/ and http://earthsystemdatacube.net/. Known anomalies such as the Russian heatwave are detected, as well as anomalies that are not detectable with univariate methods.
Spatially-Aware Temporal Anomaly Mapping of Gamma Spectra
NASA Astrophysics Data System (ADS)
Reinhart, Alex; Athey, Alex; Biegalski, Steven
2014-06-01
For security, environmental, and regulatory purposes it is useful to continuously monitor wide areas for unexpected changes in radioactivity. We report on a temporal anomaly detection algorithm which uses mobile detectors to build a spatial map of background spectra, allowing sensitive detection of any anomalies through many days or months of monitoring. We adapt previously developed anomaly detection methods, which compare spectral shape rather than count rate, to function with limited background data, allowing sensitive detection of small changes in spectral shape from day to day. To demonstrate this technique we collected daily observations over a period of six weeks on a 0.33-square-mile research campus and performed source-injection simulations.
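A toy illustration of comparing spectral shape rather than count rate, as described above: normalize spectra to unit area and score their divergence, so a pure rate change scores near zero while a new photopeak does not. The continuum model, peak location, and distance measure are illustrative assumptions, not the methods adapted in the paper.

```python
import numpy as np

def shape_distance(spec, background):
    """Compare spectral *shape*, not count rate: normalize both spectra
    to unit area and take a chi-square-like distance between them."""
    p = spec / spec.sum()
    q = background / background.sum()
    return float(((p - q) ** 2 / (p + q + 1e-12)).sum())

# Smooth 128-channel background continuum; compare it against (a) a
# scaled copy (same shape, triple the count rate) and (b) a spectrum
# with an extra photopeak near channel 80 (a genuine shape change).
channels = np.arange(128)
bg = 1000.0 * np.exp(-channels / 40.0) + 20.0
peak = 150.0 * np.exp(-0.5 * ((channels - 80.0) / 2.0) ** 2)
d_scaled = shape_distance(3.0 * bg, bg)   # near zero: shape unchanged
d_peak = shape_distance(bg + peak, bg)    # clearly positive: anomaly
```

Rate-insensitive scoring of this kind is what lets a mobile system tolerate day-to-day count-rate fluctuations while staying sensitive to new spectral features.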
Wertman, Christina A.; Yablonsky, Richard M.; Shen, Yang; Merrill, John; Kincaid, Christopher R.; Pockalny, Robert A.
2014-01-01
Two destructive high-frequency sea level oscillation events occurred on June 13, 2013, along the U.S. East Coast. Seafloor processes can be dismissed as the sources, as no concurrent offshore earthquakes or landslides were detected. Here, we present evidence that these tsunami-like events were generated by atmospheric mesoscale convective systems (MCSs) propagating from inland to offshore. The inland USArray Transportable Array and NOAA tide gauges along the coast recorded the pressure anomalies associated with the MCSs. Once offshore, the pressure anomalies generated shallow water waves, which were amplified by the resonance between the water column and the atmospheric forcing. Analysis of the tidal data reveals that these waves reflected off the continental shelf break and reached the coast, where bathymetry and coastal geometry contributed to their hazard potential. This study demonstrates that monitoring MCS pressure anomalies in the interior of the U.S. provides important observations for early warnings of MCS-generated tsunamis. PMID:25420958
A review on remotely sensed land surface temperature anomaly as an earthquake precursor
NASA Astrophysics Data System (ADS)
Bhardwaj, Anshuman; Singh, Shaktiman; Sam, Lydia; Joshi, P. K.; Bhardwaj, Akanksha; Martín-Torres, F. Javier; Kumar, Rajesh
2017-12-01
The low predictability of earthquakes and the high uncertainty associated with their forecasts make earthquakes one of the worst natural calamities, capable of causing instant loss of life and property. Here, we discuss the studies reporting the observed anomalies in the satellite-derived Land Surface Temperature (LST) before an earthquake. We compile the conclusions of these studies and evaluate the use of remotely sensed LST anomalies as precursors of earthquakes. The arrival times and the amplitudes of the anomalies vary widely, thus making it difficult to consider them as universal markers to issue earthquake warnings. Based on the randomness in the observations of these precursors, we support employing a global-scale monitoring system to detect statistically robust anomalous geophysical signals prior to earthquakes before considering them as definite precursors.
Hyperspectral anomaly detection using Sony PlayStation 3
NASA Astrophysics Data System (ADS)
Rosario, Dalton; Romano, João; Sepulveda, Rene
2009-05-01
We present a proof-of-principle demonstration using Sony's IBM Cell processor-based PlayStation 3 (PS3) to run, in near real time, a hyperspectral anomaly detection algorithm (HADA) on real hyperspectral (HS) long-wave infrared imagery. The PS3 console proved to be ideal for doing precisely the kind of heavy computational lifting that HS-based algorithms require, and the fact that it is a relatively open platform makes programming scientific applications feasible. The PS3 HADA is a unique parallel, random-sampling-based anomaly detection approach that does not require prior spectra of the clutter background. The PS3 HADA is designed to handle known underlying difficulties (e.g., target shape/scale uncertainties) often ignored in the development of autonomous anomaly detection algorithms. The effort is part of an ongoing cooperative contribution between the Army Research Laboratory and the Army's Armament, Research, Development and Engineering Center, which aims at demonstrating the performance of innovative algorithmic approaches for applications requiring autonomous anomaly detection using passive sensors.
A novel approach for detection of anomalies using measurement data of the Ironton-Russell bridge
NASA Astrophysics Data System (ADS)
Zhang, Fan; Norouzi, Mehdi; Hunt, Victor; Helmicki, Arthur
2015-04-01
Data models have been used increasingly in recent years for documenting the normal behavior of structures and hence for detecting and classifying anomalies. A large number of machine learning algorithms have been proposed by various researchers to model operational and functional changes in structures; however, only a limited number of studies have been applied to actual measurement data, owing to restricted access to long-term measurement data and to the damaged states of structures. By monitoring the structure during construction and reviewing the effect of construction events on the measurement data, this study introduces a new approach to detect, and eventually classify, anomalies both during and after construction. First, the implementation of the sensory network, which grew as the bridge was being built, and its current status are detailed. Second, the proposed anomaly detection algorithm is applied to the collected data, and finally, the detected anomalies are validated against the archived construction events.
Application of data cubes for improving detection of water cycle extreme events
NASA Astrophysics Data System (ADS)
Teng, W. L.; Albayrak, A.
2015-12-01
As part of an ongoing NASA-funded project to remove a longstanding barrier to accessing NASA data (i.e., accessing archived time-step array data as point-time series), for the hydrology and other point-time series-oriented communities, "data cubes" are created from which time series files (aka "data rods") are generated on-the-fly and made available as Web services from the Goddard Earth Sciences Data and Information Services Center (GES DISC). Data cubes are data as archived rearranged into spatio-temporal matrices, which allow for easy access to the data, both spatially and temporally. A data cube is a specific case of the general optimal strategy of reorganizing data to match the desired means of access. The gain from such reorganization is greater the larger the data set. As a use case for our project, we are leveraging existing software to explore the application of the data cubes concept to machine learning, for the purpose of detecting water cycle extreme (WCE) events, a specific case of anomaly detection, requiring time series data. We investigate the use of the sequential probability ratio test (SPRT) for anomaly detection and support vector machines (SVM) for anomaly classification. We show an example of detection of WCE events, using the Global Land Data Assimilation Systems (GLDAS) data set.
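The abstract above names the sequential probability ratio test (SPRT) as its anomaly detection tool. As a rough illustration of how SPRT flags a shift in a time series (not the authors' implementation; the Gaussian model, means, variance, and error rates below are illustrative assumptions), a minimal sketch might look like:

```python
import math

def sprt(samples, mu0, mu1, sigma, alpha=0.01, beta=0.01):
    """Sequential probability ratio test for a Gaussian mean shift.

    Returns ("H1", n) if the alternative (anomaly) is accepted after n
    samples, ("H0", n) if the null is accepted, or ("undecided", n).
    """
    upper = math.log((1 - beta) / alpha)   # accept H1 at or above this
    lower = math.log(beta / (1 - alpha))   # accept H0 at or below this
    llr = 0.0
    for n, x in enumerate(samples, start=1):
        # Log-likelihood ratio increment for one Gaussian observation.
        llr += (x - mu0) ** 2 / (2 * sigma ** 2) - (x - mu1) ** 2 / (2 * sigma ** 2)
        if llr >= upper:
            return "H1", n
        if llr <= lower:
            return "H0", n
    return "undecided", len(samples)

# A run drawn near mu1 = 3 triggers the anomaly decision within a few samples.
decision, n = sprt([2.9, 3.1, 3.0, 2.8, 3.2], mu0=0.0, mu1=3.0, sigma=1.0)
```

The sequential nature is what makes SPRT attractive for the streaming time-series ("data rods") access pattern described above: a decision can fire as soon as enough evidence accumulates, rather than after a fixed batch.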
NASA Astrophysics Data System (ADS)
Sun, Hao; Zou, Huanxin; Zhou, Shilin
2016-03-01
Detection of anomalous targets of various sizes in hyperspectral data has received much attention in reconnaissance and surveillance applications, and many anomaly detectors have been proposed in the literature. However, current methods are susceptible to anomalies within the processing window and often make strong assumptions about the distribution of the background data. Motivated by the fact that anomalous pixels are often distinctive from their local background, in this letter we propose a novel hyperspectral anomaly detection framework for real-time remote sensing applications. The proposed framework consists of four major components: sparse feature learning, pyramid grid window selection, joint spatial-spectral collaborative coding, and multi-level divergence fusion. It exploits the collaborative representation difference in the feature space to locate potential anomalies and is fully unsupervised, requiring no prior assumptions. Experimental results on airborne hyperspectral data demonstrate that the proposed method adapts to anomalies across a large range of sizes and is well suited to parallel processing.
Jang, J; Seo, J K
2015-06-01
This paper describes a multiple background subtraction method in frequency-difference electrical impedance tomography (fdEIT) to detect an admittivity anomaly in a high-contrast background conductivity distribution. The proposed method extends the conventional weighted frequency-difference EIT method, whose use has been limited to detecting admittivity anomalies in a roughly homogeneous background, and can be viewed as multiple weighted difference imaging in fdEIT. Although the spatial resolution of fdEIT images is very low due to the inherent ill-posedness, numerical simulations and phantom experiments demonstrate the feasibility of the proposed method for detecting anomalies. It has potential application in stroke detection in a head model, which is highly heterogeneous due to the skull.
Residual Error Based Anomaly Detection Using Auto-Encoder in SMD Machine Sound.
Oh, Dong Yul; Yun, Il Dong
2018-04-24
Detecting an anomaly or an abnormal situation in the presence of noise is highly useful in environments where a machine must be constantly verified and monitored. With the continued development of deep learning algorithms, recent studies have focused on this problem. However, there are too many variables to define anomalies explicitly, and human annotation of a large collection of abnormal data at the class level is very labor-intensive. In this paper, we propose to detect abnormal operation sounds, or outliers, in a very complex machine while reducing the annotation cost. The architecture of the proposed model is based on an auto-encoder, and it uses the residual error, which reflects reconstruction quality, to identify anomalies. We assess our model on Surface-Mounted Device (SMD) machine sound, which is very complex, and state-of-the-art performance is achieved for anomaly detection.
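The residual-error idea above (train a reconstructor on normal data, flag samples it reconstructs poorly) can be illustrated without a deep network. The sketch below substitutes a linear PCA projection for the paper's auto-encoder; the data, the latent dimension k = 2, and the 99th-percentile threshold are all illustrative assumptions, not the authors' setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# "Normal" training data lying near a low-dimensional subspace plus noise.
latent = rng.normal(size=(500, 2))
mixing = rng.normal(size=(2, 10))
X_train = latent @ mixing + 0.05 * rng.normal(size=(500, 10))

# Fit a linear "auto-encoder": encode/decode with the top-k principal components.
mean = X_train.mean(axis=0)
_, _, Vt = np.linalg.svd(X_train - mean, full_matrices=False)
components = Vt[:2]                      # k = 2 latent dimensions

def residual_error(X):
    """Reconstruction (residual) error per sample."""
    centered = X - mean
    recon = centered @ components.T @ components
    return np.linalg.norm(centered - recon, axis=1)

# Threshold: e.g. the 99th percentile of training residuals.
threshold = np.percentile(residual_error(X_train), 99)

# A point far off the learned subspace has a large residual -> anomaly.
x_anomalous = 5.0 * rng.normal(size=(1, 10))
is_anomaly = residual_error(x_anomalous)[0] > threshold
```

A nonlinear auto-encoder generalizes this by replacing the linear projection with learned encoder/decoder networks; the scoring logic (residual norm versus a threshold estimated on normal data) is the same.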
NASA Astrophysics Data System (ADS)
Nagai, S.; Eto, S.; Tadokoro, K.; Watanabe, T.
2011-12-01
On-land geodetic observations are insufficient to monitor crustal activity in and around subduction zones, so seafloor geodetic observations are required. However, the present accuracy of seafloor geodetic observation is on the order of 1 cm or worse, which makes it difficult to detect departures from plate motion over short time intervals, i.e., the plate coupling rate and its spatio-temporal variation. Our group has developed an observation system and methodology for seafloor geodesy that combines kinematic GPS with ocean acoustic ranging. One influencing factor is acoustic velocity change in the ocean, due to changes in temperature, ocean currents at different scales, and so on. A typical perturbation of acoustic velocity produces on the order of 1 ms difference in travel time, which corresponds to a 1 m difference in ray length. We have investigated this effect using both observed and synthetic data, to reduce the estimation error of benchmarker (transponder) positions and to develop our strategy for observation and analysis. In this paper, we focus on forward modeling of the travel times of acoustic ranging data and on recovery tests using synthetic data, compared with observed results [Eto et al., 2011; in this meeting]. The estimation procedure for benchmarker positions is similar to that used in earthquake location and seismic tomography, so we have applied methods from seismic studies, especially tomographic inversion. First, we use the one-dimensional velocity inversion with station corrections proposed by Kissling et al. [1994] to detect spatio-temporal changes in ocean acoustic velocity from data observed in the Suruga-Nankai Trough, Japan. These analyses clarified some important features of the travel time data [Eto et al., 2011], most of which can be explained by a small velocity anomaly at depths of 300 m or shallower, through forward modeling of travel time data using a simple velocity structure with a velocity anomaly.
However, because of the simple data acquisition procedure, we cannot precisely resolve velocity anomalies in space and time, that is, the size of an anomaly and its movement. As a next step, we demonstrate the recovery of benchmarker positions in tomographic inversion using synthetic data that include anomalous travel times, to develop an approach for calculating benchmarker positions with high accuracy. In the tomographic inversion, we introduce constraints corresponding to realistic conditions. This step yields a newly developed system for detecting crustal deformation in seafloor geodesy and new findings for understanding deformation in and around plate boundaries.
Advanced Ground Systems Maintenance Enterprise Architecture Project
NASA Technical Reports Server (NTRS)
Perotti, Jose M. (Compiler)
2015-01-01
The project implements an architecture for delivery of integrated health management capabilities for the 21st Century launch complex. The delivered capabilities include anomaly detection, fault isolation, prognostics and physics based diagnostics.
First trimester PAPP-A in the detection of non-Down syndrome aneuploidy.
Ochshorn, Y; Kupferminc, M J; Wolman, I; Orr-Urtreger, A; Jaffa, A J; Yaron, Y
2001-07-01
Combined first trimester screening using pregnancy-associated plasma protein-A (PAPP-A), free beta-human chorionic gonadotrophin, and nuchal translucency (NT) is currently accepted as probably the best combination for the detection of Down syndrome (DS). Current first trimester algorithms provide computed risks only for DS. However, low PAPP-A is also associated with other chromosome anomalies such as trisomy 13, trisomy 18, and sex chromosome aneuploidy. Thus, using currently available algorithms, some chromosome anomalies may not be detected. The purpose of the present study was to establish a low-end cut-off value for PAPP-A that would increase the detection rate for non-DS chromosome anomalies. The study included 1408 patients who underwent combined first trimester screening. To determine a low-end cut-off value for PAPP-A, a Receiver Operating Characteristic (ROC) curve analysis was performed. In the entire study group there were 18 cases of chromosome anomalies (trisomy 21, 13, 18, and sex chromosome anomalies), 14 of which were among screen-positive patients, a detection rate of 77.7% for all chromosome anomalies (95% CI: 55.7-99.7%). ROC curve analysis detected a statistically significant cut-off for PAPP-A at 0.25 MoM. If the definition of screen-positive were to also include patients with PAPP-A < 0.25 MoM, the detection rate would increase to 88.8% for all chromosome anomalies (95% CI: 71.6-106%). This low cut-off value may be used until specific algorithms are implemented for non-Down syndrome aneuploidy. Copyright 2001 John Wiley & Sons, Ltd.
NASA Technical Reports Server (NTRS)
Vaughan, R. Greg; Hook, Simon J.
2006-01-01
ASTER thermal infrared data over Mount St. Helens were used to characterize its thermal behavior from June 2000 to February 2006. Prior to the October 2004 eruption, the average crater temperature varied seasonally between -12 and 6 C. After the eruption, the maximum single-pixel temperature increased from 10 C (October 2004) to 96 C (August 2005), then decreased through February 2006. The initial increase in temperature was correlated with dome morphology and growth rate, and the subsequent decrease was interpreted to relate to both seasonal trends and a decreased growth rate/increased cooling rate, possibly suggesting a significant change in the volcanic system. A single-pixel ASTER thermal anomaly first appeared on October 1, 2004, eleven hours after the first eruption and 10 days before new lava was exposed at the surface. By contrast, an automated algorithm for detecting thermal anomalies in MODIS data did not trigger an alert until December 18. However, a single-pixel thermal anomaly first appeared in MODIS channel 23 (4 um) on October 13, 12 days after the first eruption and 2 days after lava was exposed. The earlier thermal anomaly detected with ASTER data is attributed to its higher spatial resolution (90 m) compared with MODIS (1 km), and the earlier visual observation of anomalous pixels compared to the automated detection method suggests that local spatial statistics and background radiance data could improve automated detection methods.
Evaluation schemes for video and image anomaly detection algorithms
NASA Astrophysics Data System (ADS)
Parameswaran, Shibin; Harguess, Josh; Barngrover, Christopher; Shafer, Scott; Reese, Michael
2016-05-01
Video anomaly detection is a critical research area in computer vision and a natural first step before applying object recognition algorithms. Many algorithms that detect anomalies (outliers) in videos and images have been introduced in recent years. However, these algorithms behave and perform differently depending on the domains and tasks to which they are applied. In order to better understand the strengths and weaknesses of outlier algorithms and their applicability in a particular domain or task of interest, it is important to measure and quantify their performance using appropriate evaluation metrics. Many evaluation metrics have been used in the literature, such as precision curves, precision-recall curves, and receiver operating characteristic (ROC) curves. In order to construct these metrics, it is also important to choose an appropriate evaluation scheme that decides when a proposed detection is considered a true or a false detection. Choosing the right metric and the right scheme is critical, since the choice can introduce positive or negative bias into the measuring criterion and may favor (or work against) a particular algorithm or task. In this paper, we review evaluation metrics and popular evaluation schemes that are used to measure the performance of anomaly detection algorithms on videos and imagery with one or more anomalies, and we analyze the biases these choices introduce by measuring the performance of an existing anomaly detection algorithm.
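Several abstracts in this collection lean on ROC curves as the evaluation metric. As a concrete reference point, the sketch below builds an ROC curve from anomaly scores and binary labels and computes its area by the trapezoidal rule; it is a simplified illustration that does not handle tied scores (a real evaluation would group ties into one threshold step).

```python
def roc_curve(scores, labels):
    """ROC points (fpr, tpr) obtained by sweeping a threshold over scores.

    labels: 1 = anomaly (positive), 0 = normal. Assumes no tied scores.
    """
    pos = sum(labels)
    neg = len(labels) - pos
    # Sort by descending score; each prefix corresponds to one threshold.
    order = sorted(range(len(scores)), key=lambda i: -scores[i])
    points, tp, fp = [(0.0, 0.0)], 0, 0
    for i in order:
        if labels[i]:
            tp += 1
        else:
            fp += 1
        points.append((fp / neg, tp / pos))
    return points

def auc(points):
    """Area under the ROC curve via the trapezoidal rule."""
    return sum((x2 - x1) * (y1 + y2) / 2
               for (x1, y1), (x2, y2) in zip(points, points[1:]))

# A detector that ranks every anomaly above every normal sample has AUC 1.
points = roc_curve([0.9, 0.8, 0.3, 0.2], [1, 1, 0, 0])
```

The evaluation-scheme question raised in the abstract sits upstream of this code: before scores and labels exist, a matching rule must decide which proposed detections count as true positives, and that rule shifts every point on the curve.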
System and method for the detection of anomalies in an image
Prasad, Lakshman; Swaminarayan, Sriram
2013-09-03
Preferred aspects of the present invention can include receiving a digital image at a processor; segmenting the digital image into a hierarchy of feature layers comprising one or more fine-scale features defining a foreground object embedded in one or more coarser-scale features defining a background to the one or more fine-scale features in the segmentation hierarchy; detecting a first fine-scale foreground feature as an anomaly with respect to a first background feature within which it is embedded; and constructing an anomalous feature layer by synthesizing spatially contiguous anomalous fine-scale features. Additional preferred aspects of the present invention can include detecting non-pervasive changes between sets of images in response at least in part to one or more difference images between the sets of images.
Time-Frequency Methods for Structural Health Monitoring †
Pyayt, Alexander L.; Kozionov, Alexey P.; Mokhov, Ilya I.; Lang, Bernhard; Meijer, Robert J.; Krzhizhanovskaya, Valeria V.; Sloot, Peter M. A.
2014-01-01
Detection of early warning signals for the imminent failure of large and complex engineered structures is a daunting challenge with many open research questions. In this paper we report on novel ways to perform Structural Health Monitoring (SHM) of flood protection systems (levees, earthen dikes and concrete dams) using sensor data. We present a robust data-driven anomaly detection method that combines time-frequency feature extraction, using wavelet analysis and phase shift, with one-sided classification techniques to identify the onset of failure anomalies in real-time sensor measurements. The methodology has been successfully tested at three operational levees. We detected a dam leakage in the retaining dam (Germany) and “strange” behaviour of sensors installed in a Boston levee (UK) and a Rhine levee (Germany). PMID:24625740
A new comparison of hyperspectral anomaly detection algorithms for real-time applications
NASA Astrophysics Data System (ADS)
Díaz, María; López, Sebastián; Sarmiento, Roberto
2016-10-01
Due to the high spectral resolution that remotely sensed hyperspectral images provide, there has been increasing interest in anomaly detection. The aim of anomaly detection is to single out pixels whose spectral signature differs significantly from the background spectra. Basically, anomaly detectors mark pixels with a certain score, considering as anomalies those whose scores are higher than a threshold. Receiver Operating Characteristic (ROC) curves have been widely used as an assessment measure to compare the performance of different algorithms. ROC curves are graphical plots that illustrate the trade-off between false positive and true positive rates. However, they are limited for making deep comparisons because they discard relevant factors required in real-time applications, such as run times, the costs of misclassification, and the ability to mark anomalies with high scores. This last factor is fundamental in anomaly detection, in order to distinguish anomalies easily from the background without any posterior processing. An extensive set of simulations has been made using different anomaly detection algorithms, comparing their performance and efficiency using several extra metrics that complement ROC curve analysis. The results support our proposal and demonstrate that ROC curves by themselves do not provide a good visualization of detection performance. Moreover, a figure of merit is proposed that encompasses, in a single global metric, all the measures yielded by the proposed additional metrics. This figure, named Detection Efficiency (DE), takes into account several crucial types of performance assessment that ROC curves do not consider. The results demonstrate that the algorithms with the best detection performance according to ROC curves do not have the highest DE values. Consequently, the recommendation to use extra measures to properly evaluate performance is supported and justified by the conclusions drawn from the simulations.
NASA Astrophysics Data System (ADS)
Zhang, Xing; Wen, Gongjian
2015-10-01
Anomaly detection (AD) is increasingly important in hyperspectral imagery analysis, with many practical applications. The local orthogonal subspace projection (LOSP) detector is a popular anomaly detector that exploits local endmembers/eigenvectors around the pixel under test (PUT) to construct a background subspace. However, this subspace uses only spectral information; the spatial correlation of the background clutter is neglected, which makes the anomaly detection result sensitive to the accuracy of the estimated subspace. In this paper, a local three-dimensional orthogonal subspace projection (3D-LOSP) algorithm is proposed. First, using both spectral and spatial information jointly, three directional background subspaces are created along the image height direction, the image width direction, and the spectral direction, respectively. Then, the three corresponding orthogonal subspaces are calculated. After that, each vector of the local cube along each of the three directions is projected onto the corresponding orthogonal subspace. Finally, a composite score is given by combining the three directional operators. In 3D-LOSP, anomalies are redefined as targets that are not only spectrally different from the background but also spatially distinct. Thanks to the addition of spatial information, the robustness of the anomaly detection result is greatly improved by the proposed 3D-LOSP algorithm. Notably, the proposed algorithm is an extension of LOSP, and this idea can inspire many other spectral-based anomaly detection methods. Experiments with real hyperspectral images have demonstrated the stability of the detection results.
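The building block shared by LOSP and 3D-LOSP is the orthogonal subspace projection score: project the pixel onto the complement of the background subspace and measure the remaining energy. A minimal sketch of that one-direction score (the toy 4-band data and the choice of background columns are illustrative, not from the paper, and the 3D composite of three such scores is omitted):

```python
import numpy as np

def osp_score(pixel, background):
    """Orthogonal subspace projection anomaly score.

    background: (bands, m) matrix whose columns span the local background
    subspace (e.g. local endmembers or leading eigenvectors). The score is
    the energy of the pixel left after projecting out that subspace;
    background-like pixels score near zero.
    """
    # P = I - B (B^T B)^{-1} B^T projects onto the orthogonal complement of B.
    proj = np.eye(background.shape[0]) - background @ np.linalg.pinv(background)
    return float(np.linalg.norm(proj @ pixel))

# Toy example: 4 spectral bands, background spanning the first two bands.
background = np.eye(4)[:, :2]
score_in = osp_score(np.array([1.0, 2.0, 0.0, 0.0]), background)   # background-like
score_out = osp_score(np.array([0.0, 0.0, 1.0, 0.0]), background)  # anomalous direction
```

3D-LOSP, as described above, would compute one such score along each of the height, width, and spectral directions of the local cube and fuse the three into a composite.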
A multi-level anomaly detection algorithm for time-varying graph data with interactive visualization
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bridges, Robert A.; Collins, John P.; Ferragut, Erik M.
2016-01-01
This work presents a novel modeling and analysis framework for graph sequences which addresses the challenge of detecting and contextualizing anomalies in labelled, streaming graph data. We introduce a generalization of the BTER model of Seshadhri et al. by adding flexibility to community structure, and use this model to perform multi-scale graph anomaly detection. Specifically, probability models describing coarse subgraphs are built by aggregating node probabilities, and these related hierarchical models simultaneously detect deviations from expectation. This technique provides insight into a graph's structure and internal context that may shed light on a detected event. Additionally, this multi-scale analysis facilitates intuitive visualizations by allowing users to narrow focus from an anomalous graph to particular subgraphs or nodes causing the anomaly. For evaluation, two hierarchical anomaly detectors are tested against a baseline Gaussian method on a series of sampled graphs. We demonstrate that our graph statistics-based approach outperforms both a distribution-based detector and the baseline in a labeled setting with community structure, and it accurately detects anomalies in synthetic and real-world datasets at the node, subgraph, and graph levels. Furthermore, to illustrate the accessibility of information made possible via this technique, the anomaly detector and an associated interactive visualization tool are tested on NCAA football data, where teams and conferences that moved within the league are identified with perfect recall, and precision greater than 0.786.
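The core idea above, node-level probability models whose surprise scores aggregate upward to coarser levels, can be illustrated with a much simpler model than the generalized BTER of the paper. The sketch below estimates per-edge occurrence probabilities from a graph history, scores each node in a new graph by the negative log-probability of its incident edges, and sums node scores into a whole-graph score; the smoothing constant and the default probability for unseen edges are illustrative assumptions.

```python
import math
from collections import Counter

def edge_probabilities(history, smoothing=1.0):
    """Per-edge occurrence probability estimated from a sequence of graphs,
    each given as a set of undirected edges (u, v) with u < v."""
    counts = Counter(e for edges in history for e in edges)
    n = len(history)
    return {e: (counts[e] + smoothing) / (n + 2 * smoothing) for e in counts}

def node_scores(probs, graph, default_p=0.05):
    """Surprise (-log probability) accumulated at each node for one time
    step; previously unseen edges get a small default probability."""
    scores = Counter()
    for (u, v) in graph:
        s = -math.log(probs.get((u, v), default_p))
        scores[u] += s
        scores[v] += s
    return scores

def graph_score(probs, graph, default_p=0.05):
    """Coarse score: aggregate node-level surprise over the whole graph."""
    return sum(node_scores(probs, graph, default_p).values())

# Node 'd' suddenly links to everyone -> highest node-level surprise.
history = [{("a", "b"), ("b", "c")}] * 10
probs = edge_probabilities(history)
anomalous = {("a", "b"), ("b", "c"), ("a", "d"), ("b", "d"), ("c", "d")}
scores = node_scores(probs, anomalous)
```

The multi-scale drill-down the abstract describes corresponds to inspecting `scores` after `graph_score` flags a time step: the anomalous graph points to the subgraphs and nodes that contributed most of its surprise.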
Active Learning with Rationales for Identifying Operationally Significant Anomalies in Aviation
NASA Technical Reports Server (NTRS)
Sharma, Manali; Das, Kamalika; Bilgic, Mustafa; Matthews, Bryan; Nielsen, David Lynn; Oza, Nikunj C.
2016-01-01
A major focus of the commercial aviation community is discovery of unknown safety events in flight operations data. Data-driven unsupervised anomaly detection methods are better at capturing unknown safety events compared to rule-based methods which only look for known violations. However, not all statistical anomalies that are discovered by these unsupervised anomaly detection methods are operationally significant (e.g., represent a safety concern). Subject Matter Experts (SMEs) have to spend significant time reviewing these statistical anomalies individually to identify a few operationally significant ones. In this paper we propose an active learning algorithm that incorporates SME feedback in the form of rationales to build a classifier that can distinguish between uninteresting and operationally significant anomalies. Experimental evaluation on real aviation data shows that our approach improves detection of operationally significant events by as much as 75% compared to the state-of-the-art. The learnt classifier also generalizes well to additional validation data sets.
2007-02-01
almost identical system call sequences triggering the same alarm at different hosts. The alarm propagation effect can be used to distinguish "true alarms" from "false positives". At the host level, a new anomaly detection weight W is computed from the marginal observed transition counts T(i, j) and N(i).
A knowledge-based system for monitoring the electrical power system of the Hubble Space Telescope
NASA Technical Reports Server (NTRS)
Eddy, Pat
1987-01-01
The design and the prototype for the expert system for the Hubble Space Telescope's electrical power system are discussed. This prototype demonstrated the capability to use real time data from a 32k telemetry stream and to perform operational health and safety status monitoring, detect trends such as battery degradation, and detect anomalies such as solar array failures. This prototype, along with the pointing control system and data management system expert systems, forms the initial Telemetry Analysis for Lockheed Operated Spacecraft (TALOS) capability.
2015-09-17
SCADA and DCS systems are used in the electrical, water, oil, natural gas, manufacturing, and pharmaceutical industries, to name a few, and the differences between the two are often subtle. In one notable incident, the Saudi Arabian Oil Company, also known as Saudi Aramco, suffered huge data loss that resulted in the disruption of daily operations for nearly two weeks [BTR13].
Systems Modeling to Implement Integrated System Health Management Capability
NASA Technical Reports Server (NTRS)
Figueroa, Jorge F.; Walker, Mark; Morris, Jonathan; Smith, Harvey; Schmalzel, John
2007-01-01
ISHM capability includes: detection of anomalies, diagnosis of the causes of anomalies, prediction of future anomalies, and user interfaces that enable integrated awareness (past, present, and future) by users. This is achieved by focused management of data, information, and knowledge (DIaK) that will likely be distributed across networks. Management of DIaK implies storage, sharing (timely availability), maintenance, evolution, and processing. Processing of DIaK encapsulates strategies, methodologies, algorithms, etc., focused on achieving a high ISHM Functional Capability Level (FCL). A high FCL means a high degree of success in detecting anomalies, diagnosing causes, predicting future anomalies, and enabling integrated health awareness by the user. A model that enables ISHM capability, and hence DIaK management, is denominated the ISHM Model of the System (IMS). We describe aspects of the IMS that focus on processing of DIaK. Strategies, methodologies, and algorithms require proper context. We describe an approach to defining and using contexts, its implementation in an object-oriented software environment (G2), and validation using actual test data from a methane thruster test program at NASA SSC. Context is linked to the existence of relationships among elements of a system. For example, the context for a leak-detection strategy is to identify closed subsystems (e.g., bounded by closed valves and by tanks) that include pressure sensors, and to check whether the pressure is changing. We call these subsystems Pressurizable Subsystems. If pressure changes are detected, then all members of the closed subsystem become suspects for leakage. In this case, the context is defined by identifying a subsystem that is suitable for applying a strategy. Contexts are defined in many ways. Often, a context is defined by relationships of function (e.g., liquid flow, maintaining pressure), form (e.g., part of the same component, connected to other components), or space (e.g., physically close, touching the same common element). The context might be defined dynamically (if the conditions for the context appear and disappear dynamically) or statically. Although this approach is akin to case-based reasoning, we are implementing it using a software environment that embodies tools to define and manage relationships (of any nature) among objects in a very intuitive manner. Context for higher-level inferences (those that use detected anomalies or events), primarily for diagnosis and prognosis, is related to causal relationships. This is useful for developing root-cause analysis (RCA) trees showing an event linked to its possible causes and effects. The innovation pertaining to RCA trees encompasses the use of previously defined subsystems, as well as individual elements, in the tree. This approach allows more powerful implementations of RCA capability in object-oriented environments. For example, if a pressurizable subsystem is leaking, its root-cause representation within an RCA tree will show that the cause is that all elements of that subsystem are suspected of leaking. Such a tree applies to all instances of detected leak events and to all elements of all pressurizable subsystems in the system. Example subsystems in our environment for building the IMS include: Pressurizable Subsystem, Fluid-Fill Subsystem, Flow-Thru-Valve Subsystem, and Fluid Supply Subsystem. The software environment for the IMS is designed to allow, potentially, the definition of any relationship suitable for creating a context to achieve ISHM capability.
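The Pressurizable Subsystem strategy described above has a simple shape: once a closed subsystem with pressure sensors is identified, a pressure change makes every member a leak suspect. A minimal Python sketch of that logic (the class name mirrors the paper's term, but the fields, tolerance value, and example data are illustrative assumptions; the authors' implementation is in G2):

```python
from dataclasses import dataclass, field

@dataclass
class PressurizableSubsystem:
    """Context object: a closed subsystem bounded by closed valves and
    tanks, instrumented with at least one pressure sensor."""
    name: str
    members: list
    tolerance: float = 0.5          # allowed pressure drift (illustrative units)
    readings: list = field(default_factory=list)

    def record(self, pressure):
        self.readings.append(pressure)

    def leak_suspects(self):
        """If pressure changed while the subsystem was closed, every
        member becomes a suspect; otherwise no anomaly is declared."""
        if len(self.readings) < 2:
            return []
        drift = abs(self.readings[-1] - self.readings[0])
        return list(self.members) if drift > self.tolerance else []

sub = PressurizableSubsystem("feed-line", ["valve-1", "pipe-A", "tank-2"])
for p in [100.0, 99.1, 97.4]:        # steadily dropping pressure
    sub.record(p)
suspects = sub.leak_suspects()       # every member is now a leak suspect
```

This is also the hook into the RCA trees described above: the suspect list produced here is exactly the set of candidate causes the tree attaches to a leak event.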
NASA Astrophysics Data System (ADS)
Gittinger, Jaxon M.; Jimenez, Edward S.; Holswade, Erica A.; Nunna, Rahul S.
2017-02-01
This work demonstrates the implementation of traditional and non-traditional visualizations of x-ray images for aviation security applications that become feasible with open system architecture initiatives such as the Open Threat Assessment Platform (OTAP). Anomalies of interest to aviation security are fluid: their characteristic signals can evolve rapidly. OTAP is a limited-scope, open-architecture baggage screening prototype that intends to allow third-party vendors to develop and easily implement, integrate, and deploy detection algorithms and specialized hardware on a field-deployable screening technology [13]. In this study, stereoscopic images were created using an unmodified, field-deployed system and rendered on the Oculus Rift, a commercial virtual reality gaming headset. The example described in this work is not dependent on the Oculus Rift and is possible with any comparable hardware configuration capable of rendering stereoscopic images. The depth information provided by viewing the images will aid in the detection of characteristic signals from anomalies of interest. If successful, OTAP has the potential to let aviation security adapt more fluidly to the evolution of anomalies of interest. This work demonstrates one example, easily implemented on the OTAP platform, that could lead to future generations of ATR algorithms and data visualization approaches.
Mining IP to Domain Name Interactions to Detect DNS Flood Attacks on Recursive DNS Servers.
Alonso, Roberto; Monroy, Raúl; Trejo, Luis A
2016-08-17
The Domain Name System (DNS) is a critical infrastructure of any network and, not surprisingly, a common target of cybercrime. There are numerous works that analyse higher-level DNS traffic to detect anomalies in the DNS or in other network services. By contrast, few efforts have been made to study and protect the recursive DNS level. In this paper, we introduce a novel abstraction of recursive DNS traffic to detect a flooding attack, a kind of Distributed Denial of Service (DDoS). The crux of our abstraction lies in a simple observation: recursive DNS queries, from IP addresses to domain names, form social groups; hence, a DDoS attack should result in drastic changes to the DNS social structure. We have built an anomaly-based detection mechanism which, given a time window of DNS usage, uses features that attempt to capture the DNS social structure, including a heuristic that estimates group composition. Our detection mechanism has been successfully validated (in a simulated and controlled setting), and with it the suitability of our abstraction for detecting flooding attacks. To the best of our knowledge, this is the first work to successfully use this abstraction to detect these kinds of attacks at the recursive level. Before concluding the paper, we motivate further research directions for this new abstraction, supported by two additional experiments we designed and tested, which show promising results for detecting other types of anomalies in recursive DNS servers.
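The windowed, structure-based view described above can be made concrete with a toy feature extractor over (IP, domain) query pairs. The sketch below is not the paper's group-composition heuristic; it only illustrates the kind of per-window features (volume, distinct counts, entropy of the queried-domain distribution) that shift drastically under a flood, with all names and data invented for the example.

```python
import math
from collections import Counter

def window_features(queries):
    """Features of one time window of (ip, domain) query pairs.

    A flooding attack typically inflates query volume while collapsing
    the domain distribution (and hence its entropy) toward the targets.
    """
    domains = Counter(d for _, d in queries)
    total = sum(domains.values())
    entropy = -sum((c / total) * math.log2(c / total) for c in domains.values())
    return {
        "volume": total,
        "unique_ips": len({ip for ip, _ in queries}),
        "unique_domains": len(domains),
        "domain_entropy": entropy,
    }

normal = window_features([("ip1", "a.com"), ("ip2", "b.org"), ("ip3", "c.net")])
flood = window_features([("bot%d" % i, "victim.com") for i in range(1000)])
```

A detector in this style would learn the normal range of such features per window and flag windows, like `flood` above, whose feature vector falls far outside it.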
Recombinant Temporal Aberration Detection Algorithms for Enhanced Biosurveillance
Murphy, Sean Patrick; Burkom, Howard
2008-01-01
Objective: Broadly, this research aims to improve the outbreak detection performance and, therefore, the cost effectiveness of automated syndromic surveillance systems by building novel, recombinant temporal aberration detection algorithms from components of previously developed detectors. Methods: This study decomposes existing temporal aberration detection algorithms into two sequential stages and investigates the individual impact of each stage on outbreak detection performance. The data forecasting stage (Stage 1) generates predictions of time series values a certain number of time steps in the future based on historical data. The anomaly measure stage (Stage 2) compares features of this prediction to corresponding features of the actual time series to compute a statistical anomaly measure. A Monte Carlo simulation procedure is then used to examine the recombinant algorithms’ ability to detect synthetic aberrations injected into authentic syndromic time series. Results: New methods obtained with procedural components of published, sometimes widely used, algorithms were compared to the known methods using authentic datasets with plausible stochastic injected signals. Performance improvements were found for some of the recombinant methods, and these improvements were consistent over a range of data types, outbreak types, and outbreak sizes. For gradual outbreaks, the WEWD MovAvg7+WEWD Z-Score recombinant algorithm performed best; for sudden outbreaks, the HW+WEWD Z-Score performed best. Conclusion: This decomposition was found not only to yield valuable insight into the effects of the aberration detection algorithms but also to produce novel combinations of data forecasters and anomaly measures with enhanced detection performance. PMID:17947614
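The two-stage decomposition above can be sketched in a few lines (a minimal illustration with assumed data and thresholds; the WEWD weekday/weekend stratification and the specific forecasters from the paper are omitted): Stage 1 forecasts the next count with a 7-point moving average, Stage 2 turns the residual into a Z-score anomaly measure.

```python
import statistics

def moving_avg_forecast(history, window=7):
    """Stage 1: predict the next value as the mean of the last `window` points."""
    recent = history[-window:]
    return sum(recent) / len(recent)

def z_score_measure(history, observed, window=7):
    """Stage 2: anomaly measure = standardized residual of observed vs. forecast."""
    forecast = moving_avg_forecast(history, window)
    recent = history[-window:]
    sd = statistics.pstdev(recent) or 1.0   # guard against zero variance
    return (observed - forecast) / sd

baseline = [100, 98, 103, 97, 101, 99, 102]   # e.g. daily syndromic counts
quiet_day = z_score_measure(baseline, 101)
spike_day = z_score_measure(baseline, 130)    # injected outbreak-like signal
alert = spike_day > 3.0                       # alert threshold is our assumption
```

Because the two stages are decoupled, any forecaster (moving average, Holt-Winters, regression) can be recombined with any anomaly measure, which is exactly the recombination space the study explores.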
Space Shuttle Main Engine: Advanced Health Monitoring System
NASA Technical Reports Server (NTRS)
Singer, Chris
1999-01-01
The main goal of the Space Shuttle Main Engine (SSME) Advanced Health Management system is to improve flight safety. To this end, the new SSME has robust new components to improve the operating margin and operability. The features of the current SSME health monitoring system include automated checkouts, a closed-loop redundant control system, catastrophic failure mitigation, fail-operational/fail-safe algorithms, and post-flight data and inspection trend analysis. The features of the advanced health monitoring system include a real-time vibration monitoring system, a linear engine model, and an optical plume anomaly detection system. Since vibration is a fundamental measure of SSME turbopump health, it stands to reason that monitoring the vibration will give some idea of the health of the turbopumps. However, how is it possible to avoid a shutdown when one is not necessary? A sensor algorithm has been developed and exposed to over 400 test cases in order to evaluate its logic. The optical plume anomaly detection (OPAD) system has been developed to be a sensitive monitor of engine wear, erosion, and breakage.
Engine Data Interpretation System (EDIS), phase 2
NASA Technical Reports Server (NTRS)
Cost, Thomas L.; Hofmann, Martin O.
1991-01-01
A prototype of an expert system was developed which applies qualitative constraint-based reasoning to the task of post-test analysis of data resulting from a rocket engine firing. Data anomalies are detected and corresponding faults are diagnosed. Engine behavior is reconstructed using measured data and knowledge about engine behavior. Knowledge about common faults guides but does not restrict the search for the best explanation in terms of hypothesized faults. The system contains domain knowledge about the behavior of common rocket engine components and was configured for use with the Space Shuttle Main Engine (SSME). A graphical user interface allows an expert user to intimately interact with the system during diagnosis. The system was applied to data taken during actual SSME tests where data anomalies were observed.
Spiechowicz, Jakub; Łuczka, Jerzy; Hänggi, Peter
2016-01-01
We study far from equilibrium transport of a periodically driven inertial Brownian particle moving in a periodic potential. As detected for a SQUID ratchet dynamics, the mean square deviation of the particle position from its average may involve three distinct intermediate, although extended diffusive regimes: initially as superdiffusion, followed by subdiffusion and finally, normal diffusion in the asymptotic long time limit. Even though these anomalies are transient effects, their lifetime can be many, many orders of magnitude longer than the characteristic time scale of the setup and turns out to be extraordinarily sensitive to the system parameters like temperature or the potential asymmetry. In the paper we reveal mechanisms of diffusion anomalies related to ergodicity of the system, symmetry breaking of the periodic potential and ultraslow relaxation of the particle velocity towards its steady state. Similar sequences of the diffusive behaviours could be detected in various systems including, among others, colloidal particles in random potentials, glass forming liquids and granular gases. PMID:27492219
The Geopotential Research Mission - Mapping the near earth gravity and magnetic fields
NASA Technical Reports Server (NTRS)
Taylor, P. T.; Keating, T.; Smith, D. E.; Langel, R. A.; Schnetzler, C. C.; Kahn, W. D.
1983-01-01
The Geopotential Research Mission (GRM), NASA's low-altitude satellite system designed to measure the gravity and magnetic fields of the earth, and its objectives are described. The GRM will consist of two Shuttle-launched satellite systems (300 km apart) that will operate simultaneously in a 160 km circular polar orbit for six months. Current mission goals include mapping the global geoid to 10 cm, measuring gravity-field anomalies to 2 mgal with a spatial resolution of 100 km, detecting crustal magnetic anomalies of 100 km wavelength with 1 nT accuracy, measuring the vector components to ±5 arc sec and 5 nT, and computing the main dipole or core field to 5 nT with 2 nT/year secular variation detection. Resource analysis and exploration geology are additional applications considered.
Detection of anomaly in human retina using Laplacian Eigenmaps and vectorized matched filtering
NASA Astrophysics Data System (ADS)
Yacoubou Djima, Karamatou A.; Simonelli, Lucia D.; Cunningham, Denise; Czaja, Wojciech
2015-03-01
We present a novel method for automated anomaly detection on autofluorescence data provided by the National Institutes of Health (NIH). This is motivated by the need for new tools to improve the capability of diagnosing macular degeneration in its early stages, track the progression over time, and test the effectiveness of new treatment methods. In previous work, macular anomalies have been detected automatically through multiscale analysis procedures such as wavelet analysis or dimensionality reduction algorithms followed by a classification algorithm, e.g., Support Vector Machine. The method that we propose is a Vectorized Matched Filtering (VMF) algorithm combined with Laplacian Eigenmaps (LE), a nonlinear dimensionality reduction algorithm with locality-preserving properties. By applying LE, we are able to represent the data in the form of eigenimages, some of which accentuate the visibility of anomalies. We pick significant eigenimages and proceed with the VMF algorithm, which classifies anomalies across all of these eigenimages simultaneously. To evaluate our performance, we compare our method to two other schemes: a matched filtering algorithm based on anomaly detection on single images, and a combination of PCA and VMF. LE combined with the VMF algorithm performs best, yielding a high rate of accurate anomaly detection. This shows the advantage of using a nonlinear approach to represent the data and the effectiveness of VMF, which operates on the images as a data cube rather than individual images.
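The Laplacian Eigenmaps step can be sketched on toy data (a minimal illustration under assumed parameters: heat-kernel weights, unnormalized Laplacian, synthetic "patches"; the VMF classification stage from the paper is omitted). The anomalous points separate from the background cluster in the embedding, which is what makes the subsequent matched filtering effective.

```python
import numpy as np

def laplacian_eigenmaps(X, n_components=2, sigma=1.0):
    """Embed rows of X with Laplacian Eigenmaps: heat-kernel weights,
    unnormalized graph Laplacian, smallest nontrivial eigenvectors."""
    sq = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    W = np.exp(-sq / (2 * sigma ** 2))
    np.fill_diagonal(W, 0.0)
    L = np.diag(W.sum(axis=1)) - W          # unnormalized graph Laplacian
    vals, vecs = np.linalg.eigh(L)
    # Skip the trivial constant eigenvector (eigenvalue ~ 0).
    return vecs[:, 1:1 + n_components]

rng = np.random.default_rng(0)
normal = rng.normal(0.0, 0.1, size=(20, 5))    # background patch vectors
anomaly = rng.normal(3.0, 0.1, size=(2, 5))    # bright lesion-like patches
X = np.vstack([normal, anomaly])
Y = laplacian_eigenmaps(X)

# In the embedding, anomalies lie far from the background cluster centre.
center = Y[:20].mean(axis=0)
dist = np.linalg.norm(Y - center, axis=1)
flagged = np.argsort(dist)[-2:]                # two most distant points
```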
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dumidu Wijayasekara; Ondrej Linda; Milos Manic
Building Energy Management Systems (BEMSs) are essential components of modern buildings that utilize digital control technologies to minimize energy consumption while maintaining high levels of occupant comfort. However, BEMSs can only achieve these energy savings when properly tuned and controlled. Since the indoor environment depends on uncertain criteria such as weather, occupancy, and thermal state, BEMS performance can be sub-optimal at times. Unfortunately, the complexity of the BEMS control mechanism, the large amount of data available, and the inter-relations between the data can make identifying these sub-optimal behaviors difficult. This paper proposes a novel Fuzzy Anomaly Detection and Linguistic Description (Fuzzy-ADLD) based method for improving the understandability of BEMS behavior for improved state-awareness. The presented method is composed of two main parts: 1) detection of anomalous BEMS behavior and 2) linguistic representation of BEMS behavior. The first part utilizes a modified nearest-neighbor clustering algorithm and a fuzzy logic rule extraction technique to build a model of normal BEMS behavior. The second part computes the most relevant linguistic description of the identified anomalies. The presented Fuzzy-ADLD method was applied to a real-world BEMS and compared against a traditional alarm-based BEMS. In six different scenarios, the Fuzzy-ADLD method identified anomalous behavior either as fast as or faster (by an hour or more) than the alarm-based BEMS. In addition, the Fuzzy-ADLD method identified cases that were missed by the alarm-based system, demonstrating potential for increased state-awareness of abnormal building behavior.
Ares I-X Ground Diagnostic Prototype
NASA Technical Reports Server (NTRS)
Schwabacher, Mark A.; Martin, Rodney Alexander; Waterman, Robert D.; Oostdyk, Rebecca Lynn; Ossenfort, John P.; Matthews, Bryan
2010-01-01
The automation of pre-launch diagnostics for launch vehicles offers three potential benefits: improving safety, reducing cost, and reducing launch delays. The Ares I-X Ground Diagnostic Prototype demonstrated anomaly detection, fault detection, fault isolation, and diagnostics for the Ares I-X first-stage Thrust Vector Control and for the associated ground hydraulics while the vehicle was in the Vehicle Assembly Building at Kennedy Space Center (KSC) and while it was on the launch pad. The prototype combines three existing tools. The first tool, TEAMS (Testability Engineering and Maintenance System), is a model-based tool from Qualtech Systems Inc. for fault isolation and diagnostics. The second tool, SHINE (Spacecraft Health Inference Engine), is a rule-based expert system that was developed at the NASA Jet Propulsion Laboratory. We developed SHINE rules for fault detection and mode identification, and used the outputs of SHINE as inputs to TEAMS. The third tool, IMS (Inductive Monitoring System), is an anomaly detection tool that was developed at NASA Ames Research Center. The three tools were integrated and deployed to KSC, where they were interfaced with live data. This paper describes how the prototype performed during the period of time before the launch, including accuracy and computer resource usage. The paper concludes with some of the lessons that we learned from the experience of developing and deploying the prototype.
2013-09-26
vehicle-lengths between frames. The low specificity of object detectors in WAMI means all vehicle detections are treated equally. Motion clutter... timing of the anomaly. If an anomaly was detected, recent activity would have priority over older activity. This is due to the reasoning that if the... this could be a potential anomaly detected. Other baseline activities include normal work hours, religious observance times and interactions between
Autonomous detection of crowd anomalies in multiple-camera surveillance feeds
NASA Astrophysics Data System (ADS)
Nordlöf, Jonas; Andersson, Maria
2016-10-01
A novel approach for autonomous detection of anomalies in crowded environments is presented in this paper. The proposed model uses a Gaussian mixture probability hypothesis density (GM-PHD) filter as a feature extractor in conjunction with different Gaussian mixture hidden Markov models (GM-HMMs). Results, based on both simulated and recorded data, indicate that this method can track and detect anomalies on-line in individual crowds through multiple camera feeds in a crowded environment.
NASA Astrophysics Data System (ADS)
Chen, S.; Tao, C.; Li, H.; Zhou, J.; Deng, X.; Tao, W.; Zhang, G.; Liu, W.; He, Y.
2014-12-01
The Precious Stone Mountain hydrothermal field (PSMHF) is located on the southern rim of the Galapagos Microplate. It was discovered during the 3rd leg of the 2009 Chinese DY115-21 expedition on board R/V Dayangyihao. Detecting turbidity and temperature anomalies is an efficient way to map the distribution of hydrothermal plumes and locate hydrothermal vents. A deep-tow MAPR method for detecting seawater turbidity was established and improved during our cruises. We collected data recorded by MAPR and information from geological sampling, yielding the following results: (1) Strong hydrothermal turbidity and temperature anomalies were recorded at 1.23°N, southeast and northwest of the PSMHF. According to the CTD data on the mooring system, significant temperature anomalies were observed over the PSMHF at a depth of 1,470 m, ranging from 0.2°C to 0.4°C, providing further evidence of a hydrothermal plume. (2) At 1.23°N (101.4802°W/1.2305°N), the nose-shaped particle plume was concentrated at a depth interval of 1,400-1,600 m, with 200 m thickness and an east-west diffusion range of 500 m. The maximum turbidity anomaly (0.045 ΔNTU) was recorded at a depth of 1,500 m, while the background anomaly was about 0.01 ΔNTU. A distinct temperature anomaly was also detected at the seafloor near 1.23°N. Deep-tow camera images showed the area was piled with hydrothermal sulfide sediments. (3) In the southeast (101.49°W/1.21°N), the hydrothermal plume was 300 m thick and spread laterally at a depth of 1,500-1,800 m for a distance of about 800 m. The maximum turbidity anomaly of the nose-shaped plume was about 0.04 ΔNTU at a depth of 1,600 m. A distinct temperature anomaly was also detected in the northwest (101.515°W/1.235°N). (4) Terrain and bottom currents were the main factors controlling the distribution of the hydrothermal plume.
Unlike hydrothermal plumes on mid-ocean ridges, whose distribution is mostly affected by seafloor topography, the terrain of the PSMHF is relatively flat, so its impact was negligible. A southwestward bottom current of 0.05 m/s in the PSMHF had a great influence on the distribution and spreading direction of the hydrothermal plume. Keywords: hydrothermal plume, Precious Stone Mountain hydrothermal field, turbidity
Latent Space Tracking from Heterogeneous Data with an Application for Anomaly Detection
2015-11-01
specific, if the anomaly behaves as a sudden outlier after which the data stream goes back to a normal state, then the anomalous data point should be... introduced three types of anomalies, all of them sudden outliers... Table 2. Synthetic dataset: AUC and parameters... Latent Space Tracking from Heterogeneous Data with an Application for Anomaly Detection, Jiaji Huang and Xia Ning, Department of Electrical
Detection and Classification of Network Intrusions Using Hidden Markov Models
2002-01-01
...High-level state machines for misuse detection... EMERALD... Solaris host audit data to detect Solaris R2L (Remote-to-Local) and U2R (User-to-Root) attacks... login as a legitimate user on a local system and use... as suspicious rather than the entire login session, and it can detect some anomalies that are difficult to detect with traditional approaches.
Dispersive Phase in the L-band InSAR Image Associated with Heavy Rain Episodes
NASA Astrophysics Data System (ADS)
Furuya, M.; Kinoshita, Y.
2017-12-01
Interferometric synthetic aperture radar (InSAR) is a powerful geodetic technique that allows us to detect ground displacements with unprecedented spatial resolution, and has been used to detect displacements due to earthquakes, volcanic eruptions, and glacier motion. In the meantime, due to microwave propagation through the ionosphere and troposphere, we often encounter non-negligible phase anomalies in InSAR data. Correcting for the ionosphere and troposphere is therefore a long-standing issue for high-precision geodetic measurements. However, if ground displacements are negligible, an InSAR image can reveal details of the atmosphere. Kinoshita and Furuya (2017, SOLA) detected a phase anomaly in ALOS/PALSAR InSAR data associated with heavy rain over the Niigata area, Japan, and performed a numerical weather model simulation to reproduce the anomaly; ALOS/PALSAR is a satellite-based L-band SAR sensor launched by JAXA in 2006 and terminated in 2011. The phase anomaly could be largely reproduced using the output data from the weather model. However, we should note that numerical weather model outputs can only account for the non-dispersive effect in the phase anomaly. In the case of a severe weather event, we may expect a dispersive effect caused by the presence of free electrons. In Global Navigation Satellite System (GNSS) positioning, dual-frequency measurements allow us to separate the ionospheric dispersive component from the tropospheric non-dispersive component. In contrast, SAR imaging is based on a single carrier frequency, and thus no operational ionospheric corrections have been performed in InSAR data analyses. Recently, Gomba et al. (2016) detailed the processing strategy of the split spectrum method (SSM) for InSAR, which splits the finite bandwidth of the range spectrum and virtually allows for dual-frequency measurements. We apply the L-band InSAR SSM to heavy rain episodes in which more than 50 mm/hour precipitation was reported.
We report the presence of phase anomalies in both the dispersive and non-dispersive components. While the original phase anomaly turns out to be mostly due to the non-dispersive effect, we could recognize local anomalies in the dispersive component as well. We will discuss their geophysical implications and may show several case studies.
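The dual-frequency principle behind the split spectrum method can be illustrated with a toy calculation (this is a generic sketch of the physics, not the estimator of Gomba et al.; the frequencies and coefficients are invented): the non-dispersive (tropospheric) phase scales roughly with frequency f while the dispersive (ionospheric) phase scales with 1/f, so two sub-band phases suffice to solve for both terms.

```python
def split_spectrum(phi_low, phi_high, f_low, f_high, f0):
    """Separate a phase into non-dispersive (~f) and dispersive (~1/f) parts
    from two sub-band phases, assuming phi(f) = a*f + b/f.
    Returns (non-dispersive, dispersive) phase at the centre frequency f0."""
    # Solve the 2x2 linear system by Cramer's rule:
    #   phi_low  = a*f_low  + b/f_low
    #   phi_high = a*f_high + b/f_high
    det = f_low / f_high - f_high / f_low
    a = (phi_low / f_high - phi_high / f_low) / det   # non-dispersive coefficient
    b = (f_low * phi_high - f_high * phi_low) / det   # dispersive coefficient
    return a * f0, b / f0

# Synthetic check: build sub-band phases from known coefficients.
f_low, f_high, f0 = 1.2e9, 1.3e9, 1.25e9   # Hz, roughly L-band sub-bands
a_true, b_true = 1e-9, 5e9                  # arbitrary toy coefficients
phi_l = a_true * f_low + b_true / f_low
phi_h = a_true * f_high + b_true / f_high
nondisp, disp = split_spectrum(phi_l, phi_h, f_low, f_high, f0)
```

In practice the sub-bands are narrow relative to their separation, so noise in the sub-band phases is strongly amplified through the small determinant; this is why SSM outputs are usually heavily filtered.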
Conditional Outlier Detection for Clinical Alerting
Hauskrecht, Milos; Valko, Michal; Batal, Iyad; Clermont, Gilles; Visweswaran, Shyam; Cooper, Gregory F.
2010-01-01
We develop and evaluate a data-driven approach for detecting unusual (anomalous) patient-management actions using past patient cases stored in an electronic health record (EHR) system. Our hypothesis is that patient-management actions that are unusual with respect to past patients may be due to a potential error and that it is worthwhile to raise an alert if such a condition is encountered. We evaluate this hypothesis using data obtained from the electronic health records of 4,486 post-cardiac surgical patients. We base the evaluation on the opinions of a panel of experts. The results support that anomaly-based alerting can have reasonably low false alert rates and that stronger anomalies are correlated with higher alert rates. PMID:21346986
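The alerting idea above can be sketched with a deliberately simple frequency model (our own toy, not the authors' EHR model; conditions, actions, and the threshold are all invented): estimate how common a proposed action is for similar past patients and alert when it is rare.

```python
from collections import Counter, defaultdict

def fit_action_model(cases):
    """Estimate P(action | condition) by counting past patient cases."""
    counts = defaultdict(Counter)
    for condition, action in cases:
        counts[condition][action] += 1
    return counts

def alert(counts, condition, action, threshold=0.05):
    """Raise an alert when the proposed action is rare for this condition.
    Returns (alert_flag, estimated_probability)."""
    total = sum(counts[condition].values())
    p = counts[condition][action] / total if total else 0.0
    return p < threshold, p

# Toy history: 100 past post-cardiac-surgery cases.
past = ([("post_cardiac", "beta_blocker")] * 95
        + [("post_cardiac", "none")] * 5)
model = fit_action_model(past)
ok_alert, p_usual = alert(model, "post_cardiac", "beta_blocker")
odd_alert, p_rare = alert(model, "post_cardiac", "anticoagulant_high_dose")
```

The paper's conditional-outlier formulation is richer (it conditions on many patient features rather than a single label), but the alerting logic is the same: unusual-given-context, not unusual-in-general.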
Anomaly Detection for Beam Loss Maps in the Large Hadron Collider
NASA Astrophysics Data System (ADS)
Valentino, Gianluca; Bruce, Roderik; Redaelli, Stefano; Rossi, Roberto; Theodoropoulos, Panagiotis; Jaster-Merz, Sonja
2017-07-01
In the LHC, beam loss maps are used to validate collimator settings for cleaning and machine protection. This is done by monitoring the loss distribution in the ring during infrequent controlled loss map campaigns, as well as in standard operation. Due to the complexity of the system, consisting of more than 50 collimators per beam, it is difficult with such methods to identify small changes in the collimation hierarchy, which may be due to setting errors or beam orbit drifts. A technique based on Principal Component Analysis and Local Outlier Factor is presented to detect anomalies in the loss maps and therefore provide an automatic check of the collimation hierarchy.
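The Local Outlier Factor stage can be sketched in isolation (the PCA projection from the paper is omitted, and the 2-D "loss patterns", k, and variable names are our own assumptions): LOF scores a point by comparing its local density to that of its neighbours, so a pattern far from the dense cluster of normal loss maps scores well above 1.

```python
import math

def lof_scores(points, k=3):
    """Local Outlier Factor (Breunig et al.): a score >> 1 flags points whose
    local density is much lower than that of their neighbours."""
    n = len(points)
    neigh, kdist = [], []
    for i in range(n):
        ds = sorted((math.dist(points[i], points[j]), j)
                    for j in range(n) if j != i)
        neigh.append([j for _, j in ds[:k]])   # k nearest neighbours of i
        kdist.append(ds[k - 1][0])             # k-distance of i
    def reach(i, j):
        # Reachability distance of i from j.
        return max(kdist[j], math.dist(points[i], points[j]))
    lrd = [k / sum(reach(i, j) for j in neigh[i]) for i in range(n)]
    return [sum(lrd[j] for j in neigh[i]) / (k * lrd[i]) for i in range(n)]

# Toy "loss patterns": a dense 4x4 grid of normal maps plus one anomaly.
normal = [(x * 0.1, y * 0.1) for x in range(4) for y in range(4)]
anomalous = [(3.0, 3.0)]
scores = lof_scores(normal + anomalous)
worst = scores.index(max(scores))   # index of the most anomalous pattern
```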
Relationships between Rwandan seasonal rainfall anomalies and ENSO events
NASA Astrophysics Data System (ADS)
Muhire, I.; Ahmed, F.; Abutaleb, K.
2015-10-01
This study aims primarily at investigating the relationships between Rwandan seasonal rainfall anomalies and El Niño-Southern Oscillation (ENSO) events. The study is useful for early warning of negative effects associated with extreme rainfall anomalies across the country. It covers the period 1935-1992, using long and short rains data from 28 weather stations in Rwanda and ENSO events sourced from Glantz (2001). The mean standardized anomaly indices were calculated to investigate their associations with ENSO events. One-way analysis of variance was applied to the mean standardized anomaly index values per ENSO event to explore the spatial correlation of rainfall anomalies per ENSO event. A geographical information system was used to present spatially the variations in mean standardized anomaly indices per ENSO event. The results showed approximately three climatic periods, namely, a dry period (1935-1960), a semi-humid period (1961-1976) and a wet period (1977-1992). Though positive and negative correlations were detected between extreme short rains anomalies and El Niño events, La Niña events were mostly linked to negative rainfall anomalies while El Niño events were associated with positive rainfall anomalies. The occurrence of El Niño and La Niña in the same year does not show any clear association with rainfall anomalies. However, the phenomenon was more linked with positive long rains anomalies and negative short rains anomalies. The normal years were largely linked with negative long rains anomalies and positive short rains anomalies, which is a pointer to the influence of factors other than ENSO events. This makes projection of seasonal rainfall anomalies in the country by merely predicting ENSO events difficult.
NASA Technical Reports Server (NTRS)
Patrick, M. Clinton; Cooper, Anita E.; Powers, W. T.
2004-01-01
Researchers are working on many fronts to make possible high-speed, automated classification and quantification of constituent materials in numerous environments. NASA's Marshall Space Flight Center has implemented a system for rocket engine flow fields/plumes; the Optical Plume Anomaly Detection (OPAD) system was designed to utilize emission and absorption spectroscopy for monitoring molecular and atomic particulates in gas plasma. An accompanying suite of tools and an analytical package designed to utilize information collected by OPAD is known as the Engine Diagnostic Filtering System (EDIFIS). The current combination of these systems identifies atomic and molecular species and quantifies mass loss rates in H2/O2 rocket plumes. Additionally, efforts are being advanced to hardware-encode components of the EDIFIS in order to address real-time operational requirements for health monitoring and management. This paper addresses the OPAD with its tool suite, and discusses what is considered a natural progression: a concept for migrating OPAD towards detection of high-energy particles, including neutrons and gamma rays. The integration of these tools and capabilities will provide NASA with a systematic approach to monitoring the space vehicle's internal and external environment.
NASA Astrophysics Data System (ADS)
McEvoy, Thomas Richard; Wolthusen, Stephen D.
Recent research on intrusion detection in supervisory control and data acquisition (SCADA) and DCS systems has focused on anomaly detection at the protocol level, based on the well-defined nature of traffic on such networks. Here, we consider attacks which compromise sensors or actuators (including physical manipulation), where intrusion may not be readily apparent because data and computational states can be controlled to give an appearance of normality, and sensor and control systems have limited accuracy. To counter these, we propose to consider indirect relations between sensor readings to detect such attacks through concurrent observations as determined by control laws and constraints.
Innovative Sensors for Pipeline Crawlers: Rotating Permanent Magnet Inspection
DOE Office of Scientific and Technical Information (OSTI.GOV)
J. Bruce Nestleroth; Richard J. Davis; Stephanie Flamberg
2006-09-30
Internal inspection of pipelines is an important tool for ensuring safe and reliable delivery of fossil energy products. Current inspection systems that are propelled through the pipeline by the product flow cannot be used to inspect all pipelines because of the various physical barriers they may encounter. To facilitate inspection of these ''unpiggable'' pipelines, recent inspection development efforts have focused on a new generation of powered inspection platforms that are able to crawl slowly inside a pipeline and can maneuver past the physical barriers that limit internal inspection applicability, such as bore restrictions, low product flow rate, and low pressure. The first step in this research was to review existing inspection technologies for applicability and compatibility with crawler systems. Most existing inspection technologies, including magnetic flux leakage and ultrasonic methods, had significant implementation limitations, including mass, physical size, inspection energy coupling requirements, and technology maturity. The remote field technique was the most promising, but power consumption was high and anomaly signals were low, requiring sensitive detectors and electronics. After reviewing each inspection technology, it was decided to investigate the potential for a new inspection method. The new inspection method takes advantage of advances in permanent magnet strength, along with their wide availability and low cost. Called rotating permanent magnet inspection (RPMI), this patent-pending technology employs pairs of permanent magnets rotating around the central axis of a cylinder to induce high current densities in the material under inspection. Anomalies and wall thickness variations are detected with an array of sensors that measure local changes in the magnetic field produced by the induced current flowing in the material.
This inspection method is an alternative to the common concentric coil remote field technique that induces low-frequency eddy currents in ferromagnetic pipes and tubes. Since this is a new inspection method, both theory and experiment were used to determine fundamental capabilities and limitations. Fundamental finite element modeling analysis and experimental investigations performed during this development have led to the derivation of a first order analytical equation for designing rotating magnetizers to induce current and positioning sensors to record signals from anomalies. Experimental results confirm the analytical equation and the finite element calculations provide a firm basis for the design of RPMI systems. Experimental results have shown that metal loss anomalies and wall thickness variations can be detected with an array of sensors that measure local changes in the magnetic field produced by the induced current flowing in the material. The design exploits the phenomenon that circumferential currents are easily detectable at distances well away from the magnets. Current changes at anomalies were detectable with commercial low cost Hall Effect sensors. Commercial analog to digital converters can be used to measure the sensor output and data analysis can be performed in real time using PC computer systems. The technology was successfully demonstrated during two blind benchmark tests where numerous metal loss defects were detected. For this inspection technology, the detection threshold is a function of wall thickness and corrosion depth. For thinner materials, the detection threshold was experimentally shown to be comparable to magnetic flux leakage. For wall thicknesses greater than three tenths of an inch, the detection threshold increases with wall thickness. The potential for metal loss anomaly sizing was demonstrated in the second benchmarking study, again with accuracy comparable to existing magnetic flux leakage technologies. 
The rotating permanent magnet system has the potential for inspecting unpiggable pipelines since the magnetizer configurations can be sufficiently small with respect to the bore of the pipe to pass obstructions that limit the application of many inspection technologies. Also, since the largest dimension of the Hall Effect sensor is two tenths of an inch, the sensor packages can be small, flexible and light. The power consumption, on the order of ten watts, is low compared to some inspection systems; this would enable autonomous systems to inspect longer distances between charges. This project showed there are no technical barriers to building a field ready unit that can pass through narrow obstructions, such as plug valves. The next step in project implementation is to build a field ready unit that can begin to establish optimal performance capabilities including detection thresholds, sizing capability, and wall thickness limitations.
NASA Technical Reports Server (NTRS)
Kobayashi, Takahisa; Simon, Donald L.
2004-01-01
In this paper, an approach for in-flight fault detection and isolation (FDI) of aircraft engine sensors based on a bank of Kalman filters is developed. This approach utilizes multiple Kalman filters, each of which is designed based on a specific fault hypothesis. When the propulsion system experiences a fault, only one Kalman filter with the correct hypothesis is able to maintain the nominal estimation performance. Based on this knowledge, the isolation of faults is achieved. Since the propulsion system may experience component and actuator faults as well, a sensor FDI system must be robust in terms of avoiding misclassifications of any anomalies. The proposed approach utilizes a bank of (m+1) Kalman filters where m is the number of sensors being monitored. One Kalman filter is used for the detection of component and actuator faults while each of the other m filters detects a fault in a specific sensor. With this setup, the overall robustness of the sensor FDI system to anomalies is enhanced. Moreover, numerous component fault events can be accounted for by the FDI system. The sensor FDI system is applied to a commercial aircraft engine simulation, and its performance is evaluated at multiple power settings at a cruise operating point using various fault scenarios.
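The bank-of-filters idea described above can be sketched with a minimal scalar example. This is not the authors' engine model: the random-walk state, the noise levels, and the injected bias fault below are illustrative assumptions. Filter 0 uses all sensors, filter i excludes sensor i-1 under the hypothesis that it is faulty, and the filter whose innovations remain consistent with its noise model identifies the fault.

```python
import numpy as np

def kalman_bank_fdi(z, q=1e-3, r=1e-2, n_last=50):
    """Bank of m+1 scalar Kalman filters for sensor fault isolation.

    z is a (T, m) array of measurements of a random-walk state by m
    sensors. Filter 0 uses all sensors; filter i (i = 1..m) excludes
    sensor i-1, hypothesising it is faulty. Returns the mean weighted
    squared residual of each filter over its last n_last updates; the
    matched hypothesis yields the smallest statistic.
    """
    T, m = z.shape
    stats = np.zeros(m + 1)
    for i in range(m + 1):
        use = [j for j in range(m) if j != i - 1]  # i == 0 keeps all sensors
        x, P, wssr = 0.0, 1.0, []
        for t in range(T):
            P += q                                   # predict (random walk)
            for j in use:                            # sequential scalar updates
                S = P + r
                wssr.append((z[t, j] - x) ** 2 / S)  # normalized innovation
                K = P / S
                x += K * (z[t, j] - x)
                P *= 1 - K
        stats[i] = np.mean(wssr[-n_last:])
    return stats

# Illustrative run: three sensors, a bias fault injected on sensor 1.
rng = np.random.default_rng(0)
T, m = 200, 3
truth = np.cumsum(rng.normal(0.0, np.sqrt(1e-3), T))
z = truth[:, None] + rng.normal(0.0, 0.1, (T, m))
z[100:, 1] += 1.0                     # sensor 1 develops a bias at t = 100
stats = kalman_bank_fdi(z)
faulty = int(np.argmin(stats)) - 1    # filter index -> isolated sensor
```

Only the filter that excludes the biased sensor keeps nominal residuals, so `faulty` resolves to sensor 1. The actual system additionally dedicates one filter to component/actuator faults and uses full engine dynamics rather than a random walk.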
Robust and efficient anomaly detection using heterogeneous representations
NASA Astrophysics Data System (ADS)
Hu, Xing; Hu, Shiqiang; Xie, Jinhua; Zheng, Shiyou
2015-05-01
Various approaches have been proposed for video anomaly detection. Yet these approaches typically suffer from one or more limitations: they often characterize the pattern using its internal information, but ignore its external relationship which is important for local anomaly detection. Moreover, the high-dimensionality and the lack of robustness of pattern representation may lead to problems, including overfitting, increased computational cost and memory requirements, and high false alarm rate. We propose a video anomaly detection framework which relies on a heterogeneous representation to account for both the pattern's internal information and external relationship. The internal information is characterized by slow features learned by slow feature analysis from low-level representations, and the external relationship is characterized by the spatial contextual distances. The heterogeneous representation is compact, robust, efficient, and discriminative for anomaly detection. Moreover, both the pattern's internal information and external relationship can be taken into account in the proposed framework. Extensive experiments demonstrate the robustness and efficiency of our approach by comparison with the state-of-the-art approaches on the widely used benchmark datasets.
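As a rough illustration of the "internal information" component, here is a minimal linear slow feature analysis (SFA) in NumPy. The two-channel mixture and its frequencies are invented for the demo; the paper applies SFA to low-level video representations, not toy signals.

```python
import numpy as np

def slow_feature_analysis(X, n_features=2):
    """Linear SFA sketch: find projections of X that vary most slowly in time.

    X : (T, d) multivariate time series (rows ordered in time).
    Returns the (T, n_features) slow features and the projection matrix.
    """
    X = X - X.mean(axis=0)
    # Whiten the input so projections are decorrelated with unit variance.
    cov = np.cov(X, rowvar=False)
    evals, evecs = np.linalg.eigh(cov)
    W_white = evecs / np.sqrt(evals)        # d x d whitening matrix
    Z = X @ W_white
    # Minimise the variance of the temporal derivative of the projections.
    dZ = np.diff(Z, axis=0)
    dvals, dvecs = np.linalg.eigh(np.cov(dZ, rowvar=False))  # ascending: slowest first
    W = W_white @ dvecs[:, :n_features]
    return X @ W, W

# Demo: a slow sine hidden in a fast-varying two-channel mixture.
t = np.linspace(0, 2 * np.pi, 500)
slow, fast = np.sin(t), np.sin(29 * t)
mix = np.column_stack([slow + 0.5 * fast, 0.5 * slow - fast])
feats, W = slow_feature_analysis(mix, n_features=1)
corr = abs(np.corrcoef(feats[:, 0], slow)[0, 1])
```

The slowest extracted feature recovers the slow source up to sign and scale, which is the property exploited when slow features are learned from video patches.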
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rohrer, Brandon Robinson
2011-09-01
Events of interest to data analysts are sometimes difficult to characterize in detail. Rather, they consist of anomalies: events that are unpredicted, unusual, or otherwise incongruent. The purpose of this LDRD was to test the hypothesis that a biologically inspired anomaly detection algorithm could be used to detect contextual, multi-modal anomalies. There currently is no other solution to this problem, but the existence of a solution would have a great national security impact. The technical focus of this research was the application of a brain-emulating cognition and control architecture (BECCA) to the problem of anomaly detection. One aspect of BECCA in particular was discovered to be critical to improved anomaly detection capabilities: its feature creator. During the course of this project the feature creator was developed and tested against multiple data types. Development direction was drawn from psychological and neurophysiological measurements. Major technical achievements include the creation of hierarchical feature sets from both audio and imagery data.
Randomized subspace-based robust principal component analysis for hyperspectral anomaly detection
NASA Astrophysics Data System (ADS)
Sun, Weiwei; Yang, Gang; Li, Jialin; Zhang, Dianfa
2018-01-01
A randomized subspace-based robust principal component analysis (RSRPCA) method for anomaly detection in hyperspectral imagery (HSI) is proposed. RSRPCA combines the advantages of randomized column subspaces and robust principal component analysis (RPCA). It assumes that the background has low-rank properties and that the anomalies are sparse and do not lie in the column subspace of the background. First, RSRPCA implements random sampling to sketch the original HSI dataset from columns and to construct a randomized column subspace of the background. Structured random projections are also adopted to sketch the HSI dataset from rows. Sketching from columns and rows greatly reduces the computational requirements of RSRPCA. Second, RSRPCA adopts columnwise RPCA (CWRPCA) to eliminate the negative effects of sampled anomaly pixels and to purify the previous randomized column subspace by removing sampled anomaly columns. The CWRPCA decomposes the submatrix of the HSI data into a low-rank matrix (i.e., the background component), a noisy matrix (i.e., the noise component), and a sparse anomaly matrix (i.e., the anomaly component) with only a small proportion of nonzero columns. The inexact augmented Lagrange multiplier algorithm is utilized to optimize the CWRPCA problem and estimate the sparse matrix. Nonzero columns of the sparse anomaly matrix point to sampled anomaly columns in the submatrix. Third, all the pixels are projected onto the complementary subspace of the purified randomized column subspace of the background, and the anomaly pixels in the original HSI data are finally located exactly. Several experiments on three real hyperspectral images are carefully designed to investigate the detection performance of RSRPCA, and the results are compared with those of four state-of-the-art methods. Experimental results show that the proposed RSRPCA outperforms the four comparison methods in both detection performance and computational time.
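The low-rank/sparse decomposition at the core of this family of methods can be sketched with a generic inexact augmented Lagrange multiplier (IALM) solver for plain RPCA. This is the standard baseline, not the paper's columnwise or randomized variants; the matrix sizes and parameter defaults below are common illustrative choices.

```python
import numpy as np

def rpca_ialm(M, lam=None, mu=None, tol=1e-7, max_iter=500):
    """Robust PCA via the inexact augmented Lagrange multiplier method.

    Decomposes M into a low-rank part L (background) and a sparse
    part S (anomalies), minimising ||L||_* + lam * ||S||_1 s.t. M = L + S.
    """
    m, n = M.shape
    lam = lam or 1.0 / np.sqrt(max(m, n))
    norm_M = np.linalg.norm(M)
    mu = mu or 1.25 / np.linalg.norm(M, 2)
    Y = np.zeros_like(M)          # Lagrange multiplier
    S = np.zeros_like(M)
    for _ in range(max_iter):
        # Singular value thresholding -> low-rank update.
        U, sig, Vt = np.linalg.svd(M - S + Y / mu, full_matrices=False)
        L = U @ (np.maximum(sig - 1.0 / mu, 0.0)[:, None] * Vt)
        # Elementwise soft thresholding -> sparse update.
        R = M - L + Y / mu
        S = np.sign(R) * np.maximum(np.abs(R) - lam / mu, 0.0)
        Y += mu * (M - L - S)
        mu *= 1.5
        if np.linalg.norm(M - L - S) / norm_M < tol:
            break
    return L, S

# Demo: rank-2 "background" plus a few large sparse "anomalies".
rng = np.random.default_rng(2)
L0 = rng.normal(size=(60, 2)) @ rng.normal(size=(2, 40))
S0 = np.zeros((60, 40))
S0.flat[rng.choice(60 * 40, 30, replace=False)] = rng.normal(0, 10, 30)
L, S = rpca_ialm(L0 + S0)
rank = np.linalg.matrix_rank(L, tol=1e-3)
```

With a sufficiently incoherent low-rank part and sufficiently sparse corruption, the decomposition is recovered essentially exactly, which is what lets nonzero columns of the sparse term flag anomaly columns.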
Visualization of hyperspectral imagery
NASA Astrophysics Data System (ADS)
Hogervorst, Maarten A.; Bijl, Piet; Toet, Alexander
2007-04-01
We developed four new techniques to visualize hyperspectral image data for man-in-the-loop target detection. The methods, respectively: (1) display the subsequent bands as a movie ("movie"), (2) map the data onto three channels and display these as a colour image ("colour"), (3) display the correlation between the pixel signatures and a known target signature ("match"), and (4) display the output of a standard anomaly detector ("anomaly"). The movie technique requires no assumptions about the target signature and involves no information loss. The colour technique produces a single image that can be displayed in real time; a disadvantage of this technique is loss of information. A display of the match between a target signature and the pixels can be interpreted easily and quickly, but this technique relies on precise knowledge of the target signature. The anomaly detector highlights pixels with signatures that deviate from the (local) background. We performed a target detection experiment with human observers to determine their relative performance with the four techniques. The results show that the "match" presentation yields the best performance, followed by "movie" and "anomaly", while performance with the "colour" presentation was the poorest. Each scheme has its advantages and disadvantages and is more or less suited for real-time or post-hoc processing. The rationale is that the final interpretation is best done by a human observer. In contrast to automatic target recognition systems, the interpretation of hyperspectral imagery by the human visual system is robust to noise and image transformations and requires a minimal number of assumptions (about the signatures of target and background, target shape, etc.). When more knowledge about target and background is available, it may be used to help the observer interpret the data (aided target detection).
NASA Technical Reports Server (NTRS)
Lo, C. F.; Wu, K.; Whitehead, B. A.
1993-01-01
Statistical and neural network methods have been applied to investigate the feasibility of detecting anomalies in turbopump vibration of the SSME. The anomalies are detected based on the amplitudes of peaks of the fundamental and harmonic frequencies in the power spectral density. These data are reduced to the proper format from sensor data measured by strain gauges and accelerometers. Both methods are feasible for detecting the vibration anomalies. The statistical method requires sufficient data points to establish a reasonable statistical distribution data bank; this method is applicable to on-line operation. The neural network method likewise needs enough data to train the networks. The testing procedure can be utilized at any time, so long as the characteristics of the components remain unchanged.
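A minimal version of the statistical route can be sketched as a z-score test on a PSD peak amplitude against a bank of nominal firings. The sampling rate, fundamental frequency, bank size, and fault gain below are invented for the demo; the real data bank is built from strain-gauge and accelerometer records.

```python
import numpy as np

def psd_peak_amplitude(signal, f0, fs):
    """Power spectral density at the bin nearest to frequency f0."""
    psd = np.abs(np.fft.rfft(signal)) ** 2 / len(signal)
    freqs = np.fft.rfftfreq(len(signal), 1.0 / fs)
    return psd[np.argmin(np.abs(freqs - f0))]

def is_anomalous(amplitude, baseline, k=3.0):
    """Flag amplitudes more than k standard deviations from the bank mean."""
    mu, sd = np.mean(baseline), np.std(baseline)
    return abs(amplitude - mu) > k * sd

# Demo: a bank of nominal "firings" vs one with an amplified fundamental.
rng = np.random.default_rng(3)
fs, f0, n = 1000.0, 50.0, 2048
t = np.arange(n) / fs

def firing(gain):
    return gain * np.sin(2 * np.pi * f0 * t) + rng.normal(0, 0.3, n)

baseline = [psd_peak_amplitude(firing(1.0), f0, fs) for _ in range(30)]
nominal = psd_peak_amplitude(firing(1.0), f0, fs)
faulty = psd_peak_amplitude(firing(3.0), f0, fs)
```

Tripling the fundamental's amplitude multiplies the peak power by roughly nine, far outside the spread of the nominal bank, so the faulty firing is flagged while nominal ones are not.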
A System for Fault Management for NASA's Deep Space Habitat
NASA Technical Reports Server (NTRS)
Colombano, Silvano P.; Spirkovska, Liljana; Aaseng, Gordon B.; Mccann, Robert S.; Baskaran, Vijayakumar; Ossenfort, John P.; Smith, Irene Skupniewicz; Iverson, David L.; Schwabacher, Mark A.
2013-01-01
NASA's exploration program envisions the utilization of a Deep Space Habitat (DSH) for human exploration of the space environment in the vicinity of Mars and/or asteroids. Communication latencies with ground control of as long as 20+ minutes make it imperative that DSH operations be highly autonomous, as any telemetry-based detection of a systems problem on Earth could well occur too late to assist the crew with the problem. A DSH-based development program has been initiated to develop and test the automation technologies necessary to support highly autonomous DSH operations. One such technology is a fault management tool to support performance monitoring of vehicle systems operations and to assist with real-time decision making in connection with operational anomalies and failures. Toward that end, we are developing Advanced Caution and Warning System (ACAWS), a tool that combines dynamic and interactive graphical representations of spacecraft systems, systems modeling, automated diagnostic analysis and root cause identification, system and mission impact assessment, and mitigation procedure identification to help spacecraft operators (both flight controllers and crew) understand and respond to anomalies more effectively. In this paper, we describe four major architecture elements of ACAWS: Anomaly Detection, Fault Isolation, System Effects Analysis, and Graphic User Interface (GUI), and how these elements work in concert with each other and with other tools to provide fault management support to both the controllers and crew. We then describe recent evaluations and tests of ACAWS on the DSH testbed. The results of these tests support the feasibility and strength of our approach to failure management automation and enhanced operational autonomy.
A System for Fault Management and Fault Consequences Analysis for NASA's Deep Space Habitat
NASA Technical Reports Server (NTRS)
Colombano, Silvano; Spirkovska, Liljana; Baskaran, Vijaykumar; Aaseng, Gordon; McCann, Robert S.; Ossenfort, John; Smith, Irene; Iverson, David L.; Schwabacher, Mark
2013-01-01
NASA's exploration program envisions the utilization of a Deep Space Habitat (DSH) for human exploration of the space environment in the vicinity of Mars and/or asteroids. Communication latencies with ground control of as long as 20+ minutes make it imperative that DSH operations be highly autonomous, as any telemetry-based detection of a systems problem on Earth could well occur too late to assist the crew with the problem. A DSH-based development program has been initiated to develop and test the automation technologies necessary to support highly autonomous DSH operations. One such technology is a fault management tool to support performance monitoring of vehicle systems operations and to assist with real-time decision making in connection with operational anomalies and failures. Toward that end, we are developing Advanced Caution and Warning System (ACAWS), a tool that combines dynamic and interactive graphical representations of spacecraft systems, systems modeling, automated diagnostic analysis and root cause identification, system and mission impact assessment, and mitigation procedure identification to help spacecraft operators (both flight controllers and crew) understand and respond to anomalies more effectively. In this paper, we describe four major architecture elements of ACAWS: Anomaly Detection, Fault Isolation, System Effects Analysis, and Graphic User Interface (GUI), and how these elements work in concert with each other and with other tools to provide fault management support to both the controllers and crew. We then describe recent evaluations and tests of ACAWS on the DSH testbed. The results of these tests support the feasibility and strength of our approach to failure management automation and enhanced operational autonomy.
Investigation of a Neural Network Implementation of a TCP Packet Anomaly Detection System
2004-05-01
recognize new attack variants. Artificial neural networks (ANNs) have the capability to learn from patterns and to... Computational Intelligence Techniques in Intrusion Detection Systems. In IASTED International Conference on Neural Networks and Computational Intelligence, pp... Neural Network Training: Overfitting May be Harder than Expected. In Proceedings of the Fourteenth National Conference on Artificial Intelligence, AAAI-97
Robust and Accurate Anomaly Detection in ECG Artifacts Using Time Series Motif Discovery
Sivaraks, Haemwaan
2015-01-01
Electrocardiogram (ECG) anomaly detection is an important technique for detecting dissimilar heartbeats, which helps identify abnormal ECGs before the diagnosis process. Currently available ECG anomaly detection methods, ranging from academic research to commercial ECG machines, still suffer from a high false alarm rate because these methods are not able to differentiate ECG artifacts from real ECG signals, especially ECG artifacts that are similar to ECG signals in terms of shape and/or frequency. The problem leads to high vigilance for physicians and misinterpretation risk for nonspecialists. Therefore, this work proposes a novel anomaly detection technique that is highly robust and accurate in the presence of ECG artifacts, which can effectively reduce the false alarm rate. Expert knowledge from cardiologists and a motif discovery technique are utilized in our design. In addition, every step of the algorithm conforms to the interpretation of cardiologists. Our method can be applied to both single-lead and multilead ECGs. Our experimental results on real ECG datasets are interpreted and evaluated by cardiologists. Our proposed algorithm can mostly achieve 100% accuracy on detection (AoD), sensitivity, specificity, and positive predictive value with a 0% false alarm rate. The results demonstrate that our proposed method is highly accurate and robust to artifacts, compared with competitive anomaly detection methods. PMID:25688284
Gordon, J. J.; Gardner, J. K.; Wang, S.; Siebers, J. V.
2012-01-01
Purpose: This work uses repeat images of intensity modulated radiation therapy (IMRT) fields to quantify fluence anomalies (i.e., delivery errors) that can be reliably detected in electronic portal images used for IMRT pretreatment quality assurance. Methods: Repeat images of 11 clinical IMRT fields are acquired on a Varian Trilogy linear accelerator at energies of 6 MV and 18 MV. Acquired images are corrected for output variations and registered to minimize the impact of linear accelerator and electronic portal imaging device (EPID) positioning deviations. Detection studies are performed in which rectangular anomalies of various sizes are inserted into the images. The performance of detection strategies based on pixel intensity deviations (PIDs) and gamma indices is evaluated using receiver operating characteristic analysis. Results: Residual differences between registered images are due to interfraction positional deviations of jaws and multileaf collimator leaves, plus imager noise. Positional deviations produce large intensity differences that degrade anomaly detection. Gradient effects are suppressed in PIDs using gradient scaling. Background noise is suppressed using median filtering. In the majority of images, PID-based detection strategies can reliably detect fluence anomalies of ≥5% in ∼1 mm2 areas and ≥2% in ∼20 mm2 areas. Conclusions: The ability to detect small dose differences (≤2%) depends strongly on the level of background noise. This in turn depends on the accuracy of image registration, the quality of the reference image, and field properties. The longer term aim of this work is to develop accurate and reliable methods of detecting IMRT delivery errors and variations. The ability to resolve small anomalies will allow the accuracy of advanced treatment techniques, such as image guided, adaptive, and arc therapies, to be quantified. PMID:22894421
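The pixel-intensity-deviation strategy with median filtering can be sketched as follows. The image size, noise level, and the 2% threshold are illustrative stand-ins for the EPID data, and gradient scaling and image registration are omitted.

```python
import numpy as np

def median3x3(img):
    """3x3 median filter (edge pixels keep their value), plain NumPy."""
    out = img.copy()
    stack = [img[i:img.shape[0] - 2 + i, j:img.shape[1] - 2 + j]
             for i in range(3) for j in range(3)]
    out[1:-1, 1:-1] = np.median(np.stack(stack), axis=0)
    return out

def detect_pid_anomalies(image, reference, threshold=0.02):
    """Flag pixels whose median-filtered relative intensity deviation
    from the reference exceeds `threshold` (e.g. 2%)."""
    pid = (image - reference) / np.maximum(reference, 1e-6)
    return np.abs(median3x3(pid)) > threshold

# Demo: a 5% fluence anomaly in a noisy repeat image.
rng = np.random.default_rng(4)
ref = np.ones((64, 64))
img = ref + rng.normal(0, 0.005, ref.shape)  # 0.5% imager noise
img[20:30, 20:30] *= 1.05                    # 5% anomaly region
mask = detect_pid_anomalies(img, ref, threshold=0.02)
```

Median filtering suppresses isolated noise spikes so the threshold can sit close to the noise floor, which is why the detectable anomaly magnitude in the study shrinks as the flagged area grows.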
Implementing Classification on a Munitions Response Project
2011-12-01
Briefing fragments (recoverable content): detection dig list; IVS/seed site planning decisions; dig-all-anomalies site characterization. Demonstration details: seed emplacement; EM61-MK2 detection survey with RTK GPS; selection of anomalies for further investigation; cued data collection using MetalMapper. A detection threshold of 5.2 mV in channel 2 yielded 938 selected anomalies; all QC seeds were detected at this threshold, some just inside the 60-cm halo; IVS reproducibility.
Szczałuba, Krzysztof; Nowakowska, Beata; Sobecka, Katarzyna; Smyk, Marta; Castaneda, Jennifer; Klapecki, Jakub; Kutkowska-Kaźmierczak, Anna; Śmigiel, Robert; Bocian, Ewa; Radkowski, Marek; Demkow, Urszula
2016-01-01
Major congenital anomalies are detectable in 2-3 % of the newborn population. Some of their genetic causes are attributable to copy number variations identified by array comparative genomic hybridization (aCGH). The value of aCGH screening as a first-tier test in children with multiple congenital anomalies has been studied and consensus adopted. However, array resolution has not been agreed upon, specifically in the newborn or infant population. Moreover, most array studies have been focused on mixed populations of intellectual disability/developmental delay with or without multiple congenital anomalies, making it difficult to assess the value of microarrays in newborns. The aim of the study was to determine the optimal quality and clinical sensitivity of high-resolution array comparative genomic hybridization in neonates with multiple congenital anomalies. We investigated a group of 54 newborns with multiple congenital anomalies defined as two or more birth defects from more than one organ system. Cytogenetic studies were performed using OGT CytoSure 8 × 60 K microarray. We found ten rearrangements in ten newborns. Of these, one recurrent syndromic microduplication was observed, whereas all other changes were unique. Six rearrangements were definitely pathogenic, including one submicroscopic and five that could be seen on routine karyotype analysis. Four other copy number variants were likely pathogenic. The candidate genes that may explain the phenotype were discussed. In conclusion, high-resolution array comparative hybridization can be applied successfully in newborns with multiple congenital anomalies as the method detects a significant number of pathogenic changes, resulting in early diagnoses. We hypothesize that small changes previously considered benign or even inherited rearrangements should be classified as potentially pathogenic at least until a subsequent clinical assessment would exclude a developmental delay or dysmorphism.
Anomaly detection in hyperspectral imagery: statistics vs. graph-based algorithms
NASA Astrophysics Data System (ADS)
Berkson, Emily E.; Messinger, David W.
2016-05-01
Anomaly detection (AD) algorithms are frequently applied to hyperspectral imagery, but different algorithms produce different outlier results depending on the image scene content and the assumed background model. This work provides the first comparison of anomaly score distributions between common statistics-based anomaly detection algorithms (RX and subspace-RX) and the graph-based Topological Anomaly Detector (TAD). Anomaly scores in statistical AD algorithms should theoretically approximate a chi-squared distribution; however, this is rarely the case with real hyperspectral imagery. The expected distribution of scores found with graph-based methods remains unclear. We also look for general trends in algorithm performance with varied scene content. Three separate scenes were extracted from the hyperspectral MegaScene image taken over downtown Rochester, NY with the VIS-NIR-SWIR ProSpecTIR instrument. In order of most to least cluttered, we study an urban, suburban, and rural scene. The three AD algorithms were applied to each scene, and the distributions of the most anomalous 5% of pixels were compared. We find that subspace-RX performs better than RX, because the data becomes more normal when the highest variance principal components are removed. We also see that compared to statistical detectors, anomalies detected by TAD are easier to separate from the background. Due to their different underlying assumptions, the statistical and graph-based algorithms highlighted different anomalies within the urban scene. These results will lead to a deeper understanding of these algorithms and their applicability across different types of imagery.
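For reference, the statistical baseline discussed above (global RX) reduces to a per-pixel Mahalanobis distance from the scene statistics. This sketch uses a synthetic Gaussian background with an invented target, not the MegaScene data.

```python
import numpy as np

def rx_detector(cube):
    """Global RX anomaly detector: Mahalanobis distance of each pixel
    spectrum from the scene mean, under scene-wide covariance."""
    h, w, b = cube.shape
    X = cube.reshape(-1, b)
    mu = X.mean(axis=0)
    cov_inv = np.linalg.inv(np.cov(X, rowvar=False))
    d = X - mu
    scores = np.einsum('ij,jk,ik->i', d, cov_inv, d)  # Mahalanobis^2
    return scores.reshape(h, w)

# Demo: Gaussian background plus one spectrally shifted target pixel.
rng = np.random.default_rng(5)
cube = rng.multivariate_normal(np.zeros(8), np.eye(8), (32, 32))
cube[10, 10] += 5.0                  # shift the target spectrum in all bands
scores = rx_detector(cube)
peak = np.unravel_index(np.argmax(scores), scores.shape)
```

Under an exactly Gaussian background, these scores follow the chi-squared distribution with one degree of freedom per band, which is the theoretical behavior the abstract notes is rarely observed in real imagery.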
Anomaly Detection in Nanofibrous Materials by CNN-Based Self-Similarity.
Napoletano, Paolo; Piccoli, Flavio; Schettini, Raimondo
2018-01-12
Automatic detection and localization of anomalies in nanofibrous materials help to reduce the cost of the production process and the time of the post-production visual inspection process. Amongst all the monitoring methods, those exploiting Scanning Electron Microscope (SEM) imaging are the most effective. In this paper, we propose a region-based method for the detection and localization of anomalies in SEM images, based on Convolutional Neural Networks (CNNs) and self-similarity. The method evaluates the degree of abnormality of each subregion of an image under consideration by computing a CNN-based visual similarity with respect to a dictionary of anomaly-free subregions belonging to a training set. The proposed method outperforms the state of the art.
Networked gamma radiation detection system for tactical deployment
NASA Astrophysics Data System (ADS)
Mukhopadhyay, Sanjoy; Maurer, Richard; Wolff, Ronald; Smith, Ethan; Guss, Paul; Mitchell, Stephen
2015-08-01
A networked gamma radiation detection system with directional sensitivity and energy spectral data acquisition capability is being developed by the National Security Technologies, LLC, Remote Sensing Laboratory to support the close and intense tactical engagement of law enforcement who carry out counterterrorism missions. In the proposed design, three clusters of 2″ × 4″ × 16″ sodium iodide crystals (4 each) with digiBASE-E (for list mode data collection) would be placed on the passenger side of a minivan. To enhance localization and facilitate rapid identification of isotopes, advanced smart real-time localization and radioisotope identification algorithms like WAVRAD (wavelet-assisted variance reduction for anomaly detection) and NSCRAD (nuisance-rejection spectral comparison ratio anomaly detection) will be incorporated. We will test a collection of algorithms and analysis that centers on the problem of radiation detection with a distributed sensor network. We will study the basic characteristics of a radiation sensor network and focus on the trade-offs between false positive alarm rates, true positive alarm rates, and time to detect multiple radiation sources in a large area. Empirical and simulation analyses of critical system parameters, such as number of sensors, sensor placement, and sensor response functions, will be examined. This networked system will provide an integrated radiation detection architecture and framework with (i) a large nationally recognized search database equivalent that would help generate a common operational picture in a major radiological crisis; (ii) a robust reach back connectivity for search data to be evaluated by home teams; and, finally, (iii) a possibility of integrating search data from multi-agency responders.
NASA Technical Reports Server (NTRS)
Kobayashi, Takahisa; Simon, Donald L.
2008-01-01
In this paper, a baseline system which utilizes dual-channel sensor measurements for aircraft engine on-line diagnostics is developed. This system is composed of a linear on-board engine model (LOBEM) and fault detection and isolation (FDI) logic. The LOBEM provides the analytical third channel against which the dual-channel measurements are compared. When the discrepancy among the triplex channels exceeds a tolerance level, the FDI logic determines the cause of the discrepancy. Through this approach, the baseline system achieves the following objectives: (1) anomaly detection, (2) component fault detection, and (3) sensor fault detection and isolation. The performance of the baseline system is evaluated in a simulation environment using faults in sensors and components.
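The triplex comparison logic can be sketched as a simple pairwise vote. The function name, return labels, and single tolerance are invented for illustration; the actual system compares full sensor suites against the LOBEM with fault-specific thresholds.

```python
def triplex_fdi(ch_a, ch_b, model, tol):
    """Dual-channel sensor FDI with an analytical third channel (sketch).

    Pairwise comparison of the two hardware channels and the on-board
    model output isolates which source disagrees with the other two.
    """
    dab = abs(ch_a - ch_b) > tol      # channel A vs channel B
    dam = abs(ch_a - model) > tol     # channel A vs analytical channel
    dbm = abs(ch_b - model) > tol     # channel B vs analytical channel
    if not (dab or dam or dbm):
        return "nominal"
    if dab and dam and not dbm:
        return "channel A fault"
    if dab and dbm and not dam:
        return "channel B fault"
    if dam and dbm and not dab:
        return "model discrepancy (possible component fault)"
    return "ambiguous"
```

When both hardware channels agree with each other but not with the model, the discrepancy points at the plant rather than the sensors, which is how the baseline system separates component faults from sensor faults.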
Integrated System Health Management: Foundational Concepts, Approach, and Implementation.
NASA Technical Reports Server (NTRS)
Figueroa, Fernando; Schmalzel, John; Walker, Mark; Venkatesh, Meera; Kapadia, Ravi; Morris, Jon; Turowski, Mark; Smith, Harvey
2009-01-01
Implementation of integrated system health management (ISHM) capability is fundamentally linked to the management of data, information, and knowledge (DIaK) with the purposeful objective of determining the health of a system. It is akin to having a team of experts who are all individually and collectively observing and analyzing a complex system, and communicating effectively with each other in order to arrive at an accurate and reliable assessment of its health. We present concepts, procedures, and a specific approach as a foundation for implementing a credible ISHM capability. The capability stresses integration of DIaK from all elements of a system. The intent is also to make possible implementation of on-board ISHM capability, in contrast to a remote capability. The information presented is the result of many years of research, development, and maturation of technologies, and of prototype implementations in operational systems (rocket engine test facilities). The paper addresses the following topics: (1) ISHM model of a system; (2) detection of anomaly indicators; (3) determination and confirmation of anomalies; (4) diagnosis of causes and determination of effects; (5) consistency checking cycle; (6) management of health information; (7) user interfaces; (8) example implementation. ISHM has been defined from many perspectives. We define it as a capability that might be achieved by various approaches. We describe a specific approach that has been matured throughout many years of development and pilot implementations. ISHM is a capability that is achieved by integrating data, information, and knowledge (DIaK) that might be distributed throughout the system elements (which inherently implies the capability to manage DIaK associated with distributed sub-systems). DIaK must be available to any element of a system at the right time and in accordance with a meaningful context.
ISHM Functional Capability Level (FCL) is measured by how well a system performs the following functions: (1) detect anomalies, (2) diagnose causes, (3) predict future anomalies/failures, and (4) provide the user with an integrated awareness about the condition of every element in the system and guide user decisions.
An Assessment Methodology to Evaluate In-Flight Engine Health Management Effectiveness
NASA Astrophysics Data System (ADS)
Maggio, Gaspare; Belyeu, Rebecca; Pelaccio, Dennis G.
2002-01-01
flight effectiveness of candidate engine health management system concepts. A next generation engine health management system will be required to be both reliable and robust in terms of anomaly detection capability. The system must be able to operate successfully in the hostile, high-stress engine system environment. This implies that its system components, such as the instrumentation, process and control, and vehicle interface and support subsystems, must be highly reliable. Additionally, the system must be able to address a vast range of possible engine operation anomalies through a host of different types of measurements supported by a fast algorithm/architecture processing capability that can identify "true" (real) engine operation anomalies. False anomaly condition reports for such a system must be essentially eliminated. The accuracy of identifying only real anomaly conditions has been an issue with the Space Shuttle Main Engine (SSME) in the past. Much improvement in many of the technologies to address these areas is required. The objectives of this study were to identify and demonstrate a consistent assessment methodology that can evaluate the capability of next generation engine health management system concepts to respond in a correct, timely manner to alleviate an operational engine anomaly condition during flight. Science Applications International Corporation (SAIC), with support from NASA Marshall Space Flight Center, identified a probabilistic modeling approach, based on deterministic anomaly-time event assessment, that can be applied in the engine preliminary design stage of development to assess engine health management system concept effectiveness. Much discussion in this paper focuses on the formulation and application approach in performing this assessment.
This includes detailed discussion of key modeling assumptions, the overall assessment methodology approach identified, and the identification of key supporting engine health management system concept design/operation and fault mode information required to utilize this methodology. At the paper's conclusion, discussion focuses on a demonstration benchmark study that applied this methodology to the current SSME health management system. A summary of study results and lessons learned is provided. Recommendations for future work in this area are also identified at the conclusion of the paper.
Visualizing the chiral anomaly in Dirac and Weyl semimetals with photoemission spectroscopy
NASA Astrophysics Data System (ADS)
Behrends, Jan; Grushin, Adolfo G.; Ojanen, Teemu; Bardarson, Jens H.
2016-02-01
Quantum anomalies are the breaking of a classical symmetry by quantum fluctuations. They dictate how physical systems of diverse nature, ranging from fundamental particles to crystalline materials, respond topologically to external perturbations, insensitive to local details. The anomaly paradigm was triggered by the discovery of the chiral anomaly that contributes to the decay of pions into photons and influences the motion of superfluid vortices in 3He-A. In the solid state, it also fundamentally affects the properties of topological Weyl and Dirac semimetals, recently realized experimentally. In this work we propose that the most identifying consequence of the chiral anomaly, the charge density imbalance between fermions of different chirality induced by nonorthogonal electric and magnetic fields, can be directly observed in these materials with the existing technology of photoemission spectroscopy. With angle resolution, the chiral anomaly is identified by a characteristic note-shaped pattern of the emission spectra, originating from the imbalanced occupation of the bulk states and a previously unreported momentum dependent energy shift of the surface state Fermi arcs. We further demonstrate that the chiral anomaly likewise leaves an imprint in angle averaged emission spectra, facilitating its experimental detection. Thereby, our work provides essential theoretical input to foster the direct visualization of the chiral anomaly in condensed matter, in contrast to transport properties, such as negative magnetoresistance, which can also be obtained in the absence of a chiral anomaly.
Automated Propulsion Data Screening demonstration system
NASA Technical Reports Server (NTRS)
Hoyt, W. Andes; Choate, Timothy D.; Whitehead, Bruce A.
1995-01-01
A fully-instrumented firing of a propulsion system typically generates a very large quantity of data. In the case of the Space Shuttle Main Engine (SSME), data analysis from ground tests and flights is currently a labor-intensive process. Human experts spend a great deal of time examining the large volume of sensor data generated by each engine firing. These experts look for any anomalies in the data which might indicate engine conditions warranting further investigation. The contract effort was to develop a 'first-cut' screening system for application to SSME engine firings that would identify the relatively small volume of data which is unusual or anomalous in some way. With such a system, limited and expensive human resources could focus on this small volume of unusual data for thorough analysis. The overall project objective was to develop a fully operational Automated Propulsion Data Screening (APDS) system with the capability of detecting significant trends and anomalies in transient and steady-state data. However, the effort limited screening of transient data to ground test data for throttle-down cases typical of the 3-g acceleration, and for engine throttling required to reach the maximum dynamic pressure limits imposed on the Space Shuttle. This APDS is based on neural networks designed to detect anomalies in propulsion system data that are not part of the data used for neural network training. The delivered system allows engineers to build their own screening sets for application to completed or planned firings of the SSME. ERC developers also built some generic screening sets that NASA engineers could apply immediately to their data analysis efforts.
NASA Astrophysics Data System (ADS)
Eto, S.; Nagai, S.; Tadokoro, K.
2011-12-01
Our group has developed a system for observing seafloor crustal deformation with a combination of acoustic ranging and kinematic GPS positioning techniques. One effective way to reduce the estimation error of the submarine benchmark position in our system is to model the variation of the ocean acoustic velocity. We estimated various one-dimensional velocity models with depth under certain constraints, because our simple acquisition procedure for acoustic ranging data makes it difficult to estimate the three-dimensional acoustic velocity structure, including its temporal change. We then applied the joint hypocenter determination method of seismology [Kissling et al., 1994] to the acoustic ranging data. We assumed two constraints in the inversion procedure: 1) the acoustic velocity in the deeper part is fixed, because it is usually stable in both space and time, and 2) each inverted velocity model must decrease with depth. We found two remarkable spatio-temporal changes of acoustic velocity: 1) variations of the travel-time residuals at the same points within a short time, and 2) larger differences between the residuals at neighboring points for travel times from different benchmarks. The first result cannot be explained solely by changes in atmospheric conditions, including heating by sunlight. To verify the residual variations of the second result, we performed forward modeling of acoustic ranging data using velocity models to which velocity anomalies were added. We calculated travel times with a pseudo-bending ray tracing method [Um and Thurber, 1987] to examine the effects of a velocity anomaly on the travel-time differences. Comparison between the observed residuals and the travel-time differences from the forward modeling shows that velocity anomaly bodies at shallower depths can produce these anomalous residuals, which may indicate moving water bodies.
We need to apply an acoustic velocity structure model with velocity anomalies in the acoustic ranging data analysis and/or to develop a new system with a large number of sea surface stations to detect them, which may reduce the error of the seafloor benchmark position.
Hyperspectral target detection using heavy-tailed distributions
NASA Astrophysics Data System (ADS)
Willis, Chris J.
2009-09-01
One promising approach to target detection in hyperspectral imagery exploits a statistical mixture model to represent scene content at a pixel level. The process then goes on to look for pixels which are rare, when judged against the model, and marks them as anomalies. It is assumed that military targets will themselves be rare and therefore likely to be detected amongst these anomalies. For the typical assumption of multivariate Gaussianity for the mixture components, the presence of the anomalous pixels within the training data will have a deleterious effect on the quality of the model. In particular, the derivation process itself is adversely affected by the attempt to accommodate the anomalies within the mixture components. This will bias the statistics of at least some of the components away from their true values and towards the anomalies. In many cases this will result in a reduction in the detection performance and an increased false alarm rate. This paper considers the use of heavy-tailed statistical distributions within the mixture model. Such distributions are better able to account for anomalies in the training data within the tails of their distributions, and the balance of the pixels within their central masses. This means that an improved model of the majority of the pixels in the scene may be produced, ultimately leading to a better anomaly detection result. The anomaly detection techniques are examined using both synthetic data and hyperspectral imagery with injected anomalous pixels. A range of results is presented for the baseline Gaussian mixture model and for models accommodating heavy-tailed distributions, for different parameterizations of the algorithms. These include scene understanding results, anomalous pixel maps at given significance levels and Receiver Operating Characteristic curves.
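The effect described above can be sketched numerically. The snippet below is an illustrative one-dimensional stand-in for the paper's multivariate mixture models, not its actual algorithm: a background model is fitted to training data contaminated by anomalies, once as a Gaussian and once as a heavy-tailed Student-t (via SciPy's `t.fit`), and pixels are ranked by negative log-likelihood as anomaly scores.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
background = rng.normal(loc=0.3, scale=0.05, size=2000)  # ordinary scene pixels
anomalies = rng.normal(loc=0.9, scale=0.02, size=20)     # injected targets
pixels = np.concatenate([background, anomalies])

# Gaussian background model: its mean and variance are biased upward by the
# anomalous pixels present in the training data.
mu, sigma = pixels.mean(), pixels.std()
gauss_scores = -stats.norm.logpdf(pixels, mu, sigma)

# Heavy-tailed background model: a Student-t fit can absorb the outliers in
# its tails, keeping the central mass closer to the true background.
df, loc, scale = stats.t.fit(pixels)
t_scores = -stats.t.logpdf(pixels, df, loc, scale)

# Rank pixels by surprise under the heavy-tailed model; the injected
# anomalies should occupy the top ranks.
top = np.argsort(t_scores)[-20:]
print(int((top >= len(background)).sum()), "of the 20 top-ranked pixels are injected anomalies")
```

In this easy synthetic case both models separate the anomalies; the paper's point is that with realistic hyperspectral clutter the heavy-tailed fit degrades far less when the training data are contaminated.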
Multi-criteria anomaly detection in urban noise sensor networks.
Dauwe, Samuel; Oldoni, Damiano; De Baets, Bernard; Van Renterghem, Timothy; Botteldooren, Dick; Dhoedt, Bart
2014-01-01
The growing concern of citizens about the quality of their living environment and the emergence of low-cost microphones and data acquisition systems triggered the deployment of numerous noise monitoring networks spread over large geographical areas. Due to the local character of noise pollution in an urban environment, a dense measurement network is needed in order to accurately assess the spatial and temporal variations. The use of consumer grade microphones in this context appears to be very cost-efficient compared to the use of measurement microphones. However, the lower reliability of these sensing units requires a strong quality control of the measured data. To automatically validate sensor (microphone) data, prior to their use in further processing, a multi-criteria measurement quality assessment model for detecting anomalies such as microphone breakdowns, drifts and critical outliers was developed. Each of the criteria results in a quality score between 0 and 1. An ordered weighted average (OWA) operator combines these individual scores into a global quality score. The model is validated on datasets acquired from a real-world, extensive noise monitoring network consisting of more than 50 microphones. Over a period of more than a year, the proposed approach successfully detected several microphone faults and anomalies.
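The OWA aggregation step can be sketched in a few lines. The criterion names and weight vector below are illustrative assumptions, not the paper's calibrated values; the defining property of OWA is that the weights are applied to the rank positions of the sorted scores rather than to fixed criteria.

```python
# Minimal sketch of an ordered weighted average (OWA) aggregation of
# per-criterion quality scores, each in [0, 1].
def owa(scores, weights):
    """Combine quality scores into one global score.

    OWA sorts the scores in descending order and applies the weights to
    the *positions*, not to the criteria themselves.
    """
    assert abs(sum(weights) - 1.0) < 1e-9 and len(scores) == len(weights)
    ranked = sorted(scores, reverse=True)
    return sum(w * s for w, s in zip(weights, ranked))

# Three hypothetical criteria: spectrum shape, dynamic range, outlier count.
scores = [0.9, 0.4, 0.7]
# Pessimistic weighting: emphasize the worst-scoring positions.
print(round(owa(scores, [0.2, 0.3, 0.5]), 3))
```

Putting most weight on the lowest-ranked positions makes the aggregate pessimistic: one failing criterion pulls the global quality score down regardless of which criterion it is, which suits fault detection.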
Ground Penetrating Radar Survey at Yoros Fortress, Istanbul
NASA Astrophysics Data System (ADS)
Kucukdemirci, M.; Yalçın, A. B.
2016-12-01
Geophysical methods are effective tools to detect archaeological remains and materials hidden under the ground. One of the most frequently used methods for archaeological prospection is Ground Penetrating Radar (GPR). This paper illustrates a small-scale GPR survey, carried out during the archaeological excavations, to determine the buried archaeological features around the Yoros Fortress, located on the shores of the Bosporus strait in Istanbul. The survey was carried out with a GSSI SIR 3000 system, using a 400 MHz center frequency bistatic antenna configured with a 16-bit dynamic range and 512 samples per scan. The data were collected along parallel profiles at an interval of 0.50 meters, in a zigzag configuration on the survey grids. The GPR data were processed with GPR-Slice v.7 (Ground Penetrating Radar Imaging Software). As a result, some scattered anomalies were detected at the shallowest depths; these may be related to a small portion of archaeological ruins close to the surface. At deeper levels, the geometry of the anomalies related to possible archaeological ruins becomes clearer. Two horizontal, parallel anomalies were detected, oriented N-S at a depth of 1.45 meters, possibly related to ancient channels.
Small-scale anomaly detection in panoramic imaging using neural models of low-level vision
NASA Astrophysics Data System (ADS)
Casey, Matthew C.; Hickman, Duncan L.; Pavlou, Athanasios; Sadler, James R. E.
2011-06-01
Our understanding of sensory processing in animals has reached the stage where we can exploit neurobiological principles in commercial systems. In human vision, one brain structure that offers insight into how we might detect anomalies in real-time imaging is the superior colliculus (SC). The SC is a small structure that rapidly orients our eyes to a movement, sound or touch that it detects, even when the stimulus is on a small scale: think of a camouflaged movement or the rustle of leaves. This automatic orientation allows us to prioritize the use of our eyes to raise awareness of a potential threat, such as a predator approaching stealthily. In this paper we describe the application of a neural network model of the SC to the detection of anomalies in panoramic imaging. The neural approach consists of a mosaic of topographic maps, each trained using competitive Hebbian learning to rapidly detect image features of a pre-defined shape and scale. What makes this approach interesting is the ability of the competition between neurons to automatically filter noise, while still generalizing the desired shape and scale. We present the results of this technique applied to the real-time detection of obscured targets in visible-band panoramic CCTV images. Using background subtraction to highlight potential movement, the technique is able to correctly identify targets as narrow as 3 pixels while filtering small-scale noise.
Evaluation of Anomaly Detection Capability for Ground-Based Pre-Launch Shuttle Operations. Chapter 8
NASA Technical Reports Server (NTRS)
Martin, Rodney Alexander
2010-01-01
This chapter provides a thorough end-to-end description of the process for evaluating three different data-driven algorithms for anomaly detection, in order to select the best candidate for deployment as part of a suite of IVHM (Integrated Vehicle Health Management) technologies. These algorithms were deemed sufficiently mature to be considered viable candidates for deployment in support of the maiden launch of Ares I-X, the successor to the Space Shuttle for NASA's Constellation program. Data-driven algorithms are just one of three different types being deployed. The other two types are a "rule-based" expert system and a "model-based" system. Within these two categories, the deployable candidates have already been selected based upon qualitative factors such as flight heritage. For the rule-based system, SHINE (Spacecraft High-speed Inference Engine) has been selected for deployment; it is a component of BEAM (Beacon-based Exception Analysis for Multimissions), a patented technology developed at NASA's JPL (Jet Propulsion Laboratory), and serves to aid in the management and identification of operational modes. For the "model-based" system, a commercially available package developed by QSI (Qualtech Systems, Inc.), TEAMS (Testability Engineering and Maintenance System), has been selected for deployment to aid in diagnosis. Distinctions among the terms "data-driven," "rule-based," and "model-based," as used in this particular deployment, can be found in the literature. Although three different categories of algorithms have been selected for deployment, our main focus in this chapter is the evaluation of the three candidates for data-driven anomaly detection. These algorithms are evaluated on their capability for robustly detecting incipient faults or failures in the ground-based phase of pre-launch Space Shuttle operations, rather than on heritage as in previous studies.
Robust detection allows pre-specified minimum false-alarm and/or missed-detection rates to be achieved in the selection of alert thresholds. All algorithms are also optimized with respect to an aggregation of these same criteria. Our study relies upon Shuttle data as a proxy for, and in preparation for application to, Ares I-X data, which uses a very similar hardware platform for the targeted subsystems (the TVC (Thrust Vector Control) subsystem of the SRB (Solid Rocket Booster)).
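The threshold-selection criterion above can be illustrated with synthetic scores (stand-ins, not Shuttle data): pick the alert threshold as a quantile of the anomaly scores of known-nominal samples so that a pre-specified false-alarm rate is met, then read off the missed-detection rate that this threshold implies on known-fault samples.

```python
# Illustrative sketch of alert-threshold selection against a false-alarm
# budget; the score distributions are synthetic assumptions.
import numpy as np

def pick_threshold(nominal_scores, max_false_alarm_rate):
    """Threshold whose false-alarm rate on nominal data is within spec."""
    return float(np.quantile(nominal_scores, 1.0 - max_false_alarm_rate))

rng = np.random.default_rng(0)
nominal = rng.normal(0.0, 1.0, 5000)   # scores of healthy samples
faulty = rng.normal(4.0, 1.0, 500)     # scores of incipient-fault samples

thresh = pick_threshold(nominal, 0.01)          # allow at most 1% false alarms
false_alarms = float((nominal > thresh).mean())
missed = float((faulty <= thresh).mean())
print(round(false_alarms, 3), round(missed, 3))
```

An aggregate criterion, as in the study, would jointly trade these two rates off rather than fixing one and accepting the other.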
Identifying Threats Using Graph-based Anomaly Detection
NASA Astrophysics Data System (ADS)
Eberle, William; Holder, Lawrence; Cook, Diane
Much of the data collected during the monitoring of cyber and other infrastructures is structural in nature, consisting of various types of entities and relationships between them. The detection of threatening anomalies in such data is crucial to protecting these infrastructures. We present an approach to detecting anomalies in a graph-based representation of such data that explicitly represents these entities and relationships. The approach consists of first finding normative patterns in the data using graph-based data mining and then searching for small, unexpected deviations to these normative patterns, assuming illicit behavior tries to mimic legitimate, normative behavior. The approach is evaluated using several synthetic and real-world datasets. Results show that the approach has high true-positive rates, low false-positive rates, and is capable of detecting complex structural anomalies in real-world domains including email communications, cellphone calls and network traffic.
Conditional Anomaly Detection with Soft Harmonic Functions
Valko, Michal; Kveton, Branislav; Valizadegan, Hamed; Cooper, Gregory F.; Hauskrecht, Milos
2012-01-01
In this paper, we consider the problem of conditional anomaly detection that aims to identify data instances with an unusual response or a class label. We develop a new non-parametric approach for conditional anomaly detection based on the soft harmonic solution, with which we estimate the confidence of the label to detect anomalous mislabeling. We further regularize the solution to avoid the detection of isolated examples and examples on the boundary of the distribution support. We demonstrate the efficacy of the proposed method on several synthetic and UCI ML datasets in detecting unusual labels when compared to several baseline approaches. We also evaluate the performance of our method on a real-world electronic health record dataset where we seek to identify unusual patient-management decisions. PMID:25309142
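A minimal numpy sketch of the idea, under assumed settings (an RBF similarity graph and a single regularization weight `gamma`, not necessarily the authors' exact construction): smooth the given labels over the graph, and score each instance by how much its label disagrees with the smoothed value.

```python
import numpy as np

def soft_harmonic_scores(X, y, gamma=0.1, bandwidth=1.0):
    # RBF similarity graph over the instances (an illustrative choice).
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    W = np.exp(-d2 / (2 * bandwidth ** 2))
    np.fill_diagonal(W, 0.0)
    L = np.diag(W.sum(1)) - W                 # graph Laplacian
    # Soft harmonic solution: smooth the labels over the graph while
    # staying close to the given labels (gamma sets the trade-off).
    f = np.linalg.solve(L + gamma * np.eye(len(y)), gamma * y)
    return np.abs(y - f)                      # disagreement = anomaly score

# Two tight clusters labeled -1 and +1, with one mislabeled point.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(-2, 0.3, (10, 2)), rng.normal(2, 0.3, (10, 2))])
y = np.array([-1.0] * 10 + [1.0] * 10)
y[3] = 1.0                                    # inject a label error
scores = soft_harmonic_scores(X, y)
print(int(np.argmax(scores)))                 # index of the injected error
```

The mislabeled point sits in a neighborhood that votes the other way, so its smoothed value drifts toward the cluster's majority label and its disagreement score dominates.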
Time series analysis of infrared satellite data for detecting thermal anomalies: a hybrid approach
NASA Astrophysics Data System (ADS)
Koeppen, W. C.; Pilger, E.; Wright, R.
2011-07-01
We developed and tested an automated algorithm that analyzes thermal infrared satellite time series data to detect and quantify the excess energy radiated from thermal anomalies such as active volcanoes. Our algorithm enhances the previously developed MODVOLC approach, a simple point operation, by adding a more complex time series component based on the methods of the Robust Satellite Techniques (RST) algorithm. Using test sites at Anatahan and Kīlauea volcanoes, the hybrid time series approach detected ~15% more thermal anomalies than MODVOLC with very few, if any, known false detections. We also tested gas flares in the Cantarell oil field in the Gulf of Mexico as an end-member scenario representing very persistent thermal anomalies. At Cantarell, the hybrid algorithm showed only a slight improvement, but it did identify flares that were undetected by MODVOLC. We estimate that at least 80 MODIS images for each calendar month are required to create good reference images necessary for the time series analysis of the hybrid algorithm. The improved performance of the new algorithm over MODVOLC will result in the detection of low temperature thermal anomalies that will be useful in improving our ability to document Earth's volcanic eruptions, as well as detecting low temperature thermal precursors to larger eruptions.
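The two ingredients of the hybrid can be sketched as follows. The NTI point operation and its -0.8 nighttime threshold follow the published MODVOLC algorithm; the time-series component is simplified here to a z-score against an assumed per-pixel reference mean and standard deviation, which only gestures at the RST methodology.

```python
def nti(band22, band32):
    """MODVOLC normalized thermal index: (R22 - R32) / (R22 + R32)."""
    return (band22 - band32) / (band22 + band32)

def hybrid_alert(r22, r32, ref_mean, ref_std, nti_thresh=-0.8, z_thresh=2.0):
    """Flag a pixel if either the point operation fires or the pixel is
    anomalously hot relative to its own time-series reference."""
    z = (nti(r22, r32) - ref_mean) / ref_std
    return (nti(r22, r32) > nti_thresh) | (z > z_thresh)

# A saturated hot spot trips the point operation outright; a subtler one
# only stands out against its own time-series reference.
print(hybrid_alert(0.8, 1.0, ref_mean=-0.9, ref_std=0.01))
print(hybrid_alert(0.11, 1.0, ref_mean=-0.9, ref_std=0.01))
```

Written with `|` rather than `or`, the test also applies elementwise to NumPy arrays of whole scenes; the reference statistics would come from the stack of roughly 80 images per calendar month mentioned above.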
Automated Detection of Events of Scientific Interest
NASA Technical Reports Server (NTRS)
James, Mark
2007-01-01
A report presents a slightly different perspective on the subject matter of Fusing Symbolic and Numerical Diagnostic Computations (NPO-42512), which appears elsewhere in this issue of NASA Tech Briefs. Briefly, the subject matter is the X-2000 Anomaly Detection Language, a developmental computing language for fusing two diagnostic computer programs (one implementing a numerical analysis method, the other a symbolic analysis method) into a unified event-based decision analysis software system for real-time detection of events. In the case of the cited companion NASA Tech Briefs article, the contemplated events that one seeks to detect would be primarily failures or other changes that could adversely affect the safety or success of a spacecraft mission. In the case of the instant report, the events to be detected could also include natural phenomena of scientific interest. Hence, the use of the X-2000 Anomaly Detection Language could contribute to a capability for automated, coordinated use of multiple sensors and sensor-output-data-processing hardware and software to effect opportunistic collection and analysis of scientific data.
Anomaly Detection in Nanofibrous Materials by CNN-Based Self-Similarity
Schettini, Raimondo
2018-01-01
Automatic detection and localization of anomalies in nanofibrous materials help to reduce the cost of the production process and the time of the post-production visual inspection process. Amongst all the monitoring methods, those exploiting Scanning Electron Microscope (SEM) imaging are the most effective. In this paper, we propose a region-based method for the detection and localization of anomalies in SEM images, based on Convolutional Neural Networks (CNNs) and self-similarity. The method evaluates the degree of abnormality of each subregion of an image under consideration by computing a CNN-based visual similarity with respect to a dictionary of anomaly-free subregions belonging to a training set. The proposed method outperforms the state of the art. PMID:29329268
A Testbed for Data Fusion for Helicopter Diagnostics and Prognostics
2003-03-01
…and algorithm design and tuning in order to develop advanced diagnostic and prognostic techniques for aircraft health monitoring… development of models for diagnostics, prognostics, and anomaly detection… detections, and prognostic prediction time horizons. The VMEP system, and in particular the web component, are ideal for performing data collection
Kaasen, Anne; Helbig, Anne; Malt, Ulrik Fredrik; Naes, Tormod; Skari, Hans; Haugen, Guttorm Nils
2013-07-12
In Norway almost all pregnant women attend one routine ultrasound examination. Detection of fetal structural anomalies triggers psychological stress responses in the women affected. Despite the frequent use of ultrasound examination in pregnancy, little attention has been devoted to the psychological response of the expectant father following the detection of fetal anomalies. This is important for later fatherhood and the psychological interaction within the couple. We aimed to describe paternal psychological responses shortly after detection of structural fetal anomalies by ultrasonography, and to compare paternal and maternal responses within the same couple. A prospective observational study was performed at a tertiary referral centre for fetal medicine. Pregnant women with a structural fetal anomaly detected by ultrasound and their partners (study group, n=155) and 100 with normal ultrasound findings (comparison group) were included shortly after the sonographic examination (inclusion period: May 2006-February 2009). Gestational age was >12 weeks. We used psychometric questionnaires to assess self-reported social dysfunction, health perception, and psychological distress (intrusion, avoidance, arousal, anxiety, and depression): the Impact of Event Scale, General Health Questionnaire, and Edinburgh Postnatal Depression Scale. Fetal anomalies were classified according to severity and to diagnostic or prognostic ambiguity at the time of assessment. Median (range) gestational age at inclusion in the study and comparison groups was 19 (12-38) and 19 (13-22) weeks, respectively. Men and women in the study group had significantly higher levels of psychological distress than men and women in the comparison group on all psychometric endpoints. The lowest level of distress in the study group was associated with the least severe anomalies with no diagnostic or prognostic ambiguity (p < 0.033). Men had lower scores than women on all psychometric outcome variables.
The correlation in distress scores between men and women was high in the fetal anomaly group (p < 0.001), but non-significant in the comparison group. The severity of the anomaly, including ambiguity, significantly influenced the paternal response. Men reported lower scores on all psychometric outcomes than women. This knowledge may facilitate support for both expectant parents to reduce strain within the family after detection of a fetal anomaly.
Apollo-Soyuz pamphlet no. 4: Gravitational field. [experimental design
NASA Technical Reports Server (NTRS)
Page, L. W.; From, T. P.
1977-01-01
Two Apollo Soyuz experiments designed to detect gravity anomalies from spacecraft motion are described. The geodynamics experiment (MA-128) measured large-scale gravity anomalies by detecting small accelerations of Apollo in the 222 km orbit, using Doppler tracking from the ATS-6 satellite. Experiment MA-089 measured 300 km anomalies on the earth's surface by detecting minute changes in the separation between Apollo and the docking module. Topics discussed in relation to these experiments include the Doppler effect, gravimeters, and the discovery of mascons on the moon.
McElhinney, Doff B; Marx, Gerald R; Newburger, Jane W
2011-01-01
Published case reports suggest that congenital portosystemic venous connections (PSVC) and other abdominal venous anomalies may be relatively frequent and potentially important in patients with polysplenia syndrome. Our objective was to investigate the frequency and range of portal and other abdominal systemic venous anomalies in patients with polysplenia and inferior vena cava (IVC) interruption who underwent a cavopulmonary anastomosis procedure at our center, and to review the published literature on this topic and the potential clinical importance of such anomalies. Retrospective cohort study and literature review were used. Among 77 patients with heterotaxy, univentricular heart disease, and IVC interruption who underwent a bidirectional Glenn and/or modified Fontan procedure, pulmonary arteriovenous malformations were diagnosed in 33 (43%). Bilateral superior vena cavas were present in 42 patients (55%). Despite inadequate imaging in many patients, a partial PSVC, dual IVCs, and/or renal vein anomalies were detected in 15 patients (19%). A PSVC formed by a tortuous vessel running from the systemic venous system to the extrahepatic portal vein was found in six patients (8%). Abdominal venous anomalies other than PSVC were documented in 13 patients (16%), including nine (12%) with some form of duplicated IVC system, with a large azygous vein continuing to the superior vena cava and a parallel, contralateral IVC of similar or smaller size, and seven with renal vein anomalies. In patients with a partial PSVC or a duplicate IVC that connected to the atrium, the abnormal connection allowed right-to-left shunting. PSVC and other abdominal venous anomalies may be clinically important but under-recognized in patients with IVC interruption and univentricular heart disease. In such patients, preoperative evaluation of the abdominal systemic venous system may be valuable. 
More data are necessary to determine whether there is a pathophysiologic connection between the polysplenia variant of heterotaxy, PSVC, and cavopulmonary anastomosis-associated pulmonary arteriovenous malformations. © 2011 Copyright the Authors. Congenital Heart Disease © 2011 Wiley Periodicals, Inc.
Thermal wake/vessel detection technique
Roskovensky, John K [Albuquerque, NM; Nandy, Prabal [Albuquerque, NM; Post, Brian N [Albuquerque, NM
2012-01-10
A computer-automated method for detecting a vessel in water based on an image of a portion of Earth includes generating a thermal anomaly mask. The thermal anomaly mask flags each pixel of the image initially deemed to be a wake pixel based on a comparison of a thermal value of each pixel against other thermal values of other pixels localized about each pixel. Contiguous pixels flagged by the thermal anomaly mask are grouped into pixel clusters. A shape of each of the pixel clusters is analyzed to determine whether each of the pixel clusters represents a possible vessel detection event. The possible vessel detection events are represented visually within the image.
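The pipeline in the claim can be sketched end to end. The window size, the deviation test (median plus a scaled local standard deviation), and the elongation-based shape screen below are illustrative choices, not the patented parameters.

```python
import numpy as np

def thermal_anomaly_mask(img, win=5, k=2.0):
    """Flag pixels whose thermal value deviates from a local statistic."""
    h, w = img.shape
    mask = np.zeros((h, w), dtype=bool)
    r = win // 2
    for y in range(h):
        for x in range(w):
            local = img[max(0, y - r):y + r + 1, max(0, x - r):x + r + 1]
            mask[y, x] = abs(img[y, x] - np.median(local)) > k * (local.std() + 1e-6)
    return mask

def clusters(mask):
    """Group contiguous flagged pixels (4-connectivity) by flood fill."""
    h, w = mask.shape
    seen = np.zeros((h, w), dtype=bool)
    groups = []
    for y in range(h):
        for x in range(w):
            if mask[y, x] and not seen[y, x]:
                stack, members = [(y, x)], []
                seen[y, x] = True
                while stack:
                    cy, cx = stack.pop()
                    members.append((cy, cx))
                    for ny, nx in ((cy + 1, cx), (cy - 1, cx), (cy, cx + 1), (cy, cx - 1)):
                        if 0 <= ny < h and 0 <= nx < w and mask[ny, nx] and not seen[ny, nx]:
                            seen[ny, nx] = True
                            stack.append((ny, nx))
                groups.append(members)
    return groups

def looks_like_wake(members, min_elongation=3.0):
    """Simple shape screen: a wake should be an elongated cluster."""
    ys = [p[0] for p in members]
    xs = [p[1] for p in members]
    height = max(ys) - min(ys) + 1
    width = max(xs) - min(xs) + 1
    return max(height, width) / min(height, width) >= min_elongation

img = np.zeros((20, 20))
img[10, 5:15] = 10.0   # a warm, elongated streak against a uniform sea
wakes = [c for c in clusters(thermal_anomaly_mask(img)) if looks_like_wake(c)]
print(len(wakes))
```

A production system would vectorize the local statistics and use a richer shape analysis, but the three stages (per-pixel mask, contiguity grouping, cluster shape test) mirror the method as claimed.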
Behavioral pattern identification for structural health monitoring in complex systems
NASA Astrophysics Data System (ADS)
Gupta, Shalabh
Estimation of structural damage and quantification of structural integrity are critical for safe and reliable operation of human-engineered complex systems, such as electromechanical, thermofluid, and petrochemical systems. Damage due to fatigue cracking is one of the most commonly encountered sources of structural degradation in mechanical systems. Early detection of fatigue damage is essential because the resulting structural degradation could potentially cause catastrophic failures, leading to loss of expensive equipment and human life. Therefore, for reliable operation and enhanced availability, it is necessary to develop capabilities for prognosis and estimation of impending failures, such as the onset of widespread fatigue crack damage in mechanical structures. This dissertation presents information-based online sensing of fatigue damage using the analytical tools of symbolic time series analysis (STSA). Anomaly detection using STSA is a pattern recognition method that has recently been developed based upon a fixed-structure, fixed-order Markov chain. The analysis procedure is built upon the principles of Symbolic Dynamics, Information Theory, and Statistical Pattern Recognition. The dissertation demonstrates real-time fatigue damage monitoring based on time series data of ultrasonic signals. Statistical pattern changes are measured using STSA to monitor the evolution of fatigue damage. Real-time anomaly detection is presented as a solution to the forward (analysis) problem and the inverse (synthesis) problem: (1) the forward problem, whose primary objective is identification of the statistical changes in the time series data of ultrasonic signals due to the gradual evolution of fatigue damage; and (2) the inverse problem, whose objective is to infer the anomalies from the observed time series data in real time, based on the statistical information generated during the forward problem.
A computer-controlled special-purpose fatigue test apparatus, equipped with multiple sensing devices (e.g., ultrasonics and an optical microscope) for damage analysis, has been used to experimentally validate the STSA method for early detection of anomalous behavior. The sensor information is integrated with a software module consisting of the STSA algorithm for real-time monitoring of fatigue damage. Experiments have been conducted under different loading conditions on specimens constructed from the ductile aluminium alloy 7075-T6. The dissertation also investigates the application of the STSA method for early detection of anomalies in other engineering disciplines. Two primary applications are combustion instability in a generic thermal pulse combustor model and the whirling phenomenon in a typical misaligned shaft.
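The STSA idea can be sketched roughly as follows: partition the signal range into symbols, estimate the symbol-state probability vector of the resulting chain, and measure anomaly as the distance between the current vector and a nominal reference. The uniform partitioning and Euclidean distance below are illustrative simplifications of the method.

```python
import numpy as np

def symbolize(signal, n_symbols=8, lo=-1.0, hi=1.0):
    """Map each sample to a symbol index via a uniform partition of [lo, hi]."""
    edges = np.linspace(lo, hi, n_symbols + 1)[1:-1]
    return np.digitize(signal, edges)

def state_probabilities(symbols, n_symbols=8):
    counts = np.bincount(symbols, minlength=n_symbols)
    return counts / counts.sum()

def anomaly_measure(signal, reference_signal, n_symbols=8):
    """Distance between current and nominal symbol-state distributions."""
    p = state_probabilities(symbolize(signal, n_symbols), n_symbols)
    q = state_probabilities(symbolize(reference_signal, n_symbols), n_symbols)
    return float(np.linalg.norm(p - q))

rng = np.random.default_rng(0)
t = np.linspace(0, 20 * np.pi, 2000)
nominal = 0.8 * np.sin(t)                                   # healthy signal
degraded = 0.5 * np.sin(t) + 0.2 * rng.normal(size=t.size)  # damage-altered
print(anomaly_measure(nominal, nominal))   # zero for an unchanged signal
print(round(anomaly_measure(degraded, nominal), 3))
```

The full method tracks transition statistics of a fixed-order Markov chain rather than marginal state occupancies, but the anomaly measure is the same kind of divergence between a current and a nominal statistical pattern.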
Anomaly Detection in Large Sets of High-Dimensional Symbol Sequences
NASA Technical Reports Server (NTRS)
Budalakoti, Suratna; Srivastava, Ashok N.; Akella, Ram; Turkov, Eugene
2006-01-01
This paper addresses the problem of detecting and describing anomalies in large sets of high-dimensional symbol sequences. The approach taken uses unsupervised clustering of sequences using the normalized longest common subsequence (LCS) as a similarity measure, followed by detailed analysis of outliers to detect anomalies. As the LCS measure is expensive to compute, the first part of the paper discusses existing algorithms, such as the Hunt-Szymanski algorithm, that have low time-complexity. We then discuss why these algorithms often do not work well in practice and present a new hybrid algorithm for computing the LCS that, in our tests, outperforms the Hunt-Szymanski algorithm by a factor of five. The second part of the paper presents new algorithms for outlier analysis that provide comprehensible indicators as to why a particular sequence was deemed to be an outlier. The algorithms provide a coherent description to an analyst of the anomalies in the sequence, compared to more normal sequences. The algorithms we present are general and domain-independent, so we discuss applications in related areas such as anomaly detection.
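The similarity measure is easy to state concretely. The quadratic-time dynamic program below is the baseline the paper's hybrid algorithm improves upon, and the geometric-mean normalization is one common choice for a normalized LCS (the paper's exact variant may differ).

```python
def lcs_length(a, b):
    """Length of the longest common subsequence, O(len(a)*len(b)) DP."""
    prev = [0] * (len(b) + 1)
    for x in a:
        cur = [0]
        for j, y in enumerate(b, start=1):
            cur.append(prev[j - 1] + 1 if x == y else max(prev[j], cur[j - 1]))
        prev = cur
    return prev[-1]

def nlcs(a, b):
    """Normalized LCS similarity in [0, 1]."""
    return lcs_length(a, b) / (len(a) * len(b)) ** 0.5

print(lcs_length("GTAB", "GXTXAYB"))
```

With this measure in hand, sequences are clustered and outliers are those with low normalized similarity to the rest of their cluster, which is where the paper's outlier-explanation algorithms take over.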
Anomaly Monitoring Method for Key Components of Satellite
Fan, Linjun; Xiao, Weidong; Tang, Jun
2014-01-01
This paper presented a fault diagnosis method for key components of satellites, called the Anomaly Monitoring Method (AMM), which is made up of state estimation based on Multivariate State Estimation Techniques (MSET) and anomaly detection based on the Sequential Probability Ratio Test (SPRT). On the basis of a failure analysis of lithium-ion batteries (LIBs), we divided the failures of LIBs into internal failure, external failure, and thermal runaway, and selected the electrolyte resistance (Re) and the charge transfer resistance (Rct) as the key parameters for state estimation. Then, from the actual in-orbit telemetry data of the key parameters of LIBs, we obtained the actual residual value (RX) and healthy residual value (RL) of LIBs based on the state estimation of MSET, and, from these residual values (RX and RL), we detected the anomaly states based on the anomaly detection of SPRT. Lastly, we conducted an example of AMM for LIBs and, according to the results, validated the feasibility and effectiveness of AMM by comparing it with the results of the threshold detective method (TDM). PMID:24587703
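The SPRT detection stage can be sketched generically. The Gaussian residual model, the shift size `mu1`, and the error rates below are illustrative assumptions, not the paper's settings; the structure (a cumulative log-likelihood ratio walked between Wald's decision bounds) is the standard SPRT.

```python
import math

def sprt(residuals, sigma=1.0, mu1=2.0, alpha=0.01, beta=0.01):
    """Decide healthy (residual mean 0) vs. faulted (mean mu1).

    alpha and beta are the allowed false-alarm and missed-detection rates;
    they set Wald's two decision bounds.
    """
    upper = math.log((1 - beta) / alpha)     # cross it: accept "faulted"
    lower = math.log(beta / (1 - alpha))     # cross it: accept "healthy"
    llr = 0.0
    for i, r in enumerate(residuals):
        # log [ N(r; mu1, sigma) / N(r; 0, sigma) ]
        llr += (mu1 * r - mu1 ** 2 / 2) / sigma ** 2
        if llr >= upper:
            return "anomaly", i
        if llr <= lower:
            return "healthy", i
    return "undecided", len(residuals) - 1

print(sprt([2.1, 1.9, 2.2, 2.0, 2.0]))
```

Unlike a fixed threshold on a single residual, the SPRT accumulates evidence over samples, which is what lets it trade detection delay against the specified error rates.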
Sensor Anomaly Detection in Wireless Sensor Networks for Healthcare
Haque, Shah Ahsanul; Rahman, Mustafizur; Aziz, Syed Mahfuzul
2015-01-01
Wireless Sensor Networks (WSN) are vulnerable to various sensor faults and faulty measurements. This vulnerability hinders efficient and timely response in various WSN applications, such as healthcare. For example, faulty measurements can create false alarms which may require unnecessary intervention from healthcare personnel. Therefore, an approach to differentiate between real medical conditions and false alarms will improve remote patient monitoring systems and quality of healthcare service afforded by WSN. In this paper, a novel approach is proposed to detect sensor anomaly by analyzing collected physiological data from medical sensors. The objective of this method is to effectively distinguish false alarms from true alarms. It predicts a sensor value from historic values and compares it with the actual sensed value for a particular instance. The difference is compared against a threshold value, which is dynamically adjusted, to ascertain whether the sensor value is anomalous. The proposed approach has been applied to real healthcare datasets and compared with existing approaches. Experimental results demonstrate the effectiveness of the proposed system, providing high Detection Rate (DR) and low False Positive Rate (FPR). PMID:25884786
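The predict-and-compare logic described above can be sketched as follows; the moving-average predictor, window size, and k-sigma threshold are simplifying assumptions (the paper's predictor and threshold-adaptation rule may differ):

```python
def is_anomalous(history, value, k=3.0, min_len=5):
    """Flag a sensed value as anomalous when it deviates from a
    moving-average prediction of the recent history by more than k
    times the window's own standard deviation (a dynamic threshold)."""
    if len(history) < min_len:
        return False  # not enough history to judge
    mean = sum(history) / len(history)
    var = sum((x - mean) ** 2 for x in history) / len(history)
    threshold = k * max(var ** 0.5, 1e-6)  # floor avoids a zero threshold
    return abs(value - mean) > threshold

window = [72, 74, 73, 75, 74, 73]        # heart-rate-like samples
print(is_anomalous(window, 74))          # within normal spread -> False
print(is_anomalous(window, 120))         # spike -> likely faulty -> True
```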
Prevalence and distribution of dental anomalies in orthodontic patients.
Montasser, Mona A; Taha, Mahasen
2012-01-01
To study the prevalence and distribution of dental anomalies in a sample of orthodontic patients. The dental casts, intraoral photographs, and panoramic and lateral cephalometric radiographs of 509 Egyptian orthodontic patients were studied. Patients were examined for dental anomalies in number, size, shape, position, and structure. The prevalence of each dental anomaly was calculated and compared between sexes. Of the total study sample, 32.6% of the patients had at least one dental anomaly other than agenesis of third molars; 32.1% of females and 33.5% of males had at least one dental anomaly other than agenesis of third molars. The most commonly detected dental anomalies were impaction (12.8%) and ectopic eruption (10.8%). The total prevalence of hypodontia (excluding third molars) and hyperdontia was 2.4% and 2.8%, respectively, with similar distributions in females and males. Gemination and accessory roots were reported in this study; each of these anomalies was detected in 0.2% of patients. In addition to genetic and racial factors, environmental factors could have a more important influence on the prevalence of dental anomalies in every population. Impaction, ectopic eruption, hyperdontia, hypodontia, and microdontia were the most common dental anomalies, while fusion and dentinogenesis imperfecta were absent.
Detection of Low Temperature Volcanogenic Thermal Anomalies with ASTER
NASA Astrophysics Data System (ADS)
Pieri, D. C.; Baxter, S.
2009-12-01
Predicting volcanic eruptions is a thorny problem, as volcanoes typically exhibit idiosyncratic waxing and/or waning pre-eruption emission, geodetic, and seismic behavior. It is no surprise that increasing our accuracy and precision in eruption prediction depends on assessing the time-progressions of all relevant precursor geophysical, geochemical, and geological phenomena, and on more frequently observing volcanoes when they become restless. The ASTER instrument on the NASA Terra Earth Observing System satellite in low earth orbit provides important capabilities in the area of detection of volcanogenic anomalies such as thermal precursors and increased passive gas emissions. Its unique high spatial resolution multi-spectral thermal IR imaging data (90m/pixel; 5 bands in the 8-12um region), bore-sighted with visible and near-IR imaging data, and combined with off-nadir pointing and stereo-photogrammetric capabilities make ASTER a potentially important volcanic precursor detection tool. We are utilizing the JPL ASTER Volcano Archive (http://ava.jpl.nasa.gov) to systematically examine 80,000+ ASTER volcano images to analyze (a) thermal emission baseline behavior for over 1500 volcanoes worldwide, (b) the form and magnitude of time-dependent thermal emission variability for these volcanoes, and (c) the spatio-temporal limits of detection of pre-eruption temporal changes in thermal emission in the context of eruption precursor behavior. We are creating and analyzing a catalog of the magnitude, frequency, and distribution of volcano thermal signatures worldwide as observed from ASTER since 2000 at 90m/pixel. 
Of particular interest as eruption precursors are small low contrast thermal anomalies of low apparent absolute temperature (e.g., melt-water lakes, fumaroles, geysers, grossly sub-pixel hotspots), for which the signal-to-noise ratio may be marginal (e.g., scene confusion due to clouds, water and water vapor, fumarolic emissions, variegated ground emissivity, and their combinations). To systematically detect such intrinsically difficult anomalies within our large archive, we are exploring a four step approach: (a) the recursive application of a GPU-accelerated, edge-preserving bilateral filter prepares a thermal image by removing noise and fine detail; (b) the resulting stylized filtered image is segmented by a path-independent region-growing algorithm, (c) the resulting segments are fused based on thermal affinity, and (d) fused segments are subjected to thermal and geographical tests for hotspot detection and classification, to eliminate false alarms or non-volcanogenic anomalies. We will discuss our progress in creating the general thermal anomaly catalog as well as algorithm approach and results. This work was carried out at the Jet Propulsion Laboratory of the California Institute of Technology under contract to NASA.
NASA Astrophysics Data System (ADS)
Tabassum, Asma
This thesis work analyzes the performance of Automatic Dependent Surveillance-Broadcast (ADS-B) data received from Grand Forks International Airport, detects anomalies in the data, and quantifies the associated potential risk. This work also assesses the severity associated with anomalous data in Detect and Avoid (DAA) for Unmanned Aircraft Systems (UAS). The received data were raw and archived in GDL-90 format. A Python module was developed to parse the raw data into readable data in a .csv file. The anomaly detection algorithm is based on the Federal Aviation Administration's (FAA) ADS-B performance assessment report. An extensive study was carried out on two main types of anomalies, namely dropouts and altitude deviations. A dropout is considered to occur when the update rate exceeds three seconds. Dropouts are of different durations and carry different levels of risk depending on how long ADS-B is unavailable as the surveillance system. Altitude deviation refers to the deviation between barometric and geometric altitude. Deviations ranging from 25 feet to 600 feet have been observed. As of now, barometric altitude has been used for separation and surveillance, while geometric altitude can be used in cases where barometric altitude is not available. Many UAS might not have both sensors installed on board due to size and weight constraints. There might be a chance of misinterpretation of vertical separation, especially while flying in the National Airspace System (NAS), if the ownship UAS and an intruder manned aircraft use two different altitude sources for the separation standard. The characteristics of, and agreement between, the two different altitudes are investigated with a regression-based approach. Multiple risk matrices are established based on the severity of the loss of DAA well clear. ADS-B is called the backbone of the FAA's Next Generation Air Transportation System, NextGen. NextGen is the series of inter-linked programs, systems, and policies that implement advanced technologies and capabilities.
ADS-B utilizes the Satellite based Global Positioning System (GPS) technology to provide the pilot and the Air Traffic Control (ATC) with more information which enables an efficient navigation of aircraft in increasingly congested airspace. FAA mandated all aircraft, both manned and unmanned, be equipped with ADS-B out by the year 2020 to fly within most controlled airspace. As a fundamental component of NextGen it is crucial to understand the behavior and potential risk with ADS-B Systems.
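The dropout definition used in the thesis (an update gap exceeding three seconds) lends itself to a direct sketch; the timestamps below are illustrative:

```python
def find_dropouts(timestamps, max_gap=3.0):
    """Return (last_seen_time, gap_seconds) for every gap between
    consecutive ADS-B message timestamps that exceeds max_gap seconds."""
    return [(prev, cur - prev)
            for prev, cur in zip(timestamps, timestamps[1:])
            if cur - prev > max_gap]

# Illustrative message timestamps (seconds); two dropouts of 7.5 s and 28.5 s.
ts = [0.0, 1.0, 2.0, 9.5, 10.5, 11.5, 40.0]
print(find_dropouts(ts))
```

Each returned gap could then be bucketed by duration to feed the kind of risk matrix the thesis builds.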
A novel interacting multiple model based network intrusion detection scheme
NASA Astrophysics Data System (ADS)
Xin, Ruichi; Venkatasubramanian, Vijay; Leung, Henry
2006-04-01
In today's information age, information and network security are of primary importance to any organization. Network intrusion is a serious threat to the security of computers and data networks. In internet protocol (IP) based networks, intrusions originate in different kinds of packets/messages contained in the open system interconnection (OSI) layer 3 or higher layers. Network intrusion detection and prevention systems observe the layer 3 packets (or layer 4 to 7 messages) to screen for intrusions and security threats. Signature-based methods use a pre-existing database that documents intrusion patterns as perceived in the layer 3 to 7 protocol traffic and match the incoming traffic against it for potential intrusion attacks. Alternatively, network traffic data can be modeled and any large anomaly relative to the established traffic pattern can be detected as network intrusion. The latter method, also known as anomaly-based detection, is gaining popularity for its versatility in learning new patterns and discovering new attacks. It is apparent that for reliable performance, an accurate model of the network data needs to be established. In this paper, we illustrate using collected data that network traffic is seldom stationary. We propose the use of multiple models to accurately represent the traffic data. The improvement in reliability of the proposed model is verified by measuring the detection and false alarm rates on several datasets.
Model-Biased, Data-Driven Adaptive Failure Prediction
NASA Technical Reports Server (NTRS)
Leen, Todd K.
2004-01-01
This final report, which contains a research summary and a viewgraph presentation, addresses clustering and data simulation techniques for failure prediction. The researchers applied their techniques to both helicopter gearbox anomaly detection and segmentation of Earth Observing System (EOS) satellite imagery.
Retrieving Temperature Anomaly in the Global Subsurface and Deeper Ocean From Satellite Observations
NASA Astrophysics Data System (ADS)
Su, Hua; Li, Wene; Yan, Xiao-Hai
2018-01-01
Retrieving the subsurface and deeper ocean (SDO) dynamic parameters from satellite observations is crucial for effectively understanding ocean interior anomalies and dynamic processes, but it is challenging to accurately estimate the subsurface thermal structure over the global scale from sea surface parameters. This study proposes a new approach based on Random Forest (RF) machine learning to retrieve the subsurface temperature anomaly (STA) in the global ocean from multisource satellite observations, including sea surface height anomaly (SSHA), sea surface temperature anomaly (SSTA), sea surface salinity anomaly (SSSA), and sea surface wind anomaly (SSWA), using in situ Argo data for RF training and testing. The RF machine-learning approach can accurately retrieve the STA in the global ocean from satellite observations of sea surface parameters (SSHA, SSTA, SSSA, SSWA). The Argo STA data were used to validate the accuracy and reliability of the results from the RF model. The results indicated that SSHA, SSTA, SSSA, and SSWA together are useful parameters for detecting SDO thermal information and obtaining accurate STA estimations. The proposed method also outperformed support vector regression (SVR) in global STA estimation. It will be a useful technique for studying SDO thermal variability and its role in the global climate system from global-scale satellite observations.
Detecting an atomic clock frequency anomaly using an adaptive Kalman filter algorithm
NASA Astrophysics Data System (ADS)
Song, Huijie; Dong, Shaowu; Wu, Wenjun; Jiang, Meng; Wang, Weixiong
2018-06-01
The abnormal frequencies of an atomic clock mainly include frequency jumps and frequency drift jumps. Atomic clock frequency anomaly detection is a key technique in time-keeping. The Kalman filter algorithm, as a linear optimal algorithm, has been widely used in real-time detection of abnormal frequency. To obtain an optimal state estimation, the observation model and dynamic model of the Kalman filter algorithm should satisfy Gaussian white noise conditions; the detection performance is degraded if anomalies affect the observation model or dynamic model. The adaptive Kalman filter algorithm, applied to clock frequency anomaly detection, uses the residuals given by the prediction to build an adaptive factor, and the predicted state covariance matrix is corrected in real time by this factor. The results show that the model error is reduced and the detection performance is improved. The effectiveness of the algorithm is verified by a frequency jump simulation, a frequency drift jump simulation, and measured atomic clock data, using the chi-square test.
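The adaptive-factor idea can be sketched for a one-dimensional clock-frequency series; the specific inflation rule and the constants q, r, and c below are illustrative assumptions, not the paper's exact formulation:

```python
def adaptive_kalman(measurements, q=1e-4, r=1e-2, c=2.0):
    """1-D adaptive Kalman filter sketch: the predicted-state covariance
    is inflated by an adaptive factor when the prediction residual is
    large relative to its expected variance, so the filter reacts quickly
    to a frequency jump instead of smoothing it away."""
    x, p = measurements[0], 1.0
    estimates = []
    for z in measurements:
        p_pred = p + q                     # predict covariance
        resid = z - x                      # innovation (prediction residual)
        s = p_pred + r                     # expected innovation variance
        factor = max(1.0, resid * resid / (c * s))  # adaptive inflation
        p_pred *= factor                   # real-time covariance correction
        k = p_pred / (p_pred + r)          # Kalman gain
        x = x + k * resid
        p = (1 - k) * p_pred
        estimates.append(x)
    return estimates

# A noiseless frequency jump from 0 to 1: the estimate locks on quickly.
est = adaptive_kalman([0.0] * 10 + [1.0] * 10)
print(round(est[9], 3), round(est[-1], 3))
```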
An artificial bioindicator system for network intrusion detection.
Blum, Christian; Lozano, José A; Davidson, Pedro Pinacho
An artificial bioindicator system is developed in order to solve a network intrusion detection problem. The system, inspired by an ecological approach to biological immune systems, evolves a population of agents that learn to survive in their environment. An adaptation process allows the transformation of the agent population into a bioindicator that is capable of reacting to system anomalies. Two characteristics stand out in our proposal. On the one hand, it is able to discover new, previously unseen attacks, and on the other hand, contrary to most of the existing systems for network intrusion detection, it does not need any previous training. We experimentally compare our proposal with three state-of-the-art algorithms and show that it outperforms the competing approaches on widely used benchmark data.
Barisic, Ingeborg; Odak, Ljubica; Loane, Maria; Garne, Ester; Wellesley, Diana; Calzolari, Elisa; Dolk, Helen; Addor, Marie-Claude; Arriola, Larraitz; Bergman, Jorieke; Bianca, Sebastiano; Doray, Berenice; Khoshnood, Babak; Klungsoyr, Kari; McDonnell, Bob; Pierini, Anna; Rankin, Judith; Rissmann, Anke; Rounding, Catherine; Queisser-Luft, Annette; Scarano, Gioacchino; Tucker, David
2014-01-01
Oculo-auriculo-vertebral spectrum is a complex developmental disorder characterised mainly by anomalies of the ear, hemifacial microsomia, epibulbar dermoids and vertebral anomalies. The aetiology is largely unknown, and the epidemiological data are limited and inconsistent. We present the largest population-based epidemiological study to date, using data provided by the large network of congenital anomalies registries in Europe. The study population included infants diagnosed with oculo-auriculo-vertebral spectrum during the 1990–2009 period from 34 registries active in 16 European countries. Of the 355 infants diagnosed with oculo-auriculo-vertebral spectrum, 95.8% (340/355) were live born, 0.8% (3/355) were fetal deaths, 3.4% (12/355) were terminations of pregnancy for fetal anomaly and 1.5% (5/340) were neonatal deaths. In 18.9%, there was prenatal detection of anomaly/anomalies associated with oculo-auriculo-vertebral spectrum; 69.7% were diagnosed at birth, 3.9% in the first week of life and 6.1% within 1 year of life. Microtia (88.8%), hemifacial microsomia (49.0%) and ear tags (44.4%) were the most frequent anomalies, followed by atresia/stenosis of the external auditory canal (25.1%) and diverse vertebral (24.3%) and eye (24.3%) anomalies. There was a high rate (69.5%) of associated anomalies of other organs/systems. The most common were congenital heart defects, present in 27.8% of patients. The prevalence of oculo-auriculo-vertebral spectrum, defined as microtia/ear anomalies and at least one major characteristic anomaly, was 3.8 per 100 000 births. Twinning, assisted reproductive techniques and maternal pre-pregnancy diabetes were confirmed as risk factors. The high rate of different associated anomalies points to the need to perform early ultrasound screening in all infants born with this disorder. PMID:24398798
Using principal component analysis for selecting network behavioral anomaly metrics
NASA Astrophysics Data System (ADS)
Gregorio-de Souza, Ian; Berk, Vincent; Barsamian, Alex
2010-04-01
This work addresses new approaches to behavioral analysis of networks and hosts for the purposes of security monitoring and anomaly detection. Most commonly used approaches simply implement anomaly detectors for one, or a few, simple metrics and those metrics can exhibit unacceptable false alarm rates. For instance, the anomaly score of network communication is defined as the reciprocal of the likelihood that a given host uses a particular protocol (or destination); this definition may result in an unrealistically high threshold for alerting to avoid being flooded by false positives. We demonstrate that selecting and adapting the metrics and thresholds, on a host-by-host or protocol-by-protocol basis can be done by established multivariate analyses such as PCA. We will show how to determine one or more metrics, for each network host, that records the highest available amount of information regarding the baseline behavior, and shows relevant deviances reliably. We describe the methodology used to pick from a large selection of available metrics, and illustrate a method for comparing the resulting classifiers. Using our approach we are able to reduce the resources required to properly identify misbehaving hosts, protocols, or networks, by dedicating system resources to only those metrics that actually matter in detecting network deviations.
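For two metrics, the PCA step reduces to a closed-form eigendecomposition of the 2x2 covariance matrix; this sketch (not the authors' implementation) shows how the dominant direction and its share of the baseline variance would be extracted:

```python
import math

def principal_component(xs, ys):
    """First principal component of two baseline metrics: the direction of
    maximum variance, plus its share of total variance (2-D PCA in closed
    form via the 2x2 covariance matrix)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs) / n
    syy = sum((y - my) ** 2 for y in ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / n
    tr, det = sxx + syy, sxx * syy - sxy * sxy
    lam = tr / 2 + math.sqrt(tr * tr / 4 - det)  # largest eigenvalue
    if abs(sxy) > 1e-12:
        vx, vy = sxy, lam - sxx                  # eigenvector for lam
    elif sxx >= syy:
        vx, vy = 1.0, 0.0                        # axis-aligned cases
    else:
        vx, vy = 0.0, 1.0
    norm = math.hypot(vx, vy)
    return vx / norm, vy / norm, lam / tr        # unit direction, variance share

# Perfectly correlated metrics (e.g. packets vs. bytes per interval):
# one component carries all the variance, so one derived metric suffices.
print(principal_component([1, 2, 3, 4], [2, 4, 6, 8]))
```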
Global Anomaly Detection in Two-Dimensional Symmetry-Protected Topological Phases
NASA Astrophysics Data System (ADS)
Bultinck, Nick; Vanhove, Robijn; Haegeman, Jutho; Verstraete, Frank
2018-04-01
Edge theories of symmetry-protected topological phases are well known to possess global symmetry anomalies. In this Letter we focus on two-dimensional bosonic phases protected by an on-site symmetry and analyze the corresponding edge anomalies in more detail. Physical interpretations of the anomaly in terms of an obstruction to orbifolding and constructing symmetry-preserving boundaries are connected to the cohomology classification of symmetry-protected phases in two dimensions. Using the tensor network and matrix product state formalism we numerically illustrate our arguments and discuss computational detection schemes to identify symmetry-protected order in a ground state wave function.
Model selection for anomaly detection
NASA Astrophysics Data System (ADS)
Burnaev, E.; Erofeev, P.; Smolyakov, D.
2015-12-01
Anomaly detection based on one-class classification algorithms is broadly used in many applied domains, like image processing (e.g. detecting whether a patient is "cancerous" or "healthy" from a mammography image), network intrusion detection, etc. The performance of an anomaly detection algorithm crucially depends on the kernel used to measure similarity in a feature space. The standard approaches to kernel selection used in two-class classification problems (e.g. cross-validation) cannot be used directly due to the specific nature of the data (the absence of data from a second, abnormal class). In this paper we generalize several kernel selection methods from the binary-class case to the case of one-class classification and perform an extensive comparison of these approaches using both synthetic and real-world data.
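A minimal illustration of why the kernel parameter matters in the one-class setting; the average-RBF-similarity score below is a deliberately simplified stand-in for the one-class classifiers the paper studies, and gamma plays the role of the kernel parameter that must be chosen without any abnormal-class data:

```python
import math

def rbf(x, y, gamma):
    """Gaussian (RBF) kernel between two scalar feature values."""
    return math.exp(-gamma * (x - y) ** 2)

def one_class_score(train, x, gamma):
    """Average RBF similarity of x to the normal training set.
    High scores look normal; low scores look anomalous."""
    return sum(rbf(t, x, gamma) for t in train) / len(train)

normal = [0.9, 1.0, 1.1, 1.05, 0.95]              # one-class training data
print(one_class_score(normal, 1.0, gamma=10.0))   # near 1: looks normal
print(one_class_score(normal, 3.0, gamma=10.0))   # near 0: anomalous
```

A poor gamma would flatten or sharpen these scores until normal and abnormal points become indistinguishable, which is exactly the selection problem the paper addresses.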
Detection of sinkholes or anomalies using full seismic wave fields.
DOT National Transportation Integrated Search
2013-04-01
This research presents an application of two-dimensional (2-D) time-domain waveform tomography for detection of embedded sinkholes and anomalies. The measured seismic surface wave fields were inverted using a full waveform inversion (FWI) technique, ...
Dame, Brittany E; Solomon, D Kip; Evans, William C.; Ingebritsen, Steven E.
2015-01-01
Helium (He) concentration and 3He/4He anomalies in soil gas and spring water are potentially powerful tools for investigating hydrothermal circulation associated with volcanism and could perhaps serve as part of a hazards warning system. However, in operational practice, He and other gases are often sampled only after volcanic unrest is detected by other means. A new passive diffusion sampler suite, intended to be collected after the onset of unrest, has been developed and tested as a relatively low-cost method of determining He-isotope composition pre- and post-unrest. The samplers, each with a distinct equilibration time, passively record He concentration and isotope ratio in springs and soil gas. Once collected and analyzed, the He concentrations in the samplers are used to deconvolve the time history of the He concentration and the 3He/4He ratio at the collection site. The current suite, consisting of three samplers, is sufficient to deconvolve both the magnitude and the timing of a step change in in situ concentration if the suite is collected within 100 h of the change. The effects of temperature and prolonged deployment on the suite's capability of recording He anomalies have also been evaluated. The suite has captured a significant 3He/4He soil gas anomaly at Horseshoe Lake near Mammoth Lakes, California. The passive diffusion sampler suite appears to be an accurate and affordable alternative for determining He anomalies associated with volcanic unrest.
SmartMal: a service-oriented behavioral malware detection framework for mobile devices.
Wang, Chao; Wu, Zhizhong; Li, Xi; Zhou, Xuehai; Wang, Aili; Hung, Patrick C K
2014-01-01
This paper presents SmartMal--a novel service-oriented behavioral malware detection framework for vehicular and mobile devices. The highlight of SmartMal is to introduce service-oriented architecture (SOA) concepts and behavior analysis into the malware detection paradigms. The proposed framework relies on client-server architecture, the client continuously extracts various features and transfers them to the server, and the server's main task is to detect anomalies using state-of-art detection algorithms. Multiple distributed servers simultaneously analyze the feature vector using various detectors and information fusion is used to concatenate the results of detectors. We also propose a cycle-based statistical approach for mobile device anomaly detection. We accomplish this by analyzing the users' regular usage patterns. Empirical results suggest that the proposed framework and novel anomaly detection algorithm are highly effective in detecting malware on Android devices.
System and Method for Outlier Detection via Estimating Clusters
NASA Technical Reports Server (NTRS)
Iverson, David J. (Inventor)
2016-01-01
An efficient method and system for real-time or offline analysis of multivariate sensor data for use in anomaly detection, fault detection, and system health monitoring is provided. Models automatically derived from training data, typically nominal system data acquired from sensors in normally operating conditions or from detailed simulations, are used to identify unusual, out of family data samples (outliers) that indicate possible system failure or degradation. Outliers are determined through analyzing a degree of deviation of current system behavior from the models formed from the nominal system data. The deviation of current system behavior is presented as an easy to interpret numerical score along with a measure of the relative contribution of each system parameter to any off-nominal deviation. The techniques described herein may also be used to "clean" the training data.
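The outlier scoring described above can be sketched as distance to the nearest nominal-data cluster, with per-parameter contributions; this is a simplified illustration of the general idea, not the patented algorithm's exact scoring:

```python
def deviation_score(clusters, sample):
    """Deviation of a multivariate sample from nominal behavior, scored as
    Euclidean distance to the nearest training-data cluster centroid.
    Also returns each parameter's relative contribution, so an analyst can
    see which sensor drives an off-nominal deviation."""
    best = None
    for centroid in clusters:
        diffs = [(s - c) ** 2 for s, c in zip(sample, centroid)]
        dist = sum(diffs) ** 0.5
        if best is None or dist < best[0]:
            best = (dist, diffs)
    score, contrib = best
    total = sum(contrib) or 1.0          # avoid divide-by-zero at a centroid
    return score, [d / total for d in contrib]

nominal = [(1.0, 10.0), (2.0, 20.0)]     # centroids learned from nominal data
score, contrib = deviation_score(nominal, (1.0, 13.0))
print(score, contrib)                    # second parameter dominates
```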
Intelligent agent-based intrusion detection system using enhanced multiclass SVM.
Ganapathy, S; Yogesh, P; Kannan, A
2012-01-01
Intrusion detection systems were used in the past along with various techniques to detect intrusions in networks effectively. However, most of these systems are able to detect the intruders only with a high false alarm rate. In this paper, we propose a new intelligent agent-based intrusion detection model for mobile ad hoc networks using a combination of attribute selection, outlier detection, and enhanced multiclass SVM classification methods. For this purpose, an effective preprocessing technique is proposed that improves the detection accuracy and reduces the processing time. Moreover, two new algorithms, namely, an Intelligent Agent Weighted Distance Outlier Detection algorithm and an Intelligent Agent-based Enhanced Multiclass Support Vector Machine algorithm are proposed for detecting the intruders in a distributed database environment that uses intelligent agents for trust management and coordination in transaction processing. The experimental results of the proposed model show that this system detects anomalies with a low false alarm rate and a high detection rate when tested with the KDD Cup 99 data set.
Detecting ship targets in spaceborne infrared image based on modeling radiation anomalies
NASA Astrophysics Data System (ADS)
Wang, Haibo; Zou, Zhengxia; Shi, Zhenwei; Li, Bo
2017-09-01
Using infrared imaging sensors to detect ship targets in the ocean environment has many advantages compared to other sensor modalities, such as better thermal sensitivity and all-weather detection capability. We propose a new ship detection method for spaceborne infrared images based on modeling radiation anomalies. The proposed method can be decomposed into two stages: in the first stage, a test infrared image is densely divided into a set of image patches and the radiation anomaly of each patch is estimated by a Gaussian Mixture Model (GMM), and thereby target candidates are obtained from anomalous image patches. In the second stage, target candidates are further checked by a more discriminative criterion to obtain the final detection result. The main innovation of the proposed method is inspired by the biological mechanism that human eyes are sensitive to unusual and anomalous patches among a complex background. Experimental results on the short wavelength infrared band (1.560 - 2.300 μm) and long wavelength infrared band (10.30 - 12.50 μm) of the Landsat-8 satellite show that the proposed method achieves the desired ship detection accuracy, with higher recall than other classical ship detection methods.
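A single-Gaussian simplification of the first-stage scoring (the paper fits a full Gaussian Mixture Model to image patches) can be sketched as follows, with hand-made patch statistics standing in for real infrared data:

```python
import math

def fit_gaussian(values):
    """Maximum-likelihood mean and variance of a 1-D sample."""
    mu = sum(values) / len(values)
    var = sum((v - mu) ** 2 for v in values) / len(values)
    return mu, var

def anomaly_scores(patch_means, eps=1e-9):
    """Negative log-likelihood of each patch statistic under a Gaussian
    fitted to all patches; high scores mark radiation anomalies.
    (A single component is shown; the paper uses a full GMM.)"""
    mu, var = fit_gaussian(patch_means)
    var = max(var, eps)
    return [0.5 * math.log(2 * math.pi * var) + (m - mu) ** 2 / (2 * var)
            for m in patch_means]

patches = [0.20, 0.22, 0.21, 0.19, 0.95]   # last patch: a hot ship-like target
scores = anomaly_scores(patches)
print(scores.index(max(scores)))           # index of the most anomalous patch
```

The highest-scoring patches would become the target candidates passed to the method's second, more discriminative stage.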
2008-10-01
Anomaly detectors (ADs); Aeolos, a distributed intrusion detection and event correlation infrastructure; STAND, a training-set sanitization technique applicable to ADs. Summary of findings: (a) automatic patch generation, (b) better patch management, (c) artificial diversity, and (d) distributed anomaly detection.
An Analysis of Drop Outs and Unusual Behavior from Primary and Secondary Radar
NASA Astrophysics Data System (ADS)
Allen, Nicholas J.
An evaluation of the radar systems in the Red River Valley of North Dakota (ND) and its surrounding areas for their ability to provide Detect and Avoid (DAA) capabilities for manned and unmanned aircraft systems (UAS) was performed. Additionally, the data were analyzed for their feasibility for use in future autonomous Air Traffic Control (ATC) systems. With the almost certain increase in airspace congestion over the coming years, the need for a robust and accurate radar system is crucial. This study focused on the Airport Surveillance Radar (ASR) at Fargo, ND and the Air Route Surveillance Radar at Finley, ND. Each of these radar sites contains primary and secondary radars. It was found that both locations exhibit data anomalies, such as: drop outs, altitude outliers, prolonged altitude failures, repeated data, and multiple aircraft with the same identification (ID) number. Four weeks of data provided by Harris Corporation throughout the year were analyzed using a MATLAB algorithm developed to identify the data anomalies. The results showed Fargo intercepts on average 450 aircraft, while Finley intercepts 1274. Of these aircraft, an average of 34% experienced drop outs at Fargo and 69% at Finley. With average drop outs of 23.58 seconds at Fargo and 42.45 seconds at Finley, and several lasting more than a few minutes, these data anomalies can clearly persist for extended periods of time. Between 1% and 26% of aircraft experienced the other data anomalies, depending on the type of data anomaly and location. The largest proportion of data anomalies was experienced when aircraft were near airports or the edge of the effective radar radius. It was also discovered that drop outs, altitude outliers, and repeated data are radar-induced errors, while prolonged altitude failures and multiple aircraft with the same ID are transponder-induced errors.
A risk assessment for each data anomaly, based on the severity and frequency of the event, was also produced. The findings from this report will provide meaningful data and will likely influence the development of UAS DAA logic and the logic behind autonomous ATC systems.
NASA Astrophysics Data System (ADS)
Meroni, M.; Rembold, F.; Urbano, F.; Lemoine, G.
2016-12-01
Anomaly maps and time profiles of remote sensing derived indicators relevant to monitoring crop and vegetation stress can be accessed online thanks to a rapidly growing number of web based portals. However, timely and systematic global analysis and coherent interpretation of such information, as is needed for example for SDG 2 related monitoring, remains challenging. With the ASAP system (Anomaly hot Spots of Agricultural Production) we propose a two-step analysis to provide monthly warning of production deficits in water-limited agriculture worldwide. The first step is fully automated and aims at classifying each administrative unit (1st sub-national level) into a number of possible warning levels, ranging from "none" to "watch" and up to "extended alarm". The second step involves the verification of the automatic warnings and their integration into a short national level analysis by agricultural analysts. In this paper we describe the methodological development of the automatic vegetation anomaly classification system. Warnings are triggered only during the crop growing season, defined by a remote sensing based phenology. The classification takes into consideration the fraction of the agricultural and rangeland area of each administrative unit that is affected by a severe anomaly of two rainfall-based indicators (the Standardized Precipitation Index (SPI), computed at 1- and 3-month scales) and one biophysical indicator (the cumulative NDVI from the start of the growing season). The severity of the warning thus depends on the timing, the nature, and the number of indicators for which an anomaly is detected. The prototype system uses global NDVI images from the METOP sensor, while a second version is being developed based on 1 km MODIS NDVI with temporal smoothing and near real time filtering. A specific water balance model is also under development to include agricultural water stress information in addition to the SPI.
The monthly warning classification and crop condition assessment will be made available on a website and will strengthen JRC support for information products based on consensus assessment, such as the GEOGLAM Crop Monitor for Early Warning.
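The two-tier warning logic described above can be sketched as a toy classifier. The thresholds, the exact warning labels, and the decision rule below are illustrative assumptions, not the operational ASAP parameters.

```python
def warning_level(affected_fraction, n_indicators_anomalous):
    """Toy step-1 classifier for an administrative unit, driven by the
    fraction of its crop/rangeland area under a severe anomaly and the
    number of indicators (e.g. SPI-1, SPI-3, cumulative NDVI) flagging it.
    Thresholds are invented for illustration."""
    if affected_fraction < 0.10 or n_indicators_anomalous == 0:
        return "none"
    if affected_fraction < 0.25 or n_indicators_anomalous == 1:
        return "watch"
    if n_indicators_anomalous >= 3 and affected_fraction >= 0.50:
        return "extended alarm"
    return "alarm"

print(warning_level(0.05, 2))   # -> none
print(warning_level(0.30, 1))   # -> watch
print(warning_level(0.60, 3))   # -> extended alarm
```

In the real system this automatic step would be followed by the analyst verification described as step 2.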
Network Intrusion Detection and Visualization using Aggregations in a Cyber Security Data Warehouse
DOE Office of Scientific and Technical Information (OSTI.GOV)
Czejdo, Bogdan; Ferragut, Erik M; Goodall, John R
2012-01-01
The challenge of achieving situational understanding is a limiting factor in effective, timely, and adaptive cyber-security analysis. Anomaly detection fills a critical role in network assessment and trend analysis, both of which underlie the establishment of comprehensive situational understanding. To that end, we propose a cyber security data warehouse implemented as a hierarchical graph of aggregations that captures anomalies at multiple scales. Each node of our proposed graph is a summarization table of cyber event aggregations, and the edges are aggregation operators. The cyber security data warehouse enables domain experts to quickly traverse a multi-scale aggregation space systematically. We describe the architecture of a test bed system and a summary of results on the IEEE VAST 2012 Cyber Forensics data.
Lytle, R. Jeffrey; Lager, Darrel L.; Laine, Edwin F.; Davis, Donald T.
1979-01-01
Underground anomalies or discontinuities, such as holes, tunnels, and caverns, are located by lowering an electromagnetic signal transmitting antenna down one borehole and a receiving antenna down another, the ground to be surveyed for anomalies being situated between the boreholes. Electronic transmitting and receiving equipment associated with the antennas is activated and the antennas are lowered in unison at the same rate down their respective boreholes a plurality of times, each time with the receiving antenna at a different level with respect to the transmitting antenna. The transmitted electromagnetic waves diffract at each edge of an anomaly, causing minima in the signal received at the receiving antenna. Triangulating the straight lines between the antenna positions at the depths where the signal minima are detected precisely locates the anomaly. Alternatively, phase shifts of the transmitted waves may be detected to locate an anomaly, the phase shift being distinctive for waves directed at the anomaly.
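The triangulation step lends itself to a short worked example. Assuming a simplified 2-D geometry (both boreholes in a vertical plane, transmitter borehole at horizontal position 0 and receiver borehole at distance d), each detected signal minimum defines a straight ray, and two such rays intersect at the anomaly:

```python
def locate_anomaly(d, ray1, ray2):
    """Each ray = (transmitter_depth, receiver_depth) for one pass with a
    detected signal minimum; the ray runs from (0, t) to (d, r).
    Returns (x, depth) of the intersection between the boreholes."""
    (t1, r1), (t2, r2) = ray1, ray2
    s1 = (r1 - t1) / d          # slope of ray 1 (depth change per metre)
    s2 = (r2 - t2) / d
    if s1 == s2:
        raise ValueError("rays are parallel; no unique intersection")
    x = (t2 - t1) / (s1 - s2)   # equate t1 + s1*x = t2 + s2*x
    return x, t1 + s1 * x

# Two crossing rays between boreholes 10 m apart:
print(locate_anomaly(10.0, (20.0, 40.0), (40.0, 20.0)))  # -> (5.0, 30.0)
```

The depth values here are invented; in practice each ray comes from one lowering pass with a fixed transmitter/receiver depth offset.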
NASA Astrophysics Data System (ADS)
Akhoondzadeh, M.
2013-08-01
On 6 February 2013, at 12:12:27 local time (01:12:27 UTC), a seismic event registering Mw 8.0 struck the Solomon Islands, located at the boundary of the Australian and Pacific tectonic plates. Time series prediction is an important and widely studied topic in research on earthquake precursors. This paper describes a new computational intelligence approach, based on a genetic algorithm (GA), to detect unusual variations of the total electron content (TEC), i.e. seismo-ionospheric anomalies induced by the powerful Solomon earthquake. The GA detected a considerable number of anomalous occurrences on the earthquake day and also 7 and 8 days prior to the earthquake, in a period of high geomagnetic activity. The TEC anomalies detected by the proposed method are also compared with those obtained by applying the mean, median, wavelet, Kalman filter, ARIMA, neural network and support vector machine methods. The agreement among the final results of all eight methods is a convincing indication of the efficiency of the GA method, and suggests that a GA can be an appropriate non-parametric tool for anomaly detection in non-linear time series exhibiting seismo-ionospheric precursor variations.
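Among the baseline methods listed, the median-based bound is easy to sketch. A common formulation, with the multiplier k as a tunable assumption here, flags TEC values falling outside median +/- k*IQR:

```python
import statistics

def tec_anomalies(tec_series, k=1.5):
    """Flag indices of TEC values outside median +/- k*IQR, a simple
    baseline for seismo-ionospheric anomaly detection (k is illustrative)."""
    s = sorted(tec_series)
    med = statistics.median(s)
    q1 = statistics.median(s[: len(s) // 2])         # lower-half median
    q3 = statistics.median(s[(len(s) + 1) // 2 :])   # upper-half median
    iqr = q3 - q1
    lo, hi = med - k * iqr, med + k * iqr
    return [i for i, v in enumerate(tec_series) if v < lo or v > hi]

print(tec_anomalies([10, 11, 10, 12, 11, 30, 10, 11]))  # -> [5]
```

The GA in the paper instead searches for detector parameters; this block only shows the kind of statistical bound it is compared against.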
NASA Astrophysics Data System (ADS)
Jervis, John R.; Pringle, Jamie K.
2014-09-01
Electrical resistivity surveys have proven useful for locating clandestine graves in a number of forensic searches. However, some aspects of grave detection with resistivity surveys remain imperfectly understood. One such aspect is the effect of seasonal changes in climate on the resistivity response of graves. In this study, resistivity survey data collected over three simulated graves for three years were analysed in order to assess how the graves' resistivity anomalies varied seasonally and when they could most easily be detected. Thresholds were used to identify anomalies, and the 'residual volume' of grave-related anomalies was calculated as the area bounded by the relevant thresholds multiplied by the anomaly's average value above the threshold. The residual volume of a resistivity anomaly associated with a buried pig cadaver showed evidence of repeating annual patterns and was moderately correlated with the soil moisture budget. This anomaly was easiest to detect between January and April each year, after prolonged periods of high net gain in soil moisture. The resistivity response of a wrapped cadaver was more complex, although it also showed evidence of seasonal variation during the third year after burial. We suggest that the observed variation in the graves' resistivity anomalies was caused by seasonal change in survey data noise levels, which was in turn influenced by the soil moisture budget. It is possible that similar variations occur elsewhere at sites with seasonal climate variations, and this could affect successful detection of other subsurface features. Further research to investigate how different climates and soil types affect seasonal variation in grave-related resistivity anomalies would be useful.
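The 'residual volume' measure can be sketched directly from its definition above, interpreting the "average value above the threshold" as the mean exceedance of the thresholded cells (an assumption where the wording is ambiguous):

```python
def residual_volume(grid, threshold, cell_area=1.0):
    """'Residual volume' of a resistivity anomaly: the area enclosed by the
    threshold contour times the anomaly's average exceedance of that
    threshold. `grid` is a 2-D list of residual resistivity values."""
    excess = [v - threshold for row in grid for v in row if v > threshold]
    if not excess:
        return 0.0
    area = len(excess) * cell_area          # area above the threshold
    return area * (sum(excess) / len(excess))

grid = [[0.0, 1.0, 0.0],
        [1.0, 3.0, 2.0],
        [0.0, 1.0, 0.0]]
print(residual_volume(grid, threshold=1.5))  # -> 2.0
```

With this interpretation the measure grows with both the footprint of the anomaly and its strength, which matches its use above as a seasonal detectability index.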
Gravity anomaly detection: Apollo/Soyuz
NASA Technical Reports Server (NTRS)
Vonbun, F. O.; Kahn, W. D.; Bryan, J. W.; Schmid, P. E.; Wells, W. T.; Conrad, D. T.
1976-01-01
The Goddard Apollo-Soyuz Geodynamics Experiment is described. It was performed to demonstrate the feasibility of tracking and recovering high-frequency components of the earth's gravity field by utilizing a synchronous orbiting tracking station such as ATS-6. Gravity anomalies of 5 mGal or larger, having wavelengths of 300 to 1000 kilometers on the earth's surface, are important for geologic studies of the upper layers of the earth's crust. Short-wavelength gravity anomalies were detected from space. Two prime areas of data collection were selected for the experiment: (1) the center of the African continent and (2) the Indian Ocean Depression centered at 5° north latitude and 75° east longitude. Preliminary results show that the detectability objective of the experiment was met in both areas as well as at several additional anomalous areas around the globe. Gravity anomalies of the Karakoram and Himalayan mountain ranges, ocean trenches, as well as the Diamantina Depth, can be seen. Maps outlining the anomalies discovered are shown.
Thermal remote sensing as a part of Exupéry volcano fast response system
NASA Astrophysics Data System (ADS)
Zakšek, Klemen; Hort, Matthias
2010-05-01
In order to understand the eruptive potential of a volcanic system one has to characterize its actual state of stress, which requires proper monitoring strategies. As several volcanoes in highly populated areas, especially in south-east Asia, are still nearly unmonitored, a mobile volcano monitoring system is currently being developed in Germany. One of the major novelties of this mobile volcano fast response system, called Exupéry, is the direct inclusion of satellite-based observations. Remote sensing data are introduced together with ground-based field measurements into the GIS database, where the statistical properties of all recorded data are estimated. Using physical modelling and statistical methods we hope to constrain the probability of future eruptions. The emphasis of this contribution is on using thermal remote sensing as a tool for monitoring active volcanoes. One can detect thermal anomalies originating from a volcano by comparing signals in the mid- and thermal-infrared spectra. A reliable and effective thermal anomaly detection algorithm was developed by Wright (2002) for the MODIS sensor; it is based on a threshold of the so-called normalized thermal index (NTI). This is the method we use in Exupéry, where we characterize each detected thermal anomaly by temperature, area, heat flux and effusion rate. Recent work has shown that radiant flux is the most robust parameter for this characterization. Its derivation depends on the atmosphere, the satellite viewing angle and sensor characteristics. Some of these influences are easy to correct using standard remote sensing pre-processing techniques; however, some noise still remains in the data. In addition, satellites in polar orbits have long revisit times and thus might fail to follow a fast-evolving volcanic crisis. We are therefore currently testing a Kalman filter on the simultaneous use of MODIS and AVHRR data to improve the thermal anomaly characterization.
The advantage of this technique is that it increases the temporal resolution by using images from different satellites with different resolution and sensitivity. The algorithm has been tested on an eruption at Mt. Etna (2002) and successfully captures more details of the eruption evolution than would be seen by using only one satellite source. At the moment, only MODIS data (from a sensor aboard NASA's Terra and Aqua satellites) are used operationally in Exupéry. As MODIS is a meteorological sensor, it is also suitable for producing general overview images of the crisis area. Therefore, for each processed MODIS image we also produce an RGB image in which some basic meteorological features are classified, e.g. clouds, volcanic ash plumes, and ocean. In the case of a detected hotspot an additional image is created; it contains the original measured radiances of the selected channels for the crisis area. All anomaly and processing parameters are additionally written into an XML file. The results are available in the web GIS at worst two hours after NASA provides the level 1b data online.
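The NTI-based detection of Wright (2002) mentioned above is a normalized difference of mid-infrared and thermal-infrared radiances; for MODIS these come from bands 21/22 and 32, with pixels above a global threshold of about -0.8 flagged as hotspots. A minimal sketch (the radiance values below are made up for illustration):

```python
def nti(mir_radiance, tir_radiance):
    """Normalized Thermal Index: (MIR - TIR) / (MIR + TIR), computed from
    spectral radiances. Hot volcanic surfaces raise MIR radiance sharply,
    pushing NTI toward zero."""
    return (mir_radiance - tir_radiance) / (mir_radiance + tir_radiance)

def is_hotspot(mir, tir, threshold=-0.8):
    # -0.8 is approximately the global MODVOLC detection threshold.
    return nti(mir, tir) > threshold

print(is_hotspot(0.6, 9.0))   # ambient pixel -> False
print(is_hotspot(4.0, 9.0))   # volcanically heated pixel -> True
```

Per-anomaly temperature, area and radiant flux estimates then build on the flagged pixels.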
An Investigation of State-Space Model Fidelity for SSME Data
NASA Technical Reports Server (NTRS)
Martin, Rodney Alexander
2008-01-01
In previous studies, a variety of unsupervised anomaly detection techniques were applied to SSME (Space Shuttle Main Engine) data. The observed results indicated that the identification of certain anomalies was specific to the algorithmic method under consideration. This is why one of the follow-on goals of those investigations was to build an architecture that supports the best capabilities of all algorithms. We address that goal here by investigating a cascaded, serial architecture for the best-performing and most suitable candidates from previous studies. As a precursor to a formal ROC (Receiver Operating Characteristic) curve analysis for validation of the resulting anomaly detection algorithms, our primary focus here is to investigate model fidelity as measured by variants of the AIC (Akaike Information Criterion) for state-space models. We show that placing constraints on a state-space model during or after training introduces a modest level of suboptimality. Furthermore, we compare the fidelity of all candidate models, including those embodying the cascaded, serial architecture, and make recommendations on the most suitable candidates for subsequent anomaly detection studies as measured by AIC-based criteria.
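The AIC trade-off invoked above is simple to state: AIC = 2k - 2 ln L, where k is the number of free parameters and ln L the maximized log-likelihood, with lower values preferred. A sketch with invented likelihood values, comparing an unconstrained state-space model against a constrained one:

```python
def aic(log_likelihood, n_params):
    """Akaike Information Criterion: lower is better; the 2k term
    penalizes model size against fit quality."""
    return 2 * n_params - 2 * log_likelihood

def aicc(log_likelihood, n_params, n_samples):
    """Small-sample corrected variant (AICc), a common alternative."""
    k, n = n_params, n_samples
    return aic(log_likelihood, k) + 2 * k * (k + 1) / (n - k - 1)

# Unconstrained model: better fit (ln L = -120) but 12 parameters;
# constrained model: slightly worse fit (ln L = -125) with 6 parameters.
print(aic(-120.0, 12), aic(-125.0, 6))  # -> 264.0 262.0
```

In this invented example the constrained model wins despite its lower likelihood, which is the kind of arbitration the fidelity comparison above relies on.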
Neural network architectures to analyze OPAD data
NASA Technical Reports Server (NTRS)
Whitaker, Kevin W.
1992-01-01
A prototype Optical Plume Anomaly Detection (OPAD) system is now installed on the space shuttle main engine (SSME) Technology Test Bed (TTB) at MSFC. The OPAD system requirements dictate the need for fast, efficient data processing techniques. To address this need of the OPAD system, a study was conducted into how artificial neural networks could be used to assist in the analysis of plume spectral data.
NASA Astrophysics Data System (ADS)
Cunningham, Michael; Samson, Claire; Wood, Alan; Cook, Ian
2017-12-01
Unmanned aircraft systems (UASs) have been under rapid development for applications in the mineral exploration industry, mainly for aeromagnetic surveying, where they provide improved detection of smaller, deeper and weaker magnetic targets. A traditional system flying at an altitude of 100 m above ground level (AGL) can detect a spherical ore body with a radius of 16 m and a magnetic susceptibility of 10^-4 buried at a depth of 40 m; a UAS flying at an altitude of 50 or 2 m AGL would require the radius to be only 11 or 5 m, respectively. A demonstration survey was performed using the SkyLance rotary-wing UAS instrumented with a cesium vapour magnetometer in Nash Creek, New Brunswick, Canada. The UAS flew over a zinc deposit featuring three magnetic anomalies and acquired repeatable data that compared well with upward-continuation maps of ground magnetic data. Dykes or faults that dip eastward at 25° and are approximately 1.5 m wide fit the observed response of the three anomalies captured in the UAS magnetic data.
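The quoted detectability figures are broadly consistent with dipole scaling: the anomaly of a uniformly magnetized sphere falls off as a^3/r^3, so at a fixed noise floor the minimum detectable radius grows linearly with the sensor-to-source distance. A sketch assuming the 40 m burial depth from the example (the computed values agree only approximately with the 11 m and 5 m quoted above, since those also depend on the assumed sensor sensitivity):

```python
def min_detectable_radius(a_ref, r_ref, r_new):
    """Dipole scaling: anomaly amplitude ~ a**3 / r**3, so at a fixed
    noise floor the minimum detectable sphere radius scales linearly
    with the sensor-to-source distance r."""
    return a_ref * (r_new / r_ref)

depth = 40.0    # burial depth (m), as in the example above
a_100 = 16.0    # detectable radius at 100 m AGL (reference case)
for agl in (100.0, 50.0, 2.0):
    r = agl + depth
    print(f"{agl:>5.0f} m AGL -> radius ~ "
          f"{min_detectable_radius(a_100, 100.0 + depth, r):.1f} m")
```

This prints roughly 16, 10 and 5 m, illustrating why low-flying UAS surveys see much smaller bodies.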
Soil-Gas Radon Anomaly Map of an Unknown Fault Zone Area, Chiang Mai, Northern Thailand
NASA Astrophysics Data System (ADS)
Udphuay, S.; Kaweewong, C.; Imurai, W.; Pondthai, P.
2015-12-01
A soil-gas radon concentration anomaly map was constructed to help detect an unknown subsurface fault in San Sai District, Chiang Mai Province, Northern Thailand, where a 5.1-magnitude earthquake took place in December 2006. It was suspected that this earthquake may have been associated with an unrecognized active fault in the area. In this study, soil-gas samples were collected from eighty-four measuring stations covering an area of approximately 50 km2. Radon in the soil-gas samples was quantified using a Scintrex RDA-200 radon detector. The samplings were conducted twice: during December 2014-January 2015 and March 2015-April 2015. The soil-gas radon map obtained from this study reveals a linear NNW-SSE trend of high concentration. This anomaly corresponds to the direction of the prospective fault system interpreted from satellite images. The findings from this study support the existence of this unknown fault system. However, a more detailed investigation should be conducted in order to confirm its geometry, orientation and lateral extent.
NASA Astrophysics Data System (ADS)
Zhang, L.; Hao, T.; Zhao, B.
2009-12-01
Hydrocarbon seepage effects can cause magnetic alteration zones near the surface, and the magnetic anomalies induced by these alteration zones can thus be used to locate oil-gas potential regions. In order to reduce the inaccuracy and ambiguity of hydrocarbon anomalies recognized from magnetic data alone, and to meet the requirement of integrated management and synthetic analysis of multi-source geoscientific data, it is necessary to construct a recognition system that integrates the functions of data management, real-time processing, synthetic evaluation, and geologic mapping. In this paper, the key techniques of such a system are discussed. Image processing methods can be applied to potential-field images to facilitate visual interpretation and geological understanding. In gravity or magnetic images, anomalies with identical frequency-domain characteristics but different spatial distributions will differ in texture and in the relevant textural statistics. Texture is a description of the structural arrangement and spatial variation of a dataset or an image, and has been applied in many research fields. Textural analysis is a procedure that extracts textural features by image processing methods and thus obtains a quantitative or qualitative description of texture. When two kinds of anomalies have no distinct difference in amplitude, or overlap in frequency spectrum, they may still be distinguishable by their texture, which can be considered as textural contrast. Therefore, for the recognition system we propose a new "magnetic spots" recognition method based on image processing techniques.
The method can be divided into three major steps. First, separate local anomalies caused by shallow, relatively small sources from the total magnetic field, and pre-process the local magnetic anomaly data with image processing methods so that magnetic anomalies can be expressed as points, lines and polygons with spatial correlation; this includes histogram-equalization-based image display, and object recognition and extraction. Second, mine the spatial characteristics and correlations of the magnetic anomalies using textural statistics and analysis, and study the features of known anomalous objects (closures, hydrocarbon-bearing structures, igneous rocks, etc.) in the same research area. Finally, classify the anomalies, cluster them according to their similarity, and predict hydrocarbon-induced "magnetic spots" in combination with geologic, drilling and rock-core data. The system uses ArcGIS as its secondary development platform, inherits the basic functions of ArcGIS, and adds two special-purpose functional modules: one for conventional potential-field data processing methods, and one for feature extraction and enhancement based on image processing and analysis techniques. The system can be applied to the geophysical detection and recognition of near-surface hydrocarbon seepage anomalies, provide technical support for locating oil-gas potential regions, and make geophysical data processing and interpretation more efficient.
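One of the pre-processing operations named above, histogram-equalization-based image display, can be sketched in a few lines; a small integer-valued grid stands in for a low-contrast magnetic anomaly image:

```python
def equalize(image, levels=256):
    """Histogram equalization: remap grey levels so the cumulative
    histogram becomes approximately linear, stretching the contrast of a
    flat anomaly image. `image` is a 2-D list of ints in [0, levels)."""
    flat = [v for row in image for v in row]
    n = len(flat)
    hist = [0] * levels
    for v in flat:
        hist[v] += 1
    cdf, total = [], 0
    for count in hist:                      # cumulative histogram
        total += count
        cdf.append(total)
    cdf_min = next(c for c in cdf if c > 0)  # first occupied level

    def remap(v):
        return round((cdf[v] - cdf_min) / (n - cdf_min) * (levels - 1))

    return [[remap(v) for v in row] for row in image]

img = [[52, 55, 61], [59, 79, 61], [85, 91, 92]]
print(equalize(img))  # grey values spread over the full 0..255 range
```

After equalization, faint "magnetic spots" occupy the full dynamic range, which helps the subsequent object extraction step.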
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ondrej Linda; Todd Vollmer; Jim Alves-Foss
2011-08-01
Resiliency and cyber security of modern critical infrastructures is becoming increasingly important with the growing number of threats in the cyber-environment. This paper proposes an extension to a previously developed fuzzy logic based anomaly detection network security cyber sensor by incorporating Type-2 Fuzzy Logic (T2 FL). In general, fuzzy logic provides a framework for system modeling in linguistic form capable of coping with imprecise and vague meanings of words. T2 FL is an extension of Type-1 FL, which has proved successful in modeling and minimizing the effects of various kinds of dynamic uncertainties. In this paper, T2 FL provides a basis for robust anomaly detection and cyber security state awareness. In addition, the proposed algorithm was specifically developed to comply with the constrained computational requirements of low-cost embedded network security cyber sensors. The performance of the system was evaluated on a set of network data recorded from an experimental cyber-security test-bed.
An Integrated Intrusion Detection Model of Cluster-Based Wireless Sensor Network
Sun, Xuemei; Yan, Bo; Zhang, Xinzhong; Rong, Chuitian
2015-01-01
Considering wireless sensor network characteristics, this paper combines anomaly and misuse detection and proposes an integrated detection model for cluster-based wireless sensor networks, aiming at enhancing the detection rate and reducing the false-alarm rate. An Adaboost algorithm with hierarchical structures is used for anomaly detection at sensor nodes, cluster-head nodes and Sink nodes. A Cultural-Algorithm and Artificial-Fish-Swarm-Algorithm optimized Back Propagation network is applied to misuse detection at the Sink node. Extensive simulations demonstrate that this integrated model has strong intrusion detection performance. PMID:26447696
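The hierarchical node / cluster-head / Sink arrangement can be illustrated with a toy score-aggregation sketch. The range check, voting rule and thresholds below are invented for illustration; they stand in for the paper's Adaboost and optimized Back Propagation classifiers:

```python
def node_alert(reading, lower=0.0, upper=1.0):
    """Cheap per-sensor-node check: flag readings outside an expected
    range (stand-in for a lightweight per-node classifier)."""
    return not (lower <= reading <= upper)

def cluster_head_alert(node_readings, vote_fraction=0.5):
    """Cluster head aggregates its member nodes' alerts by majority
    vote, damping single-node false alarms."""
    alerts = [node_alert(r) for r in node_readings]
    return sum(alerts) / len(alerts) > vote_fraction

def sink_alert(cluster_reports, min_clusters=2):
    """Sink raises an intrusion alarm only when several clusters agree."""
    return sum(cluster_reports) >= min_clusters

clusters = [[0.2, 1.8, 0.4], [1.5, 1.7, 0.9], [0.1, 0.2, 0.3]]
reports = [cluster_head_alert(c) for c in clusters]
print(reports, sink_alert(reports))  # -> [False, True, False] False
```

Each tier suppressing noise from the tier below is the design idea the integrated model builds on; the learned classifiers replace these hand-set rules.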
Detection of Spiroplasma and Wolbachia in the bacterial gonad community of Chorthippus parallelus.
Martínez-Rodríguez, P; Hernández-Pérez, M; Bella, J L
2013-07-01
We have recently detected the endosymbiont Wolbachia in multiple individuals and populations of the grasshopper Chorthippus parallelus (Orthoptera: Acrididae). This bacterium induces reproductive anomalies, including cytoplasmic incompatibility. Such incompatibilities may help explain the maintenance of two distinct subspecies of this grasshopper, C. parallelus parallelus and C. parallelus erythropus, which are involved in a Pyrenean hybrid zone that has been extensively studied for the past 20 years, becoming a model system for the study of genetic divergence and speciation. To evaluate whether Wolbachia is the sole bacterial infection that might induce reproductive anomalies, the gonadal bacterial community of individuals from 13 distinct populations of C. parallelus was determined by denaturing gradient gel electrophoresis analysis of bacterial 16S rRNA gene fragments and sequencing. The study revealed low bacterial diversity in the gonads: a persistent bacterial trio consistent with Spiroplasma sp. and the two previously described supergroups of Wolbachia (B and F) dominated the gonad microbiota. A further evaluation of the composition of the gonad bacterial communities was carried out by whole cell hybridization. Our results confirm previous studies of the cytological distribution of Wolbachia in C. parallelus gonads and show a homogeneous infection by Spiroplasma. Spiroplasma and Wolbachia co-occurred in some individuals, but there was no significant association of Spiroplasma with a grasshopper's sex or with Wolbachia infection, although subtle trends might be detected with a larger sample size. This information, together with previous experimental crosses of this grasshopper, suggests that Spiroplasma is unlikely to contribute to sex-specific reproductive anomalies; instead, it implicates Wolbachia as the agent of the observed anomalies in C. parallelus.
Detection and characterization of buried lunar craters with GRAIL data
NASA Astrophysics Data System (ADS)
Sood, Rohan; Chappaz, Loic; Melosh, Henry J.; Howell, Kathleen C.; Milbury, Colleen; Blair, David M.; Zuber, Maria T.
2017-06-01
We used gravity mapping observations from NASA's Gravity Recovery and Interior Laboratory (GRAIL) to detect, characterize and validate the presence of large impact craters buried beneath the lunar maria. In this paper we focus on two prominent anomalies detected in the GRAIL data using the gravity gradiometry technique. Our detection strategy is applied to both free-air and Bouguer gravity field observations to identify gravitational signatures that are similar to those observed over buried craters. The presence of buried craters is further supported by individual analysis of regional free-air gravity anomalies, Bouguer gravity anomaly maps, and forward modeling. Our best candidate, for which we propose the informal name of Earhart Crater, is approximately 200 km in diameter and forms part of the northwestern rim of Lacus Somniorum. The other candidate, for which we propose the informal name of Ashoka Anomaly, is approximately 160 km in diameter and lies completely buried beneath Mare Tranquillitatis. Other large, still unrecognized, craters undoubtedly underlie other portions of the Moon's vast mare lavas.
Anomaly Detection in Additively Manufactured Parts Using Laser Doppler Vibrometry
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hernandez, Carlos A.
Additively manufactured parts are susceptible to non-uniform structure caused by the unique manufacturing process, which can lead to structural weakness or catastrophic failure. Using laser Doppler vibrometry and frequency response analysis, non-contact detection of anomalies in additively manufactured parts may be possible. Preliminary tests show promise for small-scale detection, but further work is necessary.
Associated congenital anomalies among cases with Down syndrome.
Stoll, Claude; Dott, Beatrice; Alembik, Yves; Roth, Marie-Paule
2015-12-01
Down syndrome (DS) is the most common congenital anomaly and has been widely studied for at least 150 years. However, the type and frequency of congenital anomalies associated with DS are still controversial. Despite prenatal diagnosis and elective termination of pregnancy for fetal anomalies, the live birth prevalence of DS in Europe from 2008 to 2012 was 10.2 per 10,000. The objective of this study was to examine the major congenital anomalies occurring in infants and fetuses with Down syndrome. The material for this study came from 402,532 consecutive pregnancies of known outcome registered by our registry of congenital anomalies between 1979 and 2008. Four hundred and sixty-seven (64%) of the 728 registered cases with DS had at least one major associated congenital anomaly. The most common associated anomalies were cardiac anomalies, 323 cases (44%), followed by digestive system anomalies, 42 cases (6%), musculoskeletal system anomalies, 35 cases (5%), urinary system anomalies, 28 cases (4%), respiratory system anomalies, 13 cases (2%), and other system anomalies, 26 cases (3.6%). Among the cases with DS with congenital heart defects, the most common cardiac anomaly was atrioventricular septal defect (30%), followed by atrial septal defect (25%), ventricular septal defect (22%), patent ductus arteriosus (5%), coarctation of the aorta (5%), and tetralogy of Fallot (3%). Among the cases with DS with a digestive system anomaly, duodenal atresia (67%), Hirschsprung disease (14%), and tracheo-esophageal atresia (10%) were the most common. Fourteen (2%) of the cases with DS had an obstructive anomaly of the renal pelvis, including hydronephrosis. The other most common anomalies associated with DS were syndactyly, club foot, polydactyly, limb reduction, cataract, hydrocephaly, cleft palate, hypospadias and diaphragmatic hernia. Many studies assessing the anomalies associated with DS have reported various results.
There is no agreement in the literature as to which associated anomalies are most common in cases with DS. In this study we observed a higher percentage of associated anomalies than in other reported series, as well as an increase in the incidence of duodenal atresia, urinary system anomalies, musculoskeletal system anomalies, and respiratory system anomalies, and a decrease in the incidence of anal atresia, annular pancreas, and limb reduction defects. In conclusion, we observed a high prevalence of total congenital anomalies and specific patterns of malformations associated with Down syndrome, which emphasizes the need to carefully evaluate all cases with Down syndrome for possible associated major congenital anomalies.
Towards Reliable Evaluation of Anomaly-Based Intrusion Detection Performance
NASA Technical Reports Server (NTRS)
Viswanathan, Arun
2012-01-01
This report describes the results of research into the effects of environment-induced noise on the evaluation process for anomaly detectors in the cyber security domain. This research was conducted during a 10-week summer internship program ending on the 23rd of August, 2012 at the Jet Propulsion Laboratory in Pasadena, California. The research lies within the larger context of the Los Angeles Department of Water and Power (LADWP) Smart Grid cyber security project, a Department of Energy (DoE) funded effort involving the Jet Propulsion Laboratory, the California Institute of Technology, and the University of Southern California / Information Sciences Institute. The results of the present effort constitute an important contribution towards building more rigorous evaluation paradigms for anomaly-based intrusion detectors in complex cyber-physical systems such as the Smart Grid. Anomaly detection is a key strategy for cyber intrusion detection; it operates by identifying deviations from profiles of nominal behavior and is thus conceptually appealing for detecting "novel" attacks. Evaluating the performance of such a detector requires assessing (a) how well it captures the model of nominal behavior, and (b) how well it detects attacks (deviations from normality). Current evaluation methods produce results that give insufficient insight into the operation of a detector, inevitably resulting in a poor characterization of a detector's performance. In this work, we first describe a preliminary taxonomy of key evaluation constructs that are necessary for establishing rigor in the evaluation regime of an anomaly detector. We then focus on clarifying the impact of the operational environment on the manifestation of attacks in monitored data. We show how dynamic and evolving environments can introduce high variability into the data stream, perturbing detector performance.
Prior research has focused on understanding the impact of this variability in training data for anomaly detectors, but has ignored variability in the attack signal that will necessarily affect the evaluation results for such detectors. We posit that current evaluation strategies implicitly assume that attacks always manifest in a stable manner; we show that this assumption is wrong. We describe a simple experiment to demonstrate the effects of environmental noise on the manifestation of attacks in data and introduce the notion of attack manifestation stability. Finally, we argue that conclusions about detector performance will be unreliable and incomplete if the stability of attack manifestation is not accounted for in the evaluation strategy.
2004-02-01
UNCLASSIFIED - Conducted experiments to determine the usability of general-purpose anomaly detection algorithms to monitor a large, complex military...reaction and detection modules to perform tailored analysis sequences to monitor environmental conditions, health hazards and physiological states...scalability of lab-proven anomaly detection techniques for intrusion detection in real-world, high-volume environments.
Usage of Fault Detection Isolation & Recovery (FDIR) in Constellation (CxP) Launch Operations
NASA Technical Reports Server (NTRS)
Ferrell, Rob; Lewis, Mark; Perotti, Jose; Oostdyk, Rebecca; Spirkovska, Lilly; Hall, David; Brown, Barbara
2010-01-01
This paper explores the usage of Fault Detection Isolation & Recovery (FDIR) in the Constellation Program (CxP), in particular in Launch Operations at Kennedy Space Center (KSC). NASA's Exploration Technology Development Program (ETDP) is currently funding a project that is developing a prototype FDIR to demonstrate the feasibility of incorporating FDIR into the CxP Ground Operations Launch Control System (LCS). An architecture that supports multiple FDIR tools and their integration into LCS has been formulated. In addition, tools have been selected that provide fault detection, fault isolation, and anomaly detection, along with integration between Flight and Ground elements.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jared Verba; Michael Milvich
2008-05-01
Current Intrusion Detection System (IDS) technology is not suited to be widely deployed inside a Supervisory Control and Data Acquisition (SCADA) environment. Anomaly- and signature-based IDS technologies have developed methods to cover information-technology-based network activity and protocols effectively. However, these IDS technologies do not include the fine protocol granularity required to ensure network security inside an environment with weak protocols lacking authentication and encryption. By implementing a more specific and more intelligent packet inspection mechanism, tailored traffic flow analysis, and unique packet tampering detection, IDS technology developed specifically for SCADA environments can be deployed with confidence in detecting malicious activity.
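The kind of protocol-granular inspection argued for above can be illustrated on Modbus/TCP, a protocol commonly found in SCADA networks. The framing offsets follow the Modbus/TCP specification (7-byte MBAP header, function code as the first PDU byte), but the whitelist policy itself is an invented example, not the paper's rule set:

```python
# Read vs. write Modbus function codes (standard assignments).
READ_CODES = {0x01, 0x02, 0x03, 0x04}
WRITE_CODES = {0x05, 0x06, 0x0F, 0x10}

def inspect(adu, src_is_hmi):
    """Toy deep-packet-inspection rule for a Modbus/TCP frame.
    Returns None if the frame passes, else a reason string."""
    if len(adu) < 8:
        return "truncated frame"
    if adu[2:4] != b"\x00\x00":          # MBAP protocol identifier must be 0
        return "bad protocol identifier"
    fc = adu[7]                          # function code: first PDU byte
    if fc not in READ_CODES | WRITE_CODES:
        return f"unexpected function code 0x{fc:02x}"
    if fc in WRITE_CODES and not src_is_hmi:
        return "write from non-HMI source"
    return None

# Write Single Register (0x06) frame: OK from the HMI, flagged otherwise.
frame = bytes([0x00, 0x01, 0x00, 0x00, 0x00, 0x06,
               0x11, 0x06, 0x00, 0x01, 0x00, 0x03])
print(inspect(frame, src_is_hmi=True))   # -> None
print(inspect(frame, src_is_hmi=False))  # -> write from non-HMI source
```

Because SCADA protocols carry no authentication of their own, this kind of source-aware, function-code-level rule is what generic IT signatures cannot express.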
Detection of Anomalies in Hydrometric Data Using Artificial Intelligence Techniques
NASA Astrophysics Data System (ADS)
Lauzon, N.; Lence, B. J.
2002-12-01
This work focuses on the detection of anomalies in hydrometric data sequences, such as 1) outliers, which are individual data having statistical properties that differ from those of the overall population; 2) shifts, which are sudden changes over time in the statistical properties of the historical records of data; and 3) trends, which are systematic changes over time in the statistical properties. For the purpose of the design and management of water resources systems, it is important to be aware of these anomalies in hydrometric data, for they can induce a bias in the estimation of water quantity and quality parameters. These anomalies may be viewed as specific patterns affecting the data, and therefore pattern recognition techniques can be used for identifying them. However, the number of possible patterns is very large for each type of anomaly and consequently large computing capacities are required to account for all possibilities using the standard statistical techniques, such as cluster analysis. Artificial intelligence techniques, such as the Kohonen neural network and fuzzy c-means, are clustering techniques commonly used for pattern recognition in several areas of engineering and have recently begun to be used for the analysis of natural systems. They require much less computing capacity than the standard statistical techniques, and therefore are well suited for the identification of outliers, shifts and trends in hydrometric data. This work constitutes a preliminary study, using synthetic data representing hydrometric data that can be found in Canada. The analysis of the results obtained shows that the Kohonen neural network and fuzzy c-means are reasonably successful in identifying anomalies. This work also addresses the problem of uncertainties inherent to the calibration procedures that fit the clusters to the possible patterns for both the Kohonen neural network and fuzzy c-means. 
Indeed, for the same database, different sets of clusters can be established with these calibration procedures. A simple method for analyzing uncertainties associated with the Kohonen neural network and fuzzy c-means is developed here. The method combines the results from several sets of clusters, either from the Kohonen neural network or fuzzy c-means, so as to provide an overall diagnosis as to the identification of outliers, shifts and trends. The results indicate an improvement in the performance for identifying anomalies when the method of combining cluster sets is used, compared with when only one cluster set is used.
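The clustering approach described above can be sketched in code. The following is a minimal fuzzy c-means implementation on synthetic two-regime data, with a simple distance-based outlier rule; the cluster count, the fuzzifier m = 2, and the 4x-median flagging threshold are illustrative assumptions, not the authors' calibration:

```python
import numpy as np

def fuzzy_c_means(X, c=2, m=2.0, n_iter=100, seed=0):
    """Minimal fuzzy c-means: returns centroids and the n x c membership matrix."""
    rng = np.random.default_rng(seed)
    U = rng.random((X.shape[0], c))
    U /= U.sum(axis=1, keepdims=True)              # memberships sum to 1 per sample
    for _ in range(n_iter):
        W = U ** m
        centroids = (W.T @ X) / W.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2) + 1e-12
        U = 1.0 / d ** (2.0 / (m - 1.0))
        U /= U.sum(axis=1, keepdims=True)
    return centroids, U

# Synthetic two-regime "hydrometric" data with one injected outlier at the end.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(5, 1, (50, 2)), [[20.0, 20.0]]])
centroids, U = fuzzy_c_means(X, c=2)
dmin = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2).min(axis=1)
flags = dmin > 4 * np.median(dmin)                 # far from every cluster -> outlier
```

Combining the flags produced by several such cluster sets (different seeds or cluster counts) gives an overall diagnosis in the spirit of the combination method the authors describe.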
Moore, Kieran M; Edge, Graham; Kurc, Andrew R
2008-11-14
Timeliness is a critical asset to the detection of public health threats when using syndromic surveillance systems. In order for epidemiologists to effectively distinguish which events are indicative of a true outbreak, the ability to utilize specific data streams from generalized data summaries is necessary. Taking advantage of graphical user interfaces and visualization capacities of current surveillance systems makes it easier for users to investigate detected anomalies by generating custom graphs, maps, plots, and temporal-spatial analysis of specific syndromes or data sources.
Detecting Visually Observable Disease Symptoms from Faces.
Wang, Kuan; Luo, Jiebo
2016-12-01
Recent years have witnessed an increasing interest in the application of machine learning to clinical informatics and healthcare systems. A significant amount of research has been done on healthcare systems based on supervised learning. In this study, we present a generalized solution to detect visually observable symptoms on faces using semi-supervised anomaly detection combined with machine vision algorithms. We rely on disease-related statistical facts to detect abnormalities and classify them into multiple categories to narrow down the possible medical causes. Our method contrasts with most existing approaches, which are limited by the availability of the labeled training data required for supervised learning, and therefore offers the major advantage of flagging any unusual and visually observable symptoms.
Ramman, Tarun Raina; Dutta, Nilanjan; Chowdhuri, Kuntal Roy; Agrawal, Sunny; Girotra, Sumir; Azad, Sushil; Radhakrishnan, Sitaraman; Iyer, Parvathi Unninayar; Iyer, Krishna Subramony
2018-01-01
Persistent left superior vena cava is a common congenital anomaly of the thoracic venous system. A left superior vena cava draining into the left atrium is a malformation of the sinus venosus and caval system. The anomaly may be a cause of unexplained hypoxia, even in adults. It may give rise to various diagnostic and technical challenges during cardiac catheterization and open-heart surgery, and is often detected serendipitously during diagnostic workup. Left superior vena cava opening into the left atrium is very commonly associated with other congenital heart defects, but tetralogy of Fallot is only rarely associated with a persistent left superior vena cava draining into the left atrium. We report four such cases that underwent successful surgical correction.
NASA Astrophysics Data System (ADS)
Duling, Irl N.
2016-05-01
Terahertz energy, with its ability to penetrate clothing and non-conductive materials, has held much promise in the area of security scanning. Millimeter wave systems (300 GHz and below) have been widely deployed. These systems have used full two-dimensional surface imaging, which has raised privacy concerns. Pulsed terahertz imaging can detect the presence of unwanted objects without the need for two-dimensional photographic imaging. With high-speed waveform acquisition, it is possible to create handheld tools that can locate anomalies under clothing or headgear by looking exclusively at either single-point waveforms or cross-sectional images, which do not pose a privacy concern. Identification of the anomaly, to classify it as a potential threat or a benign object, is also possible.
Ionospheric earthquake effects detection based on Total Electron Content (TEC) GPS Correlation
NASA Astrophysics Data System (ADS)
Sunardi, Bambang; Muslim, Buldan; Eka Sakya, Andi; Rohadi, Supriyanto; Sulastri; Murjaya, Jaya
2018-03-01
Advances in science and technology have shown that ground-based GPS receivers can detect ionospheric Total Electron Content (TEC) disturbances caused by various natural phenomena, including earthquakes. One study of the Tohoku (Japan) earthquake of March 11, 2011 (magnitude M 9.0) showed TEC fluctuations observed across the GPS network surrounding the disaster area. This paper discusses the detection of ionospheric earthquake effects using TEC GPS data. The case studies were the Kebumen earthquake (January 25, 2014, M 6.2), the Sumba earthquake (February 12, 2016, M 6.2) and the Halmahera earthquake (February 17, 2016, M 6.1). A 31-day TEC-GIM (Global Ionosphere Map) correlation method was used to monitor TEC anomalies in the ionosphere. To rule out geomagnetic disturbances due to solar activity, we also compared against the Dst index over the same time window. The results showed an anomalous ratio of the correlation-coefficient deviation to its standard deviation upon the occurrences of the Kebumen and Sumba earthquakes, but no similar anomaly was detected for the Halmahera earthquake. Continuous monitoring of TEC GPS data is needed to detect earthquake effects in the ionosphere. This study offers hope for strengthening earthquake-effect early warning using TEC GPS data, and further development of continuous TEC observation based on the GPS network that already exists in Indonesia is needed to support such early warning systems.
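The correlation-deviation test described in this abstract can be sketched under toy assumptions: an idealized diurnal TEC curve stands in for real observations, and the 31-day window and 2-sigma threshold are illustrative choices rather than the authors' exact parameters.

```python
import numpy as np

def tec_correlation_flags(tec_obs, tec_gim, window=31, k=2.0):
    """Correlate each day's observed TEC curve with the GIM curve, then flag
    days whose correlation deviates from the trailing-window mean by more
    than k standard deviations (window and k are illustrative)."""
    n_days = len(tec_obs)
    r = np.array([np.corrcoef(tec_obs[d], tec_gim[d])[0, 1] for d in range(n_days)])
    flags = np.zeros(n_days, dtype=bool)
    for d in range(window, n_days):
        past = r[d - window:d]
        sigma = past.std()
        if sigma > 0 and abs(r[d] - past.mean()) > k * sigma:
            flags[d] = True
    return r, flags

# Synthetic hourly TEC curves for 60 days; day 45 is strongly disturbed.
rng = np.random.default_rng(0)
hours = np.arange(24)
model = 20 + 10 * np.sin(2 * np.pi * hours / 24)       # idealized diurnal curve
tec_gim = np.tile(model, (60, 1))
tec_obs = tec_gim + rng.normal(0, 0.3, tec_gim.shape)  # quiet-day observations
tec_obs[45] += rng.normal(0, 15, 24)                   # simulated disturbance
r, flags = tec_correlation_flags(tec_obs, tec_gim)
```

On quiet days the observed and model curves correlate almost perfectly, so even a modest disturbance produces a correlation drop many standard deviations below the trailing baseline.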
NASA Technical Reports Server (NTRS)
Westmeyer, Paul A. (Inventor); Wertenberg, Russell F. (Inventor); Krage, Frederick J. (Inventor); Riegel, Jack F. (Inventor)
2017-01-01
An authentication procedure utilizes multiple independent sources of data to determine whether usage of a device, such as a desktop computer, is authorized. When a comparison indicates an anomaly from the baseline usage data, the system provides a notice that access to the first device is not authorized.
Shaikh, Riaz Ahmed; Jameel, Hassan; d'Auriol, Brian J; Lee, Heejo; Lee, Sungyoung; Song, Young-Jae
2009-01-01
Existing anomaly and intrusion detection schemes for wireless sensor networks have mainly focused on the detection of intrusions. Once an intrusion is detected, alerts or claims are generated. However, any unidentified malicious nodes in the network could send faulty anomaly and intrusion claims about legitimate nodes to the other nodes. Verifying the validity of such claims is a critical and challenging issue that is not considered in the existing cooperative-based distributed anomaly and intrusion detection schemes for wireless sensor networks. In this paper, we propose a validation algorithm that addresses this problem. This algorithm utilizes the concept of intrusion-aware reliability, which helps to provide adequate reliability at a modest communication cost. We also provide a security resiliency analysis of the proposed intrusion-aware alert validation algorithm.
Medical sieve: a cognitive assistant for radiologists and cardiologists
NASA Astrophysics Data System (ADS)
Syeda-Mahmood, T.; Walach, E.; Beymer, D.; Gilboa-Solomon, F.; Moradi, M.; Kisilev, P.; Kakrania, D.; Compas, C.; Wang, H.; Negahdar, R.; Cao, Y.; Baldwin, T.; Guo, Y.; Gur, Y.; Rajan, D.; Zlotnick, A.; Rabinovici-Cohen, S.; Ben-Ari, R.; Guy, Amit; Prasanna, P.; Morey, J.; Boyko, O.; Hashoul, S.
2016-03-01
Radiologists and cardiologists today have to view large amounts of imaging data relatively quickly, leading to eye fatigue. Further, they have only limited access to clinical information, relying mostly on their visual interpretation of imaging studies for their diagnostic decisions. In this paper, we present Medical Sieve, an automated cognitive assistant for radiologists and cardiologists designed to help in their clinical decision-making. The Sieve is a clinical informatics system that collects clinical, textual and imaging data of patients from electronic health record systems. It then analyzes the multimodal content to detect any anomalies and summarizes the patient record, collecting all relevant information pertinent to a chief complaint. The results of anomaly detection are then fed into a reasoning engine, which uses evidence from both patient-independent clinical knowledge and large-scale patient-driven similar-patient statistics to arrive at a potential differential diagnosis to help in clinical decision making. While compactly summarizing all relevant information per chief complaint, the system retains links to the raw data for detailed review, providing holistic summaries of patient conditions. Results of clinical studies in the domains of cardiology and breast radiology have already shown the promise of the system in differential diagnosis and imaging study summarization.
Routine screening for fetal anomalies: expectations.
Goldberg, James D
2004-03-01
Ultrasound has become a routine part of prenatal care. Despite this, the sensitivity and specificity of the procedure is unclear to many patients and healthcare providers. In a small study from Canada, 54.9% of women reported that they had received no information about ultrasound before their examination. In addition, 37.2% of women indicated that they were unaware of any fetal problems that ultrasound could not detect. Most centers that perform ultrasound do not have their own statistics regarding sensitivity and specificity; it is necessary to rely on large collaborative studies. Unfortunately, wide variations exist in these studies, with detection rates for fetal anomalies between 13.3% and 82.4%. The Eurofetus study is the largest prospective study performed to date and, because of the time and expense involved in this type of study, a similar study is not likely to be repeated. The overall detection rate for anomalous fetuses was 64.1%. It is important to note that in this study, ultrasounds were performed in tertiary centers with significant experience in detecting fetal malformations. The RADIUS study also demonstrated a significantly improved detection rate of anomalies before 24 weeks in tertiary versus community centers (35% versus 13%). Two concepts emerge from reviewing these data. First, patients must be made aware of the limitations of ultrasound in detecting fetal anomalies. This information is critical to allow them to make informed decisions whether to undergo ultrasound examination and to prepare them for potential outcomes. Second, to achieve the detection rates reported in the Eurofetus study, ultrasound examination must be performed in centers that have extensive experience in the detection of fetal anomalies.
SSME Advanced Health Management: Project Overview
NASA Technical Reports Server (NTRS)
Plowden, John
2000-01-01
This document comprises the viewgraphs from a presentation concerning the development of the health management system for the Space Shuttle Main Engine (SSME). It reviews the historical background of the SSME Advanced Health Management effort through the present final health management configuration. The document includes reviews of three subsystems of the Advanced Health Management System: (1) the Real-Time Vibration Monitor System, (2) the Linear Engine Model, and (3) the Optical Plume Anomaly Detection system.
Data Mining for Anomaly Detection
NASA Technical Reports Server (NTRS)
Biswas, Gautam; Mack, Daniel; Mylaraswamy, Dinkar; Bharadwaj, Raj
2013-01-01
The Vehicle Integrated Prognostics Reasoner (VIPR) program describes methods for enhanced diagnostics as well as a prognostic extension to the current state-of-the-art Aircraft Diagnostic and Maintenance System (ADMS). VIPR introduced a new anomaly detection function for discovering previously undetected and undocumented situations, where there are clear deviations from nominal behavior. Once a baseline (nominal model of operations) is established, the detection and analysis is split between on-aircraft outlier generation and off-aircraft expert analysis to characterize and classify events that may not have been anticipated by individual system providers. Offline expert analysis is supported by data curation and data mining algorithms that can be applied in the contexts of supervised and unsupervised learning. In this report, we discuss efficient methods to implement the Kolmogorov complexity measure using compression algorithms, and run a systematic empirical analysis to determine the best compression measure. Our experiments established that the combination of the DZIP compression algorithm and the CiDM distance measure provides the best results for capturing relevant properties of time series data encountered in aircraft operations. This combination was used as the basis for developing an unsupervised learning algorithm to define "nominal" flight segments using historical flight segments.
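The compression-based complexity measure lends itself to a short sketch. The DZIP algorithm and CiDM distance named in the report are not reproduced here; this example substitutes the standard library's zlib and the well-known normalized compression distance (NCD) to illustrate the idea:

```python
import random
import zlib

def c(b: bytes) -> int:
    """Approximate Kolmogorov complexity by zlib-compressed length."""
    return len(zlib.compress(b, 9))

def ncd(x: bytes, y: bytes) -> float:
    """Normalized compression distance: small for similar sequences, ~1 for unrelated."""
    cx, cy, cxy = c(x), c(y), c(x + y)
    return (cxy - min(cx, cy)) / max(cx, cy)

regular = b"0123456789" * 100                               # highly regular "nominal" segment
random.seed(0)
noise = bytes(random.randrange(256) for _ in range(1000))   # incompressible segment
```

Because compressing a sequence together with a similar one adds little length, NCD stays near 0 for near-duplicates and approaches 1 for unrelated data, which is what makes it usable as a distance measure over flight segments.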
Locality-constrained anomaly detection for hyperspectral imagery
NASA Astrophysics Data System (ADS)
Liu, Jiabin; Li, Wei; Du, Qian; Liu, Kui
2015-12-01
Detecting a target with low-occurrence-probability from unknown background in a hyperspectral image, namely anomaly detection, is of practical significance. Reed-Xiaoli (RX) algorithm is considered as a classic anomaly detector, which calculates the Mahalanobis distance between local background and the pixel under test. Local RX, as an adaptive RX detector, employs a dual-window strategy to consider pixels within the frame between inner and outer windows as local background. However, the detector is sensitive if such a local region contains anomalous pixels (i.e., outliers). In this paper, a locality-constrained anomaly detector is proposed to remove outliers in the local background region before employing the RX algorithm. Specifically, a local linear representation is designed to exploit the internal relationship between linearly correlated pixels in the local background region and the pixel under test and its neighbors. Experimental results demonstrate that the proposed detector improves the original local RX algorithm.
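The RX computation described above can be sketched directly. This is the global (single-window) form of the detector on a synthetic cube, not the authors' locality-constrained variant; the small ridge added to the covariance is an assumption for numerical stability:

```python
import numpy as np

def rx_detector(cube):
    """Global Reed-Xiaoli detector: squared Mahalanobis distance of each pixel's
    spectrum from the scene mean under the background covariance."""
    h, w, b = cube.shape
    X = cube.reshape(-1, b).astype(float)
    mu = X.mean(axis=0)
    cov = np.cov(X, rowvar=False)
    inv = np.linalg.inv(cov + 1e-6 * np.eye(b))    # small ridge for stability (assumption)
    d = X - mu
    return np.einsum("ij,jk,ik->i", d, inv, d).reshape(h, w)

# Synthetic 20 x 20 scene with 5 bands and one spectrally anomalous pixel.
rng = np.random.default_rng(0)
cube = rng.normal(0.0, 1.0, (20, 20, 5))
cube[10, 10] += 8.0                                # inject an anomaly
scores = rx_detector(cube)
```

The local (dual-window) variant replaces the scene-wide mean and covariance with statistics from the frame between an inner and outer window around each pixel, which is where contamination by outliers, the problem this paper targets, arises.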
Automation for deep space vehicle monitoring
NASA Technical Reports Server (NTRS)
Schwuttke, Ursula M.
1991-01-01
Information on automation for deep space vehicle monitoring is given in viewgraph form. Information is given on automation goals and strategy; the Monitor Analyzer of Real-time Voyager Engineering Link (MARVEL); intelligent input data management; decision theory for making tradeoffs; dynamic tradeoff evaluation; evaluation of anomaly detection results; evaluation of data management methods; system level analysis with cooperating expert systems; the distributed architecture of multiple expert systems; and event driven response.
Anomaly detection for machine learning redshifts applied to SDSS galaxies
NASA Astrophysics Data System (ADS)
Hoyle, Ben; Rau, Markus Michael; Paech, Kerstin; Bonnett, Christopher; Seitz, Stella; Weller, Jochen
2015-10-01
We present an analysis of anomaly detection for machine learning redshift estimation. Anomaly detection allows the removal of poor training examples, which can adversely influence redshift estimates. Anomalous training examples may be photometric galaxies with incorrect spectroscopic redshifts, or galaxies with one or more poorly measured photometric quantity. We select 2.5 million `clean' SDSS DR12 galaxies with reliable spectroscopic redshifts, and 6730 `anomalous' galaxies with spectroscopic redshift measurements which are flagged as unreliable. We contaminate the clean base galaxy sample with galaxies with unreliable redshifts and attempt to recover the contaminating galaxies using the Elliptical Envelope technique. We then train four machine learning architectures for redshift analysis on both the contaminated sample and on the preprocessed `anomaly-removed' sample and measure redshift statistics on a clean validation sample generated without any preprocessing. We find an improvement on all measured statistics of up to 80 per cent when training on the anomaly removed sample as compared with training on the contaminated sample for each of the machine learning routines explored. We further describe a method to estimate the contamination fraction of a base data sample.
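A minimal sketch of the elliptical-envelope preprocessing step on a toy two-feature sample; the crude "central half" refit below stands in for the robust (MCD-style) covariance estimate that full implementations of the technique use, and the contamination fraction is an illustrative parameter:

```python
import numpy as np

def mahalanobis_sq(X, mu, cov):
    """Squared Mahalanobis distance of each row of X from (mu, cov)."""
    inv = np.linalg.inv(cov)
    d = X - mu
    return np.einsum("ij,jk,ik->i", d, inv, d)

def elliptic_envelope_flags(X, contamination=0.05):
    """Fit a covariance ellipse to the most central half of the data (a crude
    robust step), then flag the `contamination` fraction of points with the
    largest Mahalanobis distances."""
    dist = mahalanobis_sq(X, X.mean(axis=0), np.cov(X, rowvar=False))
    core = X[dist <= np.median(dist)]              # central half of the data
    dist = mahalanobis_sq(X, core.mean(axis=0), np.cov(core, rowvar=False))
    return dist > np.quantile(dist, 1.0 - contamination)

rng = np.random.default_rng(0)
clean = rng.normal(0, 1, (500, 2))                 # "reliable-redshift" sample
bad = rng.uniform(6, 10, (25, 2))                  # contaminating outliers
X = np.vstack([clean, bad])
flags = elliptic_envelope_flags(X, contamination=25 / 525)
X_cleaned = X[~flags]                              # "anomaly-removed" sample
```

Training the regression model on `X_cleaned` rather than `X` mirrors the contaminated-versus-preprocessed comparison the paper reports.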
DOE Office of Scientific and Technical Information (OSTI.GOV)
Neil, Joshua Charles; Fisk, Michael Edward; Brugh, Alexander William
A system, apparatus, computer-readable medium, and computer-implemented method are provided for detecting anomalous behavior in a network. Historical parameters of the network are determined in order to establish normal activity levels. A plurality of paths in the network are enumerated as part of a graph representing the network, where each computing system in the network may be a node in the graph and the sequence of connections between two computing systems may be a directed edge in the graph. A statistical model is applied to the plurality of paths in the graph on a sliding-window basis to detect anomalous behavior. Data collected by a Unified Host Collection Agent ("UHCA") may also be used to detect anomalous behavior.
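The sliding-window statistical test can be illustrated with a toy per-edge model. The z-score rule, the threshold, and the class and host names below are illustrative assumptions, not the patent's actual model, which operates over enumerated paths rather than single edges:

```python
import math
from collections import defaultdict

class EdgeAnomalyDetector:
    """Toy sliding-window detector over a connection graph: learn per-edge
    baseline window counts, then flag windows whose count deviates strongly
    from the learned baseline."""

    def __init__(self, threshold=3.0):
        self.baseline = defaultdict(list)   # edge -> historical window counts
        self.threshold = threshold          # z-score cutoff (illustrative)

    def train(self, edge, window_counts):
        self.baseline[edge].extend(window_counts)

    def score(self, edge, count):
        hist = self.baseline[edge]          # assumes the edge has history
        mu = sum(hist) / len(hist)
        sigma = math.sqrt(sum((c - mu) ** 2 for c in hist) / len(hist)) or 1.0
        return (count - mu) / sigma         # z-score of the current window

    def is_anomalous(self, edge, count):
        return self.score(edge, count) > self.threshold

det = EdgeAnomalyDetector()
det.train(("hostA", "hostB"), [4, 5, 6, 5, 4, 6, 5, 5])
```

Extending the score from single edges to sums over enumerated paths recovers the path-level flavor of the patented approach.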
Bio-Inspired Distributed Decision Algorithms for Anomaly Detection
2017-03-01
Keywords: DIAMoND, Local Anomaly Detector, Total Impact Estimation, Threat Level Estimator. [Only report front matter and table-of-contents fragments survive; recoverable section titles: "Performance of the DIAMoND Algorithm as a DNS-Server Level Attack Detection and Mitigation"; "Hierarchical 2-Level Topology".]
Anomaly Detection in the Right Hemisphere: The Influence of Visuospatial Factors
ERIC Educational Resources Information Center
Smith, Stephen D.; Dixon, Michael J.; Tays, William J.; Bulman-Fleming, M. Barbara
2004-01-01
Previous research with both brain-damaged and neurologically intact populations has demonstrated that the right cerebral hemisphere (RH) is superior to the left cerebral hemisphere (LH) at detecting anomalies (or incongruities) in objects (Ramachandran, 1995; Smith, Tays, Dixon, & Bulman-Fleming, 2002). The current research assesses whether the RH…
A Semiparametric Model for Hyperspectral Anomaly Detection
2012-01-01
...treeline) in the presence of natural background clutter (e.g., trees, dirt roads, grasses). Each target consists of about 7 × 4 pixels, and each pixel... The vehicles near the treeline in Cube 1 (Figure 1) constitute the target set, but, since anomaly detectors are not designed to detect a particular target...
Failure Control Techniques for the SSME
NASA Technical Reports Server (NTRS)
Taniguchi, M. H.
1987-01-01
Since ground testing of the Space Shuttle Main Engine (SSME) began in 1975, the detection of engine anomalies and the prevention of major damage have been achieved by a multi-faceted detection/shutdown system. The system continues the monitoring task today and consists of the following: sensors, automatic redline and other limit logic, redundant sensors and controller voting logic, conditional decision logic, and human monitoring. Typically, on the order of 300 to 500 measurements are sensed and recorded for each test, while on the order of 100 are used for control and monitoring. Despite extensive monitoring by the current detection system, twenty-seven (27) major incidents have occurred. This number would appear insignificant compared with over 1200 hot-fire tests which have taken place since 1976. However, the number suggests the requirement for and future benefits of a more advanced failure detection system.
Diagnosis of fetal syndromes by three- and four-dimensional ultrasound: is there any improvement?
Barišić, Lara Spalldi; Stanojević, Milan; Kurjak, Asim; Porović, Selma; Gaber, Ghalia
2017-08-28
With all of our present knowledge, high-technology diagnostic equipment, electronic databases and other available supporting resources, detection of fetal syndromes is still a challenge for healthcare providers in the prenatal as well as the postnatal period. Prenatal diagnosis of fetal syndromes is not straightforward; it is a difficult puzzle that needs to be assembled and solved. Detection of one anomaly should always raise suspicion of the existence of more anomalies, and can be a trigger to investigate further and raise awareness of possible syndromes. Highly specialized software systems for three- and four-dimensional ultrasound (3D/4D US) have enabled detailed depiction of fetal anatomy and assessment of the dynamics of fetal structural and functional development in real time. With recent advances in 3D/4D US technology, antenatal diagnosis of fetal anomalies and syndromes has shifted from the 2nd to the 1st trimester of pregnancy. It remains an open question what can and should be done after the prenatal diagnosis of a fetal syndrome. The 3D and 4D US techniques have improved detection accuracy of fetal abnormalities and syndromes from early pregnancy onwards. It is not easy to make a prenatal diagnosis of a fetal syndrome, so supporting tools, such as online integrated databases, are needed to increase diagnostic precision. The aim of this paper is to present the possibilities of different US techniques in the prenatal detection of some fetal syndromes.
Fuzzy Logic Based Anomaly Detection for Embedded Network Security Cyber Sensor
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ondrej Linda; Todd Vollmer; Jason Wright
Resiliency and security in critical infrastructure control systems in the modern world of cyber terrorism constitute a relevant concern. Developing a network security system specifically tailored to the requirements of such critical assets is of a primary importance. This paper proposes a novel learning algorithm for anomaly based network security cyber sensor together with its hardware implementation. The presented learning algorithm constructs a fuzzy logic rule based model of normal network behavior. Individual fuzzy rules are extracted directly from the stream of incoming packets using an online clustering algorithm. This learning algorithm was specifically developed to comply with the constrainedmore » computational requirements of low-cost embedded network security cyber sensors. The performance of the system was evaluated on a set of network data recorded from an experimental test-bed mimicking the environment of a critical infrastructure control system.« less
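The online clustering step can be sketched as follows. The radius parameter and the nearest-centroid update rule are illustrative stand-ins for the paper's fuzzy rule extraction; the real sensor derives fuzzy rules from the clusters rather than using them directly:

```python
import math

class OnlineClusterSensor:
    """Sketch of anomaly detection via online clustering of packet features:
    during learning, a packet either joins the nearest cluster (updating its
    centroid) or seeds a new one; at detection time, a packet far from every
    learned cluster is flagged. The radius is an illustrative parameter."""

    def __init__(self, radius=2.0):
        self.radius = radius
        self.centroids = []   # list of [centroid_tuple, member_count]

    @staticmethod
    def _dist(a, b):
        return math.dist(a, b)

    def learn(self, x):
        for c in self.centroids:
            if self._dist(c[0], x) <= self.radius:
                n = c[1] + 1   # incremental centroid update
                c[0] = tuple((ci * c[1] + xi) / n for ci, xi in zip(c[0], x))
                c[1] = n
                return
        self.centroids.append([tuple(x), 1])

    def is_anomalous(self, x):
        return all(self._dist(c[0], x) > self.radius for c in self.centroids)

sensor = OnlineClusterSensor(radius=2.0)
for pkt in [(1.0, 1.0), (1.2, 0.9), (5.0, 5.0), (5.1, 4.8)]:
    sensor.learn(pkt)   # two packet-feature regimes -> two clusters
```

The single pass over the stream and the constant per-packet cost are what make this style of learning compatible with the low-cost embedded hardware the paper targets.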
Iliescu, D; Tudorache, S; Comanescu, A; Antsaklis, P; Cotarcea, S; Novac, L; Cernea, N; Antsaklis, A
2013-09-01
To assess the potential of first-trimester sonography in the detection of fetal abnormalities using an extended protocol that is achievable with reasonable resources of time, personnel and ultrasound equipment. This was a prospective two-center 2-year study of 5472 consecutive unselected pregnant women examined at 12 to 13 + 6 gestational weeks. Women were examined using an extended morphogenetic ultrasound protocol that, in addition to the basic evaluation, involved a color Doppler cardiac sweep and identification of early contingent markers for major abnormalities. The prevalence of lethal and severe malformations was 1.39%. The first-trimester scan identified 40.6% of the cases detected overall and 76.3% of major structural defects. The first-trimester detection rate (DR) for major congenital heart disease (either isolated or associated with extracardiac abnormalities) was 90% and that for major central nervous system anomalies was 69.5%. In fetuses with increased nuchal translucency (NT), the first-trimester DR for major anomalies was 96%, and in fetuses with normal NT it was 66.7%. Most (67.1%) cases with major abnormalities presented with normal NT. A detailed first-trimester anomaly scan using an extended protocol is an efficient screening method to detect major fetal structural abnormalities in low-risk pregnancies. It is feasible at 12 to 13 + 6 weeks with ultrasound equipment and personnel already used for routine first-trimester screening. The rate of detection of severe malformations is greater in early pregnancy than in mid-pregnancy and on postnatal evaluation. Early heart investigation could be improved by an extended protocol involving use of color Doppler. Copyright © 2013 ISUOG. Published by John Wiley & Sons Ltd.
Liu, Guanqun; Jia, Yonggang; Liu, Hongjun; Qiu, Hanxue; Qiu, Dongling; Shan, Hongxian
2002-03-01
The exploration and localization of leakage from underground pressureless nonmetallic pipes is difficult. This paper introduces a comprehensive method combining Ground Penetrating Radar (GPR), electric potential survey and geochemical survey for leakage detection of an underground pressureless nonmetallic sewage pipe. Theoretically, within the zone influenced by a leakage spot, obvious changes in the electromagnetic and physical-chemical properties of the underground media are reflected as anomalies in GPR and electrical survey plots. GPR and electrical surveys have the advantage of being fast and accurate in delimiting the anomaly scope, and in-situ analysis of the geophysical surveys can guide the geochemical survey. Water and soil sampling and analysis can then provide the evidence for judging whether an anomaly is caused by pipe leakage. On the basis of previous tests and practical surveys, the GPR waveforms, electric potential curves, contour maps and chemical survey results are each classified into three types according to the extent or indexes of the anomalies, in order to find the leakage spots. When all three survey methods show type I anomalies at an anomalous spot, that spot is suspected as the most probable leakage location; otherwise, it is a lower-grade suspect point. Suspected leakage spots should be confirmed against site conditions, because some anomalies are caused by other factors. Subsequent excavation proved that determining suspected locations by anomaly type is effective and economical. The comprehensive method combining GPR, electric potential survey and geochemical survey is thus an effective, fast and accurate approach to leakage detection in underground nonmetallic pressureless pipes.
Cost Analysis of Following Up Incomplete Low-Risk Fetal Anatomy Ultrasounds.
O'Brien, Karen; Shainker, Scott A; Modest, Anna M; Spiel, Melissa H; Resetkova, Nina; Shah, Neel; Hacker, Michele R
2017-03-01
To examine the clinical utility and cost of follow-up ultrasounds performed as a result of suboptimal views at the time of initial second-trimester ultrasound in a cohort of low-risk pregnant women. We conducted a retrospective cohort study of women at low risk for fetal structural anomalies who had second-trimester ultrasounds at 16 to less than 24 weeks of gestation from 2011 to 2013. We determined the probability of women having follow-up ultrasounds as a result of suboptimal views at the time of the initial second-trimester ultrasound, and calculated the probability of detecting an anomaly on follow-up ultrasound. These probabilities were used to estimate the national cost of our current ultrasound practice, and the cost to identify one fetal anomaly on follow-up ultrasound. During the study period, 1,752 women met inclusion criteria. Four fetuses (0.23% [95% CI 0.06-0.58]) were found to have anomalies at the initial ultrasound. Because of suboptimal views, 205 women (11.7%) returned for a follow-up ultrasound, and one (0.49% [95% CI 0.01-2.7]) anomaly was detected. Two women (0.11%) still had suboptimal views and returned for an additional follow-up ultrasound, with no anomalies detected. When the incidence of incomplete ultrasounds was applied to a similar low-risk national cohort, the annual cost of these follow-up scans was estimated at $85,457,160. In our cohort, the cost to detect an anomaly on follow-up ultrasound was approximately $55,000. The clinical yield of performing follow-up ultrasounds because of suboptimal views on low-risk second-trimester ultrasounds is low. Since so few fetal abnormalities were identified on follow-up scans, this added cost and patient burden may not be warranted. © 2016 Wiley Periodicals, Inc.
Tune, Sarah; Schlesewsky, Matthias; Small, Steven L.; Sanford, Anthony J.; Bohan, Jason; Sassenhagen, Jona; Bornkessel-Schlesewsky, Ina
2014-01-01
The N400 event-related brain potential (ERP) has played a major role in the examination of how the human brain processes meaning. For current theories of the N400, classes of semantic inconsistencies which do not elicit N400 effects have proven particularly influential. Semantic anomalies that are difficult to detect are a case in point (“borderline anomalies”, e.g. “After an air crash, where should the survivors be buried?”), engendering a late positive ERP response but no N400 effect in English (Sanford, Leuthold, Bohan, & Sanford, 2011). In three auditory ERP experiments, we demonstrate that this result is subject to cross-linguistic variation. In a German version of Sanford and colleagues' experiment (Experiment 1), detected borderline anomalies elicited both N400 and late positivity effects compared to control stimuli or to missed borderline anomalies. Classic easy-to-detect semantic (non-borderline) anomalies showed the same pattern as in English (N400 plus late positivity). The cross-linguistic difference in the response to borderline anomalies was replicated in two additional studies with a slightly modified task (Experiment 2a: German; Experiment 2b: English), with a reliable LANGUAGE × ANOMALY interaction for the borderline anomalies confirming that the N400 effect is subject to systematic cross-linguistic variation. We argue that this variation results from differences in the language-specific default weighting of top-down and bottom-up information, concluding that N400 amplitude reflects the interaction between the two information sources in the form-to-meaning mapping. PMID:24447768
Joint geophysical investigation of a small scale magnetic anomaly near Gotha, Germany
NASA Astrophysics Data System (ADS)
Queitsch, Matthias; Schiffler, Markus; Goepel, Andreas; Stolz, Ronny; Guenther, Thomas; Malz, Alexander; Meyer, Matthias; Meyer, Hans-Georg; Kukowski, Nina
2014-05-01
In the framework of the multidisciplinary project INFLUINS (INtegrated FLUid Dynamics IN Sedimentary Basins) several airborne surveys using a full tensor magnetic gradiometer (FTMG) system were conducted in and around the Thuringian basin (central Germany). These sensors are based on highly sensitive superconducting quantum interference devices (SQUIDs) with a planar-type gradiometer setup. One of the main goals was to map magnetic anomalies along major fault zones in this sedimentary basin. In most survey areas low signal amplitudes were observed caused by very low magnetization of subsurface rocks. Due to the high lateral resolution of a magnetic gradiometer system and a flight line spacing of only 50m, however, we were able to detect even small magnetic lineaments. Especially close to Gotha a NW-SE striking strong magnetic anomaly with a length of 1.5 km was detected, which cannot be explained by the structure of the Eichenberg-Gotha-Saalfeld (EGS) fault zone and the rock-physical properties (low susceptibilities). Therefore, we hypothesize that the source of the anomaly must be related to an anomalous magnetization in the fault plane. To test this hypothesis, here we focus on the results of the 3D inversion of the airborne magnetic data set and compare them with existing structural geological models. In addition, we conducted several ground based measurements such as electrical resistivity tomography (ERT) and frequency domain electromagnetics (FDEM) to locate the fault. Especially, the geoelectrical measurements were able to image the fault zone. The result of the 2D electrical resistivity tomography shows a lower resistivity in the fault zone. Joint interpretation of airborne magnetics, geoelectrical and geological information let us propose that the source of the magnetization may be a fluid-flow induced impregnation with iron-oxide bearing minerals in the vicinity of the EGS fault plane.
NASA Astrophysics Data System (ADS)
McCann, Cooper Patrick
Low-cost flight-based hyperspectral imaging systems have the potential to provide valuable information for ecosystem and environmental studies as well as aid in land management and land health monitoring. This thesis describes (1) a bootstrap method of producing mesoscale, radiometrically-referenced hyperspectral data using the Landsat surface reflectance (LaSRC) data product as a reference target, (2) biophysically relevant basis functions to model the reflectance spectra, (3) an unsupervised classification technique based on natural histogram splitting of these biophysically relevant parameters, and (4) local and multi-temporal anomaly detection. The bootstrap method extends standard processing techniques to remove uneven illumination conditions between flight passes, allowing the creation of radiometrically self-consistent data. Through selective spectral and spatial resampling, LaSRC data is used as a radiometric reference target. Advantages of the bootstrap method include the need for minimal site access, no ancillary instrumentation, and automated data processing. Data from a flight on 06/02/2016 are compared with concurrently collected ground-based reflectance spectra as a means of validation, achieving an average error of 2.74%. Fitting reflectance spectra using basis functions based on biophysically relevant spectral features allows both noise reduction and data reduction while shifting information from spectral bands to biophysical features. Histogram splitting is used to determine a clustering based on natural splittings of these fit parameters. The Indian Pines reference data enabled comparison of this technique's efficacy with established techniques. The splitting technique is shown to be an improvement over the ISODATA clustering technique, with an overall accuracy of 34.3/19.0% before merging and 40.9/39.2% after merging.
This improvement is also seen in the kappa statistic, with before/after-merging values of 24.8/30.5 for the histogram splitting technique compared to 15.8/28.5 for ISODATA. Three hyperspectral flights over the Kevin Dome area, covering 1843 ha and acquired 06/21/2014, 06/24/2015, and 06/26/2016, are examined with different methods of anomaly detection. Detection of anomalies within a single data set is examined to determine, on a local scale, areas that are significantly different from the surrounding area. Additionally, the detection and identification of persistent and non-persistent anomalies across multiple data sets was investigated.
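The natural-histogram-splitting idea can be sketched as cutting a one-dimensional fit parameter at the valleys of its smoothed histogram. The bin count, smoothing width, and valley criterion below are illustrative assumptions, not the thesis's exact procedure.

```python
import numpy as np

def histogram_split(values, bins=64, smooth=3):
    """Split a 1-D parameter at local minima ("valleys") of its histogram.

    Minimal sketch of natural-histogram-splitting clustering; bin count,
    box-car smoothing width, and the valley test are assumptions.
    """
    counts, edges = np.histogram(values, bins=bins)
    # Light box-car smoothing so counting noise does not create spurious valleys.
    kernel = np.ones(smooth) / smooth
    sm = np.convolve(counts, kernel, mode="same")
    # A valley: strictly below its left neighbour, no higher than its right
    # (the <= on the right picks one bin per descent into a flat gap).
    valleys = [i for i in range(1, bins - 1)
               if sm[i] < sm[i - 1] and sm[i] <= sm[i + 1]]
    thresholds = [0.5 * (edges[i] + edges[i + 1]) for i in valleys]
    # Each value's cluster label is which inter-valley interval it falls in.
    return np.digitize(values, thresholds), thresholds
```

On a clearly bimodal parameter this places at least one threshold in the gap between the modes and assigns the two modes to different labels.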
1986-12-01
earthquake that is likely to occur in a given locality [Ref. 8:p. 1082]. The accumulation law of seismotectonic movement relates the amount of...mechanism - fault creep anomaly - seismic wave velocity - geomagnetic field - telluric (earth) currents - electromagnetic emissions - resistivity of
[Upper lateral incisor with 2 canals].
Fabra Campos, H
1991-01-01
Clinical case summary of a patient with an upper lateral incisor with two root canals. An anatomic anomaly of the root involving a complex root canal system was suspected when a pronounced radicular groove was detected on the lingual surface and an excessively enlarged cingulum was noted.
Verification of Minimum Detectable Activity for Radiological Threat Source Search
NASA Astrophysics Data System (ADS)
Gardiner, Hannah; Myjak, Mitchell; Baciak, James; Detwiler, Rebecca; Seifert, Carolyn
2015-10-01
The Department of Homeland Security's Domestic Nuclear Detection Office is working to develop advanced technologies that will improve the ability to detect, localize, and identify radiological and nuclear sources from airborne platforms. The Airborne Radiological Enhanced-sensor System (ARES) program is developing advanced data fusion algorithms for analyzing data from a helicopter-mounted radiation detector. This detector platform provides a rapid, wide-area assessment of radiological conditions at ground level. The NSCRAD (Nuisance-rejection Spectral Comparison Ratios for Anomaly Detection) algorithm was developed to distinguish low-count sources of interest from benign naturally occurring radiation and irrelevant nuisance sources. It uses a number of broad, overlapping regions of interest to statistically compare each newly measured spectrum with the current estimate for the background to identify anomalies. We recently developed a method to estimate the minimum detectable activity (MDA) of NSCRAD in real time. We present this method here and report on the MDA verification using both laboratory measurements and simulated injects on measured backgrounds at or near the detection limits. This work is supported by the US Department of Homeland Security, Domestic Nuclear Detection Office, under competitively awarded contract/IAA HSHQDC-12-X-00376. This support does not constitute an express or implied endorsement on the part of the Gov't.
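The abstract's core idea, statistically comparing region-of-interest counts in each new spectrum against a background expectation, can be illustrated with a simplified stand-in. The ROI boundaries, the total-count rescaling, and the 4-sigma Poisson criterion below are my illustrative assumptions, not the actual NSCRAD algorithm or its MDA machinery.

```python
import numpy as np

def roi_anomaly(spectrum, background, rois, k=4.0):
    """Flag a spectrum as anomalous if any region-of-interest (ROI) count
    deviates from the background expectation by more than k Poisson sigmas.

    Simplified stand-in for spectral-comparison-ratio anomaly detection;
    ROIs and the k=4 threshold are illustrative assumptions.
    """
    spectrum = np.asarray(spectrum, float)
    background = np.asarray(background, float)
    # Scale the background estimate to the live spectrum's total counts so
    # overall rate changes do not dominate the shape comparison.
    scale = spectrum.sum() / background.sum()
    flags = []
    for lo, hi in rois:
        expected = background[lo:hi].sum() * scale
        observed = spectrum[lo:hi].sum()
        sigma = max(np.sqrt(expected), 1.0)
        flags.append(abs(observed - expected) / sigma > k)
    return any(flags)
```

A spectrum identical in shape to the background passes, while one with a localized peak trips the ROI containing the peak.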
Discrepancy of cytogenetic analysis in Western and eastern Taiwan.
Chang, Yu-Hsun; Chen, Pui-Yi; Li, Tzu-Ying; Yeh, Chung-Nan; Li, Yi-Shian; Chu, Shao-Yin; Lee, Ming-Liang
2013-06-01
This study aimed at investigating the results of second-trimester amniocyte karyotyping in western and eastern Taiwan, and identifying any regional differences in the prevalence of fetal chromosomal anomalies. From 2004 to 2009, pregnant women who underwent amniocentesis in their second trimester at three hospitals in western Taiwan and at four hospitals in eastern Taiwan were included. All the cytogenetic analyses of cultured amniocytes were performed in the cytogenetics laboratory of the Genetic Counseling Center of Hualien Buddhist Tzu Chi General Hospital. We used the chi-square test, Student t test, and Mann-Whitney U test to evaluate the variants of clinical indications, amniocyte karyotyping results, and prevalence and types of chromosomal anomalies in western and eastern Taiwan. During the study period, 3573 samples, 1990 (55.7%) from western Taiwan and 1583 (44.3%) from eastern Taiwan, were collected and analyzed. The main indication for amniocyte karyotyping was advanced maternal age (69.0% in western Taiwan, 67.1% in eastern Taiwan). The detection rates of chromosomal anomalies by amniocyte karyotyping in eastern Taiwan (45/1582, 2.8%) did not differ significantly from that in western Taiwan (42/1989, 2.1%) (p = 1.58). Mothers who had abnormal ultrasound findings and histories of familial hereditary diseases or chromosomal anomalies had higher detection rates of chromosomal anomalies (9.3% and 7.2%, respectively). The detection rate of autosomal anomalies was higher in eastern Taiwan (93.3% vs. 78.6%, p = 0.046), but the detection rate of sex-linked chromosomal anomalies was higher in western Taiwan (21.4% vs. 6.7%, p = 0.046). We demonstrated regional differences in second-trimester amniocyte karyotyping results and established a database of common chromosomal anomalies that could be useful for genetic counseling, especially in eastern Taiwan. Copyright © 2012. Published by Elsevier B.V.
NASA Technical Reports Server (NTRS)
Stoiber, R. E. (Principal Investigator); Rose, W. I., Jr.
1975-01-01
The author has identified the following significant results. Ground truth data collection proves that significant anomalies exist at 13 volcanoes within the test site of Central America. The dimensions and temperature contrast of these ten anomalies are large enough to be detected by the Skylab 192 instrument. The dimensions and intensity of thermal anomalies have changed at most of these volcanoes during the Skylab mission.
Effects of Sampling and Spatio/Temporal Granularity in Traffic Monitoring on Anomaly Detectability
NASA Astrophysics Data System (ADS)
Ishibashi, Keisuke; Kawahara, Ryoichi; Mori, Tatsuya; Kondoh, Tsuyoshi; Asano, Shoichiro
We quantitatively evaluate how sampling and spatio/temporal granularity in traffic monitoring affect the detectability of anomalous traffic. These parameters also affect the monitoring burden, so network operators face a trade-off between monitoring burden and detectability and need to know the optimal parameter values. We derive equations to calculate the false positive ratio and false negative ratio for given values of the sampling rate, granularity, statistics of normal traffic, and volume of anomalies to be detected. Specifically, assuming that the normal traffic has a Gaussian distribution parameterized by its mean and standard deviation, we analyze how sampling and monitoring granularity change these distribution parameters. This analysis is based on observation of backbone traffic, which exhibits spatially uncorrelated and temporally long-range-dependent behavior. We then derive the equations for detectability. With these equations, we can answer practical questions that arise in actual network operations: what sampling rate to set to find a given volume of anomaly, or, if that sampling rate is too high for actual operation, what granularity is optimal for a given lower limit on the sampling rate.
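The flavor of the derived equations can be illustrated with a toy threshold detector: normal traffic is Gaussian, independent packet sampling thins it at rate p, and an anomaly of volume v must lift the sampled count past a mean + k·sigma threshold. The binomial-thinning approximation and the threshold rule are my assumptions here, not the authors' exact formulation.

```python
import math

def detectability(mu, sigma, p, v, k=3.0):
    """False-positive and false-negative ratios for threshold detection of a
    volume-v anomaly on top of Gaussian normal traffic N(mu, sigma^2),
    observed through independent packet sampling at rate p.

    Hedged sketch: binomial sampling noise model and mean + k*sigma
    threshold are simplifying assumptions.
    """
    phi = lambda x: 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))  # standard normal CDF
    # Sampled normal traffic: thinning scales the mean by p and adds
    # binomial sampling noise to the native variability.
    mean_s = p * mu
    var_s = (p * sigma) ** 2 + p * (1.0 - p) * mu
    std_s = math.sqrt(var_s)
    threshold = mean_s + k * std_s
    fpr = 1.0 - phi(k)                                  # normal traffic over threshold
    fnr = phi((threshold - (mean_s + p * v)) / std_s)   # anomaly stays under threshold
    return fpr, fnr
```

For an anomaly of 10% of the mean traffic, full capture (p = 1) detects it essentially always, while 1-in-1000 sampling misses it roughly half the time: the trade-off the paper quantifies.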
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tardiff, Mark F.; Runkle, Robert C.; Anderson, K. K.
2006-01-23
The goal of primary radiation monitoring in support of routine screening and emergency response is to detect characteristics in vehicle radiation signatures that indicate the presence of potential threats. Two conceptual approaches to analyzing gamma-ray spectra for threat detection are isotope identification and anomaly detection. While isotope identification is the time-honored method, an emerging technique is anomaly detection that uses benign vehicle gamma-ray signatures to define an expectation of the radiation signature for vehicles that do not pose a threat. Newly acquired spectra are then compared to this expectation using statistical criteria that reflect acceptable false alarm rates and probabilities of detection. The gamma-ray spectra analyzed here were collected at a U.S. land Port of Entry (POE) using a NaI-based radiation portal monitor (RPM). The raw data were analyzed to develop a benign vehicle expectation by decimating the original pulse-height channels to 35 energy bins, extracting composite variables via principal components analysis (PCA), and estimating statistically weighted distances from the mean vehicle spectrum with the Mahalanobis distance (MD) metric. This paper reviews the methods used to establish the anomaly identification criteria and presents a systematic analysis of the response of the combined PCA and MD algorithm to modeled mono-energetic gamma-ray sources.
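The PCA-plus-Mahalanobis pipeline described above can be sketched in a few lines of numpy: fit principal components to benign spectra, then score new spectra by their Mahalanobis distance in the reduced space. The 5-component cut and the synthetic spectra in the usage example are illustrative assumptions, not the paper's configuration.

```python
import numpy as np

def fit_pca_md(benign, n_components=5):
    """Fit the benign expectation: PCA basis plus the mean and inverse
    covariance of the component scores, for Mahalanobis-distance scoring.

    Minimal numpy sketch of the PCA + MD approach; the component count is
    an illustrative choice.
    """
    mean = benign.mean(axis=0)
    centered = benign - mean
    # Principal components from the SVD of the centered benign spectra.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    comps = vt[:n_components]
    scores = centered @ comps.T
    score_mean = scores.mean(axis=0)
    score_cov_inv = np.linalg.inv(np.cov(scores, rowvar=False))
    return mean, comps, score_mean, score_cov_inv

def mahalanobis(spectrum, mean, comps, score_mean, score_cov_inv):
    """Mahalanobis distance of one spectrum from the benign expectation."""
    z = (spectrum - mean) @ comps.T - score_mean
    return float(np.sqrt(z @ score_cov_inv @ z))
```

A spectrum with an injected peak scores far from the benign population, while a benign-like spectrum stays near the chi-distributed bulk.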
2003-11-01
Lafayette, IN 47907. [Lane et al-97b] T. Lane and C. E. Brodley. Sequence matching and learning in anomaly detection for computer security. Proceedings of...Mining, pp 259-263. 1998. [Lane et al-98b] T. Lane and C. E. Brodley. Temporal sequence learning and data reduction for anomaly detection ...W. Lee, C. Park, and S. Stolfo. Towards Automatic Intrusion Detection using NFR. 1st USENIX Workshop on Intrusion Detection and Network Monitoring
Congenital aplastic-hypoplastic lumbar pedicle in infants and young children
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yousefzadeh, D.K.; El-Khoury, G.Y.; Lupetin, A.R.
1982-01-01
Nine cases of congenital aplastic-hypoplastic lumbar pedicle (mean age 27 months) are described. Their data are compared to those of 18 other reported cases (mean age 24.7 years) and the following conclusions are made: (1) Almost exclusively, the pedicular defect in infants and young children is due to developmental anomaly rather than destruction by malignancy or infectious processes. (2) This anomaly, we think, is more common than it is believed to be. (3) Unlike adults, infants and young children rarely develop hypertrophy and/or sclerosis of the contralateral pedicle. (4) Detection of pedicular anomaly is more than satisfying a radiographic curiosity and may lead to discovery of other coexisting anomalies. (5) Ultrasonic screening of the patients with congenital pedicular defects may detect the associated genitourinary anomalies, if present, and justify further studies in a selected group of patients.
Machine Learning in Intrusion Detection
2005-07-01
machine learning tasks. Anomaly detection provides the core technology for a broad spectrum of security-centric applications. In this dissertation, we examine various aspects of anomaly based intrusion detection in computer security. First, we present a new approach to learn program behavior for intrusion detection. Text categorization techniques are adopted to convert each process to a vector and calculate the similarity between two program activities. Then the k-nearest neighbor classifier is employed to classify program behavior as normal or intrusive. We demonstrate
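The text-categorization-plus-kNN scheme described above can be sketched directly: treat each process's system-call trace as a term-frequency vector, and classify by majority vote of its nearest labeled traces under cosine similarity. Raw call frequencies stand in here for whatever weighting (e.g. tf-idf) the dissertation actually uses, and the tiny labeled set in the usage example is hypothetical.

```python
import math
from collections import Counter

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two term-frequency vectors."""
    dot = sum(a[t] * b[t] for t in a.keys() & b.keys())
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def knn_classify(trace, labeled, k=3):
    """Classify a system-call trace as 'normal' or 'intrusive' by majority
    vote of its k most similar labeled traces.

    Minimal sketch of the text-categorization + k-NN idea; raw call counts
    stand in for the actual term weighting.
    """
    vec = Counter(trace)
    ranked = sorted(labeled, key=lambda item: cosine(vec, Counter(item[0])),
                    reverse=True)
    votes = Counter(label for _, label in ranked[:k])
    return votes.most_common(1)[0][0]
```

A read-heavy trace lands among the normal neighbors, while a trace dominated by network and privilege calls lands among the intrusive ones.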
Eddy-Current Inspection of Ball Bearings
NASA Technical Reports Server (NTRS)
Bankston, B.
1985-01-01
Custom eddy-current probe locates surface anomalies. Low friction air cushion within cone allows ball to roll easily. Eddy current probe reliably detects surface and near-surface cracks, voids, and material anomalies in bearing balls or other spherical objects. Defects in ball surface detected by probe displayed on CRT and recorded on strip-chart recorder.
Anomaly Detection Techniques for Ad Hoc Networks
ERIC Educational Resources Information Center
Cai, Chaoli
2009-01-01
Anomaly detection is an important and indispensable aspect of any computer security mechanism. Ad hoc and mobile networks consist of a number of peer mobile nodes that are capable of communicating with each other absent a fixed infrastructure. Arbitrary node movements and lack of centralized control make them vulnerable to a wide variety of…
A Model-Based Anomaly Detection Approach for Analyzing Streaming Aircraft Engine Measurement Data
NASA Technical Reports Server (NTRS)
Simon, Donald L.; Rinehart, Aidan W.
2014-01-01
This paper presents a model-based anomaly detection architecture designed for analyzing streaming transient aircraft engine measurement data. The technique calculates and monitors residuals between sensed engine outputs and model predicted outputs for anomaly detection purposes. Pivotal to the performance of this technique is the ability to construct a model that accurately reflects the nominal operating performance of the engine. The dynamic model applied in the architecture is a piecewise linear design comprising steady-state trim points and dynamic state space matrices. A simple curve-fitting technique for updating the model trim point information based on steady-state information extracted from available nominal engine measurement data is presented. Results from the application of the model-based approach for processing actual engine test data are shown. These include both nominal fault-free test case data and seeded fault test case data. The results indicate that the updates applied to improve the model trim point information also improve anomaly detection performance. Recommendations for follow-on enhancements to the technique are also presented and discussed.
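The residual-monitoring core of such an architecture can be sketched with a generic discrete-time linear model: propagate the model alongside the measurements and flag samples whose output residual is large. The open-loop propagation, the single (A, B, C, D) operating point, and the fixed k-sigma threshold below are simplifications of the paper's piecewise-linear design, not its actual implementation.

```python
import numpy as np

def residual_monitor(A, B, C, D, x0, u_seq, y_meas, k_sigma=3.0):
    """Run the state-space model open loop, compare predicted to sensed
    outputs, and flag samples whose residual exceeds k_sigma residual
    standard deviations.

    Generic sketch of residual-based anomaly detection around a linear
    engine model; single operating point and fixed threshold are
    simplifying assumptions.
    """
    x = np.asarray(x0, float)
    residuals = []
    for u, y in zip(u_seq, y_meas):
        y_pred = C @ x + D @ u
        residuals.append(np.asarray(y) - y_pred)
        x = A @ x + B @ u  # propagate model state
    residuals = np.asarray(residuals)
    sigma = residuals.std(axis=0) + 1e-12
    flags = np.abs(residuals) > k_sigma * sigma
    return residuals, flags.any(axis=1)
```

In a toy run where a +5 sensor bias is injected near the end of an otherwise nominal sequence, only the biased samples are flagged.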
A Model-Based Anomaly Detection Approach for Analyzing Streaming Aircraft Engine Measurement Data
NASA Technical Reports Server (NTRS)
Simon, Donald L.; Rinehart, Aidan Walker
2015-01-01
This paper presents a model-based anomaly detection architecture designed for analyzing streaming transient aircraft engine measurement data. The technique calculates and monitors residuals between sensed engine outputs and model predicted outputs for anomaly detection purposes. Pivotal to the performance of this technique is the ability to construct a model that accurately reflects the nominal operating performance of the engine. The dynamic model applied in the architecture is a piecewise linear design comprising steady-state trim points and dynamic state space matrices. A simple curve-fitting technique for updating the model trim point information based on steady-state information extracted from available nominal engine measurement data is presented. Results from the application of the model-based approach for processing actual engine test data are shown. These include both nominal fault-free test case data and seeded fault test case data. The results indicate that the updates applied to improve the model trim point information also improve anomaly detection performance. Recommendations for follow-on enhancements to the technique are also presented and discussed.
NASA Astrophysics Data System (ADS)
Vegh, János; Kiss, Sándor; Lipcsei, Sándor; Horvath, Csaba; Pos, István; Kiss, Gábor
2010-10-01
The paper deals with two recently developed, high-precision nuclear measurement systems installed at the VVER-440 units of the Hungarian Paks NPP. Both developments were motivated by the reactor power increase to 108%, and by the planned plant service time extension. The first part describes the RMR start-up reactivity measurement system with advanced services. High-precision picoampere meters were installed at each reactor unit and measured ionization chamber current signals are handled by a portable computer providing data acquisition and online reactivity calculation service. Detailed offline evaluation and analysis of reactor start-up measurements can be performed on the portable unit, too. The second part of the paper describes a new reactor noise diagnostics system using state-of-the-art data acquisition hardware and signal processing methods. Details of the new reactor noise measurement evaluation software are also outlined. Noise diagnostics at Paks NPP is a standard tool for core anomaly detection and for long-term noise trend monitoring. Regular application of these systems is illustrated by real plant data, e.g., results of standard reactivity measurements during a reactor startup session are given. Noise applications are also illustrated by real plant measurements; results of core anomaly detection are presented.
Attention focussing and anomaly detection in real-time systems monitoring
NASA Technical Reports Server (NTRS)
Doyle, Richard J.; Chien, Steve A.; Fayyad, Usama M.; Porta, Harry J.
1993-01-01
In real-time monitoring situations, more information is not necessarily better. When faced with complex emergency situations, operators can experience information overload and a compromising of their ability to react quickly and correctly. We describe an approach to focusing operator attention in real-time systems monitoring based on a set of empirical and model-based measures for determining the relative importance of sensor data.
Model-Based Reasoning in the Detection of Satellite Anomalies
1990-12-01
Conference on Artificial Intelligence. 1363-1368. Detroit, Michigan, August 89. Chu, Wei-Hai. "Generic Expert System Shell for Diagnostic Reasoning... Intelligence. 1324-1330. Detroit, Michigan, August 89. de Kleer, Johan and Brian C. Williams. "Diagnosing Multiple Faults," Artificial Intelligence, 32(1): 97...Benjamin Kuipers. "Model-Based Monitoring of Dynamic Systems," Proceedings of the Eleventh International Joint Conference on Artificial Intelligence. 1238
Anomaly-Based Intrusion Detection Systems Utilizing System Call Data
2012-03-01
Functionality Description Persistence mechanism Mimicry technique Camouflage malware image: • renaming its image • appending its image to victim...particular industrial plant. Exactly which one was targeted still remains unknown, however a majority of the attacks took place in Iran [24]. Due... plant to an unstable phase and eventually physical damage. It is interesting to note that a particular block of code - block DB8061 - is automatically
Methodology of automated ionosphere front velocity estimation for ground-based augmentation of GNSS
NASA Astrophysics Data System (ADS)
Bang, Eugene; Lee, Jiyun
2013-11-01
Ionospheric anomalies occurring during severe ionospheric storms can pose integrity threats to Global Navigation Satellite System (GNSS) Ground-Based Augmentation Systems (GBAS). Ionospheric anomaly threat models for each region of operation need to be developed to analyze the potential impact of these anomalies on GBAS users and develop mitigation strategies. Along with the magnitude of ionospheric gradients, the speed of the ionosphere "fronts" in which these gradients are embedded is an important parameter for simulation-based GBAS integrity analysis. This paper presents a methodology for automated ionosphere front velocity estimation which will be used to analyze a vast amount of ionospheric data, build ionospheric anomaly threat models for different regions, and monitor ionospheric anomalies continuously going forward. This procedure automatically selects stations that show a similar trend of ionospheric delays, computes the orientation of detected fronts using a three-station-based trigonometric method, and estimates speeds for the front using a two-station-based method. It also includes fine-tuning methods to make the estimation robust against faulty measurements and modeling errors. The performance of the algorithm is demonstrated by comparing the results of automated speed estimation to those manually computed previously. All speed estimates from the automated algorithm fall within error bars of ±30% of the manually computed speeds. In addition, this algorithm is used to populate the current threat space with newly generated threat points. A larger number of velocity estimates helps us better understand the behavior of ionospheric gradients under geomagnetic storm conditions.
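The multi-station geometry can be illustrated with a standard plane-wave fit: if a planar front crosses stations at positions r_i at times t_i, the slowness vector s satisfies (r_i − r_1)·s = t_i − t_1, the speed is 1/|s|, and s/|s| gives the propagation direction. This is a generic sketch of the trigonometric idea, not the paper's tuned estimator.

```python
import numpy as np

def front_velocity(p1, p2, p3, t1, t2, t3):
    """Estimate the speed and direction of a planar ionospheric front from
    the times it is detected at three ground stations (positions in meters,
    times in seconds).

    Plane-wave sketch: assumes the front is planar and moves at constant
    velocity across the station triangle.
    """
    A = np.array([np.subtract(p2, p1), np.subtract(p3, p1)], float)
    b = np.array([t2 - t1, t3 - t1], float)
    s = np.linalg.solve(A, b)           # slowness vector (s per meter)
    speed = 1.0 / np.linalg.norm(s)     # m/s
    direction = s / np.linalg.norm(s)   # unit vector of propagation
    return speed, direction
```

For a front sweeping east at 100 m/s across stations 1 km apart, the solver recovers the speed and the eastward unit vector exactly.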
Flux-fusion anomaly test and bosonic topological crystalline insulators
Hermele, Michael; Chen, Xie
2016-10-13
Here, we introduce a method, dubbed the flux-fusion anomaly test, to detect certain anomalous symmetry fractionalization patterns in two-dimensional symmetry-enriched topological (SET) phases. We focus on bosonic systems with Z2 topological order and a symmetry group of the form G = U(1) × G', where G' is an arbitrary group that may include spatial symmetries and/or time reversal. The anomalous fractionalization patterns we identify cannot occur in strictly d=2 systems but can occur at surfaces of d=3 symmetry-protected topological (SPT) phases. This observation leads to examples of d=3 bosonic topological crystalline insulators (TCIs) that, to our knowledge, have not previously been identified. In some cases, these d=3 bosonic TCIs can have an anomalous superfluid at the surface, which is characterized by nontrivial projective transformations of the superfluid vortices under symmetry. The basic idea of our anomaly test is to introduce fluxes of the U(1) symmetry and to show that some fractionalization patterns cannot be extended to a consistent action of G' symmetry on the fluxes. For some anomalies, this can be described in terms of dimensional reduction to d=1 SPT phases. We apply our method to several different symmetry groups with nontrivial anomalies, including G = U(1) × Z2^T and G = U(1) × Z2^P, where Z2^T and Z2^P are time-reversal and d=2 reflection symmetry, respectively.
Unattended Sensor System With CLYC Detectors
NASA Astrophysics Data System (ADS)
Myjak, Mitchell J.; Becker, Eric M.; Gilbert, Andrew J.; Hoff, Jonathan E.; Knudson, Christa K.; Landgren, Peter C.; Lee, Samantha F.; McDonald, Benjamin S.; Pfund, David M.; Redding, Rebecca L.; Smart, John E.; Taubman, Matthew S.; Torres-Torres, Carlos R.; Wiseman, Clinton G.
2016-06-01
We have developed an unattended sensor for detecting anomalous radiation sources. The system combines several technologies to reduce size and weight, increase battery lifetime, and improve decision-making capabilities. Sixteen Cs2LiYCl6:Ce (CLYC) scintillators allow for gamma-ray spectroscopy and neutron detection in the same volume. Low-power electronics for readout, high voltage bias, and digital processing reduce the total operating power to 1.7 W. Computationally efficient analysis algorithms perform spectral anomaly detection and isotope identification. When an alarm occurs, the system transmits alarm information over a cellular modem. In this paper, we describe the overall design of the unattended sensor, present characterization results, and compare the performance to stock NaI:Tl and 3He detectors.
Unattended Sensor System With CLYC Detectors
DOE Office of Scientific and Technical Information (OSTI.GOV)
Myjak, Mitchell J.; Becker, Eric M.; Gilbert, Andrew J.
2016-06-01
We have developed a next-generation unattended sensor for detecting anomalous radiation sources. The system combines several technologies to reduce size and weight, increase battery lifetime, and improve decision-making capabilities. Sixteen Cs2LiYCl6:Ce (CLYC) scintillators allow for gamma-ray spectroscopy and neutron detection in the same volume. Low-power electronics for readout, high voltage bias, and digital processing reduce the total operating power to 1.3 W. Computationally efficient analysis algorithms perform spectral anomaly detection and isotope identification. When an alarm occurs, the system transmits alarm information over a cellular modem. In this paper, we describe the overall design of the unattended sensor, present characterization results, and compare the performance to stock NaI:Tl and 3He detectors.
Integrated System Health Management: Foundational Concepts, Approach, and Implementation
NASA Technical Reports Server (NTRS)
Figueroa, Fernando
2009-01-01
A sound basis to guide the community in the conception and implementation of ISHM (Integrated System Health Management) capability in operational systems was provided. The concept of "ISHM Model of a System" and a related architecture defined as a unique Data, Information, and Knowledge (DIaK) architecture were described. The ISHM architecture is independent of the typical system architecture, which is based on grouping physical elements that are assembled to make up a subsystem, with subsystems combining to form systems, etc. It was emphasized that ISHM capability needs to be implemented first at a low functional capability level (FCL), or limited ability to detect anomalies, diagnose, determine consequences, etc. As algorithms and tools to augment or improve the FCL are identified, they should be incorporated into the system. This means that the architecture, DIaK management, and software must be modular and standards-based, in order to enable systematic augmentation of FCL (no ad-hoc modifications). A set of technologies (and tools) needed to implement ISHM was described. One essential tool is a software environment to create the ISHM Model. The software environment encapsulates DIaK, and an infrastructure to focus DIaK on determining health (detect anomalies, determine causes, determine effects, and provide integrated awareness of the system to the operator). The environment includes gateways to communicate in accordance with standards, especially the IEEE 1451.1 Standard for Smart Sensors and Actuators.
Tellechea, Ana Laura; Luppo, Victoria; Morales, María Alejandra; Groisman, Boris; Baricalla, Agustin; Fabbri, Cintia; Sinchi, Anabel; Alonso, Alicia; Gonzalez, Cecilia; Ledesma, Bibiana; Masi, Patricia; Silva, María; Israilev, Adriana; Rocha, Marcela; Quaglia, Marcela; Bidondo, María Paz; Liascovich, Rosa; Barbero, Pablo
2018-06-19
Zika virus (ZIKV) vertical transmission may lead to microcephaly and other congenital anomalies. In March and April 2016, the first outbreak of ZIKV occurred in Argentina. The objective was to describe the surveillance of newborns with microcephaly and other selected brain anomalies in Argentina, and to evaluate different etiologies. Participants were enrolled between April 2016 and March 2017: newborns from the National Network of Congenital Abnormalities of Argentina (RENAC) with head circumference lower than the 3rd percentile for gestational age and sex, or with selected brain anomalies. Blood and urine samples from cases and their mothers were tested for ZIKV by real-time polymerase chain reaction (RT-PCR), antigen-specific immunoglobulin M (MAC-ELISA), and plaque-reduction neutralization test (PRNT90). Toxoplasmosis, rubella, herpes simplex, syphilis, and cytomegalovirus (CMV) infection were also tested. A total of 104 cases were reported, with a prevalence of 6.9 per 10,000 [95% confidence interval (CI): 5.7-8.4], a significant increase compared with the data prior to 2016 (prevalence rate ratio 1.7, 95% CI 1.2-2.3). Positive serology for ZIKV (IgM and IgG by PRNT) was detected in five cases, all of which presented microcephaly with craniofacial disproportion. We also detected four cases of CMV infection, three cases of congenital toxoplasmosis, two cases of herpes simplex infection, and one case of congenital syphilis. The prevalence of microcephaly was significantly higher than in the previous period. The system had the capacity to detect five cases with congenital ZIKV syndrome in a country with limited viral circulation. © 2018 Wiley Periodicals, Inc.
Archean Isotope Anomalies as a Window into the Differentiation History of the Earth
NASA Astrophysics Data System (ADS)
Wainwright, A. N.; Debaille, V.; Zincone, S. A.
2018-05-01
No resolvable µ142Nd anomaly was detected in Paleo- to Mesoarchean rocks of the São Francisco and West African cratons. The lack of µ142Nd anomalies outside of North America and Greenland implies that the Earth differentiated into at least two distinct domains.
Framework for behavioral analytics in anomaly identification
NASA Astrophysics Data System (ADS)
Touma, Maroun; Bertino, Elisa; Rivera, Brian; Verma, Dinesh; Calo, Seraphin
2017-05-01
Behavioral Analytics (BA) relies on digital breadcrumbs to build user profiles and create clusters of entities that exhibit a large degree of similarity. The prevailing assumption is that an entity will assimilate the group behavior of the cluster it belongs to. Our understanding of BA and its application in different domains continues to evolve and is a direct result of the growing interest in Machine Learning research. When trying to detect security threats, we use BA techniques to identify anomalies, defined in this paper as deviations from the group behavior. Early research papers in this field reveal a high number of false positives, where a security alert is triggered by deviation from the cluster's learned behavior that is still within the norm of what the system defines as acceptable behavior. Further, domain-specific security policies tend to be narrow and inadequately represent what an entity can do. Hence, they: a) limit the amount of useful data during the learning phase; and b) lead to violations of policy during the execution phase. In this paper, we propose a framework for future research on the role of policies and security behavior in a coalition setting, with emphasis on anomaly detection and individuals' deviation from group activities.
NASA Astrophysics Data System (ADS)
McCarthy, J. Howard, Jr.; Reimer, G. Michael
1986-11-01
Field studies have demonstrated that gas anomalies are found over buried mineral deposits. Abnormally high concentrations of sulfur gases and carbon dioxide and abnormally low concentrations of oxygen are commonly found over sulfide ore deposits. Helium anomalies are commonly associated with uranium deposits and geothermal areas. Helium and hydrocarbon gas anomalies have been detected over oil and gas deposits. Gases are sampled by extracting them from the pore space of soil, by degassing soil or rock, or by adsorbing them on artificial collectors. The two most widely used techniques for gas analysis are gas chromatography and mass spectrometry. The detection of gas anomalies at or near the surface may be an effective method to locate buried mineral deposits.
NASA Astrophysics Data System (ADS)
Bellaoui, Mebrouk; Hassini, Abdelatif; Bouchouicha, Kada
2017-05-01
Detection of thermal anomalies prior to earthquake events has been widely reported by researchers over the past decade. One popular approach for anomaly detection is the Robust Satellite Technique (RST). In this paper, we apply this method to a collection of six years of MODIS satellite data, representing land surface temperature (LST) images, to study the 21 May 2003 Boumerdes, Algeria earthquake. The thermal anomaly results were compared with the ambient temperature variation measured at three meteorological stations of the Algerian National Office of Meteorology (ONM) (DELLYS-AFIR, TIZI-OUZOU, and DAR-EL-BEIDA). The results confirm the value of RST as a highly effective approach for monitoring earthquakes.
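The core of RST-style detection is a normalized deviation of the current observation from a multi-year reference for the same location and period; a minimal sketch of that idea (the paper's exact index formulation and thresholds may differ):

```python
from statistics import mean, stdev

def rst_index(current_lst, historical_lst):
    """RST-style local index: deviation of the current land-surface
    temperature from its multi-year reference, normalized by the
    reference standard deviation. Sketch of the general idea only."""
    mu = mean(historical_lst)
    sigma = stdev(historical_lst)
    return (current_lst - mu) / sigma

def is_thermal_anomaly(index, threshold=2.0):
    # A pixel is flagged when the index exceeds a chosen threshold.
    return abs(index) >= threshold
```

For each pixel, `historical_lst` would hold the co-located LST values from the six-year MODIS archive for the same calendar period.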
NASA Technical Reports Server (NTRS)
Lafuse, Sharon A.
1991-01-01
The paper describes the Shuttle Leak Management Expert System (SLMES), a preprototype expert system developed to enable the ECLSS subsystem manager to analyze subsystem anomalies and to formulate flight procedures based on flight data. The SLMES combines rule-based expert system technology with traditional FORTRAN-based software in an integrated system. SLMES analyzes the data using rules and, when it detects a problem that requires simulation, sets up the input for the FORTRAN-based simulation program ARPCS2AT2, which predicts the cabin total pressure and composition as a function of time. The program simulates the pressure control system, the crew oxygen masks, the airlock repress/depress valves, and the leakage. When the simulation has completed, other SLMES rules are triggered to compare the simulation results against flight data and to suggest methods for correcting the problem. Results are then presented in the form of graphs and tables.
Mitigating the Insider Threat with High-Dimensional Anomaly Detection
2004-12-01
a more serious attack. Various systems such as NSM [56], GrIDS [57], snort [58], Emerald [59], and Spice [60] generate alerts for portscan...reboot etc. The user measurements include the user profiles such as time of login , duration of user session, cumulative CPU time, names of files...already been implemented in a real-time system for information retrieval [3]. A technique developed at SRI in the Emerald system [22] uses historical
Practical method to identify orbital anomaly as spacecraft breakup in the geostationary region
NASA Astrophysics Data System (ADS)
Hanada, Toshiya; Uetsuhara, Masahiko; Nakaniwa, Yoshitaka
2012-07-01
Identifying a spacecraft breakup is an essential issue in defining the current orbital debris environment. This paper proposes a practical method to identify an orbital anomaly, which appears as a significant discontinuity in the observation data, as a spacecraft breakup. The proposed method is applicable to orbital anomalies in the geostationary region. Long-term orbital evolution analysis of breakup fragments indicates that their orbital planes will converge into several corresponding regions in inertial space even if the breakup epoch is not specified. This empirical method combines that conclusion with the search strategy developed at Kyushu University, which can identify the origins of observed objects as fragments released from a specified spacecraft. The method starts by selecting a spacecraft that experienced an orbital anomaly and formulating a hypothesis that fragments were generated by the anomaly. Then, the search strategy is applied to predict the behavior of the groups of fragments hypothetically generated. The outcome of this predictive analysis specifies effectively when, where, and how optical measurements using ground-based telescopes should be conducted. Objects detected based on this outcome are presumed to originate from the anomaly, so that the anomaly can be confirmed as a spacecraft breakup that released the detected objects. This paper also demonstrates observation planning for a spacecraft anomaly in the geostationary region.
The importance of localizing pulmonary veins in atrial septal defect closure!
2011-01-01
An 8-year-old girl was admitted for simple closure of an echocardiographically diagnosed Atrial Septal Defect (ASD). During the operation the right pulmonary vein orifices were not detected in the left atrium, and the attempt to localize them led to the discovery of three additional anomalies, namely Interrupted Inferior Vena Cava (IIVC), Scimitar syndrome, and systemic arterial supply of the lung. Postoperatively these findings were confirmed by CT angiography. This case report emphasizes the need for adequate preoperative diagnosis and presents a very rare constellation of four congenital anomalies that, to the best of our knowledge, has not been reported before. PMID:21450090
Tone calibration technique: A digital signaling scheme for mobile applications
NASA Technical Reports Server (NTRS)
Davarian, F.
1986-01-01
Residual carrier modulation is conventionally used in a communication link to assist the receiver with signal demodulation and detection. Although suppressed carrier modulation has a slight power advantage over the residual carrier approach in systems enjoying a high level of stability, it lacks sufficient robustness to be used in channels severely contaminated by noise, interference and propagation effects. In mobile links, in particular, the vehicle motion and multipath waveform propagation affect the received carrier in an adverse fashion. A residual carrier scheme that uses a pilot carrier to calibrate a mobile channel against multipath fading anomalies is described. The benefits of this scheme, known as tone calibration technique, are described. A brief study of the system performance in the presence of implementation anomalies is also given.
Detecting Pulsing Denial-of-Service Attacks with Nondeterministic Attack Intervals
NASA Astrophysics Data System (ADS)
Luo, Xiapu; Chan, Edmond W. W.; Chang, Rocky K. C.
2009-12-01
This paper addresses the important problem of detecting pulsing denial of service (PDoS) attacks which send a sequence of attack pulses to reduce TCP throughput. Unlike previous works which focused on a restricted form of attacks, we consider a very broad class of attacks. In particular, our attack model admits any attack interval between two adjacent pulses, whether deterministic or not. It also includes the traditional flooding-based attacks as a limiting case (i.e., zero attack interval). Our main contribution is Vanguard, a new anomaly-based detection scheme for this class of PDoS attacks. The Vanguard detection is based on three traffic anomalies induced by the attacks, and it detects them using a CUSUM algorithm. We have prototyped Vanguard and evaluated it on a testbed. The experiment results show that Vanguard is more effective than the previous methods that are based on other traffic anomalies (after a transformation using wavelet transform, Fourier transform, and autocorrelation) and detection algorithms (e.g., dynamic time warping).
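Vanguard detects the attack-induced traffic anomalies with a CUSUM algorithm; a generic one-sided CUSUM change detector, sketched under the assumption of a known in-control mean (not Vanguard's exact statistics or parameterization):

```python
def cusum_detect(samples, target_mean, drift, threshold):
    """One-sided CUSUM change detector (generic sketch). Accumulates
    positive deviations from the expected mean, minus a drift allowance,
    and alarms when the cumulative sum exceeds the threshold.
    Returns the index of the alarming sample, or None if no alarm."""
    s = 0.0
    for i, x in enumerate(samples):
        s = max(0.0, s + (x - target_mean - drift))
        if s > threshold:
            return i
    return None
```

In a PDoS setting, `samples` would be a per-interval traffic statistic whose mean shifts when attack pulses begin.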
Machine learning for the automatic detection of anomalous events
NASA Astrophysics Data System (ADS)
Fisher, Wendy D.
In this dissertation, we describe our research contributions for a novel approach to the application of machine learning for the automatic detection of anomalous events. We work in two different domains to ensure a robust data-driven workflow that could be generalized for monitoring other systems. Specifically, in our first domain, we begin with the identification of internal erosion events in earth dams and levees (EDLs) using geophysical data collected from sensors located on the surface of the levee. As EDLs across the globe reach the end of their design lives, effectively monitoring their structural integrity is of critical importance. The second domain of interest is related to mobile telecommunications, where we investigate a system for automatically detecting non-commercial base station routers (BSRs) operating in protected frequency space. The presence of non-commercial BSRs can disrupt the connectivity of end users, cause service issues for the commercial providers, and introduce significant security concerns. We provide our motivation, experimentation, and results from investigating a generalized novel data-driven workflow using several machine learning techniques. In Chapter 2, we present results from our performance study that uses popular unsupervised clustering algorithms to gain insights to our real-world problems, and evaluate our results using internal and external validation techniques. Using EDL passive seismic data from an experimental laboratory earth embankment, results consistently show a clear separation of events from non-events in four of the five clustering algorithms applied. Chapter 3 uses a multivariate Gaussian machine learning model to identify anomalies in our experimental data sets. For the EDL work, we used experimental data from two different laboratory earth embankments. Additionally, we explore five wavelet transform methods for signal denoising. The best performance is achieved with the Haar wavelets. 
We achieve up to 97.3% overall accuracy and less than 1.4% false negatives in anomaly detection. In Chapter 4, we research using two-class and one-class support vector machines (SVMs) for an effective anomaly detection system. We again use the two different EDL data sets from experimental laboratory earth embankments (each having approximately 80% normal and 20% anomalies) to ensure our workflow is robust enough to work with multiple data sets and different types of anomalous events (e.g., cracks and piping). We apply Haar wavelet-denoising techniques and extract nine spectral features from decomposed segments of the time series data. The two-class SVM with 10-fold cross validation achieved over 94% overall accuracy and 96% F1-score. Our approach provides a means for automatically identifying anomalous events using various machine learning techniques. Detecting internal erosion events in aging EDLs, earlier than is currently possible, can allow more time to prevent or mitigate catastrophic failures. Results show that we can successfully separate normal from anomalous data observations in passive seismic data, and provide a step towards techniques for continuous real-time monitoring of EDL health. Our lightweight non-commercial BSR detection system also has promise in separating commercial from non-commercial BSR scans without the need for prior geographic location information, extensive time-lapse surveys, or a database of known commercial carriers. (Abstract shortened by ProQuest.).
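The multivariate Gaussian model used in Chapter 3 can be sketched with a diagonal-covariance simplification (independent features fitted on normal data; the dissertation's full model, spectral features, and epsilon tuning are not reproduced here):

```python
import math
from statistics import mean, pvariance

def fit_gaussian(train_rows):
    """Fit per-feature means and variances on normal training rows
    (diagonal-covariance simplification of a multivariate Gaussian)."""
    cols = list(zip(*train_rows))
    return [mean(c) for c in cols], [pvariance(c) for c in cols]

def density(x, mus, vars_):
    """Product of independent per-feature Gaussian densities."""
    p = 1.0
    for xi, mu, var in zip(x, mus, vars_):
        p *= math.exp(-(xi - mu) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)
    return p

def is_anomaly(x, mus, vars_, epsilon):
    # Flag observations whose density falls below a tuned epsilon.
    return density(x, mus, vars_) < epsilon
```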
A scalable architecture for online anomaly detection of WLCG batch jobs
NASA Astrophysics Data System (ADS)
Kuehn, E.; Fischer, M.; Giffels, M.; Jung, C.; Petzold, A.
2016-10-01
For data centres it is increasingly important to monitor network usage and to learn from usage patterns. In particular, configuration issues or misbehaving batch jobs that prevent smooth operation need to be detected as early as possible. At the GridKa data and computing centre we therefore operate the tool BPNetMon for monitoring traffic data and characteristics of WLCG batch jobs and pilots locally on different worker nodes. On the one hand, local information by itself is not sufficient to detect anomalies, for several reasons: the underlying job distribution on a single worker node might change, or there might be a local misconfiguration. On the other hand, a centralised anomaly detection approach does not scale in terms of network communication or computational cost. We therefore propose a scalable architecture based on concepts of a super-peer network.
Detection of emerging sunspot regions in the solar interior.
Ilonidis, Stathis; Zhao, Junwei; Kosovichev, Alexander
2011-08-19
Sunspots are regions where strong magnetic fields emerge from the solar interior and where major eruptive events occur. These energetic events can cause power outages, interrupt telecommunication and navigation services, and pose hazards to astronauts. We detected subsurface signatures of emerging sunspot regions before they appeared on the solar disc. Strong acoustic travel-time anomalies on the order of 12 to 16 seconds were detected as deep as 65,000 kilometers. These anomalies were associated with magnetic structures that emerged with an average speed of 0.3 to 0.6 kilometer per second and caused high peaks in the photospheric magnetic flux rate 1 to 2 days after the detection of the anomalies. Thus, synoptic imaging of subsurface magnetic activity may allow anticipation of large sunspot regions before they become visible, improving space weather forecasts.
Quantum adiabatic machine learning
NASA Astrophysics Data System (ADS)
Pudenz, Kristen L.; Lidar, Daniel A.
2013-05-01
We develop an approach to machine learning and anomaly detection via quantum adiabatic evolution. This approach consists of two quantum phases, with some amount of classical preprocessing to set up the quantum problems. In the training phase we identify an optimal set of weak classifiers, to form a single strong classifier. In the testing phase we adiabatically evolve one or more strong classifiers on a superposition of inputs in order to find certain anomalous elements in the classification space. Both the training and testing phases are executed via quantum adiabatic evolution. All quantum processing is strictly limited to two-qubit interactions so as to ensure physical feasibility. We apply and illustrate this approach in detail to the problem of software verification and validation, with a specific example of the learning phase applied to a problem of interest in flight control systems. Beyond this example, the algorithm can be used to attack a broad class of anomaly detection problems.
NASA Astrophysics Data System (ADS)
Benedetto, J.; Cloninger, A.; Czaja, W.; Doster, T.; Kochersberger, K.; Manning, B.; McCullough, T.; McLane, M.
2014-05-01
Successful performance of a radiological search mission depends on effective utilization of a mixture of signals. Examples of modalities include EO imagery and gamma radiation data, or radiation data collected during multiple events. In addition, elevation data or spatial proximity can be used to enhance the performance of acquisition systems. State-of-the-art techniques in processing and exploitation of complex information manifolds rely on diffusion operators. Our approach involves machine learning techniques based on analysis of joint data-dependent graphs and their associated diffusion kernels. The significant eigenvectors of the derived fused graph Laplace and Schroedinger operators then form the new representation, which provides integrated features from the heterogeneous input data. The families of data-dependent Laplace and Schroedinger operators on joint data graphs are integrated by means of appropriately designed fusion metrics. These fused representations are used for target and anomaly detection.
NASA Astrophysics Data System (ADS)
Pattisahusiwa, Asis; Houw Liong, The; Purqon, Acep
2016-08-01
In this study, we compare two learning mechanisms, outlier detection and novelty detection, for detecting the ionospheric TEC disturbances caused by the November 2004 geomagnetic storm and the January 2005 substorm. The mechanisms are applied using the v-SVR learning algorithm, a regression version of SVM. Our results show that both mechanisms are quite accurate in learning TEC data. However, novelty detection is more accurate than outlier detection in extracting anomalies related to geomagnetic events. The anomalies found by outlier detection are mostly related to the trend of the data, while those found by novelty detection are associated with geomagnetic events. Novelty detection also shows evidence of LSTIDs during geomagnetic events.
Road detection and buried object detection in elevated EO/IR imagery
NASA Astrophysics Data System (ADS)
Kennedy, Levi; Kolba, Mark P.; Walters, Joshua R.
2012-06-01
To assist the warfighter in visually identifying potentially dangerous roadside objects, the U.S. Army RDECOM CERDEC Night Vision and Electronic Sensors Directorate (NVESD) has developed an elevated video sensor system testbed for data collection. This system provides color and mid-wave infrared (MWIR) imagery. Signal Innovations Group (SIG) has developed an automated processing capability that detects the road within the sensor field of view and identifies potentially threatening buried objects within the detected road. The road detection algorithm leverages system metadata to project the collected imagery onto a flat ground plane, allowing for more accurate detection of the road as well as the direct specification of realistic physical constraints in the shape of the detected road. Once the road has been detected in an image frame, a buried object detection algorithm is applied to search for threatening objects within the detected road space. The buried object detection algorithm leverages textural and pixel intensity-based features to detect potential anomalies and then classifies them as threatening or non-threatening objects. Both the road detection and the buried object detection algorithms have been developed to facilitate their implementation in real-time in the NVESD system.
Hitier, Martin; Hamon, Michèle; Denise, Pierre; Lacoudre, Julien; Thenint, Marie-Aude; Mallet, Jean-François; Moreau, Sylvain; Quarck, Gaëlle
2015-01-01
Introduction Despite its high incidence and severe morbidity, the physiopathogenesis of adolescent idiopathic scoliosis (AIS) is still unknown. Here, we looked for early anomalies in AIS which are likely to be the cause of spinal deformity and could also be targeted by early treatments. We focused on the vestibular system, which is suspected of acting in AIS pathogenesis and which exhibits an end organ with size and shape fixed before birth. We hypothesize that, in adolescents with idiopathic scoliosis, vestibular morphological anomalies were already present at birth and could possibly have caused other abnormalities. Materials and Methods The vestibular organ of 18 adolescents with AIS and 9 controls were evaluated with MRI in a prospective case controlled study. We studied lateral semicircular canal orientation and the three semicircular canal positions relative to the midline. Lateral semicircular canal function was also evaluated by vestibulonystagmography after bithermal caloric stimulation. Results The left lateral semicircular canal was more vertical and further from the midline in AIS (p = 0.01) and these two parameters were highly correlated (r = -0.6; p = 0.02). These morphological anomalies were associated with functional anomalies in AIS (lower excitability, higher canal paresis), but were not significantly different from controls (p>0.05). Conclusion Adolescents with idiopathic scoliosis exhibit morphological vestibular asymmetry, probably determined well before birth. Since the vestibular system influences the vestibulospinal pathway, the hypothalamus, and the cerebellum, this indicates that the vestibular system is a possible cause of later morphological, hormonal and neurosensory anomalies observed in AIS. Moreover, the simple lateral SCC MRI measurement demonstrated here could be used for early detection of AIS, selection of children for close follow-up, and initiation of preventive treatment before spinal deformity occurs. PMID:26186348
A sonographic approach to prenatal classification of congenital spine anomalies
Robertson, Meiri; Sia, Sock Bee
2015-01-01
Abstract Objective: To develop a classification system for congenital spine anomalies detected by prenatal ultrasound. Methods: Data were collected from fetuses with spine abnormalities diagnosed in our institution over a five-year period between June 2005 and June 2010. The ultrasound images were analysed to determine which features were associated with different congenital spine anomalies. Findings on the prenatal ultrasound images were correlated with other prenatal imaging, post mortem findings, post mortem imaging, neonatal imaging, karyotype, and other genetic workup. Data from published case reports of prenatal diagnosis of rare congenital spine anomalies were analysed to provide a comprehensive work. Results: During the study period, eighteen cases of spine abnormalities were diagnosed among 7819 women. The mean gestational age at diagnosis was 18.8 weeks ± 2.2 SD. While most cases represented open neural tube defects (NTDs), a spectrum of vertebral abnormalities was diagnosed prenatally, including hemivertebrae, block vertebrae, cleft or butterfly vertebrae, sacral agenesis, and a lipomeningocele. The most sensitive features for diagnosis of a spine abnormality were flaring of the vertebral arch ossification centres, abnormal spine curvature, and short spine length. While reported findings at the time of diagnosis were often conservative, retrospective analysis revealed good correlation with radiographic imaging. 3D imaging was found to be a valuable tool in many settings. Conclusions: Analysis of the study findings showed that prenatal ultrasound allowed detection of disruption to the normal appearance of the fetal spine. Using the three features of flaring of the vertebral arch ossification centres, abnormal spine curvature, and short spine length, an algorithm was devised to aid the diagnosis of spine anomalies for those who perform and report prenatal ultrasound. PMID:28191204
Thermal structure of the Panama Basin by analysis of seismic attenuation
NASA Astrophysics Data System (ADS)
Vargas, Carlos A.; Pulido, José E.; Hobbs, Richard W.
2018-04-01
Using recordings of earthquakes on Oceanic Bottom Seismographs and onshore stations on the coastal margins of Colombia, Panama, and Ecuador, we estimate attenuation parameters in the upper lithosphere of the Panama Basin. The tomographic images of the derived coda-Q values are correlated with estimates of Curie Point Depth and measured and theoretical heat flow. Our study reveals three tectonic domains where magmatic/hydrothermal activity or lateral variations of the lithologic composition in the upper lithosphere can account for the modeled thermal structure and the anelasticity. We find that the Costa Rica Ridge and the Panama Fracture Zone are significant tectonic features probably related to thermal anomalies detected in the study area. We interpret a large and deep intrinsic attenuation anomaly as related to the heat source at the Costa Rica Ridge and show how interactions with regional fault systems cause contrasting attenuation anomalies.
Method for localizing and isolating an errant process step
Tobin, Jr., Kenneth W.; Karnowski, Thomas P.; Ferrell, Regina K.
2003-01-01
A method for localizing and isolating an errant process includes the steps of retrieving from a defect image database a selection of images each image having image content similar to image content extracted from a query image depicting a defect, each image in the selection having corresponding defect characterization data. A conditional probability distribution of the defect having occurred in a particular process step is derived from the defect characterization data. A process step as a highest probable source of the defect according to the derived conditional probability distribution is then identified. A method for process step defect identification includes the steps of characterizing anomalies in a product, the anomalies detected by an imaging system. A query image of a product defect is then acquired. A particular characterized anomaly is then correlated with the query image. An errant process step is then associated with the correlated image.
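The conditional probability distribution over process steps can be sketched as simple counting over the process-step labels attached to the retrieved similar defect images (the step names below are hypothetical illustrations, not from the patent):

```python
from collections import Counter

def errant_step_distribution(retrieved_steps):
    """Conditional distribution over process steps, estimated from the
    step labels of defect images retrieved as similar to the query
    image. A counting sketch of the method's core idea."""
    counts = Counter(retrieved_steps)
    total = sum(counts.values())
    return {step: n / total for step, n in counts.items()}

def most_probable_source(retrieved_steps):
    """Identify the process step with the highest conditional probability."""
    dist = errant_step_distribution(retrieved_steps)
    return max(dist, key=dist.get)
```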
Ghalyan, Najah F; Miller, David J; Ray, Asok
2018-06-12
Estimation of a generating partition is critical for symbolization of measurements from discrete-time dynamical systems, where a sequence of symbols from a (finite-cardinality) alphabet may uniquely specify the underlying time series. Such symbolization is useful for computing measures (e.g., Kolmogorov-Sinai entropy) to identify or characterize the (possibly unknown) dynamical system. It is also useful for time series classification and anomaly detection. The seminal work of Hirata, Judd, and Kilminster (2004) derives a novel objective function, akin to a clustering objective, that measures the discrepancy between a set of reconstruction values and the points from the time series. They cast estimation of a generating partition as the minimization of their objective function. Unfortunately, their proposed algorithm is nonconvergent, with no guarantee of finding even locally optimal solutions with respect to their objective. The difficulty is a heuristic nearest-neighbor symbol assignment step. Alternatively, we develop a novel, locally optimal algorithm for their objective. We apply iterative nearest-neighbor symbol assignments with guaranteed discrepancy descent, by which joint, locally optimal symbolization of the entire time series is achieved. While most previous approaches frame generating partition estimation as a state-space partitioning problem, we recognize that minimizing the Hirata et al. (2004) objective function does not induce an explicit partitioning of the state space, but rather of the space consisting of the entire time series (effectively, clustering in a (countably) infinite-dimensional space). Our approach also amounts to a novel type of sliding block lossy source coding. Improvement, with respect to several measures, is demonstrated over popular methods for symbolizing chaotic maps. We also apply our approach to time-series anomaly detection, considering both chaotic maps and a failure application in a polycrystalline alloy material.
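The symbolization the paper builds on can be illustrated with the logistic map, whose classical generating partition at r = 4 is the single boundary x = 0.5 (the paper's contribution is *estimating* such partitions from data, which this sketch does not attempt):

```python
def logistic_map_series(x0, n, r=4.0):
    """Generate n iterates of the logistic map x -> r*x*(1-x)."""
    xs, x = [], x0
    for _ in range(n):
        x = r * x * (1 - x)
        xs.append(x)
    return xs

def symbolize(series, boundary=0.5):
    """Binary symbolization of a time series by a single partition
    point. For the logistic map at r = 4, x = 0.5 is the classical
    generating partition, so the symbol sequence encodes the orbit."""
    return "".join("1" if x >= boundary else "0" for x in series)
```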
GraphPrints: Towards a Graph Analytic Method for Network Anomaly Detection
DOE Office of Scientific and Technical Information (OSTI.GOV)
Harshaw, Chris R; Bridges, Robert A; Iannacone, Michael D
This paper introduces a novel graph-analytic approach for detecting anomalies in network flow data called GraphPrints. Building on foundational network-mining techniques, our method represents time slices of traffic as a graph, then counts graphlets, small induced subgraphs that describe local topology. By performing outlier detection on the sequence of graphlet counts, anomalous intervals of traffic are identified, and furthermore, individual IPs experiencing abnormal behavior are singled out. Initial testing of GraphPrints is performed on real network data with an implanted anomaly. Evaluation shows false positive rates bounded by 2.84% at the time-interval level and 0.05% at the IP level, with 100% true positive rates at both.
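The GraphPrints pipeline (count graphlets per time slice, then run outlier detection on the count sequence) can be sketched with a single graphlet type and a median-absolute-deviation outlier rule (the paper's graphlet set and detector are richer than this):

```python
from itertools import combinations
from statistics import median

def triangle_count(edges):
    """Count triangles in an undirected graph: one of the small
    graphlets a GraphPrints-style method would tally per time slice."""
    adj = {}
    for u, v in edges:
        adj.setdefault(u, set()).add(v)
        adj.setdefault(v, set()).add(u)
    return sum(1 for a, b, c in combinations(sorted(adj), 3)
               if b in adj[a] and c in adj[a] and c in adj[b])

def mad_outliers(counts, k=3.0):
    """Flag time slices whose graphlet count deviates from the median
    by more than k median absolute deviations (a simple stand-in for
    the paper's outlier detector)."""
    med = median(counts)
    mad = median(abs(c - med) for c in counts) or 1.0
    return [i for i, c in enumerate(counts) if abs(c - med) > k * mad]
```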
Capacitance probe for detection of anomalies in non-metallic plastic pipe
Mathur, Mahendra P.; Spenik, James L.; Condon, Christopher M.; Anderson, Rodney; Driscoll, Daniel J.; Fincham, Jr., William L.; Monazam, Esmail R.
2010-11-23
The disclosure relates to analysis of materials using a capacitive sensor to detect anomalies through comparison of measured capacitances. The capacitive sensor is used in conjunction with a capacitance measurement device, a location device, and a processor in order to generate a capacitance versus location output which may be inspected for the detection and localization of anomalies within the material under test. The components may be carried as payload on an inspection vehicle which may traverse through a pipe interior, allowing evaluation of nonmetallic or plastic pipes when the piping exterior is not accessible. In an embodiment, supporting components are solid-state devices powered by a low voltage on-board power supply, providing for use in environments where voltage levels may be restricted.
Anomaly detection driven active learning for identifying suspicious tracks and events in WAMI video
NASA Astrophysics Data System (ADS)
Miller, David J.; Natraj, Aditya; Hockenbury, Ryler; Dunn, Katherine; Sheffler, Michael; Sullivan, Kevin
2012-06-01
We describe a comprehensive system for learning to identify suspicious vehicle tracks from wide-area motion imagery (WAMI) video. First, since the road network for the scene of interest is assumed unknown, agglomerative hierarchical clustering is applied to all spatial vehicle measurements, resulting in spatial cells that largely capture individual road segments. Next, for each track, extreme value feature statistics are computed and aggregated at both the cell (speed, acceleration, azimuth) and track (range, total distance, duration) levels, to form summary (p-value based) anomaly statistics for each track. Here, to fairly evaluate tracks that travel across different numbers of spatial cells, for each cell-level feature type a single (most extreme) statistic is chosen over all cells traveled. Finally, a novel active learning paradigm, applied to a (logistic regression) track classifier, is invoked to learn to distinguish suspicious from merely anomalous tracks, starting from anomaly-ranked track prioritization, with ground-truth labeling by a human operator. This system has been applied to WAMI video data (ARGUS), with the tracks automatically extracted by a system developed in-house at Toyon Research Corporation. Our system gives promising preliminary results in highly ranking as suspicious aerial vehicles, dismounts, and traffic violators, and in learning which features are most indicative of suspicious tracks.
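The track-level scoring idea (evaluate each track by its most extreme cell-level statistic, expressed as a p-value) can be sketched with upper-tail empirical p-values; the feature values and reference population here are illustrative:

```python
def empirical_p_value(observed, reference):
    """Fraction of reference values at least as extreme as the
    observed one (upper-tail empirical p-value)."""
    return sum(1 for r in reference if r >= observed) / len(reference)

def track_anomaly_stat(per_cell_values, reference):
    """Score a track by the single most extreme (smallest p-value)
    cell-level statistic over all cells traveled, so tracks crossing
    different numbers of cells are compared fairly."""
    return min(empirical_p_value(v, reference) for v in per_cell_values)
```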
NASA Astrophysics Data System (ADS)
Butler, D. K.; Whitten, C. B.; Smith, F. L.
1983-03-01
Results of a microgravimetric survey at Manatee Springs, Levy County, Fla., are presented. The survey area was 100 by 400 ft, with 20-ft gravity station spacing, and with the long dimension of the area approximately perpendicular to the known trend of the main cavity. The main cavity is about 80 to 100 ft below the surface and has a cross section about 16 to 20 ft in height and 30 to 40 ft in width beneath the survey area. Using a density contrast of -1.3 g/cm³, the gravity anomaly is calculated to be -35 µGal with a width at half maximum of 205 ft. The microgravimetric survey results clearly indicate a broad negative anomaly coincident with the location and trend of the cavity system across the survey area. The anomaly magnitude and width are consistent with those calculated from the known depth and dimensions of the main cavity. In addition, a small, closed negative anomaly feature, superimposed on the broad negative feature due to the main cavity, satisfactorily delineated a small secondary cavity feature which was discovered and mapped by cave divers.
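The quoted anomaly magnitude can be roughly cross-checked by modeling the main cavity as an infinite horizontal cylinder (line mass), for which the vertical attraction is g_z(x) = 2·G·Δρ·A·z/(x² + z²), with A the cross-sectional area and z the depth to the axis. The equivalent-cylinder approximation and midpoint dimensions below are our reading of the text, not the survey report's own model.

```python
FT = 0.3048              # feet to meters
G = 6.674e-11            # gravitational constant, m^3 kg^-1 s^-2

drho = -1300.0           # density contrast, kg/m^3 (i.e., -1.3 g/cm^3)
z = 90 * FT              # depth to cavity axis (text: ~80-100 ft)
area = (18 * FT) * (35 * FT)   # cross section ~16-20 ft by 30-40 ft

def gravity_anomaly_ugal(x_m):
    """Vertical attraction at horizontal offset x_m, in microGal."""
    gz = 2 * G * drho * area * z / (x_m ** 2 + z ** 2)
    return gz / 1e-8     # 1 microGal = 1e-8 m/s^2

peak_ugal = gravity_anomaly_ugal(0.0)    # close to the quoted -35 microGal
half_width_ft = 2 * z / FT               # |g| halves at x = z, so width = 2z
```

The simple model gives a peak near -37 µGal and a half-maximum width of about 180 ft, in reasonable agreement with the quoted -35 µGal and 205 ft.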
Smart Grid Educational Series | Energy Systems Integration Facility | NREL
Covers the smart grid from generation through transmission to the distribution infrastructure, with workshop materials on key takeaways from breakout group discussions and on using the MultiSpeak data model standard and Essence anomaly detection for ICS.
Abernethy malformation with portal vein aneurysm in a child.
Chandrashekhara, Sheragaru H; Bhalla, Ashu Seith; Gupta, Arun Kumar; Vikash, C S; Kabra, Susheel Kumar
2011-01-01
Abernethy malformation is an extremely rare anomaly of the splanchnic venous system. We describe multidetector computed tomography findings of an incidentally detected Abernethy malformation with portal vein aneurysm in a two-and-a-half-year-old child. The computed tomography scan was performed for the evaluation of respiratory distress, poor growth, and loss of appetite.
NASA Astrophysics Data System (ADS)
Fink, Wolfgang; Brooks, Alexander J.-W.; Tarbell, Mark A.; Dohm, James M.
2017-05-01
Autonomous reconnaissance missions are called for in extreme environments, as well as in potentially hazardous (e.g., theaters of war, disaster-stricken areas) or inaccessible operational areas (e.g., planetary surfaces, space). Such future missions will require increasing degrees of operational autonomy, especially when following up on transient events. Operational autonomy encompasses: (1) automatic characterization of operational areas from different vantages (i.e., spaceborne, airborne, surface, subsurface); (2) automatic sensor deployment and data gathering; (3) automatic feature extraction, including anomaly detection and region-of-interest identification; (4) automatic target prediction and prioritization; and (5) subsequent automatic (re-)deployment and navigation of robotic agents. This paper reports on progress towards several aspects of autonomous C4ISR systems, including: a Caltech-patented and NASA award-winning multi-tiered mission paradigm, robotic platform development (air, ground, water-based), robotic behavior motifs as the building blocks for autonomous tele-commanding, and autonomous decision making based on a Caltech-patented framework comprising sensor-data-fusion (feature vectors), anomaly detection (clustering and principal component analysis), and target prioritization (hypothetical probing).
NASA Astrophysics Data System (ADS)
Piatyszek, E.; Voignier, P.; Graillot, D.
2000-05-01
One of the aims of sewer networks is the protection of the population against floods and the reduction of pollution discharged into receiving waters during rainy events. To meet these goals, managers have to equip sewer networks with real-time control systems. Unfortunately, a component fault (leading to intolerable behaviour of the system) or a sensor fault (deteriorating the process view and disturbing the local automatism) makes sewer network supervision delicate. In order to ensure adequate flow management during rainy events, it is essential to set up procedures capable of detecting and diagnosing these anomalies. This article introduces a real-time fault detection method, applicable to sewer networks, for the follow-up of rainy events. The method consists of comparing the sensor response with a forecast of this response. This forecast is provided by a model, more precisely by a state estimator: a Kalman filter. The Kalman filter provides not only a flow estimate but also an entity called the 'innovation'. In order to detect abnormal operations within the network, this innovation is analysed with Wald's binary sequential probability ratio test. Moreover, by crossing available information on several nodes of the network, a diagnosis of the detected anomalies is carried out. This method provided encouraging results during the analysis of several rainfall events on the sewer network of Seine-Saint-Denis County, France.
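The innovation-analysis step described above can be sketched as a Wald sequential probability ratio test on the Kalman innovations: under normal operation the innovation is zero-mean, and a fault shifts its mean. The shift size, noise level, and error rates below are illustrative stand-ins, not the paper's calibrated values.

```python
import math

def wald_sprt(innovations, sigma=1.0, mu1=2.0, alpha=0.01, beta=0.01):
    """Sequential test on Kalman innovations: H0 mean 0 vs H1 mean mu1."""
    upper = math.log((1 - beta) / alpha)   # crossing: declare anomaly
    lower = math.log(beta / (1 - alpha))   # crossing: declare normal
    llr = 0.0
    for t, nu in enumerate(innovations):
        # Gaussian log-likelihood ratio increment for this innovation
        llr += (mu1 * nu - mu1 ** 2 / 2) / sigma ** 2
        if llr >= upper:
            return "anomaly", t
        if llr <= lower:
            return "normal", t
    return "undecided", len(innovations) - 1

verdict_anom, _ = wald_sprt([2.1, 2.4, 1.9, 2.2])    # persistent offset
verdict_norm, _ = wald_sprt([0.1, -0.2, 0.0, -0.1])  # innovations near zero
```

In practice the cumulative ratio is reset after each decision so the test keeps running over the rain event.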
Real-time detection and classification of anomalous events in streaming data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ferragut, Erik M.; Goodall, John R.; Iannacone, Michael D.
2016-04-19
A system is described for receiving a stream of events and scoring the events based on anomalousness and maliciousness (or other classification). The events can be displayed to a user in user-defined groupings in an animated fashion. The system can include a plurality of anomaly detectors that together implement an algorithm to identify low probability events and detect atypical traffic patterns. The atypical traffic patterns can then be classified as being of interest or not. In one particular example, in a network environment, the classification can be whether the network traffic is malicious or not.
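In the spirit of the system above, a toy per-event anomalousness score can be computed as -log of an estimated probability, with a plurality of detectors each watching one attribute and the event scored by its most anomalous attribute. The attribute names, smoothing, and traffic values are illustrative stand-ins, not the patented system's detectors.

```python
import math
from collections import Counter

class FrequencyDetector:
    """Scores events by the rarity of one attribute's value, online."""
    def __init__(self, attribute):
        self.attribute = attribute
        self.counts = Counter()
        self.total = 0

    def score(self, event):
        value = event[self.attribute]
        # Laplace-smoothed probability of seeing this value so far
        p = (self.counts[value] + 1) / (self.total + len(self.counts) + 1)
        self.counts[value] += 1
        self.total += 1
        return -math.log(p)

detectors = [FrequencyDetector("port"), FrequencyDetector("proto")]
stream = [{"port": 80, "proto": "tcp"}] * 20 + [{"port": 31337, "proto": "tcp"}]
scores = [max(d.score(e) for d in detectors) for e in stream]
```

The never-before-seen port at the end of the stream receives the highest score, and a downstream classifier (or analyst) would then decide whether it is of interest.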
Cold Vacuum Drying (CVD) Set Point Determination
DOE Office of Scientific and Technical Information (OSTI.GOV)
PHILIPP, B.L.
2000-03-21
The Safety Class Instrumentation and Control (SCIC) system provides active detection and response to process anomalies that, if unmitigated, would result in a safety event. Specifically, actuation of the SCIC system includes two portions: one that isolates the MCO and initiates the safety-class helium (SCHe) purge, and one that detects and stops excessive heat input to the MCO on high tempered-water MCO inlet temperature. For the MCO isolation and purge, the SCIC receives signals from MCO pressure (both positive pressure and vacuum), helium flow rate, bay high temperature switches, seismic trips, and time-under-vacuum trips.
Semi-Supervised Novelty Detection with Adaptive Eigenbases, and Application to Radio Transients
NASA Technical Reports Server (NTRS)
Thompson, David R.; Majid, Walid A.; Reed, Colorado J.; Wagstaff, Kiri L.
2011-01-01
We present a semi-supervised online method for novelty detection and evaluate its performance for radio astronomy time series data. Our approach uses adaptive eigenbases to combine 1) prior knowledge about uninteresting signals with 2) online estimation of the current data properties to enable highly sensitive and precise detection of novel signals. We apply the method to the problem of detecting fast transient radio anomalies and compare it to current alternative algorithms. Tests based on observations from the Parkes Multibeam Survey show both effective detection of interesting rare events and robustness to known false alarm anomalies.
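The eigenbasis idea above can be condensed to its core: learn a basis from known "uninteresting" signals and score a new signal by the norm of the part the basis cannot reconstruct. This sketch uses a fixed offline basis and synthetic data; the paper's adaptive online update of the eigenbasis is omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

# training signals: noisy copies of one smooth background pattern
t = np.linspace(0, 1, 64)
background = np.sin(2 * np.pi * t)
train = background + 0.05 * rng.standard_normal((200, 64))

# eigenbasis of the uninteresting signals (top-k principal directions)
mean = train.mean(axis=0)
_, _, vt = np.linalg.svd(train - mean, full_matrices=False)
basis = vt[:3]                      # k = 3 retained eigenvectors

def novelty(signal):
    """Norm of the part of the signal the eigenbasis cannot reconstruct."""
    centered = signal - mean
    residual = centered - basis.T @ (basis @ centered)
    return float(np.linalg.norm(residual))

boring = background + 0.05 * rng.standard_normal(64)
transient = background.copy()
transient[30:34] += 3.0             # injected dispersed-pulse stand-in
```

Known false-alarm signatures could likewise be folded into the basis so they reconstruct well and score low.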
Bronshtein, Moshe; Solt, Ido; Blumenfeld, Zeev
2014-06-01
Despite more than three decades of universal popularity of fetal sonography as an integral part of pregnancy evaluation, there is still no unequivocal agreement regarding the optimal timing of fetal sonographic screening or the type of ultrasound (transvaginal vs. abdominal). We advocate transvaginal systematic sonography at 14-17 weeks for fetal organ screening. The evaluation of over 72,000 early (14-17 weeks) and late (18-24 weeks) fetal ultrasonographic systematic organ screenings revealed that 96% of malformations are detectable in the early screening, with an incidence of 1:50 gestations. Only 4% of fetal anomalies are diagnosed later in pregnancy. Over 99% of fetal cardiac anomalies are detectable in the early screening, and most of them appear in low-risk gestations. Therefore, we suggest a new platform of fetal sonographic evaluation and follow-up: the extensive systematic fetal organ screening should be performed at 14-17 weeks gestation by an expert sonographer trained in the detection of fetal malformations, and should also include fetal cardiac echography. Three additional ultrasound examinations are suggested during pregnancy: the first, performed by the patient's obstetrician at 6-7 weeks, for exclusion of ectopic pregnancy, confirmation of fetal viability, dating, assessment of chorionicity in multiple gestations, and visualization of the maternal adnexae. The other two, at 22-26 and 32-34 weeks, require less training and should be performed by an obstetrician qualified in the sonographic detection of fetal anomalies. The advantages of early midtrimester targeted fetal systematic organ screening for the detection of fetal anomalies may dictate a global change.
NASA Astrophysics Data System (ADS)
Park, Won-Kwang; Kim, Hwa Pyung; Lee, Kwang-Jae; Son, Seong-Ho
2017-11-01
Motivated by biomedical engineering for early-stage breast cancer detection, we investigated the use of the MUltiple SIgnal Classification (MUSIC) algorithm for searching the locations of small anomalies using S-parameters. We considered the application of MUSIC to functional imaging where a small number of dipole antennas are used. Our approach is based on the application of the Born approximation or physical factorization. We analyzed cases in which the anomaly is small or large relative to the wavelength, and how the structure of the left-singular vectors is linked to the nonzero singular values of a Multi-Static Response (MSR) matrix whose elements are the S-parameters. Using simulations, we demonstrated the strengths and weaknesses of the MUSIC algorithm in detecting both small and extended anomalies.
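A small numerical illustration of MUSIC location search under the Born approximation: build a rank-one MSR matrix for one point anomaly, take its noise subspace from the SVD, and peak-search the pseudospectrum. The ring geometry, wavelength, grid, and free-space Green's-function phases below are invented for the demonstration; a real system would fill the MSR matrix with measured S-parameters.

```python
import numpy as np

k = 2 * np.pi            # wavenumber (wavelength = 1)
n_ant = 12
angles = np.linspace(0, 2 * np.pi, n_ant, endpoint=False)
antennas = 5.0 * np.column_stack([np.cos(angles), np.sin(angles)])

def steering(point):
    """Illumination vector g(r): one free-space phase term per antenna."""
    d = np.linalg.norm(antennas - point, axis=1)
    return np.exp(1j * k * d) / d

true_anomaly = np.array([0.7, -0.3])
g0 = steering(true_anomaly)
msr = np.outer(g0, g0)              # rank-one MSR matrix (Born, small anomaly)

# signal subspace = left singular vectors of the significant singular values
u, s, _ = np.linalg.svd(msr)
signal = u[:, :1]                   # one nonzero singular value here

def pseudospectrum(point):
    g = steering(point)
    g = g / np.linalg.norm(g)
    resid = g - signal @ (signal.conj().T @ g)   # noise-subspace projection
    return 1.0 / (np.linalg.norm(resid) ** 2 + 1e-12)

grid = [(x, y) for x in np.linspace(-2, 2, 41) for y in np.linspace(-2, 2, 41)]
best = max(grid, key=lambda p: pseudospectrum(np.array(p)))
```

The pseudospectrum blows up where the test vector lies in the signal subspace, so `best` recovers the anomaly location on the search grid.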
Predictability in space launch vehicle anomaly detection using intelligent neuro-fuzzy systems
NASA Technical Reports Server (NTRS)
Gulati, Sandeep; Toomarian, Nikzad; Barhen, Jacob; Maccalla, Ayanna; Tawel, Raoul; Thakoor, Anil; Daud, Taher
1994-01-01
Included in this viewgraph presentation on intelligent neuroprocessors for launch vehicle health management systems (HMS) are the following: where the flight failures have been in launch vehicles; cumulative delay time; breakdown of operations hours; failure of Mars Probe; vehicle health management (VHM) cost optimizing curve; target HMS-STS auxiliary power unit location; APU monitoring and diagnosis; and integration of neural networks and fuzzy logic.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Garcia, Humberto E.; Simpson, Michael F.; Lin, Wen-Chiao
In this paper, we apply an advanced safeguards approach and associated methods for process monitoring to a hypothetical nuclear material processing system. The assessment regarding the state of the processing facility is conducted at a system-centric level formulated in a hybrid framework. This utilizes an architecture for integrating both time- and event-driven data and analysis for decision making. While the time-driven layers of the proposed architecture encompass more traditional process monitoring methods based on time series data and analysis, the event-driven layers encompass operation monitoring methods based on discrete event data and analysis. By integrating process- and operation-related information and methodologies within a unified framework, the task of anomaly detection is greatly improved. This is because decision making can benefit not only from known time-series relationships among measured signals but also from known event-sequence relationships among generated events. This available knowledge at both the time series and discrete event layers can then be effectively used to synthesize observation solutions that optimally balance sensor and data processing requirements. The proposed approach is then applied to an illustrative monitored system based on pyroprocessing, and results are discussed.
A mobile device system for early warning of ECG anomalies.
Szczepański, Adam; Saeed, Khalid
2014-06-20
With the rapid increase in computational power of mobile devices the amount of ambient intelligence-based smart environment systems has increased greatly in recent years. A proposition of such a solution is described in this paper, namely real time monitoring of an electrocardiogram (ECG) signal during everyday activities for identification of life threatening situations. The paper, being both research and review, describes previous work of the authors, current state of the art in the context of the authors' work and the proposed aforementioned system. Although parts of the solution were described in earlier publications of the authors, the whole concept is presented completely for the first time along with the prototype implementation on mobile device-a Windows 8 tablet with Modern UI. The system has three main purposes. The first goal is the detection of sudden rapid cardiac malfunctions and informing the people in the patient's surroundings, family and friends and the nearest emergency station about the deteriorating health of the monitored person. The second goal is a monitoring of ECG signals under non-clinical conditions to detect anomalies that are typically not found during diagnostic tests. The third goal is to register and analyze repeatable, long-term disturbances in the regular signal and finding their patterns.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jackson, K.A.; Neuman, M.C.; Simmonds, D.D.
An effective method for detecting computer misuse is the automatic monitoring and analysis of on-line user activity. This activity is reflected in the system audit record, in the system vulnerability posture, and in other evidence found through active testing of the system. During the last several years we have implemented an automatic misuse detection system at Los Alamos. This is the Network Anomaly Detection and Intrusion Reporter (NADIR). We are currently expanding NADIR to include processing of the Cray UNICOS operating system. This new component is called the UNICOS Realtime NADIR, or UNICORN. UNICORN summarizes user activity and system configuration in statistical profiles. It compares these profiles to expert rules that define security policy and improper or suspicious behavior. It reports suspicious behavior to security auditors and provides tools to aid in follow-up investigations. The first phase of UNICORN development is nearing completion, and will be operational in late 1994.
Wiemken, Timothy L; Furmanek, Stephen P; Mattingly, William A; Wright, Marc-Oliver; Persaud, Annuradha K; Guinn, Brian E; Carrico, Ruth M; Arnold, Forest W; Ramirez, Julio A
2018-02-01
Although not all health care-associated infections (HAIs) are preventable, reducing HAIs through targeted intervention is key to a successful infection prevention program. To identify areas in need of targeted intervention, robust statistical methods must be used when analyzing surveillance data. The objective of this study was to compare and contrast statistical process control (SPC) charts with Twitter's anomaly and breakout detection algorithms. SPC and anomaly/breakout detection (ABD) charts were created for vancomycin-resistant Enterococcus, Acinetobacter baumannii, catheter-associated urinary tract infection, and central line-associated bloodstream infection data. Both SPC and ABD charts detected similar data points as anomalous/out of control on most charts. The vancomycin-resistant Enterococcus ABD chart detected an extra anomalous point that appeared to be higher than the same time period in prior years. Using a small subset of the central line-associated bloodstream infection data, the ABD chart was able to detect anomalies where the SPC chart was not. SPC charts and ABD charts both performed well, although ABD charts appeared to work better in the context of seasonal variation and autocorrelation. Because they account for common statistical issues in HAI data, ABD charts may be useful for practitioners for analysis of HAI surveillance data. Copyright © 2018 Association for Professionals in Infection Control and Epidemiology, Inc. Published by Elsevier Inc. All rights reserved.
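The SPC side of the comparison above can be grounded with the simplest Shewhart-style chart: flag any month outside the center line ± 3 standard deviations. This individuals-chart sketch with invented monthly counts is for orientation only; it is not the study's exact SPC construction (a u- or c-chart with Poisson limits would be more usual for counts) nor Twitter's anomaly/breakout algorithms.

```python
def spc_out_of_control(counts):
    """Indices of points outside the 3-sigma control limits."""
    mean = sum(counts) / len(counts)
    sd = (sum((c - mean) ** 2 for c in counts) / len(counts)) ** 0.5
    ucl, lcl = mean + 3 * sd, max(mean - 3 * sd, 0.0)
    return [i for i, c in enumerate(counts) if c > ucl or c < lcl]

# hypothetical monthly HAI counts with one outbreak month
monthly_hai = [4, 3, 5, 4, 2, 3, 4, 5, 3, 4, 21, 4]
flagged = spc_out_of_control(monthly_hai)
```

Seasonal variation and autocorrelation, which the study found ABD charts handled better, would inflate `sd` here and mask real shifts.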
Propulsion health monitoring of a turbine engine disk using spin test data
NASA Astrophysics Data System (ADS)
Abdul-Aziz, Ali; Woike, Mark; Oza, Nikunj; Matthews, Bryan; Baakilini, George
2010-03-01
Online detection techniques to monitor the health of rotating engine components are becoming increasingly attractive options to aircraft engine companies in order to increase safety of operation and lower maintenance costs. Health monitoring remains challenging to implement, especially in the presence of scattered loading conditions, crack size, component geometry, and materials properties. The current trend, however, is to utilize noninvasive types of health monitoring or nondestructive techniques to detect hidden flaws and mini cracks before any catastrophic event occurs. These techniques go further to evaluate material discontinuities and other anomalies that have grown to the level of critical defects which can lead to failure. Generally, health monitoring is highly dependent on sensor systems that are capable of performing in various engine environmental conditions and able to transmit a signal upon a predetermined crack length, while acting in a neutral form upon the overall performance of the engine system. Efforts are under way at NASA Glenn Research Center, through support of the Intelligent Vehicle Health Management (IVHM) Project, to develop and implement such sensor technology for a wide variety of applications. These efforts are focused on developing high-temperature, wireless, low-cost, and durable products. Therefore, in an effort to address the technical issues concerning health monitoring of a rotor disk, this paper considers data collected from an experimental study using high-frequency capacitive sensor technology to capture blade tip clearance and tip timing measurements in a rotating engine-like disk to predict disk faults and assess its structural integrity. The experimental results, collected at a range of rotational speeds from tests conducted at the NASA Glenn Research Center's Rotordynamics Laboratory, will be evaluated using multiple data-driven anomaly detection techniques to identify anomalies in the disk.
This study is expected to present a select evaluation of online health monitoring of a rotating disk using these high caliber sensors and test the capability of the in-house spin system.
System for closure of a physical anomaly
Bearinger, Jane P; Maitland, Duncan J; Schumann, Daniel L; Wilson, Thomas S
2014-11-11
Systems for closure of a physical anomaly. Closure is accomplished by a closure body with an exterior surface. The exterior surface contacts the opening of the anomaly and closes the anomaly. The closure body has a primary shape for closing the anomaly and a secondary shape for being positioned in the physical anomaly. The closure body preferably comprises a shape memory polymer.
Asynchronous Data-Driven Classification of Weapon Systems
2009-10-01
Jin, Xin; Mukherjee, Kushal; Gupta, Shalabh; Ray, Asok; Phoha, Shashi; Damarla, Thyagaraju
NASA Technical Reports Server (NTRS)
Kobayashi, Takahisa; Simon, Donald L.
2008-01-01
In this paper, an enhanced on-line diagnostic system which utilizes dual-channel sensor measurements is developed for the aircraft engine application. The enhanced system is composed of a nonlinear on-board engine model (NOBEM), the hybrid Kalman filter (HKF) algorithm, and fault detection and isolation (FDI) logic. The NOBEM provides the analytical third channel against which the dual-channel measurements are compared. The NOBEM is further utilized as part of the HKF algorithm which estimates measured engine parameters. Engine parameters obtained from the dual-channel measurements, the NOBEM, and the HKF are compared against each other. When the discrepancy among the signals exceeds a tolerance level, the FDI logic determines the cause of discrepancy. Through this approach, the enhanced system achieves the following objectives: 1) anomaly detection, 2) component fault detection, and 3) sensor fault detection and isolation. The performance of the enhanced system is evaluated in a simulation environment using faults in sensors and components, and it is compared to an existing baseline system.
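The FDI logic described above reduces to pairwise comparison of three readings: the two sensor channels and the model-based analytical third channel. This boolean sketch with invented tolerances and readings illustrates how the disagreement pattern separates sensor faults from component faults; it is not the NOBEM/HKF system's actual decision table.

```python
def fdi(chan_a, chan_b, model, tol=2.0):
    """Classify the discrepancy pattern among two channels and a model."""
    ab = abs(chan_a - chan_b) <= tol
    am = abs(chan_a - model) <= tol
    bm = abs(chan_b - model) <= tol
    if ab and am and bm:
        return "no fault"
    if not ab and not am and bm:
        return "channel A sensor fault"   # A disagrees with both others
    if not ab and not bm and am:
        return "channel B sensor fault"   # B disagrees with both others
    if ab and not am and not bm:
        return "component fault"          # both sensors disagree with the model
    return "anomaly"                      # inconsistent pattern: flag for review

# both channels agree with each other but not with the on-board model
status = fdi(chan_a=512.0, chan_b=511.2, model=498.0)
```

A component fault shifts the true engine parameter, so the (healthy) sensors track each other while drifting away from the healthy-engine model estimate.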
DOE Office of Scientific and Technical Information (OSTI.GOV)
Humberto E. Garcia
This paper illustrates the safeguards benefits that process monitoring (PM) can have as a diversion deterrent and as a complementary safeguards measure to nuclear material accountancy (NMA). While the objective of NMA-based methods, in order to infer the possible existence of proliferation-driven activities, is often to statistically evaluate materials unaccounted for (MUF) computed by solving a mass balance equation for a material balance area (MBA) at every material balance period (MBP), a particular objective for a PM-based approach may be to statistically infer and evaluate anomalies unaccounted for (AUF) that may have occurred within a MBP. Although possibly indicative of proliferation-driven activities, the detection and tracking of anomaly patterns is not trivial because some executed events may be unobservable or unreliably observed. The proposed similarity between NMA- and PM-based approaches is important because performance metrics utilized for evaluating NMA-based methods, such as detection probability (DP) and false alarm probability (FAP), can also be applied to assessing PM-based safeguards solutions. To this end, AUF count estimates can be translated into significant quantity (SQ) equivalents that may have been diverted within a given MBP. A diversion alarm is reported if this mass estimate is greater than or equal to the selected alarm level (AL), chosen to optimize DP and FAP based on the particular characteristics of the monitored MBA, the sensors utilized, and the data processing method employed for integrating and analyzing collected measurements. To illustrate the application of the proposed PM approach, a protracted diversion of Pu in a waste stream was selected, based on incomplete fuel dissolution in a dissolver unit operation, as this diversion scenario is considered problematic for detection using NMA-based methods alone.
Results demonstrate the benefits of conducting PM under a system-centric strategy that utilizes data collected from a system of sensors and effectively exploits known characterizations of sensors and facility operations in order to significantly improve anomaly detection, reduce false alarms, and enhance assessment robustness under unreliable partial sensor information.
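The DP/FAP trade-off against the alarm level AL can be illustrated with a small Monte Carlo: model the per-MBP mass estimate as Gaussian under both normal operation and a one-SQ protracted diversion, and sweep AL. All distribution parameters here are invented for illustration; they are not the paper's facility model.

```python
import random

random.seed(1)
SIGMA = 0.4          # std. dev. of the per-MBP mass estimate (SQ units)
DIVERTED = 1.0       # protracted diversion totals one SQ within the MBP

def alarm_rates(al, trials=20000):
    """Monte Carlo DP and FAP for a given alarm level al (SQ units)."""
    fa = sum(random.gauss(0.0, SIGMA) >= al for _ in range(trials))
    det = sum(random.gauss(DIVERTED, SIGMA) >= al for _ in range(trials))
    return det / trials, fa / trials     # (DP, FAP)

dp_low, fap_low = alarm_rates(al=0.3)    # aggressive alarm level
dp_high, fap_high = alarm_rates(al=0.9)  # conservative alarm level
```

Lowering AL raises both detection probability and false alarm probability, which is exactly the balance the text says AL must be chosen to optimize.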
Tactile sensor of hardness recognition based on magnetic anomaly detection
NASA Astrophysics Data System (ADS)
Xue, Lingyun; Zhang, Dongfang; Chen, Qingguang; Rao, Huanle; Xu, Ping
2018-03-01
Hardness sensing, as one kind of tactile sensing, plays an important role in intelligent robot applications such as gripping, agricultural harvesting, prosthetic hands, and so on. Recently, with the rapid development of high-performance magnetic field sensing technology, a number of magnetic sensors have been developed for intelligent applications. A tunnel magnetoresistance (TMR) element, based on the magnetoresistance principle, works as the sensitive element to detect the magnetic field, and it has proven its excellent ability for weak magnetic detection. In this paper, a new method based on magnetic anomaly detection is proposed to detect hardness in a tactile way. The sensor is composed of an elastic body, a ferrous probe, a TMR element, and a permanent magnet. When the elastic body embedded with the ferrous probe touches an object under a certain force, the elastic body deforms. Correspondingly, the ferrous probe is displaced and the background magnetic field is distorted. The distorted magnetic field is detected by the TMR element, and the output signal is sampled at successive times. The slope of the magnetic signal over the sampling time differs for objects of different hardness. The results indicated that the magnetic anomaly sensor can recognize hardness rapidly, within 150 ms of the tactile moment. The hardness sensor based on the magnetic anomaly detection principle proposed in this paper has the advantages of simple structure, low cost, and rapid response, and it shows great application potential in the field of intelligent robots.
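Since the abstract says recognition rests on the slope of the sampled TMR signal, the classification step amounts to a least-squares slope over the first ~150 ms and a threshold. The sample readings, slope values, and class boundary below are wholly invented (the abstract does not say which hardness gives the steeper slope); this is a sketch of the decision rule, not the paper's calibration.

```python
def slope(times_ms, readings):
    """Least-squares slope of readings vs. time (signal units per ms)."""
    n = len(times_ms)
    mt = sum(times_ms) / n
    mr = sum(readings) / n
    num = sum((t - mt) * (r - mr) for t, r in zip(times_ms, readings))
    den = sum((t - mt) ** 2 for t in times_ms)
    return num / den

def classify(times_ms, readings, boundary=0.02):
    """Hypothetical two-class rule on the slope magnitude."""
    return "hard" if abs(slope(times_ms, readings)) > boundary else "soft"

t = [0, 30, 60, 90, 120, 150]              # samples within 150 ms of contact
hard_obj = [0.0, 0.9, 1.8, 2.7, 3.6, 4.5]  # toy trace: fast field change
soft_obj = [0.0, 0.2, 0.4, 0.6, 0.8, 1.0]  # toy trace: slow field change
```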
New Non-Intrusive Inspection Technologies for Nuclear Security and Nonproliferation
NASA Astrophysics Data System (ADS)
Ledoux, Robert J.
2015-10-01
Comprehensive monitoring of the supply chain for nuclear materials has historically been hampered by non-intrusive inspection systems that have such large false alarm rates that they are impractical in the flow of commerce. Passport Systems, Inc. (Passport) has developed an active interrogation system which detects fissionable material, high Z material, and other contraband in land, sea and air cargo. Passport's design utilizes several detection modalities including high resolution imaging, passive radiation detection, effective-Z (EZ-3D™) anomaly detection, Prompt Neutrons from Photofission (PNPF), and Nuclear Resonance Fluorescence (NRF) isotopic identification. These technologies combine to: detect fissionable, high-Z, radioactive and contraband materials, differentiate fissionable materials from high-Z shielding materials, and isotopically identify actinides, Special Nuclear Materials (SNM), and other contraband (e.g. explosives, drugs, nerve agents). Passport's system generates a 3-D image of the scanned object which contains information such as effective-Z and density, as well as a 2-D image and isotopic and fissionable information for regions of interest.
Intelligent transient transitions detection of LRE test bed
NASA Astrophysics Data System (ADS)
Zhu, Fengyu; Shen, Zhengguang; Wang, Qi
2013-01-01
Health monitoring systems implement monitoring strategies for complex systems, avoiding catastrophic failure, extending life, and leading to improved asset management. A health monitoring system generally encompasses intelligence at many levels and sub-systems, including sensors, actuators, devices, etc. In this paper, a smart sensor is studied which is used to detect transient transitions on a liquid-propellant rocket engine test bed. In consideration of dramatic changes in operating conditions, wavelet decomposition is used to work in real time. In contrast to the traditional Fourier transform method, the major advantage of adding wavelet analysis is the ability to detect transient transitions, as well as to obtain the frequency content using a much smaller data set. Historically, transient transitions were detected only by offline analysis of the data. The methods proposed in this paper provide an opportunity to detect transient transitions automatically, as well as many additional data anomalies, and provide improved data-correction and sensor health diagnostic abilities. The developed algorithms have been tested on actual rocket test data.
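The wavelet advantage named above can be seen with a single-level Haar decomposition: the detail coefficients respond sharply at an abrupt transition while the approximation tracks the slow trend, all from a short local window of samples. The signal and step location below are synthetic; the paper's actual wavelet family and depth are not specified here.

```python
def haar_level1(signal):
    """One level of the Haar transform: pairwise averages and differences."""
    approx = [(signal[i] + signal[i + 1]) / 2 for i in range(0, len(signal) - 1, 2)]
    detail = [(signal[i] - signal[i + 1]) / 2 for i in range(0, len(signal) - 1, 2)]
    return approx, detail

# slow ramp with a step transient injected at sample 11
signal = [0.1 * i for i in range(11)] + [5 + 0.1 * i for i in range(11, 24)]
_, detail = haar_level1(signal)
spike = max(range(len(detail)), key=lambda i: abs(detail[i]))
```

The largest detail coefficient lands on the sample pair straddling the transition, localizing the transient in time with only two samples per coefficient.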
NASA Astrophysics Data System (ADS)
Abdi, Abdi M.; Szu, Harold H.
2003-04-01
With the growing rate of interconnection among computer systems, network security is becoming a real challenge. An Intrusion Detection System (IDS) is designed to protect the availability, confidentiality, and integrity of critical network information systems. Today's approach to network intrusion detection involves the use of rule-based expert systems to identify indications of known attacks or anomalies. However, these techniques are less successful in identifying today's attacks. Hackers are perpetually inventing new and previously unanticipated techniques to compromise information infrastructure. This paper proposes a dynamic way of detecting network intruders in time series data. The proposed approach consists of a two-step process. First, we obtain an efficient multi-user detection method, employing the recently introduced complexity-minimization approach as a generalization of standard ICA. Second, we identify an unsupervised learning neural network architecture based on Kohonen's Self-Organizing Map for potential functional clustering. These two steps, working together adaptively, provide a pseudo-real-time novelty detection attribute to supplement the current intrusion detection statistical methodology.
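The second step above, clustering with a Kohonen self-organizing map, can be sketched minimally as follows. The grid size, learning schedule, and the toy two-cluster "traffic" features are all invented for illustration; the ICA/complexity-minimization front end is omitted.

```python
import numpy as np

rng = np.random.default_rng(42)

def train_som(data, grid=(4, 4), epochs=30, lr0=0.5, radius0=2.0):
    """Bare-bones Kohonen SOM: per-sample best-matching-unit updates."""
    n_units = grid[0] * grid[1]
    coords = np.array([(i, j) for i in range(grid[0]) for j in range(grid[1])])
    weights = rng.random((n_units, data.shape[1]))
    for epoch in range(epochs):
        lr = lr0 * (1 - epoch / epochs)
        radius = max(radius0 * (1 - epoch / epochs), 0.5)
        for x in rng.permutation(data):
            bmu = np.argmin(((weights - x) ** 2).sum(axis=1))
            # neighborhood pull toward the sample, strongest at the BMU
            dist = ((coords - coords[bmu]) ** 2).sum(axis=1)
            h = np.exp(-dist / (2 * radius ** 2))
            weights += lr * h[:, None] * (x - weights)
    return weights

# two well-separated "behaviors" in a toy 3-dimensional feature space
data = np.vstack([rng.normal(0.1, 0.02, (50, 3)), rng.normal(0.9, 0.02, (50, 3))])
weights = train_som(data)
bmus = [int(np.argmin(((weights - x) ** 2).sum(axis=1))) for x in data]
```

After training, samples from the two behaviors map to different units, and a connection falling in a sparsely used unit would be a novelty candidate.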
Acharya, Sujeet S; Gundeti, Mohan S; Zagaja, Gregory P; Shalhav, Arieh L; Zorn, Kevin C
2009-04-01
Although malformations of the genitourinary tract are typically identified during childhood, they can remain silent until incidental detection during the evaluation and treatment of other pathologies in adulthood. The advent of the minimally invasive era in urologic surgery has given rise to unique challenges in the surgical management of anomalies of the genitourinary tract. This article reviews the embryology of anomalies of Wolffian duct (WD) derivatives, with specific attention to the seminal vesicles, vas deferens, ureter, and kidneys. This is followed by a discussion of the history of the laparoscopic approach to WD derivative anomalies. Finally, we present two cases to describe technical considerations in managing these anomalies when encountered during robotic-assisted radical prostatectomy. The University of Chicago Robotic Laparoscopic Radical Prostatectomy (RLRP) database was reviewed for cases where anomalies of WD derivatives were encountered. We describe how modifications in technique allowed for completion of the procedure without difficulty. Of the 1230 RLRP procedures performed at our institution by three surgeons, only two cases (0.16%) have been noted to have a WD anomaly. These cases were completed without difficulty by making simple modifications in technique. Although uncommon, it is important for the urologist to be familiar with the origin and surgical management of WD anomalies, particularly when detected incidentally during surgery. Simple modifications in technique allow for completion of RLRP without difficulty.
Model-based approach for cyber-physical attack detection in water distribution systems.
Housh, Mashor; Ohar, Ziv
2018-08-01
Modern Water Distribution Systems (WDSs) are often controlled by Supervisory Control and Data Acquisition (SCADA) systems and Programmable Logic Controllers (PLCs) which manage their operation and maintain a reliable water supply. As such, and with the cyber layer becoming a central component of WDS operations, these systems are at a greater risk of being subjected to cyberattacks. This paper offers a model-based methodology based on a detailed hydraulic understanding of WDSs combined with an anomaly detection algorithm for the identification of complex cyberattacks that cannot be fully identified by hydraulically based rules alone. The results show that the proposed algorithm is capable of achieving the best-known performance when tested on the data published in the BATtle of the Attack Detection ALgorithms (BATADAL) competition (http://www.batadal.net). Copyright © 2018. Published by Elsevier Ltd.
A Doubly Stochastic Change Point Detection Algorithm for Noisy Biological Signals.
Gold, Nathan; Frasch, Martin G; Herry, Christophe L; Richardson, Bryan S; Wang, Xiaogang
2017-01-01
Experimentally and clinically collected time series data are often contaminated with significant confounding noise, creating short, noisy time series. This noise, due to natural variability and measurement error, poses a challenge to conventional change point detection methods. We propose a novel and robust statistical method for change point detection in noisy biological time series. Our method is a significant improvement over traditional change point detection methods, which examine a potential anomaly only at a single time point. In contrast, our method considers all suspected anomaly points jointly, using the joint probability distribution of the number of change points and the elapsed time between two consecutive anomalies. We validate our method on three simulated time series, a widely accepted benchmark data set, two geological time series, a data set of ECG recordings, and a physiological data set of heart rate variability measurements from a fetal sheep model of human labor, comparing it to three existing methods. Our method demonstrates significantly improved performance over existing point-wise detection methods.
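The contrast the abstract draws can be made concrete with a toy example: rather than testing one time step at a time, score every candidate change point over the whole series and pick the best. The sketch below scores each split of a Gaussian series by the drop in within-segment variance; it is an illustration of whole-series change point scoring, not the authors' doubly stochastic model, and the function name and data are invented.

```python
# Toy whole-series change point detection: choose the split index that
# minimizes the total within-segment sum of squared errors (SSE), i.e. the
# split that best explains the series as two constant-mean regimes.

def best_change_point(x):
    """Return the split index t minimizing SSE(x[:t]) + SSE(x[t:])."""
    def sse(seg):
        m = sum(seg) / len(seg)
        return sum((v - m) ** 2 for v in seg)
    scores = {t: sse(x[:t]) + sse(x[t:]) for t in range(2, len(x) - 1)}
    return min(scores, key=scores.get)

# Short noisy series with a mean shift between index 4 and 5:
series = [0.1, -0.2, 0.0, 0.2, -0.1, 2.1, 1.9, 2.2, 1.8, 2.0]
print(best_change_point(series))  # prints 5
```

A point-wise test on any single sample of this series would struggle, since each value is individually plausible under noise; the joint score over all splits recovers the change cleanly.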
A lightweight network anomaly detection technique
Kim, Jinoh; Yoo, Wucherl; Sim, Alex; ...
2017-03-13
While network anomaly detection is essential in network operations and management, performing the first line of detection becomes increasingly challenging against the exponentially increasing volume of network traffic. In this paper, we develop a technique for the first line of online anomaly detection with two important considerations: (i) availability of traffic attributes during the monitoring time, and (ii) computational scalability for streaming data. The presented learning technique is lightweight and highly scalable, owing to an approximation based on grid partitioning of the given dimensional space. With the public traffic traces of KDD Cup 1999 and NSL-KDD, we show that our technique yields 98.5% and 83% detection accuracy, respectively, using only a couple of readily available traffic attributes that can be obtained without post-processing. Finally, the results are at least comparable with classical learning methods, including decision tree and random forest, with approximately two orders of magnitude faster learning performance.
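The grid-partitioning idea lends itself to a short sketch: divide the attribute space into equal-width cells, record which cells normal traffic occupies, and flag new points that fall into unseen cells. This is a minimal illustration under assumed details (class name, bin count, toy attributes), not the authors' implementation.

```python
# Minimal grid-based anomaly detector: training costs one hash insert per
# point and scoring costs one hash lookup, which is what makes the approach
# lightweight enough for streaming traffic.

class GridDetector:
    def __init__(self, mins, maxs, bins=10):
        self.mins, self.maxs, self.bins = mins, maxs, bins
        self.occupied = set()   # cells seen in normal traffic

    def _cell(self, point):
        idx = []
        for v, lo, hi in zip(point, self.mins, self.maxs):
            frac = (v - lo) / (hi - lo)
            idx.append(min(self.bins - 1, max(0, int(frac * self.bins))))
        return tuple(idx)

    def fit(self, normal_points):
        for p in normal_points:
            self.occupied.add(self._cell(p))
        return self

    def is_anomaly(self, point):
        return self._cell(point) not in self.occupied

# Two traffic attributes (e.g. packet rate, mean packet size), scaled to [0, 100]:
det = GridDetector(mins=(0, 0), maxs=(100, 100)).fit(
    [(10, 20), (12, 22), (11, 19), (50, 60), (52, 58)])
print(det.is_anomaly((90, 5)))   # far from any training cell -> True
print(det.is_anomaly((11, 21)))  # same cell as training data -> False
```

The trade-off is the usual one for grid methods: coarse bins blur the decision boundary, while fine bins in many dimensions leave most cells empty.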
NASA Astrophysics Data System (ADS)
Simmons, Gary G.; Howett, Carly J. A.; Young, Leslie A.; Spencer, John R.
2015-11-01
In the last few decades, thermal data from the Galileo and Cassini spacecraft have detected various anomalies on Jovian and Saturnian satellites, including the thermally anomalous “PacMan” regions on Mimas and Tethys and the Pwyll anomaly on Europa (Howett et al. 2011, Howett et al. 2012, Spencer et al. 1999). Yet the peculiarities of some of these anomalies, such as the weak detection of the “PacMan” anomalies on Rhea and Dione and the low thermal inertia values of the widespread anomalies on equatorial Europa, are subjects of ongoing research (Howett et al. 2014, Rathbun et al. 2010). Further analysis and review of all the data that both Galileo and Cassini took of these worlds will provide information on the thermal inertia and albedos of their surfaces, perhaps highlighting potential targets of interest for future Jovian and Saturnian system missions. Many previous works have used a thermophysical model for airless planets developed by Spencer (1990). However, the Three Dimensional Volatile-Transport (VT3D) model proposed by Young (2012) is able to predict surface temperatures in significantly less computation time, incorporating seasonal and diurnal insolation variations. This work is the first step in an ongoing investigation that will use VT3D’s capabilities to reanalyze Galileo and Cassini data. VT3D, which has already been used to analyze volatile transport on Pluto, is validated by comparing its results to those of the Spencer thermal model. We will also present our initial results using VT3D to reanalyze the thermophysical properties of the PacMan anomaly previously discovered on Mimas by Howett et al. (2011), using temperature constraints from diurnal Cassini/CIRS data. VT3D is expected to be an efficient tool for identifying new thermal anomalies in future Saturnian and Jovian missions.
Bibliography: C.J.A. Howett et al. (2011), Icarus 216, 221. C.J.A. Howett et al. (2012), Icarus 221, 1084. C.J.A. Howett et al. (2014), Icarus 241, 239. J.A. Rathbun et al. (2010), Icarus 210, 763. J.R. Spencer (1990), Icarus 83, 27. J.R. Spencer et al. (1999), Science 284, 1514. L.A. Young (2012), Icarus 221, 80.
NASA Astrophysics Data System (ADS)
Ángel Marazuela, Miguel; Vázquez-Suñé, Enric; Criollo-Manjarrez, Rotman; García-Gil, Alejandro
2017-04-01
During the drilling of line 9 of the Barcelona underground (Spain), a geothermal anomaly was detected in which groundwater temperature was found to be up to 50°C. Previously, during the construction of the Fondo station in Santa Coloma de Gramanet (SCG), temperatures up to 37°C had already been detected in this area. To study the feasibility of future energy exploitation of the geothermal anomaly, a local and regional study is being undertaken. We present the first results of the new study. The objectives of this study were (1) to understand the flux regime of the hydrothermal system at both local and regional scales, (2) to explain the origin of the identified geothermal anomaly in SCG, and (3) to assess the quality of the geothermal potential of the underground resources. To achieve these goals, different numerical models of groundwater flow and heat transport were developed. The study area consists mainly of low-permeability Palaeozoic granodiorites strongly weathered towards the top (lehm). These materials are affected by two sets of faults: the first consists of porphyry dikes with a SW-NE direction, and the second fault family presents a NW-SE direction (Vázquez-Suñé et al., 2016). In the southeast area, the Quaternary deposits of the Besós River delta overlie these Palaeozoic materials. Despite being a regional model, all these geological features have been incorporated in a complex mesh with more than 2.5 million finite elements. The results obtained suggest that, in the case of SCG, the thermal anomaly found at the surface has its origin in the rapid ascent of hot water through these fracture planes. Understanding the hydrogeothermal operation of the SCG system in detail, and its possible temporal evolution, will be of great interest for future evaluation, monitoring and management of the geothermal resources found, as well as for understanding the interaction of these systems with urban infrastructures.
REFERENCES: Vázquez-Suñé, E.; Marazuela, M.Á.; Velasco, V.; Diviu, M.; Pérez-Estaún, A.; Álverez-Marrón, J. (2016): A geological model for the management of subsurface data in the urban environment of Barcelona and surrounding area. Solid Earth, 7, 1317-1329.
NASA Technical Reports Server (NTRS)
Mehr, Ali Farhang; Sauvageon, Julien; Agogino, Alice M.; Tumer, Irem Y.
2006-01-01
Recent advances in micro electromechanical systems technology, digital electronics, and wireless communications have enabled development of low-cost, low-power, multifunctional miniature smart sensors. These sensors can be deployed throughout a region in an aerospace vehicle to build a network for measurement, detection and surveillance applications. Event detection using such centralized sensor networks is often regarded as one of the most promising health management technologies in aerospace applications where timely detection of local anomalies has a great impact on the safety of the mission. In this paper, we propose to conduct a qualitative comparison of several local event detection algorithms for centralized redundant sensor networks. The algorithms are compared with respect to their ability to locate and evaluate an event in the presence of noise and sensor failures for various node geometries and densities.
Effectiveness of an expert system for astronaut assistance on a sleep experiment.
Callini, G; Essig, S M; Heher, D M; Young, L R
2000-10-01
Principal Investigator-in-a-Box ([PI]) is an expert system designed to train and assist astronauts with the performance of an experiment outside their field of expertise, particularly when contact with the Principal Investigators on the ground is limited or impossible. In the current case, [PI] was designed to assist with the calibration and troubleshooting procedures of the Neurolab Sleep and Respiration Experiment. [PI] displays physiological signals in real time during the pre-sleep instrumentation period, alerts the astronauts when poor signal quality is detected, and displays steps to improve quality. Two studies are presented in this paper. In the first study, 12 subjects monitored a set of prerecorded physiological signals and attempted to identify any signal artifacts appearing on the computer screen. Each subject performed the experiment twice, once with the assistance of [PI] and once without. The second study focuses on the postflight analysis of the data gathered from the Neurolab Mission. After replaying the physiological signals on the ground, the frequency of correct [PI] alerts and false alarms (i.e., incorrect diagnoses by the expert system) was determined in order to assess the robustness and accuracy of the rules. Results of the ground study indicated a beneficial effect of [PI] and training in reducing anomaly detection time and the number of undetected anomalies. For the in-flight performance, excluding the saturated signals, the expert system had an 84% detection accuracy, and the questionnaires filled out by the astronauts showed positive crew reactions to the expert system.
Spatial-temporal event detection in climate parameter imagery.
DOE Office of Scientific and Technical Information (OSTI.GOV)
McKenna, Sean Andrew; Gutierrez, Karen A.
Previously developed techniques comprising statistical parametric mapping, with applications focused on human brain imaging, are examined and tested here for new applications in anomaly detection within remotely sensed imagery. Two approaches to analysis are developed: online, regression-based anomaly detection and conditional differences. These approaches are applied to two example spatial-temporal data sets: data simulated with a Gaussian field deformation approach and weekly NDVI images derived from global satellite coverage. Results indicate that anomalies can be identified in spatial-temporal data with the regression-based approach. Additionally, La Niña and El Niño climatic conditions are used as different stimuli applied to the earth, and this comparison shows that El Niño conditions lead to significant decreases in NDVI in both the Amazon Basin and Southern India.
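A per-pixel version of the regression-based approach can be sketched briefly: fit a linear trend to a pixel's history and flag a new observation whose residual from the trend is unusually large. This is an illustrative sketch under assumed names and toy NDVI-like data, not the statistical parametric mapping code the report adapts.

```python
# Per-pixel regression-based anomaly detection: fit y = slope*t + intercept
# by ordinary least squares, then flag a new value whose residual exceeds
# k residual standard deviations.

def linear_fit(ts, ys):
    n = len(ts)
    tm, ym = sum(ts) / n, sum(ys) / n
    slope = (sum((t - tm) * (y - ym) for t, y in zip(ts, ys))
             / sum((t - tm) ** 2 for t in ts))
    return slope, ym - slope * tm

def is_anomalous(ts, ys, t_new, y_new, k=3.0):
    slope, intercept = linear_fit(ts, ys)
    resid = [y - (slope * t + intercept) for t, y in zip(ts, ys)]
    sd = (sum(r * r for r in resid) / (len(resid) - 2)) ** 0.5
    return abs(y_new - (slope * t_new + intercept)) > k * sd

# Weekly NDVI-like values for one pixel: a slow greening trend plus noise.
ts = list(range(10))
noise = [0.002, -0.003, 0.001, -0.001, 0.003, -0.002, 0.001, 0.002, -0.003, 0.001]
ys = [0.50 + 0.01 * t + e for t, e in zip(ts, noise)]
print(is_anomalous(ts, ys, 10, 0.45))  # drought-like drop -> True
print(is_anomalous(ts, ys, 10, 0.60))  # consistent with trend -> False
```

Running this independently for every pixel of an image stack yields a spatial map of anomalous pixels, which is the spirit of the spatial-temporal application described above.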
Syed, Zeeshan; Saeed, Mohammed; Rubinfeld, Ilan
2010-01-01
For many clinical conditions, only a small number of patients experience adverse outcomes. Developing risk stratification algorithms for these conditions typically requires collecting large volumes of data to capture enough positive and negative examples for training. This process is slow, expensive, and may not be appropriate for new phenomena. In this paper, we explore different anomaly detection approaches to identify high-risk patients as cases that lie in sparse regions of the feature space. We study three broad categories of anomaly detection methods: classification-based, nearest neighbor-based, and clustering-based techniques. When evaluated on data from the National Surgical Quality Improvement Program (NSQIP), these methods were able to successfully identify patients at an elevated risk of mortality and rare morbidities following inpatient surgical procedures. PMID:21347083
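The "sparse region" framing can be illustrated with the nearest-neighbor flavor of anomaly detection the abstract surveys: patients whose feature vectors lie far from their k nearest neighbors occupy sparse regions of the feature space and receive higher anomaly scores. The sketch below is illustrative; the function name, k, and the toy cohort features are all invented, not taken from the paper.

```python
# k-nearest-neighbor anomaly score: the mean Euclidean distance from a point
# to its k closest points in the reference population. Large scores indicate
# sparse regions of the feature space; no labeled outcomes are needed.

def knn_anomaly_score(point, population, k=3):
    """Mean distance to the k nearest neighbors; higher = sparser region."""
    dists = sorted(sum((a - b) ** 2 for a, b in zip(point, q)) ** 0.5
                   for q in population)
    return sum(dists[:k]) / k

# Toy cohort: (age decile, lab value z-score) for mostly similar patients.
cohort = [(5, 0.1), (5, -0.2), (6, 0.0), (4, 0.2), (5, 0.3), (6, -0.1)]
typical = knn_anomaly_score((5, 0.0), cohort)
outlier = knn_anomaly_score((9, 3.0), cohort)
print(typical < outlier)  # the isolated patient scores higher -> True
```

Because the score needs no positive training examples, it fits the paper's motivating scenario of rare adverse outcomes for which labeled cases are scarce.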
NASA Astrophysics Data System (ADS)
Li, H.; Kusky, T. M.; Peng, S.; Zhu, M.
2012-12-01
Thermal infrared (TIR) remote sensing is an important technique in the exploration of geothermal resources. In this study, a geothermal survey was conducted in the Tengchong area of Yunnan province in China using multi-temporal MODIS LST (Land Surface Temperature) data. The monthly nighttime MODIS LST data from Mar. 2000 to Mar. 2011 for the study area were collected and analyzed. The 132-month average LST map was derived, and three geothermal anomalies were identified. The findings of this study agree well with the results from relative geothermal gradient measurements. Finally, we conclude that TIR remote sensing is a cost-effective technique for detecting geothermal anomalies. Combining TIR remote sensing with geological analysis and an understanding of the geothermal mechanism is an accurate and efficient approach to geothermal area detection.
NASA Technical Reports Server (NTRS)
Shrestha, S.; Kharkovsky, S.; Zoughi, R.; Hepburn, F
2005-01-01
The Space Shuttle Columbia's catastrophic failure has been attributed to a piece of external fuel tank insulating SOFI (Spray On Foam Insulation) foam striking the leading edge of the left wing of the orbiter, causing significant damage to some of the protecting heat tiles. The accident emphasizes the growing need to develop effective, robust and life-cycle-oriented methods of nondestructive testing and evaluation (NDT&E) of the complex conductor-backed insulating foam and protective acreage heat tiles used in the space shuttle fleet and in future multi-launch space vehicles. The insulating SOFI foam is constructed from closed-cell foam; in the microwave regime this foam is in the family of low-permittivity and low-loss dielectric materials. Near-field microwave and millimeter wave NDT methods were among the techniques chosen for this purpose. To this end, several flat and thick SOFI foam panels, two structurally complex panels similar to the external fuel tank, and a "blind" panel were used in this investigation. Several anomalies such as voids and disbonds were embedded in these panels at various locations. The locations and properties of the embedded anomalies in the "blind" panel were not disclosed to the investigating team prior to the investigation. Three frequency bands were used in this investigation, covering a frequency range of 8-75 GHz. Moreover, the influence of signal polarization was also investigated. Overall, the results of this investigation were very promising for detecting the presence of anomalies in different panels covered with relatively thick insulating SOFI foam. Different types of anomalies were detected in foam up to 9 in. thick. Many of the anomalies in the more complex panels were also detected. When investigating the blind panel, no false positives were detected. Anomalies in between and underneath bolt heads were not easily detected. This paper presents the results of this investigation along with a discussion of the capabilities of the method used.
NASA Astrophysics Data System (ADS)
Grijalva-Rodríguez, T.; Valencia-Moreno, M.; Calmus, T.; Del Rio-Salas, R.; Balcázar-García, M.
2017-12-01
This work reviews the characteristics of the El Horror uranium prospect in northeastern Sonora, Mexico. It was first detected as a radiometric anomaly in airborne gamma ray exploration carried out in the 1970s by the Mexican government. As a promising site to contain important uranium resources, El Horror was re-evaluated by the CFE (Federal Electricity Commission) using in situ gamma ray surveys. The study also incorporates rock and stream sediment ICP-MS geochemistry, X-ray diffraction, X-ray fluorescence, Raman spectrometry and Scanning Electron Microscopy (SEM) to provide a better understanding of the radiometric anomaly. The results show that, instead of a single anomaly, it comprises at least five individual anomalies hosted in hydrothermally altered Laramide (80-40 Ma) andesitic volcanic rocks of the Tarahumara Formation. Concentrations of elemental uranium and of uranium calculated from gamma ray surveys (i.e., equivalent uranium) are not spatially coincident across the anomaly, but they do coincide, at least to some degree, at specific sites. X-ray diffraction and Raman spectrometry revealed the presence of rutile/anatase, uvite, bukouvskyte and allanite as the mineral phases most likely to contain uranium. SEM studies revealed a process of iron-rich concretion formation, suggesting that uranium was initially incorporated into the system by adsorption but was largely removed later during incorporation of Fe3+ ions. Stream sediment geochemistry reveals that the highest uranium concentrations are derived from the southern part of the Sierra La Madera batholith (∼63 Ma) and decrease toward the El Horror anomaly.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hermele, Michael; Chen, Xie
Here, we introduce a method, dubbed the flux-fusion anomaly test, to detect certain anomalous symmetry fractionalization patterns in two-dimensional symmetry-enriched topological (SET) phases. We focus on bosonic systems with Z2 topological order and a symmetry group of the form G=U(1)×G', where G' is an arbitrary group that may include spatial symmetries and/or time reversal. The anomalous fractionalization patterns we identify cannot occur in strictly d=2 systems but can occur at surfaces of d=3 symmetry-protected topological (SPT) phases. This observation leads to examples of d=3 bosonic topological crystalline insulators (TCIs) that, to our knowledge, have not previously been identified. In some cases, these d=3 bosonic TCIs can have an anomalous superfluid at the surface, which is characterized by nontrivial projective transformations of the superfluid vortices under symmetry. The basic idea of our anomaly test is to introduce fluxes of the U(1) symmetry and to show that some fractionalization patterns cannot be extended to a consistent action of G' symmetry on the fluxes. For some anomalies, this can be described in terms of dimensional reduction to d=1 SPT phases. We apply our method to several different symmetry groups with nontrivial anomalies, including G=U(1)×Z2^T and G=U(1)×Z2^P, where Z2^T and Z2^P are time-reversal and d=2 reflection symmetry, respectively.
Aerospike Engine Post-Test Diagnostic System Delivered to Rocketdyne
NASA Technical Reports Server (NTRS)
Meyer, Claudia M.
2000-01-01
The NASA Glenn Research Center at Lewis Field, in cooperation with Rocketdyne, has designed, developed, and implemented an automated Post-Test Diagnostic System (PTDS) for the X-33 linear aerospike engine. The PTDS was developed to reduce analysis time and to increase the accuracy and repeatability of rocket engine ground test fire and flight data analysis. This diagnostic system provides a fast, consistent, first-pass data analysis, thereby aiding engineers who are responsible for detecting and diagnosing engine anomalies from sensor data. It uses analytical methods modeled after the analysis strategies used by engineers. Glenn delivered the first version of PTDS in September 1998 to support testing of the engine's power pack assembly. The system was used to analyze all 17 power pack tests and assisted Rocketdyne engineers in troubleshooting both data acquisition and test article anomalies. The engine version of PTDS, which was delivered in June 1999, will support all single-engine, dual-engine, and flight firings of the aerospike engine.
Caldera unrest detected with seawater temperature anomalies at Deception Island, Antarctic Peninsula
NASA Astrophysics Data System (ADS)
Berrocoso, M.; Prates, G.; Fernández-Ros, A.; Peci, L. M.; de Gil, A.; Rosado, B.; Páez, R.; Jigena, B.
2018-04-01
Increased thermal activity was detected to coincide with the onset of volcano inflation in the seawater-filled caldera at Deception Island. This thermal activity was manifested in pulses of high water temperature that coincided with ocean tide cycles. The seawater temperature anomalies were detected by a thermometric sensor attached to the tide gauge (bottom pressure sensor). This was installed where the seawater circulation and the locations of known thermal anomalies, fumaroles and thermal springs, together favor the detection of water warmed within the caldera. Detection of the increased thermal activity was also possible because sea ice, which covers the entire caldera during the austral winter months, insulates the water and thus reduces temperature exchange between seawater and atmosphere. In these conditions, the water temperature data has been shown to provide significant information about Deception volcano activity. The detected seawater temperature increase, also observed in soil temperature readings, suggests rapid and near-simultaneous increase in geothermal activity with onset of caldera inflation and an increased number of seismic events observed in the following austral summer.
Sun, Minglei; Yang, Shaobao; Jiang, Jinling; Wang, Qiwei
2015-01-01
Pelger-Huet anomaly (PHA) and Pseudo Pelger-Huet anomaly (PPHA) are neutrophils with abnormal morphology: they have a bilobed or unilobed nucleus and excessively clumped chromatin. Currently, detection of this kind of cell depends mainly on manual microscopic examination by a clinician; thus, the quality of detection is limited by the clinician's efficiency and a degree of subjectivity. In this paper, a detection method for PHA and PPHA is proposed based on karyomorphism and chromatin distribution features. First, the skeleton of the nucleus is extracted using an augmented Fast Marching Method (AFMM), and the width distribution is obtained through a distance transform. Then, caryoplastin in the nucleus is extracted based on Speeded Up Robust Features (SURF), and a K-nearest-neighbor (KNN) classifier is constructed to analyze the features. Experiments show that the sensitivity and specificity of this method reached 87.5% and 83.33%, respectively, indicating that the detection accuracy for PHA is acceptable. The detection method should also be helpful for the automatic morphological classification of blood cells.
Fiedler, Klaus; Kareev, Yaakov; Avrahami, Judith; Beier, Susanne; Kutzner, Florian; Hütter, Mandy
2016-01-01
Detecting changes, in performance, sales, markets, risks, social relations, or public opinions, constitutes an important adaptive function. In a sequential paradigm devised to investigate detection of change, every trial provides a sample of binary outcomes (e.g., correct vs. incorrect student responses). Participants have to decide whether the proportion of a focal feature (e.g., correct responses) in the population from which the sample is drawn has decreased, remained constant, or increased. Strong and persistent anomalies in change detection arise when changes in proportional quantities vary orthogonally to changes in absolute sample size. Proportional increases are readily detected and nonchanges are erroneously perceived as increases when absolute sample size increases. Conversely, decreasing sample size facilitates the correct detection of proportional decreases and the erroneous perception of nonchanges as decreases. These anomalies are however confined to experienced samples of elementary raw events from which proportions have to be inferred inductively. They disappear when sample proportions are described as percentages in a normalized probability format. To explain these challenging findings, it is essential to understand the inductive-learning constraints imposed on decisions from experience.
Data Analytics in Procurement Fraud Prevention
2014-05-30
This report examines the use of data analytics in procurement fraud prevention, including predictive analytics that model patterns from known credit card fraud investigations in order to prevent future schemes before they happen, and distributional analytics, which are used to detect anomalies within data.
Abernethy malformation with portal vein aneurysm in a child
Chandrashekhara, Sheragaru H.; Bhalla, Ashu Seith; Gupta, Arun Kumar; Vikash, C. S.; Kabra, Susheel Kumar
2011-01-01
Abernethy malformation is an extremely rare anomaly of the splanchnic venous system. We describe the multidetector computed tomography findings of an incidentally detected Abernethy malformation with portal vein aneurysm in a two-and-a-half-year-old child. The computed tomography scan was performed for the evaluation of respiratory distress, poor growth, and loss of appetite. PMID:21430844
Using Innovative Outliers to Detect Discrete Shifts in Dynamics in Group-Based State-Space Models
ERIC Educational Resources Information Center
Chow, Sy-Miin; Hamaker, Ellen L.; Allaire, Jason C.
2009-01-01
Outliers are typically regarded as data anomalies that should be discarded. However, dynamic or "innovative" outliers can be appropriately utilized to capture unusual but substantively meaningful shifts in a system's dynamics. We extend De Jong and Penzer's 1998 approach for representing outliers in single-subject state-space models to a…
NASA Astrophysics Data System (ADS)
Varma, R. A. Raveendra
Magnetic fields of naval vessels are widely used all over the world for the detection and localization of naval vessels. Magnetic Anomaly Detectors (MADs) installed on airborne vehicles are used to detect submarines operating in shallow waters. Underwater mines fitted with magnetic sensors are used for the detection and destruction of naval vessels in times of conflict. The magnetic signature of naval vessels is reduced by deperming and by installation of a degaussing system onboard the vessel. This paper elaborates on studies carried out at the Magnetics Division of the Naval Science and Technological Laboratory (NSTL) to minimize the magnetic signature of naval vessels by designing a degaussing system. The magnetic fields of a small ship model are predicted, and a degaussing system is designed for reducing magnetic detection. The details of the model, the methodology used for estimation of the magnetic signature of the vessel, and the design of the degaussing system are presented, along with details of the experimental setup and results.