Sample records for event detection system

  1. Generalized Detectability for Discrete Event Systems

    PubMed Central

    Shu, Shaolong; Lin, Feng

    2011-01-01

    In our previous work, we investigated detectability of discrete event systems, which is defined as the ability to determine the current and subsequent states of a system based on observation. For different applications, we defined four types of detectabilities: (weak) detectability, strong detectability, (weak) periodic detectability, and strong periodic detectability. In this paper, we extend our results in three aspects. (1) We extend detectability from deterministic systems to nondeterministic systems. Such a generalization is necessary because there are many systems that need to be modeled as nondeterministic discrete event systems. (2) We develop polynomial algorithms to check strong detectability. The previous algorithms are based on an observer, whose construction is of exponential complexity, while the new algorithms are based on a new automaton called a detector. (3) We extend detectability to D-detectability. While detectability requires determining the exact state of a system, D-detectability relaxes this requirement by asking only to distinguish certain pairs of states. With these extensions, the theory on detectability of discrete event systems becomes more applicable in solving many practical problems. PMID:21691432
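
    The detectability notions above all hinge on tracking the set of states consistent with the observations made so far. As a minimal, hypothetical illustration of that state-estimation step (not the paper's detector construction, and with a made-up transition relation), one might write:

```python
# Illustrative sketch only: current-state estimation for a small
# nondeterministic finite automaton under partial observation.
# The transition relation and event labels below are hypothetical.
from itertools import chain

# transitions[state][event] -> set of possible successor states
transitions = {
    0: {"a": {0, 1}},
    1: {"b": {2}},
    2: {"a": {2}, "b": {0}},
}
initial_states = {0}

def unobservable_reach(states):
    # No unobservable events in this toy model; in general, close the
    # estimate under silent transitions here.
    return set(states)

def update_estimate(estimate, observed_event):
    """One step of state estimation: successors of the current estimate
    under the observed event, closed under the unobservable reach."""
    successors = set(chain.from_iterable(
        transitions.get(s, {}).get(observed_event, set()) for s in estimate))
    return unobservable_reach(successors)

estimate = unobservable_reach(initial_states)
for ev in ["a", "b", "a"]:
    estimate = update_estimate(estimate, ev)
    print(ev, "->", sorted(estimate))
# Along this run, the system becomes state-certain once the estimate
# shrinks to a singleton and stays that way.
```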

  2. Surface Management System Departure Event Data Analysis

    NASA Technical Reports Server (NTRS)

    Monroe, Gilena A.

    2010-01-01

    This paper presents a data analysis of the Surface Management System (SMS) performance of departure events, including push-back and runway departure events. The paper focuses on the detection performance, or the ability to detect departure events, as well as the prediction performance of SMS. The results detail a modest overall detection performance of push-back events and a significantly high overall detection performance of runway departure events. The overall detection performance of SMS for push-back events is approximately 55%. The overall detection performance of SMS for runway departure events nears 100%. This paper also presents the overall SMS prediction performance for runway departure events as well as the timeliness of the Aircraft Situation Display for Industry data source for SMS predictions.

  3. Real-time distributed fiber optic sensor for security systems: Performance, event classification and nuisance mitigation

    NASA Astrophysics Data System (ADS)

    Mahmoud, Seedahmed S.; Visagathilagar, Yuvaraja; Katsifolis, Jim

    2012-09-01

    The success of any perimeter intrusion detection system depends on three important performance parameters: the probability of detection (POD), the nuisance alarm rate (NAR), and the false alarm rate (FAR). The most fundamental parameter, POD, is normally related to a number of factors such as the event of interest, the sensitivity of the sensor, the installation quality of the system, and the reliability of the sensing equipment. The suppression of nuisance alarms without degrading sensitivity in fiber optic intrusion detection systems is key to maintaining acceptable performance. Signal processing algorithms that maintain the POD and eliminate nuisance alarms are crucial for achieving this. In this paper, a robust event classification system using supervised neural networks together with a level crossings (LCs) based feature extraction algorithm is presented for the detection and recognition of intrusion and non-intrusion events in a fence-based fiber-optic intrusion detection system. A level crossings algorithm is also used with a dynamic threshold to suppress torrential rain-induced nuisance alarms in a fence system. Results show that rain-induced nuisance alarms can be suppressed for rainfall rates in excess of 100 mm/hr with the simultaneous detection of intrusion events. The use of a level crossing based detection and novel classification algorithm is also presented for a buried pipeline fiber optic intrusion detection system for the suppression of nuisance events and discrimination of intrusion events. The sensor employed for both types of systems is a distributed bidirectional fiber-optic Mach-Zehnder (MZ) interferometer.
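
    The level-crossings feature mentioned above is straightforward to compute; the sketch below uses a synthetic signal and placeholder thresholds, not values from the paper:

```python
# Illustrative sketch: level-crossing counts as features for a windowed
# interferometer signal. Thresholds are arbitrary examples.
import numpy as np

def level_crossing_counts(window, levels):
    """Count how many times the signal crosses each level in `levels`."""
    counts = []
    for level in levels:
        above = (window > level).astype(int)
        # A crossing occurs wherever consecutive samples change side.
        counts.append(int(np.count_nonzero(np.diff(above))))
    return np.array(counts)

rng = np.random.default_rng(0)
window = np.sin(np.linspace(0, 20, 2000)) + 0.1 * rng.standard_normal(2000)
features = level_crossing_counts(window, levels=[-0.5, 0.0, 0.5])
print(features)  # e.g. a feature vector fed to a neural-network classifier
```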

  4. An integrated logit model for contamination event detection in water distribution systems.

    PubMed

    Housh, Mashor; Ostfeld, Avi

    2015-05-15

    The problem of contamination event detection in water distribution systems has become one of the most challenging research topics in water distribution systems analysis. Current attempts at event detection utilize a variety of approaches including statistical, heuristic, machine learning, and optimization methods. Several existing event detection systems share a common feature in which alarms are obtained separately for each of the water quality indicators. Unifying those single alarms from different indicators is usually performed by means of simple heuristics. A salient feature of the developed approach is the use of a statistically oriented model for discrete choice prediction, estimated with the maximum likelihood method, to integrate the single alarms. The discrete choice model is jointly calibrated with other components of the event detection system framework in a training data set using genetic algorithms. The process of fusing the individual indicator probabilities, which receives little attention in many existing event detection system models, is shown to be a crucial part of the system whose performance can be improved by a discrete choice model. The developed methodology is tested on real water quality data, showing improved performances in decreasing the number of false positive alarms and in its ability to detect events with higher probabilities, compared to previous studies. Copyright © 2015 Elsevier Ltd. All rights reserved.
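
    As a rough illustration of fusing single-indicator alarm probabilities with a logit-type model (synthetic data, not the paper's jointly calibrated framework), one might write:

```python
# Illustrative sketch: fuse per-indicator alarm strengths into one event
# decision with a logistic model. Data here are entirely synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 500
# Columns: per-indicator "alarm strength" (e.g. chlorine, pH, turbidity outlier scores).
X = rng.uniform(0, 1, size=(n, 3))
# Synthetic ground truth: an event is more likely when several indicators agree.
y = (X.sum(axis=1) + 0.3 * rng.standard_normal(n) > 1.8).astype(int)

model = LogisticRegression().fit(X, y)
new_scores = np.array([[0.9, 0.7, 0.8]])   # hypothetical current indicator scores
print("event probability:", model.predict_proba(new_scores)[0, 1])
```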

  5. Embedded security system for multi-modal surveillance in a railway carriage

    NASA Astrophysics Data System (ADS)

    Zouaoui, Rhalem; Audigier, Romaric; Ambellouis, Sébastien; Capman, François; Benhadda, Hamid; Joudrier, Stéphanie; Sodoyer, David; Lamarque, Thierry

    2015-10-01

    Public transport security is one of the main priorities of the public authorities when fighting against crime and terrorism. In this context, there is a great demand for autonomous systems able to detect abnormal events such as violent acts aboard passenger cars and intrusions when the train is parked at the depot. To this end, we present an innovative approach which aims at providing efficient automatic event detection by fusing video and audio analytics and reducing the false alarm rate compared to classical stand-alone video detection. The multi-modal system is composed of two microphones and one camera and integrates onboard video and audio analytics and fusion capabilities. On the one hand, for detecting intrusion, the system relies on the fusion of "unusual" audio events detection with intrusion detections from video processing. The audio analysis consists in modeling the normal ambience and detecting deviation from the trained models during testing. This unsupervised approach is based on clustering of automatically extracted segments of acoustic features and statistical Gaussian Mixture Model (GMM) modeling of each cluster. The intrusion detection is based on the three-dimensional (3D) detection and tracking of individuals in the videos. On the other hand, for violent events detection, the system fuses unsupervised and supervised audio algorithms with video event detection. The supervised audio technique detects specific events such as shouts. A GMM is used to catch the formant structure of a shout signal. Video analytics use an original approach for detecting aggressive motion by focusing on erratic motion patterns specific to violent events. As data with violent events is not easily available, a normality model with structured motions from non-violent videos is learned for one-class classification. A fusion algorithm based on Dempster-Shafer's theory analyses the asynchronous detection outputs and computes the degree of belief of each probable event.
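
    The "normal ambience" modelling step can be illustrated with a Gaussian Mixture Model scored by log-likelihood. The sketch below uses synthetic stand-in features and an arbitrary threshold; it is not the system's actual audio pipeline:

```python
# Illustrative sketch: model normal acoustic ambience with a GMM and flag
# frames whose log-likelihood falls below a threshold. Features are synthetic
# stand-ins for the acoustic features a real system would extract.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(2)
normal_features = rng.normal(0.0, 1.0, size=(2000, 13))          # training ambience
test_features = np.vstack([rng.normal(0.0, 1.0, size=(50, 13)),
                           rng.normal(4.0, 1.0, size=(5, 13))])  # last 5 are "unusual"

gmm = GaussianMixture(n_components=4, covariance_type="diag", random_state=0)
gmm.fit(normal_features)

log_lik = gmm.score_samples(test_features)
threshold = np.percentile(gmm.score_samples(normal_features), 1)  # hypothetical cut-off
print("unusual frames:", np.where(log_lik < threshold)[0])
```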

  6. Detection of planets in extremely weak central perturbation microlensing events via next-generation ground-based surveys

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chung, Sun-Ju; Lee, Chung-Uk; Koo, Jae-Rim, E-mail: sjchung@kasi.re.kr, E-mail: leecu@kasi.re.kr, E-mail: koojr@kasi.re.kr

    2014-04-20

    Even though the recently discovered high-magnification event MOA-2010-BLG-311 had complete coverage over its peak, confident planet detection did not happen due to extremely weak central perturbations (EWCPs, fractional deviations of ≲ 2%). For confident detection of planets in EWCP events, it is necessary to have both high cadence monitoring and high photometric accuracy better than those of current follow-up observation systems. The next-generation ground-based observation project, Korea Microlensing Telescope Network (KMTNet), satisfies these conditions. We estimate the probability of occurrence of EWCP events with fractional deviations of ≤2% in high-magnification events and the efficiency of detecting planets in the EWCP events using the KMTNet. From this study, we find that the EWCP events occur with a frequency of >50% in the case of ≲ 100 M_E planets with separations of 0.2 AU ≲ d ≲ 20 AU. We find that for main-sequence and sub-giant source stars, ≳ 1 M_E planets in EWCP events with deviations ≤2% can be detected with frequency >50% in a certain range that changes with the planet mass. However, it is difficult to detect planets in EWCP events of bright stars like giant stars because it is easy for KMTNet to be saturated around the peak of the events because of its constant exposure time. EWCP events are caused by close, intermediate, and wide planetary systems with low-mass planets and close and wide planetary systems with massive planets. Therefore, we expect that a much greater variety of planetary systems than those already detected, which are mostly intermediate planetary systems, regardless of the planet mass, will be significantly detected in the near future.

  7. Event detection in an assisted living environment.

    PubMed

    Stroiescu, Florin; Daly, Kieran; Kuris, Benjamin

    2011-01-01

    This paper presents the design of a wireless event detection and in-building location awareness system. The system's architecture is based on using a body-worn sensor to detect events such as falls where they occur in an assisted living environment. This process involves developing event detection algorithms and transmitting such events wirelessly to an in-house network based on the 802.15.4 protocol. The network would then generate alerts both in the assisted living facility and remotely to an offsite monitoring facility. The focus of this paper is on the design of the system architecture and the compliance challenges in applying this technology.

  8. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ding, Fei; Jiang, Huaiguang; Tan, Jin

    This paper proposes an event-driven approach for reconfiguring distribution systems automatically. Specifically, an optimal synchrophasor sensor placement (OSSP) is used to reduce the number of synchrophasor sensors while keeping the whole system observable. Then, a wavelet-based event detection and location approach is used to detect and locate the event, which serves as a trigger for network reconfiguration. With the detected information, the system is then reconfigured using the hierarchical decentralized approach to seek the new optimal topology. In this manner, whenever an event happens the distribution network can be reconfigured automatically based on the real-time information that is observable and detectable.
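
    As an illustration of the wavelet-based trigger idea (not the paper's detector), a single-level stationary Haar transform highlights an abrupt change in a measured waveform; the signal and trigger level below are synthetic:

```python
# Illustrative sketch: flag an abrupt change in a measurement stream using
# stationary Haar wavelet detail coefficients (PyWavelets). The step signal
# and the trigger threshold are synthetic.
import numpy as np
import pywt

rng = np.random.default_rng(3)
signal = np.concatenate([np.ones(512), 0.6 * np.ones(512)])  # step = simulated event
signal += 0.01 * rng.standard_normal(signal.size)

(approx, detail), = pywt.swt(signal, "haar", level=1)  # undecimated, shift-invariant
threshold = 5 * np.std(detail)                          # hypothetical trigger level
event_idx = np.where(np.abs(detail) > threshold)[0]
print("event near sample(s):", event_idx)               # indices map directly to samples
```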

  9. Exploiting semantics for sensor re-calibration in event detection systems

    NASA Astrophysics Data System (ADS)

    Vaisenberg, Ronen; Ji, Shengyue; Hore, Bijit; Mehrotra, Sharad; Venkatasubramanian, Nalini

    2008-01-01

    Event detection from a video stream is becoming an important and challenging task in surveillance and sentient systems. While computer vision has been extensively studied to solve different kinds of detection problems over time, it is still a hard problem and even in a controlled environment only simple events can be detected with a high degree of accuracy. Instead of struggling to improve event detection using image processing only, we bring in semantics to direct traditional image processing. Semantics are the underlying facts that hide beneath video frames, which can not be "seen" directly by image processing. In this work we demonstrate that time sequence semantics can be exploited to guide unsupervised re-calibration of the event detection system. We present an instantiation of our ideas by using an appliance as an example--Coffee Pot level detection based on video data--to show that semantics can guide the re-calibration of the detection model. This work exploits time sequence semantics to detect when re-calibration is required to automatically relearn a new detection model for the newly evolved system state and to resume monitoring with a higher rate of accuracy.

  10. Developing Fluorescence Sensor Systems for Early Detection of Nitrification Events in Chloraminated Drinking Water Distribution Systems

    EPA Science Inventory

    Detection of nitrification events in chloraminated drinking water distribution systems remains an ongoing challenge for many drinking water utilities, including Dallas Water Utilities (DWU) and the City of Houston (CoH). Each year, these utilities experience nitrification events ...

  11. Real-Time Event Detection for Monitoring Natural and Source ...

    EPA Pesticide Factsheets

    The use of event detection systems in finished drinking water systems is increasing in order to monitor water quality in both operational and security contexts. Recent incidents involving harmful algal blooms and chemical spills into watersheds have increased interest in monitoring source water quality prior to treatment. This work highlights the use of the CANARY event detection software in detecting suspected illicit events in an actively monitored watershed in South Carolina. CANARY is an open source event detection software that was developed by USEPA and Sandia National Laboratories. The software works with any type of sensor, utilizes multiple detection algorithms and approaches, and can incorporate operational information as needed. Monitoring has been underway for several years to detect events related to intentional or unintentional dumping of materials into the monitored watershed. This work evaluates the feasibility of using CANARY to enhance the detection of events in this watershed. This presentation will describe the real-time monitoring approach used in this watershed, the selection of CANARY configuration parameters that optimize detection for this watershed and monitoring application, and the performance of CANARY during the time frame analyzed. Further, this work will highlight how rainfall events impacted analysis, and the innovative application of CANARY taken in order to effectively detect the suspected illicit events. This presentation d
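
    CANARY's own configurable algorithms are more sophisticated than this, but the general idea of flagging measurements that depart strongly from a recent baseline can be sketched as follows (synthetic data and illustrative thresholds only, not CANARY's actual algorithms):

```python
# Illustrative sketch only (not CANARY): flag water-quality measurements that
# deviate strongly from a trailing sliding-window baseline.
import numpy as np

def sliding_window_events(series, window=96, z_threshold=4.0):
    """Return indices whose value deviates from the trailing-window mean by
    more than z_threshold standard deviations."""
    events = []
    for i in range(window, len(series)):
        baseline = series[i - window:i]
        mu, sigma = baseline.mean(), baseline.std() + 1e-9
        if abs(series[i] - mu) / sigma > z_threshold:
            events.append(i)
    return events

rng = np.random.default_rng(4)
turbidity = rng.normal(1.0, 0.05, size=1000)
turbidity[700:720] += 1.5          # simulated spill signature
print(sliding_window_events(turbidity)[:5])
```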

  12. Initial Evaluation of Signal-Based Bayesian Monitoring

    NASA Astrophysics Data System (ADS)

    Moore, D.; Russell, S.

    2016-12-01

    We present SIGVISA (Signal-based Vertically Integrated Seismic Analysis), a next-generation system for global seismic monitoring through Bayesian inference on seismic signals. Traditional seismic monitoring systems rely on discrete detections produced by station processing software, discarding significant information present in the original recorded signal. By modeling signals directly, our forward model is able to incorporate a rich representation of the physics underlying the signal generation process, including source mechanisms, wave propagation, and station response. This allows inference in the model to recover the qualitative behavior of geophysical methods including waveform matching and double-differencing, all as part of a unified Bayesian monitoring system that simultaneously detects and locates events from a network of stations. We report results from an evaluation of SIGVISA monitoring the western United States for a two-week period following the magnitude 6.0 event in Wells, NV in February 2008. During this period, SIGVISA detects more than twice as many events as NETVISA, and three times as many as SEL3, while operating at the same precision; at lower precisions it detects up to five times as many events as SEL3. At the same time, signal-based monitoring reduces mean location errors by a factor of four relative to detection-based systems. We provide evidence that, given only IMS data, SIGVISA detects events that are missed by regional monitoring networks, indicating that our evaluations may even underestimate its performance. Finally, SIGVISA matches or exceeds the detection rates of existing systems for de novo events - events with no nearby historical seismicity - and detects through automated processing a number of such events missed even by the human analysts generating the LEB.

  13. Automatic Detection and Classification of Audio Events for Road Surveillance Applications.

    PubMed

    Almaadeed, Noor; Asim, Muhammad; Al-Maadeed, Somaya; Bouridane, Ahmed; Beghdadi, Azeddine

    2018-06-06

    This work investigates the problem of detecting hazardous events on roads by designing an audio surveillance system that automatically detects perilous situations such as car crashes and tire skidding. In recent years, research has shown several visual surveillance systems that have been proposed for road monitoring to detect accidents with an aim to improve safety procedures in emergency cases. However, the visual information alone cannot detect certain events such as car crashes and tire skidding, especially under adverse and visually cluttered weather conditions such as snowfall, rain, and fog. Consequently, the incorporation of microphones and audio event detectors based on audio processing can significantly enhance the detection accuracy of such surveillance systems. This paper proposes to combine time-domain, frequency-domain, and joint time-frequency features extracted from a class of quadratic time-frequency distributions (QTFDs) to detect events on roads through audio analysis and processing. Experiments were carried out using a publicly available dataset. The experimental results confirm the effectiveness of the proposed approach for detecting hazardous events on roads, as demonstrated by a 7% improvement in accuracy rate when compared against methods that use individual temporal and spectral features.

  14. A coupled classification - evolutionary optimization model for contamination event detection in water distribution systems.

    PubMed

    Oliker, Nurit; Ostfeld, Avi

    2014-03-15

    This study describes a decision support system that alerts for contamination events in water distribution systems. The developed model comprises a weighted support vector machine (SVM) for the detection of outliers, and a subsequent sequence analysis for the classification of contamination events. The contribution of this study is an improvement in contamination event detection ability and a multi-dimensional analysis of the data, differing from the parallel one-dimensional analysis conducted so far. The multivariate analysis examines the relationships between water quality parameters and detects changes in their mutual patterns. The weights of the SVM model accomplish two goals: blurring the difference between the sizes of the two classes' data sets (as there are many more normal/regular measurements than event-time measurements), and incorporating the time factor through a time decay coefficient, ascribing higher importance to recent observations when classifying a time step measurement. All model parameters were determined by data-driven optimization, so the calibration of the model was completely autonomous. The model was trained and tested on a real water distribution system (WDS) data set with randomly simulated events superimposed on the original measurements. The model is notable for its ability to detect events that were only partly expressed in the data (i.e., affecting only some of the measured parameters). The model showed high accuracy and better detection ability as compared to previous modeling attempts at contamination event detection. Copyright © 2013 Elsevier Ltd. All rights reserved.
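
    A minimal sketch of a weighted SVM in this spirit, with class weights to offset the normal/event imbalance and exponentially decaying sample weights favouring recent observations, is shown below; the data and the decay constant are synthetic placeholders, not the paper's calibrated model:

```python
# Illustrative sketch: weighted SVM with class balancing and time-decay
# sample weights. Data and parameters are synthetic.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(5)
n = 600
X = rng.normal(size=(n, 4))                       # multivariate water-quality vectors
y = np.zeros(n, dtype=int)
y[-30:] = 1                                       # a short simulated event at the end
X[-30:] += 1.5

age = np.arange(n)[::-1]                          # 0 = most recent sample
sample_weight = np.exp(-age / 200.0)              # hypothetical time-decay coefficient

clf = SVC(kernel="rbf", class_weight="balanced")
clf.fit(X, y, sample_weight=sample_weight)
print("predicted label for the newest measurement:", clf.predict(X[-1:])[0])
```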

  15. Distributed Events in Sentinel: Design and Implementation of a Global Event Detector

    DTIC Science & Technology

    1999-01-01

    ...local event detector and a global event detector to detect events. The global event detector in this case plays the role of message sending/receiving ... significant in this case. The system performance will decrease with an increase in the number of applications involved in global event detection. Yet from a ... [Figure 8: A global event tree] ... 1. Global composite event is detected at the GED: in this case, the whole global composite event tree is sent to the ...

  16. On-Die Sensors for Transient Events

    NASA Astrophysics Data System (ADS)

    Suchak, Mihir Vimal

    Failures caused by transient electromagnetic events like Electrostatic Discharge (ESD) are a major concern for embedded systems. The component often failing is an integrated circuit (IC). Determining which IC is affected in a multi-device system is a challenging task. Debugging errors often requires sophisticated lab setups which require intentionally disturbing and probing various parts of the system which might not be easily accessible. Opening the system and adding probes may change its response to the transient event, which further compounds the problem. On-die transient event sensors were developed that require relatively little area on die (making them inexpensive), consume negligible static current, and do not interfere with normal operation of the IC. These circuits can be used to determine the pin involved and the level of a transient event affecting the IC, thus allowing the user to debug system-level transient events without modifying the system. The circuit and detection scheme design has been completed and verified in simulations with the Cadence Virtuoso environment. Simulations accounted for the impact of the ESD protection circuits, parasitics from the I/O pin, package and I/O ring, and included a model of an ESD gun to test the circuit's response to an ESD pulse as specified in IEC 61000-4-2. Multiple detection schemes are proposed. The final detection scheme consists of an event detector and a level sensor. The event detector latches on the presence of an event at a pad, to determine on which pin an event occurred. The level sensor generates current proportional to the level of the event. This current is converted to a voltage and digitized at the A/D converter to be read by the microprocessor. The detection scheme shows good performance in simulations when checked against process variations and different kinds of events.

  17. Event-Triggered Fault Detection of Nonlinear Networked Systems.

    PubMed

    Li, Hongyi; Chen, Ziran; Wu, Ligang; Lam, Hak-Keung; Du, Haiping

    2017-04-01

    This paper investigates the problem of fault detection for nonlinear discrete-time networked systems under an event-triggered scheme. A polynomial fuzzy fault detection filter is designed to generate a residual signal and detect faults in the system. A novel polynomial event-triggered scheme is proposed to determine the transmission of the signal. A fault detection filter is designed to guarantee that the residual system is asymptotically stable and satisfies the desired performance. Polynomial approximated membership functions obtained by Taylor series are employed for filtering analysis. Furthermore, sufficient conditions are represented in terms of sum of squares (SOSs) and can be solved by SOS tools in MATLAB environment. A numerical example is provided to demonstrate the effectiveness of the proposed results.

  18. Factors influencing performance of internet-based biosurveillance systems used in epidemic intelligence for early detection of infectious diseases outbreaks.

    PubMed

    Barboza, Philippe; Vaillant, Laetitia; Le Strat, Yann; Hartley, David M; Nelson, Noele P; Mawudeku, Abla; Madoff, Lawrence C; Linge, Jens P; Collier, Nigel; Brownstein, John S; Astagneau, Pascal

    2014-01-01

    Internet-based biosurveillance systems have been developed to detect health threats using information available on the Internet, but system performance has not been assessed relative to end-user needs and perspectives. Infectious disease events from the French Institute for Public Health Surveillance (InVS) weekly international epidemiological bulletin published in 2010 were used to construct the gold-standard official dataset. Data from six biosurveillance systems were used to detect raw signals (infectious disease events from informal Internet sources): Argus, BioCaster, GPHIN, HealthMap, MedISys and ProMED-mail. Crude detection rates (C-DR), crude sensitivity rates (C-Se) and intrinsic sensitivity rates (I-Se) were calculated from multivariable regressions to evaluate the systems' performance (events detected compared to the gold-standard). 472 raw signals (Internet disease reports) related to the 86 events included in the gold-standard data set were retrieved from the six systems. 84 events were detected before their publication in the gold-standard. The type of sources utilised by the systems varied significantly (p<0.001). I-Se varied significantly from 43% to 71% (p=0.001) whereas other indicators were similar (C-DR: p=0.20; C-Se: p=0.13). I-Se was significantly associated with individual systems, types of system, languages, regions of occurrence, and types of infectious disease. Conversely, no statistical difference of C-DR was observed after adjustment for other variables. Although differences could result from a biosurveillance system's conceptual design, findings suggest that the combined expertise amongst systems enhances early detection performance for detection of infectious diseases. While all systems showed similar early detection performance, systems including human moderation were found to have a 53% higher I-Se (p=0.0001) after adjustment for other variables. Overall, the use of moderation, sources, languages, regions of occurrence, and types of cases were found to influence system performance.

  19. Adaptive Self-Tuning Networks

    NASA Astrophysics Data System (ADS)

    Knox, H. A.; Draelos, T.; Young, C. J.; Lawry, B.; Chael, E. P.; Faust, A.; Peterson, M. G.

    2015-12-01

    The quality of automatic detections from seismic sensor networks depends on a large number of data processing parameters that interact in complex ways. The largely manual process of identifying effective parameters is painstaking and does not guarantee that the resulting controls are the optimal configuration settings. Yet, achieving superior automatic detection of seismic events is closely related to these parameters. We present an automated sensor tuning (AST) system that learns near-optimal parameter settings for each event type using neuro-dynamic programming (reinforcement learning) trained with historic data. AST learns to test the raw signal against all event-settings and automatically self-tunes to an emerging event in real-time. The overall goal is to reduce the number of missed legitimate event detections and the number of false event detections. Reducing false alarms early in the seismic pipeline processing will have a significant impact on this goal. Applicable both for existing sensor performance boosting and new sensor deployment, this system provides an important new method to automatically tune complex remote sensing systems. Systems tuned in this way will achieve better performance than is currently possible by manual tuning, and with much less time and effort devoted to the tuning process. With ground truth on detections in seismic waveforms from a network of stations, we show that AST increases the probability of detection while decreasing false alarms.
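
    The AST system itself uses neuro-dynamic programming trained on historic data; purely as a toy illustration of learning detector parameter settings from detection/false-alarm feedback, an epsilon-greedy scheme over a few candidate settings might look like this (all names and values are hypothetical):

```python
# Illustrative sketch only (not the AST system): epsilon-greedy selection among
# candidate detector parameter settings, rewarding settings that detect
# ground-truth events without false alarms. The "reward" is a simulated stand-in.
import random

candidate_settings = [{"sta_lta": r, "threshold": t}
                      for r in (2.0, 3.0, 4.0) for t in (0.5, 1.0)]
value = {i: 0.0 for i in range(len(candidate_settings))}
counts = {i: 0 for i in range(len(candidate_settings))}

def simulated_reward(setting):
    # Stand-in for "detections minus false alarms" measured on labelled waveforms.
    best = {"sta_lta": 3.0, "threshold": 1.0}
    penalty = (abs(setting["sta_lta"] - best["sta_lta"]) / 4.0
               + abs(setting["threshold"] - best["threshold"]))
    return 1.0 - penalty + random.gauss(0, 0.1)

random.seed(0)
epsilon = 0.1
for step in range(2000):
    if random.random() < epsilon:
        i = random.randrange(len(candidate_settings))
    else:
        i = max(value, key=value.get)
    r = simulated_reward(candidate_settings[i])
    counts[i] += 1
    value[i] += (r - value[i]) / counts[i]      # incremental mean update

print("learned best setting:", candidate_settings[max(value, key=value.get)])
```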

  20. Factors Influencing Performance of Internet-Based Biosurveillance Systems Used in Epidemic Intelligence for Early Detection of Infectious Diseases Outbreaks

    PubMed Central

    Barboza, Philippe; Vaillant, Laetitia; Le Strat, Yann; Hartley, David M.; Nelson, Noele P.; Mawudeku, Abla; Madoff, Lawrence C.; Linge, Jens P.; Collier, Nigel; Brownstein, John S.; Astagneau, Pascal

    2014-01-01

    Background Internet-based biosurveillance systems have been developed to detect health threats using information available on the Internet, but system performance has not been assessed relative to end-user needs and perspectives. Method and Findings Infectious disease events from the French Institute for Public Health Surveillance (InVS) weekly international epidemiological bulletin published in 2010 were used to construct the gold-standard official dataset. Data from six biosurveillance systems were used to detect raw signals (infectious disease events from informal Internet sources): Argus, BioCaster, GPHIN, HealthMap, MedISys and ProMED-mail. Crude detection rates (C-DR), crude sensitivity rates (C-Se) and intrinsic sensitivity rates (I-Se) were calculated from multivariable regressions to evaluate the systems’ performance (events detected compared to the gold-standard). 472 raw signals (Internet disease reports) related to the 86 events included in the gold-standard data set were retrieved from the six systems. 84 events were detected before their publication in the gold-standard. The type of sources utilised by the systems varied significantly (p<0.001). I-Se varied significantly from 43% to 71% (p = 0.001) whereas other indicators were similar (C-DR: p = 0.20; C-Se: p = 0.13). I-Se was significantly associated with individual systems, types of system, languages, regions of occurrence, and types of infectious disease. Conversely, no statistical difference of C-DR was observed after adjustment for other variables. Conclusion Although differences could result from a biosurveillance system's conceptual design, findings suggest that the combined expertise amongst systems enhances early detection performance for detection of infectious diseases. While all systems showed similar early detection performance, systems including human moderation were found to have a 53% higher I-Se (p = 0.0001) after adjustment for other variables. Overall, the use of moderation, sources, languages, regions of occurrence, and types of cases were found to influence system performance. PMID:24599062

  1. Design and Deployment of a Pediatric Cardiac Arrest Surveillance System

    PubMed Central

    Newton, Heather Marie; McNamara, Leann; Engorn, Branden Michael; Jones, Kareen; Bernier, Meghan; Dodge, Pamela; Salamone, Cheryl; Bhalala, Utpal; Jeffers, Justin M.; Engineer, Lilly; Diener-West, Marie; Hunt, Elizabeth Anne

    2018-01-01

    Objective We aimed to increase detection of pediatric cardiopulmonary resuscitation (CPR) events and collection of physiologic and performance data for use in quality improvement (QI) efforts. Materials and Methods We developed a workflow-driven surveillance system that leveraged organizational information technology systems to trigger CPR detection and analysis processes. We characterized detection by notification source, type, location, and year, and compared it to previous methods of detection. Results From 1/1/2013 through 12/31/2015, there were 2,986 unique notifications associated with 2,145 events, 317 requiring CPR. PICU and PEDS-ED accounted for 65% of CPR events, whereas floor care areas were responsible for only 3% of events. 100% of PEDS-OR and >70% of PICU CPR events would not have been included in QI efforts. Performance data from both defibrillator and bedside monitor increased annually. (2013: 1%; 2014: 18%; 2015: 27%). Discussion After deployment of this system, detection has increased ∼9-fold and performance data collection increased annually. Had the system not been deployed, 100% of PEDS-OR and 50–70% of PICU, NICU, and PEDS-ED events would have been missed. Conclusion By leveraging hospital information technology and medical device data, identification of pediatric cardiac arrest with an associated increased capture in the proportion of objective performance data is possible. PMID:29854451

  2. Design and Deployment of a Pediatric Cardiac Arrest Surveillance System.

    PubMed

    Duval-Arnould, Jordan Michel; Newton, Heather Marie; McNamara, Leann; Engorn, Branden Michael; Jones, Kareen; Bernier, Meghan; Dodge, Pamela; Salamone, Cheryl; Bhalala, Utpal; Jeffers, Justin M; Engineer, Lilly; Diener-West, Marie; Hunt, Elizabeth Anne

    2018-01-01

    We aimed to increase detection of pediatric cardiopulmonary resuscitation (CPR) events and collection of physiologic and performance data for use in quality improvement (QI) efforts. We developed a workflow-driven surveillance system that leveraged organizational information technology systems to trigger CPR detection and analysis processes. We characterized detection by notification source, type, location, and year, and compared it to previous methods of detection. From 1/1/2013 through 12/31/2015, there were 2,986 unique notifications associated with 2,145 events, 317 requiring CPR. PICU and PEDS-ED accounted for 65% of CPR events, whereas floor care areas were responsible for only 3% of events. 100% of PEDS-OR and >70% of PICU CPR events would not have been included in QI efforts. Performance data from both defibrillator and bedside monitor increased annually. (2013: 1%; 2014: 18%; 2015: 27%). After deployment of this system, detection has increased ∼9-fold and performance data collection increased annually. Had the system not been deployed, 100% of PEDS-OR and 50-70% of PICU, NICU, and PEDS-ED events would have been missed. By leveraging hospital information technology and medical device data, identification of pediatric cardiac arrest with an associated increased capture in the proportion of objective performance data is possible.

  3. Adverse event detection (AED) system for continuously monitoring and evaluating structural health status

    NASA Astrophysics Data System (ADS)

    Yun, Jinsik; Ha, Dong Sam; Inman, Daniel J.; Owen, Robert B.

    2011-03-01

    Structural damage for spacecraft is mainly due to impacts such as collision of meteorites or space debris. We present a structural health monitoring (SHM) system for space applications, named Adverse Event Detection (AED), which integrates an acoustic sensor, an impedance-based SHM system, and a Lamb wave SHM system. With these three health-monitoring methods in place, we can determine the presence, location, and severity of damage. An acoustic sensor continuously monitors acoustic events, while the impedance-based and Lamb wave SHM systems are in sleep mode. If an acoustic sensor detects an impact, it activates the impedance-based SHM. The impedance-based system determines if the impact incurred damage. When damage is detected, it activates the Lamb wave SHM system to determine the severity and location of the damage. Further, since an acoustic sensor dissipates much less power than the two SHM systems and the two systems are activated only when there is an acoustic event, our system reduces overall power dissipation significantly. Our prototype system demonstrates the feasibility of the proposed concept.
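
    The tiered activation logic described above can be sketched as a simple gating routine, with the three monitoring stages reduced to placeholder functions (names and thresholds are hypothetical, not the AED implementation):

```python
# Illustrative sketch: tiered wake-up of SHM subsystems triggered by an
# acoustic impact detection. The three stages are placeholder functions.
def acoustic_event_detected(sample) -> bool:
    return sample > 0.8                      # hypothetical impact threshold

def impedance_indicates_damage() -> bool:
    return True                              # stand-in for the impedance-based check

def lamb_wave_localize() -> tuple:
    return ("panel_3", "moderate")           # stand-in for location/severity estimate

def process_acoustic_sample(sample):
    """Acoustic sensor runs continuously; the two SHM systems wake only on demand."""
    if not acoustic_event_detected(sample):
        return None                          # SHM systems stay in sleep mode
    if not impedance_indicates_damage():
        return ("impact", "no damage")
    return ("impact",) + lamb_wave_localize()

print(process_acoustic_sample(0.95))
```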

  4. Useful Interplay Between Spontaneous ADR Reports and Electronic Healthcare Records in Signal Detection.

    PubMed

    Pacurariu, Alexandra C; Straus, Sabine M; Trifirò, Gianluca; Schuemie, Martijn J; Gini, Rosa; Herings, Ron; Mazzaglia, Giampiero; Picelli, Gino; Scotti, Lorenza; Pedersen, Lars; Arlett, Peter; van der Lei, Johan; Sturkenboom, Miriam C; Coloma, Preciosa M

    2015-12-01

    Spontaneous reporting systems (SRSs) remain the cornerstone of post-marketing drug safety surveillance despite their well-known limitations. Judicious use of other available data sources is essential to enable better detection, strengthening and validation of signals. In this study, we investigated the potential of electronic healthcare records (EHRs) to be used alongside an SRS as an independent system, with the aim of improving signal detection. A signal detection strategy, focused on a limited set of adverse events deemed important in pharmacovigilance, was performed retrospectively in two data sources: (1) the Exploring and Understanding Adverse Drug Reactions (EU-ADR) database network and (2) the EudraVigilance database, using data between 2000 and 2010. Five events were considered for analysis: (1) acute myocardial infarction (AMI); (2) bullous eruption; (3) hip fracture; (4) acute pancreatitis; and (5) upper gastrointestinal bleeding (UGIB). Potential signals identified in each system were verified using the current published literature. The complementarity of the two systems to detect signals was expressed as the percentage of the unilaterally identified signals out of the total number of confirmed signals. As a proxy for the associated costs, the number of signals that needed to be reviewed to detect one true signal (number needed to detect [NND]) was calculated. The relationship between the background frequency of the events and the capability of each system to detect signals was also investigated. The contribution of each system to signal detection appeared to be correlated with the background incidence of the events, being directly proportional to the incidence in EU-ADR and inversely proportional in EudraVigilance. EudraVigilance was particularly valuable in identifying bullous eruption and acute pancreatitis (71 and 42 % of signals were correctly identified from the total pool of known associations, respectively), while EU-ADR was most useful in identifying hip fractures (60 %). Both systems contributed reasonably well to identification of signals related to UGIB (45 % in EudraVigilance, 40 % in EU-ADR) but only fairly for signals related to AMI (25 % in EU-ADR, 20 % in EudraVigilance). The costs associated with detection of signals were variable across events; however, it was often more costly to detect safety signals in EU-ADR than in EudraVigilance (median NNDs: 7 versus 5). An EHR-based system may have additional value for signal detection, alongside already established systems, especially in the presence of adverse events with a high background incidence. While the SRS appeared to be more cost effective overall, for some events the costs associated with signal detection in the EHR might be justifiable.

  5. The effectiveness of pretreatment physics plan review for detecting errors in radiation therapy.

    PubMed

    Gopan, Olga; Zeng, Jing; Novak, Avrey; Nyflot, Matthew; Ford, Eric

    2016-09-01

    The pretreatment physics plan review is a standard tool for ensuring treatment quality. Studies have shown that the majority of errors in radiation oncology originate in treatment planning, which underscores the importance of the pretreatment physics plan review. This quality assurance measure is fundamentally important and central to the safety of patients and the quality of care that they receive. However, little is known about its effectiveness. The purpose of this study was to analyze reported incidents to quantify the effectiveness of the pretreatment physics plan review with the goal of improving it. This study analyzed 522 potentially severe or critical near-miss events within an institutional incident learning system collected over a three-year period. Of these 522 events, 356 originated at a workflow point that was prior to the pretreatment physics plan review. The remaining 166 events originated after the pretreatment physics plan review and were not considered in the study. The applicable 356 events were classified into one of the three categories: (1) events detected by the pretreatment physics plan review, (2) events not detected but "potentially detectable" by the physics review, and (3) events "not detectable" by the physics review. Potentially detectable events were further classified by which specific checks performed during the pretreatment physics plan review detected or could have detected the event. For these events, the associated specific check was also evaluated as to the possibility of automating that check given current data structures. For comparison, a similar analysis was carried out on 81 events from the international SAFRON radiation oncology incident learning system. Of the 356 applicable events from the institutional database, 180/356 (51%) were detected or could have been detected by the pretreatment physics plan review. Of these events, 125 actually passed through the physics review; however, only 38% (47/125) were actually detected at the review. Of the 81 events from the SAFRON database, 66/81 (81%) were potentially detectable by the pretreatment physics plan review. From the institutional database, three specific physics checks were particularly effective at detecting events (combined effectiveness of 38%): verifying the isocenter (39/180), verifying DRRs (17/180), and verifying that the plan matched the prescription (12/180). The most effective checks from the SAFRON database were verifying that the plan matched the prescription (13/66) and verifying the field parameters in the record and verify system against those in the plan (23/66). Software-based plan checking systems, if available, would have potential effectiveness of 29% and 64% at detecting events from the institutional and SAFRON databases, respectively. Pretreatment physics plan review is a key safety measure and can detect a high percentage of errors. However, the majority of errors that potentially could have been detected were not detected in this study, indicating the need to improve the pretreatment physics plan review performance. Suggestions for improvement include the automation of specific physics checks performed during the pretreatment physics plan review and the standardization of the review process.

  6. An islanding detection methodology combining decision trees and Sandia frequency shift for inverter-based distributed generations

    DOE PAGES

    Azim, Riyasat; Li, Fangxing; Xue, Yaosuo; ...

    2017-07-14

    Distributed generations (DGs) for grid-connected applications require an accurate and reliable islanding detection methodology (IDM) for secure system operation. This paper presents an IDM for grid-connected inverter-based DGs. The proposed method is a combination of passive and active islanding detection techniques for aggregation of their advantages and elimination/minimisation of the drawbacks. In the proposed IDM, the passive method utilises critical system attributes extracted from local voltage measurements at target DG locations as well as employs decision tree-based classifiers for characterisation and detection of islanding events. The active method is based on Sandia frequency shift technique and is initiated only when the passive method is unable to differentiate islanding events from other system events. Thus, the power quality degradation introduced into the system by active islanding detection techniques can be minimised. Furthermore, a combination of active and passive techniques allows detection of islanding events under low power mismatch scenarios eliminating the disadvantage associated with the use of passive techniques alone. Finally, detailed case study results demonstrate the effectiveness of the proposed method in detection of islanding events under various power mismatch scenarios, load quality factors and in the presence of single or multiple grid-connected inverter-based DG units.
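
    A rough sketch of the passive/active split described above is given below, with a decision tree over hypothetical locally measured features and a placeholder standing in for the Sandia frequency shift test; it is not the paper's method or feature set:

```python
# Illustrative sketch: passive decision-tree stage over local voltage features,
# falling back to an active test only when the tree is not confident.
# Feature values and the active test are synthetic placeholders.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(6)
# Hypothetical features: [voltage magnitude deviation, frequency deviation, THD]
X = np.vstack([rng.normal([0.02, 0.01, 0.03], 0.01, size=(200, 3)),   # other events
               rng.normal([0.15, 0.40, 0.10], 0.05, size=(200, 3))])  # islanding events
y = np.array([0] * 200 + [1] * 200)
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)

def run_active_frequency_shift_test(features):
    return "islanding (confirmed by active test)"       # placeholder for the active stage

def classify(features, confidence_cut=0.9):
    proba = tree.predict_proba([features])[0]
    if proba.max() >= confidence_cut:
        return "islanding" if proba.argmax() == 1 else "other event"
    return run_active_frequency_shift_test(features)    # only when the passive stage is unsure

print(classify([0.14, 0.38, 0.09]))
```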

  7. An islanding detection methodology combining decision trees and Sandia frequency shift for inverter-based distributed generations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Azim, Riyasat; Li, Fangxing; Xue, Yaosuo

    Distributed generations (DGs) for grid-connected applications require an accurate and reliable islanding detection methodology (IDM) for secure system operation. This paper presents an IDM for grid-connected inverter-based DGs. The proposed method is a combination of passive and active islanding detection techniques for aggregation of their advantages and elimination/minimisation of the drawbacks. In the proposed IDM, the passive method utilises critical system attributes extracted from local voltage measurements at target DG locations as well as employs decision tree-based classifiers for characterisation and detection of islanding events. The active method is based on Sandia frequency shift technique and is initiated only when the passive method is unable to differentiate islanding events from other system events. Thus, the power quality degradation introduced into the system by active islanding detection techniques can be minimised. Furthermore, a combination of active and passive techniques allows detection of islanding events under low power mismatch scenarios eliminating the disadvantage associated with the use of passive techniques alone. Finally, detailed case study results demonstrate the effectiveness of the proposed method in detection of islanding events under various power mismatch scenarios, load quality factors and in the presence of single or multiple grid-connected inverter-based DG units.

  8. Adaptively Adjusted Event-Triggering Mechanism on Fault Detection for Networked Control Systems.

    PubMed

    Wang, Yu-Long; Lim, Cheng-Chew; Shi, Peng

    2016-12-08

    This paper studies the problem of adaptively adjusted event-triggering mechanism-based fault detection for a class of discrete-time networked control system (NCS) with applications to aircraft dynamics. By taking into account the fault occurrence detection progress and the fault occurrence probability, and introducing an adaptively adjusted event-triggering parameter, a novel event-triggering mechanism is proposed to achieve the efficient utilization of the communication network bandwidth. Both the sensor-to-control station and the control station-to-actuator network-induced delays are taken into account. The event-triggered sensor and the event-triggered control station are utilized simultaneously to establish new network-based closed-loop models for the NCS subject to faults. Based on the established models, the event-triggered simultaneous design of fault detection filter (FDF) and controller is presented. A new algorithm for handling the adaptively adjusted event-triggering parameter is proposed. Performance analysis verifies the effectiveness of the adaptively adjusted event-triggering mechanism, and the simultaneous design of FDF and controller.

  9. Human visual system-based smoking event detection

    NASA Astrophysics Data System (ADS)

    Odetallah, Amjad D.; Agaian, Sos S.

    2012-06-01

    Human action (e.g. smoking, eating, and phoning) analysis is an important task in various application domains like video surveillance, video retrieval, human-computer interaction systems, and so on. Smoke detection is a crucial task in many video surveillance applications and could have a great impact on raising the level of safety of urban areas, public parks, airplanes, hospitals, schools and others. The detection task is challenging since there is no prior knowledge about the object's shape, texture and color. In addition, its visual features will change under different lighting and weather conditions. This paper presents a new scheme of a system for detecting human smoking events, or small smoke, in a sequence of images. In the developed system, motion detection and background subtraction are combined with motion-region-saving, skin-based image segmentation, and smoke-based image segmentation to capture potential smoke regions, which are further analyzed to decide on the occurrence of smoking events. Experimental results show the effectiveness of the proposed approach. The developed method is also capable of detecting small smoking events involving uncertain actions and various cigarette sizes, colors, and shapes.

  10. The effectiveness of pretreatment physics plan review for detecting errors in radiation therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gopan, Olga; Zeng, Jing; Novak, Avrey

    Purpose: The pretreatment physics plan review is a standard tool for ensuring treatment quality. Studies have shown that the majority of errors in radiation oncology originate in treatment planning, which underscores the importance of the pretreatment physics plan review. This quality assurance measure is fundamentally important and central to the safety of patients and the quality of care that they receive. However, little is known about its effectiveness. The purpose of this study was to analyze reported incidents to quantify the effectiveness of the pretreatment physics plan review with the goal of improving it. Methods: This study analyzed 522 potentially severe or critical near-miss events within an institutional incident learning system collected over a three-year period. Of these 522 events, 356 originated at a workflow point that was prior to the pretreatment physics plan review. The remaining 166 events originated after the pretreatment physics plan review and were not considered in the study. The applicable 356 events were classified into one of the three categories: (1) events detected by the pretreatment physics plan review, (2) events not detected but “potentially detectable” by the physics review, and (3) events “not detectable” by the physics review. Potentially detectable events were further classified by which specific checks performed during the pretreatment physics plan review detected or could have detected the event. For these events, the associated specific check was also evaluated as to the possibility of automating that check given current data structures. For comparison, a similar analysis was carried out on 81 events from the international SAFRON radiation oncology incident learning system. Results: Of the 356 applicable events from the institutional database, 180/356 (51%) were detected or could have been detected by the pretreatment physics plan review. Of these events, 125 actually passed through the physics review; however, only 38% (47/125) were actually detected at the review. Of the 81 events from the SAFRON database, 66/81 (81%) were potentially detectable by the pretreatment physics plan review. From the institutional database, three specific physics checks were particularly effective at detecting events (combined effectiveness of 38%): verifying the isocenter (39/180), verifying DRRs (17/180), and verifying that the plan matched the prescription (12/180). The most effective checks from the SAFRON database were verifying that the plan matched the prescription (13/66) and verifying the field parameters in the record and verify system against those in the plan (23/66). Software-based plan checking systems, if available, would have potential effectiveness of 29% and 64% at detecting events from the institutional and SAFRON databases, respectively. Conclusions: Pretreatment physics plan review is a key safety measure and can detect a high percentage of errors. However, the majority of errors that potentially could have been detected were not detected in this study, indicating the need to improve the pretreatment physics plan review performance. Suggestions for improvement include the automation of specific physics checks performed during the pretreatment physics plan review and the standardization of the review process.

  11. Artificial Neural Network applied to lightning flashes

    NASA Astrophysics Data System (ADS)

    Gin, R. B.; Guedes, D.; Bianchi, R.

    2013-05-01

    The development of video cameras has enabled scientists to study the behavior of lightning discharges with more precision. The main goal of this project is to create a system able to detect images of lightning discharges stored in videos and classify them using an Artificial Neural Network (ANN), implemented in the C language with OpenCV libraries. The developed system can be split into two modules: a detection module and a classification module. The detection module uses OpenCV's computer vision libraries and image processing techniques to detect whether there are significant differences between frames in a sequence, indicating that something, still not classified, occurred. Whenever there is a significant difference between two consecutive frames, two main algorithms are used to analyze the frame image: a brightness algorithm and a shape algorithm. These algorithms detect both the shape and the brightness of the event, removing irrelevant events like birds, as well as detecting the relevant event's exact position, allowing the system to track it over time. The classification module uses a neural network to classify the relevant events as horizontal or vertical lightning, save the event's images, and calculate its number of discharges. The neural network was implemented using the backpropagation algorithm and was trained with 42 training images, containing 57 lightning events (one image can have more than one lightning event). The ANN was tested with one to five hidden layers, with up to 50 neurons each. The best configuration achieved a success rate of 95%, with one layer containing 20 neurons (33 test images with 42 events were used in this phase). This configuration was implemented in the developed system to analyze 20 video files, containing 63 lightning discharges previously detected manually. Results showed that all the lightning discharges were detected, many irrelevant events were discarded, and each event's number of discharges was correctly computed. The neural network used in this project achieved a success rate of 90%. The videos used in this experiment were acquired by seven video cameras installed in São Bernardo do Campo, Brazil, that continuously recorded lightning events during the summer. The cameras were arranged in a 360° loop, recording all data at a time resolution of 33 ms. During this period, several convective storms were recorded.
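
    A crude frame-differencing and brightness test of the kind the detection module performs can be sketched in Python with OpenCV (the authors' implementation is in C; the file name and thresholds here are hypothetical):

```python
# Illustrative sketch: flag frames that differ strongly from the previous frame
# and contain a bright region, as a crude stand-in for the detection module.
import cv2

cap = cv2.VideoCapture("storm_video.avi")        # hypothetical input file
ok, prev = cap.read()
if not ok:
    raise SystemExit("could not open or read the video file")
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
frame_idx = 0

while True:
    ok, frame = cap.read()
    if not ok:
        break
    frame_idx += 1
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    diff = cv2.absdiff(gray, prev_gray)
    changed = cv2.countNonZero(cv2.threshold(diff, 40, 255, cv2.THRESH_BINARY)[1])
    bright = cv2.countNonZero(cv2.threshold(gray, 220, 255, cv2.THRESH_BINARY)[1])
    if changed > 500 and bright > 50:            # crude change + brightness test
        print(f"candidate lightning event at frame {frame_idx}")
        # a real system would crop the region and pass it to the ANN classifier
    prev_gray = gray

cap.release()
```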

  12. Spatiotemporal Detection of Unusual Human Population Behavior Using Mobile Phone Data

    PubMed Central

    Dobra, Adrian; Williams, Nathalie E.; Eagle, Nathan

    2015-01-01

    With the aim to contribute to humanitarian response to disasters and violent events, scientists have proposed the development of analytical tools that could identify emergency events in real-time, using mobile phone data. The assumption is that dramatic and discrete changes in behavior, measured with mobile phone data, will indicate extreme events. In this study, we propose an efficient system for spatiotemporal detection of behavioral anomalies from mobile phone data and compare sites with behavioral anomalies to an extensive database of emergency and non-emergency events in Rwanda. Our methodology successfully captures anomalous behavioral patterns associated with a broad range of events, from religious and official holidays to earthquakes, floods, violence against civilians and protests. Our results suggest that human behavioral responses to extreme events are complex and multi-dimensional, including extreme increases and decreases in both calling and movement behaviors. We also find significant temporal and spatial variance in responses to extreme events. Our behavioral anomaly detection system and extensive discussion of results are a significant contribution to the long-term project of creating an effective real-time event detection system with mobile phone data and we discuss the implications of our findings for future research to this end. PMID:25806954

  13. Towards a global flood detection system using social media

    NASA Astrophysics Data System (ADS)

    de Bruijn, Jens; de Moel, Hans; Jongman, Brenden; Aerts, Jeroen

    2017-04-01

    It is widely recognized that an early warning is critical in improving international disaster response. Analysis of social media in real-time can provide valuable information about an event or help to detect unexpected events. For successful and reliable detection systems that work globally, it is important that sufficient data is available and that the algorithm works both in data-rich and data-poor environments. In this study, both a new geotagging system and multi-level event detection system for flood hazards was developed using Twitter data. Geotagging algorithms that regard one tweet as a single document are well-studied. However, no algorithms exist that combine several sequential tweets mentioning keywords regarding a specific event type. Within the time frame of an event, multiple users use event related keywords that refer to the same place name. This notion allows us to treat several sequential tweets posted in the last 24 hours as one document. For all these tweets, we collect a series of spatial indicators given in the tweet metadata and extract additional topological indicators from the text. Using these indicators, we can reduce ambiguity and thus better estimate what locations are tweeted about. Using these localized tweets, Bayesian change-point analysis is used to find significant increases of tweets mentioning countries, provinces or towns. In data-poor environments detection of events on a country level is possible, while in other, data-rich, environments detection on a city level is achieved. Additionally, on a city-level we analyse the spatial dependence of mentioned places. If multiple places within a limited spatial extent are mentioned, detection confidence increases. We run the algorithm using 2 years of Twitter data with flood related keywords in 13 major languages and validate against a flood event database. We find that the geotagging algorithm yields significantly more data than previously developed algorithms and successfully deals with ambiguous place names. In addition, we show that our detection system can both quickly and reliably detect floods, even in countries where data is scarce, while achieving high detail in countries where more data is available.
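
    The paper uses Bayesian change-point analysis; purely as an illustration of flagging a significant increase in localized tweet counts, a simpler Poisson baseline test might look like this (synthetic counts and an arbitrary significance cut, not the authors' model):

```python
# Illustrative sketch: flag an hour in which the count of flood-related tweets
# mentioning one place name is improbably high relative to a trailing baseline,
# assuming Poisson counts. Data are synthetic.
import numpy as np
from scipy.stats import poisson

rng = np.random.default_rng(7)
hourly_counts = rng.poisson(2, size=72).astype(float)   # 3 days of background chatter
hourly_counts[60:] += rng.poisson(15, size=12)          # simulated flood event

baseline_window = 24
for t in range(baseline_window, len(hourly_counts)):
    lam = hourly_counts[t - baseline_window:t].mean() + 1e-9
    # Probability of seeing a count at least this large under the baseline rate.
    p = poisson.sf(hourly_counts[t] - 1, lam)
    if p < 1e-4:                                         # hypothetical significance cut
        print(f"possible event at hour {t} (count={int(hourly_counts[t])}, p={p:.1e})")
```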

  14. Three-axis asymmetric radiation detector system

    DOEpatents

    Martini, Mario Pierangelo; Gedcke, Dale A.; Raudorf, Thomas W.; Sangsingkeow, Pat

    2000-01-01

    A three-axis radiation detection system whose inner and outer electrodes are shaped and positioned so that the shortest path between any point on the inner electrode and the outer electrode is a different length, whereby the rise time of a pulse derived from a detected radiation event can uniquely define the azimuthal and radial position of that event, and the outer electrode is divided into a plurality of segments in the longitudinal axial direction for determining the axial location of a radiation detection event occurring in the diode.

  15. Wireless battery management control and monitoring system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zumstein, James M.; Chang, John T.; Farmer, Joseph C.

    A battery management system using a sensor inside the battery. The sensor enables monitoring and detection of various events in the battery and transmission of a signal from the sensor through the battery casing to a control and data acquisition module by wireless transmission. The detection of threshold events in the battery enables remedial action to be taken to avoid catastrophic events.

  16. Detecting NEO Impacts using the International Monitoring System

    NASA Astrophysics Data System (ADS)

    Brown, Peter G.; Dube, Kimberlee; Silber, Elizabeth

    2014-11-01

    As part of the verification regime for the Comprehensive Nuclear Test Ban Treaty, an International Monitoring System (IMS) consisting of seismic, hydroacoustic, infrasound and radionuclide technologies has been globally deployed beginning in the late 1990s. The infrasound network sub-component of the IMS consists of 47 active stations as of mid-2014. These microbarograph arrays detect coherent infrasonic signals from a range of sources including volcanoes, man-made explosions and bolides. Bolide detections from IMS stations have been reported since ~2000, but with the maturation of the network over the last several years the rate of detections has increased substantially. Presently the IMS performs semi-automated near real-time global event identification on timescales of 6-12 hours as well as analyst-verified event identification having time lags of several weeks. Here we report on infrasound events identified by the IMS between 2010-2014 which are likely bolide impacts. Identification in this context refers to an event being included in one of the event bulletins issued by the IMS. In this untargeted study we find that the IMS globally identifies approximately 16 events per year which are likely bolide impacts. Using data released since the beginning of 2014 of US Government sensor detections (as given at http://neo.jpl.nasa.gov/fireballs/ ) of fireballs we find in a complementary targeted survey that the current IMS system is able to identify ~25% of fireballs with E > 0.1 kT energy. Using all 16 US Government sensor fireballs listed as of July 31, 2014 we are able to detect infrasound from 75% of these events on at least one IMS station. The high ratio of detection/identification is a product of the stricter criteria adopted by the IMS for inclusion in an event bulletin as compared to simple station detection. We discuss energy comparisons between infrasound-estimated energies based on amplitudes and periods and estimates provided by US Government sensors. Specific impact events of interest will be discussed as well as the utility of the global IMS infrasound system for location and timing of future NEAs detected prior to impact.

  17. Rendering visual events as sounds: Spatial attention capture by auditory augmented reality.

    PubMed

    Stone, Scott A; Tata, Matthew S

    2017-01-01

    Many salient visual events tend to coincide with auditory events, such as seeing and hearing a car pass by. Information from the visual and auditory senses can be used to create a stable percept of the stimulus. Having access to related coincident visual and auditory information can help for spatial tasks such as localization. However, not all visual information has analogous auditory percepts, such as viewing a computer monitor. Here, we describe a system capable of detecting and augmenting visual salient events into localizable auditory events. The system uses a neuromorphic camera (DAVIS 240B) to detect logarithmic changes of brightness intensity in the scene, which can be interpreted as salient visual events. Participants were blindfolded and asked to use the device to detect new objects in the scene, as well as determine direction of motion for a moving visual object. Results suggest the system is robust enough to allow for the simple detection of new salient stimuli, as well as accurately encoding direction of visual motion. Future successes are probable as neuromorphic devices are likely to become faster and smaller in the future, making this system much more feasible.

  18. Rendering visual events as sounds: Spatial attention capture by auditory augmented reality

    PubMed Central

    Tata, Matthew S.

    2017-01-01

    Many salient visual events tend to coincide with auditory events, such as seeing and hearing a car pass by. Information from the visual and auditory senses can be used to create a stable percept of the stimulus. Having access to related coincident visual and auditory information can help for spatial tasks such as localization. However, not all visual information has analogous auditory percepts, such as viewing a computer monitor. Here, we describe a system capable of detecting and augmenting visual salient events into localizable auditory events. The system uses a neuromorphic camera (DAVIS 240B) to detect logarithmic changes of brightness intensity in the scene, which can be interpreted as salient visual events. Participants were blindfolded and asked to use the device to detect new objects in the scene, as well as determine direction of motion for a moving visual object. Results suggest the system is robust enough to allow for the simple detection of new salient stimuli, as well as accurately encoding direction of visual motion. Future successes are probable as neuromorphic devices are likely to become faster and smaller in the future, making this system much more feasible. PMID:28792518

  19. Event Detection Challenges, Methods, and Applications in Natural and Artificial Systems

    DTIC Science & Technology

    2009-03-01

    using the composite event detection method [Kerman, Jiang, Blumberg, and Buttrey, 2009]. Although the techniques and utility of the...aforementioned method have been clearly demonstrated, there is still much work and research to be conducted within the realm of event detection. This...detection methods. The paragraphs that follow summarize the discoveries of and lessons learned by multiple researchers and authors over many

  20. Development of a database and processing method for detecting hematotoxicity adverse drug events.

    PubMed

    Shimai, Yoshie; Takeda, Toshihiro; Manabe, Shirou; Teramoto, Kei; Mihara, Naoki; Matsumura, Yasushi

    2015-01-01

    Adverse events are detected by monitoring the patient's status, including blood test results. However, it is difficult to identify all adverse events based on recognition by individual doctors. We developed a system that can be used to detect hematotoxicity adverse events according to blood test results recorded in an electronic medical record system. The blood test results were graded based on Common Terminology Criteria for Adverse Events (CTCAE) and changes in the blood test results (Up, Down, Flat) were assessed according to the variation in the grade. The changes in the blood test and injection data were stored in a database. By comparing the date of injection and start and end dates of the change in the blood test results, adverse events related to a designated drug were detected. Using this method, we searched for the occurrence of serious adverse events (CTCAE Grades 3 or 4) concerning WBC, ALT and creatinine related to paclitaxel at Osaka University Hospital. The rate of occurrence of a decreased WBC count, increased ALT level and increased creatinine level was 36.0%, 0.6% and 0.4%, respectively. This method is useful for detecting and estimating the rate of occurrence of hematotoxicity adverse drug events.
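    A minimal sketch of the grade-and-track idea, assuming illustrative CTCAE-style cut points for a decreased WBC count (the exact grading table, data model and drug-exposure matching used by the authors are not reproduced):

```python
# Hedged sketch of the grade-and-track idea described above. The WBC grade cut
# points below are illustrative approximations of CTCAE-style thresholds, not
# the exact table used by the authors.
from datetime import date

def wbc_grade(value_10e9_per_L, lln=4.0):
    """Map a white blood cell count to an approximate CTCAE-like grade."""
    v = value_10e9_per_L
    if v < 1.0:  return 4
    if v < 2.0:  return 3
    if v < 3.0:  return 2
    if v < lln:  return 1
    return 0

def change_segments(results):
    """results: list of (date, value). Return (start, end, direction, grade)
    segments where the grade went Up, Down, or stayed Flat between tests."""
    graded = [(d, wbc_grade(v)) for d, v in sorted(results)]
    segs = []
    for (d0, g0), (d1, g1) in zip(graded, graded[1:]):
        direction = "Up" if g1 > g0 else "Down" if g1 < g0 else "Flat"
        segs.append((d0, d1, direction, g1))
    return segs

def serious_events_after(injection_date, results, min_grade=3):
    """Flag grade >= min_grade worsening that ends on/after an injection date."""
    return [s for s in change_segments(results)
            if s[2] == "Up" and s[3] >= min_grade and s[1] >= injection_date]

if __name__ == "__main__":
    tests = [(date(2015, 1, 5), 5.2), (date(2015, 1, 12), 2.4),
             (date(2015, 1, 19), 0.9), (date(2015, 2, 2), 4.8)]
    print(serious_events_after(date(2015, 1, 6), tests))
```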

  1. Real-Time Event Detection for Monitoring Natural and Source Waterways - Sacramento, CA

    EPA Science Inventory

    The use of event detection systems in finished drinking water systems is increasing in order to monitor water quality in both operational and security contexts. Recent incidents involving harmful algal blooms and chemical spills into watersheds have increased interest in monitori...

  2. Event Detection in Aerospace Systems using Centralized Sensor Networks: A Comparative Study of Several Methodologies

    NASA Technical Reports Server (NTRS)

    Mehr, Ali Farhang; Sauvageon, Julien; Agogino, Alice M.; Tumer, Irem Y.

    2006-01-01

    Recent advances in micro electromechanical systems technology, digital electronics, and wireless communications have enabled development of low-cost, low-power, multifunctional miniature smart sensors. These sensors can be deployed throughout a region in an aerospace vehicle to build a network for measurement, detection and surveillance applications. Event detection using such centralized sensor networks is often regarded as one of the most promising health management technologies in aerospace applications where timely detection of local anomalies has a great impact on the safety of the mission. In this paper, we propose to conduct a qualitative comparison of several local event detection algorithms for centralized redundant sensor networks. The algorithms are compared with respect to their ability to locate and evaluate an event in the presence of noise and sensor failures for various node geometries and densities.

  3. Time until diagnosis of clinical events with different remote monitoring systems in Implantable Cardioverter-Defibrillator patients.

    PubMed

    Söth-Hansen, Malene; Witt, Christoffer Tobias; Rasmussen, Mathis; Kristensen, Jens; Gerdes, Christian; Nielsen, Jens Cosedis

    2018-05-24

    Remote monitoring (RM) is an established technology integrated into routine follow-up of patients with implantable cardioverter-defibrillator (ICD). Current RM systems differ according to transmission frequency and alert definition. We aimed to compare time difference between detection and acknowledgement of clinically relevant events between four RM systems. We analyzed time delay between detection of ventricular arrhythmic and technical events by the ICD and acknowledgement by hospital staff in 1,802 consecutive patients followed with RM during September 2014 - August 2016. Devices from Biotronik (BIO, n=374), Boston Scientific (BSC, n=196), Medtronic (MDT, n=468) and St Jude Medical (SJM, n=764) were included. We identified all events from RM webpages and their acknowledgement with RM or at in-clinic follow-up. Events occurring during weekends were excluded. We included 3,472 events. Proportion of events acknowledged within 24 hours was 72%, 23%, 18% and 65% with BIO, BSC, MDT and SJM, respectively, with median times of 13, 222, 163 and 18 hours from detection to acknowledgement (p<0.001 for both comparisons between manufacturers). Including only events transmitted as alerts by RM, 72%, 68%, 61% and 65% for BIO, BSC, MDT and SJM, respectively were acknowledged within 24 hours. Variation in time to acknowledgement of ventricular tachyarrhythmia episodes not treated with shock therapy was the primary cause for the difference between manufacturers. Significant and clinically relevant differences in time delay from event detection to acknowledgement exist between RM systems. Varying definitions of which events RM transmits as alerts are important for the differences observed. Copyright © 2018. Published by Elsevier Inc.

  4. A novel real-time health monitoring system for unmanned vehicles

    NASA Astrophysics Data System (ADS)

    Zhang, David C.; Ouyang, Lien; Qing, Peter; Li, Irene

    2008-04-01

    Real-time monitoring of the status of in-service structures such as unmanned vehicles can provide invaluable information for detecting damage to the structures in time. The unmanned vehicles can then be maintained and repaired promptly if such damage is found. One typical cause of damage to unmanned vehicles is impact, from bumping into obstacles or being hit by objects such as hostile fire. This paper introduces a novel impact event sensing system that can detect the location of the impact events and the force-time history of the impact events. The system consists of the piezoelectric sensor network, the hardware platform and the analysis software. The new customized battery-powered impact event sensing system supports up to 64-channel parallel data acquisition. It features an innovative low-power hardware trigger circuit that monitors 64 channels simultaneously. The system is in the sleep mode most of the time. When an impact event happens, the system will wake up in microseconds and detect the impact location and corresponding force-time history. The system can be combined with the SMART sensing system to further evaluate the impact damage severity.

  5. Alternatives for Laboratory Measurement of Aerosol Samples from the International Monitoring System of the CTBT

    NASA Astrophysics Data System (ADS)

    Miley, H.; Forrester, J. B.; Greenwood, L. R.; Keillor, M. E.; Eslinger, P. W.; Regmi, R.; Biegalski, S.; Erikson, L. E.

    2013-12-01

    The aerosol samples taken from the CTBT International Monitoring System stations are measured in the field with a minimum detectable concentration (MDC) of ~30 microBq/m3 of Ba-140. This is sufficient to detect far less than 1 kt of aerosol fission products in the atmosphere when the station is in the plume from such an event. Recent thinking about minimizing the potential source region (PSR) from a detection has led to a desire for a multi-station or multi-time period detection. These would be connected through the concept of 'event formation', analogous to event formation in seismic event study. However, to form such events, samples from the nearest neighbors of the detection would require re-analysis with a more sensitive laboratory to gain a substantially lower MDC, and potentially find radionuclide concentrations undetected by the station. The authors will present recent laboratory work with air filters showing various cost-effective means for enhancing laboratory sensitivity.

  6. Multi-Station Broad Regional Event Detection Using Waveform Correlation

    NASA Astrophysics Data System (ADS)

    Slinkard, M.; Stephen, H.; Young, C. J.; Eckert, R.; Schaff, D. P.; Richards, P. G.

    2013-12-01

    Previous waveform correlation studies have established the occurrence of repeating seismic events in various regions, and the utility of waveform-correlation event-detection on broad regional or even global scales to find events currently not included in traditionally-prepared bulletins. The computational burden, however, is high, limiting previous experiments to relatively modest template libraries and/or processing time periods. We have developed a distributed computing waveform correlation event detection utility that allows us to process years of continuous waveform data with template libraries numbering in the thousands. We have used this system to process several years of waveform data from IRIS stations in East Asia, using libraries of template events taken from global and regional bulletins. Detections at a given station are confirmed by 1) comparison with independent bulletins of seismicity, and 2) consistent detections at other stations. We find that many of the detected events are not in traditional catalogs, hence the multi-station comparison is essential. In addition to detecting the similar events, we also estimate magnitudes very precisely based on comparison with the template events (when magnitudes are available). We have investigated magnitude variation within detected families of similar events, false alarm rates, and the temporal and spatial reach of templates.
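    The core station-level operation of such a detector can be sketched as sliding a template event over continuous data and declaring a detection wherever the normalized cross-correlation exceeds a threshold; multi-station confirmation, magnitude estimation and the distributed-computing machinery described above are omitted, and the threshold value is an assumption:

```python
# Illustrative sketch of the core operation in a waveform-correlation detector:
# slide a template event over continuous data and declare a detection wherever
# the normalized cross-correlation exceeds a threshold.
import numpy as np

def correlation_detections(trace, template, threshold=0.8):
    n = len(template)
    t = template - template.mean()
    t_norm = np.linalg.norm(t) + 1e-12
    cc = np.empty(len(trace) - n + 1)
    for i in range(len(cc)):
        w = trace[i:i + n]
        w = w - w.mean()
        cc[i] = np.dot(w, t) / (np.linalg.norm(w) * t_norm + 1e-12)
    return np.where(cc >= threshold)[0], cc

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    template = np.sin(np.linspace(0, 12 * np.pi, 200)) * np.hanning(200)
    trace = rng.normal(0, 0.3, 5000)
    trace[1000:1200] += template          # buried repeat of the template event
    hits, cc = correlation_detections(trace, template)
    print("detections near sample:", hits[:5], "peak cc:", cc.max().round(2))
```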

  7. Piecing together the puzzle: Improving event content coverage for real-time sub-event detection using adaptive microblog crawling

    PubMed Central

    Tokarchuk, Laurissa; Wang, Xinyue; Poslad, Stefan

    2017-01-01

    In an age when people are predisposed to report real-world events through their social media accounts, many researchers value the benefits of mining user-generated content from social media. Compared with the traditional news media, social media services, such as Twitter, can provide more complete and timely information about the real-world events. However, events are often like a puzzle, and in order to solve the puzzle/understand the event, we must identify all the sub-events or pieces. Existing Twitter event monitoring systems for sub-event detection and summarization typically analyse events based on partial data, as conventional data collection methodologies are unable to collect comprehensive event data. This results in existing systems often being unable to report sub-events in real-time and often in completely missing sub-events or pieces in the broader event puzzle. This paper proposes a Sub-event detection by real-TIme Microblog monitoring (STRIM) framework that leverages the temporal feature of an expanded set of news-worthy event content. In order to more comprehensively and accurately identify sub-events this framework first proposes the use of adaptive microblog crawling. Our adaptive microblog crawler is capable of increasing the coverage of events while minimizing the amount of non-relevant content. We then propose a stream division methodology that can be accomplished in real time so that the temporal features of the expanded event streams can be analysed by a burst detection algorithm. In the final steps of the framework, the content features are extracted from each divided stream and recombined to provide a final summarization of the sub-events. The proposed framework is evaluated against traditional event detection using event recall and event precision metrics. Results show that improving the quality and coverage of event contents contributes to better event detection by identifying additional valid sub-events. The novel combination of our proposed adaptive crawler and our stream division/recombination technique provides significant gains in event recall (44.44%) and event precision (9.57%). The addition of these sub-events or pieces allows us to get closer to solving the event puzzle. PMID:29107976

  8. Piecing together the puzzle: Improving event content coverage for real-time sub-event detection using adaptive microblog crawling.

    PubMed

    Tokarchuk, Laurissa; Wang, Xinyue; Poslad, Stefan

    2017-01-01

    In an age when people are predisposed to report real-world events through their social media accounts, many researchers value the benefits of mining user-generated content from social media. Compared with the traditional news media, social media services, such as Twitter, can provide more complete and timely information about the real-world events. However, events are often like a puzzle, and in order to solve the puzzle/understand the event, we must identify all the sub-events or pieces. Existing Twitter event monitoring systems for sub-event detection and summarization typically analyse events based on partial data, as conventional data collection methodologies are unable to collect comprehensive event data. This results in existing systems often being unable to report sub-events in real-time and often in completely missing sub-events or pieces in the broader event puzzle. This paper proposes a Sub-event detection by real-TIme Microblog monitoring (STRIM) framework that leverages the temporal feature of an expanded set of news-worthy event content. In order to more comprehensively and accurately identify sub-events this framework first proposes the use of adaptive microblog crawling. Our adaptive microblog crawler is capable of increasing the coverage of events while minimizing the amount of non-relevant content. We then propose a stream division methodology that can be accomplished in real time so that the temporal features of the expanded event streams can be analysed by a burst detection algorithm. In the final steps of the framework, the content features are extracted from each divided stream and recombined to provide a final summarization of the sub-events. The proposed framework is evaluated against traditional event detection using event recall and event precision metrics. Results show that improving the quality and coverage of event contents contributes to better event detection by identifying additional valid sub-events. The novel combination of our proposed adaptive crawler and our stream division/recombination technique provides significant gains in event recall (44.44%) and event precision (9.57%). The addition of these sub-events or pieces allows us to get closer to solving the event puzzle.
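    The burst-detection step on a divided stream can be sketched with a simple trailing-statistics rule (an assumed stand-in for illustration, not the STRIM algorithm itself): a time window is marked as bursting when its tweet count exceeds the recent mean by several standard deviations.

```python
# Hedged sketch of burst detection on a divided tweet stream: a window is
# marked as bursting when its count exceeds the trailing mean by k standard
# deviations. The window size, history length and k are assumptions.
from collections import deque
import statistics

def burst_windows(counts, history=12, k=3.0):
    """counts: tweet counts per fixed time window. Yields bursting indices."""
    recent = deque(maxlen=history)
    for i, c in enumerate(counts):
        if len(recent) >= 3:
            mu = statistics.mean(recent)
            sigma = statistics.pstdev(recent) or 1.0
            if c > mu + k * sigma:
                yield i
        recent.append(c)

if __name__ == "__main__":
    stream = [4, 5, 3, 6, 4, 5, 4, 40, 55, 6, 5, 4]   # burst at windows 7-8
    print(list(burst_windows(stream)))
```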

  9. Large-Scale Test of Dynamic Correlation Processors: Implications for Correlation-Based Seismic Pipelines

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dodge, D. A.; Harris, D. B.

    Correlation detectors are of considerable interest to the seismic monitoring communities because they offer reduced detection thresholds and combine detection, location and identification functions into a single operation. They appear to be ideal for applications requiring screening of frequent repeating events. However, questions remain about how broadly empirical correlation methods are applicable. We describe the effectiveness of banks of correlation detectors in a system that combines traditional power detectors with correlation detectors in terms of efficiency, which we define to be the fraction of events detected by the correlators. This paper elaborates and extends the concept of a dynamic correlation detection framework – a system which autonomously creates correlation detectors from event waveforms detected by power detectors; and reports observed performance on a network of arrays in terms of efficiency. We performed a large scale test of dynamic correlation processors on an 11 terabyte global dataset using 25 arrays in the single frequency band 1-3 Hz. The system found over 3.2 million unique signals and produced 459,747 screened detections. A very satisfying result is that, on average, efficiency grows with time and, after nearly 16 years of operation, exceeds 47% for events observed over all distance ranges and approaches 70% for near regional and 90% for local events. This observation suggests that future pipeline architectures should make extensive use of correlation detectors, principally for decluttering observations of local and near-regional events. Our results also suggest that future operations based on correlation detection will require commodity large-scale computing infrastructure, since the numbers of correlators in an autonomous system can grow into the hundreds of thousands.

  10. Large-Scale Test of Dynamic Correlation Processors: Implications for Correlation-Based Seismic Pipelines

    DOE PAGES

    Dodge, D. A.; Harris, D. B.

    2016-03-15

    Correlation detectors are of considerable interest to the seismic monitoring communities because they offer reduced detection thresholds and combine detection, location and identification functions into a single operation. They appear to be ideal for applications requiring screening of frequent repeating events. However, questions remain about how broadly empirical correlation methods are applicable. We describe the effectiveness of banks of correlation detectors in a system that combines traditional power detectors with correlation detectors in terms of efficiency, which we define to be the fraction of events detected by the correlators. This paper elaborates and extends the concept of a dynamic correlation detection framework – a system which autonomously creates correlation detectors from event waveforms detected by power detectors; and reports observed performance on a network of arrays in terms of efficiency. We performed a large scale test of dynamic correlation processors on an 11 terabyte global dataset using 25 arrays in the single frequency band 1-3 Hz. The system found over 3.2 million unique signals and produced 459,747 screened detections. A very satisfying result is that, on average, efficiency grows with time and, after nearly 16 years of operation, exceeds 47% for events observed over all distance ranges and approaches 70% for near regional and 90% for local events. This observation suggests that future pipeline architectures should make extensive use of correlation detectors, principally for decluttering observations of local and near-regional events. Our results also suggest that future operations based on correlation detection will require commodity large-scale computing infrastructure, since the numbers of correlators in an autonomous system can grow into the hundreds of thousands.

  11. Real-time detection and classification of anomalous events in streaming data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ferragut, Erik M.; Goodall, John R.; Iannacone, Michael D.

    2016-04-19

    A system is described for receiving a stream of events and scoring the events based on anomalousness and maliciousness (or other classification). The events can be displayed to a user in user-defined groupings in an animated fashion. The system can include a plurality of anomaly detectors that together implement an algorithm to identify low probability events and detect atypical traffic patterns. The atypical traffic patterns can then be classified as being of interest or not. In one particular example, in a network environment, the classification can be whether the network traffic is malicious or not.
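    One generic way to realize such scoring (an assumption for illustration, not the method claimed in the record above) is to rate each event by its negative log-probability under a model fitted to recent traffic, so that rarely seen events receive high anomaly scores:

```python
# Illustrative sketch of probability-based anomaly scoring (assumed, not taken
# from the patent): score each event by its negative log-probability under a
# simple frequency model of recent traffic.
import math
from collections import Counter

def anomaly_scores(events, history):
    """events/history: lists of hashable event descriptors, e.g. (src_ip, port)."""
    counts = Counter(history)
    total = sum(counts.values())
    vocab = len(counts) + 1
    def score(e):
        p = (counts.get(e, 0) + 1) / (total + vocab)   # Laplace smoothing
        return -math.log(p)
    return [(e, round(score(e), 2)) for e in events]

if __name__ == "__main__":
    history = [("10.0.0.1", 443)] * 500 + [("10.0.0.2", 22)] * 20
    new = [("10.0.0.1", 443), ("203.0.113.9", 3389)]   # second one never seen
    print(anomaly_scores(new, history))
```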

  12. Network hydraulics inclusion in water quality event detection using multiple sensor stations data.

    PubMed

    Oliker, Nurit; Ostfeld, Avi

    2015-09-01

    Event detection is one of the current most challenging topics in water distribution systems analysis: how regular on-line hydraulic (e.g., pressure, flow) and water quality (e.g., pH, residual chlorine, turbidity) measurements at different network locations can be efficiently utilized to detect water quality contamination events. This study describes an integrated event detection model which combines multiple sensor stations data with network hydraulics. To date, event detection modelling has typically been limited to a single sensor station location and dataset. Single sensor station models are detached from network hydraulics insights and as a result might be significantly exposed to false positive alarms. This work is aimed at decreasing this limitation through integrating local and spatial hydraulic data understanding into an event detection model. The spatial analysis complements the local event detection effort through discovering events with lower signatures by exploring the sensors' mutual hydraulic influences. The unique contribution of this study is in incorporating hydraulic simulation information into the overall event detection process of spatially distributed sensors. The methodology is demonstrated on two example applications using base runs and sensitivity analyses. Results show a clear advantage of the suggested model over single-sensor event detection schemes. Copyright © 2015 Elsevier Ltd. All rights reserved.

  13. Reference set for performance testing of pediatric vaccine safety signal detection methods and systems.

    PubMed

    Brauchli Pernus, Yolanda; Nan, Cassandra; Verstraeten, Thomas; Pedenko, Mariia; Osokogu, Osemeke U; Weibel, Daniel; Sturkenboom, Miriam; Bonhoeffer, Jan

    2016-12-12

    Safety signal detection in spontaneous reporting system databases and electronic healthcare records is key to detection of previously unknown adverse events following immunization. Various statistical methods for signal detection in these different data sources have been developed; however, none are geared to the pediatric population and none specifically to vaccines. A reference set comprising pediatric vaccine-adverse event pairs is required for reliable performance testing of statistical methods within and across data sources. The study was conducted within the context of the Global Research in Paediatrics (GRiP) project, as part of the seventh framework programme (FP7) of the European Commission. Criteria for the selection of vaccines considered in the reference set were routine and global use in the pediatric population. Adverse events were primarily selected based on importance. Outcome-based systematic literature searches were performed for all identified vaccine-adverse event pairs and complemented by expert committee reports, evidence-based decision support systems (e.g. Micromedex), and summaries of product characteristics. Classification into positive (PC) and negative control (NC) pairs was performed by two independent reviewers according to a pre-defined algorithm and discussed for consensus in case of disagreement. We selected 13 vaccines and 14 adverse events to be included in the reference set. From a total of 182 vaccine-adverse event pairs, we classified 18 as PC, 113 as NC and 51 as unclassifiable. Most classifications (91) were based on literature review, 45 were based on expert committee reports, and for 46 vaccine-adverse event pairs, an underlying pathomechanism was not plausible, classifying the association as NC. A reference set of vaccine-adverse event pairs was developed. We propose its use for comparing signal detection methods and systems in the pediatric population. Published by Elsevier Ltd.

  14. Evaluating the automated blood glucose pattern detection and case-retrieval modules of the 4 Diabetes Support System.

    PubMed

    Schwartz, Frank L; Vernier, Stanley J; Shubrook, Jay H; Marling, Cynthia R

    2010-11-01

    We have developed a prototypical case-based reasoning system to enhance management of patients with type 1 diabetes mellitus (T1DM). The system is capable of automatically analyzing large volumes of life events, self-monitoring of blood glucose readings, continuous glucose monitoring system results, and insulin pump data to detect clinical problems. In a preliminary study, manual entry of large volumes of life-event and other data was too burdensome for patients. In this study, life-event and pump data collection were automated, and then the system was reevaluated. Twenty-three adult T1DM patients on insulin pumps completed the five-week study. A usual daily schedule was entered into the database, and patients were only required to upload their insulin pump data to Medtronic's CareLink® Web site weekly. Situation assessment routines were run weekly for each participant to detect possible problems, and once the trial was completed, the case-retrieval module was tested. Using the situation assessment routines previously developed, the system found 295 possible problems. The enhanced system detected only 2.6 problems per patient per week compared to 4.9 problems per patient per week in the preliminary study (p=.017). Problems detected by the system were correctly identified in 97.9% of the cases, and 96.1% of these were clinically useful. With less life-event data, the system is unable to detect certain clinical problems and detects fewer problems overall. Additional work is needed to provide device/software interfaces that allow patients to provide this data quickly and conveniently. © 2010 Diabetes Technology Society.

  15. Sources of Infrasound events listed in IDC Reviewed Event Bulletin

    NASA Astrophysics Data System (ADS)

    Bittner, Paulina; Polich, Paul; Gore, Jane; Ali, Sherif; Medinskaya, Tatiana; Mialle, Pierrick

    2017-04-01

    Until 2003 two waveform technologies, i.e. seismic and hydroacoustic, were used to detect and locate events included in the International Data Centre (IDC) Reviewed Event Bulletin (REB). The first atmospheric event was published in the REB in 2003; however, automatic processing required significant improvements to reduce the number of false events. In the beginning of 2010 the infrasound technology was reintroduced to the IDC operations and has contributed to both automatic and reviewed IDC bulletins. The primary contribution of infrasound technology is to detect atmospheric events. These events may also be observed at seismic stations, which will significantly improve event location. Example sources of REB events which were detected by the International Monitoring System (IMS) infrasound network were fireballs (e.g. Bangkok fireball, 2015), volcanic eruptions (e.g. Calbuco, Chile 2015) and large surface explosions (e.g. Tianjin, China 2015). Quarry blasts (e.g. Zheleznogorsk) and large earthquakes (e.g. Italy 2016) belong to events primarily recorded at seismic stations of the IMS network but often detected at the infrasound stations. In the case of earthquakes, analysis of infrasound signals may help to estimate the area affected by ground vibration. Infrasound associations to quarry blast events may help to obtain better source location. The role of IDC analysts is to verify and improve location of events detected by the automatic system and to add events which were missed in the automatic process. Open source materials may help to identify the nature of some events. Well recorded examples may be added to the Reference Infrasound Event Database to help in the analysis process. This presentation will provide examples of events generated by different sources which were included in the IDC bulletins.

  16. Visual Sensor Based Abnormal Event Detection with Moving Shadow Removal in Home Healthcare Applications

    PubMed Central

    Lee, Young-Sook; Chung, Wan-Young

    2012-01-01

    Vision-based abnormal event detection for home healthcare systems can be greatly improved using visual sensor-based techniques able to detect, track and recognize objects in the scene. However, in moving object detection and tracking processes, moving cast shadows can be misclassified as part of objects or moving objects. Shadow removal is an essential step for developing video surveillance systems. The primary goal is to design novel computer vision techniques that can extract objects more accurately and discriminate between abnormal and normal activities. To improve the accuracy of object detection and tracking, our proposed shadow removal algorithm is employed. Abnormal event detection based on visual sensors using shape feature variation and 3-D trajectory is presented to overcome the low fall detection rate. The experimental results showed that the success rate of detecting abnormal events was 97% with a false positive rate of 2%. Our proposed algorithm can allow distinguishing diverse fall activities such as forward falls, backward falls, and falling sideways from normal activities. PMID:22368486

  17. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dudleson, B.; Arnold, M.; McCann, D.

    Rapid detection of unexpected drilling events requires continuous monitoring of drilling parameters. A major R and D program by a drilling contractor has led to the introduction of a computerized monitoring system on its offshore rigs. System includes advanced color graphics displays and new smart alarms to help both contractor and operator personnel detect and observe drilling events before they would normally be apparent with conventional rig instrumentation. This article describes a module of this monitoring system, which uses expert system technology to detect the earliest stages of drillstring washouts. Field results demonstrate the effectiveness of the smart alarm incorporated in the system. Early detection allows the driller to react before a twist-off results in expensive fishing operations.

  18. Sudden Event Recognition: A Survey

    PubMed Central

    Suriani, Nor Surayahani; Hussain, Aini; Zulkifley, Mohd Asyraf

    2013-01-01

    Event recognition is one of the most active research areas in video surveillance fields. Advancement in event recognition systems mainly aims to provide convenience, safety and an efficient lifestyle for humanity. A precise, accurate and robust approach is necessary to enable event recognition systems to respond to sudden changes in various uncontrolled environments, such as the case of an emergency, physical threat and a fire or bomb alert. The performance of sudden event recognition systems depends heavily on the accuracy of low level processing, like detection, recognition, tracking and machine learning algorithms. This survey aims to detect and characterize a sudden event, which is a subset of an abnormal event in several video surveillance applications. This paper discusses the following in detail: (1) the importance of a sudden event over a general anomalous event; (2) frameworks used in sudden event recognition; (3) the requirements and comparative studies of a sudden event recognition system and (4) various decision-making approaches for sudden event recognition. The advantages and drawbacks of using 3D images from multiple cameras for real-time application are also discussed. The paper concludes with suggestions for future research directions in sudden event recognition. PMID:23921828

  19. Binary Microlensing Events from the MACHO Project

    NASA Astrophysics Data System (ADS)

    Alcock, C.; Allsman, R. A.; Alves, D.; Axelrod, T. S.; Baines, D.; Becker, A. C.; Bennett, D. P.; Bourke, A.; Brakel, A.; Cook, K. H.; Crook, B.; Crouch, A.; Dan, J.; Drake, A. J.; Fragile, P. C.; Freeman, K. C.; Gal-Yam, A.; Geha, M.; Gray, J.; Griest, K.; Gurtierrez, A.; Heller, A.; Howard, J.; Johnson, B. R.; Kaspi, S.; Keane, M.; Kovo, O.; Leach, C.; Leach, T.; Leibowitz, E. M.; Lehner, M. J.; Lipkin, Y.; Maoz, D.; Marshall, S. L.; McDowell, D.; McKeown, S.; Mendelson, H.; Messenger, B.; Minniti, D.; Nelson, C.; Peterson, B. A.; Popowski, P.; Pozza, E.; Purcell, P.; Pratt, M. R.; Quinn, J.; Quinn, P. J.; Rhie, S. H.; Rodgers, A. W.; Salmon, A.; Shemmer, O.; Stetson, P.; Stubbs, C. W.; Sutherland, W.; Thomson, S.; Tomaney, A.; Vandehei, T.; Walker, A.; Ward, K.; Wyper, G.

    2000-09-01

    We present the light curves of 21 gravitational microlensing events from the first six years of the MACHO Project gravitational microlensing survey that are likely examples of lensing by binary systems. These events were manually selected from a total sample of ~350 candidate microlensing events that were either detected by the MACHO Alert System or discovered through retrospective analyses of the MACHO database. At least 14 of these 21 events exhibit strong (caustic) features, and four of the events are well fit with lensing by large mass ratio (brown dwarf or planetary) systems, although these fits are not necessarily unique. The total binary event rate is roughly consistent with predictions based upon our knowledge of the properties of binary stars, but a precise comparison cannot be made without a determination of our binary lens event detection efficiency. Toward the Galactic bulge, we find a ratio of caustic crossing to noncaustic crossing binary lensing events of 12:4, excluding one event for which we present two fits. This suggests significant incompleteness in our ability to detect and characterize noncaustic crossing binary lensing. The distribution of mass ratios, N(q), for these binary lenses appears relatively flat. We are also able to reliably measure source-face crossing times in four of the bulge caustic crossing events, and recover from them a distribution of lens proper motions, masses, and distances consistent with a population of Galactic bulge lenses at a distance of 7+/-1 kpc. This analysis yields two systems with companions of ~0.05 Msolar.

  20. High-speed event detector for embedded nanopore bio-systems.

    PubMed

    Huang, Yiyun; Magierowski, Sebastian; Ghafar-Zadeh, Ebrahim; Wang, Chengjie

    2015-08-01

    Biological measurements of microscopic phenomena often deal with discrete-event signals. The ability to automatically carry out such measurements at high-speed in a miniature embedded system is desirable but compromised by high-frequency noise along with practical constraints on filter quality and sampler resolution. This paper presents a real-time event-detection method in the context of nanopore sensing that helps to mitigate these drawbacks and allows accurate signal processing in an embedded system. Simulations show at least a 10× improvement over existing on-line detection methods.
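    A generic two-threshold event detector of the kind this problem calls for can be sketched as follows (a hedged illustration, not the paper's detector; baseline, thresholds and smoothing length are assumptions): an event starts when the smoothed current drops below a low threshold and ends when it recovers above a higher one, which suppresses chatter from high-frequency noise:

```python
# Hedged sketch of on-line discrete-event detection in a noisy current trace
# (a generic two-threshold scheme with hysteresis, not the paper's method).
import numpy as np

def detect_blockades(current, baseline, start_frac=0.7, end_frac=0.9, smooth=5):
    kernel = np.ones(smooth) / smooth
    filt = np.convolve(current, kernel, mode="same")   # cheap moving average
    start_thr, end_thr = start_frac * baseline, end_frac * baseline
    events, in_event, t0 = [], False, 0
    for i, x in enumerate(filt):
        if not in_event and x < start_thr:
            in_event, t0 = True, i
        elif in_event and x > end_thr:
            events.append((t0, i))                      # (start, end) samples
            in_event = False
    return events

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    trace = 100 + rng.normal(0, 2, 2000)
    trace[600:650] -= 40                                # a blockade event
    print(detect_blockades(trace, baseline=100.0))
```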

  1. Pickless event detection and location: The waveform correlation event detection system (WCEDS) revisited

    DOE PAGES

    Arrowsmith, Stephen John; Young, Christopher J.; Ballard, Sanford; ...

    2016-01-01

    The standard paradigm for seismic event monitoring breaks the event detection problem down into a series of processing stages that can be categorized at the highest level into station-level processing and network-level processing algorithms (e.g., Le Bras and Wuster (2002)). At the station-level, waveforms are typically processed to detect signals and identify phases, which may subsequently be updated based on network processing. At the network-level, phase picks are associated to form events, which are subsequently located. Furthermore, waveforms are typically directly exploited only at the station-level, while network-level operations rely on earth models to associate and locate the events that generated the phase picks.

  2. Multi-Detection Events, Probability Density Functions, and Reduced Location Area

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Eslinger, Paul W.; Schrom, Brian T.

    2016-03-01

    Several efforts have been made in the Comprehensive Nuclear-Test-Ban Treaty (CTBT) community to assess the benefits of combining detections of radionuclides to improve the location estimates available from atmospheric transport modeling (ATM) backtrack calculations. We present a Bayesian estimation approach rather than a simple dilution field of regard approach to allow xenon detections and non-detections to be combined mathematically. This system represents one possible probabilistic approach to radionuclide event formation. Application of this method to a recent interesting radionuclide event shows a substantial reduction in the location uncertainty of that event.
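    The combination idea can be illustrated with a toy Bayesian update over a grid of candidate source cells (an illustrative sketch, not the authors' implementation; in practice the per-cell detection probabilities would come from ATM backtrack calculations rather than the made-up fields below):

```python
# Illustrative sketch of combining detections and non-detections over a grid of
# candidate source locations: each station's outcome multiplies the prior by
# P(outcome | source at cell). The probability fields below are invented.
import numpy as np

def posterior(prior, station_outcomes):
    """prior: (ny, nx) array. station_outcomes: list of (p_detect_grid, detected)
    where p_detect_grid gives P(detection at that station | source in each cell)."""
    post = prior.copy()
    for p_det, detected in station_outcomes:
        post *= p_det if detected else (1.0 - p_det)
    s = post.sum()
    return post / s if s > 0 else post

if __name__ == "__main__":
    prior = np.full((4, 4), 1 / 16)                      # uniform over a toy grid
    east = np.tile(np.linspace(0.05, 0.9, 4), (4, 1))    # detects eastern sources
    north = np.tile(np.linspace(0.9, 0.05, 4), (4, 1)).T # detects northern sources
    post = posterior(prior, [(east, True), (north, False)])
    print(np.round(post, 3))
```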

  3. Negated bio-events: analysis and identification

    PubMed Central

    2013-01-01

    Background Negation occurs frequently in scientific literature, especially in biomedical literature. It has previously been reported that around 13% of sentences found in biomedical research articles contain negation. Historically, the main motivation for identifying negated events has been to ensure their exclusion from lists of extracted interactions. However, recently, there has been a growing interest in negative results, which has resulted in negation detection being identified as a key challenge in biomedical relation extraction. In this article, we focus on the problem of identifying negated bio-events, given gold standard event annotations. Results We have conducted a detailed analysis of three open access bio-event corpora containing negation information (i.e., GENIA Event, BioInfer and BioNLP’09 ST), and have identified the main types of negated bio-events. We have analysed the key aspects of a machine learning solution to the problem of detecting negated events, including selection of negation cues, feature engineering and the choice of learning algorithm. Combining the best solutions for each aspect of the problem, we propose a novel framework for the identification of negated bio-events. We have evaluated our system on each of the three open access corpora mentioned above. The performance of the system significantly surpasses the best results previously reported on the BioNLP’09 ST corpus, and achieves even better results on the GENIA Event and BioInfer corpora, both of which contain more varied and complex events. Conclusions Recently, in the field of biomedical text mining, the development and enhancement of event-based systems has received significant interest. The ability to identify negated events is a key performance element for these systems. We have conducted the first detailed study on the analysis and identification of negated bio-events. Our proposed framework can be integrated with state-of-the-art event extraction systems. The resulting systems will be able to extract bio-events with attached polarities from textual documents, which can serve as the foundation for more elaborate systems that are able to detect mutually contradicting bio-events. PMID:23323936

  4. Radionuclide data analysis in connection of DPRK event in May 2009

    NASA Astrophysics Data System (ADS)

    Nikkinen, Mika; Becker, Andreas; Zähringer, Matthias; Polphong, Pornsri; Pires, Carla; Assef, Thierry; Han, Dongmei

    2010-05-01

    The seismic event detected in the DPRK on 25.5.2009 triggered a series of actions within the CTBTO/PTS to ensure its preparedness to detect any radionuclide emissions possibly linked with the event. Despite meticulous work to detect and verify, traces linked to the DPRK event were not found. After three weeks of high alert, the PTS resumed its normal operational routine. This case illuminates the importance of objectivity and a procedural approach in data evaluation. All data coming from particulate and noble gas stations were evaluated daily, some of the samples even outside office hours and during weekends. Standard procedures were used to determine the network detection thresholds of the key (CTBT-relevant) radionuclides achieved across the DPRK event area and for the assessment of radionuclides typically occurring at IMS stations (background history). Noble gas stations sometimes have detections that are typical for the sites due to legitimate, non-nuclear-test-related activities. Therefore, a set of hypotheses was used to assess whether a detection was consistent with the event time and location through atmospheric transport modelling. The consistency of event timing and isotopic ratios was also used in the evaluation work. As a result, it was concluded that if even 1/1000 of the noble gases from a nuclear detonation had leaked, the IMS system would have had no problem detecting it. This case also showed the importance of on-site inspections to verify the nuclear traces of possible tests.

  5. [Study on the timeliness of detection and reporting on public health emergency events in China].

    PubMed

    Li, Ke-Li; Feng, Zi-Jian; Ni, Da-Xin

    2009-03-01

    The aims were to analyze the timeliness of detection and reporting of public health emergency events and to explore effective strategies for improving the related capacities. We conducted a retrospective survey of 3275 emergency events reported through the Public Health Emergency Events Surveillance System from 2005 to the first half of 2006. A uniform self-administered questionnaire, developed by county Centers for Disease Control and Prevention, was used to collect data, including information on the detection and reporting of the events. For communicable disease events, the median time interval between the occurrence of the first case and the detection of the event was 6 days (P25 = 2, P75 = 13). For food poisoning events and clusters of disease with unknown origin, the medians were 3 hours (P25, P75 = 16) and 1 day (P25 = 0, P75 = 5). 71.54% of the events were reported by the discoverers within 2 hours after detection. In general, the time intervals between the occurrence, detection and reporting of the events differed according to the categories of events. The timeliness of detection and reporting of events could be improved dramatically if the definitions of events, according to their characteristics, were more reasonable and accessible, and if training programs for healthcare staff and teachers were improved.

  6. An Event-Based Verification Scheme for the Real-Time Flare Detection System at Kanzelhöhe Observatory

    NASA Astrophysics Data System (ADS)

    Pötzi, W.; Veronig, A. M.; Temmer, M.

    2018-06-01

    In the framework of the Space Situational Awareness program of the European Space Agency (ESA/SSA), an automatic flare detection system was developed at Kanzelhöhe Observatory (KSO). The system has been in operation since mid-2013. The event detection algorithm was upgraded in September 2017. All data back to 2014 were reprocessed using the new algorithm. In order to evaluate both algorithms, we apply verification measures that are commonly used for forecast validation. In order to overcome the problem of rare events, which biases the verification measures, we introduce a new event-based method. We divide the timeline of the Hα observations into positive events (flaring period) and negative events (quiet period), independent of the length of each event. In total, 329 positive and negative events were detected between 2014 and 2016. The hit rate for the new algorithm reached 96% (just five events were missed), with a false-alarm ratio of 17%. This is a significant improvement of the algorithm, as the original system had a hit rate of 85% and a false-alarm ratio of 33%. The true skill score and the Heidke skill score both reach values of 0.8 for the new algorithm; originally, they were at 0.5. The mean flare positions are accurate within ±1 heliographic degree for both algorithms, and the peak times improve from a mean difference of 1.7 ± 2.9 minutes to 1.3 ± 2.3 minutes. The flare start times, which had been systematically late by about 3 minutes as determined by the original algorithm, now match the visual inspection within -0.47 ± 4.10 minutes.
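    The verification measures named above are standard contingency-table scores; the sketch below computes them from hit, miss, false-alarm and correct-negative counts (the example counts are made up for illustration, not the KSO results):

```python
# Quick sketch of the verification measures named above, computed from a 2x2
# contingency table of positive/negative events.
def skill_scores(hits, misses, false_alarms, correct_negatives):
    a, b, c, d = hits, false_alarms, misses, correct_negatives
    pod = a / (a + c)                      # hit rate (probability of detection)
    far = b / (a + b)                      # false-alarm ratio
    tss = pod - b / (b + d)                # true skill score (Hanssen-Kuipers)
    n = a + b + c + d
    e = ((a + b) * (a + c) + (c + d) * (b + d)) / n   # expected correct by chance
    hss = (a + d - e) / (n - e)            # Heidke skill score
    return {"POD": pod, "FAR": far, "TSS": tss, "HSS": hss}

if __name__ == "__main__":
    print(skill_scores(hits=120, misses=5, false_alarms=25, correct_negatives=179))
```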

  7. Domain Anomaly Detection in Machine Perception: A System Architecture and Taxonomy.

    PubMed

    Kittler, Josef; Christmas, William; de Campos, Teófilo; Windridge, David; Yan, Fei; Illingworth, John; Osman, Magda

    2014-05-01

    We address the problem of anomaly detection in machine perception. The concept of domain anomaly is introduced as distinct from the conventional notion of anomaly used in the literature. We propose a unified framework for anomaly detection which exposes the multifaceted nature of anomalies and suggest effective mechanisms for identifying and distinguishing each facet as instruments for domain anomaly detection. The framework draws on the Bayesian probabilistic reasoning apparatus which clearly defines concepts such as outlier, noise, distribution drift, novelty detection (object, object primitive), rare events, and unexpected events. Based on these concepts, we provide a taxonomy of domain anomaly events. One of the mechanisms helping to pinpoint the nature of anomaly is based on detecting incongruence between contextual and noncontextual sensor(y) data interpretation. The proposed methodology has wide applicability. It underpins in a unified way the anomaly detection applications found in the literature. To illustrate some of its distinguishing features, the domain anomaly detection methodology is applied here to the problem of anomaly detection for a video annotation system.

  8. TED: a novel man portable infrared detection and situation awareness system

    NASA Astrophysics Data System (ADS)

    Tidhar, Gil; Manor, Ran

    2007-04-01

    Infrared Search and Track (IRST) and threat warning systems are used in vehicle-mounted or fixed land positions. Migration of this technology to man-portable applications proves to be difficult due to the tight constraints of power consumption, dimensions and weight and due to the high video rate requirements. In this report we provide design details of a novel transient event detection (TED) system, capable of detecting blast and gunshot events in a very wide field of view while used by an operator in motion.

  9. ATLAS EventIndex general dataflow and monitoring infrastructure

    NASA Astrophysics Data System (ADS)

    Fernández Casaní, Á.; Barberis, D.; Favareto, A.; García Montoro, C.; González de la Hoz, S.; Hřivnáč, J.; Prokoshin, F.; Salt, J.; Sánchez, J.; Többicke, R.; Yuan, R.; ATLAS Collaboration

    2017-10-01

    The ATLAS EventIndex has been running in production since mid-2015, reliably collecting information worldwide about all produced events and storing them in a central Hadoop infrastructure at CERN. A subset of this information is copied to an Oracle relational database for fast dataset discovery, event-picking, crosschecks with other ATLAS systems and checks for event duplication. The system design and its optimization serve event picking from requests of a few events up to scales of tens of thousands of events, and in addition, data consistency checks are performed for large production campaigns. Detecting duplicate events with a scope of physics collections has recently arisen as an important use case. This paper describes the general architecture of the project and the data flow and operation issues, which are addressed by recent developments to improve the throughput of the overall system. In this direction, the data collection system is reducing the usage of the messaging infrastructure to overcome the performance shortcomings detected during production peaks; an object storage approach is instead used to convey the event index information, and messages to signal their location and status. Recent changes in the Producer/Consumer architecture are also presented in detail, as well as the monitoring infrastructure.

  10. Real-time monitoring of clinical processes using complex event processing and transition systems.

    PubMed

    Meinecke, Sebastian

    2014-01-01

    Dependencies between tasks in clinical processes are often complex and error-prone. Our aim is to describe a new approach for the automatic derivation of clinical events identified via the behaviour of IT systems using Complex Event Processing. Furthermore we map these events on transition systems to monitor crucial clinical processes in real-time for preventing and detecting erroneous situations.

  11. Simultaneous Event-Triggered Fault Detection and Estimation for Stochastic Systems Subject to Deception Attacks.

    PubMed

    Li, Yunji; Wu, QingE; Peng, Li

    2018-01-23

    In this paper, a synthesized design of fault-detection filter and fault estimator is considered for a class of discrete-time stochastic systems in the framework of an event-triggered transmission scheme subject to unknown disturbances and deception attacks. A random variable obeying the Bernoulli distribution is employed to characterize the phenomena of the randomly occurring deception attacks. To obtain a fault-detection residual that is sensitive only to faults while robust to disturbances, a coordinate transformation approach is exploited. This approach can transform the considered system into two subsystems, and the unknown disturbances are removed from one of the subsystems. The gain of the fault-detection filter is derived by minimizing an upper bound of the filter error covariance. Meanwhile, system faults can be reconstructed by the remote fault estimator. A recursive approach is developed to obtain fault estimator gains as well as guarantee the fault estimator performance. Furthermore, the corresponding event-triggered sensor data transmission scheme is also presented for improving the working life of the wireless sensor node when measurement information is aperiodically transmitted. Finally, a scaled version of an industrial system consisting of a local PC, remote estimator and wireless sensor node is used to experimentally evaluate the proposed theoretical results. In particular, a novel fault-alarming strategy is proposed so that the real-time capacity of fault-detection is guaranteed when the event condition is triggered.
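    The general flavor of an event-triggered transmission rule can be sketched as follows (an assumed send-on-delta rule for illustration, not the triggering condition derived in the paper): the sensor node transmits a measurement only when it differs from the last transmitted value by more than a threshold, reducing radio traffic between triggering instants:

```python
# Minimal sketch of a generic event-triggered (send-on-delta) transmission rule;
# the threshold delta is an assumption for illustration.
import numpy as np

def event_triggered_stream(measurements, delta=0.5):
    """Yield (k, y_k) only at time steps where the trigger fires."""
    last_sent = None
    for k, y in enumerate(measurements):
        y = np.atleast_1d(np.asarray(y, dtype=float))
        if last_sent is None or np.linalg.norm(y - last_sent) > delta:
            last_sent = y
            yield k, y

if __name__ == "__main__":
    t = np.linspace(0, 10, 101)
    y = np.sin(t) + np.random.default_rng(4).normal(0, 0.02, t.size)
    sent = list(event_triggered_stream(y, delta=0.3))
    print(f"transmitted {len(sent)} of {len(y)} samples")
```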

  12. Covert Network Analysis for Key Player Detection and Event Prediction Using a Hybrid Classifier

    PubMed Central

    Akram, M. Usman; Khan, Shoab A.; Javed, Muhammad Younus

    2014-01-01

    National security has gained vital importance due to the increasing number of suspicious and terrorist events across the globe. The use of different subfields of information technology has also attracted much attention from researchers and practitioners seeking to design systems which can detect the main members actually responsible for such events. In this paper, we present a novel method to predict key players from a covert network by applying a hybrid framework. The proposed system calculates certain centrality measures for each node in the network and then applies a novel hybrid classifier for detection of key players. Our system also applies anomaly detection to predict any terrorist activity in order to help law enforcement agencies to destabilize the involved network. As a proof of concept, the proposed framework has been implemented and tested using different case studies including two publicly available datasets and one local network. PMID:25136674
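    The centrality-feature step can be illustrated with two common measures computed on a toy undirected network (the specific measures and the hybrid classifier used by the authors are not reproduced; nodes with the highest scores would be candidate key players):

```python
# Illustrative sketch of centrality features for key-player detection: degree
# and closeness centrality on a small undirected graph given as an adjacency
# dict. The example network is invented.
from collections import deque

def degree_centrality(adj):
    n = len(adj)
    return {v: len(nbrs) / (n - 1) for v, nbrs in adj.items()}

def closeness_centrality(adj):
    scores = {}
    for src in adj:
        dist = {src: 0}
        q = deque([src])
        while q:                        # breadth-first search from src
            u = q.popleft()
            for w in adj[u]:
                if w not in dist:
                    dist[w] = dist[u] + 1
                    q.append(w)
        total = sum(dist.values())
        scores[src] = (len(dist) - 1) / total if total else 0.0
    return scores

if __name__ == "__main__":
    adj = {"a": {"b", "c", "d"}, "b": {"a", "c"}, "c": {"a", "b"},
           "d": {"a", "e"}, "e": {"d"}}
    print("degree:   ", degree_centrality(adj))
    print("closeness:", closeness_centrality(adj))
```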

  13. Structural monitoring for rare events in remote locations

    NASA Astrophysics Data System (ADS)

    Hale, J. M.

    2005-01-01

    A structural monitoring system has been developed for use on high value engineering structures, which is particularly suitable for use in remote locations where rare events such as accidental impacts, seismic activity or terrorist attack might otherwise go undetected. The system comprises a low power intelligent on-site data logger and a remote analysis computer that communicate with one another using the internet and mobile telephone technology. The analysis computer also generates e-mail alarms and maintains a web page that displays detected events in near real-time to authorised users. The application of the prototype system to pipeline monitoring is described in which the analysis of detected events is used to differentiate between impacts and pressure surges. The system has been demonstrated successfully and is ready for deployment.

  14. Real-time classification of signals from three-component seismic sensors using neural nets

    NASA Astrophysics Data System (ADS)

    Bowman, B. C.; Dowla, F.

    1992-05-01

    Adaptive seismic data acquisition systems with capabilities of signal discrimination and event classification are important in treaty monitoring, proliferation detection, and earthquake early detection systems. Potential applications include monitoring underground chemical explosions, as well as other military, cultural, and natural activities where characteristics of signals change rapidly and without warning. In these applications, the ability to detect and interpret events rapidly without falling behind the influx of data is critical. We developed a system for real-time data acquisition, analysis, learning, and classification of recorded events employing some of the latest technology in computer hardware, software, and artificial neural network methods. The system is able to train dynamically and updates its knowledge based on new data. The software is modular and hardware-independent; i.e., the front-end instrumentation is transparent to the analysis system. The software is designed to take advantage of the multiprocessing environment of the Unix operating system. The Unix System V shared memory and static RAM protocols for data access and the semaphore mechanism for interprocess communications were used. As the three-component sensor detects a seismic signal, it is displayed graphically on a color monitor using X11/Xlib graphics with interactive screening capabilities. For interesting events, the triaxial signal polarization is computed, a fast Fourier transform (FFT) algorithm is applied, and the normalized power spectrum is transmitted to a backpropagation neural network for event classification. The system is currently capable of handling three data channels with a sampling rate of 500 Hz, which covers the bandwidth of most seismic events. The system has been tested in a laboratory setting with artificial events generated in the vicinity of a three-component sensor.
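
    The feature-extraction step described above (an FFT of each component and a normalized power spectrum passed to the network) can be sketched roughly as follows; the window length, channel count and synthetic data are assumptions for illustration only.

    ```python
    import numpy as np

    FS = 500  # sampling rate (Hz), as in the described system

    def spectral_features(window_3c):
        """Normalized power spectrum per channel for a 3-component window,
        the kind of feature vector fed to the backpropagation network."""
        feats = []
        for channel in window_3c:                      # shape: (3, n_samples)
            spectrum = np.abs(np.fft.rfft(channel)) ** 2
            spectrum /= spectrum.sum() + 1e-12         # normalize total power
            feats.append(spectrum)
        return np.concatenate(feats)

    # Synthetic 1-second, 3-channel "event" window.
    t = np.arange(FS) / FS
    window = np.vstack([np.sin(2 * np.pi * f * t) for f in (5.0, 12.0, 30.0)])
    x = spectral_features(window)
    print(x.shape)  # feature vector passed on to the neural-network classifier
    ```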

  15. Automatic detection and notification of "wrong patient-wrong location" errors in the operating room.

    PubMed

    Sandberg, Warren S; Häkkinen, Matti; Egan, Marie; Curran, Paige K; Fairbrother, Pamela; Choquette, Ken; Daily, Bethany; Sarkka, Jukka-Pekka; Rattner, David

    2005-09-01

    When procedures and processes to assure patient location based on human performance do not work as expected, patients are brought incrementally closer to a possible "wrong patient-wrong procedure" error. We developed a system for automated patient location monitoring and management. Real-time data from an active infrared/radio frequency identification tracking system provides patient location data that are robust and can be compared with an "expected process" model to automatically flag wrong-location events as soon as they occur. The system also generates messages that are automatically sent to process managers via the hospital paging system, thus creating an active alerting function to annunciate errors. We deployed the system to detect and annunciate "patient-in-wrong-OR" events. The system detected all "wrong-operating room (OR)" events, and all "wrong-OR" locations were correctly assigned within 0.50 ± 0.28 minutes (mean ± SD). This corresponded to the measured latency of the tracking system. All wrong-OR events were correctly annunciated via the paging function. This experiment demonstrates that current technology can automatically collect sufficient data to remotely monitor patient flow through a hospital, provide decision support based on predefined rules, and automatically notify stakeholders of errors.
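
    A minimal sketch of the "expected process" comparison described above: the tracked location is checked against the operating room expected for that patient, and a mismatch triggers a page. The patient identifiers, booking table and paging hook are hypothetical.

    ```python
    # Illustrative "expected process" check; not the deployed hospital system.
    expected_or = {"patient_17": "OR-3", "patient_42": "OR-7"}

    def check_location(patient_id, tracked_location, page):
        """Compare the tracked location with the expected OR and page on mismatch."""
        expected = expected_or.get(patient_id)
        if expected is not None and tracked_location != expected:
            page(f"Wrong-OR event: {patient_id} detected in {tracked_location}, "
                 f"expected {expected}")
            return False
        return True

    alerts = []
    check_location("patient_42", "OR-3", alerts.append)  # triggers an alert
    print(alerts)
    ```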

  16. Detecting paralinguistic events in audio stream using context in features and probabilistic decisions

    PubMed Central

    Gupta, Rahul; Audhkhasi, Kartik; Lee, Sungbok; Narayanan, Shrikanth

    2017-01-01

    Non-verbal communication involves encoding, transmission and decoding of non-lexical cues and is realized using vocal (e.g. prosody) or visual (e.g. gaze, body language) channels during conversation. These cues perform the function of maintaining conversational flow, expressing emotions, and marking personality and interpersonal attitude. In particular, non-verbal cues in speech such as paralanguage and non-verbal vocal events (e.g. laughters, sighs, cries) are used to nuance meaning and convey emotions, mood and attitude. For instance, laughters are associated with affective expressions while fillers (e.g. um, ah) are used to hold the floor during a conversation. In this paper we present an automatic non-verbal vocal events detection system focusing on the detection of laughter and fillers. We extend our system presented during the Interspeech 2013 Social Signals Sub-challenge (the winning entry in the challenge) for frame-wise event detection and test several schemes for incorporating local context during detection. Specifically, we incorporate context at two separate levels in our system: (i) the raw frame-wise features and (ii) the output decisions. Furthermore, our system processes the output probabilities based on a few heuristic rules in order to reduce erroneous frame-based predictions. Our overall system achieves an Area Under the Receiver Operating Characteristics curve of 95.3% for detecting laughters and 90.4% for fillers on the test set drawn from the data specifications of the Interspeech 2013 Social Signals Sub-challenge. We perform further analysis to understand the interrelation between the features and the obtained results. Specifically, we conduct a feature sensitivity analysis and correlate it with each feature's stand-alone performance. The observations suggest that the trained system is more sensitive to features carrying higher discriminability, with implications towards a better system design. PMID:28713197
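
    The two levels of context mentioned above can be sketched as (i) stacking neighbouring frames' features and (ii) smoothing the frame-wise output probabilities before thresholding; the context width, the median filter and the synthetic data are illustrative assumptions rather than the authors' exact heuristics.

    ```python
    import numpy as np

    def add_feature_context(frames, width=2):
        """Stack each frame with its +/- width neighbours (context at the
        raw feature level); edges are padded by repetition."""
        padded = np.pad(frames, ((width, width), (0, 0)), mode="edge")
        return np.hstack([padded[i:i + len(frames)] for i in range(2 * width + 1)])

    def smooth_decisions(probs, width=2, threshold=0.5):
        """Context at the decision level: median-filter the frame-wise
        probabilities before thresholding, suppressing isolated errors."""
        padded = np.pad(probs, width, mode="edge")
        smoothed = np.array([np.median(padded[i:i + 2 * width + 1])
                             for i in range(len(probs))])
        return smoothed > threshold

    frames = np.random.rand(10, 4)          # 10 frames, 4 features each
    probs = np.array([0.1, 0.2, 0.9, 0.15, 0.8, 0.85, 0.9, 0.2, 0.1, 0.05])
    print(add_feature_context(frames).shape)   # (10, 20)
    print(smooth_decisions(probs).astype(int))
    ```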

  17. Intelligent Software Agents: Sensor Integration and Response

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kulesz, James J; Lee, Ronald W

    2013-01-01

    In a post-Macondo world the buzzwords are Integrity Management and Incident Response Management. The twin processes are not new but the opportunity to link the two is novel. Intelligent software agents can be used with sensor networks in distributed and centralized computing systems to enhance real-time monitoring of system integrity as well as manage the follow-on incident response to changing, and potentially hazardous, environmental conditions. The software components are embedded at the sensor network nodes in surveillance systems used for monitoring unusual events. When an event occurs, the software agents establish a new concept of operation at the sensing node, post the event status to a blackboard for software agents at other nodes to see, and then react quickly and efficiently to monitor the scale of the event. The technology addresses a current challenge in sensor networks that prevents a rapid and efficient response when a sensor measurement indicates that an event has occurred. By using intelligent software agents - which can be stationary or mobile, interact socially, and adapt to changing situations - the technology offers features that are particularly important when systems need to adapt to active circumstances. For example, when a release is detected, the local software agent collaborates with other agents at the node to exercise the appropriate operation, such as: targeted detection, increased detection frequency, decreased detection frequency for other non-alarming sensors, and determination of environmental conditions so that adjacent nodes can be informed that an event is occurring and when it will arrive. The software agents at the nodes can also post the data in a targeted manner, so that agents at other nodes and the command center can exercise appropriate operations to recalibrate the overall sensor network and associated intelligence systems. The paper describes the concepts and provides examples of real-world implementations including the Threat Detection and Analysis System (TDAS) at the International Port of Memphis and the Biological Warning and Incident Characterization System (BWIC) Environmental Monitoring (EM) Component. Technologies developed for these 24/7 operational systems have applications for improved real-time system integrity awareness as well as provide incident response (as needed) for production and field applications.

  18. Event-specific real-time detection and quantification of genetically modified Roundup Ready soybean.

    PubMed

    Huang, Chia-Chia; Pan, Tzu-Ming

    2005-05-18

    The event-specific real-time detection and quantification of Roundup Ready soybean (RRS) using an ABI PRISM 7700 sequence detection system with light upon extension (LUX) primer was developed in this study. The event-specific primers were designed, targeting the junction of the RRS 5' integration site and the endogenous gene lectin1. Then, a standard reference plasmid was constructed that carried both of the targeted sequences for quantitative analysis. The detection limit of the LUX real-time PCR system was 0.05 ng of 100% RRS genomic DNA, which was equal to 20.5 copies. The range of quantification was from 0.1 to 100%. The sensitivity and range of quantification successfully met the requirement of the labeling rules in the European Union and Taiwan.
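
    The detection limit quoted above can be checked with simple arithmetic, assuming roughly 2.4 pg of DNA per soybean genome copy (an approximate literature value used here only for illustration, not a figure taken from the paper).

    ```python
    # Back-of-the-envelope check of the quoted detection limit.
    mass_pg = 0.05 * 1000          # 0.05 ng of 100% RRS genomic DNA, in pg
    pg_per_genome_copy = 2.44      # assumed mass of one soybean genome copy, pg
    copies = mass_pg / pg_per_genome_copy
    print(round(copies, 1))        # ~20.5 copies, consistent with the reported value
    ```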

  19. Application of Knowledge Discovery in Databases Methodologies for Predictive Models for Pregnancy Adverse Events

    ERIC Educational Resources Information Center

    Taft, Laritza M.

    2010-01-01

    In its report "To Err is Human", The Institute of Medicine recommended the implementation of internal and external voluntary and mandatory automatic reporting systems to increase detection of adverse events. Knowledge Discovery in Databases (KDD) allows the detection of patterns and trends that would be hidden or less detectable if analyzed by…

  20. Closing the Loop in ICU Decision Support: Physiologic Event Detection, Alerts, and Documentation

    PubMed Central

    Norris, Patrick R.; Dawant, Benoit M.

    2002-01-01

    Automated physiologic event detection and alerting is a challenging task in the ICU. Ideally care providers should be alerted only when events are clinically significant and there is opportunity for corrective action. However, the concepts of clinical significance and opportunity are difficult to define in automated systems, and effectiveness of alerting algorithms is difficult to measure. This paper describes recent efforts on the Simon project to capture information from ICU care providers about patient state and therapy in response to alerts, in order to assess the value of event definitions and progressively refine alerting algorithms. Event definitions for intracranial pressure and cerebral perfusion pressure were studied by implementing a reliable system to automatically deliver alerts to clinical users’ alphanumeric pagers, and to capture associated documentation about patient state and therapy when the alerts occurred. During a 6-month test period in the trauma ICU at Vanderbilt University Medical Center, 530 alerts were detected in 2280 hours of data spanning 14 patients. Clinical users electronically documented 81% of these alerts as they occurred. Retrospectively classifying documentation based on therapeutic actions taken, or reasons why actions were not taken, provided useful information about ways to potentially improve event definitions and enhance system utility.

  1. Association rule mining in the US Vaccine Adverse Event Reporting System (VAERS).

    PubMed

    Wei, Lai; Scott, John

    2015-09-01

    Spontaneous adverse event reporting systems are critical tools for monitoring the safety of licensed medical products. Commonly used signal detection algorithms identify disproportionate product-adverse event pairs and may not be sensitive to more complex potential signals. We sought to develop a computationally tractable multivariate data-mining approach to identify product-multiple adverse event associations. We describe an application of stepwise association rule mining (Step-ARM) to detect potential vaccine-symptom group associations in the US Vaccine Adverse Event Reporting System. Step-ARM identifies strong associations between one vaccine and one or more adverse events. To reduce the number of redundant association rules found by Step-ARM, we also propose a clustering method for the post-processing of association rules. In sample applications to a trivalent intradermal inactivated influenza virus vaccine and to measles, mumps, rubella, and varicella (MMRV) vaccine and in simulation studies, we find that Step-ARM can detect a variety of medically coherent potential vaccine-symptom group signals efficiently. In the MMRV example, Step-ARM appears to outperform univariate methods in detecting a known safety signal. Our approach is sensitive to potentially complex signals, which may be particularly important when monitoring novel medical countermeasure products such as pandemic influenza vaccines. The post-processing clustering algorithm improves the applicability of the approach as a screening method to identify patterns that may merit further investigation. Copyright © 2015 John Wiley & Sons, Ltd.
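
    In the spirit of Step-ARM, the sketch below scores one-vaccine-to-symptom-group rules by support and confidence over toy reports; the data, thresholds and the restriction to groups of at most two symptoms are illustrative assumptions, not the published algorithm or its clustering post-processing.

    ```python
    from itertools import combinations

    # Toy spontaneous reports: (vaccine, {symptoms}). Real VAERS data are far larger.
    reports = [
        ("MMRV", {"fever", "rash"}),
        ("MMRV", {"fever", "rash", "irritability"}),
        ("MMRV", {"fever"}),
        ("FLU",  {"soreness"}),
        ("FLU",  {"fever"}),
    ]

    def vaccine_symptom_rules(reports, min_support=2, min_confidence=0.5):
        """One-vaccine -> symptom-group rules: count how often a symptom set
        co-occurs with a vaccine and keep rules with enough support/confidence."""
        rules = []
        for vaccine in {v for v, _ in reports}:
            with_v = [s for v, s in reports if v == vaccine]
            symptoms = set().union(*with_v)
            for size in (1, 2):
                for group in combinations(sorted(symptoms), size):
                    support = sum(set(group) <= s for s in with_v)
                    confidence = support / len(with_v)
                    if support >= min_support and confidence >= min_confidence:
                        rules.append((vaccine, group, support, round(confidence, 2)))
        return rules

    for rule in vaccine_symptom_rules(reports):
        print(rule)
    ```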

  2. The digital trigger system for the RED-100 detector

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Naumov, P. P., E-mail: ddr727@yandex.ru; Akimov, D. Yu.; Belov, V. A.

    A system for forming the trigger for the liquid xenon detector RED-100 has been developed. The trigger can be generated for all types of events that the detector needs for calibration and data acquisition, including events with a single electron of ionization. In the system, a mechanism of event detection is implemented according to which a timestamp and event type are assigned to each event. A trigger system is required in systems searching for rare events in order to select and keep only the necessary information from the ADC array. The specifications and implementation of the trigger unit, which provides a high efficiency of response even to low-energy events, are considered.

  3. Developing assessment system for wireless capsule endoscopy videos based on event detection

    NASA Astrophysics Data System (ADS)

    Chen, Ying-ju; Yasen, Wisam; Lee, Jeongkyu; Lee, Dongha; Kim, Yongho

    2009-02-01

    Along with the advances in wireless technology and miniature cameras, Wireless Capsule Endoscopy (WCE), the combination of both, enables a physician to examine a patient's digestive system without actually performing a surgical procedure. Although WCE is a technical breakthrough that allows physicians to visualize the entire small bowel noninvasively, viewing the video takes 1-2 hours. This is very time-consuming for the gastroenterologist: not only does it limit the wide application of this technology, but it also incurs a considerable amount of cost. Therefore, it is important to automate the process so that medical clinicians can focus only on events of interest. As an extension of our previous work that characterizes the motility of the digestive tract in WCE videos, we propose a new assessment system for energy-based event detection (EG-EBD) to classify the events in WCE videos. For the system, we first extract general features of a WCE video that can characterize the intestinal contractions in digestive organs. Then, the event boundaries are identified by using a High Frequency Content (HFC) function, and the segments are classified into WCE events by special features. In this system, we focus on entering the duodenum, entering the cecum, and active bleeding. This assessment system can easily be extended to discover more WCE events, such as detailed organ segmentation and more diseases, by using new special features. In addition, the system provides a score for every WCE image for each event. Using the event scores, the system helps a specialist speed up the diagnosis process.
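
    A common form of the High Frequency Content (HFC) function weights each spectral bin by its index, so that a jump in the curve marks a candidate event boundary; the exact variant and parameters used by the authors may differ, and the one-dimensional synthetic signal below stands in for the per-frame feature sequence of a WCE video.

    ```python
    import numpy as np

    def high_frequency_content(signal, frame_len=256, hop=128):
        """Frame-wise HFC: spectral magnitude weighted by bin index.
        Peaks in (the positive difference of) this curve are candidate
        event boundaries."""
        hfc = []
        for start in range(0, len(signal) - frame_len + 1, hop):
            frame = signal[start:start + frame_len]
            spectrum = np.abs(np.fft.rfft(frame))
            bins = np.arange(len(spectrum))
            hfc.append(np.sum(bins * spectrum ** 2))
        return np.array(hfc)

    x = np.concatenate([np.zeros(2048), np.random.randn(2048)])  # quiet -> active
    curve = high_frequency_content(x)
    boundary = int(np.argmax(np.diff(curve)))  # frame where the HFC jumps
    print(boundary)
    ```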

  4. TwitterSensing: An Event-Based Approach for Wireless Sensor Networks Optimization Exploiting Social Media in Smart City Applications

    PubMed Central

    2018-01-01

    Modern cities are subject to periodic or unexpected critical events, which may bring economic losses or even put people in danger. When some monitoring systems based on wireless sensor networks are deployed, sensing and transmission configurations of sensor nodes may be adjusted exploiting the relevance of the considered events, but efficient detection and classification of events of interest may be hard to achieve. In Smart City environments, several people spontaneously post information in social media about some event that is being observed and such information may be mined and processed for detection and classification of critical events. This article proposes an integrated approach to detect and classify events of interest posted in social media, notably in Twitter, and the assignment of sensing priorities to source nodes. By doing so, wireless sensor networks deployed in Smart City scenarios can be optimized for higher efficiency when monitoring areas under the influence of the detected events. PMID:29614060

  5. TwitterSensing: An Event-Based Approach for Wireless Sensor Networks Optimization Exploiting Social Media in Smart City Applications.

    PubMed

    Costa, Daniel G; Duran-Faundez, Cristian; Andrade, Daniel C; Rocha-Junior, João B; Peixoto, João Paulo Just

    2018-04-03

    Modern cities are subject to periodic or unexpected critical events, which may bring economic losses or even put people in danger. When some monitoring systems based on wireless sensor networks are deployed, sensing and transmission configurations of sensor nodes may be adjusted exploiting the relevance of the considered events, but efficient detection and classification of events of interest may be hard to achieve. In Smart City environments, several people spontaneously post information in social media about some event that is being observed and such information may be mined and processed for detection and classification of critical events. This article proposes an integrated approach to detect and classify events of interest posted in social media, notably in Twitter, and the assignment of sensing priorities to source nodes. By doing so, wireless sensor networks deployed in Smart City scenarios can be optimized for higher efficiency when monitoring areas under the influence of the detected events.

  6. ElarmS Earthquake Early Warning System 2016 Performance and New Research

    NASA Astrophysics Data System (ADS)

    Chung, A. I.; Allen, R. M.; Hellweg, M.; Henson, I. H.; Neuhauser, D. S.

    2016-12-01

    The ElarmS earthquake early warning system has been detecting earthquakes throughout California since 2007. It is one of the algorithms that contributes to the West Coast ShakeAlert, a prototype earthquake early warning system being developed for the US West Coast. ElarmS is also running in the Pacific Northwest, and in Israel, Chile, Turkey, and Peru in test mode. We summarize the performance of the ElarmS system over the past year and review some of the more problematic events that the system has encountered. During the first half of 2016 (2016-01-01 through 2016-07-21), ElarmS successfully alerted on all events with ANSS catalog magnitudes M>3 in the Los Angeles area; the mean alert time for these 9 events was just 4.84 seconds. In the San Francisco Bay Area, ElarmS detected 26 events with ANSS catalog magnitudes M>3, with a mean alert time of 9.12 seconds. The alert times are longer in the Bay Area than in the Los Angeles area due to the sparser network of stations in the Bay Area. Seven Bay Area events were not detected by ElarmS; these events occurred in areas with less dense station coverage. In addition, ElarmS sent alerts for 13 of the 16 moderately sized (ANSS catalog magnitudes M>4) events that occurred throughout the state of California. One of the missed events was an M4.5 that occurred far offshore in the northernmost part of the state; the other two missed events occurred inland in regions with sparse station coverage. Over the past year, we have worked towards the implementation of a new filterbank teleseismic filter algorithm, which we will discuss. Other than teleseismic events, a significant cause of false alerts and severely mislocated events is spurious triggers being associated with triggers from a real earthquake. Here, we address new approaches to filtering out problematic triggers.

  7. Unsupervised Spatial Event Detection in Targeted Domains with Applications to Civil Unrest Modeling

    PubMed Central

    Zhao, Liang; Chen, Feng; Dai, Jing; Hua, Ting; Lu, Chang-Tien; Ramakrishnan, Naren

    2014-01-01

    Twitter has become a popular data source as a surrogate for monitoring and detecting events. Targeted domains such as crime, election, and social unrest require the creation of algorithms capable of detecting events pertinent to these domains. Due to the unstructured language, short-length messages, dynamics, and heterogeneity typical of Twitter data streams, it is technically difficult and labor-intensive to develop and maintain supervised learning systems. We present a novel unsupervised approach for detecting spatial events in targeted domains and illustrate this approach using one specific domain, viz. civil unrest modeling. Given a targeted domain, we propose a dynamic query expansion algorithm to iteratively expand domain-related terms, and generate a tweet homogeneous graph. An anomaly identification method is utilized to detect spatial events over this graph by jointly maximizing local modularity and spatial scan statistics. Extensive experiments conducted in 10 Latin American countries demonstrate the effectiveness of the proposed approach. PMID:25350136

  8. TU-G-BRD-01: Quantifying the Effectiveness of the Physics Pre-Treatment Plan Review for Detecting Errors in Radiation Therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gopan, O; Novak, A; Zeng, J

    Purpose: Physics pre-treatment plan review is crucial to safe radiation oncology treatments. Studies show that most errors originate in treatment planning, which underscores the importance of physics plan review. As a QA measure the physics review is of fundamental importance and is central to the profession of medical physics. However, little is known about its effectiveness, and more hard data are needed. The purpose of this study was to quantify the effectiveness of the physics review with the goal of improving it. Methods: This study analyzed 315 “potentially serious” near-miss incidents within an institutional incident learning system collected over a two-year period. 139 of these originated prior to physics review and were found at the review or after. Incidents were classified as events that: 1) were detected by physics review, 2) could have been detected (but were not), and 3) could not have been detected. Category 1 and 2 events were classified by which specific check (within physics review) detected or could have detected the event. Results: Of the 139 analyzed events, 73/139 (53%) were detected or could have been detected by the physics review, although 42/73 (58%) were not actually detected. 45/73 (62%) errors originated in treatment planning, making physics review the first step in the workflow that could detect the error. Two specific physics checks were particularly effective (combined effectiveness of >20%): verifying DRRs (8/73) and verifying isocenter (7/73). Software-based plan checking systems were evaluated and found to have a potential effectiveness of 40%. Given current data structures, software implementations of some tests, such as the isocenter verification check, would be challenging. Conclusion: Physics plan review is a key safety measure and can detect the majority of reported events. However, a majority of events that potentially could have been detected were NOT detected in this study, indicating the need to improve the performance of physics review.

  9. Contribution of Infrasound to IDC Reviewed Event Bulletin

    NASA Astrophysics Data System (ADS)

    Bittner, Paulina; Polich, Paul; Gore, Jane; Ali, Sherif Mohamed; Medinskaya, Tatiana; Mialle, Pierrick

    2016-04-01

    Until 2003, two waveform technologies, i.e. seismic and hydroacoustic, were used to detect and locate events included in the International Data Centre (IDC) Reviewed Event Bulletin (REB). The first atmospheric event was published in the REB in 2003, but infrasound detections could not be used by the Global Association (GA) software due to the unmanageably high number of spurious associations. Offline improvements of the automatic processing took place to reduce the number of false detections to a reasonable level. In February 2010 the infrasound technology was reintroduced to IDC operations and has contributed to both automatic and reviewed IDC bulletins. The primary contribution of infrasound technology is to detect atmospheric events. These events may also be observed at seismic stations, which significantly improves event location. Examples of REB events detected by the International Monitoring System (IMS) infrasound network include fireballs (e.g. the Bangkok fireball, 2015), volcanic eruptions (e.g. Calbuco, Chile, 2015) and large surface explosions (e.g. Tianjin, China, 2015). Quarry blasts and large earthquakes belong to events primarily recorded at seismic stations of the IMS network but are often also detected at the infrasound stations. The presence of an infrasound detection associated with an event from a mining area indicates a surface explosion. Satellite imaging and a database of active mines can be used to confirm the origin of such events. This presentation will summarize the contribution of 6 years of infrasound data to IDC bulletins and provide examples of events recorded by the IMS infrasound network. Results of this study may help to improve the location of small events with observations at infrasound stations.

  10. Laboratory-Based Prospective Surveillance for Community Outbreaks of Shigella spp. in Argentina

    PubMed Central

    Viñas, María R.; Tuduri, Ezequiel; Galar, Alicia; Yih, Katherine; Pichel, Mariana; Stelling, John; Brengi, Silvina P.; Della Gaspera, Anabella; van der Ploeg, Claudia; Bruno, Susana; Rogé, Ariel; Caffer, María I.; Kulldorff, Martin; Galas, Marcelo

    2013-01-01

    Background To implement effective control measures, timely outbreak detection is essential. Shigella is the most common cause of bacterial diarrhea in Argentina. Highly resistant clones of Shigella have emerged, and outbreaks have been recognized in closed settings and in whole communities. We hereby report our experience with an evolving, integrated, laboratory-based, near real-time surveillance system operating in six contiguous provinces of Argentina during April 2009 to March 2012. Methodology To detect localized shigellosis outbreaks in a timely manner, we used the prospective space-time permutation scan statistic algorithm of SaTScan, embedded in WHONET software. Twenty-three laboratories sent updated Shigella data on a weekly basis to the National Reference Laboratory. Cluster detection analysis was performed at several taxonomic levels: for all Shigella spp., for serotypes within species and for antimicrobial resistance phenotypes within species. Shigella isolates associated with statistically significant signals (clusters in time/space with recurrence interval ≥365 days) were subtyped by pulsed field gel electrophoresis (PFGE) using PulseNet protocols. Principal Findings In three years of active surveillance, our system detected 32 statistically significant events, 26 of them identified before hospital staff was aware of any unexpected increase in the number of Shigella isolates. Twenty-six signals were investigated by PFGE, which confirmed a close relationship among the isolates for 22 events (84.6%). Seven events were investigated epidemiologically, which revealed links among the patients. Seventeen events were found at the resistance profile level. The system detected events of public health importance: infrequent resistance profiles, long-lasting and/or re-emergent clusters and events important for their duration or size, which were reported to local public health authorities. Conclusions/Significance The WHONET-SaTScan system may serve as a model for surveillance and can be applied to other pathogens, implemented by other networks, and scaled up to national and international levels for early detection and control of outbreaks. PMID:24349586

  11. Laboratory-based prospective surveillance for community outbreaks of Shigella spp. in Argentina.

    PubMed

    Viñas, María R; Tuduri, Ezequiel; Galar, Alicia; Yih, Katherine; Pichel, Mariana; Stelling, John; Brengi, Silvina P; Della Gaspera, Anabella; van der Ploeg, Claudia; Bruno, Susana; Rogé, Ariel; Caffer, María I; Kulldorff, Martin; Galas, Marcelo

    2013-01-01

    To implement effective control measures, timely outbreak detection is essential. Shigella is the most common cause of bacterial diarrhea in Argentina. Highly resistant clones of Shigella have emerged, and outbreaks have been recognized in closed settings and in whole communities. We hereby report our experience with an evolving, integrated, laboratory-based, near real-time surveillance system operating in six contiguous provinces of Argentina during April 2009 to March 2012. To detect localized shigellosis outbreaks in a timely manner, we used the prospective space-time permutation scan statistic algorithm of SaTScan, embedded in WHONET software. Twenty-three laboratories sent updated Shigella data on a weekly basis to the National Reference Laboratory. Cluster detection analysis was performed at several taxonomic levels: for all Shigella spp., for serotypes within species and for antimicrobial resistance phenotypes within species. Shigella isolates associated with statistically significant signals (clusters in time/space with recurrence interval ≥365 days) were subtyped by pulsed field gel electrophoresis (PFGE) using PulseNet protocols. In three years of active surveillance, our system detected 32 statistically significant events, 26 of them identified before hospital staff was aware of any unexpected increase in the number of Shigella isolates. Twenty-six signals were investigated by PFGE, which confirmed a close relationship among the isolates for 22 events (84.6%). Seven events were investigated epidemiologically, which revealed links among the patients. Seventeen events were found at the resistance profile level. The system detected events of public health importance: infrequent resistance profiles, long-lasting and/or re-emergent clusters and events important for their duration or size, which were reported to local public health authorities. The WHONET-SaTScan system may serve as a model for surveillance and can be applied to other pathogens, implemented by other networks, and scaled up to national and international levels for early detection and control of outbreaks.

  12. Development and application of absolute quantitative detection by duplex chamber-based digital PCR of genetically modified maize events without pretreatment steps.

    PubMed

    Zhu, Pengyu; Fu, Wei; Wang, Chenguang; Du, Zhixin; Huang, Kunlun; Zhu, Shuifang; Xu, Wentao

    2016-04-15

    The possibility of the absolute quantitation of GMO events by digital PCR was recently reported. However, most absolute quantitation methods based on the digital PCR required pretreatment steps. Meanwhile, singleplex detection could not meet the demand of the absolute quantitation of GMO events that is based on the ratio of foreign fragments and reference genes. Thus, to promote the absolute quantitative detection of different GMO events by digital PCR, we developed a quantitative detection method based on duplex digital PCR without pretreatment. Moreover, we tested 7 GMO events in our study to evaluate the fitness of our method. The optimized combination of foreign and reference primers, limit of quantitation (LOQ), limit of detection (LOD) and specificity were validated. The results showed that the LOQ of our method for different GMO events was 0.5%, while the LOD is 0.1%. Additionally, we found that duplex digital PCR could achieve the detection results with lower RSD compared with singleplex digital PCR. In summary, the duplex digital PCR detection system is a simple and stable way to achieve the absolute quantitation of different GMO events. Moreover, the LOQ and LOD indicated that this method is suitable for the daily detection and quantitation of GMO events. Copyright © 2016 Elsevier B.V. All rights reserved.
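
    The ratio-based absolute quantitation described above can be sketched with the standard Poisson correction used in digital PCR, taking the event-specific and reference-gene channels of a duplex chip; the partition counts below are invented for illustration and are not data from the paper.

    ```python
    import math

    def copies_per_partition(positive, total):
        """Poisson correction standard in digital PCR: mean number of target
        copies per partition given the fraction of positive partitions."""
        return -math.log(1.0 - positive / total)

    def gmo_percentage(event_pos, ref_pos, total):
        """Absolute quantitation as the ratio of event-specific to
        reference-gene copies, expressed in percent."""
        return 100.0 * copies_per_partition(event_pos, total) / \
               copies_per_partition(ref_pos, total)

    # Illustrative duplex-chamber counts (one chip, two dyes), not real data.
    print(round(gmo_percentage(event_pos=38, ref_pos=7600, total=20000), 2))
    ```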

  13. Shadow Detection Based on Regions of Light Sources for Object Extraction in Nighttime Video

    PubMed Central

    Lee, Gil-beom; Lee, Myeong-jin; Lee, Woo-Kyung; Park, Joo-heon; Kim, Tae-Hwan

    2017-01-01

    Intelligent video surveillance systems detect pre-configured surveillance events through background modeling, foreground and object extraction, object tracking, and event detection. Shadow regions inside video frames sometimes appear as foreground objects, interfere with ensuing processes, and finally degrade the event detection performance of the systems. Conventional studies have mostly used intensity, color, texture, and geometric information to perform shadow detection in daytime video, but these methods lack the capability of removing shadows in nighttime video. In this paper, a novel shadow detection algorithm for nighttime video is proposed; this algorithm partitions each foreground object based on the object’s vertical histogram and screens out shadow objects by validating their orientations heading toward regions of light sources. From the experimental results, it can be seen that the proposed algorithm shows more than 93.8% shadow removal and 89.9% object extraction rates for nighttime video sequences, and the algorithm outperforms conventional shadow removal algorithms designed for daytime videos. PMID:28327515

  14. Flight Tests of the Turbulence Prediction and Warning System (TPAWS)

    NASA Technical Reports Server (NTRS)

    Hamilton, David W.; Proctor, Fred H.; Ahmad, Nashat N.

    2012-01-01

    Flight tests of the National Aeronautics and Space Administration's Turbulence Prediction And Warning System (TPAWS) were conducted in the Fall of 2000 and Spring of 2002. TPAWS is a radar-based airborne turbulence detection system. During twelve flights, NASA's B-757 tallied 53 encounters with convectively induced turbulence. Analysis of data collected during 49 encounters in the Spring of 2002 showed that the TPAWS Airborne Turbulence Detection System (ATDS) successfully detected 80% of the events at least 30 seconds prior to the encounter, achieving FAA recommended performance criteria. Details of the flights, the prevailing weather conditions, and each of the turbulence events are presented in this report. Sensor and environmental characterizations are also provided.

  15. Automatic fall detection using wearable biomedical signal measurement terminal.

    PubMed

    Nguyen, Thuy-Trang; Cho, Myeong-Chan; Lee, Tae-Soo

    2009-01-01

    In our study, we developed a mobile waist-mounted device which can monitor the subject's acceleration signal, detect fall events in real time with high accuracy, and automatically send an emergency message to a remote server via a CDMA module. When a fall event happens, the system also generates an alarm sound at 50 Hz to alert other people until the subject can sit up or stand up. A Kionix KXM52-1050 tri-axial accelerometer and a Bellwave BSM856 CDMA standalone modem were used to detect and manage fall events. We used not only a simple threshold algorithm but also some supporting methods to increase the accuracy of our system (nearly 100% in a laboratory environment). Timely fall detection can prevent death due to the long-lie effect and therefore increase the independence of elderly people in an unsupervised living environment.
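
    A minimal sketch of a threshold rule of the kind mentioned above, flagging a near-free-fall dip followed by an impact spike in the acceleration magnitude; the thresholds, window length and synthetic trace are assumptions for illustration, not the authors' algorithm or supporting methods.

    ```python
    import math

    G = 9.81  # gravitational acceleration, m/s^2

    def detect_fall(samples, impact_threshold=2.5 * G, free_fall_threshold=0.4 * G):
        """Threshold rule on the acceleration-vector magnitude: a near-free-fall
        dip followed shortly by a large impact spike."""
        mags = [math.sqrt(x * x + y * y + z * z) for x, y, z in samples]
        for i, m in enumerate(mags):
            if m < free_fall_threshold:
                if any(mm > impact_threshold for mm in mags[i:i + 50]):  # short look-ahead window
                    return True
        return False

    # Synthetic trace: standing (1 g), brief free fall, hard impact, lying still.
    trace = [(0, 0, G)] * 20 + [(0, 0, 1.0)] * 5 + [(0, 20.0, 25.0)] * 2 + [(0, 0, G)] * 20
    print(detect_fall(trace))  # True -> would trigger the emergency message
    ```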

  16. Traffic Congestion Detection System through Connected Vehicles and Big Data

    PubMed Central

    Cárdenas-Benítez, Néstor; Aquino-Santos, Raúl; Magaña-Espinoza, Pedro; Aguilar-Velazco, José; Edwards-Block, Arthur; Medina Cass, Aldo

    2016-01-01

    This article discusses the simulation and evaluation of a traffic congestion detection system which combines inter-vehicular communications, fixed roadside infrastructure and infrastructure-to-infrastructure connectivity and big data. The system discussed in this article permits drivers to identify traffic congestion and change their routes accordingly, thus reducing the total emissions of CO2 and decreasing travel time. This system monitors, processes and stores large amounts of data and can detect traffic congestion in a precise way by means of a series of algorithms that reduce localized vehicular emissions by rerouting vehicles. To simulate and evaluate the proposed system, a big data cluster was developed based on Cassandra, which was used in tandem with the OMNeT++ discrete event network simulator, coupled with the SUMO (Simulation of Urban MObility) traffic simulator and the Veins vehicular network framework. The results validate the efficiency of the traffic detection system and its positive impact in detecting, reporting and rerouting traffic when traffic events occur. PMID:27136548

  17. Traffic Congestion Detection System through Connected Vehicles and Big Data.

    PubMed

    Cárdenas-Benítez, Néstor; Aquino-Santos, Raúl; Magaña-Espinoza, Pedro; Aguilar-Velazco, José; Edwards-Block, Arthur; Medina Cass, Aldo

    2016-04-28

    This article discusses the simulation and evaluation of a traffic congestion detection system which combines inter-vehicular communications, fixed roadside infrastructure and infrastructure-to-infrastructure connectivity and big data. The system discussed in this article permits drivers to identify traffic congestion and change their routes accordingly, thus reducing the total emissions of CO₂ and decreasing travel time. This system monitors, processes and stores large amounts of data and can detect traffic congestion in a precise way by means of a series of algorithms that reduce localized vehicular emissions by rerouting vehicles. To simulate and evaluate the proposed system, a big data cluster was developed based on Cassandra, which was used in tandem with the OMNeT++ discrete event network simulator, coupled with the SUMO (Simulation of Urban MObility) traffic simulator and the Veins vehicular network framework. The results validate the efficiency of the traffic detection system and its positive impact in detecting, reporting and rerouting traffic when traffic events occur.

  18. Seismic Characterization of EGS Reservoirs

    NASA Astrophysics Data System (ADS)

    Templeton, D. C.; Pyle, M. L.; Matzel, E.; Myers, S.; Johannesson, G.

    2014-12-01

    To aid in the seismic characterization of Engineered Geothermal Systems (EGS), we enhance the traditional microearthquake detection and location methodologies at two EGS systems. We apply the Matched Field Processing (MFP) seismic imaging technique to detect new seismic events using known discrete microearthquake sources. Events identified using MFP are typically smaller-magnitude events or events that occur within the coda of a larger event. Additionally, we apply a Bayesian multiple-event seismic location algorithm, called MicroBayesLoc, to estimate the 95% probability ellipsoids for events with high signal-to-noise ratios (SNR). Such probability ellipsoid information can provide evidence for determining whether a seismic lineation could be real or simply within the anticipated error range. We apply this methodology to the Basel EGS data set and compare it to another EGS dataset. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344.

  19. Automatic Detection and Vulnerability Analysis of Areas Endangered by Heavy Rain

    NASA Astrophysics Data System (ADS)

    Krauß, Thomas; Fischer, Peter

    2016-08-01

    In this paper we present a new method for the fully automatic detection and derivation of areas endangered by heavy rainfall, based only on digital elevation models. Tracking the news shows that the majority of occurring natural hazards are flood events, and many flood prediction systems have already been developed. However, most of these existing systems for deriving areas endangered by flooding events are based only on horizontal and vertical distances to existing rivers and lakes. Typically, such systems do not take into account dangers arising directly from heavy rain events. In a study conducted by us together with a German insurance company, a new approach for the detection of areas endangered by heavy rain was shown to give a high correlation between the derived endangered areas and the losses claimed at the insurance company. Here we describe three methods for the classification of digital terrain models, analyze their usability for automatic detection and vulnerability analysis of areas endangered by heavy rainfall, and evaluate the results using the available insurance data.

  20. 40 CFR 63.7833 - How do I demonstrate continuous compliance with the emission limitations that apply to me?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... baghouse equipped with a bag leak detection system, operating and maintaining each bag leak detection... requirements. If you increase or decrease the sensitivity of the bag leak detection system beyond the limits... event of a bag leak detection system alarm or when the hourly average opacity exceeded 5 percent, the...

  1. Deep Recurrent Neural Networks for seizure detection and early seizure detection systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Talathi, S. S.

    Epilepsy is a common neurological disease, affecting about 0.6-0.8% of the world population. Epileptic patients suffer from chronic unprovoked seizures, which can result in a broad spectrum of debilitating medical and social consequences. Since seizures, in general, occur infrequently and are unpredictable, automated seizure detection systems are recommended to screen for seizures during long-term electroencephalogram (EEG) recordings. In addition, systems for early seizure detection can lead to the development of new types of intervention systems that are designed to control or shorten the duration of seizure events. In this article, we investigate the utility of recurrent neural networks (RNNs) in designing seizure detection and early seizure detection systems. We propose a deep learning framework via the use of Gated Recurrent Unit (GRU) RNNs for seizure detection. We use publicly available data in order to evaluate our method and demonstrate very promising evaluation results with overall accuracy close to 100%. We also systematically investigate the application of our method for early seizure warning systems. Our method can detect about 98% of seizure events within the first 5 seconds of the overall epileptic seizure duration.
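
    A minimal GRU-based classifier in PyTorch illustrating the kind of model described above; the channel count, window length, layer sizes and the use of PyTorch are assumptions for illustration, not the authors' architecture or training setup.

    ```python
    import torch
    import torch.nn as nn

    class SeizureGRU(nn.Module):
        """Minimal GRU classifier over EEG windows: a stack of GRU layers
        followed by a per-window seizure/non-seizure decision."""
        def __init__(self, n_channels=23, hidden=64, layers=2):
            super().__init__()
            self.gru = nn.GRU(n_channels, hidden, num_layers=layers, batch_first=True)
            self.head = nn.Linear(hidden, 2)   # seizure vs. non-seizure

        def forward(self, x):                  # x: (batch, time, channels)
            out, _ = self.gru(x)
            return self.head(out[:, -1])       # classify using the last hidden state

    model = SeizureGRU()
    eeg = torch.randn(4, 256, 23)              # 4 one-second windows at 256 Hz, 23 channels
    print(model(eeg).shape)                    # torch.Size([4, 2])
    ```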

  2. A Method for Automated Detection of Usability Problems from Client User Interface Events

    PubMed Central

    Saadawi, Gilan M.; Legowski, Elizabeth; Medvedeva, Olga; Chavan, Girish; Crowley, Rebecca S.

    2005-01-01

    Think-aloud usability analysis (TAU) provides extremely useful data but is very time-consuming and expensive to perform because of the extensive manual video analysis that is required. We describe a simple method for automated detection of usability problems from client user interface events for a developing medical intelligent tutoring system. The method incorporates (1) an agent-based method for communication that funnels all interface events and system responses to a centralized database, (2) a simple schema for representing interface events and higher order subgoals, and (3) an algorithm that reproduces the criteria used for manual coding of usability problems. A correction factor was empirically determined to account for the slower task performance of users when thinking aloud. We tested the validity of the method by simultaneously identifying usability problems using TAU and computing them from stored interface event data using the proposed algorithm. All usability problems that did not rely on verbal utterances were detectable with the proposed method. PMID:16779121

  3. Detection of anomalous events

    DOEpatents

    Ferragut, Erik M.; Laska, Jason A.; Bridges, Robert A.

    2016-06-07

    A system is described for receiving a stream of events and scoring the events based on anomalousness and maliciousness (or other classification). The system can include a plurality of anomaly detectors that together implement an algorithm to identify low-probability events and detect atypical traffic patterns. The anomaly detector provides for comparability of disparate sources of data (e.g., network flow data and firewall logs.) Additionally, the anomaly detector allows for regulatability, meaning that the algorithm can be user configurable to adjust a number of false alerts. The anomaly detector can be used for a variety of probability density functions, including normal Gaussian distributions, irregular distributions, as well as functions associated with continuous or discrete variables.
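
    The low-probability scoring idea can be sketched by fitting a simple baseline distribution and scoring each incoming event by its negative log density, with the alert threshold chosen from the baseline scores (one way to make the alert rate regulatable); the Gaussian model and the data below are illustrative assumptions, not the patented detectors.

    ```python
    import math

    def fit_gaussian(values):
        """Fit a simple Gaussian baseline model to historical observations."""
        mean = sum(values) / len(values)
        var = sum((v - mean) ** 2 for v in values) / len(values) or 1e-9
        return mean, var

    def anomaly_score(x, mean, var):
        """Score an event by how improbable it is under the fitted model
        (negative log density); higher means more anomalous."""
        return 0.5 * math.log(2 * math.pi * var) + (x - mean) ** 2 / (2 * var)

    baseline = [10, 11, 9, 10, 12, 10, 11]      # e.g. flows per second, historical
    mean, var = fit_gaussian(baseline)
    threshold = max(anomaly_score(v, mean, var) for v in baseline)  # regulates alert rate

    alerts = []
    for v in [10, 11, 48, 9]:                   # incoming event stream
        s = anomaly_score(v, mean, var)
        if s > threshold:
            alerts.append((v, round(s, 2)))
    print(alerts)
    ```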

  4. An Efficient Pattern Mining Approach for Event Detection in Multivariate Temporal Data

    PubMed Central

    Batal, Iyad; Cooper, Gregory; Fradkin, Dmitriy; Harrison, James; Moerchen, Fabian; Hauskrecht, Milos

    2015-01-01

    This work proposes a pattern mining approach to learn event detection models from complex multivariate temporal data, such as electronic health records. We present Recent Temporal Pattern mining, a novel approach for efficiently finding predictive patterns for event detection problems. This approach first converts the time series data into time-interval sequences of temporal abstractions. It then constructs more complex time-interval patterns backward in time using temporal operators. We also present the Minimal Predictive Recent Temporal Patterns framework for selecting a small set of predictive and non-spurious patterns. We apply our methods for predicting adverse medical events in real-world clinical data. The results demonstrate the benefits of our methods in learning accurate event detection models, which is a key step for developing intelligent patient monitoring and decision support systems. PMID:26752800

  5. Possible Evidence for an Event Horizon in Cyg XR-1

    NASA Technical Reports Server (NTRS)

    Dolan, Joseph F.; Fisher, Richard R. (Technical Monitor)

    2001-01-01

    The X-ray emitting component in the Cyg XR-1/HDE226868 system is a leading candidate for identification as a stellar-mass sized black hole. The positive identification of a black hole as predicted by general relativity requires the detection of an event horizon surrounding the point singularity. One signature of such an event horizon would be the existence of dying pulse trains emitted by material spiraling into the event horizon from the last stable orbit around the black hole. We observed the Cyg XR-1 system at three different epochs in a 1400 - 3000 A bandpass with 0.1 ms time resolution using the Hubble Space Telescope's High Speed Photometer. Repeated excursions of the detected flux by more than three standard deviations above the mean are present in the UV flux with FWHM 1 - 10 ms. If any of these excursions are pulses of radiation produced in the system (and not just stochastic variability associated with the Poisson distribution of detected photon arrival times), then this short a timescale requires that the pulses originate in the accretion disk around Cyg XR-1. Two series of pulses with characteristics similar to those expected from dying pulse trains were detected in three hours of observation.

  6. On-site detection of stacked genetically modified soybean based on event-specific TM-LAMP and a DNAzyme-lateral flow biosensor.

    PubMed

    Cheng, Nan; Shang, Ying; Xu, Yuancong; Zhang, Li; Luo, Yunbo; Huang, Kunlun; Xu, Wentao

    2017-05-15

    Stacked genetically modified organisms (GMO) are becoming popular for their enhanced production efficiency and improved functional properties, and on-site detection of stacked GMO is an urgent challenge to be solved. In this study, we developed a cascade system combining event-specific tag-labeled multiplex LAMP with a DNAzyme-lateral flow biosensor for reliable detection of stacked events (DP305423 × GTS 40-3-2). Three primer sets, both event-specific and soybean species-specific, were newly designed for the tag-labeled multiplex LAMP system. A trident-like lateral flow biosensor displayed amplified products simultaneously without cross contamination, and DNAzyme enhancement improved the sensitivity effectively. After optimization, the limit of detection was approximately 0.1% (w/w) for stacked GM soybean, which is sensitive enough to detect genetically modified content up to a threshold value established by several countries for regulatory compliance. The entire detection process could be shortened to 120 min without any large-scale instrumentation. This method may be useful for the in-field detection of DP305423 × GTS 40-3-2 soybean on a single kernel basis and on-site screening tests of stacked GM soybean lines and individual parent GM soybean lines in highly processed foods. Copyright © 2017 Elsevier B.V. All rights reserved.

  7. Passive (Micro-) Seismic Event Detection by Identifying Embedded "Event" Anomalies Within Statistically Describable Background Noise

    NASA Astrophysics Data System (ADS)

    Baziw, Erick; Verbeek, Gerald

    2012-12-01

    Among engineers there is considerable interest in the real-time identification of "events" within time series data with a low signal-to-noise ratio. This is especially true for acoustic emission analysis, which is utilized to assess the integrity and safety of many structures and is also applied in the field of passive seismic monitoring (PSM). Here an array of seismic receivers is used to acquire acoustic signals to monitor locations where seismic activity is expected: underground excavations, deep open pits and quarries, reservoirs into which fluids are injected or from which fluids are produced, permeable subsurface formations, or sites of large underground explosions. The most important element of PSM is event detection: the monitoring of seismic acoustic emissions is a continuous, real-time process which typically runs 24 h a day, 7 days a week, and therefore a PSM system with poor event detection can easily acquire terabytes of useless data as it does not identify crucial acoustic events. This paper outlines a new algorithm developed for this application, the so-called SEED™ (Signal Enhancement and Event Detection) algorithm. The SEED™ algorithm uses real-time Bayesian recursive estimation digital filtering techniques for PSM signal enhancement and event detection.

  8. Closing the loop in ICU decision support: physiologic event detection, alerts, and documentation.

    PubMed Central

    Norris, P. R.; Dawant, B. M.

    2001-01-01

    Automated physiologic event detection and alerting is a challenging task in the ICU. Ideally care providers should be alerted only when events are clinically significant and there is opportunity for corrective action. However, the concepts of clinical significance and opportunity are difficult to define in automated systems, and effectiveness of alerting algorithms is difficult to measure. This paper describes recent efforts on the Simon project to capture information from ICU care providers about patient state and therapy in response to alerts, in order to assess the value of event definitions and progressively refine alerting algorithms. Event definitions for intracranial pressure and cerebral perfusion pressure were studied by implementing a reliable system to automatically deliver alerts to clinical users' alphanumeric pagers, and to capture associated documentation about patient state and therapy when the alerts occurred. During a 6-month test period in the trauma ICU at Vanderbilt University Medical Center, 530 alerts were detected in 2280 hours of data spanning 14 patients. Clinical users electronically documented 81% of these alerts as they occurred. Retrospectively classifying documentation based on therapeutic actions taken, or reasons why actions were not taken, provided useful information about ways to potentially improve event definitions and enhance system utility. PMID:11825238

  9. 40 CFR 264.310 - Closure and post-closure care.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... settling, subsidence, erosion, or other events; (2) Continue to operate the leachate collection and removal system until leachate is no longer detected; (3) Maintain and monitor the leak detection system in...

  10. 40 CFR 264.310 - Closure and post-closure care.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... settling, subsidence, erosion, or other events; (2) Continue to operate the leachate collection and removal system until leachate is no longer detected; (3) Maintain and monitor the leak detection system in...

  11. 40 CFR 264.310 - Closure and post-closure care.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... settling, subsidence, erosion, or other events; (2) Continue to operate the leachate collection and removal system until leachate is no longer detected; (3) Maintain and monitor the leak detection system in...

  12. 40 CFR 264.310 - Closure and post-closure care.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... settling, subsidence, erosion, or other events; (2) Continue to operate the leachate collection and removal system until leachate is no longer detected; (3) Maintain and monitor the leak detection system in...

  13. 40 CFR 264.310 - Closure and post-closure care.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... settling, subsidence, erosion, or other events; (2) Continue to operate the leachate collection and removal system until leachate is no longer detected; (3) Maintain and monitor the leak detection system in...

  14. Video-tracker trajectory analysis: who meets whom, when and where

    NASA Astrophysics Data System (ADS)

    Jäger, U.; Willersinn, D.

    2010-04-01

    Unveiling unusual or hostile events by observing manifold moving persons in a crowd is a challenging task for human operators, especially when sitting in front of monitor walls for hours. Typically, hostile events are rare. Thus, due to tiredness and negligence the operator may miss important events. In such situations, an automatic alarming system is able to support the human operator. The system incorporates a processing chain consisting of (1) people tracking, (2) event detection, (3) data retrieval, and (4) display of the relevant video sequence overlaid by highlighted regions of interest. In this paper we focus on the event detection stage of the processing chain mentioned above. In our case, the selected event of interest is the encounter of people. Although based on a rather simple trajectory analysis, this kind of event is of great practical importance because it paves the way to answering the question "who meets whom, when and where". This, in turn, forms the basis to detect potential situations where, for example, money, weapons or drugs are handed over from one person to another in crowded environments like railway stations, airports, or busy streets and squares. The input to the trajectory analysis comes from a multi-object video-based tracking system developed at IOSB which is able to track multiple individuals within a crowd in real time [1]. From this we calculate the inter-distances between all persons on a frame-to-frame basis. We use a sequence of simple rules based on the individuals' kinematics to detect the event mentioned above and to output the frame number, the persons' IDs from the tracker and the pixel coordinates of the meeting position. Using this information, a data retrieval system may extract the corresponding part of the recorded video image sequence and finally allows for replaying the selected video clip with a highlighted region of interest to attract the operator's attention for further visual inspection.
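
    A minimal sketch of the trajectory rule described above: two tracked persons are reported as meeting when their inter-distance stays below a threshold for several consecutive frames. The thresholds, coordinates and output format are illustrative assumptions, not the rules used by the IOSB tracker.

    ```python
    from itertools import combinations
    from math import hypot

    def detect_meetings(tracks, max_dist=50.0, min_frames=3):
        """Flag 'who meets whom, when and where': two tracked persons closer
        than max_dist pixels for at least min_frames consecutive frames."""
        meetings = []
        for (id_a, ta), (id_b, tb) in combinations(tracks.items(), 2):
            run = 0
            for frame, (pa, pb) in enumerate(zip(ta, tb)):
                close = hypot(pa[0] - pb[0], pa[1] - pb[1]) <= max_dist
                run = run + 1 if close else 0
                if run == min_frames:
                    meetings.append((id_a, id_b, frame, pa))  # ids, frame, position
        return meetings

    tracks = {
        7:  [(0, 0), (20, 5), (40, 10), (60, 12), (80, 15)],
        12: [(300, 0), (150, 5), (60, 20), (70, 15), (90, 20)],
    }
    print(detect_meetings(tracks))
    ```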

  15. Drivers of Emerging Infectious Disease Events as a Framework for Digital Detection.

    PubMed

    Olson, Sarah H; Benedum, Corey M; Mekaru, Sumiko R; Preston, Nicholas D; Mazet, Jonna A K; Joly, Damien O; Brownstein, John S

    2015-08-01

    The growing field of digital disease detection, or epidemic intelligence, attempts to improve timely detection and awareness of infectious disease (ID) events. Early detection remains an important priority; thus, the next frontier for ID surveillance is to improve the recognition and monitoring of drivers (antecedent conditions) of ID emergence for signals that precede disease events. These data could help alert public health officials to indicators of elevated ID risk, thereby triggering targeted active surveillance and interventions. We believe that ID emergence risks can be anticipated through surveillance of their drivers, just as successful warning systems of climate-based, meteorologically sensitive diseases are supported by improved temperature and precipitation data. We present approaches to driver surveillance, gaps in the current literature, and a scientific framework for the creation of a digital warning system. Fulfilling the promise of driver surveillance will require concerted action to expand the collection of appropriate digital driver data.

  16. Screening DNA chip and event-specific multiplex PCR detection methods for biotech crops.

    PubMed

    Lee, Seong-Hun

    2014-11-01

    There are about 80 biotech crop events that have been approved by safety assessment in Korea. They have been controlled by genetically modified organism (GMO) and living modified organism (LMO) labeling systems. The DNA-based detection method has been used as an efficient scientific management tool. Recently, the multiplex polymerase chain reaction (PCR) and DNA chip have been developed as simultaneous detection methods for several biotech crops' events. The event-specific multiplex PCR method was developed to detect five biotech maize events: MIR604, Event 3272, LY 038, MON 88017 and DAS-59122-7. The specificity was confirmed and the sensitivity was 0.5%. The screening DNA chip was developed from four endogenous genes of soybean, maize, cotton and canola respectively along with two regulatory elements and seven genes: P35S, tNOS, pat, bar, epsps1, epsps2, pmi, cry1Ac and cry3B. The specificity was confirmed and the sensitivity was 0.5% for four crops' 12 events: one soybean, six maize, three cotton and two canola events. The multiplex PCR and DNA chip can be available for screening, gene-specific and event-specific analysis of biotech crops as efficient detection methods by saving on workload and time. © 2014 Society of Chemical Industry.

  17. APDS: Autonomous Pathogen Detection System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Langlois, R G; Brown, S; Burris, L

    An early warning system to counter bioterrorism, the Autonomous Pathogen Detection System (APDS) continuously monitors the environment for the presence of biological pathogens (e.g., anthrax) and once detected, it sounds an alarm much like a smoke detector warns of a fire. Long before September 11, 2001, this system was being developed to protect domestic venues and events including performing arts centers, mass transit systems, major sporting and entertainment events, and other high profile situations in which the public is at risk of becoming a target of bioterrorist attacks. Customizing off-the-shelf components and developing new components, a multidisciplinary team developed APDS, a stand-alone system for rapid, continuous monitoring of multiple airborne biological threat agents in the environment. The completely automated APDS samples the air, prepares fluid samples in-line, and performs two orthogonal tests: immunoassay and nucleic acid detection. When compared to competing technologies, APDS is unprecedented in terms of flexibility and system performance.

  18. Automatic identification of alpine mass movements based on seismic and infrasound signals

    NASA Astrophysics Data System (ADS)

    Schimmel, Andreas; Hübl, Johannes

    2017-04-01

    The automatic detection and identification of alpine mass movements like debris flows, debris floods or landslides is becoming increasingly important for mitigation measures in the densely populated and intensively used alpine regions. Since these mass movement processes emit characteristic seismic and acoustic waves in the low frequency range, such events can be detected and identified from these signals. Several approaches for detection and warning systems based on seismic or infrasound signals have already been developed. A combination of both methods, which can increase detection probability and reduce false alarms, is currently used very rarely and is therefore a promising basis for an automatic detection and identification system. This work presents an approach for a detection and identification system based on a combination of seismic and infrasound sensors, which can detect sediment-related mass movements from a remote location unaffected by the process. The system is based on one infrasound sensor and one geophone, placed co-located, and a microcontroller on which a specially designed detection algorithm is executed that can detect mass movements in real time directly at the sensor site. Further, this work tries to extract more information from the seismic and infrasound spectra produced by different sediment-related mass movements in order to identify the process type and estimate the magnitude of the event. The system is currently installed and tested on five test sites in Austria, two in Italy, one in Switzerland and one in Germany. This large number of test sites is used to build a database of very different events, which will be the basis for a new identification method for alpine mass movements. These tests show promising results, and the system thus provides an easy-to-install and inexpensive approach for a detection and warning system.
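
    A minimal sketch of the sensor-fusion idea: run a standard STA/LTA trigger on the geophone and the infrasound channel separately and declare a detection only when both exceed their thresholds. Window lengths and thresholds are illustrative placeholders, not the parameters of the system described above.

        import numpy as np

        def sta_lta(x, fs, sta_win=1.0, lta_win=30.0):
            """Classic short-term/long-term average ratio on the squared signal."""
            sta_n, lta_n = int(sta_win * fs), int(lta_win * fs)
            e = x.astype(float) ** 2
            sta = np.convolve(e, np.ones(sta_n) / sta_n, mode="same")
            lta = np.convolve(e, np.ones(lta_n) / lta_n, mode="same")
            return sta / np.maximum(lta, 1e-12)

        def joint_detection(seismic, infrasound, fs, thr_seis=3.0, thr_infra=2.5):
            """Declare a mass-movement detection only when both channels trigger."""
            trig = (sta_lta(seismic, fs) > thr_seis) & (sta_lta(infrasound, fs) > thr_infra)
            return np.flatnonzero(trig) / fs   # detection times in seconds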

  19. Signaling communication events in a computer network

    DOEpatents

    Bender, Carl A.; DiNicola, Paul D.; Gildea, Kevin J.; Govindaraju, Rama K.; Kim, Chulho; Mirza, Jamshed H.; Shah, Gautam H.; Nieplocha, Jaroslaw

    2000-01-01

    A method, apparatus and program product for detecting a communication event in a distributed parallel data processing system in which a message is sent from an origin to a target. A low-level application programming interface (LAPI) is provided which has an operation for associating a counter with a communication event to be detected. The LAPI increments the counter upon the occurrence of the communication event. The number in the counter is monitored, and when the number increases, the event is detected. A completion counter in the origin is associated with the completion of a message being sent from the origin to the target. When the message is completed, LAPI increments the completion counter such that monitoring the completion counter detects the completion of the message. The completion counter may be used to insure that a first message has been sent from the origin to the target and completed before a second message is sent.
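
    The counter mechanism can be illustrated with a short, self-contained sketch (in Python rather than the patent's LAPI): a counter object is associated with a communication event, the transport layer increments it when the event occurs, and a waiter detects the event by observing the counter increase. All names here are invented for illustration.

        import threading

        class EventCounter:
            """Toy analogue of counter-based event signaling: a counter is associated
            with an event, incremented on each occurrence, and a waiter detects the
            event by observing the counter reach a target value."""
            def __init__(self):
                self._value = 0
                self._cond = threading.Condition()

            def increment(self):                 # called when the communication event occurs
                with self._cond:
                    self._value += 1
                    self._cond.notify_all()

            def wait_for(self, target):          # block until the counter reaches target
                with self._cond:
                    while self._value < target:
                        self._cond.wait()

        # usage idea: ensure the first message completed before sending the second
        completion = EventCounter()
        # the transport layer would call completion.increment() when message 1 completes;
        # completion.wait_for(1) would then make it safe to send message 2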

  20. Flow detection via sparse frame analysis for suspicious event recognition in infrared imagery

    NASA Astrophysics Data System (ADS)

    Fernandes, Henrique C.; Batista, Marcos A.; Barcelos, Celia A. Z.; Maldague, Xavier P. V.

    2013-05-01

    It is becoming increasingly evident that intelligent systems are very beneficial for society and that the further development of such systems is necessary to continue to improve society's quality of life. One area that has drawn the attention of recent research is the development of automatic surveillance systems. In our work we outline a system capable of monitoring an uncontrolled area (an outside parking lot) using infrared imagery and recognizing suspicious events in this area. The first step is to identify moving objects and segment them from the scene's background. Our approach is based on a dynamic background-subtraction technique which robustly adapts detection to illumination changes. Only regions where movement is occurring are analyzed, ignoring the influence of pixels from regions where there is no movement, in order to segment moving objects. Regions where movement is occurring are identified using flow detection via sparse frame analysis. During the tracking process the objects are classified into two categories: Persons and Vehicles, based on features such as size and velocity. The last step is to recognize suspicious events that may occur in the scene. Since the objects are correctly segmented and classified it is possible to identify those events using features such as velocity and time spent motionless in one spot. In this paper we recognize the suspicious event "suspicion of object(s) theft from inside a parked vehicle at spot X by a person", and results show that the use of flow detection increases the recognition of this suspicious event from 78.57% to 92.85%.

  1. Monitoring the Microgravity Environment Quality On-Board the International Space Station Using Soft Computing Techniques

    NASA Technical Reports Server (NTRS)

    Jules, Kenol; Lin, Paul P.

    2001-01-01

    This paper presents an artificial intelligence monitoring system developed by the NASA Glenn Principal Investigator Microgravity Services project to help the principal investigator teams identify the primary vibratory disturbance sources that are active, at any moment in time, on-board the International Space Station, which might impact the microgravity environment their experiments are exposed to. From the Principal Investigator Microgravity Services' web site, the principal investigator teams can monitor via a graphical display, in near real time, which event(s) is/are on, such as crew activities, pumps, fans, centrifuges, compressor, crew exercise, platform structural modes, etc., and decide whether or not to run their experiments based on the acceleration environment associated with a specific event. This monitoring system is focused primarily on detecting the vibratory disturbance sources, but could be used as well to detect some of the transient disturbance sources, depending on the events duration. The system has built-in capability to detect both known and unknown vibratory disturbance sources. Several soft computing techniques such as Kohonen's Self-Organizing Feature Map, Learning Vector Quantization, Back-Propagation Neural Networks, and Fuzzy Logic were used to design the system.

  2. Systematic detection of seismic events at Mount St. Helens with an ultra-dense array

    NASA Astrophysics Data System (ADS)

    Meng, X.; Hartog, J. R.; Schmandt, B.; Hotovec-Ellis, A. J.; Hansen, S. M.; Vidale, J. E.; Vanderplas, J.

    2016-12-01

    During the summer of 2014, an ultra-dense array of 900 geophones was deployed around the crater of Mount St. Helens and continuously operated for 15 days. This dataset provides us an unprecedented opportunity to systematically detect seismic events around an active volcano and study their underlying mechanisms. We use a waveform-based matched filter technique to detect seismic events from this dataset. Due to the large volume of continuous data ( 1 TB), we performed the detection on the GPU cluster Stampede (https://www.tacc.utexas.edu/systems/stampede). We build a suite of template events from three catalogs: 1) the standard Pacific Northwest Seismic Network (PNSN) catalog (45 events); 2) the catalog from Hansen&Schmandt (2015) obtained with a reverse-time imaging method (212 events); and 3) the catalog identified with a matched filter technique using the PNSN permanent stations (190 events). By searching for template matches in the ultra-dense array, we find 2237 events. We then calibrate precise relative magnitudes for template and detected events, using a principal component fit to measure waveform amplitude ratios. The magnitude of completeness and b-value of the detected catalog is -0.5 and 1.1, respectively. Our detected catalog shows several intensive swarms, which are likely driven by fluid pressure transients in conduits or slip transients on faults underneath the volcano. We are currently relocating the detected catalog with HypoDD and measuring the seismic velocity changes at Mount St. Helens using the coda wave interferometry of detected repeating earthquakes. The accurate temporal-spatial migration pattern of seismicity and seismic property changes should shed light on the physical processes beneath Mount St. Helens.
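
    A minimal, single-channel sketch of the matched-filter step: normalized cross-correlation of a template waveform against continuous data, with detections declared where the correlation exceeds a multiple of its median absolute deviation. The threshold and the single-station simplification are illustrative; the study stacks correlations over the dense array.

        import numpy as np

        def matched_filter_detect(data, template, threshold=8.0):
            """Slide a template over continuous data and flag high-similarity windows.

            Detection statistic: normalized cross-correlation (CC); a sample is
            declared a detection when |CC| exceeds `threshold` times the median
            absolute deviation of the CC trace (threshold value is illustrative).
            """
            n = len(template)
            t = (template - template.mean()) / max(template.std() * n, 1e-12)
            cc = np.empty(len(data) - n + 1)
            for i in range(len(cc)):
                w = data[i:i + n]
                cc[i] = np.dot(t, (w - w.mean()) / max(w.std(), 1e-12))
            mad = np.median(np.abs(cc - np.median(cc)))
            return np.flatnonzero(np.abs(cc) > threshold * mad), cc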

  3. Robotic guarded motion system and method

    DOEpatents

    Bruemmer, David J.

    2010-02-23

    A robot platform includes perceptors, locomotors, and a system controller. The system controller executes instructions for repeating, on each iteration through an event timing loop, the acts of defining an event horizon, detecting a range to obstacles around the robot, and testing for an event horizon intrusion. Defining the event horizon includes determining a distance from the robot that is proportional to a current velocity of the robot and testing for the event horizon intrusion includes determining if any range to the obstacles is within the event horizon. Finally, on each iteration through the event timing loop, the method includes reducing the current velocity of the robot in proportion to a loop period of the event timing loop if the event horizon intrusion occurs.
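
    One possible reading of the claimed loop, written as a sketch. The robot interface (is_active, current_velocity, read_obstacle_ranges, set_velocity) and the proportionality constant are hypothetical names introduced only to show the event-horizon test and the proportional slow-down.

        import time

        def guarded_motion_loop(robot, loop_period=0.1, k=1.5):
            """Each iteration: define an event horizon proportional to the current
            velocity, read ranges to obstacles, and slow down if any range falls
            inside the horizon. All robot methods are illustrative placeholders."""
            while robot.is_active():
                start = time.monotonic()
                horizon = k * robot.current_velocity()        # distance proportional to speed
                ranges = robot.read_obstacle_ranges()         # perceptor sweep
                if any(r < horizon for r in ranges):          # event horizon intrusion
                    # reduce velocity in proportion to the loop period
                    robot.set_velocity(robot.current_velocity() * (1.0 - loop_period))
                time.sleep(max(0.0, loop_period - (time.monotonic() - start)))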

  4. Detection of Traveling Ionospheric Disturbances (TIDs) from various man-made sources using Global Navigation Satellite System (GNSS)

    NASA Astrophysics Data System (ADS)

    Helmboldt, J.; Park, J.; von Frese, R. R. B.; Grejner-Brzezinska, D. A.

    2016-12-01

    Traveling ionospheric disturbances (TIDs) are generated by various sources and are detectable by observing the spatial and temporal change of electron content in the ionosphere. This study focused on detecting and analyzing TIDs generated by acoustic-gravity waves from man-made events, including underground nuclear explosions (UNEs), mine collapses, mine blasts, and large chemical explosions (LCEs), using the Global Navigation Satellite System (GNSS). We selected different types of events for case studies, covering two US and three North Korean UNEs, two large US mine collapses, three large US mine blasts, one LCE in northern China and a second LCE at the Nevada Test Site. In most cases, we successfully detected the TIDs as array signatures from multiple nearby GNSS stations. The array-based TID signatures from these studies were found to yield event-appropriate TID propagation speeds ranging from a few hundred m/s to roughly a km/s. In addition, the event TID waveforms, propagation angles and directions were established. The TID waveforms and the maximum angle between each event and the IPP of its TID with the longest travel distance from the source may help differentiate UNEs and LCEs, but the uneven distribution of the observing GNSS stations complicates these results. Thus, further analysis is required of the utility of the apertures of event signatures in the ionosphere for discriminating these events. In general, the results of this study show the potential utility of GNSS observations for detecting and mapping the ionospheric signatures of large-energy anthropogenic explosions and subsurface collapses.

  5. Detection of cough signals in continuous audio recordings using hidden Markov models.

    PubMed

    Matos, Sergio; Birring, Surinder S; Pavord, Ian D; Evans, David H

    2006-06-01

    Cough is a common symptom of many respiratory diseases. The evaluation of its intensity and frequency of occurrence could provide valuable clinical information in the assessment of patients with chronic cough. In this paper we propose the use of hidden Markov models (HMMs) to automatically detect cough sounds from continuous ambulatory recordings. The recording system consists of a digital sound recorder and a microphone attached to the patient's chest. The recognition algorithm follows a keyword-spotting approach, with cough sounds representing the keywords. It was trained on 821 min selected from 10 ambulatory recordings, including 2473 manually labeled cough events, and tested on a database of nine recordings from separate patients with a total recording time of 3060 min and comprising 2155 cough events. The average detection rate was 82% at a false alarm rate of seven events/h, when considering only events above an energy threshold relative to each recording's average energy. These results suggest that HMMs can be applied to the detection of cough sounds from ambulatory patients. A postprocessing stage to perform a more detailed analysis on the detected events is under development, and could allow the rejection of some of the incorrectly detected events.

  6. Event detection for car park entries by video-surveillance

    NASA Astrophysics Data System (ADS)

    Coquin, Didier; Tailland, Johan; Cintract, Michel

    2007-10-01

    Intelligent surveillance has become an important research issue due to the high cost and low efficiency of human supervisors, and machine intelligence is required to provide a solution for automated event detection. In this paper we describe a real-time system that has been used for detecting car park entries, using an adaptive background learning algorithm and two indicators representing activity and identity to overcome the difficulty of tracking objects.
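
    A minimal sketch of an adaptive background-learning step of the kind mentioned above: the background is a running average that slowly follows illumination changes, and pixels that differ from it by more than a threshold are treated as moving. The learning rate and threshold are illustrative values, not those of the deployed system.

        import numpy as np

        def update_background(background, frame, alpha=0.02):
            """Running-average background model; alpha controls how quickly the
            model adapts to illumination changes (value is illustrative)."""
            return (1.0 - alpha) * background + alpha * frame

        def foreground_mask(background, frame, threshold=25.0):
            """Pixels differing from the background by more than `threshold`
            are marked as moving."""
            return np.abs(frame.astype(float) - background) > threshold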

  7. Real World Experience With Ion Implant Fault Detection at Freescale Semiconductor

    NASA Astrophysics Data System (ADS)

    Sing, David C.; Breeden, Terry; Fakhreddine, Hassan; Gladwin, Steven; Locke, Jason; McHugh, Jim; Rendon, Michael

    2006-11-01

    The Freescale automatic fault detection and classification (FDC) system has logged data from over 3.5 million implants in the past two years. The Freescale FDC system is a low cost system which collects summary implant statistics at the conclusion of each implant run. The data is collected by either downloading implant data log files from the implant tool workstation, or by exporting summary implant statistics through the tool's automation interface. Compared to the traditional FDC systems which gather trace data from sensors on the tool as the implant proceeds, the Freescale FDC system cannot prevent scrap when a fault initially occurs, since the data is collected after the implant concludes. However, the system can prevent catastrophic scrap events due to faults which are not detected for days or weeks, leading to the loss of hundreds or thousands of wafers. At the Freescale ATMC facility, the practical applications of the FDC system fall into two categories: PM trigger rules which monitor tool signals such as ion gauges and charge control signals, and scrap prevention rules which are designed to detect specific failure modes that have been correlated to yield loss and scrap. PM trigger rules are designed to detect shifts in tool signals which indicate normal aging of tool systems. For example, charging parameters gradually shift as flood gun assemblies age, and when charge control rules start to fail, a flood gun PM is performed. Scrap prevention rules are deployed to detect events such as particle bursts and excessive beam noise, events which have been correlated to yield loss. The FDC system does have tool log-down capability, and scrap prevention rules often use this capability to automatically log the tool into a maintenance state while simultaneously paging the sustaining technician for data review and disposition of the affected product.

  8. Candidate Binary Microlensing Events from the MACHO Project

    NASA Astrophysics Data System (ADS)

    Becker, A. C.; Alcock, C.; Allsman, R. A.; Alves, D. R.; Axelrod, T. S.; Bennett, D. P.; Cook, K. H.; Drake, A. J.; Freeman, K. C.; Griest, K.; King, L. J.; Lehner, M. J.; Marshall, S. L.; Minniti, D.; Peterson, B. A.; Popowski, P.; Pratt, M. R.; Quinn, P. J.; Rodgers, A. W.; Stubbs, C. W.; Sutherland, W.; Tomaney, A.; Vandehei, T.; Welch, D. L.; Baines, D.; Brakel, A.; Crook, B.; Howard, J.; Leach, T.; McDowell, D.; McKeown, S.; Mitchell, J.; Moreland, J.; Pozza, E.; Purcell, P.; Ring, S.; Salmon, A.; Ward, K.; Wyper, G.; Heller, A.; Kaspi, S.; Kovo, O.; Maoz, D.; Retter, A.; Rhie, S. H.; Stetson, P.; Walker, A.; MACHO Collaboration

    1998-12-01

    We present the lightcurves of 22 gravitational microlensing events from the first six years of the MACHO Project gravitational microlensing survey which are likely examples of lensing by binary systems. These events were selected from a total sample of ~ 300 events which were either detected by the MACHO Alert System or discovered through retrospective analyses of the MACHO database. Many of these events appear to have undergone a caustic or cusp crossing, and 2 of the events are well fit with lensing by binary systems with large mass ratios, indicating secondary companions of approximately planetary mass. The event rate is roughly consistent with predictions based upon our knowledge of the properties of binary stars. The utility of binary lensing in helping to solve the Galactic dark matter problem is demonstrated with analyses of 3 binary microlensing events seen towards the Magellanic Clouds. Source star resolution during caustic crossings in 2 of these events allows us to estimate the location of the lensing systems, assuming each source is a single star and not a short period binary. * MACHO LMC-9 appears to be a binary lensing event with a caustic crossing partially resolved in 2 observations. The resulting lens proper motion appears too small for a single source and LMC disk lens. However, it is considerably less likely to be a single source star and Galactic halo lens. We estimate the a priori probability of a short period binary source with a detectable binary character to be ~ 10 %. If the source is also a binary, then we currently have no constraints on the lens location. * The most recent of these events, MACHO 98-SMC-1, was detected in real-time. Follow-up observations by the MACHO/GMAN, PLANET, MPS, EROS and OGLE microlensing collaborations lead to the robust conclusion that the lens likely resides in the SMC.

  9. A Low-Cost, Reliable, High-Throughput System for Rodent Behavioral Phenotyping in a Home Cage Environment

    PubMed Central

    Parkison, Steven A.; Carlson, Jay D.; Chaudoin, Tammy R.; Hoke, Traci A.; Schenk, A. Katrin; Goulding, Evan H.; Pérez, Lance C.; Bonasera, Stephen J.

    2016-01-01

    Inexpensive, high-throughput, low maintenance systems for precise temporal and spatial measurement of mouse home cage behavior (including movement, feeding, and drinking) are required to evaluate products from large scale pharmaceutical design and genetic lesion programs. These measurements are also required to interpret results from more focused behavioral assays. We describe the design and validation of a highly-scalable, reliable mouse home cage behavioral monitoring system modeled on a previously described, one-of-a-kind system [1]. Mouse position was determined by solving static equilibrium equations describing the force and torques acting on the system strain gauges; feeding events were detected by a photobeam across the food hopper, and drinking events were detected by a capacitive lick sensor. Validation studies show excellent agreement between mouse position and drinking events measured by the system compared with video-based observation – a gold standard in neuroscience. PMID:23366406
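
    The position estimate can be sketched compactly: for a rigid platform resting on load cells at known locations, solving the static force and torque balance for a point load reduces to the force-weighted centroid of the gauge positions. The function below assumes that simplified geometry and is only an illustration of the idea, not the published system's algorithm.

        import numpy as np

        def mouse_position(forces, gauge_xy):
            """Estimate (x, y) of a point load on a platform from corner load-cell forces.

            Solving the static force/torque balance for a point load yields the
            force-weighted centroid of the gauge positions.
            forces:   array of vertical forces measured at each gauge
            gauge_xy: array of the same length with (x, y) coordinates of each gauge
            """
            forces = np.asarray(forces, dtype=float)
            gauge_xy = np.asarray(gauge_xy, dtype=float)
            total = forces.sum()
            if total <= 0:
                return None                      # no load on the platform
            return (forces[:, None] * gauge_xy).sum(axis=0) / total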

  10. Application of a Hidden Bayes Naive Multiclass Classifier in Network Intrusion Detection

    ERIC Educational Resources Information Center

    Koc, Levent

    2013-01-01

    With increasing Internet connectivity and traffic volume, recent intrusion incidents have reemphasized the importance of network intrusion detection systems for combating increasingly sophisticated network attacks. Techniques such as pattern recognition and the data mining of network events are often used by intrusion detection systems to classify…

  11. Automated Detection of Events of Scientific Interest

    NASA Technical Reports Server (NTRS)

    James, Mark

    2007-01-01

    A report presents a slightly different perspective of the subject matter of Fusing Symbolic and Numerical Diagnostic Computations (NPO-42512), which appears elsewhere in this issue of NASA Tech Briefs. Briefly, the subject matter is the X-2000 Anomaly Detection Language, which is a developmental computing language for fusing two diagnostic computer programs (one implementing a numerical analysis method, the other implementing a symbolic analysis method) into a unified event-based decision analysis software system for real-time detection of events. In the case of the cited companion NASA Tech Briefs article, the contemplated events that one seeks to detect would be primarily failures or other changes that could adversely affect the safety or success of a spacecraft mission. In the case of the instant report, the events to be detected could also include natural phenomena that could be of scientific interest. Hence, the use of the X-2000 Anomaly Detection Language could contribute to a capability for automated, coordinated use of multiple sensors and sensor-output-data-processing hardware and software to effect opportunistic collection and analysis of scientific data.

  12. Full-waveform detection of non-impulsive seismic events based on time-reversal methods

    NASA Astrophysics Data System (ADS)

    Solano, Ericka Alinne; Hjörleifsdóttir, Vala; Liu, Qinya

    2017-12-01

    We present a full-waveform detection method for non-impulsive seismic events, based on time-reversal principles. We use the strain Green's tensor as a matched filter, correlating it with continuous observed seismograms, to detect non-impulsive seismic events. We show that this is mathematically equivalent to an adjoint method for detecting earthquakes. We define the detection function, a scalar valued function, which depends on the stacked correlations for a group of stations. Event detections are given by the times at which the amplitude of the detection function exceeds a given value relative to the noise level. The method can make use of the whole seismic waveform or any combination of time-windows with different filters. It is expected to have an advantage compared to traditional detection methods for events that do not produce energetic and impulsive P waves, for example glacial events, landslides, volcanic events and transform-fault earthquakes, provided the velocity structure along the path is relatively well known. Furthermore, the method has advantages over empirical Green's function template-matching methods, as it does not depend on records from previously detected events, and therefore is not limited to events occurring in similar regions and with similar focal mechanisms as these events. The method is not specific to any particular way of calculating the synthetic seismograms, and therefore complicated structural models can be used. This is particularly beneficial for intermediate size events that are registered on regional networks, for which the effect of lateral structure on the waveforms can be significant. To demonstrate the feasibility of the method, we apply it to two different areas located along the mid-oceanic ridge system west of Mexico where non-impulsive events have been reported. The first study area is between the Clipperton and Siqueiros transform faults (9°N), during the time of two earthquake swarms, occurring in March 2012 and May 2016. The second area of interest is the Gulf of California, where two swarms took place during July and September of 2015. We show that we are able to detect previously non-reported, non-impulsive events and recommend that this method be used together with more traditional template matching methods to maximize the number of detected events.
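
    A compact sketch of the detection function described above: per-station cross-correlation of observed records with synthetic (strain-Green's-tensor-based) waveforms, stacked over the network, with detections declared where the stack exceeds a noise-based threshold. Station handling, normalization and the threshold are simplified, illustrative choices.

        import numpy as np

        def detection_function(observed, synthetics):
            """Stack per-station correlations of observed data with synthetic seismograms.

            observed, synthetics: dicts keyed by station; values are equal-length arrays.
            Returns the stacked, per-station-normalized correlation versus lag.
            """
            stack = None
            for sta, obs in observed.items():
                syn = synthetics[sta]
                cc = np.correlate(obs, syn, mode="full")
                cc /= max(np.sqrt(np.dot(obs, obs) * np.dot(syn, syn)), 1e-12)
                stack = cc if stack is None else stack + cc
            return stack / len(observed)

        def detect(det_func, n_mad=10.0):
            """Declare detections where the detection function exceeds a noise-based
            threshold (n_mad times the median absolute deviation; value illustrative)."""
            mad = np.median(np.abs(det_func - np.median(det_func)))
            return np.flatnonzero(det_func > n_mad * mad)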

  13. Detection and attribution of climate extremes in the observed record

    DOE PAGES

    Easterling, David R.; Kunkel, Kenneth E.; Wehner, Michael F.; ...

    2016-01-18

    We present an overview of practices and challenges related to the detection and attribution of observed changes in climate extremes. Detection is the identification of a statistically significant change in the extreme values of a climate variable over some period of time. Issues in detection discussed include data quality, coverage, and completeness. Attribution takes that detection of a change and uses climate model simulations to evaluate whether a cause can be assigned to that change. Additionally, we discuss a newer field of attribution, event attribution, where individual extreme events are analyzed for the express purpose of assigning some measure of whether that event was directly influenced by anthropogenic forcing of the climate system.

  14. Apparatus and method for detecting tampering in flexible structures

    DOEpatents

    Maxey, Lonnie C [Knoxville, TN; Haynes, Howard D [Knoxville, TN

    2011-02-01

    A system for monitoring or detecting tampering in a flexible structure includes taking electrical measurements on a sensing cable coupled to the structure, performing spectral analysis on the measured data, and comparing the spectral characteristics of the event to those of known benign and/or known suspicious events. A threshold or trigger value may be used to identify an event of interest and initiate data collection. Alternatively, the system may be triggered at preset intervals, triggered manually, or triggered by a signal from another sensing device such as a motion detector. The system may be used to monitor electrical cables and conduits, hoses and flexible ducts, fences and other perimeter control devices, structural cables, flexible fabrics, and other flexible structures.

  15. Li-ion battery thermal runaway suppression system using microchannel coolers and refrigerant injections

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bandhauer, Todd M.; Farmer, Joseph C.

    A battery management system with thermally integrated fire suppression includes a multiplicity of individual battery cells in a housing; a multiplicity of cooling passages in the housing within or between the multiplicity of individual battery cells; a multiplicity of sensors operably connected to the individual battery cells, the sensors adapted to detect a thermal runaway event related to one or more of the multiplicity of individual battery cells; and a management system adapted to inject coolant into at least one of the multiplicity of cooling passages upon the detection of the thermal runaway event by any one of the multiplicity of sensors, so that the thermal runaway event is rapidly quenched.

  16. Online track detection in triggerless mode for INO

    NASA Astrophysics Data System (ADS)

    Jain, A.; Padmini, S.; Joseph, A. N.; Mahesh, P.; Preetha, N.; Behere, A.; Sikder, S. S.; Majumder, G.; Behera, S. P.

    2018-03-01

    The India based Neutrino Observatory (INO) is a proposed particle physics research project to study atmospheric neutrinos. The INO Iron Calorimeter (ICAL) will consist of 28,800 detectors having 3.6 million electronic channels, expected to activate at a 100 Hz singles rate and producing data at a rate of 3 GBps. The collected data contain a few real hits generated by muon tracks, while the remaining hits are noise-induced spurious hits. The estimated reduction factor after filtering the data of interest out of the generated data is of the order of 10^3. This makes trigger generation critical for efficient data collection and storage. A trigger is generated by detecting coincidence across multiple channels satisfying the trigger criteria, within a small window of 200 ns in the trigger region. As the probability of neutrino interaction is very low, the track detection algorithm has to be efficient and fast enough to process 5 × 10^6 event candidates/s without introducing significant dead time, so that not even a single neutrino event is missed. A hardware based trigger system is presently proposed for on-line track detection, considering the stringent timing requirements. Though the trigger system can be designed with scalability, the large number of hardware devices and interconnections makes it a complex and expensive solution with limited flexibility. A software based track detection approach working on the hit information offers an elegant solution with the possibility of varying trigger criteria for selecting various potentially interesting physics events. An event selection approach for an alternative triggerless readout scheme has been developed. The algorithm is mathematically simple, robust and parallelizable. It has been validated by detecting simulated muon events with energies in the range of 1 GeV-10 GeV with 100% efficiency at a processing rate of 60 μs/event on a 16 core machine. The algorithm and the result of a proof-of-concept for its faster implementation over multiple cores are presented. The paper also discusses harnessing the computing capabilities of a multi-core computing farm, thereby optimizing the number of nodes required for the proposed system.

  17. Targeting safety improvements through identification of incident origination and detection in a near-miss incident learning system.

    PubMed

    Novak, Avrey; Nyflot, Matthew J; Ermoian, Ralph P; Jordan, Loucille E; Sponseller, Patricia A; Kane, Gabrielle M; Ford, Eric C; Zeng, Jing

    2016-05-01

    Radiation treatment planning involves a complex workflow that has multiple potential points of vulnerability. This study utilizes an incident reporting system to identify the origination and detection points of near-miss errors, in order to guide departmental safety improvement efforts. Previous studies have examined where errors arise, but not where they are detected, nor have they applied a near-miss risk index (NMRI) to gauge severity. From 3/2012 to 3/2014, 1897 incidents were analyzed from a departmental incident learning system. All incidents were prospectively reviewed weekly by a multidisciplinary team and assigned an NMRI score ranging from 0 to 4 reflecting potential harm to the patient (no potential harm to potential critical harm). Incidents were classified by point of incident origination and detection based on a 103-step workflow. The individual steps were divided among nine broad workflow categories (patient assessment, imaging for radiation therapy (RT) planning, treatment planning, pretreatment plan review, treatment delivery, on-treatment quality management, post-treatment completion, equipment/software quality management, and other). The average NMRI scores of incidents originating or detected within each broad workflow area were calculated. Additionally, out of 103 individual process steps, 35 were classified as safety barriers, the process steps whose primary function is to catch errors. The safety barriers which most frequently detected incidents were identified and analyzed. Finally, the distance between event origination and detection was explored by grouping events by the number of broad workflow areas an event passed through before detection, and average NMRI scores were compared. Near-miss incidents most commonly originated within treatment planning (33%). However, the incidents with the highest average NMRI scores originated during imaging for RT planning (NMRI = 2.0, average NMRI of all events = 1.5), specifically during the documentation of patient positioning and localization of the patient. Incidents were most frequently detected during treatment delivery (30%), and incidents identified at this point also had higher severity scores than other workflow areas (NMRI = 1.6). Incidents identified during on-treatment quality management were also more severe (NMRI = 1.7), and the specific process steps of reviewing portal and CBCT images tended to catch the highest-severity incidents. On average, safety barriers caught 46% of all incidents, most frequently at physics chart review, therapist's chart check, and the review of portal images; however, most of the incidents that pass through a particular safety barrier are not of a type that the barrier is designed to capture. Incident learning systems can be used to assess the most common points of error origination and detection in radiation oncology. This can help tailor safety improvement efforts and target the highest impact portions of the workflow. The most severe near-miss events tend to originate during simulation, with the most severe near-miss events detected at the time of patient treatment. Safety barriers can be improved to allow earlier detection of near-miss events.

  18. Station Set Residual: Event Classification Using Historical Distribution of Observing Stations

    NASA Astrophysics Data System (ADS)

    Procopio, Mike; Lewis, Jennifer; Young, Chris

    2010-05-01

    Analysts working at the International Data Centre in support of treaty monitoring through the Comprehensive Nuclear-Test-Ban Treaty Organization spend a significant amount of time reviewing hypothesized seismic events produced by an automatic processing system. When reviewing these events to determine their legitimacy, analysts take a variety of approaches that rely heavily on training and past experience. One method used by analysts to gauge the validity of an event involves examining the set of stations involved in the detection of an event. In particular, leveraging past experience, an analyst can say that an event located in a certain part of the world is expected to be detected by Stations A, B, and C. Implicit in this statement is that such an event would usually not be detected by Stations X, Y, or Z. For some well understood parts of the world, the absence of one or more "expected" stations—or the presence of one or more "unexpected" stations—is correlated with a hypothesized event's legitimacy and to its survival to the event bulletin. The primary objective of this research is to formalize and quantify the difference between the observed set of stations detecting some hypothesized event, versus the expected set of stations historically associated with detecting similar nearby events close in magnitude. This Station Set Residual can be quantified in many ways, some of which are correlated with the analysts' determination of whether or not the event is valid. We propose that this Station Set Residual score can be used to screen out certain classes of "false" events produced by automatic processing with a high degree of confidence, reducing the analyst burden. Moreover, we propose that the visualization of the historically expected distribution of detecting stations can be immediately useful as an analyst aid during their review process.
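
    One plausible way to quantify such a score is sketched below: penalize historically "expected" stations that are absent and "unexpected" stations that are present, weighted by each station's historical detection frequency for similar nearby events. The weighting scheme is an assumption for illustration, not the formulation used in the study.

        def station_set_residual(observed, expected_freq):
            """Score how unusual the detecting-station set is for a hypothesized event.

            observed:      set of stations that detected the event
            expected_freq: dict station -> historical fraction of similar nearby events
                           (of similar magnitude) detected by that station
            Missing 'expected' stations and present 'unexpected' stations both add
            to the residual; the exact weighting here is only one plausible choice.
            """
            residual = 0.0
            for sta, freq in expected_freq.items():
                if freq >= 0.5 and sta not in observed:
                    residual += freq            # an expected station is absent
            for sta in observed:
                residual += 1.0 - expected_freq.get(sta, 0.0)   # an unexpected station is present
            return residual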

  19. Statin-associated muscular and renal adverse events: data mining of the public version of the FDA adverse event reporting system.

    PubMed

    Sakaeda, Toshiyuki; Kadoyama, Kaori; Okuno, Yasushi

    2011-01-01

    Adverse event reports (AERs) submitted to the US Food and Drug Administration (FDA) were reviewed to assess the muscular and renal adverse events induced by the administration of 3-hydroxy-3-methylglutaryl coenzyme A (HMG-CoA) reductase inhibitors (statins) and to attempt to determine the rank-order of the association. After a revision of arbitrary drug names and the deletion of duplicated submissions, AERs involving pravastatin, simvastatin, atorvastatin, or rosuvastatin were analyzed. Authorized pharmacovigilance tools were used for quantitative detection of signals, i.e., drug-associated adverse events, including the proportional reporting ratio, the reporting odds ratio, the information component given by a Bayesian confidence propagation neural network, and the empirical Bayes geometric mean. Myalgia, rhabdomyolysis and an increase in creatine phosphokinase level were focused on as the muscular adverse events, and acute renal failure, non-acute renal failure, and an increase in blood creatinine level as the renal adverse events. Based on 1,644,220 AERs from 2004 to 2009, signals were detected for 4 statins with respect to myalgia, rhabdomyolysis, and an increase in creatine phosphokinase level, but these signals were stronger for rosuvastatin than pravastatin and atorvastatin. Signals were also detected for acute renal failure, though in the case of atorvastatin, the association was marginal, and furthermore, a signal was not detected for non-acute renal failure or for an increase in blood creatinine level. Data mining of the FDA's adverse event reporting system, AERS, is useful for examining statin-associated muscular and renal adverse events. The data strongly suggest the necessity of well-organized clinical studies with respect to statin-associated adverse events.
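
    Two of the disproportionality measures named above are simple to compute from the standard 2x2 contingency table of reports; a minimal sketch follows. The signal-flagging criterion in the comment is one commonly quoted convention, not necessarily the one applied in this study.

        def prr_ror(a, b, c, d):
            """Disproportionality measures from the 2x2 pharmacovigilance table.

            a: reports with the drug and the event      b: the drug, other events
            c: other drugs, the event                   d: other drugs, other events
            """
            prr = (a / (a + b)) / (c / (c + d))   # proportional reporting ratio
            ror = (a * d) / (b * c)               # reporting odds ratio
            return prr, ror

        # e.g. a signal is often flagged when PRR >= 2 with at least 3 reports;
        # criteria vary between studies, so these numbers are illustrative only.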

  20. Detecting, Monitoring, and Reporting Possible Adverse Drug Events Using an Arden-Syntax-based Rule Engine.

    PubMed

    Fehre, Karsten; Plössnig, Manuela; Schuler, Jochen; Hofer-Dückelmann, Christina; Rappelsberger, Andrea; Adlassnig, Klaus-Peter

    2015-01-01

    The detection of adverse drug events (ADEs) is an important aspect of improving patient safety. The iMedication system employs predefined triggers associated with significant events in a patient's clinical data to automatically detect possible ADEs. We defined four clinically relevant conditions: hyperkalemia, hyponatremia, renal failure, and over-anticoagulation. These are some of the most relevant ADEs in internal medical and geriatric wards. For each patient, ADE risk scores for all four situations are calculated, compared against a threshold, and judged to be monitored, or reported. A ward-based cockpit view summarizes the results.
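
    As an illustration of a trigger of this kind, the toy rule below scores hyperkalemia risk from a few clinical data items and compares the score against a threshold. The inputs, weights and cut-offs are invented for the sketch and are not the iMedication triggers.

        def hyperkalemia_risk(potassium_mmol_l, on_potassium_sparing_drug, egfr_ml_min):
            """Toy trigger in the spirit of predefined ADE triggers; the weights and
            cut-offs are invented for illustration only."""
            score = 0
            if potassium_mmol_l >= 5.5:
                score += 2
            if on_potassium_sparing_drug:
                score += 1
            if egfr_ml_min < 30:
                score += 1
            return score

        THRESHOLD = 3
        # patients with score >= THRESHOLD would be flagged for monitoring or reporting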

  1. USGS Tweet Earthquake Dispatch (@USGSted): Using Twitter for Earthquake Detection and Characterization

    NASA Astrophysics Data System (ADS)

    Liu, S. B.; Bouchard, B.; Bowden, D. C.; Guy, M.; Earle, P.

    2012-12-01

    The U.S. Geological Survey (USGS) is investigating how online social networking services like Twitter—a microblogging service for sending and reading public text-based messages of up to 140 characters—can augment USGS earthquake response products and the delivery of hazard information. The USGS Tweet Earthquake Dispatch (TED) system is using Twitter not only to broadcast seismically-verified earthquake alerts via the @USGSted and @USGSbigquakes Twitter accounts, but also to rapidly detect widely felt seismic events through a real-time detection system. The detector algorithm scans for significant increases in tweets containing the word "earthquake" or its equivalent in other languages and sends internal alerts with the detection time, tweet text, and the location of the city where most of the tweets originated. It has been running in real-time for 7 months and finds, on average, two or three felt events per day with a false detection rate of less than 10%. The detections have reasonable coverage of populated areas globally. The number of detections is small compared to the number of earthquakes detected seismically, and only a rough location and qualitative assessment of shaking can be determined based on Tweet data alone. However, the Twitter detections are generally caused by widely felt events that are of more immediate interest than those with no human impact. The main benefit of the tweet-based detections is speed, with most detections occurring between 19 seconds and 2 minutes from the origin time. This is considerably faster than seismic detections in poorly instrumented regions of the world. Going beyond the initial detection, the USGS is developing data mining techniques to continuously archive and analyze relevant tweets for additional details about the detected events. The information generated about an event is displayed on a web-based map designed using HTML5 for the mobile environment, which can be valuable when the user is not able to access a desktop computer at the time of the detections. The continuously updating map displays geolocated tweets arriving after the detection and plots epicenters of recent earthquakes. When available, seismograms from nearby stations are displayed as an additional form of verification. A time series of tweets-per-minute is also shown to illustrate the volume of tweets being generated for the detected event. Future additions are being investigated to provide a more in-depth characterization of the seismic events based on an analysis of tweet text and content from other social media sources.
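
    The spike detector can be sketched as a comparison of a short-term tweet rate against a longer-term baseline; the window lengths, rate floor and amplification factor below are illustrative placeholders rather than the actual TED parameters.

        from collections import deque

        class TweetSpikeDetector:
            """Flag a sudden increase in 'earthquake' tweets per minute by comparing
            a short-term rate with a long-term baseline."""
            def __init__(self, short_min=2, long_min=60, factor=5.0, min_rate=10):
                self.short = deque(maxlen=short_min)
                self.long = deque(maxlen=long_min)
                self.factor, self.min_rate = factor, min_rate

            def add_minute(self, tweet_count):
                """Feed one minute's tweet count; return True if a spike is detected."""
                self.long.append(tweet_count)
                self.short.append(tweet_count)
                baseline = max(sum(self.long) / len(self.long), 1.0)
                recent = sum(self.short) / len(self.short)
                return recent > self.min_rate and recent > self.factor * baseline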

  2. Rapid and reliable detection and identification of GM events using multiplex PCR coupled with oligonucleotide microarray.

    PubMed

    Xu, Xiaodan; Li, Yingcong; Zhao, Heng; Wen, Si-yuan; Wang, Sheng-qi; Huang, Jian; Huang, Kun-lun; Luo, Yun-bo

    2005-05-18

    To devise a rapid and reliable method for the detection and identification of genetically modified (GM) events, we developed a multiplex polymerase chain reaction (PCR) coupled with a DNA microarray system simultaneously aiming at many targets in a single reaction. The system included probes for screening gene, species reference gene, specific gene, construct-specific gene, event-specific gene, and internal and negative control genes. 18S rRNA was combined with species reference genes as internal controls to assess the efficiency of all reactions and to eliminate false negatives. Two sets of the multiplex PCR system were used to amplify four and five targets, respectively. Eight different structure genes could be detected and identified simultaneously for Roundup Ready soybean in a single microarray. The microarray specificity was validated by its ability to discriminate two GM maizes Bt176 and Bt11. The advantages of this method are its high specificity and greatly reduced false-positives and -negatives. The multiplex PCR coupled with microarray technology presented here is a rapid and reliable tool for the simultaneous detection of GM organism ingredients.

  3. Identification of unusual events in multi-channel bridge monitoring data

    NASA Astrophysics Data System (ADS)

    Omenzetter, Piotr; Brownjohn, James Mark William; Moyo, Pilate

    2004-03-01

    Continuously operating instrumented structural health monitoring (SHM) systems are becoming a practical alternative to replace visual inspection for assessment of condition and soundness of civil infrastructure such as bridges. However, converting large amounts of data from an SHM system into usable information is a great challenge to which special signal processing techniques must be applied. This study is devoted to identification of abrupt, anomalous and potentially onerous events in the time histories of static, hourly sampled strains recorded by a multi-sensor SHM system installed in a major bridge structure and operating continuously for a long time. Such events may result, among other causes, from sudden settlement of foundation, ground movement, excessive traffic load or failure of post-tensioning cables. A method of outlier detection in multivariate data has been applied to the problem of finding and localising sudden events in the strain data. For sharp discrimination of abrupt strain changes from slowly varying ones wavelet transform has been used. The proposed method has been successfully tested using known events recorded during construction of the bridge, and later effectively used for detection of anomalous post-construction events.
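
    A minimal sketch of the two ingredients named above: a first-level Haar detail (a scaled difference of neighbouring samples) to isolate abrupt changes from slowly varying strain, followed by a multivariate (Mahalanobis-distance) outlier score across sensors. The exact wavelet, detrending and threshold used in the study may differ; this only illustrates the combination.

        import numpy as np

        def abrupt_event_scores(strains):
            """Score each time step of multi-channel strain data for abrupt anomalies.

            strains: array of shape (n_samples, n_channels), hourly static strains.
            The Haar detail suppresses slow daily/seasonal variation; the Mahalanobis
            distance over channels then flags time steps where several sensors jump
            together or one jumps strongly.
            """
            detail = (strains[1:] - strains[:-1]) / np.sqrt(2.0)   # Haar detail coefficients
            mu = detail.mean(axis=0)
            cov = np.cov(detail, rowvar=False)
            inv = np.linalg.pinv(cov)
            centered = detail - mu
            d2 = np.einsum("ij,jk,ik->i", centered, inv, centered)  # squared Mahalanobis distance
            return d2   # large values indicate candidate sudden events

        # detections could then be np.flatnonzero(d2 > threshold), with the threshold
        # set, for example, from a chi-squared quantile for n_channels degrees of freedom.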

  4. Intelligent monitoring of critical pathological events during anesthesia.

    PubMed

    Gohil, Bhupendra; Gholamhhosseini, Hamid; Harrison, Michael J; Lowe, Andrew; Al-Jumaily, Ahmed

    2007-01-01

    Expert algorithms in the field of intelligent patient monitoring have rapidly revolutionized patient care, thereby improving patient safety. Patient monitoring during anesthesia requires cautious attention by anesthetists, who are monitoring many modalities, diagnosing clinically critical events and performing patient management tasks simultaneously. The mishaps that occur during day-to-day anesthesia and cause disastrous errors in anesthesia administration were classified and studied by Reason [1]. Human errors in anesthesia account for 82% of the preventable mishaps [2]. The aim of this paper is to develop a clinically useful diagnostic alarm system for detecting critical events during anesthesia administration. The development of an expert diagnostic alarm system called 'RT-SAAM' for detecting critical pathological events in the operating theatre is presented. This system provides decision support to the anesthetist by presenting the diagnostic results on an integrative, ergonomic display, thus enhancing patient safety. The performance of the system was validated through a series of offline and real-time tests in the operating theatre. When detecting absolute hypovolaemia (AHV), a moderate level of agreement was observed between RT-SAAM and the human expert (anesthetist) during surgical procedures. RT-SAAM is a clinically useful diagnostic tool which can easily be modified for diagnosing additional critical pathological events like relative hypovolaemia, fall in cardiac output, sympathetic response and malignant hyperpyrexia during surgical procedures. RT-SAAM is currently being tested at Auckland City Hospital with ethical approval from the local ethics committees.

  5. Getting the right blood to the right patient: the contribution of near-miss event reporting and barrier analysis.

    PubMed

    Kaplan, H S

    2005-11-01

    Safety and reliability in blood transfusion are not static, but are dynamic non-events. Since performance deviations continually occur in complex systems, their detection and correction must be accomplished over and over again. Non-conformance must be detected early enough to allow for recovery or mitigation. Near-miss events afford early detection of possible system weaknesses and provide an early chance at correction. National event reporting systems, both voluntary and involuntary, have begun to include near-miss reporting in their classification schemes, raising awareness for their detection. MERS-TM is a voluntary safety reporting initiative in transfusion. Currently 22 hospitals submit reports anonymously to a central database which supports analysis of a hospital's own data and that of an aggregate database. The system encourages reporting of near-miss events, where the patient is protected from receiving an unsuitable or incorrect blood component due to a planned or unplanned recovery step. MERS-TM data suggest approximately 90% of events are near-misses, with 10% caught after issue but before transfusion. Near-miss reporting may increase total reports ten-fold. The ratio of near-misses to events with harm is 339:1, consistent with other industries' ratio of 300:1, which has been proposed as a measure of reporting in event reporting systems. Use of a risk matrix and an event's relation to protective barriers allows prioritization of these events. Near-misses recovered by planned barriers occur ten times more frequently than unplanned recoveries. A bedside check of the patient's identity with that on the blood component is an essential, final barrier. How the typical two-person check is performed is critical. Even properly done, this check is ineffective against sampling and testing errors. Blood testing at the bedside just prior to transfusion minimizes the risk of such upstream events. However, even with simple and well designed devices, training may be a critical issue. Sample errors account for more than half of reported events. The most dangerous miscollection is a blood sample passing acceptance with no previous patient results for comparison. Bar code labels or collection of a second sample may counter this upstream vulnerability. Further upstream barriers have been proposed to counter the precariousness of urgent blood sample collection in a changing, unstable situation. One, a linking device, allows safer labeling of tubes away from the bedside; the second, a forcing function, prevents omission of critical patient identification steps. Errors in the blood bank itself account for 15% of errors, with a high potential severity. In one such event, a component incorrectly issued, but safely detected prior to transfusion, focused attention on multitasking's contribution to laboratory error. In sum, use of near-miss information, by enhancing barriers supporting error prevention and mitigation, increases our capacity to get the right blood to the right patient.

  6. Electronic system

    DOEpatents

    Robison, G H; Dickson, J F

    1960-11-15

    An electronic system is designed for indicating the occurrence of a plurality of electrically detectable events within predetermined time intervals. The system comprises separate input means electrically associated with the events under observation; an electronic channel associated with each input means, including control means and indicating means; timing means adapted to apply a signal from the input means after a predetermined time to the control means to deactivate each of the channels; and means for resetting the system to its initial condition after the observation of each group of events. (D.L.C.)

  7. Bayesian Monitoring Systems for the CTBT: Historical Development and New Results

    NASA Astrophysics Data System (ADS)

    Russell, S.; Arora, N. S.; Moore, D.

    2016-12-01

    A project at Berkeley, begun in 2009 in collaboration with CTBTO and more recently with LLNL, has reformulated the global seismic monitoring problem in a Bayesian framework. A first-generation system, NETVISA, has been built comprising a spatial event prior and generative models of event transmission and detection, as well as a Monte Carlo inference algorithm. The probabilistic model allows for seamless integration of various disparate sources of information, including negative information (the absence of detections). Working from arrivals extracted by traditional station processing from International Monitoring System (IMS) data, NETVISA achieves a reduction of around 60% in the number of missed events compared with the currently deployed network processing system. It also finds many events that are missed by the human analysts who postprocess the IMS output. Recent improvements include the integration of models for infrasound and hydroacoustic detections and a global depth model for natural seismicity trained from ISC data. NETVISA is now fully compatible with the CTBTO operating environment. A second-generation model called SIGVISA extends NETVISA's generative model all the way from events to raw signal data, avoiding the error-prone bottom-up detection phase of station processing. SIGVISA's model automatically captures the phenomena underlying existing detection and location techniques such as multilateration, waveform correlation matching, and double-differencing, and integrates them into a global inference process that also (like NETVISA) handles de novo events. Initial results for the Western US in early 2008 (when the transportable US Array was operating) show that SIGVISA finds, from IMS data only, more than twice the number of events recorded in the CTBTO Late Event Bulletin (LEB). For mb 1.0-2.5, the ratio is more than 10; put another way, for this data set, SIGVISA lowers the detection threshold by roughly one magnitude compared to LEB. The broader message of this work is that probabilistic inference based on a vertically integrated generative model that directly expresses geophysical knowledge can be a much more effective approach for interpreting scientific data than the traditional bottom-up processing pipeline.

  8. Application of process monitoring to anomaly detection in nuclear material processing systems via system-centric event interpretation of data from multiple sensors of varying reliability

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Garcia, Humberto E.; Simpson, Michael F.; Lin, Wen-Chiao

    In this paper, we apply an advanced safeguards approach and associated methods for process monitoring to a hypothetical nuclear material processing system. The assessment regarding the state of the processing facility is conducted at a system-centric level formulated in a hybrid framework. This utilizes an architecture for integrating both time- and event-driven data and analysis for decision making. While the time-driven layers of the proposed architecture encompass more traditional process monitoring methods based on time series data and analysis, the event-driven layers encompass operation monitoring methods based on discrete event data and analysis. By integrating process- and operation-related information and methodologies within a unified framework, the task of anomaly detection is greatly improved. This is because decision-making can benefit not only from known time-series relationships among measured signals but also from known event sequence relationships among generated events. This available knowledge at both the time series and discrete event layers can then be effectively used to synthesize observation solutions that optimally balance sensor and data processing requirements. The application of the proposed approach is then implemented on an illustrative monitored system based on pyroprocessing, and results are discussed.

  9. Gravitational Microlensing Events as a Target for the SETI project

    NASA Astrophysics Data System (ADS)

    Rahvar, Sohrab

    2016-09-01

    The detection of signals from a possible extrasolar technological civilization is one of the most challenging efforts of science. In this work, we propose using natural telescopes made of single or binary gravitational lensing systems to magnify leakage of electromagnetic signals from a remote planet that harbors Extraterrestrial Intelligent (ETI) technology. Currently, gravitational microlensing surveys are monitoring a large area of the Galactic bulge to search for microlensing events, finding more than 2000 events per year. These lenses are capable of playing the role of natural telescopes, and, in some instances, they can magnify radio band signals from planets orbiting around the source stars in gravitational microlensing systems. Assuming that the frequency of electromagnetic waves used for telecommunication in ETIs is similar to ours, we propose follow-up observation of microlensing events with radio telescopes such as the Square Kilometre Array (SKA), the Low Frequency Demonstrators, and the Mileura Wide-Field Array. Amplifying signals from the leakage of broadcasting by an Earth-like civilization will allow us to detect them as far as the center of the Milky Way galaxy. Our analysis shows that in binary microlensing systems, the probability of amplification of signals from ETIs is more than that in single microlensing events. Finally, we propose the use of the target of opportunity mode for follow-up observations of binary microlensing events with SKA as a new observational program for searching ETIs. Using optimistic values for the factors of the Drake equation provides detection of about one event per year.

  10. On the reliable use of satellite-derived surface water products for global flood monitoring

    NASA Astrophysics Data System (ADS)

    Hirpa, F. A.; Revilla-Romero, B.; Thielen, J.; Salamon, P.; Brakenridge, R.; Pappenberger, F.; de Groeve, T.

    2015-12-01

    Early flood warning and real-time monitoring systems play a key role in flood risk reduction and disaster response management. To this end, real-time flood forecasting and satellite-based detection systems have been developed at the global scale. However, due to the limited availability of up-to-date ground observations, the reliability of these systems for real-time applications has not been assessed in large parts of the globe. In this study, we performed comparative evaluations of commonly used satellite-based global flood detection systems and an operational flood forecasting system using 10 major flood cases reported over three years (2012-2014). Specifically, we assessed the flood detection capabilities of the near real-time global flood maps from the Global Flood Detection System (GFDS) and from the Moderate Resolution Imaging Spectroradiometer (MODIS), and the operational forecasts from the Global Flood Awareness System (GloFAS), for the major flood events recorded in global flood databases. We present the evaluation results of the global flood detection and forecasting systems in terms of correctly indicating the reported flood events and highlight the existing limitations of each system. Finally, we propose possible ways forward to improve the reliability of large-scale flood monitoring tools.

  11. Combination neutron-gamma ray detector

    DOEpatents

    Stuart, Travis P.; Tipton, Wilbur J.

    1976-10-26

    A radiation detection system capable of detecting neutron and gamma events and distinguishing therebetween. The system includes a detector for a photomultiplier which utilizes a combination of two phosphor materials, the first of which is in the form of small glass beads which scintillate primarily in response to neutrons and the second of which is a plastic matrix which scintillates in response to gammas. A combination of pulse shape and pulse height discrimination techniques is utilized to provide an essentially complete separation of the neutron and gamma events.
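
    As a concrete illustration of combining pulse shape and pulse height discrimination, the sketch below classifies digitized pulses by a tail-to-total charge ratio plus a height threshold. It is not the patented circuit; the gate positions, thresholds, and synthetic pulse shapes are assumptions chosen only to show the mechanism.

```python
# Minimal sketch of pulse-shape + pulse-height discrimination (illustrative,
# not the patented detector): classify baseline-subtracted pulses as
# neutron-like or gamma-like from a tail-to-total charge ratio and a height cut.
import numpy as np

def classify_pulse(pulse, dt=1.0, gate_start=20, psd_cut=0.25, height_cut=0.05):
    """pulse: 1-D array of baseline-subtracted samples.
    Returns 'neutron', 'gamma', or 'reject' (below the analysis threshold)."""
    height = pulse.max()
    if height < height_cut:
        return "reject"
    total = pulse.sum() * dt
    tail = pulse[gate_start:].sum() * dt
    psd = tail / total                     # neutrons deposit more charge in the tail
    return "neutron" if psd > psd_cut else "gamma"

# Synthetic example pulses: fast (gamma-like) vs. slow (neutron-like) decay.
t = np.arange(100.0)
gamma_pulse = np.exp(-t / 5.0)
neutron_pulse = 0.6 * np.exp(-t / 5.0) + 0.4 * np.exp(-t / 40.0)
print(classify_pulse(gamma_pulse), classify_pulse(neutron_pulse))
```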

  12. Radiological Health Protection Issues Associated with Use of Active Detection Technology Systems for Detection of Radioactive Threat Materials

    DTIC Science & Technology

    2013-07-01

    detection system available will simply register events resulting from natural background radiation if a suitable source emission is not employed. The...random fluctuations in the natural background radiation level. Noise within the detection system can result from any of the various components that...Uritani et al., 1994). Nothing can generally be done to reduce or stabilize the amount of natural background radiation present for nonstationary

  13. A computer aided treatment event recognition system in radiation therapy.

    PubMed

    Xia, Junyi; Mart, Christopher; Bayouth, John

    2014-01-01

    To develop an automated system to safeguard radiation therapy treatments by analyzing electronic treatment records and reporting treatment events. CATERS (Computer Aided Treatment Event Recognition System) was developed to detect treatment events by retrieving and analyzing electronic treatment records. CATERS is designed to make the treatment monitoring process more efficient by automating the search of the electronic record for possible deviations from the physician's intention, such as logical inconsistencies as well as aberrant treatment parameters (e.g., beam energy, dose, table position, prescription change, treatment overrides, etc.). Over a 5 month period (July 2012-November 2012), physicists were assisted by the CATERS software in conducting normal weekly chart checks with the aims of (a) determining the relative frequency of particular events in the authors' clinic and (b) incorporating these checks into CATERS. During this study period, 491 patients were treated at the University of Iowa Hospitals and Clinics for a total of 7692 fractions. All treatment records from the 5 month analysis period were evaluated using all the checks incorporated into CATERS after the training period. About 553 events were detected as exceptions, although none of them had a significant dosimetric impact on patient treatments. These events included every known event type that was discovered during the trial period. A frequency analysis of the events showed that the top three types of detected events were couch position override (3.2%), extra cone beam imaging (1.85%), and significant couch position deviation (1.31%). A significant couch deviation is defined as a treatment in which the couch vertical position exceeded two times the standard deviation of all couch vertical positions, or the couch lateral/longitudinal position exceeded three times the standard deviation of all couch lateral and longitudinal positions. On average, the application takes about 1 s per patient when executed on either a desktop computer or a mobile device. CATERS offers an effective tool to detect and report treatment events. Automation and rapid processing enable daily electronic record interrogation, alerting the medical physicist of deviations potentially days prior to performing the weekly check. The output of CATERS could also be utilized as an important input to failure mode and effects analysis.
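
    The couch-deviation rule described above translates directly into a small check. The sketch below flags fractions whose couch position lies outside 2 standard deviations (vertical) or 3 standard deviations (lateral/longitudinal) of that patient's own distribution; the data layout and example values are assumptions, not the CATERS implementation.

```python
# Minimal sketch (assumed data layout, not the CATERS implementation): flag
# treatments whose couch position deviates significantly from the patient's
# own distribution (2 sigma vertical, 3 sigma lateral/longitudinal).
import numpy as np

def flag_couch_deviations(vert, lat, lng):
    """vert, lat, lng: 1-D arrays of couch positions for one patient's fractions.
    Returns a boolean array marking fractions with a significant deviation."""
    vert, lat, lng = map(np.asarray, (vert, lat, lng))
    dev_vert = np.abs(vert - vert.mean()) > 2 * vert.std()
    dev_lat = np.abs(lat - lat.mean()) > 3 * lat.std()
    dev_lng = np.abs(lng - lng.mean()) > 3 * lng.std()
    return dev_vert | dev_lat | dev_lng

# Hypothetical record: fraction 5 was treated with an unusually high couch vertical.
vert = [10.1, 10.2, 10.0, 10.1, 10.2, 14.5, 10.1, 10.0]
lat = [0.5] * 8
lng = [120.0] * 8
print(np.where(flag_couch_deviations(vert, lat, lng))[0])   # -> [5]
```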

  14. A novel CUSUM-based approach for event detection in smart metering

    NASA Astrophysics Data System (ADS)

    Zhu, Zhicheng; Zhang, Shuai; Wei, Zhiqiang; Yin, Bo; Huang, Xianqing

    2018-03-01

    Non-intrusive load monitoring (NILM) plays a significant role in raising consumer awareness of household electricity use and thereby reducing overall energy consumption. For monitoring low-power loads, many researchers have introduced CUSUM into NILM systems, since traditional event detection methods are not as effective as expected. Because the original CUSUM is limited when the shift to be detected is small and remains below the threshold, we improve the test statistic so that the permissible deviation gradually rises as the data size increases. This paper proposes a novel event detection method and a corresponding criterion that can be used in NILM systems to recognize transient states and to help the labelling task. Its performance has been tested in a real scenario where eight different appliances are connected to the main power line.
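
    For reference, the sketch below shows a plain two-sided CUSUM step-change detector on a power signal; the paper's modified statistic with a size-dependent allowance is not reproduced, and the allowance, threshold, and synthetic trace are illustrative assumptions.

```python
# Minimal two-sided CUSUM sketch for step-change (appliance on/off) detection
# in a power signal (illustrative; not the paper's modified statistic).
import numpy as np

def cusum_events(power, target=None, k=5.0, h=50.0):
    """Return sample indices where a positive or negative shift is flagged.
    power: 1-D array of active-power samples (W); k: allowance; h: decision threshold."""
    x = np.asarray(power, dtype=float)
    mu = x[:20].mean() if target is None else target   # baseline from initial samples
    s_pos = s_neg = 0.0
    events = []
    for i, xi in enumerate(x):
        s_pos = max(0.0, s_pos + (xi - mu) - k)
        s_neg = max(0.0, s_neg - (xi - mu) - k)
        if s_pos > h or s_neg > h:
            events.append(i)
            s_pos = s_neg = 0.0
            mu = xi                                     # re-baseline after the event
    return events

# Synthetic trace: a 60 W appliance switches on at sample 100 and off at 250.
rng = np.random.default_rng(1)
trace = 200 + rng.normal(0, 2, 300)
trace[100:250] += 60
print(cusum_events(trace))   # expected: one event near 100 and one near 250
```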

  15. A signal detection method for temporal variation of adverse effect with vaccine adverse event reporting system data.

    PubMed

    Cai, Yi; Du, Jingcheng; Huang, Jing; Ellenberg, Susan S; Hennessy, Sean; Tao, Cui; Chen, Yong

    2017-07-05

    Identifying safety signals by manual review of individual reports in large surveillance databases is time consuming, and such an approach is very unlikely to reveal complex relationships between medications and adverse events. Since the late 1990s, efforts have been made to develop data mining tools to systematically and automatically search for safety signals in surveillance databases. Influenza vaccines present special challenges to safety surveillance because the vaccine changes every year in response to the influenza strains predicted to be prevalent that year. Therefore, it may be expected that reporting rates of adverse events following flu vaccines (number of reports for a specific vaccine-event combination/number of reports for all vaccine-event combinations) may vary substantially across reporting years. Current surveillance methods seldom consider these variations in signal detection, and reports from different years are typically collapsed together to conduct safety analyses. However, merging reports from different years ignores the potential heterogeneity of reporting rates across years and may miss important safety signals. Reports of adverse events from 1990 to 2013 were extracted from the Vaccine Adverse Event Reporting System (VAERS) database and formatted into a three-dimensional data array with vaccine type, adverse event group, and reporting time as the three dimensions. We propose a random effects model to test the heterogeneity of reporting rates for a given vaccine-event combination across reporting years. The proposed method provides a rigorous statistical procedure to detect differences in reporting rates among years. We also introduce a new visualization tool to summarize the result of the proposed method when applied to multiple vaccine-adverse event combinations. We applied the proposed method to detect safety signals of FLU3, an influenza vaccine containing three flu strains, in the VAERS database. We showed that it had high statistical power to detect the variation in reporting rates across years. The identified vaccine-event combinations with significantly different reporting rates across years suggested potential safety issues due to changes in vaccines that require further investigation. We developed a statistical model to detect safety signals arising from heterogeneity of reporting rates of a given vaccine-event combination across reporting years. This method detects variation in reporting rates over years with high power. The temporal trend of reporting rates across years may reveal the impact of vaccine updates on the occurrence of adverse events and provide evidence for further investigations.
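
    The sketch below illustrates the underlying question (is the reporting rate of one vaccine-event combination stable across years?) with a simple chi-square heterogeneity test on a years-by-2 table; the paper's random effects model and visualization tool are not reproduced, and the counts are hypothetical.

```python
# Minimal sketch (not the paper's random-effects model): test whether the
# reporting rate of one vaccine-event combination is heterogeneous across
# reporting years using a chi-square test on a years x 2 contingency table.
import numpy as np
from scipy.stats import chi2_contingency

# Hypothetical counts per year: reports of the event of interest for the vaccine,
# and all other reports for the same vaccine.
event_reports = np.array([12, 15, 11, 40, 13])      # a spike in year 4
other_reports = np.array([980, 1010, 995, 960, 1005])

table = np.column_stack([event_reports, other_reports])
chi2, p_value, dof, expected = chi2_contingency(table)
print(f"chi2={chi2:.1f}, dof={dof}, p={p_value:.2e}")
# A small p-value indicates the reporting rate varies across years, i.e. a
# potential temporal signal worth further investigation.
```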

  16. Detection, location, and characterization of hydroacoustic signals using seafloor cable networks offshore Japan (Invited)

    NASA Astrophysics Data System (ADS)

    Sugioka, H.; Suyehiro, K.; Shinohara, M.

    2009-12-01

    Hydroacoustic monitoring by the International Monitoring System (IMS) for the Comprehensive Nuclear-Test-Ban Treaty (CTBT) verification system utilizes hydrophone stations and seismic stations called T-phase stations for worldwide detection. Signals of natural origin include those from earthquakes, submarine volcanic eruptions, or whale calls. Among artificial sources there are non-nuclear explosions and air-gun shots. It is important for the IMS to detect and locate hydroacoustic events with sufficient accuracy and to correctly characterize the signals and identify the source. As a number of seafloor cable networks are operated offshore the Japanese islands, mainly facing the Pacific Ocean, for monitoring regional seismicity, the data from these stations (pressure sensors, hydrophones, and seismic sensors) may be utilized to verify and increase the capability of the IMS. We use these data to compare selected event parameters with those reported for the Pacific over the period from 2004 to the present. These anomalous examples, as well as dynamite shots used for seismic crustal structure studies and other natural sources, will be presented in order to help improve the IMS verification capabilities for detection, location, and characterization of anomalous signals. The seafloor cable networks, composed of three hydrophones and six seismometers, together with a temporary dense seismic array, detected and located hydroacoustic events offshore the Japanese islands on 12 March 2008, which had been reported by the IMS. We detected not only the hydroacoustic waves reverberating between the sea surface and the sea bottom but also the seismic waves travelling through the crust associated with the events. The determined source of the seismic waves is almost coincident with that of the hydroacoustic waves, suggesting that the seismic waves are converted very close to the origin of the hydroacoustic source. On 16 March 2009 we also detected signals very similar to those associated with the event of 12 March 2008.

  17. HIGH-PRECISION BIOLOGICAL EVENT EXTRACTION: EFFECTS OF SYSTEM AND OF DATA

    PubMed Central

    Cohen, K. Bretonnel; Verspoor, Karin; Johnson, Helen L.; Roeder, Chris; Ogren, Philip V.; Baumgartner, William A.; White, Elizabeth; Tipney, Hannah; Hunter, Lawrence

    2013-01-01

    We approached the problems of event detection, argument identification, and negation and speculation detection in the BioNLP’09 information extraction challenge through concept recognition and analysis. Our methodology involved using the OpenDMAP semantic parser with manually written rules. The original OpenDMAP system was updated for this challenge with a broad ontology defined for the events of interest, new linguistic patterns for those events, and specialized coordination handling. We achieved state-of-the-art precision for two of the three tasks, scoring the highest of 24 teams at precision of 71.81 on Task 1 and the highest of 6 teams at precision of 70.97 on Task 2. We provide a detailed analysis of the training data and show that a number of trigger words were ambiguous as to event type, even when their arguments are constrained by semantic class. The data is also shown to have a number of missing annotations. Analysis of a sampling of the comparatively small number of false positives returned by our system shows that major causes of this type of error were failing to recognize second themes in two-theme events, failing to recognize events when they were the arguments to other events, failure to recognize nontheme arguments, and sentence segmentation errors. We show that specifically handling coordination had a small but important impact on the overall performance of the system. The OpenDMAP system and the rule set are available at http://bionlp.sourceforge.net. PMID:25937701

  18. Digital disease detection: A systematic review of event-based internet biosurveillance systems.

    PubMed

    O'Shea, Jesse

    2017-05-01

    Internet access and usage have changed how people seek and report health information. Meanwhile, infectious diseases continue to threaten humanity. The analysis of Big Data, or vast digital data, presents an opportunity to improve disease surveillance and epidemic intelligence. Epidemic intelligence contains two components: indicator-based and event-based. A relatively new surveillance type has emerged, called event-based Internet biosurveillance systems. These systems use information on events impacting health from Internet sources, such as social media or news aggregates. They circumvent the limitations of traditional reporting systems by being inexpensive, transparent, and flexible. Yet innovations and the functionality of these systems can change rapidly. The aim is to update the current state of knowledge on event-based Internet biosurveillance systems by identifying all systems, including their current functionality, in order to aid decision makers in deciding whether to incorporate new methods into comprehensive surveillance programmes. A systematic review was performed through the PubMed, Scopus, and Google Scholar databases, while also including grey literature and other publication types. Fifty event-based Internet systems, described in 99 articles, were identified, and 15 attributes were extracted for each system. Each system uses different innovative technologies and data sources to gather, process, and disseminate data to detect infectious disease outbreaks. The review emphasises the importance of using both formal and informal sources for timely and accurate infectious disease outbreak surveillance, cataloguing all event-based Internet biosurveillance systems. By doing so, future researchers will be able to use this review as a library for referencing systems, with hopes of learning, building, and expanding Internet-based surveillance systems. Event-based Internet biosurveillance should act as an extension of traditional systems, to be utilised as an additional, supplemental data source to provide a more comprehensive estimate of disease burden. Copyright © 2017 Elsevier B.V. All rights reserved.

  19. Temporal analyses of Salmonellae in a headwater spring ecosystem reveals the effects of precipitation and runoff events.

    PubMed

    Gaertner, James P; Garres, Tiffany; Becker, Jesse C; Jimenez, Maria L; Forstner, Michael R J; Hahn, Dittmar

    2009-03-01

    Sediments and water from the spring and slough arm of Spring Lake, the pristine headwaters of the San Marcos River, Texas, were analyzed for Salmonellae by culture and molecular techniques before and after three major precipitation events, each with intermediate dry periods. Polymerase chain reaction (PCR)-assisted analyses of enrichment cultures detected Salmonellae in samples after all three precipitation events, but failed to detect them immediately prior to the rainfall events. Detection among individual locations differed with respect to the precipitation event analyzed, and the strains isolated were highly variable with respect to serovars. These results demonstrate that rainwater-associated effects, most likely surface runoff, provide an avenue for short-term pollution of aquatic systems with Salmonellae that do not, however, appear to become established long-term in either water or sediments.

  20. System level latchup mitigation for single event and transient radiation effects on electronics

    DOEpatents

    Kimbrough, J.R.; Colella, N.J.

    1997-09-30

    A "blink" technique, analogous to a person blinking at a flash of bright light, is provided for mitigating the effects of single event current latchup and prompt pulse destructive radiation on a micro-electronic circuit. The system includes event detection circuitry, power dump logic circuitry, and energy limiting measures with autonomous recovery. The event detection circuitry includes ionizing radiation pulse detection means for detecting a pulse of ionizing radiation and for providing at an output terminal thereof a detection signal indicative of the detection of a pulse of ionizing radiation. The current sensing circuitry is coupled to the power bus for determining an occurrence of excess current through the power bus caused by ionizing radiation or by ion-induced destructive latchup of a semiconductor device. The power dump circuitry includes power dump logic circuitry having a first input terminal connected to the output terminal of the ionizing radiation pulse detection circuitry and having a second input terminal connected to the output terminal of the current sensing circuitry. The power dump logic circuitry provides an output signal to the input terminal of the circuitry for opening the power bus and the circuitry for shorting the power bus to a ground potential to remove power from the power bus. The energy limiting circuitry with autonomous recovery includes circuitry for opening the power bus and circuitry for shorting the power bus to a ground potential. The circuitry for opening the power bus and circuitry for shorting the power bus to a ground potential includes a series FET and a shunt FET. The invention provides for self-contained sensing for latchup, first removal of power to protect latched components, and autonomous recovery to enable transparent operation of other system elements. 18 figs.

  1. System level latchup mitigation for single event and transient radiation effects on electronics

    DOEpatents

    Kimbrough, Joseph Robert; Colella, Nicholas John

    1997-01-01

    A "blink" technique, analogous to a person blinking at a flash of bright light, is provided for mitigating the effects of single event current latchup and prompt pulse destructive radiation on a micro-electronic circuit. The system includes event detection circuitry, power dump logic circuitry, and energy limiting measures with autonomous recovery. The event detection circuitry includes ionizing radiation pulse detection means for detecting a pulse of ionizing radiation and for providing at an output terminal thereof a detection signal indicative of the detection of a pulse of ionizing radiation. The current sensing circuitry is coupled to the power bus for determining an occurrence of excess current through the power bus caused by ionizing radiation or by ion-induced destructive latchup of a semiconductor device. The power dump circuitry includes power dump logic circuitry having a first input terminal connected to the output terminal of the ionizing radiation pulse detection circuitry and having a second input terminal connected to the output terminal of the current sensing circuitry. The power dump logic circuitry provides an output signal to the input terminal of the circuitry for opening the power bus and the circuitry for shorting the power bus to a ground potential to remove power from the power bus. The energy limiting circuitry with autonomous recovery includes circuitry for opening the power bus and circuitry for shorting the power bus to a ground potential. The circuitry for opening the power bus and circuitry for shorting the power bus to a ground potential includes a series FET and a shunt FET. The invention provides for self-contained sensing for latchup, first removal of power to protect latched components, and autonomous recovery to enable transparent operation of other system elements.

  2. Integrated hydraulic and organophosphate pesticide injection simulations for enhancing event detection in water distribution systems.

    PubMed

    Schwartz, Rafi; Lahav, Ori; Ostfeld, Avi

    2014-10-15

    As a complementary step towards solving the general event detection problem of water distribution systems, injection of the organophosphate pesticides chlorpyrifos (CP) and parathion (PA) was simulated at various locations within example networks, and hydraulic parameters were calculated over a 24-h duration. The uniqueness of this study is that the chemical reactions and byproducts of the contaminants' oxidation were also simulated, as well as other indicative water quality parameters such as alkalinity, acidity, pH and the total concentration of free chlorine species. The information on the change in water quality parameters induced by the contaminant injection may facilitate on-line detection of an actual event involving these specific substances and pave the way to development of a generic methodology for detecting events involving introduction of pesticides into water distribution systems. Simulation of the contaminant injection was performed at several nodes within two different networks. For each injection, concentrations of the relevant contaminants' parent and daughter species, free chlorine species, and water quality parameters were simulated at nodes downstream of the injection location. The results indicate that injection of these substances can be detected under certain conditions by a very rapid drop in Cl2, functioning as the indicative parameter, as well as a drop in alkalinity concentration and a small decrease in pH, both functioning as supporting parameters, whose usage may reduce false positive alarms. Copyright © 2014 Elsevier Ltd. All rights reserved.

  3. Development and practice of a Telehealthcare Expert System (TES).

    PubMed

    Lin, Hanjun; Hsu, Yeh-Liang; Hsu, Ming-Shinn; Cheng, Chih-Ming

    2013-07-01

    Expert systems have been widely used in medical and healthcare practice for various purposes. In addition to vital sign data, important concerns in telehealthcare include compliance with the measurement prescription, the accuracy of vital sign measurements, and the functioning of vital sign meters and home gateways. However, few expert system applications are found in the telehealthcare domain to address these issues. This article presents an expert system application for one of the largest commercialized telehealthcare practices in Taiwan, operated by Min-Sheng General Hospital. The main function of the Telehealthcare Expert System (TES) developed in this research is to detect and classify events based on the measurement data transmitted to the database at the call center, including abnormality of vital signs, violation of vital sign measurement prescriptions, and malfunction of hardware devices (home gateway and vital sign meter). When the expert system detects an abnormal event, it assigns an "urgent degree" and alerts the nursing team in the call center to take action, such as phoning the patient for counseling or urging the patient to return to the hospital for further tests. During two years of clinical practice, from 2009 to 2011, 19,182 patients were served by the expert system. The expert system detected 41,755 events, of which 22.9% indicated abnormality of vital signs, 75.2% indicated violation of the measurement prescription, and 1.9% indicated malfunction of devices. On average, the expert system reduced the time the nursing team in the call center spent handling events by 76.5%. The expert system helped to reduce cost and improve the quality of the telehealthcare service.

  4. Technical note: Efficient online source identification algorithm for integration within a contamination event management system

    NASA Astrophysics Data System (ADS)

    Deuerlein, Jochen; Meyer-Harries, Lea; Guth, Nicolai

    2017-07-01

    Drinking water distribution networks are part of critical infrastructures and are exposed to a number of different risks. One of them is the risk of unintended or deliberate contamination of the drinking water within the pipe network. Over the past decade research has focused on the development of new sensors that are able to detect malicious substances in the network and on early warning systems for contamination. In addition to the optimal placement of sensors, the automatic identification of the source of a contamination is an important component of an early warning and event management system for security enhancement of water supply networks. Many publications deal with the algorithmic development; however, only little information exists about the integration within a comprehensive real-time event detection and management system. In the following, the analytical solution and the software implementation of a real-time source identification module and its integration within a web-based event management system are described. The development was part of the SAFEWATER project, which was funded under FP7 of the European Commission.

  5. Distributed event-triggered consensus tracking of second-order multi-agent systems with a virtual leader

    NASA Astrophysics Data System (ADS)

    Jie, Cao; Zhi-Hai, Wu; Li, Peng

    2016-05-01

    This paper investigates the consensus tracking problem of second-order multi-agent systems with a virtual leader via event-triggered control. A novel distributed event-triggered transmission scheme is proposed, which is intermittently examined at constant sampling instants. Only partial neighbor information and local measurements are required for event detection. A corresponding event-triggered consensus tracking protocol is then presented to guarantee that second-order multi-agent systems achieve consensus tracking. Numerical simulations are given to illustrate the effectiveness of the proposed strategy. Project supported by the National Natural Science Foundation of China (Grant Nos. 61203147, 61374047, and 61403168).

  6. A Risk Assessment System with Automatic Extraction of Event Types

    NASA Astrophysics Data System (ADS)

    Capet, Philippe; Delavallade, Thomas; Nakamura, Takuya; Sandor, Agnes; Tarsitano, Cedric; Voyatzi, Stavroula

    In this article we describe the joint effort of experts in linguistics, information extraction and risk assessment to integrate EventSpotter, an automatic event extraction engine, into ADAC, an automated early warning system. By detecting weak signals of emerging risks as early as possible, ADAC provides a dynamic synthetic picture of situations involving risk. The ADAC system calculates risk on the basis of fuzzy logic rules operated on a template graph whose leaves are event types. EventSpotter is based on a general purpose natural language dependency parser, XIP, enhanced with domain-specific lexical resources (Lexicon-Grammar). Its role is to automatically feed the leaves with input data.

  7. Selected control events and reporting odds ratio in signal detection methodology.

    PubMed

    Ooba, Nobuhiro; Kubota, Kiyoshi

    2010-11-01

    To determine whether the reporting odds ratio (ROR) using "control events" can detect signals hidden behind striking reports on one or more particular events. We used data from 956 drug use investigations (DUIs) conducted between 1970 and 1998 in Japan and domestic spontaneous reports (SRs) between 1998 and 2008. The event terms in DUIs were converted to preferred terms in the Medical Dictionary for Regulatory Activities (MedDRA). We calculated the incidence proportion for various events and selected 20 "control events" with a relatively constant incidence proportion across DUIs that were also reported regularly to the spontaneous reporting system. A "signal" was generated for a drug-event combination when the lower limit of the 95% confidence interval of the ROR exceeded 1. We also compared the ROR in SRs with the RR in DUIs. The "control events" accounted for 18.2% of all reports. The ROR using "control events" may detect some hidden signals for a drug with a proportion of "control events" lower than the average. The median of the ratios of the ROR using "control events" to the RR was around unity, indicating that "control events" roughly represented the exposure distribution, though the range of the ratios was so wide that an individual ROR should not be regarded as an estimate of the RR. The use of the ROR with "control events" may provide an adjunct to traditional signal detection methods to find a signal hidden behind some major events. Copyright © 2010 John Wiley & Sons, Ltd.
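
    For reference, the sketch below computes an ROR and its 95% confidence interval from a 2x2 report table and applies the signal criterion described above (lower confidence limit above 1). The counts are hypothetical and the "control events" selection procedure of the study is not reproduced.

```python
# Minimal sketch of a reporting odds ratio (ROR) with a 95% confidence interval
# from a 2x2 report table (illustrative counts only).
import math

def ror_with_ci(a, b, c, d):
    """a: reports of the event with the drug of interest;  b: other events, same drug;
    c: reports of the event with all other drugs;          d: other events, other drugs."""
    ror = (a / b) / (c / d)
    se_log = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(ror) - 1.96 * se_log)
    hi = math.exp(math.log(ror) + 1.96 * se_log)
    return ror, lo, hi

ror, lo, hi = ror_with_ci(a=25, b=400, c=180, d=12000)
signal = lo > 1.0            # signal criterion: lower 95% CI limit above 1
print(f"ROR={ror:.2f} (95% CI {lo:.2f}-{hi:.2f}), signal={signal}")
```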

  8. Real-time detection of organic contamination events in water distribution systems by principal components analysis of ultraviolet spectral data.

    PubMed

    Zhang, Jian; Hou, Dibo; Wang, Ke; Huang, Pingjie; Zhang, Guangxin; Loáiciga, Hugo

    2017-05-01

    The detection of organic contaminants in water distribution systems is essential to protect public health from potentially harmful compounds resulting from accidental spills or intentional releases. Existing methods for detecting organic contaminants are based on quantitative analyses such as chemical testing and gas/liquid chromatography, which are time- and reagent-consuming and involve costly maintenance. This study proposes a novel procedure based on the discrete wavelet transform and principal component analysis for detecting organic contamination events from ultraviolet spectral data. First, the spectrum of each observation is transformed using a discrete wavelet transform with a coiflet mother wavelet to capture abrupt changes along the wavelength axis. Principal component analysis is then employed to approximate the spectra based on the captured and fused features. Hotelling's T2 statistic is calculated and compared with its significance threshold to detect outliers. An alarm for a contamination event is triggered by sequential Bayesian analysis when outliers appear continuously over several observations. The effectiveness of the proposed procedure is tested on-line using a pilot-scale setup and experimental data.
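
    The sketch below illustrates the PCA monitoring idea on synthetic spectra. Alongside Hotelling's T2 it also checks the residual (Q) statistic, a common companion in PCA monitoring, since a new spectral shape often falls largely outside the retained subspace; the wavelet pre-processing and the sequential Bayesian alarm of the study are omitted, and all data, shapes, and limits are assumptions.

```python
# Minimal sketch of PCA-based spectral monitoring (synthetic data; the wavelet
# pre-processing and the sequential Bayesian alarm of the paper are omitted).
# Hotelling's T2 watches the retained principal-component subspace; the residual
# (Q) statistic, a common companion in PCA monitoring, watches everything else.
import numpy as np

rng = np.random.default_rng(0)
n_wl = 200
baseline = np.exp(-np.linspace(0, 4, n_wl))                    # nominal UV spectrum
train = baseline * rng.uniform(0.9, 1.1, size=(300, 1)) \
        + 0.01 * rng.normal(size=(300, n_wl))                  # normal-condition spectra

mean = train.mean(axis=0)
U, S, Vt = np.linalg.svd(train - mean, full_matrices=False)
k = 3
P, var = Vt[:k].T, (S[:k] ** 2) / (len(train) - 1)

def t2_and_q(x):
    d = x - mean
    scores = d @ P
    resid = d - scores @ P.T
    return np.sum(scores ** 2 / var), np.sum(resid ** 2)

t2_lim, q_lim = [np.percentile([t2_and_q(s)[i] for s in train], 99) for i in (0, 1)]

contaminated = baseline + 0.08 * np.exp(-((np.arange(n_wl) - 80) ** 2) / 50.0)
t2_c, q_c = t2_and_q(contaminated)
print("alarm:", t2_c > t2_lim or q_c > q_lim)                  # expect: True
```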

  9. Effect of parameters in moving average method for event detection enhancement using phase sensitive OTDR

    NASA Astrophysics Data System (ADS)

    Kwon, Yong-Seok; Naeem, Khurram; Jeon, Min Yong; Kwon, Il-bum

    2017-04-01

    We analyze the relations among the parameters of the moving average method to enhance the event detectability of a phase-sensitive optical time domain reflectometer (OTDR). If an external event has a characteristic vibration frequency, the control parameters of the moving average method should be optimized in order to detect the event efficiently. A phase-sensitive OTDR was implemented with a pulsed light source, composed of a laser diode, a semiconductor optical amplifier, an erbium-doped fiber amplifier, and a fiber Bragg grating filter, and a light-receiving part comprising a photo-detector and a high-speed data acquisition system. The moving average method is operated with the control parameters: total number of raw traces, M; number of averaged traces, N; and step size of moving, n. The raw traces are obtained by the phase-sensitive OTDR with sound signals generated by a speaker. Using these trace data, the relation among the control parameters is analyzed. The result shows that, if the event signal has a single frequency, optimal values of N and n exist for detecting the event efficiently.
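
    The scheme uses the same control parameters named above; the sketch below applies it to synthetic trace data. The particular values of M, N, and n, the event signature, and the noise level are assumptions, not the optimized settings reported in the record.

```python
# Minimal sketch of the moving-average scheme described above: M raw traces in
# total, N traces averaged per output, and a step size of n traces between
# successive averages (synthetic phase-OTDR trace data).
import numpy as np

def moving_average_traces(traces, N, n):
    """traces: array of shape (M, samples) of raw phase-OTDR traces.
    Returns averaged traces of shape (num_windows, samples)."""
    M = traces.shape[0]
    starts = range(0, M - N + 1, n)
    return np.array([traces[s:s + N].mean(axis=0) for s in starts])

# Synthetic example: a vibration event at fiber position 500 appears as a small
# periodic perturbation buried in trace-to-trace noise.
rng = np.random.default_rng(2)
M, samples = 200, 1000
traces = rng.normal(0, 1.0, size=(M, samples))
traces[:, 500] += 1.5 * np.sin(2 * np.pi * 0.05 * np.arange(M))   # event signature

averaged = moving_average_traces(traces, N=8, n=2)
# Averaging over N traces suppresses uncorrelated noise by roughly sqrt(N) while a
# slowly varying event signature survives, improving its visibility along the fiber.
event_strength = averaged.std(axis=0)
print("strongest position:", int(np.argmax(event_strength)))       # expect: 500
```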

  10. Event Recognition for Contactless Activity Monitoring Using Phase-Modulated Continuous Wave Radar.

    PubMed

    Forouzanfar, Mohamad; Mabrouk, Mohamed; Rajan, Sreeraman; Bolic, Miodrag; Dajani, Hilmi R; Groza, Voicu Z

    2017-02-01

    The use of remote sensing technologies such as radar is gaining popularity as a technique for contactless detection of physiological signals and analysis of human motion. This paper presents a methodology for classifying different events in a collection of phase-modulated continuous-wave radar returns. The primary application of interest is to monitor inmates, where the presence of human vital signs amidst different interferences needs to be identified. A comprehensive set of features is derived through time and frequency domain analyses of the radar returns. The Bhattacharyya distance is used to preselect the features with the highest class separability as the candidate features for use in the classification process. Uncorrelated linear discriminant analysis is performed to decorrelate, denoise, and reduce the dimension of the candidate feature set. Linear and quadratic Bayesian classifiers are designed to distinguish breathing, different human motions, and nonhuman motions. The performance of these classifiers is evaluated on a pilot dataset of radar returns that contained different events including breathing, stopped breathing, simple human motions, and movement of a fan and of water. Our proposed pattern classification system achieved accuracies of up to 93% in stationary subject detection, 90% in stop-breathing detection, and 86% in interference detection. Our proposed radar pattern recognition system was able to accurately distinguish the predefined events amidst interferences. Besides inmate monitoring and suicide attempt detection, this work can be extended to other radar applications such as home-based monitoring of elderly people, apnea detection, and home occupancy detection.
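
    As an illustration of the feature pre-selection step, the sketch below ranks features by the Bhattacharyya distance between two classes under a univariate Gaussian assumption. It is a stand-in for the paper's ranking step, and the feature values and class labels are synthetic.

```python
# Minimal sketch of feature pre-selection with the Bhattacharyya distance
# (univariate Gaussian assumption, two classes); synthetic feature values.
import numpy as np

def bhattacharyya_distance(x0, x1):
    """Bhattacharyya distance between two 1-D samples under a Gaussian model."""
    m0, m1 = x0.mean(), x1.mean()
    v0, v1 = x0.var(), x1.var()
    return 0.25 * (m0 - m1) ** 2 / (v0 + v1) + 0.5 * np.log((v0 + v1) / (2 * np.sqrt(v0 * v1)))

rng = np.random.default_rng(3)
n = 500
# Synthetic features for 'breathing' vs 'interference' returns: feature 0 is
# discriminative, feature 1 is not.
breathing = np.column_stack([rng.normal(1.0, 0.3, n), rng.normal(0.0, 1.0, n)])
interference = np.column_stack([rng.normal(2.2, 0.4, n), rng.normal(0.1, 1.0, n)])

scores = [bhattacharyya_distance(breathing[:, j], interference[:, j]) for j in range(2)]
ranking = np.argsort(scores)[::-1]
print("feature ranking (best first):", ranking, "scores:", np.round(scores, 3))
```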

  11. A Framework of Simple Event Detection in Surveillance Video

    NASA Astrophysics Data System (ADS)

    Xu, Weiguang; Zhang, Yafei; Lu, Jianjiang; Tian, Yulong; Wang, Jiabao

    Video surveillance is playing a more and more important role in people's social life. Real-time alerting of threatening events and searching for interesting content in large-scale stored video footage require a human operator to pay full attention to the monitor for a long time. This labor-intensive mode has limited the effectiveness and efficiency of such systems. A framework for simple event detection is presented to advance the automation of video surveillance. An improved inner key-point matching approach is used to compensate for background motion in real time; frame differencing is used to detect the foreground; HOG-based classifiers are used to classify foreground objects into people and cars; and mean-shift is used to track the recognized objects. Events are detected based on predefined rules. The maturity of the algorithms guarantees the robustness of the framework, and the improved approach and the easily checked rules enable the framework to work in real time. Future work is also discussed.
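
    The sketch below shows only the frame-differencing step of such a pipeline on synthetic frames; background-motion compensation, HOG classification, and mean-shift tracking are not included, and the frame sizes and threshold are assumptions.

```python
# Minimal sketch of the frame-differencing step mentioned above (numpy only,
# synthetic frames).
import numpy as np

def foreground_mask(prev_frame, frame, threshold=25):
    """Return a boolean mask of pixels that changed significantly between frames."""
    diff = np.abs(frame.astype(np.int16) - prev_frame.astype(np.int16))
    return diff > threshold

# Synthetic 8-bit grayscale frames: a bright 'object' appears between frames.
prev_frame = np.full((120, 160), 40, dtype=np.uint8)
frame = prev_frame.copy()
frame[50:70, 80:100] = 200

mask = foreground_mask(prev_frame, frame)
ys, xs = np.nonzero(mask)
print("foreground pixels:", mask.sum(), "bounding box:",
      (ys.min(), xs.min(), ys.max(), xs.max()))
```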

  12. Autonomous Detection of Eruptions, Plumes, and Other Transient Events in the Outer Solar System

    NASA Astrophysics Data System (ADS)

    Bunte, M. K.; Lin, Y.; Saripalli, S.; Bell, J. F.

    2012-12-01

    The outer solar system abounds with visually stunning examples of dynamic processes such as eruptive events that jettison materials from satellites and small bodies into space. The most notable examples of such events are the prominent volcanic plumes of Io, the wispy water jets of Enceladus, and the outgassing of comet nuclei. We are investigating techniques that will allow a spacecraft to autonomously detect those events in visible images. This technique will allow future outer planet missions to conduct sustained event monitoring and automate prioritization of data for downlink. Our technique detects plumes by searching for concentrations of large local gradients in images. Applying a Scale Invariant Feature Transform (SIFT) to either raw or calibrated images identifies interest points for further investigation based on the magnitude and orientation of local gradients in pixel values. The interest points are classified as possible transient geophysical events when they share characteristics with similar features in user-classified images. A nearest neighbor classification scheme assesses the similarity of all interest points within a threshold Euclidean distance and classifies each according to the majority classification of other interest points. Thus, features marked by multiple interest points are more likely to be classified positively as events; isolated large plumes or multiple small jets are easily distinguished from a textured background surface due to the higher magnitude gradient of the plume or jet when compared with the small, randomly oriented gradients of the textured surface. We have applied this method to images of Io, Enceladus, and comet Hartley 2 from the Voyager, Galileo, New Horizons, Cassini, and Deep Impact EPOXI missions, where appropriate, and have successfully detected up to 95% of manually identifiable events that our method was able to distinguish from the background surface and surface features of a body. Dozens of distinct features are identifiable under a variety of viewing conditions and hundreds of detections are made in each of the aforementioned datasets. In this presentation, we explore the controlling factors in detecting transient events and discuss causes of success or failure due to distinct data characteristics. These include the level of calibration of images, the ability to differentiate an event from artifacts, and the variety of event appearances in user-classified images. Other important factors include the physical characteristics of the events themselves: albedo, size as a function of image resolution, and proximity to other events (as in the case of multiple small jets which feed into the overall plume at the south pole of Enceladus). A notable strength of this method is the ability to detect events that do not extend beyond the limb of a planetary body or are adjacent to the terminator or other strong edges in the image. The former scenario strongly influences the success rate of detecting eruptive events in nadir views.

  13. Method and apparatus for providing pulse pile-up correction in charge quantizing radiation detection systems

    DOEpatents

    Britton, Jr., Charles L.; Wintenberg, Alan L.

    1993-01-01

    A radiation detection method and system for continuously correcting the quantization of detected charge during pulse pile-up conditions. Charge pulses from a radiation detector responsive to the energy of detected radiation events are converted, by means of a charge-sensitive preamplifier, to voltage pulses of predetermined shape whose peak amplitudes are proportional to the quantity of charge of each corresponding detected event. These peak amplitudes are sampled and stored sequentially in accordance with their respective times of occurrence. Based on the stored peak amplitudes and times of occurrence, a correction factor is generated which represents the fraction of a previous pulse's influence on the peak amplitude of a following pulse. This correction factor is subtracted from the following pulse's amplitude in a summing amplifier, whose output then represents the corrected charge-quantity measurement.

  14. Sensor-enabled chem/bio contamination detection system dedicated to situational awareness of water distribution security status

    NASA Astrophysics Data System (ADS)

    Ginsberg, Mark D.; Smith, Eddy D.; VanBlaricum, Vicki; Hock, Vincent F.; Kroll, Dan; Russell, Kevin J.

    2010-04-01

    Both real events and models have proven that drinking water systems are vulnerable to deliberate and/or accidental contamination. Additionally, homeland security initiatives and modeling efforts have determined that it is relatively easy to orchestrate the contamination of potable water supplies. Such contamination can be accomplished with classic and non-traditional chemical agents, toxic industrial chemicals (TICs), and/or toxic industrial materials (TIMs). Subsequent research and testing have developed a proven network for detection and response to these threats. The method uses off-the-shelf, broad-spectrum analytical instruments coupled with advanced interpretive algorithms. The system detects and characterizes any backflow events involving toxic contaminants by employing unique chemical signature (fingerprint) response data. This instrumentation has been certified by the Office of Homeland Security for detecting deliberate and/or accidental contamination of critical water infrastructure. The system involves integration of several mature technologies (sensors, SCADA, dynamic models, and the HACH HST Guardian Blue instrumentation) into a complete, real-time management system that also can be used to address other water distribution concerns, such as corrosion. This paper summarizes the reasons and results for installing such a distribution-based detection and protection system.

  15. The contactless detection of local normal transitions in superconducting coils by using Poynting’s vector method

    NASA Astrophysics Data System (ADS)

    Habu, K.; Kaminohara, S.; Kimoto, T.; Kawagoe, A.; Sumiyoshi, F.; Okamoto, H.

    2010-11-01

    We have developed a new monitoring system to detect an unusual event in superconducting coils without direct contact with the coils, using Poynting's vector method. In this system, potential leads and pickup coils are set around the superconducting coils to measure local electric and magnetic fields, respectively. By measuring these sets of electric and magnetic fields, the Poynting vectors around the coil can be obtained, and an unusual event in the coil can be detected as a change in the Poynting vector. This system carries no risk of the voltage breakdown that may happen with the balance voltage method, because there is no need for direct contacts on the coil windings. In a previous paper, we demonstrated, using a small test system, that our system can detect normal transitions in a Bi-2223 coil without direct contact with the coil windings. For our system to be applied to practical devices, early detection of an unusual event requires the ability to detect local normal transitions in the coils. The signal voltages of the small sensors used to measure local magnetic and electric fields are small. Although the pickup-coil signal is easily increased by increasing the number of turns of the pickup coils, an increase in the signal of the potential leads is not easily attained. In this paper, a new method to amplify the signal of the local electric fields around the coil is proposed. The validity of the method has been confirmed by measuring local electric fields around the Bi-2223 coil.

  16. Phase-Space Detection of Cyber Events

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hernandez Jimenez, Jarilyn M; Ferber, Aaron E; Prowell, Stacy J

    Energy Delivery Systems (EDS) are a network of processes that produce, transfer and distribute energy. EDS are increasingly dependent on networked computing assets, as are many Industrial Control Systems. Consequently, cyber-attacks pose a real and pertinent threat, as evidenced by Stuxnet, Shamoon and Dragonfly. Hence, there is a critical need for novel methods to detect, prevent, and mitigate effects of such attacks. To detect cyber-attacks in EDS, we developed a framework for gathering and analyzing timing data that involves establishing a baseline execution profile and then capturing the effect of perturbations in the state from injecting various malware. The data analysis was based on nonlinear dynamics and graph theory to improve detection of anomalous events in cyber applications. The goal was the extraction of changing dynamics or anomalous activity in the underlying computer system. Takens' theorem in nonlinear dynamics allows reconstruction of topologically invariant, time-delay-embedding states from the computer data in a sufficiently high-dimensional space. The resultant dynamical states were nodes, and the state-to-state transitions were links in a mathematical graph. Alternatively, sequential tabulation of executing instructions provides the nodes with corresponding instruction-to-instruction links. Graph theorems guarantee graph-invariant measures to quantify the dynamical changes in the running applications. Results showed a successful detection of cyber events.
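
    The sketch below illustrates the two building blocks named above: a Takens-style time-delay embedding of a scalar series and a state-transition graph built from quantized embedded states. It is not the framework described in the record; the embedding dimension, lag, quantization, and the synthetic series are assumptions.

```python
# Minimal sketch of time-delay embedding (Takens-style state reconstruction)
# and a state-transition graph from a scalar timing series; illustrative only.
import numpy as np
from collections import defaultdict

def delay_embed(x, dim=3, tau=2):
    """Embed a 1-D series into dim-dimensional delay vectors with lag tau."""
    n = len(x) - (dim - 1) * tau
    return np.column_stack([x[i * tau: i * tau + n] for i in range(dim)])

def transition_graph(states, n_bins=4):
    """Quantize embedded states into symbols and count symbol-to-symbol transitions."""
    edges = np.quantile(states, np.linspace(0, 1, n_bins + 1)[1:-1], axis=0)
    symbols = [tuple(np.searchsorted(edges[:, j], s[j]) for j in range(states.shape[1]))
               for s in states]
    graph = defaultdict(int)
    for a, b in zip(symbols[:-1], symbols[1:]):
        graph[(a, b)] += 1
    return graph

# Synthetic 'timing' series; an anomaly would alter the set of visited states/edges.
t = np.arange(2000)
series = np.sin(0.07 * t) + 0.1 * np.random.default_rng(4).normal(size=t.size)
g = transition_graph(delay_embed(series))
print("distinct states visited:", len({a for a, _ in g} | {b for _, b in g}),
      "distinct transitions:", len(g))
```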

  17. MERIS (Medical Error Reporting Information System) as an innovative patient safety intervention: a health policy perspective.

    PubMed

    Riga, Marina; Vozikis, Athanassios; Pollalis, Yannis; Souliotis, Kyriakos

    2015-04-01

    The economic crisis in Greece makes it necessary to resolve problems concerning both the spiralling cost of and quality assurance in the health system. The detection and analysis of patient adverse events and medical errors are considered crucial elements of this course. The implementation of MERIS embodies a mandatory module, which adopts the trigger tool methodology for measuring adverse events and medical errors in an intensive care unit (ICU) environment, and a voluntary module with a web-based public reporting methodology. A pilot implementation of MERIS running in a public hospital identified 35 adverse events, with approximately 12 additional hospital days and an extra healthcare cost of €12,000 per adverse event, or about €312,000 per annum for ICU costs only. At the same time, the voluntary module unveiled 510 reports on adverse events submitted by citizens or patients. MERIS has been evaluated as a comprehensive and effective system; it succeeded in detecting the main factors that cause adverse events and disclosed severe omissions of the Greek health system. MERIS may be incorporated and run efficiently nationally, adapted to the needs and peculiarities of each hospital or clinic. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.

  18. Multilingual event extraction for epidemic detection.

    PubMed

    Lejeune, Gaël; Brixtel, Romain; Doucet, Antoine; Lucas, Nadine

    2015-10-01

    This paper presents a multilingual news surveillance system applied to tele-epidemiology. It has been shown that multilingual approaches improve timeliness in the detection of epidemic events across the globe, eliminating the wait for local news to be translated into major languages. We present here a system to extract epidemic events in potentially any language, provided a Wikipedia seed for common disease names exists. The Daniel system presented herein relies on properties that are common to news writing (the journalistic genre), the most useful being repetition and saliency. Wikipedia is used to screen common disease names to be matched against repeated character strings. Language variations, such as declensions, are handled by processing text at the character level rather than at the word level. This additionally makes it possible to handle various writing systems in a similar fashion. As no multilingual ground truth existed to evaluate the Daniel system, we built a multilingual corpus from the Web and collected annotations from native speakers of Chinese, English, Greek, Polish and Russian, with no connection or interest in the Daniel system. This data set is freely available online and can be used for the evaluation of other event extraction systems. Experiments for 5 of the 17 languages tested are detailed in this paper: Chinese, English, Greek, Polish and Russian. The Daniel system achieves an average F-measure of 82% in these 5 languages. It reaches 87% on BEcorpus, the state-of-the-art corpus in English, slightly below top-performing systems, which are tailored with numerous language-specific resources. The consistent performance of Daniel across multiple languages is an important contribution to the reactivity and the coverage of epidemiological event detection systems. Most event extraction systems rely on extensive language-specific resources. While their sophistication yields excellent results (over 90% precision and recall), it restricts their coverage in terms of languages and geographic areas. In contrast, in order to detect epidemic events in any language, the Daniel system only requires a list of a few hundred disease names and locations, which can actually be acquired automatically. The system performs consistently well on any language, with precision and recall around 82% on average, according to this paper's evaluation. Daniel's character-based approach is especially interesting for morphologically rich and low-resourced languages. Because it exploits few resources and uses state-of-the-art string matching algorithms, Daniel can process thousands of documents per minute on a simple laptop. In the context of epidemic surveillance, reactivity and geographic coverage are of primary importance, since no one knows where the next event will strike, and therefore in what vernacular language it will first be reported. By being able to process any language, the Daniel system offers unique coverage for poorly endowed languages and can complement state-of-the-art techniques for major languages. Copyright © 2015 Elsevier B.V. All rights reserved.

  19. Seismic Characterization of the Newberry and Cooper Basin EGS Sites

    NASA Astrophysics Data System (ADS)

    Templeton, D. C.; Wang, J.; Goebel, M.; Johannesson, G.; Myers, S. C.; Harris, D.; Cladouhos, T. T.

    2015-12-01

    To aid in the seismic characterization of Engineered Geothermal Systems (EGS), we enhance traditional microearthquake detection and location methodologies at two EGS sites: the Newberry EGS site and the Habanero EGS site in the Cooper Basin of South Australia. We apply the Matched Field Processing (MFP) seismic imaging technique to detect new seismic events using known discrete microearthquake sources. Events identified using MFP typically have smaller magnitudes or occur within the coda of a larger event. Additionally, we apply a Bayesian multiple-event location algorithm, called MicroBayesLoc, to estimate the 95% probability ellipsoids for events with high signal-to-noise ratios (SNR). Such probability ellipsoid information can provide evidence for determining whether a seismic lineation is real or simply within the anticipated error range. At the Newberry EGS site, 235 events were reported in the original catalog. MFP identified 164 additional events (an increase of over 70%). For the relocated events in the Newberry catalog, we can distinguish two distinct seismic swarms that fall outside of one another's 95% probability error ellipsoids. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344.

  20. Radar network communication through sensing of frequency hopping

    DOEpatents

    Dowla, Farid; Nekoogar, Faranak

    2013-05-28

    In one embodiment, a radar communication system includes a plurality of radars having a communication range and being capable of operating at a sensing frequency and a reporting frequency, wherein the reporting frequency is different than the sensing frequency, each radar is adapted for operating at the sensing frequency until an event is detected, each radar in the plurality of radars has an identification/location frequency for reporting information different from the sensing frequency, a first radar of the radars which senses the event sends a reporting frequency corresponding to its identification/location frequency when the event is detected, and all other radars in the plurality of radars switch their reporting frequencies to match the reporting frequency of the first radar upon detecting the reporting frequency switch of a radar within the communication range. In another embodiment, a method is presented for communicating information in a radar system.

  1. GRAVITATIONAL MICROLENSING EVENTS AS A TARGET FOR THE SETI PROJECT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rahvar, Sohrab, E-mail: rahvar@sharif.edu

    2016-09-01

    The detection of signals from a possible extrasolar technological civilization is one of the most challenging efforts of science. In this work, we propose using natural telescopes made of single or binary gravitational lensing systems to magnify leakage of electromagnetic signals from a remote planet that harbors Extraterrestrial Intelligent (ETI) technology. Currently, gravitational microlensing surveys are monitoring a large area of the Galactic bulge to search for microlensing events, finding more than 2000 events per year. These lenses are capable of playing the role of natural telescopes, and, in some instances, they can magnify radio band signals from planets orbiting around the source stars in gravitational microlensing systems. Assuming that the frequency of electromagnetic waves used for telecommunication in ETIs is similar to ours, we propose follow-up observation of microlensing events with radio telescopes such as the Square Kilometre Array (SKA), the Low Frequency Demonstrators, and the Mileura Wide-Field Array. Amplifying signals from the leakage of broadcasting by an Earth-like civilization will allow us to detect them as far as the center of the Milky Way galaxy. Our analysis shows that in binary microlensing systems, the probability of amplification of signals from ETIs is more than that in single microlensing events. Finally, we propose the use of the target of opportunity mode for follow-up observations of binary microlensing events with SKA as a new observational program for searching ETIs. Using optimistic values for the factors of the Drake equation provides detection of about one event per year.

  2. Event-triggered fault detection for a class of discrete-time linear systems using interval observers.

    PubMed

    Zhang, Zhi-Hui; Yang, Guang-Hong

    2017-05-01

    This paper provides a novel event-triggered fault detection (FD) scheme for discrete-time linear systems. First, an event-triggered interval observer is proposed to generate the upper and lower residuals by taking into account the influence of the disturbances and the event error. Second, the robustness of the residual interval against the disturbances and the sensitivity to faults are improved by introducing l1 and H∞ performance specifications. Third, dilated linear matrix inequalities are used to decouple the Lyapunov matrices from the system matrices. The nonnegativity conditions for the estimation error variables are presented with the aid of slack matrix variables. This technique allows a more general Lyapunov function to be considered. Furthermore, the FD decision scheme is proposed by monitoring whether the zero value belongs to the residual interval. It is shown that the information communication burden is reduced by designing the event-triggering mechanism, while the FD performance can still be guaranteed. Finally, simulation results demonstrate the effectiveness of the proposed method. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.

  3. Novel Hierarchical Fall Detection Algorithm Using a Multiphase Fall Model.

    PubMed

    Hsieh, Chia-Yeh; Liu, Kai-Chun; Huang, Chih-Ning; Chu, Woei-Chyn; Chan, Chia-Tai

    2017-02-08

    Falls are the primary cause of accidents for the elderly in the living environment. Reducing hazards in the living environment and performing exercises for training balance and muscles are the common strategies for fall prevention. However, falls cannot be avoided completely; fall detection provides an alarm that can decrease injuries or death caused by the lack of rescue. The automatic fall detection system has opportunities to provide real-time emergency alarms for improving the safety and quality of home healthcare services. Two common technical challenges are also tackled in order to provide a reliable fall detection algorithm, including variability and ambiguity. We propose a novel hierarchical fall detection algorithm involving threshold-based and knowledge-based approaches to detect a fall event. The threshold-based approach efficiently supports the detection and identification of fall events from continuous sensor data. A multiphase fall model is utilized, including free fall, impact, and rest phases for the knowledge-based approach, which identifies fall events and has the potential to deal with the aforementioned technical challenges of a fall detection system. Seven kinds of falls and seven types of daily activities arranged in an experiment are used to explore the performance of the proposed fall detection algorithm. The overall performances of the sensitivity, specificity, precision, and accuracy using a knowledge-based algorithm are 99.79%, 98.74%, 99.05% and 99.33%, respectively. The results show that the proposed novel hierarchical fall detection algorithm can cope with the variability and ambiguity of the technical challenges and fulfill the reliability, adaptability, and flexibility requirements of an automatic fall detection system with respect to the individual differences.
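
    A minimal sketch of the threshold-based front end implied by the multiphase model (free fall, impact, rest) is given below; the thresholds and window lengths are illustrative assumptions, not the values used in the cited study.

    import numpy as np

    def detect_fall(acc_mag, fs=100, free_fall_g=0.6, impact_g=2.5,
                    rest_tol_g=0.15, rest_window_s=1.0):
        # acc_mag: acceleration magnitude in g, sampled at fs Hz.
        rest_n = int(rest_window_s * fs)
        for i in np.where(acc_mag < free_fall_g)[0]:       # free-fall dip
            window = acc_mag[i:i + fs]                     # look up to 1 s ahead
            hits = np.where(window > impact_g)[0]          # impact spike
            if hits.size == 0:
                continue
            j = i + hits[0]
            rest = acc_mag[j + fs:j + fs + rest_n]         # skip 1 s of oscillation
            if rest.size == rest_n and np.all(np.abs(rest - 1.0) < rest_tol_g):
                return j                                   # sample index of the impact
        return None                                        # no fall candidate found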

  4. Novel Hierarchical Fall Detection Algorithm Using a Multiphase Fall Model

    PubMed Central

    Hsieh, Chia-Yeh; Liu, Kai-Chun; Huang, Chih-Ning; Chu, Woei-Chyn; Chan, Chia-Tai

    2017-01-01

    Falls are the primary cause of accidents for the elderly in the living environment. Reducing hazards in the living environment and performing exercises for training balance and muscles are the common strategies for fall prevention. However, falls cannot be avoided completely; fall detection provides an alarm that can decrease injuries or death caused by the lack of rescue. The automatic fall detection system has opportunities to provide real-time emergency alarms for improving the safety and quality of home healthcare services. Two common technical challenges are also tackled in order to provide a reliable fall detection algorithm, including variability and ambiguity. We propose a novel hierarchical fall detection algorithm involving threshold-based and knowledge-based approaches to detect a fall event. The threshold-based approach efficiently supports the detection and identification of fall events from continuous sensor data. A multiphase fall model is utilized, including free fall, impact, and rest phases for the knowledge-based approach, which identifies fall events and has the potential to deal with the aforementioned technical challenges of a fall detection system. Seven kinds of falls and seven types of daily activities arranged in an experiment are used to explore the performance of the proposed fall detection algorithm. The overall performances of the sensitivity, specificity, precision, and accuracy using a knowledge-based algorithm are 99.79%, 98.74%, 99.05% and 99.33%, respectively. The results show that the proposed novel hierarchical fall detection algorithm can cope with the variability and ambiguity of the technical challenges and fulfill the reliability, adaptability, and flexibility requirements of an automatic fall detection system with respect to the individual differences. PMID:28208694

  5. Vision-Based Finger Detection, Tracking, and Event Identification Techniques for Multi-Touch Sensing and Display Systems

    PubMed Central

    Chen, Yen-Lin; Liang, Wen-Yew; Chiang, Chuan-Yen; Hsieh, Tung-Ju; Lee, Da-Cheng; Yuan, Shyan-Ming; Chang, Yang-Lang

    2011-01-01

    This study presents efficient vision-based finger detection, tracking, and event identification techniques and a low-cost hardware framework for multi-touch sensing and display applications. The proposed approach uses a fast bright-blob segmentation process based on automatic multilevel histogram thresholding to extract the pixels of touch blobs obtained from scattered infrared lights captured by a video camera. The advantage of this automatic multilevel thresholding approach is its robustness and adaptability when dealing with various ambient lighting conditions and spurious infrared noises. To extract the connected components of these touch blobs, a connected-component analysis procedure is applied to the bright pixels acquired by the previous stage. After extracting the touch blobs from each of the captured image frames, a blob tracking and event recognition process analyzes the spatial and temporal information of these touch blobs from consecutive frames to determine the possible touch events and actions performed by users. This process also refines the detection results and corrects for errors and occlusions caused by noise and errors during the blob extraction process. The proposed blob tracking and touch event recognition process includes two phases. First, the phase of blob tracking associates the motion correspondence of blobs in succeeding frames by analyzing their spatial and temporal features. The touch event recognition process can identify meaningful touch events based on the motion information of touch blobs, such as finger moving, rotating, pressing, hovering, and clicking actions. Experimental results demonstrate that the proposed vision-based finger detection, tracking, and event identification system is feasible and effective for multi-touch sensing applications in various operational environments and conditions. PMID:22163990
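
    The pipeline of bright-blob segmentation followed by connected-component analysis can be sketched as below. The single fixed top-band threshold is only a stand-in for the paper's automatic multilevel histogram thresholding, and scipy.ndimage is an assumed dependency.

    import numpy as np
    from scipy import ndimage

    def extract_touch_blobs(frame, n_levels=4, min_area=20):
        # Treat the highest of n_levels equal intensity bands as "bright".
        lo, hi = float(frame.min()), float(frame.max())
        binary = frame >= lo + (hi - lo) * (n_levels - 1) / n_levels
        labels, n = ndimage.label(binary)                  # connected components
        blobs = []
        for idx in range(1, n + 1):
            mask = labels == idx
            area = int(mask.sum())
            if area >= min_area:                           # reject spurious IR noise
                cy, cx = ndimage.center_of_mass(mask)
                blobs.append({"centroid": (cx, cy), "area": area})
        return blobs                                       # one entry per touch point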

  6. Why conventional detection methods fail in identifying the existence of contamination events.

    PubMed

    Liu, Shuming; Li, Ruonan; Smith, Kate; Che, Han

    2016-04-15

    Early warning systems are widely used to safeguard water security, but their effectiveness has raised many questions. To understand why conventional detection methods fail to identify contamination events, this study evaluates the performance of three contamination detection methods using data from a real contamination accident and two artificial datasets constructed using a widely applied contamination data construction approach. Results show that the Pearson correlation Euclidean distance (PE)-based detection method performs better for real contamination incidents, while the Euclidean distance method (MED) and linear prediction filter (LPF) method are more suitable for detecting sudden spike-like variation. This analysis revealed why the conventional MED and LPF methods failed to identify the existence of contamination events. The analysis also revealed that the widely used contamination data construction approach is misleading. Copyright © 2016 Elsevier Ltd. All rights reserved.
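
    A Euclidean-distance detector of the kind evaluated here can be sketched as follows; the parameter names and the fixed threshold are illustrative, and the PE and LPF variants are not reproduced.

    import numpy as np

    def euclidean_distance_detector(samples, baseline, threshold=4.0):
        # samples: (n, p) current water-quality readings (e.g. chlorine, pH,
        # conductivity, turbidity); baseline: (m, p) known-clean readings.
        mu = baseline.mean(axis=0)
        sigma = baseline.std(axis=0) + 1e-9                # avoid division by zero
        dist = np.linalg.norm((samples - mu) / sigma, axis=1)
        return dist > threshold, dist                      # alarms and raw scores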

  7. Identification of unusual events in multichannel bridge monitoring data using wavelet transform and outlier analysis

    NASA Astrophysics Data System (ADS)

    Omenzetter, Piotr; Brownjohn, James M. W.; Moyo, Pilate

    2003-08-01

    Continuously operating instrumented structural health monitoring (SHM) systems are becoming a practical alternative to replace visual inspection for assessment of condition and soundness of civil infrastructure. However, converting large amount of data from an SHM system into usable information is a great challenge to which special signal processing techniques must be applied. This study is devoted to identification of abrupt, anomalous and potentially onerous events in the time histories of static, hourly sampled strains recorded by a multi-sensor SHM system installed in a major bridge structure in Singapore and operating continuously for a long time. Such events may result, among other causes, from sudden settlement of foundation, ground movement, excessive traffic load or failure of post-tensioning cables. A method of outlier detection in multivariate data has been applied to the problem of finding and localizing sudden events in the strain data. For sharp discrimination of abrupt strain changes from slowly varying ones wavelet transform has been used. The proposed method has been successfully tested using known events recorded during construction of the bridge, and later effectively used for detection of anomalous post-construction events.
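
    The combination of a wavelet high-pass step with multivariate outlier analysis might look like the sketch below, which uses a single-level DWT detail band and a Mahalanobis distance; PyWavelets is an assumed dependency and the choice of wavelet is illustrative.

    import numpy as np
    import pywt  # PyWavelets, assumed available

    def abrupt_event_scores(strains, wavelet="db2"):
        # strains: (n_samples, n_sensors) hourly strain histories.
        # Detail coefficients suppress slow thermal/seasonal drift so that
        # only sharp changes contribute to the outlier score.
        details = np.column_stack(
            [pywt.dwt(strains[:, j], wavelet)[1] for j in range(strains.shape[1])]
        )
        diff = details - details.mean(axis=0)
        inv_cov = np.linalg.pinv(np.cov(details, rowvar=False))
        d2 = np.einsum("ij,jk,ik->i", diff, inv_cov, diff)
        return np.sqrt(d2)     # large values flag candidate sudden events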

  8. Evaluation of Epidemic Intelligence Systems Integrated in the Early Alerting and Reporting Project for the Detection of A/H5N1 Influenza Events

    PubMed Central

    Barboza, Philippe; Vaillant, Laetitia; Mawudeku, Abla; Nelson, Noele P.; Hartley, David M.; Madoff, Lawrence C.; Linge, Jens P.; Collier, Nigel; Brownstein, John S.; Yangarber, Roman; Astagneau, Pascal; on behalf of the Early Alerting, Reporting Project of the Global Health Security Initiative

    2013-01-01

    The objective of Web-based expert epidemic intelligence systems is to detect health threats. The Global Health Security Initiative (GHSI) Early Alerting and Reporting (EAR) project was launched to assess the feasibility and opportunity for pooling epidemic intelligence data from seven expert systems. EAR participants completed a qualitative survey to document epidemic intelligence strategies and to assess perceptions regarding the systems performance. Timeliness and sensitivity were rated highly illustrating the value of the systems for epidemic intelligence. Weaknesses identified included representativeness, completeness and flexibility. These findings were corroborated by the quantitative analysis performed on signals potentially related to influenza A/H5N1 events occurring in March 2010. For the six systems for which this information was available, the detection rate ranged from 31% to 38%, and increased to 72% when considering the virtual combined system. The effective positive predictive values ranged from 3% to 24% and F1-scores ranged from 6% to 27%. System sensitivity ranged from 38% to 72%. An average difference of 23% was observed between the sensitivities calculated for human cases and epizootics, underlining the difficulties in developing an efficient algorithm for a single pathology. However, the sensitivity increased to 93% when the virtual combined system was considered, clearly illustrating complementarities between individual systems. The average delay between the detection of A/H5N1 events by the systems and their official reporting by WHO or OIE was 10.2 days (95% CI: 6.7–13.8). This work illustrates the diversity in implemented epidemic intelligence activities, differences in system's designs, and the potential added values and opportunities for synergy between systems, between users and between systems and users. PMID:23472077

  9. Evaluation of epidemic intelligence systems integrated in the early alerting and reporting project for the detection of A/H5N1 influenza events.

    PubMed

    Barboza, Philippe; Vaillant, Laetitia; Mawudeku, Abla; Nelson, Noele P; Hartley, David M; Madoff, Lawrence C; Linge, Jens P; Collier, Nigel; Brownstein, John S; Yangarber, Roman; Astagneau, Pascal

    2013-01-01

    The objective of Web-based expert epidemic intelligence systems is to detect health threats. The Global Health Security Initiative (GHSI) Early Alerting and Reporting (EAR) project was launched to assess the feasibility and opportunity for pooling epidemic intelligence data from seven expert systems. EAR participants completed a qualitative survey to document epidemic intelligence strategies and to assess perceptions regarding the systems performance. Timeliness and sensitivity were rated highly illustrating the value of the systems for epidemic intelligence. Weaknesses identified included representativeness, completeness and flexibility. These findings were corroborated by the quantitative analysis performed on signals potentially related to influenza A/H5N1 events occurring in March 2010. For the six systems for which this information was available, the detection rate ranged from 31% to 38%, and increased to 72% when considering the virtual combined system. The effective positive predictive values ranged from 3% to 24% and F1-scores ranged from 6% to 27%. System sensitivity ranged from 38% to 72%. An average difference of 23% was observed between the sensitivities calculated for human cases and epizootics, underlining the difficulties in developing an efficient algorithm for a single pathology. However, the sensitivity increased to 93% when the virtual combined system was considered, clearly illustrating complementarities between individual systems. The average delay between the detection of A/H5N1 events by the systems and their official reporting by WHO or OIE was 10.2 days (95% CI: 6.7-13.8). This work illustrates the diversity in implemented epidemic intelligence activities, differences in system's designs, and the potential added values and opportunities for synergy between systems, between users and between systems and users.
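
    The performance figures quoted above follow the usual definitions of sensitivity (detection rate), positive predictive value and F1-score; a small helper computing them from raw signal counts is sketched below (the counts in the example are made up).

    def surveillance_metrics(true_positives, false_positives, false_negatives):
        sensitivity = true_positives / (true_positives + false_negatives)
        ppv = true_positives / (true_positives + false_positives)
        f1 = 2 * ppv * sensitivity / (ppv + sensitivity)
        return {"sensitivity": sensitivity, "ppv": ppv, "f1": f1}

    # Illustrative only: 31 of 100 events detected, with 900 unrelated signals.
    print(surveillance_metrics(true_positives=31, false_positives=900,
                               false_negatives=69))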

  10. Episodic inflation events at Akutan Volcano, Alaska, during 2005-2017

    NASA Astrophysics Data System (ADS)

    Ji, Kang Hyeun; Yun, Sang-Ho; Rim, Hyoungrea

    2017-08-01

    Detection of weak volcano deformation helps constrain characteristics of eruption cycles. We have developed a signal detection technique, called the Targeted Projection Operator (TPO), to monitor surface deformation with Global Positioning System (GPS) data. We have applied the TPO to GPS data collected at Akutan Volcano from June 2005 to March 2017 and detected four inflation events that occurred in 2008, 2011, 2014, and 2016 with inflation rates of about 8-22 mm/yr above the background trend at a near-source site AV13. Numerical modeling suggests that the events should be driven by closely located sources or a single source in a shallow magma chamber at a depth of about 4 km. The inflation events suggest that magma has episodically accumulated in a shallow magma chamber.

  11. Systems and methods of detecting force and stress using tetrapod nanocrystal

    DOEpatents

    Choi, Charina L.; Koski, Kristie J.; Sivasankar, Sanjeevi; Alivisatos, A. Paul

    2013-08-20

    Systems and methods of detecting force on the nanoscale including methods for detecting force using a tetrapod nanocrystal by exposing the tetrapod nanocrystal to light, which produces a luminescent response by the tetrapod nanocrystal. The method continues with detecting a difference in the luminescent response by the tetrapod nanocrystal relative to a base luminescent response that indicates a force between a first and second medium or stresses or strains experienced within a material. Such systems and methods find use with biological systems to measure forces in biological events or interactions.

  12. Interlaboratory study of DNA extraction from multiple ground samples, multiplex real-time PCR, and multiplex qualitative PCR for individual kernel detection system of genetically modified maize.

    PubMed

    Akiyama, Hiroshi; Sakata, Kozue; Makiyma, Daiki; Nakamura, Kosuke; Teshima, Reiko; Nakashima, Akie; Ogawa, Asako; Yamagishi, Toru; Futo, Satoshi; Oguchi, Taichi; Mano, Junichi; Kitta, Kazumi

    2011-01-01

    In many countries, the labeling of grains, feed, and foodstuff is mandatory if the genetically modified (GM) organism content exceeds a certain level of approved GM varieties. We previously developed an individual kernel detection system consisting of grinding individual kernels, DNA extraction from the individually ground kernels, GM detection using multiplex real-time PCR, and GM event detection using multiplex qualitative PCR to analyze the precise commingling level and varieties of GM maize in real sample grains. We performed the interlaboratory study of the DNA extraction with multiple ground samples, multiplex real-time PCR detection, and multiplex qualitative PCR detection to evaluate its applicability, practicality, and ruggedness for the individual kernel detection system of GM maize. DNA extraction with multiple ground samples, multiplex real-time PCR, and multiplex qualitative PCR were evaluated by five laboratories in Japan, and all results from these laboratories were consistent with the expected results in terms of the commingling level and event analysis. Thus, the DNA extraction with multiple ground samples, multiplex real-time PCR, and multiplex qualitative PCR for the individual kernel detection system is applicable and practicable in a laboratory to regulate the commingling level of GM maize grain for GM samples, including stacked GM maize.

  13. Real-time determination of the efficacy of residual disinfection to limit wastewater contamination in a water distribution system using filtration-based luminescence.

    PubMed

    Lee, Jiyoung; Deininger, Rolf A

    2010-05-01

    Water distribution systems can be vulnerable to microbial contamination through cross-connections, wastewater backflow, the intrusion of soiled water after a loss of pressure resulting from an electricity blackout, natural disaster, or intentional contamination of the system in a bioterrorism event. The most urgent matter a water treatment utility would face in this situation is detecting the presence and extent of a contamination event in real-time, so that immediate action can be taken to mitigate the problem. The current approved microbiological detection methods are culture-based plate count methods, which require incubation time (1 to 7 days). This long period of time would not be useful for the protection of public health. This study was designed to simulate wastewater intrusion in a water distribution system. The objectives were 2-fold: (1) real-time detection of water contamination, and (2) investigation of the sustainability of drinking water systems to suppress the contamination with secondary disinfectant residuals (chlorine and chloramine). The events of drinking water contamination resulting from a wastewater addition were determined by a filtration-based luminescence assay. The water contamination was detected by the luminescence method within 5 minutes. The signal amplification attributed to wastewater contamination was clear: a 102-fold signal increase. After 1 hour, chlorinated water could inactivate 98.8% of the bacterial contaminant, while chloraminated water reduced it by 77.2%.

  14. Adaptable radiation monitoring system and method

    DOEpatents

    Archer, Daniel E [Livermore, CA; Beauchamp, Brock R [San Ramon, CA; Mauger, G Joseph [Livermore, CA; Nelson, Karl E [Livermore, CA; Mercer, Michael B [Manteca, CA; Pletcher, David C [Sacramento, CA; Riot, Vincent J [Berkeley, CA; Schek, James L [Tracy, CA; Knapp, David A [Livermore, CA

    2006-06-20

    A portable radioactive-material detection system capable of detecting radioactive sources moving at high speeds. The system has at least one radiation detector capable of detecting gamma-radiation and coupled to an MCA capable of collecting spectral data in very small time bins of less than about 150 msec. A computer processor is connected to the MCA for determining from the spectral data if a triggering event has occurred. Spectral data is stored on a data storage device, and a power source supplies power to the detection system. Various configurations of the detection system may be adaptably arranged for various radiation detection scenarios. In a preferred embodiment, the computer processor operates as a server which receives spectral data from other networked detection systems, and communicates the collected data to a central data reporting system.

  15. High rate tests of the photon detection system for the LHCb RICH Upgrade

    NASA Astrophysics Data System (ADS)

    Blago, M. P.; Keizer, F.

    2017-12-01

    The photon detection system for the LHCb RICH Upgrade consists of an array of multianode photomultiplier tubes (MaPMTs) read out by custom-built modular electronics. The behaviour of the whole chain was studied at CERN using a pulsed laser. Threshold scans were performed in order to study the MaPMT pulse-height spectra at high event rates and different photon intensities. The results show a reduction in photon detection efficiency at 900 V bias voltage, marked by a 20% decrease in the single-photon peak height, when increasing the event rate from 100 kHz to 20 MHz. This reduction was not observed at 1000 V bias voltage.

  16. Remote health monitoring system for detecting cardiac disorders.

    PubMed

    Bansal, Ayush; Kumar, Sunil; Bajpai, Anurag; Tiwari, Vijay N; Nayak, Mithun; Venkatesan, Shankar; Narayanan, Rangavittal

    2015-12-01

    Remote health monitoring system with clinical decision support system as a key component could potentially quicken the response of medical specialists to critical health emergencies experienced by their patients. A monitoring system, specifically designed for cardiac care with electrocardiogram (ECG) signal analysis as the core diagnostic technique, could play a vital role in early detection of a wide range of cardiac ailments, from a simple arrhythmia to life threatening conditions such as myocardial infarction. The system that the authors have developed consists of three major components, namely, (a) mobile gateway, deployed on patient's mobile device, that receives 12-lead ECG signals from any ECG sensor, (b) remote server component that hosts algorithms for accurate annotation and analysis of the ECG signal and (c) point of care device of the doctor to receive a diagnostic report from the server based on the analysis of ECG signals. In the present study, their focus has been toward developing a system capable of detecting critical cardiac events well in advance using an advanced remote monitoring system. A system of this kind is expected to have applications ranging from tracking wellness/fitness to detection of symptoms leading to fatal cardiac events.

  17. Event specific qualitative and quantitative polymerase chain reaction detection of genetically modified MON863 maize based on the 5'-transgene integration sequence.

    PubMed

    Yang, Litao; Xu, Songci; Pan, Aihu; Yin, Changsong; Zhang, Kewei; Wang, Zhenying; Zhou, Zhigang; Zhang, Dabing

    2005-11-30

    Because of the genetically modified organism (GMO) labeling policies issued in many countries and areas, polymerase chain reaction (PCR) methods, such as screening, gene-specific, construct-specific, and event-specific PCR detection methods, were developed for the execution of these labeling policies and have become a mainstay of GMO detection. The event-specific PCR detection method is the primary trend in GMO detection because of its high specificity, which is based on the flanking sequence of the exogenous integrant. The genetically modified maize MON863 contains a Cry3Bb1 coding sequence that produces a protein with enhanced insecticidal activity against the coleopteran pest, corn rootworm. In this study, the 5'-integration junction sequence between the host plant DNA and the integrated gene construct of the genetically modified maize MON863 was revealed by means of thermal asymmetric interlaced PCR, and specific PCR primers and a TaqMan probe were designed based upon the revealed 5'-integration junction sequence; conventional qualitative PCR and quantitative TaqMan real-time PCR detection methods employing these primers and probes were successfully developed. In the conventional qualitative PCR assay, the limit of detection (LOD) was 0.1% for MON863 in 100 ng of maize genomic DNA for one reaction. In the quantitative TaqMan real-time PCR assay, the LOD and the limit of quantification were eight and 80 haploid genome copies, respectively. In addition, three mixed maize samples with known MON863 contents were analyzed using the established real-time PCR systems, and the results indicated that the established event-specific real-time PCR detection systems were reliable, sensitive, and accurate.

  18. Cloud-based serviced-orientated data systems for ocean observational data - an example from the coral reef community

    NASA Astrophysics Data System (ADS)

    Bainbridge, S.

    2012-04-01

    The advent of new observing systems, such as sensor networks, has dramatically increased our ability to collect marine data; the issue now is not data drought but data deluge. The challenge now is to extract data representing events of interest from the background data, that is, how to deliver information and potentially knowledge from an increasingly large store of base observations. Given that each potential user will have differing definitions of 'interesting' and that this is often defined by other events and data, systems need to deliver information or knowledge in a form and context defined by the user. This paper reports on a series of coral reef sensor networks set up under the Coral Reef Environmental Observation Network (CREON). CREON is a community-of-interest group deploying coral reef sensor networks with the goal of increasing capacity in coral reef observation, especially in developing areas. Issues such as coral bleaching, terrestrial runoff, human impacts and climate change are impacting reefs, with one assessment indicating a quarter of the world's reefs are severely degraded and another quarter under immediate threat. Increasing our ability to collect scientifically valid observations is fundamental to understanding these systems and ultimately to preserving and sustaining them. A cloud-based data management system was used to store the base sensor data from each agency involved, using service-based agents to push the data from individual field sensors to the cloud. The system supports a range of service-based outputs such as on-line graphs, a smart-phone application and simple event detection. A more complex event detection system was written that takes input from the cloud services and outputs natural language 'tweets' to Twitter as events occur. It therefore becomes possible to distil the entire data set down to a series of Twitter entries that interested parties can subscribe to. The next step is to allow users to define their own events and to deliver results, in context, to their preferred medium. The paper contrasts what has been achieved within a small community with well-defined issues with what it would take to build equivalent systems to hold a wide range of cross-community observational data addressing a wider range of potential issues. The role of discoverability, quality control, uncertainty, conformity and metadata is investigated, along with a brief discussion of existing and emerging standards in this area. The elements of such a system are described, along with the role of modelling and scenario tools in delivering a higher level of outputs linking what may have already occurred (event detection) with what may potentially occur (scenarios). The development of service-based, cloud-computing open data systems, coupled with complex event detection systems delivering through social media and other channels and linked into model and scenario systems, represents one vision for delivering value from the increasing store of ocean observations, most of which lie unknown, unused and unloved.

  19. Assessing the continuum of event-based biosurveillance through an operational lens.

    PubMed

    Corley, Courtney D; Lancaster, Mary J; Brigantic, Robert T; Chung, James S; Walters, Ronald A; Arthur, Ray R; Bruckner-Lea, Cynthia J; Calapristi, Augustin; Dowling, Glenn; Hartley, David M; Kennedy, Shaun; Kircher, Amy; Klucking, Sara; Lee, Eva K; McKenzie, Taylor; Nelson, Noele P; Olsen, Jennifer; Pancerella, Carmen; Quitugua, Teresa N; Reed, Jeremy Todd; Thomas, Carla S

    2012-03-01

    This research follows the Updated Guidelines for Evaluating Public Health Surveillance Systems, Recommendations from the Guidelines Working Group, published by the Centers for Disease Control and Prevention nearly a decade ago. Since then, models have been developed and complex systems have evolved with a breadth of disparate data to detect or forecast chemical, biological, and radiological events that have a significant impact on the One Health landscape. How the attributes identified in 2001 relate to the new range of event-based biosurveillance technologies is unclear. This article frames the continuum of event-based biosurveillance systems (that fuse media reports from the internet), models (ie, computational that forecast disease occurrence), and constructs (ie, descriptive analytical reports) through an operational lens (ie, aspects and attributes associated with operational considerations in the development, testing, and validation of the event-based biosurveillance methods and models and their use in an operational environment). A workshop was held in 2010 to scientifically identify, develop, and vet a set of attributes for event-based biosurveillance. Subject matter experts were invited from 7 federal government agencies and 6 different academic institutions pursuing research in biosurveillance event detection. We describe 8 attribute families for the characterization of event-based biosurveillance: event, readiness, operational aspects, geographic coverage, population coverage, input data, output, and cost. Ultimately, the analyses provide a framework from which the broad scope, complexity, and relevant issues germane to event-based biosurveillance useful in an operational environment can be characterized.

  20. Fluorescence Sensors for Early Detection of Nitrification in Drinking Water Distribution Systems – Interference Corrections (Abstract)

    EPA Science Inventory

    Nitrification event detection in chloraminated drinking water distribution systems (DWDSs) remains an ongoing challenge for many drinking water utilities, including Dallas Water Utilities (DWU) and the City of Houston (CoH). Each year, these utilities experience nitrification events...

  1. Fluorescence Sensors for Early Detection of Nitrification in Drinking Water Distribution Systems – Interference Corrections (Poster)

    EPA Science Inventory

    Nitrification event detection in chloraminated drinking water distribution systems (DWDSs) remains an ongoing challenge for many drinking water utilities, including Dallas Water Utilities (DWU) and the City of Houston (CoH). Each year, these utilities experience nitrification events...

  2. Collection of Infrasonic Sound From Sources of Military Importance

    NASA Technical Reports Server (NTRS)

    Masterman, Michael; Shams, Qamar A.; Burkett, Cecil G.; Zuckerwar, Allan J.; Stihler, Craig; Wallace, Jack

    2008-01-01

    Extreme Endeavors is collaborating with NASA Langley Research Center (LaRC) in the development, testing and analysis of an infrasonic detection system under a Space Act Agreement. Acoustic studies of atmospheric events like convective storms, shear-induced turbulence, acoustic gravity waves, microbursts, hurricanes, and clear air turbulence (CAT) over the past thirty years have established that these events are strong emitters of infrasound. Recently, NASA Langley Research Center has designed and developed a portable infrasonic detection system which can be used to make useful infrasound measurements at locations where it was not possible previously, such as a mountain crag, inside a cave or on the battlefield. The system comprises an electret condenser microphone, having a 3-inch membrane diameter, and a small, compact windscreen. Extreme Endeavors will present the findings from field testing using this portable infrasonic detection system. Field testing of the infrasonic detection system was partly funded by Greer Industries, with support provided by the West Virginia Division of Natural Resources. The findings from this work illustrate the ability to detect structure and other information about the contents of caves. The presentation will describe a methodology for utilizing infrasound to locate and portray underground facilities.

  3. Limitations imposed on fire PRA methods as the result of incomplete and uncertain fire event data.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nowlen, Steven Patrick; Hyslop, J. S.

    2010-04-01

    Fire probabilistic risk assessment (PRA) methods utilize data and insights gained from actual fire events in a variety of ways. For example, fire occurrence frequencies, manual fire fighting effectiveness and timing, and the distribution of fire events by fire source and plant location are all based directly on the historical experience base. Other factors are either derived indirectly or supported qualitatively based on insights from the event data. These factors include the general nature and intensity of plant fires, insights into operator performance, and insights into fire growth and damage behaviors. This paper will discuss the potential methodology improvements that could be realized if more complete fire event reporting information were available. Areas that could benefit from more complete event reporting that will be discussed in the paper include fire event frequency analysis, analysis of fire detection and suppression system performance including incipient detection systems, analysis of manual fire fighting performance, treatment of fire growth from incipient stages to fully-involved fires, operator response to fire events, the impact of smoke on plant operations and equipment, and the impact of fire-induced cable failures on plant electrical circuits.

  4. Improving the Detectability of the Catalan Seismic Network for Local Seismic Activity Monitoring

    NASA Astrophysics Data System (ADS)

    Jara, Jose Antonio; Frontera, Tànit; Batlló, Josep; Goula, Xavier

    2016-04-01

    The seismic survey of the territory of Catalonia is mainly performed by the regional seismic network operated by the Cartographic and Geologic Institute of Catalonia (ICGC). After successive deployments and upgrades, the current network consists of 16 permanent stations equipped with 3-component broadband seismometers (STS2, STS2.5, CMG3ESP and CMG3T), 24-bit digitizers (Nanometrics Trident) and VSAT telemetry. Data are continuously sent in real-time via the Hispasat 1D satellite to the ICGC datacenter in Barcelona. Additionally, data from 10 other stations in neighboring areas (Spain, France and Andorra) have been continuously received since 2011 via Internet or VSAT, contributing both to detecting and to locating events affecting the region. More than 300 local events with Ml ≥ 0.7 are detected and located in the region each year. Nevertheless, small-magnitude earthquakes, especially those located in the south and south-west of Catalonia, may still go undetected by the automatic detection system (DAS), based on Earthworm (USGS). Thus, in order to improve the detection and characterization of these missed events, one or two new stations should be installed. Before making the decision about where to install these new stations, the performance of each existing station is evaluated, taking into account the fraction of events detected using the station's records relative to the total number of events in the catalogue that occurred during the station's operation time from January 1, 2011 to December 31, 2014. These evaluations allow us to build an Event Detection Probability Map (EDPM), a tool required to simulate the EDPMs resulting from different network topology scenarios depending on where the new stations are sited, and thus essential for the decision-making process of increasing and optimizing the event detection probability of the seismic network.
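
    A simplified reading of the per-station evaluation is sketched below: for each station, the detection probability is the fraction of catalogue events within its operating window that appear in its detection list. The data structures and field names are assumptions made for illustration.

    def station_detection_probability(catalogue, detections, operating):
        # catalogue: list of (event_id, event_time); detections: station -> set of
        # event_ids; operating: station -> (start_time, end_time).
        probs = {}
        for sta, (t0, t1) in operating.items():
            in_window = [eid for eid, t in catalogue if t0 <= t <= t1]
            if not in_window:
                probs[sta] = float("nan")
                continue
            hits = sum(eid in detections.get(sta, set()) for eid in in_window)
            probs[sta] = hits / len(in_window)
        return probs   # per-station values to be gridded into an EDPM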

  5. CTBT infrasound network performance to detect the 2013 Russian fireball event

    DOE PAGES

    Pilger, Christoph; Ceranna, Lars; Ross, J. Ole; ...

    2015-03-18

    The explosive fragmentation of the 2013 Chelyabinsk meteorite generated a large airburst with an equivalent yield of 500 kT TNT. It is the most energetic event recorded by the infrasound component of the Comprehensive Nuclear-Test-Ban Treaty-International Monitoring System (CTBT-IMS), globally detected by 20 out of 42 operational stations. This study performs a station-by-station estimation of the IMS detection capability to explain infrasound detections and nondetections from short to long distances, using the Chelyabinsk meteorite as global reference event. Investigated parameters influencing the detection capability are the directivity of the line source signal, the ducting of acoustic energy, and the individual noise conditions at each station. Findings include a clear detection preference for stations perpendicular to the meteorite trajectory, even over large distances. Only a weak influence of stratospheric ducting is observed for this low-frequency case. As a result, a strong dependence on the diurnal variability of background noise levels at each station is observed, favoring nocturnal detections.

  6. Solar Demon: near real-time solar eruptive event detection on SDO/AIA images

    NASA Astrophysics Data System (ADS)

    Kraaikamp, Emil; Verbeeck, Cis

    Solar flares, dimmings and EUV waves have been observed routinely in extreme ultra-violet (EUV) images of the Sun since 1996. These events are closely associated with coronal mass ejections (CMEs), and therefore provide useful information for early space weather alerts. The Solar Dynamics Observatory/Atmospheric Imaging Assembly (SDO/AIA) generates such a massive dataset that it becomes impossible to find most of these eruptive events manually. Solar Demon is a set of automatic detection algorithms that attempts to solve this problem by providing both near real-time warnings of eruptive events and a catalog of characterized events. Solar Demon has been designed to detect and characterize dimmings, EUV waves, as well as solar flares in near real-time on SDO/AIA data. The detection modules are running continuously at the Royal Observatory of Belgium on both quick-look data and synoptic science data. The output of Solar Demon can be accessed in near real-time on the Solar Demon website, and includes images, movies, light curves, and the numerical evolution of several parameters. Solar Demon is the result of collaboration between the FP7 projects AFFECTS and COMESEP. Flare detections of Solar Demon are integrated into the COMESEP alert system. Here we present the Solar Demon detection algorithms and their output. We will focus on the algorithm and its operational implementation. Examples of interesting flare, dimming and EUV wave events, and general statistics of the detections made so far during solar cycle 24 will be presented as well.

  7. Day-time identification of summer hailstorm cells from MSG data

    NASA Astrophysics Data System (ADS)

    Merino, A.; López, L.; Sánchez, J. L.; García-Ortega, E.; Cattani, E.; Levizzani, V.

    2013-10-01

    Identifying deep convection is of paramount importance, as it may be associated with extreme weather that has significant impact on the environment, property and the population. A new method, the Hail Detection Tool (HDT), is described for identifying hail-bearing storms using multi-spectral Meteosat Second Generation (MSG) data. HDT was conceived as a two-phase method, in which the first step is the Convective Mask (CM) algorithm devised for detection of deep convection, and the second a Hail Detection algorithm (HD) for the identification of hail-bearing clouds among the cumulonimbus systems detected by CM. Both CM and HD are based on logistic regression models trained with multi-spectral MSG datasets comprising summer convective events in the middle Ebro Valley between 2006 and 2010, detected by the RGB visualization technique (CM) or the C-band weather radar system of the University of León. By means of the logistic regression approach, the probability of identifying a cumulonimbus event with CM or a hail event with HD is computed by exploiting a proper selection of MSG wavelengths or their combination. A number of cloud physical properties (liquid water path, optical thickness and effective cloud drop radius) were used to physically interpret the results of the statistical models from a meteorological perspective, using a method based on these "ingredients." Finally, the HDT was applied to a new validation sample consisting of events during summer 2011. The overall Probability of Detection (POD) was 76.9% and the False Alarm Ratio 16.7%.
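
    A two-stage scheme in this spirit (a Convective Mask model applied to all pixels, then a Hail Detection model applied only to convective ones) is sketched below with scikit-learn; the feature matrices stand for MSG channel values or channel differences, and the 0.5 probability cut-offs are placeholders rather than the tuned values of the HDT.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    def train_two_stage(X_all, y_convective, X_convective, y_hail):
        cm = LogisticRegression(max_iter=1000).fit(X_all, y_convective)
        hd = LogisticRegression(max_iter=1000).fit(X_convective, y_hail)
        return cm, hd

    def classify(cm, hd, X, p_cm=0.5, p_hd=0.5):
        convective = cm.predict_proba(X)[:, 1] >= p_cm
        hail = np.zeros(len(X), dtype=bool)
        if convective.any():                       # HD only runs on CM detections
            hail[convective] = hd.predict_proba(X[convective])[:, 1] >= p_hd
        return convective, hail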

  8. Evaluation of a Broad-Spectrum Partially Automated Adverse Event Surveillance System: A Potential Tool for Patient Safety Improvement in Hospitals With Limited Resources.

    PubMed

    Saikali, Melody; Tanios, Alain; Saab, Antoine

    2017-11-21

    The aim of the study was to evaluate the sensitivity and resource efficiency of a partially automated adverse event (AE) surveillance system for routine patient safety efforts in hospitals with limited resources. Twenty-eight automated triggers from the hospital information system's clinical and administrative databases identified cases that were then filtered by exclusion criteria per trigger and then reviewed by an interdisciplinary team. The system, developed and implemented using in-house resources, was applied for 45 days of surveillance, for all hospital inpatient admissions (N = 1107). Each trigger was evaluated for its positive predictive value (PPV). Furthermore, the sensitivity of the surveillance system (overall and by AE category) was estimated relative to incidence ranges in the literature. The surveillance system identified a total of 123 AEs among 283 reviewed medical records, yielding an overall PPV of 52%. The tool showed variable levels of sensitivity across and within AE categories when compared with the literature, with a relatively low overall sensitivity estimated between 21% and 44%. Adverse events were detected in 23 of the 36 AE categories defined by an established harm classification system. Furthermore, none of the detected AEs were voluntarily reported. The surveillance system showed variable sensitivity levels across a broad range of AE categories with an acceptable PPV, overcoming certain limitations associated with other harm detection methods. The number of cases captured was substantial, and none had been previously detected or voluntarily reported. For hospitals with limited resources, this methodology provides valuable safety information from which interventions for quality improvement can be formulated.

  9. Integrating physically based simulators with Event Detection Systems: Multi-site detection approach.

    PubMed

    Housh, Mashor; Ohar, Ziv

    2017-03-01

    The Fault Detection (FD) problem in control theory concerns monitoring a system to identify when a fault has occurred. Two approaches can be distinguished for FD: signal-processing-based FD and model-based FD. The former concerns developing algorithms to infer faults directly from sensors' readings, while the latter uses a simulation model of the real system to analyze the discrepancy between sensors' readings and the expected values from the simulation model. Most contamination Event Detection Systems (EDSs) for water distribution systems have followed signal-processing-based FD, which relies on analyzing the signals from monitoring stations independently of each other, rather than evaluating all stations simultaneously within an integrated network. In this study, we show that a model-based EDS, which utilizes physically based water quality and hydraulic simulation models, can outperform the signal-processing-based EDS. We also show that the model-based EDS can facilitate the development of a Multi-Site EDS (MSEDS), which analyzes the data from all the monitoring stations simultaneously within an integrated network. The advantage of the joint analysis in the MSEDS is expressed by increased detection accuracy (more true positive alarms and fewer false alarms) and shorter detection time. Copyright © 2016 Elsevier Ltd. All rights reserved.
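
    The joint, model-based test can be caricatured as below: simulated station values come from a hydraulic/water-quality model (not shown), and all stations are evaluated together through one standardised residual score. The RMS statistic and the 3-sigma threshold are illustrative choices, not the MSEDS formulation itself.

    import numpy as np

    def multi_site_alarm(observed, simulated, sigma, z_threshold=3.0):
        # observed, simulated: (n_stations,) readings and model predictions for
        # the current time step; sigma: fault-free residual std per station.
        z = (observed - simulated) / sigma
        joint_score = float(np.sqrt(np.mean(z ** 2)))   # RMS standardised residual
        return joint_score > z_threshold, joint_score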

  10. Estimation of Temporal Gait Parameters Using a Wearable Microphone-Sensor-Based System

    PubMed Central

    Wang, Cheng; Wang, Xiangdong; Long, Zhou; Yuan, Jing; Qian, Yueliang; Li, Jintao

    2016-01-01

    Most existing wearable gait analysis methods focus on the analysis of data obtained from inertial sensors. This paper proposes a novel, low-cost, wireless and wearable gait analysis system which uses microphone sensors to collect footstep sound signals during walking. To the best of our knowledge, this is the first time a microphone sensor has been used as a wearable gait analysis device. Based on this system, a gait analysis algorithm for estimating the temporal parameters of gait is presented. The algorithm fully exploits the fusion of the footstep sound signals from both feet and includes three stages: footstep detection, heel-strike event and toe-on event detection, and calculation of gait temporal parameters. Experimental results show that with a total of 240 data sequences and 1732 steps collected using three different gait data collection strategies from 15 healthy subjects, the proposed system achieves an average 0.955 F1-measure for footstep detection, an average 94.52% accuracy rate for heel-strike detection and a 94.25% accuracy rate for toe-on detection. Using these detection results, nine temporal gait parameters are calculated, and these parameters are consistent with the corresponding normal gait temporal parameters and with the results calculated from labeled data. The results verify the effectiveness of our proposed system and algorithm for temporal gait parameter estimation. PMID:27999321
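
    Given detected heel-strike and toe-on times for one foot, a few temporal parameters can be derived as in the sketch below; this is a simplified reading, and the nine parameters reported in the paper are not reproduced.

    import numpy as np

    def temporal_gait_parameters(heel_strikes, toe_ons):
        # Event times in seconds for a single foot, assumed paired per footstep.
        heel_strikes = np.asarray(heel_strikes, dtype=float)
        toe_ons = np.asarray(toe_ons, dtype=float)
        stride_times = np.diff(heel_strikes)             # successive heel strikes
        n = min(len(heel_strikes), len(toe_ons))
        heel_to_toe = toe_ons[:n] - heel_strikes[:n]     # within one footstep
        return {
            "stride_time_mean_s": float(stride_times.mean()),
            "stride_time_cv": float(stride_times.std() / stride_times.mean()),
            "heel_to_toe_mean_s": float(heel_to_toe.mean()),
        }

    print(temporal_gait_parameters([0.0, 1.1, 2.2, 3.3], [0.15, 1.26, 2.34, 3.44]))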

  11. Method and apparatus for distinguishing actual sparse events from sparse event false alarms

    DOEpatents

    Spalding, Richard E.; Grotbeck, Carter L.

    2000-01-01

    Remote sensing method and apparatus wherein sparse optical events are distinguished from false events. "Ghost" images of actual optical phenomena are generated using an optical beam splitter and optics configured to direct split beams to a single sensor or segmented sensor. True optical signals are distinguished from false signals or noise based on whether the ghost image is present or absent. The invention obviates the need for dual sensor systems to effect a false target detection capability, thus significantly reducing system complexity and cost.

  12. Experiments on Adaptive Techniques for Host-Based Intrusion Detection

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    DRAELOS, TIMOTHY J.; COLLINS, MICHAEL J.; DUGGAN, DAVID P.

    2001-09-01

    This research explores four experiments of adaptive host-based intrusion detection (ID) techniques in an attempt to develop systems that can detect novel exploits. The technique considered to have the most potential is adaptive critic designs (ACDs) because of their utilization of reinforcement learning, which allows learning exploits that are difficult to pinpoint in sensor data. Preliminary results of ID using an ACD, an Elman recurrent neural network, and a statistical anomaly detection technique demonstrate an ability to learn to distinguish between clean and exploit data. We used the Solaris Basic Security Module (BSM) as a data source and performed considerable preprocessing on the raw data. A detection approach called generalized signature-based ID is recommended as a middle ground between signature-based ID, which has an inability to detect novel exploits, and anomaly detection, which detects too many events including events that are not exploits. The primary results of the ID experiments demonstrate the use of custom data for generalized signature-based intrusion detection and the ability of neural network-based systems to learn in this application environment.

  13. A Centralized Display for Mission Monitoring

    NASA Technical Reports Server (NTRS)

    Trujillo, Anna C.

    2004-01-01

    Humans traditionally experience a vigilance decrement over extended periods of time on reliable systems. One possible solution to aiding operators in monitoring is to use polar-star displays that will show deviations from normal in a more salient manner. The primary objectives of this experiment were to determine if polar-star displays aid in monitoring and preliminary diagnosis of the aircraft state. This experiment indicated that the polar-star display does indeed aid operators in detecting and diagnosing system events. Subjects were able to notice system events earlier and they subjectively reported the polar-star display helped them in monitoring, noticing an event, and diagnosing an event. Therefore, these results indicate that the polar-star display used for monitoring and preliminary diagnosis improves performance in these areas for system related events.

  14. Detection of Historical and Future Precipitation Variations and Extremes Over the Continental United States

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Anderson, Bruce T.

    2015-12-11

    Problem: The overall goal of this proposal is to detect observed seasonal-mean precipitation variations and extreme event occurrences over the United States. Detection, i.e. the process of demonstrating that an observed change in climate is unusual, first requires some means of estimating the range of internal variability absent any external drivers. Ideally, the internal variability would be derived from the observations themselves; however, generally the observed variability is a confluence of both internal variability and variability in response to external drivers. Further, numerical climate models—the standard tool for detection studies—have their own estimates of intrinsic variability, which may differ substantially from that found in the observed system as well as from other model systems. These problems are further compounded for weather and climate extremes, which as singular events are particularly ill-suited for detection studies because of their infrequent occurrence, limited spatial range, and underestimation within global and even regional numerical models. Rationale: As a basis for this research we will show how stochastic daily-precipitation models—models in which the simulated interannual-to-multidecadal precipitation variance is purely the result of the random evolution of daily precipitation events within a given time period—can be used to address many of these issues simultaneously. Through the novel application of these well-established models, we can first estimate the changes/trends in various means and extremes that can occur even with fixed daily-precipitation characteristics, e.g. that can occur simply as a result of the stochastic evolution of daily weather events within a given climate. Detection of a change in the observed climate—either naturally or anthropogenically forced—can then be defined as any change relative to this stochastic variability, e.g. as changes/trends in the means and extremes that could only have occurred through a change in the underlying climate. As such, this method is capable of detecting “hot spot” regions—as well as “flare ups” within the hot spot regions—that have experienced interannual- to multi-decadal-scale variations and trends in seasonal-mean precipitation and extreme events. Further, by applying the same methods to numerical climate models we can discern the fidelity of the current-generation climate models in representing detectability within the observed climate system. In this way, we can objectively determine the utility of these model systems for performing detection studies of historical and future climate change.
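
    The core idea (estimate how large a trend can arise purely from the stochastic evolution of daily weather under fixed daily characteristics) can be sketched with a simple wet-day/gamma-amount generator; the parameter values below are illustrative and not fitted to any station.

    import numpy as np

    rng = np.random.default_rng(0)

    def stochastic_seasonal_totals(p_wet=0.3, shape=0.7, scale=8.0,
                                   n_days=90, n_years=50, n_sims=2000):
        # Seasonal totals under fixed daily-precipitation characteristics.
        wet = rng.random((n_sims, n_years, n_days)) < p_wet
        amounts = rng.gamma(shape, scale, size=wet.shape) * wet
        return amounts.sum(axis=2)                     # (n_sims, n_years)

    totals = stochastic_seasonal_totals()
    years = np.arange(totals.shape[1])
    slopes = np.polyfit(years, totals.T, 1)[0]         # one trend per simulation
    lo, hi = np.percentile(slopes, [2.5, 97.5])
    print(f"95% stochastic trend range: [{lo:.2f}, {hi:.2f}] mm/yr")
    # An observed trend outside this range would count as a detected change.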

  15. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Novak, Avrey; Nyflot, Matthew J.; Ermoian, Ralph P.

    Purpose: Radiation treatment planning involves a complex workflow that has multiple potential points of vulnerability. This study utilizes an incident reporting system to identify the origination and detection points of near-miss errors, in order to guide departmental safety improvement efforts. Previous studies have examined where errors arise, but not where they are detected, or applied a near-miss risk index (NMRI) to gauge severity. Methods: From 3/2012 to 3/2014, 1897 incidents were analyzed from a departmental incident learning system. All incidents were prospectively reviewed weekly by a multidisciplinary team and assigned an NMRI score ranging from 0 to 4 reflecting potential harm to the patient (no potential harm to potential critical harm). Incidents were classified by point of incident origination and detection based on a 103-step workflow. The individual steps were divided among nine broad workflow categories (patient assessment, imaging for radiation therapy (RT) planning, treatment planning, pretreatment plan review, treatment delivery, on-treatment quality management, post-treatment completion, equipment/software quality management, and other). The average NMRI scores of incidents originating or detected within each broad workflow area were calculated. Additionally, out of 103 individual process steps, 35 were classified as safety barriers, the process steps whose primary function is to catch errors. The safety barriers which most frequently detected incidents were identified and analyzed. Finally, the distance between event origination and detection was explored by grouping events by the number of broad workflow areas passed through before detection, and average NMRI scores were compared. Results: Near-miss incidents most commonly originated within treatment planning (33%). However, the incidents with the highest average NMRI scores originated during imaging for RT planning (NMRI = 2.0, average NMRI of all events = 1.5), specifically during the documentation of patient positioning and localization of the patient. Incidents were most frequently detected during treatment delivery (30%), and incidents identified at this point also had higher severity scores than those in other workflow areas (NMRI = 1.6). Incidents identified during on-treatment quality management were also more severe (NMRI = 1.7), and the specific process steps of reviewing portal and CBCT images tended to catch the highest-severity incidents. On average, safety barriers caught 46% of all incidents, most frequently at physics chart review, therapist’s chart check, and the review of portal images; however, most of the incidents that pass through a particular safety barrier are not ones that the barrier is designed to capture. Conclusions: Incident learning systems can be used to assess the most common points of error origination and detection in radiation oncology. This can help tailor safety improvement efforts and target the highest-impact portions of the workflow. The most severe near-miss events tend to originate during simulation, with the most severe near-miss events detected at the time of patient treatment. Safety barriers can be improved to allow earlier detection of near-miss events.

  16. Fluorescence Sensors for Early Detection of Nitrification in Drinking Water Distribution Systems - Interference Corrections and Feasibility Assessment

    NASA Astrophysics Data System (ADS)

    Do, T. D.; Pifer, A.; Chowdhury, Z.; Wahman, D.; Zhang, W.; Fairey, J.

    2017-12-01

    Detection of nitrification events in chloraminated drinking water distribution systems remains an ongoing challenge for many drinking water utilities, including Dallas Water Utilities (DWU) and the City of Houston (CoH). Each year, these utilities experience nitrification events that necessitate extensive flushing, resulting in the loss of billions of gallons of finished water. Biological techniques used to quantify the activity of nitrifying bacteria are impractical for real-time monitoring because they require significant laboratory efforts and/or lengthy incubation times. At present, DWU and CoH regularly rely on physicochemical parameters including total chlorine and monochloramine residual, and free ammonia, nitrite, and nitrate as indicators of nitrification, but these metrics lack specificity to nitrifying bacteria. To improve detection of nitrification in chloraminated drinking water distribution systems, we seek to develop a real-time fluorescence-based sensor system to detect the early onset of nitrification events by measuring the fluorescence of soluble microbial products (SMPs) specific to nitrifying bacteria. Preliminary data indicates that fluorescence-based metrics have the sensitivity to detect these SMPs in the early stages of nitrification, but several remaining challenges will be explored in this presentation. We will focus on benchtop and sensor results from ongoing batch and annular reactor experiments designed to (1) identify fluorescence wavelength pairs and data processing techniques suitable for measurement of SMPs from nitrification and (2) assess and correct potential interferences, such as those from monochloramine, pH, iron, nitrite, nitrate and humic substances. This work will serve as the basis for developing fluorescence sensor packages for full-scale testing and validation in the DWU and CoH systems. Findings from this research could be leveraged to identify nitrification events in their early stages, facilitating proactive interventions and decreasing the severity and frequency of nitrification episodes and water loss due to flushing.

  17. Minicomputer Hardware Monitor Design.

    DTIC Science & Technology

    1980-06-01

    detected signals. Both the COMTEN and TESDATA systems rely on a "plugboard" arrangement where sensor inputs may be combined by means of standard gate logic...systems. A further use of the plugboard "patch panels" is to direct the measured "event" to collection and/or distribution circuitry, where the event...are plugboard and sensor hookup configurations. The available T-PACs are: Basic System Profile, Regional Mapping, and Advanced System Management.

  18. Bayesian Inference for Signal-Based Seismic Monitoring

    NASA Astrophysics Data System (ADS)

    Moore, D.

    2015-12-01

    Traditional seismic monitoring systems rely on discrete detections produced by station processing software, discarding significant information present in the original recorded signal. SIG-VISA (Signal-based Vertically Integrated Seismic Analysis) is a system for global seismic monitoring through Bayesian inference on seismic signals. By modeling signals directly, our forward model is able to incorporate a rich representation of the physics underlying the signal generation process, including source mechanisms, wave propagation, and station response. This allows inference in the model to recover the qualitative behavior of recent geophysical methods including waveform matching and double-differencing, all as part of a unified Bayesian monitoring system that simultaneously detects and locates events from a global network of stations. We demonstrate recent progress in scaling up SIG-VISA to efficiently process the data stream of global signals recorded by the International Monitoring System (IMS), including comparisons against existing processing methods that show increased sensitivity from our signal-based model and in particular the ability to locate events (including aftershock sequences that can tax analyst processing) precisely from waveform correlation effects. We also provide a Bayesian analysis of an alleged low-magnitude event near the DPRK test site in May 2010 [1] [2], investigating whether such an event could plausibly be detected through automated processing in a signal-based monitoring system. [1] Zhang, Miao and Wen, Lianxing. "Seismological Evidence for a Low-Yield Nuclear Test on 12 May 2010 in North Korea". Seismological Research Letters, January/February 2015. [2] Richards, Paul. "A Seismic Event in North Korea on 12 May 2010". CTBTO SnT 2015 oral presentation, video at https://video-archive.ctbto.org/index.php/kmc/preview/partner_id/103/uiconf_id/4421629/entry_id/0_ymmtpps0/delivery/http

  19. Detection of goal events in soccer videos

    NASA Astrophysics Data System (ADS)

    Kim, Hyoung-Gook; Roeber, Steffen; Samour, Amjad; Sikora, Thomas

    2005-01-01

    In this paper, we present an automatic extraction of goal events in soccer videos by using audio track features alone, without relying on expensive-to-compute video track features. The extracted goal events can be used for high-level indexing and selective browsing of soccer videos. The detection of soccer video highlights using audio content comprises three steps: 1) extraction of audio features from a video sequence, 2) candidate detection of highlight events based on the information provided by the feature extraction methods and the Hidden Markov Model (HMM), and 3) goal event selection to finally determine the video intervals to be included in the summary. For this purpose, we compared the performance of the well-known Mel-scale Frequency Cepstral Coefficients (MFCC) feature extraction method vs. the MPEG-7 Audio Spectrum Projection (ASP) feature extraction method based on three different decomposition methods, namely Principal Component Analysis (PCA), Independent Component Analysis (ICA) and Non-Negative Matrix Factorization (NMF). To evaluate our system we collected five soccer game videos from various sources. In total we have seven hours of soccer games consisting of eight gigabytes of data. One of the five soccer games is used as the training data (e.g., announcers' excited speech, audience ambient speech noise, audience clapping, environmental sounds). Our goal event detection results are encouraging.
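    As a rough illustration of the audio pipeline sketched above, the following snippet extracts MFCC features and scores segments against per-class Gaussian HMMs. It is a minimal sketch assuming the librosa and hmmlearn packages are available; the sample rate, number of HMM states, and class labels are illustrative and are not taken from the paper.

    ```python
    # Minimal sketch of MFCC feature extraction and HMM-based scoring of audio
    # segments for highlight-candidate detection. Window lengths, state counts,
    # and class labels are illustrative placeholders.
    import numpy as np
    import librosa
    from hmmlearn.hmm import GaussianHMM

    def mfcc_features(wav_path, n_mfcc=13):
        """Load an audio track and return frame-level MFCC vectors (frames x coeffs)."""
        y, sr = librosa.load(wav_path, sr=16000, mono=True)
        return librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc).T

    def train_class_hmms(features_by_class, n_states=4):
        """Fit one Gaussian HMM per audio class (e.g. excited speech, clapping, ambient noise)."""
        models = {}
        for label, feats in features_by_class.items():
            hmm = GaussianHMM(n_components=n_states, covariance_type="diag", n_iter=50)
            hmm.fit(np.vstack(feats), lengths=[f.shape[0] for f in feats])
            models[label] = hmm
        return models

    def classify_segment(segment_feats, models):
        """Label a segment with the class whose HMM gives the highest log-likelihood."""
        scores = {label: m.score(segment_feats) for label, m in models.items()}
        return max(scores, key=scores.get)
    ```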

  20. Design and Optimization of a Dual-HPGe Gamma Spectrometer and Its Cosmic Veto System

    NASA Astrophysics Data System (ADS)

    Zhang, Weihua; Ro, Hyunje; Liu, Chuanlei; Hoffman, Ian; Ungar, Kurt

    2017-03-01

    In this paper, a dual high-purity germanium (HPGe) gamma spectrometer detection system with an increased solid angle was developed. The detection system consists of a pair of Broad Energy Germanium (BE-5030p) detectors and an XIA LLC digital gamma finder/Pixie-4 data-acquisition system. A data file processor containing five modules was developed that parses Pixie-4 list-mode data output files and classifies detections into anticoincident/coincident events and their specific coincidence types (double/triple/quadruple) for further analysis. A novel cosmic veto system was installed in the detection system. It was designed to be easy to install around an existing system while still providing cosmic veto shielding comparable to other designs. This paper describes the coverage and efficiency of this cosmic veto and the data processing system. It has been demonstrated that the cosmic veto system can provide a mean background reduction of 66.1%, which results in a mean MDA (minimum detectable activity) improvement of 58.3%. The counting time to meet the required MDA for a specific radionuclide can be reduced by a factor of 2-3 compared to a conventional HPGe system. This paper also provides an initial overview of coincidence timing distributions between an incoming event from a cosmic veto plate and the HPGe detectors.

  1. Detecting modification of biomedical events using a deep parsing approach.

    PubMed

    Mackinlay, Andrew; Martinez, David; Baldwin, Timothy

    2012-04-30

    This work describes a system for identifying event mentions in bio-molecular research abstracts that are either speculative (e.g. analysis of IkappaBalpha phosphorylation, where it is not specified whether phosphorylation did or did not occur) or negated (e.g. inhibition of IkappaBalpha phosphorylation, where phosphorylation did not occur). The data comes from a standard dataset created for the BioNLP 2009 Shared Task. The system uses a machine-learning approach, where the features used for classification are a combination of shallow features derived from the words of the sentences and more complex features based on the semantic outputs produced by a deep parser. To detect event modification, we use a Maximum Entropy learner with features extracted from the data relative to the trigger words of the events. The shallow features are bag-of-words features based on a small sliding context window of 3-4 tokens on either side of the trigger word. The deep parser features are derived from parses produced by the English Resource Grammar and the RASP parser. The outputs of these parsers are converted into the Minimal Recursion Semantics formalism, and from this, we extract features motivated by linguistics and the data itself. All of these features are combined to create training or test data for the machine learning algorithm. Over the test data, our methods produce approximately a 4% absolute increase in F-score for detection of event modification compared to a baseline based only on the shallow bag-of-words features. Our results indicate that grammar-based techniques can enhance the accuracy of methods for detecting event modification.
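    To make the shallow feature path concrete, the sketch below builds bag-of-words features from a small context window around a trigger word and trains a maximum-entropy classifier (logistic regression in scikit-learn). The deep-parser (MRS-derived) features are not reproduced here; the toy sentences, window width, and labels are illustrative only.

    ```python
    # Sketch of the shallow feature path only: bag-of-words features from a small
    # context window around each event trigger, fed to a maximum-entropy learner
    # (LogisticRegression). Tokenisation and labels are illustrative.
    from sklearn.feature_extraction import DictVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    def window_features(tokens, trigger_idx, width=3):
        """Bag-of-words features from +/- `width` tokens around the trigger word."""
        feats = {"trigger=" + tokens[trigger_idx].lower(): 1}
        lo, hi = max(0, trigger_idx - width), min(len(tokens), trigger_idx + width + 1)
        for i in range(lo, hi):
            if i != trigger_idx:
                feats["bow=" + tokens[i].lower()] = 1
        return feats

    # Toy training data: (sentence tokens, trigger index, modification label).
    examples = [
        ("analysis of IkappaBalpha phosphorylation".split(), 3, "speculated"),
        ("inhibition of IkappaBalpha phosphorylation".split(), 3, "negated"),
        ("IkappaBalpha phosphorylation was observed".split(), 1, "none"),
    ]
    X = [window_features(toks, idx) for toks, idx, _ in examples]
    y = [label for _, _, label in examples]

    model = make_pipeline(DictVectorizer(), LogisticRegression(max_iter=1000))
    model.fit(X, y)
    ```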

  2. Microfluidic Arrayed Lab-On-A-Chip for Electrochemical Capacitive Detection of DNA Hybridization Events.

    PubMed

    Ben-Yoav, Hadar; Dykstra, Peter H; Bentley, William E; Ghodssi, Reza

    2017-01-01

    A microfluidic electrochemical lab-on-a-chip (LOC) device for DNA hybridization detection has been developed. The device comprises a 3 × 3 array of microelectrodes integrated with a dual-layer microfluidic valved manipulation system that provides controlled and automated capabilities for high-throughput analysis of microliter-volume samples. The surface of the microelectrodes is functionalized with single-stranded DNA (ssDNA) probes which enable specific detection of complementary ssDNA targets. These targets are detected by a capacitive technique which measures dielectric variation at the microelectrode-electrolyte interface due to DNA hybridization events. A quantitative analysis of the hybridization events is carried out based on a sensing model that includes a detailed analysis of energy storage and dissipation components. By calculating these components during hybridization events, the device demonstrates specific and dose-response sensing characteristics. The developed microfluidic LOC for DNA hybridization detection offers a technology for real-time and label-free assessment of genetic markers outside of laboratory settings, such as at the point-of-care or in in-field environmental monitoring.

  3. Automatic processing of induced events in the geothermal reservoirs Landau and Insheim, Germany

    NASA Astrophysics Data System (ADS)

    Olbert, Kai; Küperkoch, Ludger; Meier, Thomas

    2016-04-01

    Induced events can be a risk to local infrastructure and need to be understood and evaluated. They also represent an opportunity to learn more about reservoir behavior and characteristics. Prior to the analysis, the waveform data must be processed consistently and accurately to avoid erroneous interpretations. In the framework of the MAGS2 project, an automatic off-line event detection algorithm and a phase onset time determination algorithm are applied to induced seismic events in the geothermal systems in Landau and Insheim, Germany. The off-line detection algorithm is based on cross-correlation of continuous data from the local seismic network with master events. It distinguishes between events from different reservoirs and within individual reservoirs, and it provides a location and magnitude estimate. Data from 2007 to 2014 are processed and compared with detections from the SeisComp3 cross-correlation detector and an STA/LTA detector. The detected events are analyzed for spatial and temporal clustering, and the number of events is compared with the existing detection lists. The automatic phase picking algorithm combines an AR-AIC approach with a cost function to find precise P1- and S1-phase onset times, which can be used for localization and tomography studies. 800 induced events are processed, yielding 5000 P1- and 6000 S1-picks. The phase onset times show high precision, with mean residuals relative to manual picks of 0 s (P1) to 0.04 s (S1) and standard deviations below ±0.05 s. The resulting automatic picks are used to relocate a selected number of events in order to evaluate their influence on location precision.
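    A generic template-matching detector of the kind referred to above can be sketched in a few lines: scan the continuous trace with a normalised cross-correlation against a master-event waveform and declare a detection wherever the correlation exceeds a threshold. This is not the MAGS2 implementation; the threshold and minimum separation are illustrative.

    ```python
    # Simplified master-event (template matching) detection sketch. The loop-based
    # normalised cross-correlation is written for clarity, not speed.
    import numpy as np

    def normalized_xcorr(trace, template):
        """Normalised cross-correlation of a template against every window of a trace."""
        n = len(template)
        tpl = (template - template.mean()) / (template.std() * n)
        cc = np.empty(len(trace) - n + 1)
        for i in range(len(cc)):
            win = trace[i:i + n]
            std = win.std()
            cc[i] = 0.0 if std == 0 else np.dot(tpl, (win - win.mean()) / std)
        return cc

    def detect_events(trace, template, threshold=0.7, min_separation=200):
        """Return sample indices where correlation exceeds the threshold (de-duplicated)."""
        cc = normalized_xcorr(trace, template)
        picks, last = [], -min_separation
        for i in np.flatnonzero(cc > threshold):
            if i - last >= min_separation:
                picks.append(int(i))
                last = i
        return picks
    ```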

  4. A substitution method to improve completeness of events documentation in anesthesia records.

    PubMed

    Lamer, Antoine; De Jonckheere, Julien; Marcilly, Romaric; Tavernier, Benoît; Vallet, Benoît; Jeanne, Mathieu; Logier, Régis

    2015-12-01

    Anesthesia information management systems (AIMS) are optimized to find and display data and curves for one specific intervention, but not for retrospective analysis of a large volume of interventions. Such systems present two main limitations: (1) the transactional database architecture and (2) the incompleteness of event documentation. To address the architectural problem, data warehouses were developed to provide an architecture suitable for analysis. However, the problem of incomplete documentation remains unsolved. In this paper, we describe a method for determining substitution rules in order to detect missing anesthesia events in an anesthesia record. Our method is based on the principle that a missing event can be detected using a substitute event, defined as the nearest documented event. As an example, we focused on the automatic detection of the start and the end of the anesthesia procedure when these events were not documented by the clinicians. We applied our method to a set of records in order to evaluate (1) the event detection accuracy and (2) the improvement in valid records. For the years 2010-2012, we obtained event detection with a precision of 0.00 (-2.22; 2.00) min for the start of anesthesia and 0.10 (0.00; 0.35) min for the end of anesthesia. We also increased data completeness by 21.1% (from 80.3 to 97.2% of the total database) for the start and end of anesthesia events. This method appears to be effective for replacing missing "start and end of anesthesia" events. It could also be used to replace other missing time events in this particular data warehouse, as well as in other kinds of data warehouses.
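    The substitution principle can be illustrated with a small pandas sketch: learn the typical offset between a documented neighbouring event and the target event from complete records, then impute the missing timestamp from the nearest documented event. Column names and the toy data are hypothetical, not the fields of the actual data warehouse.

    ```python
    # Minimal sketch of the substitution idea: when "start_of_anesthesia" is missing,
    # estimate it from the nearest documented event plus the median offset observed
    # in complete records. Column names and values are hypothetical.
    import pandas as pd

    complete = pd.DataFrame({
        "induction_drug_given": pd.to_datetime(["08:02", "09:15", "11:40"]),
        "start_of_anesthesia":  pd.to_datetime(["08:05", "09:18", "11:45"]),
    })
    # Typical offset between the substitute event and the missing one.
    median_offset = (complete["start_of_anesthesia"]
                     - complete["induction_drug_given"]).median()

    def impute_start(record):
        """Fill a missing 'start_of_anesthesia' from the nearest documented event."""
        if pd.isna(record.get("start_of_anesthesia")):
            return record["induction_drug_given"] + median_offset
        return record["start_of_anesthesia"]
    ```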

  5. Using Statistical Process Control for detecting anomalies in multivariate spatiotemporal Earth Observations

    NASA Astrophysics Data System (ADS)

    Flach, Milan; Mahecha, Miguel; Gans, Fabian; Rodner, Erik; Bodesheim, Paul; Guanche-Garcia, Yanira; Brenning, Alexander; Denzler, Joachim; Reichstein, Markus

    2016-04-01

    The number of available Earth observations (EOs) is currently increasing substantially. Detecting anomalous patterns in these multivariate time series is an important step in identifying changes in the underlying dynamical system. Likewise, data quality issues might result in anomalous multivariate data constellations and have to be identified before corrupting subsequent analyses. In industrial applications, a common strategy is to monitor production chains with several sensors coupled to some statistical process control (SPC) algorithm. The basic idea is to raise an alarm when these sensor data depict some anomalous pattern according to the SPC, i.e., the production chain is considered 'out of control'. In fact, the industrial applications are conceptually similar to the on-line monitoring of EOs. However, algorithms used in the context of SPC or process monitoring are rarely considered for supervising multivariate spatio-temporal Earth observations. The objective of this study is to exploit the potential and transferability of SPC concepts to Earth system applications. We compare a range of different algorithms typically applied by SPC systems and evaluate their capability to detect, e.g., known extreme events in land surface processes. Specifically, two main issues are addressed: (1) identifying the most suitable combination of data pre-processing and detection algorithm for a specific type of event and (2) analyzing the limits of the individual approaches with respect to the magnitude and spatio-temporal size of the event, as well as the data's signal-to-noise ratio. Extensive artificial data sets that represent the typical properties of Earth observations are used in this study. Our results show that the majority of the algorithms used can be considered for the detection of multivariate spatiotemporal events and directly transferred to real Earth observation data as currently assembled in different projects at the European scale, e.g. http://baci-h2020.eu/index.php/ and http://earthsystemdatacube.net/. Known anomalies such as the Russian heatwave are detected as well as anomalies which are not detectable with univariate methods.
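    One classical SPC detector that could be applied to such multivariate streams is Hotelling's T² control chart; the sketch below flags time steps whose squared distance from the in-control mean exceeds an approximate chi-square control limit. It is a minimal illustration of the family of algorithms discussed, not necessarily one of those compared in the study, and the control-limit choice and injected anomaly are illustrative.

    ```python
    # Hotelling's T^2 chart as a simple multivariate SPC anomaly detector.
    import numpy as np
    from scipy.stats import chi2

    def hotelling_t2(X_train, X_stream, alpha=0.001):
        """Return a boolean anomaly flag per row of X_stream (rows = time, cols = variables)."""
        mu = X_train.mean(axis=0)
        cov_inv = np.linalg.pinv(np.cov(X_train, rowvar=False))
        diff = X_stream - mu
        t2 = np.einsum("ij,jk,ik->i", diff, cov_inv, diff)     # T^2 per time step
        limit = chi2.ppf(1 - alpha, df=X_train.shape[1])       # approximate control limit
        return t2 > limit

    rng = np.random.default_rng(0)
    train = rng.normal(size=(500, 4))      # in-control multivariate observations
    stream = rng.normal(size=(100, 4))
    stream[40] += 6.0                      # inject an anomalous constellation
    print(np.flatnonzero(hotelling_t2(train, stream)))
    ```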

  6. Ontology-based knowledge management for personalized adverse drug events detection.

    PubMed

    Cao, Feng; Sun, Xingzhi; Wang, Xiaoyuan; Li, Bo; Li, Jing; Pan, Yue

    2011-01-01

    Since adverse drug events (ADEs) have become a leading cause of death around the world, there is high demand for helping clinicians and patients identify possible hazards from drug effects. Motivated by this, we present a personalized ADE detection system, with a focus on applying ontology-based knowledge management techniques to enhance ADE detection services. The development of electronic health records makes it possible to automate personalized ADE detection, i.e., to take patient clinical conditions into account during ADE detection. Specifically, we define an ADE ontology to uniformly manage ADE knowledge from multiple sources. We take advantage of the rich semantics of the SNOMED-CT terminology and apply it to ADE detection via semantic query and reasoning.

  7. Characterizing and analyzing ramping events in wind power, solar power, load, and netload

    DOE PAGES

    Cui, Mingjian; Zhang, Jie; Feng, Cong; ...

    2017-04-07

    Here, one of the biggest concerns associated with integrating a large amount of renewable energy into the power grid is the ability to handle large ramps in the renewable power output. For the sake of system reliability and economics, it is essential for power system operators to better understand the ramping features of renewable, load, and netload. An optimized swinging door algorithm (OpSDA) is used and extended to accurately and efficiently detect ramping events. For wind power ramps detection, a process of merging 'bumps' (that have a different changing direction) into adjacent ramping segments is included to improve the performance of the OpSDA method. For solar ramps detection, ramping events that occur in both clear-sky and measured (or forecasted) solar power are removed to account for the diurnal pattern of solar generation. Ramping features are extracted and extensively compared between load and netload under different renewable penetration levels (9.77%, 15.85%, and 51.38%). Comparison results show that (i) netload ramp events with shorter durations and smaller magnitudes occur more frequently when renewable penetration level increases, and the total number of ramping events also increases; and (ii) different ramping characteristics are observed in load and netload even with a low renewable penetration level.
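    A drastically simplified ramp-event extractor is sketched below: it splits a power series into monotonic runs and keeps those exceeding magnitude and duration thresholds. It conveys the notion of a ramping event only and is not the optimized swinging door algorithm (OpSDA); the thresholds are illustrative.

    ```python
    # Simplified ramp-event extraction from a power time series.
    import numpy as np

    def ramp_events(power, min_magnitude=50.0, min_duration=3):
        """Return (start_index, end_index, delta) tuples for qualifying monotonic runs."""
        d = np.sign(np.diff(power))
        events, start = [], 0
        for i in range(1, len(d) + 1):
            # Close the current run at the end of the series or on a direction change.
            if i == len(d) or (d[i] != d[start] and d[i] != 0):
                delta = power[i] - power[start]
                if abs(delta) >= min_magnitude and (i - start) >= min_duration:
                    events.append((start, i, float(delta)))
                start = i
        return events
    ```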

  8. Bridging the semantic gap in sports

    NASA Astrophysics Data System (ADS)

    Li, Baoxin; Errico, James; Pan, Hao; Sezan, M. Ibrahim

    2003-01-01

    One of the major challenges facing current media management systems and the related applications is the so-called "semantic gap" between the rich meaning that a user desires and the shallowness of the content descriptions that are automatically extracted from the media. In this paper, we address the problem of bridging this gap in the sports domain. We propose a general framework for indexing and summarizing sports broadcast programs. The framework is based on a high-level model of sports broadcast video using the concept of an event, defined according to domain-specific knowledge for different types of sports. Within this general framework, we develop automatic event detection algorithms that are based on automatic analysis of the visual and aural signals in the media. We have successfully applied the event detection algorithms to different types of sports including American football, baseball, Japanese sumo wrestling, and soccer. Event modeling and detection contribute to the reduction of the semantic gap by providing rudimentary semantic information obtained through media analysis. We further propose a novel approach, which makes use of independently generated rich textual metadata, to fill the gap completely through synchronization of the information-laden textual data with the basic event segments. An MPEG-7 compliant prototype browsing system has been implemented to demonstrate semantic retrieval and summarization of sports video.

  9. On-line Machine Learning and Event Detection in Petascale Data Streams

    NASA Astrophysics Data System (ADS)

    Thompson, David R.; Wagstaff, K. L.

    2012-01-01

    Traditional statistical data mining involves off-line analysis in which all data are available and equally accessible. However, petascale datasets have challenged this premise since it is often impossible to store, let alone analyze, the relevant observations. This has led the machine learning community to investigate adaptive processing chains where data mining is a continuous process. Here pattern recognition permits triage and followup decisions at multiple stages of a processing pipeline. Such techniques can also benefit new astronomical instruments such as the Large Synoptic Survey Telescope (LSST) and Square Kilometre Array (SKA) that will generate petascale data volumes. We summarize some machine learning perspectives on real time data mining, with representative cases of astronomical applications and event detection in high volume datastreams. The first is a "supervised classification" approach currently used for transient event detection at the Very Long Baseline Array (VLBA). It injects known signals of interest - faint single-pulse anomalies - and tunes system parameters to recover these events. This permits meaningful event detection for diverse instrument configurations and observing conditions whose noise cannot be well-characterized in advance. Second, "semi-supervised novelty detection" finds novel events based on statistical deviations from previous patterns. It detects outlier signals of interest while considering known examples of false alarm interference. Applied to data from the Parkes pulsar survey, the approach identifies anomalous "peryton" phenomena that do not match previous event models. Finally, we consider online light curve classification that can trigger adaptive followup measurements of candidate events. Classifier performance analyses suggest optimal survey strategies, and permit principled followup decisions from incomplete data. These examples trace a broad range of algorithm possibilities available for online astronomical data mining. This talk describes research performed at the Jet Propulsion Laboratory, California Institute of Technology. Copyright 2012, All Rights Reserved. U.S. Government support acknowledged.

  10. A smart room for hospitalised elderly people: essay of modelling and first steps of an experiment.

    PubMed

    Rialle, V; Lauvernay, N; Franco, A; Piquard, J F; Couturier, P

    1999-01-01

    We present a modelling study and the first steps of an experiment with a smart room for hospitalised elderly people. The system aims at detecting falls and sicknesses, and implements four main functions: perception of the patient and environment through sensors, reasoning from perceived events and patient clinical findings, action by way of alarm triggering and message passing to medical staff, and adaptation to various patient profiles, sensor layouts, house fixtures and architecture. It includes a physical multisensory device located in the patient's room, and a multi-agent system for fall detection and alarm triggering. This system encompasses a perception agent and a reasoning agent. The latter has two complementary capacities implemented by sub-agents: deduction of the type of alarm from incoming events, and knowledge induction from recorded events. The system has been tested with a few patients in a real clinical situation, and the first experiment provided encouraging results, which are described in detail.

  11. Sprites and Early ionospheric VLF perturbations

    NASA Astrophysics Data System (ADS)

    Haldoupis, Christos; Amvrosiadi, Nino; Cotts, Ben; van der Velde, Oscar; Chanrion, Olivier; Neubert, Torsten

    2010-05-01

    Past studies have shown a correlation between sprites and early VLF perturbations, but the reported correlation varies widely from ~50% to 100%. The present study resolves these large discrepancies by analyzing several case studies of sprite and narrowband VLF observations, in which multiple transmitter-receiver VLF links with great circle paths (GCPs) passing near a sprite-producing thunderstorm were available. In this setup, the multiple links act in a complementary way that makes the detection of early VLF perturbations much more probable compared to a single VLF link, which can miss several of them, a fact that was overlooked in past studies. The evidence shows that sprites are accompanied by early VLF perturbations in a one-to-one correspondence. This implies that the sprite generation mechanism may also cause sub-ionospheric conductivity disturbances that produce early VLF events. However, the one-to-one "sprite to early" event relationship, if viewed conversely as "early to sprite", appears not to be always reciprocal. This is because the number of early events detected in some cases was considerably larger than the number of sprites. Since the great majority of the early events not accompanied by sprites were caused by positive cloud-to-ground (+CG) lightning discharges, it is possible that sprites or sprite halos were concurrently present in these events as well but were missed by the sprite-watch detection system. To resolve this question, more studies are needed using highly sensitive optical systems capable of detecting weaker sprites, sprite halos and elves.

  12. Detections of Planets in Binaries Through the Channel of Chang–Refsdal Gravitational Lensing Events

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Han, Cheongho; Shin, In-Gu; Jung, Youn Kil

    Chang–Refsdal (C–R) lensing, which refers to the gravitational lensing of a point mass perturbed by a constant external shear, provides a good approximation in describing lensing behaviors of either a very wide or a very close binary lens. C–R lensing events, which are identified by short-term anomalies near the peak of high-magnification lensing light curves, are routinely detected from lensing surveys, but not much attention is paid to them. In this paper, we point out that C–R lensing events provide an important channel to detect planets in binaries, both in close and wide binary systems. Detecting planets through the C–R lensing event channel is possible because the planet-induced perturbation occurs in the same region as the C–R lensing-induced anomaly, and thus the existence of the planet can be identified by the additional deviation in the central perturbation. By presenting an analysis of the observed C–R lensing event OGLE-2015-BLG-1319, we demonstrate that dense and high-precision coverage of a C–R lensing-induced perturbation can provide a strong constraint on the existence of a planet over a wide range of planet parameters. An enlarged sample of microlensing planets in binary systems will provide important observational constraints on the details of planet formation, which to date have been restricted to the case of single stars.

  13. APDS: the autonomous pathogen detection system.

    PubMed

    Hindson, Benjamin J; Makarewicz, Anthony J; Setlur, Ujwal S; Henderer, Bruce D; McBride, Mary T; Dzenitis, John M

    2005-04-15

    We have developed and tested a fully autonomous pathogen detection system (APDS) capable of continuously monitoring the environment for airborne biological threat agents. The system was developed to provide early warning to civilians in the event of a bioterrorism incident and can be used at high-profile events for short-term, intensive monitoring or in major public buildings or transportation nodes for long-term monitoring. The APDS is completely automated, offering continuous aerosol sampling, in-line sample preparation fluidics, multiplexed detection and identification immunoassays, and nucleic acid-based polymerase chain reaction (PCR) amplification and detection. Highly multiplexed antibody-based and duplex nucleic acid-based assays are combined to reduce false positives to a very low level, lower reagent costs, and significantly expand the detection capabilities of this biosensor. This article provides an overview of the current design and operation of the APDS. Certain sub-components of the APDS are described in detail, including the aerosol collector, the automated sample preparation module that performs multiplexed immunoassays with confirmatory PCR, and the data monitoring and communications system. Data obtained from an APDS that operated continuously for 7 days in a major U.S. transportation hub are reported.

  14. Assessment of an Automated Touchdown Detection Algorithm for the Orion Crew Module

    NASA Technical Reports Server (NTRS)

    Gay, Robert S.

    2011-01-01

    Orion Crew Module (CM) touchdown detection is critical to activating the post-landing sequence that safes the Reaction Control Jets (RCS), ensures that the vehicle remains upright, and establishes communication with recovery forces. In order to accommodate safe landing of an unmanned vehicle or incapacitated crew, an onboard automated detection system is required. An Orion-specific touchdown detection algorithm was developed and evaluated to differentiate landing events from in-flight events. The proposed method will be used to initiate post-landing cutting of the parachute riser lines, to prevent CM rollover, and to terminate RCS jet firing prior to submersion. The RCS jets continue to fire until touchdown to maintain proper CM orientation with respect to the flight path and to limit impact loads, but have potentially hazardous consequences if submerged while firing. The time available after impact to cut risers and initiate the CM Up-righting System (CMUS) is measured in minutes, whereas the time from touchdown to RCS jet submersion is a function of descent velocity and sea state conditions, and is often less than one second. Evaluation of the detection algorithms was performed for in-flight events (e.g. descent under chutes) using high-fidelity rigid body analyses in the Decelerator Systems Simulation (DSS), whereas water impacts were simulated using a rigid finite element model of the Orion CM in LS-DYNA. Two touchdown detection algorithms were evaluated with various thresholds: acceleration magnitude spike detection, and accumulated velocity change (over a given time window) spike detection. Data for both detection methods are acquired from an onboard Inertial Measurement Unit (IMU) sensor. The detection algorithms were tested with analytically generated in-flight and landing IMU data simulations. The acceleration spike detection proved to be faster while maintaining the desired safety margin. Time to RCS jet submersion was predicted analytically across a series of simulated Orion landing conditions. This paper details the touchdown detection method chosen and the analysis used to support the decision.
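    The two candidate checks can be sketched from IMU samples as an acceleration-magnitude spike test and an accumulated velocity change over a sliding window. The thresholds, window length, and sample rate below are illustrative placeholders, not the flight values used for Orion.

    ```python
    # Sketch of the two touchdown checks described above, applied to IMU samples.
    import numpy as np

    def touchdown_detected(accel_xyz, dt=0.005, accel_limit=78.0, dv_limit=3.0, window=0.2):
        """accel_xyz: (n, 3) array of sensed acceleration in m/s^2 sampled every dt seconds."""
        mag = np.linalg.norm(accel_xyz, axis=1)
        if np.any(mag > accel_limit):                      # acceleration-magnitude spike
            return True
        n = max(1, int(window / dt))                       # samples in the delta-v window
        dv = np.convolve(mag * dt, np.ones(n), mode="valid")
        return bool(np.any(dv > dv_limit))                 # accumulated velocity-change spike
    ```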

  15. Fusing Symbolic and Numerical Diagnostic Computations

    NASA Technical Reports Server (NTRS)

    James, Mark

    2007-01-01

    X-2000 Anomaly Detection Language denotes a developmental computing language, and the software that establishes and utilizes the language, for fusing two diagnostic computer programs, one implementing a numerical analysis method, the other implementing a symbolic analysis method, into a unified event-based decision analysis software system for real-time detection of events (e.g., failures) in a spacecraft, aircraft, or other complex engineering system. The numerical analysis method is performed by beacon-based exception analysis for multi-missions (BEAMs), which has been discussed in several previous NASA Tech Briefs articles. The symbolic analysis method is, more specifically, an artificial-intelligence method of the knowledge-based, inference engine type, and its implementation is exemplified by the Spacecraft Health Inference Engine (SHINE) software. The goal in developing the capability to fuse numerical and symbolic diagnostic components is to increase the depth of analysis beyond that previously attainable, thereby increasing the degree of confidence in the computed results. In practical terms, the sought improvement is to enable detection of all or most events, with no or few false alarms.

  16. Evaluation of a Cyber Security System for Hospital Network.

    PubMed

    Faysel, Mohammad A

    2015-01-01

    Most cyber security systems use simulated data to evaluate their detection capabilities. The proposed cyber security system utilizes real hospital network connections. It uses a probabilistic data mining algorithm to detect anomalous events and takes appropriate responses in real time. In an evaluation using real-world hospital network data consisting of incoming network connections collected over a 24-hour period, the proposed system detected 15 unusual connections that went undetected by a commercial intrusion prevention system for the same network connections. Evaluation of the proposed system shows a potential to secure protected patient health information on a hospital network.

  17. One algorithm to rule them all? An evaluation and discussion of ten eye movement event-detection algorithms.

    PubMed

    Andersson, Richard; Larsson, Linnea; Holmqvist, Kenneth; Stridh, Martin; Nyström, Marcus

    2017-04-01

    Almost all eye-movement researchers use algorithms to parse raw data and detect distinct types of eye movement events, such as fixations, saccades, and pursuit, and then base their results on these. Surprisingly, these algorithms are rarely evaluated. We evaluated the classifications of ten eye-movement event detection algorithms on data from an SMI HiSpeed 1250 system, and compared them to manual ratings of two human experts. The evaluation focused on fixations, saccades, and post-saccadic oscillations. The evaluation used both event duration parameters and sample-by-sample comparisons to rank the algorithms. The resulting event durations varied substantially as a function of which algorithm was used. This evaluation differed from previous evaluations by considering a relatively large set of algorithms, multiple events, and data from both static and dynamic stimuli. The main conclusion is that current detectors of only fixations and saccades work reasonably well for static stimuli, but barely better than chance for dynamic stimuli. Differing results across evaluation methods make it difficult to select one winner for fixation detection. For saccade detection, however, the algorithm by Larsson, Nyström and Stridh (IEEE Transactions on Biomedical Engineering, 60(9):2484-2493, 2013) outperforms all other algorithms in data from both static and dynamic stimuli. The data also show how improperly selected algorithms applied to dynamic data misestimate fixation and saccade properties.
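    For readers unfamiliar with this class of detectors, the snippet below shows a minimal velocity-threshold (I-VT style) fixation/saccade labeller. It is included purely to illustrate sample-by-sample event detection and is not one of the ten algorithms evaluated; the 30 deg/s threshold and sampling rate are common but illustrative choices.

    ```python
    # Minimal velocity-threshold (I-VT style) labeller for gaze samples in degrees.
    import numpy as np

    def ivt_label(x_deg, y_deg, fs=1250.0, vel_threshold=30.0):
        """Return per-sample labels: 'saccade' above the velocity threshold, else 'fixation'."""
        vx = np.gradient(x_deg) * fs
        vy = np.gradient(y_deg) * fs
        speed = np.hypot(vx, vy)                 # angular speed in deg/s
        return np.where(speed > vel_threshold, "saccade", "fixation")
    ```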

  18. Pre-trained D-CNN models for detecting complex events in unconstrained videos

    NASA Astrophysics Data System (ADS)

    Robinson, Joseph P.; Fu, Yun

    2016-05-01

    Rapid event detection faces an emergent need to process large video collections; whether for surveillance videos or unconstrained web videos, automatically recognizing high-level, complex events is a challenging task. Motivated by the fact that pre-existing methods are complex, computationally demanding, and often non-replicable, we designed a simple system that is quick, effective, and carries minimal overhead in terms of memory and storage. Our system is clearly described, modular in nature, replicable on any desktop, and demonstrated with extensive experiments, backed by insightful analysis of different Convolutional Neural Networks (CNNs), as stand-alone and fused with others. With a large corpus of unconstrained, real-world video data, we examine the usefulness of different CNN models as feature extractors for modeling high-level events, i.e., pre-trained CNNs that differ in architectures, training data, and number of outputs. For each CNN, we sample frames at 1 fps from all training exemplars to train one-vs-rest SVMs for each event. To represent videos, frame-level features were fused using a variety of techniques, the best being to max-pool between predetermined shot boundaries and then average-pool to form the final video-level descriptor. Through extensive analysis, several insights were found on using pre-trained CNNs as off-the-shelf feature extractors for the task of event detection. Fusing SVMs of different CNNs revealed some interesting facts, finding some combinations to be complementary. It was concluded that no single CNN works best for all events, as some events are more object-driven while others are more scene-based. Our top performance resulted from learning event-dependent weights for different CNNs.
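    The pipeline can be sketched roughly as follows: frame features from a pre-trained CNN, pooled into a video-level descriptor, then one-vs-rest linear SVMs per event. ResNet-50 stands in for the CNNs compared in the paper, the pooling shown (simple averaging) omits the shot-boundary max-pooling step, and a recent torchvision is assumed; all parameters are illustrative.

    ```python
    # Pre-trained CNN as an off-the-shelf feature extractor, followed by
    # one-vs-rest linear SVMs for event classification (simplified sketch).
    import numpy as np
    import torch
    import torchvision.models as models
    import torchvision.transforms as T
    from sklearn.multiclass import OneVsRestClassifier
    from sklearn.svm import LinearSVC

    backbone = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
    feature_extractor = torch.nn.Sequential(*list(backbone.children())[:-1]).eval()
    preprocess = T.Compose([T.Resize(256), T.CenterCrop(224), T.ToTensor(),
                            T.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])])

    @torch.no_grad()
    def frame_feature(pil_image):
        """2048-d descriptor for a single video frame."""
        x = preprocess(pil_image).unsqueeze(0)
        return feature_extractor(x).flatten().numpy()

    def video_descriptor(frame_features):
        """Average-pool frame features into one video-level descriptor."""
        return np.mean(np.stack(frame_features), axis=0)

    # descriptors: one pooled vector per video; event_labels: one event id per video.
    # clf = OneVsRestClassifier(LinearSVC()).fit(descriptors, event_labels)
    ```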

  19. Automatic data processing and analysis system for monitoring region around a planned nuclear power plant

    NASA Astrophysics Data System (ADS)

    Kortström, Jari; Tiira, Timo; Kaisko, Outi

    2016-03-01

    The Institute of Seismology of the University of Helsinki is building a new local seismic network, called the OBF network, around a planned nuclear power plant in Northern Ostrobothnia, Finland. The network will consist of nine new stations and one existing station. The network should be dense enough to provide azimuthal coverage better than 180° and automatic detection capability down to ML -0.1 within a radius of 25 km from the site. The network construction work began in 2012 and the first four stations started operation at the end of May 2013. We applied an automatic seismic signal detection and event location system to a network of 13 stations consisting of the four new stations and the nearest stations of the Finnish and Swedish national seismic networks. Between the end of May and December 2013, the network detected 214 events inside the predefined area of 50 km radius surrounding the planned nuclear power plant site. Of those detections, 120 were identified as spurious events. A total of 74 events were associated with known quarries and mining areas. The average location error, calculated as the difference between the location announced by environmental authorities and companies and the automatic location, was 2.9 km. During the same time period, eight earthquakes in the magnitude range 0.1-1.0 occurred within the area. Of these, seven could be automatically detected. The results from the phase 1 stations of the OBF network indicate that the planned network can achieve its goals.

  20. Daytime identification of summer hailstorm cells from MSG data

    NASA Astrophysics Data System (ADS)

    Merino, A.; López, L.; Sánchez, J. L.; García-Ortega, E.; Cattani, E.; Levizzani, V.

    2014-04-01

    Identifying deep convection is of paramount importance, as it may be associated with extreme weather phenomena that have significant impacts on the environment, property and populations. A new method, the hail detection tool (HDT), is described for identifying hail-bearing storms using multispectral Meteosat Second Generation (MSG) data. HDT was conceived as a two-phase method, in which the first step is the convective mask (CM) algorithm devised for detection of deep convection, and the second a hail mask (HM) algorithm for the identification of hail-bearing clouds among cumulonimbus systems detected by CM. Both CM and HM are based on logistic regression models trained with multispectral MSG data sets comprising summer convective events in the middle Ebro Valley (Spain) between 2006 and 2010, detected by the RGB (red-green-blue) visualization technique (CM) or the C-band weather radar system of the University of León. By means of the logistic regression approach, the probabilities of identifying a cumulonimbus event with CM or a hail event with HM are computed by exploiting a proper selection of MSG wavelengths or their combination. A number of cloud physical properties (liquid water path, optical thickness and effective cloud drop radius) were used to physically interpret the results of the statistical models from a meteorological perspective, using a method based on these "ingredients". Finally, the HDT was applied to a new validation sample consisting of events during summer 2011. The overall probability of detection was 76.9% and the false alarm ratio 16.7%.
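    A toy version of the two-stage logistic-regression design might look like the sketch below, where a convective-mask model gates a hail-mask model. The brightness-temperature predictors, training values, and threshold are hypothetical and are not the regressors fitted in the HDT.

    ```python
    # Two-stage logistic-regression sketch: a convective mask gates a hail mask.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    # X columns (hypothetical): IR10.8 BT, WV6.2-IR10.8 difference, IR8.7-IR10.8 difference.
    X_train = np.array([[210.0, -2.0, 1.5], [235.0, -15.0, -0.5],
                        [205.0, -1.0, 2.0], [250.0, -25.0, -1.0]])
    y_convective = np.array([1, 0, 1, 0])            # labels from RGB / radar analysis

    convective_mask = LogisticRegression().fit(X_train, y_convective)
    # A hail-mask model (hm_model) would be trained the same way on hail/no-hail labels.

    def hail_probability(features, cm_model, hm_model, cm_threshold=0.5):
        """Apply the hail model only where the convective mask fires (two-stage design)."""
        p_conv = cm_model.predict_proba(features)[:, 1]
        p_hail = hm_model.predict_proba(features)[:, 1]
        return np.where(p_conv >= cm_threshold, p_hail, 0.0)
    ```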

  1. Event Classification and Identification Based on the Characteristic Ellipsoid of Phasor Measurement

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ma, Jian; Diao, Ruisheng; Makarov, Yuri V.

    2011-09-23

    In this paper, a method to classify and identify power system events based on the characteristic ellipsoid of phasor measurements is presented. The decision tree technique is used to perform the event classification and identification. Event types, event locations and clearance times are identified by decision trees based on the indices of the characteristic ellipsoid. A sufficiently large number of transient events were simulated on the New England 10-machine 39-bus system based on different system configurations. Transient simulations taking into account different event types, clearance times and various locations are conducted to simulate phasor measurements. Bus voltage magnitudes and recorded reactive and active power flows are used to build the characteristic ellipsoid. The volume, eccentricity, center and projection of the longest axis in the parameter space coordinates of the characteristic ellipsoids are used to classify and identify events. Results demonstrate that the characteristic ellipsoid and the decision tree are capable of detecting the event type, location, and clearance time with very high accuracy.
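    The ellipsoid indices can be approximated from a window of measurements via the covariance matrix, as in the sketch below, and then fed to a decision tree. The axis scaling and exact index definitions are simplified relative to the paper; the window shapes and labels are placeholders.

    ```python
    # Characteristic-ellipsoid style indices from a window of phasor measurements,
    # used as features for a decision tree classifier (simplified sketch).
    import numpy as np
    from sklearn.tree import DecisionTreeClassifier

    def ellipsoid_indices(window):
        """window: (samples, channels) of voltage magnitudes / power flows during an event."""
        center = window.mean(axis=0)
        eigvals, _ = np.linalg.eigh(np.cov(window, rowvar=False))
        axes = np.sqrt(np.clip(eigvals, 1e-12, None))    # semi-axis lengths (up to a constant)
        volume = float(np.prod(axes))
        eccentricity = float(np.sqrt(1.0 - (axes.min() / axes.max()) ** 2))
        return np.concatenate(([volume, eccentricity], center))

    # features = np.stack([ellipsoid_indices(w) for w in simulated_event_windows])
    # clf = DecisionTreeClassifier().fit(features, event_type_labels)
    ```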

  2. An automated cross-correlation based event detection technique and its application to surface passive data set

    USGS Publications Warehouse

    Forghani-Arani, Farnoush; Behura, Jyoti; Haines, Seth S.; Batzle, Mike

    2013-01-01

    In studies on heavy oil, shale reservoirs, tight gas and enhanced geothermal systems, the use of surface passive seismic data to monitor induced microseismicity due to fluid flow in the subsurface is becoming more common. However, in most studies, passive seismic records contain days or months of data, and manually analysing the data can be expensive and inaccurate. Moreover, in the presence of noise, detecting the arrival of weak microseismic events becomes challenging. Hence, the use of an automated, accurate and computationally fast technique for event detection in passive seismic data is essential. The conventional automatic event identification algorithm computes a running-window energy ratio of the short-term average to the long-term average of the passive seismic data for each trace. We show that for the common case of a low signal-to-noise ratio in surface passive records, the conventional method is not sufficiently effective at event identification. Here, we extend the conventional algorithm by introducing a technique that is based on the cross-correlation of the energy ratios computed by the conventional method. With our technique we can measure the similarities amongst the computed energy ratios at different traces. Our approach is successful at improving the detectability of events with a low signal-to-noise ratio that are not detectable with the conventional algorithm. Also, our algorithm has the advantage of identifying whether an event is common to all stations (a regional event) or to a limited number of stations (a local event). We provide examples of applying our technique to synthetic data and a field surface passive data set recorded at a geothermal site.
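    The two stages described above can be sketched as a conventional STA/LTA energy ratio per trace, followed by correlation of those ratio series across traces so that detections coherent over the array score higher. Window lengths are illustrative, and the coherence measure here is a simplification of the published technique.

    ```python
    # STA/LTA energy ratios per trace, then cross-trace correlation of the ratios.
    import numpy as np

    def sta_lta(trace, n_sta=50, n_lta=500):
        """Classic short-term / long-term average ratio of signal energy."""
        energy = trace.astype(float) ** 2
        sta = np.convolve(energy, np.ones(n_sta) / n_sta, mode="same")
        lta = np.convolve(energy, np.ones(n_lta) / n_lta, mode="same")
        return sta / np.maximum(lta, 1e-12)

    def array_coherence(traces, **kwargs):
        """Mean zero-lag correlation of each trace's STA/LTA series with the array median."""
        ratios = np.array([sta_lta(tr, **kwargs) for tr in traces])
        ref = np.median(ratios, axis=0)
        ref = (ref - ref.mean()) / (ref.std() + 1e-12)
        corrs = []
        for r in ratios:
            r = (r - r.mean()) / (r.std() + 1e-12)
            corrs.append(float(np.mean(r * ref)))
        return float(np.mean(corrs))   # high values suggest an event common to all stations
    ```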

  3. Characterizing and analyzing ramping events in wind power, solar power, load, and netload

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cui, Mingjian; Zhang, Jie; Feng, Cong

    Here, one of the biggest concerns associated with integrating a large amount of renewable energy into the power grid is the ability to handle large ramps in the renewable power output. For the sake of system reliability and economics, it is essential for power system operators to better understand the ramping features of renewable, load, and netload. An optimized swinging door algorithm (OpSDA) is used and extended to accurately and efficiently detect ramping events. For wind power ramps detection, a process of merging 'bumps' (that have a different changing direction) into adjacent ramping segments is included to improve the performance of the OpSDA method. For solar ramps detection, ramping events that occur in both clear-sky and measured (or forecasted) solar power are removed to account for the diurnal pattern of solar generation. Ramping features are extracted and extensively compared between load and netload under different renewable penetration levels (9.77%, 15.85%, and 51.38%). Comparison results show that (i) netload ramp events with shorter durations and smaller magnitudes occur more frequently when renewable penetration level increases, and the total number of ramping events also increases; and (ii) different ramping characteristics are observed in load and netload even with a low renewable penetration level.

  4. High-Rate Data-Capture for an Airborne Lidar System

    NASA Technical Reports Server (NTRS)

    Valett, Susan; Hicks, Edward; Dabney, Philip; Harding, David

    2012-01-01

    A high-rate data system was required to capture the data for an airborne lidar system. A data system was developed that achieved up to 22 million (64-bit) events per second sustained data rate (1408 million bits per second), as well as short bursts (less than 4 s) at higher rates. All hardware used for the system was off the shelf, but carefully selected to achieve these rates. The system was used to capture laser fire, single-photon detection, and GPS data for the Slope Imaging Multi-polarization Photon-counting Lidar (SIMPL). However, the system has applications for other laser altimeter systems (waveform-recording), mass spectroscopy, x-ray radiometry imaging, high-background-rate ranging lidar, and other similar areas where very high-speed data capture is needed. The data capture software was used for the SIMPL instrument that employs a micropulse, single-photon ranging measurement approach and has 16 data channels. The detected single photons are from two sources: those reflected from the target and solar background photons. The instrument is non-gated, so background photons are acquired for a range window of 13 km and can comprise many times the number of target photons. The highest background rate occurs when the atmosphere is clear, the Sun is high, and the target is a highly reflective surface such as snow. Under these conditions, the total data rate for the 16 channels combined is expected to be approximately 22 million events per second. For each photon detection event, the data capture software reads the relative time of receipt, with respect to a one-per-second absolute time pulse from a GPS receiver, from an event timer card with 0.1-ns precision, and records that information to a RAID (Redundant Array of Independent Disks) storage device. The relative time of laser pulse firings must also be read and recorded with the same precision. Each of the four event timer cards handles the throughput from four of the channels. For each detection event, a flag is recorded that indicates the source channel. To accommodate the expected maximum count rate and also handle the other extreme of very low rates occurring during nighttime operations, the software requests a set amount of data from each of the event timer cards and buffers the data. The software notes if any of the cards did not return all the data requested and then accommodates that lower rate. The data are buffered to minimize the I/O overhead of writing the data to storage. Care was taken to optimize the reads from the cards, the speed of the I/O bus, and the RAID configuration.

  5. Detection and Mapping of the September 2017 Mexico Earthquakes Using DAS Fiber-Optic Infrastructure Arrays

    NASA Astrophysics Data System (ADS)

    Karrenbach, M. H.; Cole, S.; Williams, J. J.; Biondi, B. C.; McMurtry, T.; Martin, E. R.; Yuan, S.

    2017-12-01

    Fiber-optic distributed acoustic sensing (DAS) uses conventional telecom fibers for a wide variety of monitoring purposes. Fiber-optic arrays can be located along pipelines for leak detection; along borders and perimeters to detect and locate intruders, or along railways and roadways to monitor traffic and identify and manage incidents. DAS can also be used to monitor oil and gas reservoirs and to detect earthquakes. Because thousands of such arrays are deployed worldwide and acquiring data continuously, they can be a valuable source of data for earthquake detection and location, and could potentially provide important information to earthquake early-warning systems. In this presentation, we show that DAS arrays in Mexico and the United States detected the M8.1 and M7.2 Mexico earthquakes in September 2017. At Stanford University, we have deployed a 2.4 km fiber-optic DAS array in a figure-eight pattern, with 600 channels spaced 4 meters apart. Data have been recorded continuously since September 2016. Over 800 earthquakes from across California have been detected and catalogued. Distant teleseismic events have also been recorded, including the two Mexican earthquakes. In Mexico, fiber-optic arrays attached to pipelines also detected these two events. Because of the length of these arrays and their proximity to the event locations, we can not only detect the earthquakes but also make location estimates, potentially in near real time. In this presentation, we review the data recorded for these two events recorded at Stanford and in Mexico. We compare the waveforms recorded by the DAS arrays to those recorded by traditional earthquake sensor networks. Using the wide coverage provided by the pipeline arrays, we estimate the event locations. Such fiber-optic DAS networks can potentially play a role in earthquake early-warning systems, allowing actions to be taken to minimize the impact of an earthquake on critical infrastructure components. While many such fiber-optic networks are already in place, new arrays can be created on demand, using existing fiber-optic telecom cables, for specific monitoring situations such as recording aftershocks of a large earthquake or monitoring induced seismicity.

  6. Results of field testing with the FightSight infrared-based projectile tracking and weapon-fire characterization technology

    NASA Astrophysics Data System (ADS)

    Snarski, Steve; Menozzi, Alberico; Sherrill, Todd; Volpe, Chris; Wille, Mark

    2010-04-01

    This paper describes experimental results from recent live-fire data collects that demonstrate the capability of a prototype system for projectile detection and tracking. This system, which is being developed at Applied Research Associates, Inc., under the FightSight program, consists of a high-speed thermal camera and sophisticated image processing algorithms to detect and track projectiles. The FightSight operational vision is automated situational intelligence to detect, track, and graphically map large-scale firefights and individual shooting events onto command and control (C2) systems in real time (shot location and direction, weapon ID, movements and trends). Gaining information on enemy-fire trajectories allows educated inferences on the enemy's intent, disposition, and strength. Our prototype projectile detection and tracking system has been tested at the Joint Readiness Training Center (Ft Polk, LA) during live-fire convoy and mortar registration exercises, in the summer of 2009. It was also tested during staged military-operations-on-urban-terrain (MOUT) firefight events at Aberdeen Test Center (Aberdeen, MD) under the Hostile Fire Defeat Army Technology Objective midterm experiment, also in the summer of 2009, where we introduced fusion with acoustic and EO sensors to provide 3D localization and near-real-time display of firing events. Results are presented in this paper that demonstrate effective and accurate detection and localization of weapon fire (5.56mm, 7.62mm, .50cal, 81/120mm mortars, 40mm) in diverse and challenging environments (dust, heat, day and night, rain, arid open terrain, urban clutter). FightSight's operational capabilities demonstrated under these live-fire data collects can support close-combat scenarios. As development continues, FightSight will be able to feed C2 systems with a symbolic map of enemy actions.

  7. Adverse Drug Event Detection in Pediatric Oncology and Hematology Patients: Using Medication Triggers to Identify Patient Harm in a Specialized Pediatric Patient Population

    PubMed Central

    Call, Rosemary J.; Burlison, Jonathan D.; Robertson, Jennifer J.; Scott, Jeffrey R.; Baker, Donald K.; Rossi, Michael G.; Howard, Scott C.; Hoffman, James M.

    2014-01-01

    Objective: To investigate the use of a trigger tool for adverse drug event (ADE) detection in a pediatric hospital specializing in oncology, hematology, and other catastrophic diseases. Study design: A medication-based trigger tool package analyzed electronic health records from February 2009 to February 2013. Chart review determined whether an ADE precipitated the trigger. Severity was assigned to ADEs, and preventability was assessed. Preventable ADEs were compared with the hospital's electronic voluntary event reporting system to identify whether these ADEs had been previously identified. The positive predictive values (PPVs) of the entire trigger tool and individual triggers were calculated to assess their accuracy to detect ADEs. Results: Trigger occurrences (n=706) were detected in 390 patients from six medication triggers, 33 of which were ADEs (overall PPV = 16%). Hyaluronidase had the highest PPV (60%). Most ADEs were category E harm (temporary harm) per the National Coordinating Council for Medication Error Reporting and Prevention (NCC MERP) index. One event was category H harm (intervention to sustain life). Naloxone was associated with the most grade 4 ADEs per the Common Terminology Criteria for Adverse Events (CTCAE) v4.03. Twenty-one (64%) ADEs were preventable; 3 of which were submitted via the voluntary reporting system. Conclusion: Most of the medication-based triggers yielded low PPVs. Refining the triggers based on patients' characteristics and medication usage patterns could increase the PPVs and make them more useful for quality improvement. To efficiently detect ADEs, triggers must be revised to reflect specialized pediatric patient populations such as hematology and oncology patients. PMID:24768254

  8. Adverse drug event detection in pediatric oncology and hematology patients: using medication triggers to identify patient harm in a specialized pediatric patient population.

    PubMed

    Call, Rosemary J; Burlison, Jonathan D; Robertson, Jennifer J; Scott, Jeffrey R; Baker, Donald K; Rossi, Michael G; Howard, Scott C; Hoffman, James M

    2014-09-01

    To investigate the use of a trigger tool for the detection of adverse drug events (ADE) in a pediatric hospital specializing in oncology, hematology, and other catastrophic diseases. A medication-based trigger tool package analyzed electronic health records from February 2009 to February 2013. Chart review determined whether an ADE precipitated the trigger. Severity was assigned to ADEs, and preventability was assessed. Preventable ADEs were compared with the hospital's electronic voluntary event reporting system to identify whether these ADEs had been previously identified. The positive predictive values (PPVs) of the entire trigger tool and individual triggers were calculated to assess their accuracy to detect ADEs. Trigger occurrences (n = 706) were detected in 390 patients from 6 medication triggers, 33 of which were ADEs (overall PPV = 16%). Hyaluronidase had the greatest PPV (60%). Most ADEs were category E harm (temporary harm) per the National Coordinating Council for Medication Error Reporting and Prevention index. One event was category H harm (intervention to sustain life). Naloxone was associated with the most grade 4 ADEs per the Common Terminology Criteria for Adverse Events v4.03. Twenty-one (64%) ADEs were preventable, 3 of which were submitted via the voluntary reporting system. Most of the medication-based triggers yielded low PPVs. Refining the triggers based on patients' characteristics and medication usage patterns could increase the PPVs and make them more useful for quality improvement. To efficiently detect ADEs, triggers must be revised to reflect specialized pediatric patient populations such as hematology and oncology patients. Copyright © 2014 Elsevier Inc. All rights reserved.

  9. ELECTRONIC SYSTEM

    DOEpatents

    Robison, G.H. et al.

    1960-11-15

    An electronic system is described for indicating the occurrence of a plurality of electrically detectable events within predetermined time intervals. It is comprised of separate input means electrically associated with the events under observation: an electronic channel associated with each input means including control means and indicating means; timing means associated with each of the input means and the control means and adapted to derive a signal from the input means and apply it after a predetermined time to the control means to effect deactivation of each of the channels; and means for resetting the system to its initial condition after observation of each group of events.

  10. Hydra—The National Earthquake Information Center’s 24/7 seismic monitoring, analysis, catalog production, quality analysis, and special studies tool suite

    USGS Publications Warehouse

    Patton, John M.; Guy, Michelle R.; Benz, Harley M.; Buland, Raymond P.; Erickson, Brian K.; Kragness, David S.

    2016-08-18

    This report provides an overview of the capabilities and design of Hydra, the global seismic monitoring and analysis system used for earthquake response and catalog production at the U.S. Geological Survey National Earthquake Information Center (NEIC). Hydra supports the NEIC’s worldwide earthquake monitoring mission in areas such as seismic event detection, seismic data insertion and storage, seismic data processing and analysis, and seismic data output. The Hydra system automatically identifies seismic phase arrival times and detects the occurrence of earthquakes in near-real time. The system integrates and inserts parametric and waveform seismic data into discrete events in a database for analysis. Hydra computes seismic event parameters, including locations, multiple magnitudes, moment tensors, and depth estimates. Hydra supports the NEIC’s 24/7 analyst staff with a suite of seismic analysis graphical user interfaces. In addition to the NEIC’s monitoring needs, the system supports the processing of aftershock and temporary deployment data, and supports the NEIC’s quality assurance procedures. The Hydra system continues to be developed to expand its seismic analysis and monitoring capabilities.

  11. What We Are Watching-Top Global Infectious Disease Threats, 2013-2016: An Update from CDC's Global Disease Detection Operations Center.

    PubMed

    Christian, Kira A; Iuliano, A Danielle; Uyeki, Timothy M; Mintz, Eric D; Nichol, Stuart T; Rollin, Pierre; Staples, J Erin; Arthur, Ray R

    To better track public health events in areas where the public health system is unable or unwilling to report the event to appropriate public health authorities, agencies can conduct event-based surveillance, which is defined as the organized collection, monitoring, assessment, and interpretation of unstructured information regarding public health events that may represent an acute risk to public health. The US Centers for Disease Control and Prevention's (CDC's) Global Disease Detection Operations Center (GDDOC) was created in 2007 to serve as CDC's platform dedicated to conducting worldwide event-based surveillance, which is now highlighted as part of the "detect" element of the Global Health Security Agenda (GHSA). The GHSA works toward making the world more safe and secure from disease threats through building capacity to better "Prevent, Detect, and Respond" to those threats. The GDDOC monitors approximately 30 to 40 public health events each day. In this article, we describe the top threats to public health monitored during 2012 to 2016: avian influenza, cholera, Ebola virus disease, and the vector-borne diseases yellow fever, chikungunya virus, and Zika virus, with updates to the previously described threats from Middle East respiratory syndrome-coronavirus (MERS-CoV) and poliomyelitis.

  12. Secure access control and large scale robust representation for online multimedia event detection.

    PubMed

    Liu, Changyu; Lu, Bin; Li, Huiling

    2014-01-01

    We developed an online multimedia event detection (MED) system. However, there are a secure access control issue and a large scale robust representation issue when we want to integrate traditional event detection algorithms into the online environment. For the first issue, we proposed a tree proxy-based and service-oriented access control (TPSAC) model based on the traditional role based access control model. Verification experiments were conducted on the CloudSim simulation platform, and the results showed that the TPSAC model is suitable for the access control of dynamic online environments. For the second issue, inspired by the object-bank scene descriptor, we proposed a 1000-object-bank (1000OBK) event descriptor. Feature vectors of the 1000OBK were extracted from response pyramids of 1000 generic object detectors which were trained on standard annotated image datasets, such as the ImageNet dataset. A spatial bag of words tiling approach was then adopted to encode these feature vectors for bridging the gap between the objects and events. Furthermore, we performed experiments in the context of event classification on the challenging TRECVID MED 2012 dataset, and the results showed that the robust 1000OBK event descriptor outperforms the state-of-the-art approaches.

  13. The first gravitational-wave burst GW150914, as predicted by the scenario machine

    NASA Astrophysics Data System (ADS)

    Lipunov, V. M.; Kornilov, V.; Gorbovskoy, E.; Tiurina, N.; Balanutsa, P.; Kuznetsov, A.

    2017-02-01

    The Advanced LIGO observatory recently reported (Abbott et al., 2016a) the first direct detection of gravitational waves predicted by Einstein (1916). The detection of this event was predicted in 1997 on the basis of the Scenario Machine population synthesis calculations (Lipunov et al., 1997b). Now we discuss the parameters of binary black holes and event rates predicted by different scenarios of binary evolution. We give a simple explanation of the large difference between the detected black hole masses and the mean black hole masses observed in X-ray nova systems. The proximity of the masses of the components of GW150914 is in good agreement with the observed initial mass ratio distribution in massive binary systems, as is used in Scenario Machine calculations for massive binaries.

  14. Detecting modification of biomedical events using a deep parsing approach

    PubMed Central

    2012-01-01

    Background This work describes a system for identifying event mentions in bio-molecular research abstracts that are either speculative (e.g. analysis of IkappaBalpha phosphorylation, where it is not specified whether phosphorylation did or did not occur) or negated (e.g. inhibition of IkappaBalpha phosphorylation, where phosphorylation did not occur). The data comes from a standard dataset created for the BioNLP 2009 Shared Task. The system uses a machine-learning approach, where the features used for classification are a combination of shallow features derived from the words of the sentences and more complex features based on the semantic outputs produced by a deep parser. Method To detect event modification, we use a Maximum Entropy learner with features extracted from the data relative to the trigger words of the events. The shallow features are bag-of-words features based on a small sliding context window of 3-4 tokens on either side of the trigger word. The deep parser features are derived from parses produced by the English Resource Grammar and the RASP parser. The outputs of these parsers are converted into the Minimal Recursion Semantics formalism, and from this, we extract features motivated by linguistics and the data itself. All of these features are combined to create training or test data for the machine learning algorithm. Results Over the test data, our methods produce approximately a 4% absolute increase in F-score for detection of event modification compared to a baseline based only on the shallow bag-of-words features. Conclusions Our results indicate that grammar-based techniques can enhance the accuracy of methods for detecting event modification. PMID:22595089
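
    The shallow feature set described here is straightforward to reproduce. Below is a minimal sketch of extracting bag-of-words features from a small sliding window around an event trigger token; the tokenizer, feature names, and window size are illustrative assumptions, not the shared-task system itself, and the resulting feature dictionaries would be fed to a Maximum Entropy (logistic regression) learner.

```python
# Shallow bag-of-words features from a small sliding window around an event trigger token,
# in the spirit of the feature set described above. Tokenization and feature naming are
# illustrative assumptions, not the BioNLP 2009 system's own code.

def window_features(tokens, trigger_index, window=3):
    feats = {}
    n = len(tokens)
    for offset in range(-window, window + 1):
        if offset == 0:
            continue
        j = trigger_index + offset
        if 0 <= j < n:
            # position-agnostic bag-of-words feature plus a positional variant
            feats[f"bow={tokens[j].lower()}"] = 1.0
            feats[f"pos{offset}={tokens[j].lower()}"] = 1.0
    feats[f"trigger={tokens[trigger_index].lower()}"] = 1.0
    return feats

sentence = "Inhibition of IkappaBalpha phosphorylation was not observed".split()
print(window_features(sentence, trigger_index=3))  # trigger = "phosphorylation"
```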

  15. Rule-Based Event Processing and Reaction Rules

    NASA Astrophysics Data System (ADS)

    Paschke, Adrian; Kozlenkov, Alexander

    Reaction rules and event processing technologies play a key role in making business and IT / Internet infrastructures more agile and active. While event processing is concerned with detecting events from large event clouds or streams in almost real-time, reaction rules are concerned with the invocation of actions in response to events and actionable situations. They state the conditions under which actions must be taken. In the last decades various reaction rule and event processing approaches have been developed, which for the most part have been advanced separately. In this paper we survey reaction rule approaches and rule-based event processing systems and languages.

  16. Real-time Interplanetary Shock Prediction System

    NASA Astrophysics Data System (ADS)

    Vandegriff, J.; Ho, G.; Plauger, J.

    A system is being developed to predict the arrival times and maximum intensities of energetic storm particle (ESP) events at the earth. Measurements of particle flux values at L1 being made by the Electron, Proton, and Alpha Monitor (EPAM) instrument aboard NASA's ACE spacecraft are made available in real-time by the NOAA Space Environment Center as 5 minute averages of several proton and electron energy channels. Past EPAM flux measurements can be used to train forecasting algorithms which then run on the real-time data. Up to 3 days before the arrival of the interplanetary shock associated with an ESP event, characteristic changes in the particle intensities (such as decreased spectral slope and increased overall flux level) are easily discernable. Once the onset of an event is detected, a neural net is used to forecast the arrival time and flux level for the event. We present results obtained with this technique for forecasting the largest of the ESP events detected by EPAM. Forecasting information will be made publicly available through http://sd-www.jhuapl.edu/ACE/EPAM/, the Johns Hopkins University Applied Physics Lab web site for the ACE/EPAM instrument.
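
    The onset signatures mentioned above (decreased spectral slope together with increased overall flux) can be tracked with a simple log-log fit across proton energy channels. The sketch below illustrates the idea only; the channel energies, flux values, and onset thresholds are placeholders, not actual EPAM channel definitions.

```python
import numpy as np

# Track two onset signatures described above: the power-law spectral slope across energy
# channels and the overall flux level. Channel energies, fluxes and thresholds are
# placeholder values, not actual EPAM channel definitions.

channel_energy_keV = np.array([50.0, 100.0, 200.0, 400.0, 800.0])   # hypothetical channels

def spectral_slope(fluxes):
    """Least-squares slope of log(flux) vs log(energy), i.e. the power-law index."""
    slope, _intercept = np.polyfit(np.log10(channel_energy_keV), np.log10(fluxes), 1)
    return slope

quiet_fluxes = np.array([1e4, 2e3, 3e2, 4e1, 5e0])          # steep quiet-time spectrum
event_fluxes = quiet_fluxes * np.array([5, 8, 15, 30, 60])  # harder, enhanced spectrum

for label, f in [("quiet", quiet_fluxes), ("onset", event_fluxes)]:
    print(label, "slope =", round(spectral_slope(f), 2), "total flux =", round(f.sum(), 1))

# A simple onset flag: spectrum flattens (slope increases) while the total flux rises.
onset = (spectral_slope(event_fluxes) > spectral_slope(quiet_fluxes)) and \
        (event_fluxes.sum() > 3 * quiet_fluxes.sum())
print("onset detected:", onset)
```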

  17. A research using hybrid RBF/Elman neural networks for intrusion detection system secure model

    NASA Astrophysics Data System (ADS)

    Tong, Xiaojun; Wang, Zhu; Yu, Haining

    2009-10-01

    A hybrid RBF/Elman neural network model that can be employed for both anomaly detection and misuse detection is presented in this paper. The IDSs using the hybrid neural network can detect temporally dispersed and collaborative attacks effectively because of its memory of past events. The RBF network is employed as a real-time pattern classification and the Elman network is employed to restore the memory of past events. The IDSs using the hybrid neural network are evaluated against the intrusion detection evaluation data sponsored by U.S. Defense Advanced Research Projects Agency (DARPA). Experimental results are presented in ROC curves. Experiments show that the IDSs using this hybrid neural network improve the detection rate and decrease the false positive rate effectively.

  18. Examining the Return on Investment of a Security Information and Event Management Solution in a Notional Department of Defense Network Environment

    DTIC Science & Technology

    2013-06-01

    collection are the facts that devices lack encryption or compression methods and that the log file must be saved on the host system prior to transfer...time. Statistical correlation utilizes numerical algorithms to detect deviations from normal event levels and other routine activities (Chuvakin...can also assist in detecting low volume threats. Although easy and logical to implement, the implementation of statistical correlation algorithms
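
    The statistical correlation idea in this snippet, flagging deviations from normal event levels, can be illustrated with a simple baseline-and-deviation check. The hourly counts, baseline window, and 3-sigma threshold below are hypothetical; a SIEM would apply something like this per event source and per event type.

```python
import numpy as np

# Flag deviations from normal event levels, in the spirit of the statistical correlation
# approach described above. The hourly event counts and the 3-sigma threshold are
# hypothetical illustrations.

hourly_event_counts = np.array([52, 49, 55, 47, 51, 50, 53, 48, 120, 50, 49, 54])

baseline = hourly_event_counts[:8]              # assume the first hours are "normal"
mu, sigma = baseline.mean(), baseline.std(ddof=1)

for hour, count in enumerate(hourly_event_counts):
    z = (count - mu) / sigma
    if abs(z) > 3.0:
        print(f"hour {hour}: {count} events, z-score {z:.1f} -> deviation alert")
```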

  19. Motion camera based on a custom vision sensor and an FPGA architecture

    NASA Astrophysics Data System (ADS)

    Arias-Estrada, Miguel

    1998-09-01

    A digital camera for custom focal plane arrays was developed. The camera allows the test and development of analog or mixed-mode arrays for focal plane processing. The camera is used with a custom sensor for motion detection to implement a motion computation system. The custom focal plane sensor detects moving edges at the pixel level using analog VLSI techniques. The sensor communicates motion events using the event-address protocol associated to a temporal reference. In a second stage, a coprocessing architecture based on a field programmable gate array (FPGA) computes the time-of-travel between adjacent pixels. The FPGA allows rapid prototyping and flexible architecture development. Furthermore, the FPGA interfaces the sensor to a compact PC computer which is used for high level control and data communication to the local network. The camera could be used in applications such as self-guided vehicles, mobile robotics and smart surveillance systems. The programmability of the FPGA allows the exploration of further signal processing like spatial edge detection or image segmentation tasks. The article details the motion algorithm, the sensor architecture, the use of the event- address protocol for velocity vector computation and the FPGA architecture used in the motion camera system.
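
    The quantity the FPGA coprocessor computes, the time-of-travel of a moving edge between adjacent pixels, maps directly to an image-plane velocity. The sketch below shows that mapping; the pixel pitch, event format, and timestamps are illustrative assumptions, not the actual sensor's address-event protocol.

```python
# Velocity from the "time-of-travel" of a moving edge between adjacent pixels, the quantity
# the FPGA coprocessor computes from address-event timestamps in the system above. The pixel
# pitch, timestamps and event format are illustrative assumptions.

PIXEL_PITCH_UM = 30.0            # hypothetical pixel spacing on the focal plane

# (x, y, timestamp_us) address-events emitted when a moving edge crosses a pixel
events = [(10, 5, 1000), (11, 5, 1250), (12, 5, 1495), (13, 5, 1760)]

for (x0, y0, t0), (x1, y1, t1) in zip(events, events[1:]):
    if abs(x1 - x0) == 1 and y1 == y0:                   # adjacent pixels along a row
        time_of_travel_us = t1 - t0
        velocity = PIXEL_PITCH_UM / time_of_travel_us    # micrometres per microsecond = m/s
        print(f"pixels ({x0},{y0})->({x1},{y1}): {velocity:.3f} m/s on the focal plane")
```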

  20. Machine learning for the automatic detection of anomalous events

    NASA Astrophysics Data System (ADS)

    Fisher, Wendy D.

    In this dissertation, we describe our research contributions for a novel approach to the application of machine learning for the automatic detection of anomalous events. We work in two different domains to ensure a robust data-driven workflow that could be generalized for monitoring other systems. Specifically, in our first domain, we begin with the identification of internal erosion events in earth dams and levees (EDLs) using geophysical data collected from sensors located on the surface of the levee. As EDLs across the globe reach the end of their design lives, effectively monitoring their structural integrity is of critical importance. The second domain of interest is related to mobile telecommunications, where we investigate a system for automatically detecting non-commercial base station routers (BSRs) operating in protected frequency space. The presence of non-commercial BSRs can disrupt the connectivity of end users, cause service issues for the commercial providers, and introduce significant security concerns. We provide our motivation, experimentation, and results from investigating a generalized novel data-driven workflow using several machine learning techniques. In Chapter 2, we present results from our performance study that uses popular unsupervised clustering algorithms to gain insights to our real-world problems, and evaluate our results using internal and external validation techniques. Using EDL passive seismic data from an experimental laboratory earth embankment, results consistently show a clear separation of events from non-events in four of the five clustering algorithms applied. Chapter 3 uses a multivariate Gaussian machine learning model to identify anomalies in our experimental data sets. For the EDL work, we used experimental data from two different laboratory earth embankments. Additionally, we explore five wavelet transform methods for signal denoising. The best performance is achieved with the Haar wavelets. We achieve up to 97.3% overall accuracy and less than 1.4% false negatives in anomaly detection. In Chapter 4, we research using two-class and one-class support vector machines (SVMs) for an effective anomaly detection system. We again use the two different EDL data sets from experimental laboratory earth embankments (each having approximately 80% normal and 20% anomalies) to ensure our workflow is robust enough to work with multiple data sets and different types of anomalous events (e.g., cracks and piping). We apply Haar wavelet-denoising techniques and extract nine spectral features from decomposed segments of the time series data. The two-class SVM with 10-fold cross validation achieved over 94% overall accuracy and 96% F1-score. Our approach provides a means for automatically identifying anomalous events using various machine learning techniques. Detecting internal erosion events in aging EDLs, earlier than is currently possible, can allow more time to prevent or mitigate catastrophic failures. Results show that we can successfully separate normal from anomalous data observations in passive seismic data, and provide a step towards techniques for continuous real-time monitoring of EDL health. Our lightweight non-commercial BSR detection system also has promise in separating commercial from non-commercial BSR scans without the need for prior geographic location information, extensive time-lapse surveys, or a database of known commercial carriers. (Abstract shortened by ProQuest.).
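
    The multivariate Gaussian model used in Chapter 3 is a standard density-threshold anomaly detector. The sketch below illustrates the technique on synthetic two-dimensional features, not the EDL seismic or BSR data; the 1% density threshold is an arbitrary illustrative choice.

```python
import numpy as np

# Multivariate Gaussian anomaly detection: fit a Gaussian to "normal" training features and
# flag test points whose fitted log-density falls below a threshold. Synthetic data only.

rng = np.random.default_rng(0)
normal = rng.normal(loc=[0.0, 0.0], scale=[1.0, 0.5], size=(500, 2))   # training data
test = np.vstack([rng.normal([0.0, 0.0], [1.0, 0.5], size=(5, 2)),
                  np.array([[4.0, 3.0]])])                              # last row is anomalous

mu = normal.mean(axis=0)
cov = np.cov(normal, rowvar=False)
cov_inv = np.linalg.inv(cov)

def log_density(x):
    """Log of the fitted multivariate normal density."""
    d = x - mu
    return -0.5 * (d @ cov_inv @ d + np.log(np.linalg.det(cov)) + len(mu) * np.log(2 * np.pi))

threshold = np.quantile([log_density(x) for x in normal], 0.01)  # flag the rarest 1%
for x in test:
    print(x, "anomaly" if log_density(x) < threshold else "normal")
```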

  1. Development and validation of a 48-target analytical method for high-throughput monitoring of genetically modified organisms.

    PubMed

    Li, Xiaofei; Wu, Yuhua; Li, Jun; Li, Yunjing; Long, Likun; Li, Feiwu; Wu, Gang

    2015-01-05

    The rapid increase in the number of genetically modified (GM) varieties has led to a demand for high-throughput methods to detect genetically modified organisms (GMOs). We describe a new dynamic array-based high throughput method to simultaneously detect 48 targets in 48 samples on a Fludigm system. The test targets included species-specific genes, common screening elements, most of the Chinese-approved GM events, and several unapproved events. The 48 TaqMan assays successfully amplified products from both single-event samples and complex samples with a GMO DNA amount of 0.05 ng, and displayed high specificity. To improve the sensitivity of detection, a preamplification step for 48 pooled targets was added to enrich the amount of template before performing dynamic chip assays. This dynamic chip-based method allowed the synchronous high-throughput detection of multiple targets in multiple samples. Thus, it represents an efficient, qualitative method for GMO multi-detection.

  2. Development and Validation of A 48-Target Analytical Method for High-throughput Monitoring of Genetically Modified Organisms

    PubMed Central

    Li, Xiaofei; Wu, Yuhua; Li, Jun; Li, Yunjing; Long, Likun; Li, Feiwu; Wu, Gang

    2015-01-01

    The rapid increase in the number of genetically modified (GM) varieties has led to a demand for high-throughput methods to detect genetically modified organisms (GMOs). We describe a new dynamic array-based high throughput method to simultaneously detect 48 targets in 48 samples on a Fludigm system. The test targets included species-specific genes, common screening elements, most of the Chinese-approved GM events, and several unapproved events. The 48 TaqMan assays successfully amplified products from both single-event samples and complex samples with a GMO DNA amount of 0.05 ng, and displayed high specificity. To improve the sensitivity of detection, a preamplification step for 48 pooled targets was added to enrich the amount of template before performing dynamic chip assays. This dynamic chip-based method allowed the synchronous high-throughput detection of multiple targets in multiple samples. Thus, it represents an efficient, qualitative method for GMO multi-detection. PMID:25556930

  3. Impact of three task demand factors on simulated unmanned system intelligence, surveillance, and reconnaissance operations.

    PubMed

    Abich, Julian; Reinerman-Jones, Lauren; Matthews, Gerald

    2017-06-01

    The present study investigated how three task demand factors influenced performance, subjective workload and stress of novice intelligence, surveillance, and reconnaissance operators within a simulation of an unmanned ground vehicle. Manipulations were task type, dual-tasking and event rate. Participants were required to discriminate human targets within a street scene from a direct video feed (threat detection [TD] task) and detect changes in symbols presented in a map display (change detection [CD] task). Dual-tasking elevated workload and distress, and impaired performance for both tasks. However, with increasing event rate, CD task deteriorated, but TD improved. Thus, standard workload models provide a better guide to evaluating the demands of abstract symbols than to processing realistic human characters. Assessment of stress and workload may be especially important in the design and evaluation of systems in which human character critical signals must be detected in video images. Practitioner Summary: This experiment assessed subjective workload and stress during threat and CD tasks performed alone and in combination. Results indicated an increase in event rate led to significant improvements in performance during TD, but decrements during CD, yet both had associated increases in workload and engagement.

  4. Confidential reporting of patient safety events in primary care: results from a multilevel classification of cognitive and system factors.

    PubMed

    Kostopoulou, Olga; Delaney, Brendan

    2007-04-01

    To classify events of actual or potential harm to primary care patients using a multilevel taxonomy of cognitive and system factors. Observational study of patient safety events obtained via a confidential but not anonymous reporting system. Reports were followed up with interviews where necessary. Events were analysed for their causes and contributing factors using causal trees and were classified using the taxonomy. Five general medical practices in the West Midlands were selected to represent a range of sizes and types of patient population. All practice staff were invited to report patient safety events. Main outcome measures were frequencies of clinical types of events reported, cognitive types of error, types of detection and contributing factors; and relationship between types of error, practice size, patient consequences and detection. 78 reports were relevant to patient safety and analysable. They included 21 (27%) adverse events and 50 (64%) near misses. 16.7% (13/71) had serious patient consequences, including one death. 75.7% (59/78) had the potential for serious patient harm. Most reports referred to administrative errors (25.6%, 20/78). 60% (47/78) of the reports contained sufficient information to characterise cognition: "situation assessment and response selection" was involved in 45% (21/47) of these reports and was often linked to serious potential consequences. The most frequent contributing factor was work organisation, identified in 71 events. This included excessive task demands (47%, 37/71) and fragmentation (28%, 22/71). Even though most reported events were near misses, events with serious patient consequences were also reported. Failures in situation assessment and response selection, a cognitive activity that occurs in both clinical and administrative tasks, was related to serious potential harm.

  5. Confidential reporting of patient safety events in primary care: results from a multilevel classification of cognitive and system factors

    PubMed Central

    Kostopoulou, Olga; Delaney, Brendan

    2007-01-01

    Objective To classify events of actual or potential harm to primary care patients using a multilevel taxonomy of cognitive and system factors. Methods Observational study of patient safety events obtained via a confidential but not anonymous reporting system. Reports were followed up with interviews where necessary. Events were analysed for their causes and contributing factors using causal trees and were classified using the taxonomy. Five general medical practices in the West Midlands were selected to represent a range of sizes and types of patient population. All practice staff were invited to report patient safety events. Main outcome measures were frequencies of clinical types of events reported, cognitive types of error, types of detection and contributing factors; and relationship between types of error, practice size, patient consequences and detection. Results 78 reports were relevant to patient safety and analysable. They included 21 (27%) adverse events and 50 (64%) near misses. 16.7% (13/71) had serious patient consequences, including one death. 75.7% (59/78) had the potential for serious patient harm. Most reports referred to administrative errors (25.6%, 20/78). 60% (47/78) of the reports contained sufficient information to characterise cognition: “situation assessment and response selection” was involved in 45% (21/47) of these reports and was often linked to serious potential consequences. The most frequent contributing factor was work organisation, identified in 71 events. This included excessive task demands (47%, 37/71) and fragmentation (28%, 22/71). Conclusions Even though most reported events were near misses, events with serious patient consequences were also reported. Failures in situation assessment and response selection, a cognitive activity that occurs in both clinical and administrative tasks, was related to serious potential harm. PMID:17403753

  6. Real-Time Data Processing Systems and Products at the Alaska Earthquake Information Center

    NASA Astrophysics Data System (ADS)

    Ruppert, N. A.; Hansen, R. A.

    2007-05-01

    The Alaska Earthquake Information Center (AEIC) receives data from over 400 seismic sites located within the state boundaries and the surrounding regions and serves as a regional data center. In 2007, the AEIC reported ~20,000 seismic events, with the largest event of M6.6 in the Andreanof Islands. The real-time earthquake detection and data processing systems at AEIC are based on the Antelope system from BRTT, Inc. This modular and extensible processing platform allows an integrated system, complete from data acquisition to catalog production. Multiple additional modules constructed with the Antelope toolbox have been developed to fit the particular needs of the AEIC. The real-time earthquake locations and magnitudes are determined within 2-5 minutes of the event occurrence. AEIC maintains a 24/7 seismologist-on-duty schedule. Earthquake alarms are based on the real-time earthquake detections. Significant events are reviewed by the seismologist on duty within 30 minutes of occurrence, with information releases issued for significant events. This information is disseminated immediately via the AEIC website, ANSS website via QDDS submissions, through e-mail, cell phone and pager notifications, via fax broadcasts and recorded voice-mail messages. In addition, automatic regional moment tensors are determined for events with M>=4.0. This information is posted on the public website. ShakeMaps are being calculated in real-time with the information currently accessible via a password-protected website. AEIC is designing an alarm system targeted for the critical lifeline operations in Alaska. AEIC maintains an extensive computer network to provide adequate support for data processing and archival. For real-time processing, AEIC operates two identical, interoperable computer systems in parallel.

  7. MOA-2012-BLG-505Lb: A Super-Earth-mass Planet That Probably Resides in the Galactic Bulge

    NASA Astrophysics Data System (ADS)

    Nagakane, M.; Sumi, T.; Koshimoto, N.; Bennett, D. P.; Bond, I. A.; Rattenbury, N.; Suzuki, D.; Abe, F.; Asakura, Y.; Barry, R.; Bhattacharya, A.; Donachie, M.; Fukui, A.; Hirao, Y.; Itow, Y.; Li, M. C. A.; Ling, C. H.; Masuda, K.; Matsubara, Y.; Matsuo, T.; Muraki, Y.; Ohnishi, K.; Ranc, C.; Saito, To.; Sharan, A.; Shibai, H.; Sullivan, D. J.; Tristram, P. J.; Yamada, T.; Yonehara, A.; MOA Collaboration

    2017-07-01

    We report the discovery of a super-Earth-mass planet in the microlensing event MOA-2012-BLG-505. This event has the second shortest event timescale of $t_{\rm E} = 10 \pm 1$ days where the observed data show evidence of a planetary companion. Our 15 minute high cadence survey observation schedule revealed the short subtle planetary signature. The system shows the well known close/wide degeneracy. The planet/host-star mass ratio is $q = 2.1 \times 10^{-4}$ and the projected separation normalized by the Einstein radius is s = 1.1 or 0.9 for the wide and close solutions, respectively. We estimate the physical parameters of the system by using a Bayesian analysis and find that the lens consists of a super-Earth with a mass of $6.7^{+10.7}_{-3.6}\,M_\oplus$ orbiting around a brown dwarf or late-M-dwarf host with a mass of $0.10^{+0.16}_{-0.05}\,M_\odot$ with a projected star-planet separation of $0.9^{+0.3}_{-0.2}$ au. The system is at a distance of 7.2 ± 1.1 kpc, i.e., it is likely to be in the Galactic bulge. The small angular Einstein radius ($\theta_{\rm E} = 0.12 \pm 0.02$ mas) and short event timescale are typical for a low-mass lens in the Galactic bulge. Such low-mass planetary systems in the Bulge are rare because the detection efficiency of planets in short microlensing events is relatively low. This discovery may suggest that such low-mass planetary systems are abundant in the Bulge and currently on-going high cadence survey programs will detect more such events and may reveal an abundance of such planetary systems.

  8. More evidence for a one-to-one correlation between Sprites and Early VLF perturbations

    NASA Astrophysics Data System (ADS)

    Haldoupis, C.; Amvrosiadi, N.; Cotts, B. R. T.; van der Velde, O. A.; Chanrion, O.; Neubert, T.

    2010-07-01

    Past studies have shown a correlation between sprites and early VLF perturbations, but the reported correlation varies widely from ˜50% to 100%. The present study resolves these large discrepancies by analyzing several case studies of sprite and narrowband VLF observations, in which multiple transmitter-receiver VLF pairs with great circle paths (GCPs) passing near a sprite-producing thunderstorm were available. In this setup, the multiple paths act in a complementary way that makes the detection of early VLF perturbations much more probable compared to a single VLF path that can miss several of them, a fact that was overlooked in past studies. The evidence shows that visible sprite occurrences are accompanied by early VLF perturbations in a one-to-one correspondence. This implies that the sprite generation mechanism may cause also sub-ionospheric conductivity disturbances that produce early VLF events. However, the one-to-one visible sprite to early VLF event correspondence, if viewed conversely, appears not to be always reciprocal. This is because the number of early events detected in some case studies was considerably larger than the number of visible sprites. Since the great majority of the early events not accompanied by visible sprites appeared to be caused by positive cloud to ground (+CG) lightning discharges, it is possible that sprites or sprite halos were concurrently present in these events as well but were missed by the sprite-watch camera detection system. In order for this option to be resolved we need more studies using highly sensitive optical systems capable of detecting weaker sprites, sprite halos and elves.

  9. Prediction of topographic and bathymetric measurement performance of airborne low-SNR lidar systems

    NASA Astrophysics Data System (ADS)

    Cossio, Tristan

    Low signal-to-noise ratio (LSNR) lidar (light detection and ranging) is an alternative paradigm to traditional lidar based on the detection of return signals at the single photoelectron level. The objective of this work was to predict low altitude (600 m) LSNR lidar system performance with regards to elevation measurement and target detection capability in topographic (dry land) and bathymetric (shallow water) scenarios. A modular numerical sensor model has been developed to provide data for further analysis due to the dearth of operational low altitude LSNR lidar systems. This simulator tool is described in detail, with consideration given to atmospheric effects, surface conditions, and the effects of laser phenomenology. Measurement performance analysis of the simulated topographic data showed results comparable to commercially available lidar systems, with a standard deviation of less than 12 cm for calculated elevation values. Bathymetric results, although dependent largely on water turbidity, were indicative of meter-scale horizontal data spacing for sea depths less than 5 m. The high prevalence of noise in LSNR lidar data introduces significant difficulties in data analysis. Novel algorithms to reduce noise are described, with particular focus on their integration into an end-to-end target detection classifier for both dry and submerged targets (cube blocks, 0.5 m to 1.0 m on a side). The key characteristic exploited to discriminate signal and noise is the temporal coherence of signal events versus the random distribution of noise events. Target detection performance over dry earth was observed to be robust, reliably detecting over 90% of targets with a minimal false alarm rate. Comparable results were observed in waters of high clarity, where the investigated system was generally able to detect more than 70% of targets to a depth of 5 m. The results of the study show that CATS, the University of Florida's LSNR lidar prototype, is capable of high fidelity (decimeter-scale) coverage of the topographic zone with limited applicability to shallow waters less than 5 m deep. To increase the spatial-temporal contrast between signal and noise events, laser pulse rate is the optimal system characteristic to improve in future LSNR lidar units.

  10. The attributes of medical event-reporting systems: experience with a prototype medical event-reporting system for transfusion medicine.

    PubMed

    Battles, J B; Kaplan, H S; Van der Schaaf, T W; Shea, C E

    1998-03-01

    To design, develop, and implement a prototype medical event-reporting system for use in transfusion medicine to improve transfusion safety by studying incidents and errors. The IDEALS concept of design was used to identify specifications for the event-reporting system, and a Delphi and subsequent nominal group technique meetings were used to reach consensus on the development of the system. An interdisciplinary panel of experts from aviation safety, nuclear power, cognitive psychology, artificial intelligence, and education and representatives of major transfusion medicine organizations participated in the development process. Three blood centers and three hospital transfusion services implemented the reporting system. A working prototype event-reporting system was recommended and implemented. The system has seven components: detection, selection, description, classification, computation, interpretation, and local evaluation. Its unique features include no-fault reporting initiated by the individual discovering the event, who submits a report that is investigated by local quality assurance personnel and forwarded to a nonregulatory central system for computation and interpretation. An event-reporting system incorporated into present quality assurance and risk management efforts can help organizations address system structural and procedural weakness where the potential for errors can adversely affect health care outcomes. Input from the end users of the system as well as from external experts should enable this reporting system to serve as a useful model for others who may develop event-reporting systems in other medical domains.

  11. Time difference of arrival to blast localization of potential chemical/biological event on the move

    NASA Astrophysics Data System (ADS)

    Morcos, Amir; Desai, Sachi; Peltzer, Brian; Hohil, Myron E.

    2007-10-01

    By integrating a sensor suite that discriminates potential chemical/biological (CB) events from high-explosive (HE) events with a standalone acoustic sensor running a Time Difference of Arrival (TDOA) algorithm, we developed a cueing mechanism for more power-intensive, range-limited sensing techniques. Once the event detection algorithm localizes a blast event using TDOA, the system provides further information on the event, classifying it as launch or impact and as CB or HE. The added information is provided to a range-limited chemical sensing system that exploits spectroscopy to determine the contents of the chemical event. The main innovation of this sensor suite is that it provides this information on the move, while the chemical sensor has adequate time to determine the contents of the event from a safe stand-off distance. The CB/HE discrimination algorithm exploits acoustic sensors to provide early detection and identification of CB attacks. Distinct characteristics arise within the different airburst signatures because HE warheads emphasize concussive and shrapnel effects, while CB warheads are designed to disperse their contents over large areas and therefore employ a slower-burning, less intense explosive to mix and spread their contents. These differences are characterized by variations in the peak pressure and rise time of the blast, in the ratio of positive to negative pressure amplitude, and in the overall duration of the resulting waveform. The discrete wavelet transform (DWT) is used to extract the predominant components of these characteristics from airburst signatures at ranges exceeding 3 km. Highly reliable discrimination is achieved with a feed-forward neural network classifier trained on a feature space derived from the distribution of wavelet coefficients and higher-frequency details found within different levels of the multiresolution decomposition. An adaptive noise floor for early event detection helps minimize the false alarm rate and increases confidence in whether a candidate event is a blast event or background noise. Integrating these algorithms with the TDOA algorithm yields a suite that gives a moving vehicle early warning and a highly reliable look direction from a large stand-off distance, so it can determine whether a candidate blast event is CB and, if so, the composition of the resulting cloud.
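
    The TDOA localization step can be sketched under a far-field (plane-wave) assumption: each sensor pair's arrival-time difference constrains the propagation direction, and a least-squares fit recovers the bearing. The sensor geometry, sound speed, and delays below are hypothetical, not the fielded sensor suite.

```python
import numpy as np

# Far-field TDOA bearing estimate. For a plane wave arriving from look direction u (a unit
# vector pointing from the array toward the source), the delay of sensor i relative to
# sensor 0 is t_i = -(r_i - r_0) . u / c. Geometry and delays below are hypothetical.

c = 343.0                                   # speed of sound, m/s
sensors = np.array([[0.0, 0.0],             # reference microphone
                    [0.5, 0.0],
                    [0.0, 0.5],
                    [0.5, 0.5]])            # square array, 0.5 m spacing

true_azimuth = np.deg2rad(40.0)
u_true = np.array([np.cos(true_azimuth), np.sin(true_azimuth)])
tdoas = -(sensors[1:] - sensors[0]) @ u_true / c          # simulated measurements

# Solve (r_i - r_0) . u = -c * t_i for u in a least-squares sense, then normalize.
A = sensors[1:] - sensors[0]
b = -c * tdoas
u_est, *_ = np.linalg.lstsq(A, b, rcond=None)
u_est /= np.linalg.norm(u_est)

print("estimated bearing:", round(np.degrees(np.arctan2(u_est[1], u_est[0])), 1), "deg")
```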

  12. Jaguar surveying and monitoring in the United States

    USGS Publications Warehouse

    Culver, Melanie

    2016-06-10

    This project established and implemented a noninvasive system for detecting and monitoring jaguars. The study area incorporates most of the mountainous areas north of the United States-Mexico international border and south of Interstate 10, from the Baboquivari Mountains in Arizona to the Animas Mountains in New Mexico. We used two primary methods to detect exact jaguar locations: paired motion-sensor trail cameras, and genetic testing of large carnivore scat collected in the field. We emphasize that this project used entirely noninvasive methods and no jaguars were captured, radiocollared, baited, or harassed in any way. Scat sample collection occurred during the entire field part of the study, but was intensified with the use of a trained scat detection dog following the first jaguar photo detection event (photo detection event was October 2012, scat detection dog began working January 2013). We also collected weather, vegetation, and geographic information system (GIS) data to analyze in conjunction with photo and video data. The results of this study are intended to aid and inform future management and conservation practices for jaguars and ocelots in this region.

  13. IDSR as a Platform for Implementing IHR in African Countries

    PubMed Central

    Kasolo, Francis; Yoti, Zabulon; Bakyaita, Nathan; Gaturuku, Peter; Katz, Rebecca; Fischer, Julie E.

    2013-01-01

    Of the 46 countries in the World Health Organization (WHO) African region (AFRO), 43 are implementing Integrated Disease Surveillance and Response (IDSR) guidelines to improve their abilities to detect, confirm, and respond to high-priority communicable and noncommunicable diseases. IDSR provides a framework for strengthening the surveillance, response, and laboratory core capacities required by the revised International Health Regulations [IHR (2005)]. In turn, IHR obligations can serve as a driving force to sustain national commitments to IDSR strategies. The ability to report potential public health events of international concern according to IHR (2005) relies on early warning systems founded in national surveillance capacities. Public health events reported through IDSR to the WHO Emergency Management System in Africa illustrate the growing capacities in African countries to detect, assess, and report infectious and noninfectious threats to public health. The IHR (2005) provide an opportunity to continue strengthening national IDSR systems so they can characterize outbreaks and respond to public health events in the region. PMID:24041192

  14. Vehicle Mode and Driving Activity Detection Based on Analyzing Sensor Data of Smartphones.

    PubMed

    Lu, Dang-Nhac; Nguyen, Duc-Nhan; Nguyen, Thi-Hau; Nguyen, Ha-Nam

    2018-03-29

    In this paper, we present a flexible combined system, namely the Vehicle mode-driving Activity Detection System (VADS), that is capable of detecting either the current vehicle mode or the current driving activity of travelers. Our proposed system is designed to be lightweight in computation and very fast in response to the changes of travelers' vehicle modes or driving events. The vehicle mode detection module is responsible for recognizing both motorized vehicles, such as cars, buses, and motorbikes, and non-motorized modes, for instance, walking and biking. It relies only on accelerometer data in order to minimize the energy consumption of smartphones. By contrast, the driving activity detection module uses the data collected from the accelerometer, gyroscope, and magnetometer of a smartphone to detect various driving activities, i.e., stopping, going straight, turning left, and turning right. Furthermore, we propose a method to compute the optimized data window size and the optimized overlapping ratio for each vehicle mode and each driving event from the training datasets. The experimental results show that this strategy significantly increases the overall prediction accuracy. Additionally, numerous experiments are carried out to compare the impact of different feature sets (time domain features, frequency domain features, Hjorth features) as well as the impact of various classification algorithms (Random Forest, Naïve Bayes, Decision tree J48, K Nearest Neighbor, Support Vector Machine) contributing to the prediction accuracy. Our system achieves an average accuracy of 98.33% in detecting the vehicle modes and an average accuracy of 98.95% in recognizing the driving events of motorcyclists when using the Random Forest classifier and a feature set containing time domain features, frequency domain features, and Hjorth features. Moreover, on a public dataset from HTC in New Taipei, Taiwan, our framework obtains an overall accuracy of 97.33%, which is considerably higher than that of the state of the art.
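
    The window-level pipeline described above (time-domain and Hjorth features fed to a Random Forest) can be sketched as follows. The synthetic signals, window length, and class labels are illustrative only; the paper tunes window size and overlap per vehicle mode.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Window-level features (a few time-domain statistics plus the Hjorth parameters) fed to a
# Random Forest, in the spirit of the pipeline above. Synthetic data and labels only.

def hjorth(x):
    dx = np.diff(x)
    ddx = np.diff(dx)
    activity = np.var(x)
    mobility = np.sqrt(np.var(dx) / activity)
    complexity = np.sqrt(np.var(ddx) / np.var(dx)) / mobility
    return activity, mobility, complexity

def features(window):
    act, mob, comp = hjorth(window)
    return [window.mean(), window.std(), np.abs(window).max(), act, mob, comp]

rng = np.random.default_rng(1)
X, y = [], []
for label, scale, freq in [("walking", 2.0, 2), ("motorbike", 0.6, 9)]:  # hypothetical classes
    for _ in range(100):
        t = np.linspace(0, 2, 100)                    # 2-second window at 50 Hz (illustrative)
        window = scale * np.sin(2 * np.pi * freq * t) + rng.normal(0, 0.2, size=t.size)
        X.append(features(window))
        y.append(label)

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
print(clf.predict([features(2.0 * np.sin(2 * np.pi * 2 * np.linspace(0, 2, 100)))]))
```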

  15. The LUX experiment - trigger and data acquisition systems

    NASA Astrophysics Data System (ADS)

    Druszkiewicz, Eryk

    2013-04-01

    The Large Underground Xenon (LUX) detector is a two-phase xenon time projection chamber designed to detect interactions of dark matter particles with the xenon nuclei. Signals from the detector PMTs are processed by custom-built analog electronics which provide properly shaped signals for the trigger and data acquisition (DAQ) systems. During calibrations, both systems must be able to handle high rates and have large dynamic ranges; during dark matter searches, maximum sensitivity requires low thresholds. The trigger system uses eight-channel 64-MHz digitizers (DDC-8) connected to a Trigger Builder (TB). The FPGA cores on the digitizers perform real-time pulse identification (discriminating between S1 and S2-like signals) and event localization. The TB uses hit patterns, hit maps, and maximum response detection to make trigger decisions, which are reached within few microseconds after the occurrence of an event of interest. The DAQ system is comprised of commercial digitizers with customized firmware. Its real-time baseline suppression allows for a maximum event acquisition rate in excess of 1.5 kHz, which results in virtually no deadtime. The performance of the trigger and DAQ systems during the commissioning runs of LUX will be discussed.

  16. Real-time measurements, rare events and photon economics

    NASA Astrophysics Data System (ADS)

    Jalali, B.; Solli, D. R.; Goda, K.; Tsia, K.; Ropers, C.

    2010-07-01

    Rogue events otherwise known as outliers and black swans are singular, rare, events that carry dramatic impact. They appear in seemingly unconnected systems in the form of oceanic rogue waves, stock market crashes, evolution, and communication systems. Attempts to understand the underlying dynamics of such complex systems that lead to spectacular and often cataclysmic outcomes have been frustrated by the scarcity of events, resulting in insufficient statistical data, and by the inability to perform experiments under controlled conditions. Extreme rare events also occur in ultrafast physical sciences where it is possible to collect large data sets, even for rare events, in a short time period. The knowledge gained from observing rare events in ultrafast systems may provide valuable insight into extreme value phenomena that occur over a much slower timescale and that have a closer connection with human experience. One solution is a real-time ultrafast instrument that is capable of capturing singular and randomly occurring non-repetitive events. The time stretch technology developed during the past 13 years is providing a powerful tool box for reaching this goal. This paper reviews this technology and discusses its use in capturing rogue events in electronic signals, spectroscopy, and imaging. We show an example in nonlinear optics where it was possible to capture rare and random solitons whose unusual statistical distribution resemble those observed in financial markets. The ability to observe the true spectrum of each event in real time has led to important insight in understanding the underlying process, which in turn has made it possible to control soliton generation leading to improvement in the coherence of supercontinuum light. We also show a new class of fast imagers which are being considered for early detection of cancer because of their potential ability to detect rare diseased cells (so called rogue cells) in a large population of healthy cells.

  17. Quadrant anode image sensor

    NASA Technical Reports Server (NTRS)

    Lampton, M.; Malina, R. F.

    1976-01-01

    A position-sensitive event-counting electronic readout system for microchannel plates (MCPs) is described that offers the advantages of high spatial resolution and fast time resolution. The technique relies upon a four-quadrant electron-collecting anode located behind the output face of the microchannel plate, so that the electron cloud from each detected event is partly intercepted by each of the four quadrants. The relative amounts of charge collected by each quadrant depend on event position, permitting each event to be localized with two ratio circuits. A prototype quadrant anode system for ion, electron, and extreme ultraviolet imaging is described. The spatial resolution achieved, about 10 microns, allows individual MCP channels to be distinguished.
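
    The position computation from the two ratio circuits can be illustrated with the standard quadrant-centroid expressions. The exact analog circuit and quadrant layout are not specified in the record, so the formulas and layout below are an assumed digital stand-in.

```python
# Event position from the charge split across four quadrants. The ratio formulas below are
# the standard quadrant-centroid expressions and an assumed digital equivalent of the two
# analog ratio circuits described above; Qa..Qd are charges collected by the four quadrants.

def quadrant_position(qa, qb, qc, qd):
    """Assumed layout: Qa = upper-left, Qb = upper-right, Qc = lower-right, Qd = lower-left."""
    total = qa + qb + qc + qd
    x = ((qb + qc) - (qa + qd)) / total     # right minus left, normalized
    y = ((qa + qb) - (qc + qd)) / total     # top minus bottom, normalized
    return x, y

# An event slightly up and to the right of center deposits more charge in the upper-right quadrant.
print(quadrant_position(qa=0.24, qb=0.30, qc=0.26, qd=0.20))
```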

  18. Recent Results on "Approximations to Optimal Alarm Systems for Anomaly Detection"

    NASA Technical Reports Server (NTRS)

    Martin, Rodney Alexander

    2009-01-01

    An optimal alarm system and its approximations may use Kalman filtering for univariate linear dynamic systems driven by Gaussian noise to provide a layer of predictive capability. Predicted Kalman filter future process values and a fixed critical threshold can be used to construct a candidate level-crossing event over a predetermined prediction window. An optimal alarm system can be designed to elicit the fewest false alarms for a fixed detection probability in this particular scenario.
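
    The level-crossing construction described above can be sketched from Kalman-filter predictive means and variances: each step in the prediction window contributes a Gaussian exceedance probability, and an alarm is raised when the approximate window probability exceeds a design value. The numbers and the per-step independence approximation below are illustrative, not the optimal alarm computation itself.

```python
import numpy as np
from scipy.stats import norm

# Approximate probability that a Gaussian-predicted process crosses a fixed critical level
# somewhere in a prediction window. The predictive means/variances would come from a Kalman
# filter; the values and the independence approximation are illustrative only.

threshold = 3.0
pred_mean = np.array([1.0, 1.4, 1.9, 2.3, 2.6])      # k-step-ahead predictive means
pred_std  = np.array([0.3, 0.4, 0.5, 0.6, 0.7])      # k-step-ahead predictive std devs

p_exceed = norm.sf(threshold, loc=pred_mean, scale=pred_std)   # P(x_k > threshold) per step
p_window = 1.0 - np.prod(1.0 - p_exceed)                       # crude window probability

print("per-step exceedance probabilities:", np.round(p_exceed, 3))
print("approx. window crossing probability:", round(p_window, 3))
print("alarm" if p_window > 0.5 else "no alarm")
```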

  19. Fall detection in homes of older adults using the Microsoft Kinect.

    PubMed

    Stone, Erik E; Skubic, Marjorie

    2015-01-01

    A method for detecting falls in the homes of older adults using the Microsoft Kinect and a two-stage fall detection system is presented. The first stage of the detection system characterizes a person's vertical state in individual depth image frames, and then segments on-ground events from the vertical state time series obtained by tracking the person over time. The second stage uses an ensemble of decision trees to compute a confidence that a fall preceded an on-ground event. Evaluation was conducted in the actual homes of older adults, using a combined nine years of continuous data collected in 13 apartments. The dataset includes 454 falls: 445 performed by trained stunt actors and nine naturally occurring resident falls. The extensive data collection allows for characterization of system performance under real-world conditions to a degree that has not been shown in other studies. Cross validation results are included for standing, sitting, and lying down positions, near (within 4 m) versus far fall locations, and occluded versus not occluded fallers. The method is compared against five state-of-the-art fall detection algorithms and significantly better results are achieved.
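
    The first stage, segmenting on-ground events from the tracked vertical state, can be sketched with a simple height threshold and minimum-duration rule. The heights, threshold, and frame rate below are illustrative assumptions; the paper's second stage then scores each segmented event with an ensemble of decision trees.

```python
import numpy as np

# First-stage sketch: segment "on-ground" events by thresholding the tracked person's
# height above the floor. Heights, threshold and the minimum-duration rule are illustrative.

fs = 15.0                                               # depth frames per second (assumed)
height_m = np.concatenate([np.full(60, 1.7),            # standing
                           np.linspace(1.7, 0.2, 8),    # rapid descent
                           np.full(45, 0.2),            # on the ground
                           np.linspace(0.2, 1.7, 30)])  # getting back up

on_ground = height_m < 0.35
change = np.flatnonzero(np.diff(on_ground.astype(int)))
starts, stops = change[::2], change[1::2]               # assumes the series begins and ends off-ground

for start, stop in zip(starts, stops):
    duration = (stop - start) / fs
    if duration > 1.0:                                  # ignore momentary dips below the threshold
        print(f"on-ground event at t={start / fs:.1f} s, duration {duration:.1f} s")
```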

  20. Detecting Seismic Events Using a Supervised Hidden Markov Model

    NASA Astrophysics Data System (ADS)

    Burks, L.; Forrest, R.; Ray, J.; Young, C.

    2017-12-01

    We explore the use of supervised hidden Markov models (HMMs) to detect seismic events in streaming seismogram data. Current methods for seismic event detection include simple triggering algorithms, such as STA/LTA and the Z-statistic, which can lead to large numbers of false positives that must be investigated by an analyst. The hypothesis of this study is that more advanced detection methods, such as HMMs, may decrease false positives while maintaining accuracy similar to current methods. We train a binary HMM classifier using 2 weeks of 3-component waveform data from the International Monitoring System (IMS) that was carefully reviewed by an expert analyst to pick all seismic events. Using an ensemble of simple and discrete features, such as the triggering of STA/LTA, the HMM predicts the time at which the transition occurs from noise to signal. Compared to the STA/LTA detection algorithm, the HMM detects more true events, but the false positive rate remains unacceptably high. Future work to potentially decrease the false positive rate may include using continuous features, a Gaussian HMM, and multi-class HMMs to distinguish between types of seismic waves (e.g., P-waves and S-waves). Acknowledgement: Sandia National Laboratories is a multi-mission laboratory managed and operated by National Technology and Engineering Solutions of Sandia, LLC., a wholly owned subsidiary of Honeywell International, Inc., for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-NA-0003525. SAND No: SAND2017-8154 A
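
    The STA/LTA trigger that serves as one of the discrete HMM features above is easy to sketch: the ratio of a short-term to a long-term average of signal energy, thresholded to flag candidate onsets. The window lengths, threshold, and synthetic trace below are typical illustrative choices, not the IMS or analyst configuration.

```python
import numpy as np

# Classic STA/LTA trigger: ratio of a short-term average to a long-term average of signal
# energy, thresholded to flag candidate onsets. Windows and threshold are illustrative.

def sta_lta(x, n_sta, n_lta):
    energy = x.astype(float) ** 2
    kernel = lambda n: np.ones(n) / n
    sta = np.convolve(energy, kernel(n_sta), mode="same")
    lta = np.convolve(energy, kernel(n_lta), mode="same")
    return sta / np.maximum(lta, 1e-12)

rng = np.random.default_rng(2)
trace = rng.normal(0, 1, 2000)
trace[1200:1300] += 6 * np.sin(np.linspace(0, 20 * np.pi, 100))   # synthetic arrival

ratio = sta_lta(trace, n_sta=20, n_lta=400)
onsets = np.flatnonzero((ratio[1:] > 4.0) & (ratio[:-1] <= 4.0))  # upward threshold crossings
print("candidate onset samples:", onsets[:5])
```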

  1. Slow Earthquake Hunters: A New Citizen Science Project to Identify and Catalog Slow Slip Events in Geodetic Data

    NASA Astrophysics Data System (ADS)

    Bartlow, N. M.

    2017-12-01

    Slow Earthquake Hunters is a new citizen science project to detect, catalog, and monitor slow slip events. Slow slip events, also called "slow earthquakes", occur when faults slip too slowly to generate significant seismic radiation. They typically take between a few days and over a year to occur, and are most often found on subduction zone plate interfaces. While not dangerous in and of themselves, recent evidence suggests that monitoring slow slip events is important for earthquake hazards, as slow slip events have been known to trigger damaging "regular" earthquakes. Slow slip events, because they do not radiate seismically, are detected with a variety of methods, most commonly continuous geodetic Global Positioning System (GPS) stations. There is now a wealth of GPS data in some regions that experience slow slip events, but a reliable automated method to detect them in GPS data remains elusive. This project aims to recruit human users to view GPS time series data, with some post-processing to highlight slow slip signals, and flag slow slip events for further analysis by the scientific team. Slow Earthquake Hunters will begin with data from the Cascadia subduction zone, where geodetically detectable slow slip events with a duration of at least a few days recur at regular intervals. The project will then expand to other areas with slow slip events or other transient geodetic signals, including other subduction zones, and areas with strike-slip faults. This project has not yet rolled out to the public, and is in a beta testing phase. This presentation will show results from an initial pilot group of student participants at the University of Missouri, and solicit feedback for the future of Slow Earthquake Hunters.

  2. Observations of transient events with Mini-MegaTORTORA wide-field monitoring system with sub-second temporal resolution

    NASA Astrophysics Data System (ADS)

    Karpov, S.; Beskin, G.; Biryukov, A.; Bondar, S.; Ivanov, E.; Katkova, E.; Orekhova, N.; Perkov, A.; Sasyuk, V.

    2017-07-01

    Here we present a summary of the first years of operation and the first results of a novel 9-channel wide-field optical monitoring system with sub-second temporal resolution, Mini-MegaTORTORA (MMT-9), which is in operation now at the Special Astrophysical Observatory in the Russian Caucasus. The system is able to observe the sky simultaneously in either wide (900 square degrees) or narrow (100 square degrees) fields of view, either in clear light or with any combination of color (Johnson-Cousins B, V or R) and polarimetric filters installed, with exposure times ranging from 0.1 s to hundreds of seconds. The real-time system data analysis pipeline performs automatic detection of rapid transient events, both near-Earth and extragalactic. The objects routinely detected by MMT also include faint meteors and artificial satellites.

  3. Mini-MegaTORTORA Wide-Field Monitoring System with Subsecond Temporal Resolution: Observation of Transient Events

    NASA Astrophysics Data System (ADS)

    Karpov, S.; Beskin, G.; Biryukov, A.; Bondar, S.; Ivanov, E.; Katkova, E.; Orekhova, N.; Perkov, A.; Sasyuk, V.

    2017-06-01

    Here we present a summary of the first years of operation and the first results of a novel 9-channel wide-field optical monitoring system with sub-second temporal resolution, Mini-MegaTORTORA (MMT-9), which is in operation now at the Special Astrophysical Observatory in the Russian Caucasus. The system is able to observe the sky simultaneously in either wide (~900 square degrees) or narrow (~100 square degrees) fields of view, either in clear light or with any combination of color (Johnson-Cousins B, V or R) and polarimetric filters installed, with exposure times ranging from 0.1 s to hundreds of seconds. The real-time system data analysis pipeline performs automatic detection of rapid transient events, both near-Earth and extragalactic. The objects routinely detected by MMT include faint meteors and artificial satellites.

  4. An Ultralow-Power Sleep Spindle Detection System on Chip.

    PubMed

    Iranmanesh, Saam; Rodriguez-Villegas, Esther

    2017-08-01

    This paper describes a full system-on-chip to automatically detect sleep spindle events from scalp EEG signals. These events, which are known to play an important role on memory consolidation during sleep, are also characteristic of a number of neurological diseases. The operation of the system is based on a previously reported algorithm, which used the Teager energy operator, together with the Spectral Edge Frequency (SEF50) achieving more than 70% sensitivity and 98% specificity. The algorithm is now converted into a hardware analog based customized implementation in order to achieve extremely low levels of power. Experimental results prove that the system, which is fabricated in a 0.18 μm CMOS technology, is able to operate from a 1.25 V power supply consuming only 515 nW, with an accuracy that is comparable to its software counterpart.
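
    The two signal measures named above have compact discrete-time definitions. The sketch below computes them in software on a synthetic EEG-like trace; it is not the analog on-chip implementation, and the sampling rate and burst frequency are illustrative.

```python
import numpy as np

# Discrete-time Teager energy operator and 50% spectral edge frequency (SEF50), the two
# measures underlying the detector described above. Software sketch on a synthetic trace.

def teager_energy(x):
    """psi[x](n) = x(n)^2 - x(n-1) * x(n+1)."""
    return x[1:-1] ** 2 - x[:-2] * x[2:]

def sef50(x, fs):
    """Frequency below which 50% of the signal power lies."""
    psd = np.abs(np.fft.rfft(x)) ** 2
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    cumulative = np.cumsum(psd)
    return freqs[np.searchsorted(cumulative, 0.5 * cumulative[-1])]

fs = 200.0
t = np.arange(0, 2, 1 / fs)
background = np.sin(2 * np.pi * 2 * t)                      # slow background activity
spindle = 0.8 * np.sin(2 * np.pi * 13 * t) * (t > 1.0)      # 13 Hz burst in the second half
eeg = background + spindle

print("mean Teager energy:", round(teager_energy(eeg).mean(), 4))
print("SEF50:", round(sef50(eeg, fs), 2), "Hz")
```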

  5. Event-driven simulation in SELMON: An overview of EDSE

    NASA Technical Reports Server (NTRS)

    Rouquette, Nicolas F.; Chien, Steve A.; Charest, Leonard, Jr.

    1992-01-01

    EDSE (event-driven simulation engine), a model-based event-driven simulator implemented for SELMON, a tool for sensor selection and anomaly detection in real-time monitoring is described. The simulator is used in conjunction with a causal model to predict future behavior of the model from observed data. The behavior of the causal model is interpreted as equivalent to the behavior of the physical system being modeled. An overview of the functionality of the simulator and the model-based event-driven simulation paradigm on which it is based is provided. Included are high-level descriptions of the following key properties: event consumption and event creation, iterative simulation, synchronization and filtering of monitoring data from the physical system. Finally, how EDSE stands with respect to the relevant open issues of discrete-event and model-based simulation is discussed.

  6. Simulating adverse event spontaneous reporting systems as preferential attachment networks: application to the Vaccine Adverse Event Reporting System.

    PubMed

    Scott, J; Botsis, T; Ball, R

    2014-01-01

    Spontaneous Reporting Systems [SRS] are critical tools in the post-licensure evaluation of medical product safety. Regulatory authorities use a variety of data mining techniques to detect potential safety signals in SRS databases. Assessing the performance of such signal detection procedures requires simulated SRS databases, but simulation strategies proposed to date each have limitations. We sought to develop a novel SRS simulation strategy based on plausible mechanisms for the growth of databases over time. We developed a simulation strategy based on the network principle of preferential attachment. We demonstrated how this strategy can be used to create simulations based on specific databases of interest, and provided an example of using such simulations to compare signal detection thresholds for a popular data mining algorithm. The preferential attachment simulations were generally structurally similar to our targeted SRS database, although they had fewer nodes of very high degree. The approach was able to generate signal-free SRS simulations, as well as mimicking specific known true signals. Explorations of different reporting thresholds for the FDA Vaccine Adverse Event Reporting System suggested that using proportional reporting ratio [PRR] > 3.0 may yield better signal detection operating characteristics than the more commonly used PRR > 2.0 threshold. The network analytic approach to SRS simulation based on the principle of preferential attachment provides an attractive framework for exploring the performance of safety signal detection algorithms. This approach is potentially more principled and versatile than existing simulation approaches. The utility of network-based SRS simulations needs to be further explored by evaluating other types of simulated signals with a broader range of data mining approaches, and comparing network-based simulations with other simulation strategies where applicable.
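
For readers unfamiliar with the proportional reporting ratio thresholds compared above, the following minimal sketch computes the PRR from a 2x2 contingency table; the counts are invented purely for illustration.

```python
def proportional_reporting_ratio(a, b, c, d):
    """PRR for a drug-event pair from a 2x2 contingency table:
         a: reports with the drug and the event
         b: reports with the drug, without the event
         c: reports without the drug, with the event
         d: reports without the drug, without the event
    """
    return (a / (a + b)) / (c / (c + d))

# Hypothetical counts for illustration only
a, b, c, d = 12, 488, 150, 49350
prr = proportional_reporting_ratio(a, b, c, d)
print(f"PRR = {prr:.2f}; flagged at >3.0: {prr > 3.0}; flagged at >2.0: {prr > 2.0}")
```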

  7. Analysis of several methods and inertial sensors locations to assess gait parameters in able-bodied subjects.

    PubMed

    Ben Mansour, Khaireddine; Rezzoug, Nasser; Gorce, Philippe

    2015-10-01

    The purpose of this paper was to determine which types of inertial sensors and which advocated locations should be used for reliable and accurate gait event detection and temporal parameter assessment in normal adults. In addition, we aimed to remove the ambiguity found in the literature in the definition of the initial contact (IC) from the lumbar accelerometer. Acceleration and angular velocity data were gathered from the lumbar region and the distal edge of each shank. These data were evaluated in comparison to an instrumented treadmill and an optoelectronic system during five treadmill speed sessions. The lumbar accelerometer showed that the peak of the anteroposterior component was the most accurate for IC detection. Similarly, the valley that followed the peak of the vertical component was the most precise for terminal contact (TC) detection. Results based on ANOVA and Tukey tests showed that the set of inertial methods was suitable for temporal gait assessment and gait event detection in able-bodied subjects. For gait event detection, an exception was found with the shank accelerometer. The tool was suitable for temporal parameter assessment, despite the high root mean square error on the detection of IC (RMSEIC) and TC (RMSETC). The shank gyroscope was found to be as accurate as the kinematic method, since the statistical tests revealed no significant difference between the two techniques for the RMSE of all gait events and temporal parameters. The lumbar and shank accelerometers were the most accurate alternative to the shank gyroscope for gait event detection and temporal parameter assessment, respectively. Copyright © 2015. Published by Elsevier B.V.
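
As a simple illustration of peak-based initial-contact detection from the anteroposterior lumbar acceleration described above (not the authors' exact procedure), the sketch below low-pass filters the signal and picks peaks; the cutoff frequency and minimum step time are assumed values.

```python
import numpy as np
from scipy.signal import butter, filtfilt, find_peaks

def detect_initial_contacts(ap_acc, fs, min_step_s=0.4):
    """Pick peaks of the anteroposterior lumbar acceleration as IC candidates.
    The low-pass cutoff and minimum step time are illustrative choices."""
    b, a = butter(4, 5 / (fs / 2), btype="low")   # keep gait-band content
    smooth = filtfilt(b, a, ap_acc)
    peaks, _ = find_peaks(smooth, distance=int(min_step_s * fs))
    return peaks / fs  # IC times in seconds

# Synthetic example: a ~1.8 Hz step pattern with noise
fs = 100
t = np.arange(0, 10, 1 / fs)
ap = np.sin(2 * np.pi * 1.8 * t) + 0.2 * np.random.randn(len(t))
print(detect_initial_contacts(ap, fs))
```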

  8. Advanced Clinical Decision Support for Vaccine Adverse Event Detection and Reporting.

    PubMed

    Baker, Meghan A; Kaelber, David C; Bar-Shain, David S; Moro, Pedro L; Zambarano, Bob; Mazza, Megan; Garcia, Crystal; Henry, Adam; Platt, Richard; Klompas, Michael

    2015-09-15

    Reporting of adverse events (AEs) following vaccination can help identify rare or unexpected complications of immunizations and aid in characterizing potential vaccine safety signals. We developed an open-source, generalizable clinical decision support system called Electronic Support for Public Health-Vaccine Adverse Event Reporting System (ESP-VAERS) to assist clinicians with AE detection and reporting. ESP-VAERS monitors patients' electronic health records for new diagnoses, changes in laboratory values, and new allergies following vaccinations. When suggestive events are found, ESP-VAERS sends the patient's clinician a secure electronic message with an invitation to affirm or refute the message, add comments, and submit an automated, prepopulated electronic report to VAERS. High-probability AEs are reported automatically if the clinician does not respond. We implemented ESP-VAERS in December 2012 throughout the MetroHealth System, an integrated healthcare system in Ohio. We queried the VAERS database to determine MetroHealth's baseline reporting rates from January 2009 to March 2012 and then assessed changes in reporting rates with ESP-VAERS. In the 8 months following implementation, 91 622 vaccinations were given. ESP-VAERS sent 1385 messages to responsible clinicians describing potential AEs. Clinicians opened 1304 (94.2%) messages, responded to 209 (15.1%), and confirmed 16 for transmission to VAERS. An additional 16 high-probability AEs were sent automatically. Reported events included seizure, pleural effusion, and lymphocytopenia. The odds of a VAERS report submission during the implementation period were 30.2 (95% confidence interval, 9.52-95.5) times greater than the odds during the comparable preimplementation period. An open-source, electronic health record-based clinical decision support system can increase AE detection and reporting rates in VAERS. © The Author 2015. Published by Oxford University Press on behalf of the Infectious Diseases Society of America. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  9. Detection of infectious disease outbreaks in twenty-two fragile states, 2000-2010: a systematic review

    PubMed Central

    2011-01-01

    Fragile states are home to a sixth of the world's population, and their populations are particularly vulnerable to infectious disease outbreaks. Timely surveillance and control are essential to minimise the impact of these outbreaks, but little evidence is published about the effectiveness of existing surveillance systems. We did a systematic review of the circumstances (mode) of detection of outbreaks occurring in 22 fragile states in the decade 2000-2010 (i.e. all states consistently meeting fragility criteria during the timeframe of the review), as well as time lags from onset to detection of these outbreaks, and from detection to further events in their timeline. The aim of this review was to enhance the evidence base for implementing infectious disease surveillance in these complex, resource-constrained settings, and to assess the relative importance of different routes whereby outbreak detection occurs. We identified 61 reports concerning 38 outbreaks. Twenty of these were detected by existing surveillance systems, but 10 detections occurred following formal notifications by participating health facilities rather than data analysis. A further 15 outbreaks were detected by informal notifications, including rumours. There were long delays from onset to detection (median 29 days) and from detection to further events (investigation, confirmation, declaration, control). Existing surveillance systems yielded the shortest detection delays when linked to reduced barriers to health care and frequent analysis and reporting of incidence data. Epidemic surveillance and control appear to be insufficiently timely in fragile states, and need to be strengthened. Greater reliance on formal and informal notifications is warranted. Outbreak reports should be more standardised and enable monitoring of surveillance systems' effectiveness. PMID:21861869

  10. Integrated System for Autonomous Science

    NASA Technical Reports Server (NTRS)

    Chien, Steve; Sherwood, Robert; Tran, Daniel; Cichy, Benjamin; Davies, Ashley; Castano, Rebecca; Rabideau, Gregg; Frye, Stuart; Trout, Bruce; Shulman, Seth; hide

    2006-01-01

    The New Millennium Program Space Technology 6 Project Autonomous Sciencecraft software implements an integrated system for autonomous planning and execution of scientific, engineering, and spacecraft-coordination actions. A prior version of this software was reported in "The TechSat 21 Autonomous Sciencecraft Experiment" (NPO-30784), NASA Tech Briefs, Vol. 28, No. 3 (March 2004), page 33. This software is now in continuous use aboard the Earth Orbiter 1 (EO-1) spacecraft mission and is being adapted for use in the Mars Odyssey and Mars Exploration Rovers missions. This software enables EO-1 to detect and respond to such events of scientific interest as volcanic activity, flooding, and freezing and thawing of water. It uses classification algorithms to analyze imagery onboard to detect changes, including events of scientific interest. Detection of such events triggers acquisition of follow-up imagery. The mission-planning component of the software develops a response plan that accounts for visibility of targets and operational constraints. The plan is then executed under control by a task-execution component of the software that is capable of responding to anomalies.

  11. Classifying seismic waveforms from scratch: a case study in the alpine environment

    NASA Astrophysics Data System (ADS)

    Hammer, C.; Ohrnberger, M.; Fäh, D.

    2013-01-01

    Nowadays, an increasing amount of seismic data is collected by daily observatory routines. The basic step for successfully analyzing those data is the correct detection of various event types. However, the visual scanning process is a time-consuming task. Applying standard detection techniques such as the STA/LTA trigger still requires manual control for classification. Here, we present a useful alternative. The incoming data stream is scanned automatically for events of interest. A stochastic classifier, called a hidden Markov model, is learned for each class of interest, enabling the recognition of highly variable waveforms. In contrast to other automatic techniques such as neural networks or support vector machines, the algorithm allows the classification to be started from scratch as soon as interesting events are identified. Neither the tedious process of collecting training samples nor a time-consuming configuration of the classifier is required. An approach originally introduced for the volcanic task force action allows classifier properties to be learned from a single waveform example and some hours of background recording. Besides reducing the required workload, this also makes it possible to detect very rare events. The latter feature in particular is a milestone for the use of seismic devices in alpine warning systems. Furthermore, the system offers the opportunity to flag new signal classes that have not been defined before. We demonstrate the application of the classification system using a data set from the Swiss Seismological Survey, achieving very high recognition rates. In detail, we document all refinements of the classifier, providing a step-by-step guide for the fast set-up of a well-working classification system.
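
A minimal sketch of the general idea, one HMM learned per event class and classification by maximum likelihood, is given below. It assumes the third-party hmmlearn package and uses crude log band-energy features in place of the study's actual feature set and single-example training procedure.

```python
# Assumes the third-party hmmlearn package; features are simplified to
# log-band energies, not the paper's full feature set.
import numpy as np
from hmmlearn.hmm import GaussianHMM

def band_energy_features(trace, fs, frame=128):
    """Crude frame-wise features: log energy in low/mid/high FFT bands."""
    n = len(trace) // frame
    frames = trace[:n * frame].reshape(n, frame)
    spec = np.abs(np.fft.rfft(frames, axis=1)) ** 2
    bands = [spec[:, 1:8], spec[:, 8:24], spec[:, 24:]]
    return np.log(np.column_stack([b.sum(axis=1) for b in bands]) + 1e-12)

def train_class_model(example_waveform, fs, n_states=4):
    """Learn one HMM per class from a single example waveform (the background
    model used in the paper is omitted here)."""
    X = band_energy_features(example_waveform, fs)
    model = GaussianHMM(n_components=n_states, covariance_type="diag", n_iter=50)
    model.fit(X)
    return model

def classify(trace, fs, models):
    """Assign the class whose HMM gives the highest log-likelihood."""
    X = band_energy_features(trace, fs)
    return max(models, key=lambda name: models[name].score(X))

# Toy usage: two synthetic "classes" with different dominant bands
rng = np.random.default_rng(0)
t = np.arange(0, 60, 0.01)
low = np.sin(2 * np.pi * 1.0 * t) + 0.3 * rng.normal(size=t.size)
high = np.sin(2 * np.pi * 20.0 * t) + 0.3 * rng.normal(size=t.size)
models = {"low": train_class_model(low, 100), "high": train_class_model(high, 100)}
print(classify(high + 0.3 * rng.normal(size=t.size), 100, models))
```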

  12. Electrophysiological and Electrochemical Methods Development for the Detection of Biologically Active Chemical Agents

    DTIC Science & Technology

    1988-11-01

    [Indexing excerpt; figure-list and text fragments from the report] Figure: current-voltage curve for gramicidin in a lecithin-sphingomyelin patch bilayer. Materials, section 2.1 Patch Microprobe Instrumentation: the basis of the microprobe system is an AxoPatch patch-clamping amplifier system. A histogram of 1024 events cut above 2 pA; the sampled events are thought to be from the same single gramicidin channel in a lecithin:sphingomyelin (5:1) patch.

  13. An evaluation of an expert system for detecting critical events during anesthesia in a human patient simulator: a prospective randomized controlled study.

    PubMed

    Görges, Matthias; Winton, Pamela; Koval, Valentyna; Lim, Joanne; Stinson, Jonathan; Choi, Peter T; Schwarz, Stephan K W; Dumont, Guy A; Ansermino, J Mark

    2013-08-01

    Perioperative monitoring systems produce a large amount of uninterpreted data, use threshold alarms prone to artifacts, and rely on the clinician to continuously visually track changes in physiological data. To address these deficiencies, we developed an expert system that provides real-time clinical decisions for the identification of critical events. We evaluated the efficacy of the expert system for enhancing critical event detection in a simulated environment. We hypothesized that anesthesiologists would identify critical ventilatory events more rapidly and accurately with the expert system. We used a high-fidelity human patient simulator to simulate an operating room environment. Participants managed 4 scenarios (anesthetic vapor overdose, tension pneumothorax, anaphylaxis, and endotracheal tube cuff leak) in random order. In 2 of their 4 scenarios, participants were randomly assigned to the expert system, which provided trend-based alerts and potential differential diagnoses. Time to detection and time to treatment were measured. Workload questionnaires and structured debriefings were completed after each scenario, and a usability questionnaire at the conclusion of the session. Data were analyzed using a mixed-effects linear regression model; Fisher exact test was used for workload scores. Twenty anesthesiology trainees and 15 staff anesthesiologists with a combined median (range) of 36 (29-66) years of age and 6 (1-38) years of anesthesia experience participated. For the endotracheal tube cuff leak, the expert system caused mean reductions of 128 (99% confidence interval [CI], 54-202) seconds in time to detection and 140 (99% CI, 79-200) seconds in time to treatment. In the other 3 scenarios, a best-case decrease of 97 seconds (lower 99% CI) in time to diagnosis for anaphylaxis and a worst-case increase of 63 seconds (upper 99% CI) in time to treatment for anesthetic vapor overdose were found. Participants were highly satisfied with the expert system (median score, 2 on a scale of 1-7). Based on participant debriefings, we identified avoidance of task fixation, reassurance to initiate invasive treatment, and confirmation of a suspected diagnosis as 3 safety-critical areas. When using the expert system, clinically important and statistically significant decreases in time to detection and time to treatment were observed for the endotracheal tube cuff leak scenario. The observed differences in the other 3 scenarios were much smaller and not statistically significant. Further evaluation is required to confirm the clinical utility of real-time expert systems for anesthesia.

  14. Automatic detection of lift-off and touch-down of a pick-up walker using 3D kinematics.

    PubMed

    Grootveld, L; Thies, S B; Ogden, D; Howard, D; Kenney, L P J

    2014-02-01

    Walking aids have been associated with falls and it is believed that incorrect use limits their usefulness. Measures are therefore needed that characterize their stable use and the classification of key events in walking aid movement is the first step in their development. This study presents an automated algorithm for detection of lift-off (LO) and touch-down (TD) events of a pick-up walker. For algorithm design and initial testing, a single user performed trials for which the four individual walker feet lifted off the ground and touched down again in various sequences, and for different amounts of frame loading (Dataset_1). For further validation, ten healthy young subjects walked with the pick-up walker on flat ground (Dataset_2a) and on a narrow beam (Dataset_2b), to challenge balance. One 88-year-old walking frame user was also assessed. Kinematic data were collected with a 3D optoelectronic camera system. The algorithm detected over 93% of events (Dataset_1), and 95% and 92% in Dataset_2a and b, respectively. Of the various LO/TD sequences, those associated with natural progression resulted in up to 100% correctly identified events. For the 88-year-old walking frame user, 96% of LO events and 93% of TD events were detected, demonstrating the potential of the approach. Copyright © 2013 IPEM. Published by Elsevier Ltd. All rights reserved.

  15. Enhanced situational awareness in the maritime domain: an agent-based approach for situation management

    NASA Astrophysics Data System (ADS)

    Brax, Christoffer; Niklasson, Lars

    2009-05-01

    Maritime Domain Awareness (MDA) is important for both civilian and military applications. An important part of MDA is the detection of unusual vessel activities such as piracy, smuggling, poaching, collisions, etc. Today's interconnected sensor systems provide huge amounts of information over large geographical areas, which can push operators to their cognitive capacity and cause them to miss important events. We propose an agent-based situation management system that automatically analyses sensor information to detect unusual activity and anomalies. The system combines knowledge-based detection with data-driven anomaly detection. The system is evaluated using information from both radar and AIS sensors.

  16. Physics-based, Bayesian sequential detection method and system for radioactive contraband

    DOEpatents

    Candy, James V; Axelrod, Michael C; Breitfeller, Eric F; Chambers, David H; Guidry, Brian L; Manatt, Douglas R; Meyer, Alan W; Sale, Kenneth E

    2014-03-18

    A distributed sequential method and system for detecting and identifying radioactive contraband from highly uncertain (noisy), low-count radionuclide measurements, i.e. an event mode sequence (EMS), using a statistical approach based on Bayesian inference and physics-model-based signal processing in which a radionuclide is represented as a decomposition into monoenergetic sources. For a given photon event of the EMS, the appropriate monoenergy processing channel is determined using a confidence-interval, condition-based discriminator for the energy amplitude and interarrival time, and parameter estimates are used to update a measured probability density function estimate for a target radionuclide. A sequential likelihood ratio test is then used to determine one of two threshold conditions signifying that the EMS is either identified as the target radionuclide or not; if not, the process is repeated for the next sequential photon event of the EMS until one of the two threshold conditions is satisfied.
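
The sequential likelihood ratio test at the core of the claim can be illustrated with a standard Wald SPRT over per-photon log-likelihood-ratio increments. The sketch below is generic rather than the patented processing chain, and its error rates and toy data are assumptions.

```python
import numpy as np

def sprt_decision(log_lr_increments, alpha=1e-3, beta=1e-3):
    """Wald sequential probability ratio test over per-photon log-likelihood
    ratios (target radionuclide vs. background), using the standard threshold
    approximations A = ln((1-beta)/alpha) and B = ln(beta/(1-alpha))."""
    upper = np.log((1 - beta) / alpha)
    lower = np.log(beta / (1 - alpha))
    s = 0.0
    for k, inc in enumerate(log_lr_increments, start=1):
        s += inc
        if s >= upper:
            return "target identified", k
        if s <= lower:
            return "target rejected", k
    return "undecided", len(log_lr_increments)

# Toy event-mode sequence: increments drawn as if the target were present
rng = np.random.default_rng(0)
print(sprt_decision(rng.normal(0.2, 1.0, size=500)))
```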

  17. Real-time prediction of the occurrence of GLE events

    NASA Astrophysics Data System (ADS)

    Núñez, Marlon; Reyes-Santiago, Pedro J.; Malandraki, Olga E.

    2017-07-01

    A tool for predicting the occurrence of Ground Level Enhancement (GLE) events using the UMASEP scheme is presented. This real-time tool, called HESPERIA UMASEP-500, is based on the detection of the magnetic connection, along which protons arrive in the near-Earth environment, by estimating the lag correlation between the time derivatives of 1 min soft X-ray flux (SXR) and 1 min near-Earth proton fluxes observed by the GOES satellites. Unlike current GLE warning systems, this tool can predict GLE events before the detection by any neutron monitor (NM) station. The prediction performance measured for the period from 1986 to 2016 is presented for two consecutive periods, because of their notable difference in performance. For the 2000-2016 period, this prediction tool obtained a probability of detection (POD) of 53.8% (7 of 13 GLE events), a false alarm ratio (FAR) of 30.0%, and average warning times (AWT) of 8 min with respect to the first NM station's alert and 15 min to the GLE Alert Plus's warning. We have tested the model by replacing the GOES proton data with SOHO/EPHIN proton data, and the results are similar in terms of POD, FAR, and AWT for the same period. The paper also presents a comparison with a GLE warning system.
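
As a rough illustration of the lag-correlation idea summarized above (not the HESPERIA UMASEP-500 code), the sketch below correlates the time derivatives of two 1-min series over a range of lags; the lag range and synthetic data are assumptions.

```python
import numpy as np

def best_lag_correlation(sxr, protons, max_lag=30):
    """Correlate the time derivatives of 1-min SXR and proton-flux series over
    a range of lags and return the lag (in samples) with the highest Pearson
    correlation; a strong peak is read as evidence of magnetic connection."""
    dx = np.diff(sxr)
    dy = np.diff(protons)
    best = (0, -np.inf)
    for lag in range(1, max_lag + 1):
        r = np.corrcoef(dx[:-lag], dy[lag:])[0, 1]
        if r > best[1]:
            best = (lag, r)
    return best

# Synthetic example: proton derivative echoes the SXR derivative 12 min later
rng = np.random.default_rng(1)
sxr = np.cumsum(rng.normal(size=300))
protons = np.roll(sxr, 12) + rng.normal(scale=0.5, size=300)
print(best_lag_correlation(sxr, protons))
```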

  18. A novel seizure detection algorithm informed by hidden Markov model event states

    NASA Astrophysics Data System (ADS)

    Baldassano, Steven; Wulsin, Drausin; Ung, Hoameng; Blevins, Tyler; Brown, Mesha-Gay; Fox, Emily; Litt, Brian

    2016-06-01

    Objective. Recently the FDA approved the first responsive, closed-loop intracranial device to treat epilepsy. Because these devices must respond within seconds of seizure onset and not miss events, they are tuned to have high sensitivity, leading to frequent false positive stimulations and decreased battery life. In this work, we propose a more robust seizure detection model. Approach. We use a Bayesian nonparametric Markov switching process to parse intracranial EEG (iEEG) data into distinct dynamic event states. Each event state is then modeled as a multidimensional Gaussian distribution to allow for predictive state assignment. By detecting event states highly specific for seizure onset zones, the method can identify precise regions of iEEG data associated with the transition to seizure activity, reducing false positive detections associated with interictal bursts. The seizure detection algorithm was translated to a real-time application and validated in a small pilot study using 391 days of continuous iEEG data from two dogs with naturally occurring, multifocal epilepsy. A feature-based seizure detector modeled after the NeuroPace RNS System was developed as a control. Main results. Our novel seizure detection method demonstrated an improvement in false negative rate (0/55 seizures missed versus 2/55 seizures missed) as well as a significantly reduced false positive rate (0.0012 h-1 versus 0.058 h-1). All seizures were detected an average of 12.1 ± 6.9 s before the onset of unequivocal epileptic activity (unequivocal epileptic onset (UEO)). Significance. This algorithm represents a computationally inexpensive, individualized, real-time detection method suitable for implantable antiepileptic devices that may considerably reduce false positive rate relative to current industry standards.

  19. Verifying the Comprehensive Nuclear-Test-Ban Treaty by Radioxenon Monitoring

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ringbom, Anders

    2005-05-24

    The current status of the ongoing establishment of a verification system for the Comprehensive Nuclear-Test-Ban Treaty using radioxenon detection is discussed. As an example of equipment used in this application the newly developed fully automatic noble gas sampling and detection system SAUNA is described, and data collected with this system are discussed. It is concluded that the most important remaining scientific challenges in the field concern event categorization and meteorological backtracking.

  20. Decision support methods for the detection of adverse events in post-marketing data.

    PubMed

    Hauben, M; Bate, A

    2009-04-01

    Spontaneous reporting is a crucial component of post-marketing drug safety surveillance despite its significant limitations. The size and complexity of some spontaneous reporting system databases represent a challenge for drug safety professionals who traditionally have relied heavily on the scientific and clinical acumen of the prepared mind. Computer algorithms that calculate statistical measures of reporting frequency for huge numbers of drug-event combinations are increasingly used to support pharmacovigilance analysts screening large spontaneous reporting system databases. After an overview of pharmacovigilance and spontaneous reporting systems, we discuss the theory and application of contemporary computer algorithms in regular use, those under development, and the practical considerations involved in the implementation of computer algorithms within a comprehensive and holistic drug safety signal detection program.

  1. Infrasonic Detection of a Large Bolide over South Sulawesi, Indonesia on October 8, 2009: Preliminary Results

    NASA Technical Reports Server (NTRS)

    Silber, E. A.; Brown, P. G.; Le Pinchon, A.

    2011-01-01

    In the morning hours of October 8, 2009, a bright object entered Earth's atmosphere over South Sulawesi, Indonesia. This bolide disintegrated above the ground, generating stratospheric infrasound returns that were detected by infrasonic stations of the global International Monitoring System (IMS) Network of the Comprehensive Nuclear-Test-Ban Treaty Organization (CTBTO) at distances up to 17 500 km. Here we present instrumental recordings and preliminary results of this extraordinary event. Using the infrasonic period-yield relations, originally derived for atmospheric nuclear detonations, we find the most probable source energy for this bolide to be 70+/-20 kt TNT equivalent explosive yield. A unique aspect of this event is the fact that it was apparently detected by infrasound only. Global events of such magnitude are expected only once per decade and can be utilized to calibrate infrasonic location and propagation tools on a global scale and to evaluate energy yield formulae and event timing.

  2. Secure Access Control and Large Scale Robust Representation for Online Multimedia Event Detection

    PubMed Central

    Liu, Changyu; Li, Huiling

    2014-01-01

    We developed an online multimedia event detection (MED) system. However, there is a secure access control issue and a large-scale robust representation issue when we want to integrate traditional event detection algorithms into the online environment. For the first issue, we proposed a tree proxy-based and service-oriented access control (TPSAC) model based on the traditional role-based access control model. Verification experiments were conducted on the CloudSim simulation platform, and the results showed that the TPSAC model is suitable for the access control of dynamic online environments. For the second issue, inspired by the object-bank scene descriptor, we proposed a 1000-object-bank (1000OBK) event descriptor. Feature vectors of the 1000OBK were extracted from response pyramids of 1000 generic object detectors which were trained on standard annotated image datasets, such as the ImageNet dataset. A spatial bag-of-words tiling approach was then adopted to encode these feature vectors for bridging the gap between the objects and events. Furthermore, we performed experiments in the context of event classification on the challenging TRECVID MED 2012 dataset, and the results showed that the robust 1000OBK event descriptor outperforms the state-of-the-art approaches. PMID:25147840

  3. A new type of tri-axial accelerometers with high dynamic range MEMS for earthquake early warning

    NASA Astrophysics Data System (ADS)

    Peng, Chaoyong; Chen, Yang; Chen, Quansheng; Yang, Jiansi; Wang, Hongti; Zhu, Xiaoyi; Xu, Zhiqiang; Zheng, Yu

    2017-03-01

    Earthquake Early Warning Systems (EEWS) have shown their efficiency for earthquake damage mitigation. With the progress of low-cost Micro Electro Mechanical Systems (MEMS), many types of MEMS-based accelerometers have been developed and widely used in deploying large-scale, dense seismic networks for EEWS. However, the noise performance of these commercially available MEMS is still insufficient for weak seismic signals, leading to large scatter in the estimation of early-warning parameters. In this study, we developed a new type of tri-axial accelerometer based on high-dynamic-range, low-noise MEMS for use in EEWS. It is a MEMS-integrated data logger with built-in seismological processing. The device is built on a custom-tailored Linux 2.6.27 operating system, and seismic events are detected automatically with an STA/LTA algorithm. When a seismic event is detected, peak ground parameters of all data components are calculated at an interval of 1 s, and τc-Pd values are evaluated using the initial 3 s of the P wave. These values are then organized as a trigger packet and actively sent to the processing center for combined event detection. The output data of all three components are calibrated to a sensitivity of 500 counts/cm/s2. Several tests and a real field test deployment were performed to characterize the performance of this device. The results show that the dynamic range can reach 98 dB for the vertical component and 99 dB for the horizontal components, and the majority of bias temperature coefficients are lower than 200 μg/°C. In addition, the results of event detection and real field deployment have shown its capabilities for EEWS and rapid intensity reporting.
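
The STA/LTA trigger mentioned above is a standard detection algorithm; a minimal sketch is shown below. Window lengths and the trigger threshold are illustrative choices rather than the values used in the device.

```python
import numpy as np

def sta_lta_trigger(acc, fs, sta_s=0.5, lta_s=10.0, on=4.0):
    """Simple STA/LTA on squared acceleration; returns approximate sample
    indices where the short-term/long-term energy ratio exceeds the threshold."""
    x2 = np.asarray(acc, dtype=float) ** 2
    nsta, nlta = int(sta_s * fs), int(lta_s * fs)
    csum = np.cumsum(np.insert(x2, 0, 0.0))
    sta = (csum[nsta:] - csum[:-nsta]) / nsta
    lta = (csum[nlta:] - csum[:-nlta]) / nlta
    ratio = sta[nlta - nsta:] / np.maximum(lta, 1e-12)
    return np.where(ratio > on)[0] + nlta

# Toy record: background noise with a burst starting at t = 60 s
fs = 100
x = np.random.randn(120 * fs) * 0.01
x[60 * fs:62 * fs] += np.random.randn(2 * fs) * 0.2
print(sta_lta_trigger(x, fs)[:1])  # first triggered sample index
```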

  4. Automated oestrus detection using multimetric behaviour recognition in seasonal-calving dairy cattle on pasture.

    PubMed

    Brassel, J; Rohrssen, F; Failing, K; Wehrend, A

    2018-06-11

    To evaluate the performance of a novel accelerometer-based oestrus detection system (ODS) for dairy cows on pasture, in comparison with measurement of concentrations of progesterone in milk, ultrasonographic examination of ovaries and farmer observations. Mixed-breed lactating dairy cows (n=109) in a commercial, seasonal-calving herd managed at pasture under typical farming conditions in Ireland, were fitted with oestrus detection collars 3 weeks prior to mating start date. The ODS performed multimetric analysis of eight different motion patterns to generate oestrus alerts. Data were collected during the artificial insemination period of 66 days, commencing on 16 April 2015. Transrectal ultrasonographic examinations of the reproductive tract and measurements of concentrations of progesterone in milk were used to confirm oestrus events. Visual observations by the farmer and the number of theoretically expected oestrus events were used to evaluate the number of false negative ODS alerts. The percentage of eligible cows that were detected in oestrus at least once (and were confirmed true positives) was calculated for the first 21, 42 and 63 days of the insemination period. During the insemination period, the ODS generated 194 oestrus alerts and 140 (72.2%) were confirmed as true positives. Six confirmed oestrus events recognised by the farmer did not generate ODS alerts. The positive predictive value of the ODS was 72.2 (95% CI=65.3-78.4)%. To account for oestrus events not identified by the ODS or the farmer, four theoretical missed oestrus events were added to the false negatives. Estimated sensitivity of the automated ODS was 93.3 (95% CI=88.1-96.8)%. The proportion of eligible cows that were detected in oestrus during the first 21 days of the insemination period was 92/106 (86.8%), and during the first 42 and 63 days of the insemination period was 103/106 (97.2%) and 105/106 (99.1%), respectively. The ODS under investigation was suitable for oestrus detection in dairy cows on pasture and showed a high sensitivity of oestrus detection. Multimetric analysis of behavioural data seems to be the superior approach to developing and improving ODS for dairy cows on pasture. Due to a high proportion of false positive alerts, its use as a stand-alone system for oestrus detection cannot be recommended. As it is the first time the system was investigated, testing on other farms would be necessary for further validation.

  5. Detecting Biosphere anomalies hotspots

    NASA Astrophysics Data System (ADS)

    Guanche-Garcia, Yanira; Mahecha, Miguel; Flach, Milan; Denzler, Joachim

    2017-04-01

    The amount of satellite remote sensing measurements currently available allows data-driven methods to be applied to investigate environmental processes. The detection of anomalies or abnormal events is crucial for monitoring the Earth system and analyzing their impacts on ecosystems and society. By means of a combination of statistical methods, this study proposes an intuitive and efficient methodology to detect those areas that present hotspots of anomalies, i.e. higher levels of abnormal or extreme events, or more severe phases, during our historical records. Biosphere variables from a preliminary version of the Earth System Data Cube developed within the CAB-LAB project (http://earthsystemdatacube.net/) have been used in this study. This database comprises several atmosphere and biosphere variables spanning 11 years (2001-2011) with 8-day temporal resolution and 0.25° global spatial resolution. In this study, we have used 10 variables that measure the biosphere. The methodology applied to detect abnormal events follows the intuitive idea that anomalies are time steps that are not well represented by a previously estimated statistical model [1]. We combine the use of Autoregressive Moving Average (ARMA) models with a distance metric such as the Mahalanobis distance to detect abnormal events in multiple biosphere variables. In a first step we pre-treat the variables by removing the seasonality and normalizing them locally (μ=0, σ=1). Additionally, we have regionalized the area of study into subregions of similar climate conditions by using the Köppen climate classification. For each climate region and variable we have selected the best ARMA parameters by means of a Bayesian criterion. Then we have obtained the residuals by comparing the fitted models with the original data. To detect the extreme residuals from the 10 variables, we have computed the Mahalanobis distance to the data's mean (Hotelling's T^2), which considers the covariance matrix of the joint distribution. The proposed methodology has been applied to different areas around the globe. The results show that the method is able to detect historic events and also provides a useful tool to define sensitive regions. This method and results have been developed within the framework of the project BACI (http://baci-h2020.eu/), which aims to integrate Earth Observation data to monitor the Earth system and assess the impacts of terrestrial changes. [1] V. Chandola, A. Banerjee and V. Kumar. Anomaly detection: a survey. ACM Computing Surveys (CSUR), vol. 41, no. 3, 2009. [2] P. Mahalanobis. On the generalised distance in statistics. Proceedings of the National Institute of Sciences, vol. 2, pp. 49-55, 1936.
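
A compact sketch of the residual-plus-Mahalanobis idea described above is given below. It assumes the third-party statsmodels package, fixes the ARMA order instead of selecting it with an information criterion, and uses synthetic data; it is an illustration, not the CAB-LAB/BACI pipeline.

```python
# Assumes the third-party statsmodels package; ARMA orders are fixed here
# rather than selected by an information criterion as in the study.
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

def arma_residuals(series, order=(2, 0, 1)):
    """Fit an ARMA model to one deseasonalized, standardized variable and
    return its one-step residuals."""
    return ARIMA(series, order=order).fit().resid

def mahalanobis_scores(residual_matrix):
    """Hotelling-style distance of each time step's residual vector from the
    mean, using the empirical covariance of the joint residuals."""
    R = np.asarray(residual_matrix)
    mu = R.mean(axis=0)
    cov_inv = np.linalg.pinv(np.cov(R, rowvar=False))
    diff = R - mu
    return np.einsum("ij,jk,ik->i", diff, cov_inv, diff)

# Toy example with 3 standardized biosphere-like variables
rng = np.random.default_rng(2)
data = rng.normal(size=(500, 3))
data[250] += 6.0  # inject an anomaly
resid = np.column_stack([arma_residuals(data[:, j]) for j in range(3)])
scores = mahalanobis_scores(resid)
print(int(np.argmax(scores)))  # should point near the injected anomaly
```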

  6. Electrical breakdown detection system for dielectric elastomer actuators

    NASA Astrophysics Data System (ADS)

    Ghilardi, Michele; Busfield, James J. C.; Carpi, Federico

    2017-04-01

    Electrical breakdown of dielectric elastomer actuators (DEAs) is an issue that has to be carefully addressed when designing systems based on this novel technology. Indeed, in some systems electrical breakdown might have serious consequences, not only in terms of interruption of the desired function but also in terms of safety of the overall system (e.g. overheating and even burning). The risk of electrical breakdown often cannot be completely avoided by simply reducing the driving voltages, either because completely safe voltages might not generate sufficient actuation or because internal or external factors might change some properties of the actuator whilst in operation (for example the aging or fatigue of the material, or an externally imposed deformation decreasing the distance between the compliant electrodes). So, there is a clear need for reliable, simple and cost-effective detection systems that are able to acknowledge the occurrence of a breakdown event, making DEA-based devices able to monitor their status and become safer and "self-aware". Here a simple solution for a portable detection system is reported, based on a voltage-divider configuration that detects the voltage drop at the DEA terminals and assesses the occurrence of breakdown via a microcontroller (BeagleBone Black single-board computer) combined with a real-time, ultra-low-latency processing unit (Bela cape, an open-source embedded platform developed at Queen Mary University of London). The system was used both to generate the control signal that drives the actuator and to constantly monitor the functionality of the actuator, detecting any breakdown event and discontinuing the supplied voltage accordingly, so as to obtain a safer controlled actuation. This paper presents preliminary tests of the detection system in different scenarios in order to assess its reliability.

  7. 75 FR 75059 - Mandatory Reporting of Greenhouse Gases: Injection and Geologic Sequestration of Carbon Dioxide

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-12-01

    ... monitoring will achieve detection and quantification of CO 2 in the event surface leakage occurs. The UIC... leakage detection monitoring system or technical specifications should also be described in the MRV plan... of injected CO 2 or from another cause (e.g. natural variability). The MRV plan leakage detection and...

  8. Automatically Recognizing Medication and Adverse Event Information From Food and Drug Administration’s Adverse Event Reporting System Narratives

    PubMed Central

    Polepalli Ramesh, Balaji; Belknap, Steven M; Li, Zuofeng; Frid, Nadya; West, Dennis P

    2014-01-01

    Background: The Food and Drug Administration's (FDA) Adverse Event Reporting System (FAERS) is a repository of spontaneously reported adverse drug events (ADEs) for FDA-approved prescription drugs. FAERS reports include both structured reports and unstructured narratives. The narratives often include essential information for evaluation of the severity, causality, and description of ADEs that is not present in the structured data. The timely identification of unknown toxicities of prescription drugs is an important, unsolved problem. Objective: The objective of this study was to develop an annotated corpus of FAERS narratives and a biomedical named entity tagger to automatically identify ADE-related information in the FAERS narratives. Methods: We developed an annotation guideline and annotated medication information and adverse event related entities in 122 FAERS narratives comprising approximately 23,000 word tokens. A named entity tagger using supervised machine learning approaches was built for detecting medication information and adverse event entities using various categories of features. Results: The annotated corpus had an agreement of over 0.9 Cohen's kappa for medication and adverse event entities. The best performing tagger achieves an overall performance of 0.73 F1 score for detection of medication, adverse event and other named entities. Conclusions: In this study, we developed an annotated corpus of FAERS narratives and machine learning based models for automatically extracting medication and adverse event information from the FAERS narratives. Our study is an important step towards enriching the FAERS data for postmarketing pharmacovigilance. PMID:25600332

  9. Deterrence Requirements and Arms Control Responsibilities: The United State’s Obligation to Ratify the Comprehensive Nuclear Test Ban Treaty

    DTIC Science & Technology

    2010-02-17

    systems to detect a nuclear explosion: seismic, hydroacoustic, infrasound, and radionuclide. These stations are able to detect a nuclear explosion as... These sites detect thousands of seismic events a year, mainly from earthquakes and mining explosions, and have proved effective in detecting past... that detect sound waves in the oceans, and the 60 infrasound stations above ground that detect ultra-low frequency sound waves emitted by nuclear

  10. Continuous robust sound event classification using time-frequency features and deep learning

    PubMed Central

    Song, Yan; Xiao, Wei; Phan, Huy

    2017-01-01

    The automatic detection and recognition of sound events by computers is a requirement for a number of emerging sensing and human computer interaction technologies. Recent advances in this field have been achieved by machine learning classifiers working in conjunction with time-frequency feature representations. This combination has achieved excellent accuracy for classification of discrete sounds. The ability to recognise sounds under real-world noisy conditions, called robust sound event classification, is an especially challenging task that has attracted recent research attention. Another aspect of real-word conditions is the classification of continuous, occluded or overlapping sounds, rather than classification of short isolated sound recordings. This paper addresses the classification of noise-corrupted, occluded, overlapped, continuous sound recordings. It first proposes a standard evaluation task for such sounds based upon a common existing method for evaluating isolated sound classification. It then benchmarks several high performing isolated sound classifiers to operate with continuous sound data by incorporating an energy-based event detection front end. Results are reported for each tested system using the new task, to provide the first analysis of their performance for continuous sound event detection. In addition it proposes and evaluates a novel Bayesian-inspired front end for the segmentation and detection of continuous sound recordings prior to classification. PMID:28892478

  11. Continuous robust sound event classification using time-frequency features and deep learning.

    PubMed

    McLoughlin, Ian; Zhang, Haomin; Xie, Zhipeng; Song, Yan; Xiao, Wei; Phan, Huy

    2017-01-01

    The automatic detection and recognition of sound events by computers is a requirement for a number of emerging sensing and human computer interaction technologies. Recent advances in this field have been achieved by machine learning classifiers working in conjunction with time-frequency feature representations. This combination has achieved excellent accuracy for classification of discrete sounds. The ability to recognise sounds under real-world noisy conditions, called robust sound event classification, is an especially challenging task that has attracted recent research attention. Another aspect of real-word conditions is the classification of continuous, occluded or overlapping sounds, rather than classification of short isolated sound recordings. This paper addresses the classification of noise-corrupted, occluded, overlapped, continuous sound recordings. It first proposes a standard evaluation task for such sounds based upon a common existing method for evaluating isolated sound classification. It then benchmarks several high performing isolated sound classifiers to operate with continuous sound data by incorporating an energy-based event detection front end. Results are reported for each tested system using the new task, to provide the first analysis of their performance for continuous sound event detection. In addition it proposes and evaluates a novel Bayesian-inspired front end for the segmentation and detection of continuous sound recordings prior to classification.
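
As an illustration of the kind of energy-based event detection front end mentioned above (not the authors' implementation), the sketch below marks frames whose log energy exceeds a robust threshold and merges them into candidate segments; frame sizes and the threshold rule are assumptions.

```python
import numpy as np

def energy_event_segments(signal, fs, frame_s=0.025, hop_s=0.010, k=2.5):
    """Frame-energy front end: mark frames whose log energy exceeds the median
    by k median absolute deviations, then merge adjacent marked frames into
    candidate event segments (start, end) in seconds for a classifier."""
    frame, hop = int(frame_s * fs), int(hop_s * fs)
    n = 1 + (len(signal) - frame) // hop
    logE = np.array([np.log(np.sum(signal[i * hop:i * hop + frame] ** 2) + 1e-12)
                     for i in range(n)])
    med = np.median(logE)
    mad = np.median(np.abs(logE - med)) + 1e-12
    active = logE > med + k * mad
    segments, start = [], None
    for i, a in enumerate(active):
        if a and start is None:
            start = i
        elif not a and start is not None:
            segments.append((start * hop_s, i * hop_s))
            start = None
    if start is not None:
        segments.append((start * hop_s, n * hop_s))
    return segments

# Toy example: a 440 Hz tone between 1 s and 2 s in low-level noise
fs = 16000
t = np.arange(0, 3, 1 / fs)
x = 0.01 * np.random.randn(len(t))
x[fs:2 * fs] += np.sin(2 * np.pi * 440 * t[fs:2 * fs])
print(energy_event_segments(x, fs))
```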

  12. Passive acoustic monitoring to detect spawning in large-bodied catostomids

    USGS Publications Warehouse

    Straight, Carrie A.; Freeman, Byron J.; Freeman, Mary C.

    2014-01-01

    Documenting timing, locations, and intensity of spawning can provide valuable information for conservation and management of imperiled fishes. However, deep, turbid or turbulent water, or occurrence of spawning at night, can severely limit direct observations. We have developed and tested the use of passive acoustics to detect distinctive acoustic signatures associated with spawning events of two large-bodied catostomid species (River Redhorse Moxostoma carinatum and Robust Redhorse Moxostoma robustum) in river systems in north Georgia. We deployed a hydrophone with a recording unit at four different locations on four different dates when we could both record and observe spawning activity. Recordings captured 494 spawning events that we acoustically characterized using dominant frequency, 95% frequency, relative power, and duration. We similarly characterized 46 randomly selected ambient river noises. Dominant frequency did not differ between redhorse species and ranged from 172.3 to 14,987.1 Hz. Duration of spawning events ranged from 0.65 to 11.07 s, River Redhorse having longer durations than Robust Redhorse. Observed spawning events had significantly higher dominant and 95% frequencies than ambient river noises. We additionally tested software designed to automate acoustic detection. The automated detection configurations correctly identified 80–82% of known spawning events, and falsely identified spawns 6–7% of the time when none occurred. These rates were combined over all recordings; rates were more variable among individual recordings. Longer spawning events were more likely to be detected. Combined with sufficient visual observations to ascertain species identities and to estimate detection error rates, passive acoustic recording provides a useful tool to study spawning frequency of large-bodied fishes that displace gravel during egg deposition, including several species of imperiled catostomids.

  13. Mobile, hybrid Compton/coded aperture imaging for detection, identification and localization of gamma-ray sources at stand-off distances

    NASA Astrophysics Data System (ADS)

    Tornga, Shawn R.

    The Stand-off Radiation Detection System (SORDS) program is an Advanced Technology Demonstration (ATD) project through the Department of Homeland Security's Domestic Nuclear Detection Office (DNDO) with the goal of detection, identification and localization of weak radiological sources in the presence of large dynamic backgrounds. The Raytheon-SORDS Tri-Modal Imager (TMI) is a mobile truck-based, hybrid gamma-ray imaging system able to quickly detect, identify and localize radiation sources at standoff distances through improved sensitivity while minimizing the false alarm rate. Reconstruction of gamma-ray sources is performed using a combination of two imaging modalities: coded aperture and Compton scatter imaging. The TMI consists of 35 sodium iodide (NaI) crystals, 5x5x2 in3 each, arranged in a random coded aperture mask array (CA), followed by 30 position-sensitive NaI bars, each 24x2.5x3 in3, called the detection array (DA). The CA array acts as both a coded aperture mask and scattering detector for Compton events. The large-area DA array acts as a collection detector for both Compton scattered events and coded aperture events. In this thesis, developed coded aperture, Compton and hybrid imaging algorithms will be described along with their performance. It will be shown that multiple imaging modalities can be fused to improve detection sensitivity over a broader energy range than either alone. Since the TMI is a moving system, peripheral data, such as a Global Positioning System (GPS) and Inertial Navigation System (INS), must also be incorporated. A method of adapting static imaging algorithms to a moving platform has been developed. Also, algorithms were developed in parallel with detector hardware, through the use of extensive simulations performed with the Geometry and Tracking Toolkit v4 (GEANT4). Simulations have been well validated against measured data. Results of image reconstruction algorithms at various speeds and distances will be presented as well as localization capability. Utilizing imaging information will show signal-to-noise gains over spectroscopic algorithms alone.

  14. Real time monitoring of induced seismicity in the Insheim and Landau deep geothermal reservoirs, Upper Rhine Graben, using the new SeisComP3 cross-correlation detector

    NASA Astrophysics Data System (ADS)

    Vasterling, Margarete; Wegler, Ulrich; Bruestle, Andrea; Becker, Jan

    2016-04-01

    Real time information on the locations and magnitudes of induced earthquakes is essential for response plans based on the magnitude frequency distribution. We developed and tested a real time cross-correlation detector focusing on induced microseismicity in deep geothermal reservoirs. The incoming seismological data are cross-correlated in real time with a set of known master events. We use the envelopes of the seismograms rather than the seismograms themselves to account for small changes in the source locations or in the focal mechanisms. Two different detection conditions are implemented: after first passing a single-trace correlation condition, a network correlation is then calculated taking the amplitude information of the seismic network into account. The magnitude is estimated by using the respective ratio of the maximum amplitudes of the master event and the detected event. The detector is implemented as a real time tool and put into practice as a SeisComP3 module, an established open source software for seismological real time data handling and analysis. We validated the reliability and robustness of the detector by an offline playback test using four months of data from monitoring the power plant in Insheim (Upper Rhine Graben, SW Germany). Subsequently, in October 2013 the detector was installed as a real time monitoring system within the project "MAGS2 - Microseismic Activity of Geothermal Systems". Master events from the two neighboring geothermal power plants in Insheim and Landau and two nearby quarries are defined. After detection, manual phase determination and event location are performed at the local seismological survey of the Geological Survey and Mining Authority of Rhineland-Palatinate. Until November 2015 the detector identified 454 events, of which 95% were assigned correctly to the respective source; 5% were misdetections caused by local tectonic events. To evaluate the completeness of the automatically obtained catalogue, it is compared to the event catalogue of the Seismological Service of Southwestern Germany and to the events reported by the company tasked with seismic monitoring of the Insheim power plant. Events missed by the cross-correlation detector are generally very small. They are registered at too few stations to meet the detection criteria. Most of these small events were not locatable. The automatic catalogue has a magnitude of completeness around 0.0 and is significantly more detailed than the catalogue from standard processing of the Seismological Service of Southwestern Germany for this region. For events in the magnitude range of the master event, the magnitude estimated from the amplitude ratio reproduces the local magnitude well. For weaker events there tends to be a small offset. Altogether, the developed real time cross-correlation detector provides robust detections with reliable association of the events to the respective sources and valid magnitude estimates. Thus, it provides input parameters for the mitigation of seismic hazard by using response plans in real time.
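
A much-simplified sketch of the two-stage envelope correlation and amplitude-ratio magnitude estimate described above is given below; the thresholds, normalization and log10 magnitude scaling are assumptions rather than the operational SeisComP3 module's settings.

```python
import numpy as np
from scipy.signal import hilbert

def envelope(trace):
    """Analytic-signal envelope of one seismogram."""
    return np.abs(hilbert(trace))

def envelope_detection(stream, master, single_thresh=0.7, network_thresh=0.6):
    """Two-stage check in the spirit of the detector above: a per-trace
    envelope correlation against the master event, then a network average.
    Thresholds and normalization are illustrative, not operational values."""
    single = []
    for trace, m in zip(stream, master):
        e, em = envelope(trace), envelope(m)
        e = (e - e.mean()) / (e.std() + 1e-12)
        em = (em - em.mean()) / (em.std() + 1e-12)
        single.append(float(np.dot(e, em) / len(e)))
    if max(single) < single_thresh:
        return False, single
    return float(np.mean(single)) > network_thresh, single

def relative_magnitude(master_mag, peak_amp_detected, peak_amp_master):
    """Magnitude from the peak-amplitude ratio relative to the master event,
    assuming a standard log10 amplitude-magnitude scaling."""
    return master_mag + np.log10(peak_amp_detected / peak_amp_master)

# Toy check: one channel, with the master being a transient wavelet
t = np.arange(0, 2, 0.01)
wavelet = np.exp(-((t - 1.0) ** 2) / 0.05) * np.sin(2 * np.pi * 8 * t)
noisy = wavelet + 0.1 * np.random.randn(t.size)
print(envelope_detection([noisy], [wavelet]))
print(relative_magnitude(1.2, np.abs(noisy).max(), np.abs(wavelet).max()))
```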

  15. The Whipple Mission: Exploring the Kuiper Belt and the Oort Cloud

    NASA Astrophysics Data System (ADS)

    Holman, Matthew J.; Alcock, Charles; Kenter, Almus T.; Kraft, Ralph P.; Nulsen, Paul; Payne, Matthew John; Vrtilek, Jan M.; Murray, Stephen S.; Murray-Clay, Ruth; Schlichting, Hilke; Brown, Michael E.; Livingston, John H.; Trangsrud, Amy R.; Werner, Michael W.

    2015-01-01

    Whipple will characterize the small body populations of the Kuiper Belt and the Oort Cloud with a blind occultation survey, detecting objects when they briefly (~1 second) interrupt the light from background stars, allowing the detection of much more distant and/or smaller objects than can be seen in reflected sunlight. Whipple will reach much deeper into the unexplored frontier of the outer solar system than any other mission, current or proposed. Whipple will look back to the dawn of the solar system by discovering its most remote bodies where primordial processes left their imprint. Specifically, Whipple will monitor large numbers of stars at high cadences (~12,000 stars at 20 Hz to examine Kuiper Belt events; as many as ~36,000 stars at 5 Hz to explore deep into the Oort Cloud, where events are less frequent). Analysis of the detected events will allow us to determine the size spectrum of bodies in the Kuiper Belt with radii as small as ~1 km. This will allow the testing of models of the growth and later collisional erosion of planetesimals in the early solar system. Whipple will explore the Oort Cloud, detecting objects as far out as ~10,000 AU. This will be the first direct exploration of the Oort Cloud since the original hypothesis of 1950. Whipple is a Discovery class mission that will be proposed to NASA in response to the upcoming Announcement of Opportunity. The mission is being developed jointly by the Smithsonian Astrophysical Observatory, Jet Propulsion Laboratory, and Ball Aerospace & Technologies, with telescope optics from L-3 Integrated Optical Systems.

  16. The Whipple Mission: Exploring the Kuiper Belt and the Oort Cloud

    NASA Astrophysics Data System (ADS)

    Alcock, Charles; Brown, Michael; Gauron, Tom; Heneghan, Cate; Holman, Matthew; Kenter, Almus; Kraft, Ralph; Livingston, John; Murray, Stephen; Murray-Clay, Ruth; Nulsen, Paul; Payne, Matthew; Schlichting, Hilke; Trangsrud, Amy; Vrtilek, Jan; Werner, Michael

    2014-11-01

    Whipple will characterize the small body populations of the Kuiper Belt and the Oort Cloud with a blind occultation survey, detecting objects when they briefly (~1 second) interrupt the light from background stars, allowing the detection of much more distant and/or smaller objects than can be seen in reflected sunlight. Whipple will reach much deeper into the unexplored frontier of the outer solar system than any other mission, current or proposed. Whipple will look back to the dawn of the solar system by discovering its most remote bodies where primordial processes left their imprint. Specifically, Whipple will monitor large numbers of stars at high cadences (~12,000 stars at 20 Hz to examine Kuiper Belt events; as many as ~36,000 stars at 5 Hz to explore deep into the Oort Cloud, where events are less frequent). Analysis of the detected events will allow us to determine the size spectrum of bodies in the Kuiper Belt with radii as small as ~1 km. This will allow the testing of models of the growth and later collisional erosion of planetesimals in the early solar system. Whipple will explore the Oort Cloud, detecting objects as far out as ~10,000 AU. This will be the first direct exploration of the Oort Cloud since the original hypothesis of 1950. Whipple is a Discovery class mission that will be proposed to NASA in response to the 2014 Announcement of Opportunity. The mission is being developed jointly by the Smithsonian Astrophysical Observatory, Jet Propulsion Laboratories, and Ball Aerospace & Technologies, with telescope optics from L-3 Integrated Optical Systems.

  17. The Whipple Mission: Exploring the Kuiper Belt and the Oort Cloud

    NASA Astrophysics Data System (ADS)

    Alcock, C.; Brown, M. E.; Gauron, T.; Heneghan, C.; Holman, M. J.; Kenter, A.; Kraft, R.; Lee, R.; Livingston, J.; Mcguire, J.; Murray, S. S.; Murray-Clay, R.; Nulsen, P.; Payne, M. J.; Schlichting, H.; Trangsrud, A.; Vrtilek, J.; Werner, M.

    2014-12-01

    Whipple will characterize the small body populations of the Kuiper Belt and the Oort Cloud with a blind occultation survey, detecting objects when they briefly (~1 second) interrupt the light from background stars, allowing the detection of much more distant and/or smaller objects than can be seen in reflected sunlight. Whipple will reach much deeper into the unexplored frontier of the outer solar system than any other mission, current or proposed. Whipple will look back to the dawn of the solar system by discovering its most remote bodies where primordial processes left their imprint. Specifically, Whipple will monitor large numbers of stars at high cadences (~12,000 stars at 20 Hz to examine Kuiper Belt events; as many as ~36,000 stars at 5 Hz to explore deep into the Oort Cloud, where events are less frequent). Analysis of the detected events will allow us to determine the size spectrum of bodies in the Kuiper Belt with radii as small as ~1 km. This will allow the testing of models of the growth and later collisional erosion of planetesimals in the early solar system. Whipple will explore the Oort Cloud, detecting objects as far out as ~10,000 AU. This will be the first direct exploration of the Oort Cloud since the original hypothesis of 1950. Whipple is a Discovery class mission that will be proposed to NASA in response to the 2014 Announcement of Opportunity. The mission is being developed jointly by the Smithsonian Astrophysical Observatory, Jet Propulsion Laboratories, and Ball Aerospace & Technologies, with telescope optics from L-3 Integrated Optical Systems.

  18. Acoustic Emission Sensing for Maritime Diesel Engine Performance and Health

    DTIC Science & Technology

    2016-05-01

    diesel internal combustion engine operating condition and health. A commercial-off-the-shelf AE monitoring system and a purpose-built data acquisition ... subjected to external events such as a combustion event, fluid flow or the opening and closing of valves. This document reports on the monitoring and ... conjunction with injection-combustion processes and valve events. AE from misfire as the result of a fuel injector malfunction was readily detectable.

  19. Masquerade Detection Using a Taxonomy-Based Multinomial Modeling Approach in UNIX Systems

    DTIC Science & Technology

    2008-08-25

    primarily the modeling of statistical features, such as the frequency of events, the duration of events, the co-occurrence of multiple events ... are identified, we can extract features representing such behavior while auditing the user's behavior. Figure 1: Taxonomy of Linux and Unix ... achieved when the features are extracted just from simple commands. Method, Hit Rate, False Positive Rate: ocSVM using simple cmds (freq.-based

  20. Planet–Planet Occultations in TRAPPIST-1 and Other Exoplanet Systems

    NASA Astrophysics Data System (ADS)

    Luger, Rodrigo; Lustig-Yaeger, Jacob; Agol, Eric

    2017-12-01

    We explore the occurrence and detectability of planet–planet occultations (PPOs) in exoplanet systems. These are events during which a planet occults the disk of another planet in the same system, imparting a small photometric signal as its thermal or reflected light is blocked. We focus on the planets in TRAPPIST-1, whose orbital planes we show are aligned to within 0.3° at 90% confidence. We present a photodynamical model for predicting and computing PPOs in TRAPPIST-1 and other systems for various assumptions of the planets’ atmospheric states. When marginalizing over the uncertainties on all orbital parameters, we find that the rate of PPOs in TRAPPIST-1 is about 1.4 per day. We investigate the prospects for detection of these events with the James Webb Space Telescope, finding that ∼10–20 occultations per year of b and c should be above the noise level at 12–15 μm. Joint modeling of several of these PPOs could lead to a robust detection. Alternatively, observations with the proposed Origins Space Telescope should be able to detect individual PPOs at high signal-to-noise ratios. We show how PPOs can be used to break transit timing variation degeneracies, imposing strong constraints on the eccentricities and masses of the planets, as well as to constrain the longitudes of nodes and thus the complete three-dimensional structure of the system. We further show how modeling of these events can be used to reveal a planet’s day/night temperature contrast and construct crude surface maps. We make our photodynamical code available on github (https://github.com/rodluger/planetplanet).

  1. Developing an EEG-based on-line closed-loop lapse detection and mitigation system

    PubMed Central

    Wang, Yu-Te; Huang, Kuan-Chih; Wei, Chun-Shu; Huang, Teng-Yi; Ko, Li-Wei; Lin, Chin-Teng; Cheng, Chung-Kuan; Jung, Tzyy-Ping

    2014-01-01

    In America, 60% of adults reported that they have driven a motor vehicle while feeling drowsy, and at least 15–20% of fatal car accidents are fatigue-related. This study translates previous laboratory-oriented neurophysiological research to design, develop, and test an On-line Closed-loop Lapse Detection and Mitigation (OCLDM) System featuring a mobile wireless dry-sensor EEG headgear and a cell-phone based real-time EEG processing platform. Eleven subjects participated in an event-related lane-keeping task, in which they were instructed to manipulate a randomly deviated, fixed-speed cruising car on a 4-lane highway. This was simulated in a 1st person view with an 8-screen and 8-projector immersive virtual-reality environment. When the subjects experienced lapses or failed to respond to events during the experiment, auditory warning was delivered to rectify the performance decrements. However, the arousing auditory signals were not always effective. The EEG spectra exhibited statistically significant differences between effective and ineffective arousing signals, suggesting that EEG spectra could be used as a countermeasure of the efficacy of arousing signals. In this on-line pilot study, the proposed OCLDM System was able to continuously detect EEG signatures of fatigue, deliver arousing warning to subjects suffering momentary cognitive lapses, and assess the efficacy of the warning in near real-time to rectify cognitive lapses. The on-line testing results of the OCLDM System validated the efficacy of the arousing signals in improving subjects' response times to the subsequent lane-departure events. This study may lead to a practical on-line lapse detection and mitigation system in real-world environments. PMID:25352773

  2. Automated surveillance of 911 call data for detection of possible water contamination incidents.

    PubMed

    Haas, Adam J; Gibbons, Darcy; Dangel, Chrissy; Allgeier, Steve

    2011-03-30

    Drinking water contamination, with the capability to affect large populations, poses a significant risk to public health. In recent water contamination events, the impact of contamination on public health appeared in data streams monitoring health-seeking behavior. While public health surveillance has traditionally focused on the detection of pathogens, developing methods for detection of illness from fast-acting chemicals has not been an emphasis. An automated surveillance system was implemented for Cincinnati's drinking water contamination warning system to monitor health-related 911 calls in the city of Cincinnati. Incident codes indicative of possible water contamination were filtered from all 911 calls for analysis. The 911 surveillance system uses a space-time scan statistic to detect potential water contamination incidents. The frequency and characteristics of the 911 alarms over a 2.5 year period were studied. During the evaluation, 85 alarms occurred, although most occurred prior to the implementation of an additional alerting constraint in May 2009. Data were available for analysis approximately 48 minutes after calls, indicating that alarms may be generated 1-2 hours after a rapid increase in call volume. Most alerts occurred in areas of high population density. The average alarm area was 9.22 square kilometers. The average number of cases in an alarm was nine calls. The 911 surveillance system provides timely notification of possible public health events, but did have limitations. While the alarms contained incident codes and the location of the caller, additional information such as medical status was not available to assist in validating the cause of the alarm. Furthermore, users indicated that a better understanding of 911 system functionality is necessary to understand how it would behave in an actual water contamination event.
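
    The core of the detection step is a space-time scan statistic. Below is a minimal sketch of the Poisson (Kulldorff-style) version of that idea, scanning small space-time cylinders over geocoded call data; the set of radii, the time windows and the flat expected-rate model are illustrative assumptions, not the Cincinnati system's configuration.

```python
import numpy as np

def poisson_llr(c, e, C):
    """Log-likelihood ratio for c observed vs e expected cases inside a cylinder,
    with C total cases overall (Kulldorff-style Poisson statistic)."""
    if c <= e or e <= 0 or c >= C:
        return 0.0
    return c * np.log(c / e) + (C - c) * np.log((C - c) / (C - e))

def scan_calls(xy, t, expected_rate, radii=(1.0, 2.0), max_days=2):
    """xy: (N, 2) call locations [km]; t: (N,) call days (integers);
    expected_rate: assumed baseline calls per day per unit area.
    Returns the highest-scoring (center_index, radius, day_window, llr)."""
    xy, t = np.asarray(xy, dtype=float), np.asarray(t)
    C = len(t)
    best = (None, None, None, 0.0)
    for i in range(C):                                   # candidate centers at call locations
        d = np.hypot(*(xy - xy[i]).T)
        for r in radii:
            in_circle = d <= r
            for w in range(1, max_days + 1):             # most recent w days
                in_time = t > t.max() - w
                c = int(np.sum(in_circle & in_time))
                e = expected_rate * np.pi * r ** 2 * w   # expected count in the cylinder
                llr = poisson_llr(c, e, C)
                if llr > best[3]:
                    best = (i, r, w, llr)
    return best
```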

  3. Automated surveillance of 911 call data for detection of possible water contamination incidents

    PubMed Central

    2011-01-01

    Background Drinking water contamination, with the capability to affect large populations, poses a significant risk to public health. In recent water contamination events, the impact of contamination on public health appeared in data streams monitoring health-seeking behavior. While public health surveillance has traditionally focused on the detection of pathogens, developing methods for detection of illness from fast-acting chemicals has not been an emphasis. Methods An automated surveillance system was implemented for Cincinnati's drinking water contamination warning system to monitor health-related 911 calls in the city of Cincinnati. Incident codes indicative of possible water contamination were filtered from all 911 calls for analysis. The 911 surveillance system uses a space-time scan statistic to detect potential water contamination incidents. The frequency and characteristics of the 911 alarms over a 2.5 year period were studied. Results During the evaluation, 85 alarms occurred, although most occurred prior to the implementation of an additional alerting constraint in May 2009. Data were available for analysis approximately 48 minutes after calls, indicating that alarms may be generated 1-2 hours after a rapid increase in call volume. Most alerts occurred in areas of high population density. The average alarm area was 9.22 square kilometers. The average number of cases in an alarm was nine calls. Conclusions The 911 surveillance system provides timely notification of possible public health events, but did have limitations. While the alarms contained incident codes and the location of the caller, additional information such as medical status was not available to assist in validating the cause of the alarm. Furthermore, users indicated that a better understanding of 911 system functionality is necessary to understand how it would behave in an actual water contamination event. PMID:21450105

  4. Demonstrating the Value of Near Real-time Satellite-based Earth Observations in a Research and Education Framework

    NASA Astrophysics Data System (ADS)

    Chiu, L.; Hao, X.; Kinter, J. L.; Stearn, G.; Aliani, M.

    2017-12-01

    The launch of the GOES-16 series provides an opportunity to advance near real-time applications in natural hazard detection, monitoring and warning. This study demonstrates the capability and value of receiving real-time satellite-based Earth observations over fast terrestrial networks and processing high-resolution remote sensing data in a university environment. The demonstration system includes 4 components: 1) near real-time data receiving and processing; 2) data analysis and visualization; 3) event detection and monitoring; and 4) information dissemination. Various tools are developed and integrated to receive and process GRB data in near real-time, produce images and value-added data products, and detect and monitor extreme weather events such as hurricanes, fire, flooding, fog, lightning, etc. A web-based application system is developed to disseminate near real-time satellite images and data products. The images are generated in a GIS-compatible format (GeoTIFF) to enable convenient use and integration in various GIS platforms. This study enhances capacities for undergraduate and graduate education in Earth system and climate sciences and related applications, helping students understand the basic principles and technology of real-time applications with remote sensing measurements. It also provides an integrated platform for near real-time monitoring of extreme weather events, which is helpful for various user communities.

  5. Information Assurance Technology Analysis Center Information Assurance Tools Report Intrusion Detection

    DTIC Science & Technology

    1998-01-01

    such as central processing unit (CPU) usage, disk input/output (I/O), memory usage, user activity, and number of logins attempted. The statistics ... EMERALD, Commercial, anomaly detection, system monitoring, SRI, porras@csl.sri.com, www.csl.sri.com/emerald/index.html ... Gabriel, Commercial, system ... sensors, it starts to protect the network with minimal configuration and maximum intelligence. T 11 EMERALD TITLE EMERALD (Event Monitoring

  6. Dynamic Fault Detection Chassis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mize, Jeffery J

    2007-01-01

    The high frequency switching megawatt-class High Voltage Converter Modulator (HVCM) developed by Los Alamos National Laboratory for the Oak Ridge National Laboratory's Spallation Neutron Source (SNS) is now in operation. One of the major problems with the modulator systems is shoot-thru conditions that can occur in an IGBT H-bridge topology, resulting in large fault currents and device failure in a few microseconds. The Dynamic Fault Detection Chassis (DFDC) is a fault monitoring system; it monitors transformer flux saturation using a window comparator and dV/dt events on the cathode voltage caused by any abnormality such as capacitor breakdown, transformer primary turns shorts, or dielectric breakdown between the transformer primary and secondary. If faults are detected, the DFDC will inhibit the IGBT gate drives and shut the system down, significantly reducing the possibility of a shoot-thru condition or other equipment damaging events. In this paper, we will present system integration considerations, performance characteristics of the DFDC, and discuss its ability to significantly reduce costly down time for the entire facility.

  7. Using natural archives to detect climate and environmental tipping points in the Earth System

    NASA Astrophysics Data System (ADS)

    Thomas, Zoë A.

    2016-11-01

    'Tipping points' in the Earth system are characterised by a nonlinear response to gradual forcing, and may have severe and wide-ranging impacts. Many abrupt events result from simple underlying system dynamics termed 'critical transitions' or 'bifurcations'. One of the best ways to identify and potentially predict threshold behaviour in the climate system is through analysis of natural ('palaeo') archives. Specifically, on the approach to a tipping point, early warning signals can be detected as characteristic fluctuations in a time series as a system loses stability. Testing whether these early warning signals can be detected in highly complex real systems is a key challenge, since much work is either theoretical or only tested with simple models. This is particularly problematic in palaeoclimate and palaeoenvironmental records with low resolution, non-equidistant data, which can limit accurate analysis. Here, a range of different datasets are examined to explore generic rules that can be used to detect such dramatic events. A number of key criteria are identified to be necessary for the reliable identification of early warning signals in natural archives, most crucially, the need for a low-noise record of sufficient data length, resolution and accuracy. A deeper understanding of the underlying system dynamics is required to inform the development of more robust system-specific indicators, or to indicate the temporal resolution required, given a known forcing. This review demonstrates that time series precursors from natural archives provide a powerful means of forewarning tipping points within the Earth System.
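
    A minimal sketch of the generic early-warning indicators referred to above (rising variance and lag-1 autocorrelation computed in a sliding, detrended window); the window length and linear detrending are illustrative choices, and a real palaeo record would first need resampling to an even time step.

```python
import numpy as np

def early_warning_indicators(series, window=100):
    """Rolling variance and lag-1 autocorrelation as generic early-warning signals.

    series : evenly spaced 1-D time series (already interpolated if necessary).
    Returns (variance, ac1) arrays aligned with the right edge of each window.
    """
    x = np.asarray(series, dtype=float)
    idx = np.arange(window)
    variance, ac1 = [], []
    for end in range(window, x.size + 1):
        seg = x[end - window:end]
        seg = seg - np.polyval(np.polyfit(idx, seg, 1), idx)   # remove local linear trend
        variance.append(seg.var())
        ac1.append(np.corrcoef(seg[:-1], seg[1:])[0, 1])
    return np.array(variance), np.array(ac1)

# An upward trend in both indicators ahead of a known transition is the
# signature of critical slowing down discussed in the abstract.
```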

  8. Utilizing Intrinsic Properties of Polyaniline to Detect Nucleic Acid Hybridization through UV-Enhanced Electrostatic Interaction.

    PubMed

    Sengupta, Partha Pratim; Gloria, Jared N; Amato, Dahlia N; Amato, Douglas V; Patton, Derek L; Murali, Beddhu; Flynt, Alex S

    2015-10-12

    Detection of specific RNA or DNA molecules by hybridization to "probe" nucleic acids via complementary base-pairing is a powerful method for analysis of biological systems. Here we describe a strategy for transducing hybridization events through modulating intrinsic properties of the electroconductive polymer polyaniline (PANI). When DNA-based probes electrostatically interact with PANI, its fluorescence properties are increased, a phenomenon that can be enhanced by UV irradiation. Hybridization of target nucleic acids results in dissociation of probes causing PANI fluorescence to return to basal levels. By monitoring restoration of base PANI fluorescence, as little as 10^(-11) M (10 pM) of target oligonucleotides could be detected within 15 min of hybridization. Detection of complementary oligos was specific, with introduction of a single mismatch failing to form a target-probe duplex that would dissociate from PANI. Furthermore, this approach is robust and is capable of detecting specific RNAs in extracts from animals. This sensor system improves on previously reported strategies by transducing highly specific probe dissociation events through intrinsic properties of a conducting polymer without the need for additional labels.

  9. Reaction times to weak test lights [psychophysics biological model]

    NASA Technical Reports Server (NTRS)

    Wandell, B. A.; Ahumada, P.; Welsh, D.

    1984-01-01

    Maloney and Wandell (1984) describe a model of the response of a single visual channel to weak test lights. The initial channel response is a linearly filtered version of the stimulus. The filter output is randomly sampled over time. Each time a sample occurs there is some probability - increasing with the magnitude of the sampled response - that a discrete detection event is generated. Maloney and Wandell derive the statistics of the detection events. In this paper a test is conducted of the hypothesis that the reaction time responses to the presence of a weak test light are initiated at the first detection event. This makes it possible to extend the application of the model to lights that are slightly above threshold, but still within the linear operating range of the visual system. A parameter-free prediction of the model proposed by Maloney and Wandell for lights detected by this statistic is tested. The data are in agreement with the prediction.

  10. Automatic detection of adverse events to predict drug label changes using text and data mining techniques.

    PubMed

    Gurulingappa, Harsha; Toldo, Luca; Rajput, Abdul Mateen; Kors, Jan A; Taweel, Adel; Tayrouz, Yorki

    2013-11-01

    The aim of this study was to assess the impact of automatically detected adverse event signals from text and open-source data on the prediction of drug label changes. Open-source adverse effect data were collected from the FAERS, Yellow Card and SIDER databases. A shallow linguistic relation extraction system (JSRE) was applied for extraction of adverse effects from MEDLINE case reports. A statistical approach was applied to the extracted datasets for signal detection and subsequent prediction of label changes issued for 29 drugs by the UK Regulatory Authority in 2009. 76% of drug label changes were automatically predicted. Out of these, 6% of drug label changes were detected only by text mining. JSRE enabled precise identification of four adverse drug events from MEDLINE that were undetectable otherwise. Changes in drug labels can be predicted automatically using data and text mining techniques. Text mining technology is mature and well-placed to support pharmacovigilance tasks. Copyright © 2013 John Wiley & Sons, Ltd.

  11. [Comparison of the "Trigger" tool with the minimum basic data set for detecting adverse events in general surgery].

    PubMed

    Pérez Zapata, A I; Gutiérrez Samaniego, M; Rodríguez Cuéllar, E; Gómez de la Cámara, A; Ruiz López, P

    Surgery carries a high risk for the occurrence of adverse events (AE). The main objective of this study is to compare the effectiveness of the Trigger tool with the Hospital National Health System registration of Discharges, the minimum basic data set (MBDS), in detecting adverse events in patients admitted to General Surgery and undergoing surgery. Observational and descriptive retrospective study of patients admitted to general surgery of a tertiary hospital, and undergoing surgery in 2012. The identification of adverse events was made by reviewing the medical records, using an adaptation of the "Global Trigger Tool" methodology, as well as the MBDS registered for the same patients. Once the AE were identified, they were classified according to damage and to the extent to which they could have been avoided. The area under the ROC curve was used to determine the discriminatory power of the tools. The Hanley and McNeil test was used to compare both tools. AE prevalence was 36.8%. The TT detected 89.9% of all AE, while the MBDS detected 28.48%. The TT provides more information on the nature and characteristics of the AE. The area under the curve was 0.89 for the TT and 0.66 for the MBDS. These differences were statistically significant (P<.001). The Trigger tool detects three times more adverse events than the MBDS registry. The prevalence of adverse events in General Surgery is higher than that estimated in other studies. Copyright © 2017 SECA. Publicado por Elsevier España, S.L.U. All rights reserved.

  12. Facilitating adverse drug event detection in pharmacovigilance databases using molecular structure similarity: application to rhabdomyolysis

    PubMed Central

    Vilar, Santiago; Harpaz, Rave; Chase, Herbert S; Costanzi, Stefano; Rabadan, Raul

    2011-01-01

    Background Adverse drug events (ADE) cause considerable harm to patients, and consequently their detection is critical for patient safety. The US Food and Drug Administration maintains an adverse event reporting system (AERS) to facilitate the detection of ADE in drugs. Various data mining approaches have been developed that use AERS to detect signals identifying associations between drugs and ADE. The signals must then be monitored further by domain experts, which is a time-consuming task. Objective To develop a new methodology that combines existing data mining algorithms with chemical information by analysis of molecular fingerprints to enhance initial ADE signals generated from AERS, and to provide a decision support mechanism to facilitate the identification of novel adverse events. Results The method achieved a significant improvement in precision in identifying known ADE, and a more than twofold signal enhancement when applied to the ADE rhabdomyolysis. The simplicity of the method assists in highlighting the etiology of the ADE by identifying structurally similar drugs. A set of drugs with strong evidence from both AERS and molecular fingerprint-based modeling is constructed for further analysis. Conclusion The results demonstrate that the proposed methodology could be used as a pharmacovigilance decision support tool to facilitate ADE detection. PMID:21946238
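
    A minimal sketch of the chemistry half of such an approach, assuming RDKit Morgan fingerprints and Tanimoto similarity to score a candidate drug against drugs already associated with the adverse event; the fingerprint type, radius and example molecules are illustrative and not necessarily those used in the study.

```python
from rdkit import Chem, DataStructs
from rdkit.Chem import AllChem

def similarity_to_known_offenders(candidate_smiles, offender_smiles_list, radius=2, nbits=2048):
    """Highest Tanimoto similarity of a candidate drug to drugs already linked to an ADE
    (e.g. rhabdomyolysis), usable to boost or damp a spontaneous-report signal."""
    def fingerprint(smiles):
        mol = Chem.MolFromSmiles(smiles)
        return AllChem.GetMorganFingerprintAsBitVect(mol, radius, nBits=nbits)

    candidate_fp = fingerprint(candidate_smiles)
    return max(DataStructs.TanimotoSimilarity(candidate_fp, fingerprint(s))
               for s in offender_smiles_list)

# Illustrative usage with arbitrary example molecules (aspirin vs. ibuprofen/ethanol):
# score = similarity_to_known_offenders("CC(=O)Oc1ccccc1C(=O)O",
#                                       ["CC(C)Cc1ccc(cc1)C(C)C(=O)O", "CCO"])
```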

  13. Using Knowledge Base for Event-Driven Scheduling of Web Monitoring Systems

    NASA Astrophysics Data System (ADS)

    Kim, Yang Sok; Kang, Sung Won; Kang, Byeong Ho; Compton, Paul

    Web monitoring systems report any changes to their target web pages by revisiting them frequently. As they operate under significant resource constraints, it is essential to minimize revisits while ensuring minimal delay and maximum coverage. Various statistical scheduling methods have been proposed to resolve this problem; however, they are static and cannot easily cope with events in the real world. This paper proposes a new scheduling method that manages unpredictable events. An MCRDR (Multiple Classification Ripple-Down Rules) document classification knowledge base was reused to detect events and to initiate a prompt web monitoring process independent of a static monitoring schedule. Our experiment demonstrates that the approach improves monitoring efficiency significantly.

  14. Development of a general method for detection and quantification of the P35S promoter based on assessment of existing methods

    PubMed Central

    Wu, Yuhua; Wang, Yulei; Li, Jun; Li, Wei; Zhang, Li; Li, Yunjing; Li, Xiaofei; Li, Jun; Zhu, Li; Wu, Gang

    2014-01-01

    The Cauliflower mosaic virus (CaMV) 35S promoter (P35S) is a commonly used target for detection of genetically modified organisms (GMOs). There are currently 24 reported detection methods, targeting different regions of the P35S promoter. Initial assessment revealed that, due to the absence of primer binding sites in the P35S sequence, 19 of the 24 reported methods failed to detect P35S in MON88913 cotton, and two other methods could only be applied to certain GMOs. The remaining three reported methods were not suitable for measurement of P35S in some testing events, because SNPs in the binding sites of the primer/probe would result in abnormal amplification plots and poor linear regression parameters. In this study, we discovered a conserved region in the P35S sequence through sequencing of P35S promoters from multiple transgenic events, and developed new qualitative and quantitative detection systems targeting this conserved region. The qualitative PCR could detect the P35S promoter in 23 unique GMO events with high specificity and sensitivity. The quantitative method was suitable for measurement of the P35S promoter, exhibiting good agreement between the amount of template and Ct values for each testing event. This study provides a general P35S screening method, with greater coverage than existing methods. PMID:25483893
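
    For the quantitative part of such a method, the usual bookkeeping is a standard curve relating Ct to log10 template amount, from which amplification efficiency follows. A short worked example with illustrative numbers (not the study's data):

```python
import numpy as np

# Illustrative standard-curve data: 10-fold serial dilutions of template vs. measured Ct.
template_copies = np.array([1e6, 1e5, 1e4, 1e3, 1e2])
ct = np.array([18.1, 21.5, 24.9, 28.3, 31.6])

slope, intercept = np.polyfit(np.log10(template_copies), ct, 1)
efficiency = 10 ** (-1.0 / slope) - 1.0            # ideal amplification efficiency is 1.0 (100%)
r_squared = np.corrcoef(np.log10(template_copies), ct)[0, 1] ** 2

# An unknown sample's copy number follows by inverting the standard curve:
ct_unknown = 26.0
copies_unknown = 10 ** ((ct_unknown - intercept) / slope)
print(f"slope={slope:.2f}, efficiency={efficiency:.1%}, R^2={r_squared:.3f}, "
      f"unknown ~ {copies_unknown:.0f} copies")
```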

  15. Autonomous navigation system and method

    DOEpatents

    Bruemmer, David J [Idaho Falls, ID; Few, Douglas A [Idaho Falls, ID

    2009-09-08

    A robot platform includes perceptors, locomotors, and a system controller, which executes instructions for autonomously navigating a robot. The instructions repeat, on each iteration through an event timing loop, the acts of defining an event horizon based on the robot's current velocity, detecting a range to obstacles around the robot, testing for an event horizon intrusion by determining if any range to the obstacles is within the event horizon, and adjusting rotational and translational velocity of the robot accordingly. If the event horizon intrusion occurs, rotational velocity is modified by a proportion of the current rotational velocity reduced by a proportion of the range to the nearest obstacle and translational velocity is modified by a proportion of the range to the nearest obstacle. If no event horizon intrusion occurs, translational velocity is set as a ratio of a speed factor relative to a maximum speed.
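
    A minimal sketch of one pass through the velocity-adjustment loop described above; the gains, the horizon time and the interpretation of the "speed factor" are illustrative assumptions, since the patent abstract does not give the exact coefficients.

```python
def navigation_step(v_trans, v_rot, ranges, speed_factor, v_max,
                    horizon_time=1.5, k_rot=0.5, k_range=0.3, k_trans=0.4):
    """One iteration of the event-timing loop sketched in the abstract.

    v_trans, v_rot : current translational / rotational velocities.
    ranges         : obstacle ranges around the robot (from the perceptors).
    speed_factor   : cruise speed as a fraction of v_max (illustrative interpretation).
    Returns the adjusted (v_trans, v_rot); all gains are illustrative.
    """
    event_horizon = abs(v_trans) * horizon_time        # distance covered in the near future
    nearest = min(ranges)
    if nearest < event_horizon:                        # event-horizon intrusion detected
        v_rot = k_rot * v_rot - k_range * nearest      # damp rotation, reduced by obstacle range
        v_trans = k_trans * nearest                    # slow down in proportion to clearance
    else:                                              # clear path: cruise
        v_trans = speed_factor * v_max
    return v_trans, v_rot
```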

  16. Mini-MegaTORTORA wide-field monitoring system with sub-second temporal resolution: observation of transient events

    NASA Astrophysics Data System (ADS)

    Karpov, S.; Beskin, G.; Biryukov, A.; Bondar, S.; Ivanov, E.; Katkova, E.; Perkov, A.; Sasyuk, V.

    2016-06-01

    Here we present a summary of the first years of operation and the first results of a novel 9-channel wide-field optical monitoring system with sub-second temporal resolution, Mini-MegaTORTORA (MMT-9), which is now in operation at the Special Astrophysical Observatory in the Russian Caucasus. The system is able to observe the sky simultaneously in either a wide (~900 square degrees) or a narrow (~100 square degrees) field of view, either in clear light or with any combination of color (Johnson-Cousins B, V or R) and polarimetric filters installed, with exposure times ranging from 0.1 s to hundreds of seconds. The real-time data analysis pipeline performs automatic detection of rapid transient events, both near-Earth and extragalactic. The objects routinely detected by MMT include faint meteors and artificial satellites. The pipeline for longer-timescale variability analysis is still in development.

  17. A novel adaptive, real-time algorithm to detect gait events from wearable sensors.

    PubMed

    Chia Bejarano, Noelia; Ambrosini, Emilia; Pedrocchi, Alessandra; Ferrigno, Giancarlo; Monticone, Marco; Ferrante, Simona

    2015-05-01

    A real-time, adaptive algorithm based on two inertial and magnetic sensors placed on the shanks was developed for gait-event detection. For each leg, the algorithm detected the Initial Contact (IC), as the minimum of the flexion/extension angle, and the End Contact (EC) and the Mid-Swing (MS), as minimum and maximum of the angular velocity, respectively. The algorithm consisted of calibration, real-time detection, and step-by-step update. Data collected from 22 healthy subjects (21 to 85 years) walking at three self-selected speeds were used to validate the algorithm against the GaitRite system. Comparable levels of accuracy and significantly lower detection delays were achieved with respect to other published methods. The algorithm robustness was tested on ten healthy subjects performing sudden speed changes and on ten stroke subjects (43 to 89 years). For healthy subjects, F1-scores of 1 and mean detection delays lower than 14 ms were obtained. For stroke subjects, F1-scores of 0.998 and 0.944 were obtained for IC and EC, respectively, with mean detection delays always below 31 ms. The algorithm accurately detected gait events in real time from a heterogeneous dataset of gait patterns and paves the way for the design of closed-loop controllers for customized gait trainings and/or assistive devices.
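
    A minimal offline sketch of the event definitions given above (IC at minima of the shank flexion/extension angle, EC and MS at minima and maxima of the angular velocity). The published algorithm is adaptive and runs in real time; this sketch simply locates the extrema in recorded signals, with an illustrative extremum-search order.

```python
import numpy as np
from scipy.signal import argrelextrema

def detect_gait_events(angle, gyro, order=20):
    """Locate gait events for one leg from shank angle and angular velocity.

    angle : flexion/extension angle samples; gyro : angular velocity samples.
    Returns a dict of sample indices for IC, EC and MS candidates.
    """
    angle, gyro = np.asarray(angle, dtype=float), np.asarray(gyro, dtype=float)
    ic = argrelextrema(angle, np.less, order=order)[0]     # Initial Contact: angle minima
    ec = argrelextrema(gyro, np.less, order=order)[0]      # End Contact: angular-velocity minima
    ms = argrelextrema(gyro, np.greater, order=order)[0]   # Mid-Swing: angular-velocity maxima
    return {"IC": ic, "EC": ec, "MS": ms}
```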

  18. The veto system of the DarkSide-50 experiment

    DOE PAGES

    Agnes, P.

    2016-03-16

    Here, nuclear recoil events produced by neutron scatters form one of the most important classes of background in WIMP direct detection experiments, as they may produce nuclear recoils that look exactly like WIMP interactions. In DarkSide-50, we both actively suppress and measure the rate of neutron-induced background events using our neutron veto, composed of a boron-loaded liquid scintillator detector within a water Cherenkov detector. This paper is devoted to the description of the neutron veto system of DarkSide-50, including the detector structure, the fundamentals of event reconstruction and data analysis, and basic performance parameters.

  19. The veto system of the DarkSide-50 experiment

    NASA Astrophysics Data System (ADS)

    Agnes, P.; Agostino, L.; Albuquerque, I. F. M.; Alexander, T.; Alton, A. K.; Arisaka, K.; Back, H. O.; Baldin, B.; Biery, K.; Bonfini, G.; Bossa, M.; Bottino, B.; Brigatti, A.; Brodsky, J.; Budano, F.; Bussino, S.; Cadeddu, M.; Cadonati, L.; Cadoni, M.; Calaprice, F.; Canci, N.; Candela, A.; Cao, H.; Cariello, M.; Carlini, M.; Catalanotti, S.; Cavalcante, P.; Chepurnov, A.; Cocco, A. G.; Covone, G.; Crippa, L.; D'Angelo, D.; D'Incecco, M.; Davini, S.; De Cecco, S.; De Deo, M.; De Vincenzi, M.; Derbin, A.; Devoto, A.; Di Eusanio, F.; Di Pietro, G.; Edkins, E.; Empl, A.; Fan, A.; Fiorillo, G.; Fomenko, K.; Foster, G.; Franco, D.; Gabriele, F.; Galbiati, C.; Giganti, C.; Goretti, A. M.; Granato, F.; Grandi, L.; Gromov, M.; Guan, M.; Guardincerri, Y.; Hackett, B. R.; Herner, K. R.; Hungerford, E. V.; Ianni, Aldo; Ianni, Andrea; James, I.; Johnson, T.; Jollet, C.; Keeter, K.; Kendziora, C. L.; Kobychev, V.; Koh, G.; Korablev, D.; Korga, G.; Kubankin, A.; Li, X.; Lissia, M.; Lombardi, P.; Luitz, S.; Ma, Y.; Machulin, I. N.; Mandarano, A.; Mari, S. M.; Maricic, J.; Marini, L.; Martoff, C. J.; Meregaglia, A.; Meyers, P. D.; Miletic, T.; Milincic, R.; Montanari, D.; Monte, A.; Montuschi, M.; Monzani, M. E.; Mosteiro, P.; Mount, B. J.; Muratova, V. N.; Musico, P.; Napolitano, J.; Nelson, A.; Odrowski, S.; Orsini, M.; Ortica, F.; Pagani, L.; Pallavicini, M.; Pantic, E.; Parmeggiano, S.; Pelczar, K.; Pelliccia, N.; Perasso, S.; Pocar, A.; Pordes, S.; Pugachev, D. A.; Qian, H.; Randle, K.; Ranucci, G.; Razeto, A.; Reinhold, B.; Renshaw, A. L.; Romani, A.; Rossi, B.; Rossi, N.; Rountree, S. D.; Sablone, D.; Saggese, P.; Saldanha, R.; Sands, W.; Sangiorgio, S.; Savarese, C.; Segreto, E.; Semenov, D. A.; Shields, E.; Singh, P. N.; Skorokhvatov, M. D.; Smirnov, O.; Sotnikov, A.; Stanford, C.; Suvorov, Y.; Tartaglia, R.; Tatarowicz, J.; Testera, G.; Tonazzo, A.; Trinchese, P.; Unzhakov, E. V.; Vishneva, A.; Vogelaar, R. B.; Wada, M.; Walker, S.; Wang, H.; Wang, Y.; Watson, A. W.; Westerdale, S.; Wilhelmi, J.; Wojcik, M. M.; Xiang, X.; Xu, J.; Yang, C.; Yoo, J.; Zavatarelli, S.; Zec, A.; Zhong, W.; Zhu, C.; Zuzel, G.

    2016-03-01

    Nuclear recoil events produced by neutron scatters form one of the most important classes of background in WIMP direct detection experiments, as they may produce nuclear recoils that look exactly like WIMP interactions. In DarkSide-50, we both actively suppress and measure the rate of neutron-induced background events using our neutron veto, composed of a boron-loaded liquid scintillator detector within a water Cherenkov detector. This paper is devoted to the description of the neutron veto system of DarkSide-50, including the detector structure, the fundamentals of event reconstruction and data analysis, and basic performance parameters.

  20. Real Time Coincidence Detection Engine for High Count Rate Timestamp Based PET

    NASA Astrophysics Data System (ADS)

    Tetrault, M.-A.; Oliver, J. F.; Bergeron, M.; Lecomte, R.; Fontaine, R.

    2010-02-01

    Coincidence engines follow two main implementation flows: timestamp-based systems and AND-gate-based systems. The latter have been more widespread in recent years because of their lower cost and high efficiency. However, they are highly dependent on the selected electronic components, they have limited flexibility once assembled, and they are customized to fit a specific scanner's geometry. Timestamp-based systems are gathering more attention lately, especially with high-channel-count, fully digital systems. These new systems must, however, cope with high singles count rates. One option is to record every detected event and postpone coincidence detection to offline processing. For daily-use systems, a real-time engine is preferable because it dramatically reduces data volume and hence image preprocessing time and raw data management. This paper presents the timestamp-based coincidence engine for the LabPET™, a small animal PET scanner with up to 4608 individual readout avalanche photodiode channels. The engine can handle up to 100 million single events per second and has extensive flexibility because it resides in programmable logic devices. It can be adapted for any detector geometry or channel count, can be ported to newer, faster programmable devices, and can have extra modules added to take advantage of scanner-specific features. Finally, the user can select between a full processing mode for imaging protocols and a minimum processing mode to study different approaches for coincidence detection with offline software.
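
    A minimal software sketch of timestamp-based coincidence sorting: single events from all channels are merged in time order and pairs falling within a coincidence window on different channels are kept. The FPGA engine described above does this in streaming hardware; the window width and event format here are illustrative.

```python
def find_coincidences(singles, window_ns=10.0):
    """singles : iterable of (timestamp_ns, channel_id) single events, in any order.
    Returns a list of ((t1, ch1), (t2, ch2)) coincidence pairs on different channels."""
    events = sorted(singles)                      # merge and time-order the single-event stream
    pairs = []
    for i in range(len(events) - 1):
        t1, ch1 = events[i]
        j = i + 1
        while j < len(events) and events[j][0] - t1 <= window_ns:
            t2, ch2 = events[j]
            if ch2 != ch1:                        # reject same-channel pairings
                pairs.append(((t1, ch1), (t2, ch2)))
            j += 1
    return pairs

# Example: two true coincidences and one isolated single.
# find_coincidences([(0.0, 3), (4.0, 17), (500.0, 5), (1000.0, 8), (1007.0, 42)])
```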

  1. NPE 2010 results - Independent performance assessment by simulated CTBT violation scenarios

    NASA Astrophysics Data System (ADS)

    Ross, O.; Bönnemann, C.; Ceranna, L.; Gestermann, N.; Hartmann, G.; Plenefisch, T.

    2012-04-01

    For verification of compliance with the Comprehensive Nuclear-Test-Ban Treaty (CTBT), the global International Monitoring System (IMS) is currently being built up. The IMS is designed to detect nuclear explosions through their seismic, hydroacoustic, infrasound, and radionuclide signatures. The IMS data are collected, processed to analysis products, and distributed to the state signatories by the International Data Centre (IDC) in Vienna. The state signatories themselves may operate National Data Centers (NDC), giving technical advice concerning CTBT verification to their governments. NDC Preparedness Exercises (NPE) are regularly performed to practice the verification procedures for the detection of nuclear explosions in the framework of CTBT monitoring. The initial focus of NPE 2010 was on the radionuclide detection component and the application of Atmospheric Transport Modeling (ATM) for defining the source region of a radionuclide event. The exercise was triggered by fictitious radioactive noble gas detections which had been calculated beforehand, in secret, by forward ATM for a hypothetical xenon release scenario starting at the location and time of a real seismic event. The task for the exercise participants was to find potential source events by atmospheric backtracking and then to analyze the waveform signals of promising candidate events. The study shows one possible solution path for NPE 2010 as it was performed at the German NDC by a team without prior knowledge of the selected event and release scenario. The ATM Source Receptor Sensitivity (SRS) fields provided by the IDC were evaluated in a logical approach in order to define probable source regions for several days before the first reported fictitious radioactive xenon finding. Additional information on likely event times was derived from xenon isotopic ratios where applicable. Of the seismic events considered in the potential source region, all except one could be identified as earthquakes by seismological analysis. The remaining event at Black Thunder Mine, Wyoming, on 23 Oct at 21:15 UTC showed clear explosion characteristics. It also caused infrasound detections at one station in Canada. An infrasonic single-station localization algorithm led to event localization results comparable in precision to the teleseismic localization. However, the analysis of regional seismological stations gave the most accurate result, with an error ellipse of about 60 square kilometers. Finally, a forward ATM simulation was performed with the candidate event as the source in order to reproduce the original detection scenario. The ATM results showed a simulated station fingerprint in the IMS very similar to the fictitious detections given in the NPE 2010 scenario, which is additional confirmation that the event was correctly identified. The event analysis shown for NPE 2010 serves as a successful example of data fusion between radionuclide detection technology supported by ATM, seismological methodology, and infrasound signal processing.

  2. A networks-based discrete dynamic systems approach to volcanic seismicity

    NASA Astrophysics Data System (ADS)

    Suteanu, Mirela

    2013-04-01

    The detection and relevant description of pattern changes concerning earthquake events is an important but challenging task. In this paper, earthquake events related to volcanic activity are considered manifestations of a dynamic system evolving over time. The system dynamics is seen as a succession of events with point-like appearance both in time and in space. Each event is characterized by a position in three-dimensional space, a moment of occurrence, and an event size (magnitude). A weighted directed network is constructed to capture the effects of earthquakes on subsequent events. Each seismic event represents a node. Relations among events represent edges. Edge directions are given by the temporal succession of the events. Edges are also characterized by weights reflecting the strengths of the relation between the nodes. Weights are calculated as a function of (i) the time interval separating the two events, (ii) the spatial distance between the events, and (iii) the magnitude of the earlier of the two events. Different ways of addressing weight components are explored, and their implications for the properties of the produced networks are analyzed. The resulting networks are then characterized in terms of degree and weight distributions. Subsequently, the distribution of system transitions is determined for all the edges connecting related events in the network. Two- and three-dimensional diagrams are constructed to reflect transition distributions for each set of events. Networks are thus generated for successive temporal windows of different size, and the evolution of (a) network properties and (b) system transition distributions is followed over time and compared to the timeline of documented geologic processes. Applications concerning volcanic seismicity on the Big Island of Hawaii show that this approach is capable of revealing novel aspects of change occurring in the volcanic system on different scales in time and in space.
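
    A minimal sketch of the network construction described above, using an illustrative weight that decays with inter-event time and distance and grows with the magnitude of the earlier event; the exact functional form and linking rule of the study are not reproduced here.

```python
import math
import networkx as nx

def build_event_network(events, time_scale=3600.0, dist_scale=5.0, max_links=10):
    """events : list of dicts with keys t (s), x, y, z (km), mag.
    Each event becomes a node; directed edges run from earlier to later events,
    weighted by temporal proximity, spatial proximity and the earlier magnitude."""
    events = sorted(events, key=lambda e: e["t"])
    g = nx.DiGraph()
    for i, e in enumerate(events):
        g.add_node(i, **e)
    for j, later in enumerate(events):
        for i in range(max(0, j - max_links), j):      # link each event to a few predecessors
            earlier = events[i]
            dt = later["t"] - earlier["t"]
            dr = math.dist((earlier["x"], earlier["y"], earlier["z"]),
                           (later["x"], later["y"], later["z"]))
            # Illustrative weight: closer in time/space and larger earlier magnitude -> stronger edge.
            w = math.exp(-dt / time_scale) * math.exp(-dr / dist_scale) * earlier["mag"]
            g.add_edge(i, j, weight=w)
    return g

# Degree and weight distributions of g then characterise the evolving volcanic system,
# and can be recomputed for successive temporal windows as in the abstract.
```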

  3. Geostationary Communications Satellites as Sensors for the Space Weather Environment: Telemetry Event Identification Algorithms

    NASA Astrophysics Data System (ADS)

    Carlton, A.; Cahoy, K.

    2015-12-01

    Reliability of geostationary communication satellites (GEO ComSats) is critical to many industries worldwide. The space radiation environment poses a significant threat, and manufacturers and operators expend considerable effort to maintain reliability for users. Knowledge of the space radiation environment at the orbital location of a satellite is of critical importance for diagnosing and resolving issues resulting from space weather, for optimizing cost and reliability, and for space situational awareness. For decades, operators and manufacturers have collected large amounts of telemetry from geostationary (GEO) communications satellites to monitor system health and performance, yet this data is rarely mined for scientific purposes. The goal of this work is to acquire and analyze archived data from commercial operators using new algorithms that can detect when a space weather (or non-space weather) event of interest has occurred or is in progress. We have developed algorithms, collectively called SEER (System Event Evaluation Routine), to statistically analyze power amplifier current and temperature telemetry by identifying deviations from nominal operations or other events and trends of interest. This paper focuses on our work in progress, which currently includes methods for detection of jumps ("spikes", outliers) and step changes (changes in the local mean) in the telemetry. We then examine available space weather data from the NOAA GOES satellites and the NOAA-computed Kp index and sunspot numbers to see what role, if any, space weather might have played. By combining the results of the algorithm for many components, the spacecraft can be used as a "sensor" for the space radiation environment. Similar events occurring at one time across many component telemetry streams may be indicative of a space radiation event or a system-wide health and safety concern. Using SEER on representative datasets of telemetry from Inmarsat and Intelsat, we find events that occur across all or many of the telemetry files on certain dates. We compare these system-wide events to known space weather storms, such as the 2003 Halloween storms, and to spacecraft operational events, such as maneuvers. We also present future applications and expansions of SEER for robust space environment sensing and system health and safety monitoring.
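
    A minimal sketch of the two detections described above, with illustrative thresholds: jumps flagged as robust outliers against a rolling median, and step changes flagged where the means of adjacent windows differ by several pooled standard deviations. SEER's actual statistics are not specified here.

```python
import numpy as np

def detect_jumps(x, window=50, n_sigma=6.0):
    """Flag isolated spikes/outliers relative to a rolling median (robust MAD scale)."""
    x = np.asarray(x, dtype=float)
    half = window // 2
    med = np.array([np.median(x[max(0, i - half):i + half + 1]) for i in range(x.size)])
    mad = 1.4826 * np.median(np.abs(x - med)) + 1e-12
    return np.flatnonzero(np.abs(x - med) > n_sigma * mad)

def detect_step_changes(x, window=100, n_sigma=4.0):
    """Flag indices where the mean of the next window departs from the previous one."""
    x = np.asarray(x, dtype=float)
    steps = []
    for i in range(window, x.size - window):
        before, after = x[i - window:i], x[i:i + window]
        pooled = np.sqrt((before.var() + after.var()) / 2.0) + 1e-12
        if abs(after.mean() - before.mean()) > n_sigma * pooled:
            steps.append(i)
    return steps
```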

  4. Visual Salience in the Change Detection Paradigm: The Special Role of Object Onset

    ERIC Educational Resources Information Center

    Cole, Geoff G.; Kentridge, Robert W.; Heywood, Charles A.

    2004-01-01

    The relative efficacy with which appearance of a new object orients visual attention was investigated. At issue is whether the visual system treats onset as being of particular importance or only 1 of a number of stimulus events equally likely to summon attention. Using the 1-shot change detection paradigm, the authors compared detectability of…

  5. Reagent-free and portable detection of Bacillus anthracis spores using a microfluidic incubator and smartphone microscope

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hutchison, Janine R.; Erikson, Rebecca L.; Sheen, Allison M.

    Rapid, cost-effective bacterial detection systems are needed to respond to potential biothreat events. Here we report the use of smartphone-based microscopy in combination with a simple microfluidic incubation device to detect 5000 Bacillus anthracis spores in 3 hours. This field-deployable approach is compatible with real-time PCR for secondary confirmation.

  6. A history of radiation detection instrumentation.

    PubMed

    Frame, Paul W

    2004-08-01

    A review is presented of the history of radiation detection instrumentation. Specific radiation detection systems that are discussed include the human senses, photography, calorimetry, color dosimetry, ion chambers, electrometers, electroscopes, proportional counters, Geiger Mueller counters, scalers and rate meters, barium platinocyanide, scintillation counters, semiconductor detectors, radiophotoluminescent dosimeters, thermoluminescent dosimeters, optically stimulated luminescent dosimeters, direct ion storage, electrets, cloud chambers, bubble chambers, and bubble dosimeters. Given the broad scope of this review, the coverage is limited to a few key events in the development of a given detection system and some relevant operating principles. The occasional anecdote is included for interest.

  7. A history of radiation detection instrumentation.

    PubMed

    Frame, Paul W

    2005-06-01

    A review is presented of the history of radiation detection instrumentation. Specific radiation detection systems that are discussed include the human senses, photography, calorimetry, color dosimetry, ion chambers, electrometers, electroscopes, proportional counters, Geiger Mueller counters, scalers and rate meters, barium platinocyanide, scintillation counters, semiconductor detectors, radiophotoluminescent dosimeters, thermoluminescent dosimeters, optically stimulated luminescent dosimeters, direct ion storage, electrets, cloud chambers, bubble chambers, and bubble dosimeters. Given the broad scope of this review, the coverage is limited to a few key events in the development of a given detection system and some relevant operating principles. The occasional anecdote is included for interest.

  8. Space Weather and the Ground-Level Solar Proton Events of the 23rd Solar Cycle

    NASA Astrophysics Data System (ADS)

    Shea, M. A.; Smart, D. F.

    2012-10-01

    Solar proton events can adversely affect space and ground-based systems. Ground-level events are a subset of solar proton events that have a harder spectrum than average solar proton events and are detectable on Earth's surface by cosmic radiation ionization chambers, muon detectors, and neutron monitors. This paper summarizes the space weather effects associated with ground-level solar proton events during the 23rd solar cycle. These effects include communication and navigation systems, spacecraft electronics and operations, space power systems, manned space missions, and commercial aircraft operations. The major effect of ground-level events that affect manned spacecraft operations is increased radiation exposure. The primary effect on commercial aircraft operations is the loss of high frequency communication and, at extreme polar latitudes, an increase in the radiation exposure above that experienced from the background galactic cosmic radiation. Calculations of the maximum potential aircraft polar route exposure for each ground-level event of the 23rd solar cycle are presented. The space weather effects in October and November 2003 are highlighted together with on-going efforts to utilize cosmic ray neutron monitors to predict high energy solar proton events, thus providing an alert so that system operators can possibly make adjustments to vulnerable spacecraft operations and polar aircraft routes.

  9. Will climate change increase the risk for critical infrastructure failures in Europe due to extreme precipitation?

    NASA Astrophysics Data System (ADS)

    Nissen, Katrin; Ulbrich, Uwe

    2016-04-01

    An event-based detection algorithm for extreme precipitation is applied to a multi-model ensemble of regional climate model simulations. The algorithm determines the extent, location, duration and severity of extreme precipitation events. We assume that precipitation in excess of the local present-day 10-year return value will potentially exceed the capacity of the drainage systems that protect critical infrastructure elements. This assumption is based on legislation for the design of drainage systems which is in place in many European countries. Thus, events exceeding the local 10-year return value are detected. In this study we distinguish between sub-daily events (3-hourly) with high precipitation intensities and long-duration events (1-3 days) with high precipitation amounts. The climate change simulations investigated here were conducted within the EURO-CORDEX framework and have a horizontal resolution of approximately 12.5 km. The period 1971-2100, forced with observed and scenario (RCP 8.5 and RCP 4.5) greenhouse gas concentrations, was analysed. Changes in event frequency, event duration and size are examined. The simulations show an increase in the number of extreme precipitation events for the future climate period over most of the area, which is strongest in Northern Europe. The strength and statistical significance of the signal increase with increasing greenhouse gas concentrations. This work has been conducted within the EU project RAIN (Risk Analysis of Infrastructure Networks in response to extreme weather).
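
    A minimal sketch of the detection criterion described above: estimate the local present-day 10-year return value (here with a crude empirical quantile of annual maxima rather than the fitted extreme-value distribution a production analysis would use) and flag time steps that exceed it.

```python
import numpy as np

def ten_year_return_value(annual_maxima):
    """Crude empirical estimate of the 10-year return level from a series of
    annual maximum precipitation values (a GEV fit would be the standard choice)."""
    return np.quantile(np.asarray(annual_maxima, dtype=float), 1.0 - 1.0 / 10.0)

def flag_extreme_events(precip_series, threshold):
    """Return indices of time steps whose precipitation exceeds the local threshold."""
    return np.flatnonzero(np.asarray(precip_series, dtype=float) > threshold)

# Usage sketch: derive the threshold per grid cell from the present-day climate,
# then count exceedances in the scenario run.
# threshold = ten_year_return_value(historical_annual_max)
# events = flag_extreme_events(rcp85_series, threshold)
```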

  10. Deriving Geomechanical Constraints from Microseismic Monitoring Demonstrated with Data from the Decatur CO2 Sequestration Site

    NASA Astrophysics Data System (ADS)

    Goertz-Allmann, B. P.; Oye, V.

    2015-12-01

    The occurrence of induced and triggered microseismicity is of increasing concern to the general public. The underlying human causes are numerous and include hydrocarbon production and geological storage of CO2. The concerns of induced seismicity are the potential hazards from large seismic events and the creation of fluid pathways. However, microseismicity is also a unique tool to gather information about real-time changes in the subsurface, a fact generally ignored by the public. The ability to detect, locate and characterize microseismic events, provides a snapshot of the stress conditions within and around a geological reservoir. In addition, data on rapid stress changes (i.e. microseismic events) can be used as input to hydro-mechanical models, often used to map fluid propagation. In this study we investigate the impact of microseismic event location accuracy using surface seismic stations in addition to downhole geophones. Due to signal-to-noise conditions and the small magnitudes inherent in microseismicity, downhole systems detect significantly more events with better precision of phase arrival times than surface networks. However, downhole systems are often limited in their ability to obtain large enough observational apertures required for accurate locations. We therefore jointly locate the largest microseismic events using surface and downhole data. This requires careful evaluation in the weighting of input data when inverting for the event location. For the smaller events only observed on the downhole geophones, we define event clusters using waveform cross-correlation methods. We apply this methodology to microseismic data collected in the Illinois Basin-Decatur Project. A previous study revealed over 10,000 events detected by the downhole sensors. In our analysis, we include up to 12 surface sensors, installed by the USGS. The weighting scheme for this assembly of data needs to take into account significant uncertainties in the near-surface velocity structure. The re-located event clusters allow an investigation of systematic spatio-temporal variations of source parameters (e.g. stress drop) and statistical parameters (e.g. b-value). We examine these observations together with injection parameters to deduce constraints on the long-term stability of the injection site.

  11. LAN attack detection using Discrete Event Systems.

    PubMed

    Hubballi, Neminath; Biswas, Santosh; Roopa, S; Ratti, Ritesh; Nandi, Sukumar

    2011-01-01

    Address Resolution Protocol (ARP) is used for determining the link layer or Medium Access Control (MAC) address of a network host, given its Internet Layer (IP) or Network Layer address. ARP is a stateless protocol and any IP-MAC pairing sent by a host is accepted without verification. This weakness in the ARP may be exploited by malicious hosts in a Local Area Network (LAN) by spoofing IP-MAC pairs. Several schemes have been proposed in the literature to circumvent these attacks; however, these techniques either make IP-MAC pairing static, modify the existing ARP, or patch the operating systems of all the hosts. In this paper we propose a Discrete Event System (DES) approach to an Intrusion Detection System (IDS) for LAN-specific attacks which does not require any extra constraint such as static IP-MAC pairing or changes to the ARP. A DES model is built for the LAN under both a normal and a compromised (i.e., spoofed request/response) situation based on the sequences of ARP-related packets. Sequences of ARP events in normal and spoofed scenarios are similar, thereby yielding the same DES models for both cases. To create different ARP events under normal and spoofed conditions the proposed technique uses active ARP probing. However, this probing adds extra ARP traffic in the LAN. Following that, a DES detector is built to determine from observed ARP-related events whether the LAN is operating under a normal or compromised situation. The scheme also minimizes extra ARP traffic by probing the source IP-MAC pair of only those ARP packets which are yet to be determined as genuine/spoofed by the detector. Also, spoofed IP-MAC pairs determined by the detector are stored in tables to detect other LAN attacks triggered by spoofing, namely man-in-the-middle (MiTM), denial of service, etc. The scheme is successfully validated in a test bed. Copyright © 2010 ISA. Published by Elsevier Ltd. All rights reserved.
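
    A minimal sketch of the detection idea reduced to a table-driven check: IP-MAC pairs seen in ARP replies are verified with an active probe, and conflicting answers mark the pair as spoofed. The DES model, probe transmission and traffic-minimisation logic of the paper are not reproduced, and send_arp_probe is a hypothetical hook.

```python
def classify_arp_reply(ip, mac, verified, send_arp_probe):
    """Classify an observed ARP reply as genuine or spoofed.

    verified       : dict mapping IP -> MAC for pairs already established as genuine.
    send_arp_probe : callable(ip) -> set of MAC addresses that answered an active probe
                     (hypothetical hook standing in for the paper's ARP probing).
    """
    if ip in verified:
        return "genuine" if verified[ip] == mac else "spoofed"
    answers = send_arp_probe(ip)           # probe only unresolved pairs (limits extra traffic)
    if len(answers) == 1 and mac in answers:
        verified[ip] = mac                 # single, consistent answer: accept the binding
        return "genuine"
    return "spoofed"                       # no answer or conflicting answers
```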

  12. Using Antelope and Seiscomp in the framework of the Romanian Seismic Network

    NASA Astrophysics Data System (ADS)

    Marius Craiu, George; Craiu, Andreea; Marmureanu, Alexandru; Neagoe, Cristian

    2014-05-01

    The National Institute for Earth Physics (NIEP) operates a real-time seismic network designed to monitor the seismic activity on Romanian territory, dominated by the Vrancea intermediate-depth (60-200 km) earthquakes. The NIEP real-time network currently consists of 102 stations and two seismic arrays equipped with different high-quality digitizers (Kinemetrics K2, Quanterra Q330, Quanterra Q330HR, PS6-26, Basalt), broadband and short-period seismometers (CMG3ESP, CMG40T, KS2000, KS54000, KS2000, CMG3T, STS2, SH-1, S13, Mark l4c, Ranger, Gs21, Mark 22) and acceleration sensors (Episensor Kinemetrics). The primary goal of the real-time seismic network is to provide earthquake parameters from more broad-band stations with a high dynamic range, for more rapid and accurate computation of the locations and magnitudes of earthquakes. The Seedlink and Antelope™ program packages are used for data acquisition; the completely automated Antelope seismological system is run at the Data Center in Măgurele. The Antelope data acquisition and processing software is used for both real-time processing and post-processing. The Antelope real-time system provides automatic event detection, arrival picking, event location, and magnitude calculation. It also provides graphical displays and automatic location in near real time after a local, regional or teleseismic event has occurred. SeisComP 3 is another automated system that is run at the NIEP and provides the following features: data acquisition, data quality control, real-time data exchange and processing, network status monitoring, issuing event alerts, waveform archiving and data distribution, automatic event detection and location, and easy access to relevant information about stations, waveforms, and recent earthquakes. The main goal of this paper is to compare both of these data acquisition systems in order to improve their detection capabilities, location accuracy, and magnitude and depth determination, and to reduce the RMS and other location errors.

  13. Real-time optimizations for integrated smart network camera

    NASA Astrophysics Data System (ADS)

    Desurmont, Xavier; Lienard, Bruno; Meessen, Jerome; Delaigle, Jean-Francois

    2005-02-01

    We present an integrated real-time smart network camera. This system is composed of an image sensor, an embedded PC-based electronic card for image processing, and network capabilities. The application detects events of interest in visual scenes, highlights alarms and computes statistics. The system also produces meta-data information that can be shared between other cameras in a network. We describe the requirements of such a system and then show how the design of the system is optimized to process and compress video in real time. Indeed, typical video-surveillance algorithms such as background differencing, tracking and event detection must be highly optimized and simplified to be used on this hardware. To achieve a good match between hardware and software in this lightweight embedded system, the software management is written on top of the Java-based middleware specification established by the OSGi Alliance. We can easily integrate software and hardware in complex environments thanks to the Real-Time Java specification for the virtual machine and several network- and service-oriented Java specifications (such as RMI and Jini). Finally, we report some outcomes and typical case studies for such a camera, such as counter-flow detection.
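
    A minimal sketch of the kind of lightweight pipeline referred to above (background differencing followed by a simple event test), assuming OpenCV 4; the thresholds are illustrative and the embedded, OSGi-based integration is out of scope.

```python
import cv2

def run_camera_events(source=0, min_area=500):
    """Flag motion events from a camera stream via background differencing."""
    cap = cv2.VideoCapture(source)
    subtractor = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=16)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        mask = subtractor.apply(frame)                       # foreground mask
        mask = cv2.medianBlur(mask, 5)                       # suppress pixel noise
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        if any(cv2.contourArea(c) > min_area for c in contours):
            print("event of interest detected")              # alarm / meta-data hook
    cap.release()

# run_camera_events(0)  # default camera; a video file path also works as the source
```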

  14. Confidential Clinician-reported Surveillance of Adverse Events Among Medical Inpatients

    PubMed Central

    Weingart, Saul N; Ship, Amy N; Aronson, Mark D

    2000-01-01

    BACKGROUND Although iatrogenic injury poses a significant risk to hospitalized patients, detection of adverse events (AEs) is costly and difficult. METHODS The authors developed a confidential reporting method for detecting AEs on a medicine unit of a teaching hospital. Adverse events were defined as patient injuries. Potential adverse events (PAEs) represented errors that could have, but did not, result in harm. Investigators interviewed house officers during morning rounds and by e-mail, asking them to identify obstacles to high quality care and iatrogenic injuries. They compared house officer reports with hospital incident reports and patients' medical records. A multivariate regression model identified correlates of reporting. RESULTS One hundred ten events occurred, affecting 84 patients. Queries by e-mail (incidence rate ratio [IRR] = 0.16; 95% confidence interval [95% CI], 0.05 to 0.49) and on days when house officers rotated to a new service (IRR = 0.12; 95% CI, 0.02 to 0.91) resulted in fewer reports. The most commonly reported process of care problems were inadequate evaluation of the patient (16.4%), failure to monitor or follow up (12.7%), and failure of the laboratory to perform a test (12.7%). Respondents identified 29 (26.4%) AEs, 52 (47.3%) PAEs, and 29 (26.4%) other house officer-identified quality problems. An AE occurred in 2.6% of admissions. The hospital incident reporting system detected only one house officer-reported event. Chart review corroborated 72.9% of events. CONCLUSIONS House officers detect many AEs among inpatients. Confidential peer interviews of front-line providers are a promising method for identifying medical errors and substandard quality. PMID:10940133

  15. Strategies for automatic processing of large aftershock sequences

    NASA Astrophysics Data System (ADS)

    Kvaerna, T.; Gibbons, S. J.

    2017-12-01

    Aftershock sequences following major earthquakes present great challenges to seismic bulletin generation. The analyst resources needed to locate events increase with increased event numbers as the quality of underlying, fully automatic, event lists deteriorates. While current pipelines, designed a generation ago, are usually limited to single passes over the raw data, modern systems also allow multiple passes. Processing the raw data from each station currently generates parametric data streams that are later subject to phase-association algorithms which form event hypotheses. We consider a major earthquake scenario and propose to define a region of likely aftershock activity in which we will detect and accurately locate events using a separate, specially targeted, semi-automatic process. This effort may use either pattern detectors or more general algorithms that cover wider source regions without requiring waveform similarity. An iterative procedure to generate automatic bulletins would incorporate all the aftershock event hypotheses generated by the auxiliary process, and filter all phases from these events from the original detection lists prior to a new iteration of the global phase-association algorithm.
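
    A minimal sketch, under assumed data structures, of the filtering step described above: picks already explained by the separately located aftershock hypotheses are removed from the detection lists before the global phase association is re-run. Matching by station name and a fixed arrival-time tolerance is an illustrative simplification.

    ```python
    def filter_aftershock_phases(detections, aftershock_phases, tolerance_s=2.0):
        """Remove detections already explained by aftershock event hypotheses
        (matched by station and arrival time) before re-running global phase
        association on the remaining picks.
        detections, aftershock_phases: iterables of (station, arrival_time)."""
        remaining = []
        for det in detections:
            explained = any(det[0] == ph[0] and abs(det[1] - ph[1]) <= tolerance_s
                            for ph in aftershock_phases)
            if not explained:
                remaining.append(det)
        return remaining
    ```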

  16. What We Are Watching—Top Global Infectious Disease Threats, 2013-2016: An Update from CDC's Global Disease Detection Operations Center

    PubMed Central

    Iuliano, A. Danielle; Uyeki, Timothy M.; Mintz, Eric D.; Nichol, Stuart T.; Rollin, Pierre; Staples, J. Erin; Arthur, Ray R.

    2017-01-01

    To better track public health events in areas where the public health system is unable or unwilling to report the event to appropriate public health authorities, agencies can conduct event-based surveillance, which is defined as the organized collection, monitoring, assessment, and interpretation of unstructured information regarding public health events that may represent an acute risk to public health. The US Centers for Disease Control and Prevention's (CDC's) Global Disease Detection Operations Center (GDDOC) was created in 2007 to serve as CDC's platform dedicated to conducting worldwide event-based surveillance, which is now highlighted as part of the “detect” element of the Global Health Security Agenda (GHSA). The GHSA works toward making the world more safe and secure from disease threats through building capacity to better “Prevent, Detect, and Respond” to those threats. The GDDOC monitors approximately 30 to 40 public health events each day. In this article, we describe the top threats to public health monitored during 2012 to 2016: avian influenza, cholera, Ebola virus disease, and the vector-borne diseases yellow fever, chikungunya virus, and Zika virus, with updates to the previously described threats from Middle East respiratory syndrome-coronavirus (MERS-CoV) and poliomyelitis. PMID:28805465

  17. Integration of launch/impact discrimination algorithm with the UTAMS platform

    NASA Astrophysics Data System (ADS)

    Desai, Sachi; Morcos, Amir; Tenney, Stephen; Mays, Brian

    2008-04-01

    An acoustic array, integrated with an algorithm to discriminate potential Launch (LA) or Impact (IM) events, was augmented by employing the Launch Impact Discrimination (LID) algorithm for mortar events. We develop an added situational awareness capability to determine whether the localized event is a mortar launch or mortar impact at safe standoff distances. The algorithm utilizes a discrete wavelet transform to exploit higher harmonic components of various sub-bands of the acoustic signature. Additional features are extracted in the frequency domain by exploiting harmonic components generated by the nature of the event, e.g. supersonic shrapnel components at impact. These features are then used with a neural network to provide a high level of confidence for discrimination and classification. The ability to discriminate between these events is of great interest on the battlefield, providing more information and contributing to a common picture of situational awareness. The algorithms exploit the acoustic sensor array to provide detection and identification of IM/LA events at extended ranges. The integration of this algorithm with the acoustic sensor array for mortar detection provides an early warning detection system giving greater battlefield information to field commanders. This paper describes the integration of the algorithm with a candidate sensor and the resulting field tests.
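
    A hedged sketch of the sub-band feature extraction described above, using the PyWavelets package; the wavelet family ('db4') and decomposition level are assumptions, not the authors' settings. The resulting feature vector would feed a small classifier (e.g. a multilayer perceptron) trained on labeled launch/impact records.

    ```python
    import numpy as np
    import pywt  # PyWavelets

    def subband_energy_features(signal, wavelet="db4", level=4):
        """Decompose an acoustic signature with a discrete wavelet transform
        and return the log-energy of each sub-band as a feature vector."""
        coeffs = pywt.wavedec(signal, wavelet, level=level)
        return np.array([np.log1p(np.sum(np.asarray(c) ** 2)) for c in coeffs])
    ```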

  18. Commonality of drug-associated adverse events detected by 4 commonly used data mining algorithms.

    PubMed

    Sakaeda, Toshiyuki; Kadoyama, Kaori; Minami, Keiko; Okuno, Yasushi

    2014-01-01

    Data mining algorithms have been developed for the quantitative detection of drug-associated adverse events (signals) from a large database on spontaneously reported adverse events. In the present study, the commonality of signals detected by 4 commonly used data mining algorithms was examined. A total of 2,231,029 reports were retrieved from the public release of the US Food and Drug Administration Adverse Event Reporting System database between 2004 and 2009. The deletion of duplicated submissions and revision of arbitrary drug names resulted in a reduction in the number of reports to 1,644,220. Associations with adverse events were analyzed for 16 unrelated drugs, using the proportional reporting ratio (PRR), reporting odds ratio (ROR), information component (IC), and empirical Bayes geometric mean (EBGM). All EBGM-based signals were included in the PRR-based signals as well as IC- or ROR-based ones, and PRR- and IC-based signals were included in ROR-based ones. The PRR scores of PRR-based signals were significantly larger for 15 of 16 drugs when adverse events were also detected as signals by the EBGM method, as were the IC scores of IC-based signals for all drugs; however, no such effect was observed in the ROR scores of ROR-based signals. The EBGM method was the most conservative among the 4 methods examined, which suggested its better suitability for pharmacoepidemiological studies. Further examinations should be performed on the reproducibility of clinical observations, especially for EBGM-based signals.
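
    For reference, point estimates of the PRR, ROR and IC can be computed from the usual 2x2 contingency table as in the sketch below; the IC shown is the simple observed-to-expected log ratio, without the Bayesian shrinkage used in practical IC and EBGM implementations.

    ```python
    import math

    def disproportionality(a, b, c, d):
        """2x2 contingency table for a drug/event pair:
            a: reports with the drug and the event of interest
            b: reports with the drug and other events
            c: reports of the event with other drugs
            d: all other reports
        Returns point estimates of PRR, ROR and a simple (non-shrunk) IC."""
        prr = (a / (a + b)) / (c / (c + d))
        ror = (a * d) / (b * c)
        n = a + b + c + d
        expected = (a + b) * (a + c) / n       # expected count under independence
        ic = math.log2(a / expected)           # information component, no Bayesian shrinkage
        return prr, ror, ic
    ```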

  19. Learning rational temporal eye movement strategies.

    PubMed

    Hoppe, David; Rothkopf, Constantin A

    2016-07-19

    During active behavior humans redirect their gaze several times every second within the visual environment. Where we look within static images is highly efficient, as quantified by computational models of human gaze shifts in visual search and face recognition tasks. However, when we shift gaze is mostly unknown despite its fundamental importance for survival in a dynamic world. It has been suggested that during naturalistic visuomotor behavior gaze deployment is coordinated with task-relevant events, often predictive of future events, and studies in sportsmen suggest that timing of eye movements is learned. Here we establish that humans efficiently learn to adjust the timing of eye movements in response to environmental regularities when monitoring locations in the visual scene to detect probabilistically occurring events. To detect the events humans adopt strategies that can be understood through a computational model that includes perceptual and acting uncertainties, a minimal processing time, and, crucially, the intrinsic costs of gaze behavior. Thus, subjects traded off event detection rate with behavioral costs of carrying out eye movements. Remarkably, based on this rational bounded actor model the time course of learning the gaze strategies is fully explained by an optimal Bayesian learner with humans' characteristic uncertainty in time estimation, the well-known scalar law of biological timing. Taken together, these findings establish that the human visual system is highly efficient in learning temporal regularities in the environment and that it can use these regularities to control the timing of eye movements to detect behaviorally relevant events.

  20. Automatic Near-Real-Time Detection of CMEs in Mauna Loa K-Cor Coronagraph Images

    NASA Astrophysics Data System (ADS)

    Thompson, W. T.; St. Cyr, O. C.; Burkepile, J. T.; Posner, A.

    2017-10-01

    A simple algorithm has been developed to detect the onset of coronal mass ejections (CMEs), together with speed estimates, in near-real time using linearly polarized white-light solar coronal images from the Mauna Loa Solar Observatory K-Cor telescope. Ground observations in the low corona can warn of CMEs well before they appear in space coronagraphs. The algorithm used is a variation on the Solar Eruptive Event Detection System developed at George Mason University. It was tested against K-Cor data taken between 29 April 2014 and 20 February 2017, on days identified as containing CMEs. This resulted in testing of 139 days' worth of data containing 171 CMEs. The detection rate varied from close to 80% when solar activity was high down to as low as 20-30% when activity was low. The difference in effectiveness with solar cycle is attributed to the relative prevalence of strong CMEs between active and quiet periods. There were also 12 false detections, leading to an average false detection rate of 8.6%. The K-Cor data were also compared with major solar energetic particle (SEP) storms during this time period. There were three SEP events detected either at Earth or at one of the two STEREO spacecraft when K-Cor was observing during the relevant time period. The algorithm successfully generated alerts for two of these events, with lead times of 1-3 h before the SEP onset at 1 AU. The third event was not detected by the automatic algorithm because of the unusually broad width in position angle.

  1. Bioluminescence-based system for rapid detection of natural transformation.

    PubMed

    Santala, Ville; Karp, Matti; Santala, Suvi

    2016-07-01

    Horizontal gene transfer plays a significant role in bacterial evolution and has major clinical importance. Thus, it is vital to understand the mechanisms and kinetics of genetic transformations. Natural transformation is the driving mechanism for horizontal gene transfer in diverse genera of bacteria. Our study introduces a simple and rapid method for the investigation of natural transformation. This highly sensitive system allows the detection of a transformation event directly from a bacterial population without any separation step or selection of cells. The system is based on the bacterial luciferase operon from Photorhabdus luminescens. The studied molecular tools consist of the functional modules luxCDE and luxAB, which involve a replicative plasmid and an integrative gene cassette. A well-established host for bacterial genetic investigations, Acinetobacter baylyi ADP1, is used as the model bacterium. We show that natural transformation followed by homologous recombination or plasmid recircularization can be readily detected in both actively growing and static biofilm-like cultures, including very rare transformation events. The system allows the detection of natural transformation within 1 h of introducing sample DNA into the culture. The introduced method provides a convenient means to study the kinetics of natural transformation under variable conditions and perturbations. © FEMS 2016. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  2. The Catalog of Event Data of the Operational Deep-ocean Assessment and Reporting of Tsunamis (DART) Stations at the National Data Buoy Center

    NASA Astrophysics Data System (ADS)

    Bouchard, R.; Locke, L.; Hansen, W.; Collins, S.; McArthur, S.

    2007-12-01

    DART systems are a critical component of the tsunami warning system as they provide the only real-time, in situ, tsunami detection before landfall. DART systems consist of a surface buoy that serves as a position locater and communications transceiver and a Bottom Pressure Recorder (BPR) on the seafloor. The BPR records temperature and pressure at 15-second intervals to a memory card for later retrieval for analysis and use by tsunami researchers, but the BPRs are normally recovered only once every two years. The DART systems also transmit subsets of the data, converted to an estimation of the sea surface height, in near real-time for use by the tsunami warning community. These data are available on NDBC's webpages, http://www.ndbc.noaa.gov/dart.shtml. Although not of the resolution of the data recorded to the BPR memory card, the near real-time data have proven to be of value in research applications [1]. Of particular interest are the DART data associated with geophysical events. The DART BPR continuously compares the measured sea height with a predicted sea height and, when the difference exceeds a threshold value, the BPR goes into Event Mode. Event Mode provides an extended, more frequent near real-time reporting of the sea surface heights for tsunami detection. The BPR can go into Event Mode because of geophysical triggers, such as tsunamis or seismic activity, which may or may not be tsunamigenic. The BPR can also go into Event Mode during recovery of the BPR as it leaves the seafloor, or when manually triggered by the Tsunami Warning Centers in advance of an expected tsunami. On occasion, the BPR will go into Event Mode without any associated tsunami or seismic activity or human intervention, and these are considered "False" Events. Approximately one-third of all Events can be classified as "False". NDBC is responsible for the operations, maintenance, and data management of the DART stations. Each DART station has a webpage with a drop-down list of all Events. NDBC maintains the non-geophysical Events in order to maintain the continuity of the time series records. In 2007, NDBC compiled all DART Events that occurred while under NDBC's operational control and made an assessment of their validity. The NDBC analysts performed the assessment using the characteristics of the data time series, triggering criteria, and associated seismic events. The compilation and assessments are catalogued in an NDBC technical document. The Catalog also includes a listing of the one-hour, high-resolution data, retrieved remotely from the BPRs, that are not available on the web pages. The Events are classified by their triggering mechanism and listed by station location and, for those Events associated with geophysical triggers, they are listed by their associated seismic events. The Catalog provides researchers with a valuable tool in locating, assessing, and applying near real-time DART data to tsunami research and will be updated following DART Events. A link to the published Catalog can be found on the NDBC DART website, http://www.ndbc.noaa.gov/dart.shtml. Reference: [1] Gower, J. and F. González (2006), U.S. Warning System Detected the Sumatra Tsunami, Eos Trans. AGU, 87(10), 105-112.
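
    A minimal sketch of the Event Mode trigger logic described above: the measured sea-surface height is compared with a tidal prediction and an event is declared when the difference exceeds a threshold. The 30 mm threshold and the function name are illustrative assumptions, not the operational DART parameters.

    ```python
    def event_mode_trigger(measured_heights, predicted_heights, threshold_m=0.03):
        """Return the index of the first sample where the measured sea-surface
        height departs from the tidal prediction by more than the threshold,
        or None if no Event Mode trigger occurs."""
        for i, (obs, pred) in enumerate(zip(measured_heights, predicted_heights)):
            if abs(obs - pred) > threshold_m:
                return i
        return None
    ```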

  3. Helmet-mounted acoustic array for hostile fire detection and localization in an urban environment

    NASA Astrophysics Data System (ADS)

    Scanlon, Michael V.

    2008-04-01

    The detection and localization of hostile weapons firing has been demonstrated successfully with acoustic sensor arrays on unattended ground sensors (UGS), ground-vehicles, and unmanned aerial vehicles (UAVs). Some of the more mature systems have demonstrated significant capabilities and provide direct support to ongoing counter-sniper operations. The Army Research Laboratory (ARL) is conducting research and development for a helmet-mounted system to acoustically detect and localize small arms firing, or other events such as RPG, mortars, and explosions, as well as other non-transient signatures. Since today's soldier is quickly being asked to take on more and more reconnaissance, surveillance, & target acquisition (RSTA) functions, sensor augmentation enables him to become a mobile and networked sensor node on the complex and dynamic battlefield. Having a body-worn threat detection and localization capability for events that pose an immediate danger to the soldiers around him can significantly enhance their survivability and lethality, as well as enable him to provide and use situational awareness clues on the networked battlefield. This paper addresses some of the difficulties encountered by an acoustic system in an urban environment. Complex reverberation, multipath, diffraction, and signature masking by building structures makes this a very harsh environment for robust detection and classification of shockwaves and muzzle blasts. Multifunctional acoustic detection arrays can provide persistent surveillance and enhanced situational awareness for every soldier.

  4. Sensitivity recovery for the AX-PET prototype using inter-crystal scattering events

    NASA Astrophysics Data System (ADS)

    Gillam, John E.; Solevi, Paola; Oliver, Josep F.; Casella, Chiara; Heller, Matthieu; Joram, Christian; Rafecas, Magdalena

    2014-08-01

    The development of novel detection devices and systems such as the AX-positron emission tomography (PET) demonstrator often introduces or increases the measurement of atypical coincidence events such as inter-crystal scattering (ICS). In more standard systems, ICS events often go undetected and the small measured fraction may be ignored. As the measured quantity of such events in the data increases, so too does the importance of considering them during image reconstruction. Generally, treatment of ICS events will attempt to determine which of the possible candidate lines of response (LoRs) correctly determines the annihilation photon trajectory. However, methods of assessment often have low success rates or are computationally demanding. In this investigation alternative approaches are considered. Experimental data were taken using the AX-PET prototype and a NEMA phantom. Three methods of ICS treatment were assessed, each of which considered all possible candidate LoRs during image reconstruction. Maximum likelihood expectation maximization was used in conjunction with both standard (line-like) and novel (V-like in this investigation) detection responses modeled within the system matrix. The investigation assumed that no information other than interaction locations was available to distinguish between candidates, yet the methods assessed all provided means by which such information could be included. In all cases it was shown that the signal-to-noise ratio is increased using ICS events. However, only one method, the V-like model, which used full modeling of the ICS response in the system matrix, provided enhancement in all figures of merit assessed in this investigation. Finally, the optimal method of ICS incorporation was demonstrated using data from two small animals measured using the AX-PET demonstrator.

  5. DEAP-3600 Data Acquisition System

    NASA Astrophysics Data System (ADS)

    Lindner, Thomas

    2015-12-01

    DEAP-3600 is a dark matter experiment using liquid argon to detect Weakly Interacting Massive Particles (WIMPs). The DEAP-3600 Data Acquisition (DAQ) has been built using a combination of commercial and custom electronics, organized using the MIDAS framework. The DAQ system needs to suppress a high rate of background events from 39Ar beta decays. This suppression is implemented using a combination of online firmware and software-based event filtering. We will report on progress commissioning the DAQ system, as well as the development of the web-based user interface.

  6. Prospects for the Detection of Fast Radio Bursts with the Murchison Widefield Array

    NASA Astrophysics Data System (ADS)

    Trott, Cathryn M.; Tingay, Steven J.; Wayth, Randall B.

    2013-10-01

    Fast radio bursts (FRBs) are short timescale (<1 s) astrophysical radio signals, presumed to be a signature of cataclysmic events of extragalactic origin. The discovery of six high-redshift events at ~1400 MHz from the Parkes radio telescope suggests that FRBs may occur at a high rate across the sky. The Murchison Widefield Array (MWA) operates at low radio frequencies (80-300 MHz) and is expected to detect FRBs due to its large collecting area (~2500 m²) and wide field-of-view (FOV, ~1000 deg² at ν = 200 MHz). We compute the expected number of FRB detections for the MWA assuming a source population consistent with the reported detections. Our formalism properly accounts for the frequency-dependence of the antenna primary beam, the MWA system temperature, and unknown spectral index of the source population, for three modes of FRB detection: coherent; incoherent; and fast imaging. We find that the MWA's sensitivity and large FOV combine to provide the expectation of multiple detectable events per week in all modes, potentially making it an excellent high time resolution science instrument. Deviations of the expected number of detections from actual results will provide a strong constraint on the assumptions made for the underlying source population and intervening plasma distribution.

  7. Case study of early detection and intervention of infectious disease outbreaks in an institution using Nursery School Absenteeism Surveillance Systems (NSASSy) of the Public Health Center.

    PubMed

    Matsumoto, Kayo; Hirayama, Chifumi; Sakuma, Yoko; Itoi, Yoichi; Sunadori, Asami; Kitamura, Junko; Nakahashi, Takeshi; Sugawara, Tamie; Ohkusa, Yasushi

    2016-01-01

    Objectives Detecting outbreaks early and then activating countermeasures based on such information is extremely important for infection control at childcare facilities. The Sumida ward began operating the Nursery School Absenteeism Surveillance System (NSASSy) in August 2013, and has since conducted real-time monitoring at nursery schools. The Public Health Center can detect outbreaks early and support appropriate intervention. This paper describes the experiences of Sumida Public Health Center related to early detection and intervention since the initiation of the system.Methods In this study, we investigated infectious disease outbreaks detected at 62 nursery schools in the Sumida ward, which were equipped with NSASSy from early November 2013 through late March 2015. We classified the information sources of the detected outbreak and responses of the public health center. The sources were (1) direct contact from some nursery schools, (2) messages from public officers with jurisdiction over nursery schools, (3) automatic detection by NSASSy, and (4) manual detection by public health center officers using NSASSy. The responses made by the health center were described and classified into 11 categories including verification of outbreak and advice for caregivers.Results The number of outbreaks detected by the aforementioned four information sources was zero, 25, 15, and 7 events, respectively, during the first 5 months after beginning NSASSy. These numbers became 5, 7, 53, and 25 events, respectively, during the subsequent 12 months. The number of outbreaks detected increased by 47% during the first 5 months, and by 87% in the following 12 months. The responses were primarily confirming the situation and offering advice to caregivers.Conclusion The Sumida Public Health Center ward could achieve early detection with automatic or manual detection of NSASSy. This system recently has become an important detection resource, and has contributed greatly to early detection. Because the Public Health Center can use it to achieve real-time monitoring, they can recognize emergent situations and intervene earlier, and thereby give feedback to the nursery schools. The system can contribute to providing effective countermeasures in these settings.

  8. A Neural Network based Early Earthquake Warning model in the California region

    NASA Astrophysics Data System (ADS)

    Xiao, H.; MacAyeal, D. R.

    2016-12-01

    Early Earthquake Warning systems could reduce the loss of lives and other economic impacts resulting from natural disasters or man-made calamities. Current systems could be further enhanced by neural network methods. A 3-layer neural network model combined with an onsite method was deployed in this paper to improve the recognition time and detection time for large-scale earthquakes. The 3-layer neural network early earthquake warning model adopted a vector feature design for sample events that occurred within a 150 km radius of the epicenters. The dataset used in this paper contained both destructive events and small-scale events. All the data were extracted from the IRIS database to properly train the model. In the training process, the backpropagation algorithm was used to adjust the weight matrices and bias matrices during each iteration. The information in all three channels of the seismometers served as the input to this model. Designed tests indicated that this model could correctly identify the scale of approximately 90 percent of the events, and the early detection could provide informative evidence for public authorities to make further decisions. This indicates that a neural network model has the potential to strengthen current early warning systems, since the onsite method may greatly reduce the response time and save more lives in such disasters.
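
    A compact NumPy sketch of a 3-layer network trained with backpropagation, of the kind described above; the layer sizes, sigmoid activations, squared-error loss and learning rate are assumptions rather than the authors' exact configuration.

    ```python
    import numpy as np

    class ThreeLayerNet:
        """Minimal input-hidden-output network trained with backpropagation;
        a sketch of the kind of classifier described above, not the authors'
        implementation."""
        def __init__(self, n_in, n_hidden, n_out, lr=0.1, seed=0):
            rng = np.random.default_rng(seed)
            self.W1 = rng.normal(0, 0.1, (n_in, n_hidden))
            self.b1 = np.zeros(n_hidden)
            self.W2 = rng.normal(0, 0.1, (n_hidden, n_out))
            self.b2 = np.zeros(n_out)
            self.lr = lr

        @staticmethod
        def _sigmoid(z):
            return 1.0 / (1.0 + np.exp(-z))

        def forward(self, X):
            self.h = self._sigmoid(X @ self.W1 + self.b1)
            self.y = self._sigmoid(self.h @ self.W2 + self.b2)
            return self.y

        def train_step(self, X, target):
            """One gradient step on a mini-batch (squared-error loss)."""
            y = self.forward(X)
            delta2 = (y - target) * y * (1 - y)              # output-layer error
            delta1 = (delta2 @ self.W2.T) * self.h * (1 - self.h)
            self.W2 -= self.lr * self.h.T @ delta2
            self.b2 -= self.lr * delta2.sum(axis=0)
            self.W1 -= self.lr * X.T @ delta1
            self.b1 -= self.lr * delta1.sum(axis=0)
    ```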

  9. Detection of inter-frame forgeries in digital videos.

    PubMed

    K, Sitara; Mehtre, B M

    2018-05-26

    Videos are acceptable as evidence in a court of law, provided their authenticity and integrity are scientifically validated. Videos recorded by surveillance systems are susceptible to malicious alterations of visual content by perpetrators locally or remotely. Such malicious alterations of video content (called video forgeries) are categorized into inter-frame and intra-frame forgeries. In this paper, we propose inter-frame forgery detection techniques using tamper traces from the spatio-temporal and compressed domains. Pristine videos containing frames recorded during a sudden camera zooming event may be wrongly classified as tampered videos, leading to an increase in false positives. To address this issue, we propose a method for zooming detection, which is incorporated into the video tampering detection. Frame shuffling detection, which has not been explored so far, is also addressed in our work. Our method is capable of differentiating various inter-frame tamper events and localizing them in the temporal domain. The proposed system is tested on 23,586 videos, of which 2346 are pristine and the rest are candidates for inter-frame forged videos. Experimental results show that we have successfully detected frame shuffling with encouraging accuracy rates. We have achieved improved accuracy on forgery detection in frame insertion, frame deletion and frame duplication. Copyright © 2018. Published by Elsevier B.V.

  10. Monitoring of pipeline oil spill fire events using Geographical Information System and Remote Sensing

    NASA Astrophysics Data System (ADS)

    Ogungbuyi, M. G.; Eckardt, F. D.; Martinez, P.

    2016-12-01

    Nigeria, the largest producer of crude oil in Africa, occupies the sixth position in the world. Despite such huge oil revenue potential, its pipeline network system is consistently susceptible to leaks causing oil spills. We investigate ground-based spill events which are caused by operational error, equipment failure and, most importantly, by deliberate attacks along the major pipeline transport system. Sometimes, these spills are accompanied by fire explosions caused by accidental discharge, natural causes, or illegal refineries in the creeks, etc. MODIS satellite fire data corresponding to the times and spill events (i.e. ground-based data) of the Area of Interest (AOI) show significant correlation. The open source Quantum Geographical Information System (QGIS) was used to validate the dataset, and spatiotemporal analyses of the oil spill fires were carried out. We demonstrate that through QGIS and Google Earth (using the time sliders), we can identify and monitor oil spills when they are accompanied by fire events along the pipeline transport system. This is shown through the spatiotemporal images of the fires. Evidence of such fire cases resulting from burnt vegetation, as distinct from industrial and domestic fires, is also presented. Detecting oil spill fires in the study location may not require enormous terabytes of image processing: we can instead rely on near-real-time (NRT) MODIS data, readily available twice daily, to detect oil spill fires as an early warning signal for those hotspot areas where cases of oil seepage are significant in Nigeria.

  11. Signal Detection of Imipenem Compared to Other Drugs from Korea Adverse Event Reporting System Database

    PubMed Central

    Park, Kyounghoon; Soukavong, Mick; Kim, Jungmee; Kwon, Kyoung-eun; Jin, Xue-mei; Lee, Joongyub; Yang, Bo Ram

    2017-01-01

    Purpose To detect signals of adverse drug events after imipenem treatment using the Korea Institute of Drug Safety & Risk Management-Korea adverse event reporting system database (KIDS-KD). Materials and Methods We performed data mining using KIDS-KD, which was constructed using spontaneously reported adverse event (AE) reports between December 1988 and June 2014. We detected signals by calculating the proportional reporting ratio, reporting odds ratio, and information component for imipenem. We defined a signal as any AE that satisfied all three indices. The signals were compared with drug labels of nine countries. Results There were 807582 spontaneous AE reports in the KIDS-KD. Among those, the number of antibiotic-related AEs was 192510; 3382 reports were associated with imipenem. The most common imipenem-associated AE was drug eruption, reported 353 times. We calculated signals by comparing imipenem with all other antibiotics and with all other drugs; 58 and 53 signals, respectively, satisfied all three indices. We compared the drug labelling information of nine countries, including the USA, the UK, Japan, Italy, Switzerland, Germany, France, Canada, and South Korea, and discovered that the following signals were currently not included in drug labels: hypokalemia, cardiac arrest, cardiac failure, Parkinson's syndrome, myocardial infarction, and prostate enlargement. Hypokalemia was an additional signal compared with all other antibiotics, and the other signals were not different compared with all other antibiotics and all other drugs. Conclusion We detected new signals that were not listed on the drug labels of nine countries. However, further pharmacoepidemiologic research is needed to evaluate the causality of these signals. PMID:28332362

  12. Signal Detection of Imipenem Compared to Other Drugs from Korea Adverse Event Reporting System Database.

    PubMed

    Park, Kyounghoon; Soukavong, Mick; Kim, Jungmee; Kwon, Kyoung Eun; Jin, Xue Mei; Lee, Joongyub; Yang, Bo Ram; Park, Byung Joo

    2017-05-01

    To detect signals of adverse drug events after imipenem treatment using the Korea Institute of Drug Safety & Risk Management-Korea adverse event reporting system database (KIDS-KD). We performed data mining using KIDS-KD, which was constructed using spontaneously reported adverse event (AE) reports between December 1988 and June 2014. We detected signals by calculating the proportional reporting ratio, reporting odds ratio, and information component for imipenem. We defined a signal as any AE that satisfied all three indices. The signals were compared with drug labels of nine countries. There were 807582 spontaneous AE reports in the KIDS-KD. Among those, the number of antibiotic-related AEs was 192510; 3382 reports were associated with imipenem. The most common imipenem-associated AE was drug eruption, reported 353 times. We calculated signals by comparing imipenem with all other antibiotics and with all other drugs; 58 and 53 signals, respectively, satisfied all three indices. We compared the drug labelling information of nine countries, including the USA, the UK, Japan, Italy, Switzerland, Germany, France, Canada, and South Korea, and discovered that the following signals were currently not included in drug labels: hypokalemia, cardiac arrest, cardiac failure, Parkinson's syndrome, myocardial infarction, and prostate enlargement. Hypokalemia was an additional signal compared with all other antibiotics, and the other signals were not different compared with all other antibiotics and all other drugs. We detected new signals that were not listed on the drug labels of nine countries. However, further pharmacoepidemiologic research is needed to evaluate the causality of these signals. © Copyright: Yonsei University College of Medicine 2017

  13. A pan-African medium-range ensemble flood forecast system

    NASA Astrophysics Data System (ADS)

    Thiemig, Vera; Bisselink, Bernard; Pappenberger, Florian; Thielen, Jutta

    2015-04-01

    The African Flood Forecasting System (AFFS) is a probabilistic flood forecast system for medium- to large-scale African river basins, with lead times of up to 15 days. The key components are the hydrological model LISFLOOD, the African GIS database, the meteorological ensemble predictions of the ECMWF and critical hydrological thresholds. In this study the predictive capability is investigated, to estimate AFFS' potential as an operational flood forecasting system for the whole of Africa. This is done in a hindcast mode, by reproducing pan-African hydrological predictions for the whole year of 2003, in which important flood events were observed. Results were analysed in two ways, each with its individual objective. The first part of the analysis is of paramount importance for the assessment of AFFS as a flood forecasting system, as it focuses on the detection and prediction of flood events. Here, results were verified against reports from various flood archives such as the Dartmouth Flood Observatory, the Emergency Event Database, the NASA Earth Observatory and Reliefweb. The number of hits, false alerts and missed alerts, as well as the Probability of Detection, False Alarm Rate and Critical Success Index, were determined for various conditions (different regions, flood durations, average amount of annual precipitation, size of affected areas and mean annual discharge). The second part of the analysis complements the first by giving a basic insight into the prediction skill for the general streamflow. For this, hydrological predictions were compared against observations at 36 key locations across Africa, and the Continuous Rank Probability Skill Score (CRPSS), the limit of predictability and the reliability were calculated. Results showed that AFFS detected around 70 % of the reported flood events correctly. In particular, the system showed good performance in predicting riverine flood events of long duration (> 1 week) and large affected areas (> 10 000 km2) well in advance, whereas AFFS showed limitations for small-scale and short-duration flood events. The forecasts also showed on average a good reliability, and the CRPSS helped identify regions to focus on for future improvements. The case study for the flood event in March 2003 in the Sabi Basin (Zimbabwe and Mozambique) illustrated the good performance of AFFS in forecasting the timing and severity of the floods, gave an example of the clear and concise output products, and showed that the system is capable of producing flood warnings even in ungauged river basins. Hence, from a technical perspective, AFFS shows good prospects as an operational system, as it has demonstrated significant potential to contribute to the reduction of flood-related losses in Africa by providing national and international aid organizations with timely medium-range flood forecast information. However, issues related to the practical implementation will still need to be investigated.
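
    The detection scores mentioned above follow directly from the hit, false-alert and missed-alert counts; a minimal sketch of the standard contingency-table formulas is given below.

    ```python
    def forecast_verification(hits, false_alarms, misses):
        """Categorical verification scores for event detection."""
        pod = hits / (hits + misses)                    # Probability of Detection
        far = false_alarms / (hits + false_alarms)      # False Alarm Ratio (often loosely called rate)
        csi = hits / (hits + misses + false_alarms)     # Critical Success Index
        return pod, far, csi
    ```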

  14. Extracting semantically enriched events from biomedical literature

    PubMed Central

    2012-01-01

    Background Research into event-based text mining from the biomedical literature has been growing in popularity to facilitate the development of advanced biomedical text mining systems. Such technology permits advanced search, which goes beyond document or sentence-based retrieval. However, existing event-based systems typically ignore additional information within the textual context of events that can determine, amongst other things, whether an event represents a fact, hypothesis, experimental result or analysis of results, whether it describes new or previously reported knowledge, and whether it is speculated or negated. We refer to such contextual information as meta-knowledge. The automatic recognition of such information can permit the training of systems allowing finer-grained searching of events according to the meta-knowledge that is associated with them. Results Based on a corpus of 1,000 MEDLINE abstracts, fully manually annotated with both events and associated meta-knowledge, we have constructed a machine learning-based system that automatically assigns meta-knowledge information to events. This system has been integrated into EventMine, a state-of-the-art event extraction system, in order to create a more advanced system (EventMine-MK) that not only extracts events from text automatically, but also assigns five different types of meta-knowledge to these events. The meta-knowledge assignment module of EventMine-MK performs with macro-averaged F-scores in the range of 57-87% on the BioNLP’09 Shared Task corpus. EventMine-MK has been evaluated on the BioNLP’09 Shared Task subtask of detecting negated and speculated events. Our results show that EventMine-MK can outperform other state-of-the-art systems that participated in this task. Conclusions We have constructed the first practical system that extracts both events and associated, detailed meta-knowledge information from biomedical literature. The automatically assigned meta-knowledge information can be used to refine search systems, in order to provide an extra search layer beyond entities and assertions, dealing with phenomena such as rhetorical intent, speculations, contradictions and negations. This finer grained search functionality can assist in several important tasks, e.g., database curation (by locating new experimental knowledge) and pathway enrichment (by providing information for inference). To allow easy integration into text mining systems, EventMine-MK is provided as a UIMA component that can be used in the interoperable text mining infrastructure, U-Compare. PMID:22621266

  15. Extracting semantically enriched events from biomedical literature.

    PubMed

    Miwa, Makoto; Thompson, Paul; McNaught, John; Kell, Douglas B; Ananiadou, Sophia

    2012-05-23

    Research into event-based text mining from the biomedical literature has been growing in popularity to facilitate the development of advanced biomedical text mining systems. Such technology permits advanced search, which goes beyond document or sentence-based retrieval. However, existing event-based systems typically ignore additional information within the textual context of events that can determine, amongst other things, whether an event represents a fact, hypothesis, experimental result or analysis of results, whether it describes new or previously reported knowledge, and whether it is speculated or negated. We refer to such contextual information as meta-knowledge. The automatic recognition of such information can permit the training of systems allowing finer-grained searching of events according to the meta-knowledge that is associated with them. Based on a corpus of 1,000 MEDLINE abstracts, fully manually annotated with both events and associated meta-knowledge, we have constructed a machine learning-based system that automatically assigns meta-knowledge information to events. This system has been integrated into EventMine, a state-of-the-art event extraction system, in order to create a more advanced system (EventMine-MK) that not only extracts events from text automatically, but also assigns five different types of meta-knowledge to these events. The meta-knowledge assignment module of EventMine-MK performs with macro-averaged F-scores in the range of 57-87% on the BioNLP'09 Shared Task corpus. EventMine-MK has been evaluated on the BioNLP'09 Shared Task subtask of detecting negated and speculated events. Our results show that EventMine-MK can outperform other state-of-the-art systems that participated in this task. We have constructed the first practical system that extracts both events and associated, detailed meta-knowledge information from biomedical literature. The automatically assigned meta-knowledge information can be used to refine search systems, in order to provide an extra search layer beyond entities and assertions, dealing with phenomena such as rhetorical intent, speculations, contradictions and negations. This finer grained search functionality can assist in several important tasks, e.g., database curation (by locating new experimental knowledge) and pathway enrichment (by providing information for inference). To allow easy integration into text mining systems, EventMine-MK is provided as a UIMA component that can be used in the interoperable text mining infrastructure, U-Compare.

  16. Multiple Kernel Learning for Heterogeneous Anomaly Detection: Algorithm and Aviation Safety Case Study

    NASA Technical Reports Server (NTRS)

    Das, Santanu; Srivastava, Ashok N.; Matthews, Bryan L.; Oza, Nikunj C.

    2010-01-01

    The world-wide aviation system is one of the most complex dynamical systems ever developed and is generating data at an extremely rapid rate. Most modern commercial aircraft record several hundred flight parameters, including information from the guidance, navigation, and control systems, the avionics and propulsion systems, and the pilot inputs into the aircraft. These parameters may be continuous measurements or binary or categorical measurements recorded at one-second intervals for the duration of the flight. Currently, most approaches to aviation safety are reactive, meaning that they are designed to react to an aviation safety incident or accident. In this paper, we discuss a novel approach based on the theory of multiple kernel learning to detect potential safety anomalies in very large databases of discrete and continuous data from world-wide operations of commercial fleets. We pose a general anomaly detection problem which includes both discrete and continuous data streams, where we assume that the discrete streams have a causal influence on the continuous streams. We also assume that atypical sequences of events in the discrete streams can lead to off-nominal system performance. We discuss the application domain, novel algorithms, and also discuss results on real-world data sets. Our algorithm uncovers operationally significant events in high-dimensional data streams in the aviation industry which are not detectable using state-of-the-art methods.
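
    A rough, hedged sketch of combining kernels built from heterogeneous (continuous and discrete) streams for anomaly scoring: here the kernel weight is fixed, whereas true multiple kernel learning would optimize the weights jointly with the classifier, and a one-class SVM stands in for the authors' anomaly detector. The feature layouts and parameters are assumptions.

    ```python
    import numpy as np
    from sklearn.metrics.pairwise import rbf_kernel
    from sklearn.svm import OneClassSVM

    def combined_kernel_anomaly_scores(X_cont, X_disc, w=0.5, gamma=0.1):
        """Combine an RBF kernel over continuous features with a simple
        matching kernel over discrete (categorical) features, then score
        samples with a one-class SVM on the pre-computed kernel.
        X_cont: (n, d_cont) float array; X_disc: (n, d_disc) symbol array."""
        K_cont = rbf_kernel(X_cont, gamma=gamma)
        # fraction of matching discrete symbols between each pair of samples
        K_disc = (X_disc[:, None, :] == X_disc[None, :, :]).mean(axis=2)
        K = w * K_cont + (1 - w) * K_disc
        model = OneClassSVM(kernel="precomputed", nu=0.05).fit(K)
        return model.decision_function(K)   # low scores indicate anomalies
    ```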

  17. A true real-time, on-line security system for waterborne pathogen surveillance

    NASA Astrophysics Data System (ADS)

    Adams, John A.; McCarty, David L.

    2008-04-01

    Over the past several years many advances have been made to monitor potable water systems for toxic threats. However, the need for real-time, on-line systems to detect the malicious introduction of deadly pathogens still exists. Municipal water distribution systems, government facilities and buildings, and high profile public events remain vulnerable to terrorist-related biological contamination. After years of research and development, an instrument using multi-angle light scattering (MALS) technology has been introduced to achieve on-line, real-time detection and classification of a waterborne pathogen event. The MALS system utilizes a continuous slip stream of water passing through a flow cell in the instrument. A laser beam, focused perpendicular to the water flow, strikes particles as they pass through the beam generating unique light scattering patterns that are captured by photodetectors. Microorganisms produce patterns termed 'bio-optical signatures' which are comparable to fingerprints. By comparing these bio-optical signatures to an on-board database of microorganism patterns, detection and classification occurs within minutes. If a pattern is not recognized, it is classified as an 'unknown' and the unidentified contaminant is registered as a potential threat. In either case, if the contaminant exceeds a customer's threshold, the system will immediately alert personnel to the contamination event while extracting a sample for confirmation. The system, BioSentry TM, developed by JMAR Technologies is now field-tested and commercially available. BioSentry is cost effective, uses no reagents, operates remotely, and can be used for continuous microbial surveillance in many water treatment environments. Examples of HLS installations will be presented along with data from the US EPA NHSRC Testing and Evaluation Facility.
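
    A minimal sketch of the signature-library matching described above: a measured scattering pattern is correlated against stored bio-optical signatures and reported as 'unknown' if no reference exceeds a correlation threshold. The threshold, function name and data layout are illustrative assumptions, not the BioSentry implementation.

    ```python
    import numpy as np

    def classify_signature(pattern, library, min_corr=0.9):
        """Compare a measured light-scattering pattern (1-D array) against a
        dict of reference bio-optical signatures; return the best match or
        'unknown' if no reference correlates above the threshold."""
        best_name, best_corr = "unknown", min_corr
        for name, reference in library.items():
            corr = np.corrcoef(pattern, reference)[0, 1]
            if corr > best_corr:
                best_name, best_corr = name, corr
        return best_name, best_corr
    ```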

  18. Polarimetry Microlensing of Close-in Planetary Systems

    NASA Astrophysics Data System (ADS)

    Sajadian, Sedighe; Hundertmark, Markus

    2017-04-01

    A close-in giant planetary (CGP) system has a net polarization signal whose value varies depending on the orbital phase of the planet. This polarization signal is either caused by the stellar occultation or by reflected starlight from the surface of the orbiting planet. When the CGP system is located in the Galactic bulge, its polarization signal becomes too weak to be measured directly. One method for detecting and characterizing these weak polarization signatures due to distant CGP systems is gravitational microlensing. In this work, we focus on potential polarimetric observations of highly magnified microlensing events of CGP systems. When the lens is passing directly in front of the source star with its planetary companion, the polarimetric signature caused by the transiting planet is magnified. As a result, some distinct features in the polarimetry and light curves are produced. In the same way, microlensing amplifies the reflection-induced polarization signal. While the planet-induced perturbations are magnified whenever these polarimetric or photometric deviations vanish for a moment, the corresponding magnification factor of the polarization component(s) is related to the planet itself. Finding these exact times in the planet-induced perturbations helps us to characterize the planet. In order to evaluate the observability of such systems through polarimetric or photometric observations of high-magnification microlensing events, we simulate these events by considering confirmed CGP systems as their source stars and conclude that the efficiency for detecting the planet-induced signal with the state-of-the-art polarimetric instrument (FORS2/VLT) is less than 0.1%. Consequently, these planet-induced polarimetry perturbations can likely be detected under favorable conditions by the high-resolution and short-cadence polarimeters of the next generation.

  19. Analytical Assessment of a Gross Leakage Event Within the International Space Station (ISS) Node 2 Internal Active Thermal Control System (IATCS)

    NASA Technical Reports Server (NTRS)

    Holt, James M.; Clanton, Stephen E.

    1999-01-01

    Results of the International Space Station (ISS) Node 2 Internal Active Thermal Control System (IATCS) gross leakage analysis are presented for evaluating total leakage flowrates and volume discharge caused by a gross leakage event (i.e. open boundary condition). A Systems Improved Numerical Differencing Analyzer and Fluid Integrator (SINDA/FLUINT) thermal hydraulic mathematical model (THMM) representing the Node 2 IATCS was developed to simulate system performance under steady-state nominal conditions as well as the transient flow effects resulting from an open line exposed to ambient. The objective of the analysis was to determine the adequacy of the leak detection software in limiting the quantity of fluid lost during a gross leakage event to within an acceptable level.

  20. Analytical Assessment of a Gross Leakage Event Within the International Space Station (ISS) Node 2 Internal Active Thermal Control System (IATCS)

    NASA Technical Reports Server (NTRS)

    Holt, James M.; Clanton, Stephen E.

    2001-01-01

    Results of the International Space Station (ISS) Node 2 Internal Active Thermal Control System (IATCS) gross leakage analysis are presented for evaluating total leakage flow rates and volume discharge caused by a gross leakage event (i.e. open boundary condition). A Systems Improved Numerical Differencing Analyzer and Fluid Integrator (SINDA85/FLUINT) thermal hydraulic mathematical model (THMM) representing the Node 2 IATCS was developed to simulate system performance under steady-state nominal conditions as well as the transient flow effect resulting from an open line exposed to ambient. The objective of the analysis was to determine the adequacy of the leak detection software in limiting the quantity of fluid lost during a gross leakage event to within an acceptable level.

  1. Decision support environment for medical product safety surveillance.

    PubMed

    Botsis, Taxiarchis; Jankosky, Christopher; Arya, Deepa; Kreimeyer, Kory; Foster, Matthew; Pandey, Abhishek; Wang, Wei; Zhang, Guangfan; Forshee, Richard; Goud, Ravi; Menschik, David; Walderhaug, Mark; Woo, Emily Jane; Scott, John

    2016-12-01

    We have developed a Decision Support Environment (DSE) for medical experts at the US Food and Drug Administration (FDA). The DSE contains two integrated systems: The Event-based Text-mining of Health Electronic Records (ETHER) and the Pattern-based and Advanced Network Analyzer for Clinical Evaluation and Assessment (PANACEA). These systems assist medical experts in reviewing reports submitted to the Vaccine Adverse Event Reporting System (VAERS) and the FDA Adverse Event Reporting System (FAERS). In this manuscript, we describe the DSE architecture and key functionalities, and examine its potential contributions to the signal management process by focusing on four use cases: the identification of missing cases from a case series, the identification of duplicate case reports, retrieving cases for a case series analysis, and community detection for signal identification and characterization. Published by Elsevier Inc.

  2. Fault detection on a sewer network by a combination of a Kalman filter and a binary sequential probability ratio test

    NASA Astrophysics Data System (ADS)

    Piatyszek, E.; Voignier, P.; Graillot, D.

    2000-05-01

    One of the aims of sewer networks is the protection of the population against floods and the reduction of pollution discharged to the receiving water during rainy events. To meet these goals, managers have to instrument the sewer networks and set up real-time control systems. Unfortunately, a component fault (leading to intolerable behaviour of the system) or a sensor fault (deteriorating the process view and disturbing the local automatism) makes sewer network supervision delicate. In order to ensure adequate flow management during rainy events, it is essential to set up procedures capable of detecting and diagnosing these anomalies. This article introduces a real-time fault detection method, applicable to sewer networks, for the follow-up of rainy events. The method consists in comparing the sensor response with a forecast of this response. The forecast is provided by a model, more precisely by a state estimator: a Kalman filter. This Kalman filter provides not only a flow estimate but also an entity called the 'innovation'. In order to detect abnormal operations within the network, this innovation is analysed with Wald's binary sequential probability ratio test. Moreover, by crossing available information from several nodes of the network, a diagnosis of the detected anomalies is carried out. The method provided encouraging results during the analysis of several rain events on the sewer network of Seine-Saint-Denis County, France.
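
    A minimal sketch of the detection step: a binary SPRT applied to Kalman-filter innovations, testing a zero-mean innovation (normal operation) against a shifted mean (fault). The shift magnitude, error probabilities and noise level below are illustrative parameters, not those of the cited study.

    ```python
    import numpy as np

    def wald_sprt(innovations, sigma, mu1, alpha=0.01, beta=0.01):
        """Binary sequential probability ratio test on Kalman-filter innovations:
           H0: innovations ~ N(0, sigma^2)    (normal operation)
           H1: innovations ~ N(mu1, sigma^2)  (fault / anomalous behaviour)
        Returns 'fault', 'normal', or 'undecided'."""
        upper = np.log((1 - beta) / alpha)     # accept H1 above this bound
        lower = np.log(beta / (1 - alpha))     # accept H0 below this bound
        llr = 0.0
        for nu in innovations:
            # Gaussian log-likelihood ratio for one innovation sample
            llr += (mu1 * nu - 0.5 * mu1 ** 2) / sigma ** 2
            if llr >= upper:
                return "fault"
            if llr <= lower:
                return "normal"
        return "undecided"
    ```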

  3. Mining patterns in persistent surveillance systems with smart query and visual analytics

    NASA Astrophysics Data System (ADS)

    Habibi, Mohammad S.; Shirkhodaie, Amir

    2013-05-01

    In Persistent Surveillance Systems (PSS), the ability to detect and characterize events geospatially helps take pre-emptive steps to counter an adversary's actions. The interactive Visual Analytic (VA) model offers a platform for pattern investigation and reasoning to comprehend and/or predict such occurrences. The need to identify and offset these threats requires collecting information from diverse sources, which brings with it increasingly abstract data. These abstract semantic data have a degree of inherent uncertainty and imprecision, and require a method of filtering before further processing. In this paper, we introduce an approach based on the Vector Space Modeling (VSM) technique for classification of spatiotemporal sequential patterns of group activities. The feature vectors consist of an array of attributes extracted from sensor-generated semantically annotated messages. To facilitate proper similarity matching and detection of time-varying spatiotemporal patterns, a temporal Dynamic Time Warping (DTW) method together with a Gaussian Mixture Model (GMM) fitted by Expectation Maximization (EM) is introduced. DTW is intended for detection of event patterns from neighborhood-proximity semantic frames derived from an established ontology. GMM with EM, on the other hand, is employed as a Bayesian probabilistic model to estimate the probability of events associated with a detected spatiotemporal pattern. In this paper, we present a new visual analytic tool for testing and evaluation of group activities detected under this control scheme. Experimental results demonstrate the effectiveness of the proposed approach for discovery and matching of subsequences within the sequentially generated pattern space of our experiments.
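
    For reference, a plain dynamic-programming implementation of the DTW distance used for this kind of sequence matching; the local distance function is an assumption and no warping-window constraint is applied.

    ```python
    import numpy as np

    def dtw_distance(a, b, dist=lambda x, y: abs(x - y)):
        """Classic dynamic-time-warping distance between two sequences."""
        n, m = len(a), len(b)
        D = np.full((n + 1, m + 1), np.inf)
        D[0, 0] = 0.0
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                cost = dist(a[i - 1], b[j - 1])
                # best of insertion, deletion, or match from the previous cells
                D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
        return D[n, m]
    ```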

  4. Multiple-modality program for standoff detection of roadside hazards

    NASA Astrophysics Data System (ADS)

    Williams, Kathryn; Middleton, Seth; Close, Ryan; Luke, Robert H.; Suri, Rajiv

    2016-05-01

    The U.S. Army RDECOM CERDEC Night Vision and Electronic Sensors Directorate (NVESD) is executing a program to assess the performance of a variety of sensor modalities for standoff detection of roadside explosive hazards. The program objective is to identify an optimal sensor or combination of fused sensors to incorporate with autonomous detection algorithms into a system of systems for use in future route clearance operations. This paper provides an overview of the program, including a description of the sensors under consideration, sensor test events, and ongoing data analysis.

  5. Smart Distributed Sensor Fields: Algorithms for Tactical Sensors

    DTIC Science & Technology

    2013-12-23

    ranging from detecting, identifying, localizing/tracking interesting events, discarding irrelevant data, to providing actionable intelligence currently requires significant human supervision. Human ... view of the overall system. The main idea is to reduce the problem to the relevant data, and then reason intelligently over that data. This process ...

  6. Support Vector Machine Model for Automatic Detection and Classification of Seismic Events

    NASA Astrophysics Data System (ADS)

    Barros, Vesna; Barros, Lucas

    2016-04-01

    The automated processing of multiple seismic signals to detect, localize and classify seismic events is a central tool in both natural hazards monitoring and nuclear treaty verification. However, false detections and missed detections caused by station noise and incorrect classification of arrivals are still an issue and the events are often unclassified or poorly classified. Thus, machine learning techniques can be used in automatic processing for classifying the huge database of seismic recordings and provide more confidence in the final output. Applied in the context of the International Monitoring System (IMS) - a global sensor network developed for the Comprehensive Nuclear-Test-Ban Treaty (CTBT) - we propose a fully automatic method for seismic event detection and classification based on a supervised pattern recognition technique called the Support Vector Machine (SVM). According to Kortström et al., 2015, the advantages of using SVM are handleability of large number of features and effectiveness in high dimensional spaces. Our objective is to detect seismic events from one IMS seismic station located in an area of high seismicity and mining activity and classify them as earthquakes or quarry blasts. It is expected to create a flexible and easily adjustable SVM method that can be applied in different regions and datasets. Taken a step further, accurate results for seismic stations could lead to a modification of the model and its parameters to make it applicable to other waveform technologies used to monitor nuclear explosions such as infrasound and hydroacoustic waveforms. As an authorized user, we have direct access to all IMS data and bulletins through a secure signatory account. A set of significant seismic waveforms containing different types of events (e.g. earthquake, quarry blasts) and noise is being analysed to train the model and learn the typical pattern of the signal from these events. Moreover, comparing the performance of the support-vector network to various classical learning algorithms used before in seismic detection and classification is an essential final step to analyze the advantages and disadvantages of the model.
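
    A hedged sketch of such an SVM classifier using scikit-learn; the RBF kernel, its hyperparameters and the feature set are assumptions, not the configuration used with IMS data.

    ```python
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    def train_event_classifier(X, y):
        """Train an RBF-kernel support vector classifier on labeled seismic features.
        X: waveform-derived feature vectors (e.g. spectral ratios, amplitudes);
        y: labels such as 'earthquake' or 'quarry blast' (both assumed inputs)."""
        clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0, gamma="scale"))
        clf.fit(X, y)
        return clf
    ```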

  7. Automatic near-real-time detection of CMEs in Mauna Loa K-Cor coronagraph images

    NASA Astrophysics Data System (ADS)

    Thompson, W. T.; St Cyr, O. C.; Burkepile, J.; Posner, A.

    2017-12-01

    A simple algorithm has been developed to detect the onset of coronal mass ejections (CMEs), together with an estimate of their speed, in near-real-time using images of the linearly polarized white-light solar corona taken by the K-Cor telescope at the Mauna Loa Solar Observatory (MLSO). The algorithm used is a variation on the Solar Eruptive Event Detection System (SEEDS) developed at George Mason University. The algorithm was tested against K-Cor data taken between 29 April 2014 and 20 February 2017, on days which the MLSO website marked as containing CMEs. This resulted in testing of 139 days worth of data containing 171 CMEs. The detection rate varied from close to 80% in 2014-2015 when solar activity was high, down to as low as 20-30% in 2017 when activity was low. The difference in effectiveness with solar cycle is attributed to the difference in relative prevalence of strong CMEs between active and quiet periods. There were also twelve false detections during this time period, leading to an average false detection rate of 8.6% on any given day. However, half of the false detections were clustered into two short periods of a few days each when special conditions prevailed to increase the false detection rate. The K-Cor data were also compared with major Solar Energetic Particle (SEP) storms during this time period. There were three SEP events detected either at Earth or at one of the two STEREO spacecraft where K-Cor was observing during the relevant time period. The K-Cor CME detection algorithm successfully generated alerts for two of these events, with lead times of 1-3 hours before the SEP onset at 1 AU. The third event was not detected by the automatic algorithm because of the unusually broad width of the CME in position angle.

  8. The Whipple Mission: Exploring the Kuiper Belt and the Oort Cloud

    NASA Astrophysics Data System (ADS)

    Alcock, Charles; Brown, Michael; Gauron, Tom; Heneghan, Cate; Holman, Matthew; Kenter, Almus; Kraft, Ralph; Livingstone, John; Murray-Clay, Ruth; Nulsen, Paul; Payne, Matthew; Schlichting, Hilke; Trangsrud, Amy; Vrtilek, Jan; Werner, Michael

    2015-11-01

    Whipple will characterize the small body populations of the Kuiper Belt and the Oort Cloud with a blind occultation survey, detecting objects when they briefly (~1 second) interrupt the light from background stars, allowing the detection of much more distant and/or smaller objects than can be seen in reflected sunlight. Whipple will reach much deeper into the unexplored frontier of the outer solar system than any other mission, current or proposed. Whipple will look back to the dawn of the solar system by discovering its most remote bodies where primordial processes left their imprint. Specifically, Whipple will monitor large numbers of stars at high cadences (~12,000 stars at 20 Hz to examine Kuiper Belt events; as many as ~36,000 stars at 5 Hz to explore deep into the Oort Cloud, where events are less frequent). Analysis of the detected events will allow us to determine the size spectrum of bodies in the Kuiper Belt with radii as small as ~1 km. This will allow the testing of models of the growth and later collisional erosion of planetesimals in the early solar system. Whipple will explore the Oort Cloud, potentially detecting objects as far out as ~10,000 AU. This will be the first direct exploration of the Oort Cloud since the original hypothesis of 1950. Whipple is a Discovery class mission that was proposed to NASA in response to the 2014 Announcement of Opportunity. The mission is being developed jointly by the Smithsonian Astrophysical Observatory, the Jet Propulsion Laboratory, and Ball Aerospace & Technologies, with telescope optics from L-3 Integrated Optical Systems and imaging sensors from Teledyne Imaging Sensors.

  9. The sequentially discounting autoregressive (SDAR) method for on-line automatic seismic event detecting on long term observation

    NASA Astrophysics Data System (ADS)

    Wang, L.; Toshioka, T.; Nakajima, T.; Narita, A.; Xue, Z.

    2017-12-01

    In recent years, more and more Carbon Capture and Storage (CCS) studies have focused on seismicity monitoring. For the safety management of geological CO2 storage at Tomakomai, Hokkaido, Japan, an Advanced Traffic Light System (ATLS) that combines different seismic measurements (magnitudes, phases, distributions, etc.) is proposed for controlling injection. The primary task for ATLS is seismic event detection in a long-term, continuously recorded time series. Because the time-varying Signal to Noise Ratio (SNR) of a long-term record and the uneven energy distribution of seismic event waveforms make automatic detection more difficult, an improved probabilistic autoregressive (AR) method for automatic seismic event detection is applied in this work. This algorithm, called sequentially discounting AR learning (SDAR), identifies seismic events in the time series through Change Point Detection (CPD) on the seismic record. In this method, an anomalous signal (a seismic event) is treated as a change point in the time series (the seismic record): the statistical model of the signal in the neighborhood of the event changes because of the event's occurrence, so SDAR aims to find the statistical irregularities of the record through CPD. SDAR has three advantages. 1. Noise robustness: SDAR does not rely on waveform attributes (such as amplitude, energy, or polarization) for detection, so it is an appropriate technique for low-SNR data. 2. Real-time estimation: when new data appear in the record, the probability distribution models are automatically updated by SDAR for on-line processing. 3. Discounting: SDAR introduces a discounting parameter that decreases the influence of older observations on the current model estimates, making it a robust algorithm for non-stationary signal processing. With these three advantages, the SDAR method can handle non-stationary, time-varying long-term series and achieve real-time monitoring. Finally, we apply SDAR to a synthetic model and to Tomakomai Ocean Bottom Cable (OBC) baseline data to demonstrate the feasibility and advantages of our method.
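
    A minimal sketch of the SDAR idea follows: an AR model is updated online with a discounting (forgetting) factor, and the squared one-step prediction error serves as a change-point score. The model order, discounting rate, and synthetic record are assumptions for illustration; the exact SDAR update used in the study is not reproduced here.

    # SDAR-style change-point score: discounted online AR fit, scored by prediction error.
    import numpy as np

    def sdar_scores(x, p=5, r=0.01):
        n = len(x)
        w = np.zeros(p)                  # AR coefficients
        P = np.eye(p) * 1e3              # inverse correlation matrix (RLS state)
        scores = np.zeros(n)
        for t in range(p, n):
            phi = x[t - p:t][::-1]       # most recent p samples
            err = x[t] - w @ phi
            scores[t] = err ** 2         # anomaly score: squared prediction error
            lam = 1.0 - r                # discounting (forgetting) factor
            k = P @ phi / (lam + phi @ P @ phi)
            w = w + k * err
            P = (P - np.outer(k, phi @ P)) / lam
        return scores

    # Synthetic "seismic record": noise with a small embedded wavelet (the event).
    rng = np.random.default_rng(1)
    x = rng.normal(0, 0.5, 3000)
    t = np.arange(200)
    x[1500:1700] += 3.0 * np.sin(2 * np.pi * 0.05 * t) * np.exp(-t / 80.0)

    s = sdar_scores(x)
    print("change-point score peaks near sample:", int(np.argmax(s)))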

  10. 33 CFR 117.743 - Rahway River.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... lights anytime the bridge is not in the full open position. (d) An infrared sensor system shall be... the infrared sensor system. (g) If the infrared sensors detect a vessel or other obstruction.... (j) In the event of a failure, or obstruction to the infrared sensor system, the bridge shall...

  11. 33 CFR 117.743 - Rahway River.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... lights anytime the bridge is not in the full open position. (d) An infrared sensor system shall be... the infrared sensor system. (g) If the infrared sensors detect a vessel or other obstruction.... (j) In the event of a failure, or obstruction to the infrared sensor system, the bridge shall...

  12. 33 CFR 117.743 - Rahway River.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... lights anytime the bridge is not in the full open position. (d) An infrared sensor system shall be... the infrared sensor system. (g) If the infrared sensors detect a vessel or other obstruction.... (j) In the event of a failure, or obstruction to the infrared sensor system, the bridge shall...

  13. 33 CFR 117.743 - Rahway River.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... lights anytime the bridge is not in the full open position. (d) An infrared sensor system shall be... the infrared sensor system. (g) If the infrared sensors detect a vessel or other obstruction.... (j) In the event of a failure, or obstruction to the infrared sensor system, the bridge shall...

  14. 33 CFR 117.743 - Rahway River.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... lights anytime the bridge is not in the full open position. (d) An infrared sensor system shall be... the infrared sensor system. (g) If the infrared sensors detect a vessel or other obstruction.... (j) In the event of a failure, or obstruction to the infrared sensor system, the bridge shall...

  15. Establishing an Environmental Scanning/Forecasting System to Augment College and University Planning.

    ERIC Educational Resources Information Center

    Morrison, James L.

    1987-01-01

    The major benefit of an environmental scanning/forecasting system is in providing critical information for strategic planning. Such a system allows the institution to detect social, technological, economic, and political trends and potential events. The environmental scanning database developed by United Way of America is described. (MLW)

  16. A model of human event detection in multiple process monitoring situations

    NASA Technical Reports Server (NTRS)

    Greenstein, J. S.; Rouse, W. B.

    1978-01-01

    It is proposed that human decision making in many multi-task situations might be modeled in terms of the manner in which the human detects events related to his tasks and the manner in which he allocates his attention among his tasks once he feels events have occurred. A model of human event detection performance in such a situation is presented. An assumption of the model is that, in attempting to detect events, the human generates the probability that events have occurred. Discriminant analysis is used to model the human's generation of these probabilities. An experimental study of human event detection performance in a multiple process monitoring situation is described and the application of the event detection model to this situation is addressed. The experimental study employed a situation in which subjects simultaneously monitored several dynamic processes for the occurrence of events and made yes/no decisions on the presence of events in each process. Providing the event detection model with the same information displayed to the experimental subjects allows comparison of the model's performance with the performance of the subjects.
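
    The discriminant-analysis step described above can be sketched as follows: a linear discriminant model is fitted to labelled process windows and then returns an event probability for a new window. The two summary features and all numerical values are assumptions for demonstration only.

    # Sketch: discriminant analysis producing an event probability for a monitored process.
    import numpy as np
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

    rng = np.random.default_rng(0)

    # Hypothetical training data: summary features of a displayed process window
    # (e.g. residual mean and slope), labelled 1 when an event was present.
    no_event = rng.normal([0.0, 0.0], [1.0, 0.2], size=(500, 2))
    event = rng.normal([1.5, 0.3], [1.0, 0.2], size=(500, 2))
    X = np.vstack([no_event, event])
    y = np.array([0] * 500 + [1] * 500)

    lda = LinearDiscriminantAnalysis().fit(X, y)

    # Estimated probability that an event has occurred in a new observation window.
    window = np.array([[1.1, 0.25]])
    print("P(event) =", lda.predict_proba(window)[0, 1])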

  17. A habituation based approach for detection of visual changes in surveillance camera

    NASA Astrophysics Data System (ADS)

    Sha'abani, M. N. A. H.; Adan, N. F.; Sabani, M. S. M.; Abdullah, F.; Nadira, J. H. S.; Yasin, M. S. M.

    2017-09-01

    This paper investigates a habituation based approach for detecting visual changes using video surveillance systems in a passive environment. Various techniques have been introduced for dynamic environments, such as motion detection, object classification, and behaviour analysis. However, in a passive environment most of the scenes recorded by the surveillance system are normal, so running a complex analysis all the time is computationally expensive, especially at high video resolution. Thus, a mechanism of attention is required, where the system only responds to an abnormal event. This paper proposes a novelty detection mechanism for detecting visual changes and a habituation based approach to measure the level of novelty. The objective of the paper is to investigate the feasibility of the habituation based approach in detecting visual changes. Experimental results show that the approach is able to accurately detect the presence of novelty as deviations from the learned knowledge.
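
    A minimal sketch of a habituation-style novelty score is given below, assuming a simple frame-differencing front end: a slowly updated background model measures change, and a habituation level that grows with repeated stimulation suppresses responses to familiar changes, so only unfamiliar changes score highly. The update rule and parameter values are illustrative assumptions, not the paper's model.

    # Habituation-style novelty score on a synthetic frame stream.
    import numpy as np

    def novelty_stream(frames, alpha=0.05, tau=0.9, gain=1.0):
        background = frames[0].astype(float)
        habituation = 0.0
        for frame in frames[1:]:
            diff = np.abs(frame.astype(float) - background).mean()
            response = gain * diff / (1.0 + habituation)   # habituated response
            habituation = tau * habituation + diff          # grows with repeated stimuli
            background = (1 - alpha) * background + alpha * frame
            yield response

    # Synthetic 32x32 "video": static random scene, then a persistent change from frame 50.
    rng = np.random.default_rng(0)
    frames = [rng.integers(0, 10, (32, 32)) for _ in range(100)]
    for f in frames[50:]:
        f[8:16, 8:16] += 100

    scores = list(novelty_stream(frames))
    print("peak novelty at frame:", int(np.argmax(scores)) + 1)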

  18. Spacecraft-to-Earth Communications for Juno and Mars Science Laboratory Critical Events

    NASA Technical Reports Server (NTRS)

    Soriano, Melissa; Finley, Susan; Jongeling, Andre; Fort, David; Goodhart, Charles; Rogstad, David; Navarro, Robert

    2012-01-01

    Deep space communications typically utilize closed-loop receivers and Binary Phase Shift Keying (BPSK) or Quadrature Phase Shift Keying (QPSK). Critical spacecraft events include orbit insertion and entry, descent, and landing (EDL). During such events, low-gain antennas result in a low signal-to-noise ratio, and high dynamics such as parachute deployment or spin induce Doppler shifts. Open-loop receivers and Multiple Frequency Shift Keying (MFSK) are therefore used during critical events, and the EDL Data Analysis (EDA) system detects the tones in real time.

  19. ElarmS Earthquake Early Warning System: 2017 Performance and New ElarmS Version 3.0 (E3)

    NASA Astrophysics Data System (ADS)

    Chung, A. I.; Henson, I. H.; Allen, R. M.; Hellweg, M.; Neuhauser, D. S.

    2017-12-01

    The ElarmS earthquake early warning (EEW) system has been successfully detecting earthquakes throughout California since 2007. ElarmS version 2.0 (E2) is one of the three algorithms contributing alerts to ShakeAlert, a public EEW system being developed by the USGS in collaboration with UC Berkeley, Caltech, University of Washington, and University of Oregon. E2 began operating in test mode in the Pacific Northwest in 2013, and since April of this year E2 has been contributing real-time alerts from Oregon and Washington to the ShakeAlert production prototype system as part of the ShakeAlert roll-out throughout the West Coast. Since it began operating west-coast-wide, E2 has correctly alerted on 5 events that matched ANSS catalog events with M≥4, missed 1 event with M≥4, and incorrectly created alerts for 5 false events with M≥4. The most recent version of the algorithm, ElarmS version 3.0 (E3), is a significant improvement over E2. It addresses some of the most problematic causes of false events for which E2 produced alerts, without impacting reliability in terms of matched and missed events. Of the 5 false events that were generated by E2 since April, 4 would have been suppressed by E3. In E3, we have added a filterbank teleseismic filter. By analyzing the amplitude of the waveform filtered in various passbands, it is possible to distinguish between local and teleseismic events. We have also added a series of checks to validate triggers and filter out spurious and S-wave triggers. Additional improvements to the waveform associator also improve detections. In this presentation, we describe the improvements and compare the performance of the current production (E2) and development (E3) versions of ElarmS over the past year. The ShakeAlert project is now working through a streamlining process to identify the best components of various algorithms and merge them. The ElarmS team is participating in this effort and we anticipate that much of E3 will continue in the final system.
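
    The filterbank teleseismic check is described above only at a high level; the sketch below illustrates the general idea of comparing band-limited amplitudes of the same waveform, not the ElarmS E3 implementation. The passbands, filter order, and ratio threshold are assumptions.

    # Filterbank-style check: teleseisms carry relatively more long-period energy.
    import numpy as np
    from scipy.signal import butter, sosfiltfilt

    def band_peak(x, fs, lo, hi):
        sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
        return np.max(np.abs(sosfiltfilt(sos, x)))

    def looks_teleseismic(x, fs, ratio_threshold=5.0):
        low = band_peak(x, fs, 0.1, 1.0)    # long-period band
        high = band_peak(x, fs, 2.0, 8.0)   # short-period band
        return (low / max(high, 1e-12)) > ratio_threshold

    fs = 100.0
    t = np.arange(0, 60, 1 / fs)
    rng = np.random.default_rng(0)
    local = np.sin(2 * np.pi * 5.0 * t) * np.exp(-t / 5) + 0.05 * rng.normal(size=t.size)
    tele = np.sin(2 * np.pi * 0.3 * t) * np.exp(-t / 20) + 0.05 * rng.normal(size=t.size)
    print("local flagged as teleseism:", looks_teleseismic(local, fs))
    print("tele  flagged as teleseism:", looks_teleseismic(tele, fs))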

  20. Transient Volcano Deformation Event Detection over Variable Spatial Scales in Alaska

    NASA Astrophysics Data System (ADS)

    Li, J. D.; Rude, C. M.; Gowanlock, M.; Herring, T.; Pankratius, V.

    2016-12-01

    Transient deformation events driven by volcanic activity can be monitored using increasingly dense networks of continuous Global Positioning System (GPS) ground stations. The wide spatial extent of GPS networks, the large number of GPS stations, and the spatially and temporally varying scale of deformation events result in the mixing of signals from multiple sources. Typical analysis then necessitates manual identification of times and regions of volcanic activity for further study and the careful tuning of algorithmic parameters to extract possible transient events. Here we present a computer-aided discovery system that facilitates the discovery of potential transient deformation events at volcanoes by providing a framework for selecting varying spatial regions of interest and for tuning the analysis parameters. This site specification step in the framework reduces the spatial mixing of signals from different volcanic sources before applying filters to remove interfering signals originating from other geophysical processes. We analyze GPS data recorded by the Plate Boundary Observatory network and volcanic activity logs from the Alaska Volcano Observatory to search for and characterize transient inflation events in Alaska. We find 3 transient inflation events between 2008 and 2015 at the Akutan, Westdahl, and Shishaldin volcanoes in the Aleutian Islands. The inflation event detected in the first half of 2008 at Akutan is validated by other studies, while the inflation events observed in early 2011 at Westdahl and in early 2013 at Shishaldin are previously unreported. Our analysis framework also incorporates modelling of the transient inflation events and enables a comparison of different magma chamber inversion models. Here, we also estimate the magma sources that best describe the deformation observed by the GPS stations at Akutan, Westdahl, and Shishaldin. We acknowledge support from NASA AIST-NNX15AG84G (PI: V. Pankratius).

  1. Results of the first continuous meteor head echo survey at polar latitudes

    NASA Astrophysics Data System (ADS)

    Schult, Carsten; Stober, Gunter; Janches, Diego; Chau, Jorge L.

    2017-11-01

    We present the first quasi-continuous meteor head echo measurements obtained during a period of over two years using the Middle Atmosphere ALOMAR Radar System (MAARSY). The measurements yield information on the altitude, trajectory, vector velocity, radar cross section, deceleration and dynamical mass of every single event. The large statistical sample of nearly one million meteor head echo detections provides an excellent overview of the elevation, altitude, velocity and daily count rate distributions during different times of the year at polar latitudes. Only 40% of the meteors were detected within the full width half maximum of the specific sporadic meteor sources. Our observations of the sporadic meteors are compared to observations made with other radar systems and to a meteor input function (MIF). The best way to compare different radar systems is by comparing the radar cross section (RCS), which is the main detection criterion for each system. In this study we aim to compare our observations with a MIF, which provides information only about the meteoroid mass. Thus, we are using a statistical approach for the elevation- and velocity-dependent visibility and a specific mass selection. The predicted absolute count rates from the MIF are in good agreement with the observations when it is assumed that the radar system is only sensitive to meteoroids with masses higher than one microgram. The analysis of the dynamic masses seems to be consistent with this assumption, since the count rates of events with smaller masses are low and decrease even further when only events with relatively small errors are used.

  2. Monitoring the Earth's Atmosphere with the Global IMS Infrasound Network

    NASA Astrophysics Data System (ADS)

    Brachet, Nicolas; Brown, David; Mialle, Pierrick; Le Bras, Ronan; Coyne, John; Given, Jeffrey

    2010-05-01

    The Comprehensive Nuclear-Test-Ban Treaty Organization (CTBTO) is tasked with monitoring compliance with the Comprehensive Nuclear-Test-Ban Treaty (CTBT) which bans nuclear weapon explosions underground, in the oceans, and in the atmosphere. The verification regime includes a globally distributed network of seismic, hydroacoustic, infrasound and radionuclide stations which collect and transmit data to the International Data Centre (IDC) in Vienna, Austria shortly after the data are recorded at each station. The infrasound network defined in the Protocol of the CTBT comprises 60 infrasound array stations. Each array is built according to the same technical specifications, it is typically composed of 4 to 9 sensors, with 1 to 3 km aperture geometry. At the end of 2000 only one infrasound station was transmitting data to the IDC. Since then, 41 additional stations have been installed and 70% of the infrasound network is currently certified and contributing data to the IDC. This constitutes the first global infrasound network ever built with such a large and uniform distribution of stations. Infrasound data at the IDC are processed at the station level using the Progressive Multi-Channel Correlation (PMCC) method for the detection and measurement of infrasound signals. The algorithm calculates the signal correlation between sensors at an infrasound array. If the signal is sufficiently correlated and consistent over an extended period of time and frequency range a detection is created. Groups of detections are then categorized according to their propagation and waveform features, and a phase name is assigned for infrasound, seismic or noise detections. The categorization complements the PMCC algorithm to avoid overwhelming the IDC automatic association algorithm with false alarm infrasound events. Currently, 80 to 90% of the detections are identified as noise by the system. Although the noise detections are not used to build events in the context of CTBT monitoring, they represent valuable data for other civil applications like monitoring of natural hazards (volcanic activity, storm tracking) and climate change. Non-noise detections are used in network processing at the IDC along with seismic and hydroacoustic technologies. The arrival phases detected on the three waveform technologies may be combined and used for locating events in an automatically generated bulletin of events. This automatic event bulletin is routinely reviewed by analysts during the interactive review process. However, the fusion of infrasound data with the other waveform technologies has only recently (in early 2010) become part of the IDC operational system, after a software development and testing period that began in 2004. The build-up of the IMS infrasound network, the recent developments of the IDC infrasound software, and the progress accomplished during the last decade in the domain of real-time atmospheric modelling have allowed better understanding of infrasound signals and identification of a growing data set of ground-truth sources. These infragenic sources originate from natural or man-made sources. Some of the detected signals are emitted by local or regional phenomena recorded by a single IMS infrasound station: man-made cultural activity, wind farms, aircraft, artillery exercises, ocean surf, thunderstorms, rumbling volcanoes, iceberg calving, aurora, avalanches. Other signals may be recorded by several IMS infrasound stations at larger distances: ocean swell, sonic booms, and mountain associated waves. 
Only a small fraction of events meet the event definition criteria considering the Treaty verification mission of the Organization. Candidate event types for the IDC Reviewed Event Bulletin include atmospheric or surface explosions, meteor explosions, rocket launches, signals from large earthquakes and explosive volcanic eruptions.
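
    A much-simplified sketch of correlation-based array detection in the spirit of PMCC is shown below: the waveforms from all sensor pairs in a window are cross-correlated and a detection is declared when the mean peak correlation is high. Real PMCC additionally tests the consistency of inter-sensor delays across sub-arrays and frequency bands; the synthetic data and threshold here are assumptions.

    # Simplified array-consistency measure: mean peak pairwise cross-correlation in a window.
    import numpy as np

    def window_consistency(traces, max_lag=50):
        """Mean of peak normalized cross-correlations over all sensor pairs."""
        n = len(traces)
        peaks = []
        for i in range(n):
            for j in range(i + 1, n):
                a = (traces[i] - traces[i].mean()) / (traces[i].std() + 1e-12)
                b = (traces[j] - traces[j].mean()) / (traces[j].std() + 1e-12)
                cc = np.correlate(a, b, mode="full") / len(a)
                mid = len(cc) // 2
                peaks.append(np.max(cc[mid - max_lag:mid + max_lag + 1]))
        return float(np.mean(peaks))

    # Synthetic 4-element array: a common signal arriving with small delays plus noise.
    rng = np.random.default_rng(0)
    sig = np.convolve(rng.normal(size=2000), np.ones(20) / 20, mode="same")
    noise_only = [rng.normal(size=2000) for _ in range(4)]
    with_signal = [np.roll(sig, d) + 0.3 * rng.normal(size=2000) for d in (0, 3, 5, 8)]

    print("noise window consistency :", round(window_consistency(noise_only), 2))
    print("signal window consistency:", round(window_consistency(with_signal), 2))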

  3. Simultaneous optical and meteor head echo measurements using the Middle Atmosphere Alomar Radar System (MAARSY): Data collection and preliminary analysis

    NASA Astrophysics Data System (ADS)

    Brown, P.; Stober, G.; Schult, C.; Krzeminski, Z.; Cooke, W.; Chau, J. L.

    2017-07-01

    The initial results of a two year simultaneous optical-radar meteor campaign are described. Analysis of 105 double-station optical meteors having plane of sky intersection angles greater than 5° and trail lengths in excess of 2 km also detected by the Middle Atmosphere Alomar Radar System (MAARSY) as head echoes was performed. These events show a median deviation in radiants between radar and optical determinations of 1.5°, with 1/3 of events having radiant agreement to less than one degree. MAARSY tends to record average speeds roughly 0.5 km/s higher and heights roughly 1.3 km higher than the optical records, in part due to the higher sensitivity of MAARSY as compared to the optical instruments. More than 98% of all head echoes are not detected with the optical system. Using this non-detection ratio and the known limiting sensitivity of the cameras, we estimate that the limiting meteoroid detection mass of MAARSY is in the 10^-9 to 10^-10 kg range (astronomical limiting meteor magnitudes of +11 to +12), appropriate to speeds from 30 to 60 km/s. There is a clear trend of higher peak RCS for brighter meteors between 35 and -30 dBsm. For meteors with similar magnitudes, the MAARSY head echo radar cross-section is larger at higher speeds. Brighter meteors at fixed heights and similar speeds have consistently, on average, larger RCS values, in accordance with established scattering theory. However, our data show RCS ∝ v/2, much weaker than the normally assumed RCS ∝ v^3, a consequence of our requiring head echoes to also be detectable optically. Most events show a smooth variation of RCS with height broadly following the light production behavior. A significant minority of meteors show large variations in RCS relative to the optical light curve over common height intervals, reflecting fragmentation or possibly differential ablation. No optically detected meteor occurring in the main radar beam and at times when the radar was collecting head echo data went unrecorded by MAARSY. Thus there does not appear to be any large scale bias in MAARSY head echo detections for the (comparatively) larger optical events in our dataset, even at very low speeds.

  4. A flare event of the long-period RS Canum Venaticorum system IM Pegasi

    NASA Technical Reports Server (NTRS)

    Buzasi, Derek L.; Ramsey, Lawrence W.; Huenemoerder, David P.

    1987-01-01

    The characteristics of a flare event detected on the long-period RS CVn system IM Pegasi are reported. The low-resolution spectra show enhancements of up to a factor of five in some emission lines. All of the ultraviolet emission lines normally visible are enhanced significantly more than the normal 30% rotational modulation. Emission fluxes of both the quiescent state and the flare event are used to construct models of the density and temperature variation with height. These models reveal a downward shift of the transition region during the flare. Scaled models of the quiet and flaring solar outer atmosphere are used to estimate the filling factor of the flare event at about 30 percent of the stellar surface. The pattern of line enhancements in the flare is the same as in a previously observed event on Lambda Andromedae.

  5. Inexpensive read-out for coincident electron spectroscopy with a transmission electron microscope at nanometer scale using micro channel plates and multistrip anodes

    NASA Astrophysics Data System (ADS)

    Hollander, R. W.; Bom, V. R.; van Eijk, C. W. E.; Faber, J. S.; Hoevers, H.; Kruit, P.

    1994-09-01

    The elemental composition of a sample at nanometer scale is determined by measurement of the characteristic energy of Auger electrons, emitted in coincidence with incoming primary electrons from a microbeam in a scanning transmission electron microscope (STEM). Single electrons are detected with position sensitive detectors, consisting of MicroChannel Plates (MCP) and MultiStrip Anodes (MSA), one for the energy of the Auger electrons (Auger-detector) and one for the energy loss of primary electrons (EELS-detector). The MSAs are sensed with LeCroy 2735DC preamplifiers. The fast readout is based on LeCroy's PCOS III system. On detection of a coincidence (Event), the Auger and EELS energy data are combined with timing data into an Event word. Event words are stored in list mode in a VME memory module. Blocks of Event words are scanned by transputers in VME and two-dimensional energy histograms are filled using the timing information to obtain a maximal true/accidental ratio. The resulting histograms are stored on the disk of a PC-386, which also controls data taking. The system is designed to handle 10^5 Events per second, 90% of which are accidental. In the histograms the "true" to "accidental" ratio will be 5. The dead time is 15%.

  6. A Study of Failure Events in Drinking Water Systems As a Basis for Comparison and Evaluation of the Efficacy of Potable Reuse Schemes

    PubMed Central

    Onyango, Laura A.; Quinn, Chloe; Tng, Keng H.; Wood, James G.; Leslie, Greg

    2015-01-01

    Potable reuse is implemented in several countries around the world to augment strained water supplies. This article presents a public health perspective on potable reuse by comparing the critical infrastructure and institutional capacity characteristics of two well-established potable reuse schemes with conventional drinking water schemes in developed nations that have experienced waterborne outbreaks. Analysis of failure events in conventional water systems between 2003 and 2013 showed that despite advances in water treatment technologies, drinking water outbreaks caused by microbial contamination were still frequent in developed countries and can be attributed to failures in infrastructure or institutional practices. Numerous institutional failures linked to ineffective treatment protocols, poor operational practices, and negligence were detected. In contrast, potable reuse schemes that use multiple barriers, online instrumentation, and operational measures were found to address the events that have resulted in waterborne outbreaks in conventional systems in the past decade. Syndromic surveillance has emerged as a tool in outbreak detection and was useful in detecting some outbreaks; increases in emergency department visits and GP consultations being the most common data source, suggesting potential for an increasing role in public health surveillance of waterborne outbreaks. These results highlight desirable characteristics of potable reuse schemes from a public health perspective with potential for guiding policy on surveillance activities. PMID:27053920

  7. A Study of Failure Events in Drinking Water Systems As a Basis for Comparison and Evaluation of the Efficacy of Potable Reuse Schemes.

    PubMed

    Onyango, Laura A; Quinn, Chloe; Tng, Keng H; Wood, James G; Leslie, Greg

    2015-01-01

    Potable reuse is implemented in several countries around the world to augment strained water supplies. This article presents a public health perspective on potable reuse by comparing the critical infrastructure and institutional capacity characteristics of two well-established potable reuse schemes with conventional drinking water schemes in developed nations that have experienced waterborne outbreaks. Analysis of failure events in conventional water systems between 2003 and 2013 showed that despite advances in water treatment technologies, drinking water outbreaks caused by microbial contamination were still frequent in developed countries and can be attributed to failures in infrastructure or institutional practices. Numerous institutional failures linked to ineffective treatment protocols, poor operational practices, and negligence were detected. In contrast, potable reuse schemes that use multiple barriers, online instrumentation, and operational measures were found to address the events that have resulted in waterborne outbreaks in conventional systems in the past decade. Syndromic surveillance has emerged as a tool in outbreak detection and was useful in detecting some outbreaks; increases in emergency department visits and GP consultations being the most common data source, suggesting potential for an increasing role in public health surveillance of waterborne outbreaks. These results highlight desirable characteristics of potable reuse schemes from a public health perspective with potential for guiding policy on surveillance activities.

  8. Dynamic sensing model for accurate delectability of environmental phenomena using event wireless sensor network

    NASA Astrophysics Data System (ADS)

    Missif, Lial Raja; Kadhum, Mohammad M.

    2017-09-01

    Wireless Sensor Networks (WSNs) have been widely used for monitoring, where sensors are deployed to operate independently to sense abnormal phenomena. Most of the proposed environmental monitoring systems are designed based on a predetermined sensing range which does not reflect the sensor reliability, event characteristics, and the environment conditions. Measuring the capability of a sensor node to accurately detect an event within a sensing field is of great importance for monitoring applications. This paper presents an efficient mechanism for event detection based on a probabilistic sensing model. Different models are presented theoretically in this paper to examine their adaptability and applicability to real environment applications. The numerical results of the experimental evaluation have shown that the probabilistic sensing model provides accurate observation and detectability of an event, and it can be utilized for different environment scenarios.
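
    The contrast between a fixed-disk sensing range and a probabilistic sensing model can be sketched as below, using an Elfes-type fall-off in which detection probability decays smoothly with distance. The ranges and decay parameter are illustrative assumptions, not the models evaluated in the paper.

    # Binary-disk vs. probabilistic (smooth fall-off) sensing models.
    import math

    def binary_model(d, sensing_range=10.0):
        return 1.0 if d <= sensing_range else 0.0

    def probabilistic_model(d, r=10.0, r_uncertain=5.0, beta=0.5):
        """P(detect) = 1 inside r - r_uncertain, 0 beyond r + r_uncertain,
        exponential fall-off in between."""
        if d <= r - r_uncertain:
            return 1.0
        if d >= r + r_uncertain:
            return 0.0
        return math.exp(-beta * (d - (r - r_uncertain)))

    for d in (3.0, 8.0, 11.0, 14.0, 16.0):
        print(f"d = {d:5.1f} m  binary = {binary_model(d):.1f}  "
              f"probabilistic = {probabilistic_model(d):.2f}")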

  9. Detection of Rain-on-Snow (ROS) Events Using the Advanced Microwave Scanning Radiometer-Earth Observing System (AMSR-E) and Weather Station Observations

    NASA Astrophysics Data System (ADS)

    Ryan, E. M.; Brucker, L.; Forman, B. A.

    2015-12-01

    During the winter months, the occurrence of rain-on-snow (ROS) events can impact snow stratigraphy via the generation of large scale ice crusts on or within the snowpack. The formation of such layers significantly alters the electromagnetic response of the snowpack, which can be witnessed using space-based microwave radiometers. In addition, ROS layers can hinder the ability of wildlife to burrow in the snow for vegetation, which limits their foraging capability. A prime example occurred on 23 October 2003 on Banks Island, Canada, where an ROS event is believed to have caused the deaths of over 20,000 musk oxen. Through the use of passive microwave remote sensing, ROS events can be detected by utilizing observed brightness temperatures (Tb) from AMSR-E. Tb observed at different microwave frequencies and polarizations depends on snow properties. A wet snowpack formed from an ROS event yields a larger Tb than a typical dry snowpack would. This phenomenon makes observed Tb useful when detecting ROS events. With the use of data retrieved from AMSR-E, in conjunction with observations from ground-based weather station networks, a database of estimated ROS events over the past twelve years was generated. Using this database, changes in measured Tb following the ROS events were also observed. This study adds to the growing knowledge of ROS events and has the potential to help inform passive microwave snow water equivalent (SWE) retrievals or snow cover properties in polar regions.
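
    The kind of screening described above can be sketched as follows: a day is flagged as a possible ROS event when the observed Tb jumps well above a trailing dry-snow baseline and the co-located weather station reports precipitation at near-freezing air temperature. The thresholds, window length, and synthetic data are assumptions, not the criteria used in the study.

    # Simple ROS flagging: Tb jump above a trailing baseline plus warm precipitation.
    import numpy as np

    def flag_ros(tb, precip_mm, air_temp_c, window=10, jump_k=15.0):
        flags = np.zeros(len(tb), dtype=bool)
        for i in range(window, len(tb)):
            baseline = np.median(tb[i - window:i])      # recent dry-snow reference
            warm_rain = precip_mm[i] > 1.0 and -2.0 < air_temp_c[i] < 4.0
            flags[i] = (tb[i] - baseline > jump_k) and warm_rain
        return flags

    # Synthetic winter time series with an ROS-like excursion on day 40.
    rng = np.random.default_rng(0)
    tb = 230.0 + rng.normal(0, 2.0, 60)
    tb[40:43] += 25.0
    precip = np.zeros(60)
    precip[40] = 6.0
    temp = np.full(60, -12.0)
    temp[40] = 1.0

    print("flagged days:", np.flatnonzero(flag_ros(tb, precip, temp)))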

  10. Event Detection Using Mobile Phone Mass GPS Data and Their Reliavility Verification by Dmsp/ols Night Light Image

    NASA Astrophysics Data System (ADS)

    Yuki, Akiyama; Satoshi, Ueyama; Ryosuke, Shibasaki; Adachi, Ryuichiro

    2016-06-01

    In this study, we developed a method to detect sudden population concentration on a certain day and area, that is, an "Event," all over Japan in 2012 using mass GPS data provided by mobile phone users. First, stay locations of all phone users were detected using existing methods. Second, areas and days where Events occurred were detected by aggregation of mass stay locations into 1-km-square grid polygons. Finally, the proposed method could detect Events with an especially large number of visitors in the year by removing the influences of Events that occurred continuously throughout the year. In addition, we demonstrated reasonable reliability of the proposed Event detection method by comparing the results of Event detection with light intensities obtained from DMSP/OLS night light images. Our method can detect not only positive events such as festivals but also negative events such as natural disasters and road accidents. These results are expected to support policy development of urban planning, disaster prevention, and transportation management.
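
    The grid-based Event detection idea can be sketched as follows: daily stay-point counts are aggregated per grid cell, and a cell/day is flagged when its count is far above that cell's typical level, so that places which are crowded all year round are not reported. The robust statistic and threshold are illustrative assumptions.

    # Flag (day, cell) pairs whose stay-point count is anomalously high for that cell.
    import numpy as np

    def detect_events(daily_counts, z_threshold=5.0):
        """daily_counts: array of shape (n_days, n_cells)."""
        median = np.median(daily_counts, axis=0)
        mad = np.median(np.abs(daily_counts - median), axis=0) + 1e-9
        z = (daily_counts - median) / (1.4826 * mad)   # robust z-score per cell
        days, cells = np.where(z > z_threshold)
        return list(zip(days.tolist(), cells.tolist()))

    rng = np.random.default_rng(0)
    counts = rng.poisson(lam=50, size=(365, 4)).astype(float)
    counts[:, 3] += 500            # cell 3 is busy every day (e.g. a railway station)
    counts[200, 1] += 800          # a one-off festival in cell 1 on day 200

    print("detected (day, cell):", detect_events(counts))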

  11. Designing and evaluating an automated system for real-time medication administration error detection in a neonatal intensive care unit.

    PubMed

    Ni, Yizhao; Lingren, Todd; Hall, Eric S; Leonard, Matthew; Melton, Kristin; Kirkendall, Eric S

    2018-05-01

    Timely identification of medication administration errors (MAEs) promises great benefits for mitigating medication errors and associated harm. Despite previous efforts utilizing computerized methods to monitor medication errors, sustaining effective and accurate detection of MAEs remains challenging. In this study, we developed a real-time MAE detection system and evaluated its performance prior to system integration into institutional workflows. Our prospective observational study included automated MAE detection of 10 high-risk medications and fluids for patients admitted to the neonatal intensive care unit at Cincinnati Children's Hospital Medical Center during a 4-month period. The automated system extracted real-time medication use information from the institutional electronic health records and identified MAEs using logic-based rules and natural language processing techniques. The MAE summary was delivered via a real-time messaging platform to promote reduction of patient exposure to potential harm. System performance was validated using a physician-generated gold standard of MAE events, and results were compared with those of current practice (incident reporting and trigger tools). Physicians identified 116 MAEs from 10,104 medication administrations during the study period. Compared to current practice, the sensitivity with automated MAE detection was improved significantly from 4.3% to 85.3% (P = .009), with a positive predictive value of 78.0%. Furthermore, the system showed potential to reduce patient exposure to harm, from 256 min to 35 min (P < .001). The automated system demonstrated improved capacity for identifying MAEs while guarding against alert fatigue. It also showed promise for reducing patient exposure to potential harm following MAE events.

  12. More About Software for No-Loss Computing

    NASA Technical Reports Server (NTRS)

    Edmonds, Iarina

    2007-01-01

    A document presents some additional information on the subject matter of "Integrated Hardware and Software for No- Loss Computing" (NPO-42554), which appears elsewhere in this issue of NASA Tech Briefs. To recapitulate: The hardware and software designs of a developmental parallel computing system are integrated to effectuate a concept of no-loss computing (NLC). The system is designed to reconfigure an application program such that it can be monitored in real time and further reconfigured to continue a computation in the event of failure of one of the computers. The design provides for (1) a distributed class of NLC computation agents, denoted introspection agents, that effects hierarchical detection of anomalies; (2) enhancement of the compiler of the parallel computing system to cause generation of state vectors that can be used to continue a computation in the event of a failure; and (3) activation of a recovery component when an anomaly is detected.

  13. Detection of Cardiopulmonary Activity and Related Abnormal Events Using Microsoft Kinect Sensor.

    PubMed

    Al-Naji, Ali; Chahl, Javaan

    2018-03-20

    Monitoring of cardiopulmonary activity is a challenge when attempted under adverse conditions, including different sleeping postures, environmental settings, and an unclear region of interest (ROI). This study proposes an efficient remote imaging system based on a Microsoft Kinect v2 sensor for the observation of cardiopulmonary signals and the detection of related abnormal cardiopulmonary events (e.g., tachycardia, bradycardia, tachypnea, bradypnea, and central apnoea) in many possible sleeping postures and varying environmental settings, including total darkness and whether or not the subject is covered by a blanket. The proposed system extracts the signal from the abdominal-thoracic region, where cardiopulmonary activity is most pronounced, using a real-time image sequence captured by the Kinect v2 sensor. The proposed system shows promising results in any sleep posture, regardless of illumination conditions and unclear ROI, even in the presence of a blanket, whilst being reliable, safe, and cost-effective.

  14. Detection of Cardiopulmonary Activity and Related Abnormal Events Using Microsoft Kinect Sensor

    PubMed Central

    Chahl, Javaan

    2018-01-01

    Monitoring of cardiopulmonary activity is a challenge when attempted under adverse conditions, including different sleeping postures, environmental settings, and an unclear region of interest (ROI). This study proposes an efficient remote imaging system based on a Microsoft Kinect v2 sensor for the observation of cardiopulmonary signals and the detection of related abnormal cardiopulmonary events (e.g., tachycardia, bradycardia, tachypnea, bradypnea, and central apnoea) in many possible sleeping postures and varying environmental settings, including total darkness and whether or not the subject is covered by a blanket. The proposed system extracts the signal from the abdominal-thoracic region, where cardiopulmonary activity is most pronounced, using a real-time image sequence captured by the Kinect v2 sensor. The proposed system shows promising results in any sleep posture, regardless of illumination conditions and unclear ROI, even in the presence of a blanket, whilst being reliable, safe, and cost-effective. PMID:29558414

  15. POTENTIAL OF BIOLOGICAL MONITORING SYSTEMS TO DETECT TOXICITY IN A FINISHED MATRIX

    EPA Science Inventory

    Distribution systems of the U.S. are vulnerable to natural and anthropogenic factors affecting quality for use as drinking water. Important factors include physical parameters such as increased turbidity, ecological cycles such as algal blooms, and episodic contamination events ...

  16. Evaluation of force-sensing resistors for gait event detection to trigger electrical stimulation to improve walking in the child with cerebral palsy.

    PubMed

    Smith, Brian T; Coiro, Daniel J; Finson, Richard; Betz, Randal R; McCarthy, James

    2002-03-01

    Force-sensing resistors (FSRs) were used to detect the transitions between five main phases of gait for the control of electrical stimulation (ES) while walking with seven children with spastic diplegia, cerebral palsy. The FSR positions within each child's insoles were customized based on plantar pressure profiles determined using a pressure-sensitive membrane array (Tekscan Inc., Boston, MA). The FSRs were placed in the insoles so that pressure transitions coincided with an ipsilateral or contralateral gait event. The transitions between the following gait phases were determined: loading response, mid- and terminal stance, and pre- and initial swing. Following several months of walking on a regular basis with FSR-triggered intramuscular ES to the hip and knee extensors, hip abductors, and ankle dorsi and plantar flexors, the accuracy and reliability of the FSRs to detect gait phase transitions were evaluated. Accuracy was evaluated with four of the subjects by synchronizing the output of the FSR detection scheme with a VICON (Oxford Metrics, U.K.) motion analysis system, which was used as the gait event reference. While mean differences between each FSR-detected gait event and that of the standard (VICON) ranged from +35 ms (indicating that the FSR detection scheme recognized the event before it actually happened) to -55 ms (indicating that the FSR scheme recognized the event after it occurred), the difference data was widely distributed, which appeared to be due in part to both intrasubject (step-to-step) and intersubject variability. Terminal stance exhibited the largest mean difference and standard deviation, while initial swing exhibited the smallest deviation and preswing the smallest mean difference. To determine step-to-step reliability, all seven children walked on a level walkway for at least 50 steps. Of 642 steps, there were no detection errors in 94.5% of the steps. Of the steps that contained a detection error, 80% were due to the failure of the FSR signal to reach the programmed threshold level during the transition to loading response. Recovery from an error always occurred one to three steps later.
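
    A simplified sketch of threshold-based gait-event detection from two FSR channels (heel and forefoot) is given below: a phase transition is declared when a channel crosses a programmed threshold. The sensor placement, thresholds, and phase labels are simplified assumptions, not the per-child customized scheme described above.

    # Threshold-crossing detection of gait phase transitions from heel and toe FSR signals.
    import numpy as np

    def gait_events(heel, toe, threshold=0.5):
        events = []
        for i in range(1, len(heel)):
            if heel[i - 1] < threshold <= heel[i]:
                events.append((i, "loading response"))   # heel contact
            if toe[i - 1] < threshold <= toe[i]:
                events.append((i, "terminal stance"))     # forefoot loading
            if heel[i - 1] >= threshold > heel[i]:
                events.append((i, "pre-swing"))           # heel off
            if toe[i - 1] >= threshold > toe[i]:
                events.append((i, "initial swing"))       # toe off
        return events

    # One synthetic gait cycle sampled at 100 Hz (normalized FSR outputs, 0..1).
    t = np.arange(0, 1.0, 0.01)
    heel = np.clip(np.sin(2 * np.pi * (t - 0.05)) * 1.5, 0, 1)
    toe = np.clip(np.sin(2 * np.pi * (t - 0.30)) * 1.5, 0, 1)
    for sample, label in gait_events(heel, toe):
        print(f"{sample * 10:4d} ms  {label}")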

  17. Improved Microseismicity Detection During Newberry EGS Stimulations

    DOE Data Explorer

    Templeton, Dennise

    2013-10-01

    Effective enhanced geothermal systems (EGS) require optimal fracture networks for efficient heat transfer between hot rock and fluid. Microseismic mapping is a key tool used to infer the subsurface fracture geometry. Traditional earthquake detection and location techniques are often employed to identify microearthquakes in geothermal regions. However, most commonly used algorithms may miss events if the seismic signal of an earthquake is small relative to the background noise level or if a microearthquake occurs within the coda of a larger event. Consequently, we have developed a set of algorithms that provide improved microearthquake detection. Our objective is to investigate the microseismicity at the DOE Newberry EGS site to better image the active regions of the underground fracture network during and immediately after the EGS stimulation. Detection of more microearthquakes during EGS stimulations will allow for better seismic delineation of the active regions of the underground fracture system. This improved knowledge of the reservoir network will improve our understanding of subsurface conditions, and allow improvement of the stimulation strategy that will optimize heat extraction and maximize economic return.

  18. Improved Microseismicity Detection During Newberry EGS Stimulations

    DOE Data Explorer

    Templeton, Dennise

    2013-11-01

    Effective enhanced geothermal systems (EGS) require optimal fracture networks for efficient heat transfer between hot rock and fluid. Microseismic mapping is a key tool used to infer the subsurface fracture geometry. Traditional earthquake detection and location techniques are often employed to identify microearthquakes in geothermal regions. However, most commonly used algorithms may miss events if the seismic signal of an earthquake is small relative to the background noise level or if a microearthquake occurs within the coda of a larger event. Consequently, we have developed a set of algorithms that provide improved microearthquake detection. Our objective is to investigate the microseismicity at the DOE Newberry EGS site to better image the active regions of the underground fracture network during and immediately after the EGS stimulation. Detection of more microearthquakes during EGS stimulations will allow for better seismic delineation of the active regions of the underground fracture system. This improved knowledge of the reservoir network will improve our understanding of subsurface conditions, and allow improvement of the stimulation strategy that will optimize heat extraction and maximize economic return.
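
    The record above does not name the improved detection algorithms; one widely used way to recover events that are small relative to the background noise, or hidden in the coda of larger events, is matched filtering (template matching). The sketch below illustrates that general approach under assumed synthetic data and an assumed threshold; it is not the Newberry processing chain.

    # Matched-filter (template) detection: normalized cross-correlation against a known event.
    import numpy as np

    def matched_filter_detect(record, template, threshold=0.5):
        m = len(template)
        tpl = (template - template.mean()) / (template.std() + 1e-12)
        hits = []
        for i in range(len(record) - m):
            win = record[i:i + m]
            w = (win - win.mean()) / (win.std() + 1e-12)
            cc = float(np.dot(w, tpl)) / m      # correlation coefficient in [-1, 1]
            if cc > threshold:
                hits.append((i, cc))
        return hits

    rng = np.random.default_rng(0)
    t = np.linspace(0, 1, 200)
    template = np.sin(2 * np.pi * 8 * t) * np.exp(-4 * t)   # template microearthquake

    record = rng.normal(0, 0.15, 20000)
    record[5000:5200] += 0.8 * template                      # weak repeat of the template
    record[12000:12200] += 0.5 * template                    # even weaker repeat

    for sample, cc in matched_filter_detect(record, template):
        print(f"detection at sample {sample}, correlation {cc:.2f}")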

  19. Intelligent detection and identification in fiber-optical perimeter intrusion monitoring system based on the FBG sensor network

    NASA Astrophysics Data System (ADS)

    Wu, Huijuan; Qian, Ya; Zhang, Wei; Li, Hanyu; Xie, Xin

    2015-12-01

    A real-time intelligent fiber-optic perimeter intrusion detection system (PIDS) based on a fiber Bragg grating (FBG) sensor network is presented in this paper. To distinguish the effects of different intrusion events, a novel real-time behavior impact classification method is proposed based on the essential statistical characteristics of the signal's profile in the time domain. Features are extracted by principal component analysis (PCA) and are then used to identify the event with a K-nearest neighbor classifier. Simulation and field tests are both carried out to validate its effectiveness. The average identification rate (IR) for five sample signals in the simulation test is as high as 96.67%, and the recognition rate for eight typical signals in the field test reaches 96.52%, covering both fence-mounted and ground-buried sensing signals. In addition, a high detection rate (DR) and a low false alarm rate (FAR) can be obtained simultaneously based on autocorrelation characteristics analysis and a hierarchical detection and identification flow.
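
    The PCA plus K-nearest-neighbour identification stage described above can be sketched with a generic toolkit as below. The statistical features, class labels, and component count are synthetic placeholders, not the quantities used in the paper.

    # PCA feature reduction followed by KNN classification of intrusion-signal features.
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.model_selection import cross_val_score
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(0)
    classes = {"climbing": [3.0, 0.5, 2.0], "cutting": [1.0, 2.5, 0.5], "rain": [0.3, 0.3, 0.3]}

    X, y = [], []
    for label, centre in classes.items():
        X.append(rng.normal(centre, 0.4, size=(200, 3)))   # hypothetical profile features
        y += [label] * 200
    X = np.vstack(X)
    y = np.array(y)

    model = make_pipeline(StandardScaler(), PCA(n_components=2), KNeighborsClassifier(n_neighbors=5))
    print("cross-validated identification rate:", cross_val_score(model, X, y, cv=5).mean())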

  20. Simultaneous detection of transgenic DNA by surface plasmon resonance imaging with potential application to gene doping detection.

    PubMed

    Scarano, Simona; Ermini, Maria Laura; Spiriti, Maria Michela; Mascini, Marco; Bogani, Patrizia; Minunni, Maria

    2011-08-15

    Surface plasmon resonance imaging (SPRi) was used as the transduction principle for the development of optical-based sensing for transgenes detection in human cell lines. The objective was to develop a multianalyte, label-free, and real-time approach for DNA sequences that are identified as markers of transgenosis events. The strategy exploits SPRi sensing to detect the transgenic event by targeting selected marker sequences, which are present on shuttle vector backbone used to carry out the transfection of human embryonic kidney (HEK) cell lines. Here, we identified DNA sequences belonging to the Cytomegalovirus promoter and the Enhanced Green Fluorescent Protein gene. System development is discussed in terms of probe efficiency and influence of secondary structures on biorecognition reaction on sensor; moreover, optimization of PCR samples pretreatment was carried out to allow hybridization on biosensor, together with an approach to increase SPRi signals by in situ mass enhancement. Real-time PCR was also employed as reference technique for marker sequences detection on human HEK cells. We can foresee that the developed system may have potential applications in the field of antidoping research focused on the so-called gene doping.

  1. IRiS: construction of ARG networks at genomic scales.

    PubMed

    Javed, Asif; Pybus, Marc; Melé, Marta; Utro, Filippo; Bertranpetit, Jaume; Calafell, Francesc; Parida, Laxmi

    2011-09-01

    Given a set of extant haplotypes, IRiS first detects high confidence recombination events in their shared genealogy. Next, using the local sequence topology defined by each detected event, it integrates these recombinations into an ancestral recombination graph. While the current system has been calibrated for human population data, it is easily extendible to other species as well. IRiS (Identification of Recombinations in Sequences) binary files are available for non-commercial use in both Linux and Microsoft Windows, 32 and 64 bit environments, from https://researcher.ibm.com/researcher/view_project.php?id=2303. Contact: parida@us.ibm.com.

  2. Detection of rain events in radiological early warning networks with spectro-dosimetric systems

    NASA Astrophysics Data System (ADS)

    Dąbrowski, R.; Dombrowski, H.; Kessler, P.; Röttger, A.; Neumaier, S.

    2017-10-01

    Short-term pronounced increases of the ambient dose equivalent rate due to rainfall are a well-known phenomenon. Increases of the same order of magnitude or even below may also be caused by a nuclear or radiological event, i.e. by artificial radiation. Hence, it is important to be able to identify natural rain events in dosimetric early warning networks and to distinguish them from radiological events. Novel spectrometric systems based on scintillators may be used to differentiate between the two scenarios, because the measured gamma spectra provide significant nuclide-specific information. This paper describes three simple, automatic methods to check whether an Ḣ*(10) increase is caused by a rain event or by artificial radiation. These methods were applied to measurements of three spectrometric systems based on CeBr3, LaBr3 and SrI2 scintillation crystals, which were investigated and tested for their practicability at a free-field reference site of PTB.
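
    One plausible illustration of such a check is sketched below (the paper's three methods are not reproduced here): when the dose rate rises, the count increase inside regions of interest around the radon-progeny lines (Pb-214 near 352 keV, Bi-214 near 609 keV) is compared with the increase in the rest of the spectrum; a rise dominated by these regions is consistent with rain-out of radon progeny, otherwise the event is flagged for review. The energy windows and ratio threshold are assumptions.

    # Classify a dose-rate increase by where the extra counts appear in the gamma spectrum.
    import numpy as np

    ROI_KEV = [(330, 375), (585, 635)]   # Pb-214 and Bi-214 windows (assumed ROIs)

    def progeny_fraction(energies_kev, counts):
        in_roi = np.zeros(len(energies_kev), dtype=bool)
        for lo, hi in ROI_KEV:
            in_roi |= (energies_kev >= lo) & (energies_kev <= hi)
        return counts[in_roi].sum() / counts.sum()

    def classify_increase(energies, baseline, current, threshold=0.25):
        excess = np.clip(current - baseline, 0, None)
        frac = progeny_fraction(energies, excess)
        return "likely rain event" if frac > threshold else "flag: possibly artificial"

    energies = np.arange(20, 3000, 3.0)                      # keV bins
    rng = np.random.default_rng(0)
    baseline = rng.poisson(lam=100, size=energies.size).astype(float)

    rain = baseline.copy()
    for lo, hi in ROI_KEV:                                   # excess mostly in progeny lines
        rain[(energies >= lo) & (energies <= hi)] += 400
    artificial = baseline + 60                               # broad, line-free excess

    print(classify_increase(energies, baseline, rain))
    print(classify_increase(energies, baseline, artificial))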

  3. Physical effects of mechanical design parameters on photon sensitivity and spatial resolution performance of a breast-dedicated PET system.

    PubMed

    Spanoudaki, V C; Lau, F W Y; Vandenbroucke, A; Levin, C S

    2010-11-01

    This study aims to address design considerations of a high resolution, high sensitivity positron emission tomography scanner dedicated to breast imaging. The methodology uses a detailed Monte Carlo model of the system structures to obtain a quantitative evaluation of several performance parameters. Special focus was given to the effect of dense mechanical structures designed to provide mechanical robustness and thermal regulation to the minuscule and temperature sensitive detectors. For the energies of interest around the photopeak (450-700 keV energy window), the simulation results predict a 6.5% reduction in the single photon detection efficiency and a 12.5% reduction in the coincidence photon detection efficiency in the case that the mechanical structures are interspersed between the detectors. However for lower energies, a substantial increase in the number of detected events (approximately 14% and 7% for singles at a 100-200 keV energy window and coincidences at a lower energy threshold of 100 keV, respectively) was observed with the presence of these structures due to backscatter. The number of photon events that involve multiple interactions in various crystal elements is also affected by the presence of the structures. For photon events involving multiple interactions among various crystal elements, the coincidence photon sensitivity is reduced by as much as 20% for a point source at the center of the field of view. There is no observable effect on the intrinsic and the reconstructed spatial resolution and spatial resolution uniformity. Mechanical structures can have a considerable effect on system sensitivity, especially for systems processing multi-interaction photon events. This effect, however, does not impact the spatial resolution. Various mechanical structure designs are currently under evaluation in order to achieve optimum trade-off between temperature stability, accurate detector positioning, and minimum influence on system performance.

  4. Physical effects of mechanical design parameters on photon sensitivity and spatial resolution performance of a breast-dedicated PET system

    PubMed Central

    Spanoudaki, V. C.; Lau, F. W. Y.; Vandenbroucke, A.; Levin, C. S.

    2010-01-01

    Purpose: This study aims to address design considerations of a high resolution, high sensitivity positron emission tomography scanner dedicated to breast imaging. Methods: The methodology uses a detailed Monte Carlo model of the system structures to obtain a quantitative evaluation of several performance parameters. Special focus was given to the effect of dense mechanical structures designed to provide mechanical robustness and thermal regulation to the minuscule and temperature sensitive detectors. Results: For the energies of interest around the photopeak (450–700 keV energy window), the simulation results predict a 6.5% reduction in the single photon detection efficiency and a 12.5% reduction in the coincidence photon detection efficiency in the case that the mechanical structures are interspersed between the detectors. However for lower energies, a substantial increase in the number of detected events (approximately 14% and 7% for singles at a 100–200 keV energy window and coincidences at a lower energy threshold of 100 keV, respectively) was observed with the presence of these structures due to backscatter. The number of photon events that involve multiple interactions in various crystal elements is also affected by the presence of the structures. For photon events involving multiple interactions among various crystal elements, the coincidence photon sensitivity is reduced by as much as 20% for a point source at the center of the field of view. There is no observable effect on the intrinsic and the reconstructed spatial resolution and spatial resolution uniformity. Conclusions: Mechanical structures can have a considerable effect on system sensitivity, especially for systems processing multi-interaction photon events. This effect, however, does not impact the spatial resolution. Various mechanical structure designs are currently under evaluation in order to achieve optimum trade-off between temperature stability, accurate detector positioning, and minimum influence on system performance. PMID:21158296

  5. A novel new tsunami detection network using GNSS on commercial ships

    NASA Astrophysics Data System (ADS)

    Foster, J. H.; Ericksen, T.; Avery, J.

    2015-12-01

    Accurate and rapid detection and assessment of tsunamis in the open ocean is critical for predicting how they will impact distant coastlines, enabling appropriate mitigation efforts. The unexpectedly huge fault slip for the 2011 Tohoku, Japan earthquake, and the unanticipated type of slip for the 2012 event at Queen Charlotte Islands, Canada highlighted weaknesses in our understanding of earthquake and tsunami hazards, and emphasized the need for more densely-spaced observing capabilities. Crucially, when each sensor is extremely expensive to build, deploy, and maintain, only a limited network of them can be installed. Gaps in the coverage of the network as well as routine outages of instruments, limit the ability of the detection system to accurately characterize events. Ship-based geodetic GNSS has been demonstrated to be able to detect and measure the properties of tsunamis in the open ocean. Based on this approach, we have used commercial ships operating in the North Pacific to construct a pilot network of low-cost, tsunami sensors to augment the existing detection systems. Partnering with NOAA, Maersk and Matson Navigation, we have equipped 10 ships with high-accuracy GNSS systems running the Trimble RTX high-accuracy real-time positioning service. Satellite communications transmit the position data streams to our shore-side server for processing and analysis. We present preliminary analyses of this novel network, assessing the robustness of the system, the quality of the time-series and the effectiveness of various processing and filtering strategies for retrieving accurate estimates of sea surface height variations for triggering detection and characterization of tsunami in the open ocean.

  6. Boolean Logic Tree of Label-Free Dual-Signal Electrochemical Aptasensor System for Biosensing, Three-State Logic Computation, and Keypad Lock Security Operation.

    PubMed

    Lu, Jiao Yang; Zhang, Xin Xing; Huang, Wei Tao; Zhu, Qiu Yan; Ding, Xue Zhi; Xia, Li Qiu; Luo, Hong Qun; Li, Nian Bing

    2017-09-19

    The most serious and yet unsolved problems of molecular logic computing consist in how to connect molecular events in complex systems into a usable device with specific functions and how to selectively control branchy logic processes from the cascading logic systems. This report demonstrates that a Boolean logic tree is utilized to organize and connect "plug and play" chemical events (DNA, nanomaterials, organic dye, biomolecule, and denaturant) for developing a dual-signal electrochemical evolution aptasensor system with good resettability for amplified detection of thrombin, controllable and selectable three-state logic computation, and keypad lock security operation. The aptasensor system combines the merits of DNA-functionalized nanoamplification architecture and the simple dual-signal electroactive dye brilliant cresyl blue for sensitive and selective detection of thrombin with a wide linear response range of 0.02-100 nM and a detection limit of 1.92 pM. By using these aforementioned chemical events as inputs and the differential pulse voltammetry current changes at different voltages as dual outputs, a resettable three-input biomolecular keypad lock based on sequential logic is established. Moreover, the first example of controllable and selectable three-state molecular logic computation with active-high and active-low logic functions can be implemented and allows the output ports to assume a high-impedance (Z) state in addition to the 0 and 1 logic levels, effectively controlling subsequent branchy logic computation processes. Our approach is helpful in developing advanced controllable and selectable logic computing and sensing systems in large-scale integration circuits for application in biomedical engineering, intelligent sensing, and control.

  7. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sajadian, Sedighe; Hundertmark, Markus, E-mail: s.sajadian@cc.iut.ac.ir

    A close-in giant planetary (CGP) system has a net polarization signal whose value varies depending on the orbital phase of the planet. This polarization signal is either caused by the stellar occultation or by reflected starlight from the surface of the orbiting planet. When the CGP system is located in the Galactic bulge, its polarization signal becomes too weak to be measured directly. One method for detecting and characterizing these weak polarization signatures due to distant CGP systems is gravitational microlensing. In this work, we focus on potential polarimetric observations of highly magnified microlensing events of CGP systems. When the lens is passing directly in front of the source star with its planetary companion, the polarimetric signature caused by the transiting planet is magnified. As a result, some distinct features in the polarimetry and light curves are produced. In the same way, microlensing amplifies the reflection-induced polarization signal. While the planet-induced perturbations are magnified whenever these polarimetric or photometric deviations vanish for a moment, the corresponding magnification factor of the polarization component(s) is related to the planet itself. Finding these exact times in the planet-induced perturbations helps us to characterize the planet. In order to evaluate the observability of such systems through polarimetric or photometric observations of high-magnification microlensing events, we simulate these events by considering confirmed CGP systems as their source stars and conclude that the efficiency for detecting the planet-induced signal with the state-of-the-art polarimetric instrument (FORS2/VLT) is less than 0.1%. Consequently, these planet-induced polarimetry perturbations can likely be detected under favorable conditions by the high-resolution and short-cadence polarimeters of the next generation.

  8. Deep Long-period Seismicity Beneath the Executive Committee Range, Marie Byrd Land, Antarctica, Studied Using Subspace Detection

    NASA Astrophysics Data System (ADS)

    Aster, R. C.; McMahon, N. D.; Myers, E. K.; Lough, A. C.

    2015-12-01

    Lough et al. (2014) first detected deep sub-icecap magmatic events beneath the Executive Committee Range volcanoes of Marie Byrd Land. Here, we extend the identification and analysis of these events in space and time utilizing subspace detection. Subspace detectors provide a highly effective methodology for studying events within seismic swarms that have similar moment tensor and Green's function characteristics and are particularly effective for identifying low signal-to-noise events. Marie Byrd Land (MBL) is an extremely remote continental region that is nearly completely covered by the West Antarctic Ice Sheet (WAIS). The southern extent of Marie Byrd Land lies within the West Antarctic Rift System (WARS), which includes the volcanic Executive Committee Range (ECR). The ECR shows north-to-south progression of volcanism across the WARS during the Holocene. In 2013, the POLENET/ANET seismic data identified two swarms of seismic activity in 2010 and 2011. These events have been interpreted as deep, long-period (DLP) earthquakes based on depth (25-40 km) and low frequency content. The DLP events in MBL lie beneath an inferred sub-WAIS volcanic edifice imaged with ice penetrating radar and have been interpreted as a present location of magmatic intrusion. The magmatic swarm activity in MBL provides a promising target for advanced subspace detection and temporal, spatial, and event size analysis of an extensive deep long period earthquake swarm using a remote seismographic network. We utilized a catalog of 1,370 traditionally identified DLP events to construct subspace detectors for the six nearest stations and analyzed two years of data spanning 2010-2011. Association of these detections into events resulted in an approximate ten-fold increase in the number of locatable earthquakes. In addition to the two previously identified swarms during early 2010 and early 2011, we find sustained activity throughout the two years of study that includes several previously unidentified periods of heightened activity. Correlation with large global earthquakes suggests that the DLP activity is not sensitive to remote teleseismic triggering.
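
    For readers unfamiliar with the technique, the core of a subspace detector can be sketched in a few lines: stack aligned template waveforms from the swarm, take an orthonormal basis of their dominant directions, and slide a window over continuous data, flagging windows whose energy is largely captured by that basis. The sketch below is a generic illustration of the subspace-detection idea, not the processing actually used in the study; the synthetic templates, subspace dimension, and statistic are assumptions.

      import numpy as np

      def build_subspace(templates, dim):
          """Orthonormal basis spanning the dominant 'dim' directions of a set of
          aligned, equal-length template waveforms (rows of 'templates')."""
          X = np.asarray(templates, dtype=float)
          X = X - X.mean(axis=1, keepdims=True)
          U, s, _ = np.linalg.svd(X.T, full_matrices=False)
          return U[:, :dim]                        # n_samples x dim

      def subspace_statistic(data, U):
          """Fraction of window energy captured by the template subspace,
          computed on a sliding window the same length as the templates."""
          n = U.shape[0]
          stats = np.zeros(len(data) - n + 1)
          for i in range(len(stats)):
              w = data[i:i + n] - np.mean(data[i:i + n])
              energy = np.dot(w, w)
              if energy > 0:
                  proj = U.T @ w
                  stats[i] = np.dot(proj, proj) / energy   # in [0, 1]
          return stats

      # Hypothetical usage: three aligned low-frequency templates, continuous
      # data containing one weak repeat of the source process plus noise.
      rng = np.random.default_rng(1)
      n = 200
      t = np.arange(n)
      templates = [np.sin(2 * np.pi * t / 40.0) * np.exp(-t / 80.0) * a
                   for a in (1.0, 0.8, 1.2)]
      U = build_subspace(templates, dim=2)
      data = rng.normal(0, 0.3, 3000)
      data[1200:1200 + n] += 0.5 * templates[0]
      stat = subspace_statistic(data, U)
      print("peak statistic at sample", int(np.argmax(stat)))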

  9. High-speed atomic force microscopy combined with inverted optical microscopy for studying cellular events

    PubMed Central

    Suzuki, Yuki; Sakai, Nobuaki; Yoshida, Aiko; Uekusa, Yoshitsugu; Yagi, Akira; Imaoka, Yuka; Ito, Shuichi; Karaki, Koichi; Takeyasu, Kunio

    2013-01-01

    A hybrid atomic force microscopy (AFM)-optical fluorescence microscopy is a powerful tool for investigating cellular morphologies and events. However, the slow data acquisition rates of the conventional AFM unit of the hybrid system limit the visualization of structural changes during cellular events. Therefore, high-speed AFM units equipped with an optical/fluorescence detection device have been a long-standing wish. Here we describe the implementation of high-speed AFM coupled with an optical fluorescence microscope. This was accomplished by developing a tip-scanning system, instead of a sample-scanning system, which operates on an inverted optical microscope. This novel device enabled the acquisition of high-speed AFM images of morphological changes in individual cells. Using this instrument, we conducted structural studies of living HeLa and 3T3 fibroblast cell surfaces. The improved time resolution allowed us to image dynamic cellular events. PMID:23823461

  10. High-speed atomic force microscopy combined with inverted optical microscopy for studying cellular events.

    PubMed

    Suzuki, Yuki; Sakai, Nobuaki; Yoshida, Aiko; Uekusa, Yoshitsugu; Yagi, Akira; Imaoka, Yuka; Ito, Shuichi; Karaki, Koichi; Takeyasu, Kunio

    2013-01-01

    A hybrid atomic force microscopy (AFM)-optical fluorescence microscopy is a powerful tool for investigating cellular morphologies and events. However, the slow data acquisition rates of the conventional AFM unit of the hybrid system limit the visualization of structural changes during cellular events. Therefore, high-speed AFM units equipped with an optical/fluorescence detection device have been a long-standing wish. Here we describe the implementation of high-speed AFM coupled with an optical fluorescence microscope. This was accomplished by developing a tip-scanning system, instead of a sample-scanning system, which operates on an inverted optical microscope. This novel device enabled the acquisition of high-speed AFM images of morphological changes in individual cells. Using this instrument, we conducted structural studies of living HeLa and 3T3 fibroblast cell surfaces. The improved time resolution allowed us to image dynamic cellular events.

  11. Machine intelligence-based decision-making (MIND) for automatic anomaly detection

    NASA Astrophysics Data System (ADS)

    Prasad, Nadipuram R.; King, Jason C.; Lu, Thomas

    2007-04-01

    Any event deemed as being out-of-the-ordinary may be called an anomaly. Anomalies by virtue of their definition are events that occur spontaneously with no prior indication of their existence or appearance. Effects of anomalies are typically unknown until they actually occur, and their effects aggregate in time to show noticeable change from the original behavior. An evolved behavior would in general be very difficult to correct unless the anomalous event that caused such behavior can be detected early, and any consequence attributed to the specific anomaly. Substantial time and effort are required to back-track the cause for abnormal behavior and to recreate the event sequence leading to abnormal behavior. There is therefore a critical need to automatically detect anomalous behavior as and when it occurs, and to do so with the operator in the loop. Human-machine interaction results in better machine learning and a better decision-support mechanism. This is the fundamental concept of intelligent control where machine learning is enhanced by interaction with human operators, and vice versa. The paper discusses a revolutionary framework for the characterization, detection, identification, learning, and modeling of anomalous behavior in observed phenomena arising from a large class of unknown and uncertain dynamical systems.

  12. In-situ trainable intrusion detection system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Symons, Christopher T.; Beaver, Justin M.; Gillen, Rob

    A computer-implemented method detects intrusions using a computer by analyzing network traffic. The method includes a semi-supervised learning module connected to a network node. The learning module uses labeled and unlabeled data to train a semi-supervised machine learning sensor. The method records events that include a feature set made up of unauthorized intrusions and benign computer requests. The method identifies at least some of the benign computer requests that occur during the recording of the events while treating the remainder of the data as unlabeled. The method trains the semi-supervised learning module at the network node in-situ, such that the semi-supervised learning module may identify malicious traffic without relying on specific rules, signatures, or anomaly detection.
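
    As a rough illustration of this semi-supervised setup (a small set of labeled requests plus a large pool of unlabeled in-situ traffic), the sketch below uses scikit-learn's SelfTrainingClassifier as a stand-in learner. The synthetic features, the -1 convention for unlabeled samples, and the confidence threshold are assumptions for illustration only and do not reproduce the method described in the record.

      import numpy as np
      from sklearn.linear_model import LogisticRegression
      from sklearn.semi_supervised import SelfTrainingClassifier

      rng = np.random.default_rng(2)

      # Hypothetical feature vectors for network events (e.g. flow statistics).
      # Label 0 = benign, 1 = intrusion, -1 = unlabeled (the in-situ traffic).
      benign = rng.normal(loc=0.0, scale=1.0, size=(200, 5))
      attack = rng.normal(loc=3.0, scale=1.0, size=(40, 5))
      unlabeled = np.vstack([rng.normal(0.0, 1.0, (400, 5)),
                             rng.normal(3.0, 1.0, (60, 5))])

      X = np.vstack([benign, attack, unlabeled])
      y = np.concatenate([np.zeros(200), np.ones(40), -np.ones(460)]).astype(int)

      # Self-training wraps a base classifier and iteratively pseudo-labels the
      # unlabeled traffic it is most confident about.
      model = SelfTrainingClassifier(LogisticRegression(max_iter=1000), threshold=0.9)
      model.fit(X, y)

      # Score new traffic; alerts are the events predicted as intrusions.
      new_traffic = rng.normal(3.0, 1.0, (5, 5))
      print("predicted labels:", model.predict(new_traffic))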

  13. Global Near Real-Time MODIS and Landsat Flood Mapping and Product Delivery

    NASA Astrophysics Data System (ADS)

    Policelli, F. S.; Slayback, D. A.; Tokay, M. M.; Brakenridge, G. R.

    2014-12-01

    Flooding is the most destructive, frequent, and costly natural disaster faced by modern society, and is increasing in frequency and damage (deaths, displacements, and financial costs) as populations increase and climate change generates more extreme weather events. When major flooding events occur, the disaster management community needs frequently updated and easily accessible information to better understand the extent of flooding and coordinate response efforts. With funding from NASA's Applied Sciences program, we developed and are now operating a near real-time global flood mapping system to help provide flood extent information within 24-48 hours of events. The principal element of the system applies a water detection algorithm to MODIS imagery, which is processed by the LANCE (Land Atmosphere Near real-time Capability for EOS) system at NASA Goddard within a few hours of satellite overpass. Using imagery from both the Terra (10:30 AM local time overpass) and Aqua (1:30 PM) platforms allows the system to deliver an initial daily assessment of flood extent by late afternoon, and more robust assessments after accumulating cloud-free imagery over several days. Cloud cover is the primary limitation in detecting surface water from MODIS imagery. Other issues include the relatively coarse scale of the MODIS imagery (250 meters) for some events, the difficulty of detecting flood waters in areas with continuous canopy cover, confusion of shadow (cloud or terrain) with water, and accurately identifying detected water as flood as opposed to normal water extent. We are working on improvements to address these limitations. We have also begun delivery of near real time water maps at 30 m resolution from Landsat imagery. Landsat imagery is not available daily at global scale, but only every 8 days when imagery from both operating platforms (Landsat 7 and 8) is accessed; nevertheless, it can provide useful higher-resolution data on water extent when a clear acquisition coincides with an active flood event. These data products are provided in various formats on our website, and also via live OGC (Open Geospatial Consortium) services, and ArcGIS Online accessible web maps, allowing easy access from a variety of platforms, from desktop GIS software to web browsers on mobile phones. https://oas.gsfc.nasa.gov/floodmap

  14. Remote detection of weak aftershocks of the DPRK underground explosions using waveform cross correlation

    NASA Astrophysics Data System (ADS)

    Le Bras, R.; Rozhkov, M.; Bobrov, D.; Kitov, I. O.; Sanina, I.

    2017-12-01

    Association of weak seismic signals generated by low-magnitude aftershocks of the DPRK underground tests into event hypotheses represents a challenge for routine automatic and interactive processing at the International Data Centre (IDC) of the Comprehensive Nuclear-Test-Ban Treaty Organization, due to the relatively low station density of the International Monitoring System (IMS) seismic network. Since 2011, as an alternative, the IDC has been testing various prototype techniques of signal detection and event creation based on waveform cross correlation. Using signals measured by seismic stations of the IMS from DPRK explosions as waveform templates, the IDC detected several small (estimated mb between 2.2 and 3.6) seismic events after two DPRK tests conducted on September 9, 2016 and September 3, 2017. The obtained detections were associated into reliable event hypotheses and then used to locate these events relative to the epicenters of the DPRK explosions. We observe high similarity of the detected signals with the corresponding waveform templates. The newly found signals also correlate well with each other. In addition, the values of the signal-to-noise ratios (SNR) estimated using the traces of cross correlation coefficients increase with template length (from 5 s to 150 s), providing strong evidence in favour of their spatial closeness, which allows interpreting them as explosion aftershocks. We estimated the relative magnitudes of all aftershocks using the ratio of RMS amplitudes of the master and slave signal in the cross correlation windows characterized by the highest SNR. Additional waveform data from regional non-IMS stations MDJ and SEHB provide independent validation of these aftershock hypotheses. Since waveform templates from any single master event may be sub-efficient at some stations, we have also developed a method that jointly uses the DPRK explosion templates and the largest aftershock templates to build more robust event hypotheses.
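
    The template-matching step at the heart of this approach can be illustrated with a toy normalized cross-correlation detector and a CC-trace SNR measure of the kind mentioned above. The sketch below is a simplified single-channel stand-in (the real processing works on multi-channel IMS array data with proper preprocessing); the synthetic master waveform, window lengths, and SNR definition are assumptions.

      import numpy as np

      def normalized_cc(template, data):
          """Normalized cross-correlation of a master-event template against a
          continuous trace; values near 1 mark highly similar (nearby) events."""
          template = np.asarray(template, float)
          data = np.asarray(data, float)
          n = len(template)
          tpl = template - template.mean()
          tpl_norm = np.sqrt(np.dot(tpl, tpl))
          cc = np.zeros(len(data) - n + 1)
          for i in range(len(cc)):
              w = data[i:i + n]
              w = w - w.mean()
              denom = tpl_norm * np.sqrt(np.dot(w, w))
              cc[i] = np.dot(tpl, w) / denom if denom > 0 else 0.0
          return cc

      def cc_snr(cc, i, noise_half_width=500, guard=50):
          """SNR of a correlation peak: peak value over the RMS of the surrounding
          correlation trace, excluding a guard interval around the peak."""
          lo, hi = max(0, i - noise_half_width), min(len(cc), i + noise_half_width)
          noise = np.concatenate([cc[lo:max(lo, i - guard)], cc[min(hi, i + guard):hi]])
          return np.abs(cc[i]) / np.sqrt(np.mean(noise ** 2))

      # Hypothetical usage: bury a scaled copy of the master waveform in noise.
      rng = np.random.default_rng(3)
      t = np.arange(400)
      master = np.sin(2 * np.pi * t / 25.0) * np.exp(-t / 150.0)
      trace = rng.normal(0, 1.0, 6000)
      trace[3000:3400] += 0.8 * master          # weak "aftershock"
      cc = normalized_cc(master, trace)
      peak = int(np.argmax(cc))
      print("peak CC %.2f at sample %d, SNR %.1f" % (cc[peak], peak, cc_snr(cc, peak)))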

  15. Mapping the Recent US Hurricanes Triggered Flood Events in Near Real Time

    NASA Astrophysics Data System (ADS)

    Shen, X.; Lazin, R.; Anagnostou, E. N.; Wanik, D. W.; Brakenridge, G. R.

    2017-12-01

    Synthetic Aperture Radar (SAR) observations are the only reliable remote sensing data source to map flood inundation during severe weather events. Unfortunately, since state-of-the-art data processing algorithms cannot meet the automation and quality standard of a near-real-time (NRT) system, quality controlled inundation mapping by SAR currently depends heavily on manual processing, which limits our capability to quickly issue flood inundation maps at global scale. Specifically, most SAR-based inundation mapping algorithms are not fully automated, while those that are automated exhibit severe over- and/or under-detection errors that limit their potential. These detection errors are primarily caused by the strong overlap among the SAR backscattering probability density functions (PDF) of different land cover types. In this study, we tested a newly developed NRT SAR-based inundation mapping system, named Radar Produced Inundation Diary (RAPID), using Sentinel-1 dual polarized SAR data over recent flood events caused by Hurricanes Harvey, Irma, and Maria (2017). The system consists of 1) self-optimized multi-threshold classification, 2) over-detection removal using land-cover information and change detection, 3) under-detection compensation, and 4) machine-learning based correction. Algorithm details are introduced in another poster, H53J-1603. Good agreement was obtained by comparing the result from RAPID with visual interpretation of SAR images and manual processing from Dartmouth Flood Observatory (DFO) (See Figure 1). Specifically, the over- and under-detections that are typically noted in automated methods are significantly reduced to negligible levels. This performance indicates that RAPID can address the automation and accuracy issues of current state-of-the-art algorithms and has the potential to be applied operationally to a number of satellite SAR missions, such as SWOT, ALOS, and Sentinel. RAPID data can support many applications such as rapid assessment of damage losses and disaster alleviation/rescue at global scale.
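
    As a schematic of the kind of histogram-based splitting that self-optimized multi-threshold classification builds on, the sketch below applies a single Otsu-style threshold to SAR backscatter values and then removes pixels inside a reference water mask so that only flood remains. It is a deliberately simplified stand-in for RAPID's multi-threshold and correction steps; the backscatter distributions, threshold method, and mask are assumptions.

      import numpy as np

      def otsu_threshold(values, nbins=256):
          """Single global threshold maximizing between-class variance of a
          bimodal sample (SAR backscatter in dB: water is dark, land bright)."""
          hist, edges = np.histogram(values, bins=nbins)
          centers = 0.5 * (edges[:-1] + edges[1:])
          p = hist.astype(float) / hist.sum()
          best_t, best_var = centers[0], -1.0
          for k in range(1, nbins):
              w0, w1 = p[:k].sum(), p[k:].sum()
              if w0 == 0 or w1 == 0:
                  continue
              mu0 = np.dot(p[:k], centers[:k]) / w0
              mu1 = np.dot(p[k:], centers[k:]) / w1
              var_between = w0 * w1 * (mu0 - mu1) ** 2
              if var_between > best_var:
                  best_var, best_t = var_between, centers[k]
          return best_t

      # Hypothetical scene: open water around -18 dB, land around -7 dB.
      rng = np.random.default_rng(5)
      sigma0_db = np.concatenate([rng.normal(-18, 1.5, 4000), rng.normal(-7, 2.0, 6000)])
      thr = otsu_threshold(sigma0_db)
      water = sigma0_db < thr
      # Pixels flagged as water but inside a reference (normal) water mask would
      # be removed to keep only flood; here the mask is all-False for brevity.
      normal_water_mask = np.zeros_like(water, dtype=bool)
      flood = water & ~normal_water_mask
      print("threshold %.1f dB, water pixels %d, flood pixels %d"
            % (thr, int(water.sum()), int(flood.sum())))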

  16. Detection Of Special Nuclear Materials Tagged Neutrons

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Deyglun, Clement; Perot, Bertrand; Carasco, Cedric

    In order to detect Special Nuclear Materials (SNM) in unattended luggage or cargo containers in the field of homeland security, fissions are induced by 14 MeV neutrons produced by an associated particle DT neutron generator, and prompt fission particles correlated with the tagged neutron are detected by plastic scintillators. SNM produce high-multiplicity events due to induced fissions, whereas nonnuclear materials produce low-multiplicity events due to cross-talk, (n,2n) or (n,n'γ) reactions. The data acquisition electronics is made of compact FPGA boards. The coincidence window is triggered by the alpha particle detection, allowing the emission time and direction of the 14 MeV interrogating neutron to be tagged. The first part of the paper presents experiment vs. calculation comparisons to validate MCNP-PoliMi simulations and the post-processing tools developed with the data analysis framework ROOT. Measurements have been performed using different targets (iron, lead, graphite), first with small plastic scintillators (10 x 10 x 10 cm³) and then with large detectors (10 x 10 x 100 cm³) to demonstrate that nuclear materials can be differentiated from nonnuclear dense materials (iron, lead) in iron and wood matrixes. Special attention is paid to SNM detection in abandoned luggage. In the second part of the paper, the performances of a cargo container inspection system are studied by numerical simulation, following previously reported work. Detector dimensions and shielding against the neutron generator background are optimized for container inspection. Events not correlated to an alpha particle (uncorrelated background), counting statistics, time and energy resolutions of the data acquisition system are all taken into account in a realistic numerical model. The impact of the container matrix (iron, ceramic, wood) has been investigated by studying the system capability to detect a few kilograms of SNM in different positions in the cargo container, within 10 min acquisitions. (authors)

  17. Assessment of SRS ambient air monitoring network

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Abbott, K.; Jannik, T.

    Three methodologies have been used to assess the effectiveness of the existing ambient air monitoring system in place at the Savannah River Site in Aiken, SC. Effectiveness was measured using two metrics that have been utilized in previous quantification of air-monitoring network performance: frequency of detection (a measurement of how frequently a minimum number of samplers within the network detect an event), and network intensity (a measurement of how consistent each sampler within the network is at detecting events). In addition to determining the effectiveness of the current system, the objective of performing this assessment was to determine what, if any, changes could make the system more effective. Methodologies included 1) the Waite method of determining sampler distribution, 2) the CAP88-PC annual dose model, and 3) a puff/plume transport model used to predict air concentrations at sampler locations. Data collected from air samplers at SRS in 2015 compared with predicted data resulting from the methodologies determined that the frequency of detection for the current system is 79.2% with sampler efficiencies ranging from 5% to 45%, and a mean network intensity of 21.5%. One of the air monitoring stations had an efficiency of less than 10%, and detected releases during just one sampling period of the entire year, adding little to the overall network intensity. By moving or removing this sampler, the mean network intensity increased to about 23%. Further work in increasing the network intensity and simulating accident scenarios to further test the ambient air system at SRS is planned.
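
    Both metrics reduce to simple ratios once per-sampler detections are tabulated for each release period. The sketch below assumes network intensity is the mean per-sampler detection fraction, which matches the description above but is an interpretation rather than the exact SRS formulation; the detection matrix and probabilities are invented for illustration.

      import numpy as np

      def network_metrics(detections, min_samplers=1):
          """detections: boolean array (n_events x n_samplers), True where a
          sampler detected (or is predicted to detect) a given release event."""
          detections = np.asarray(detections, dtype=bool)
          hits_per_event = detections.sum(axis=1)
          # Frequency of detection: fraction of events seen by >= min_samplers.
          fod = np.mean(hits_per_event >= min_samplers)
          # Sampler efficiency: fraction of events each individual sampler detects.
          efficiency = detections.mean(axis=0)
          # Network intensity: the mean sampler efficiency across the network.
          intensity = efficiency.mean()
          return fod, efficiency, intensity

      # Hypothetical usage: 12 release periods observed by 6 samplers.
      rng = np.random.default_rng(4)
      det = rng.random((12, 6)) < np.array([0.45, 0.35, 0.25, 0.20, 0.10, 0.05])
      fod, eff, intensity = network_metrics(det)
      print("frequency of detection: %.2f" % fod)
      print("sampler efficiencies  :", np.round(eff, 2))
      print("network intensity     : %.2f" % intensity)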

  18. Characterizing a forest insect outbreak in Colorado by using MODIS NDVI phenology data and aerial detection survey data

    Treesearch

    Charlie Schrader-Patton; Nancy E. Grulke; Melissa E. Dressen

    2016-01-01

    Forest disturbances are increasing in extent and intensity, annually altering the structure and function of affected systems across millions of acres. Land managers need rapid assessment tools that can be used to characterize disturbance events across space and to meet forest planning needs. Unlike vegetation management projects and wildfire events, which typically are...

  19. Cyber Security Audit and Attack Detection Toolkit

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Peterson, Dale

    2012-05-31

    The goal of this project was to develop cyber security audit and attack detection tools for industrial control systems (ICS). Digital Bond developed and released a tool named Bandolier that audits ICS components commonly used in the energy sector against an optimal security configuration. The Portaledge Project developed a capability for the PI Historian, the most widely used Historian in the energy sector, to aggregate security events and detect cyber attacks.

  20. Network Catastrophe: Self-Organized Patterns Reveal both the Instability and the Structure of Complex Networks

    PubMed Central

    Moon, Hankyu; Lu, Tsai-Ching

    2015-01-01

    Critical events in society or biological systems can be understood as large-scale self-emergent phenomena due to deteriorating stability. We often observe peculiar patterns preceding these events, posing the question of how to interpret the self-organized patterns to know more about the imminent crisis. We start with a very general description of an interacting population giving rise to large-scale emergent behaviors that constitute critical events. Then we pose a key question: is there a quantifiable relation between the network of interactions and the emergent patterns? Our investigation leads to a fundamental understanding of how to: 1. Detect the system's transition based on the principal mode of the pattern dynamics; 2. Identify its evolving structure based on the observed patterns. The main finding of this study is that while the pattern is distorted by the network of interactions, its principal mode is invariant to the distortion even when the network constantly evolves. Our analysis of real-world markets shows common self-organized behavior near critical transitions, such as housing market collapses and stock market crashes; thus detection of critical events before they are in full effect is possible. PMID:25822423

  1. Network Catastrophe: Self-Organized Patterns Reveal both the Instability and the Structure of Complex Networks

    NASA Astrophysics Data System (ADS)

    Moon, Hankyu; Lu, Tsai-Ching

    2015-03-01

    Critical events in society or biological systems can be understood as large-scale self-emergent phenomena due to deteriorating stability. We often observe peculiar patterns preceding these events, posing the question of how to interpret the self-organized patterns to know more about the imminent crisis. We start with a very general description of an interacting population giving rise to large-scale emergent behaviors that constitute critical events. Then we pose a key question: is there a quantifiable relation between the network of interactions and the emergent patterns? Our investigation leads to a fundamental understanding of how to: 1. Detect the system's transition based on the principal mode of the pattern dynamics; 2. Identify its evolving structure based on the observed patterns. The main finding of this study is that while the pattern is distorted by the network of interactions, its principal mode is invariant to the distortion even when the network constantly evolves. Our analysis of real-world markets shows common self-organized behavior near critical transitions, such as housing market collapses and stock market crashes; thus detection of critical events before they are in full effect is possible.

  2. Modal Acoustic Emission Used at Elevated Temperatures to Detect Damage and Failure Location in Ceramic Matrix Composites

    NASA Technical Reports Server (NTRS)

    Morscher, Gregory N.

    1999-01-01

    Ceramic matrix composites are being developed for elevated-temperature engine applications. A leading material system in this class of materials is silicon carbide (SiC) fiber-reinforced SiC matrix composites. Unfortunately, the nonoxide fibers, matrix, and interphase (boron nitride in this system) can react with oxygen or water vapor in the atmosphere, leading to strength degradation of the composite at elevated temperatures. For this study, constant-load stress-rupture tests were performed in air at temperatures ranging from 815 to 960 C until failure. From these data, predictions can be made for the useful life of such composites under similar stressed-oxidation conditions. During these experiments, the sounds of failure events (matrix cracking and fiber breaking) were monitored with a modal acoustic emission (AE) analyzer through transducers that were attached at the ends of the tensile bars. Such failure events, which are caused by applied stress and oxidation reactions, cause these composites to fail prematurely. Because of the nature of acoustic waveform propagation in thin tensile bars, the location of individual source events and the eventual failure event could be detected accurately.

  3. Sensor data monitoring and decision level fusion scheme for early fire detection

    NASA Astrophysics Data System (ADS)

    Rizogiannis, Constantinos; Thanos, Konstantinos Georgios; Astyakopoulos, Alkiviadis; Kyriazanos, Dimitris M.; Thomopoulos, Stelios C. A.

    2017-05-01

    The aim of this paper is to present the sensor monitoring and decision-level fusion scheme for early fire detection developed in the context of the AF3 (Advanced Forest Fire Fighting) European FP7 research project, adopted specifically in the OCULUS-Fire control and command system and tested during a firefighting field test in Greece with a prescribed real fire, generating early-warning detection alerts and notifications. For this purpose, and in order to improve the reliability of the fire detection system, a two-level fusion scheme is developed that exploits a variety of observation sources from the air (e.g. UAV infrared cameras), the ground (e.g. meteorological and atmospheric sensors) and ancillary sources (e.g. public information channels, citizens' smartphone applications and social media). In the first level, a change-point detection technique is applied to detect changes in the mean value of each parameter measured by the ground sensors, such as temperature, humidity and CO2, and the Rate-of-Rise of each changed parameter is then calculated. In the second level, the fire event Basic Probability Assignment (BPA) function is determined for each ground sensor using Fuzzy-logic theory, and the corresponding mass values are then combined in a decision-level fusion process using Evidential Reasoning theory to estimate the final fire event probability.
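
    The second-level combination can be pictured with Dempster's rule applied to per-sensor mass assignments over a two-hypothesis frame (fire / no fire, plus ignorance). The sketch below is a generic evidential-reasoning illustration; the mass values, the two-element frame, and the pignistic read-out are assumptions and not the AF3/OCULUS-Fire implementation.

      from itertools import product

      # Frame of discernment per ground sensor: 'F' = fire, 'N' = no fire,
      # 'FN' = ignorance (mass assigned to the whole frame).
      def dempster_combine(m1, m2):
          """Dempster's rule of combination for two basic probability
          assignments over the frame {'F', 'N'}, with 'FN' the full frame."""
          sets = {'F': {'F'}, 'N': {'N'}, 'FN': {'F', 'N'}}
          combined = {'F': 0.0, 'N': 0.0, 'FN': 0.0}
          conflict = 0.0
          for a, b in product(m1, m2):
              inter = sets[a] & sets[b]
              weight = m1[a] * m2[b]
              if not inter:
                  conflict += weight
              else:
                  key = 'FN' if inter == {'F', 'N'} else inter.pop()
                  combined[key] += weight
          if conflict >= 1.0:
              raise ValueError("totally conflicting evidence")
          return {k: v / (1.0 - conflict) for k, v in combined.items()}

      # Hypothetical per-sensor masses, e.g. produced by fuzzy membership
      # functions applied to the rate of rise of temperature, humidity and CO2.
      temperature = {'F': 0.60, 'N': 0.10, 'FN': 0.30}
      humidity    = {'F': 0.40, 'N': 0.30, 'FN': 0.30}
      co2         = {'F': 0.55, 'N': 0.15, 'FN': 0.30}

      fused = dempster_combine(dempster_combine(temperature, humidity), co2)
      print("fused masses:", {k: round(v, 3) for k, v in fused.items()})
      print("fire probability (pignistic):", round(fused['F'] + fused['FN'] / 2, 3))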

  4. Tracking Vessels to Illegal Pollutant Discharges Using Multisource Vessel Information

    NASA Astrophysics Data System (ADS)

    Busler, J.; Wehn, H.; Woodhouse, L.

    2015-04-01

    Illegal discharge of bilge waters is a significant source of oil and other environmental pollutants in Canadian and international waters. Imaging satellites are commonly used to monitor large areas to detect oily discharges from vessels, off-shore platforms and other sources. While remotely sensed imagery provides a snap-shot picture useful for detecting a spill or the presence of vessels in the vicinity, it is difficult to directly associate a vessel to an observed spill unless the vessel is observed while the discharge is occurring. The situation then becomes more challenging with increased vessel traffic as multiple vessels may be associated with a spill event. By combining multiple sources of vessel location data, such as Automated Information Systems (AIS), Long Range Identification and Tracking (LRIT) and SAR-based ship detection, with spill detections and drift models we have created a system that associates detected spill events with vessels in the area using a probabilistic model that intersects vessel tracks and spill drift trajectories in both time and space. Working with the Canadian Space Agency and the Canadian Ice Service's Integrated Satellite Tracking of Pollution (ISTOP) program, we use spills observed in Canadian waters to demonstrate the investigative value of augmenting spill detections with temporally sequenced vessel and spill tracking information.

  5. Pipeline oil fire detection with MODIS active fire products

    NASA Astrophysics Data System (ADS)

    Ogungbuyi, M. G.; Martinez, P.; Eckardt, F. D.

    2017-12-01

    We investigate 85 129 MODIS satellite active fire events from 2007 to 2015 in the Niger Delta of Nigeria. The region is the base of the Nigerian oil economy and the hub of oil exploration, where oil facilities (i.e. flowlines, flow stations, trunklines, oil wells and oil fields) are located, and from where crude oil and refined products are transported to different Nigerian locations through a network of pipeline systems. Pipelines and other oil facilities are consistently susceptible to oil leaks due to operational or maintenance errors and to acts of deliberate sabotage of the pipeline equipment, which often result in explosions and fire outbreaks. We used ground oil spill reports obtained from the National Oil Spill Detection and Response Agency (NOSDRA) database (see www.oilspillmonitor.ng) to validate the MODIS satellite data. The NOSDRA database shows an estimated 10 000 spill events from 2007 to 2015. The spill events were filtered to include the largest spills by volume and events occurring only in the Niger Delta (i.e. 386 spills). By projecting both the MODIS fire and spill data as 'input vector' layers with 'Point' geometry, and the Nigerian pipeline networks as 'from vector' layers with 'LineString' geometry in a geographical information system, we extracted the MODIS events nearest to the pipelines (i.e. 2192 events within a 1000 m distance) in a spatial vector analysis. The extraction distance to the pipelines is based on the global Right of Way (ROW) practice in pipeline management, which earmarks a 30 m strip of land for the pipeline. The KML files of the extracted fires viewed in Google Maps validated that their sources were oil facilities. Land cover mapping confirmed the fire anomalies. The aim of the study is to propose near-real-time monitoring of spill events along pipeline routes using the 250 m spatial resolution MODIS active fire detection sensor when such spills are accompanied by fire events in the study location.
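
    The nearest-distance extraction step amounts to keeping fire detections whose distance to the pipeline geometry falls within the 1000 m buffer. The sketch below shows one way to do this with the shapely package on hypothetical coordinates already projected to metres; the geometries and points are invented, and the study's actual GIS workflow may differ.

      from shapely.geometry import LineString, Point

      # Hypothetical pipeline segment and fire detections, already projected to
      # a metric coordinate system (e.g. UTM), so distances are in metres.
      pipeline = LineString([(0, 0), (5000, 0), (9000, 3000)])
      fires = [Point(1200, 350), Point(4800, 2500), Point(8800, 3100), Point(2000, 40)]

      buffer_m = 1000.0   # extraction distance used in the study
      near_pipeline = [p for p in fires if pipeline.distance(p) <= buffer_m]

      for p in near_pipeline:
          print("fire at (%.0f, %.0f), %.0f m from pipeline"
                % (p.x, p.y, pipeline.distance(p)))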

  6. Electro-optical muzzle flash detection

    NASA Astrophysics Data System (ADS)

    Krieg, Jürgen; Eisele, Christian; Seiffer, Dirk

    2016-10-01

    Localizing a shooter in a complex scenario is a difficult task. Acoustic sensors can be used to detect blast waves. Radar technology permits detection of the projectile. A third method is to detect the muzzle flash using electro-optical devices. Detection of muzzle flash events is possible with focal plane arrays, line and single element detectors. In this paper, we will show that the detection of a muzzle flash works well in the shortwave infrared spectral range. Important for the acceptance of an operational warning system in daily use is a very low false alarm rate. Using data from a detector with a high sampling rate the temporal signature of a potential muzzle flash event can be analyzed and the false alarm rate can be reduced. Another important issue is the realization of an omnidirectional view required on an operational level. It will be shown that a combination of single element detectors and simple optics in an appropriate configuration is a capable solution.

  7. Quantitative and qualitative assessment of the bovine abortion surveillance system in France.

    PubMed

    Bronner, Anne; Gay, Emilie; Fortané, Nicolas; Palussière, Mathilde; Hendrikx, Pascal; Hénaux, Viviane; Calavas, Didier

    2015-06-01

    Bovine abortion is the main clinical sign of bovine brucellosis, a disease of which France has been declared officially free since 2005. To ensure the early detection of any brucellosis outbreak, event-driven surveillance relies on the mandatory notification of bovine abortions and the brucellosis testing of aborting cows. However, the under-reporting of abortions appears frequent. Our objectives were to assess the aptitude of the bovine abortion surveillance system to detect each and every bovine abortion and to identify factors influencing the system's effectiveness. We evaluated five attributes defined by the U.S. Centers for Disease Control with a method suited to each attribute: (1) data quality was studied quantitatively and qualitatively, as this factor considerably influences data analysis and results; (2) sensitivity and representativeness were estimated using a unilist capture-recapture approach to quantify the surveillance system's effectiveness; (3) acceptability and simplicity were studied through qualitative interviews of actors in the field, given that the surveillance system relies heavily on abortion notifications by farmers and veterinarians. Our analysis showed that (1) data quality was generally satisfactory even though some errors might be due to actors' lack of awareness of the need to collect accurate data; (2) from 2006 to 2011, the mean annual sensitivity - i.e. the proportion of farmers who reported at least one abortion out of all those who detected such events - was around 34%, but was significantly higher in dairy than beef cattle herds (highlighting a lack of representativeness); (3) overall, the system's low sensitivity was related to its low acceptability and lack of simplicity. This study showed that, in contrast to policy-makers, most farmers and veterinarians perceived the risk of a brucellosis outbreak as negligible. They did not consider sporadic abortions as a suspected case of brucellosis and usually reported abortions only to identify their cause rather than to reject brucellosis. The system proved too complex, especially for beef cattle farmers, as they may fail to detect aborting cows at pasture or have difficulties catching them for sampling. By investigating critical attributes, our evaluation highlighted the surveillance system's strengths and needed improvements. We believe our comprehensive approach can be used to assess other event-driven surveillance systems. In addition, some of our recommendations on increasing the effectiveness of event-driven brucellosis surveillance may be useful in improving the notification rate for suspected cases of other exotic diseases. Copyright © 2015 Elsevier B.V. All rights reserved.

  8. myBlackBox: Blackbox Mobile Cloud Systems for Personalized Unusual Event Detection.

    PubMed

    Ahn, Junho; Han, Richard

    2016-05-23

    We demonstrate the feasibility of constructing a novel and practical real-world mobile cloud system, called myBlackBox, that efficiently fuses multimodal smartphone sensor data to identify and log unusual personal events in mobile users' daily lives. The system incorporates a hybrid architectural design that combines unsupervised classification of audio, accelerometer and location data with supervised joint fusion classification to achieve high accuracy, customization, convenience and scalability. We show the feasibility of myBlackBox by implementing and evaluating this end-to-end system that combines Android smartphones with cloud servers, deployed for 15 users over a one-month period.

  9. myBlackBox: Blackbox Mobile Cloud Systems for Personalized Unusual Event Detection

    PubMed Central

    Ahn, Junho; Han, Richard

    2016-01-01

    We demonstrate the feasibility of constructing a novel and practical real-world mobile cloud system, called myBlackBox, that efficiently fuses multimodal smartphone sensor data to identify and log unusual personal events in mobile users’ daily lives. The system incorporates a hybrid architectural design that combines unsupervised classification of audio, accelerometer and location data with supervised joint fusion classification to achieve high accuracy, customization, convenience and scalability. We show the feasibility of myBlackBox by implementing and evaluating this end-to-end system that combines Android smartphones with cloud servers, deployed for 15 users over a one-month period. PMID:27223292

  10. DETECTION OF FAST RADIO TRANSIENTS WITH MULTIPLE STATIONS: A CASE STUDY USING THE VERY LONG BASELINE ARRAY

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Thompson, David R.; Wagstaff, Kiri L.; Majid, Walid A.

    2011-07-10

    Recent investigations reveal an important new class of transient radio phenomena that occur on submillisecond timescales. Often, transient surveys' data volumes are too large to archive exhaustively. Instead, an online automatic system must excise impulsive interference and detect candidate events in real time. This work presents a case study using data from multiple geographically distributed stations to perform simultaneous interference excision and transient detection. We present several algorithms that incorporate dedispersed data from multiple sites, and report experiments with a commensal real-time transient detection system on the Very Long Baseline Array. We test the system using observations of pulsar B0329+54. The multiple-station algorithms enhanced sensitivity for detection of individual pulses. These strategies could improve detection performance for a future generation of geographically distributed arrays such as the Australian Square Kilometre Array Pathfinder and the Square Kilometre Array.
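
    One ingredient of multiple-station processing, requiring that a candidate appear at several geographically separated sites before it is accepted, can be sketched as a simple coincidence filter over per-station candidate times. This toy version ignores dedispersion and geometric delays, and the times, tolerance, and station lists are assumptions; it only illustrates why single-site impulses (local interference) are excised while a common astrophysical pulse survives.

      def coincident_candidates(station_events, min_stations=2, tol_s=0.05):
          """Keep candidate pulse times seen at >= min_stations stations within
          tol_s seconds; single-station candidates are treated as local RFI.

          station_events : list of sorted lists of candidate arrival times (s),
                           one list per station, already corrected to a common
                           reference frame.
          """
          confirmed = []
          for i, events in enumerate(station_events):
              for t0 in events:
                  stations_seen = 1
                  for j, other in enumerate(station_events):
                      if j == i:
                          continue
                      if any(abs(t0 - t1) <= tol_s for t1 in other):
                          stations_seen += 1
                  if stations_seen >= min_stations and not any(
                          abs(t0 - c) <= tol_s for c in confirmed):
                      confirmed.append(t0)
          return confirmed

      # Hypothetical usage: three stations, one real pulse near t = 12.30 s and
      # uncorrelated local interference at each site.
      sta_a = [3.10, 12.30, 40.02]
      sta_b = [7.77, 12.31]
      sta_c = [12.29, 55.50]
      print(coincident_candidates([sta_a, sta_b, sta_c]))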

  11. Using a Calculated Pulse Rate with an Artificial Neural Network to Detect Irregular Interbeats.

    PubMed

    Yeh, Bih-Chyun; Lin, Wen-Piao

    2016-03-01

    Heart rate is an important clinical measure that is often used in pathological diagnosis and prognosis. Valid detection of irregular heartbeats is crucial in clinical practice. We propose an artificial neural network using the calculated pulse rate to detect irregular interbeats. The proposed system measures the calculated pulse rate to determine an "irregular interbeat on" or "irregular interbeat off" event. If an irregular interbeat is detected, the proposed system produces a danger warning, which is helpful for clinicians. If no irregular interbeat is detected, the proposed system displays the calculated pulse rate. We include a flow chart of the proposed software. In an experiment, we measure the calculated pulse rates and achieve an error percentage of < 3% in 20 participants with a wide age range. When we use the calculated pulse rates to detect irregular interbeats, we find such irregular interbeats in eight participants.

  12. A model for anomaly classification in intrusion detection systems

    NASA Astrophysics Data System (ADS)

    Ferreira, V. O.; Galhardi, V. V.; Gonçalves, L. B. L.; Silva, R. C.; Cansian, A. M.

    2015-09-01

    Intrusion Detection Systems (IDS) are traditionally divided into two types according to the detection methods they employ, namely (i) misuse detection and (ii) anomaly detection. Anomaly detection has been widely used and its main advantage is the ability to detect new attacks. However, the analysis of the anomalies generated can become expensive, since they often carry no clear information about the malicious events they represent. In this context, this paper presents a model for automated classification of alerts generated by an anomaly based IDS. The main goal is either to classify the detected anomalies into well-defined attack taxonomies or to identify whether an anomaly is a false positive misclassified by the IDS. Some common attacks on computer networks were considered and we achieved important results that can equip security analysts with better resources for their analyses.

  13. Automatic near-real-time detection of CMEs in Mauna Loa K-Cor coronagraph images

    NASA Astrophysics Data System (ADS)

    Thompson, William T.; St. Cyr, Orville Chris; Burkepile, Joan; Posner, Arik

    2017-08-01

    A simple algorithm has been developed to detect the onset of coronal mass ejections (CMEs), together with an estimate of their speed, in near-real-time using images of the linearly polarized white-light solar corona taken by the K-Cor telescope at the Mauna Loa Solar Observatory (MLSO). The algorithm used is a variation on the Solar Eruptive Event Detection System (SEEDS) developed at George Mason University. The algorithm was tested against K-Cor data taken between 29 April 2014 and 20 February 2017, on days that the MLSO website marked as containing CMEs. This resulted in testing of 139 days' worth of data containing 171 CMEs. The detection rate varied from close to 80% in 2014-2015 when solar activity was high, down to as low as 20-30% in 2017 when activity was low. The difference in effectiveness with solar cycle is attributed to the difference in relative prevalence of strong CMEs between active and quiet periods. There were also twelve false detections during this time period, leading to an average false detection rate of 8.6% on any given day. However, half of the false detections were clustered into two short periods of a few days each when special conditions prevailed to increase the false detection rate. The K-Cor data were also compared with major Solar Energetic Particle (SEP) storms during this time period. There were three SEP events detected either at Earth or at one of the two STEREO spacecraft where K-Cor was observing during the relevant time period. The K-Cor CME detection algorithm successfully generated alerts for two of these events, with lead times of 1-3 hours before the SEP onset at 1 AU. The third event was not detected by the automatic algorithm because of the unusually broad width of the CME in position angle.
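
    Once a CME front has been picked in successive difference images, the speed estimate reduces to a linear fit of leading-edge height against time. The sketch below shows that final step on invented height-time measurements; the cadence, heights, and plane-of-sky assumption are illustrative and not the SEEDS-derived algorithm itself.

      import numpy as np

      # Hypothetical leading-edge heights (solar radii) from successive
      # running-difference K-Cor images taken 2 minutes apart.
      RSUN_KM = 695700.0
      times_s = np.array([0.0, 120.0, 240.0, 360.0, 480.0])
      heights_rsun = np.array([1.15, 1.28, 1.40, 1.54, 1.66])

      # Linear fit of height versus time; the slope is the plane-of-sky speed.
      slope_rsun_per_s, intercept = np.polyfit(times_s, heights_rsun, 1)
      speed_km_s = slope_rsun_per_s * RSUN_KM
      print("estimated CME speed: %.0f km/s" % speed_km_s)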

  14. Jupiter emission observed near 1 MHz

    NASA Technical Reports Server (NTRS)

    Brown, L. W.

    1974-01-01

    Emission from Jupiter has been observed by the IMP-6 spacecraft at 19 frequencies between 600 and 9900 kHz covering the period from April 1971 to October 1972. The Jovian bursts were identified in the IMP-6 data through the phase of the observed modulated signal detected from the spinning dipole antenna. Initial data reduction has isolated 177 events over a span of 500 days. These events persisted over a period between 1 and 60 min. Of these events at least 48 occurred during times in which Jupiter emission was being observed at either 16.7 or 22.2 MHz by ground-based instruments of the Goddard Space Flight Center Jupiter monitoring system. Large bursts were detectable from 9900 kHz down to 600 kHz, while smaller bursts ranged down to 1030 kHz.

  15. A Participatory System for Preventing Pandemics of Animal Origins: Pilot Study of the Participatory One Health Disease Detection (PODD) System

    PubMed Central

    Yano, Terdsak; Phornwisetsirikun, Somphorn; Susumpow, Patipat; Visrutaratna, Surasing; Chanachai, Karoon; Phetra, Polawat; Chaisowwong, Warangkhana; Trakarnsirinont, Pairat; Hemwan, Phonpat; Kaewpinta, Boontuan; Singhapreecha, Charuk; Kreausukon, Khwanchai; Charoenpanyanet, Arisara ; Robert, Chongchit Sripun; Robert, Lamar; Rodtian, Pranee; Mahasing, Suteerat; Laiya, Ekkachai; Pattamakaew, Sakulrat; Tankitiyanon, Taweesart; Sansamur, Chalutwan

    2018-01-01

    Background Aiming for early disease detection and prompt outbreak control, digital technology with a participatory One Health approach was used to create a novel disease surveillance system called Participatory One Health Disease Detection (PODD). PODD is a community-owned surveillance system that collects data from volunteer reporters; identifies disease outbreak automatically; and notifies the local governments (LGs), surrounding villages, and relevant authorities. This system provides a direct and immediate benefit to the communities by empowering them to protect themselves. Objective The objective of this study was to determine the effectiveness of the PODD system for the rapid detection and control of disease outbreaks. Methods The system was piloted in 74 LGs in Chiang Mai, Thailand, with the participation of 296 volunteer reporters. The volunteers and LGs were key participants in the piloting of the PODD system. Volunteers monitored animal and human diseases, as well as environmental problems, in their communities and reported these events via the PODD mobile phone app. LGs were responsible for outbreak control and provided support to the volunteers. Outcome mapping was used to evaluate the performance of the LGs and volunteers. Results LGs were categorized into one of the 3 groups based on performance: A (good), B (fair), and C (poor), with the majority (46%,34/74) categorized into group B. Volunteers were similarly categorized into 4 performance groups (A-D), again with group A showing the best performance, with the majority categorized into groups B and C. After 16 months of implementation, 1029 abnormal events had been reported and confirmed to be true reports. The majority of abnormal reports were sick or dead animals (404/1029, 39.26%), followed by zoonoses and other human diseases (129/1029, 12.54%). Many potentially devastating animal disease outbreaks were detected and successfully controlled, including 26 chicken high mortality outbreaks, 4 cattle disease outbreaks, 3 pig disease outbreaks, and 3 fish disease outbreaks. In all cases, the communities and animal authorities cooperated to apply community contingency plans to control these outbreaks, and community volunteers continued to monitor the abnormal events for 3 weeks after each outbreak was controlled. Conclusions By design, PODD initially targeted only animal diseases that potentially could emerge into human pandemics (eg, avian influenza) and then, in response to community needs, expanded to cover human health and environmental health issues. PMID:29563079

  16. Towards cross-lingual alerting for bursty epidemic events.

    PubMed

    Collier, Nigel

    2011-10-06

    Online news reports are increasingly becoming a source for event-based early warning systems that detect natural disasters. Harnessing the massive volume of information available from multilingual newswire presents as many challenges as opportunities due to the patterns of reporting complex spatio-temporal events. In this article we study the problem of utilising correlated event reports across languages. We track the evolution of 16 disease outbreaks using 5 temporal aberration detection algorithms on text-mined events classified according to disease and outbreak country. Using ProMED reports as a silver standard, comparative analysis of news data for 13 languages over a 129-day trial period showed improved sensitivity, F1 and timeliness across most models using cross-lingual events. We report a detailed case study analysis for Cholera in Angola 2010 which highlights the challenges faced in correlating news events with the silver standard. The results show that automated health surveillance using multilingual text mining has the potential to turn low value news into high value alerts if informed choices are used to govern the selection of models and data sources. An implementation of the C2 alerting algorithm using multilingual news is available at the BioCaster portal http://born.nii.ac.jp/?page=globalroundup.
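
    Several of the aberration-detection algorithms evaluated in such studies follow the simple moving-baseline pattern of the C2 method mentioned above: compare today's count with the mean and standard deviation of a short baseline that ends a couple of days earlier. The sketch below is written in that spirit; the exact baseline length, guard interval, variance floor, and synthetic counts are assumptions rather than the evaluated implementation.

      import numpy as np

      def c2_alerts(counts, baseline_len=7, guard=2, threshold=3.0):
          """Flag days whose event count exceeds mean + threshold * std of a
          short baseline ending 'guard' days earlier (in the spirit of the
          EARS C2 aberration-detection algorithm)."""
          counts = np.asarray(counts, dtype=float)
          alerts = []
          for t in range(baseline_len + guard, len(counts)):
              baseline = counts[t - guard - baseline_len:t - guard]
              mu, sd = baseline.mean(), baseline.std(ddof=1)
              sd = max(sd, 0.5)              # floor to avoid zero-variance alarms
              if counts[t] > mu + threshold * sd:
                  alerts.append(t)
          return alerts

      # Hypothetical daily counts of text-mined outbreak reports for one disease
      # and country, with a burst starting on day 16.
      daily_reports = [1, 0, 2, 1, 1, 0, 1, 2, 1, 0, 1, 1, 2, 0, 1, 1, 7, 9, 12, 8]
      print("alert days:", c2_alerts(daily_reports))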

  17. Final Technical Report. Project Boeing SGS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bell, Thomas E.

    Boeing and its partner, PJM Interconnection, teamed to bring advanced “defense-grade” technologies for cyber security to the US regional power grid through demonstration in PJM’s energy management environment. Under this cooperative project with the Department of Energy, Boeing and PJM have developed and demonstrated a host of technologies specifically tailored to the needs of PJM and the electric sector as a whole. The team has demonstrated to the energy industry a combination of processes, techniques and technologies that have been successfully implemented in the commercial, defense, and intelligence communities to identify, mitigate and continuously monitor the cyber security of critical systems. Guided by the results of a Cyber Security Risk-Based Assessment completed in Phase I, the Boeing-PJM team has completed multiple iterations through the Phase II Development and Phase III Deployment phases. Multiple cyber security solutions have been completed across a variety of controls including: Application Security, Enhanced Malware Detection, Security Incident and Event Management (SIEM) Optimization, Continuous Vulnerability Monitoring, SCADA Monitoring/Intrusion Detection, Operational Resiliency, Cyber Range simulations and hands-on cyber security personnel training. All of the developed and demonstrated solutions are suitable for replication across the electric sector and/or the energy sector as a whole. Benefits identified include: Improved malware and intrusion detection capability on critical SCADA networks including behavioral-based alerts resulting in improved zero-day threat protection; Improved Security Incident and Event Management system resulting in better threat visibility, thus increasing the likelihood of detecting a serious event; Improved malware detection and zero-day threat response capability; Improved ability to systematically evaluate and secure in house and vendor sourced software applications; Improved ability to continuously monitor and maintain secure configuration of network devices resulting in reduced vulnerabilities for potential exploitation; Improved overall cyber security situational awareness through the integration of multiple discrete security technologies into a single cyber security reporting console; Improved ability to maintain the resiliency of critical systems in the face of a targeted cyber attack or other significant event; Improved ability to model complex networks for penetration testing and advanced training of cyber security personnel.

  18. NORSAR detection processing system

    NASA Astrophysics Data System (ADS)

    Loughran, L. B.

    1987-05-01

    This Semiannual Technical Summary describes the operation, maintenance and research activities at the Norwegian Seismic Array (NORSAR). Investigations into further potential improvements in the NORSAR array processing system have continued. A new Detection Processor (DP) program has been developed and tested in an off-line mode. This program is flexible enough to conduct both NORSAR and NORESS detection processing as is done today, besides incorporating improved algorithms. A wide-band slowness estimation technique has been investigated by processing data from several events from the same location. Ten quarry blasts at a dam construction site in western Russia and sixteen Semipalatinsk nuclear explosions were selected. The major conclusion from this study is that employing a wider frequency band clearly tends to increase the stability of the slowness estimates, provided the signal-to-noise ratio is adequate over the band of interest. The stability was found, particularly for Pn, to be remarkably good for the western Norway quarry blasts when using a fixed frequency band for each phase for all ten events.

  19. Self-similarity Clustering Event Detection Based on Triggers Guidance

    NASA Astrophysics Data System (ADS)

    Zhang, Xianfei; Li, Bicheng; Tian, Yuxuan

    The traditional approach to Event Detection and Characterization (EDC) treats event detection as a classification problem. It uses words as samples to train a classifier, which can lead to an imbalance between positive and negative samples, and it also suffers from data sparseness when the corpus is small. Rather than classifying events with words as samples, this paper clusters events when judging event types. It adopts self-similarity to converge on the value of K in the K-means algorithm under the guidance of event triggers, and optimizes the clustering algorithm. Then, by combining named entities and their relative position information, the new method further pins down the exact type of each event. The new method avoids the dependence on event templates found in traditional methods, and its event detection results can readily be used in automatic text summarization, text retrieval, and topic detection and tracking.

  20. Social-aware Event Handling within the FallRisk Project.

    PubMed

    De Backere, Femke; Van den Bergh, Jan; Coppers, Sven; Elprama, Shirley; Nelis, Jelle; Verstichel, Stijn; Jacobs, An; Coninx, Karin; Ongenae, Femke; De Turck, Filip

    2017-01-09

    With the rise of the Internet of Things, wearables and smartphones are moving to the foreground. Ambient Assisted Living solutions are, for example, created to facilitate ageing in place. One example of such systems is the fall detection system. Currently, there exists a wide variety of fall detection systems using different methodologies and technologies. However, these systems often do not take into account the fall handling process, which starts after a fall is identified, or this process consists only of sending a notification. The FallRisk system delivers an accurate analysis of incidents occurring in the homes of older adults using several sensors and smart devices. Moreover, the input from these devices can be used to create a social-aware event handling process, which leads to assisting the older adult as soon as possible and in the best possible way. The FallRisk system consists of several components, located in different places. When an incident is identified by the FallRisk system, the event handling process will be followed to assess the fall incident and select the most appropriate caregiver, based on the input of the smartphones of the caregivers. In this process, availability and location are automatically taken into account. The event handling process was evaluated during a decision tree workshop to verify whether current practices reflect the requirements of all the stakeholders. Other knowledge uncovered during this workshop can be taken into account to further improve the process. The FallRisk system offers a way to detect fall incidents in an accurate way and uses context information to assign the incident to the most appropriate caregiver. This way, the consequences of the fall are minimized and help arrives on location as fast as possible. It could be concluded that the current guidelines on fall handling reflect the needs of the stakeholders. However, current technology evolutions, such as the uptake of wearables and smartphones, enable the improvement of these guidelines, such as the automatic ordering of the caregivers based on their location and availability.

  1. Automated Feature and Event Detection with SDO AIA and HMI Data

    NASA Astrophysics Data System (ADS)

    Davey, Alisdair; Martens, P. C. H.; Attrill, G. D. R.; Engell, A.; Farid, S.; Grigis, P. C.; Kasper, J.; Korreck, K.; Saar, S. H.; Su, Y.; Testa, P.; Wills-Davey, M.; Savcheva, A.; Bernasconi, P. N.; Raouafi, N.-E.; Delouille, V. A.; Hochedez, J. F.; Cirtain, J. W.; Deforest, C. E.; Angryk, R. A.; de Moortel, I.; Wiegelmann, T.; Georgoulis, M. K.; McAteer, R. T. J.; Hurlburt, N.; Timmons, R.

    The Solar Dynamics Observatory (SDO) represents a new frontier in quantity and quality of solar data. At about 1.5 TB/day, the data will not be easily digestible by solar physicists using the same methods that have been employed for images from previous missions. In order for solar scientists to use the SDO data effectively they need meta-data that will allow them to identify and retrieve data sets that address their particular science questions. We are building a comprehensive computer vision pipeline for SDO, abstracting complete metadata on many of the features and events detectable on the Sun without human intervention. Our project unites more than a dozen individual, existing codes into a systematic tool that can be used by the entire solar community. The feature finding codes will run as part of the SDO Event Detection System (EDS) at the Joint Science Operations Center (JSOC; joint between Stanford and LMSAL). The metadata produced will be stored in the Heliophysics Event Knowledgebase (HEK), which will be accessible on-line for the rest of the world directly or via the Virtual Solar Observatory (VSO) . Solar scientists will be able to use the HEK to select event and feature data to download for science studies.

  2. An investigation of deformation and fluid flow at subduction zones using newly developed instrumentation and finite element modeling

    NASA Astrophysics Data System (ADS)

    Labonte, Alison Louise

    Detecting seafloor deformation events in the offshore convergent margin environment is of particular importance considering the significant seismic hazard at subduction zones. Efforts to gain insight into the earthquake cycle have been made at the Cascadia and Costa Rica subduction margins through recent expansions of onshore GPS and seismic networks. While these studies have given scientists the ability to quantify and locate slip events in the seismogenic zone, there is little technology available for adequately measuring offshore aseismic slip. This dissertation introduces an improved flow meter for detecting seismic and aseismic deformation in submarine environments. The value of such hydrologic measurements for quantifying the geodetics at offshore margins is verified through a finite element modeling (FEM) study in which the character of deformation in the shallow subduction zone is determined from previously recorded hydrologic events at the Costa Rica Pacific margin. Accurately sensing aseismic events is one key to determining the stress state in subduction zones as these slow-slip events act to load or unload the seismogenic zone during the interseismic period. One method for detecting seismic and aseismic strain events is to monitor the hydrogeologic response to strain events using fluid flow meters. Previous instrumentation, the Chemical Aqueous Transport (CAT) meter which measures flow rates through the sediment-water interface, can detect transient events at very low flowrates, down to 0.0001 m/yr. The CAT meter performs well in low flow rate environments and can capture gradual changes in flow rate, as might be expected during ultra slow slip events. However, it cannot accurately quantify high flow rates through fractures and conduits, nor does it have the temporal resolution and accuracy required for detecting transient flow events associated with rapid deformation. The Optical Tracer Injection System (OTIS) developed for this purpose is an electronic flow meter that can measure flow rates of 0.1 to >500 m/yr at a temporal resolution of 30 minutes to 0.5 minutes, respectively. Test deployments of the OTIS at cold seeps in the transpressional Monterey Bay demonstrated the OTIS functionality over this range of flow environments. Although no deformation events were detected during these test deployments, the OTIS's temporally accurate measurements at the vigorously flowing Monterey Bay cold seep rendered valuable insight into the plumbing of the seep system. In addition to the capability to detect transient flow events, a primary functional requirement of the OTIS was the ability to communicate and transfer data for long-term real-time monitoring deployments. Real-time data transfer from the OTIS to the desktop was successful during a test deployment of the Nootka Observatory, an acoustically-linked moored-buoy system. A small array of CAT meters was also deployed at the Nootka transform-Cascadia subduction zone triple junction. Four anomalous flow rate events were observed across all four meters during the yearlong deployment. Although the records have low temporal accuracy, a preliminary explanation for the regional changes in flow rate is made through comparison between flow rate records and seismic records. The flow events are thought to be a result of a tectonic deformation event, possibly with an aseismic component. Further constraints are not feasible given the unknown structure of faulting near the triple junction. 
In a final proof-of-concept study, I find that using these hydrologic instruments, which capture unique aseismic flow rate patterns, is a valuable method for extracting information about deformation events on the decollement in the offshore subduction zone margin. Transient flow events observed in the frontal prism during a 1999-2000 deployment of CAT meters on the Costa Rica Pacific margin suggest episodic slow-slip deformation events may be occurring in the shallow subduction zone. The FEM study, which infers the character of the hypothetical deformation event driving the flow transients, verifies that a shallow slow-slip event can indeed reproduce the unique flow rate patterns observed. Along-strike (trench-parallel) variability in the rupture initiation location, combined with bidirectional propagation, is one way to explain the opposite sign of the flow rate transients observed at different along-strike distances. The larger question stimulated by this dissertation project is: what are the controls on fault mechanics in offshore subduction zone environments? It appears the shallow subduction zone plate interface does not behave solely in response to the frictional properties of the sediment lining the decollement. Shallow episodic slip at the Costa Rica Pacific margin, and further north off Nicaragua, where a slow earthquake broke through the shallow 'stable-sliding' zone and resulted in a tsunami, may instead be explained by the normally faulted incoming basement topography. Scientists should seek to map out the controls on faulting mechanics, whatever they may be, at all temporal and spatial scales in order to understand these dynamic subduction zone systems. The quest to understand these controls requires, in part, the characterization of aseismic and seismic strain occurring over time and space. The techniques presented in this dissertation advance scientists' capability for quantifying such strains. With the new instrumentation presented here, long-term real-time observatory networks on the seafloor, and modeling for characterization of deformation events, the pieces of the subduction zone earthquake cycle puzzle may start to come together.

  3. A physics investigation of deadtime losses in neutron counting at low rates with Cf252

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Evans, Louise G; Croft, Stephen

    2009-01-01

    ²⁵²Cf spontaneous fission sources are used for the characterization of neutron counters and the determination of calibration parameters, including both neutron coincidence counting (NCC) and neutron multiplicity deadtime (DT) parameters. Even at low event rates, temporally-correlated neutron counting using ²⁵²Cf suffers a deadtime effect, meaning that, in contrast to counting a random neutron source (e.g., AmLi to a close approximation), DT losses do not vanish in the low-rate limit. This is because neutrons are emitted from spontaneous fission events in time-correlated 'bursts' and are detected over a short period commensurate with their lifetime in the detector (characterized by the system die-away time, τ). Thus, even when detected neutron events from different spontaneous fissions are unlikely to overlap in time, neutron events within the detected 'burst' are subject to intrinsic DT losses. Intrinsic DT losses for dilute Pu will be lower since the multiplicity distribution is softer, but real items also experience self-multiplication, which can increase the 'size' of the bursts. Traditional NCC DT correction methods do not include the intrinsic (within-burst) losses. We have proposed new forms of the traditional NCC Singles and Doubles DT correction factors. In this work, we apply Monte Carlo neutron pulse train analysis to investigate the functional form of the deadtime correction factors for an updating deadtime. Modeling is based on a high-efficiency ³He neutron counter with a short die-away time, representing an ideal ³He-based detection system. The physics of deadtime losses at low rates is explored and presented. It is observed that the new forms are applicable and offer more accurate correction than the traditional forms.
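
    As a rough illustration of why correlated 'bursts' still lose counts at low fission rates, the sketch below applies an updating (paralyzable) deadtime to a synthetic pulse train in which each fission contributes a short burst of detected neutrons. The rate constants, multiplicity, and burst model are invented for illustration and are not taken from the paper.

    ```python
    # Toy pulse-train sketch: an updating (paralyzable) deadtime blocks any pulse
    # arriving within `dead` of the previous pulse (accepted or not), and each
    # fission emits a correlated burst spread over the detector die-away time.
    import numpy as np

    def fission_pulse_train(rate_fissions, t_total, mean_mult=3.2, die_away=50e-6, seed=1):
        rng = np.random.default_rng(seed)
        n_fissions = rng.poisson(rate_fissions * t_total)
        fission_times = rng.uniform(0.0, t_total, n_fissions)
        pulses = []
        for t0 in fission_times:
            for _ in range(rng.poisson(mean_mult)):
                pulses.append(t0 + rng.exponential(die_away))  # detected within ~die-away time
        return np.sort(np.array(pulses))

    def apply_updating_deadtime(times, dead):
        """Updating deadtime: every arriving pulse extends the dead period."""
        accepted, blocked_until = [], -np.inf
        for t in times:
            if t >= blocked_until:
                accepted.append(t)
            blocked_until = t + dead   # even rejected pulses extend the dead period
        return np.array(accepted)

    if __name__ == "__main__":
        train = fission_pulse_train(rate_fissions=100.0, t_total=10.0)  # low fission rate
        alive = apply_updating_deadtime(train, dead=2e-6)
        print(f"{len(train)} pulses emitted, {len(alive)} survive the deadtime")
    ```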

  4. An investigation into pilot and system response to critical in-flight events. Volume 1: Executive summary

    NASA Technical Reports Server (NTRS)

    Rockwell, T. H.; Griffin, W. C.

    1981-01-01

    Critical in-flight events (CIFE) that threaten the aircraft were studied. The scope of the CIFE was described and defined with emphasis on characterizing event development, detection, and assessment; pilot information requirements, sources, acquisition, and interpretation; pilot response options, decision processes, and decision implementation; and event outcome. Detailed scenarios were developed for use in simulators and in paper-and-pencil testing, both for developing relationships between pilot performance and background information and for an analysis of pilot reaction, decision, and feedback processes. Statistical relationships among pilot characteristics and observed responses to CIFEs were developed.

  5. Automatic optical detection and classification of marine animals around MHK converters using machine vision

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brunton, Steven

    Optical systems provide valuable information for evaluating interactions and associations between organisms and MHK energy converters and for capturing potentially rare encounters between marine organisms and MHK devices. The deluge of optical data from cabled monitoring packages makes expert review time-consuming and expensive. We propose algorithms and a processing framework to automatically extract events of interest from underwater video. The open-source software framework consists of background subtraction, filtering, feature extraction, and hierarchical classification algorithms. This classification pipeline was validated on real-world data collected with an experimental underwater monitoring package. An event detection rate of 100% was achieved using robust principal components analysis (RPCA), Fourier feature extraction, and a support vector machine (SVM) binary classifier. The detected events were then further classified into more complex classes – algae | invertebrate | vertebrate, one species | multiple species of fish, and interest rank. Greater than 80% accuracy was achieved using a combination of machine learning techniques.
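
    For orientation only, here is a toy sketch of the final two stages named in the abstract, Fourier feature extraction followed by a binary SVM. It substitutes a trivial mean-background subtraction for the robust PCA step and uses synthetic per-frame intensity traces, so it illustrates the pattern rather than the published pipeline.

    ```python
    # Toy sketch of "Fourier features + SVM" event detection on per-frame signals.
    # The RPCA stage of the published pipeline is replaced by simple background
    # subtraction; data are synthetic.
    import numpy as np
    from sklearn.svm import SVC
    from sklearn.model_selection import train_test_split

    def fourier_features(window, n_coeffs=8):
        """Magnitudes of the lowest FFT coefficients of a foreground-intensity window."""
        spectrum = np.abs(np.fft.rfft(window - window.mean()))
        return spectrum[:n_coeffs]

    rng = np.random.default_rng(0)
    background = 10.0
    windows, labels = [], []
    for _ in range(200):
        frames = background + rng.normal(0, 0.3, 64)             # empty scene
        if rng.random() < 0.5:                                    # inject a passing object
            frames[20:44] += 3.0 * np.sin(np.linspace(0, np.pi, 24))
            labels.append(1)
        else:
            labels.append(0)
        windows.append(fourier_features(frames - background))    # crude background subtraction

    X, y = np.array(windows), np.array(labels)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
    clf = SVC(kernel="rbf").fit(X_tr, y_tr)
    print("held-out accuracy:", clf.score(X_te, y_te))
    ```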

  6. Global Near Real-Time Satellite-based Flood Monitoring and Product Dissemination

    NASA Astrophysics Data System (ADS)

    Smith, M.; Slayback, D. A.; Policelli, F.; Brakenridge, G. R.; Tokay, M.

    2012-12-01

    Flooding is among the most destructive, frequent, and costly natural disasters faced by modern society, with several major events occurring each year. In the past few years, major floods have devastated parts of China, Thailand, Pakistan, Australia, and the Philippines, among others. The toll of these events, in financial costs, displacement of individuals, and deaths, is substantial and continues to rise as climate change generates more extreme weather events. When these events do occur, the disaster management community requires frequently updated and easily accessible information to better understand the extent of flooding and better coordinate response efforts. With funding from NASA's Applied Sciences program, we have developed, and are now operating, a near real-time global flood mapping system to help provide critical flood extent information within 24-48 hours after flooding events. The system applies a water detection algorithm to MODIS imagery received from the LANCE (Land Atmosphere Near real-time Capability for EOS) system at NASA Goddard. The LANCE system typically processes imagery in less than 3 hours after satellite overpass, and our flood mapping system can output flood products within ½ hour of acquiring the LANCE products. Using imagery from both the Terra (10:30 AM local time overpass) and Aqua (1:30 PM) platforms allows an initial assessment of flooding extent by late afternoon, every day, and more robust assessments after accumulating imagery over a longer period; the MODIS sensors are optical, so cloud cover remains an issue, which is partly overcome by using multiple looks over one or more days. Other issues include the relatively coarse scale of the MODIS imagery (250 meters), the difficulty of detecting flood waters in areas with continuous canopy cover, confusion of shadow (cloud or terrain) with water, and accurately identifying detected water as flood as opposed to normal water extents. We have made progress on some of these issues, and are working to develop higher resolution flood detection using alternate sensors, including Landsat and various radar sensors. Although these provide better spatial resolution, this comes at the cost of being less timely. As of late 2011, the system expanded to fully global daily flood monitoring, with free public access to the generated products. These include GIS-ready files of flood and normal water extent (KML, shapefile, raster), and small scale graphic maps (10 degrees square) showing regional flood extent. We are now expanding product distribution channels to include live web services (WMS, etc), allowing easier access via standalone apps. We are also working to bring our product into the Pacific Disaster Center's Disaster Alert system and mobile app for wider accessibility.
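
    The abstract does not spell out the water-detection algorithm. One common MODIS-style approach is a normalized-difference water index computed from green and near-infrared reflectance and thresholded per pixel, as in the hedged sketch below on synthetic arrays; the real system also relies on multi-day compositing and cloud/terrain-shadow screening.

    ```python
    # Hedged sketch of a generic NDWI-style water mask; not the operational NASA algorithm.
    import numpy as np

    def ndwi(green, nir):
        """Normalized Difference Water Index: (green - NIR) / (green + NIR)."""
        return (green - nir) / np.clip(green + nir, 1e-6, None)

    def water_mask(green, nir, threshold=0.0):
        """Pixels with NDWI above the threshold are flagged as water."""
        return ndwi(green, nir) > threshold

    if __name__ == "__main__":
        green = np.array([[0.08, 0.05], [0.20, 0.04]])   # synthetic reflectances
        nir   = np.array([[0.02, 0.30], [0.05, 0.35]])
        print(water_mask(green, nir))
    ```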

  7. Bruxism force detection by a piezoelectric film-based recording device in sleeping humans.

    PubMed

    Baba, Kazuyoshi; Clark, Glenn T; Watanabe, Tatsutomi; Ohyama, Takashi

    2003-01-01

    To test the reliability and utility of a force-based bruxism detection system (Intra-Splint Force Detector [ISFD]) for multiple night recordings of forceful tooth-to-splint contacts in sleeping human subjects in their home environment. Bruxism-type forces, i.e., forceful tooth-to-splint contacts, during the night were recorded with this system in 12 subjects (6 bruxers and 6 controls) for 5 nights in their home environment; a laboratory-based nocturnal polysomnogram (NPSG) study was also performed on 1 of these subjects. All 12 subjects were able to use the device without substantial difficulty on a nightly basis. The bruxer group exhibited bruxism events of significantly longer duration than the control group (27 seconds/hour versus 7.4 seconds/hour, P < .01). A NPSG study performed on 1 subject revealed that, when the masseter muscle electromyogram (EMG) was used as a "gold standard," the ISFD had a sensitivity of 0.89. The correlation coefficient between the duration of events detected by the ISFD and the EMG was also 0.89. These results suggest that the ISFD is a system that can be used easily by the subjects and that has a reasonable reliability for bruxism detection as reflected in forceful tooth-to-splint contacts during sleep.

  8. Detection, Location, and Characterization of Hydroacoustic Signals Using Seafloor Cable Networks Offshore Japan

    NASA Astrophysics Data System (ADS)

    Suyehiro, K.; Sugioka, H.; Watanabe, T.

    2008-12-01

    The hydroacoustic monitoring by the International Monitoring System for the CTBT (Comprehensive Nuclear-Test-Ban Treaty) verification system utilizes hydrophone stations (6) and seismic stations (5, called T-phase stations) for worldwide detection. Some conspicuous signals of natural origin include those from earthquakes, volcanic eruptions, or whale calls. Among artificial sources are non-nuclear explosions and airgun shots. It is important for the IMS system to detect and locate hydroacoustic events with sufficient accuracy and to correctly characterize the signals and identify the source. As there are a number of seafloor cable networks operated offshore the Japanese islands, mostly facing the Pacific Ocean, for monitoring regional seismicity, the data from these stations (pressure and seismic sensors) may be utilized to increase the capability of the IMS. We use these data to compare selected event parameters with those obtained by the IMS. In particular, there have been several unconventional acoustic signals in the western Pacific, which were also captured by IMS hydrophones across the Pacific in the time period from 2007 to the present. These anomalous examples, as well as dynamite shots used for seismic crustal structure studies and other natural sources, will be presented in order to help improve the IMS verification capabilities for detection, location and characterization of anomalous signals.

  9. Syndromic surveillance of abortions in beef cattle based on the prospective analysis of spatio-temporal variations of calvings.

    PubMed

    Bronner, A; Morignat, E; Fournié, G; Vergne, T; Vinard, J-L; Gay, E; Calavas, D

    2015-12-21

    Our objective was to study the ability of a syndromic surveillance system to identify spatio-temporal clusters of drops in the number of calvings among beef cows during the Bluetongue epizootic of 2007 and 2008, based on calving seasons. France was partitioned into 300 iso-populated units, i.e. units with approximately the same number of beef cattle. Only 1% of clusters were unlikely to be related to Bluetongue. Clusters were detected during the calving season of primary infection by Bluetongue in 28% (n = 23) of the units first infected in 2007, and in 87% (n = 184) of the units first infected in 2008. In units in which a first cluster was detected over their calving season of primary infection, Bluetongue was detected more rapidly after the start of the calving season and its prevalence was higher than in other units. We believe that this type of syndromic surveillance system could improve the surveillance of abortive events in French cattle. Moreover, our approach should be used to develop syndromic surveillance systems for other diseases and purposes, and in other settings, to avoid "false" alarms due to isolated events and to homogenize the ability to detect abnormal variations of the indicator amongst iso-populated units.

  10. Syndromic surveillance of abortions in beef cattle based on the prospective analysis of spatio-temporal variations of calvings

    PubMed Central

    Bronner, A.; Morignat, E.; Fournié, G.; Vergne, T.; Vinard, J-L; Gay, E.; Calavas, D.

    2015-01-01

    Our objective was to study the ability of a syndromic surveillance system to identify spatio-temporal clusters of drops in the number of calvings among beef cows during the Bluetongue epizootic of 2007 and 2008, based on calving seasons. France was partitioned into 300 iso-populated units, i.e. units with approximately the same number of beef cattle. Only 1% of clusters were unlikely to be related to Bluetongue. Clusters were detected during the calving season of primary infection by Bluetongue in 28% (n = 23) of the units first infected in 2007, and in 87% (n = 184) of the units first infected in 2008. In units in which a first cluster was detected over their calving season of primary infection, Bluetongue was detected more rapidly after the start of the calving season and its prevalence was higher than in other units. We believe that this type of syndromic surveillance system could improve the surveillance of abortive events in French cattle. Moreover, our approach should be used to develop syndromic surveillance systems for other diseases and purposes, and in other settings, to avoid “false” alarms due to isolated events and to homogenize the ability to detect abnormal variations of the indicator amongst iso-populated units. PMID:26687099

  11. A Framework for Collaborative Review of Candidate Events in High Data Rate Streams: the V-Fastr Experiment as a Case Study

    NASA Astrophysics Data System (ADS)

    Hart, Andrew F.; Cinquini, Luca; Khudikyan, Shakeh E.; Thompson, David R.; Mattmann, Chris A.; Wagstaff, Kiri; Lazio, Joseph; Jones, Dayton

    2015-01-01

    “Fast radio transients” are defined here as bright millisecond pulses of radio-frequency energy. These short-duration pulses can be produced by known objects such as pulsars or potentially by more exotic objects such as evaporating black holes. The identification and verification of such an event would be of great scientific value. This is one major goal of the Very Long Baseline Array (VLBA) Fast Transient Experiment (V-FASTR), a software-based detection system installed at the VLBA. V-FASTR uses a “commensal” (piggy-back) approach, analyzing all array data continually during routine VLBA observations and identifying candidate fast transient events. Raw data can be stored from a buffer memory, which enables a comprehensive off-line analysis. This is invaluable for validating the astrophysical origin of any detection. Candidates discovered by the automatic system must be reviewed each day by analysts to identify any promising signals that warrant a more in-depth investigation. To support the timely analysis of fast transient detection candidates by V-FASTR scientists, we have developed a metadata-driven, collaborative candidate review framework. The framework consists of a software pipeline for metadata processing composed of both open source software components and project-specific code written expressly to extract and catalog metadata from the incoming V-FASTR data products, and a web-based data portal that facilitates browsing and inspection of the available metadata for candidate events extracted from the VLBA radio data.

  12. Case study of a low-reflectivity pulsating microburst: Numerical simulation of the Denver, 8 July 1989, storm

    NASA Technical Reports Server (NTRS)

    Proctor, Fred H.

    1994-01-01

    On 8 July 1989, a very strong microburst was detected by the Low-Level Windshear Alert system (LLWAS), within the approach corridor just north of Denver Stapleton Airport. The microburst was encountered by a Boeing 737-200 in a 'go-around' configuration which was reported to have lost considerable air speed and altitude during penetration. Data from LLWAS revealed a pulsating microburst with an estimated peak velocity change of 48 m/s. Wilson et al. reported that the microburst was accompanied by no apparent visible clues such as rain or virga, although blowing dust was present. Weather service hourly reports indicated virga in all quadrants near the time of the event. A National Center for Atmospheric Research (NCAR) research Doppler radar was operating; but according to Wilson et al., meaningful velocity could not be measured within the microburst due to low radar-reflectivity factor and poor siting for windshear detection at Stapleton. This paper presents results from the three-dimensional numerical simulation of this event, using the Terminal Area Simulation System (TASS) model. The TASS model is a three-dimensional nonhydrostatic cloud model that includes parameterizations for both liquid and ice phase microphysics, and has been used in investigations of both wet and dry microburst case studies. The focus of this paper is the pulsating characteristic and the very-low radar reflectivity of this event. Most of the surface outflow contained no precipitation. Such an event may be difficult to detect by radar.

  13. Fuel Line Based Acoustic Flame-Out Detection System

    NASA Technical Reports Server (NTRS)

    Puster, Richard L. (Inventor); Franke, John M. (Inventor)

    1997-01-01

    An acoustic flame-out detection system that renders a large high-pressure combustor safe in the event of a flame-out and possible explosive reignition. A dynamic pressure transducer placed in the fuel line detects the fuel pressure oscillations that are caused by the combustion process and transmitted to the fuel flow by the combustion vortices. An electronic circuit converts this signal into a series of pulses. A missing-pulse detector counts the pulses and continuously resets itself. If three consecutive pulses are missing, the circuit closes the fuel valve. With fuel denied, the combustor is shut down or restarted under controlled conditions.
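
    The patent describes an analog missing-pulse detector; purely as a software analogue of that logic, the sketch below measures gaps in a pulse train against an expected interval and trips a shutdown once three consecutive expected pulses are missing. The interval, timestamps, and `close_fuel_valve` callback are illustrative assumptions.

    ```python
    # Software analogue of a missing-pulse detector: if three consecutive expected
    # pulses fail to arrive, command the fuel valve closed.
    def monitor_pulses(pulse_times, expected_interval, close_fuel_valve, max_missing=3):
        last = pulse_times[0]
        for t in pulse_times[1:]:
            gap = t - last
            missing = int(gap // expected_interval) - 1 if gap > expected_interval else 0
            if missing >= max_missing:
                close_fuel_valve()
                return False          # combustor shut down
            last = t
        return True                   # flame signal remained healthy

    if __name__ == "__main__":
        healthy = [i * 0.01 for i in range(100)]                              # 100 Hz pulse train
        flameout = healthy[:50] + [0.50 + 0.01 * i for i in range(10, 20)]    # ~0.1 s gap
        for name, train in (("healthy", healthy), ("flame-out", flameout)):
            ok = monitor_pulses(train, expected_interval=0.012,
                                close_fuel_valve=lambda: print("FUEL VALVE CLOSED"))
            print(name, "->", "ok" if ok else "shut down")
    ```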

  14. Design and Performance of the Astro-E/XRS Signal Processing System

    NASA Technical Reports Server (NTRS)

    Boyce, Kevin R.; Audley, M. D.; Baker, R. G.; Dumonthier, J. J.; Fujimoto, R.; Gendreau, K. C.; Ishisaki, Y.; Kelley, R. L.; Stahle, C. K.; Szymkowiak, A. E.

    1999-01-01

    We describe the signal processing system of the Astro-E XRS instrument. The Calorimeter Analog Processor (CAP) provides bias and power for the detectors and amplifies the detector signals by a factor of 20,000. The Calorimeter Digital Processor (CDP) performs the digital processing of the calorimeter signals, detecting X-ray pulses and analyzing them by optimal filtering. We describe the operation of pulse detection, pulse height analysis, and risetime determination. We also discuss performance, including the three event grades (hi-res, mid-res, and low-res), anticoincidence detection, counting rate dependence, and noise rejection.
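
    To make the "optimal filtering" step concrete, here is a schematic sketch of the usual frequency-domain optimal filter for calorimeter pulses (pulse template weighted by the inverse noise power spectrum, applied to each triggered record). The template shape, noise level, and sampling are invented for illustration and do not reproduce the CDP implementation.

    ```python
    # Schematic optimal-filter pulse-height estimate: the amplitude that best fits a
    # known pulse template in the presence of stationary noise.
    import numpy as np

    def optimal_filter_amplitude(record, template, noise_psd):
        """Frequency-domain optimal filter: A = sum(conj(S)*D/J) / sum(|S|^2/J)."""
        D = np.fft.rfft(record)
        S = np.fft.rfft(template)
        num = np.sum(np.conj(S) * D / noise_psd).real
        den = np.sum((np.abs(S) ** 2) / noise_psd).real
        return num / den

    if __name__ == "__main__":
        n = 512
        t = np.arange(n)
        template = np.exp(-t / 60.0) - np.exp(-t / 6.0)      # toy rise/decay pulse shape
        rng = np.random.default_rng(0)
        true_amplitude = 3.7
        record = true_amplitude * template + rng.normal(0, 0.05, n)
        noise_psd = np.full(n // 2 + 1, 0.05 ** 2 * n)       # flat (white) noise spectrum
        print("estimated amplitude:", round(optimal_filter_amplitude(record, template, noise_psd), 3))
    ```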

  15. An Epidemiological Network Model for Disease Outbreak Detection

    PubMed Central

    Reis, Ben Y; Kohane, Isaac S; Mandl, Kenneth D

    2007-01-01

    Background Advanced disease-surveillance systems have been deployed worldwide to provide early detection of infectious disease outbreaks and bioterrorist attacks. New methods that improve the overall detection capabilities of these systems can have a broad practical impact. Furthermore, most current generation surveillance systems are vulnerable to dramatic and unpredictable shifts in the health-care data that they monitor. These shifts can occur during major public events, such as the Olympics, as a result of population surges and public closures. Shifts can also occur during epidemics and pandemics as a result of quarantines, the worried-well flooding emergency departments or, conversely, the public staying away from hospitals for fear of nosocomial infection. Most surveillance systems are not robust to such shifts in health-care utilization, either because they do not adjust baselines and alert-thresholds to new utilization levels, or because the utilization shifts themselves may trigger an alarm. As a result, public-health crises and major public events threaten to undermine health-surveillance systems at the very times they are needed most. Methods and Findings To address this challenge, we introduce a class of epidemiological network models that monitor the relationships among different health-care data streams instead of monitoring the data streams themselves. By extracting the extra information present in the relationships between the data streams, these models have the potential to improve the detection capabilities of a system. Furthermore, the models' relational nature has the potential to increase a system's robustness to unpredictable baseline shifts. We implemented these models and evaluated their effectiveness using historical emergency department data from five hospitals in a single metropolitan area, recorded over a period of 4.5 y by the Automated Epidemiological Geotemporal Integrated Surveillance real-time public health–surveillance system, developed by the Children's Hospital Informatics Program at the Harvard-MIT Division of Health Sciences and Technology on behalf of the Massachusetts Department of Public Health. We performed experiments with semi-synthetic outbreaks of different magnitudes and simulated baseline shifts of different types and magnitudes. The results show that the network models provide better detection of localized outbreaks, and greater robustness to unpredictable shifts than a reference time-series modeling approach. Conclusions The integrated network models of epidemiological data streams and their interrelationships have the potential to improve current surveillance efforts, providing better localized outbreak detection under normal circumstances, as well as more robust performance in the face of shifts in health-care utilization during epidemics and major public events. PMID:17593895
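
    As a toy illustration of monitoring relationships between data streams rather than the streams themselves, the sketch below tracks the ratio of two synthetic emergency-department visit series and flags days on which that ratio departs from its recent history. The alert rule, window, and synthetic data are assumptions made for illustration, not the network models evaluated in the paper.

    ```python
    # Toy "relational" monitor: track the ratio between two ED visit streams.
    # A city-wide utilization shift scales both streams and leaves the ratio
    # unchanged, while a localized outbreak in one stream moves the ratio.
    import numpy as np

    def ratio_alerts(stream_a, stream_b, window=14, z_threshold=3.0):
        ratio = stream_a / np.maximum(stream_b, 1.0)
        alerts = []
        for day in range(window, len(ratio)):
            hist = ratio[day - window:day]
            z = (ratio[day] - hist.mean()) / (hist.std() + 1e-9)
            if z > z_threshold:
                alerts.append(day)
        return alerts

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        a = rng.poisson(100, 60).astype(float)   # daily visits, hospital A
        b = rng.poisson(120, 60).astype(float)   # daily visits, hospital B
        a[30:35] *= 1.5                          # city-wide surge hits both streams...
        b[30:35] *= 1.5
        a[45:50] += 60                           # ...but a localized outbreak hits only A
        print("alert days:", ratio_alerts(a, b))
    ```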

  16. Radionuclide detection devices and associated methods

    DOEpatents

    Mann, Nicholas R [Rigby, ID]; Lister, Tedd E [Idaho Falls, ID]; Tranter, Troy J [Idaho Falls, ID]

    2011-03-08

    Radionuclide detection devices comprise a fluid cell comprising a flow channel for a fluid stream. A radionuclide collector is positioned within the flow channel and configured to concentrate one or more radionuclides from the fluid stream onto at least a portion of the radionuclide collector. A scintillator for generating scintillation pulses responsive to an occurrence of a decay event is positioned proximate at least a portion of the radionuclide collector and adjacent to a detection system for detecting the scintillation pulses. Methods of selectively detecting a radionuclide are also provided.

  17. Advanced Geospatial Hydrodynamic Signals Analysis for Tsunami Event Detection and Warning

    NASA Astrophysics Data System (ADS)

    Arbab-Zavar, Banafshe; Sabeur, Zoheir

    2013-04-01

    Currently, an early tsunami warning can be issued upon the detection of a seismic event which may occur at a given location offshore. This also provides an opportunity to predict the tsunami wave propagation and run-ups at potentially affected coastal zones by selecting the best matching seismic event from a database of pre-computed tsunami scenarios. Nevertheless, it remains difficult and challenging to obtain the rupture parameters of tsunamigenic earthquakes in real time and simulate the tsunami propagation with high accuracy. In this study, we propose a supporting approach, in which the hydrodynamic signal is systematically analysed for traces of a tsunamigenic signal. The combination of relatively low amplitudes of a tsunami signal in deep water and the frequent occurrence of background signals and noise contributes to a generally low signal-to-noise ratio for the tsunami signal, which in turn makes the detection of this signal difficult. In order to improve the accuracy and confidence of detection, a re-identification framework is used, in which a tsunamigenic signal is detected by scanning a network of hydrodynamic stations with water-level sensing. The aim is to attempt the re-identification of the same signatures as the tsunami wave spatially propagates through the hydrodynamic station sensing network. The re-identification of the tsunamigenic signal is technically possible since the tsunami signal in the open ocean conserves its birthmarks relating it to the source event. As well as supporting the initial detection and improving the confidence of detection, a re-identified signal is indicative of the spatial range of the signal, and thereby it can be used to facilitate the identification of certain background signals such as wind waves, which do not have as large a spatial reach as tsunamis. In this paper, the proposed methodology for the automatic detection of tsunamigenic signals is demonstrated using open data from NOAA for a recorded tsunami event in the Pacific Ocean. The new approach will be tested in the future on other oceanic regions, including the Mediterranean Sea and North East Atlantic Ocean zones. Both authors acknowledge that this research is conducted under the TRIDEC IP FP7 project[1], which involves the development of a system of systems for collaborative, complex and critical decision-support in evolving crises. [1] TRIDEC IP ICT-2009.4.3 Intelligent Information Management Project Reference: 258723. http://www.tridec-online.eu/home

  18. 40 CFR 63.7917 - What are my inspection and monitoring requirements for transfer systems?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... annually inspect the unburied portion of pipeline and all joints for leaks and other defects. In the event that a defect is detected, you must repair the leak or defect according to the requirements of... days after detection and repair shall be completed as soon as possible but no later than 45 calendar...

  19. Detection of VHF lightning from GPS orbit

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Suszcynsky, D. M.

    2003-01-01

    Satellite-based VHF lightning detection is characterized at GPS orbit by using a VHF receiver system recently launched on the GPS SVN 54 satellite. Collected lightning triggers consist of Narrow Bipolar Events (80%) and strong negative return strokes (20%). The results are used to evaluate the performance of a future GPS-satellite-based VHF global lightning monitor.

  20. Feasibility of Twitter Based Earthquake Characterization From Analysis of 32 Million Tweets: There's Got to be a Pony in Here Somewhere!

    NASA Astrophysics Data System (ADS)

    Earle, P. S.; Guy, M. R.; Smoczyk, G. M.; Horvath, S. R.; Jessica, T. S.; Bausch, D. B.

    2014-12-01

    The U.S. Geological Survey (USGS) operates a real-time system that detects earthquakes using only data from Twitter—a service for sending and reading public text-based messages of up to 140 characters. The detector algorithm scans for significant increases in tweets containing the word "earthquake" in several languages and sends internal alerts with the detection time, representative tweet texts, and the location of the population center where most of the tweets originated. It has been running in real-time for over two years and finds, on average, two or three felt events per day, with a false detection rate of 9%. The main benefit of the tweet-based detections is speed, with most detections occurring between 20 and 120 seconds after the earthquake origin time. This is considerably faster than seismic detections in poorly instrumented regions of the world. The detections have reasonable coverage of populated areas globally. The number of Twitter-based detections is small compared to the number of earthquakes detected seismically, and only a rough location and qualitative assessment of shaking can be determined based on Tweet data alone. However, the Twitter-based detections are generally caused by widely felt events in populated urban areas that are of more immediate interest than those with no human impact. We will present a technical overview of the system and investigate the potential for rapid characterization of earthquake damage and effects using the 32 million "earthquake" tweets that the system has so far amassed. Initial results show potential for a correlation between characteristic responses and shaking level. For example, tweets containing the word "terremoto" were common following the MMI VII shaking produced by the April 1, 2014 M8.2 Iquique, Chile earthquake, whereas a widely-tweeted deep-focus M5.2 north of Santiago, Chile on April 4, 2014 produced MMI VI shaking and almost exclusively "temblor" tweets. We are also investigating the use of other social media such as Instagram to obtain rapid images of earthquake-related damage. An Instagram search following the damaging M6.9 earthquake near the Mexico-Guatemala border on July 7, 2014 revealed half a dozen unconfirmed images of damage, the first posted 15 minutes after the event.
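
    Purely as an illustration of the kind of spike detection such a system could use (the USGS detector's actual parameters are not given here), the sketch below flags minutes in which the count of "earthquake" tweets greatly exceeds its recent baseline; the thresholds and synthetic counts are invented for the example.

    ```python
    # Illustrative spike detector on per-minute "earthquake" tweet counts:
    # flag a detection when the current count exceeds the recent baseline mean
    # by several standard deviations.
    import numpy as np

    def detect_spikes(counts_per_minute, baseline_minutes=60, n_sigma=5.0, min_count=20):
        detections = []
        for m in range(baseline_minutes, len(counts_per_minute)):
            base = counts_per_minute[m - baseline_minutes:m]
            threshold = base.mean() + n_sigma * (base.std() + 1.0)
            if counts_per_minute[m] >= max(threshold, min_count):
                detections.append(m)
        return detections

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        counts = rng.poisson(3, 180).astype(float)   # quiet background chatter
        counts[120:123] = [80, 150, 60]              # burst of tweets after a felt event
        print("detections at minutes:", detect_spikes(counts))
    ```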

  1. 14 CFR 27.859 - Heating systems.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    .... Except for heaters which incorporate designs to prevent hazards in the event of fuel leakage in the... locations ensuring prompt detection of fire in the heater region. (2) Fire extinguisher systems that provide... shrouds so that no leakage from those components can enter the ventilating airstream. (k) Drains. There...

  2. Development of a remote sensing network for time-sensitive detection of fine scale damage to transportation infrastructure : [final report].

    DOT National Transportation Integrated Search

    2015-09-23

    This research project aimed to develop a remote sensing system capable of rapidly identifying fine-scale damage to critical transportation infrastructure following hazard events. Such a system must be pre-planned for rapid deployment, automate proces...

  3. Electronic circuit detects left ventricular ejection events in cardiovascular system

    NASA Technical Reports Server (NTRS)

    Gebben, V. D.; Webb, J. A., Jr.

    1972-01-01

    Electronic circuit processes arterial blood pressure waveform to produce discrete signals that coincide with beginning and end of left ventricular ejection. Output signals provide timing signals for computers that monitor cardiovascular systems. Circuit operates reliably for heart rates between 50 and 200 beats per minute.

  4. Building a robust vehicle detection and classification module

    NASA Astrophysics Data System (ADS)

    Grigoryev, Anton; Khanipov, Timur; Koptelov, Ivan; Bocharov, Dmitry; Postnikov, Vassily; Nikolaev, Dmitry

    2015-12-01

    The growing adoption of intelligent transportation systems (ITS) and autonomous driving requires robust real-time solutions for various event and object detection problems. Most real-world systems still cannot rely on computer vision algorithms alone and employ a wide range of costly additional hardware such as LIDARs. In this paper we explore engineering challenges encountered in building a highly robust visual vehicle detection and classification module that works under a broad range of environmental and road conditions. The resulting technology is competitive with traditional non-visual means of traffic monitoring. The main focus of the paper is on software and hardware architecture, algorithm selection, and domain-specific heuristics that help the computer vision system avoid implausible answers.

  5. A Method of Synchrophasor Technology for Detecting and Analyzing Cyber-Attacks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McCann, Roy; Al-Sarray, Muthanna

    Studying cybersecurity events and analyzing their impacts encourages planners and operators to develop innovative approaches for preventing attacks in order to avoid outages and other disruptions. This work considers two parts of a security study: detecting an integrity attack and examining its effects on power system generators. The detection was conducted by employing synchrophasor technology to authenticate AGC commands based on observed system operating characteristics. The examination of an attack is completed via a detailed simulation of a modified IEEE 68-bus benchmark model to show the associated power system dynamic response. The results of the simulation are discussed for assessing the impacts of cyber threats.

  6. Noncontact acousto-ultrasonics using laser generation and laser interferometric detection

    NASA Technical Reports Server (NTRS)

    Green, Robert E., Jr.; Huber, Robert D.

    1991-01-01

    A compact, portable fiber-optic heterodyne interferometer designed to detect out-of-plane motion on surfaces is described. The interferometer provides a linear output for displacements over a broad frequency range and can be used for ultrasonic, acoustic emission, and acousto-ultrasonic (AU) testing. The interferometer, in conjunction with a compact pulsed Nd:YAG laser, represents a noncontact testing system. This system was tested to determine its usefulness for the AU technique. The results obtained show that replacement of conventional piezoelectric transducers (PZT) with a laser generation/detection system makes it possible to carry out noncontact AU measurements. The waveforms recorded were 5 MHz PZT-generated ultrasound propagating through an aluminum block, detection of an acoustic emission event, and laser AU waveforms from graphite-epoxy laminates and a filament-wound composite.

  7. Managed traffic evacuation using distributed sensor processing

    NASA Astrophysics Data System (ADS)

    Ramuhalli, Pradeep; Biswas, Subir

    2005-05-01

    This paper presents an integrated sensor network and distributed event processing architecture for managed in-building traffic evacuation during natural and human-caused disasters, including earthquakes, fire and biological/chemical terrorist attacks. The proposed wireless sensor network protocols and distributed event processing mechanisms offer a new distributed paradigm for improving reliability in building evacuation and disaster management. The networking component of the system is constructed using distributed wireless sensors for measuring environmental parameters such as temperature, humidity, and detecting unusual events such as smoke, structural failures, vibration, biological/chemical or nuclear agents. Distributed event processing algorithms will be executed by these sensor nodes to detect the propagation pattern of the disaster and to measure the concentration and activity of human traffic in different parts of the building. Based on this information, dynamic evacuation decisions are taken for maximizing the evacuation speed and minimizing unwanted incidents such as human exposure to harmful agents and stampedes near exits. A set of audio-visual indicators and actuators are used for aiding the automated evacuation process. In this paper we develop integrated protocols, algorithms and their simulation models for the proposed sensor networking and the distributed event processing framework. Also, efficient harnessing of the individually low, but collectively massive, processing abilities of the sensor nodes is a powerful concept behind our proposed distributed event processing algorithms. Results obtained through simulation in this paper are used for a detailed characterization of the proposed evacuation management system and its associated algorithmic components.

  8. Accuracy of a radiofrequency identification (RFID) badge system to monitor hand hygiene behavior during routine clinical activities.

    PubMed

    Pineles, Lisa L; Morgan, Daniel J; Limper, Heather M; Weber, Stephen G; Thom, Kerri A; Perencevich, Eli N; Harris, Anthony D; Landon, Emily

    2014-02-01

    Hand hygiene (HH) is a critical part of infection prevention in health care settings. Hospitals around the world continuously struggle to improve health care personnel (HCP) HH compliance. The current gold standard for monitoring compliance is direct observation; however, this method is time-consuming and costly. One emerging area of interest involves automated systems for monitoring HH behavior, such as radiofrequency identification (RFID) tracking systems. To assess the accuracy of a commercially available RFID system in detecting HCP HH behavior, we compared direct observation with data collected by the RFID system in a simulated validation setting and in a real-life clinical setting across 2 hospitals. A total of 1,554 HH events was observed. Accuracy for identifying HH events was high in the simulated validation setting (88.5%) but relatively low in the real-life clinical setting (52.4%). This difference was significant (P < .01). Accuracy for detecting HCP movement into and out of patient rooms was also high in the simulated setting but not in the real-life clinical setting (100% on entry and exit in the simulated setting vs 54.3% on entry and 49.5% on exit in the real-life clinical setting, P < .01). In this validation study of an RFID system, almost half of the HH events were missed. More research is necessary to further develop these systems and improve accuracy prior to widespread adoption. Copyright © 2014 Association for Professionals in Infection Control and Epidemiology, Inc. Published by Mosby, Inc. All rights reserved.

  9. Early events in copper-ion catalyzed oxidation of α-synuclein.

    PubMed

    Tiwari, Manish K; Leinisch, Fabian; Sahin, Cagla; Møller, Ian Max; Otzen, Daniel E; Davies, Michael J; Bjerrum, Morten J

    2018-04-22

    Previous studies on metal-ion catalyzed oxidation of α-synuclein have mostly used conditions that result in extensive modification, precluding an understanding of the early events in this process. In this study, we have examined time-dependent oxidative events related to α-synuclein modification using six different molar ratios of Cu²⁺/H₂O₂/protein and Cu²⁺/H₂O₂/ascorbate/protein, resulting in mild to moderate extents of oxidation. For a Cu²⁺/H₂O₂/protein molar ratio of 2.3:7.8:1, only low levels of carbonyls were detected (0.078 carbonyls per protein), whereas a molar ratio of 4.7:15.6:1 gave 0.22 carbonyls per α-synuclein within 15 min. With the latter conditions, 3 out of 4 methionines (Met) were rapidly converted to methionine sulfoxide, and 2 out of 4 tyrosines (Tyr) were converted to products including inter- and intra-molecular dityrosine cross-links and protein oligomers, as determined by SDS-PAGE and Western blot analysis. Limited histidine (His) modification was observed. The rapid formation of dityrosine cross-links was confirmed by fluorescence and mass spectrometry. These data indicate that Met and Tyr oxidation are early events in Cu²⁺/H₂O₂-mediated damage, with carbonyl formation being a minor process. With the Cu²⁺/H₂O₂/ascorbate system, rapid protein carbonyl formation was detected within the first 5 min, but after this time point little additional carbonyl formation was detected. With this system, lower levels of Met and Tyr oxidation were detected (2 Met and 1 Tyr modified with a Cu²⁺/H₂O₂/ascorbate/protein ratio of 2.3:7.8:7.8:1), but greater His oxidation. Only low levels of intra-molecular dityrosine cross-links and no inter-molecular dityrosine oligomers were detected under these conditions, suggesting that ascorbate limits Cu²⁺/H₂O₂-induced α-synuclein modification. Copyright © 2018. Published by Elsevier Inc.

  10. Automatic Detection of Seizures with Applications

    NASA Technical Reports Server (NTRS)

    Olsen, Dale E.; Harris, John C.; Cutchis, Protagoras N.; Cristion, John A.; Lesser, Ronald P.; Webber, W. Robert S.

    1993-01-01

    There are an estimated two million people with epilepsy in the United States. Many of these people do not respond to anti-epileptic drug therapy. Two devices can be developed to assist in the treatment of epilepsy. The first is a microcomputer-based system designed to process massive amounts of electroencephalogram (EEG) data collected during long-term monitoring of patients for the purpose of diagnosing seizures, assessing the effectiveness of medical therapy, or selecting patients for epilepsy surgery. Such a device would select and display important EEG events. Currently many such events are missed. A second device could be implanted and would detect seizures and initiate therapy. Both of these devices require a reliable seizure detection algorithm. A new algorithm is described. It is believed to represent an improvement over existing seizure detection algorithms because better signal features were selected and better standardization methods were used.

  11. Automatic event recognition and anomaly detection with attribute grammar by learning scene semantics

    NASA Astrophysics Data System (ADS)

    Qi, Lin; Yao, Zhenyu; Li, Li; Dong, Junyu

    2007-11-01

    In this paper we present a novel framework for automatic event recognition and abnormal behavior detection with attribute grammars by learning scene semantics. This framework combines learning scene semantics through trajectory analysis with constructing an attribute grammar-based event representation. The scene and event information is learned automatically. Abnormal behaviors that disobey scene semantics or event grammar rules are detected. By this method, an approach to understanding video scenes is achieved. Furthermore, with this prior knowledge, the accuracy of abnormal event detection is increased.

  12. Automatic Line Calling Badminton System

    NASA Astrophysics Data System (ADS)

    Affandi Saidi, Syahrul; Adawiyah Zulkiplee, Nurabeahtul; Muhammad, Nazmizan; Sarip, Mohd Sharizan Md

    2018-05-01

    A system and relevant method are described to detect whether a projectile impact occurs on one side of a boundary line or the other. The system employs the use of force sensing resistor-based sensors that may be designed in segments or assemblies and linked to a mechanism with a display. An impact classification system is provided for distinguishing between various events, including a footstep, ball impact and tennis racquet contact. A sensor monitoring system is provided for determining the condition of sensors and providing an error indication if sensor problems exist. A service detection system is provided when the system is used for tennis that permits activation of selected groups of sensors and deactivation of others.

  13. Subsurface event detection and classification using Wireless Signal Networks.

    PubMed

    Yoon, Suk-Un; Ghazanfari, Ehsan; Cheng, Liang; Pamukcu, Sibel; Suleiman, Muhannad T

    2012-11-05

    Subsurface environment sensing and monitoring applications, such as detection of water intrusion or a landslide, which could significantly change the physical properties of the host soil, can be accomplished using a novel concept, Wireless Signal Networks (WSiNs). Wireless signal networks take advantage of the variations of radio signal strength at the distributed underground sensor nodes of WSiNs to monitor and characterize the sensed area. To characterize subsurface environments for event detection and classification, this paper provides a detailed list of soil properties and experimental data on how radio propagation is affected by them in subsurface communication environments. Experiments demonstrated that calibrated wireless signal strength variations can be used as indicators to sense changes in the subsurface environment. The concept of WSiNs for subsurface event detection is evaluated with applications such as detection of water intrusion, relative density change, and relative motion using actual underground sensor nodes. To classify geo-events using the measured signal strength as the main indicator, we propose a window-based minimum distance classifier based on Bayesian decision theory. The window-based classifier for wireless signal networks has two steps: event detection and event classification. Upon event detection, the window-based classifier classifies geo-events over the event-occurring regions, which are called classification windows. The proposed window-based classification method is evaluated with a water leakage experiment in which the data were measured in laboratory experiments. In these experiments, the proposed detection and classification method based on wireless signal networks can detect and classify subsurface events.
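
    For readers unfamiliar with the classifier type, the following is a generic minimum-distance (nearest-centroid) sketch applied to a window of signal-strength samples: each detected-event window is assigned to the geo-event class whose mean feature vector is closest. The class centroids, window features, and synthetic RSSI values are placeholders, not the calibrated values from the WSiN experiments.

    ```python
    # Generic sketch of a window-based minimum-distance classifier on RSSI windows.
    import numpy as np

    CENTROIDS = {                       # hypothetical mean (mean_rssi, std_rssi) per class
        "dry baseline":    np.array([-60.0, 1.0]),
        "water intrusion": np.array([-72.0, 3.0]),
        "soil movement":   np.array([-66.0, 6.0]),
    }

    def window_features(rssi_window):
        return np.array([np.mean(rssi_window), np.std(rssi_window)])

    def classify_window(rssi_window):
        """Assign the window to the class with the nearest centroid (Euclidean distance)."""
        feats = window_features(rssi_window)
        return min(CENTROIDS, key=lambda cls: np.linalg.norm(feats - CENTROIDS[cls]))

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        leak_window = -72.0 + rng.normal(0, 3.0, 50)   # synthetic "water leak" RSSI samples
        print(classify_window(leak_window))            # expected: water intrusion
    ```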

  14. Subsurface Event Detection and Classification Using Wireless Signal Networks

    PubMed Central

    Yoon, Suk-Un; Ghazanfari, Ehsan; Cheng, Liang; Pamukcu, Sibel; Suleiman, Muhannad T.

    2012-01-01

    Subsurface environment sensing and monitoring applications, such as detection of water intrusion or a landslide, which could significantly change the physical properties of the host soil, can be accomplished using a novel concept, Wireless Signal Networks (WSiNs). Wireless signal networks take advantage of the variations of radio signal strength at the distributed underground sensor nodes of WSiNs to monitor and characterize the sensed area. To characterize subsurface environments for event detection and classification, this paper provides a detailed list of soil properties and experimental data on how radio propagation is affected by them in subsurface communication environments. Experiments demonstrated that calibrated wireless signal strength variations can be used as indicators to sense changes in the subsurface environment. The concept of WSiNs for subsurface event detection is evaluated with applications such as detection of water intrusion, relative density change, and relative motion using actual underground sensor nodes. To classify geo-events using the measured signal strength as the main indicator, we propose a window-based minimum distance classifier based on Bayesian decision theory. The window-based classifier for wireless signal networks has two steps: event detection and event classification. Upon event detection, the window-based classifier classifies geo-events over the event-occurring regions, which are called classification windows. The proposed window-based classification method is evaluated with a water leakage experiment in which the data were measured in laboratory experiments. In these experiments, the proposed detection and classification method based on wireless signal networks can detect and classify subsurface events. PMID:23202191

  15. Analysis of arrhythmic events is useful to detect lead failure earlier in patients followed by remote monitoring.

    PubMed

    Nishii, Nobuhiro; Miyoshi, Akihito; Kubo, Motoki; Miyamoto, Masakazu; Morimoto, Yoshimasa; Kawada, Satoshi; Nakagawa, Koji; Watanabe, Atsuyuki; Nakamura, Kazufumi; Morita, Hiroshi; Ito, Hiroshi

    2018-03-01

    Remote monitoring (RM) has been advocated as the new standard of care for patients with cardiovascular implantable electronic devices (CIEDs). RM has allowed the early detection of adverse clinical events, such as arrhythmia, lead failure, and battery depletion. However, lead failure was often identified only by arrhythmic events, but not impedance abnormalities. To compare the usefulness of arrhythmic events with conventional impedance abnormalities for identifying lead failure in CIED patients followed by RM. CIED patients in 12 hospitals have been followed by the RM center in Okayama University Hospital. All transmitted data have been analyzed and summarized. From April 2009 to March 2016, 1,873 patients have been followed by the RM center. During the mean follow-up period of 775 days, 42 lead failure events (atrial lead 22, right ventricular pacemaker lead 5, implantable cardioverter defibrillator [ICD] lead 15) were detected. The proportion of lead failures detected only by arrhythmic events, which were not detected by conventional impedance abnormalities, was significantly higher than that detected by impedance abnormalities (arrhythmic event 76.2%, 95% CI: 60.5-87.9%; impedance abnormalities 23.8%, 95% CI: 12.1-39.5%). Twenty-seven events (64.7%) were detected without any alert. Of 15 patients with ICD lead failure, none has experienced inappropriate therapy. RM can detect lead failure earlier, before clinical adverse events. However, CIEDs often diagnose lead failure as just arrhythmic events without any warning. Thus, to detect lead failure earlier, careful human analysis of arrhythmic events is useful. © 2017 Wiley Periodicals, Inc.

  16. Wearable sensor platform and mobile application for use in cognitive behavioral therapy for drug addiction and PTSD.

    PubMed

    Fletcher, Richard Ribón; Tam, Sharon; Omojola, Olufemi; Redemske, Richard; Kwan, Joyce

    2011-01-01

    We present a wearable sensor platform designed for monitoring and studying autonomic nervous system (ANS) activity for the purpose of mental health treatment and interventions. The mobile sensor system consists of a sensor band worn on the ankle that continuously monitors electrodermal activity (EDA), 3-axis acceleration, and temperature. A custom-designed ECG heart monitor worn on the chest is also used as an optional part of the system. The EDA signal from the ankle bands provides a measure of sympathetic nervous system activity and is used to detect arousal events. The optional ECG data can be used to improve the sensor classification algorithm and provide a measure of emotional "valence." Both types of sensor bands contain a Bluetooth radio that enables communication with the patient's mobile phone. When a specific arousal event is detected, the phone automatically presents therapeutic and empathetic messages to the patient in the tradition of Cognitive Behavioral Therapy (CBT). As an example of clinical use, we describe how the system is currently being used in an ongoing study for patients with drug addiction and post-traumatic stress disorder (PTSD).
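
    As a rough idea of how arousal events might be flagged from the EDA channel (the band's actual classification algorithm is not described in this abstract), the sketch below marks skin-conductance rises that exceed an amplitude threshold within a short window; the sampling rate, window, and thresholds are assumed values.

    ```python
    # Hedged sketch: flag an "arousal event" when skin conductance rises by more
    # than `rise_threshold` microsiemens within `window_s` seconds.
    import numpy as np

    def detect_arousal_events(eda_us, fs=4.0, window_s=5.0, rise_threshold=0.5):
        window = int(window_s * fs)
        events = []
        for i in range(len(eda_us) - window):
            rise = eda_us[i + window] - eda_us[i]
            if rise > rise_threshold:
                events.append(i / fs)          # event onset time in seconds
        return events

    if __name__ == "__main__":
        fs = 4.0
        t = np.arange(0, 120, 1 / fs)
        eda = 2.0 + 0.02 * np.sin(0.05 * t)              # slow baseline drift
        eda += 1.2 / (1 + np.exp(-(t - 60) * 2))         # sharp conductance rise near t = 60 s
        onsets = detect_arousal_events(eda, fs)
        print(f"{len(onsets)} samples flagged, first near t = {onsets[0]:.1f} s" if onsets else "none")
    ```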

  17. Safety studies on intravenous administration of oncolytic recombinant vesicular stomatitis virus in purpose-bred beagle dogs.

    PubMed

    LeBlanc, Amy K; Naik, Shruthi; Galyon, Gina D; Jenks, Nathan; Steele, Mike; Peng, Kah-Whye; Federspiel, Mark J; Donnell, Robert; Russell, Stephen J

    2013-12-01

    VSV-IFNβ-NIS is a novel recombinant oncolytic vesicular stomatitis virus (VSV) with documented efficacy and safety in preclinical murine models of cancer. To facilitate clinical translation of this promising oncolytic therapy in patients with disseminated cancer, we are utilizing a comparative oncology approach to gather data describing the safety and efficacy of systemic VSV-IFNβ-NIS administration in dogs with naturally occurring cancer. In support of this, we executed a dose-escalation study in purpose-bred dogs to determine the maximum tolerated dose (MTD) of systemic VSV-hIFNβ-NIS, characterize the adverse event profile, and describe routes and duration of viral shedding in healthy, immune-competent dogs. The data indicate that an intravenous dose of 10^10 TCID50 is well tolerated in dogs. Expected adverse events were mild to moderate fever, self-limiting nausea and vomiting, lymphopenia, and oral mucosal lesions. Unexpected adverse events included prolongation of partial thromboplastin time, development of bacterial urinary tract infection, and scrotal dermatitis, and, in one dog receiving 10^11 TCID50 (10 × the MTD), the development of severe hepatotoxicity and symptoms of shock leading to euthanasia. Viral shedding data indicate that detectable viral genome in blood diminishes rapidly, with anti-VSV neutralizing antibodies detectable in blood as early as day 5 after intravenous virus administration. While low levels of viral genome copies were detectable in plasma, urine, and buccal swabs of dogs treated at the MTD, no infectious virus was detectable in plasma, urine, or buccal swabs at any of the doses tested. These studies confirm that VSV can be safely administered systemically in dogs, justifying the use of oncolytic VSV as a novel therapy for the treatment of canine cancer.

  18. A Likely Detection of a Two-planet System in a Low-magnification Microlensing Event

    NASA Astrophysics Data System (ADS)

    Suzuki, D.; Bennett, D. P.; Udalski, A.; Bond, I. A.; Sumi, T.; Han, C.; Kim, Ho-il.; Abe, F.; Asakura, Y.; Barry, R. K.; Bhattacharya, A.; Donachie, M.; Freeman, M.; Fukui, A.; Hirao, Y.; Itow, Y.; Koshimoto, N.; Li, M. C. A.; Ling, C. H.; Masuda, K.; Matsubara, Y.; Muraki, Y.; Nagakane, M.; Onishi, K.; Oyokawa, H.; Ranc, C.; Rattenbury, N. J.; Saito, To.; Sharan, A.; Sullivan, D. J.; Tristram, P. J.; Yonehara, A.; MOA Collaboration; Poleski, R.; Mróz, P.; Skowron, J.; Szymański, M. K.; Soszyński, I.; Kozłowski, S.; Pietrukowicz, P.; Wyrzykowski, Ł.; Ulaczyk, K.; OGLE Collaboration

    2018-06-01

    We report on the analysis of a microlensing event, OGLE-2014-BLG-1722, that showed two distinct short-term anomalies. The best-fit model to the observed light curves shows that the two anomalies are explained by two planetary-mass-ratio companions to the primary lens. Although a binary-source model can also explain the second anomaly, it is marginally ruled out at 3.1σ. The two-planet model indicates that the first anomaly was caused by planet "b" with a mass ratio of q = (4.5 +0.7/−0.6) × 10^-4 and a projected separation, in units of the Einstein radius, of s = 0.753 ± 0.004. The second anomaly reveals planet "c" with a mass ratio of q_2 = (7.0 +2.3/−1.7) × 10^-4, improving on the single-planet model by Δχ² ∼ 170. Its separation has two degenerate solutions: s_2 = 0.84 ± 0.03 and 1.37 ± 0.04 for the close and wide models, respectively. Unfortunately, this event does not show clear finite-source or microlensing parallax effects; thus, we estimated the physical parameters of the lens system from a Bayesian analysis. This gives the masses of planets b and c as m_b = 56 +51/−33 M⊕ and m_c = 85 +86/−51 M⊕, respectively, orbiting a late-type star with a mass of M_host = 0.40 +0.36/−0.24 M☉ located at D_L = 6.4 +1.3/−1.8 kpc from us. The projected distances between the host and the planets are r⊥,b = 1.5 ± 0.6 au for planet b, and r⊥,c = 1.7 +0.7/−0.6 au and 2.7 +1.1/−1.0 au for the close and wide models of planet c. If the two-planet model is true, then this is the third multiple-planet system detected using the microlensing method and the first multiple-planet system detected in low-magnification events, which dominate the microlensing survey data. The occurrence rate of multiple cold gas giant systems is estimated using these two detections and a simple extrapolation of the survey sensitivity of the 6 yr MOA microlensing survey combined with the 4 yr μFUN detection efficiency. It is estimated that 6% ± 2% of stars host two cold giant planets.
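
    A quick way to see where the quoted planet masses come from is the relation m = q × M_host: the light-curve fit yields the mass ratios, and the Bayesian analysis supplies the host mass. The sketch below multiplies the best-fit mass ratios by the median host mass; it only roughly reproduces the published values, which come from the full Bayesian posterior, and the constant and function names are ours.

```python
# Minimal sketch: planet mass from microlensing mass ratio and host mass.
M_SUN_IN_EARTH_MASSES = 332_946

def planet_mass_earth(q, host_mass_solar):
    """Planet mass in Earth masses: m = q * M_host."""
    return q * host_mass_solar * M_SUN_IN_EARTH_MASSES

m_b = planet_mass_earth(4.5e-4, 0.40)   # ~60 M_earth (paper: 56 +51/-33)
m_c = planet_mass_earth(7.0e-4, 0.40)   # ~93 M_earth (paper: 85 +86/-51)
print(f"m_b ~ {m_b:.0f} M_earth, m_c ~ {m_c:.0f} M_earth")
```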

  19. Global, Daily, Near Real-Time Satellite-based Flood Monitoring and Product Dissemination

    NASA Astrophysics Data System (ADS)

    Slayback, D. A.; Policelli, F. S.; Brakenridge, G. R.; Tokay, M. M.; Smith, M. M.; Kettner, A. J.

    2013-12-01

    Flooding is the most destructive, frequent, and costly natural disaster faced by modern society, and is expected to increase in frequency and damage with climate change and population growth. Some of 2013's major floods have impacted the New York City region, the Midwest, Alberta, Australia, various parts of China, Thailand, Pakistan, and central Europe. The toll of these events, in financial costs, displacement of individuals, and deaths, is substantial and continues to rise as climate change generates more extreme weather events. When these events occur, the disaster management community requires frequently updated and easily accessible information to better understand the extent of flooding and to coordinate response efforts. With funding from NASA's Applied Sciences program, we developed and now operate a near real-time global flood mapping system that provides critical flood extent information within 24-48 hours of events. The system applies a water detection algorithm to MODIS imagery received from the LANCE (Land Atmosphere Near real-time Capability for EOS) system at NASA Goddard within a few hours of satellite overpass. Using imagery from both the Terra (10:30 AM local overpass) and Aqua (1:30 PM) platforms allows an initial daily assessment of flooding extent by late afternoon, and more robust assessments after accumulating cloud-free imagery over several days. Cloud cover is the primary limitation in detecting surface water from MODIS imagery. Other issues include the relatively coarse scale of the MODIS imagery (250 meters), the difficulty of detecting flood waters under continuous canopy cover, confusion of shadow (cloud or terrain) with water, and accurately distinguishing flood water from normal water extents. We have made progress on many of these issues and are working to develop higher-resolution flood detection using alternate sensors, including Landsat and various radar sensors; although these provide better spatial resolution, it typically comes at the cost of timeliness. Since late 2011, this system has provided daily flood maps of the global non-Antarctic land surface. The data products are generated in raster and vector formats and provided freely on our website. To better serve the disaster response community, we have recently begun providing the products via live OGC (Open Geospatial Consortium) services, allowing easy access from a variety of platforms (Google Earth, desktop GIS software, mobile phone apps). We are also working with the Pacific Disaster Center to bring our product into their Disaster Alert system (including a mobile app), which will help simplify product distribution to the disaster management community.
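
    The abstract does not spell out the water detection algorithm itself, so the sketch below is only a generic illustration of how a MODIS-style reflectance test can separate water from land: water absorbs strongly in the near infrared, so a low NIR/red ratio combined with low NIR and SWIR reflectance is a common water criterion. The bands, thresholds, and function name are assumptions, not the operational algorithm of the system described above.

```python
# Minimal sketch of a reflectance-based surface water test; thresholds assumed.
import numpy as np

def water_mask(red, nir, swir, ratio_threshold=0.7, swir_threshold=0.05):
    """Flag pixels as water when the NIR/red ratio is low (water absorbs NIR)
    and NIR and SWIR reflectances are low; inputs are 0-1 reflectances."""
    ratio = nir / np.clip(red, 1e-6, None)
    return (ratio < ratio_threshold) & (nir < 0.15) & (swir < swir_threshold)

# Example: three pixels (clear water, turbid flood water, dry land)
red  = np.array([0.05, 0.08, 0.10])
nir  = np.array([0.02, 0.05, 0.30])
swir = np.array([0.01, 0.03, 0.20])
print(water_mask(red, nir, swir))   # -> [ True  True False]
```

    Distinguishing flood water from normal water extent would still require comparison against a reference water mask, as the abstract notes.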

  20. Use of sonification in the detection of anomalous events

    NASA Astrophysics Data System (ADS)

    Ballora, Mark; Cole, Robert J.; Kruesi, Heidi; Greene, Herbert; Monahan, Ganesh; Hall, David L.

    2012-06-01

    In this paper, we describe the construction of a soundtrack that fuses stock market data with information taken from tweets. This soundtrack, or auditory display, presents the numerical and text data in such a way that anomalous events may be readily detected, even by untrained listeners. The soundtrack generation is flexible, allowing an individual listener to create a unique audio mix from the available information sources. Properly constructed, the display exploits the auditory system's sensitivities to periodicities, to dynamic changes, and to patterns. This type of display could be valuable in environments that demand high levels of situational awareness based on multiple sources of incoming information.
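
    The following Python sketch illustrates the general parameter-mapping approach such an auditory display relies on: one data stream (e.g., stock prices) is mapped to pitch and another (e.g., tweet volume) to loudness, so that anomalies stand out as audible deviations. The pitch range, amplitude range, and choice of mappings are illustrative assumptions rather than the authors' design.

```python
# Minimal sketch of parameter-mapping sonification; mappings and ranges assumed.
import numpy as np

def map_to_frequency(values, f_lo=220.0, f_hi=880.0):
    """Map a numeric series (e.g., stock prices) onto a log-spaced pitch range
    so equal relative changes produce equal pitch intervals."""
    v = np.asarray(values, dtype=float)
    norm = (v - v.min()) / max(v.max() - v.min(), 1e-12)
    return f_lo * (f_hi / f_lo) ** norm                 # Hz

def map_to_amplitude(counts, a_lo=0.1, a_hi=1.0):
    """Map message counts (e.g., tweets per interval) onto loudness so that
    bursts of activity become audibly louder events."""
    c = np.asarray(counts, dtype=float)
    norm = (c - c.min()) / max(c.max() - c.min(), 1e-12)
    return a_lo + (a_hi - a_lo) * norm

prices = [101.2, 101.4, 101.3, 98.7, 101.5]   # sudden drop = audible pitch dip
tweets = [12, 15, 14, 240, 30]                # burst of tweets = loud tone
print(map_to_frequency(prices).round(1))
print(map_to_amplitude(tweets).round(2))
```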
