Sample records for event monitoring system

  1. High-Performance Monitoring Architecture for Large-Scale Distributed Systems Using Event Filtering

    NASA Technical Reports Server (NTRS)

    Maly, K.

    1998-01-01

Monitoring is an essential process for observing and improving the reliability and performance of large-scale distributed (LSD) systems. In an LSD environment, a large number of events are generated by the system components during their execution or interaction with external objects (e.g. users or processes). Monitoring such events is necessary for observing the run-time behavior of LSD systems and providing the status information required for debugging, tuning, and managing such applications. However, correlated events are generated concurrently and can be distributed across various locations in the application environment, which complicates the decision-making process and thereby makes monitoring LSD systems an intricate task. We propose a scalable, high-performance monitoring architecture for LSD systems to detect and classify interesting local and global events and disseminate the monitoring information to the corresponding endpoint management applications, such as debugging and reactive control tools, to improve application performance and reliability. A large volume of events may be generated due to the extensive demands of the monitoring applications and the high interaction of LSD systems. The monitoring architecture employs a high-performance event filtering mechanism to efficiently process the large volume of event traffic generated by LSD systems and to minimize the intrusiveness of the monitoring process by reducing the event traffic flow in the system and distributing the monitoring computation. Our architecture also supports dynamic and flexible reconfiguration of the monitoring mechanism via its instrumentation and subscription components. As a case study, we show how our monitoring architecture can be utilized to improve the reliability and performance of the Interactive Remote Instruction (IRI) system, which is a large-scale distributed system for collaborative distance learning.
The filtering mechanism represents an intrinsic component integrated with the monitoring architecture to reduce the volume of event traffic flow in the system, and thereby reduce the intrusiveness of the monitoring process. We are developing an event filtering architecture to efficiently process the large volume of event traffic generated by LSD systems (such as distributed interactive applications). This filtering architecture is used to monitor a collaborative distance learning application to obtain debugging and feedback information. Our architecture supports the dynamic (re)configuration and optimization of event filters in large-scale distributed systems. Our work represents a major contribution by (1) surveying and evaluating existing event filtering mechanisms for monitoring LSD systems and (2) devising an integrated, scalable, high-performance event filtering architecture that spans several key application domains, presenting techniques to improve functionality, performance, and scalability. This paper describes the primary characteristics and challenges of developing high-performance event filtering for monitoring LSD systems. We survey existing event filtering mechanisms and explain the key characteristics of each technique. In addition, we discuss the limitations of existing event filtering mechanisms and outline how our architecture will improve key aspects of event filtering.
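The subscription-driven filtering the abstract describes can be sketched in a few lines: subscribers register predicates, and only events matching at least one predicate are forwarded, so traffic is cut near the source. This is a minimal illustration of the general idea, not the authors' implementation; the `Event` fields and class names are hypothetical.

```python
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class Event:
    source: str            # component that generated the event
    kind: str              # event classification, e.g. "error", "info"
    payload: dict = field(default_factory=dict)

class EventFilter:
    """Forwards only events matching at least one subscription predicate,
    reducing the event traffic that reaches the monitoring endpoints."""

    def __init__(self) -> None:
        self.subscriptions: List[Callable[[Event], bool]] = []

    def subscribe(self, predicate: Callable[[Event], bool]) -> None:
        # Dynamic (re)configuration: predicates can be added at run time.
        self.subscriptions.append(predicate)

    def process(self, events: List[Event]) -> List[Event]:
        # Non-matching events are dropped at the source.
        return [e for e in events if any(p(e) for p in self.subscriptions)]
```

Placing this filter at the instrumented component rather than at the management endpoint is what reduces intrusiveness: dropped events never cross the network.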

  2. Shared performance monitor in a multiprocessor system

    DOEpatents

    Chiu, George; Gara, Alan G; Salapura, Valentina

    2014-12-02

A performance monitoring unit (PMU) and method for monitoring performance of events occurring in a multiprocessor system. The multiprocessor system comprises a plurality of processor units, each processor unit generating signals representing occurrences of events in that unit, and a single shared counter resource for performance monitoring. The performance monitoring unit is shared by all processor cores in the multiprocessor system. The PMU is further programmed to monitor event signals issued from non-processor devices.

  3. Shared performance monitor in a multiprocessor system

    DOEpatents

    Chiu, George; Gara, Alan G.; Salapura, Valentina

    2012-07-24

A performance monitoring unit (PMU) and method for monitoring performance of events occurring in a multiprocessor system. The multiprocessor system comprises a plurality of processor units, each processor unit generating signals representing occurrences of events in that unit, and a single shared counter resource for performance monitoring. The performance monitoring unit is shared by all processor cores in the multiprocessor system. The PMU comprises: a plurality of performance counters, each for counting signals representing occurrences of events from one or more of the plurality of processor units in the multiprocessor system; and a plurality of input devices for receiving the event signals from one or more processor units, the input devices programmable to select event signals for receipt by one or more of the performance counters for counting, wherein the PMU is shared between multiple processing units, or within a group of processors, in the multiprocessing system. The PMU is further programmed to monitor event signals issued from non-processor devices.
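The programmable-mux-plus-shared-counters arrangement claimed above can be modeled compactly: each counter is programmed to select one event signal, and any core's matching signal increments it. This is a behavioral sketch for illustration only; the class and event names are invented, and a real PMU does this selection in hardware.

```python
class SharedPMU:
    """Behavioral model of a shared performance monitor: programmable
    selectors route chosen event signals, from any core or non-processor
    device, to a shared pool of counters."""

    def __init__(self, n_counters: int) -> None:
        self.counters = [0] * n_counters
        self.select: dict[int, str] = {}   # counter index -> selected event

    def program(self, counter: int, event: str) -> None:
        # Programs the input selector feeding the given counter.
        self.select[counter] = event

    def signal(self, event: str) -> None:
        # An event signal arriving from any unit increments every
        # counter whose selector matches it; unselected events are ignored.
        for idx, selected in self.select.items():
            if selected == event:
                self.counters[idx] += 1
```

Sharing one counter pool across all cores is the design point of the patent: counters are a scarce resource, so routing beats per-core duplication.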

  4. A Centralized Display for Mission Monitoring

    NASA Technical Reports Server (NTRS)

    Trujillo, Anna C.

    2004-01-01

Humans traditionally experience a vigilance decrement over extended periods of time on reliable systems. One possible solution for aiding operators in monitoring is to use polar-star displays, which show deviations from normal in a more salient manner. The primary objective of this experiment was to determine whether polar-star displays aid in monitoring and preliminary diagnosis of the aircraft state. This experiment indicated that the polar-star display does indeed aid operators in detecting and diagnosing system events. Subjects were able to notice system events earlier, and they subjectively reported that the polar-star display helped them in monitoring, noticing an event, and diagnosing an event. Therefore, these results indicate that the polar-star display used for monitoring and preliminary diagnosis improves performance in these areas for system-related events.

  5. Impact of remote monitoring on the management of arrhythmias in patients with implantable cardioverter-defibrillator.

    PubMed

    Marcantoni, Lina; Toselli, Tiziano; Urso, Giulia; Pratola, Claudio; Ceconi, Claudio; Bertini, Matteo

    2015-11-01

In the last decade, there has been an exponential increase in implantable cardioverter-defibrillator (ICD) implants. Remote monitoring systems allow daily follow-up of patients with ICDs. The aim was to evaluate the impact of remote monitoring on the management of cardiovascular events associated with supraventricular and ventricular arrhythmias during long-term follow-up. A total of 207 patients undergoing ICD implantation/replacement were enrolled: 79 patients received remote monitoring systems and were followed up every 12 months, and 128 patients were followed up conventionally every 6 months. All patients were followed up and monitored for the occurrence of supraventricular and ventricular arrhythmia-related cardiovascular events (ICD shocks and/or hospitalizations). During a median follow-up of 842 days (interquartile range 476-1288 days), 32 (15.5%) patients experienced supraventricular arrhythmia-related events and 51 (24.6%) patients experienced ventricular arrhythmia-related events. Remote monitoring had a significant role in the reduction of supraventricular arrhythmia-related events, but it had no effect on ventricular arrhythmia-related events. In multivariable analysis, remote monitoring remained an independent protective factor, reducing the risk of supraventricular arrhythmia-related events by 67% [hazard ratio, 0.33; 95% confidence interval (CI), 0.13-0.82; P = 0.017]. Remote monitoring systems improved outcomes in patients with supraventricular arrhythmias by reducing the risk of cardiovascular events, but no benefits were observed in patients with ventricular arrhythmias.

  6. Real-time monitoring of clinical processes using complex event processing and transition systems.

    PubMed

    Meinecke, Sebastian

    2014-01-01

Dependencies between tasks in clinical processes are often complex and error-prone. Our aim is to describe a new approach for the automatic derivation of clinical events, identified via the behaviour of IT systems, using Complex Event Processing. Furthermore, we map these events onto transition systems to monitor crucial clinical processes in real time, preventing and detecting erroneous situations.
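Mapping an event stream onto a transition system, as the abstract proposes, amounts to running a state machine over observed events and flagging any event that has no legal transition from the current state. A minimal sketch, with invented state and event names, of that monitoring idea:

```python
class TransitionMonitor:
    """Tracks a process as a transition system; an observed event with no
    legal transition from the current state is recorded as erroneous."""

    def __init__(self, initial: str, transitions: dict[tuple[str, str], str]) -> None:
        self.state = initial
        self.transitions = transitions          # (state, event) -> next state
        self.errors: list[tuple[str, str]] = [] # (state, offending event)

    def observe(self, event: str) -> None:
        key = (self.state, event)
        if key in self.transitions:
            self.state = self.transitions[key]  # legal step: advance
        else:
            self.errors.append(key)             # erroneous situation detected
```

A complex-event-processing layer would sit in front of this, deriving the high-level events (e.g. "allergy checked") from low-level IT-system behaviour before feeding them to the monitor.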

  7. Real-Time Event Detection for Monitoring Natural and Source ...

    EPA Pesticide Factsheets

The use of event detection systems in finished drinking water systems is increasing in order to monitor water quality in both operational and security contexts. Recent incidents involving harmful algal blooms and chemical spills into watersheds have increased interest in monitoring source water quality prior to treatment. This work highlights the use of the CANARY event detection software in detecting suspected illicit events in an actively monitored watershed in South Carolina. CANARY is an open-source event detection software package developed by USEPA and Sandia National Laboratories. The software works with any type of sensor, utilizes multiple detection algorithms and approaches, and can incorporate operational information as needed. Monitoring has been underway for several years to detect events related to intentional or unintentional dumping of materials into the monitored watershed. This work evaluates the feasibility of using CANARY to enhance the detection of events in this watershed. This presentation will describe the real-time monitoring approach used in this watershed, the selection of CANARY configuration parameters that optimize detection for this watershed and monitoring application, and the performance of CANARY during the time frame analyzed. Further, this work will highlight how rainfall events impacted analysis, and the innovative application of CANARY taken in order to effectively detect the suspected illicit events.

  8. ATLAS EventIndex monitoring system using the Kibana analytics and visualization platform

    NASA Astrophysics Data System (ADS)

    Barberis, D.; Cárdenas Zárate, S. E.; Favareto, A.; Fernandez Casani, A.; Gallas, E. J.; Garcia Montoro, C.; Gonzalez de la Hoz, S.; Hrivnac, J.; Malon, D.; Prokoshin, F.; Salt, J.; Sanchez, J.; Toebbicke, R.; Yuan, R.; ATLAS Collaboration

    2016-10-01

The ATLAS EventIndex is a data catalogue system that stores event-related metadata for all (real and simulated) ATLAS events, at all processing stages. As it consists of different components that depend on other applications (such as distributed storage, and different sources of information), we need to monitor the conditions of many heterogeneous subsystems to make sure everything is working correctly. This paper describes how we gather information about the EventIndex components and related subsystems: the Producer-Consumer architecture for data collection, health parameters from the servers that run EventIndex components, EventIndex web interface status, and the Hadoop infrastructure that stores EventIndex data. This information is collected, processed, and then displayed using CERN service monitoring software based on the Kibana analytics and visualization package, provided by the CERN IT Department. EventIndex monitoring is used both by the EventIndex team and the ATLAS Distributed Computing shifts crew.

  9. Initial Evaluation of Signal-Based Bayesian Monitoring

    NASA Astrophysics Data System (ADS)

    Moore, D.; Russell, S.

    2016-12-01

    We present SIGVISA (Signal-based Vertically Integrated Seismic Analysis), a next-generation system for global seismic monitoring through Bayesian inference on seismic signals. Traditional seismic monitoring systems rely on discrete detections produced by station processing software, discarding significant information present in the original recorded signal. By modeling signals directly, our forward model is able to incorporate a rich representation of the physics underlying the signal generation process, including source mechanisms, wave propagation, and station response. This allows inference in the model to recover the qualitative behavior of geophysical methods including waveform matching and double-differencing, all as part of a unified Bayesian monitoring system that simultaneously detects and locates events from a network of stations. We report results from an evaluation of SIGVISA monitoring the western United States for a two-week period following the magnitude 6.0 event in Wells, NV in February 2008. During this period, SIGVISA detects more than twice as many events as NETVISA, and three times as many as SEL3, while operating at the same precision; at lower precisions it detects up to five times as many events as SEL3. At the same time, signal-based monitoring reduces mean location errors by a factor of four relative to detection-based systems. We provide evidence that, given only IMS data, SIGVISA detects events that are missed by regional monitoring networks, indicating that our evaluations may even underestimate its performance. Finally, SIGVISA matches or exceeds the detection rates of existing systems for de novo events - events with no nearby historical seismicity - and detects through automated processing a number of such events missed even by the human analysts generating the LEB.

  10. Using Knowledge Base for Event-Driven Scheduling of Web Monitoring Systems

    NASA Astrophysics Data System (ADS)

    Kim, Yang Sok; Kang, Sung Won; Kang, Byeong Ho; Compton, Paul

    Web monitoring systems report any changes to their target web pages by revisiting them frequently. As they operate under significant resource constraints, it is essential to minimize revisits while ensuring minimal delay and maximum coverage. Various statistical scheduling methods have been proposed to resolve this problem; however, they are static and cannot easily cope with events in the real world. This paper proposes a new scheduling method that manages unpredictable events. An MCRDR (Multiple Classification Ripple-Down Rules) document classification knowledge base was reused to detect events and to initiate a prompt web monitoring process independent of a static monitoring schedule. Our experiment demonstrates that the approach improves monitoring efficiency significantly.
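The event-driven scheduling described above, where a detected real-world event promotes a page ahead of its static revisit schedule, can be sketched with a priority queue keyed on next-visit time. This is an illustrative simplification under assumed names; the paper's actual mechanism uses an MCRDR knowledge base to decide which events warrant promotion.

```python
import heapq

class MonitorScheduler:
    """Static revisit schedule plus event-driven promotion: an external
    event pulls a page's next visit forward to 'now', overriding the
    fixed revisit interval."""

    def __init__(self, interval: float) -> None:
        self.interval = interval
        self.queue: list[tuple[float, str]] = []  # (next_visit_time, url)

    def add(self, url: str, now: float = 0.0) -> None:
        heapq.heappush(self.queue, (now + self.interval, url))

    def on_event(self, url: str, now: float) -> None:
        # A detected event overrides the static schedule for this page:
        # drop its pending entry and reschedule it for an immediate visit.
        self.queue = [(t, u) for (t, u) in self.queue if u != url]
        heapq.heapify(self.queue)
        heapq.heappush(self.queue, (now, url))

    def next_visit(self) -> tuple[float, str]:
        # Earliest-due page is visited first.
        return heapq.heappop(self.queue)
```

Under resource constraints this is the point of the approach: pages are normally revisited lazily, and only event-related pages consume an out-of-schedule fetch.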

  11. Application of process monitoring to anomaly detection in nuclear material processing systems via system-centric event interpretation of data from multiple sensors of varying reliability

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Garcia, Humberto E.; Simpson, Michael F.; Lin, Wen-Chiao

In this paper, we apply an advanced safeguards approach and associated process monitoring methods to a hypothetical nuclear material processing system. The assessment regarding the state of the processing facility is conducted at a system-centric level formulated in a hybrid framework. This utilizes an architecture for integrating both time- and event-driven data and analysis for decision making. While the time-driven layers of the proposed architecture encompass more traditional process monitoring methods based on time-series data and analysis, the event-driven layers encompass operation monitoring methods based on discrete event data and analysis. By integrating process- and operation-related information and methodologies within a unified framework, the task of anomaly detection is greatly improved. This is because decision making can benefit not only from known time-series relationships among measured signals but also from known event-sequence relationships among generated events. This knowledge, available at both the time-series and discrete-event layers, can then be effectively used to synthesize observation solutions that optimally balance sensor and data processing requirements. The application of the proposed approach is then implemented on an illustrative monitored system based on pyroprocessing, and results are discussed.

  12. Adverse event detection (AED) system for continuously monitoring and evaluating structural health status

    NASA Astrophysics Data System (ADS)

    Yun, Jinsik; Ha, Dong Sam; Inman, Daniel J.; Owen, Robert B.

    2011-03-01

    Structural damage for spacecraft is mainly due to impacts such as collision of meteorites or space debris. We present a structural health monitoring (SHM) system for space applications, named Adverse Event Detection (AED), which integrates an acoustic sensor, an impedance-based SHM system, and a Lamb wave SHM system. With these three health-monitoring methods in place, we can determine the presence, location, and severity of damage. An acoustic sensor continuously monitors acoustic events, while the impedance-based and Lamb wave SHM systems are in sleep mode. If an acoustic sensor detects an impact, it activates the impedance-based SHM. The impedance-based system determines if the impact incurred damage. When damage is detected, it activates the Lamb wave SHM system to determine the severity and location of the damage. Further, since an acoustic sensor dissipates much less power than the two SHM systems and the two systems are activated only when there is an acoustic event, our system reduces overall power dissipation significantly. Our prototype system demonstrates the feasibility of the proposed concept.
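The tiered wake-up cascade described above (always-on acoustic sensing, impedance check only on impact, Lamb wave scan only on confirmed damage) is essentially a short-circuiting pipeline: each stage runs only if the cheaper stage trips. A minimal sketch of that control flow, with hypothetical function names standing in for the three subsystems:

```python
from typing import Callable

def aed_pipeline(acoustic_hit: Callable[[], bool],
                 impedance_damage: Callable[[], bool],
                 lamb_wave_scan: Callable[[], str]) -> str:
    """Tiered wake-up: each stage is activated only when the cheaper,
    lower-power stage before it has triggered."""
    if not acoustic_hit():            # always-on, lowest-power stage
        return "idle"
    if not impedance_damage():        # woken only on an acoustic event
        return "impact, no damage"
    return lamb_wave_scan()           # severity/location: most expensive stage
```

The power saving follows directly from this structure: the two expensive SHM systems stay in sleep mode and are evaluated only on the (rare) path where the acoustic trigger fires.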

  13. Structural monitoring for rare events in remote locations

    NASA Astrophysics Data System (ADS)

    Hale, J. M.

    2005-01-01

    A structural monitoring system has been developed for use on high value engineering structures, which is particularly suitable for use in remote locations where rare events such as accidental impacts, seismic activity or terrorist attack might otherwise go undetected. The system comprises a low power intelligent on-site data logger and a remote analysis computer that communicate with one another using the internet and mobile telephone technology. The analysis computer also generates e-mail alarms and maintains a web page that displays detected events in near real-time to authorised users. The application of the prototype system to pipeline monitoring is described in which the analysis of detected events is used to differentiate between impacts and pressure surges. The system has been demonstrated successfully and is ready for deployment.

  14. Wireless battery management control and monitoring system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zumstein, James M.; Chang, John T.; Farmer, Joseph C.

A battery management system using a sensor inside the battery. The sensor enables monitoring and detection of various events in the battery and wireless transmission of a signal from the sensor through the battery casing to a control and data acquisition module. The detection of threshold events in the battery enables remedial action to be taken to avoid catastrophic events.

  15. HyperCard Monitor System.

    ERIC Educational Resources Information Center

    Harris, Julian; Maurer, Hermann

An investigation into high-level event monitoring is carried out within the scope of a well-known multimedia application, HyperCard, a program on the Macintosh computer. A monitoring system is defined as a system which automatically monitors usage of some activity and gathers statistics based on what it has observed. Monitor systems can give the…

  16. Real-Time Event Detection for Monitoring Natural and Source Waterways - Sacramento, CA

    EPA Science Inventory

    The use of event detection systems in finished drinking water systems is increasing in order to monitor water quality in both operational and security contexts. Recent incidents involving harmful algal blooms and chemical spills into watersheds have increased interest in monitori...

  17. On-line data analysis and monitoring for H1 drift chambers

    NASA Astrophysics Data System (ADS)

    Düllmann, Dirk

    1992-05-01

The on-line monitoring, slow control, and calibration of the H1 central jet chamber use a VME multiprocessor system to perform the analysis and a connected Macintosh computer as a graphical interface for the operator on shift. The tasks of this system are: analysis of event data, including on-line track search; on-line calibration from normal events and test-pulse events; control of the high voltage and monitoring of settings and currents; and monitoring of the temperature, pressure, and mixture of the chamber gas. A program package is described which controls the dataflow between data acquisition, the different VME CPUs, and the Macintosh. It allows off-line-style programs to be run for the different tasks.

  18. A novel real-time health monitoring system for unmanned vehicles

    NASA Astrophysics Data System (ADS)

    Zhang, David C.; Ouyang, Lien; Qing, Peter; Li, Irene

    2008-04-01

Real-time monitoring of the status of in-service structures such as unmanned vehicles can provide invaluable information for detecting damage to the structures in time. The unmanned vehicles can then be maintained and repaired promptly if such damage is found. One typical cause of damage to unmanned vehicles is impact, from bumping into obstacles or being hit by objects such as hostile fire. This paper introduces a novel impact event sensing system that can detect the location of impact events and their force-time history. The system consists of a piezoelectric sensor network, the hardware platform, and the analysis software. The new customized battery-powered impact event sensing system supports up to 64-channel parallel data acquisition. It features an innovative low-power hardware trigger circuit that monitors all 64 channels simultaneously. The system is in sleep mode most of the time. When an impact event happens, the system wakes up in microseconds and detects the impact location and corresponding force-time history. The system can be combined with the SMART sensing system to further evaluate impact damage severity.

  19. GTSO: Global Trace Synchronization and Ordering Mechanism for Wireless Sensor Network Monitoring Platforms.

    PubMed

    Navia, Marlon; Campelo, José Carlos; Bonastre, Alberto; Ors, Rafael

    2017-12-23

Monitoring is one of the best ways to evaluate the behavior of computer systems. When the monitored system is a distributed system, such as a wireless sensor network (WSN), the monitoring operation must also be distributed, providing a distributed trace for further analysis. The temporal sequence of occurrence of the events registered by the distributed monitoring platform (DMP) must be correctly established to provide cause-effect relationships between them, so the logs obtained at different monitor nodes must be synchronized. Many of the synchronization mechanisms applied to DMPs consist of adjusting the internal clocks of the nodes to the same value as a reference time. However, these mechanisms can create an incoherent event sequence. This article presents a new method to achieve global synchronization of the traces obtained in a DMP. It is based on periodic synchronization signals that are received by the monitor nodes and logged along with the recorded events. The mechanism processes all traces and generates a global post-synchronized trace by proportionally scaling all registered times according to the synchronization signals. It is intended to be a simple but efficient offline mechanism. Its application in a WSN-DMP demonstrates that it guarantees a correct ordering of the events, avoiding the aforementioned issues.
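The proportional rescaling step can be illustrated with the simplest case: two sync signals bracket a node's events, and each local timestamp is mapped linearly onto the global timeline defined by the sync signals' global times. This is a sketch of the general idea under assumed names, not the GTSO implementation (which handles many nodes, many sync periods, and log merging).

```python
from typing import List, Tuple

def resync(local_times: List[float],
           sync_local: Tuple[float, float],
           sync_global: Tuple[float, float]) -> List[float]:
    """Rescale a node's local event timestamps onto the global timeline,
    using the local and global timestamps of two bracketing sync signals.
    Events between the signals keep their relative order and spacing."""
    (l0, l1), (g0, g1) = sync_local, sync_global
    scale = (g1 - g0) / (l1 - l0)   # corrects clock drift between signals
    return [g0 + (t - l0) * scale for t in local_times]
```

Because every node scales against the same broadcast signals, events from different nodes land on one coherent timeline without ever adjusting the nodes' clocks online, which is what preserves a consistent event order.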

  20. Hydra—The National Earthquake Information Center’s 24/7 seismic monitoring, analysis, catalog production, quality analysis, and special studies tool suite

    USGS Publications Warehouse

    Patton, John M.; Guy, Michelle R.; Benz, Harley M.; Buland, Raymond P.; Erickson, Brian K.; Kragness, David S.

    2016-08-18

This report provides an overview of the capabilities and design of Hydra, the global seismic monitoring and analysis system used for earthquake response and catalog production at the U.S. Geological Survey National Earthquake Information Center (NEIC). Hydra supports the NEIC’s worldwide earthquake monitoring mission in areas such as seismic event detection, seismic data insertion and storage, seismic data processing and analysis, and seismic data output. The Hydra system automatically identifies seismic phase arrival times and detects the occurrence of earthquakes in near-real time. The system integrates and inserts parametric and waveform seismic data into discrete events in a database for analysis. Hydra computes seismic event parameters, including locations, multiple magnitudes, moment tensors, and depth estimates. Hydra supports the NEIC’s 24/7 analyst staff with a suite of seismic analysis graphical user interfaces. In addition to the NEIC’s monitoring needs, the system supports the processing of aftershock and temporary deployment data, and supports the NEIC’s quality assurance procedures. The Hydra system continues to be developed to expand its seismic analysis and monitoring capabilities.

  1. A new system for continuous and remote monitoring of patients receiving home mechanical ventilation

    NASA Astrophysics Data System (ADS)

    Battista, L.

    2016-09-01

Home mechanical ventilation is the treatment of patients with respiratory failure or insufficiency by means of a mechanical ventilator at the patient's home. In order to allow remote patient monitoring, several tele-monitoring systems have been introduced in the last few years. However, most of them do not allow real-time services, as they implement their own proprietary communication protocols, and some ventilation parameters are not always measured. Moreover, they monitor only some breaths during the day, despite the fact that a patient's respiratory state may change continuously. In order to reduce the above drawbacks, this work reports the development of a novel remote monitoring system for long-term, home-based ventilation therapy; the proposed system allows continuous monitoring of the main physical quantities involved during home-care ventilation (e.g., differential pressure, volume, and air flow rate) and is designed to allow observation of different remote therapy units located in different places of a city, region, or country. The developed remote patient monitoring system is able to detect various clinical events (e.g., tube disconnection and sleep apnea events) and has been successfully tested by means of experiments carried out with pulmonary ventilators typically used to support sick patients.

  2. A new system for continuous and remote monitoring of patients receiving home mechanical ventilation.

    PubMed

    Battista, L

    2016-09-01

Home mechanical ventilation is the treatment of patients with respiratory failure or insufficiency by means of a mechanical ventilator at the patient's home. In order to allow remote patient monitoring, several tele-monitoring systems have been introduced in the last few years. However, most of them do not allow real-time services, as they implement their own proprietary communication protocols, and some ventilation parameters are not always measured. Moreover, they monitor only some breaths during the day, despite the fact that a patient's respiratory state may change continuously. In order to reduce the above drawbacks, this work reports the development of a novel remote monitoring system for long-term, home-based ventilation therapy; the proposed system allows continuous monitoring of the main physical quantities involved during home-care ventilation (e.g., differential pressure, volume, and air flow rate) and is designed to allow observation of different remote therapy units located in different places of a city, region, or country. The developed remote patient monitoring system is able to detect various clinical events (e.g., tube disconnection and sleep apnea events) and has been successfully tested by means of experiments carried out with pulmonary ventilators typically used to support sick patients.

  3. Online data monitoring in the LHCb experiment

    NASA Astrophysics Data System (ADS)

    Callot, O.; Cherukuwada, S.; Frank, M.; Gaspar, C.; Graziani, G.; Herwijnen, E. v.; Jost, B.; Neufeld, N.; P-Altarelli, M.; Somogyi, P.; Stoica, R.

    2008-07-01

    The High Level Trigger and Data Acquisition system selects about 2 kHz of events out of the 40 MHz of beam crossings. The selected events are sent to permanent storage for subsequent analysis. In order to ensure the quality of the collected data, identify possible malfunctions of the detector and perform calibration and alignment checks, a small fraction of the accepted events is sent to a monitoring farm, which consists of a few tens of general purpose processors. This contribution introduces the architecture of the data stream splitting mechanism from the storage system to the monitoring farm, where the raw data are analyzed by dedicated tasks. It describes the collaborating software components that are all based on the Gaudi event processing framework.

  4. Autonomous Multi-sensor Coordination: The Science Goal Monitor

    NASA Technical Reports Server (NTRS)

    Koratkar, Anuradha; Jung, John; Geiger, Jenny; Grosvenor, Sandy

    2004-01-01

Next-generation science and exploration systems will employ new observation strategies that use multiple sensors in a dynamic environment to provide high-quality monitoring, self-consistent analyses, and informed decision making. The Science Goal Monitor (SGM) is a prototype software tool being developed to explore the nature of the automation necessary to enable dynamic observing of Earth phenomena. The tools being developed in SGM improve our ability to autonomously monitor multiple independent sensors and coordinate reactions to better observe dynamic phenomena. The SGM system enables users to specify events of interest and how to react when an event is detected. The system monitors streams of data to identify occurrences of the key events previously specified by the scientist/user. When an event occurs, the system autonomously coordinates the execution of the user's desired reactions between different sensors. The information can be used to respond rapidly to a variety of fast temporal events. Investigators will no longer have to rely on after-the-fact data analysis to determine what happened. Our paper describes a series of prototype demonstrations that we have developed using SGM, NASA's Earth Observing-1 (EO-1) satellite, and the MODIS instrument on the Earth Observing System Aqua and Terra spacecraft. Our demonstrations show the promise of coordinating data from different sources, analyzing the data for a relevant event, and autonomously updating and rapidly obtaining a follow-on relevant image. SGM is being used to investigate forest fires, floods, and volcanic eruptions. We are now identifying new Earth science scenarios that will require more complex SGM reasoning. By developing and testing a prototype in an operational environment, we are also establishing and gathering metrics to gauge the success of automating science campaigns.

  5. Bayesian Inference for Signal-Based Seismic Monitoring

    NASA Astrophysics Data System (ADS)

    Moore, D.

    2015-12-01

    Traditional seismic monitoring systems rely on discrete detections produced by station processing software, discarding significant information present in the original recorded signal. SIG-VISA (Signal-based Vertically Integrated Seismic Analysis) is a system for global seismic monitoring through Bayesian inference on seismic signals. By modeling signals directly, our forward model is able to incorporate a rich representation of the physics underlying the signal generation process, including source mechanisms, wave propagation, and station response. This allows inference in the model to recover the qualitative behavior of recent geophysical methods including waveform matching and double-differencing, all as part of a unified Bayesian monitoring system that simultaneously detects and locates events from a global network of stations. We demonstrate recent progress in scaling up SIG-VISA to efficiently process the data stream of global signals recorded by the International Monitoring System (IMS), including comparisons against existing processing methods that show increased sensitivity from our signal-based model and in particular the ability to locate events (including aftershock sequences that can tax analyst processing) precisely from waveform correlation effects. We also provide a Bayesian analysis of an alleged low-magnitude event near the DPRK test site in May 2010 [1] [2], investigating whether such an event could plausibly be detected through automated processing in a signal-based monitoring system. [1] Zhang, Miao and Wen, Lianxing. "Seismological Evidence for a Low-Yield Nuclear Test on 12 May 2010 in North Korea". Seismological Research Letters, January/February 2015. [2] Richards, Paul. "A Seismic Event in North Korea on 12 May 2010". CTBTO SnT 2015 oral presentation, video at https://video-archive.ctbto.org/index.php/kmc/preview/partner_id/103/uiconf_id/4421629/entry_id/0_ymmtpps0/delivery/http
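    As a toy illustration of detecting events on raw signals rather than on discrete picks (this is not the SIG-VISA model; a single template comparison under i.i.d. Gaussian noise stands in for the full Bayesian forward model):

```python
# Toy signal-based detection: compare the likelihood of a recorded trace under
# a noise-only model versus a noise-plus-template model; a large positive
# log Bayes factor favors the event hypothesis.
import math

def log_gauss(x, mu, sigma):
    """Log density of N(mu, sigma^2) at x."""
    return -0.5 * math.log(2 * math.pi * sigma ** 2) - (x - mu) ** 2 / (2 * sigma ** 2)

def log_bayes_factor(trace, template, noise_sigma=1.0):
    """log p(trace | event) - log p(trace | noise only), i.i.d. Gaussian noise."""
    with_event = sum(log_gauss(x, m, noise_sigma) for x, m in zip(trace, template))
    noise_only = sum(log_gauss(x, 0.0, noise_sigma) for x in trace)
    return with_event - noise_only

template = [0.0, 1.5, 3.0, 1.5, 0.0]   # predicted arrival waveform (hypothetical)
trace = [0.1, 1.4, 3.1, 1.6, -0.1]     # recording close to the template
bf = log_bayes_factor(trace, template)  # large positive => event favored
```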

  6. Systems and Sensors for Debris-flow Monitoring and Warning

    PubMed Central

    Arattano, Massimo; Marchi, Lorenzo

    2008-01-01

    Debris flows are a type of mass movement that occurs in mountain torrents. They consist of a high concentration of solid material in water that flows as a wave with a steep front. Debris flows can be considered a phenomenon intermediate between landslides and water floods. They are amongst the most hazardous natural processes in mountainous regions and may occur under different climatic conditions. Their destructiveness is due to different factors: their capability of transporting and depositing huge amounts of solid materials, which may also reach large sizes (boulders of several cubic meters are commonly transported by debris flows), their steep fronts, which may reach several meters of height and also their high velocities. The implementation of both structural and non-structural control measures is often required when debris flows endanger routes, urban areas and other infrastructures. Sensor networks for debris-flow monitoring and warning play an important role amongst non-structural measures intended to reduce debris-flow risk. In particular, debris flow warning systems can be subdivided into two main classes: advance warning and event warning systems. These two classes employ different types of sensors. Advance warning systems are based on monitoring causative hydrometeorological processes (typically rainfall) and aim to issue a warning before a possible debris flow is triggered. Event warning systems are based on detecting debris flows when these processes are in progress. They have a much smaller lead time than advance warning ones but are also less prone to false alarms. Advance warning for debris flows employs sensors and techniques typical of meteorology and hydrology, including measuring rainfall by means of rain gauges and weather radar and monitoring water discharge in headwater streams. 
Event warning systems use different types of sensors, encompassing ultrasonic or radar gauges, ground vibration sensors, video cameras, avalanche pendulums, photocells, trip wires, etc. Event warning systems for debris flows have a strong linkage with debris-flow monitoring carried out for research purposes: the same sensors are often used for both monitoring and warning, although warning systems must meet more stringent robustness requirements than monitoring systems. The paper presents a description of the sensors employed in debris-flow monitoring and event warning systems, with attention given to the advantages and drawbacks of the different types of sensors. PMID:27879828
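    A minimal example of the event-warning logic discussed above, using a ground-vibration sensor (the threshold and run length are hypothetical): requiring the amplitude to stay above a threshold for several consecutive samples trades a little lead time for fewer false alarms from short spikes.

```python
# Hypothetical debris-flow event trigger: fire only after the vibration
# amplitude exceeds the threshold for min_run consecutive samples.
def vibration_alarm(samples, threshold=5.0, min_run=3):
    """Return the sample index at which the alarm fires, or None."""
    run = 0
    for i, amplitude in enumerate(samples):
        run = run + 1 if amplitude >= threshold else 0
        if run >= min_run:
            return i
    return None

fire_at = vibration_alarm([1, 6, 2, 7, 8, 9, 9])  # the lone spike at index 1 is ignored
```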

  7. Monitoring the Microgravity Environment Quality On-Board the International Space Station Using Soft Computing Techniques

    NASA Technical Reports Server (NTRS)

    Jules, Kenol; Lin, Paul P.

    2001-01-01

    This paper presents an artificial intelligence monitoring system developed by the NASA Glenn Principal Investigator Microgravity Services project to help principal investigator teams identify the primary vibratory disturbance sources that are active, at any moment in time, on board the International Space Station and that might affect the microgravity environment their experiments are exposed to. From the Principal Investigator Microgravity Services web site, the principal investigator teams can monitor via a graphical display, in near real time, which events are active, such as crew activities, pumps, fans, centrifuges, compressors, crew exercise, platform structural modes, etc., and decide whether or not to run their experiments based on the acceleration environment associated with a specific event. This monitoring system is focused primarily on detecting vibratory disturbance sources, but could also be used to detect some transient disturbance sources, depending on the event's duration. The system has built-in capability to detect both known and unknown vibratory disturbance sources. Several soft computing techniques such as Kohonen's Self-Organizing Feature Map, Learning Vector Quantization, Back-Propagation Neural Networks, and Fuzzy Logic were used to design the system.

  8. Early warning, warning or alarm systems for natural hazards? A generic classification.

    NASA Astrophysics Data System (ADS)

    Sättele, Martina; Bründl, Michael; Straub, Daniel

    2013-04-01

    Early warning, warning and alarm systems have gained popularity in recent years as cost-efficient measures for dangerous natural hazard processes such as floods, storms, rock and snow avalanches, debris flows, rock and ice falls, landslides, flash floods, glacier lake outburst floods, forest fires and even earthquakes. These systems can generate information before an event causes loss of property and life. In this way, they mainly mitigate the overall risk by reducing the probability that endangered objects are present. These systems are typically prototypes tailored to specific project needs. Despite their importance, there is no recognised system classification. This contribution classifies warning and alarm systems into three classes: i) threshold systems, ii) expert systems and iii) model-based expert systems. The result is a generic classification, which takes the characteristics of the natural hazard process itself and the related monitoring possibilities into account. The choice of the monitoring parameters directly determines the system's lead time. Moreover, the classification of 52 active systems revealed typical system characteristics for each system class. i) Threshold systems monitor dynamic process parameters of ongoing events (e.g. water level of a debris flow) and incorporate minor lead times. They have local geographical coverage, and a predefined threshold determines whether an alarm is automatically activated to warn endangered objects, authorities and system operators. ii) Expert systems monitor direct changes in the variable disposition (e.g. crack opening before a rock avalanche) or trigger events (e.g. heavy rain) at a local scale before the main event starts and thus offer extended lead times. The final alarm decision incorporates human, model and organisation-related factors. iii) Model-based expert systems monitor indirect changes in the variable disposition (e.g. 
snow temperature, height or solar radiation that influence the occurrence probability of snow avalanches) or trigger events (e.g. heavy snowfall) to predict spontaneous hazard events in advance. They encompass regional or national measuring networks and satisfy additional demands such as the standardisation of the measuring stations. The developed classification and the characteristics revealed for each class provide valuable input for quantifying the reliability of warning and alarm systems. Importantly, this will make it possible to compare them with well-established standard mitigation measures such as dams, nets and galleries within an integrated risk management approach.

  9. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dudleson, B.; Arnold, M.; McCann, D.

    Rapid detection of unexpected drilling events requires continuous monitoring of drilling parameters. A major R&D program by a drilling contractor has led to the introduction of a computerized monitoring system on its offshore rigs. The system includes advanced color graphics displays and new smart alarms to help both contractor and operator personnel detect and observe drilling events before they would normally be apparent with conventional rig instrumentation. This article describes a module of this monitoring system that uses expert system technology to detect the earliest stages of drillstring washouts. Field results demonstrate the effectiveness of the smart alarm incorporated in the system. Early detection allows the driller to react before a twist-off results in expensive fishing operations.

  10. EMIR: a configurable hierarchical system for event monitoring and incident response

    NASA Astrophysics Data System (ADS)

    Deich, William T. S.

    2014-07-01

    The Event Monitor and Incident Response system (Emir) is a flexible, general-purpose system for monitoring and responding to all aspects of instrument, telescope, and general facility operations, and has been in use at the Automated Planet Finder telescope for two years. Responses to problems can include both passive actions (e.g. generating alerts) and active actions (e.g. modifying system settings). Emir includes a monitor-and-response daemon, plus graphical user interfaces and text-based clients that automatically configure themselves from data supplied at runtime by the daemon. The daemon is driven by a configuration file that describes each condition to be monitored, the actions to take when the condition is triggered, and how the conditions are aggregated into hierarchical groups of conditions. Emir has been implemented for the Keck Task Library (KTL) keyword-based systems used at Keck and Lick Observatories, but can be readily adapted to many event-driven architectures. This paper discusses the design and implementation of Emir, and the challenges in balancing the competing demands for simplicity, flexibility, power, and extensibility. Emir's design lends itself well to multiple purposes, and in addition to its core monitor and response functions, it provides an effective framework for computing running statistics, aggregate values, and summary state values from the primitive state data generated by other subsystems, and even for creating quick-and-dirty control loops for simple systems.
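    The configuration-driven design described above, with conditions aggregated into hierarchical groups, can be sketched as follows. This is a hedged illustration, not the actual Emir/KTL configuration format: the schema, severity levels, condition names, and actions are all assumptions.

```python
# Hypothetical Emir-style monitor: each condition has a check and an action;
# a group's severity is the worst severity among its child conditions.
SEVERITY = {"ok": 0, "warn": 1, "alarm": 2}

config = {
    "dome": {
        "shutter_stuck": {"check": lambda t: "alarm" if t["shutter"] == "stuck" else "ok",
                          "action": lambda: "page operator"},
        "humidity_high": {"check": lambda t: "warn" if t["humidity"] > 85 else "ok",
                          "action": lambda: "close shutter"},
    },
}

def evaluate(config, telemetry):
    """Return {group: (worst severity, triggered actions)} for one telemetry snapshot."""
    report = {}
    for group, conds in config.items():
        worst, actions = "ok", []
        for name, cond in conds.items():
            state = cond["check"](telemetry)
            if SEVERITY[state] > SEVERITY[worst]:
                worst = state
            if state != "ok":
                actions.append(cond["action"]())
        report[group] = (worst, actions)
    return report

report = evaluate(config, {"shutter": "stuck", "humidity": 90})
```

    Aggregating child severities upward is what lets a single summary state value stand in for many primitive conditions, as the abstract notes.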

  11. Remote glucose monitoring in camp setting reduces the risk of prolonged nocturnal hypoglycemia.

    PubMed

    DeSalvo, Daniel J; Keith-Hynes, Patrick; Peyser, Thomas; Place, Jérôme; Caswell, Kim; Wilson, Darrell M; Harris, Breanne; Clinton, Paula; Kovatchev, Boris; Buckingham, Bruce A

    2014-01-01

    This study tested the feasibility and effectiveness of remote continuous glucose monitoring (CGM) in a diabetes camp setting. Twenty campers (7-21 years old) with type 1 diabetes were enrolled at each of three camp sessions lasting 5-6 days. On alternating nights, 10 campers were randomized to usual wear of a Dexcom (San Diego, CA) G4™ PLATINUM CGM system, and 10 were randomized to remote monitoring with the Dexcom G4 PLATINUM communicating with the Diabetes Assistant, a cell phone platform, to allow wireless transmission of CGM values. Up to 15 individual graphs and sensor values could be displayed on a single remote monitor or portable tablet. An alarm was triggered for values <70 mg/dL, and treatment was given for meter-confirmed hypoglycemia. The primary end point was to decrease the duration of hypoglycemic episodes <50 mg/dL. There were 320 nights of CGM data and 197 hypoglycemic events. Of the remote monitoring alarms, 79% were true (meter reading of <70 mg/dL). With remote monitoring, 100% of alarms were responded to, whereas without remote monitoring only 54% of alarms were responded to. The median duration of hypoglycemic events <70 mg/dL was 35 min without remote monitoring and 30 min with remote monitoring (P=0.078). Remote monitoring significantly decreased prolonged hypoglycemic events, eliminating all events <50 mg/dL lasting longer than 30 min as well as all events <70 mg/dL lasting more than 2 h. Remote monitoring is feasible at diabetes camps and effective in reducing the risk of prolonged nocturnal hypoglycemia. This technology will facilitate forthcoming studies to evaluate the efficacy of automated closed-loop systems in the camp setting.
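    The alarm thresholds in the study (70 mg/dL for the alarm, 50 mg/dL for the prolonged-hypoglycemia end point) lend themselves to a minimal sketch; the function name and the severity labels are hypothetical.

```python
# Minimal sketch of the remote-alarm thresholds described in the study.
def classify_reading(mg_dl: int) -> str:
    if mg_dl < 50:
        return "urgent-low"   # prolonged events at this level were eliminated
    if mg_dl < 70:
        return "low"          # alarm threshold used in the camp study
    return "in-range"

readings = [110, 68, 49, 72]
alarms = [(r, classify_reading(r)) for r in readings if classify_reading(r) != "in-range"]
```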

  12. Intelligent monitoring of critical pathological events during anesthesia.

    PubMed

    Gohil, Bhupendra; Gholamhosseini, Hamid; Harrison, Michael J; Lowe, Andrew; Al-Jumaily, Ahmed

    2007-01-01

    Expert algorithms in the field of intelligent patient monitoring have rapidly revolutionized patient care, thereby improving patient safety. Patient monitoring during anesthesia requires cautious attention by anesthetists, who are monitoring many modalities, diagnosing clinically critical events and performing patient management tasks simultaneously. The mishaps that occur during day-to-day anesthesia, causing disastrous errors in anesthesia administration, were classified and studied by Reason [1]. Human errors in anesthesia account for 82% of preventable mishaps [2]. The aim of this paper is to develop a clinically useful diagnostic alarm system for detecting critical events during anesthesia administration. The development of an expert diagnostic alarm system called 'RT-SAAM' for detecting critical pathological events in the operating theatre is presented. This system provides decision support to the anesthetist by presenting the diagnostic results on an integrative, ergonomic display, thus enhancing patient safety. The performance of the system was validated through a series of offline and real-time tests in the operating theatre. When detecting absolute hypovolaemia (AHV), a moderate level of agreement was observed between RT-SAAM and the human expert (anesthetist) during surgical procedures. RT-SAAM is a clinically useful diagnostic tool that can be easily modified for diagnosing additional critical pathological events such as relative hypovolaemia, fall in cardiac output, sympathetic response and malignant hyperpyrexia during surgical procedures. RT-SAAM is currently being tested at Auckland City Hospital with ethical approval from the local ethics committees.

  13. Online Toxicity Monitors (OTM) for Distribution System Water Quality Monitoring

    EPA Science Inventory

    Drinking water distribution systems in the U.S. are vulnerable to episodic contamination events (both unintentional and intentional). The U.S. Environmental Protection Agency (EPA) is conducting research to investigate the use of broad-spectrum online toxicity monitors (OTMs) in ...

  14. Safety of herbal products in Thailand: an analysis of reports in the thai health product vigilance center database from 2000 to 2008.

    PubMed

    Saokaew, Surasak; Suwankesawong, Wimon; Permsuwan, Unchalee; Chaiyakunapruk, Nathorn

    2011-04-01

    The use of herbal products continues to expand rapidly across the world and concerns regarding the safety of these products have been raised. In Thailand, Thai Vigibase, developed by the Health Product Vigilance Center (HPVC) under the Thai Food and Drug Administration, is the national database that collates reports from health product surveillance systems and programmes. Thai Vigibase can be used to identify signals of adverse events in patients receiving herbal products. The purpose of the study was to describe the characteristics of reported adverse events in patients receiving herbal products in Thailand. Thai Vigibase data from February 2000 to December 2008 involving adverse events reported in association with herbal products were used. This database includes case reports submitted through the spontaneous reporting system and intensive monitoring programmes. Under the spontaneous reporting system, adverse event reports are collected nationwide via a national network of 22 regional centres covering more than 800 public and private hospitals, and health service centres. An intensive monitoring programme was also conducted to monitor the five single herbal products listed in the Thai National List of Essential Medicines (NLEM), while another intensive monitoring programme was developed to monitor the four single herbal products that were under consideration for inclusion in the NLEM. The database contained patient demographics, adverse events associated with herbal products, and details on seriousness, causality and quality of reports. Descriptive statistics were used for data analyses. A total of 593 reports with 1868 adverse events involving 24 different products were made during the study period. The age range of individuals was 1-86 years (mean 47 years). Most case reports were obtained from the intensive monitoring programme. Of the reports, 72% involved females. 
The herbal products for which adverse events were most frequently reported were products containing turmeric (44%), followed by andrographis (10%), veld grape (10%), pennywort (7%), plai (6%), jewel vine (6%), bitter melon (5%) and snake plant (5%). Gastrointestinal problems were the most common adverse effects reported. Serious adverse events included Stevens-Johnson syndrome, anaphylactic shock and exfoliative dermatitis. Adverse event reports on herbal products were diverse, with most of them being reported through intensive monitoring programmes. Thai Vigibase is a potentially effective data source for signal detection of adverse events associated with herbal products.

  15. GTSO: Global Trace Synchronization and Ordering Mechanism for Wireless Sensor Network Monitoring Platforms

    PubMed Central

    Bonastre, Alberto; Ors, Rafael

    2017-01-01

    Monitoring is one of the best ways to evaluate the behavior of computer systems. When the monitored system is a distributed system—such as a wireless sensor network (WSN)—the monitoring operation must also be distributed, providing a distributed trace for further analysis. The temporal sequence of the events registered by the distributed monitoring platform (DMP) must be correctly established to provide cause-effect relationships between them, so the logs obtained in different monitor nodes must be synchronized. Many of the synchronization mechanisms applied to DMPs consist of adjusting the internal clocks of the nodes to the same value as a reference time. However, these mechanisms can create an incoherent event sequence. This article presents a new method to achieve global synchronization of the traces obtained in a DMP. It is based on periodic synchronization signals that are received by the monitor nodes and logged along with the recorded events. The mechanism processes all traces and generates a global post-synchronized trace by proportionally scaling all registered times according to the synchronization signals. It is intended to be a simple but efficient offline mechanism. Its application in a WSN-DMP demonstrates that it guarantees a correct ordering of the events, avoiding the aforementioned issues. PMID:29295494
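    The proportional-scaling step can be sketched as follows. This is a simplified illustration of the general idea, not the GTSO implementation: the two-signal interface and the function name are assumptions, and real traces would carry many sync signals, scaling piecewise between consecutive pairs.

```python
# Hypothetical post-synchronization sketch: each node logs events with its
# local clock plus the local timestamps of two global sync signals; local
# times are rescaled linearly so the signals line up on the global axis.
def resync(events, local_sync, global_sync):
    """Map local event times onto the global time axis by linear scaling.

    events:      list of (local_time, label)
    local_sync:  (t0_local, t1_local)  local timestamps of two sync signals
    global_sync: (t0_global, t1_global)  agreed global times of those signals
    """
    (l0, l1), (g0, g1) = local_sync, global_sync
    scale = (g1 - g0) / (l1 - l0)
    return sorted((g0 + (t - l0) * scale, label) for t, label in events)

# A node whose clock runs 2% fast and is offset by 5 units:
node_a = resync([(15.0, "rx"), (105.0, "tx")], (5.0, 107.0), (0.0, 100.0))
```

    Because every node is mapped onto the same global axis, cause-effect ordering between nodes is preserved without ever adjusting the nodes' clocks online.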

  16. ATLAS EventIndex general dataflow and monitoring infrastructure

    NASA Astrophysics Data System (ADS)

    Fernández Casaní, Á.; Barberis, D.; Favareto, A.; García Montoro, C.; González de la Hoz, S.; Hřivnáč, J.; Prokoshin, F.; Salt, J.; Sánchez, J.; Többicke, R.; Yuan, R.; ATLAS Collaboration

    2017-10-01

    The ATLAS EventIndex has been running in production since mid-2015, reliably collecting information worldwide about all produced events and storing it in a central Hadoop infrastructure at CERN. A subset of this information is copied to an Oracle relational database for fast dataset discovery, event picking, cross-checks with other ATLAS systems and checks for event duplication. The system design and its optimization serve event picking for requests ranging from a few events up to tens of thousands of events; in addition, data consistency checks are performed for large production campaigns. Detecting duplicate events within the scope of physics collections has recently arisen as an important use case. This paper describes the general architecture of the project and the data flow and operation issues, which are addressed by recent developments to improve the throughput of the overall system. To this end, the data collection system reduces its use of the messaging infrastructure to overcome the performance shortcomings detected during production peaks; an object storage approach is used instead to convey the event index information, with messages signalling its location and status. Recent changes in the Producer/Consumer architecture are also presented in detail, as well as the monitoring infrastructure.
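    The duplicate-detection use case can be illustrated with a toy index lookup (a hypothetical record layout, not the ATLAS schema): events keyed by run and event number are reported when they appear in more than one place.

```python
# Toy duplicate-event check over an event index: group records by
# (run, event number) and report keys indexed more than once.
from collections import defaultdict

def find_duplicates(records):
    """records: iterable of dicts with 'run', 'event', 'dataset' keys."""
    seen = defaultdict(list)
    for rec in records:
        seen[(rec["run"], rec["event"])].append(rec["dataset"])
    return {key: ds for key, ds in seen.items() if len(ds) > 1}

dups = find_duplicates([
    {"run": 300863, "event": 17, "dataset": "A"},
    {"run": 300863, "event": 18, "dataset": "A"},
    {"run": 300863, "event": 17, "dataset": "B"},  # same event indexed twice
])
```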

  17. Real-time classification of signals from three-component seismic sensors using neural nets

    NASA Astrophysics Data System (ADS)

    Bowman, B. C.; Dowla, F.

    1992-05-01

    Adaptive seismic data acquisition systems with capabilities for signal discrimination and event classification are important in treaty monitoring, proliferation, and earthquake early detection systems. Potential applications include monitoring underground chemical explosions, as well as other military, cultural, and natural activities whose signal characteristics change rapidly and without warning. In these applications, the ability to detect and interpret events rapidly without falling behind the influx of data is critical. We developed a system for real-time data acquisition, analysis, learning, and classification of recorded events employing some of the latest technology in computer hardware, software, and artificial neural network methods. The system is able to train dynamically and updates its knowledge based on new data. The software is modular and hardware-independent; i.e., the front-end instrumentation is transparent to the analysis system. The software is designed to take advantage of the multiprocessing environment of the Unix operating system. The Unix System V shared memory and static RAM protocols for data access and the semaphore mechanism for interprocess communications were used. As the three-component sensor detects a seismic signal, it is displayed graphically on a color monitor using X11/Xlib graphics with interactive screening capabilities. For interesting events, the triaxial signal polarization is computed, a fast Fourier transform (FFT) algorithm is applied, and the normalized power spectrum is transmitted to a backpropagation neural network for event classification. The system is currently capable of handling three data channels at a sampling rate of 500 Hz, which covers the bandwidth of most seismic events. The system has been tested in a laboratory setting with artificial events generated in the vicinity of a three-component sensor.
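    The normalized-power-spectrum feature step can be sketched as follows. This is a simplified stand-in (a direct DFT instead of an FFT, and the neural network omitted); in the real pipeline the resulting vector would feed the backpropagation classifier.

```python
# Sketch of the spectral feature extraction: compute the normalized power
# spectrum of a windowed trace. A plain O(n^2) DFT stands in for the FFT.
import math

def power_spectrum(samples):
    """Normalized one-sided power spectrum of a real-valued trace."""
    n = len(samples)
    power = []
    for k in range(n // 2 + 1):
        re = sum(x * math.cos(-2 * math.pi * k * i / n) for i, x in enumerate(samples))
        im = sum(x * math.sin(-2 * math.pi * k * i / n) for i, x in enumerate(samples))
        power.append(re * re + im * im)
    total = sum(power) or 1.0
    return [p / total for p in power]  # normalization removes amplitude dependence

# A pure 2-cycle sine sampled at 8 points concentrates its power in bin 2.
sine = [math.sin(2 * math.pi * 2 * i / 8) for i in range(8)]
spec = power_spectrum(sine)
```

    Normalizing by total power, as the abstract implies, makes the classifier sensitive to spectral shape rather than event magnitude.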

  18. Data Automata in Scala

    NASA Technical Reports Server (NTRS)

    Havelund, Klaus

    2014-01-01

    The field of runtime verification has during the last decade seen a multitude of systems for monitoring event sequences (traces) emitted by a running system. The objective is to ensure the correctness of a system by checking its execution traces against formal specifications representing requirements. A special challenge is data-parameterized events, where monitors have to keep track of the combination of control states as well as data constraints, relating events and the data they carry across time points. This poses a challenge with respect to the efficiency of monitors, as well as the expressiveness of logics. Data automata are a form of automaton in which states are parameterized with data, supporting monitoring of data-parameterized events. We describe the full details of a very simple API in the Scala programming language, an internal DSL (Domain-Specific Language), implementing data automata. The small implementation suggests a design pattern. Data automata allow transition conditions to refer to states other than the source state, and allow target states of transitions to be inlined, offering a temporal-logic-flavored notation. An embedding of a logic in a high-level language like Scala additionally allows monitors to be programmed using all of Scala's language constructs, offering the full flexibility of a programming language. The framework is demonstrated on an XML processing scenario previously addressed in related work.
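    The core idea of data-parameterized monitoring can be illustrated outside the paper's Scala DSL (this sketch is in Python and uses a hypothetical property, not the paper's API): the monitor keeps one "acquired" state per data value, checking that every `acquire(r)` is eventually matched by `release(r)`.

```python
# Concept sketch of a data-parameterized monitor: states are parameterized
# by the resource value carried on each event.
class AcquireRelease:
    def __init__(self):
        self.held = set()      # one "acquired" state per resource value
        self.errors = []

    def on(self, event: str, resource: str):
        if event == "acquire":
            self.held.add(resource)
        elif event == "release":
            if resource not in self.held:
                self.errors.append(f"release of unheld {resource}")
            self.held.discard(resource)

    def end(self):
        """At end of trace, any still-held resource is a violation."""
        self.errors += [f"{r} never released" for r in sorted(self.held)]
        return self.errors

m = AcquireRelease()
for ev in [("acquire", "A"), ("acquire", "B"), ("release", "A")]:
    m.on(*ev)
violations = m.end()
```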

  19. Expert systems for real-time monitoring and fault diagnosis

    NASA Technical Reports Server (NTRS)

    Edwards, S. J.; Caglayan, A. K.

    1989-01-01

    Methods for building real-time onboard expert systems were investigated, and the use of expert systems technology was demonstrated in improving the performance of current real-time onboard monitoring and fault diagnosis applications. The potential applications of the proposed research include an expert system environment allowing the integration of expert systems into conventional time-critical application solutions, a grammar for describing the discrete event behavior of monitoring and fault diagnosis systems, and their applications to new real-time hardware fault diagnosis and monitoring systems for aircraft.

  20. Arden Syntax Clinical Foundation Framework for Event Monitoring in Intensive Care Units: Report on a Pilot Study.

    PubMed

    de Bruin, Jeroen S; Zeckl, Julia; Adlassnig, Katharina; Blacky, Alexander; Koller, Walter; Rappelsberger, Andrea; Adlassnig, Klaus-Peter

    2017-01-01

    The creation of clinical decision support systems has gained strong momentum in recent years, but their integration into clinical routine has lagged behind, partly due to a lack of interoperability and of trust by physicians. We report on the implementation of a clinical foundation framework in Arden Syntax, comprising knowledge units for (a) preprocessing raw clinical data, (b) the determination of single clinical concepts, and (c) more complex medical knowledge, which can be modeled through the composition and configuration of knowledge units in this framework. Thus, it can be tailored to clinical institutions or patients' caregivers. In the present version, we integrated knowledge units for several infection-related clinical concepts into the framework and developed a clinical event monitoring system on top of the framework that employs three different scenarios for monitoring clinical signs of bloodstream infection. The clinical event monitoring system was tested using data from intensive care units at Vienna General Hospital, Austria.

  1. Reporting vaccine-associated adverse events.

    PubMed Central

    Duclos, P.; Hockin, J.; Pless, R.; Lawlor, B.

    1997-01-01

    OBJECTIVE: To determine family physicians' awareness of the need to monitor and report vaccine-associated adverse events (VAAE) in Canada and to identify mechanisms that could facilitate reporting. DESIGN: Mailed survey. SETTING: Canadian family practices. PARTICIPANTS: Random sample of 747 family physicians. Overall response rate was 32% (226 of 717 eligible physicians). MAIN OUTCOME MEASURES: Access to education on VAAE; knowledge about VAAE monitoring systems, reporting criteria, and reporting forms; method of reporting VAAEs and reasons for not reporting them; and current experience with VAAEs. RESULTS: Of 226 respondents, 55% reported observing VAAEs, and 42% reported the event. Fewer than 50% were aware of a monitoring system for VAAE, and only 39% had had VAAE-related education during medical training. Only 28% knew the reporting criteria. Reporting was significantly associated with knowledge of VAAE monitoring systems and reporting criteria (P < 0.01). CONCLUSION: Physicians need more feedback and education on VAAE reporting and more information about the importance of reporting and about reporting criteria and methods. PMID:9303234

  2. Monitoring Cellular Events in Living Mast Cells Stimulated with an Extremely Small Amount of Fluid on a Microchip

    NASA Astrophysics Data System (ADS)

    Munaka, Tatsuya; Abe, Hirohisa; Kanai, Masaki; Sakamoto, Takashi; Nakanishi, Hiroaki; Yamaoka, Tetsuji; Shoji, Shuichi; Murakami, Akira

    2006-07-01

    We successfully developed a measurement system for real-time analysis of cellular function using a newly designed microchip. The microchip was equipped with a micro cell-incubation chamber (240 nl) and allowed stimulation with a very small amount of stimulus (as little as 24 nl). Using the microchip system, cultivation of mast cells was successfully carried out, and cellular events were monitored after stimulation with an extremely small amount of fluid on the microchip. This system could be applicable to various types of cellular analysis, including real-time monitoring of cellular responses to stimulation.

  3. Event-driven simulation in SELMON: An overview of EDSE

    NASA Technical Reports Server (NTRS)

    Rouquette, Nicolas F.; Chien, Steve A.; Charest, Leonard, Jr.

    1992-01-01

    EDSE (event-driven simulation engine), a model-based event-driven simulator implemented for SELMON, a tool for sensor selection and anomaly detection in real-time monitoring, is described. The simulator is used in conjunction with a causal model to predict future behavior of the model from observed data. The behavior of the causal model is interpreted as equivalent to the behavior of the physical system being modeled. An overview of the functionality of the simulator and the model-based event-driven simulation paradigm on which it is based is provided. Included are high-level descriptions of the following key properties: event consumption and event creation, iterative simulation, and synchronization and filtering of monitoring data from the physical system. Finally, how EDSE stands with respect to the relevant open issues of discrete-event and model-based simulation is discussed.
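    The event-consumption/event-creation cycle mentioned above is the standard discrete-event pattern, sketched here generically (this is not the EDSE implementation; the handler table and event names are hypothetical): consuming an event may schedule further events, and the loop advances simulated time.

```python
# Generic discrete-event simulation loop: pop events in time order; each
# handler may return new (time, kind) events to be scheduled.
import heapq

def simulate(initial, handlers, horizon=10.0):
    """Run the event loop until the queue is empty or the horizon is passed."""
    queue = list(initial)
    heapq.heapify(queue)
    log = []
    while queue:
        time, kind = heapq.heappop(queue)
        if time > horizon:
            break
        log.append((time, kind))                      # consume the event
        for new_event in handlers.get(kind, lambda t: [])(time):
            heapq.heappush(queue, new_event)          # create follow-on events
    return log

handlers = {"pump_on": lambda t: [(t + 2.0, "pressure_rise")],
            "pressure_rise": lambda t: []}
trace = simulate([(0.0, "pump_on"), (1.0, "pump_on")], handlers)
```

    In a monitoring context like SELMON's, the log produced by such a loop is what gets synchronized and compared against data observed from the physical system.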

  4. Design and rationale of the ANALYZE ST study: a prospective, nonrandomized, multicenter ST monitoring study to detect acute coronary syndrome events in implantable cardioverter-defibrillator patients.

    PubMed

    Gibson, C Michael; Krucoff, Mitchell; Kirtane, Ajay J; Rao, Sunil V; Mackall, Judith A; Matthews, Ray; Saba, Samir; Waksman, Ron; Holmes, David

    2014-10-01

    In the setting of ST-segment elevation myocardial infarction, timely restoration of normal blood flow is associated with improved myocardial salvage and survival. Despite improvements in door-to-needle and door-to-balloon times, there remains an unmet need with respect to improved symptom-to-door times. A prior report of an implanted device to monitor ST-segment deviation demonstrated very short times to reperfusion among patients with an acute coronary syndrome (ACS) with documented thrombotic occlusion. The goal of the ANALYZE ST study is to evaluate the safety and effectiveness of a novel ST-segment monitoring feature using an existing implantable cardioverter-defibrillator (ICD) among patients with known coronary artery disease. The ANALYZE ST study is a prospective, nonrandomized, multicenter, pivotal Investigational Device Exemption study enrolling 5,228 patients with newly implanted ICD systems for standard clinical indications who also have a documented history of coronary artery disease. Patients will be monitored for 48 months, during which effectiveness of the device for the purpose of early detection of cardiac injury will be evaluated by analyzing the sensitivity of the ST monitoring feature to identify clinical ACS events. In addition, the safety of the ST monitoring feature will be evaluated through the assessment of the percentage of patients for which monitoring produces a false-positive event over the course of 12 months. The ANALYZE ST trial is testing the hypothesis that the ST monitoring feature in the Fortify ST ICD system (St. Jude Medical, Inc., St. Paul, MN) (or other ICD systems with the ST monitoring feature) will accurately identify patients with clinical ACS events. Copyright © 2014 Mosby, Inc. All rights reserved.

  5. The relative importance of real-time in-cab and external feedback in managing fatigue in real-world commercial transport operations.

    PubMed

    Fitzharris, Michael; Liu, Sara; Stephens, Amanda N; Lenné, Michael G

    2017-05-29

    Real-time driver monitoring systems represent a solution to address key behavioral risks as they occur, particularly distraction and fatigue. The efficacy of these systems in real-world settings is largely unknown. This article has three objectives: (1) to document the incidence and duration of fatigue in real-world commercial truck-driving operations, (2) to determine the reduction, if any, in the incidence of fatigue episodes associated with providing feedback, and (3) to tease apart the relative contribution of in-cab warnings from 24/7 monitoring and feedback to employers. Data collected from a commercially available in-vehicle camera-based driver monitoring system installed in a commercial truck fleet operating in Australia were analyzed. The real-time driver monitoring system makes continuous assessments of driver drowsiness based on eyelid position and other factors. Data were collected in a baseline period where no feedback was provided to drivers. Real-time feedback to drivers then occurred via in-cab auditory and haptic warnings, which were further enhanced by direct feedback by company management when fatigue events were detected by external 24/7 monitors. Fatigue incidence rates and their timing of occurrence across the three time periods were compared. Relative to no feedback being provided to drivers when fatigue events were detected, in-cab warnings resulted in a 66% reduction in fatigue events, with a 95% reduction achieved by the real-time provision of direct feedback in addition to in-cab warnings (p < 0.01). With feedback, fatigue events were shorter in duration and occurred later in the trip, and fewer drivers had more than one verified fatigue event per trip. That the provision of feedback to the company on driver fatigue events in real time provides greater benefit than feedback to the driver alone has implications for companies seeking to mitigate risks associated with fatigue. 
Having fewer fatigue events is likely a reflection of the device itself and the accompanying safety culture of the company in terms of how the information is used. Data were analysed on a per-truck trip basis, and the findings are indicative of fatigue events in a large-scale commercial transport fleet. Future research ought to account for individual driver performance, which was not possible with the available data in this retrospective analysis. Evidence that real-time driver monitoring feedback is effective in reducing fatigue events is invaluable in the development of fleet safety policies, and of future national policy and vehicle safety regulations. Implications for automotive driver monitoring are discussed.
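
The headline 66% and 95% figures are relative reductions against the baseline incidence rate. A minimal sketch of that comparison, with the per-trip rates themselves assumed for illustration (the article reports only the percentage reductions):

```python
def percent_reduction(baseline_rate: float, treated_rate: float) -> float:
    """Percent reduction in fatigue-event incidence relative to baseline."""
    return 100.0 * (baseline_rate - treated_rate) / baseline_rate

# Illustrative incidence rates (events per 1,000 trips, assumed) consistent
# with the reported 66% (in-cab only) and 95% (in-cab + direct feedback)
baseline = 20.0
print(round(percent_reduction(baseline, 6.8), 1))  # 66.0
print(round(percent_reduction(baseline, 1.0), 1))  # 95.0
```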

  6. Development of an intelligent hydroinformatic system for real-time monitoring and assessment of civil infrastructure

    NASA Astrophysics Data System (ADS)

    Cahill, Paul; Michalis, Panagiotis; Solman, Hrvoje; Kerin, Igor; Bekic, Damir; Pakrashi, Vikram; McKeogh, Eamon

    2017-04-01

    With the effects of climate change becoming more apparent, extreme weather events are now occurring with greater frequency throughout the world. Such extreme events have resulted in increased high intensity flood events which are having devastating consequences on hydro-structures, especially on bridge infrastructure. The remote and often inaccessible nature of such bridges makes inspections problematic, a major concern if safety assessments are required during and after extreme flood events. A solution to this is the introduction of smart, low cost sensing solutions at locations susceptible to hydro-hazards. Such solutions can provide real-time information on the health of the bridge and its environments, with such information aiding in the mitigation of the risks associated with extreme weather events. This study presents the development of an intelligent system for remote, real-time monitoring of hydro-hazards to bridge infrastructure. The solution consists of two types of remote monitoring stations which have the capacity to monitor environmental conditions and provide real-time information to a centralized, big data database solution, from which an intelligent decision support system will accommodate the results to control and manage bridge, river and catchment assets. The first device developed as part of the system is the Weather Information Logging Device (WILD), which monitors rainfall, temperature and air and soil moisture content. The ability of the WILD to monitor rainfall in real time enables flood early warning alerts and predictive river flow conditions, thereby enabling decision makers to make timely and effective decisions about critical infrastructures in advance of extreme flood events. The WILD is complemented by a second monitoring device, the Bridge Information Recording Device (BIRD), which monitors water levels at a given location in real-time. 
The monitoring of water levels of a river allows for, among other applications, hydraulic modelling to assess the likely impact that severe flood events will have on a bridge's foundation, particularly due to scour. The process of reading and validating data from the WILD and BIRD buffer servers is outlined, as is the transmission protocol used for the sending of recorded data to a centralized repository for further use and analysis. Finally, the development of a centralized repository for the collection of data from the WILD and BIRD devices is presented. Eventually the big data solution would be used to receive, store and send the monitored data to the hydrological models, whether existing or developed, and the results would be transmitted to the intelligent decision support system based on a web-based platform, for managing, planning and executing data, processes and procedures for bridge assets. The development of an intelligent hydroinformatic system is an important tool for the protection of key infrastructure assets from the increasingly common effects of climate change. Acknowledgement: The authors wish to acknowledge the financial support of the European Commission, through the Marie Curie Industry-Academia Partnership and Pathways Network BRIDGE SMS (Intelligent Bridge Assessment Maintenance and Management System) - FP7-People-2013-IAPP- 612517.
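
Reading and validating buffered sensor records before forwarding them to the central repository can be as simple as a plausibility filter. A minimal sketch; the field names and bounds below are illustrative assumptions, not the project's actual schema:

```python
def validate_record(record: dict) -> bool:
    """Accept a buffered reading only if it parses and is physically plausible.
    Field names and ranges are assumptions for illustration."""
    try:
        level = float(record["water_level_m"])
        ts = int(record["timestamp"])
    except (KeyError, TypeError, ValueError):
        return False
    return 0.0 <= level <= 30.0 and ts > 0

records = [
    {"water_level_m": "2.4", "timestamp": 1490000000},    # plausible
    {"water_level_m": "-9999", "timestamp": 1490000060},  # sensor error code
    {"timestamp": 1490000120},                            # missing field
]
print([validate_record(r) for r in records])  # [True, False, False]
```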

  7. Acoustic Emission Sensing for Maritime Diesel Engine Performance and Health

    DTIC Science & Technology

    2016-05-01

    diesel internal combustion engine operating condition and health. A commercial-off- the-shelf AE monitoring system and a purpose-built data acquisition...subjected to external events such as a combustion event, fluid flow or the opening and closing of valves. This document reports on the monitoring and...conjunction with injection- combustion processes and valve events. AE from misfire as the result of a fuel injector malfunction was readily detectable

  8. A vision-based tool for the control of hydraulic structures in sewer systems

    NASA Astrophysics Data System (ADS)

    Nguyen, L.; Sage, D.; Kayal, S.; Jeanbourquin, D.; Rossi, L.

    2009-04-01

    During rain events, the total amount of the wastewater/storm-water mixture cannot be treated in the wastewater treatment plant; the overflowed water goes directly into the environment (lakes, rivers, streams) via devices called combined sewers overflows (CSOs). This water is untreated and is recognized as an important source of pollution. In most cases, the quantity of overflowed water is unknown due to high hydraulic turbulence during rain events; this quantity is often significant. For this reason, the monitoring of the water flow and the water level is of crucial environmental importance. Robust monitoring of sewer systems is a challenging task to achieve. Indeed, the environment inside sewer systems is inherently harsh and hostile: constant humidity of 100%, fast and large water level changes, corrosive atmosphere, presence of gas, difficult access, solid debris inside the flow. A flow monitoring based on traditional probes placed inside the water (such as Doppler flow meter) is difficult to conduct because of the solid material transported by the flow. Probes placed outside the flow such as ultrasonic water level probes are often used; however the measurement is generally done on only one particular point. Experience has shown that the water level in CSOs during rain events is far from being constant due to hydraulic turbulence. Thus, such probes output uncertain information. Moreover, a check of the data reliability is impossible to achieve. The HydroPix system proposes a novel approach to the monitoring of sewers based on video images, without contact with the water flow. The goal of this system is to provide a monitoring tool for wastewater system managers (end-users). The hardware was chosen in order to suit the harsh conditions of sewer systems: Cameras are 100% waterproof and corrosion-resistant; Infra-red LED illumination systems are used (waterproof, low power consumption); A waterproof case contains the registration and communication system. 
The monitoring software has the following requirements: visual analysis of particular hydraulic behavior, automatic vision-based flow measurements, automatic alarm system for particular events (overflows, risk of flooding, etc), database for data management (images, events, measurements, etc.), ability to be controlled remotely. The software is implemented in modular server/client architecture under the LabVIEW development system. We have conducted conclusive in situ tests in various sewer configurations (CSOs, storm-water sewerage, WWTP); they have shown the ability of the HydroPix to perform accurate monitoring of hydraulic structures. Visual information demonstrated a better understanding of the flow behavior in complex and difficult environments.

  9. Flight Avionics Sequencing Telemetry (FAST) DIV Latching Display

    NASA Technical Reports Server (NTRS)

    Moore, Charlotte

    2010-01-01

    The NASA Engineering (NE) Directorate at Kennedy Space Center provides engineering services to major programs such as: Space Shuttle, International Space Station, and the Launch Services Program (LSP). The Avionics Division within NE provides avionics and flight control systems engineering support to LSP. The Launch Services Program is responsible for procuring safe and reliable services for transporting critical, one of a kind, NASA payloads into orbit. As a result, engineers must monitor critical flight events during countdown and launch to assess anomalous behavior or any unexpected occurrence. The goal of this project is to take a tailored Systems Engineering approach to design, develop, and test Iris telemetry displays. The Flight Avionics Sequencing Telemetry Delta-IV (FAST-D4) displays will provide NASA with an improved flight event monitoring tool to evaluate launch vehicle health and performance during system-level ground testing and flight. Flight events monitored will include data from the Redundant Inertial Flight Control Assembly (RIFCA) flight computer and launch vehicle command feedback data. When a flight event occurs, the flight event is illuminated on the display. This will enable NASA Engineers to monitor critical flight events on the day of launch. Completion of this project requires rudimentary knowledge of launch vehicle Guidance, Navigation, and Control (GN&C) systems, telemetry, and console operation. Work locations for the project include the engineering office, NASA telemetry laboratory, and Delta launch sites.
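
The "latching" behavior in the title can be sketched directly: once an event bit is seen in the telemetry stream, the indicator stays lit until reset, so engineers cannot miss a transient event. A minimal sketch, not the actual FAST-D4 implementation:

```python
class LatchingIndicator:
    """Stays lit after the first active sample until explicitly reset."""
    def __init__(self):
        self.lit = False

    def update(self, event_active: bool) -> None:
        if event_active:
            self.lit = True  # latch: never cleared by later inactive samples

    def reset(self) -> None:
        self.lit = False

ind = LatchingIndicator()
for sample in (False, True, False, False):  # transient event in the stream
    ind.update(sample)
print(ind.lit)  # True
```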

  10. Developing a flood monitoring system from remotely sensed data for the Limpopo basin

    USGS Publications Warehouse

    Asante, K.O.; Macuacua, R.D.; Artan, G.A.; Lietzow, R.W.; Verdin, J.P.

    2007-01-01

    This paper describes the application of remotely sensed precipitation to the monitoring of floods in a region that regularly experiences extreme precipitation and flood events, often associated with cyclonic systems. Precipitation data, which are derived from spaceborne radar aboard the National Aeronautics and Space Administration's Tropical Rainfall Measuring Mission and from National Oceanic and Atmospheric Administration's infrared-based products, are used to monitor areas experiencing extreme precipitation events that are defined as exceedance of a daily mean areal average value of 50 mm over a catchment. The remotely sensed precipitation data are also ingested into a hydrologic model that is parameterized using spatially distributed elevation, soil, and land cover data sets that are available globally from remote sensing and in situ sources. The resulting stream-flow is classified as an extreme flood event when flow anomalies exceed 1.5 standard deviations above the short-term mean. In an application in the Limpopo basin, it is demonstrated that the use of satellite-derived precipitation allows for the identification of extreme precipitation and flood events, both in terms of relative intensity and spatial extent. The system is used by water authorities in Mozambique to proactively initiate independent flood hazard verification before generating flood warnings. The system also serves as a supplementary information source when in situ gauging systems are disrupted. This paper concludes that remotely sensed precipitation and derived products greatly enhance the ability of water managers in the Limpopo basin to monitor extreme flood events and provide at-risk communities with early warning information. © 2007 IEEE.
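
The paper's two trigger criteria, a 50 mm daily mean areal precipitation threshold and a flow anomaly above 1.5 standard deviations over the short-term mean, can be sketched as (the flow history below is synthetic):

```python
import statistics

PRECIP_THRESHOLD_MM = 50.0  # daily mean areal average, per the paper
FLOW_SIGMA = 1.5            # anomaly threshold, per the paper

def is_extreme_precip(daily_mean_areal_mm: float) -> bool:
    return daily_mean_areal_mm > PRECIP_THRESHOLD_MM

def is_extreme_flow(short_term_flows, current_flow) -> bool:
    mean = statistics.fmean(short_term_flows)
    sd = statistics.stdev(short_term_flows)
    return current_flow > mean + FLOW_SIGMA * sd

history = [100, 110, 95, 105, 90, 100, 108, 97]  # synthetic flows (m^3/s)
print(is_extreme_precip(62.0))          # True
print(is_extreme_flow(history, 220.0))  # True
print(is_extreme_flow(history, 105.0))  # False
```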

  11. Is There a Lexical Bias Effect in Comprehension Monitoring?

    ERIC Educational Resources Information Center

    Severens, Els; Hartsuiker, Robert J.

    2009-01-01

    Event-related potentials were used to investigate if there is a lexical bias effect in comprehension monitoring. The lexical bias effect in language production (the tendency of phonological errors to result in existing words rather than nonwords) has been attributed to an internal self-monitoring system, which uses the comprehension system, and…

  12. Remote monitoring of electromagnetic signals and seismic events using smart mobile devices

    NASA Astrophysics Data System (ADS)

    Georgiadis, Pantelis; Cavouras, Dionisis; Sidiropoulos, Konstantinos; Ninos, Konstantinos; Nomicos, Constantine

    2009-06-01

    This study presents the design and development of a novel mobile wireless system to be used for monitoring seismic events and related electromagnetic signals, employing smart mobile devices like personal digital assistants (PDAs) and wireless communication technologies such as wireless local area networks (WLANs), general packet radio service (GPRS) and universal mobile telecommunications system (UMTS). The proposed system enables scientists to access critical data while being geographically independent of the sites of data sources, rendering it as a useful tool for preliminary scientific analysis.

  13. Space Weather and the Ground-Level Solar Proton Events of the 23rd Solar Cycle

    NASA Astrophysics Data System (ADS)

    Shea, M. A.; Smart, D. F.

    2012-10-01

    Solar proton events can adversely affect space and ground-based systems. Ground-level events are a subset of solar proton events that have a harder spectrum than average solar proton events and are detectable on Earth's surface by cosmic radiation ionization chambers, muon detectors, and neutron monitors. This paper summarizes the space weather effects associated with ground-level solar proton events during the 23rd solar cycle. These effects include communication and navigation systems, spacecraft electronics and operations, space power systems, manned space missions, and commercial aircraft operations. The major effect of ground-level events that affect manned spacecraft operations is increased radiation exposure. The primary effect on commercial aircraft operations is the loss of high frequency communication and, at extreme polar latitudes, an increase in the radiation exposure above that experienced from the background galactic cosmic radiation. Calculations of the maximum potential aircraft polar route exposure for each ground-level event of the 23rd solar cycle are presented. The space weather effects in October and November 2003 are highlighted together with on-going efforts to utilize cosmic ray neutron monitors to predict high energy solar proton events, thus providing an alert so that system operators can possibly make adjustments to vulnerable spacecraft operations and polar aircraft routes.

  14. LHCb Online event processing and filtering

    NASA Astrophysics Data System (ADS)

    Alessio, F.; Barandela, C.; Brarda, L.; Frank, M.; Franek, B.; Galli, D.; Gaspar, C.; Herwijnen, E. v.; Jacobsson, R.; Jost, B.; Köstner, S.; Moine, G.; Neufeld, N.; Somogyi, P.; Stoica, R.; Suman, S.

    2008-07-01

    The first level trigger of LHCb accepts one million events per second. After preprocessing in custom FPGA-based boards these events are distributed to a large farm of PC-servers using a high-speed Gigabit Ethernet network. Synchronisation and event management are achieved by the Timing and Trigger system of LHCb. Due to the complex nature of the selection of B-events, which are the main interest of LHCb, a full event-readout is required. Event processing on the servers is parallelised on an event basis. The reduction factor is typically 1/500. The remaining events are forwarded to a formatting layer, where the raw data files are formed and temporarily stored. A small part of the events is also forwarded to a dedicated farm for calibration and monitoring. The files are subsequently shipped to the CERN Tier0 facility for permanent storage and from there to the various Tier1 sites for reconstruction. In parallel, files are used by various monitoring and calibration processes running within the LHCb Online system. The entire data-flow is controlled and configured by means of a SCADA system and several databases. After an overview of the LHCb data acquisition and its design principles this paper will emphasize the LHCb event filter system, which is now implemented using the final hardware and will be ready for data-taking for the LHC startup. Control, configuration and security aspects will also be discussed.
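
The quoted numbers imply a simple throughput budget: one million accepted events per second into the farm and a ~1/500 software reduction leave about 2,000 events per second for the formatting layer. A toy sketch of per-event, independently parallelisable selection (the accept function here is synthetic, not the B-physics selection):

```python
def filter_farm(events, accept):
    """Each event is judged independently, mirroring event-basis parallelism."""
    return [e for e in events if accept(e)]

INPUT_RATE_HZ = 1_000_000
print(INPUT_RATE_HZ // 500)  # 2000 events/s to the formatting layer

# Synthetic stream keeping exactly 1 event in 500
kept = filter_farm(range(10_000), lambda e: e % 500 == 0)
print(len(kept))  # 20
```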

  15. Complex Event Recognition Architecture

    NASA Technical Reports Server (NTRS)

    Fitzgerald, William A.; Firby, R. James

    2009-01-01

    Complex Event Recognition Architecture (CERA) is the name of a computational architecture, and software that implements the architecture, for recognizing complex event patterns that may be spread across multiple streams of input data. One of the main components of CERA is an intuitive event pattern language that simplifies what would otherwise be the complex, difficult tasks of creating logical descriptions of combinations of temporal events and defining rules for combining information from different sources over time. In this language, recognition patterns are defined in simple, declarative statements that combine point events from given input streams with those from other streams, using conjunction, disjunction, and negation. Patterns can be built on one another recursively to describe very rich, temporally extended combinations of events. Thereafter, a run-time matching algorithm in CERA efficiently matches these patterns against input data and signals when patterns are recognized. CERA can be used to monitor complex systems and to signal operators or initiate corrective actions when anomalous conditions are recognized. CERA can be run as a stand-alone monitoring system, or it can be integrated into a larger system to automatically trigger responses to changing environments or problematic situations.
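
The conjunction/disjunction/negation pattern language can be sketched with a few combinators over a set of point events. The combinator names below are invented for illustration and are not CERA's actual syntax:

```python
# CERA-style declarative patterns as composable predicates over point events.
def match(pattern, events):
    return pattern(set(events))

def atom(name):  return lambda evs: name in evs
def conj(p, q):  return lambda evs: p(evs) and q(evs)
def disj(p, q):  return lambda evs: p(evs) or q(evs)
def neg(p):      return lambda evs: not p(evs)

# "pressure spike together with either valve or pump fault, but no shutdown"
pattern = conj(atom("pressure_spike"),
               conj(disj(atom("valve_fault"), atom("pump_fault")),
                    neg(atom("shutdown"))))

print(match(pattern, ["pressure_spike", "pump_fault"]))              # True
print(match(pattern, ["pressure_spike", "pump_fault", "shutdown"]))  # False
```

Patterns built this way compose recursively, echoing CERA's ability to layer patterns on one another; the real system additionally handles temporal extent and streaming input.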

  16. Intelligent Software Agents: Sensor Integration and Response

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kulesz, James J; Lee, Ronald W

    2013-01-01

    In a post-Macondo world the buzzwords are Integrity Management and Incident Response Management. The twin processes are not new but the opportunity to link the two is novel. Intelligent software agents can be used with sensor networks in distributed and centralized computing systems to enhance real-time monitoring of system integrity as well as manage the follow-on incident response to changing, and potentially hazardous, environmental conditions. The software components are embedded at the sensor network nodes in surveillance systems used for monitoring unusual events. When an event occurs, the software agents establish a new concept of operation at the sensing node, post the event status to a blackboard for software agents at other nodes to see, and then react quickly and efficiently to monitor the scale of the event. The technology addresses a current challenge in sensor networks that prevents a rapid and efficient response when a sensor measurement indicates that an event has occurred. By using intelligent software agents - which can be stationary or mobile, interact socially, and adapt to changing situations - the technology offers features that are particularly important when systems need to adapt to active circumstances. For example, when a release is detected, the local software agent collaborates with other agents at the node to exercise the appropriate operation, such as: targeted detection, increased detection frequency, decreased detection frequency for other non-alarming sensors, and determination of environmental conditions so that adjacent nodes can be informed that an event is occurring and when it will arrive. The software agents at the nodes can also post the data in a targeted manner, so that agents at other nodes and the command center can exercise appropriate operations to recalibrate the overall sensor network and associated intelligence systems. 
The paper describes the concepts and provides examples of real-world implementations including the Threat Detection and Analysis System (TDAS) at the International Port of Memphis and the Biological Warning and Incident Characterization System (BWIC) Environmental Monitoring (EM) Component. Technologies developed for these 24/7 operational systems have applications for improved real-time system integrity awareness as well as for incident response (as needed) in production and field applications.
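
The reaction described above, increased sampling at the alarming sensor and decreased sampling for quiet neighbours, can be sketched as a toy rate-adjustment policy. The x4 and /2 factors and sensor names are illustrative assumptions:

```python
def new_rates(rates_hz: dict, alarming: str) -> dict:
    """On a detection, the alarming sensor samples faster while quiet
    neighbours back off to conserve resources (factors are assumed)."""
    return {name: (rate * 4 if name == alarming else rate / 2)
            for name, rate in rates_hz.items()}

print(new_rates({"s1": 1.0, "s2": 1.0, "s3": 1.0}, "s2"))
# {'s1': 0.5, 's2': 4.0, 's3': 0.5}
```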

  17. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Perkins, Casey J.; Brigantic, Robert T.; Keating, Douglas H.

    There is a need to develop and demonstrate technical approaches for verifying potential future agreements to limit and reduce total warhead stockpiles. To facilitate this aim, warhead monitoring systems employ both concepts of operations (CONOPS) and technologies. A systems evaluation approach can be used to assess the relative performance of CONOPS and technologies in their ability to achieve monitoring system objectives which include: 1) confidence that a treaty accountable item (TAI) initialized by the monitoring system is as declared; 2) confidence that there is no undetected diversion from the monitoring system; and 3) confidence that a TAI is dismantled as declared. Although there are many quantitative methods that can be used to assess system performance for the above objectives, this paper focuses on a simulation perspective primarily for the ability to support analysis of the probabilities that are used to define operating characteristics of CONOPS and technologies. This paper describes a discrete event simulation (DES) model comprising three major sub-models: TAI lifecycle flow, monitoring activities, and declaration behavior. The DES model seeks to capture all processes and decision points associated with the progression of virtual TAIs, with notional characteristics, through the monitoring system from initialization through dismantlement. The simulation updates TAI progression (i.e., whether the generated test objects are accepted and rejected at the appropriate points) all the way through dismantlement. Evaluation of TAI lifecycles primarily serves to assess how the order, frequency, and combination of functions in the CONOPS affect system performance as a whole. It is important, however, to note that discrete event simulation is also capable (at a basic level) of addressing vulnerabilities in the CONOPS and interdependencies between individual functions as well. 
This approach is beneficial because it does not rely on complex mathematical models, but instead attempts to recreate the real-world system as a decision and event driven simulation. Finally, because the simulation addresses warhead confirmation, chain of custody, and warhead dismantlement in a modular fashion, a discrete-event model could be easily adapted to multiple CONOPS for the exploration of a large number of “what if” scenarios.
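
The TAI lifecycle flow can be sketched as a tiny discrete-event simulation: items carry notional characteristics through initialization, a custody check, and dismantlement. The stage names, delays, and pass probability below are illustrative assumptions, not the paper's model:

```python
import heapq
import random

def simulate(n_items: int, p_pass: float = 0.95, seed: int = 1) -> int:
    """Toy DES: each TAI flows init -> custody check -> dismantlement;
    a failed custody check removes the item (modelling rejection)."""
    random.seed(seed)
    queue, dismantled = [], 0
    for i in range(n_items):
        heapq.heappush(queue, (random.uniform(0, 10), i, "init"))
    while queue:
        clock, i, stage = heapq.heappop(queue)
        if stage == "init":
            heapq.heappush(queue, (clock + 1.0, i, "custody"))
        elif stage == "custody" and random.random() < p_pass:
            heapq.heappush(queue, (clock + 2.0, i, "dismantle"))
        elif stage == "dismantle":
            dismantled += 1
    return dismantled

print(simulate(200))  # count of items dismantled as declared
```

Swapping the stage graph or the pass probabilities corresponds to exploring alternative CONOPS, which is exactly the "what if" flexibility the paper attributes to the modular DES design.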

  18. A real-time measurement system for long-life flood monitoring and warning applications.

    PubMed

    Marin-Perez, Rafael; García-Pintado, Javier; Gómez, Antonio Skarmeta

    2012-01-01

    A flood warning system incorporates telemetered rainfall and flow/water level data measured at various locations in the catchment area. Real-time accurate data collection is required for this use, and sensor networks improve the system capabilities. However, existing sensor nodes struggle to satisfy the hydrological requirements in terms of autonomy, sensor hardware compatibility, reliability and long-range communication. We describe the design and development of a real-time measurement system for flood monitoring, and its deployment in a flash-flood prone 650 km2 semiarid watershed in Southern Spain. A low-power, long-range communication device developed for this purpose, DatalogV1, provides automatic data gathering and reliable transmission. DatalogV1 incorporates self-monitoring for adapting measurement schedules for consumption management and to capture events of interest. Two tests are used to assess the success of the development. The results show an autonomous and robust monitoring system for long-term collection of water level data in many sparse locations during flood events.
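
Adapting the measurement schedule for consumption management while still capturing events of interest amounts to switching between sparse and dense sampling. A minimal sketch; the thresholds and intervals are illustrative assumptions, not DatalogV1's actual configuration:

```python
def next_interval_s(water_level_m: float, rising_rate_m_per_h: float,
                    base_s: int = 3600, alert_s: int = 300,
                    level_thresh: float = 2.0, rate_thresh: float = 0.1) -> int:
    """Shorten the sampling interval when a flood event seems to be developing."""
    if water_level_m > level_thresh or rising_rate_m_per_h > rate_thresh:
        return alert_s  # dense sampling during events of interest
    return base_s       # sparse sampling to conserve battery

print(next_interval_s(0.4, 0.0))  # 3600 (quiet conditions)
print(next_interval_s(2.5, 0.3))  # 300  (event developing)
```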

  19. A Real-Time Measurement System for Long-Life Flood Monitoring and Warning Applications

    PubMed Central

    Marin-Perez, Rafael; García-Pintado, Javier; Gómez, Antonio Skarmeta

    2012-01-01

    A flood warning system incorporates telemetered rainfall and flow/water level data measured at various locations in the catchment area. Real-time accurate data collection is required for this use, and sensor networks improve the system capabilities. However, existing sensor nodes struggle to satisfy the hydrological requirements in terms of autonomy, sensor hardware compatibility, reliability and long-range communication. We describe the design and development of a real-time measurement system for flood monitoring, and its deployment in a flash-flood prone 650 km2 semiarid watershed in Southern Spain. A low-power, long-range communication device developed for this purpose, DatalogV1, provides automatic data gathering and reliable transmission. DatalogV1 incorporates self-monitoring for adapting measurement schedules for consumption management and to capture events of interest. Two tests are used to assess the success of the development. The results show an autonomous and robust monitoring system for long-term collection of water level data in many sparse locations during flood events. PMID:22666028

  20. Apparatus and method for detecting tampering in flexible structures

    DOEpatents

    Maxey, Lonnie C [Knoxville, TN; Haynes, Howard D [Knoxville, TN

    2011-02-01

    A system for monitoring or detecting tampering in a flexible structure includes taking electrical measurements on a sensing cable coupled to the structure, performing spectral analysis on the measured data, and comparing the spectral characteristics of the event to those of known benign and/or known suspicious events. A threshold or trigger value may be used to identify an event of interest and initiate data collection. Alternatively, the system may be triggered at preset intervals, triggered manually, or triggered by a signal from another sensing device such as a motion detector. The system may be used to monitor electrical cables and conduits, hoses and flexible ducts, fences and other perimeter control devices, structural cables, flexible fabrics, and other flexible structures.
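
Comparing an event's spectral characteristics against a known-benign profile can be sketched with a naive DFT band-energy check. The band choice and the 5x trigger factor are assumed for illustration and are not from the patent:

```python
import cmath
import math

def band_energy(signal, k_lo, k_hi):
    """Naive DFT energy summed over bins [k_lo, k_hi); adequate for the
    short records a tamper-sensing cable would buffer."""
    n = len(signal)
    total = 0.0
    for k in range(k_lo, k_hi):
        s = sum(x * cmath.exp(-2j * cmath.pi * k * t / n)
                for t, x in enumerate(signal))
        total += abs(s) ** 2
    return total

def looks_suspicious(event_signal, benign_energy, factor=5.0, k_lo=1, k_hi=8):
    """Flag the event when its low-band energy greatly exceeds the
    known-benign profile (factor is an assumed threshold)."""
    return band_energy(event_signal, k_lo, k_hi) > factor * benign_energy

quiet = [0.0] * 32
shake = [math.sin(2 * math.pi * 2 * t / 32) for t in range(32)]  # oscillation
print(looks_suspicious(shake, benign_energy=1.0))  # True
print(looks_suspicious(quiet, benign_energy=1.0))  # False
```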

  1. An Integrated Monitoring System of Pre-earthquake Processes in Peloponnese, Greece

    NASA Astrophysics Data System (ADS)

    Karastathis, V. K.; Tsinganos, K.; Kafatos, M.; Eleftheriou, G.; Ouzounov, D.; Mouzakiotis, E.; Papadopoulos, G. A.; Voulgaris, N.; Bocchini, G. M.; Liakopoulos, S.; Aspiotis, T.; Gika, F.; Tselentis, A.; Moshou, A.; Psiloglou, B.

    2017-12-01

    One of the controversial issues in contemporary seismology is whether radon accumulation monitoring can provide reliable earthquake forecasting. Although there are many examples in the literature showing radon increases before earthquakes, skepticism arises from instability of the measurements, false alarms, difficulties of interpretation caused by weather influences (e.g., rainfall), and the difficulty of establishing an irrefutable theoretical background for the phenomenon. We have developed and extensively tested a multi-parameter network aimed at studying pre-earthquake processes, operating as part of an integrated monitoring system in the high-seismicity area of the Western Hellenic Arc (SW Peloponnese, Greece). The prototype consists of four components: (1) a real-time radon accumulation monitoring system consisting of three gamma radiation detectors [NaI(Tl) scintillators]; (2) a nine-station seismic array to monitor the microseismicity in the offshore area of the Hellenic arc, with data processing based on F-K and beam-forming techniques; (3) real-time weather monitoring of air temperature, relative humidity, precipitation and pressure; and (4) thermal radiation emission from AVHRR/NOAA-18 polar orbit satellite observation. The project revolves around the idea of jointly studying radon emission, which has in many cases proven a reliable indicator of the possible time of an event, together with the accurate location of foreshock activity detected by the seismic array, which can be a more reliable indicator of an event's possible position. In parallel, a satellite thermal anomaly detection technique is used for monitoring larger-magnitude events (a possible indicator for strong events, M ≥ 5.0). The first year of operations revealed a number of pre-seismic radon variation anomalies before several local earthquakes (M > 3.6). 
Radon increases systematically before the larger events. Details about the overall performance in registering pre-seismic signals in the Peloponnese region, along with two distant but very strong earthquakes in Greece (June 12, 2017, M6.3 and July 20, 2017, M6.6), will be discussed.
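
A radon variation anomaly of the kind described can be sketched as a simple baseline-plus-threshold detector over the gamma count rate. The k = 3 criterion and the counts are illustrative assumptions, not the network's published method:

```python
import statistics

def radon_anomaly(counts_baseline, latest, k=3.0):
    """Flag a pre-seismic radon anomaly when the latest gamma count rate
    exceeds the baseline mean by k standard deviations (k is assumed)."""
    mu = statistics.fmean(counts_baseline)
    sd = statistics.stdev(counts_baseline)
    return latest > mu + k * sd

baseline = [120, 118, 122, 119, 121, 120, 117, 123]  # synthetic counts/s
print(radon_anomaly(baseline, 160))  # True
print(radon_anomaly(baseline, 124))  # False
```

A real deployment would additionally correct for the weather effects (rainfall, pressure) that the abstract names as a source of false alarms.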

  2. TESTING AND VERIFICATION OF REAL-TIME WATER QUALITY MONITORING SENSORS IN A DISTRIBUTION SYSTEM AGAINST INTRODUCED CONTAMINATION

    EPA Science Inventory

    Drinking water distribution systems reach the majority of American homes, business and civic areas, and are therefore an attractive target for terrorist attack via direct contamination, or backflow events. Instrumental monitoring of such systems may be used to signal the prese...

  3. Synergy of Earth Observation and In-Situ Monitoring Data for Flood Hazard Early Warning System

    NASA Astrophysics Data System (ADS)

    Brodsky, Lukas; Kodesova, Radka; Spazierova, Katerina

    2010-12-01

    In this study, we demonstrate synergy of EO and in-situ monitoring data for an early-warning flood hazard system in the Czech Republic developed within the ESA PECS project FLOREO. The development of the demonstration system is oriented to support existing monitoring activities, especially snow melt and surface water runoff contributing to flooding events. The system accordingly consists of two main parts: the first is snow cover and snow melt monitoring driven mainly by EO data, and the other is surface water runoff modeling and monitoring driven by a synergy of in-situ and EO data.

  4. The event notification and alarm system for the Open Science Grid operations center

    NASA Astrophysics Data System (ADS)

    Hayashi, S.; Teige, S.; Quick, R.

    2012-12-01

    The Open Science Grid (OSG) Operations Team operates a distributed set of services and tools that enable the utilization of the OSG by several HEP projects. Without these services, users of the OSG would not be able to run jobs, locate resources, obtain information about the status of systems, or generally use the OSG. For this reason, these services must be highly available. This paper describes the automated monitoring and notification systems used to diagnose and report problems. Described here are the means used by OSG Operations to monitor systems such as physical facilities, network operations, server health, service availability and software error events. Once detected, an error condition generates a message sent via, for example, email, SMS, Twitter, or an instant-message server. The mechanism being developed to integrate these monitoring systems into a prioritized and configurable alarming system is emphasized.
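
A prioritized, configurable alarming mechanism of the kind described might be sketched as severity-based routing of alarms to channels; the channel names, thresholds, and `Alarm`/`Dispatcher` types below are illustrative assumptions, not the OSG implementation.

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Alarm:
    source: str
    message: str
    severity: int  # 0 = info ... 3 = critical

@dataclass
class Dispatcher:
    # Each channel is (minimum severity, send function); configurable per site.
    channels: list = field(default_factory=list)
    log: list = field(default_factory=list)

    def register(self, min_severity: int, send: Callable[[Alarm], None]):
        self.channels.append((min_severity, send))

    def dispatch(self, alarm: Alarm):
        # Route the alarm to every channel whose threshold it meets.
        for min_sev, send in self.channels:
            if alarm.severity >= min_sev:
                send(alarm)
                self.log.append((alarm.source, alarm.severity))

d = Dispatcher()
d.register(0, lambda a: print(f"IM: {a.message}"))      # chat gets everything
d.register(2, lambda a: print(f"EMAIL: {a.message}"))   # email for errors
d.register(3, lambda a: print(f"SMS: {a.message}"))     # SMS only if critical
d.dispatch(Alarm("gridftp", "transfer failures rising", severity=2))
# severity 2 reaches the IM and email channels but not SMS
```

Priorities and channel assignments live entirely in the `register` calls, so the same dispatcher can be reconfigured per service without touching the monitoring probes.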

  5. 40 CFR 49.4166 - Monitoring requirements.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... burning pilot flame, electronically controlled automatic igniters, and monitoring system failures, using a... failure, electronically controlled automatic igniter failure, or improper monitoring equipment operation... and natural gas emissions in the event that natural gas recovered for pipeline injection must be...

  6. 40 CFR 49.4166 - Monitoring requirements.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... burning pilot flame, electronically controlled automatic igniters, and monitoring system failures, using a... failure, electronically controlled automatic igniter failure, or improper monitoring equipment operation... and natural gas emissions in the event that natural gas recovered for pipeline injection must be...

  7. Incidence and economic burden of suspected adverse events and adverse event monitoring during AF therapy.

    PubMed

    Kim, M H; Lin, J; Hussein, M; Battleman, D

    2009-12-01

    Rhythm- and rate-control therapies are an essential part of atrial fibrillation (AF) management; however, the use of existing agents is often limited by the occurrence of adverse events. The aim of this study was to evaluate suspected adverse events and adverse event monitoring, and associated medical costs, in patients receiving AF rhythm-control and/or rate-control therapy. This retrospective cohort study used claims data from the Integrated Healthcare Information Systems National Managed Care Benchmark Database from 2002-2006. Patients hospitalized for AF (primary diagnosis), and who had at least 365 days' enrollment before and after the initial (index) AF hospitalization, were included in the analysis. Suspected AF therapy-related adverse events and function tests for adverse event monitoring were identified according to pre-specified diagnosis codes/procedures, and examined over the 12 months following discharge from the index hospitalization. Events/function tests had to have occurred within 90 days of a claim for AF therapy to be considered a suspected adverse event/adverse event monitoring. Of 4174 AF patients meeting the study criteria, 3323 received AF drugs; 428 received rhythm-control only (12.9%), 2130 rate-control only (64.1%), and 765 combined rhythm/rate-control therapy (23.0%). Overall, 50.1% of treated patients had a suspected adverse event and/or function test for adverse event monitoring (45.5% with rate-control, 53.5% with rhythm-control, and 61.2% with combined rhythm/rate-control). Suspected cardiovascular adverse events were the most common events (occurring in 36.1% of patients), followed by pulmonary (6.1%), and endocrine events (5.9%). Overall, suspected adverse events/function tests were associated with mean annual per-patient costs of $3089 ($1750 with rhythm-control, $2041 with rate control, and $6755 with combined rhythm/rate-control). 
As a retrospective analysis, the study is subject to potential selection bias, while its reliance on diagnostic codes for identification of AF and suspected adverse events is a source of potential investigator error. A direct cause-effect relationship between suspected adverse events/function tests and AF therapy cannot be confirmed based on the claims data available. The incidence of suspected adverse events and adverse event monitoring during AF rhythm-control and/or rate-control therapy is high. Costs associated with adverse events and adverse event monitoring are likely to add considerably to the overall burden of AF management.

  8. Monitoring and evaluating civil structures using measured vibration

    NASA Astrophysics Data System (ADS)

    Straser, Erik G.; Kiremidjian, Anne S.

    1996-04-01

    The need for a rapid assessment of the state of critical and conventional civil structures, such as bridges, control centers, airports, and hospitals, among many others, has been amply demonstrated during recent natural disasters. Research is underway at Stanford University to develop a state-of-the-art automated damage monitoring system for long-term and extreme event monitoring based on both ambient and forced response measurements. Such research requires a multi-disciplinary approach harnessing the talents and expertise of civil, electrical, and mechanical engineering to arrive at a novel hardware and software solution. Recent advances in silicon micro-machining and microprocessor design allow for the economical integration of sensing, processing, and communication components. Coupling these technological advances with parameter identification algorithms allows for the realization of extreme event damage monitoring systems for civil structures. This paper addresses the first steps toward the development of a near real-time damage diagnostic and monitoring system based on structural response to extreme events. Specifically, micro-electro-mechanical structures (MEMS) and microcontroller embedded systems (MES) are demonstrated to be an effective platform for the measurement and analysis of civil structures. Experimental laboratory tests with small-scale model specimens and a preliminary sensor module are used to evaluate hardware and obtain structural response data from input accelerograms. A multi-step analysis procedure employing ordinary least squares (OLS), extended Kalman filtering (EKF), and a substructuring approach is conducted to extract system characteristics of the model. Results from experimental tests and system identification (SI) procedures as well as fundamental system design issues are presented.
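
As a hedged illustration of the OLS step in such a system-identification procedure, the sketch below recovers the damping and stiffness of a single-degree-of-freedom model from its simulated response; the parameter values and the simple Euler integration are assumptions for demonstration only, not the paper's method.

```python
import numpy as np

# Simulate a SDOF oscillator m*a + c*v + k*x = f(t) with known parameters,
# then recover c and k by ordinary least squares from the response records.
m, c_true, k_true = 1.0, 0.4, 25.0          # hypothetical model values
dt, n = 0.01, 2000
rng = np.random.default_rng(0)
f = rng.standard_normal(n)                  # broadband excitation record

x = np.zeros(n); v = np.zeros(n); a = np.zeros(n)
for i in range(n - 1):
    a[i] = (f[i] - c_true * v[i] - k_true * x[i]) / m
    v[i + 1] = v[i] + a[i] * dt             # explicit Euler integration
    x[i + 1] = x[i] + v[i] * dt
a[-1] = (f[-1] - c_true * v[-1] - k_true * x[-1]) / m

# OLS: solve f - m*a = c*v + k*x for the unknown coefficients [c, k]
A = np.column_stack([v, x])
b = f - m * a
c_est, k_est = np.linalg.lstsq(A, b, rcond=None)[0]
print(round(c_est, 2), round(k_est, 2))     # close to 0.4 and 25.0
```

In practice the measured accelerations are noisy and the structure has many degrees of freedom, which is why the paper combines OLS with extended Kalman filtering and substructuring.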

  9. [Intraoperative monitoring of oxygen tissue pressure: Applications in vascular neurosurgery].

    PubMed

    Arikan, Fuat; Vilalta, Jordi; Torne, Ramon; Chocron, Ivette; Rodriguez-Tesouro, Ana; Sahuquillo, Juan

    2014-01-01

    Ischemic lesions related to surgical procedures are a major cause of postoperative morbidity in patients with cerebral vascular disease. There are different systems of neuromonitoring to detect intraoperative ischemic events, including intraoperative monitoring of oxygen tissue pressure (PtiO2). The aim of this article was to describe, through the discussion of 4 cases, the usefulness of intraoperative PtiO2 monitoring during vascular neurosurgery. In presenting these cases, we demonstrate that monitoring PtiO2 is a reliable way to detect early ischemic events during surgical procedures. Continuous monitoring of PtiO2 in an area at risk allows the surgeon to resolve the cause of the ischemic event before it evolves to an established cerebral infarction. Copyright © 2014 Sociedad Española de Neurocirugía. Published by Elsevier España. All rights reserved.

  10. From mess to mass: a methodology for calculating storm event pollutant loads with their uncertainties, from continuous raw data time series.

    PubMed

    Métadier, M; Bertrand-Krajewski, J-L

    2011-01-01

    With the increasing implementation of continuous monitoring of both discharge and water quality in sewer systems, large databases are now available. In order to manage large amounts of data and to calculate various variables and indicators of interest, it is necessary to apply automated methods for data processing. This paper deals with the processing of short time step turbidity time series to estimate TSS (Total Suspended Solids) and COD (Chemical Oxygen Demand) event loads in sewer systems during storm events, together with their associated uncertainties. The following steps are described: (i) sensor calibration, (ii) estimation of data uncertainties, (iii) correction of raw data, (iv) data pre-validation tests, (v) final validation, and (vi) calculation of TSS and COD event loads and estimation of their uncertainties. These steps have been implemented in an integrated software tool. Examples of results are given for a set of 33 storm events monitored in a stormwater separate sewer system.
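
Step (vi), the event-load calculation with uncertainties, can be sketched as a Monte Carlo propagation of sensor uncertainties through the load sum M = Σ C_i · Q_i · Δt; the concentrations, flows, and uncertainty levels below are hypothetical, not the paper's data.

```python
import numpy as np

def event_load(conc, flow, dt_s, sd_conc, sd_flow, n_mc=20000, seed=1):
    """Storm event pollutant load M = sum(C_i * Q_i * dt) in grams,
    with standard uncertainty propagated by Monte Carlo sampling of
    the per-step concentration and flow uncertainties."""
    rng = np.random.default_rng(seed)
    c = rng.normal(conc, sd_conc, size=(n_mc, len(conc)))
    q = rng.normal(flow, sd_flow, size=(n_mc, len(flow)))
    loads = (c * q * dt_s).sum(axis=1)      # g = (g/m3) * (m3/s) * s
    return loads.mean(), loads.std()

# Hypothetical event at a 2-min time step: TSS from turbidity (g/m3), flow (m3/s)
conc = np.array([120.0, 260.0, 310.0, 180.0, 90.0])
flow = np.array([0.05, 0.20, 0.35, 0.15, 0.06])
load_g, u_g = event_load(conc, flow, dt_s=120.0,
                         sd_conc=0.1 * conc, sd_flow=0.05 * flow)
print(f"{load_g:.0f} g ± {u_g:.0f} g")
```

The same machinery applies to COD loads once a turbidity-to-COD calibration replaces the TSS one.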

  11. Development of priority based statewide scour monitoring systems in New England (PDF file)

    DOT National Transportation Integrated Search

    2001-08-02

    A project was funded by the New England Transportation Consortium to research the creation of a scour monitoring system that would assist in the allocation of resources during potentially destructive flood events in New England. Emphasis was placed...

  12. Earth resources data acquisition sensor study

    NASA Technical Reports Server (NTRS)

    Grohse, E. W.

    1975-01-01

    The minimum data collection and data processing requirements are investigated for the development of water monitoring systems, which disregard redundant and irrelevant data and process only those data predictive of the onset of significant pollution events. Two approaches are immediately suggested: (1) adaptation of a presently available ambient air monitoring system developed by TVA, and (2) consideration of an air, water, and radiological monitoring system developed by the Georgia Tech Experiment Station. In order to apply monitoring systems, threshold values and maximum allowable rates of change of critical parameters such as dissolved oxygen and temperature are required.

  13. Design and Implementation of a Wireless Sensor Network-Based Remote Water-Level Monitoring System

    PubMed Central

    Li, Xiuhong; Cheng, Xiao; Gong, Peng; Yan, Ke

    2011-01-01

    The proposed remote water-level monitoring system (RWMS) consists of a field sensor module, a base station module, a data center module and a WEB releasing module. It has advantages in real time and synchronized remote control, expandability, and anti-jamming capabilities. The RWMS can realize real-time remote monitoring, providing early warning of events and protection of the safety of monitoring personnel under certain dangerous circumstances. This system has been successfully applied in Poyanghu Lake. The cost of the whole system is approximately 1,500 yuan (RMB). PMID:22319377

  14. Design and implementation of a wireless sensor network-based remote water-level monitoring system.

    PubMed

    Li, Xiuhong; Cheng, Xiao; Gong, Peng; Yan, Ke

    2011-01-01

    The proposed remote water-level monitoring system (RWMS) consists of a field sensor module, a base station module, a data center module and a WEB releasing module. It has advantages in real time and synchronized remote control, expandability, and anti-jamming capabilities. The RWMS can realize real-time remote monitoring, providing early warning of events and protection of the safety of monitoring personnel under certain dangerous circumstances. This system has been successfully applied in Poyanghu Lake. The cost of the whole system is approximately 1,500 yuan (RMB).

  15. Automation for deep space vehicle monitoring

    NASA Technical Reports Server (NTRS)

    Schwuttke, Ursula M.

    1991-01-01

    Information on automation for deep space vehicle monitoring is given in viewgraph form. Information is given on automation goals and strategy; the Monitor Analyzer of Real-time Voyager Engineering Link (MARVEL); intelligent input data management; decision theory for making tradeoffs; dynamic tradeoff evaluation; evaluation of anomaly detection results; evaluation of data management methods; system level analysis with cooperating expert systems; the distributed architecture of multiple expert systems; and event driven response.

  16. The measurement and monitoring of surgical adverse events.

    PubMed

    Bruce, J; Russell, E M; Mollison, J; Krukowski, Z H

    2001-01-01

    Surgical adverse events contribute significantly to postoperative morbidity, yet the measurement and monitoring of events is often imprecise and of uncertain validity. Given the trend of decreasing length of hospital stay and the increase in use of innovative surgical techniques--particularly minimally invasive and endoscopic procedures--accurate measurement and monitoring of adverse events is crucial. The aim of this methodological review was to identify a selection of common and potentially avoidable surgical adverse events and to assess whether they could be reliably and validly measured, to review methods for monitoring their occurrence and to identify examples of effective monitoring systems for selected events. This review is a comprehensive attempt to examine the quality of the definition, measurement, reporting and monitoring of selected events that are known to cause significant postoperative morbidity and mortality. METHODS - SELECTION OF SURGICAL ADVERSE EVENTS: Four adverse events were selected on the basis of their frequency of occurrence and likelihood of evidence of measurement and monitoring: (1) surgical wound infection; (2) anastomotic leak; (3) deep vein thrombosis (DVT); (4) surgical mortality. Surgical wound infection and DVT are common events that cause significant postoperative morbidity. Anastomotic leak is a less common event, but risk of fatality is associated with delay in recognition, detection and investigation. Surgical mortality was selected because of the effort known to have been invested in developing systems for monitoring surgical death, both in the UK and internationally. Systems for monitoring surgical wound infection were also included in the review. METHODS - LITERATURE SEARCH: Thirty separate, systematic literature searches of core health and biomedical bibliographic databases (MEDLINE, EMBASE, CINAHL, HealthSTAR and the Cochrane Library) were conducted. 
The reference lists of retrieved articles were reviewed to locate additional articles. A matrix was developed whereby different literature and study designs were reviewed for each of the surgical adverse events. Each article eligible for inclusion was independently reviewed by two assessors. METHODS - CRITICAL APPRAISAL: Studies were appraised according to predetermined assessment criteria. Definitions and grading scales were assessed for: content, criterion and construct validity; repeatability; reproducibility; and practicality (surgical wound infection and anastomotic leak). Monitoring systems for surgical wound infection and surgical mortality were assessed on the following criteria: (1) coverage of the system; (2) whether or not denominator data were collected; (3) whether standard and agreed definitions were used; (4) inclusion of risk adjustment; (5) issues related to data collection; (6) postdischarge surveillance; (7) output in terms of feedback and wider dissemination. RESULTS - SURGICAL WOUND INFECTION: A total of 41 different definitions and 13 grading scales of surgical wound infection were identified from 82 studies. Definitions of surgical wound infection varied from presence of pus to complex definitions such as those proposed by the Centres for Disease Control in the USA. A small body of literature has been published on the content, criterion and construct validity of different definitions, and comparisons have been made against wound assessment scales and multidimensional indices. There are examples of comprehensive hospital-based monitoring systems of surgical wound infection, mainly under the auspices of nosocomial surveillance. To date, however, there is little evidence of systematic measurement and monitoring of surgical wound infection after hospital discharge. 
RESULTS - ANASTOMOTIC LEAK: Over 40 definitions of anastomotic leak were extracted from 107 studies of upper gastrointestinal, hepatopancreaticobiliary and lower gastrointestinal surgery. No formal evaluations were found that assessed the validity or reliability of definitions or severity scales of anastomotic leak. One definition was proposed during a national consensus workshop, but no evidence of its use was found in the surgical literature. The lack of a single definition or gold standard hampers comparison of postoperative anastomotic leak rates between studies and institutions. RESULTS - DEEP VEIN THROMBOSIS: Although a critical review of the DVT literature could not be completed within the realms of this review, it was evident that a number of new techniques for the detection and diagnosis of DVT have emerged in the last 20 years. The group recommends a separate review be undertaken of the different diagnostic tests to detect DVT. RESULTS - SURGICAL MORTALITY MONITORING SYSTEMS: The definition of surgical mortality is relatively consistent between monitoring systems, but duration of follow-up of death postdischarge varies considerably. The majority of systems report in-hospital mortality rates; only some have the potential to link deaths to national death registers. Risk assessment is an important factor and there should be a distinction between recording pre-intervention factors and postoperative complications. A variety of risk scoring systems was identified in the review. Factors associated with accurate and complete data collection include the employment of local, dedicated personnel, simple and structured prompts to ensure that clinical input is complete, and accurate and automated data capture and transfer. The use of standardised, valid and reliable definitions is fundamental to the accurate measurement and monitoring of surgical adverse events. 
This review found inconsistency in the quality of reporting of postoperative adverse events, limiting accurate comparison of rates over time and between institutions. The duration of follow-up for individual events will vary according to their natural history and epidemiology. Although risk-adjusted aggregated rates can act as screening or warning systems for adverse events, attribution of whether events are avoidable or preventable will invariably require further investigation at the level of the individual, unit or department. CONCLUSIONS - RECOMMENDATIONS FOR RESEARCH: (1) A single, standard definition of surgical wound infection is needed so that comparisons over time and between departments and institutions are valid, accurate and useful. Surgeons and other healthcare professionals should consider adopting the 1992 Centers for Disease Control (CDC) definition for superficial incisional, deep incisional and organ/space surgical site infection for hospital monitoring programmes and surgical audits. There is a need for further methodological research into the performance of the CDC definition in the UK setting. (2) There is a need to formally assess the reliability of self-diagnosis of surgical wound infection by patients. (3) There is a need to assess formally the reliability of case ascertainment by infection control staff. (4) Work is needed to create and agree a standard, valid and reliable definition of anastomotic leak which is acceptable to surgeons. (5) A systematic review is needed of the different diagnostic tests for the diagnosis of DVT. 
(6) The following variables should be considered in any future DVT review: anatomical region (lower limb, upper limb, pelvis); patient presentation (symptomatic, asymptomatic); outcome of diagnostic test (successfully completed, inconclusive, technically inadequate, negative); length of follow-up; cost of test; whether or not serial screening was conducted; and recording of laboratory cut-off values for fibrinogen equivalent units. (7) A critical review is needed of the surgical risk scoring used in monitoring systems. (8) In the absence of automated linkage there is a need to explore the benefits and costs of monitoring in primary care. (9) The growing potential for automated linkage of data from different sources (including primary care, the private sector and death registers) needs to be explored as a means of improving the ascertainment of surgical complications, including death. This linkage needs to be within the terms of data protection, privacy and human rights legislation. (10) A review is needed of the extent of the use and efficiency of routine hospital data versus special collections or voluntary reporting.

  17. Cloud-Based Smart Health Monitoring System for Automatic Cardiovascular and Fall Risk Assessment in Hypertensive Patients.

    PubMed

    Melillo, P; Orrico, A; Scala, P; Crispino, F; Pecchia, L

    2015-10-01

    The aim of this paper is to describe the design and the preliminary validation of a platform developed to collect and automatically analyze biomedical signals for risk assessment of vascular events and falls in hypertensive patients. This m-health platform, based on cloud computing, was designed to be flexible, extensible, and transparent, and to provide proactive remote monitoring via data-mining functionalities. A retrospective study was conducted to train and test the platform. The developed system was able to predict a future vascular event within the next 12 months with an accuracy rate of 84% and to identify fallers with an accuracy rate of 72%. In an ongoing prospective trial, almost all the recruited patients accepted the system favorably, with a limited rate of non-adherence causing data losses (<20%). The developed platform supported clinical decision-making by processing tele-monitored data and providing quick and accurate risk assessment of vascular events and falls.

  18. The Gypsy Moth Event Monitor for FVS: a tool for forest and pest managers

    Treesearch

    Kurt W. Gottschalk; Anthony W. Courter

    2007-01-01

    The Gypsy Moth Event Monitor is a program that simulates the effects of gypsy moth, Lymantria dispar (L.), within the confines of the Forest Vegetation Simulator (FVS). Individual stands are evaluated with a susceptibility index system to determine the vulnerability of the stand to the effects of gypsy moth. A gypsy moth outbreak is scheduled in the...

  19. Identification of unusual events in multi-channel bridge monitoring data

    NASA Astrophysics Data System (ADS)

    Omenzetter, Piotr; Brownjohn, James Mark William; Moyo, Pilate

    2004-03-01

    Continuously operating instrumented structural health monitoring (SHM) systems are becoming a practical alternative to visual inspection for assessing the condition and soundness of civil infrastructure such as bridges. However, converting large amounts of data from an SHM system into usable information is a great challenge to which special signal processing techniques must be applied. This study is devoted to the identification of abrupt, anomalous and potentially onerous events in the time histories of static, hourly sampled strains recorded by a multi-sensor SHM system installed in a major bridge structure and operating continuously for a long time. Such events may result, among other causes, from sudden settlement of foundations, ground movement, excessive traffic load or failure of post-tensioning cables. A method of outlier detection in multivariate data has been applied to the problem of finding and localising sudden events in the strain data. For sharp discrimination of abrupt strain changes from slowly varying ones, the wavelet transform has been used. The proposed method has been successfully tested using known events recorded during construction of the bridge, and later effectively used for detection of anomalous post-construction events.
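
A minimal sketch of the approach described, a Haar-like first difference to isolate abrupt changes followed by multivariate outlier scoring via Mahalanobis distance, might look as follows; the sensor count, noise levels, and injected event are invented for illustration and are not the bridge data.

```python
import numpy as np

def abrupt_event_scores(strains):
    """Score each time step of multi-channel strain data for abrupt
    change: a Haar-like first difference suppresses slow drift while
    preserving sharp jumps, then a squared Mahalanobis distance across
    channels flags steps that are multivariate outliers."""
    d = np.diff(strains, axis=0)            # sharp changes survive differencing
    mu = d.mean(axis=0)
    cov = np.cov(d, rowvar=False) + 1e-9 * np.eye(d.shape[1])
    inv = np.linalg.inv(cov)
    r = d - mu
    return np.einsum('ij,jk,ik->i', r, inv, r)

# Hypothetical hourly strains, 3 sensors; a simultaneous jump at step 50
rng = np.random.default_rng(2)
strains = rng.normal(0, 1.0, size=(100, 3)).cumsum(axis=0) * 0.01
strains[50:] += np.array([8.0, 6.0, 7.0])   # sudden settlement-like event
scores = abrupt_event_scores(strains)
print(int(np.argmax(scores)))               # index of the flagged difference
```

A threshold on the scores (e.g. a high chi-square quantile) would turn this scoring into an automatic detector.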

  20. Reducing Dangerous Nighttime Events in Persons with Dementia Using a Nighttime Monitoring System

    PubMed Central

    Rowe, Meredeth A.; Kelly, Annette; Horne, Claydell; Lane, Steve; Campbell, Judy; Lehman, Brandy; Phipps, Chad; Keller, Meredith; Benito, Andrea Pe

    2009-01-01

    Background Nighttime activity, a common occurrence in persons with dementia, increases the risk for injury and unattended home exits, and impairs the sleep patterns of caregivers. Technology is needed that will alert caregivers of nighttime activity in persons with dementia to help prevent injuries and unattended exits. Methods As part of a product development grant, a randomized pilot study was conducted to test the effectiveness of a new night monitoring system designed for informal caregivers to use in the home. Data from 53 subjects were collected at 9 points in time over a 12-month period regarding injuries and unattended home exits that occurred while the caregiver slept. Nighttime activity frequently resulted in nursing home placement. Results The night monitoring system proved a reliable adjunct to assist caregivers in managing nighttime activity. A total of 9 events (injuries or unattended home exits) occurred during the study with 6 events occurring in the control group. Using intent-to-treat analysis, there was no difference between the groups. However, in a secondary analysis based on use of the intervention, experimental subjects were 85% less likely to sustain an event than control subjects. Conclusion When nighttime activity occurred, it resulted in severe injuries sometimes associated with subsequent nursing home placement. The night monitoring system represents a new technology that caregivers can use to assist them in preventing nighttime injuries and unattended home exits in care recipients with dementia. PMID:19751921

  1. Advances in the continuous monitoring of erosion and deposition dynamics: Developments and applications of the new PEEP-3T system

    NASA Astrophysics Data System (ADS)

    Lawler, D. M.

    2008-01-01

    In most episodic erosion and deposition systems, knowledge of the timing of geomorphological change, in relation to fluctuations in the driving forces, is crucial to strong erosion process inference, and model building, validation and development. A challenge for geomorphology, however, is that few studies have focused on geomorphological event structure (timing, magnitude, frequency and duration of individual erosion and deposition events), in relation to applied stresses, because of the absence of key monitoring methodologies. This paper therefore (a) presents full details of a new erosion and deposition measurement system — PEEP-3T — developed from the Photo-Electronic Erosion Pin sensor in five key areas, including the addition of nocturnal monitoring through the integration of the Thermal Consonance Timing (TCT) concept, to produce a continuous sensing system; (b) presents novel high-resolution datasets from the redesigned PEEP-3T system for river bank systems of the Rivers Nidd and Wharfe, northern England, UK; and (c) comments on their potential for wider application throughout geomorphology to address these key measurement challenges. Relative to manual methods of erosion and deposition quantification, continuous PEEP-3T methodologies increase the temporal resolution of erosion/deposition event detection by more than three orders of magnitude (better than 1-second resolution if required), and this facility can significantly enhance process inference. Results show that river banks are highly dynamic thermally and respond quickly to radiation inputs. Data on bank retreat timing, fixed with PEEP-3T TCT evidence, confirmed that bank retreat events were delayed by up to 55 h after flood peaks. One event occurred 13 h after emergence from the flow. This suggests that mass failure processes rather than fluid entrainment dominated the system. 
It is also shown how, by integrating turbidity instrumentation with TCT ideas, linkages between sediment supply and sediment flux can be forged at event timescales, and a lack of sediment exhaustion was evident here. Five challenges for wider geomorphological process investigation are discussed. This event-based dynamics approach, based on continuous monitoring methodologies, appears to have considerable wider potential for stronger process inference and model testing and validation in many areas of geomorphology.

  2. A new method for wireless video monitoring of bird nests

    Treesearch

    David I. King; Richard M. DeGraaf; Paul J. Champlin; Tracey B. Champlin

    2001-01-01

    Video monitoring of active bird nests is gaining popularity among researchers because it eliminates many of the biases associated with reliance on incidental observations of predation events or use of artificial nests, but the expense of video systems may be prohibitive. Also, the range and efficiency of current video monitoring systems may be limited by the need to...

  3. Implementation and Impact of an Automated Group Monitoring and Feedback System to Promote Hand Hygiene Among Health Care Personnel

    PubMed Central

    Conway, Laurie J.; Riley, Linda; Saiman, Lisa; Cohen, Bevin; Alper, Paul; Larson, Elaine L.

    2015-01-01

    Background Despite substantial evidence to support the effectiveness of hand hygiene for preventing health care–associated infections, hand hygiene practice is often inadequate. Hand hygiene product dispensers that can electronically capture hand hygiene events have the potential to improve hand hygiene performance. An automated group monitoring and feedback system was implemented and studied from January 2012 through March 2013 at a 140-bed community hospital. Methods An electronic system that monitors the use of sanitizer and soap but does not identify individual health care personnel was used to calculate hand hygiene events per patient-hour for each of eight inpatient units and hand hygiene events per patient-visit for the six outpatient units. Hand hygiene was monitored but feedback was not provided during a six-month baseline period and a three-month rollout period. During the rollout, focus groups were conducted to determine preferences for feedback frequency and format. During the six-month intervention period, graphical reports were e-mailed monthly to all managers and administrators, and focus groups were repeated. Results After the feedback began, hand hygiene increased on average by 0.17 events/patient-hour in inpatient units (interquartile range = 0.14, p = .008). In outpatient units, hand hygiene performance did not change significantly. A variety of challenges were encountered, including obtaining accurate census and staffing data, engendering confidence in the system, disseminating information in the reports, and using the data to drive improvement. Conclusions Feedback via an automated system was associated with improved hand hygiene performance in the short term. PMID:25252389

  4. Implementation and impact of an automated group monitoring and feedback system to promote hand hygiene among health care personnel.

    PubMed

    Conway, Laurie J; Riley, Linda; Saiman, Lisa; Cohen, Bevin; Alper, Paul; Larson, Elaine L

    2014-09-01

    Despite substantial evidence to support the effectiveness of hand hygiene for preventing health care-associated infections, hand hygiene practice is often inadequate. Hand hygiene product dispensers that can electronically capture hand hygiene events have the potential to improve hand hygiene performance. An automated group monitoring and feedback system was implemented and studied from January 2012 through March 2013 at a 140-bed community hospital. An electronic system that monitors the use of sanitizer and soap but does not identify individual health care personnel was used to calculate hand hygiene events per patient-hour for each of eight inpatient units and hand hygiene events per patient-visit for the six outpatient units. Hand hygiene was monitored but feedback was not provided during a six-month baseline period and a three-month rollout period. During the rollout, focus groups were conducted to determine preferences for feedback frequency and format. During the six-month intervention period, graphical reports were e-mailed monthly to all managers and administrators, and focus groups were repeated. After the feedback began, hand hygiene increased on average by 0.17 events/patient-hour in inpatient units (interquartile range = 0.14, p = .008). In outpatient units, hand hygiene performance did not change significantly. A variety of challenges were encountered, including obtaining accurate census and staffing data, engendering confidence in the system, disseminating information in the reports, and using the data to drive improvement. Feedback via an automated system was associated with improved hand hygiene performance in the short term.
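
The unit-level metric used in this study, hand hygiene events per patient-hour, can be sketched as dispenser activations divided by patient-hours; the dispenser count and census figures below are hypothetical, not the study's data.

```python
def hand_hygiene_rate(dispense_events, census, hours_per_period=24.0):
    """Unit-level hand hygiene rate: total dispenser activations divided
    by patient-hours (periodic census counts times hours per period)."""
    patient_hours = sum(c * hours_per_period for c in census)
    return dispense_events / patient_hours

# Hypothetical inpatient unit over one week: 4,200 activations recorded
census = [22, 24, 23, 25, 21, 20, 22]       # midnight census each day
rate = hand_hygiene_rate(4200, census)
print(round(rate, 2))                        # events per patient-hour
```

Because the denominator normalizes by occupancy, rates remain comparable across units and over months with differing census, which is what makes the 0.17 events/patient-hour improvement interpretable.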

  5. Multileaf collimator performance monitoring and improvement using semiautomated quality control testing and statistical process control.

    PubMed

    Létourneau, Daniel; Wang, An; Amin, Md Nurul; Pearce, Jim; McNiven, Andrea; Keller, Harald; Norrlinger, Bernhard; Jaffray, David A

    2014-12-01

    High-quality radiation therapy using highly conformal dose distributions and image-guided techniques requires optimum machine delivery performance. In this work, a monitoring system for multileaf collimator (MLC) performance, integrating semiautomated MLC quality control (QC) tests and statistical process control tools, was developed. The MLC performance monitoring system was used for almost a year on two commercially available MLC models. Control charts were used to establish MLC performance and assess test frequency required to achieve a given level of performance. MLC-related interlocks and servicing events were recorded during the monitoring period and were investigated as indicators of MLC performance variations. The QC test developed as part of the MLC performance monitoring system uses 2D megavoltage images (acquired using an electronic portal imaging device) of 23 fields to determine the location of the leaves with respect to the radiation isocenter. The precision of the MLC performance monitoring QC test and the MLC itself was assessed by detecting the MLC leaf positions on 127 megavoltage images of a static field. After initial calibration, the MLC performance monitoring QC test was performed 3-4 times/week over a period of 10-11 months to monitor positional accuracy of individual leaves for two different MLC models. Analysis of test results was performed using individuals control charts per leaf with control limits computed based on the measurements as well as two sets of specifications of ± 0.5 and ± 1 mm. Out-of-specification and out-of-control leaves were automatically flagged by the monitoring system and reviewed monthly by physicists. MLC-related interlocks reported by the linear accelerator and servicing events were recorded to help identify potential causes of nonrandom MLC leaf positioning variations. 
The precision of the MLC performance monitoring QC test and the MLC itself was within ± 0.22 mm for most MLC leaves, and the majority of the apparent leaf motion was attributed to beam spot displacements between irradiations. The MLC QC test was performed 193 and 162 times over the monitoring period for the studied units, and recalibration had to be repeated up to three times on one of these units. For both units, the rate of MLC interlocks was moderately associated with MLC servicing events. The strongest association with MLC performance was observed between the MLC servicing events and the total number of out-of-control leaves. The average elapsed time for which the number of out-of-specification or out-of-control leaves was within a given performance threshold was computed and used to assess the adequacy of the MLC test frequency. An MLC performance monitoring system has been developed and implemented to acquire high-quality QC data at high frequency, enabled by the relatively short acquisition time for the images and automatic image analysis. The monitoring system was also used to record and track the rate of MLC-related interlocks and servicing events. MLC performance for two commercially available MLC models has been assessed, and the results support a monthly test frequency for the widely accepted ± 1 mm specifications. A higher QC test frequency is, however, required to maintain tighter specifications and in-control behavior.
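
The per-leaf individuals control chart described above can be sketched as follows. The 2.66 factor is the standard conversion from the average moving range to 3-sigma limits for an individuals (X) chart; the leaf-offset data and the specification value here are illustrative, not measurements from the study.

```python
def control_limits(xs):
    """Individuals-chart control limits from measured leaf offsets (mm)."""
    mean = sum(xs) / len(xs)
    moving_ranges = [abs(b - a) for a, b in zip(xs, xs[1:])]
    mr_bar = sum(moving_ranges) / len(moving_ranges)
    return mean - 2.66 * mr_bar, mean + 2.66 * mr_bar

def flag_leaf(xs, spec=1.0):
    """Return (out_of_control, out_of_spec) flags for one leaf's offsets."""
    lcl, ucl = control_limits(xs)
    out_of_control = any(not (lcl <= x <= ucl) for x in xs)
    out_of_spec = any(abs(x) > spec for x in xs)
    return out_of_control, out_of_spec

offsets = [0.05, 0.02, -0.03, 0.04, 0.01, 0.90]  # last measurement drifts
```

With the tighter ± 0.5 mm specification, the drifting last point in `offsets` is flagged as both out of specification and out of statistical control, which is exactly the kind of leaf the monitoring system would surface for monthly physicist review.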

  6. Temporal Informative Analysis in Smart-ICU Monitoring: M-HealthCare Perspective.

    PubMed

    Bhatia, Munish; Sood, Sandeep K

    2016-08-01

    The rapid introduction of Internet of Things (IoT) technology has boosted service delivery in the health sector in terms of m-health and remote patient monitoring. IoT technology is not only capable of sensing the acute details of sensitive events from wider perspectives, but it also provides a means to deliver services in a time-sensitive and efficient manner. Hence, IoT technology has been adopted across different fields of the healthcare domain. In this paper, a framework for IoT-based patient monitoring in the Intensive Care Unit (ICU) is presented to enhance the delivery of curative services. Although ICUs have long been a focus of research on high-quality care, a number of studies have documented risks to patients' lives during ICU stays. The work presented in this study addresses such concerns through efficient monitoring of various events (and anomalies) with temporal associations, followed by a time-sensitive alert generation procedure. In order to validate the system, it was deployed in 3 ICU room facilities for 30 days, during which nearly 81 patients were monitored during their ICU stay. The results obtained after implementation show that IoT-equipped ICUs are more efficient in monitoring sensitive events than manual monitoring and traditional Tele-ICU monitoring. Moreover, the adopted methodology for alert generation with information presentation further enhances the utility of the system.
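
A minimal sketch of time-sensitive alerting with a temporal association, in the spirit of the event monitoring described above: an alert fires only when readings stay outside a safe band for a sustained window, rather than on a single spike. The vital sign, the safe band, and the window length are all illustrative assumptions.

```python
from collections import deque

def make_alerter(low, high, window=3):
    """Return a callable that flags `window` consecutive out-of-band readings."""
    recent = deque(maxlen=window)
    def observe(value):
        recent.append(low <= value <= high)
        # Fire only when the window is full and every reading is abnormal.
        return len(recent) == window and not any(recent)
    return observe

spo2_alert = make_alerter(low=90, high=100, window=3)
alerts = [spo2_alert(v) for v in [95, 88, 87, 86]]  # alert on the 4th reading
```

Requiring a sustained window trades a small detection delay for far fewer nuisance alarms, which matters in an ICU where alarm fatigue is a real hazard.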

  7. System control of an autonomous planetary mobile spacecraft

    NASA Technical Reports Server (NTRS)

    Dias, William C.; Zimmerman, Barbara A.

    1990-01-01

    The goal is to suggest the scheduling and control functions necessary for accomplishing mission objectives of a fairly autonomous interplanetary mobile spacecraft, while maximizing reliability. Further goals are to provide an extensible, reliable system conservative in its use of on-board resources, while getting full value from subsystem autonomy and avoiding the lure of ground micromanagement. A functional layout consisting of four basic elements is proposed: GROUND and SYSTEM EXECUTIVE system functions and RESOURCE CONTROL and ACTIVITY MANAGER subsystem functions. The system executive includes six subfunctions: SYSTEM MANAGER, SYSTEM FAULT PROTECTION, PLANNER, SCHEDULE ADAPTER, EVENT MONITOR and RESOURCE MONITOR. The full configuration is needed for autonomous operation on the Moon or Mars, whereas a reduced version without the planning, schedule adaptation and event monitoring functions could be appropriate for lower-autonomy use on the Moon. An implementation concept is suggested which is conservative in its use of system resources and consists of modules combined with a network communications fabric. A language concept termed a scheduling calculus, for rapidly performing essential on-board schedule adaptation functions, is introduced.

  8. Telemetric system for hydrology and water quality monitoring in watersheds of northern New Mexico, USA.

    PubMed

    Meyer, Michael L; Huey, Greg M

    2006-05-01

    This study utilized telemetric systems to sample microbes and pathogens in forest, burned forest, rangeland, and urban watersheds to assess surface water quality in northern New Mexico. Four sites included remote mountainous watersheds, prairie rangelands, and a small urban area. The telemetric system was linked to dataloggers with automated event monitoring equipment to monitor discharge, turbidity, electrical conductivity, water temperature, and rainfall during base flow and storm events. Site data stored in dataloggers were uploaded to one of three types of telemetry: 1) radio in rangeland and urban settings; 2) a conventional phone/modem system with a modem positioned at the urban/forest interface; and 3) a satellite system used in a remote mountainous burned forest watershed. The major variables affecting selection of each system were site access, distance, technology, and cost. The systems were compared based on operation and cost. Utilization of telecommunications systems in this varied geographic area facilitated the gathering of hydrologic and water quality data on a timely basis.

  9. Monitoring and tracing of critical software systems: State of the work and project definition

    DTIC Science & Technology

    2008-12-01

    …analysis, troubleshooting and debugging. Some of these subsystems already come with ad hoc tracers for events like wireless connections or SCSI disk… SQLite). Additional synthetic events (e.g. states) are added to the database. The database thus consists in contexts (process, CPU, state), event… capability on a [operating] system-by-system basis. Additionally, the mechanics of querying the data in an ad hoc manner outside the boundaries of the…

  10. Automation of Physiologic Data Presentation and Alarms in the Post Anesthesia Care Unit

    PubMed Central

    Aukburg, S.J.; Ketikidis, P.H.; Kitz, D.S.; Mavrides, T.G.; Matschinsky, B.B.

    1989-01-01

    The routine use of pulse oximeters, non-invasive blood pressure monitors and electrocardiogram monitors has considerably improved patient care in the post anesthesia period. Using an automated data collection system, we investigated the occurrence of several adverse events frequently revealed by these monitors. We found that the incidence of hypoxia was 35%, hypertension 12%, hypotension 8%, tachycardia 25% and bradycardia 1%. Discriminant analysis was able to correctly predict classification of about 90% of patients into normal vs. hypertensive or hypotensive groups. The system software minimizes artifact, validates data for epidemiologic studies, and is able to identify variables that predict adverse events through application of appropriate statistical and artificial intelligence techniques.

  11. An Overview of the NASA Aviation Safety Program Propulsion Health Monitoring Element

    NASA Technical Reports Server (NTRS)

    Simon, Donald L.

    2000-01-01

    The NASA Aviation Safety Program (AvSP) has been initiated with aggressive goals to reduce the civil aviation accident rate. To meet these goals, several technology investment areas have been identified, including a sub-element in propulsion health monitoring (PHM). Specific AvSP PHM objectives are to develop and validate propulsion system health monitoring technologies designed to prevent engine malfunctions from occurring in flight, and to mitigate detrimental effects in the event an in-flight malfunction does occur. A review of available propulsion system safety information was conducted to help prioritize PHM areas to focus on under the AvSP. It is noted that when a propulsion malfunction is involved in an aviation accident or incident, it is often a contributing factor rather than the sole cause for the event. Challenging aspects of the development and implementation of PHM technology such as cost, weight, robustness, and reliability are discussed. Specific technology plans are overviewed including vibration diagnostics, model-based controls and diagnostics, advanced instrumentation, and general aviation propulsion system health monitoring technology. Propulsion system health monitoring, in addition to engine design, inspection, maintenance, and pilot training and awareness, is intrinsic to enhancing aviation propulsion system safety.

  12. U.S. Tsunami Information Technology (TIM) Modernization: Performance Assessment of Tsunamigenic Earthquake Discrimination System

    NASA Astrophysics Data System (ADS)

    Hagerty, M. T.; Lomax, A.; Hellman, S. B.; Whitmore, P.; Weinstein, S.; Hirshorn, B. F.; Knight, W. R.

    2015-12-01

    Tsunami warning centers must rapidly decide whether an earthquake is likely to generate a destructive tsunami in order to issue a tsunami warning quickly after a large event. For very large events (Mw > 8 or so), magnitude and location alone are sufficient to warrant an alert. However, for events of smaller magnitude (e.g., Mw ~ 7.5), particularly for so-called "tsunami earthquakes", magnitude alone is insufficient to issue an alert and other measurements must be rapidly made and used to assess tsunamigenic potential. The Tsunami Information Technology Modernization (TIM) is a National Oceanic and Atmospheric Administration (NOAA) project to update and standardize the earthquake and tsunami monitoring systems currently employed at the U.S. Tsunami Warning Centers in Ewa Beach, Hawaii (PTWC) and Palmer, Alaska (NTWC). We (ISTI) are responsible for implementing the seismic monitoring components in this new system, including real-time seismic data collection and seismic processing. The seismic data processor includes a variety of methods aimed at real-time discrimination of tsunamigenic events, including: Mwp, Me, slowness (Theta), W-phase, mantle magnitude (Mm), array processing and finite-fault inversion. In addition, it contains the ability to designate earthquake scenarios and play the resulting synthetic seismograms through the processing system. Thus, it is also a convenient tool that integrates research and monitoring and may be used to calibrate and tune the real-time monitoring system. Here we show results of the automated processing system for a large dataset of subduction zone earthquakes containing recent tsunami earthquakes, and we examine the accuracy of the various discrimination methods and discuss issues related to their successful real-time application.
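
The tiered decision logic described above can be sketched as a cascade: very large events alert on magnitude and location alone, while intermediate events consult secondary discriminants such as the slowness parameter Theta or a W-phase magnitude. All thresholds and return labels here are illustrative assumptions, not the operational values used by the warning centers.

```python
def assess(mw, theta=None, wphase_mw=None):
    """Toy tsunamigenic triage: cascade from magnitude to secondary discriminants."""
    if mw >= 8.0:
        return "alert"  # magnitude and location alone suffice
    if mw >= 7.0:
        # "Tsunami earthquakes" rupture slowly, giving anomalously low Theta.
        if theta is not None and theta <= -6.0:
            return "alert"
        if wphase_mw is not None and wphase_mw >= 8.0:
            return "alert"
        return "evaluate"  # needs further measurements or analyst review
    return "no-alert"
```

Playing synthetic-scenario seismograms through such a cascade, as the abstract describes, is how the thresholds themselves would be calibrated against known tsunami earthquakes.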

  13. Safety monitoring in the Vaccine Adverse Event Reporting System (VAERS)

    PubMed Central

    Shimabukuro, Tom T.; Nguyen, Michael; Martin, David; DeStefano, Frank

    2015-01-01

    The Centers for Disease Control and Prevention (CDC) and the U.S. Food and Drug Administration (FDA) conduct post-licensure vaccine safety monitoring using the Vaccine Adverse Event Reporting System (VAERS), a spontaneous (or passive) reporting system. This means that after a vaccine is approved, CDC and FDA continue to monitor safety while it is distributed in the marketplace for use by collecting and analyzing spontaneous reports of adverse events that occur in persons following vaccination. Various methods and statistical techniques are used to analyze VAERS data, which CDC and FDA use to guide further safety evaluations and inform decisions around vaccine recommendations and regulatory action. VAERS data must be interpreted with caution due to the inherent limitations of passive surveillance. VAERS is primarily a safety signal detection and hypothesis generating system. Generally, VAERS data cannot be used to determine if a vaccine caused an adverse event. VAERS data interpreted alone or out of context can lead to erroneous conclusions about cause and effect as well as the risk of adverse events occurring following vaccination. CDC makes VAERS data available to the public and readily accessible online. We describe fundamental vaccine safety concepts, provide an overview of VAERS for healthcare professionals who provide vaccinations and might want to report or better understand a vaccine adverse event, and explain how CDC and FDA analyze VAERS data. We also describe strengths and limitations, and address common misconceptions about VAERS. Information in this review will be helpful for healthcare professionals counseling patients, parents, and others on vaccine safety and benefit-risk balance of vaccination. PMID:26209838

  14. Remote health monitoring system for detecting cardiac disorders.

    PubMed

    Bansal, Ayush; Kumar, Sunil; Bajpai, Anurag; Tiwari, Vijay N; Nayak, Mithun; Venkatesan, Shankar; Narayanan, Rangavittal

    2015-12-01

    Remote health monitoring system with clinical decision support system as a key component could potentially quicken the response of medical specialists to critical health emergencies experienced by their patients. A monitoring system, specifically designed for cardiac care with electrocardiogram (ECG) signal analysis as the core diagnostic technique, could play a vital role in early detection of a wide range of cardiac ailments, from a simple arrhythmia to life threatening conditions such as myocardial infarction. The system that the authors have developed consists of three major components, namely, (a) mobile gateway, deployed on patient's mobile device, that receives 12-lead ECG signals from any ECG sensor, (b) remote server component that hosts algorithms for accurate annotation and analysis of the ECG signal and (c) point of care device of the doctor to receive a diagnostic report from the server based on the analysis of ECG signals. In the present study, the focus has been on developing a system capable of detecting critical cardiac events well in advance using an advanced remote monitoring system. A system of this kind is expected to have applications ranging from tracking wellness/fitness to detection of symptoms leading to fatal cardiac events.

  15. A new approach to power quality and electricity reliability monitoring-case study illustrations of the capabilities of the I-GridTM system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Divan, Deepak; Brumsickle, William; Eto, Joseph

    2003-04-01

    This report describes a new approach for collecting information on power quality and reliability and making it available in the public domain. Making this information readily available in a form that is meaningful to electricity consumers is necessary for enabling more informed private and public decisions regarding electricity reliability. The system dramatically reduces the cost (and expertise) needed for customers to obtain information on the most significant power quality events, called voltage sags and interruptions. The system also offers widespread access to information on power quality collected from multiple sites and the potential for capturing information on the impacts of power quality problems, together enabling a wide variety of analysis and benchmarking to improve system reliability. Six case studies demonstrate selected functionality and capabilities of the system, including: Linking measured power quality events to process interruption and downtime; Demonstrating the ability to correlate events recorded by multiple monitors to narrow and confirm the causes of power quality events; and Benchmarking power quality and reliability on a firm and regional basis.

  16. Work burden with remote monitoring of implantable cardioverter defibrillator: is it time for reimbursement policies?

    PubMed

    Papavasileiou, Lida P; Forleo, Giovanni B; Panattoni, Germana; Schirripa, Valentina; Minni, Valentina; Magliano, Giulia; Bellos, Kyriakos; Santini, Luca; Romeo, Francesco

    2013-02-01

    The efficacy and accuracy, as well as patients' satisfaction, of device remote monitoring are well demonstrated. However, the workload of remote monitoring management has not been estimated and reimbursement schemes are currently unavailable in most European countries. This study evaluates the workload associated with remote monitoring systems. A total of 154 consecutive implantable cardioverter defibrillator patients (age 66±12 years; 86.5% men) with a remote monitoring system were enrolled. Data on the clinician's workload required for the management of the patients were analyzed. A total of 1744 transmissions were received during a mean follow-up of 15.3±12.4 months. Median number of transmissions per patient was 11.3. There were 993 event-free transmissions, whereas 638 transmissions regarded one or more events (113 missed transmissions, 141 atrial events, 132 ventricular episodes, 299 heart failure-related transmissions, 14 transmissions regarding lead malfunction and 164 transmissions related to other events). In 402 cases telephonic contact was necessary, whereas in 68 cases an in-clinic visit was necessary and in 23 of them an in-clinic visit was prompted by the manufacturer due to technical issues of the transmitter. During follow-up, 316 work hours were required to manage the enrolled patients. Each month, a total of 14.9 h were spent on the remote monitoring of 154 patients (9.7 h for 100 patients monthly) with approximately 1.1±0.15 h per year for each patient. The clinician's work burden is high in patients with remote monitoring. In order to expand remote monitoring to all patients, reimbursement policies should be considered.
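
The workload figures reported above follow from simple normalization of the monthly monitoring hours, reproduced here as a check (variable names are ours):

```python
hours_per_month = 14.9   # reported monthly monitoring time for the cohort
patients = 154           # enrolled patients

# Normalize to the two rates quoted in the abstract.
hours_per_100_patients = hours_per_month / patients * 100   # ~9.7 h per 100 patients monthly
hours_per_patient_year = hours_per_month * 12 / patients    # ~1.16 h, near the reported ~1.1 h
```
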

  17. Autonomous Multi-Sensor Coordination: The Science Goal Monitor

    NASA Technical Reports Server (NTRS)

    Koratkar, Anuradha; Grosvenor, Sandy; Jung, John; Hess, Melissa; Jones, Jeremy

    2004-01-01

    Many dramatic earth phenomena are dynamic and coupled. In order to fully understand them, we need to obtain timely coordinated multi-sensor observations from widely dispersed instruments. Such a dynamic observing system must include the ability to schedule flexibly and react autonomously to science-user-driven events; understand the higher-level goals of a science-user-defined campaign; and coordinate various space-based and ground-based resources/sensors effectively and efficiently to achieve those goals. In order to capture transient events, such a 'sensor web' system must have an automated reactive capability built into its scientific operations. To do this, we must overcome a number of challenges inherent in infusing autonomy. The Science Goal Monitor (SGM) is a prototype software tool being developed to explore the nature of automation necessary to enable dynamic observing. The tools being developed in SGM improve our ability to autonomously monitor multiple independent sensors and coordinate reactions to better observe dynamic phenomena. The SGM system enables users to specify what to look for and how to react in descriptive rather than technical terms. The system monitors streams of data to identify occurrences of the key events previously specified by the science user. When an event occurs, the system autonomously coordinates the execution of the users' desired reactions between different sensors. The information can be used to rapidly respond to a variety of fast temporal events. Investigators will no longer have to rely on after-the-fact data analysis to determine what happened. Our paper describes a series of prototype demonstrations that we have developed using SGM and NASA's Earth Observing-1 (EO-1) satellite and the Earth Observing System's Aqua/Terra spacecraft's MODIS instruments.
Our demonstrations show the promise of coordinating data from different sources, analyzing the data for a relevant event, autonomously updating and rapidly obtaining a follow-on relevant image. SGM was used to investigate forest fires, floods and volcanic eruptions. We are now identifying new Earth science scenarios that will have more complex SGM reasoning. By developing and testing a prototype in an operational environment, we are also establishing and gathering metrics to gauge the success of automating science campaigns.

  18. Clinical evaluation of a noninvasive alarm system for nocturnal hypoglycemia.

    PubMed

    Skladnev, Victor N; Ghevondian, Nejhdeh; Tarnavskii, Stanislav; Paramalingam, Nirubasini; Jones, Timothy W

    2010-01-01

    The aim of this study was to evaluate the performance of a prototype noninvasive alarm system (HypoMon) for the detection of nocturnal hypoglycemia. A prospective cohort study evaluated an alarm system that included a sensor belt, a radio frequency transmitter for chest belt signals, and a receiver. The receiver incorporated integrated "real-time" algorithms designed to recognize hypoglycemia "signatures" in the physiological parameters monitored by the sensor belt. Fifty-two children and young adults with type 1 diabetes mellitus (T1DM) participated in this blinded, prospective, in-clinic, overnight study. Participants had a mean age of 16 years (standard deviation 2.1, range 12-20 years) and were asked to follow their normal meal and insulin routines for the day of the study. Participants had physiological parameters monitored overnight by a single HypoMon system. Their BG levels were also monitored overnight at regular intervals via an intravenous cannula and read on two independent Yellow Springs Instruments analyzers. Hypoglycemia was not induced by any manipulations of diabetes management; rather, the subjects were monitored overnight for "natural" occurrences of hypoglycemia. Performance analyses included comparing HypoMon system alarm times with allowed time windows associated with each hypoglycemic event. The primary recognition algorithm in the prototype alarm system performed at a level consistent with expectations based on prior user surveys. The HypoMon system correctly recognized 8 out of the 11 naturally occurring overnight hypoglycemic events and falsely alarmed on 13 out of the remaining 41 normal nights [sensitivity 73% (8/11), specificity 68% (28/41), positive predictive value 38%, negative predictive value 90%]. The prototype HypoMon shows potential as an adjunct method for noninvasive overnight monitoring for hypoglycemia events in young people with T1DM. 2010 Diabetes Technology Society.
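
The bracketed performance figures follow directly from the raw counts in the abstract (8 of 11 events detected; false alarms on 13 of the 41 event-free nights), recomputed here as a check:

```python
# Confusion-matrix cells from the abstract's counts.
tp, fn = 8, 3        # hypoglycemic events detected / missed (of 11)
fp = 13              # false alarms on normal nights
tn = 41 - fp         # normal nights without an alarm

sensitivity = tp / (tp + fn)   # 8/11  -> 73%
specificity = tn / (tn + fp)   # 28/41 -> 68%
ppv = tp / (tp + fp)           # 8/21  -> 38%
npv = tn / (tn + fn)           # 28/31 -> 90%
```

The low positive predictive value (38%) despite reasonable sensitivity reflects how rare the events were: only 11 hypoglycemic nights among 52.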

  19. Nested sampling at karst springs: from basic patterns to event triggered sampling and on-line monitoring.

    NASA Astrophysics Data System (ADS)

    Stadler, Hermann; Skritek, Paul; Zerobin, Wolfgang; Klock, Erich; Farnleitner, Andreas H.

    2010-05-01

    In recent years, global changes in ecosystems, population growth, and modifications of the legal framework within the EU have increased the need for qualitative groundwater and spring water monitoring, with the aim of continuing to supply consumers with high-quality drinking water in the future. Additionally, the demand for sustainable protection of drinking water resources has prompted the implementation of early warning systems and quality assurance networks in water supplies. In the field of hydrogeological investigations, event monitoring and event sampling constitute worst-case-scenario monitoring. Therefore, such tools are becoming more and more indispensable for obtaining detailed information about aquifer parameters and vulnerability. In the framework of water supplies, smart sampling designs combined with in-situ measurements of different parameters and on-line access can play an important role in early warning systems and quality surveillance networks. In this study, nested sampling tiers are presented, which were designed to cover the total system dynamics. Basic monitoring sampling (BMS), high frequency sampling (HFS) and automated event sampling (AES) were combined. BMS was organized with a monthly increment for at least two years, and HFS was performed during times of increased groundwater recharge (e.g. during snowmelt). At least one AES tier was embedded in this system. AES was enabled by cross-linking of hydrological stations, so the system could be run fully automated and could include real-time availability of data. By means of networking via Low Earth Orbiting Satellites (LEO satellites), data from the precipitation station (PS) in the catchment area are brought together with data from the spring sampling station (SSS) without the need for terrestrial infrastructure for communication and power supply.
Furthermore, the whole course of input and output parameters, such as precipitation (input) and discharge (output), and the status of the sampling system are transmitted via LEO satellites to a Central Monitoring Station (CMS), which can be linked with a web server to allow unlimited real-time data access. The automatically generated notice of an event is transmitted to a local service team of the sampling station via internet, GSM, GPRS or LEO satellites. If a GPRS network is available at the stations, the system could also be realized via that network. However, one great problem of these terrestrial communication systems is the risk of default when their networks are overloaded, as during flood events or thunderstorms. Therefore, in addition, it is necessary to have the possibility of transmitting the measured values via communication satellites when terrestrial infrastructure is not available. LEO satellites are especially useful in alpine regions because they have no dead spots, only occasional latency periods. In this work we combined in-situ measurements (precipitation, electrical conductivity, discharge, water temperature, spectral absorption coefficient, turbidity) with time increments from 1 to 15 minutes with data from the different sampling tiers (environmental isotopes, chemical, mineralogical and bacteriological data).

  20. Automatic detection and notification of "wrong patient-wrong location" errors in the operating room.

    PubMed

    Sandberg, Warren S; Häkkinen, Matti; Egan, Marie; Curran, Paige K; Fairbrother, Pamela; Choquette, Ken; Daily, Bethany; Sarkka, Jukka-Pekka; Rattner, David

    2005-09-01

    When procedures and processes to assure patient location based on human performance do not work as expected, patients are brought incrementally closer to a possible "wrong patient-wrong procedure" error. We developed a system for automated patient location monitoring and management. Real-time data from an active infrared/radio frequency identification tracking system provides patient location data that are robust and can be compared with an "expected process" model to automatically flag wrong-location events as soon as they occur. The system also generates messages that are automatically sent to process managers via the hospital paging system, thus creating an active alerting function to annunciate errors. We deployed the system to detect and annunciate "patient-in-wrong-OR" events. The system detected all "wrong-operating room (OR)" events, and all "wrong-OR" locations were correctly assigned within 0.50 ± 0.28 minutes (mean ± SD). This corresponded to the measured latency of the tracking system. All wrong-OR events were correctly annunciated via the paging function. This experiment demonstrates that current technology can automatically collect sufficient data to remotely monitor patient flow through a hospital, provide decision support based on predefined rules, and automatically notify stakeholders of errors.
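
The "expected process" comparison described above reduces to checking each tag read against the patient's scheduled location and paging on a mismatch. The patient identifiers, room names, and notify hook below are illustrative assumptions, not the hospital's actual data model.

```python
# Hypothetical expected-process model: scheduled OR per patient.
EXPECTED_OR = {"patient-17": "OR-3", "patient-22": "OR-5"}

def check_location(patient_id, observed_room, notify):
    """Return True if the observed room matches the expected process model;
    otherwise annunciate a wrong-location event via the notify hook."""
    expected = EXPECTED_OR.get(patient_id)
    if expected is not None and observed_room != expected:
        notify(f"{patient_id} detected in {observed_room}, expected {expected}")
        return False
    return True

pages = []
ok = check_location("patient-17", "OR-5", pages.append)  # wrong-OR event
```

In the deployed system the notify hook would be the hospital paging gateway, and detection latency is bounded by how quickly the tracking system delivers the tag read.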

  1. Korea Integrated Seismic System tool(KISStool) for seismic monitoring and data sharing at the local data center

    NASA Astrophysics Data System (ADS)

    Park, J.; Chi, H. C.; Lim, I.; Jeong, B.

    2011-12-01

    The Korea Integrated Seismic System (KISS) is a back-bone seismic network which distributes seismic data to different organizations in near-real time in Korea. The association of earthquake monitoring institutes has shared seismic data through the KISS since 2003. Local data centers operating several remote stations are required to send their free-field seismic data to NEMA (National Emergency Management Agency) under the Korean law on countermeasures against earthquake hazards. An efficient tool is therefore important for local data centers that want to rapidly detect local seismic intensity and to transfer seismic event information, including PGA, PGV, dominant frequency of the P-wave, raw data, etc., to the nationwide data center. We developed the KISStool (Korea Integrated Seismic System tool) for easy and convenient operation of a seismic network at a local data center. The KISStool can monitor real-time waveforms by clicking a station icon on the Google map, and the real-time variation of PGA, PGV, and other data by opening the bar-type monitoring section. Using the KISStool, any local data center can transfer event information to NEMA, KMA (Korea Meteorological Agency) or other institutes through the KISS using UDP or TCP/IP protocols. The KISStool is one of the most efficient methods to monitor and transfer earthquake event information at a local data center in Korea. KIGAM will support the KISStool not only for the members of the monitoring association but also for local governments.
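
A hedged sketch of forwarding a local event summary over UDP, one of the two transports (UDP, TCP/IP) mentioned for the KISS. The JSON message layout, field names, and endpoint are illustrative assumptions, not the actual KISS wire format.

```python
import json
import socket

def send_event_udp(host, port, station, pga, pgv, p_dominant_freq_hz):
    """Serialize an event summary and send it to a data-center endpoint."""
    msg = json.dumps({
        "station": station,
        "pga": pga,                                # peak ground acceleration
        "pgv": pgv,                                # peak ground velocity
        "p_dominant_freq_hz": p_dominant_freq_hz,  # dominant frequency of P-wave
    }).encode("utf-8")
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.sendto(msg, (host, port))
    return msg

sent = send_event_udp("127.0.0.1", 9999, station="ST01",
                      pga=0.012, pgv=0.0009, p_dominant_freq_hz=4.2)
```

UDP's fire-and-forget semantics suit low-latency intensity alerts; bulk raw-data transfer would use the TCP/IP path instead.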

  2. A modular telerobotic task execution system

    NASA Technical Reports Server (NTRS)

    Backes, Paul G.; Tso, Kam S.; Hayati, Samad; Lee, Thomas S.

    1990-01-01

    A telerobot task execution system is proposed to provide a general parametrizable task execution capability. The system includes communication with the calling system, e.g., a task planning system, and single- and dual-arm sensor-based task execution with monitoring and reflexing. A specific task is described by specifying the parameters to various available task execution modules including trajectory generation, compliance control, teleoperation, monitoring, and sensor fusion. Reflex action is achieved by finding the corresponding reflex action in a reflex table when an execution event has been detected with a monitor.
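
The monitor-and-reflex pattern described above can be sketched as a table lookup: a reflex table maps detected execution events to corrective actions, and a monitor hit triggers the matching reflex. The event and action names below are illustrative assumptions, not the system's actual vocabulary.

```python
# Hypothetical reflex table: execution event -> reflex action.
REFLEX_TABLE = {
    "excess_contact_force": "retract_arm",
    "joint_limit_reached": "halt_motion",
    "tracking_error": "switch_to_teleoperation",
}

def on_monitor_event(event):
    """Look up the reflex action; unknown events fall back to a safe stop."""
    return REFLEX_TABLE.get(event, "safe_stop")
```

Keeping the mapping in a table rather than in code is what makes the task "parametrizable": a calling planner can supply a different reflex table per task without changing the execution modules.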

  3. The Evryscopes: monitoring the entire sky for exciting events

    NASA Astrophysics Data System (ADS)

    Law, Nicholas; Corbett, Hank; Howard, Ward S.; Fors, Octavi; Ratzloff, Jeff; Barlow, Brad; Hermes, JJ

    2018-01-01

    The Evryscope is a new type of array telescope which monitors the entire accessible sky in each exposure. The system, with 700 MPix covering an 8000-square-degree field of view, is building many-year-length, high-cadence light curves for every accessible object brighter than ∼16th magnitude. Every night, we add 600 million object detections to our databases, including exoplanet transits, microlensing events, nearby extragalactic transients, and a wide range of other short timescale events. I will present our science plans, the status of our current Evryscope systems (operational in Chile and soon California), the big-data analysis required to explore the petabyte-scale dataset we are collecting over the next few years, and the first results from the telescopes.

  4. 40 CFR Appendix A to Subpart Uuuuu - Hg Monitoring Provisions

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    Excerpt (40 CFR Part 63, Subpart UUUUU, Appendix A, “Hg Monitoring Provisions,” 2012-07-01): a backup monitoring system may be kept in “cold standby” and may be reinstalled in the event of a primary monitoring system outage.

  5. Grid Stability Awareness System (GSAS) Final Scientific/Technical Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Feuerborn, Scott; Ma, Jian; Black, Clifton

    The project team developed a software suite named Grid Stability Awareness System (GSAS) for power system near real-time stability monitoring and analysis based on synchrophasor measurements. The software suite consists of five analytical tools: an oscillation monitoring tool, a voltage stability monitoring tool, a transient instability monitoring tool, an angle difference monitoring tool, and an event detection tool. These tools have been integrated into one framework to provide power grid operators with both real-time and near real-time stability status of a power grid and historical information about system stability status. These tools are being considered for real-time use in the operation environment.

  6. Event detection in an assisted living environment.

    PubMed

    Stroiescu, Florin; Daly, Kieran; Kuris, Benjamin

    2011-01-01

    This paper presents the design of a wireless event detection and in-building location awareness system. The system's architecture is based on a body-worn sensor that detects events such as falls where they occur in an assisted living environment. This involves developing event detection algorithms and transmitting detected events wirelessly to an in-house network based on the 802.15.4 protocol. The network then generates alerts both within the assisted living facility and remotely to an offsite monitoring facility. The focus of this paper is the design of the system architecture and the compliance challenges in applying this technology.

  7. An urban observatory for quantifying phosphorus and suspended solid loads in combined natural and stormwater conveyances.

    PubMed

    Melcher, Anthony A; Horsburgh, Jeffery S

    2017-06-01

    Water quality in urban streams and stormwater systems is highly dynamic, both spatially and temporally, and can change drastically during storm events. Infrequent grab samples commonly collected for estimating pollutant loadings are insufficient to characterize water quality in many urban water systems. In situ water quality measurements are being used as surrogates for continuous pollutant load estimates; however, relatively few studies have tested the validity of surrogate indicators in urban stormwater conveyances. In this paper, we describe an observatory aimed at demonstrating the infrastructure required for surrogate monitoring in urban water systems and for capturing the dynamic behavior of stormwater-driven pollutant loads. We describe the instrumentation of multiple, autonomous water quality and quantity monitoring sites within an urban observatory. We also describe smart and adaptive sampling procedures implemented to improve data collection for developing surrogate relationships and for capturing the temporal and spatial variability of pollutant loading events in urban watersheds. Results show that the observatory is able to capture short-duration storm events within multiple catchments and, through inter-site communication, sampling efforts can be synchronized across multiple monitoring sites.

  8. MACRO: a combined microchip-PCR and microarray system for high-throughput monitoring of genetically modified organisms.

    PubMed

    Shao, Ning; Jiang, Shi-Meng; Zhang, Miao; Wang, Jing; Guo, Shu-Juan; Li, Yang; Jiang, He-Wei; Liu, Cheng-Xi; Zhang, Da-Bing; Yang, Li-Tao; Tao, Sheng-Ce

    2014-01-21

    The monitoring of genetically modified organisms (GMOs) is a primary step of GMO regulation. However, there is presently a lack of effective and high-throughput methodologies for specifically and sensitively monitoring most of the commercialized GMOs. Herein, we developed a multiplex amplification on a chip with readout on an oligo microarray (MACRO) system specifically for convenient GMO monitoring. This system is composed of a microchip for multiplex amplification and an oligo microarray for the readout of multiple amplicons, containing a total of 91 targets (18 universal elements, 20 exogenous genes, 45 events, and 8 endogenous reference genes) that covers 97.1% of all GM events commercialized up to 2012. We demonstrate that the specificity of MACRO is ~100%, with a limit of detection (LOD) suitable for real-world applications. Moreover, the results obtained with MACRO for simulated complex samples and blind samples were 100% consistent with expectations and with independently performed real-time PCRs, respectively. Thus, we believe MACRO is the first system that can be applied for effectively monitoring the majority of the commercialized GMOs in a single test.

  9. Identification of unusual events in multichannel bridge monitoring data using wavelet transform and outlier analysis

    NASA Astrophysics Data System (ADS)

    Omenzetter, Piotr; Brownjohn, James M. W.; Moyo, Pilate

    2003-08-01

    Continuously operating instrumented structural health monitoring (SHM) systems are becoming a practical alternative to visual inspection for assessment of the condition and soundness of civil infrastructure. However, converting the large amount of data from an SHM system into usable information is a great challenge to which special signal processing techniques must be applied. This study is devoted to identification of abrupt, anomalous, and potentially onerous events in the time histories of static, hourly sampled strains recorded by a multi-sensor SHM system installed in a major bridge structure in Singapore and operating continuously for a long time. Such events may result, among other causes, from sudden settlement of foundations, ground movement, excessive traffic load, or failure of post-tensioning cables. A method of outlier detection in multivariate data has been applied to the problem of finding and localizing sudden events in the strain data. For sharp discrimination of abrupt strain changes from slowly varying ones, the wavelet transform has been used. The proposed method has been successfully tested using known events recorded during construction of the bridge, and later effectively used for detection of anomalous post-construction events.
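A toy version of the abrupt-event idea: first differences of the hourly strain series stand in for the finest-scale wavelet detail coefficients, and a simple z-score test stands in for the paper's multivariate outlier analysis. This is a sketch of the general approach, not the authors' method:

```python
from statistics import mean, stdev

def abrupt_events(strain, z_thresh=4.0):
    """Flag sample indices where the strain series jumps abruptly.
    First differences emphasize sharp changes over slow drift;
    a z-score on the differences flags statistical outliers."""
    diffs = [b - a for a, b in zip(strain, strain[1:])]
    mu, sigma = mean(diffs), stdev(diffs)
    if sigma == 0:
        return []  # perfectly steady record: no anomalies
    return [i + 1 for i, d in enumerate(diffs) if abs(d - mu) > z_thresh * sigma]
```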

  10. ILI-related school dismissal monitoring system: an overview and assessment.

    PubMed

    Kann, Laura; Kinchen, Steve; Modzelski, Bill; Sullivan, Madeline; Carr, Dana; Zaza, Stephanie; Graffunder, Corinne; Cetron, Marty

    2012-06-01

    This report provides an overview and assessment of the School Dismissal Monitoring System (SDMS) that was developed by the Centers for Disease Control and Prevention (CDC) and the US Department of Education (ED) to monitor influenza-like illness (ILI)-related school dismissals during the 2009-2010 school year in the United States. SDMS was developed with considerable consultation with CDC's and ED's partners. Further, each state appointed a single school dismissal monitoring contact, even if that state also had its own school-dismissal monitoring system in place. The SDMS received data from three sources: (1) direct reports submitted through CDC's Web site, (2) state monitoring systems, and (3) media scans and online searches. All cases identified through any of the three data sources were verified. Between August 3, 2009, and December 18, 2009, a total of 812 dismissal events (ie, a single school dismissal or dismissal of all schools in a district) were reported in the United States. These dismissal events had an impact on 1947 schools, approximately 623 616 students, and 40 521 teachers. The SDMS yielded real-time, national summary data that were used widely throughout the US government for situational awareness to assess the impact of CDC guidance and community mitigation efforts and to inform the development of guidance, resources, and tools for schools.

  11. Multileaf collimator performance monitoring and improvement using semiautomated quality control testing and statistical process control

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Létourneau, Daniel, E-mail: daniel.letourneau@rmp.uh.on.ca; McNiven, Andrea; Keller, Harald

    2014-12-15

    Purpose: High-quality radiation therapy using highly conformal dose distributions and image-guided techniques requires optimum machine delivery performance. In this work, a monitoring system for multileaf collimator (MLC) performance, integrating semiautomated MLC quality control (QC) tests and statistical process control tools, was developed. The MLC performance monitoring system was used for almost a year on two commercially available MLC models. Control charts were used to establish MLC performance and assess the test frequency required to achieve a given level of performance. MLC-related interlocks and servicing events were recorded during the monitoring period and were investigated as indicators of MLC performance variations. Methods: The QC test developed as part of the MLC performance monitoring system uses 2D megavoltage images (acquired using an electronic portal imaging device) of 23 fields to determine the location of the leaves with respect to the radiation isocenter. The precision of the MLC performance monitoring QC test and of the MLC itself was assessed by detecting the MLC leaf positions on 127 megavoltage images of a static field. After initial calibration, the MLC performance monitoring QC test was performed 3–4 times/week over a period of 10–11 months to monitor positional accuracy of individual leaves for two different MLC models. Analysis of test results was performed using individuals control charts per leaf with control limits computed based on the measurements as well as two sets of specifications of ±0.5 and ±1 mm. Out-of-specification and out-of-control leaves were automatically flagged by the monitoring system and reviewed monthly by physicists. MLC-related interlocks reported by the linear accelerator and servicing events were recorded to help identify potential causes of nonrandom MLC leaf positioning variations.
Results: The precision of the MLC performance monitoring QC test and of the MLC itself was within ±0.22 mm for most MLC leaves, and the majority of the apparent leaf motion was attributed to beam spot displacements between irradiations. The MLC QC test was performed 193 and 162 times over the monitoring period for the studied units, and recalibration had to be repeated up to three times on one of these units. For both units, the rate of MLC interlocks was moderately associated with MLC servicing events. The strongest association with MLC performance was observed between the MLC servicing events and the total number of out-of-control leaves. The average elapsed time for which the number of out-of-specification or out-of-control leaves was within a given performance threshold was computed and used to assess the adequacy of the MLC test frequency. Conclusions: An MLC performance monitoring system has been developed and implemented to acquire high-quality QC data at high frequency. This is enabled by the relatively short acquisition time for the images and by automatic image analysis. The monitoring system was also used to record and track the rate of MLC-related interlocks and servicing events. MLC performance for two commercially available MLC models has been assessed, and the results support a monthly test frequency for the widely accepted ±1 mm specifications. A higher QC test frequency is, however, required to maintain tighter specifications and in-control behavior.

  12. Imaging Fracking Zones by Microseismic Reverse Time Migration for Downhole Microseismic Monitoring

    NASA Astrophysics Data System (ADS)

    Lin, Y.; Zhang, H.

    2015-12-01

    Hydraulic fracturing is an engineering tool used to create fractures in order to better recover oil and gas from low-permeability reservoirs. Because microseismic events are generally associated with fracture development, microseismic monitoring has been used to evaluate the fracking process. Microseismic monitoring generally relies on locating microseismic events to understand the spatial distribution of fractures. In a multi-stage fracturing treatment, fractures created in earlier stages are strong scatterers in the medium and can induce strong scattered waves on the waveforms of microseismic events induced during later stages. In this study, we propose to take advantage of microseismic scattered waves to image fracking zones using the seismic reverse time migration (RTM) method. For downhole microseismic monitoring, which involves installing a string of seismic sensors in a borehole near the injection well, the observation geometry is similar to the VSP (vertical seismic profile) system. For this reason, we adapt the VSP migration method from the common shot gather to the common event gather. The microseismic reverse time migration method involves solving the wave equation both forward and backward in time for each microseismic event. At the current stage, the microseismic RTM is based on the 2D acoustic wave equation (Zhang and Sun, 2008), solved by the finite-difference method with a PML absorbing boundary condition applied to suppress reflections from artificial boundaries. Additionally, we use local wavefield decomposition instead of the cross-correlation imaging condition to suppress imaging noise. To test the method, we created a synthetic dataset for a downhole microseismic monitoring system with multiple fracking stages. It shows that microseismic migration using an individual event is able to clearly reveal the fracture zone: the shorter the distance between the fractures and the microseismic event, the clearer the migration image. Summing the migration images over many events better reveals the fracture development during the hydraulic fracturing treatment. The synthetic test shows that microseismic migration is able to characterize the fracturing zone along with microseismic events. We will extend the method from 2D to 3D as well as from acoustic to elastic, and apply it to real microseismic data.

  13. Method and apparatus for single-stepping coherence events in a multiprocessor system under software control

    DOEpatents

    Blumrich, Matthias A.; Salapura, Valentina

    2010-11-02

    An apparatus and method are disclosed for single-stepping coherence events in a multiprocessor system under software control in order to monitor the behavior of a memory coherence mechanism. Single-stepping coherence events in a multiprocessor system is made possible by adding one or more step registers. By accessing these step registers, one or more coherence requests are processed by the multiprocessor system. The step registers determine if the snoop unit will operate by proceeding in a normal execution mode, or operate in a single-step mode.
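A software model of the mechanism the patent describes, a step register that lets software meter out coherence events one (or a few) at a time while the snoop unit otherwise runs normally, might look like this. Register and method names are invented for illustration:

```python
class SnoopUnit:
    """Toy model of software-controlled single-stepping of coherence
    events. A step register of None means normal execution mode;
    an integer means that many coherence requests may be processed
    before the unit stalls awaiting software."""

    def __init__(self):
        self.step_register = None
        self.processed = []

    def arm_single_step(self, n=1):
        """Software writes the step register to allow n events."""
        self.step_register = n

    def deliver(self, request):
        """Attempt to process one coherence request; return whether
        it was processed or stalled by the step register."""
        if self.step_register is not None:
            if self.step_register == 0:
                return False          # stalled until software re-arms
            self.step_register -= 1
        self.processed.append(request)
        return True
```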

  14. Service Management Database for DSN Equipment

    NASA Technical Reports Server (NTRS)

    Zendejas, Silvino; Bui, Tung; Bui, Bach; Malhotra, Shantanu; Chen, Fannie; Wolgast, Paul; Allen, Christopher; Luong, Ivy; Chang, George; Sadaqathulla, Syed

    2009-01-01

    This data- and event-driven persistent storage system leverages the use of commercial software provided by Oracle for portability, ease of maintenance, scalability, and ease of integration with embedded, client-server, and multi-tiered applications. In this role, the Service Management Database (SMDB) is a key component of the overall end-to-end process involved in the scheduling, preparation, and configuration of the Deep Space Network (DSN) equipment needed to perform the various telecommunication services the DSN provides to its customers worldwide. SMDB makes efficient use of triggers, stored procedures, queuing functions, e-mail capabilities, data management, and Java integration features provided by the Oracle relational database management system. SMDB uses a third normal form schema design that allows for simple data maintenance procedures and thin layers of integration with client applications. The software provides an integrated event logging system with ability to publish events to a JMS messaging system for synchronous and asynchronous delivery to subscribed applications. It provides a structured classification of events and application-level messages stored in database tables that are accessible by monitoring applications for real-time monitoring or for troubleshooting and analysis over historical archives.

  15. Fog-Based Two-Phase Event Monitoring and Data Gathering in Vehicular Sensor Networks

    PubMed Central

    Yang, Fan; Su, Jinsong; Zhou, Qifeng; Wang, Tian; Zhang, Lu; Xu, Yifan

    2017-01-01

    Vehicular nodes are equipped with more and more sensing units, and a large amount of sensing data is generated. Recently, more and more research has considered cooperative urban sensing the heart of intelligent and green city traffic management. The key components of such a platform are a pervasive vehicular sensing system combined with a central control and analysis system, in which data gathering is a fundamental component. However, data gathering and monitoring are challenging in vehicular sensor networks because of the large amount of data and the dynamic nature of the network. In this paper, we propose an efficient continuous event-monitoring and data-gathering framework based on fog nodes in vehicular sensor networks. A fog-based two-level threshold strategy is adopted to suppress unnecessary data uploads and transmissions. In the monitoring phase, nodes sense the environment in a low-cost sensing mode and generate sensed data. When the probability of an event is high and exceeds some threshold, nodes transfer to the event-checking phase, and some nodes are selected to transfer to a deep sensing mode to generate more accurate data about the environment. Furthermore, the framework adaptively adjusts the threshold to upload a suitable amount of data for decision making while suppressing unnecessary message transmissions. Simulation results showed that the proposed scheme could reduce data transmissions by more than 84 percent compared with other existing algorithms while still detecting events and gathering the event data. PMID:29286320
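The two-phase threshold logic can be sketched as a small state machine: cheap monitoring until the estimated event probability crosses a lower threshold, deep sensing while it stays above, upload only past an upper threshold. The threshold values and state names below are illustrative, not taken from the paper:

```python
def monitor_step(prob, state, low=0.3, high=0.7):
    """Advance the two-phase monitor by one observation.
    Returns the next state and whether data should be uploaded."""
    if state == "monitor":
        if prob >= low:
            state = "event_check"   # switch selected nodes to deep sensing
    elif state == "event_check":
        if prob < low:
            state = "monitor"       # fall back to low-cost sensing
    upload = state == "event_check" and prob >= high
    return state, upload
```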

  16. Indicator Systems for School and Teacher Evaluation: Fire-Fighting It Is!

    ERIC Educational Resources Information Center

    Fitz-Gibbon, C. T.

    In 1979, Gene Glass suggested that it might not be possible to evaluate schools or to create widely applicable research findings, but that the complexity of education was such that merely "fire-fighting," establishing monitoring systems to alert about educational events, was the best approach. In the United Kingdom, monitoring systems…

  17. Demonstrating the Value of Near Real-time Satellite-based Earth Observations in a Research and Education Framework

    NASA Astrophysics Data System (ADS)

    Chiu, L.; Hao, X.; Kinter, J. L.; Stearn, G.; Aliani, M.

    2017-12-01

    The launch of the GOES-16 series provides an opportunity to advance near real-time applications in natural hazard detection, monitoring, and warning. This study demonstrates the capability and value of receiving real-time satellite-based Earth observations over fast terrestrial networks and processing high-resolution remote sensing data in a university environment. The demonstration system includes four components: 1) near real-time data receiving and processing; 2) data analysis and visualization; 3) event detection and monitoring; and 4) information dissemination. Various tools are developed and integrated to receive and process GRB data in near real-time, produce images and value-added data products, and detect and monitor extreme weather events such as hurricanes, fires, flooding, fog, and lightning. A web-based application system is developed to disseminate near real-time satellite images and data products. The images are generated in a GIS-compatible format (GeoTIFF) to enable convenient use and integration in various GIS platforms. This study enhances capacities for undergraduate and graduate education in Earth system and climate sciences and related applications, helping students understand the basic principles and technology of real-time applications with remote sensing measurements. It also provides an integrated platform for near real-time monitoring of extreme weather events, which is helpful for various user communities.

  18. Time until diagnosis of clinical events with different remote monitoring systems in Implantable Cardioverter-Defibrillator patients.

    PubMed

    Söth-Hansen, Malene; Witt, Christoffer Tobias; Rasmussen, Mathis; Kristensen, Jens; Gerdes, Christian; Nielsen, Jens Cosedis

    2018-05-24

    Remote monitoring (RM) is an established technology integrated into routine follow-up of patients with an implantable cardioverter-defibrillator (ICD). Current RM systems differ in transmission frequency and alert definition. We aimed to compare the time difference between detection and acknowledgement of clinically relevant events across four RM systems. We analyzed the time delay between detection of ventricular arrhythmic and technical events by the ICD and acknowledgement by hospital staff in 1802 consecutive patients followed with RM during September 2014 - August 2016. Devices from Biotronik (BIO, n=374), Boston Scientific (BSC, n=196), Medtronic (MDT, n=468), and St Jude Medical (SJM, n=764) were included. We identified all events from RM webpages and their acknowledgement with RM or at in-clinic follow-up. Events occurring during weekends were excluded. We included 3472 events. The proportion of events acknowledged within 24 hours was 72%, 23%, 18%, and 65% with BIO, BSC, MDT, and SJM, respectively, with median times of 13, 222, 163, and 18 hours from detection to acknowledgement (p<0.001 for both comparisons between manufacturers). Including only events transmitted as alerts by RM, 72%, 68%, 61%, and 65% for BIO, BSC, MDT, and SJM, respectively, were acknowledged within 24 hours. Variation in time to acknowledgement of ventricular tachyarrhythmia episodes not treated with shock therapy was the primary cause of the difference between manufacturers. Significant and clinically relevant differences in time delay from event detection to acknowledgement exist between RM systems. Varying definitions of which events RM transmits as alerts are important for the differences observed. Copyright © 2018. Published by Elsevier Inc.
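The two per-manufacturer statistics reported above (the share of events acknowledged within 24 hours and the median detection-to-acknowledgement delay) are straightforward to compute from a list of per-event delays:

```python
from statistics import median

def ack_summary(delays_hours):
    """Return (fraction acknowledged within 24 h, median delay in hours)
    for a list of detection-to-acknowledgement delays."""
    within_24h = sum(1 for d in delays_hours if d <= 24) / len(delays_hours)
    return within_24h, median(delays_hours)
```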

  19. Synchronous parallel system for emulation and discrete event simulation

    NASA Technical Reports Server (NTRS)

    Steinman, Jeffrey S. (Inventor)

    1992-01-01

    A synchronous parallel system for emulation and discrete event simulation having parallel nodes responds to received messages at each node by generating event objects having individual time stamps, stores only the changes to state variables of the simulation object attributable to the event object, and produces corresponding messages. The system refrains from transmitting the messages and changing the state variables while it determines whether the changes are superseded, and then stores the unchanged state variables in the event object for later restoral to the simulation object if called for. This determination preferably includes sensing the time stamp of each new event object and determining which new event object has the earliest time stamp as the local event horizon, determining the earliest local event horizon of the nodes as the global event horizon, and ignoring the events whose time stamps are less than the global event horizon. Host processing between the system and external terminals enables such a terminal to query, monitor, command or participate with a simulation object during the simulation process.
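The event-horizon rule in the abstract can be sketched directly: each node's local horizon is the earliest time stamp among its newly generated events, and the global horizon is the minimum over nodes. This is a simplified reading of one element of the patent, not its full synchronization protocol:

```python
def global_event_horizon(pending_timestamps_per_node):
    """Earliest time stamp across all nodes' newly generated events:
    each node's minimum is its local event horizon, and the minimum
    of those is the global event horizon."""
    local_horizons = [min(stamps) for stamps in pending_timestamps_per_node if stamps]
    return min(local_horizons)

def split_by_horizon(events, horizon):
    """Partition (timestamp, payload) events into those at or before
    the horizon and those after it (a toy commit/defer rule)."""
    commit = [e for e in events if e[0] <= horizon]
    defer = [e for e in events if e[0] > horizon]
    return commit, defer
```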

  20. Synchronous Parallel System for Emulation and Discrete Event Simulation

    NASA Technical Reports Server (NTRS)

    Steinman, Jeffrey S. (Inventor)

    2001-01-01

    A synchronous parallel system for emulation and discrete event simulation having parallel nodes responds to received messages at each node by generating event objects having individual time stamps, stores only the changes to the state variables of the simulation object attributable to the event object and produces corresponding messages. The system refrains from transmitting the messages and changing the state variables while it determines whether the changes are superseded, and then stores the unchanged state variables in the event object for later restoral to the simulation object if called for. This determination preferably includes sensing the time stamp of each new event object and determining which new event object has the earliest time stamp as the local event horizon, determining the earliest local event horizon of the nodes as the global event horizon, and ignoring events whose time stamps are less than the global event horizon. Host processing between the system and external terminals enables such a terminal to query, monitor, command or participate with a simulation object during the simulation process.

  1. Instruments for Deep Space Weather Prediction and Science

    NASA Astrophysics Data System (ADS)

    DeForest, C. E.; Laurent, G.

    2018-02-01

    We discuss remote space weather monitoring system concepts that could mount on the Deep Space Gateway and provide predictive capability for space weather events including SEP events and CME crossings, and advance heliophysics of the solar wind.

  2. Monitoring activities of satellite data processing services in real-time with SDDS Live Monitor

    NASA Astrophysics Data System (ADS)

    Duc Nguyen, Minh

    2017-10-01

    This work describes Live Monitor, the monitoring subsystem of SDDS - an automated system for space experiment data processing, storage, and distribution created at SINP MSU. Live Monitor allows operators and developers of satellite data centers to quickly identify errors that occur in data processing and to prevent further consequences caused by those errors. All activities of the whole data processing cycle are illustrated via a web interface in real time. Notification messages are delivered to the responsible people via email and the Telegram messenger service. The flexible monitoring mechanism implemented in Live Monitor allows us to dynamically change and control the events shown on the web interface on demand. Physicists whose space weather analysis models run on satellite data provided by SDDS can use the developed RESTful API to monitor their own events and deliver notification messages customized to their needs.

  3. Alternatives for Laboratory Measurement of Aerosol Samples from the International Monitoring System of the CTBT

    NASA Astrophysics Data System (ADS)

    Miley, H.; Forrester, J. B.; Greenwood, L. R.; Keillor, M. E.; Eslinger, P. W.; Regmi, R.; Biegalski, S.; Erikson, L. E.

    2013-12-01

    The aerosol samples taken from the CTBT International Monitoring System stations are measured in the field with a minimum detectable concentration (MDC) of ~30 microBq/m3 of Ba-140. This is sufficient to detect far less than 1 kt of aerosol fission products in the atmosphere when the station is in the plume from such an event. Recent thinking about minimizing the potential source region (PSR) of a detection has led to a desire for multi-station or multi-time-period detections. These would be connected through the concept of 'event formation', analogous to event formation in seismic event study. However, to form such events, samples from the nearest neighbors of the detection would require re-analysis by a more sensitive laboratory to gain a substantially lower MDC and potentially find radionuclide concentrations undetected by the station. The authors will present recent laboratory work with air filters showing various cost-effective means of enhancing laboratory sensitivity.

  4. Biotelemetry for Monitoring Electrocardiograms during Athletic Events and Stress Tests

    ERIC Educational Resources Information Center

    Mitchell, B. W.; Thomasson, G. O.

    1975-01-01

    This article discusses a study attempting to determine if a biotelemetry system developed for use on chickens could be suitable for monitoring electrocardiograms of humans during exercise. Techniques for its use are reviewed. (JS)

  5. Karst aquifer characterization using geophysical remote sensing of dynamic recharge events

    NASA Astrophysics Data System (ADS)

    Grapenthin, R.; Bilek, S. L.; Luhmann, A. J.

    2017-12-01

    Geophysical monitoring techniques, long used to make significant advances in a wide range of deeper Earth science disciplines, are now being employed to track surficial processes such as landslide, glacier, and river flow. Karst aquifers are another important hydrologic resource that can benefit from geophysical remote sensing, as this monitoring allows for safe, noninvasive karst conduit measurements. Conduit networks are typically poorly constrained, let alone the processes that occur within them. Geophysical monitoring can also provide a regionally integrated analysis to characterize subsurface architecture and to understand the dynamics of flow and recharge processes in karst aquifers. Geophysical signals are likely produced by several processes during recharge events in karst aquifers. For example, pressure pulses occur when water enters conduits that are full of water, and experiments suggest seismic signals result from this process. Furthermore, increasing water pressure in conduits during recharge events increases the load applied to conduit walls, which deforms the surrounding rock to yield measurable surface displacements. Measurable deformation should also occur with mass loading, with subsidence and rebound signals associated with increases and decreases of water mass stored in the aquifer, respectively. Additionally, geophysical signals will likely arise with turbulent flow and pore pressure change in the rock surrounding conduits. Here we present seismic data collected during a pilot study of controlled and natural recharge events in a karst aquifer system near Bear Spring, near Eyota, MN, USA, as well as preliminary model results regarding the processes described above.
In addition, we will discuss an upcoming field campaign where we will use seismometers, tiltmeters, and GPS instruments to monitor for recharge-induced responses in a FL, USA karst system with existing cave maps, coupling these geophysical observations with hydrologic and meteorologic data to map and characterize conduits and other features of the larger karst system and to monitor subsurface flow dynamics during recharge events.

  6. DOE Program on Seismic Characterization for Regions of Interest to CTBT Monitoring

    DTIC Science & Technology

    1995-08-14

    processing of the monitoring network data). While developing and testing the corrections and other parameters needed by the automated processing systems...the secondary network. Parameters tabulated in the knowledge base must be appropriate for routine automated processing of network data, and must also...operation of the PNDC, as well as to results of investigations of "special events" (i.e., those events that fail to locate or discriminate during automated

  7. Passive (Micro-) Seismic Event Detection by Identifying Embedded "Event" Anomalies Within Statistically Describable Background Noise

    NASA Astrophysics Data System (ADS)

    Baziw, Erick; Verbeek, Gerald

    2012-12-01

    Among engineers there is considerable interest in the real-time identification of "events" within time series data with a low signal-to-noise ratio. This is especially true for acoustic emission analysis, which is utilized to assess the integrity and safety of many structures and is also applied in the field of passive seismic monitoring (PSM). Here an array of seismic receivers is used to acquire acoustic signals to monitor locations where seismic activity is expected: underground excavations, deep open pits and quarries, reservoirs into which fluids are injected or from which fluids are produced, permeable subsurface formations, or sites of large underground explosions. The most important element of PSM is event detection: the monitoring of seismic acoustic emissions is a continuous, real-time process which typically runs 24 h a day, 7 days a week, and therefore a PSM system with poor event detection can easily acquire terabytes of useless data while failing to identify crucial acoustic events. This paper outlines a new algorithm developed for this application, the so-called SEED™ (Signal Enhancement and Event Detection) algorithm. The SEED™ algorithm uses real-time Bayesian recursive estimation digital filtering techniques for PSM signal enhancement and event detection.
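
    The abstract names Bayesian recursive estimation as the core of SEED™ without giving details (the algorithm itself is proprietary). As a much-simplified sketch of the general idea, the following toy detector recursively updates background-noise statistics and flags samples that deviate beyond a chosen multiple of the estimated noise level; the parameters k and alpha are invented for illustration and are not the SEED™ formulation:

```python
import math

def detect_events(samples, k=5.0, alpha=0.01):
    """Flag sample indices whose amplitude exceeds k standard deviations
    of a recursively estimated background-noise level (toy sketch only)."""
    mean, var = 0.0, 1.0          # initial noise statistics (assumed)
    events = []
    for i, x in enumerate(samples):
        if abs(x - mean) > k * math.sqrt(var):
            events.append(i)      # candidate event: excluded from noise update
        else:
            # exponentially weighted recursive update of the noise statistics
            mean = (1 - alpha) * mean + alpha * x
            var = (1 - alpha) * var + alpha * (x - mean) ** 2
    return events
```

    Event samples are deliberately excluded from the noise update so that a strong arrival does not inflate the background estimate.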

  8. The AAL project: automated monitoring and intelligent analysis for the ATLAS data taking infrastructure

    NASA Astrophysics Data System (ADS)

    Kazarov, A.; Lehmann Miotto, G.; Magnoni, L.

    2012-06-01

    The Trigger and Data Acquisition (TDAQ) system of the ATLAS experiment at CERN is the infrastructure responsible for collecting and transferring ATLAS experimental data from detectors to the mass storage system. It relies on a large, distributed computing environment, including thousands of computing nodes with thousands of applications running concurrently. In such a complex environment, information analysis is fundamental for controlling application behavior, error reporting and operational monitoring. During data taking runs, streams of messages sent by applications via the message reporting system, together with data published from applications via information services, are the main sources of knowledge about the correctness of running operations. The flow of data produced (with an average rate of O(1-10 kHz)) is constantly monitored by experts to detect problems or misbehavior. This requires strong competence and experience in understanding and discovering problems and root causes, and often the meaningful information is not in a single message or update, but in the aggregated behavior over a certain time-line. The AAL project aims to reduce manpower needs and to assure a constant high quality of problem detection by automating most of the monitoring tasks and providing real-time correlation of data-taking and system metrics. This project combines technologies coming from different disciplines; in particular it leverages an Event-Driven Architecture to unify the flow of data from the ATLAS infrastructure, a Complex Event Processing (CEP) engine for correlation of events, and a message-oriented architecture for component integration. The project is composed of two main components: a core processing engine, responsible for correlation of events through expert-defined queries, and a web-based front-end to present real-time information and interact with the system. 
All components work in a loosely coupled, event-based architecture, with a message broker centralizing all communication between modules. The result is an intelligent system able to extract and compute relevant information from the flow of operational data to provide real-time feedback to human experts, who can promptly react when needed. The paper presents the design and implementation of the AAL project, together with the results of its usage as an automated monitoring assistant for the ATLAS data taking infrastructure.
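
    As a loose illustration of the kind of sliding-window correlation a CEP engine performs over a message stream (the rule, source names, and thresholds below are hypothetical, not actual AAL queries):

```python
from collections import defaultdict, deque

class RateCorrelator:
    """Toy sliding-window correlation in the spirit of a CEP query:
    alert when a source emits more than `limit` ERROR messages within
    `window` seconds. Parameters are illustrative assumptions."""
    def __init__(self, window=10.0, limit=3):
        self.window, self.limit = window, limit
        self.history = defaultdict(deque)     # source -> error timestamps

    def on_message(self, source, severity, t):
        if severity != "ERROR":
            return None
        q = self.history[source]
        q.append(t)
        while q and t - q[0] > self.window:   # expire events outside the window
            q.popleft()
        if len(q) > self.limit:
            return f"alert: {source} produced {len(q)} errors in {self.window}s"
        return None
```

    A real CEP engine expresses such rules declaratively and handles many of them concurrently; this sketch shows only the windowing logic.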

  9. Dynamic sensing model for accurate detectability of environmental phenomena using event wireless sensor network

    NASA Astrophysics Data System (ADS)

    Missif, Lial Raja; Kadhum, Mohammad M.

    2017-09-01

    Wireless Sensor Networks (WSNs) have been widely used for monitoring, where sensors are deployed to operate independently to sense abnormal phenomena. Most of the proposed environmental monitoring systems are designed based on a predetermined sensing range, which does not reflect sensor reliability, event characteristics, and environmental conditions. Measuring the capability of a sensor node to accurately detect an event within a sensing field is of great importance for monitoring applications. This paper presents an efficient mechanism for event detection based on a probabilistic sensing model. Different models are presented theoretically in this paper to examine their adaptability and applicability to real environmental applications. The numerical results of the experimental evaluation showed that the probabilistic sensing model provides accurate observation and detectability of an event, and that it can be utilized for different environmental scenarios.
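
    The abstract does not specify which probabilistic sensing model is used; a common choice in the WSN literature is an Elfes-style model, sketched below. All parameter values (r, re, lam, beta) are illustrative assumptions:

```python
import math

def detection_probability(d, r=10.0, re=4.0, lam=0.5, beta=1.0):
    """Elfes-style probabilistic sensing model: certain detection
    inside r - re, impossible beyond r + re, and an exponentially
    decaying probability in the uncertainty band between."""
    if d <= r - re:
        return 1.0
    if d >= r + re:
        return 0.0
    a = d - (r - re)              # penetration into the uncertainty band
    return math.exp(-lam * a ** beta)
```

    In contrast to a fixed binary sensing disk, the probability in the band can be tuned against sensor reliability and environmental conditions.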

  10. Causal simulation and sensor planning in predictive monitoring

    NASA Technical Reports Server (NTRS)

    Doyle, Richard J.

    1989-01-01

    Two issues are addressed which arise in the task of detecting anomalous behavior in complex systems with numerous sensor channels: how to adjust alarm thresholds dynamically, within the changing operating context of the system, and how to utilize sensors selectively, so that nominal operation can be verified reliably without processing a prohibitive amount of sensor data. The approach involves simulation of a causal model of the system, which provides information on expected sensor values, and on dependencies between predicted events, useful in assessing the relative importance of events so that sensor resources can be allocated effectively. The potential applicability of this work to the execution monitoring of robot task plans is briefly discussed.

  11. Use of Unstructured Event-Based Reports for Global Infectious Disease Surveillance

    PubMed Central

    Blench, Michael; Tolentino, Herman; Freifeld, Clark C.; Mandl, Kenneth D.; Mawudeku, Abla; Eysenbach, Gunther; Brownstein, John S.

    2009-01-01

    Free or low-cost sources of unstructured information, such as Internet news and online discussion sites, provide detailed local and near real-time data on disease outbreaks, even in countries that lack traditional public health surveillance. To improve public health surveillance and, ultimately, interventions, we examined 3 primary systems that process event-based outbreak information: Global Public Health Intelligence Network, HealthMap, and EpiSPIDER. Despite similarities among them, these systems are highly complementary because they monitor different data types, rely on varying levels of automation and human analysis, and distribute distinct information. Future development should focus on linking these systems more closely to public health practitioners in the field and establishing collaborative networks for alert verification and dissemination. Such development would further establish event-based monitoring as an invaluable public health resource that provides critical context and an alternative to traditional indicator-based outbreak reporting. PMID:19402953

  12. REAL-TIME MONITORING FOR TOXICITY CAUSED BY ...

    EPA Pesticide Factsheets

    This project, sponsored by EPA's Environmental Monitoring for Public Access and Community Tracking (EMPACT) program, evaluated the ability of an automated biological monitoring system that measures fish ventilatory responses (ventilatory rate, ventilatory depth, and cough rate) to detect developing toxic conditions in water. In laboratory tests, acutely toxic levels of both brevetoxin (PbTx-2) and toxic Pfiesteria piscicida cultures caused fish responses primarily through large increases in cough rate. In the field, the automated biomonitoring system operated continuously for 3 months on the Chicamacomico River, a tributary to the Chesapeake Bay that has had a history of intermittent toxic algal blooms. Data gathered through this effort complemented chemical monitoring data collected by the Maryland Department of Natural Resources (DNR) as part of their Pfiesteria monitoring program. After evaluation by DNR personnel, the public could access the data on the DNR Internet web site at www.dnr.state.md.us/bay/pfiesteria/00results.html or receive more detailed information at www.aquaticpath.umd.edu/empact. The field biomonitor identified five fish response events. Increased conductivity combined with a substantial decrease in water temperature was the likely cause of one event, while contaminants (probably surfactants) released from inadequately rinsed particle filters produced another response. The other three events, characterized by greatly increased cough ra

  13. Detecting, Monitoring, and Reporting Possible Adverse Drug Events Using an Arden-Syntax-based Rule Engine.

    PubMed

    Fehre, Karsten; Plössnig, Manuela; Schuler, Jochen; Hofer-Dückelmann, Christina; Rappelsberger, Andrea; Adlassnig, Klaus-Peter

    2015-01-01

    The detection of adverse drug events (ADEs) is an important aspect of improving patient safety. The iMedication system employs predefined triggers associated with significant events in a patient's clinical data to automatically detect possible ADEs. We defined four clinically relevant conditions: hyperkalemia, hyponatremia, renal failure, and over-anticoagulation. These are among the most relevant ADEs in internal medicine and geriatric wards. For each patient, ADE risk scores for all four situations are calculated, compared against a threshold, and used to judge whether the case should be monitored or reported. A ward-based cockpit view summarizes the results.
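
    A minimal sketch of the score-versus-threshold triage described above; the threshold values, score scale, and condition keys are invented for illustration and are not the published iMedication parameters:

```python
def triage(scores, monitor_threshold=0.5, report_threshold=0.8):
    """Map each condition's ADE risk score to an action: ignore,
    monitor, or report (hypothetical thresholds on a 0-1 scale)."""
    out = {}
    for condition, s in scores.items():
        if s >= report_threshold:
            out[condition] = "report"
        elif s >= monitor_threshold:
            out[condition] = "monitor"
        else:
            out[condition] = "ignore"
    return out
```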

  14. Hydrological performance of extensive green roofs in New York City: observations and multi-year modeling of three full-scale systems

    NASA Astrophysics Data System (ADS)

    Carson, T. B.; Marasco, D. E.; Culligan, P. J.; McGillis, W. R.

    2013-06-01

    Green roofs can be an attractive strategy for adding perviousness in dense urban environments where rooftops are a high fraction of the impervious land area. As a result, green roofs are being increasingly implemented as part of urban stormwater management plans in cities around the world. In this study, three full-scale green roofs in New York City (NYC) were monitored, representing the three extensive green roof types most commonly constructed: (1) a vegetated mat system installed on a Columbia University residential building, referred to as W118; (2) a built-in-place system installed on the United States Postal Service (USPS) Morgan general mail facility; and (3) a modular tray system installed on the ConEdison (ConEd) Learning Center. Continuous rainfall and runoff data were collected from each green roof between June 2011 and June 2012, resulting in 243 storm events suitable for analysis ranging from 0.25 to 180 mm in depth. Over the monitoring period the W118, USPS, and ConEd roofs retained 36%, 47%, and 61% of the total rainfall, respectively. Rainfall attenuation of individual storm events ranged from 3 to 100% for W118, 9 to 100% for USPS, and 20 to 100% for ConEd, where, generally, as total rainfall increased, the per cent of rainfall attenuation decreased. Seasonal retention behavior also displayed event size dependence. For events of 10-40 mm rainfall depth, median retention was highest in the summer and lowest in the winter, whereas median retention for events of 0-10 mm and 40+ mm rainfall depth did not conform to this expectation. Given the significant influence of event size on attenuation, the total per cent retention during a given monitoring period might not be indicative of annual rooftop retention if the distribution of observed event sizes varies from characteristic annual rainfall. 
To account for this, the 12 months of monitoring data were used to develop a characteristic runoff equation (CRE), relating runoff depth and event size, for each green roof. When applied to Central Park, NYC precipitation records from 1971 to 2010, the CRE models estimated total rainfall retention over the 40 year period to be 45%, 53%, and 58% for the W118, USPS, and ConEd green roofs respectively. Differences between the observed and modeled rainfall retention for W118 and USPS were primarily due to an abnormally high frequency of large events, 50 mm of rainfall or more, during the monitoring period compared to historic precipitation patterns. The multi-year retention rates are a more reliable estimate of annual rainfall capture and highlight the importance of long-term evaluations when reporting green roof performance.
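
    The fitted CRE forms and coefficients are roof-specific and not reproduced in the abstract; the sketch below shows how a hypothetical linear CRE (runoff = a·(P − b) for events deeper than b mm, with made-up coefficients a and b) would be applied to a precipitation record to estimate long-term retention:

```python
def retention_fraction(events_mm, a=0.9, b=5.0):
    """Apply a hypothetical linear characteristic runoff equation to a
    record of storm event depths P (mm) and return the fraction of
    total rainfall retained. Events shallower than b produce no runoff."""
    total_rain = sum(events_mm)
    total_runoff = sum(max(0.0, a * (p - b)) for p in events_mm)
    return 1.0 - total_runoff / total_rain
```

    Applied to a multi-decade record such as the Central Park precipitation data, this kind of model yields the long-term retention estimates quoted above.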

  15. Testing the seismology-based landquake monitoring system

    NASA Astrophysics Data System (ADS)

    Chao, Wei-An

    2016-04-01

    I have developed a real-time landquake monitoring (RLM) system, which monitors large-scale landquake activity in Taiwan using the real-time seismic network of the Broadband Array in Taiwan for Seismology (BATS). The RLM system applies a grid-based general source inversion (GSI) technique to obtain the preliminary source location and force mechanism. A 2-D virtual source-grid on the Taiwan Island is created with an interval of 0.2° in both latitude and longitude. The depth of each grid point is fixed on the free surface topography. A database of synthetics is stored on disk; these are obtained using Green's functions computed by the propagator matrix approach for a 1-D average velocity model, at all stations from each virtual source-grid point, for nine elementary source components: six elementary moment tensors and three orthogonal (north, east and vertical) single forces. The RLM system was run offline for events detected in previous studies. An important aspect of the RLM system is the implementation of the GSI approach for different source types (e.g., full moment tensor, double-couple faulting, and explosion source) by grid search through the 2-D virtual source-grid to automatically identify landquake events based on the improvement in waveform fitness and to evaluate the best-fit solution in the monitoring area. With this approach, not only the force mechanisms but also the event occurrence time and location can be obtained simultaneously about 6-8 min after the occurrence of an event. To improve the accuracy of the GSI-determined location, I further apply a landquake epicenter determination (LED) method that maximizes the coherency of the high-frequency (1-3 Hz) horizontal envelope functions to determine the final source location. With good knowledge of the source location, I perform landquake force history (LFH) inversion to investigate the source dynamics (e.g., trajectory) for relatively large landquake events. 
By providing the aforementioned source information in real time, the system gives government and emergency response agencies sufficient reaction time for rapid assessment of and response to landquake hazards. The RLM system has operated online since 2016.
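
    As a conceptual sketch of the grid-search step only (real GSI inverts nine elementary source components against full synthetics; here each virtual grid point is reduced to a single stored synthetic and a normalized waveform-fitness score, and the grid names are hypothetical):

```python
def grid_search_source(observed, synthetics):
    """Return the virtual source-grid point whose stored synthetic best
    matches the observed waveform under a normalized dot-product
    fitness (toy stand-in for the GSI waveform-fitness measure)."""
    def fitness(obs, syn):
        num = sum(o * s for o, s in zip(obs, syn))
        den = (sum(o * o for o in obs) * sum(s * s for s in syn)) ** 0.5
        return num / den if den else 0.0
    return max(synthetics, key=lambda g: fitness(observed, synthetics[g]))
```

    Because the synthetics are precomputed and stored, the search at run time reduces to scoring, which is what makes the 6-8 min latency plausible.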

  16. Analysis of the March 30, 2011 Hail Event at Shuttle Launch Pad 39A

    NASA Technical Reports Server (NTRS)

    Lane, John E.; Doesken, Nolan J.; Kasparis, Takis C.; Sharp, David W.

    2012-01-01

    The Kennedy Space Center (KSC) Hail Monitor System, a joint effort of the NASA KSC Physics Lab and the KSC Engineering Services Contract (ESC) Applied Technology Lab, was first deployed for operational testing in the fall of 2006. Volunteers from the Community Collaborative Rain, Hail, and Snow Network (CoCoRaHS), in conjunction with Colorado State University, have been instrumental in validation testing using duplicate hail monitor systems at sites in the hail-prone high plains of Colorado. The KSC Hail Monitor System (HMS), consisting of three stations positioned approximately 500 ft from the launch pad and forming an approximate equilateral triangle, as shown in Figure 1, was first deployed to Pad 39B for support of STS-115. Two months later, the HMS was deployed to Pad 39A for support of STS-116. During support of STS-117 in late February 2007, an unusually intense (by Florida standards) hail event occurred in the immediate vicinity of the exposed space shuttle and launch pad. Hail data from this event were collected by the HMS and analyzed. Support of STS-118 revealed another important application of the hail monitor system. Ground Instrumentation personnel check the hail monitors daily when a vehicle is on the launch pad, with special attention after any storm suspected of containing hail. If no hail is recorded by the HMS, the vehicle and pad inspection team has no need to conduct a thorough inspection of the vehicle immediately following a storm. On the afternoon of July 13, 2007, hail on the ground was reported by observers at the Vertical Assembly Building (VAB) and Launch Control Center (LCC), about three miles west of Pad 39A, as well as at several other locations at KSC. The HMS showed no impact detections, indicating that the shuttle had not been damaged by any of the numerous hail events which occurred on that day.

  17. High-resolution monitoring of stormwater quality in an urbanising catchment in the United Kingdom during the 2013/2014 winter storms

    NASA Astrophysics Data System (ADS)

    McGrane, S. J.; Hutchins, M. G.; Kjeldsen, T. R.; Miller, J. D.; Bussi, G.; Loewenthal, M.

    2015-12-01

    Urban areas are widely recognised as a key source of contaminants entering our freshwater systems, yet in spite of this, our understanding of stormwater quality dynamics remains limited. The development of in-situ, high-resolution monitoring equipment has revolutionised our capability to capture flow and water quality data at a sub-hourly resolution, enabling us to potentially enhance our understanding of hydrochemical variations from contrasting landscapes during storm events. During the winter of 2013/2014, the United Kingdom experienced a succession of intense storm events, where the south of the country experienced 200% of the average rainfall, resulting in widespread flooding across the Thames basin. We applied high-frequency (15-minute resolution) water quality monitoring across ten contrasting subcatchments (including rural, urban and mixed land-use catchments), seeking to classify the disparity in water quality conditions both within and between events. Rural catchments increasingly behave like "urban" catchments as soils wet up and become increasingly responsive to subsequent events; however, the water quality response during the winter months remains limited. By contrast, increasingly urban catchments yield greater contaminant loads during events, and pre-event baseline chemistry highlights a resupply source in dense urban catchments. Wastewater treatment plants were shown to dominate baseline chemistry during low-flow events but also to yield a considerable impact on stormwater outputs during peak-flow events, as hydraulic push results in the outflow of untreated solid wastes into the river system. Results are discussed in the context of water quality policy, urban growth scenarios, and best management practices (BMPs) for stormwater runoff in contrasting landscapes.

  18. Seismic risk mitigation in deep level South African mines by state of the art underground monitoring - Joint South African and Japanese study

    NASA Astrophysics Data System (ADS)

    Milev, A.; Durrheim, R.; Nakatani, M.; Yabe, Y.; Ogasawara, H.; Naoi, M.

    2012-04-01

    Two underground sites in a deep level gold mine in South Africa were instrumented by the Council for Scientific and Industrial Research (CSIR) with tilt meters and seismic monitors. One of the sites was also instrumented by the JApanese-German Underground Acoustic emission Research in South Africa (JAGUARS) project with a small network, approximately 40 m in span, of eight Acoustic Emission (AE) sensors. The rate of tilt, defined as quasi-static deformation, and the seismic ground motion, defined as dynamic deformation, were analysed in order to understand rock mass behavior around deep level mining. In addition, the high-frequency AE events recorded at hypocentral distances of about 50 m, located 3300 m below the surface, were analysed. A good correspondence between the dynamic and quasi-static deformations was found. The rate of coseismic and aseismic tilt, as well as the seismicity recorded by the mine seismic network, is approximately constant until the daily blasting time, which takes place from about 19:30 until shortly before 21:00. During the blasting time and the subsequent seismic events, the coseismic and aseismic tilt shows a rapid increase. Much of the quasi-static deformation, however, occurs independently of the seismic events and was described as 'slow' or aseismic events. During the monitoring period a seismic event with MW 2.2 occurred in the vicinity of the instrumented site. This event was recorded by both the CSIR integrated monitoring system and the JAGUARS acoustic emission network. The tilt changes associated with this event showed a well pronounced after-tilt. The aftershock activity was also well recorded by the acoustic emission and mine seismic networks. More than 21,000 AE aftershocks were located in the first 150 hours after the main event. Using the distribution of the AE events, the position of the fault in the source area was successfully delineated. 
The distribution of the AE events following the main shock was related to the after-tilt in order to quantify the post-slip behavior of the source. An attempt to associate the different types of deformation with the various fracture regions and geological structures around the stopes was carried out. A model was introduced in which the coseismic deformations are associated with the stress regime outside the stope fracture envelope and very often located on existing geological structures, while the aseismic deformations are associated with mobilization of fractures and stress relaxation within the fracture envelope. Further research to verify this model is strongly recommended. This involves long term underground monitoring using a wide variety of instruments such as tilt, closure and strain meters, a highly sensitive AE fracture monitoring system, as well as strong ground motion monitors. A large amount of numerical modeling is also required.

  19. Minicomputer Hardware Monitor Design.

    DTIC Science & Technology

    1980-06-01

    detected signals. Both the COMTEN and TESDATA systems rely on a " plugboard " arrangement where sensor inputs may be combined by means of standard gate logic...systems. A further use of the plugboard "patch panels" is to direct the measured "event" to collection and/or distribution circuitry, where the event...are plugboard and sensor hookup configurations. The available T-PACs are: o Basic System Profile o Regional Mapping o Advanced System Management

  20. Methodology for Designing Operational Banking Risks Monitoring System

    NASA Astrophysics Data System (ADS)

    Kostjunina, T. N.

    2018-05-01

    The research looks at principles of designing an information system for monitoring operational banking risks. A proposed design methodology enables one to automate processes of collecting data on information security incidents in the banking network, serving as the basis for an integrated approach to the creation of an operational risk management system. The system can operate remotely ensuring tracking and forecasting of various operational events in the bank network. A structure of a content management system is described.

  1. Passive seismic monitoring of the Bering Glacier during its last surge event

    NASA Astrophysics Data System (ADS)

    Zhan, Z.

    2017-12-01

    The physical causes behind glacier surges are still unclear. Numerous lines of evidence suggest that they probably involve changes in glacier basal conditions, such as a switch of the basal water system from concentrated large tunnels to a distributed "layer" of "connected cavities". However, most remote sensing approaches cannot penetrate to the base to monitor such changes continuously. Here we apply seismic interferometry using ambient noise to monitor glacier seismic structure, especially to detect possible signatures of the hypothesized high-pressure water "layer". As an example, we derive an 11-year history of the seismic structure of the Bering Glacier, Alaska, covering its latest surge event. We observe substantial drops in Rayleigh and Love wavespeeds across the glacier during the surge event, potentially caused by changes in crevasse density, glacier thickness, and basal conditions.

  2. Assessing dry weather flow contribution in TSS and COD storm events loads in combined sewer systems.

    PubMed

    Métadier, M; Bertrand-Krajewski, J L

    2011-01-01

    Continuous high-resolution long term turbidity measurements, along with continuous discharge measurements, are now recognised as an appropriate technique for the estimation of in-sewer total suspended solids (TSS) and Chemical Oxygen Demand (COD) loads during storm events. In the combined system of the Ecully urban catchment (Lyon, France), this technique has been implemented since 2003, with more than 200 storm events monitored. This paper presents a method for the estimation of the dry weather (DW) contribution to measured total TSS and COD event loads, with special attention devoted to uncertainty assessment. The method accounts for the dynamics of both discharge and turbidity time series at a two-minute time step. The study is based on 180 DW days monitored in 2007-2008. Three distinct classes of DW days were identified. Variability analysis and quantification showed that no seasonal effect and no trend over the year were detectable. The law of propagation of uncertainties was shown to be applicable for uncertainty estimation. The method was then applied to all measured storm events. This study confirms the interest of long term continuous discharge and turbidity time series in sewer systems, especially in the perspective of wet weather quality modelling.
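
    For a simple subtraction of independent quantities, the law of propagation of uncertainties mentioned above reduces to adding standard uncertainties in quadrature. A minimal sketch (the numeric values and units are illustrative, not data from the paper):

```python
import math

def wet_weather_load(total_load, u_total, dw_load, u_dw):
    """Subtract the dry-weather (DW) contribution from a measured storm
    event load and propagate standard uncertainties in quadrature
    (independent inputs assumed)."""
    load = total_load - dw_load
    u = math.sqrt(u_total ** 2 + u_dw ** 2)   # combined standard uncertainty
    return load, u

# hypothetical event: 120 +/- 8 kg TSS measured, 35 +/- 6 kg attributed to DW
load, u = wet_weather_load(120.0, 8.0, 35.0, 6.0)
```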

  3. Helping safeguard Veterans Affairs' hospital buildings by advanced earthquake monitoring

    USGS Publications Warehouse

    Kalkan, Erol; Banga, Krishna; Ulusoy, Hasan S.; Fletcher, Jon Peter B.; Leith, William S.; Blair, James L.

    2012-01-01

    In collaboration with the U.S. Department of Veterans Affairs (VA), the National Strong Motion Project of the U.S. Geological Survey has recently installed sophisticated seismic systems that will monitor the structural integrity of hospital buildings during earthquake shaking. The new systems have been installed at more than 20 VA medical campuses across the country. These monitoring systems, which combine sensitive accelerometers and real-time computer calculations, are capable of determining the structural health of each structure rapidly after an event, helping to ensure the safety of patients and staff.

  4. Development of an Intelligent Monitoring System for Geological Carbon Sequestration (GCS) Systems

    NASA Astrophysics Data System (ADS)

    Sun, A. Y.; Jeong, H.; Xu, W.; Hovorka, S. D.; Zhu, T.; Templeton, T.; Arctur, D. K.

    2016-12-01

    To provide stakeholders timely evidence that GCS repositories are operating safely and efficiently requires integrated monitoring to assess the performance of the storage reservoir as the CO2 plume moves within it. GCS projects can thus be data intensive, as a result of the proliferation of digital instrumentation and smart-sensing technologies. GCS projects are also resource intensive, often requiring multidisciplinary teams performing different monitoring, verification, and accounting (MVA) tasks throughout the lifecycle of a project to ensure secure containment of injected CO2. How can an anomaly detected by one sensor be correlated with events observed by other devices to verify a leakage incident? How can resources be optimally allocated for task-oriented monitoring if reservoir integrity is in question? These are issues that warrant further investigation before real integration can take place. In this work, we are building a web-based data integration, assimilation, and learning framework for geologic carbon sequestration projects (DIAL-GCS). DIAL-GCS will be an intelligent monitoring system (IMS) for automating GCS closed-loop management by leveraging recent developments in high-throughput database, complex event processing, data assimilation, and machine learning technologies. Results will be demonstrated using realistic data and a model derived from a GCS site.

  5. A Remote Monitoring System for Voltage, Current, Power and Temperature Measurements

    NASA Astrophysics Data System (ADS)

    Barakat, E.; Sinno, N.; Keyrouz, C.

    This paper presents a study and design of a monitoring system for the continuous measurement of electrical energy parameters such as voltage, current, power and temperature. This system is designed to monitor the data remotely over the Internet. The electronic power meter is based on a microcontroller from the Microchip Technology Inc. PIC family. The design takes into consideration correct operation in the event of an outage or brownout by recording the electrical values and the temperatures in the EEPROM internally available in the microcontroller. A digital display shows the acquired measurements, and a computer remotely monitors the data over the Internet.

  6. Analysis of Alerting System Failures in Commercial Aviation Accidents

    NASA Technical Reports Server (NTRS)

    Mumaw, Randall J.

    2017-01-01

    The role of an alerting system is to make the system operator (e.g., pilot) aware of an impending hazard or unsafe state so the hazard can be avoided or managed successfully. A review of 46 commercial aviation accidents (between 1998 and 2014) revealed that, in the vast majority of events, either the hazard was not alerted or relevant hazard alerting occurred but failed to aid the flight crew sufficiently. For this set of events, alerting system failures were placed in one of five phases: Detection, Understanding, Action Selection, Prioritization, and Execution. This study also reviewed the evolution of alerting system schemes in commercial aviation, which revealed naive assumptions about pilot reliability in monitoring flight path parameters; specifically, pilot monitoring was assumed to be more effective than it actually is. Examples are provided of the types of alerting system failures that have occurred, and recommendations are provided for alerting system improvements.

  7. Faster Array Training and Rapid Analysis for a Sensor Array Intended for an Event Monitor in Air

    NASA Technical Reports Server (NTRS)

    Homer, Margie L.; Shevade, A. V.; Fonollosa, J.; Huerta, R.

    2013-01-01

    Environmental monitoring, in particular air monitoring, is a critical need for human space flight. Both monitoring and life support systems have needs for closed-loop process feedback and quality control for environmental factors. Monitoring protects the air environment and water supply for the astronaut crew, and different sensors help ensure that the habitat falls within acceptable limits and that the life support system is functioning properly and efficiently. The longer the flight duration and the farther the destination, the more critical it becomes to have carefully monitored and automated control systems for life support. There is an acknowledged need for an event monitor which samples the air continuously and provides near real-time information on changes in the air. Past experiments with the JPL ENose have demonstrated a lifetime of the sensor array, together with its software, of around 18 months. We are working on a sensor array and new algorithms that will incorporate transient sensor responses in the analysis. Preliminary work has already shown more rapid quantification and identification of analytes and the potential for faster training time of the array. We will look at some of the factors that contribute to demonstrating faster training time for the array. Faster training will decrease the integrated sensor exposure to training analytes, which will also help extend sensor lifetime.

  8. Automatic Response to Intrusion

    DTIC Science & Technology

    2002-10-01

    Computing Corporation Sidewinder Firewall [18] SRI EMERALD Basic Security Module (BSM) and EMERALD File Transfer Protocol (FTP) Monitors...the same event TCP Wrappers [24] Internet Security Systems RealSecure [31] SRI EMERALD IDIP monitor NAI Labs Generic Software Wrappers Prototype...included EMERALD , NetRadar, NAI Labs UNIX wrappers, ARGuE, MPOG, NetRadar, CyberCop Server, Gauntlet, RealSecure, and the Cyber Command System

  9. APDS: the autonomous pathogen detection system.

    PubMed

    Hindson, Benjamin J; Makarewicz, Anthony J; Setlur, Ujwal S; Henderer, Bruce D; McBride, Mary T; Dzenitis, John M

    2005-04-15

    We have developed and tested a fully autonomous pathogen detection system (APDS) capable of continuously monitoring the environment for airborne biological threat agents. The system was developed to provide early warning to civilians in the event of a bioterrorism incident and can be used at high-profile events for short-term, intensive monitoring or in major public buildings or transportation nodes for long-term monitoring. The APDS is completely automated, offering continuous aerosol sampling, in-line sample preparation fluidics, multiplexed detection and identification immunoassays, and nucleic acid-based polymerase chain reaction (PCR) amplification and detection. Highly multiplexed antibody-based and duplex nucleic acid-based assays are combined to reduce false positives to a very low level, lower reagent costs, and significantly expand the detection capabilities of this biosensor. This article provides an overview of the current design and operation of the APDS. Certain sub-components of the APDS are described in detail, including the aerosol collector, the automated sample preparation module that performs multiplexed immunoassays with confirmatory PCR, and the data monitoring and communications system. Data obtained from an APDS that operated continuously for 7 days in a major U.S. transportation hub are reported.
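The benefit of combining antibody-based and nucleic acid-based assays can be illustrated with a simple probability sketch. The per-assay false-positive rates below are hypothetical illustration values, not APDS specifications, and statistical independence of the two assay channels is assumed:

```python
# Sketch: combined false-positive rate of two independent assay channels.
# The per-assay rates are hypothetical illustration values, not APDS specs.
def combined_false_positive_rate(p_immunoassay: float, p_pcr: float) -> float:
    """Probability that both independent assays fire falsely at once."""
    return p_immunoassay * p_pcr

p_combined = combined_false_positive_rate(1e-3, 1e-4)
print(f"{p_combined:.1e}")  # → 1.0e-07
```

Under the independence assumption, requiring confirmation by both channels multiplies the individual rates, which is why a confirmatory PCR step can push the combined false-positive rate to a very low level.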

  10. Human region segmentation and description methods for domiciliary healthcare monitoring using chromatic methodology

    NASA Astrophysics Data System (ADS)

    Al-Temeemy, Ali A.

    2018-03-01

    A descriptor is proposed for use in domiciliary healthcare monitoring systems. The descriptor is produced from chromatic methodology to extract robust features from the monitoring system's images. It has superior discrimination capabilities, is robust to events that normally disturb monitoring systems, and requires less computational time and storage space to achieve recognition. A method of human region segmentation is also used with this descriptor. The performance of the proposed descriptor was evaluated using experimental data sets, obtained through a series of experiments performed in the Centre for Intelligent Monitoring Systems, University of Liverpool. The evaluation results show high recognition performance for the proposed descriptor in comparison to traditional descriptors, such as moment invariants. The results also show the effectiveness of the proposed segmentation method regarding distortion effects associated with domiciliary healthcare systems.

  11. On the reliable use of satellite-derived surface water products for global flood monitoring

    NASA Astrophysics Data System (ADS)

    Hirpa, F. A.; Revilla-Romero, B.; Thielen, J.; Salamon, P.; Brakenridge, R.; Pappenberger, F.; de Groeve, T.

    2015-12-01

    Early flood warning and real-time monitoring systems play a key role in flood risk reduction and disaster response management. To this end, real-time flood forecasting and satellite-based detection systems have been developed at global scale. However, due to the limited availability of up-to-date ground observations, the reliability of these systems for real-time applications has not been assessed in large parts of the globe. In this study, we performed comparative evaluations of commonly used satellite-based global flood detection and operational flood forecasting systems using 10 major flood cases reported over three years (2012-2014). Specifically, we assessed the flood detection capabilities of the near real-time global flood maps from the Global Flood Detection System (GFDS) and from the Moderate Resolution Imaging Spectroradiometer (MODIS), and the operational forecasts from the Global Flood Awareness System (GloFAS), for the major flood events recorded in global flood databases. We present the evaluation results of the global flood detection and forecasting systems in terms of correctly indicating the reported flood events and highlight the existing limitations of each system. Finally, we propose possible ways forward to improve the reliability of large-scale flood monitoring tools.

  12. A novel CUSUM-based approach for event detection in smart metering

    NASA Astrophysics Data System (ADS)

    Zhu, Zhicheng; Zhang, Shuai; Wei, Zhiqiang; Yin, Bo; Huang, Xianqing

    2018-03-01

    Non-intrusive load monitoring (NILM) plays a significant role in raising consumer awareness of household electricity use and reducing overall energy consumption. With regard to monitoring low-power loads, many researchers have introduced CUSUM into NILM systems, since the traditional event detection method is not as effective as expected. Because the original CUSUM is limited when a small shift remains below the threshold, we improve the test statistic so that the permissible deviation gradually rises as the data size increases. This paper proposes a novel event detection method and a corresponding criterion that can be used in NILM systems to recognize transient states and to assist the labelling task. Its performance has been tested in a real scenario in which eight different appliances are connected to the main power line.
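The record describes a CUSUM-style detector for transient states. A minimal two-sided CUSUM sketch (the generic textbook form, not the paper's modified statistic with a growing permissible deviation) might look like:

```python
import numpy as np

def cusum_events(signal, k=0.5, h=5.0):
    """Two-sided CUSUM change detector (generic textbook form).
    k is the allowed slack per sample, h the decision threshold."""
    mean = float(signal[0])
    s_pos = s_neg = 0.0
    events = []
    for i, x in enumerate(signal[1:], start=1):
        s_pos = max(0.0, s_pos + (x - mean) - k)
        s_neg = max(0.0, s_neg - (x - mean) - k)
        if s_pos > h or s_neg > h:
            events.append(i)   # transient state detected at sample i
            mean = float(x)    # restart around the new load level
            s_pos = s_neg = 0.0
    return events

# A step change from level 0 to level 10 (an appliance switching on)
# should trigger exactly one event at the step.
sig = np.concatenate([np.zeros(50), 10.0 * np.ones(50)])
print(cusum_events(sig))  # → [50]
```

The paper's contribution is precisely in replacing the fixed slack k with a deviation allowance that grows with the data size, so small sustained shifts are not missed.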

  13. Adverse events after anthrax vaccination reported to the Vaccine Adverse Event Reporting System (VAERS), 1990-2007.

    PubMed

    Niu, Manette T; Ball, Robert; Woo, Emily Jane; Burwen, Dale R; Knippen, Maureen; Braun, M Miles

    2009-01-07

    During the period March 1, 1998 to January 14, 2007, approximately 6 million doses of anthrax vaccine adsorbed (AVA) were administered. As of January 16, 2007, 4753 reports of adverse events following AVA vaccination had been submitted to the Vaccine Adverse Event Reporting System (VAERS). Taken together, reports to VAERS did not definitively link any serious unexpected risk to this vaccine, and review of death and other serious reports did not show a distinctive pattern indicative of a causal relationship to AVA vaccination. Continued monitoring of VAERS and analysis of potential associations between AVA vaccination and rare, serious events are warranted.

  14. TwitterSensing: An Event-Based Approach for Wireless Sensor Networks Optimization Exploiting Social Media in Smart City Applications

    PubMed Central

    2018-01-01

    Modern cities are subject to periodic or unexpected critical events, which may bring economic losses or even put people in danger. When some monitoring systems based on wireless sensor networks are deployed, sensing and transmission configurations of sensor nodes may be adjusted exploiting the relevance of the considered events, but efficient detection and classification of events of interest may be hard to achieve. In Smart City environments, several people spontaneously post information in social media about some event that is being observed and such information may be mined and processed for detection and classification of critical events. This article proposes an integrated approach to detect and classify events of interest posted in social media, notably in Twitter, and the assignment of sensing priorities to source nodes. By doing so, wireless sensor networks deployed in Smart City scenarios can be optimized for higher efficiency when monitoring areas under the influence of the detected events. PMID:29614060

  15. TwitterSensing: An Event-Based Approach for Wireless Sensor Networks Optimization Exploiting Social Media in Smart City Applications.

    PubMed

    Costa, Daniel G; Duran-Faundez, Cristian; Andrade, Daniel C; Rocha-Junior, João B; Peixoto, João Paulo Just

    2018-04-03

    Modern cities are subject to periodic or unexpected critical events, which may bring economic losses or even put people in danger. When some monitoring systems based on wireless sensor networks are deployed, sensing and transmission configurations of sensor nodes may be adjusted exploiting the relevance of the considered events, but efficient detection and classification of events of interest may be hard to achieve. In Smart City environments, several people spontaneously post information in social media about some event that is being observed and such information may be mined and processed for detection and classification of critical events. This article proposes an integrated approach to detect and classify events of interest posted in social media, notably in Twitter, and the assignment of sensing priorities to source nodes. By doing so, wireless sensor networks deployed in Smart City scenarios can be optimized for higher efficiency when monitoring areas under the influence of the detected events.

  16. Accuracy of a radiofrequency identification (RFID) badge system to monitor hand hygiene behavior during routine clinical activities.

    PubMed

    Pineles, Lisa L; Morgan, Daniel J; Limper, Heather M; Weber, Stephen G; Thom, Kerri A; Perencevich, Eli N; Harris, Anthony D; Landon, Emily

    2014-02-01

    Hand hygiene (HH) is a critical part of infection prevention in health care settings. Hospitals around the world continuously struggle to improve health care personnel (HCP) HH compliance. The current gold standard for monitoring compliance is direct observation; however, this method is time-consuming and costly. One emerging area of interest involves automated systems for monitoring HH behavior, such as radiofrequency identification (RFID) tracking systems. To assess the accuracy of a commercially available RFID system in detecting HCP HH behavior, we compared direct observation with data collected by the RFID system in a simulated validation setting and in a real-life clinical setting across two hospitals. A total of 1,554 HH events was observed. Accuracy for identifying HH events was high in the simulated validation setting (88.5%) but relatively low in the real-life clinical setting (52.4%). This difference was significant (P < .01). Accuracy for detecting HCP movement into and out of patient rooms was also high in the simulated setting but not in the real-life clinical setting (100% on entry and exit in the simulated setting vs 54.3% on entry and 49.5% on exit in the real-life clinical setting, P < .01). In this validation study of an RFID system, almost half of the HH events were missed. More research is necessary to further develop these systems and improve accuracy prior to widespread adoption. Copyright © 2014 Association for Professionals in Infection Control and Epidemiology, Inc. Published by Mosby, Inc. All rights reserved.
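The significance of the accuracy gap between settings can be checked with a standard two-proportion z-test. The per-setting event counts below are hypothetical, chosen only to roughly reproduce the reported 88.5% vs 52.4% accuracies within the 1,554 observed events; they are not the study's actual split:

```python
from math import erf, sqrt

def two_proportion_z(success_a, n_a, success_b, n_b):
    """Two-sided two-proportion z-test; returns (z, p_value)."""
    p_a, p_b = success_a / n_a, success_b / n_b
    p_pool = (success_a + success_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Hypothetical split of the 1,554 observed events between the two
# settings, approximating 88.6% simulated vs 52.4% clinical accuracy.
z, p = two_proportion_z(443, 500, 552, 1054)
print(p < 0.01)  # → True
```

With gaps this large, any plausible split of the sample yields P < .01, consistent with the reported significance.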

  17. Use of Synchronized Phasor Measurements for Model Validation in ERCOT

    NASA Astrophysics Data System (ADS)

    Nuthalapati, Sarma; Chen, Jian; Shrestha, Prakash; Huang, Shun-Hsien; Adams, John; Obadina, Diran; Mortensen, Tim; Blevins, Bill

    2013-05-01

    This paper discusses experiences in the use of synchronized phasor measurement technology in the Electric Reliability Council of Texas (ERCOT) interconnection, USA. Implementation of synchronized phasor measurement technology in the region is a collaborative effort involving ERCOT, ONCOR, AEP, SHARYLAND, EPG, CCET, and UT-Arlington. As several phasor measurement units (PMUs) have been installed in the ERCOT grid in recent years, phasor data at a resolution of 30 samples per second are being used to monitor power system status and record system events. Post-event analyses using recorded phasor data have successfully verified ERCOT dynamic stability simulation studies. The real-time monitoring software RTDMS® enables ERCOT to analyze small-signal stability conditions by monitoring phase angles and oscillations. The recorded phasor data also enable ERCOT to validate the existing dynamic models of conventional and wind generators.
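Oscillation monitoring on 30-samples-per-second phasor data can be sketched as an FFT peak search over a detrended phase-angle record. This is a generic illustration with synthetic data, not the RTDMS algorithm:

```python
import numpy as np

def dominant_oscillation_hz(angle_deg, fs=30.0):
    """Dominant oscillation frequency (Hz) of a detrended phase-angle
    record sampled at fs Hz (30 samples/s matches the PMU rate)."""
    x = np.asarray(angle_deg) - np.mean(angle_deg)
    spectrum = np.abs(np.fft.rfft(x))
    spectrum[0] = 0.0  # suppress any residual DC term
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    return freqs[np.argmax(spectrum)]

# Synthetic 0.7 Hz inter-area-style oscillation: 20 s of 30 samples/s data.
t = np.arange(600) / 30.0
angle = 30.0 + 2.0 * np.sin(2 * np.pi * 0.7 * t)
print(round(dominant_oscillation_hz(angle), 3))  # → 0.7
```

A 20 s window at 30 samples/s gives 0.05 Hz frequency resolution, which is adequate for the 0.1-1 Hz inter-area modes that small-signal stability monitoring targets.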

  18. Wearable sensor platform and mobile application for use in cognitive behavioral therapy for drug addiction and PTSD.

    PubMed

    Fletcher, Richard Ribón; Tam, Sharon; Omojola, Olufemi; Redemske, Richard; Kwan, Joyce

    2011-01-01

    We present a wearable sensor platform designed for monitoring and studying autonomic nervous system (ANS) activity for the purpose of mental health treatment and interventions. The mobile sensor system consists of a sensor band worn on the ankle that continuously monitors electrodermal activity (EDA), 3-axis acceleration, and temperature. A custom-designed ECG heart monitor worn on the chest is also used as an optional part of the system. The EDA signal from the ankle band provides a measure of sympathetic nervous system activity and is used to detect arousal events. The optional ECG data can be used to improve the sensor classification algorithm and to provide a measure of emotional "valence." Both types of sensor bands contain a Bluetooth radio that enables communication with the patient's mobile phone. When a specific arousal event is detected, the phone automatically presents therapeutic and empathetic messages to the patient in the tradition of Cognitive Behavioral Therapy (CBT). As an example of clinical use, we describe how the system is currently being used in an ongoing study of patients with drug addiction and post-traumatic stress disorder (PTSD).

  19. Dynamic state estimation assisted power system monitoring and protection

    NASA Astrophysics Data System (ADS)

    Cui, Yinan

    The advent of phasor measurement units (PMUs) has unlocked several novel methods to monitor, control, and protect bulk electric power systems. This thesis introduces the concept of "Dynamic State Estimation" (DSE), aided by PMUs, for wide-area monitoring and protection of power systems. Unlike traditional state estimation, where algebraic variables are estimated from system measurements, DSE refers to a process that estimates the dynamic states associated with synchronous generators. This thesis first establishes the viability of particle filtering as a technique for performing DSE in power systems. The utility of DSE for protection and wide-area monitoring is then shown through potential novel applications. The work is presented as a collection of several journal and conference papers. In the first paper, we present a particle filtering approach to dynamically estimate the states of a synchronous generator in a multi-machine setting, considering the excitation and prime mover control systems. The second paper proposes an improved out-of-step detection method for generators based on angular difference. The generator's rotor angle is estimated with a particle filter-based dynamic state estimator, and the angular separation is then calculated by combining the raw local phasor measurements with this estimate. The third paper introduces a particle filter-based dual estimation method for tracking the dynamic states of a synchronous generator. It considers the situation where field voltage measurements are not readily available. The particle filter is modified to treat the field voltage as an unknown input, which is sequentially estimated along with the other dynamic states. The fourth paper proposes a novel framework for event detection based on energy functions. The key idea is that any event in the system will leave a signature in WAMS data-sets.
It is shown that signatures for four broad classes of disturbance events are buried in the components that constitute the energy function for the system. This establishes a direct correspondence (or mapping) between an event and certain component(s) of the energy function. The last paper considers the dynamic latency effect when the measurements and estimated dynamics are transmitted from remote ends to a centralized location through the networks.
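A bootstrap particle filter of the kind the thesis applies to generator states can be sketched for a scalar random-walk state. This is a generic illustration with assumed noise levels, not the multi-machine estimator described above:

```python
import numpy as np

rng = np.random.default_rng(0)

def bootstrap_particle_filter(measurements, n_particles=500,
                              process_std=0.05, meas_std=0.2):
    """Generic bootstrap (sequential importance resampling) particle
    filter for a scalar random-walk state with Gaussian measurements."""
    particles = rng.normal(0.0, 1.0, n_particles)
    estimates = []
    for z in measurements:
        # Predict: propagate particles through the process model.
        particles = particles + rng.normal(0.0, process_std, n_particles)
        # Update: weight particles by the measurement likelihood.
        weights = np.exp(-0.5 * ((z - particles) / meas_std) ** 2)
        weights /= weights.sum()
        # Resample proportionally to the weights.
        particles = particles[rng.choice(n_particles, n_particles, p=weights)]
        estimates.append(particles.mean())
    return np.array(estimates)

# Track a state that ramps from 0 to 1 through noisy measurements,
# loosely analogous to tracking a slowly drifting rotor angle.
true_state = np.linspace(0.0, 1.0, 100)
meas = true_state + rng.normal(0.0, 0.2, 100)
est = bootstrap_particle_filter(meas)
print(abs(est[-1] - true_state[-1]) < 0.5)
```

In the thesis the state vector is multidimensional (rotor angle, speed, fluxes) and the process model is the generator's differential equations, but the predict-weight-resample loop is the same.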

  20. FOSREM - Fibre-Optic System for Rotational Events&Phenomena Monitoring

    NASA Astrophysics Data System (ADS)

    Jaroszewicz, Leszek; Krajewski, Zbigniew; Kurzych, Anna; Kowalski, Jerzy; Teisseyre, Krzysztof

    2016-04-01

    We present the construction and tests of a fibre-optic rotational seismometer named FOSREM (Fibre-Optic System for Rotational Events & Phenomena Monitoring). The device is designed to detect and monitor one-axis rotational motions imparted to the ground or to human-made structures, both by seismic events and by creep processes. The system works by measuring the Sagnac effect and consists of two basic elements: an optical sensor and an electronic part. The optical sensor is based on the so-called minimum configuration of a FOG (Fibre-Optic Gyroscope), in which the Sagnac effect produces a phase shift between two counter-propagating light beams proportional to the measured rotation rate. The main advantages of a sensor of this type are its complete insensitivity to linear motions and its direct measurement of rotation rate. It may work even when tilted; moreover, used in continuous mode it may record the tilt. The electronic part calculates and records rotational event data using synchronous digital detection implemented on a 32-bit DSP (digital signal processor). Data storage and system control are realized over the Internet using a GSM/GPS connection to FOSREM. The most significant attribute of the system is its ability to measure rotation over a wide range, with amplitudes up to 10 rad/s and frequencies up to 328.12 Hz. The use of a wideband, low-coherence, high-power superluminescent diode with a long fibre loop and suitably low-loss optical elements yields a theoretical sensitivity of 2x10^-8 rad/s/sqrt(Hz). Moreover, FOSREM is fully remotely controlled and suited for continuous, autonomous operation over very long periods (weeks, months, even years), making it useful for systematic seismological investigation at any location.
    Possible applications of this system include seismic monitoring in observatories, buildings, mines, and even on glaciers and in their vicinity. In geodetic, geomorphological, and glaciological surveys, joint measurement of tilt and seismic phenomena using a set of three FOSREM devices oriented in perpendicular planes would enable the collection of very important information.
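The interferometer's response can be illustrated with the standard FOG Sagnac phase-shift relation, delta_phi = 2*pi*L*D*Omega / (lambda*c). The coil length, diameter, and wavelength below are hypothetical illustration values, not FOSREM's actual parameters:

```python
from math import pi

def sagnac_phase_shift(omega, fiber_length, coil_diameter,
                       wavelength=1.31e-6, c=2.998e8):
    """FOG Sagnac phase shift (rad): 2*pi*L*D*Omega / (lambda*c).
    omega is the rotation rate (rad/s), lengths are in metres."""
    return 2 * pi * fiber_length * coil_diameter * omega / (wavelength * c)

# Hypothetical coil (5 km of fibre on a 0.25 m diameter loop) driven
# at the system's stated maximum rate of 10 rad/s.
print(round(sagnac_phase_shift(10.0, 5000.0, 0.25), 1))
```

The linear scaling with L and D shows why a long fibre loop and a broadband high-power source are the levers for reaching the quoted 2x10^-8 rad/s/sqrt(Hz) sensitivity.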

  1. Autonomous System for Monitoring the Integrity of Composite Fan Housings

    NASA Technical Reports Server (NTRS)

    Qing, Xinlin P.; Aquino, Christopher; Kumar, Amrita

    2010-01-01

    A low-cost and reliable system assesses the integrity of composite fan-containment structures. The system utilizes a network of miniature sensors integrated with the structure to scan the entire structural area for impact events and resulting structural damage, and to monitor degradation due to usage. This system can be used to monitor all types of composite structures on aircraft and spacecraft, as well as to automatically monitor in real time the location and extent of damage in the containment structures. This diagnostic information is passed to prognostic models, under development, that use it to estimate the residual strength of the structure and to maintain a history of structural degradation during usage. The structural health-monitoring system would consist of three major components: (1) sensors and a sensor network, which are permanently bonded onto the structure being monitored; (2) integrated hardware; and (3) software to monitor in situ the health condition of in-service structures.

  2. 75 FR 48349 - Agency Forms Undergoing Paperwork Reduction Act review

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-08-10

    ... monitor syndrome-based (e.g., case information collected in emergency departments (EDs) and diagnostic... syndromic surveillance systems. EARS has been designed and used to monitor syndromic data from emergency... events such as the Beijing Summer Olympics; multiple Superbowls (football) and World Series (baseball...

  3. Methods developed to elucidate nursing related adverse events in Japan.

    PubMed

    Yamagishi, Manaho; Kanda, Katsuya; Takemura, Yukie

    2003-05-01

    Financial resources for quality assurance in Japanese hospitals are limited, and few hospitals have systems for monitoring the quality of nursing services, although their necessity has recently been recognized. This study cost-effectively used adverse event occurrence rates as indicators of nursing service quality and audited methods of collecting data on adverse events to elucidate their approximate true numbers. Data collection was conducted in July, August, and November 2000 at a hospital in Tokyo that provided both primary and secondary health care services (281 beds, six wards, average length of stay 23 days). We collected adverse events through incident reports, logs, checklists, nurse interviews, medication error questionnaires, urine leucocyte tests, patient interviews, and medical records. Adverse events included unplanned removals of invasive lines, medication errors, falls, pressure sores, skin deficiencies, physical restraints, and nosocomial infections. After evaluating the time required and the useful outcomes of each source, it became clear that adverse events could be elucidated most consistently and cost-effectively through incident reports, checklists, nurse interviews, urine leucocyte tests, and medication error questionnaires. This study suggests that many hospitals in Japan could monitor the quality of their nursing services using these sources.

  4. Integration of the EventIndex with other ATLAS systems

    NASA Astrophysics Data System (ADS)

    Barberis, D.; Cárdenas Zárate, S. E.; Gallas, E. J.; Prokoshin, F.

    2015-12-01

    The ATLAS EventIndex System, developed for use in LHC Run 2, is designed to index every processed event in ATLAS, replacing the TAG System used in Run 1. Its storage infrastructure, based on the Hadoop open-source software framework, necessitates revamping how information in this system relates to other ATLAS systems. It will store more indexes, since the fundamental mechanisms for retrieving them will be better integrated into all stages of data processing, allowing more events from later stages of processing to be indexed than was possible with the previous system. Connections with other systems (conditions database, monitoring) are critical for assessing dataset completeness, identifying data duplication, and checking data integrity; they also enhance access to information in the EventIndex through user and system interfaces. This paper gives an overview of the ATLAS systems involved and the relevant metadata, and describes the technologies we are deploying to complete these connections.

  5. 40 CFR 264.310 - Closure and post-closure care.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... settling, subsidence, erosion, or other events; (2) Continue to operate the leachate collection and removal system until leachate is no longer detected; (3) Maintain and monitor the leak detection system in...

  6. 40 CFR 264.310 - Closure and post-closure care.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... settling, subsidence, erosion, or other events; (2) Continue to operate the leachate collection and removal system until leachate is no longer detected; (3) Maintain and monitor the leak detection system in...

  7. 40 CFR 264.310 - Closure and post-closure care.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... settling, subsidence, erosion, or other events; (2) Continue to operate the leachate collection and removal system until leachate is no longer detected; (3) Maintain and monitor the leak detection system in...

  8. 40 CFR 264.310 - Closure and post-closure care.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... settling, subsidence, erosion, or other events; (2) Continue to operate the leachate collection and removal system until leachate is no longer detected; (3) Maintain and monitor the leak detection system in...

  9. 40 CFR 264.310 - Closure and post-closure care.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... settling, subsidence, erosion, or other events; (2) Continue to operate the leachate collection and removal system until leachate is no longer detected; (3) Maintain and monitor the leak detection system in...

  10. Observations of transient events with Mini-MegaTORTORA wide-field monitoring system with sub-second temporal resolution

    NASA Astrophysics Data System (ADS)

    Karpov, S.; Beskin, G.; Biryukov, A.; Bondar, S.; Ivanov, E.; Katkova, E.; Orekhova, N.; Perkov, A.; Sasyuk, V.

    2017-07-01

    Here we present the summary of the first years of operation and the first results of a novel 9-channel wide-field optical monitoring system with sub-second temporal resolution, Mini-MegaTORTORA (MMT-9), which is now in operation at the Special Astrophysical Observatory in the Russian Caucasus. The system can observe the sky simultaneously in either a wide (900 square degrees) or a narrow (100 square degrees) field of view, either in clear light or with any combination of color (Johnson-Cousins B, V, or R) and polarimetric filters installed, with exposure times ranging from 0.1 s to hundreds of seconds. The real-time data analysis pipeline performs automatic detection of rapid transient events, both near-Earth and extragalactic. The objects routinely detected by MMT also include faint meteors and artificial satellites.

  11. Mini-MegaTORTORA Wide-Field Monitoring System with Subsecond Temporal Resolution: Observation of Transient Events

    NASA Astrophysics Data System (ADS)

    Karpov, S.; Beskin, G.; Biryukov, A.; Bondar, S.; Ivanov, E.; Katkova, E.; Orekhova, N.; Perkov, A.; Sasyuk, V.

    2017-06-01

    Here we present the summary of the first years of operation and the first results of a novel 9-channel wide-field optical monitoring system with sub-second temporal resolution, Mini-MegaTORTORA (MMT-9), which is now in operation at the Special Astrophysical Observatory in the Russian Caucasus. The system can observe the sky simultaneously in either a wide (~900 square degrees) or a narrow (~100 square degrees) field of view, either in clear light or with any combination of color (Johnson-Cousins B, V, or R) and polarimetric filters installed, with exposure times ranging from 0.1 s to hundreds of seconds. The real-time data analysis pipeline performs automatic detection of rapid transient events, both near-Earth and extragalactic. The objects routinely detected by MMT include faint meteors and artificial satellites.

  12. Effectiveness of a pressurized stormwater filtration system in Green Bay, Wisconsin: a study for the environmental technology verification program of the U.S. Environmental Protection Agency

    USGS Publications Warehouse

    Horwatich, J.A.; Corsi, Steven R.; Bannerman, Roger T.

    2004-01-01

    A pressurized stormwater filtration system was installed in 1998 as a stormwater-treatment practice to treat runoff from a hospital rooftop and parking lot in Green Bay, Wisconsin. This type of filtration system has been installed in Florida citrus groves and sewage treatment plants around the United States; however, this installation is the first of its kind used to treat urban runoff and the first to be tested in Wisconsin. The U.S. Geological Survey (USGS) monitored the system between November 2000 and September 2002 to evaluate it as part of the U.S. Environmental Protection Agency's Environmental Technology Verification Program. Fifteen runoff events were monitored for flow and water quality at the inlet and outlet of the system, and comparison of the event mean concentrations and constituent loads was used to evaluate its effectiveness. Loads decreased for all particulate-associated constituents monitored, including suspended solids (83 percent), suspended sediment (81 percent), total Kjeldahl nitrogen (26 percent), total phosphorus (54 percent), and total recoverable zinc (62 percent). Total dissolved solids, dissolved phosphorus, and nitrate plus nitrite loads remained similar or increased through the system. The increase in some constituents was most likely due to a ground-water contribution between runoff events. Sand/silt split analysis showed a median silt content of 78 percent at the inlet, 87 percent at the outlet, and 3 percent at the flow splitter.
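The load-reduction percentages above follow the usual summation-of-loads efficiency across monitored events; a small sketch with hypothetical per-event loads (not the study's measured values):

```python
def load_reduction_percent(inlet_loads, outlet_loads):
    """Summation-of-loads removal efficiency (%) across monitored events."""
    total_in, total_out = sum(inlet_loads), sum(outlet_loads)
    return 100.0 * (total_in - total_out) / total_in

# Hypothetical suspended-solids loads (kg) for three runoff events.
print(round(load_reduction_percent([12.0, 8.0, 20.0], [2.0, 1.5, 3.3]), 1))
# → 83.0
```

Summing loads before differencing weights large events more heavily than averaging per-event removal rates would, which is the convention in such verification studies.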

  13. Event Recognition for Contactless Activity Monitoring Using Phase-Modulated Continuous Wave Radar.

    PubMed

    Forouzanfar, Mohamad; Mabrouk, Mohamed; Rajan, Sreeraman; Bolic, Miodrag; Dajani, Hilmi R; Groza, Voicu Z

    2017-02-01

    The use of remote sensing technologies such as radar is gaining popularity as a technique for contactless detection of physiological signals and analysis of human motion. This paper presents a methodology for classifying different events in a collection of phase-modulated continuous wave radar returns. The primary application of interest is monitoring inmates, where the presence of human vital signs amidst different interferences needs to be identified. A comprehensive set of features is derived through time- and frequency-domain analyses of the radar returns. The Bhattacharyya distance is used to preselect the features with the highest class separability as candidate features for use in the classification process. Uncorrelated linear discriminant analysis is performed to decorrelate, denoise, and reduce the dimension of the candidate feature set. Linear and quadratic Bayesian classifiers are designed to distinguish breathing, different human motions, and nonhuman motions. The performance of these classifiers was evaluated on a pilot dataset of radar returns containing different events, including breathing, stopped breathing, simple human motions, and movement of a fan and of water. Our proposed pattern classification system achieved accuracies of up to 93% in stationary subject detection, 90% in stopped-breathing detection, and 86% in interference detection, accurately distinguishing the predefined events amidst interferences. Besides inmate monitoring and suicide attempt detection, this approach can be extended to other radar applications such as home-based monitoring of elderly people, apnea detection, and home occupancy detection.
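Feature preselection by Bhattacharyya distance can be illustrated for univariate Gaussian class-conditional features. The closed form below is the standard Gaussian case, shown here as a generic sketch rather than the paper's exact feature pipeline:

```python
from math import log

def bhattacharyya_gaussian(mu1, var1, mu2, var2):
    """Bhattacharyya distance between two univariate Gaussians;
    larger values indicate better per-feature class separability."""
    return (0.25 * log(0.25 * (var1 / var2 + var2 / var1 + 2.0))
            + 0.25 * (mu1 - mu2) ** 2 / (var1 + var2))

# A well-separated feature should outrank an overlapping one when
# ranking candidate features for the classifier.
separable = bhattacharyya_gaussian(0.0, 1.0, 5.0, 1.0)
overlapping = bhattacharyya_gaussian(0.0, 1.0, 0.5, 1.0)
print(separable > overlapping)  # → True
```

Ranking each feature by this distance and keeping the top scorers is what "preselecting the features with the highest class separability" amounts to before the dimensionality-reduction step.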

  14. On line instrument systems for monitoring steam turbogenerators

    NASA Astrophysics Data System (ADS)

    Clapis, A.; Giorgetti, G.; Lapini, G. L.; Benanti, A.; Frigeri, C.; Gadda, E.; Mantino, E.

    A computerized real-time data acquisition and processing system for diagnosing malfunctions of steam turbogenerators is described. Pressure, vibration, and temperature measurements are continuously collected from standard or special sensors, including during startup and shutdown events. The architecture of the monitoring system is detailed, and examples of the graphics output are presented. It is shown that such a system allows accurate diagnosis and makes it possible to create a data bank describing the dynamic characteristics of the machine park.

  15. Information Assurance Technology Analysis Center Information Assurance Tools Report Intrusion Detection

    DTIC Science & Technology

    1998-01-01

    such as central processing unit (CPU) usage, disk input/output (I/O), memory usage, user activity, and number of logins attempted. The statistics... EMERALD Commercial anomaly detection, system monitoring SRI porras@csl.sri.com www.csl.sri.com/ emerald /index. html Gabriel Commercial system...sensors, it starts to protect the network with minimal configuration and maximum intelligence. T 11 EMERALD TITLE EMERALD (Event Monitoring

  16. 40 CFR 63.820 - Applicability.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... were caused by a sudden, infrequent, and unavoidable failure of air pollution control and monitoring... activity or event that could have been foreseen and avoided, or planned for; and were not part of a... ambient air quality, the environment, and human health; (vi) All emissions monitoring and control systems...

  17. 40 CFR 63.820 - Applicability.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... were caused by a sudden, infrequent, and unavoidable failure of air pollution control and monitoring... activity or event that could have been foreseen and avoided, or planned for; and were not part of a... ambient air quality, the environment, and human health; (vi) All emissions monitoring and control systems...

  18. 40 CFR 63.820 - Applicability.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... were caused by a sudden, infrequent, and unavoidable failure of air pollution control and monitoring... activity or event that could have been foreseen and avoided, or planned for; and were not part of a... ambient air quality, the environment, and human health; (vi) All emissions monitoring and control systems...

  19. Operational warning of interplanetary shock arrivals using energetic particle data from ACE: Real-time Upstream Monitoring System

    NASA Astrophysics Data System (ADS)

    Donegan, M.; Vandegriff, J.; Ho, G. C.; Julia, S. J.

    2004-12-01

    We report on an operational system which provides advance warning and predictions of arrival times at Earth of interplanetary (IP) shocks that originate at the Sun. The data stream used in our prediction algorithm is real-time and comes from the Electron, Proton, and Alpha Monitor (EPAM) instrument on NASA's Advanced Composition Explorer (ACE) spacecraft. Since locally accelerated energetic storm particle (ESP) events accompany most IP shocks, their arrival can be predicted using ESP event signatures. We have previously reported on the development and implementation of an algorithm which recognizes the upstream particle signature of approaching IP shocks and provides estimated countdown predictions. A web-based system (see http://sd-www.jhuapl.edu/UPOS/RISP/index.html) combines this prediction capability with real-time ACE/EPAM data provided by the NOAA Space Environment Center. The most recent ACE data is continually processed and predictions of shock arrival time are updated every five minutes when an event is impending. An operational display is provided to indicate advisories and countdowns for the event. Running the algorithm on a test set of historical events, we obtain a median error of about 10 hours for predictions made 24-36 hours before actual shock arrival and about 6 hours when the shock is 6-12 hours away. This system can provide critical information to mission planners, satellite operations controllers, and scientists by providing significant lead-time for approaching events. Recently, we have made improvements to the triggering mechanism and re-trained the neural network, and here we report prediction results from the latest system.

  20. IR Variability of Eta Carinae: The 2009 Event

    NASA Astrophysics Data System (ADS)

    Smith, Nathan

    2008-08-01

    Every 5.5 years, η Carinae experiences a dramatic "spectroscopic event" when high-excitation lines in its UV, optical, and IR spectrum disappear, and its hard X-ray and radio continuum flux crash. This periodicity has been attributed to an eccentric binary system with a shell ejection occurring at periastron, and the next periastron event will occur in January 2009. The last event in June/July 2003 was poorly observed because the star was very low in the sky, but this next event is perfectly suited for an intense ground-based monitoring campaign. Mid-IR images and spectra with T-ReCS provide a direct measure of changes in the current bolometric luminosity and a direct measure of the mass in dust formation episodes that may occur at periastron in the colliding wind shock. Near-IR emission lines trace related changes in the post-event wind and ionization changes in the circumstellar environment needed to test specific models for the cause of η Car's variability as it recovers from its recent "event". Because the nebular geometry is known very well from previous observations in this program, monitoring the changes in nebular ionization will yield a 3-D map of the changing asymmetric UV radiation field geometry in the binary system, and the first estimate of the orientation of its orbit.

  1. Hierarchical Discrete Event Supervisory Control of Aircraft Propulsion Systems

    NASA Technical Reports Server (NTRS)

    Yasar, Murat; Tolani, Devendra; Ray, Asok; Shah, Neerav; Litt, Jonathan S.

    2004-01-01

    This paper presents a hierarchical application of Discrete Event Supervisory (DES) control theory for intelligent decision and control of a twin-engine aircraft propulsion system. A dual layer hierarchical DES controller is designed to supervise and coordinate the operation of two engines of the propulsion system. The two engines are individually controlled to achieve enhanced performance and reliability, necessary for fulfilling the mission objectives. Each engine is operated under a continuously varying control system that maintains the specified performance and a local discrete-event supervisor for condition monitoring and life extending control. A global upper level DES controller is designed for load balancing and overall health management of the propulsion system.
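
    The two-layer supervision idea can be illustrated with a toy sketch: a local supervisor per engine that disables a controllable event when monitored health degrades (life-extending control), and a global coordinator that routes extra demand between the two engines (load balancing). All class names, event names, and thresholds here are hypothetical; this is not the paper's DES synthesis.

```python
class LocalSupervisor:
    """Per-engine discrete-event supervisor (hypothetical event alphabet).

    Disables the controllable event 'boost' when monitored health drops
    below a threshold, a crude stand-in for life-extending control.
    """
    def __init__(self, health=1.0, threshold=0.5):
        self.health = health
        self.threshold = threshold

    def enabled_events(self):
        events = {"cruise", "idle"}
        if self.health >= self.threshold:
            events.add("boost")
        return events

class GlobalSupervisor:
    """Upper-level DES controller coordinating two engines."""
    def __init__(self, left, right):
        self.left, self.right = left, right

    def assign_demand(self):
        # Route extra thrust demand to the healthier engine whose local
        # supervisor still enables 'boost'; otherwise share the load.
        candidates = [e for e in (self.left, self.right)
                      if "boost" in e.enabled_events()]
        if not candidates:
            return "shared"
        best = max(candidates, key=lambda e: e.health)
        return "left" if best is self.left else "right"
```

    The key structural point survives even in this toy form: the upper layer never manipulates engine actuators directly, it only restricts or redirects events that the local supervisors already expose.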

  2. Association rule mining in the US Vaccine Adverse Event Reporting System (VAERS).

    PubMed

    Wei, Lai; Scott, John

    2015-09-01

    Spontaneous adverse event reporting systems are critical tools for monitoring the safety of licensed medical products. Commonly used signal detection algorithms identify disproportionate product-adverse event pairs and may not be sensitive to more complex potential signals. We sought to develop a computationally tractable multivariate data-mining approach to identify product-multiple adverse event associations. We describe an application of stepwise association rule mining (Step-ARM) to detect potential vaccine-symptom group associations in the US Vaccine Adverse Event Reporting System. Step-ARM identifies strong associations between one vaccine and one or more adverse events. To reduce the number of redundant association rules found by Step-ARM, we also propose a clustering method for the post-processing of association rules. In sample applications to a trivalent intradermal inactivated influenza virus vaccine and to measles, mumps, rubella, and varicella (MMRV) vaccine and in simulation studies, we find that Step-ARM can detect a variety of medically coherent potential vaccine-symptom group signals efficiently. In the MMRV example, Step-ARM appears to outperform univariate methods in detecting a known safety signal. Our approach is sensitive to potentially complex signals, which may be particularly important when monitoring novel medical countermeasure products such as pandemic influenza vaccines. The post-processing clustering algorithm improves the applicability of the approach as a screening method to identify patterns that may merit further investigation. Copyright © 2015 John Wiley & Sons, Ltd.
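
    The one-vaccine-to-symptom-set rule form that Step-ARM searches for can be illustrated with a brute-force miner over support and confidence thresholds. This is a sketch of the rule shape only, not the published Step-ARM algorithm, and the data layout is hypothetical.

```python
from itertools import combinations

def mine_vaccine_symptom_rules(reports, min_support, min_confidence, max_symptoms=2):
    """Mine 'vaccine -> symptom set' rules from spontaneous reports.

    reports: list of (vaccine, set_of_symptoms) tuples.
    Returns (vaccine, symptom_tuple, confidence) rules meeting both thresholds.
    """
    n = len(reports)
    rules = []
    for vaccine in {v for v, _ in reports}:
        vax_reports = [syms for v, syms in reports if v == vaccine]
        all_syms = set().union(*vax_reports)
        for k in range(1, max_symptoms + 1):
            for combo in combinations(sorted(all_syms), k):
                joint = sum(1 for syms in vax_reports if set(combo) <= syms)
                support = joint / n
                confidence = joint / len(vax_reports)
                if support >= min_support and confidence >= min_confidence:
                    rules.append((vaccine, combo, round(confidence, 3)))
    return rules
```

    A real system would prune the symptom lattice stepwise rather than enumerating all combinations, and would then cluster near-duplicate rules as the paper's post-processing step does.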

  3. 76 FR 66057 - North American Electric Reliability Corporation; Order Approving Regional Reliability Standard

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-10-25

    ... system conditions when the system experiences dynamic events such as low frequency oscillations, or... R8 requires that dynamic disturbance recorders function continuously. To capture system disturbance... recording capability necessary to monitor the response of the Bulk-Power System to system disturbances...

  4. Continuous monitoring of water flow and solute transport using vadose zone monitoring technology

    NASA Astrophysics Data System (ADS)

    Dahan, O.

    2009-04-01

    Groundwater contamination is usually attributed to pollution events that initiate on the land surface. These may be related to various sources such as industrial, urban or agricultural, and may appear as point or non-point sources, through a single accidental event or a continuous pollution process. In all cases, groundwater pollution is a consequence of pollutant transport processes that take place in the vadose zone above the water table. Attempts to control pollution events and prevent groundwater contamination usually involve groundwater monitoring programs. This, however, cannot provide any protection against contamination, since pollution identification in groundwater is clear evidence that the groundwater is already polluted and contaminants have already traversed the entire vadose zone. Accordingly, an efficient monitoring program that aims at providing information that may prevent groundwater pollution has to include vadose-zone monitoring systems. Such a system should provide real-time information on the hydrological and chemical properties of the percolating water and serve as an early warning system capable of detecting pollution events in their early stages, before arrival of contaminants to groundwater. Recently, a vadose-zone monitoring system (VMS) was developed to allow continuous monitoring of the hydrological and chemical properties of percolating water in the deep vadose zone. The VMS includes flexible time-domain reflectometry (FTDR) probes for continuous tracking of water content profiles, and vadose-zone sampling ports (VSPs) for frequent sampling of the deep vadose pore water at multiple depths. The monitoring probes and sampling ports are installed through uncased slanted boreholes using a flexible sleeve that allows attachment of the monitoring devices to the borehole walls while achieving good contact between the sensors and the undisturbed sediment column.
The system has been successfully implemented in several studies on water flow and contaminant transport in various hydrological and geological setups. These include floodwater infiltration in arid environments, land use impact on groundwater quality, and control of remediation processes in a contaminated vadose zone. The data collected by the VMS allows direct measurements of flow velocities and fluxes in the vadose zone while continuously monitoring the chemical evolution of the percolating water. While real-time information on the hydrological and chemical properties of the percolating water in the vadose zone is essential to prevent groundwater contamination, it is also vital for any remediation actions. Remediation of polluted soils and aquifers essentially involves manipulation of surface and subsurface hydrological, physical and biochemical conditions to improve pollutant attenuation. Controlling the biochemical conditions to enhance biodegradation often includes introducing degrading microorganisms, applying electron donors or acceptors, or adding nutrients that can promote growth of the desired degrading organisms. Accordingly, real-time data on the hydrological and chemical properties of the vadose zone may be used to select remediation strategies and determine their efficiency on the basis of real-time information.

  5. Analyzing the Effects of Climate Change on Sea Surface Temperature in Monitoring Coral Reef Health in the Florida Keys Using Sea Surface Temperature Data

    NASA Technical Reports Server (NTRS)

    Jones, Jason; Burbank, Renane; Billiot, Amanda; Schultz, Logan

    2011-01-01

    This presentation discusses use of 4 kilometer satellite-based sea surface temperature (SST) data to monitor and assess coral reef areas of the Florida Keys. There are growing concerns about the impacts of climate change on coral reef systems throughout the world. Satellite remote sensing technology is being used for monitoring coral reef areas with the goal of understanding the climatic and oceanic changes that can lead to coral bleaching events. Elevated SST is a well-documented cause of coral bleaching events. Some coral monitoring studies have used 50 km data from the Advanced Very High Resolution Radiometer (AVHRR) to study the relationships of sea surface temperature anomalies to bleaching events. In partnership with NOAA's Office of National Marine Sanctuaries and the University of South Florida's Institute for Marine Remote Sensing, this project utilized higher resolution SST data from the Terra's Moderate Resolution Imaging Spectroradiometer (MODIS) and AVHRR. SST data for 2000-2010 was employed to compute sea surface temperature anomalies within the study area. The 4 km SST anomaly products enabled visualization of SST levels for known coral bleaching events from 2000-2010.
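
    Computing an SST anomaly as the departure from a per-month climatology, as the 4 km anomaly products described above do, is simple enough to sketch. The record layout below is hypothetical; operational products work on gridded fields and use a fixed baseline period rather than the series itself.

```python
def monthly_climatology(series):
    """Mean SST per calendar month from (year, month, sst_celsius) records."""
    sums, counts = {}, {}
    for _, month, sst in series:
        sums[month] = sums.get(month, 0.0) + sst
        counts[month] = counts.get(month, 0) + 1
    return {m: sums[m] / counts[m] for m in sums}

def sst_anomalies(series):
    """Departure of each observation from its month's climatological mean."""
    clim = monthly_climatology(series)
    return [(y, m, round(sst - clim[m], 3)) for y, m, sst in series]
```

    Sustained positive anomalies at a reef pixel are the signal that coral bleaching monitoring products flag for closer inspection.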

  6. Nonlinear optical microscopy for immunoimaging: a custom optimized system of high-speed, large-area, multicolor imaging

    PubMed Central

    Li, Hui; Cui, Quan; Zhang, Zhihong; Luo, Qingming

    2015-01-01

    Background Nonlinear optical microscopy has become the current state-of-the-art for intravital imaging. Due to its advantages of high resolution, superior tissue penetration, lower photodamage and photobleaching, as well as intrinsic z-sectioning ability, this technology has been widely applied in immunoimaging for a decade. However, in terms of monitoring immune events in the native physiological environment, the conventional nonlinear optical microscope system has to be optimized for live animal imaging. Generally speaking, three crucial capabilities are desired: high-speed, large-area and multicolor imaging. Among the numerous high-speed scanning mechanisms used in nonlinear optical imaging, polygon scanning is not only linear but also dispersion-free, with high stability and tunable rotation speed, which allows it to overcome the disadvantages of multifocal scanning, resonant scanners and acousto-optical deflectors (AODs). However, low frame rates and the lack of large-area or multicolor imaging ability make current polygon-based nonlinear optical microscopes unable to meet the requirements of immune event monitoring. Methods We built a polygon-based nonlinear optical microscope system custom optimized for immunoimaging with high-speed, large-area and multicolor imaging abilities. Results First, we validated the imaging performance of the system by standard methods. Then, to demonstrate the ability to monitor immune events, migration of immunocytes observed by the system in typical immunological models such as the lymph node, footpad and dorsal skinfold chamber is shown. Finally, we give an outlook on possible advances in related technologies, such as sample stabilization and optical clearing, for more stable and deeper intravital immunoimaging. Conclusions This study will be helpful for optimizing nonlinear optical microscopes to obtain more comprehensive and accurate information on immune events. PMID:25694951

  7. Phasor Measurement Unit and Its Application in Modern Power Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ma, Jian; Makarov, Yuri V.; Dong, Zhao Yang

    2010-06-01

    The introduction of phasor measurement units (PMUs) in power systems significantly improves the possibilities for monitoring and analyzing power system dynamics. Synchronized measurements make it possible to directly measure phase angles between corresponding phasors in different locations within the power system. Improved monitoring and remedial action capabilities allow network operators to utilize the existing power system in a more efficient way. Improved information allows fast and reliable emergency actions, which reduces the need for relatively high transmission margins required by potential power system disturbances. In this chapter, the applications of PMUs in modern power systems are presented. Specifically, the topics covered in this chapter include state estimation, voltage and transient stability, oscillation monitoring, event and fault detection, situation awareness, and model validation. A case study using the characteristic-ellipsoid method based on PMU data to monitor power system dynamics is presented.
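
    The basic capability the chapter builds on, directly comparing synchronized phase angles at different buses, reduces to a small calculation once phasors are timestamped against a common clock. A minimal sketch with hypothetical function names (real deployments follow the synchrophasor data formats, not raw complex numbers):

```python
import cmath
import math

def phase_angle_deg(phasor):
    """Angle of a complex voltage phasor, in degrees."""
    return math.degrees(cmath.phase(phasor))

def angle_separation(phasor_a, phasor_b):
    """Signed phase-angle difference between two synchronized phasors,
    wrapped into (-180, 180] degrees so that angles near +/-180 compare
    correctly."""
    diff = phase_angle_deg(phasor_a) - phase_angle_deg(phasor_b)
    return (diff + 180.0) % 360.0 - 180.0
```

    A growing angle separation between two buses is one of the simplest PMU-derived indicators of system stress used in the monitoring applications listed above.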

  8. Planetary Lake Lander - A Robotic Sentinel to Monitor a Remote Lake

    NASA Technical Reports Server (NTRS)

    Pedersen, Liam; Smith, Trey; Lee, Susan; Cabrol, Nathalie; Rose, Kevin

    2012-01-01

    The Planetary Lake Lander Project is studying the impact of rapid deglaciation at a high altitude alpine lake in the Andes, where disrupted environmental, physical, chemical, and biological cycles result in newly emerging natural patterns. The solar powered Lake Lander robot is designed to monitor the lake system and characterize both baseline characteristics and impacts of disturbance events such as storms and landslides. Lake Lander must use an onboard adaptive science-on-the-fly approach to return relevant data about these events to mission control without exceeding limited energy and bandwidth resources. Lake Lander carries weather sensors, cameras and a sonde that is winched up and down the water column to monitor temperature, dissolved oxygen, turbidity and other water quality parameters. Data from Lake Lander is returned via satellite and distributed to an international team of scientists via web-based ground data systems. Here, we describe the Lake Lander Project scientific goals, hardware design, ground data systems, and preliminary data from 2011. The adaptive science-on-the-fly system will be described in future papers.

  9. URBAN-NET: A Network-based Infrastructure Monitoring and Analysis System for Emergency Management and Public Safety

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lee, Sangkeun; Chen, Liangzhe; Duan, Sisi

    Critical Infrastructures (CIs) such as energy, water, and transportation are complex networks that are crucial for sustaining day-to-day commodity flows vital to national security, economic stability, and public safety. The nature of these CIs is such that failures caused by an extreme weather event or a man-made incident can trigger widespread cascading failures, sending ripple effects at regional or even national scales. To minimize such effects, it is critical for emergency responders to identify existing or potential vulnerabilities within CIs during such stressor events in a systematic and quantifiable manner and take appropriate mitigating actions. We present here a novel critical infrastructure monitoring and analysis system named URBAN-NET. The system includes a software stack and tools for monitoring CIs, pre-processing data, interconnecting multiple CI datasets as a heterogeneous network, identifying vulnerabilities through graph-based topological analysis, and predicting consequences based on what-if simulations, along with visualization. As a proof-of-concept, we present several case studies to show the capabilities of our system. We also discuss remaining challenges and future work.

  10. Medication errors: an analysis comparing PHICO's closed claims data and PHICO's Event Reporting Trending System (PERTS).

    PubMed

    Benjamin, David M; Pendrak, Robert F

    2003-07-01

    Clinical pharmacologists are all dedicated to improving the use of medications and decreasing medication errors and adverse drug reactions. However, quality improvement requires that some significant parameters of quality be categorized, measured, and tracked to provide benchmarks to which future data (performance) can be compared. One of the best ways to accumulate data on medication errors and adverse drug reactions is to look at medical malpractice data compiled by the insurance industry. Using data from PHICO insurance company, PHICO's Closed Claims Data, and PHICO's Event Reporting Trending System (PERTS), this article examines the significance and trends of the claims and events reported between 1996 and 1998. Those who misread history are doomed to repeat the mistakes of the past. From a quality improvement perspective, the categorization of the claims and events is useful for reengineering integrated medication delivery, particularly in a hospital setting, and for redesigning drug administration protocols on low therapeutic index medications and "high-risk" drugs. Demonstrable evidence of quality improvement is being required by state laws and by accreditation agencies. The state of Florida requires that quality improvement data be posted quarterly on the Web sites of the health care facilities. Other states have followed suit. The insurance industry is concerned with costs, and medication errors cost money. Even excluding costs of litigation, an adverse drug reaction may cost up to $2500 in hospital resources, and a preventable medication error may cost almost $4700. To monitor costs and assess risk, insurance companies want to know what errors are made and where the system has broken down, permitting the error to occur. Recording and evaluating reliable data on adverse drug events is the first step in improving the quality of pharmacotherapy and increasing patient safety. Cost savings and quality improvement evolve on parallel paths. 
The PHICO data provide an excellent opportunity to review information that typically would not be in the public domain. The events captured by PHICO are similar to the errors and "high-risk" drugs described in the literature, the U.S. Pharmacopeia's MedMARx Reporting System, and the Sentinel Event reporting system maintained by the Joint Commission for the Accreditation of Healthcare Organizations. The information in this report serves to alert clinicians to the possibility of adverse events when treating patients with the reported drugs, thus allowing for greater care in their use and closer monitoring. Moreover, when using high-risk drugs, patients should be well informed of known risks, dosage should be titrated slowly, and therapeutic drug monitoring and laboratory monitoring should be employed to optimize therapy and minimize adverse effects.

  11. The design and implementation of EPL: An event pattern language for active databases

    NASA Technical Reports Server (NTRS)

    Giuffrida, G.; Zaniolo, C.

    1994-01-01

    The growing demand for intelligent information systems requires closer coupling of rule-based reasoning engines, such as CLIPS, with advanced data base management systems (DBMS). For instance, several commercial DBMS now support the notion of triggers that monitor events and transactions occurring in the database and fire induced actions, which perform a variety of critical functions, including safeguarding the integrity of data, monitoring access, and recording volatile information needed by administrators, analysts, and expert systems to perform assorted tasks; examples of these tasks include security enforcement, market studies, knowledge discovery, and link analysis. At UCLA, we designed and implemented the event pattern language (EPL) which is capable of detecting and acting upon complex patterns of events which are temporally related to each other. For instance, a plant manager should be notified when a certain pattern of overheating repeats itself over time in a chemical process; likewise, proper notification is required when a suspicious sequence of bank transactions is executed within a certain time limit. The EPL prototype is built in CLIPS to operate on top of Sybase, a commercial relational DBMS, where actions can be triggered by events such as simple database updates, insertions, and deletions. The rule-based syntax of EPL allows the sequences of goals in rules to be interpreted as sequences of temporal events; each goal can correspond to either (1) a simple event, or (2) a (possibly negated) event/condition predicate, or (3) a complex event defined as the disjunction and repetition of other events. Various extensions have been added to CLIPS in order to tailor the interface with Sybase and its open client/server architecture.
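
    The kind of temporal pattern EPL targets, such as an overheating event repeating several times within a time window, can be approximated by a simple sliding-window detector. This sketch captures the semantics of one pattern class only; it is not EPL's rule syntax, and the names are hypothetical.

```python
from collections import deque

def make_repeat_detector(event_name, count, window):
    """Detector for the pattern 'event_name occurs `count` times within
    `window` time units'. Returns a callable that is fed (timestamp, name)
    pairs in time order and returns True when the pattern completes."""
    times = deque()

    def feed(timestamp, name):
        if name != event_name:
            return False
        times.append(timestamp)
        # Drop occurrences that have aged out of the window.
        while times and timestamp - times[0] > window:
            times.popleft()
        return len(times) >= count

    return feed
```

    In an active-database setting this callable would run inside a trigger, with each database update feeding one event into it and the True result firing the induced action (e.g. notifying the plant manager).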

  12. Using Movement and Intentions to Understand Human Activity

    ERIC Educational Resources Information Center

    Zacks, Jeffrey M.; Kumar, Shawn; Abrams, Richard A.; Mehta, Ritesh

    2009-01-01

    During perception, people segment continuous activity into discrete events. They do so in part by monitoring changes in features of an ongoing activity. Characterizing these features is important for theories of event perception and may be helpful for designing information systems. The three experiments reported here asked whether the body…

  13. Monitoring pulmonary function with superimposed pulmonary gas exchange curves from standard analyzers.

    PubMed

    Zar, Harvey A; Noe, Frances E; Szalados, James E; Goodrich, Michael D; Busby, Michael G

    2002-01-01

    A repetitive graphic display of single-breath pulmonary function can indicate changes in cardiac and pulmonary physiology brought on by clinical events. Parallel advances in computer technology and monitoring make real-time, single-breath pulmonary function clinically practicable. We describe a system built from a commercially available airway gas monitor and off-the-shelf computer and data-acquisition hardware. Analog data for gas flow rate, O2, and CO2 concentrations are introduced into a computer through an analog-to-digital conversion board. Oxygen uptake (VO2) and carbon dioxide output (VCO2) are calculated for each breath. Inspired minus expired concentrations for O2 and CO2 are displayed simultaneously with the expired gas flow rate curve for each breath. Dead-space and alveolar ventilation are calculated for each breath and readily appreciated from the display. Graphs illustrating the function of the system are presented for the following clinical scenarios: upper airway obstruction, bronchospasm, bronchopleural fistula, pulmonary perfusion changes, and inadequate oxygen delivery. This paper describes a real-time, single-breath pulmonary monitoring system that displays three parameters graphed against time: expired flow rate, oxygen uptake and carbon dioxide production. This system allows for early and rapid recognition of treatable conditions that may lead to adverse events without any additional patient measurements or invasive procedures. Monitoring systems similar to the one described in this paper may lead to a higher level of patient safety without any additional patient risk.
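
    The per-breath VO2/VCO2 computation described above amounts to a discrete integral of flow multiplied by inspired-minus-expired gas fractions. The sketch below uses a simplified, hypothetical sample layout; the actual system aligns the analog gas and flow signals in time and treats inspiration and expiration separately.

```python
def breath_gas_exchange(samples, fio2=0.21, fico2=0.0):
    """Per-breath O2 uptake and CO2 output from sampled expired flow and
    gas fractions (simplified sketch).

    samples: list of (dt_seconds, expired_flow_L_per_s, feo2, feco2)
             tuples covering one expiration.
    Returns (vo2_litres, vco2_litres) for the breath.
    """
    # VO2: what went in as O2 but did not come back out.
    vo2 = sum(dt * flow * (fio2 - feo2) for dt, flow, feo2, _ in samples)
    # VCO2: what came out as CO2 beyond the (near-zero) inspired fraction.
    vco2 = sum(dt * flow * (feco2 - fico2) for dt, flow, _, feco2 in samples)
    return vo2, vco2
```

    Dividing VCO2 by VO2 gives the respiratory exchange ratio, and a sudden drop in either value at constant ventilation is the kind of change the graphical display makes visible breath by breath.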

  14. Challenges in Regional CTBT Monitoring: The Experience So Far From Vienna

    NASA Astrophysics Data System (ADS)

    Bratt, S. R.

    2001-05-01

    The verification system being established to monitor the CTBT will include an International Monitoring System (IMS) network of 321 seismic, hydroacoustic, infrasound and radionuclide stations, transmitting digital data to the International Data Centre (IDC) in Vienna, Austria over a Global Communications Infrastructure (GCI). The IDC started in February 2000 to disseminate a wide range of products based on automatic processing and interactive analysis of data from about 90 stations from the four IMS technologies. The number of events in the seismo-acoustic Reviewed Event Bulletins (REB) was 18,218 for the year 2000, with the daily number ranging from 30 to 360. Over 300 users from almost 50 Member States are now receiving an average of 18,000 data and product deliveries per month from the IDC. As the IMS network expands (40 - 60 new stations are scheduled to start transmitting data this year) and as GCI communications links bring increasing volumes of new data into Vienna (70 new GCI sites are currently in preparation), the monitoring capability of the IMS and IDC has the potential to improve significantly. To realize this potential, the IDC must continue to improve its capacity to exploit regional seismic data from events defined by few stations with large azimuthal gaps. During 2000, 25% of the events in the REB were defined by five or fewer stations. 48% were defined by at least one regional phase, and 24% were defined by at least three. 34% had gaps in azimuthal coverage of more than 180 degrees. The fraction of regional, sparsely detected events will only increase as new, sensitive stations come on-line, and the detection threshold drops. This will be offset, to some extent, because stations within the denser network that detect near-threshold events will be at closer distances, on average.
Thus to address the challenges of regional monitoring, the IDC must integrate "tuned" station and network processing parameters for new stations; enhanced and/or new methods for estimating location, depth and uncertainty bounds; and validated, regionally-calibrated travel times, event characterization parameters and screening criteria. A new IDC program to fund research to calibrate regional seismic travel paths seeks to address, in cooperation with other national efforts, one item on this list. More effective use of the full waveform data and cross-technology synergies must be explored. All of this work must be integrated into modular software systems that can be maintained and improved over time. To motivate these regional monitoring challenges and possible improvements, the experience from the IDC will be presented via a series of illustrative, sample events. Challenges in the technical and policy arenas must be addressed as well. IMS data must first be available at the IDC before they can be analyzed. The encouraging experience to date is that the availability of data arriving via the GCI is significantly higher (~95%) than the availability (~70%) from the same stations prior to GCI installation, when they were transmitting data via other routes. Within the IDC, trade-offs must be considered between the desired levels of product quality and timeliness, and the investment in personnel and system development to support the levels sought. Another high-priority objective is to develop a policy for providing data and products to scientific and disaster alert organizations. It is clear that broader exploitation of these rich and unique assets could be of great, mutual benefit, and is, perhaps, a necessity for the CTBT verification system to achieve its potential.

  15. Towards a monitoring system of temperature extremes in Europe

    NASA Astrophysics Data System (ADS)

    Lavaysse, Christophe; Cammalleri, Carmelo; Dosio, Alessandro; van der Schrier, Gerard; Toreti, Andrea; Vogt, Jürgen

    2018-01-01

    Extreme-temperature anomalies such as heat and cold waves may have strong impacts on human activities and health. The heat waves in western Europe in 2003 and in Russia in 2010, or the cold wave in southeastern Europe in 2012, generated a considerable amount of economic loss and resulted in the death of several thousands of people. Providing an operational system to monitor extreme-temperature anomalies in Europe is thus of prime importance to help decision makers and emergency services to be responsive to an unfolding extreme event. In this study, the development and the validation of a monitoring system of extreme-temperature anomalies are presented. The first part of the study describes the methodology based on the persistence of events exceeding a percentile threshold. The method is applied to three different observational datasets, in order to assess the robustness and highlight uncertainties in the observations. The climatology of extreme events from the last 21 years is then analysed to highlight the spatial and temporal variability of the hazard, and discrepancies amongst the observational datasets are discussed. In the last part of the study, the products derived from this study are presented and discussed with respect to previous studies. The results highlight the accuracy of the developed index and the statistical robustness of the distribution used to calculate the return periods.
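
    The core of the method, persistence of daily exceedances over a percentile threshold, can be sketched as follows. The nearest-rank percentile and the use of the series itself as its own climatology are simplifying assumptions for illustration, not the paper's exact index.

```python
def percentile(values, q):
    """Nearest-rank percentile (q in [0, 100]) of a non-empty list."""
    ordered = sorted(values)
    idx = max(0, int(round(q / 100.0 * len(ordered))) - 1)
    return ordered[idx]

def detect_heat_waves(temps, q=90, min_days=3):
    """(start, end) index pairs (inclusive) of runs of at least `min_days`
    consecutive days above the q-th percentile of the daily series."""
    threshold = percentile(temps, q)
    waves, start = [], None
    for i, t in enumerate(temps + [float("-inf")]):  # sentinel closes last run
        if t > threshold:
            if start is None:
                start = i
        else:
            if start is not None and i - start >= min_days:
                waves.append((start, i - 1))
            start = None
    return waves
```

    Cold waves are the mirror image (persistence below a low percentile), and comparing the detected runs across several observational datasets is how the study assesses the robustness of the hazard climatology.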

  16. The role of citizen science in monitoring small-scale pollution events.

    PubMed

    Hyder, Kieran; Wright, Serena; Kirby, Mark; Brant, Jan

    2017-07-15

Small-scale pollution events involve the release of potentially harmful substances into the marine environment. These events can affect all levels of the ecosystem, with damage to both fauna and flora. Numerous reporting structures are currently available to document spills; however, there is a lack of information on small-scale events because of their limited magnitude and patchy distribution. To this end, volunteers may provide a useful means of filling this data gap, especially in coastal environments heavily used by members of the public. The potential for citizen scientists to record small-scale pollution events is explored using the UK as an example, with a focus on highlighting methods and issues associated with using this data source. An integrated monitoring system is proposed which combines citizen science and traditional reporting approaches. Crown Copyright © 2017. Published by Elsevier Ltd. All rights reserved.

  17. Signaling communication events in a computer network

    DOEpatents

    Bender, Carl A.; DiNicola, Paul D.; Gildea, Kevin J.; Govindaraju, Rama K.; Kim, Chulho; Mirza, Jamshed H.; Shah, Gautam H.; Nieplocha, Jaroslaw

    2000-01-01

A method, apparatus and program product for detecting a communication event in a distributed parallel data processing system in which a message is sent from an origin to a target. A low-level application programming interface (LAPI) is provided which has an operation for associating a counter with a communication event to be detected. The LAPI increments the counter upon the occurrence of the communication event. The number in the counter is monitored, and when the number increases, the event is detected. A completion counter in the origin is associated with the completion of a message being sent from the origin to the target. When the message is completed, LAPI increments the completion counter such that monitoring the completion counter detects the completion of the message. The completion counter may be used to ensure that a first message has been sent from the origin to the target and completed before a second message is sent.
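    The counter mechanism described above can be sketched as follows. `EventCounter` and `fake_transport` are hypothetical names for illustration (the real LAPI is a C interface), but the pattern is the same: the communication layer increments a counter when an event completes, and a caller blocks until the count reaches a target value.

```python
import threading

class EventCounter:
    """Counter associated with a communication event (LAPI-style sketch)."""
    def __init__(self):
        self._count = 0
        self._cond = threading.Condition()

    def increment(self):
        # Called by the communication layer when the event occurs.
        with self._cond:
            self._count += 1
            self._cond.notify_all()

    def wait_for(self, value):
        # Block until the counter reaches `value`; return the current count.
        with self._cond:
            self._cond.wait_for(lambda: self._count >= value)
            return self._count

completion = EventCounter()

def fake_transport():
    # Stand-in for the message layer: delivery bumps the completion counter.
    completion.increment()

t = threading.Thread(target=fake_transport)
t.start()
completion.wait_for(1)   # first message completed; safe to send the second
t.join()
```

    Ordering the second send after `wait_for(1)` is exactly the use of the completion counter described in the abstract.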

  18. Volcano and Earthquake Monitoring Plan for the Yellowstone Volcano Observatory, 2006-2015

    USGS Publications Warehouse

    ,

    2006-01-01

To provide Yellowstone National Park (YNP) and its surrounding communities with a modern, comprehensive system for volcano and earthquake monitoring, the Yellowstone Volcano Observatory (YVO) has developed a monitoring plan for the period 2006-2015. Such a plan is needed so that YVO can provide timely information during seismic, volcanic, and hydrothermal crises and can anticipate hazardous events before they occur. The monitoring network will also provide high-quality data for scientific study and interpretation of one of the largest active volcanic systems in the world. Among the needs of the observatory are to upgrade its seismograph network to modern standards and to add five new seismograph stations in areas of the park that currently lack adequate station density. In cooperation with the National Science Foundation (NSF) and its Plate Boundary Observatory Program (PBO), YVO seeks to install five borehole strainmeters and two tiltmeters to measure crustal movements. The boreholes would be located in developed areas close to existing infrastructure and away from sensitive geothermal features. In conjunction with the park's geothermal monitoring program, installation of new stream gages and gas-measuring instruments will allow YVO to compare geophysical phenomena, such as earthquakes and ground motions, with hydrothermal events, such as anomalous water and gas discharge. In addition, YVO seeks to characterize the behavior of geyser basins, both to detect any precursors to hydrothermal explosions and to monitor earthquakes related to fluid movements that are difficult to detect with the current monitoring system. Finally, a monitoring network consists not solely of instruments but also requires a secure system for real-time transmission of data. The current telemetry system is vulnerable to failures that could jeopardize data transmission out of Yellowstone.
Future advances in monitoring technologies must be accompanied by improvements in the infrastructure for data transmission. Overall, our strategy is to (1) maximize our ability to provide rapid assessments of changing conditions to ensure public safety, (2) minimize environmental and visual impact, and (3) install instrumentation in developed areas.

  19. Use of electronic monitoring in clinical nursing research.

    PubMed

    Ailinger, Rita L; Black, Patricia L; Lima-Garcia, Natalie

    2008-05-01

    In the past decade, the introduction of electronic monitoring systems for monitoring medication adherence has contributed to the dialog about what works and what does not work in monitoring adherence. The purpose of this article is to describe the use of the Medication Event Monitoring System (MEMS) in a study of patients receiving isoniazid for latent tuberculosis infection. Three case examples from the study illustrate the data that are obtained from the electronic device compared to self-reports and point to the disparities that may occur in electronic monitoring. The strengths and limitations of using the MEMS and ethical issues in utilizing this technology are discussed. Nurses need to be aware of these challenges when using electronic measuring devices to monitor medication adherence in clinical nursing practice and research.

  20. Air Quality Side Event Proposal November 2016 GEO XIII ...

    EPA Pesticide Factsheets

The Group on Earth Observations (GEO), which EPA has participated in since 2003, has put out a call for Side Events for its thirteenth annual international Plenary Meeting, which is in St. Petersburg, Russia, in November 2016. EPA has put on Side Events on Air Quality and Health observational systems at eight of the previous Plenaries. This document is a Side Event proposal regarding air quality, health, and next-generation monitoring and observation techniques. It is submitted to the GEO Secretariat for consideration. If accepted, there will likely be presentations by EPA and NASA, other GEO Member Countries, and UNEP and other GEO Participating Organizations at the Side Event. It is an opportunity to share scientific and technological advances in this area and build partnerships and collaboration.

  1. Evaluation of Local Media Surveillance for Improved Disease Recognition and Monitoring in Global Hotspot Regions

    PubMed Central

    Schwind, Jessica S.; Wolking, David J.; Brownstein, John S.; Mazet, Jonna A. K.; Smith, Woutrina A.

    2014-01-01

Digital disease detection tools are technologically sophisticated, but dependent on digital information, which for many areas suffering from high disease burdens is simply not an option. In areas where news is often reported in local media with no digital counterpart, integration of local news information with digital surveillance systems, such as HealthMap (Boston Children’s Hospital), is critical. Little research has been published regarding the specific contribution of local health-related articles to digital surveillance systems. In response, the USAID PREDICT project implemented a local media surveillance (LMS) pilot study in partner countries to monitor disease events reported in print media. This research assessed the potential of LMS to enhance digital surveillance reach in five low- and middle-income countries. Over 16 weeks, select surveillance system attributes of LMS, such as simplicity, flexibility, acceptability, timeliness, and stability, were evaluated to identify strengths and weaknesses in the surveillance method. Findings revealed that LMS filled gaps in digital surveillance network coverage by contributing valuable localized information on disease events to the global HealthMap database. A total of 87 health events were reported through the LMS pilot in the 16-week monitoring period, including 71 unique reports not found by the HealthMap digital detection tool. Furthermore, HealthMap identified an additional 236 health events outside of LMS. It was also observed that belief in the importance of the project and proper source selection by the participants were crucial to the success of this method. The timely identification of disease outbreaks near points of emergence and the recognition of risk factors associated with disease occurrence continue to be important components of any comprehensive surveillance system for monitoring disease activity across populations.
The LMS method, with its minimal resource commitment, could be one tool used to address the information gaps seen in global ‘hot spot’ regions. PMID:25333618

  2. An experimental system for flood risk forecasting and monitoring at global scale

    NASA Astrophysics Data System (ADS)

    Dottori, Francesco; Alfieri, Lorenzo; Kalas, Milan; Lorini, Valerio; Salamon, Peter

    2017-04-01

Global flood forecasting and monitoring systems are nowadays a reality and are being applied by a wide range of users and practitioners in disaster risk management. Furthermore, there is an increasing demand from users to integrate flood early warning systems with risk-based forecasting, combining streamflow estimations with expected inundated areas and flood impacts. Finally, emerging technologies such as crowdsourcing and social media monitoring can play a crucial role in flood disaster management and preparedness. Here, we present some recent advances of an experimental procedure for near-real-time flood mapping and impact assessment. The procedure translates in near real time the daily streamflow forecasts issued by the Global Flood Awareness System (GloFAS) into event-based flood hazard maps, which are then combined with exposure and vulnerability information at global scale to derive risk forecasts. Impacts of the forecasted flood events are evaluated in terms of flood-prone areas, potential economic damage, and affected population, infrastructure and cities. To increase the reliability of our forecasts we propose the integration of model-based estimations with an innovative methodology for social media monitoring, which allows for real-time verification and correction of impact forecasts. Finally, we present the results of preliminary tests which show the potential of the proposed procedure in supporting emergency response and management.

  3. APDS: Autonomous Pathogen Detection System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Langlois, R G; Brown, S; Burris, L

An early warning system to counter bioterrorism, the Autonomous Pathogen Detection System (APDS) continuously monitors the environment for the presence of biological pathogens (e.g., anthrax) and once detected, it sounds an alarm much like a smoke detector warns of a fire. Long before September 11, 2001, this system was being developed to protect domestic venues and events including performing arts centers, mass transit systems, major sporting and entertainment events, and other high profile situations in which the public is at risk of becoming a target of bioterrorist attacks. Customizing off-the-shelf components and developing new components, a multidisciplinary team developed APDS, a stand-alone system for rapid, continuous monitoring of multiple airborne biological threat agents in the environment. The completely automated APDS samples the air, prepares fluid samples in-line, and performs two orthogonal tests: immunoassay and nucleic acid detection. When compared to competing technologies, APDS is unprecedented in terms of flexibility and system performance.

  4. Output Consensus of Heterogeneous Linear Multi-Agent Systems by Distributed Event-Triggered/Self-Triggered Strategy.

    PubMed

    Hu, Wenfeng; Liu, Lu; Feng, Gang

    2016-09-02

    This paper addresses the output consensus problem of heterogeneous linear multi-agent systems. We first propose a novel distributed event-triggered control scheme. It is shown that, with the proposed control scheme, the output consensus problem can be solved if two matrix equations are satisfied. Then, we further propose a novel self-triggered control scheme, with which continuous monitoring is avoided. By introducing a fixed timer into both event- and self-triggered control schemes, Zeno behavior can be ruled out for each agent. The effectiveness of the event- and self-triggered control schemes is illustrated by an example.

  5. Web Based Seismological Monitoring (wbsm)

    NASA Astrophysics Data System (ADS)

    Giudicepietro, F.; Meglio, V.; Romano, S. P.; de Cesare, W.; Ventre, G.; Martini, M.

Over the last few decades, seismological monitoring systems have dramatically improved thanks to technological advancements and to the scientific progress of seismological studies. The most modern processing systems use network technologies to achieve high-quality performance in data transmission and remote control. Their architecture is designed to favor real-time signal analysis. This is usually realized by adopting a modular structure that allows any new calculation algorithm to be integrated easily, without affecting the other system functionalities. A further step in the evolution of seismic processing systems is the wide use of web-based applications. Web technologies can be a useful support for monitoring activities, allowing automatic publication of signal-processing results and favoring remote access to data, software systems and instrumentation. An application of web technologies to seismological monitoring has been developed at the "Osservatorio Vesuviano" monitoring center (INGV) in collaboration with the "Dipartimento di Informatica e Sistemistica" of the University of Naples. A system named Web Based Seismological Monitoring (WBSM) has been developed. Its main objective is to automatically publish the results of seismic event processing and to allow displaying, analyzing and downloading seismic data via the Internet. WBSM uses XML technology to represent hypocentral and picking parameters and creates a seismic event database containing parametric data and waveforms. In order to provide tools for evaluating the quality and reliability of the published locations, WBSM also supplies all the quality parameters calculated by the locating program and allows interactive display of the waveforms and the related parameters.
WBSM is a modular system in which the interface to the data sources is performed by two specific modules, so that making it work with a generic data source only requires modifying or substituting the interface modules. WBSM has been running at the "Osservatorio Vesuviano" monitoring center since the beginning of 2001 and can be visited at http://ov.ingv.it.

  6. Towards real-time regional earthquake simulation I: real-time moment tensor monitoring (RMT) for regional events in Taiwan

    NASA Astrophysics Data System (ADS)

    Lee, Shiann-Jong; Liang, Wen-Tzong; Cheng, Hui-Wen; Tu, Feng-Shan; Ma, Kuo-Fong; Tsuruoka, Hiroshi; Kawakatsu, Hitoshi; Huang, Bor-Shouh; Liu, Chun-Chi

    2014-01-01

We have developed a real-time moment tensor monitoring system (RMT) which takes advantage of a grid-based moment tensor inversion technique and real-time broad-band seismic recordings to automatically monitor earthquake activity in the vicinity of Taiwan. The centroid moment tensor (CMT) inversion technique and a grid search scheme are applied to obtain the earthquake source parameters, including the event origin time, hypocentral location, moment magnitude and focal mechanism. All of these source parameters can be determined simultaneously within 117 s after the occurrence of an earthquake. The monitoring area covers the entire island of Taiwan and the offshore region, spanning 119.3°E to 123.0°E and 21.0°N to 26.0°N, with depths from 6 to 136 km. A 3-D grid system is implemented in the monitoring area with a uniform horizontal interval of 0.1° and a vertical interval of 10 km. The inversion procedure is based on a 1-D Green's function database calculated by the frequency-wavenumber (fk) method. We compare our results with the Central Weather Bureau (CWB) catalogue data for earthquakes that occurred between 2010 and 2012. The average differences in event origin time and hypocentral location are less than 2 s and 10 km, respectively. The focal mechanisms determined by RMT are also comparable with the Broadband Array in Taiwan for Seismology (BATS) CMT solutions. These results indicate that the RMT system is effective and efficient for monitoring local seismic activity. In addition, the time needed to obtain all the point-source parameters is reduced substantially compared to routine earthquake reports. By connecting RMT with a real-time online earthquake simulation (ROS) system, all the source parameters will be forwarded to the ROS to make real-time earthquake simulation feasible.
The RMT has operated offline (2010-2011) and online (since January 2012) at the Institute of Earth Sciences (IES), Academia Sinica (http://rmt.earth.sinica.edu.tw). The long-term goal of this system is to provide real-time source information for rapid seismic hazard assessment during large earthquakes.
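    The grid-search step can be illustrated with a toy misfit search over the abstract's grid geometry. The random "synthetics" below are stand-ins for the precomputed 1-D Green's-function database; the node count and least-squares best-fit selection follow the grid-search idea, not the actual RMT implementation.

```python
import numpy as np

# Grid spacing mirrors the abstract: 0.1 deg horizontal, 10 km vertical.
rng = np.random.default_rng(0)
lons = np.arange(119.3, 123.01, 0.1)
lats = np.arange(21.0, 26.01, 0.1)
depths = np.arange(6, 137, 10)

n_nodes = len(lons) * len(lats) * len(depths)
# One 50-sample "waveform" per grid node, standing in for precomputed synthetics.
synthetics = rng.normal(size=(n_nodes, 50))

# Fake an observation generated at a known node, plus small noise.
true_node = 1234
observed = synthetics[true_node] + 0.01 * rng.normal(size=50)

# Grid search: pick the node whose synthetic best matches the observation.
misfit = np.sum((synthetics - observed) ** 2, axis=1)
best = int(np.argmin(misfit))
print(best)  # → 1234 (recovers the generating node)
```

    In the real system each node's synthetic also depends on the trial moment tensor, solved linearly at each node, but the outer loop is the same exhaustive search over the 3-D grid.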

  7. Event-synchronized data acquisition system for the SPring-8 linac beam position monitors

    NASA Astrophysics Data System (ADS)

    Masuda, T.; Fukui, T.; Tanaka, R.; Taniuchi, T.; Yamashita, A.; Yanagida, K.

    2005-05-01

By the summer of 2003, we had completed the installation of a new non-destructive beam position monitor (BPM) system to facilitate beam trajectory and energy correction for the SPring-8 linac. In all, 47 BPM sets were installed on the 1-GeV linac and three beam-transport lines. The entire BPM data acquisition system was required to operate synchronously with the electron beam acceleration cycle. We have developed an event-synchronized data acquisition system for the BPM data readout, and have succeeded in continuously acquiring data from all the BPMs via six VME computers synchronized with the 10 pps operation of the linac. For each beam shot, the data points are indexed by event number and stored in a database. Using the real-time features of the Solaris operating system and distributed database technology, we currently achieve about 99.9% efficiency in capturing and archiving all of the 10 Hz data. The linac BPM data are available not only for off-line analysis of the beam trajectory, but also for real-time control and automatic correction of the beam trajectory and energy.
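    The essential trick, indexing every reading by the machine event (shot) number so that data arriving independently from several front-end computers can be correlated per shot, can be sketched as a small store. `EventStore` is a hypothetical illustration, not the actual Solaris/VME implementation.

```python
from collections import defaultdict

class EventStore:
    """Index beam-position readings by event (shot) number so that data
    arriving independently from several front ends line up per shot."""
    def __init__(self, n_bpms):
        self.n_bpms = n_bpms
        self._shots = defaultdict(dict)   # event_no -> {bpm_id: position}

    def record(self, event_no, bpm_id, position):
        self._shots[event_no][bpm_id] = position

    def complete_shots(self):
        """Event numbers for which every BPM has reported."""
        return sorted(e for e, d in self._shots.items()
                      if len(d) == self.n_bpms)

store = EventStore(n_bpms=3)
for event in (100, 101):
    for bpm in range(3):
        store.record(event, bpm, 0.1 * bpm)
store.record(102, 0, 0.0)          # shot 102 still missing two BPMs
print(store.complete_shots())      # → [100, 101]
```

    Capture efficiency in this picture is simply complete shots divided by total shots, which is the 99.9% figure quoted in the abstract.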

  8. Information processing requirements for on-board monitoring of automatic landing

    NASA Technical Reports Server (NTRS)

    Sorensen, J. A.; Karmarkar, J. S.

    1977-01-01

    A systematic procedure is presented for determining the information processing requirements for on-board monitoring of automatic landing systems. The monitoring system detects landing anomalies through use of appropriate statistical tests. The time-to-correct aircraft perturbations is determined from covariance analyses using a sequence of suitable aircraft/autoland/pilot models. The covariance results are used to establish landing safety and a fault recovery operating envelope via an event outcome tree. This procedure is demonstrated with examples using the NASA Terminal Configured Vehicle (B-737 aircraft). The procedure can also be used to define decision height, assess monitoring implementation requirements, and evaluate alternate autoland configurations.

  9. Dual-stage periodic event-triggered output-feedback control for linear systems.

    PubMed

    Ruan, Zhen; Chen, Wu-Hua; Lu, Xiaomei

    2018-05-01

This paper proposes an event-triggered control framework, called dual-stage periodic event-triggered control (DSPETC), which unifies periodic event-triggered control (PETC) and switching event-triggered control (SETC). Specifically, two period parameters h1 and h2 are introduced to characterize the new event-triggering rule, where h1 denotes the sampling period, while h2 denotes the monitoring period. By choosing some specified values of h2, the proposed control scheme can reduce to the PETC or SETC scheme. In the DSPETC framework, the controlled system is represented as a switched system model and its stability is analyzed via a switching-time-dependent Lyapunov functional. Both the cases with and without network-induced delays are investigated. Simulation and experimental results show that the DSPETC scheme is superior to the PETC scheme and the SETC scheme. Copyright © 2018 ISA. Published by Elsevier Ltd. All rights reserved.
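    A toy version of the dual-stage rule: the signal is sampled on an h1 grid, but the triggering condition is evaluated only at h2-spaced monitoring instants. The relative-error threshold used here is a common event-triggering form chosen for illustration, not the paper's Lyapunov-based condition; h1 and h2 are given in units of samples.

```python
import numpy as np

def dspetc_events(x, h1, h2, sigma=0.5):
    """Return triggering instants (sample indices) for a signal x sampled
    every h1. The error between the last-transmitted sample and the current
    sample is checked only at monitoring instants (multiples of h2); a new
    transmission occurs when the error exceeds sigma * |current sample|."""
    assert h2 % h1 == 0, "monitoring period must be a multiple of h1"
    step = h2 // h1
    last = x[0]
    triggers = [0]                         # initial state is transmitted
    for k in range(step, len(x), step):    # monitoring instants only
        if abs(x[k] - last) > sigma * abs(x[k]):
            last = x[k]
            triggers.append(k)
    return triggers

t = np.arange(0, 10, 0.01)                 # 1000 samples at 0.01 s spacing
x = np.exp(-0.3 * t) * np.cos(2 * t)       # decaying oscillation
frequent = dspetc_events(x, h1=1, h2=1)    # check every sample (PETC-like)
sparse = dspetc_events(x, h1=1, h2=50)     # check every 50th sample
print(len(frequent), len(sparse))
```

    With h2 = h1 the rule collapses to checking at every sampling instant, which is the sense in which the framework contains PETC as a special case.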

  10. Mini-MegaTORTORA wide-field monitoring system with sub-second temporal resolution: observation of transient events

    NASA Astrophysics Data System (ADS)

    Karpov, S.; Beskin, G.; Biryukov, A.; Bondar, S.; Ivanov, E.; Katkova, E.; Perkov, A.; Sasyuk, V.

    2016-06-01

Here we present a summary of the first years of operation and the first results of a novel 9-channel wide-field optical monitoring system with sub-second temporal resolution, Mini-MegaTORTORA (MMT-9), which is now in operation at the Special Astrophysical Observatory in the Russian Caucasus. The system is able to observe the sky simultaneously in either a wide (~900 square degrees) or a narrow (~100 square degrees) field of view, either in clear light or with any combination of color (Johnson-Cousins B, V or R) and polarimetric filters installed, with exposure times ranging from 0.1 s to hundreds of seconds. The real-time data analysis pipeline performs automatic detection of rapid transient events, both near-Earth and extragalactic. The objects routinely detected by MMT include faint meteors and artificial satellites. The pipeline for longer-timescale variability analysis is still in development.

  11. The Effects of High-Altitude Electromagnetic Pulse (HEMP) on Telecommunications Assets

    DTIC Science & Technology

    1988-06-01

... common to a whole class of switches. 5ESS switch software controls the operating system, call processing, and system administration and maintenance ... [figure residue: mean fraction of preset calls dropped due to induced transients versus field level (kV/m), ~35-40 kV/m (36 and 13 events)] ... The entire 4ESS system is controlled by the 1A processor. The processor monitors and controls the operation of the ...

  12. WiSPH: a wireless sensor network-based home care monitoring system.

    PubMed

    Magaña-Espinoza, Pedro; Aquino-Santos, Raúl; Cárdenas-Benítez, Néstor; Aguilar-Velasco, José; Buenrostro-Segura, César; Edwards-Block, Arthur; Medina-Cass, Aldo

    2014-04-22

    This paper presents a system based on WSN technology capable of monitoring heart rate and the rate of motion of seniors within their homes. The system is capable of remotely alerting specialists, caretakers or family members via a smartphone of rapid physiological changes due to falls, tachycardia or bradycardia. This work was carried out using our workgroup's WiSe platform, which we previously developed for use in WSNs. The proposed WSN architecture is flexible, allowing for greater scalability to better allow event-based monitoring. The architecture also provides security mechanisms to assure that the monitored and/or stored data can only be accessed by authorized individuals or devices. The aforementioned characteristics provide the network versatility and solidity required for use in health applications.

  13. Clinical and economic impact of remote monitoring on the follow-up of patients with implantable electronic cardiovascular devices: an observational study.

    PubMed

    Costa, Paulo Dias; Reis, A Hipólito; Rodrigues, Pedro P

    2013-02-01

Traditional follow-up of patients with cardiovascular devices is still an activity that, in addition to serving an increasing population, requires a considerable amount of time and specialized human and technical resources. Our aim was to evaluate the applicability of the CareLink® (Medtronic, Minneapolis, MN) remote monitoring system as a complementary option to the follow-up of patients with implanted devices, between in-office visits. Evaluated outcomes included both clinical (event detection and time to diagnosis) and nonclinical (patient satisfaction and economic costs) aspects. An observational, longitudinal, prospective study was conducted with patients from a Portuguese central hospital sampled by convenience during 1 week (43 patients). Data were collected at four moments: two in-office visits and two remote evaluations, reproducing 1 year of clinical follow-up. Data sources included health records, implant reports, initial demographic data collection, follow-up printouts, and a questionnaire. After selection criteria were verified, 15 patients (11 men [73%]) were included, aged 63.4±10.8 years, representing 14.0±6.3 implant months. Clinically, 15 events were detected (9 by remote monitoring and 6 by patient-initiated activation), of which only 9 were symptomatic. We verified that remote monitoring could detect both symptomatic and asymptomatic events, whereas patient-initiated activation only detected symptomatic ones (p=0.028). Moreover, in patients with events, diagnoses were anticipated by approximately 58 days on average (p<0.001). In nonclinical terms, we observed high or very high satisfaction (67% and 33%, respectively) with using remote monitoring technology, but still 8 patients (53%) stated they preferred in-office visits. Finally, the introduction of remote monitoring technology has the ability to reduce total follow-up costs for patients by 25%.
We conclude that the use of this system constitutes a viable complementary option to the follow-up of patients with implantable devices, between in-office visits.

  14. Comparative analysis of three different methods for monitoring the use of green bridges by wildlife.

    PubMed

    Gužvica, Goran; Bošnjak, Ivana; Bielen, Ana; Babić, Danijel; Radanović-Gužvica, Biserka; Šver, Lidija

    2014-01-01

Green bridges are used to decrease the highly negative impact of roads/highways on wildlife populations, and their effectiveness is evaluated by various monitoring methods. Based on 3-year monitoring of four Croatian green bridges, we compared the effectiveness of three indirect monitoring methods: track-pads, camera traps and an active infrared (IR) trail monitoring system. The ability of the methods to detect different species and to give a good estimate of the number of animal crossings was analyzed. The accuracy of species detection by the track-pad method was influenced by the granulometric composition of the track-pad material, with the best results obtained with a higher percentage of silt and clay. We compared the species composition determined by the track-pad and camera trap methods and found that monitoring by tracks underestimated the ratio of small canids, while camera traps underestimated the ratio of roe deer. Regarding the total number of recorded events, active IR detectors recorded 11 to 19 times more events than camera traps, and approximately 80% of them were not caused by animal crossings. The camera trap method underestimated the real number of total events. Therefore, an algorithm to filter the IR dataset was developed to approximate the real number of crossings. The presented results are valuable for future monitoring of wildlife crossings in Croatia and elsewhere, since the advantages and disadvantages of the monitoring methods used are shown. In conclusion, different methods should be chosen or combined depending on the aims of the particular monitoring study.
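    A filtration algorithm of the kind mentioned can be sketched as a simple debounce: bursts of IR events closer together than a gap threshold are counted as one crossing. The 30 s gap is an illustrative assumption; the paper's actual filter is not specified in the abstract.

```python
def estimate_crossings(timestamps, min_gap=30.0):
    """Collapse bursts of infrared-beam events into estimated crossings:
    events closer than `min_gap` seconds to the previous event are treated
    as the same animal lingering at the sensor."""
    crossings = 0
    last = None
    for t in sorted(timestamps):
        if last is None or t - last >= min_gap:
            crossings += 1
        last = t
    return crossings

# one animal lingering at the sensor (a burst), then two later passes
events = [0, 2, 5, 8, 120, 400]
print(estimate_crossings(events))  # → 3
```

    Tuning `min_gap` against ground truth from the camera traps would be the natural way to calibrate such a filter.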

  15. What We Are Watching—Top Global Infectious Disease Threats, 2013-2016: An Update from CDC's Global Disease Detection Operations Center

    PubMed Central

    Iuliano, A. Danielle; Uyeki, Timothy M.; Mintz, Eric D.; Nichol, Stuart T.; Rollin, Pierre; Staples, J. Erin; Arthur, Ray R.

    2017-01-01

    To better track public health events in areas where the public health system is unable or unwilling to report the event to appropriate public health authorities, agencies can conduct event-based surveillance, which is defined as the organized collection, monitoring, assessment, and interpretation of unstructured information regarding public health events that may represent an acute risk to public health. The US Centers for Disease Control and Prevention's (CDC's) Global Disease Detection Operations Center (GDDOC) was created in 2007 to serve as CDC's platform dedicated to conducting worldwide event-based surveillance, which is now highlighted as part of the “detect” element of the Global Health Security Agenda (GHSA). The GHSA works toward making the world more safe and secure from disease threats through building capacity to better “Prevent, Detect, and Respond” to those threats. The GDDOC monitors approximately 30 to 40 public health events each day. In this article, we describe the top threats to public health monitored during 2012 to 2016: avian influenza, cholera, Ebola virus disease, and the vector-borne diseases yellow fever, chikungunya virus, and Zika virus, with updates to the previously described threats from Middle East respiratory syndrome-coronavirus (MERS-CoV) and poliomyelitis. PMID:28805465

  16. What We Are Watching-Top Global Infectious Disease Threats, 2013-2016: An Update from CDC's Global Disease Detection Operations Center.

    PubMed

    Christian, Kira A; Iuliano, A Danielle; Uyeki, Timothy M; Mintz, Eric D; Nichol, Stuart T; Rollin, Pierre; Staples, J Erin; Arthur, Ray R

    To better track public health events in areas where the public health system is unable or unwilling to report the event to appropriate public health authorities, agencies can conduct event-based surveillance, which is defined as the organized collection, monitoring, assessment, and interpretation of unstructured information regarding public health events that may represent an acute risk to public health. The US Centers for Disease Control and Prevention's (CDC's) Global Disease Detection Operations Center (GDDOC) was created in 2007 to serve as CDC's platform dedicated to conducting worldwide event-based surveillance, which is now highlighted as part of the "detect" element of the Global Health Security Agenda (GHSA). The GHSA works toward making the world more safe and secure from disease threats through building capacity to better "Prevent, Detect, and Respond" to those threats. The GDDOC monitors approximately 30 to 40 public health events each day. In this article, we describe the top threats to public health monitored during 2012 to 2016: avian influenza, cholera, Ebola virus disease, and the vector-borne diseases yellow fever, chikungunya virus, and Zika virus, with updates to the previously described threats from Middle East respiratory syndrome-coronavirus (MERS-CoV) and poliomyelitis.

  17. Monitoring damage growth in titanium matrix composites using acoustic emission

    NASA Technical Reports Server (NTRS)

    Bakuckas, J. G., Jr.; Prosser, W. H.; Johnson, W. S.

    1993-01-01

    The application of the acoustic emission (AE) technique to locate and monitor damage growth in titanium matrix composites (TMC) was investigated. Damage growth was studied using several optical techniques, including a long focal length, high magnification microscope system with image acquisition capabilities. Fracture surface examinations were conducted using a scanning electron microscope (SEM). The AE technique was used to locate damage based on the arrival times of AE events between two sensors. Using model specimens exhibiting a dominant failure mechanism, correlations were established between the observed damage growth mechanisms and the AE results in terms of event amplitudes. These correlations were used to monitor the damage growth process in laminates exhibiting multiple modes of damage. Results revealed that the AE technique is a viable and effective tool to monitor damage growth in TMC.
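The arrival-time location step described in this abstract is, in its simplest one-dimensional form, a linear location between two sensors. A minimal sketch of that idea (the function name, guard check, and numerical values are illustrative assumptions, not taken from the paper):

```python
def locate_ae_source(t1, t2, wave_speed, sensor_gap):
    """Locate an AE event on the line between two sensors.

    t1, t2: arrival times at sensor 1 and sensor 2 (s)
    wave_speed: assumed wave propagation speed (m/s)
    sensor_gap: distance between the two sensors (m)

    Returns the event position measured from the midpoint between the
    sensors, positive toward sensor 1 (the earlier-arriving sensor is
    the closer one).
    """
    dt = t2 - t1  # positive when the event is closer to sensor 1
    x = 0.5 * wave_speed * dt
    if abs(x) > sensor_gap / 2:
        # A difference larger than the travel time across the gap means
        # the event lies outside the sensor pair (or the speed is wrong).
        raise ValueError("arrival-time difference exceeds sensor spacing")
    return x
```

With sensors 0.2 m apart and a wave speed of 5000 m/s, an arrival-time difference of 20 µs places the event 5 cm from the midpoint, on the side of the earlier-arriving sensor.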

  18. Near real-time monitoring of volcanic surface deformation from GPS measurements at Long Valley Caldera, California

    USGS Publications Warehouse

    Ji, Kang Hyeun; Herring, Thomas A.; Llenos, Andrea L.

    2013-01-01

    Long Valley Caldera in eastern California is an active volcanic area and has shown continued unrest in the last three decades. We have monitored surface deformation from Global Positioning System (GPS) data by using a projection method that we call the Targeted Projection Operator (TPO). TPO projects residual time series, with secular rates and periodic terms removed, onto a predefined spatial pattern. We used the 2009–2010 slow deflation as a target spatial pattern. The resulting TPO time series shows a detailed deformation history, including the 2007–2009 inflation, the 2009–2010 deflation, and a recent inflation that started in late 2011 and is continuing at the present time (November 2012). The recent inflation event is about four times faster than the previous 2007–2009 event. A Mogi source of the recent event is located beneath the resurgent dome at about 6.6 km depth, with a volume change rate of 0.009 km³/yr. TPO is simple and fast and can provide a near real-time continuous monitoring tool without directly looking at all the data from many GPS sites in this potentially eruptive volcanic system.
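The projection step at the heart of TPO, as described in this abstract, amounts to projecting each epoch's residual displacement vector onto one fixed spatial pattern, collapsing many GPS channels into a single amplitude per epoch. A minimal sketch (array shapes and the least-squares normalization are assumptions; the authors' actual operator may differ in detail):

```python
import numpy as np

def tpo_amplitude(residuals, pattern):
    """Project detrended GPS residuals onto a target spatial pattern.

    residuals: (n_epochs, n_components) array of position residuals
               (secular rates and periodic terms already removed)
    pattern:   (n_components,) target deformation pattern, e.g. the
               signature of a previously observed deflation episode

    Returns one scalar amplitude per epoch: the least-squares fit of
    the pattern to each epoch's residual vector.
    """
    pattern = np.asarray(pattern, dtype=float)
    residuals = np.asarray(residuals, dtype=float)
    return residuals @ pattern / (pattern @ pattern)
```

An epoch whose residual vector is exactly twice the pattern yields an amplitude of 2, so the time series of amplitudes directly tracks how strongly the target deformation style is expressed over time.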

  19. Dynamic Task Optimization in Remote Diabetes Monitoring Systems.

    PubMed

    Suh, Myung-Kyung; Woodbridge, Jonathan; Moin, Tannaz; Lan, Mars; Alshurafa, Nabil; Samy, Lauren; Mortazavi, Bobak; Ghasemzadeh, Hassan; Bui, Alex; Ahmadi, Sheila; Sarrafzadeh, Majid

    2012-09-01

    Diabetes is the seventh leading cause of death in the United States, but careful symptom monitoring can prevent adverse events. A real-time patient monitoring and feedback system is one of the solutions to help patients with diabetes and their healthcare professionals monitor health-related measurements and provide dynamic feedback. However, data-driven methods to dynamically prioritize and generate tasks are not well investigated in the domain of remote health monitoring. This paper presents a wireless health project (WANDA) that leverages sensor technology and wireless communication to monitor the health status of patients with diabetes. The WANDA dynamic task management function applies data analytics in real time to discretize continuous features, applying data clustering and association rule mining techniques to manage a sliding window size dynamically and to prioritize required user tasks. The developed algorithm minimizes the number of daily action items required by patients with diabetes using association rules that satisfy minimum support, confidence, and conditional probability thresholds. Each of these tasks maximizes information gain, thereby improving the overall level of patient adherence and satisfaction. Experimental results from applying EM-based clustering and Apriori algorithms show that the developed algorithm can predict further events with higher confidence levels and reduce the number of user tasks by up to 76.19%.
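The rule-selection criterion this abstract describes, keeping only association rules that clear minimum support and confidence thresholds, can be sketched as follows (the transaction fields and threshold values are illustrative, not WANDA's):

```python
def rule_metrics(transactions, antecedent, consequent):
    """Support and confidence of the rule antecedent -> consequent.

    transactions: list of sets of items (e.g. tasks a patient completed
    on one day); antecedent/consequent: sets of items.
    """
    n = len(transactions)
    n_antecedent = sum(1 for t in transactions if antecedent <= t)
    n_both = sum(1 for t in transactions if (antecedent | consequent) <= t)
    support = n_both / n
    confidence = n_both / n_antecedent if n_antecedent else 0.0
    return support, confidence

def keep_rule(transactions, antecedent, consequent,
              min_support=0.3, min_confidence=0.7):
    """Apriori-style filter: retain only rules that clear both thresholds."""
    support, confidence = rule_metrics(transactions, antecedent, consequent)
    return support >= min_support and confidence >= min_confidence
```

A rule such as {"glucose"} -> {"weight"} that holds in 2 of 4 days (support 0.5) with confidence 2/3 survives a 0.6 confidence threshold but not a 0.7 one; tasks implied by surviving high-confidence rules are the candidates for pruning from a patient's daily list.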

  20. Dynamic Task Optimization in Remote Diabetes Monitoring Systems

    PubMed Central

    Suh, Myung-kyung; Woodbridge, Jonathan; Moin, Tannaz; Lan, Mars; Alshurafa, Nabil; Samy, Lauren; Mortazavi, Bobak; Ghasemzadeh, Hassan; Bui, Alex; Ahmadi, Sheila; Sarrafzadeh, Majid

    2016-01-01

    Diabetes is the seventh leading cause of death in the United States, but careful symptom monitoring can prevent adverse events. A real-time patient monitoring and feedback system is one of the solutions to help patients with diabetes and their healthcare professionals monitor health-related measurements and provide dynamic feedback. However, data-driven methods to dynamically prioritize and generate tasks are not well investigated in the domain of remote health monitoring. This paper presents a wireless health project (WANDA) that leverages sensor technology and wireless communication to monitor the health status of patients with diabetes. The WANDA dynamic task management function applies data analytics in real time to discretize continuous features, applying data clustering and association rule mining techniques to manage a sliding window size dynamically and to prioritize required user tasks. The developed algorithm minimizes the number of daily action items required by patients with diabetes using association rules that satisfy minimum support, confidence, and conditional probability thresholds. Each of these tasks maximizes information gain, thereby improving the overall level of patient adherence and satisfaction. Experimental results from applying EM-based clustering and Apriori algorithms show that the developed algorithm can predict further events with higher confidence levels and reduce the number of user tasks by up to 76.19%. PMID:27617297

  1. Rock Burst Monitoring by Integrated Microseismic and Electromagnetic Radiation Methods

    NASA Astrophysics Data System (ADS)

    Li, Xuelong; Wang, Enyuan; Li, Zhonghui; Liu, Zhentang; Song, Dazhao; Qiu, Liming

    2016-11-01

    For this study, microseismic (MS) and electromagnetic radiation (EMR) monitoring systems were installed in a coal mine to monitor rock bursts. The MS system monitors coal or rock mass ruptures in the whole mine, whereas the EMR equipment monitors the coal or rock stress in a small area. By analysing the MS energy, number of MS events, and EMR intensity with respect to rock bursts, it has been shown that the energy and number of MS events present a "quiet period" 1-3 days before the rock burst. The data also show that the EMR intensity reaches a peak before the rock burst, and this EMR intensity peak generally corresponds to the MS "quiet period". There is a positive correlation between stress and EMR intensity. Buckling failure of coal or rock depends on the rheological properties and occurs after the peak stress in the high-stress concentration areas in deep mines. The MS "quiet period" before the rock burst is caused by the heterogeneity of the coal and rock structures, the transfer of high stress into internal areas, locked patches, and self-organized criticality near the stress peak. This study increases our understanding of coal and rock instability in deep mines. Combining MS and EMR to monitor rock bursts could improve prediction accuracy.
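The "quiet period" signature reported here, a drop in daily MS event counts 1-3 days before a burst, suggests a simple rolling-baseline detector. A hedged sketch only; the paper reports the observation but does not specify a detection rule, so the window length and drop ratio below are illustrative assumptions:

```python
def quiet_periods(daily_counts, window=7, drop_ratio=0.3):
    """Flag days whose MS event count falls below drop_ratio times the
    mean of the preceding `window` days -- a crude indicator of the
    kind of "quiet period" reported 1-3 days before a rock burst.
    """
    flags = []
    for i, count in enumerate(daily_counts):
        prev = daily_counts[max(0, i - window):i]
        baseline = sum(prev) / len(prev) if prev else count
        # No flag on the first day: there is no baseline yet.
        flags.append(bool(prev) and count < drop_ratio * baseline)
    return flags
```

A count series like [10, 12, 11, 10, 2] flags only the final day, where activity collapses to well under 30% of the trailing average.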

  2. Surveillance Monitoring Management for General Care Units: Strategy, Design, and Implementation.

    PubMed

    McGrath, Susan P; Taenzer, Andreas H; Karon, Nancy; Blike, George

    2016-07-01

    The growing number of monitoring devices, combined with suboptimal patient monitoring and alarm management strategies, has increased "alarm fatigue," which has led to serious consequences. Most reported alarm management approaches have focused on the critical care setting. Since 2007, Dartmouth-Hitchcock (Lebanon, New Hampshire) has developed a generalizable and effective design, implementation, and performance evaluation approach to alarm systems for continuous monitoring in general care settings (that is, patient surveillance monitoring). In late 2007, a patient surveillance monitoring system was piloted on the basis of a structured design and implementation approach in a 36-bed orthopedics unit. Beginning in early 2009, it was expanded to cover more than 200 inpatient beds in all medicine and surgical units, except for psychiatry and labor and delivery. Improvements in clinical outcomes (reduction of unplanned transfers by 50% and reduction of rescue events by more than 60% in 2008) and approximately two alarms per patient per 12-hour nursing shift in the original pilot unit have been sustained across most D-H general care units in spite of increasing patient acuity and unit occupancy. Sample analysis of pager notifications indicates that more than 85% of all alarm conditions are resolved within 30 seconds and that more than 99% are resolved before escalation is triggered. The D-H surveillance monitoring system employs several important, generalizable features to manage alarms in a general care setting: alarm delays, static thresholds set appropriately for the prevalence of events in this setting, directed alarm annunciation, and policy-driven customization of thresholds to allow clinicians to respond to the needs of individual patients. The systematic approach to design, implementation, and performance management has been key to the success of the system.
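One of the generalizable features named here, alarm delays with static thresholds, can be illustrated with a simple delayed-annunciation rule: only alarm when the signal stays beyond the threshold for a sustained run of samples, so brief artifacts do not annunciate. The threshold and delay values below are illustrative assumptions, not D-H's settings:

```python
def should_alarm(samples, threshold, delay_samples):
    """Delayed-annunciation rule for a surveillance monitor.

    Alarm only if the measured value (e.g. SpO2) has been below
    `threshold` for `delay_samples` consecutive samples; transient
    dips shorter than the delay are suppressed.
    """
    run = 0
    for value in samples:
        run = run + 1 if value < threshold else 0
        if run >= delay_samples:
            return True
    return False
```

A two-sample dip below threshold that recovers does not annunciate, while three consecutive low samples do; the delay is the main lever for trading alarm sensitivity against fatigue.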

  3. Psychophysiological Control of a Cognitive Task Using Adaptive Automation

    NASA Technical Reports Server (NTRS)

    Freeman, Frederick; Pope, Alan T. (Technical Monitor)

    2001-01-01

    The major focus of the present proposal was to examine psychophysiological variables related to hazardous states of awareness induced by monitoring automated systems. With the increased use of automation in today's work environment, people's roles in the workplace are being redefined from that of active participant to one of passive monitor. Although the introduction of automated systems has a number of benefits, there are also a number of disadvantages regarding worker performance. Byrne and Parasuraman have argued for the use of psychophysiological measures in the development and the implementation of adaptive automation. While both performance-based and model-based adaptive automation have been studied, the use of psychophysiological measures, especially EEG, offers the advantage of real-time evaluation of the state of the subject. The current study used the closed-loop system, developed at NASA-Langley Research Center, to control the state of awareness of subjects while they performed a cognitive vigilance task. Previous research in our laboratory, supported by NASA, has demonstrated that, in an adaptive automation, closed-loop environment, subjects perform a tracking task better under a negative than a positive feedback condition. In addition, this condition produces less subjective workload and larger P300 event-related potentials to auditory stimuli presented in a concurrent oddball task. We have also recently shown that the closed-loop system used to control the level of automation in a tracking task can also be used to control the event rate of stimuli in a vigilance monitoring task. By changing the event rate based on the subject's index of arousal, we have been able to produce improved monitoring, relative to various control groups. We have demonstrated in our initial closed-loop experiments with the vigilance paradigm that using a negative feedback contingency (i.e., increasing event rates when the EEG index is low and decreasing event rates when the EEG index is high) results in a marked decrease of the vigilance decrement over a 40-minute session. This effect is in direct contrast to the performance of a positive feedback group, as well as a number of other control groups, which demonstrated the typical vigilance decrement. Interestingly, however, the negative feedback group performed at virtually the same level as a yoked control group. The yoked control group received the same order of changes in event rate that were generated by the negative feedback subjects using the closed-loop system. Thus, it would appear to be possible to optimize vigilance performance by controlling the stimuli which subjects are asked to process.
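The negative feedback contingency described above can be sketched as a one-line closed-loop update: raise the stimulus event rate when the EEG engagement index falls below its baseline, lower it when the index rises above. The step size and rate bounds are illustrative assumptions, not the values used in the NASA-Langley system:

```python
def adjust_event_rate(rate, index, baseline, step=2.0,
                      min_rate=5.0, max_rate=60.0):
    """Negative-feedback update of a vigilance task's event rate.

    index:    current EEG engagement index (e.g. a beta/(alpha+theta)
              ratio; the exact index is an assumption here)
    baseline: the subject's reference index value

    When the index is low (drowsy), the event rate is increased to
    re-engage the subject; when it is high, the rate is decreased.
    Reversing the sign of `step` would give the positive feedback
    condition used as a control.
    """
    rate = rate + step if index < baseline else rate - step
    return max(min_rate, min(max_rate, rate))  # clamp to sane bounds
```

Run once per update epoch, the rule drives the event rate up in low-arousal stretches and back down once the index recovers, which is the contingency reported to suppress the vigilance decrement.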

  4. IBRD sonar scour monitoring project : real-time river channel-bed monitoring at the Chariton and Mississippi Rivers in Missouri, 2007-09, final report, January 2010.

    DOT National Transportation Integrated Search

    2010-01-01

    Scour and depositional responses to hydrologic events have been important to the scientific community studying sediment transport as well as potential effects on bridges and other hydraulic structures within riverine systems. A river channel-bed moni...

  5. STS-2 Induced Environment Contamination Monitor (IECM): Quick-Look Report

    NASA Technical Reports Server (NTRS)

    Miller, E. R. (Editor)

    1982-01-01

    The STS-2/induced environment contamination monitor (IECM) mission is described. The IECM system performance is discussed, and IECM mission time events are briefly described. Quick look analyses are presented for each of the 10 instruments comprising the IECM on the flight of STS-2. A short summary is presented.

  6. Research implementation of the SMART SIGNAL system on Trunk Highway (TH) 13.

    DOT National Transportation Integrated Search

    2013-02-01

    In our previous research, the SMART-SIGNAL (Systematic Monitoring of Arterial Road Traffic and Signals) : system that can collect event-based traffic data and generate comprehensive performance measures has been : successfully developed by the Univer...

  7. The Northern California Earthquake Management System: A Unified System From Realtime Monitoring to Data Distribution

    NASA Astrophysics Data System (ADS)

    Neuhauser, D.; Dietz, L.; Lombard, P.; Klein, F.; Zuzlewski, S.; Kohler, W.; Hellweg, M.; Luetgert, J.; Oppenheimer, D.; Romanowicz, B.

    2006-12-01

    The longstanding cooperation between the USGS Menlo Park and UC Berkeley's Seismological Laboratory for monitoring earthquakes and providing data to the research community is achieving a new level of integration. While station support and data collection for each network (NC, BK, BP) remain the responsibilities of the host institution, picks, codas, and amplitudes will be produced and shared between the data centers continuously. Thus, realtime earthquake processing from triggering and locating through magnitude and moment tensor calculation and Shakemap production will take place independently at both locations, improving the robustness of event reporting in the Northern California Earthquake Management Center. Parametric data will also be exchanged with the Southern California Earthquake Management System to allow statewide earthquake detection and processing for further redundancy within the California Integrated Seismic Network (CISN). The database plays an integral part in this system, providing the coordination for event processing as well as the repository for event, instrument (metadata), and waveform information. The same master database serves both realtime processing, data quality control and archival, and the data center which provides waveforms and earthquake data to users in the research community. Continuous waveforms from all BK, BP, and NC stations, event waveform gathers, and event information automatically become available at the Northern California Earthquake Data Center (NCEDC). Currently, the NCEDC collects and makes available over 4 TBytes of data per year from the NCEMC stations and other seismic networks, as well as from GPS and other geophysical instrumentation.

  8. A MODIS-based automated flood monitoring system for southeast asia

    NASA Astrophysics Data System (ADS)

    Ahamed, A.; Bolten, J. D.

    2017-09-01

    Flood disasters in Southeast Asia result in significant loss of life and economic damage. Remote sensing information systems designed to spatially and temporally monitor floods can help governments and international agencies formulate effective disaster response strategies during a flood and ultimately alleviate impacts to population, infrastructure, and agriculture. Recent destructive flood events in the Lower Mekong River Basin occurred in 2000, 2011, 2013, and 2016 (http://ffw.mrcmekong.org/historical_rec.htm, April 24, 2017). The large spatial distribution of flooded areas and the lack of proper gauge data in the region make accurate monitoring and assessment of the impacts of floods difficult. Here, we discuss the utility of applying satellite-based Earth observations for improving flood inundation monitoring over the flood-prone Lower Mekong River Basin. We present a methodology for determining near real-time surface water extent associated with current and historic flood events by training surface water classifiers from 8-day, 250-m Moderate-resolution Imaging Spectroradiometer (MODIS) data spanning the length of the MODIS satellite record. The Normalized Difference Vegetation Index (NDVI) signature of permanent water bodies (MOD44W; Carroll et al., 2009) is used to train surface water classifiers which are applied to a time period of interest. From this, an operational nowcast flood detection component is produced using twice-daily imagery acquired at 3-h latency, which performs image compositing routines to minimize cloud cover. Case studies and accuracy assessments against radar-based observations for historic flood events are presented. The customizable system has been transferred to regional organizations, and near real-time derived surface water products are made available through a web interface platform. 
Results highlight the potential of near real-time observation and impact assessment systems to serve as effective decision support tools for governments, international agencies, and disaster responders.
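The water-classification seed described in this record, using the NDVI signature of water to label flooded pixels, can be illustrated with a simple NDVI threshold test: open water absorbs near-infrared light, so its NDVI sits near or below zero. The cutoff below is an illustrative assumption; the paper trains classifiers from the MOD44W permanent-water signature rather than fixing a single threshold:

```python
import numpy as np

def classify_water(red, nir, ndvi_threshold=0.1):
    """Crude NDVI-based water mask for optical imagery.

    red, nir: surface reflectance arrays for the red and near-infrared
    bands. Returns a boolean mask, True where the pixel looks like
    open water (low NDVI).
    """
    red = np.asarray(red, dtype=float)
    nir = np.asarray(nir, dtype=float)
    ndvi = (nir - red) / (nir + red)
    return ndvi < ndvi_threshold
```

A pixel with higher red than NIR reflectance (negative NDVI) is flagged as water, while a vegetated pixel with strong NIR reflectance is not; in an operational system a mask like this would be composited over successive passes to work around cloud cover.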

  9. STS-3 Induced Environment Contamination Monitor (IECM): Quick-look report

    NASA Technical Reports Server (NTRS)

    Miller, E. R. (Editor); Fountain, J. A. (Editor)

    1982-01-01

    The STS-3/Induced Environment Contamination Monitor (IECM) mission is described. The IECM system performance is discussed, and IECM mission time events are briefly described. Quick-look analyses are presented for each of the 10 instruments comprising the IECM on the flight of STS-3. Finally, a short summary is presented, and plans are discussed for future IECM flights and opportunities for direct mapping of Orbiter effluents using the Remote Manipulator System.

  10. Molecular Imaging of Phosphorylation Events for Drug Development

    PubMed Central

    Chan, C. T.; Paulmurugan, R.; Reeves, R. E.; Solow-Cordero, D.; Gambhir, S. S.

    2014-01-01

    Purpose Protein phosphorylation mediated by protein kinases controls numerous cellular processes. A genetically encoded, generalizable split firefly luciferase (FL)-assisted complementation system was developed for noninvasive monitoring of phosphorylation events and the efficacies of kinase inhibitors in cell culture and in small living subjects by optical bioluminescence imaging. Procedures An Akt sensor (AST) was constructed to monitor Akt phosphorylation and the effect of different PI-3K and Akt inhibitors. Specificity of AST was determined using a non-phosphorylable mutant sensor containing an alanine substitution (ASA). Results The PI-3K inhibitor LY294002 and the Akt kinase inhibitor perifosine led to temporal- and dose-dependent increases in complemented FL activities in 293T human kidney cancer cells stably expressing AST (293T/AST) but not in 293T/ASA cells. Inhibition of endogenous Akt phosphorylation and kinase activities by perifosine also correlated with an increase in complemented FL activities in 293T/AST cells but not in 293T/ASA cells. Treatment of nude mice bearing 293T/AST xenografts with perifosine led to a 2-fold increase in complemented FL activities compared to that of 293T/ASA xenografts. Our system was used to screen a small chemical library for novel modulators of Akt kinase activity. Conclusion This generalizable approach for noninvasive monitoring of phosphorylation events will accelerate the discovery and validation of novel kinase inhibitors and modulators of phosphorylation events. PMID:19048345

  11. Microbial-based evaluation of foaming events in full-scale wastewater treatment plants by microscopy survey and quantitative image analysis.

    PubMed

    Leal, Cristiano; Amaral, António Luís; Costa, Maria de Lourdes

    2016-08-01

    Activated sludge (AS) systems are prone to foaming occurrences, which cause the sludge to rise in the reactor and affect wastewater treatment plant (WWTP) performance. Nonetheless, there is currently a knowledge gap hindering the development of foaming event prediction tools, one that may be filled by quantitative monitoring of AS system biota and sludge characteristics. As such, the present study focuses on the assessment of foaming events in full-scale WWTPs through quantitative analysis of protozoa, metazoa, filamentous bacteria, and sludge characteristics, further used to elucidate the relationships between these parameters. In the current study, a conventional activated sludge system (CAS) and an oxidation ditch (OD) were surveyed over periods of 2 and 3 months, respectively, regarding their biota and sludge characteristics. The biota community was monitored by microscopic observation, and a new filamentous bacteria index was developed to quantify their occurrence. Sludge characteristics (aggregated and filamentous biomass contents and aggregate size) were determined by quantitative image analysis (QIA). The data obtained were then processed by principal components analysis (PCA), cross-correlation analysis, and decision trees to assess the foaming occurrences and elucidate the relationships between them. It was found that such events were best assessed by the combined use of the relative abundance of testate amoebae and the nocardioform filamentous index, yielding a 92.9% success rate for foaming events overall, and 87.5% and 100%, respectively, for persistent and mild events.

  12. Monitoring of waste disposal in deep geological formations

    NASA Astrophysics Data System (ADS)

    German, V.; Mansurov, V.

    2003-04-01

    In this paper, the application of a kinetic approach to describing the rock failure process and to microseismic monitoring of waste disposal is advanced. On the basis of a two-stage model of the failure process, the capability of forecasting rock fracture is demonstrated. The requirements for the monitoring system, such as a real-time mode of data registration and processing and its precision range, are formulated. A method for delineating failure nuclei in a rock mass is presented. This method, implemented in a software program for forecasting strong seismic events, is based on direct use of the fracture concentration criterion. The method is applied to the database of microseismic events from the North Ural Bauxite Mine. The results of this application, including its efficiency, stability, and ability to forecast rockbursts, are discussed.
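The fracture concentration criterion that the forecasting method relies on is commonly written as the ratio of the mean spacing between microcracks to their mean size, with small values indicating that cracks are close enough to coalesce. A hedged sketch of that common form; the abstract does not give the authors' exact formulation or critical value:

```python
def concentration_parameter(crack_density, mean_crack_length):
    """Crack concentration parameter (illustrative common form).

    crack_density:     number of microcracks per unit volume (1/m^3)
    mean_crack_length: mean crack size (m)

    The mean spacing between cracks is density**(-1/3); the parameter
    is spacing / size. In the literature, values falling toward a
    material-dependent critical level (often quoted around 3) signal
    crack coalescence and elevated failure hazard.
    """
    mean_spacing = crack_density ** (-1.0 / 3.0)
    return mean_spacing / mean_crack_length
```

For example, a density of 0.001 cracks/m³ gives a mean spacing of 10 m; with 2 m cracks the parameter is 5, still above the commonly quoted critical range.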

  13. Developing a national system for dealing with adverse events following immunization.

    PubMed Central

    Mehta, U.; Milstien, J. B.; Duclos, P.; Folb, P. I.

    2000-01-01

    Although vaccines are among the safest of pharmaceuticals, the occasional severe adverse event or cluster of adverse events associated with their use may rapidly become a serious threat to public health. It is essential that national monitoring and reporting systems for vaccine safety are efficient and adequately coordinated with those that conventionally deal with non-vaccine pharmaceuticals. Equally important is the need for an enlightened and informed national system to be in place to deal with public concerns and rapid evaluation of the risk to public safety when adverse events occur. Described in this article is the outcome of efforts by the WHO Global Training Network to describe a simple national system for dealing with vaccine safety and with emergencies as they arise. The goals of a training programme designed to help develop such a system are also outlined. PMID:10743281

  14. Analysis of Hospital Processes with Process Mining Techniques.

    PubMed

    Orellana García, Arturo; Pérez Alfonso, Damián; Larrea Armenteros, Osvaldo Ulises

    2015-01-01

    Process mining allows for discovering, monitoring, and improving the processes identified in information systems from their event logs. In hospital environments, process analysis has been a crucial factor for cost reduction, control and proper use of resources, better patient care, and achieving service excellence. This paper presents a new component for event log generation in the Hospital Information System (HIS) developed at the University of Informatics Sciences. The event logs obtained are used for the analysis of hospital processes with process mining techniques. The proposed solution aims to generate high-quality event logs in the system. The analyses performed allowed functions in the system to be redefined and a proper flow of information to be proposed. The study exposed the need to incorporate process mining techniques into hospital systems to analyze process execution. Moreover, we illustrate its application in making clinical and administrative decisions for the management of hospital activities.
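The event logs such a component generates are, for process-mining purposes, essentially (case, activity, timestamp) records grouped into per-case traces, the input format that discovery algorithms consume. A minimal sketch (field names and the tuple layout are illustrative, not those of the HIS described here):

```python
from collections import defaultdict

def build_traces(event_log):
    """Group raw events into per-case traces for process mining.

    event_log: iterable of (case_id, activity, timestamp) tuples,
    e.g. one tuple per step of a patient's hospital episode.
    Returns {case_id: [activities in timestamp order]}.
    """
    cases = defaultdict(list)
    # Sort globally by timestamp so each case's activities come out
    # in execution order regardless of how events were recorded.
    for case_id, activity, _timestamp in sorted(event_log,
                                                key=lambda e: e[2]):
        cases[case_id].append(activity)
    return dict(cases)
```

From traces like these, a discovery algorithm can reconstruct the ordering relations between hospital activities; log quality (complete case ids, reliable timestamps) is exactly what determines whether that reconstruction is trustworthy.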

  15. CISN ShakeAlert Earthquake Early Warning System Monitoring Tools

    NASA Astrophysics Data System (ADS)

    Henson, I. H.; Allen, R. M.; Neuhauser, D. S.

    2015-12-01

    CISN ShakeAlert is a prototype earthquake early warning system being developed and tested by the California Integrated Seismic Network. The system has recently been expanded to support redundant data processing and communications. It now runs on six machines at three locations with ten Apache ActiveMQ message brokers linking together 18 waveform processors, 12 event association processes and 4 Decision Module alert processes. The system ingests waveform data from about 500 stations and generates many thousands of triggers per day, from which a small portion produce earthquake alerts. We have developed interactive web browser system-monitoring tools that display near real time state-of-health and performance information. This includes station availability, trigger statistics, communication and alert latencies. Connections to regional earthquake catalogs provide a rapid assessment of the Decision Module hypocenter accuracy. Historical performance can be evaluated, including statistics for hypocenter and origin time accuracy and alert time latencies for different time periods, magnitude ranges and geographic regions. For the ElarmS event associator, individual earthquake processing histories can be examined, including details of the transmission and processing latencies associated with individual P-wave triggers. Individual station trigger and latency statistics are available. Detailed information about the ElarmS trigger association process for both alerted events and rejected events is also available. The Google Web Toolkit and Map API have been used to develop interactive web pages that link tabular and geographic information. Statistical analysis is provided by the R-Statistics System linked to a PostgreSQL database.

  16. Software for embedded processors: Problems and solutions

    NASA Astrophysics Data System (ADS)

    Bogaerts, J. A. C.

    1990-08-01

    Data acquisition systems in HEP experiments use a wide spectrum of computers to cope with two major problems: high event rates and a large data volume. They do this by using special fast trigger processors at the source to reduce the event rate by several orders of magnitude. The next stage of a data acquisition system consists of a network of fast but conventional microprocessors which are embedded in high-speed bus systems, where data is still further reduced, filtered, and merged. In the final stage, complete events are farmed out to another collection of processors, which reconstruct the events and perhaps achieve a further event rejection by a small factor, prior to recording onto magnetic tape. Detectors are monitored by analyzing a fraction of the data. This may be done for individual detectors at an early stage of the data acquisition, or it may be delayed until the complete events are available. A network of workstations is used for monitoring, displays, and run control. Software for trigger processors must have a simple structure. Rejection algorithms are carefully optimized, and overheads introduced by system software cannot be tolerated. The embedded microprocessors have to co-operate and need to be synchronized with the preceding and following stages. Real-time kernels are typically used to solve synchronization and communication problems. Applications are usually coded in C, which is reasonably efficient and allows direct control over low-level hardware functions. Event reconstruction software is very similar or even identical to offline software, predominantly written in FORTRAN. With the advent of powerful RISC processors, and with manufacturers tending to adopt open bus architectures, there is a move towards commercial processors and hence the introduction of the UNIX operating system. Building and controlling such a heterogeneous data acquisition system puts a heavy strain on the software. Communications is now as important as CPU capacity and I/O bandwidth, the traditional key parameters of a HEP data acquisition system. Software engineering and real-time system simulation tools are becoming indispensable for the design of future data acquisition systems.

  17. Active surveillance of postmarket medical product safety in the Federal Partners' Collaboration.

    PubMed

    Robb, Melissa A; Racoosin, Judith A; Worrall, Chris; Chapman, Summer; Coster, Trinka; Cunningham, Francesca E

    2012-11-01

    After half a century of monitoring voluntary reports of medical product adverse events, the Food and Drug Administration (FDA) has launched a long-term project to build an adverse events monitoring system, the Sentinel System, which can access and evaluate electronic health care data to help monitor the safety of regulated medical products once they are marketed. On the basis of experience gathered through a number of collaborative efforts, the Federal Partners' Collaboration pilot project, involving FDA, the Centers for Medicare & Medicaid Services, the Department of Veteran Affairs, and the Department of Defense, is already enabling FDA to leverage the power of large public health care databases to assess, in near real time, the utility of analytical tools and methodologies that are being developed for use in the Sentinel System. Active medical product safety surveillance is enhanced by use of these large public health databases because specific populations of exposed patients can be identified and analyzed, and can be further stratified by key variables such as age, sex, race, socioeconomic status, and basis for eligibility to examine important subgroups.

  18. Vehicle Integrated Prognostic Reasoner (VIPR) 2010 Annual Final Report

    NASA Technical Reports Server (NTRS)

    Hadden, George D.; Mylaraswamy, Dinkar; Schimmel, Craig; Biswas, Gautam; Koutsoukos, Xenofon; Mack, Daniel

    2011-01-01

    Honeywell's Central Maintenance Computer Function (CMCF) and Aircraft Condition Monitoring Function (ACMF) represent the state-of-the art in integrated vehicle health management (IVHM). Underlying these technologies is a fault propagation modeling system that provides nose-to-tail coverage and root cause diagnostics. The Vehicle Integrated Prognostic Reasoner (VIPR) extends this technology to interpret evidence generated by advanced diagnostic and prognostic monitors provided by component suppliers to detect, isolate, and predict adverse events that affect flight safety. This report describes year one work that included defining the architecture and communication protocols and establishing the user requirements for such a system. Based on these and a set of ConOps scenarios, we designed and implemented a demonstration of communication pathways and associated three-tiered health management architecture. A series of scripted scenarios showed how VIPR would detect adverse events before they escalate as safety incidents through a combination of advanced reasoning and additional aircraft data collected from an aircraft condition monitoring system. Demonstrating VIPR capability for cases recorded in the ASIAS database and cross linking them with historical aircraft data is planned for year two.

  19. Slow Monitoring Systems for CUORE

    NASA Astrophysics Data System (ADS)

    Dutta, Suryabrata; Cuore Collaboration

    2016-09-01

    The Cryogenic Underground Observatory for Rare Events (CUORE) is a ton-scale neutrinoless double-beta decay experiment under construction at the Laboratori Nazionali del Gran Sasso (LNGS). The experiment comprises 988 TeO2 bolometric crystals arranged into 19 towers and operated at a temperature of 10 mK. We have developed slow monitoring systems to monitor the cryostat during detector installation, commissioning, data taking, and other crucial phases of the experiment. Our systems use responsive LabVIEW virtual instruments and video streams of the cryostat. We built a website using the Angular, Bootstrap, and MongoDB frameworks to display these data in real time. The website can also display archival data and send alarms. I will present how we constructed these slow monitoring systems to be robust, accurate, and secure, while maintaining reliable access for the entire collaboration from any platform, in order to ensure efficient communications and fast diagnoses of all CUORE systems.

  20. Error, rather than its probability, elicits specific electrocortical signatures: a combined EEG-immersive virtual reality study of action observation.

    PubMed

    Pezzetta, Rachele; Nicolardi, Valentina; Tidoni, Emmanuele; Aglioti, Salvatore Maria

    2018-06-06

    Detecting errors in one's own actions, and in the actions of others, is a crucial ability for adaptable and flexible behavior. Studies show that specific EEG signatures underpin the monitoring of observed erroneous actions (error-related negativity, error positivity, mid-frontal theta oscillations). However, the majority of studies on action observation used sequences of trials in which erroneous actions were less frequent than correct actions. It was therefore not possible to disentangle whether the activation of the performance monitoring system was due to an error - a violation of the intended goal - or to a surprise/novelty effect associated with a rare and unexpected event. Combining EEG and immersive virtual reality (IVR-CAVE system), we recorded the neural signal of 25 young adults who observed, from a first-person perspective, simple reach-to-grasp actions performed by an avatar aiming for a glass. Importantly, the proportion of erroneous actions was higher than that of correct actions. Results showed that the observation of erroneous actions elicits the typical electro-cortical signatures of error monitoring, and therefore the violation of the action goal is still perceived as a salient event. The observation of correct actions elicited stronger alpha suppression, confirming the role of the alpha frequency band in the general orienting response to novel and infrequent stimuli. Our data provide novel evidence that an observed goal error (the action slip) triggers the activity of the performance monitoring system even when erroneous actions, which are typically relevant events, occur more often than correct actions and thus are not salient because of their rarity.

  1. In-situ Fluorometers Reveal High Frequency Dynamics In Dissolved Organic Matter For Urban Rivers

    NASA Astrophysics Data System (ADS)

    Croghan, D.; Bradley, C.; Khamis, K.; Hannah, D. M.; Sadler, J. P.; Van Loon, A.

    2017-12-01

    To date, Dissolved Organic Matter (DOM) dynamics have been poorly quantified in urban rivers, despite the substantial water quality issues linked to urbanisation. Research has been hindered by the low temporal resolution of observations and an over-reliance on manual sampling, which often fails to capture precipitation events and diurnal dynamics. High frequency data are essential to estimate DOM fluxes/loads more accurately and to understand DOM furnishing and transport processes. Recent advances in optical sensor technology, including field-deployable in-situ fluorometers, are yielding new high resolution DOM information. However, no consensus exists regarding the monitoring resolution required for urban systems, with no studies monitoring at <15 min time steps. High-frequency monitoring (5 min resolution; 4 week duration) was conducted on a headwater urban stream in Birmingham, UK (N 52.447430 W -1.936715) to determine the optimum temporal resolution for characterization of DOM event dynamics. A through-flow GGNU-30 monitored wavelengths corresponding to tryptophan-like fluorescence (TLF; Peak T1) (Ex 285 nm/Em 345 nm) and humic-like fluorescence (HLF; Peak C) (Ex 365 nm/Em 490 nm). The results suggest that at base flow TLF and HLF are relatively stable, though episodic DOM inputs can pulse through the system and may be missed by lower temporal resolution monitoring. High temporal variation in TLF and HLF intensity occurs during storm events: TLF intensity is highest during the rising limb of the hydrograph and can decline rapidly thereafter, indicating the importance of fast flow-paths and close-proximity sources to TLF dynamics. HLF intensity tracks discharge more closely, but can also decline quickly during high flow events due to dilution effects. 
Furthermore, the TLF:HLF ratio, when derived at high frequency, provides a useful indication of the presence and type of organic effluents in-stream, which aids the identification of Combined Sewage Overflow releases. Our work highlights the need for future studies to monitor urban DOM dynamics at shorter temporal scales than previously used. The application of higher frequency monitoring enables the identification of finer-scale patterns and thus aids in deciphering the sources and pathways controlling urban DOM dynamics.
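    The resolution argument can be illustrated with a toy series (hypothetical TLF values, not the Birmingham data): a storm pulse lasting roughly 15 minutes is resolved at a 5-minute time step, but subsampling the same series at 15 minutes can miss the peak almost entirely.

```python
# Hypothetical 5-minute tryptophan-like fluorescence (TLF) series containing
# one short storm pulse. Values are illustrative only, not field data.
tlf_5min = [10] * 12 + [10, 80, 45, 20] + [10] * 8   # one reading per 5 min
tlf_15min = tlf_5min[::3]                            # same stream sampled every 15 min

peak_5min = max(tlf_5min)    # 80: the pulse peak is captured at 5-min resolution
peak_15min = max(tlf_15min)  # 20: the peak falls between the 15-min samples
```

    Any flux or load estimate built from the coarser series would use the underestimated peak, which is the kind of bias the abstract argues high-frequency monitoring removes.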

  2. Permeable pavement monitoring at the Edison Environmental Center demonstration site

    EPA Science Inventory

    There are few detailed studies of full-scale, replicated, actively-used pervious pavement systems. Practitioners need additional studies of pervious pavement systems in its intended application (parking lot, roadway, etc.) during a range of climatic events, daily usage conditions...

  4. Real-time Social Internet Data to Guide Forecasting Models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Del Valle, Sara Y.

    Our goal is to improve decision support by monitoring and forecasting events using social media, mathematical models, and quantified model uncertainty. Our approach produces real-time, data-driven forecasts with quantified uncertainty: not just for weather anymore. Information flowing from human observations of events through an Internet system and classification algorithms is used to produce forecasts with quantified uncertainty. In summary, we want to develop new tools to extract useful information from Internet data streams, develop new approaches to assimilate real-time information into predictive models, and validate these approaches by forecasting events; our ultimate goal is to develop an event forecasting system using mathematical approaches and heterogeneous data streams.

  5. Sensitivity of the International Monitoring System infrasound network to elevated sources: a western Eurasia case study

    NASA Astrophysics Data System (ADS)

    Nippress, Alexandra; Green, David N.

    2017-11-01

    For the past 5 years (2010-2015), infrasound arrivals have been included in International Data Centre analyst-reviewed bulletins of events detected across the International Monitoring System (IMS). In western Eurasia, there are clusters of up to 268 events that consist of only infrasound arrivals (no associated seismic phases). These clusters are of unknown origin, although one in the North Sea region is associated with sonic booms from supersonic aircraft activity. IMS data for 17 North Sea events are analysed and compared with data from the Large Aperture Infrasound Array in the Netherlands to support the existence of these events and to determine common characteristics. Three other large clusters in western Eurasia are also identified and studied and show characteristics similar to the North Sea events, indicative of supersonic aircraft activity. The IMS infrasound network is shown to be particularly sensitive to sonic booms because the elevated source height reduces the anisotropy of infrasonic propagation within a stratospheric duct and allows for episodic upwind propagation. This episodic upwind propagation, in addition to the prevailing downwind propagation, leads to clusters of Reviewed Event Bulletin events with constrained locations in the western Eurasia region during the summer months. In the winter months, the recorded arrivals suggest that episodic upwind propagation is not as prevalent. Propagation modelling indicates that the resulting unidirectional propagation, combined with the sparseness of the IMS network, leads to elongated lines of estimated event locations.

  6. Hydrological extremes in the media: The 2015 drought event in Germany

    NASA Astrophysics Data System (ADS)

    Zink, Matthias; Samaniego, Luis; Kumar, Rohini; Thober, Stephan; Mai, Juliane; Schäfer, David; Marx, Andreas

    2017-04-01

    The 2003 drought event had major implications for many societal sectors, including energy production, health, forestry, and agriculture. The reduced availability of water, accompanied by high temperatures, led to substantial economic losses in Germany on the order of 1.5 billion euros in agriculture alone. Furthermore, soil droughts have considerable impacts on ecosystems, forest fires, and water management. In 2015, another drought event affected Germany, with impacts on inland navigation, forest fire risk, and agriculture, among others. Due to this drought event, corn yield was reduced by 22% compared with the preceding 5 years. The event was tracked by the German Drought Monitor, a near real-time, online soil water monitoring platform implemented in 2014 (Zink et al., 2016). This platform uses a high-resolution, operational modeling system which delivers easy-to-understand maps of soil drought conditions, published daily at www.ufz.de/droughtmonitor. During the 2015 event, the German Drought Monitor was used by several regional and national newspapers, as well as by television, to inform the public about the current status of soil moisture conditions. Beyond publishing the drought maps, we were asked to comment on the drought development and especially on the severity of the ongoing event. On the one hand, this gave us the opportunity to inform the public about different types and characterizations of droughts. On the other hand, some journalists merely tried to elicit statements such as "this is the most severe drought event ever recorded" to get a good headline. The second most pressing question from journalists was whether the current event could be directly attributed to climate change. A clear answer could not be given, since the drought monitor is based on only a 65-year period of data. 
Depending on the media company, different depths of information and knowledge were ultimately conveyed in the resulting articles, and thus to the public. In conclusion, the German Drought Monitor is the most objective instrument for assessing agricultural droughts in Germany.

  7. Downhole Microseismic Monitoring at a Carbon Capture, Utilization, and Storage Site, Farnsworth Unit, Ochiltree County, Texas

    NASA Astrophysics Data System (ADS)

    Ziegler, A.; Balch, R. S.; van Wijk, J.

    2015-12-01

    Farnsworth Oil Field in North Texas hosts an ongoing carbon capture, utilization, and storage project. This study is focused on passive seismic monitoring at the carbon injection site to measure, locate, and catalog any induced seismic events. A Geometrics Geode system is being utilized for continuous recording of the passive seismic downhole bore array in a monitoring well. The array consists of 3-component dual Geospace OMNI-2400 15 Hz geophones with a vertical spacing of 30.5 m. Downhole temperature and pressure are also monitored. Seismic data are recorded continuously at a rate of over 900 GB per month, which must be archived and reviewed. A Short Term Average/Long Term Average (STA/LTA) algorithm was evaluated for its ability to search for events, including identification and quantification of any false positive events. It was determined that the algorithm was not appropriate for event detection given the background level of noise at the field site and the recording equipment as configured. Alternatives are being investigated. The final intended outcome of the passive seismic monitoring is to mine the continuous database, develop a catalog of microseismic events/locations, and determine whether there is any relationship to CO2 injection in the field. Identifying the location of any microseismic events will allow for correlation with carbon injection locations and previously characterized geological and structural features such as faults and paleoslopes. Additionally, the borehole array has recorded over 1200 active sources, with three sweeps at each source location, acquired during a nearby 3D VSP. These data were evaluated for their usability and location within an effective radius of the array, were stacked to improve the signal-to-noise ratio, and are used to calibrate a full field velocity model to enhance event location accuracy. Funding for this project is provided by the U.S. Department of Energy under Award No. DE-FC26-05NT42591.
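    The STA/LTA trigger evaluated above is a standard detector: the ratio of short-term to long-term average signal energy spikes when a transient arrives. A minimal sketch follows; the window lengths and threshold are illustrative defaults, not the project's settings.

```python
import numpy as np

def sta_lta_trigger(trace, fs, sta_win=0.5, lta_win=10.0, threshold=4.0):
    """Flag samples where the short-term/long-term average energy ratio
    exceeds a trigger threshold (classic STA/LTA event detection).

    trace: 1-D array of samples; fs: sampling rate in Hz;
    sta_win, lta_win: window lengths in seconds.
    """
    sta_n = int(sta_win * fs)   # short-term window, in samples
    lta_n = int(lta_win * fs)   # long-term window, in samples
    energy = np.asarray(trace, dtype=float) ** 2
    # moving-window sums via cumulative sums
    csum = np.cumsum(energy)
    sta = (csum[sta_n:] - csum[:-sta_n]) / sta_n
    lta = (csum[lta_n:] - csum[:-lta_n]) / lta_n
    # end-align: both windows end on the same sample
    n = min(len(sta), len(lta))
    ratio = sta[-n:] / np.maximum(lta[-n:], 1e-12)
    return ratio > threshold
```

    A detector like this is sensitive to the background noise level through `threshold`: with sustained cultural or equipment noise the LTA stays inflated or false triggers accumulate, which is consistent with the abstract's finding that a plain STA/LTA was unsuitable at this site.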

  8. Sources and characteristics of acoustic emissions from mechanically stressed geologic granular media — A review

    NASA Astrophysics Data System (ADS)

    Michlmayr, Gernot; Cohen, Denis; Or, Dani

    2012-05-01

    The formation of cracks and the emergence of shearing planes and other modes of rapid macroscopic failure in geologic granular media involve numerous grain-scale mechanical interactions, often generating high frequency (kHz) elastic waves referred to as acoustic emissions (AE). These acoustic signals have been used primarily for monitoring and characterizing fatigue and progressive failure in engineered systems, with only a few applications concerning geologic granular media reported in the literature. Similar to the monitoring of seismic events preceding an earthquake, AE may offer a means for non-invasive, in-situ assessment of mechanical precursors associated with imminent landslides or other types of rapid mass movements (debris flows, rock falls, snow avalanches, glacier stick-slip events). Despite diverse applications and potential usefulness, a systematic description of the AE method and its relevance to mechanical processes in Earth sciences is lacking. This review aims to provide a sound foundation for linking observed AE with various micro-mechanical failure events in geologic granular materials, not only for monitoring of triggering events preceding mass mobilization, but also as a non-invasive tool in its own right for probing the rich spectrum of mechanical processes at scales ranging from a single grain to a hillslope. We first review studies reporting the use of AE for monitoring failure in various geologic materials, and describe AE-generating source mechanisms in mechanically stressed geologic media (e.g., frictional sliding, micro-cracking, particle collisions, rupture of water bridges) including AE statistical features, such as frequency content and occurrence probabilities. We summarize available AE sensors and measurement principles. 
The high sampling rates of advanced AE systems enable detection of numerous discrete failure events within a volume and thus provide access to statistical descriptions of progressive collapse of systems with many interacting mechanical elements, such as the fiber bundle model (FBM). We highlight intrinsic links between AE characteristics and established statistical models often used in structural engineering and material sciences, and outline potential applications for failure prediction and early warning using the AE method in combination with the FBM. The biggest challenge to field application of the AE method is strong signal attenuation. We provide an outlook for overcoming this limitation, considering the emergence of a class of fiber-optic based distributed AE sensors and the deployment of acoustic waveguides as part of monitoring networks.

  9. Passive microseismic monitoring at an Australian CO2 geological storage site

    NASA Astrophysics Data System (ADS)

    Siggins, Anthony

    2010-05-01

    A.F. Siggins (CO2CRC at CSIRO Earth Science and Resource Engineering, Clayton, Victoria, Australia) and T. Daley (Lawrence Berkeley National Labs, Berkeley, CA, USA). Prior to the injection of CO2, background micro-seismic (MS) monitoring commenced at the CO2CRC Otway project site in Victoria, south-eastern Australia, on the 4th of October 2007. The seismometer installation consisted of a solar-powered ISS MS™ seismometer connected to two triaxial geophones placed in a gravel pack in a shallow borehole at 10 m and 40 m depth, respectively. The seismometer unit was interfaced to a digital radio which communicated with a remote computer hosting the seismic database. This system was designed to give a qualitative indication of any natural micro-seismicity at the site and to provide backup to a more extensive geophone array installed at the reservoir depth of approximately 2000 m. During the period October to December 2007, in excess of 150 two-station events were recorded. These events could all be associated with surface engineering activities during the down-hole installation of instruments at the nearby Naylor 1 monitoring well and surface seismic weight-drop investigations on site. Source location showed the great majority of events to be clustered on the surface. MS activity then quietened down with the completion of these tasks. Injection of a CO2-rich gas commenced in mid March 2008, continuing until late August 2009, with approximately 65,000 tonnes injected at 2050 m depth into a depleted natural gas formation. Only a small number of subsurface MS events were recorded during 2008, although the monitoring system suffered long periods of down-time due to power supply failures and frequent mains power outages in the region. In March 2009 the surface installation was upgraded with new hardware and software. 
The seismometer was replaced with a more sensitive ISS 32-bit GS™ unit, and Internet access to the monitoring system and database was established via a Telstra Next G connection. Due to the higher sensitivity of the seismometer, many more low-amplitude subsurface events are now being recorded, possibly associated with deep truncated faults in the south-west corner of the injection site, although any causal link with the CO2 injection remains to be determined.

  10. Event-Driven Messaging for Offline Data Quality Monitoring at ATLAS

    NASA Astrophysics Data System (ADS)

    Onyisi, Peter

    2015-12-01

    During LHC Run 1, the information flow through the offline data quality monitoring in ATLAS relied heavily on chains of processes polling each other's outputs for handshaking purposes. This resulted in a fragile architecture with many possible points of failure and an inability to monitor the overall state of the distributed system. We report on the status of a project undertaken during the LHC shutdown to replace the ad hoc synchronization methods with a uniform message queue system. This enables the use of standard protocols to connect processes on multiple hosts; reliable transmission of messages between possibly unreliable programs; easy monitoring of the information flow; and the removal of inefficient polling-based communication.
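    The polling-to-messaging change described above can be sketched in miniature. This is not the ATLAS implementation (which would use a standard broker protocol so processes on multiple hosts can participate); it is an in-process stand-in showing the pattern: upstream stages announce finished outputs on a queue instead of downstream stages polling for them.

```python
import queue
import threading

events = queue.Queue()   # stands in for a message broker
results = []

def producer():
    # Announce each finished output explicitly, rather than letting the
    # downstream stage poll the filesystem for it. Run numbers and file
    # names here are made up for illustration.
    for run in (100, 101, 102):
        events.put({"run": run, "file": f"hist_run{run}.root"})
    events.put(None)  # sentinel: no more work

def consumer():
    while True:
        msg = events.get()
        if msg is None:
            break
        results.append(msg["run"])  # process the announced output
        events.task_done()

t1 = threading.Thread(target=producer)
t2 = threading.Thread(target=consumer)
t1.start(); t2.start()
t1.join(); t2.join()
```

    The queue gives exactly the properties the abstract lists: the consumer blocks instead of busy-polling, delivery is reliable even if one side is briefly unavailable, and a monitor can watch queue depth to see the overall state of the flow.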

  11. Implementing an electronic hand hygiene monitoring system: Lessons learned from community hospitals.

    PubMed

    Edmisten, Catherine; Hall, Charles; Kernizan, Lorna; Korwek, Kimberly; Preston, Aaron; Rhoades, Evan; Shah, Shalin; Spight, Lori; Stradi, Silvia; Wellman, Sonia; Zygadlo, Scott

    2017-08-01

    Measuring and providing feedback about hand hygiene (HH) compliance is a complicated process. Electronic HH monitoring systems have been proposed as a possible solution; however, there is little information available about how to successfully implement and maintain these systems for maximum benefit in community hospitals. An electronic HH monitoring system was implemented in 3 community hospitals by teams at each facility with support from the system vendor. Compliance rates were measured by the electronic monitoring system. The implementation challenges, solutions, and drivers of success were monitored within each facility. The electronic HH monitoring systems tracked on average more than 220,000 compliant HH events per facility per month, with an average monthly compliance rate >85%. The sharing of best practices between facilities was valuable in addressing challenges encountered during implementation and maintaining a high rate of use. Drivers of success included a collaborative environment, leadership commitment, using data to drive improvement, consistent and constant messaging, staff empowerment, and patient involvement. Realizing the full benefit of investments in electronic HH monitoring systems requires careful consideration of implementation strategies, planning for ongoing support and maintenance, and presenting data in a meaningful way to empower and inspire staff.

  12. Supply Warehouse#3, SWMU 088 Operations, Maintenance, and Monitoring Report Kennedy Space Center, Florida

    NASA Technical Reports Server (NTRS)

    Murphy, Alex

    2016-01-01

    This document presents the findings, observations, and results associated with Operations, Maintenance, and Monitoring (OM&M) activities of Corrective Measures Implementation (CMI) activities conducted at Supply Warehouse #3 (SW3) located at John F. Kennedy Space Center (KSC), Florida from October 8, 2015, to September 12, 2016, and performance monitoring results for semi-annual sampling events conducted in March and September 2016. The primary objective of SW3 CMI is to actively decrease concentrations of trichloroethene (TCE) and vinyl chloride (VC) to less than Florida Department of Environmental Protection (FDEP) Natural Attenuation Default Concentrations (NADCs), and the secondary objective is to reduce TCE, cis-1,2-dichloroethene (cDCE), trans-1,2-dichloroethene (tDCE), 1,1-dichloroethene (11DCE), and VC concentrations to less than FDEP Groundwater Cleanup Target Levels (GCTLs). The SW3 facility has been designated Solid Waste Management Unit (SWMU) 088 under KSC's Resource Conservation and Recovery Act (RCRA) Corrective Action Program. Based on the results to date, the SW3 air sparging (AS) system is operating at or below the performance criteria as presented in the 2008 SW3 Corrective Measures Implementation (CMI) Work Plan and 2009 and 2012 CMI Work Plan Addenda. Since the start of AS system operations on December 19, 2012, through the September 2016 groundwater sampling event, TCE concentrations have decreased to less than the GCTL in all wells within the Active Remediation Zone (ARZ), and VC results remain less than NADC but greater than GCTL. Based on these results, team consensus was reached at the October 2016 KSC Remediation Team (KSCRT) meeting to continue AS system operations and semi-annual performance monitoring of volatile organic compounds in March 2017 at ten monitoring wells at select locations, and in September 2017 at four monitoring wells at select locations to reduce VC concentrations to below GCTL. 
Additionally, surface water samples will be collected at locations SW0001, SW0002, and SW0003 during both the March and September 2017 events. Team consensus was also reached at the October 2017 KSCRT meeting to continue with operation and maintenance (O&M) of the AS system at SW3.

  13. An embedded wireless system for remote monitoring of bridges

    NASA Astrophysics Data System (ADS)

    Harms, T.; Bastianini, F.; Sedigh Sarvestani, S.

    2008-03-01

    This paper describes an autonomous embedded system for remote monitoring of bridges. Salient features of the system include ultra-low power consumption, wireless communication of data and alerts, and incorporation of embedded sensors that monitor various indicators of the structural health of a bridge, while capturing the state of its surrounding environment. Examples include water level, temperature, vibration, and acoustic emissions. Ease of installation, physical robustness, remote maintenance and calibration, and autonomous data communication make the device a self-contained solution for remote monitoring of structural health. The system addresses shortcomings present in centralized structural health monitoring systems, particularly their reliance on a laptop or handheld computer. The system has been field-tested to verify the accuracy of the collected data and dependability of communication. The sheer volume of data collected, and the regularity of its collection can enable accurate and precise assessment of the health of a bridge, guiding maintenance efforts and providing early warning of potentially dangerous events. In this paper, we present a detailed breakdown of the system's power requirements and the results of the initial field test.

  14. 75 FR 80091 - Self-Regulatory Organizations; International Securities Exchange, LLC; Notice of Filing of...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-12-21

    ...] through Web CRD [System], by a Member immediately following the date of termination, but in no event later...) above, in the event that the Member learns of facts or circumstances causing any information set forth... through a window or by video monitor. The individual responsible for proctoring at each administration...

  15. Applied Use of Safety Event Occurrence Control Charts of Harm and Non-Harm Events: A Case Study.

    PubMed

    Robinson, Susan N; Neyens, David M; Diller, Thomas

    Most hospitals use occurrence reporting systems that facilitate identifying serious events that lead to root cause investigations. Thus, the events catalyze improvement efforts to mitigate patient harm. A serious limitation is that only a few of the occurrences are investigated. A challenge is leveraging the data to generate knowledge. The goal is to present a methodology to supplement these incident assessment efforts. The framework affords an enhanced understanding of patient safety through the use of control charts to monitor non-harm and harm incidents simultaneously. This approach can identify harm and non-harm reporting rates and also can facilitate monitoring occurrence trends. This method also can expedite identifying changes in workflow, processes, or safety culture. Although unable to identify root causes, this approach can identify changes in near real time. This approach also supports evaluating safety or policy interventions that may not be observable in annual safety climate surveys.
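    Monitoring harm and non-harm occurrence rates simultaneously, as the abstract describes, is commonly done with one control chart per event class. A minimal sketch of a u-chart (events per unit of exposure, with 3-sigma limits) follows; the abstract does not specify the chart type or data, so the numbers and the choice of a u-chart are illustrative.

```python
import math

def u_chart_limits(event_counts, exposure):
    """Centerline and 3-sigma limits for a u-chart: events per unit of
    exposure (e.g. incidents per 1,000 patient-days per month)."""
    ubar = sum(event_counts) / sum(exposure)  # pooled rate (centerline)
    limits = []
    for n in exposure:
        sigma = math.sqrt(ubar / n)
        limits.append((max(0.0, ubar - 3 * sigma), ubar + 3 * sigma))
    return ubar, limits

def out_of_control(event_counts, exposure):
    """Flag periods whose observed rate falls outside the control limits."""
    ubar, limits = u_chart_limits(event_counts, exposure)
    flags = []
    for c, n, (lo, hi) in zip(event_counts, exposure, limits):
        u = c / n
        flags.append(u < lo or u > hi)
    return flags
```

    Run separately on harm and non-harm counts, a signal on the non-harm chart with a quiet harm chart can indicate a reporting-culture or workflow change rather than a safety change, which is the kind of near-real-time distinction the case study aims to surface.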

  16. The nuts and bolts of pills and portions: the functions of a drug safety working group.

    PubMed

    Nath, Noleen S; Jones, Ellen H; Stride, Peter; Premaratne, Manuja; Thaker, Darshit; Lim, Ivan

    2011-11-01

    Hospitalised patients commonly experience adverse drug events (ADEs) and medication errors. Runciman reported that ADEs in hospitals account for 20% of reported adverse events and contribute to 27% of deaths where death followed an adverse event. Hughes recommends multidisciplinary hospital drug committees to assess performance and raise standards. The new Code of Conduct of the Medical Board of Australia recommends participation in systems for surveillance and monitoring of adverse events, and to improve patient safety. We describe the functions and role of a Drug Safety Working Group (DSWG) in a suburban hospital, which aims to audit and promote a culture of prescribing and medication administration that is prudent and cautious to minimise the risk of harm to patients. We believe that regular prescription monitoring and feedback to Resident Medical Officers (RMOs) improves medication management in our hospital.

  17. Radar observations of the 2009 eruption of Redoubt Volcano, Alaska: Initial deployment of a transportable Doppler radar system for volcano-monitoring

    NASA Astrophysics Data System (ADS)

    Hoblitt, R. P.; Schneider, D. J.

    2009-12-01

    The rapid detection of explosive volcanic eruptions and accurate determination of eruption-column altitude and ash-cloud movement are critical factors in the mitigation of volcanic risks to aviation and in the forecasting of ash fall on nearby communities. The U.S. Geological Survey (USGS) deployed a transportable Doppler radar during the precursory stage of the 2009 eruption of Redoubt Volcano, Alaska, and it provided valuable information during subsequent explosive events. We describe the capabilities of this new monitoring tool and present data that it captured during the Redoubt eruption. The volcano-monitoring Doppler radar operates in the C-band (5.36 cm) and has a 2.4-m parabolic antenna with a beam width of 1.6 degrees, a transmitter power of 330 watts, and a maximum effective range of 240 km. The entire disassembled system, including a radome, fits inside a 6-m-long steel shipping container that has been modified to serve as base for the antenna/radome, and as a field station for observers and other monitoring equipment. The radar was installed at the Kenai Municipal Airport, 82 km east of Redoubt and about 100 km southwest of Anchorage. In addition to an unobstructed view of the volcano, this secure site offered the support of the airport staff and the City of Kenai. A further advantage was the proximity of a NEXRAD Doppler radar operated by the Federal Aviation Administration. This permitted comparisons with an established weather-monitoring radar system. The new radar system first became functional on March 20, roughly a day before the first of nineteen explosive ash-producing events of Redoubt between March 21 and April 4. Despite inevitable start-up problems, nearly all of the events were observed by the radar, which was remotely operated from the Alaska Volcano Observatory office in Anchorage. The USGS and NEXRAD radars both detected the eruption columns and tracked the directions of drifting ash clouds. 
The USGS radar scanned a 45-degree sector centered on the volcano, while NEXRAD scanned a full 360 degrees; the sector strategy therefore revisited the volcano more frequently. Consequently, the USGS system detected event onset in less than a minute, while NEXRAD required about 4 minutes. The observed column heights reached 20 km above sea level and compared favorably with those from NEXRAD. NEXRAD tracked ash clouds to greater distances than the USGS system. This experience shows that Doppler radar is a valuable complement to traditional seismic and satellite monitoring of explosive eruptions.

  18. Energy Monitoring and Control Systems Operator Training - Recommended Qualifications, Staffing, Job Description, and Training Requirements for EMCS Operators.

    DTIC Science & Technology

    1982-06-01

    start/stop chiller optimization, and demand limiting were added. The system monitors a 7,000-ton chiller plant and controls 74 air handlers. The EMCS does...Modify analog limits. g. Adjust setpoints of selected controllers. h. Select manual or automatic control modes. i. Enable and disable individual points...or event schedules and controller setpoints; make nonscheduled starts and stops of equipment or disable field panels when required for routine

  19. Frequencies of decision making and monitoring in adaptive resource management

    PubMed Central

    Johnson, Fred A.

    2017-01-01

    Adaptive management involves learning-oriented decision making in the presence of uncertainty about the responses of a resource system to management. It is implemented through an iterative sequence of decision making, monitoring and assessment of system responses, and incorporation of what is learned into future decision making. Decision making at each point is informed by a value or objective function, for example total harvest anticipated over some time frame. The value function expresses the value associated with decisions, and it is influenced by system status as updated through monitoring. Often, decision making follows shortly after a monitoring event. However, it is certainly possible for the cadence of decision making to differ from that of monitoring. In this paper we consider different combinations of annual and biennial decision making, along with annual and biennial monitoring. With biennial decision making, decisions are changed only every other year; with biennial monitoring, field data are collected only every other year. Different cadences of decision making combine with annual and biennial monitoring to define 4 scenarios. Under each scenario we describe optimal valuations for active and passive adaptive decision making. We highlight patterns in valuation among scenarios, depending on the occurrence of monitoring and decision-making events. Differences between years are tied to the fact that every other year a new decision can be made no matter what the scenario, and state information is available to inform that decision. In the subsequent year, however, in 3 of the 4 scenarios either a decision is repeated or monitoring does not occur (or both). There are substantive differences in optimal values among the scenarios, as well as in the optimal policies producing those values. Especially noteworthy is the influence of monitoring cadence on valuation in some years.
We highlight patterns in policy and valuation among the scenarios, and discuss management implications and extensions. PMID:28800591
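    The counting argument behind the 4 scenarios can be enumerated directly. The sketch below is illustrative only: the scenario names and the assumption that biennial activities fall on even years are ours, not from the paper.

```python
from itertools import product

# Each of decision making and monitoring is either annual or biennial.
scenarios = list(product(["annual", "biennial"], repeat=2))  # (decision, monitoring)

def events_in_year(scenario, year):
    """Return (decision_made, monitoring_done) for year 0, 1, 2, ...
    Biennial activities are assumed to fall on even years."""
    decision, monitoring = scenario
    return (decision == "annual" or year % 2 == 0,
            monitoring == "annual" or year % 2 == 0)

# In even years a new, informed decision is possible under all 4 scenarios;
# in the intervening year, 3 of the 4 scenarios repeat a decision or skip
# monitoring (or both).
even = [s for s in scenarios if events_in_year(s, 0) == (True, True)]
odd_gap = [s for s in scenarios if events_in_year(s, 1) != (True, True)]
print(len(even), len(odd_gap))  # 4 3
```

Only the fully annual scenario both decides and monitors every year, which is why valuations differ between alternating years in the other three.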

  20. Frequencies of decision making and monitoring in adaptive resource management

    USGS Publications Warehouse

    Williams, Byron K.; Johnson, Fred A.

    2017-01-01

    Adaptive management involves learning-oriented decision making in the presence of uncertainty about the responses of a resource system to management. It is implemented through an iterative sequence of decision making, monitoring and assessment of system responses, and incorporation of what is learned into future decision making. Decision making at each point is informed by a value or objective function, for example total harvest anticipated over some time frame. The value function expresses the value associated with decisions, and it is influenced by system status as updated through monitoring. Often, decision making follows shortly after a monitoring event. However, it is certainly possible for the cadence of decision making to differ from that of monitoring. In this paper we consider different combinations of annual and biennial decision making, along with annual and biennial monitoring. With biennial decision making, decisions are changed only every other year; with biennial monitoring, field data are collected only every other year. Different cadences of decision making combine with annual and biennial monitoring to define 4 scenarios. Under each scenario we describe optimal valuations for active and passive adaptive decision making. We highlight patterns in valuation among scenarios, depending on the occurrence of monitoring and decision-making events. Differences between years are tied to the fact that every other year a new decision can be made no matter what the scenario, and state information is available to inform that decision. In the subsequent year, however, in 3 of the 4 scenarios either a decision is repeated or monitoring does not occur (or both). There are substantive differences in optimal values among the scenarios, as well as in the optimal policies producing those values. Especially noteworthy is the influence of monitoring cadence on valuation in some years.
We highlight patterns in policy and valuation among the scenarios, and discuss management implications and extensions.

  1. Adverse drug event monitoring at the Food and Drug Administration.

    PubMed

    Ahmad, Syed Rizwanuddin

    2003-01-01

    The Food and Drug Administration (FDA) is responsible not only for approving drugs but also for monitoring their safety after they reach the market. The complete adverse event profile of a drug is not known at the time of approval because of the small sample size, short duration, and limited generalizability of pre-approval clinical trials. This report describes the FDA's postmarketing surveillance system, to which many clinicians submit reports of adverse drug events encountered while treating their patients. Despite its limitations, the spontaneous reporting system is an extremely valuable mechanism by which hazards with drugs that were not observed or recognized at the time of approval are identified. Physicians are strongly encouraged to submit reports of adverse outcomes with suspect drugs to the FDA, and their reports make a difference. The FDA is strengthening its postmarketing surveillance with access to new data sources that have the potential to further improve the identification, quantification, and subsequent management of drug risk.

  2. Adverse Drug Event Monitoring at the Food and Drug Administration

    PubMed Central

    Ahmad, Syed Rizwanuddin

    2003-01-01

    The Food and Drug Administration (FDA) is responsible not only for approving drugs but also for monitoring their safety after they reach the market. The complete adverse event profile of a drug is not known at the time of approval because of the small sample size, short duration, and limited generalizability of pre-approval clinical trials. This report describes the FDA's postmarketing surveillance system, to which many clinicians submit reports of adverse drug events encountered while treating their patients. Despite its limitations, the spontaneous reporting system is an extremely valuable mechanism by which hazards with drugs that were not observed or recognized at the time of approval are identified. Physicians are strongly encouraged to submit reports of adverse outcomes with suspect drugs to the FDA, and their reports make a difference. The FDA is strengthening its postmarketing surveillance with access to new data sources that have the potential to further improve the identification, quantification, and subsequent management of drug risk. PMID:12534765

  3. Seismic monitoring of the unstable rock slope at Aaknes, Norway

    NASA Astrophysics Data System (ADS)

    Roth, M.; Blikra, L. H.

    2009-04-01

    The unstable rock slope at Aaknes has an estimated volume of about 70 million cubic meters, and parts of the slope are moving at a rate of 2-15 cm/year. Among many other direct monitoring systems, we have installed a small-scale seismic network (8 three-component geophones over an area of 250 x 150 meters) in order to monitor microseismic events related to the movement of the slope. The network has been operational since November 2005 with only a few short-term outages. Seismic data are transferred in real time from the site to NORSAR for automatic detection processing. The resulting detection lists and charts and the associated waveforms are forwarded immediately to the early warning centre of the Municipality of Stranda. Furthermore, we make them available after a delay of about 10-15 minutes on our public project web page (http://www.norsar.no/pc-47-48-Latest-Data.aspx). Seismic monitoring provides independent and complementary data to the more direct monitoring systems at Aaknes. We observe increased seismic activity in periods of heavy rainfall or snow melt, when laser ranging data and extensometer readings indicate temporary acceleration phases of the slope. The seismic network is too small, and the velocity structure too heterogeneous, to obtain reliable locations of the microseismic events. In summer 2009 we plan to install a highly sensitive broadband seismometer (60 s - 100 Hz) in the middle of the unstable slope. This will allow us to better constrain the locations of the microseismic events and to investigate potential low-frequency signals associated with the slope movement.

  4. 75 FR 75059 - Mandatory Reporting of Greenhouse Gases: Injection and Geologic Sequestration of Carbon Dioxide

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-12-01

    ... monitoring will achieve detection and quantification of CO 2 in the event surface leakage occurs. The UIC... leakage detection monitoring system or technical specifications should also be described in the MRV plan... of injected CO 2 or from another cause (e.g. natural variability). The MRV plan leakage detection and...

  5. 77 FR 8160 - Quality Assurance Requirements for Continuous Opacity Monitoring Systems at Stationary Sources

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-02-14

    ..., glass windows (uncoated or anti-reflection coated, but with no curvature), lenses with mounts where such... requirements must I meet if I use a substitute opacity monitor? In the event that your certified opacity... the above in the maintenance log or in other appropriate permanently maintained records. 10.7 When do...

  6. System for Multiplexing Acoustic Emission (AE) Instrumentation

    NASA Technical Reports Server (NTRS)

    Prosser, William H. (Inventor); Perey, Daniel F. (Inventor); Gorman, Michael R. (Inventor); Scales, Edgar F. (Inventor)

    2003-01-01

    An acoustic monitoring device has at least two acoustic sensors with a triggering mechanism and a multiplexing circuit. After the occurrence of a triggering event at a sensor, the multiplexing circuit allows a recording component to record acoustic emissions at adjacent sensors. The acoustic monitoring device is attached to a solid medium to detect the occurrence of damage.

  7. The clinical effectiveness and cost-effectiveness of point-of-care tests (CoaguChek system, INRatio2 PT/INR monitor and ProTime Microcoagulation system) for the self-monitoring of the coagulation status of people receiving long-term vitamin K antagonist therapy, compared with standard UK practice: systematic review and economic evaluation.

    PubMed

    Sharma, Pawana; Scotland, Graham; Cruickshank, Moira; Tassie, Emma; Fraser, Cynthia; Burton, Chris; Croal, Bernard; Ramsay, Craig R; Brazzelli, Miriam

    2015-06-01

    Self-monitoring (self-testing and self-management) could be a valid option for oral anticoagulation therapy monitoring in the NHS, but current evidence on its clinical effectiveness or cost-effectiveness is limited. We investigated the clinical effectiveness and cost-effectiveness of point-of-care coagulometers for the self-monitoring of coagulation status in people receiving long-term vitamin K antagonist therapy, compared with standard clinic monitoring. We searched major electronic databases (e.g. MEDLINE, MEDLINE In Process & Other Non-Indexed Citations, EMBASE, Bioscience Information Service, Science Citation Index and Cochrane Central Register of Controlled Trials) from 2007 to May 2013. Reports published before 2007 were identified from the existing Cochrane review (major databases searched from inception to 2007). The economic model parameters were derived from the clinical effectiveness review, other relevant reviews, routine sources of cost data and clinical experts' advice. We assessed randomised controlled trials (RCTs) evaluating self-monitoring in people with atrial fibrillation or heart valve disease requiring long-term anticoagulation therapy. CoaguChek(®) XS and S models (Roche Diagnostics, Basel, Switzerland), INRatio2(®) PT/INR monitor (Alere Inc., San Diego, CA USA), and ProTime Microcoagulation system(®) (International Technidyne Corporation, Nexus Dx, Edison, NJ, USA) coagulometers were compared with standard monitoring. Where possible, we combined data from included trials using standard inverse variance methods. Risk of bias assessment was performed using the Cochrane risk of bias tool. A de novo economic model was developed to assess the cost-effectiveness over a 10-year period. We identified 26 RCTs (published in 45 papers) with a total of 8763 participants. CoaguChek was used in 85% of the trials. Primary analyses were based on data from 21 out of 26 trials. Only four trials were at low risk of bias. 
Major clinical events: self-monitoring was significantly better than standard monitoring in preventing thromboembolic events [relative risk (RR) 0.58, 95% confidence interval (CI) 0.40 to 0.84; p = 0.004]. In people with artificial heart valves (AHVs), self-monitoring almost halved the risk of thromboembolic events (RR 0.56, 95% CI 0.38 to 0.82; p = 0.003) and all-cause mortality (RR 0.54, 95% CI 0.32 to 0.92; p = 0.02). There was greater reduction in thromboembolic events and all-cause mortality through self-management but not through self-testing. Intermediate outcomes: self-testing, but not self-management, showed a modest but significantly higher percentage of time in therapeutic range, compared with standard care (weighted mean difference 4.44, 95% CI 1.71 to 7.18; p = 0.02). Patient-reported outcomes: improvements in patients' quality of life related to self-monitoring were observed in six out of nine trials. High preference rates were reported for self-monitoring (77% to 98% in four trials). Net health and social care costs over 10 years were £7295 (self-monitoring with INRatio2); £7324 (standard care monitoring); £7333 (self-monitoring with CoaguChek XS) and £8609 (self-monitoring with ProTime). The estimated quality-adjusted life-year (QALY) gain associated with self-monitoring was 0.03. Self-monitoring with INRatio2 or CoaguChek XS was found to have ≈ 80% chance of being cost-effective, compared with standard monitoring at a willingness-to-pay threshold of £20,000 per QALY gained. Compared with standard monitoring, self-monitoring appears to be safe and effective, especially for people with AHVs. Self-monitoring, and in particular self-management, of anticoagulation status appeared cost-effective when pooled estimates of clinical effectiveness were applied. However, if self-monitoring does not result in significant reductions in thromboembolic events, it is unlikely to be cost-effective, based on a comparison of annual monitoring costs alone. 
Trials investigating the longer-term outcomes of self-management are needed, as well as direct comparisons of the various point-of-care coagulometers. This study is registered as PROSPERO CRD42013004944. The National Institute for Health Research Health Technology Assessment programme.
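    The "standard inverse variance methods" used above to combine trial data can be illustrated with generic fixed-effect pooling of log relative risks. The function and the trial-level numbers below are hypothetical, not taken from the review.

```python
import math

def pool_log_rr(log_rrs, ses):
    """Fixed-effect inverse-variance pooling of per-trial log relative
    risks; returns the pooled RR and its 95% confidence interval."""
    weights = [1.0 / se ** 2 for se in ses]          # weight = 1 / variance
    pooled = sum(w * lr for w, lr in zip(weights, log_rrs)) / sum(weights)
    se_pooled = math.sqrt(1.0 / sum(weights))
    lo = math.exp(pooled - 1.96 * se_pooled)
    hi = math.exp(pooled + 1.96 * se_pooled)
    return math.exp(pooled), (lo, hi)

# Hypothetical two-trial example: RRs of 0.5 and 0.7 with standard
# errors of the log RR of 0.3 and 0.25.
rr, ci = pool_log_rr([math.log(0.5), math.log(0.7)], [0.3, 0.25])
print(round(rr, 2), tuple(round(x, 2) for x in ci))  # 0.61 (0.42, 0.89)
```

The more precise trial (smaller standard error) pulls the pooled estimate toward its own RR, which is the defining behaviour of inverse-variance weighting.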

  8. An Integrative Structural Health Monitoring System for the Local/Global Responses of a Large-Scale Irregular Building under Construction

    PubMed Central

    Park, Hyo Seon; Shin, Yunah; Choi, Se Woon; Kim, Yousok

    2013-01-01

    In this study, a practical and integrative SHM system was developed and applied to a large-scale irregular building under construction, where many challenging issues exist. In the proposed sensor network, customized energy-efficient wireless sensing units (sensor nodes, repeater nodes, and master nodes) were employed and comprehensive communications from the sensor node to the remote monitoring server were conducted through wireless communications. The long-term (13-month) monitoring results recorded from a large number of sensors (75 vibrating wire strain gauges, 10 inclinometers, and three laser displacement sensors) indicated that the construction event exhibiting the largest influence on structural behavior was the removal of bents that were temporarily installed to support the free end of the cantilevered members during their construction. The safety of each member could be confirmed based on the quantitative evaluation of each response. Furthermore, it was also confirmed that the relation between these responses (i.e., deflection, strain, and inclination) can provide information about the global behavior of structures induced from specific events. Analysis of the measurement results demonstrates the proposed sensor network system is capable of automatic and real-time monitoring and can be applied and utilized for both the safety evaluation and precise implementation of buildings under construction. PMID:23860317

  9. Appendix I1-2 to Wind HUI Initiative 1: Field Campaign Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    John Zack; Deborah Hanley; Dora Nakafuji

    This report is an appendix to the Hawaii WindHUI efforts to develop and operationalize short-term wind forecasting and wind ramp event forecasting capabilities. The report summarizes the WindNET field campaign deployment experiences and challenges. As part of the WindNET project on the Big Island of Hawaii, AWS Truepower (AWST) conducted a field campaign to assess the viability of deploying a network of monitoring systems to aid in local wind energy forecasting. The data provided at these monitoring locations, which were strategically placed around the Big Island of Hawaii based upon results from the Oahu Wind Integration and Transmission Study (OWITS) observational targeting study (Figure 1), provided predictive indicators for improving wind forecasts and developing responsive strategies for managing real-time, wind-related system events. The goal of the field campaign was to make measurements from a network of remote monitoring devices to improve 1- to 3-hour look-ahead forecasts for wind facilities.

  10. Piecing together the puzzle: Improving event content coverage for real-time sub-event detection using adaptive microblog crawling

    PubMed Central

    Tokarchuk, Laurissa; Wang, Xinyue; Poslad, Stefan

    2017-01-01

    In an age when people are predisposed to report real-world events through their social media accounts, many researchers value the benefits of mining user-generated content from social media. Compared with traditional news media, social media services such as Twitter can provide more complete and timely information about real-world events. However, events are often like a puzzle: to solve the puzzle and understand the event, we must identify all the sub-events, or pieces. Existing Twitter event monitoring systems for sub-event detection and summarization typically analyse events based on partial data, as conventional data collection methodologies are unable to collect comprehensive event data. As a result, existing systems are often unable to report sub-events in real time and often miss sub-events, or pieces, in the broader event puzzle entirely. This paper proposes a Sub-event detection by real-TIme Microblog monitoring (STRIM) framework that leverages the temporal features of an expanded set of newsworthy event content. To identify sub-events more comprehensively and accurately, this framework first proposes the use of adaptive microblog crawling. Our adaptive microblog crawler is capable of increasing the coverage of events while minimizing the amount of non-relevant content. We then propose a stream division methodology that can be carried out in real time so that the temporal features of the expanded event streams can be analysed by a burst detection algorithm. In the final steps of the framework, content features are extracted from each divided stream and recombined to provide a final summarization of the sub-events. The proposed framework is evaluated against traditional event detection using event recall and event precision metrics. Results show that improving the quality and coverage of event content contributes to better event detection by identifying additional valid sub-events.
The novel combination of our proposed adaptive crawler and our stream division/recombination technique provides significant gains in event recall (44.44%) and event precision (9.57%). The addition of these sub-events, or pieces, allows us to get closer to solving the event puzzle. PMID:29107976
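    Burst detection over a divided stream can be sketched with a simple moving-baseline threshold: flag a time window whose count exceeds the recent mean by several standard deviations. This is a generic illustration, not the STRIM framework's actual detector; the function name, window size, and counts are made up.

```python
from statistics import mean, stdev

def detect_bursts(window_counts, history=5, k=2.0):
    """Return indices of windows whose count exceeds the mean of the
    preceding `history` windows by more than k standard deviations."""
    bursts = []
    for i in range(history, len(window_counts)):
        past = window_counts[i - history:i]
        mu, sigma = mean(past), stdev(past)
        if window_counts[i] > mu + k * max(sigma, 1.0):  # floor avoids a zero sigma
            bursts.append(i)
    return bursts

counts = [10, 12, 9, 11, 10, 55, 12, 10]  # window 5 spikes well above baseline
print(detect_bursts(counts))  # [5]
```

Each flagged window would then be handed to the content-feature extraction step to decide whether the burst corresponds to a valid sub-event.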

  11. Piecing together the puzzle: Improving event content coverage for real-time sub-event detection using adaptive microblog crawling.

    PubMed

    Tokarchuk, Laurissa; Wang, Xinyue; Poslad, Stefan

    2017-01-01

    In an age when people are predisposed to report real-world events through their social media accounts, many researchers value the benefits of mining user-generated content from social media. Compared with traditional news media, social media services such as Twitter can provide more complete and timely information about real-world events. However, events are often like a puzzle: to solve the puzzle and understand the event, we must identify all the sub-events, or pieces. Existing Twitter event monitoring systems for sub-event detection and summarization typically analyse events based on partial data, as conventional data collection methodologies are unable to collect comprehensive event data. As a result, existing systems are often unable to report sub-events in real time and often miss sub-events, or pieces, in the broader event puzzle entirely. This paper proposes a Sub-event detection by real-TIme Microblog monitoring (STRIM) framework that leverages the temporal features of an expanded set of newsworthy event content. To identify sub-events more comprehensively and accurately, this framework first proposes the use of adaptive microblog crawling. Our adaptive microblog crawler is capable of increasing the coverage of events while minimizing the amount of non-relevant content. We then propose a stream division methodology that can be carried out in real time so that the temporal features of the expanded event streams can be analysed by a burst detection algorithm. In the final steps of the framework, content features are extracted from each divided stream and recombined to provide a final summarization of the sub-events. The proposed framework is evaluated against traditional event detection using event recall and event precision metrics. Results show that improving the quality and coverage of event content contributes to better event detection by identifying additional valid sub-events.
The novel combination of our proposed adaptive crawler and our stream division/recombination technique provides significant gains in event recall (44.44%) and event precision (9.57%). The addition of these sub-events, or pieces, allows us to get closer to solving the event puzzle.

  12. Use of a Relapse Monitoring Board

    PubMed Central

    MacFadden, Wayne; Anand, Ravi; Khanna, Sumant; Rapaport, Mark H.; Haskins, J. Thomas; Turkoz, Ibrahim; Alphs, Larry

    2011-01-01

    Objective: Independent review boards can provide an objective appraisal of investigators' decisions and may be useful for determining complex primary outcomes, such as bipolar disorder relapse, in crossnational studies. This article describes the use of an independent, blinded relapse monitoring board to assess the primary outcome (relapse) in an international clinical trial of risperidone long-acting therapy adjunctive to standard-care pharmacotherapy for patients with bipolar disorder. Design: The fully autonomous relapse monitoring board was composed of a chair and two additional members—all psychiatrists and experts in the diagnostic, clinical, and therapeutic management of bipolar disorder. The relapse monitoring board met six times during the study to review patient relapse data and was charged with the responsibility of determining if the events described by investigators qualified as relapses. Additionally, the relapse monitoring board reviewed data for all randomized patients to identify any relapse events not recognized by investigators. Results: Primary efficacy results were similar and significant for investigator- and relapse monitoring board-determined relapses. Ten discrepancies were noted: two of the 42 investigator-determined relapses did not meet the intended clinical relapse threshold as determined by the relapse monitoring board; conversely, the relapse monitoring board confirmed eight relapse events not identified by investigators. The relapse monitoring board had no direct interactions with patients and had to rely on the accuracy of investigator assessments. Also, once an investigator determined a relapse and the patients discontinued the study, less information was available to the relapse monitoring board for relapse assessment. 
Conclusions: Use of the relapse monitoring board supported the validity of the study by incorporating a level of standardization to mitigate the risk that local practice in different cultures and medical systems at the sites would confound study results. PMID:22132367

  13. Design of power cable grounding wire anti-theft monitoring system

    NASA Astrophysics Data System (ADS)

    An, Xisheng; Lu, Peng; Wei, Niansheng; Hong, Gang

    2018-01-01

    In order to prevent the serious consequences of grid failures caused by the theft of power cable grounding wires, this paper presents a GPRS-based anti-theft monitoring system for power cable grounding wires, which includes a camera module, a sensor module, a microprocessor system module, a data monitoring center module, and a mobile terminal module. Our design combines two methods of detection and comprehensive image reporting; it can effectively address the theft of grounding wires and grounding-wire boxes, enable timely follow-up on grounding-wire theft events, prevent faults on high-voltage transmission lines, and improve the reliability and safe operation of the power grid.

  14. Low-cost failure sensor design and development for water pipeline distribution systems.

    PubMed

    Khan, K; Widdop, P D; Day, A J; Wood, A S; Mounce, S R; Machell, J

    2002-01-01

    This paper describes the design and development of a new sensor which is low cost to manufacture and install and is reliable in operation with sufficient accuracy, resolution and repeatability for use in newly developed systems for pipeline monitoring and leakage detection. To provide an appropriate signal, the concept of a "failure" sensor is introduced, in which the output is not necessarily proportional to the input, but is unmistakably affected when an unusual event occurs. The design of this failure sensor is based on the water opacity which can be indicative of an unusual event in a water distribution network. The laboratory work and field trials necessary to design and prove out this type of failure sensor are described here. It is concluded that a low-cost failure sensor of this type has good potential for use in a comprehensive water monitoring and management system based on Artificial Neural Networks (ANN).
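    The "failure sensor" concept (an output that need not be proportional to the input, but is unmistakably affected when an unusual event occurs) can be sketched as a simple band check on opacity readings. The normal band below is a hypothetical value, not one from the paper.

```python
def failure_signal(opacity_readings, normal_band=(0.0, 0.2)):
    """Binary failure signal: 0 while opacity stays inside the normal
    band, 1 whenever a reading falls outside it. The exact magnitude of
    the reading is deliberately not preserved."""
    lo, hi = normal_band
    return [0 if lo <= r <= hi else 1 for r in opacity_readings]

# Third reading is far outside the normal band, so the signal flips.
print(failure_signal([0.05, 0.1, 0.6, 0.08]))  # [0, 0, 1, 0]
```

A stream of such binary signals from many low-cost sensors is the kind of input an ANN-based leakage detection system could consume.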

  15. Development of an online morbidity, mortality, and near-miss reporting system to identify patterns of adverse events in surgical patients.

    PubMed

    Bilimoria, Karl Y; Kmiecik, Thomas E; DaRosa, Debra A; Halverson, Amy; Eskandari, Mark K; Bell, Richard H; Soper, Nathaniel J; Wayne, Jeffrey D

    2009-04-01

    To design a Web-based system to track adverse and near-miss events, to establish an automated method to identify patterns of events, and to assess the adverse event reporting behavior of physicians. A Web-based system was designed to collect physician-reported adverse events including weekly Morbidity and Mortality (M&M) entries and anonymous adverse/near-miss events. An automated system was set up to help identify event patterns. Adverse event frequency was compared with hospital databases to assess reporting completeness. A metropolitan tertiary care center. Identification of adverse event patterns and completeness of reporting. From September 2005 to August 2007, 15,524 surgical patients were reported including 957 (6.2%) adverse events and 34 (0.2%) anonymous reports. The automated pattern recognition system helped identify 4 event patterns from M&M reports and 3 patterns from anonymous/near-miss reporting. After multidisciplinary meetings and expert reviews, the patterns were addressed with educational initiatives, correction of systems issues, and/or intensive quality monitoring. Only 25% of complications and 42% of inpatient deaths were reported. A total of 75.2% of adverse events resulting in permanent disability or death were attributed to the nature of the disease. Interventions to improve reporting were largely unsuccessful. We have developed a user-friendly Web-based system to track complications and identify patterns of adverse events. Underreporting of adverse events and attributing the complication to the nature of the disease represent a problem in reporting culture among surgeons at our institution. Similar systems should be used by surgery departments, particularly those affiliated with teaching hospitals, to identify quality improvement opportunities.

  16. Monitoring of the infrastructure and services used to handle and automatically produce Alignment and Calibration conditions at CMS

    NASA Astrophysics Data System (ADS)

    Sipos, Roland; Govi, Giacomo; Franzoni, Giovanni; Di Guida, Salvatore; Pfeiffer, Andreas

    2017-10-01

    The CMS experiment at the CERN LHC has a dedicated infrastructure to handle alignment and calibration data. This infrastructure is composed of several services, which take on the various data management tasks required for the consumption of non-event data (also called condition data) in the experiment's activities. The criticality of these tasks imposes tight requirements on the availability and reliability of the services executing them. In this scope, a comprehensive monitoring and alarm-generating system has been developed. The system is implemented on top of Nagios, the open-source industry standard for monitoring and alerting, and monitors the database back-end, the hosting nodes, and key heart-beat functionalities for all the services involved. This paper describes the design, implementation, and operational experience with the monitoring system developed and deployed at CMS in 2016.
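    Nagios checks follow a small, well-known plugin contract: a check prints a status line and exits with 0 (OK), 1 (WARNING), or 2 (CRITICAL). A minimal heart-beat check in that style might look as follows; the thresholds are illustrative and not CMS's actual configuration.

```python
def check_heartbeat(last_beat_ts, now, warn_s=120, crit_s=600):
    """Nagios-style heart-beat check: classify the age of the last
    heartbeat as OK / WARNING / CRITICAL and return (exit_code, message)."""
    age = now - last_beat_ts
    if age >= crit_s:
        return 2, "CRITICAL - last heartbeat %.0fs ago" % age
    if age >= warn_s:
        return 1, "WARNING - last heartbeat %.0fs ago" % age
    return 0, "OK - last heartbeat %.0fs ago" % age

code, msg = check_heartbeat(last_beat_ts=0, now=90)
print(code, msg)  # 0 OK - last heartbeat 90s ago
```

In a real deployment the returned code would become the process exit status (`sys.exit(code)`) so that Nagios can raise the corresponding alarm.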

  17. Control charts for monitoring accumulating adverse event count frequencies from single and multiple blinded trials.

    PubMed

    Gould, A Lawrence

    2016-12-30

    Conventional practice monitors accumulating information about drug safety in terms of the numbers of adverse events reported from trials in a drug development program. Estimates of between-treatment adverse event risk differences can be obtained readily from unblinded trials with adjustment for differences among trials using conventional statistical methods. Recent regulatory guidelines require monitoring the cumulative frequency of adverse event reports to identify possible between-treatment adverse event risk differences without unblinding ongoing trials. Conventional statistical methods for assessing between-treatment adverse event risks cannot be applied when the trials are blinded. However, CUSUM charts can be used to monitor the accumulation of adverse event occurrences. CUSUM charts for monitoring adverse event occurrence in a Bayesian paradigm are based on assumptions about the process generating the adverse event counts in a trial as expressed by informative prior distributions. This article describes the construction of control charts for monitoring adverse event occurrence based on statistical models for the processes, characterizes their statistical properties, and describes how to construct useful prior distributions. Application of the approach to two adverse events of interest in a real trial gave nearly identical results for binomial and Poisson observed event count likelihoods. Copyright © 2016 John Wiley & Sons, Ltd.
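    A conventional one-sided Poisson CUSUM for accumulating event counts can be sketched as follows. This is the textbook chart, not the paper's Bayesian construction, and the rates, decision threshold, and counts below are made-up values.

```python
import math

def poisson_cusum(counts, lam0, lam1, h):
    """One-sided Poisson CUSUM: signal when the statistic crosses h.
    k is the usual Poisson reference value between the in-control rate
    lam0 and the elevated rate lam1 to be detected quickly."""
    k = (lam1 - lam0) / math.log(lam1 / lam0)
    s, signals = 0.0, []
    for t, x in enumerate(counts):
        s = max(0.0, s + x - k)
        if s > h:
            signals.append(t)
            s = 0.0  # restart the chart after a signal
    return signals

# In-control rate of 2 events per interval; a sustained rise toward 5
# triggers signals once the statistic accumulates past h.
print(poisson_cusum([2, 1, 3, 2, 6, 5, 7, 6], lam0=2.0, lam1=5.0, h=4.0))  # [5, 7]
```

Because the statistic resets at zero, short in-control stretches do not build up evidence, which is what makes CUSUM charts suited to monitoring slowly accumulating blinded counts.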

  18. Passive Seismic Monitoring for Rockfall at Yucca Mountain: Concept Tests

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cheng, J; Twilley, K; Murvosh, H

    2003-03-03

    For the purpose of proof-testing a system intended to remotely monitor rockfall inside a potential radioactive waste repository at Yucca Mountain, a system of seismic sub-arrays will be deployed and tested on the surface of the mountain. The goal is to identify and locate rockfall events remotely using automated data collecting and processing techniques. We install seismometers on the ground surface, generate seismic energy to simulate rockfall in underground space beneath the array, and interpret the surface response to discriminate and locate the event. Data will be analyzed using matched-field processing, a generalized beam forming method for localizing discrete signals. Software is being developed to facilitate the processing. To date, a three-component sub-array has been installed and successfully tested.
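    As a rough illustration of locating an event from a surface array, the sketch below performs a simple grid search over candidate epicenters using only first-arrival times and a uniform velocity; this is a far cruder scheme than the matched-field processing the study employs, and all coordinates, velocities, and grid settings are invented for the example:

```python
import math

def locate_event(stations, arrivals, v=3.0, grid=None):
    """Grid-search epicenter estimate from first-arrival times.

    stations: (x, y) sensor positions in km; arrivals: observed arrival
    times in s; v: assumed uniform wave speed in km/s. Minimizing the
    variance of the residuals t_obs - dist/v removes the unknown origin
    time from the problem.
    """
    if grid is None:  # default: 10 km x 10 km area at 0.1 km spacing
        grid = [(i * 0.1, j * 0.1) for i in range(101) for j in range(101)]
    best, best_misfit = None, float("inf")
    for gx, gy in grid:
        residuals = [t - math.hypot(gx - sx, gy - sy) / v
                     for (sx, sy), t in zip(stations, arrivals)]
        mean = sum(residuals) / len(residuals)
        misfit = sum((r - mean) ** 2 for r in residuals)
        if misfit < best_misfit:
            best, best_misfit = (gx, gy), misfit
    return best

# Synthetic check: arrivals generated from a source at (3, 4) km.
stations = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0), (10.0, 10.0)]
arrivals = [math.hypot(3 - sx, 4 - sy) / 3.0 for sx, sy in stations]
best = locate_event(stations, arrivals)
```

    Matched-field processing generalizes this by correlating full recorded waveforms against modeled replicas for each candidate location, rather than reducing each trace to a single pick.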

  19. Permeable pavement monitoring at the EPA's Edison Environmental Center demonstration site

    EPA Science Inventory

    There are few detailed studies of full-scale, replicated, actively-used pervious pavement systems. Practitioners need additional studies of pervious pavement systems in their intended applications (parking lot, roadway, etc.) during a range of climatic events, daily usage conditions...

  20. POTENTIAL OF BIOLOGICAL MONITORING SYSTEMS TO DETECT TOXICITY IN A FINISHED MATRIX

    EPA Science Inventory

    Distribution systems of the U.S. are vulnerable to natural and anthropogenic factors affecting quality for use as drinking water. Important factors include physical parameters such as increased turbidity, ecological cycles such as algal blooms, and episodic contamination events ...

  1. User Centric Job Monitoring - a redesign and novel approach in the STAR experiment

    NASA Astrophysics Data System (ADS)

    Arkhipkin, D.; Lauret, J.; Zulkarneeva, Y.

    2014-06-01

    User Centric Monitoring (or UCM) has been a long-awaited feature in STAR, wherein programs, workflows and system "events" could be logged, broadcast and later analyzed. UCM collects and filters available job monitoring information from various resources and presents it in a user-centric view rather than an administrative-centric point of view. The first implementation of "a" UCM approach was made in STAR in 2004 using a log4cxx plug-in back-end; it then evolved with a push toward a scalable database back-end (2006) and finally a Web-Service approach (2010, CSW4DB SBIR). The latter proved incomplete and did not address the evolving needs of the experiment, where streamlined messages for online (data acquisition) purposes, continuous support for data mining, and event analysis need to coexist and be unified in a seamless approach. The code also proved hard to maintain. This paper presents the next evolutionary step of the UCM toolkit: a redesign and redirection of our latest attempt that acknowledges and integrates recent technologies in a simpler, maintainable and yet scalable manner. The extended version of the job logging package is built on a three-tier approach based on Task, Job and Event, and features a Web-Service based logging API, a responsive AJAX-powered user interface, and a database back-end relying on MongoDB, which is uniquely suited to STAR's needs. In addition, we present details of the integration of this logging package with the STAR offline and online software frameworks. Leveraging the reported experience of the ATLAS and CMS experiments with the ESPER engine, we discuss and show how such an approach has been implemented in STAR for meta-data event triggering, stream processing and filtering. An ESPER-based solution fits well into the online data acquisition system, where many systems are monitored.

  2. An integrated observational site for monitoring pre-earthquake processes in Peloponnese, Greece. Preliminary results.

    NASA Astrophysics Data System (ADS)

    Tsinganos, Kanaris; Karastathis, Vassilios K.; Kafatos, Menas; Ouzounov, Dimitar; Tselentis, Gerassimos; Papadopoulos, Gerassimos A.; Voulgaris, Nikolaos; Eleftheriou, Georgios; Mouzakiotis, Evangellos; Liakopoulos, Spyridon; Aspiotis, Theodoros; Gika, Fevronia; E Psiloglou, Basil

    2017-04-01

    We present the first results from a new integrated observational site in Greece for studying pre-earthquake processes in Peloponnese, led by the National Observatory of Athens. We have developed a prototype multiparameter network approach using an integrated system aimed at monitoring and thoroughly studying pre-earthquake processes in the high-seismicity area of the Western Hellenic Arc (SW Peloponnese, Greece). The initial prototype of the new observational system consists of: (1) continuous real-time monitoring of radon accumulation in the ground through a network of radon sensors, consisting of three gamma radiation detectors [NaI(Tl) scintillators], (2) a nine-station seismic array installed to detect and locate events of low magnitude (less than 1.0 R) in the offshore area of the Hellenic arc, (3) real-time weather monitoring systems (air temperature, relative humidity, precipitation, pressure) and (4) satellite thermal radiation from AVHRR/NOAA-18 polar-orbit sensing. The first few months of operation revealed a number of pre-seismic radon variation anomalies before several earthquakes (M>3.6). The radon increases systematically before the larger events; for example, a radon anomaly was prominent before the event of Sep 28, M 5.0 (36.73°N, 21.87°E), 18 km ESE of Methoni. The seismic array assists in the evaluation of current seismicity and may yield identification of foreshock activity. Thermal anomalies in satellite images are also examined as an additional tool for evaluating and verifying the radon increase. According to the Lithosphere-Atmosphere-Ionosphere Coupling (LAIC) concept, atmospheric thermal anomalies observed before large seismic events are associated with an increase of radon concentration in the ground. Details of the integration of ground and space observations, the overall performance of the observational sites, and future plans for advancing cooperation in observations will be discussed.

  3. Geostationary Communications Satellites as Sensors for the Space Weather Environment: Telemetry Event Identification Algorithms

    NASA Astrophysics Data System (ADS)

    Carlton, A.; Cahoy, K.

    2015-12-01

    Reliability of geostationary communication satellites (GEO ComSats) is critical to many industries worldwide. The space radiation environment poses a significant threat and manufacturers and operators expend considerable effort to maintain reliability for users. Knowledge of the space radiation environment at the orbital location of a satellite is of critical importance for diagnosing and resolving issues resulting from space weather, for optimizing cost and reliability, and for space situational awareness. For decades, operators and manufacturers have collected large amounts of telemetry from geostationary (GEO) communications satellites to monitor system health and performance, yet this data is rarely mined for scientific purposes. The goal of this work is to acquire and analyze archived data from commercial operators using new algorithms that can detect when a space weather (or non-space weather) event of interest has occurred or is in progress. We have developed algorithms, collectively called SEER (System Event Evaluation Routine), to statistically analyze power amplifier current and temperature telemetry by identifying deviations from nominal operations or other events and trends of interest. This paper focuses on our work in progress, which currently includes methods for detection of jumps ("spikes", outliers) and step changes (changes in the local mean) in the telemetry. We then examine available space weather data from the NOAA GOES and the NOAA-computed Kp index and sunspot numbers to see what role, if any, it might have played. By combining the results of the algorithm for many components, the spacecraft can be used as a "sensor" for the space radiation environment. Similar events occurring at one time across many component telemetry streams may be indicative of a space radiation event or system-wide health and safety concern. 
Using SEER on representative datasets of telemetry from Inmarsat and Intelsat, we find events that occur across all or many of the telemetry files on certain dates. We compare these system-wide events to known space weather storms, such as the 2003 Halloween storms, and to spacecraft operational events, such as maneuvers. We also present future applications and expansions of SEER for robust space environment sensing and system health and safety monitoring.
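    The two detection tasks named above, jumps (spikes/outliers) and step changes (shifts in the local mean), can be sketched in a few lines. The robust z-score and window-mean comparisons below are generic textbook detectors, not SEER's actual algorithms, and all thresholds are illustrative:

```python
import statistics

def detect_jumps(series, z=5.0):
    """Flag 'spike' outliers via robust z-scores (median/MAD).
    The threshold z is an illustrative setting."""
    med = statistics.median(series)
    mad = statistics.median(abs(x - med) for x in series) or 1e-9
    return [i for i, x in enumerate(series)
            if abs(x - med) / (1.4826 * mad) > z]

def detect_steps(series, window=5, delta=1.0):
    """Flag step changes: compare the means of adjacent sliding windows
    and report indices where the local mean shifts by more than delta."""
    steps = []
    for i in range(window, len(series) - window):
        before = sum(series[i - window:i]) / window
        after = sum(series[i:i + window]) / window
        if abs(after - before) > delta:
            steps.append(i)
    return steps

# Illustrative telemetry: one spike, then a sustained level shift.
spikes = detect_jumps([1.0, 1.0, 1.0, 10.0, 1.0, 1.0, 1.0])
steps = detect_steps([0.0] * 8 + [2.0] * 8, window=3, delta=1.0)
```

    Flagged indices from many component streams could then be cross-referenced by date, in the spirit of the system-wide comparison described above.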

  4. Monitoring of pipeline oil spill fire events using Geographical Information System and Remote Sensing

    NASA Astrophysics Data System (ADS)

    Ogungbuyi, M. G.; Eckardt, F. D.; Martinez, P.

    2016-12-01

    Nigeria, the largest producer of crude oil in Africa, ranks sixth in the world. Despite such huge oil revenue potential, its pipeline network system is consistently susceptible to leaks causing oil spills. We investigate ground-based spill events caused by operational error, equipment failure and, most importantly, deliberate attacks along the major pipeline transport system. Sometimes these spills are accompanied by fire explosions caused by accidental discharge, natural causes, or illegal refineries in the creeks. MODIS satellite fire data corresponding to the times and locations of the spill events (i.e. ground-based data) in the Area of Interest (AOI) show significant correlation. The open-source Quantum Geographical Information System (QGIS) was used to validate the dataset, and spatiotemporal analyses of the oil spill fires were performed. We demonstrate that through QGIS and Google Earth (using the time sliders) we can identify and monitor oil spills when they are accompanied by fire events along the pipeline transport system. This is shown through the spatiotemporal images of the fires. Evidence of fire cases resulting from burnt vegetation, as distinct from industrial and domestic fires, is also presented. Detecting oil spill fires in the study location may not require an enormous image-processing effort: we can instead rely on near-real-time (NRT) MODIS data, readily available twice daily, to detect oil spill fires as an early warning signal for those hotspot areas where cases of oil seepage are significant in Nigeria.

  5. Valve Health Monitoring System Utilizing Smart Instrumentation

    NASA Technical Reports Server (NTRS)

    Jensen, Scott L.; Drouant, George J.

    2006-01-01

    The valve monitoring system is a stand-alone unit with network capabilities for integration into a higher-level health management system. The system is designed to aid in failure predictions of high-geared ball valves and linearly actuated valves. It performs data tracking and archiving to identify degraded performance. The data collection types are cryogenic cycles, total cycles, inlet temperature, body temperature, torsional strain, linear bonnet strain, preload position, total travel and total directional changes. Events are recorded and time stamped in accordance with IRIG-B True Time. The monitoring system is designed for use in a Class 1 Division II explosive environment. The basic configuration consists of several instrumentation sensor units and a base station. The sensor units are self-contained, microprocessor controlled, and remotely mountable within three by three by two inches. Each unit is potted in a fire-retardant substance without any cavities and limited to low operating power to maintain safe operation in a hydrogen environment. The units are temperature monitored to safeguard against operation outside temperature limitations. Each contains 902-928 MHz band digital transmitters which meet the Federal Communications Commission's requirements and are limited to a 35-foot transmission radius to preserve data security. The base-station controller correlates data from the sensor units and generates data event logs on a compact flash memory module for database uploading. The entries are also broadcast over an Ethernet network. Nitrogen-purged National Electrical Manufacturers Association (NEMA) Class 4 enclosures are used to house the base station.

  6. Valve health monitoring system utilizing smart instrumentation

    NASA Astrophysics Data System (ADS)

    Jensen, Scott L.; Drouant, George J.

    2006-05-01

    The valve monitoring system is a stand-alone unit with network capabilities for integration into a higher-level health management system. The system is designed to aid in failure predictions of high-geared ball valves and linearly actuated valves. It performs data tracking and archiving to identify degraded performance. The data collection types are: cryogenic cycles, total cycles, inlet temperature, outlet temperature, body temperature, torsional strain, linear bonnet strain, preload position, total travel, and total directional changes. Events are recorded and time stamped in accordance with IRIG-B True Time. The monitoring system is designed for use in a Class 1 Division II explosive environment. The basic configuration consists of several instrumentation sensor units and a base station. The sensor units are self-contained, microprocessor controlled, and remotely mountable within three by three by two inches. Each unit is potted in a fire-retardant substance without any cavities and limited to low operating power to maintain safe operation in a hydrogen environment. The units are temperature monitored to safeguard against operation outside temperature limitations. Each contains 902-928 MHz band digital transmitters which meet the Federal Communications Commission's requirements and are limited to a 35-foot transmission radius to preserve data security. The base-station controller correlates related data from the sensor units and generates data event logs on a compact flash memory module for database uploading. The entries are also broadcast over an Ethernet network. Nitrogen-purged National Electrical Manufacturers Association (NEMA) Class 4 enclosures are used to house the base station.

  7. Safety management for polluted confined space with IT system: a running case.

    PubMed

    Hwang, Jing-Jang; Wu, Chien-Hsing; Zhuang, Zheng-Yun; Hsu, Yi-Chang

    2015-01-01

    This study traced a real, deployed IT system that enhances occupational safety in a polluted confined space. By incorporating wireless technology, it automatically monitors the status of workers on site and notifies managers effectively when anomalous events are detected. The system, with a redefined standard operations process, is running well at one of Formosa Petrochemical Corporation's refineries. Evidence shows that after deployment, the system does enhance the safety level through real-time monitoring of workers and effective management and control of anomalies. Such a technical architecture can therefore be applied to similar scenarios for safety enhancement purposes.

  8. Transient upset models in computer systems

    NASA Technical Reports Server (NTRS)

    Mason, G. M.

    1983-01-01

    Essential factors for the design of transient upset monitors for computers are discussed. The upset is a system level event that is software dependent. It can occur in the program flow, the opcode set, the opcode address domain, the read address domain, and the write address domain. Most upsets are in the program flow. It is shown that simple, external monitors functioning transparently relative to the system operations can be built if a detailed accounting is made of the characteristics of the faults that can happen. Sample applications are provided for different states of the Z-80 and 8085 based system.

  9. Integrating physically based simulators with Event Detection Systems: Multi-site detection approach.

    PubMed

    Housh, Mashor; Ohar, Ziv

    2017-03-01

    The Fault Detection (FD) problem in control theory concerns monitoring a system to identify when a fault has occurred. Two approaches can be distinguished for FD: signal-processing-based FD and model-based FD. The former develops algorithms to infer faults directly from sensors' readings, while the latter uses a simulation model of the real system to analyze the discrepancy between sensors' readings and the values expected from the simulation model. Most contamination Event Detection Systems (EDSs) for water distribution systems have followed signal-processing-based FD, which relies on analyzing the signals from monitoring stations independently of each other, rather than evaluating all stations simultaneously within an integrated network. In this study, we show that a model-based EDS, which utilizes physically based water quality and hydraulic simulation models, can outperform a signal-processing-based EDS. We also show that the model-based EDS can facilitate the development of a Multi-Site EDS (MSEDS), which analyzes the data from all the monitoring stations simultaneously within an integrated network. The advantage of the joint analysis in the MSEDS is expressed by increased detection accuracy (more true positive alarms and fewer false alarms) and shorter detection time. Copyright © 2016 Elsevier Ltd. All rights reserved.
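    A minimal model-based, multi-site check along these lines might compare each station's reading against its simulated expectation and alarm only when several stations deviate at once. The residual threshold and the two-station requirement below are illustrative assumptions; in practice `simulated` would come from a hydraulic/water-quality simulator rather than a fixed list:

```python
def flag_events(observed, simulated, sigma, k=3.0, min_stations=2):
    """Model-based multi-site event check: flag time step t when the
    residual |obs - sim| exceeds k * sigma at min_stations or more
    stations simultaneously. sigma, k, and min_stations are
    illustrative assumptions, not values from the study."""
    alarms = []
    for t, (obs_t, sim_t) in enumerate(zip(observed, simulated)):
        n_exceed = sum(1 for o, s in zip(obs_t, sim_t)
                       if abs(o - s) > k * sigma)
        if n_exceed >= min_stations:
            alarms.append(t)
    return alarms

# Three stations over three time steps; only the last step deviates
# from the simulated values at two stations at once.
observed = [[0.5, 0.5, 0.5], [0.5, 0.9, 0.5], [0.9, 0.9, 0.5]]
simulated = [[0.5, 0.5, 0.5]] * 3
alarms = flag_events(observed, simulated, sigma=0.05)
```

    Requiring joint exceedance across stations is one simple way to express the integrated-network idea: a single noisy station no longer triggers a false alarm on its own.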

  10. Optimizing the real-time ground level enhancement alert system based on neutron monitor measurements: Introducing GLE Alert Plus

    NASA Astrophysics Data System (ADS)

    Souvatzoglou, G.; Papaioannou, A.; Mavromichalaki, H.; Dimitroulakos, J.; Sarlanis, C.

    2014-11-01

    Whenever a significant intensity increase is recorded by at least three neutron monitor stations in real-time mode, a ground level enhancement (GLE) event is marked and an automated alert is issued. Although the physical concept of the algorithm is solid and has worked efficiently in a number of cases, the availability of real-time data is still an open issue and makes timely GLE alerts quite challenging. In this work we present the optimization of the GLE alert that has been in operation since 2006 at the Athens Neutron Monitor Station. This upgrade has led to GLE Alert Plus, which is currently based upon the Neutron Monitor Database (NMDB). We have determined the critical values per station, allowing us to issue reliable GLE alerts close to the initiation of the event while keeping the false alert rate at low levels. Furthermore, we have addressed the problem of data availability by introducing the Go-Back-N algorithm. A total of 13 GLE events were marked from January 2000 to December 2012; GLE Alert Plus issued an alert for 12 of them. These alert times are compared to the alert times of the GOES Space Weather Prediction Center and the Solar Energetic Particle forecaster of the University of Málaga (UMASEP). In all cases GLE Alert Plus precedes the GOES alert by ≈8-52 min, and the comparison with UMASEP demonstrated remarkably good agreement. Real-time GLE alerts from GLE Alert Plus may be retrieved at http://cosray.phys.uoa.gr/gle_alert_plus.html, http://www.nmdb.eu, and http://swe.ssa.esa.int/web/guest/space-radiation. An automated GLE alert email notification system is also available to interested users.
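    The core alert rule, a count-rate increase above a station-specific critical value at a minimum number of stations, can be sketched as follows. The station names, thresholds, and count rates are invented for illustration and are not the NMDB critical values:

```python
def gle_alert(critical, baseline, latest, n_required=3):
    """Mark a GLE alert when at least n_required stations each show a
    count-rate increase above their station-specific critical threshold
    (in percent). All numbers used here are invented for illustration."""
    alerting = sorted(
        name for name, thr in critical.items()
        if name in baseline and name in latest
        and (latest[name] - baseline[name]) / baseline[name] * 100.0 > thr)
    return len(alerting) >= n_required, alerting

# Hypothetical stations with per-station critical increases (%).
critical = {"ATHN": 4.0, "OULU": 3.0, "SOPO": 5.0, "THUL": 4.0}
baseline = {s: 100.0 for s in critical}
latest = {"ATHN": 106.0, "OULU": 104.0, "SOPO": 103.0, "THUL": 107.0}
fired, names = gle_alert(critical, baseline, latest)
```

    The `name in latest` guard reflects the data-availability problem the abstract raises: a station whose real-time feed has stalled simply cannot contribute to the vote, which is why a retransmission scheme such as Go-Back-N matters for timeliness.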

  11. The third level trigger and output event unit of the UA1 data-acquisition system

    NASA Astrophysics Data System (ADS)

    Cittolin, S.; Demoulin, M.; Fucci, A.; Haynes, W.; Martin, B.; Porte, J. P.; Sphicas, P.

    1989-12-01

    The upgraded UA1 experiment utilizes twelve 3081/E emulators for its third-level trigger system. The system is interfaced to VME and is controlled by 68000-microprocessor VME boards on the input and output. The output controller communicates with an IBM 9375 mainframe via the CERN-IBM developed VICI interface. The events selected by the emulators are output on IBM-3480 cassettes. The user interface to this system is based on a series of Macintosh personal computers connected to the VME bus. These Macs are also used for developing software for the emulators and for monitoring the entire system. The same configuration has also been used for offline event reconstruction. A description of the system, together with details of both the online and offline modes of operation and an evaluation of its performance, is presented.

  12. Detection and Mapping of the September 2017 Mexico Earthquakes Using DAS Fiber-Optic Infrastructure Arrays

    NASA Astrophysics Data System (ADS)

    Karrenbach, M. H.; Cole, S.; Williams, J. J.; Biondi, B. C.; McMurtry, T.; Martin, E. R.; Yuan, S.

    2017-12-01

    Fiber-optic distributed acoustic sensing (DAS) uses conventional telecom fibers for a wide variety of monitoring purposes. Fiber-optic arrays can be located along pipelines for leak detection; along borders and perimeters to detect and locate intruders; or along railways and roadways to monitor traffic and identify and manage incidents. DAS can also be used to monitor oil and gas reservoirs and to detect earthquakes. Because thousands of such arrays are deployed worldwide and acquire data continuously, they can be a valuable source of data for earthquake detection and location, and could potentially provide important information to earthquake early-warning systems. In this presentation, we show that DAS arrays in Mexico and the United States detected the M8.1 and M7.2 Mexico earthquakes in September 2017. At Stanford University, we have deployed a 2.4 km fiber-optic DAS array in a figure-eight pattern, with 600 channels spaced 4 meters apart. Data have been recorded continuously since September 2016. Over 800 earthquakes from across California have been detected and catalogued. Distant teleseismic events have also been recorded, including the two Mexican earthquakes. In Mexico, fiber-optic arrays attached to pipelines also detected these two events. Because of the length of these arrays and their proximity to the event locations, we can not only detect the earthquakes but also make location estimates, potentially in near real time. In this presentation, we review the data recorded for these two events at Stanford and in Mexico. We compare the waveforms recorded by the DAS arrays to those recorded by traditional earthquake sensor networks. Using the wide coverage provided by the pipeline arrays, we estimate the event locations. Such fiber-optic DAS networks can potentially play a role in earthquake early-warning systems, allowing actions to be taken to minimize the impact of an earthquake on critical infrastructure components. 
While many such fiber-optic networks are already in place, new arrays can be created on demand, using existing fiber-optic telecom cables, for specific monitoring situations such as recording aftershocks of a large earthquake or monitoring induced seismicity.

  13. Publicly Available Online Tool Facilitates Real-Time Monitoring Of Vaccine Conversations And Sentiments.

    PubMed

    Bahk, Chi Y; Cumming, Melissa; Paushter, Louisa; Madoff, Lawrence C; Thomson, Angus; Brownstein, John S

    2016-02-01

    Real-time monitoring of mainstream and social media can inform public health practitioners and policy makers about vaccine sentiment and hesitancy. We describe a publicly available platform for monitoring vaccination-related content, called the Vaccine Sentimeter. With automated data collection from 100,000 mainstream media sources and Twitter, natural-language processing for automated filtering, and manual curation to ensure accuracy, the Vaccine Sentimeter offers a global real-time view of vaccination conversations online. To assess the system's utility, we followed two events: polio vaccination in Pakistan after a news story about a Central Intelligence Agency vaccination ruse and subsequent attacks on health care workers, and a controversial episode in a television program about adverse events following human papillomavirus vaccination. For both events, increased online activity was detected and characterized. For the first event, Twitter response to the attacks on health care workers decreased drastically after the first attack, in contrast to mainstream media coverage. For the second event, the mainstream and social media response was largely positive about the HPV vaccine, but antivaccine conversations persisted longer than the provaccine reaction. Using the Vaccine Sentimeter could enable public health professionals to detect increased online activity or sudden shifts in sentiment that could affect vaccination uptake. Project HOPE—The People-to-People Health Foundation, Inc.

  14. Heterogeneous but “Standard” Coding Systems for Adverse Events: Issues in Achieving Interoperability between Apples and Oranges

    PubMed Central

    Richesson, Rachel L.; Fung, Kin Wah; Krischer, Jeffrey P.

    2008-01-01

    Monitoring adverse events (AEs) is an important part of clinical research and a crucial target for data standards. The representation of adverse events themselves requires the use of controlled vocabularies with thousands of needed clinical concepts. Several data standards for adverse events currently exist, each with a strong user base. The structure and features of these current adverse event data standards (including terminologies and classifications) are different, so comparisons and evaluations are not straightforward, nor are strategies for their harmonization. Three different data standards - the Medical Dictionary for Regulatory Activities (MedDRA) and the Systematized Nomenclature of Medicine Clinical Terms (SNOMED CT) terminologies, and Common Terminology Criteria for Adverse Events (CTCAE) classification - are explored as candidate representations for AEs. This paper describes the structural features of each coding system, their content and relationship to the Unified Medical Language System (UMLS), and unsettled issues for future interoperability of these standards. PMID:18406213

  15. Evaluating the automated blood glucose pattern detection and case-retrieval modules of the 4 Diabetes Support System.

    PubMed

    Schwartz, Frank L; Vernier, Stanley J; Shubrook, Jay H; Marling, Cynthia R

    2010-11-01

    We have developed a prototypical case-based reasoning system to enhance management of patients with type 1 diabetes mellitus (T1DM). The system is capable of automatically analyzing large volumes of life events, self-monitoring of blood glucose readings, continuous glucose monitoring system results, and insulin pump data to detect clinical problems. In a preliminary study, manual entry of large volumes of life-event and other data was too burdensome for patients. In this study, life-event and pump data collection were automated, and then the system was reevaluated. Twenty-three adult T1DM patients on insulin pumps completed the five-week study. A usual daily schedule was entered into the database, and patients were only required to upload their insulin pump data to Medtronic's CareLink® Web site weekly. Situation assessment routines were run weekly for each participant to detect possible problems, and once the trial was completed, the case-retrieval module was tested. Using the situation assessment routines previously developed, the system found 295 possible problems. The enhanced system detected only 2.6 problems per patient per week compared to 4.9 problems per patient per week in the preliminary study (p=.017). Problems detected by the system were correctly identified in 97.9% of the cases, and 96.1% of these were clinically useful. With less life-event data, the system is unable to detect certain clinical problems and detects fewer problems overall. Additional work is needed to provide device/software interfaces that allow patients to provide this data quickly and conveniently. © 2010 Diabetes Technology Society.

  16. Healthcare Blockchain System Using Smart Contracts for Secure Automated Remote Patient Monitoring.

    PubMed

    Griggs, Kristen N; Ossipova, Olya; Kohlios, Christopher P; Baccarini, Alessandro N; Howson, Emily A; Hayajneh, Thaier

    2018-06-06

    As Internet of Things (IoT) devices and other remote patient monitoring systems increase in popularity, security concerns about the transfer and logging of data transactions arise. In order to handle the protected health information (PHI) generated by these devices, we propose utilizing blockchain-based smart contracts to facilitate secure analysis and management of medical sensors. Using a private blockchain based on the Ethereum protocol, we created a system where the sensors communicate with a smart device that calls smart contracts and writes records of all events on the blockchain. This smart contract system would support real-time patient monitoring and medical interventions by sending notifications to patients and medical professionals, while also maintaining a secure record of who has initiated these activities. This would resolve many security vulnerabilities associated with remote patient monitoring and automate the delivery of notifications to all involved parties in a HIPAA compliant manner.

  17. Monitoring with Data Automata

    NASA Technical Reports Server (NTRS)

    Havelund, Klaus

    2014-01-01

    We present a form of automaton, referred to as data automata, suited for monitoring sequences of data-carrying events, for example those emitted by an executing software system. This form of automaton allows states to be parameterized with data, forming named records, which are stored in an efficiently indexed data structure, a form of database. This very explicit approach differs from other automaton-based monitoring approaches. Data automata are also characterized by allowing transition conditions to refer to other parameterized states, and by allowing transition sequences. The presented automaton concept is inspired by rule-based systems, especially the Rete algorithm, which is one of the well-established algorithms for executing rule-based systems. We present an optimized external DSL for data automata, as well as a comparable unoptimized internal DSL (API) in the Scala programming language, in order to compare the two solutions. An evaluation compares these two solutions to several other monitoring systems.
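    The idea of states parameterized by data and stored in an indexed fact base can be illustrated with a toy monitor. This is a hand-rolled Python sketch of the kind of property data automata handle (every opened resource is eventually closed), not the paper's DSL:

```python
class OpenCloseMonitor:
    """Toy data-automaton-style monitor: the active state for each file
    name is a fact in a dict, which plays the role of the indexed fact
    database; the monitored property is 'every open(f) is eventually
    followed by close(f)'."""

    def __init__(self):
        self.open_files = {}  # file name -> index of the opening event
        self.errors = []

    def step(self, i, event):
        kind, name = event
        if kind == "open":
            if name in self.open_files:
                self.errors.append((i, f"{name} reopened while open"))
            self.open_files[name] = i
        elif kind == "close":
            if name not in self.open_files:
                self.errors.append((i, f"{name} closed but not open"))
            self.open_files.pop(name, None)

    def end(self):
        # At end of trace, any still-open file violates the property.
        self.errors += [("end", f"{n} never closed")
                        for n in sorted(self.open_files)]
        return self.errors

monitor = OpenCloseMonitor()
for i, e in enumerate([("open", "a"), ("open", "b"), ("close", "a")]):
    monitor.step(i, e)
errors = monitor.end()
```

    Keeping one record per parameter value in a dict is the same indexing idea that, scaled up, motivates the Rete-style fact database the paper describes.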

  18. Applications of the Petri net to simulate, test, and validate the performance and safety of complex, heterogeneous, multi-modality patient monitoring alarm systems.

    PubMed

    Sloane, E B; Gelhot, V

    2004-01-01

    This research is motivated by the rapid pace of medical device and information system integration. Although the ability to interconnect many medical devices and information systems may help improve patient care, there is no way to detect if incompatibilities between one or more devices might cause critical events such as patient alarms to go unnoticed or cause one or more of the devices to become stuck in a disabled state. Petri net tools allow automated testing of all possible states and transitions between devices and/or systems to detect potential failure modes in advance. This paper describes an early research project to use Petri nets to simulate and validate a multi-modality central patient monitoring system. A free Petri net tool, HPSim, is used to simulate two wireless patient monitoring networks: one with 44 heart monitors and a central monitoring system and a second version that includes an additional 44 wireless pulse oximeters. In the latter Petri net simulation, a potentially dangerous heart arrhythmia and pulse oximetry alarms were detected.
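    The mechanics Petri nets bring to such an analysis, tokens in places and transitions enabled only when their input places are sufficiently marked, can be sketched minimally. The toy alarm-path net below is invented for illustration and is vastly simpler than the HPSim models used in the study:

```python
def enabled(marking, transition):
    """A transition is enabled when every input place holds enough tokens."""
    return all(marking.get(p, 0) >= n for p, n in transition["in"].items())

def fire(marking, transition):
    """Fire a transition: consume input tokens, produce output tokens."""
    m = dict(marking)
    for p, n in transition["in"].items():
        m[p] -= n
    for p, n in transition["out"].items():
        m[p] = m.get(p, 0) + n
    return m

# Toy alarm-path net (invented for illustration): a monitor can surface
# an arrhythmia alarm only while the central station is free.
net = {
    "raise": {"in": {"arrhythmia": 1, "central_free": 1},
              "out": {"alarm_shown": 1}},
    "ack":   {"in": {"alarm_shown": 1},
              "out": {"central_free": 1}},
}
# With the central station busy, "raise" is disabled: the alarm cannot
# be surfaced, exactly the kind of stuck state such analysis exposes.
busy = {"arrhythmia": 1, "central_free": 0}
```

    Exhaustively exploring the reachable markings of a net like this is how Petri net tools detect unreachable alarms or deadlocked device states before deployment.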

  19. FRMAC Operations Manual

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Frandsen, K.

    In the event of a major radiological incident, the Federal Radiological Monitoring and Assessment Center (FRMAC) will coordinate the federal agencies that have various statutory responsibilities. The FRMAC is responsible for coordinating all environmental radiological monitoring, sampling, and assessment activities for the response. This manual describes the FRMAC’s response activities in a radiological incident. It also outlines how FRMAC fits in the National Incident Management System (NIMS) under the National Response Framework (NRF) and describes the federal assets and subsequent operational activities which provide federal radiological monitoring and assessment of the affected areas. In the event of a potential or existing major radiological incident, the U.S. Department of Energy (DOE), National Nuclear Security Administration Nevada Site Office (NNSA/NSO) is responsible for establishing and managing the FRMAC during the initial phases.

  20. Structural health monitoring of inflatable structures for MMOD impacts

    NASA Astrophysics Data System (ADS)

    Anees, Muhammad; Gbaguidi, Audrey; Kim, Daewon; Namilae, Sirish

    2017-04-01

    Inflatable structures for space habitats are highly prone to damage caused by micrometeoroid and orbital debris impacts. Although the structures are effectively shielded against these impacts through multiple layers of impact-resistant materials, a health monitoring system is needed to monitor the structural integrity and damage state within the structures. Assessment of damage is critical for the safety of personnel in the space habitat, as well as for predicting repair needs and the remaining useful life of the habitat. In this paper, we propose a unique impact detection and health monitoring system based on hybrid nanocomposite sensors. The sensors are composed of two fillers, carbon nanotubes and coarse graphene platelets, with an epoxy matrix material. The electrical conductivity of these flexible nanocomposite sensors is highly sensitive to strain as well as to the presence of holes and damage in the structure. The sensitivity of the sensors to the presence of 3 mm holes caused by an impact event is evaluated using four-point probe electrical resistivity measurements. An array of these sensors, sandwiched between soft-goods layers in a space habitat, can act as a damage detection layer for inflatable structures. An algorithm is developed to determine the occurrence of an impact, its severity, and its location on the sensing layer for active health monitoring.
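
    The sensing-layer idea can be illustrated with a minimal sketch, assuming an array of resistance-monitored patches: a patch whose resistance rises past a relative threshold marks the impact location. The grid layout, threshold value, and numbers are invented for illustration and are not the paper's algorithm.

    ```python
    # Illustrative damage-localization sketch for a grid of nanocomposite
    # sensor patches. Damage (a hole) raises a patch's electrical resistance,
    # so cells whose resistance rose past a relative threshold are flagged.

    def locate_impact(baseline, current, rel_threshold=0.10):
        """Return (row, col) cells whose resistance rose by more than rel_threshold."""
        hits = []
        for r, (brow, crow) in enumerate(zip(baseline, current)):
            for c, (b, cur) in enumerate(zip(brow, crow)):
                if (cur - b) / b > rel_threshold:
                    hits.append((r, c))
        return hits
    ```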

  1. Total On-line Access Data System (TOADS): Phase II Final Report for the Period August 2002 - August 2004

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yuracko, K. L.; Parang, M.; Landguth, D. C.

    2004-09-13

    TOADS (Total On-line Access Data System) is a new generation of real-time monitoring and information management system developed to support unattended environmental monitoring and long-term stewardship of U.S. Department of Energy facilities and sites. TOADS enables project managers, regulators, and stakeholders to view environmental monitoring information in real time over the Internet. Deployment of TOADS at government facilities and sites will reduce the cost of monitoring while increasing confidence and trust in cleanup and long-term stewardship activities. TOADS: Reliably interfaces with and acquires data from a wide variety of external databases, remote systems, and sensors such as contaminant monitors, area monitors, atmospheric condition monitors, visual surveillance systems, intrusion devices, motion detectors, fire/heat detection devices, and gas/vapor detectors; Provides notification and triggers alarms as appropriate; Performs QA/QC on data inputs and logs the status of instruments/devices; Provides a fully functional data management system capable of storing, analyzing, and reporting on data; Provides an easy-to-use Internet-based user interface that provides visualization of the site, data, and events; and Enables the community to monitor local environmental conditions in real time. During this Phase II STTR project, TOADS has been developed and successfully deployed for unattended facility, environmental, and radiological monitoring at a Department of Energy facility.

  2. More About The Video Event Trigger

    NASA Technical Reports Server (NTRS)

    Williams, Glenn L.

    1996-01-01

    Report presents additional information about the system described in "Video Event Trigger" (LEW-15076). Digital electronic system processes video-image data to generate a trigger signal when the image shows a significant change, such as motion, or the appearance, disappearance, change in color, brightness, or dilation of an object. Potential uses include monitoring of hallways, parking lots, and other areas during hours when they are supposed to be unoccupied, looking for fires, tracking airplanes or other moving objects, identifying missing or defective parts on production lines, and video recording of automobile crash tests.
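
    A trigger of this kind can be sketched with simple frame differencing; the pixel-delta and changed-fraction thresholds below are assumed values, not the report's design, and frames are flattened grayscale arrays for simplicity.

    ```python
    # Illustrative frame-differencing event trigger: fire when the fraction
    # of pixels that changed significantly between two frames exceeds a
    # threshold (motion, appearance/disappearance, brightness change, etc.).

    def video_trigger(prev, curr, pixel_delta=30, changed_fraction=0.05):
        """prev, curr: equal-length sequences of grayscale pixel values."""
        changed = sum(1 for a, b in zip(prev, curr) if abs(a - b) > pixel_delta)
        return changed / len(curr) > changed_fraction

    static = [100] * 100                 # unchanged scene
    moved = [100] * 90 + [200] * 10      # 10% of pixels changed brightness
    ```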

  3. Aircraft Operations Classification System

    NASA Technical Reports Server (NTRS)

    Harlow, Charles; Zhu, Weihong

    2001-01-01

    Accurate data is important in the aviation planning process. In this project we consider systems for measuring aircraft activity at airports. This would include determining the type of aircraft such as jet, helicopter, single engine, and multiengine propeller. Some of the issues involved in deploying technologies for monitoring aircraft operations are cost, reliability, and accuracy. In addition, the system must be field portable and acceptable at airports. A comparison of technologies was conducted and it was decided that an aircraft monitoring system should be based upon acoustic technology. A multimedia relational database was established for the study. The information contained in the database consists of airport information, runway information, acoustic records, photographic records, a description of the event (takeoff, landing), aircraft type, and environmental information. We extracted features from the time signal and the frequency content of the signal. A multi-layer feed-forward neural network was chosen as the classifier. Training and testing results were obtained. We were able to obtain classification results of over 90 percent for training and testing for takeoff events.

  4. A new approach to generating research-quality phenology data: The USA National Phenology Monitoring System

    NASA Astrophysics Data System (ADS)

    Denny, Ellen; Miller-Rushing, Abraham; Haggerty, Brian; Wilson, Bruce; Weltzin, Jake

    2010-05-01

    The USA National Phenology Network (www.usanpn.org) has recently initiated a national effort to encourage people at different levels of expertise—from backyard naturalists to professional scientists—to observe phenological events and contribute to a national database that will be used to greatly improve our understanding of spatio-temporal variation in phenology and associated phenological responses to climate change. Traditional phenological observation protocols identify specific single dates at which individual phenological events are observed, but the scientific usefulness of long-term phenological observations can be improved with a more carefully structured protocol. At the USA-NPN we have developed a new approach that directs observers to record each day that they observe an individual plant, and to assess and report the state of specific life stages (or phenophases) as occurring or not occurring on that plant for each observation date. Evaluation is phrased in terms of simple, easy-to-understand questions (e.g. "Do you see open flowers?"), which makes it very appropriate for a broad audience. From this method, a rich dataset of phenological metrics can be extracted, including the duration of a phenophase (e.g. open flowers), the beginning and end points of a phenophase (e.g. traditional phenological events such as first flower and last flower), multiple distinct occurrences of phenophases within a single growing season (e.g. multiple flowering events, common in drought-prone regions), as well as quantification of sampling frequency and observational uncertainties. The system also includes a mechanism for translation of phenophase start and end points into standard traditional phenological events to facilitate comparison of contemporary data collected with this new "phenophase status" monitoring approach to historical datasets collected with the "phenological event" monitoring approach.
These features greatly enhance the utility of the resulting data for statistical analyses addressing questions such as how phenological events vary in time and space, and in response to global change.
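
    The translation from status records to traditional event metrics can be sketched as follows; the record shape and field names are assumptions for illustration, not the USA-NPN schema.

    ```python
    # Illustrative extraction of phenological metrics from "phenophase status"
    # observations: (day_of_year, status) pairs where status is True when the
    # phenophase (e.g. open flowers) was observed as occurring.

    def phenophase_metrics(observations):
        """observations: list of (day_of_year, status_bool), sorted by date."""
        yes_days = [d for d, status in observations if status]
        if not yes_days:
            return None
        return {
            "onset": min(yes_days),      # ~ traditional "first flower"
            "end": max(yes_days),        # ~ traditional "last flower"
            "duration": max(yes_days) - min(yes_days) + 1,
            "n_observations": len(observations),
        }
    ```

    Because non-occurrence dates are recorded too, the spacing of `observations` also bounds the uncertainty on the onset and end estimates, which is the advantage the protocol claims over single-date event reporting.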

  5. New techniques for environmental monitoring and risk assessment in water surface systems

    NASA Astrophysics Data System (ADS)

    Valyrakis, Manousos; Alexakis, Athanasios-Theodosios; Maniatis, Georgios; Hoey, Trevor; Escudero, Javier; Vargas, Patricia

    2016-04-01

    Our society is continuously impacted by significant weather events, often resulting in catastrophes that interrupt our normal way of life. In the context of climate change and increasing urbanisation, these "extreme" hydrologic events are intensifying in both magnitude and frequency, inducing costs of the order of billions of pounds. The vast majority of such costs and impacts (even more so for developed societies) are due to water-related catastrophes such as the geomorphic action of flowing water (including scouring of critical infrastructure, and bed and bank destabilisation) and flooding. New tools and radically novel concepts are needed to enable our society to become more resilient. In this presentation, new research at the interface of sensors and water engineering is presented, focusing on addressing the above challenges in a holistic and comprehensive manner. In particular, the design, development, testing and calibration, as well as preliminary field implementation, of a new tool for risk assessment and environmental monitoring in water surface systems is explored in this work. It is demonstrated that novel advances in conceptual approaches in water engineering, and specifically in the field of hydrodynamic transport of solids (such as the impulse and energy criteria), can be successfully combined with rapid advances in sensors to help monitor and increase the resilience of our society against catastrophic hydrologic events.

  6. Operational Monitoring of Data Production at KNMI

    NASA Astrophysics Data System (ADS)

    van de Vegte, John; Kwidama, Anecita; van Moosel, Wim; Oosterhof, Rijk; de Wit, Ronny; Klein Ikkink, Henk Jan; Som de Cerff, Wim; Verhoef, Hans; Koutek, Michal; Duin, Frank; van der Neut, Ian; Verhagen, Robert; Wollerich, Rene

    2016-04-01

    Within KNMI, a new fully automated system for monitoring the KNMI operational data production systems is being developed: PRISMA (PRocessflow Infrastructure Surveillance and Monitoring Application). Currently the KNMI operational (24/7) production systems consist of over 60 applications, running on different hardware systems and platforms. They are interlinked for the production of numerous data products, which are delivered to internal and external customers. Traditionally these applications are monitored individually by different applications, or not at all, complicating root-cause and impact analysis. Also, the underlying hardware and network is monitored via an isolated application. The goal of the PRISMA system is to enable production-chain monitoring, which enables root-cause analysis (what is the root cause of the disruption) and impact analysis (which downstream products/customers will be affected). The PRISMA system will make it possible to reduce the number of existing monitoring applications and provides one interface for monitoring the data production. For modeling and storing the state of the production chains, a graph database is used. The model is automatically updated by the applications and systems that are to be monitored. The graph model enables root-cause and impact analysis. In the PRISMA web interface, interaction with the graph model is accomplished via a graphical representation. The presentation will focus on: • modeling real-world computers, applications, and products into a conceptual model; • the architecture of the system; • configuration information and (real-world) event handling of the monitored objects; • implementation rules for root-cause and impact analysis; • how PRISMA was developed (methodology, facts, results); • a presentation of the PRISMA system as it now looks and works.
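
    The core of root-cause and impact analysis on such a dependency graph is just two traversals: walk upstream for root-cause candidates, downstream for impact. The sketch below is an assumption-laden illustration (the chain, node names, and traversal policy are invented, not KNMI's graph-database implementation).

    ```python
    # Production chain as a dependency graph: node -> {"depends_on": [...]}.

    def upstream(graph, node, seen=None):
        """Root-cause candidates: everything this node (transitively) depends on."""
        seen = set() if seen is None else seen
        for dep in graph.get(node, {}).get("depends_on", []):
            if dep not in seen:
                seen.add(dep)
                upstream(graph, dep, seen)
        return seen

    def downstream(graph, node):
        """Impact: every node that (transitively) depends on this one."""
        hit, frontier = set(), {node}
        while frontier:
            nxt = {n for n, attrs in graph.items()
                   if frontier & set(attrs.get("depends_on", []))} - hit
            hit |= nxt
            frontier = nxt
        return hit

    # Invented three-stage chain for illustration.
    chain = {
        "radar_feed":  {"depends_on": []},
        "nowcast_app": {"depends_on": ["radar_feed"]},
        "web_product": {"depends_on": ["nowcast_app"]},
    }
    ```

    If `radar_feed` reports a disruption, `downstream` lists the products and customers to warn; if `web_product` fails, `upstream` lists where to look for the cause.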

  7. Action Monitoring Cortical Activity Coupled to Submovements

    PubMed Central

    Sobolewski, Aleksander

    2017-01-01

    Numerous studies have examined neural correlates of the human brain’s action-monitoring system during experimentally segmented tasks. However, it remains unknown how such a system operates during continuous motor output when no experimental time marker is available (such as button presses or stimulus onset). We set out to investigate the electrophysiological correlates of action monitoring when hand position has to be repeatedly monitored and corrected. For this, we recorded high-density electroencephalography (EEG) during a visuomotor tracking task during which participants had to follow a target with the mouse cursor along a visible trajectory. By decomposing hand kinematics into naturally occurring periodic submovements, we found an event-related potential (ERP) time-locked to these submovements and localized in a sensorimotor cortical network comprising the supplementary motor area (SMA) and the precentral gyrus. Critically, the amplitude of the ERP correlated with the deviation of the cursor, 110 ms before the submovement. Control analyses showed that this correlation was truly due to the cursor deviation and not to differences in submovement kinematics or to the visual content of the task. The ERP closely resembled those found in response to mismatch events in typical cognitive neuroscience experiments. Our results demonstrate the existence of a cortical process in the SMA, evaluating hand position in synchrony with submovements. These findings suggest a functional role of submovements in a sensorimotor loop of periodic monitoring and correction and generalize previous results from the field of action monitoring to cases where action has to be repeatedly monitored. PMID:29071301

  8. [Development of medical supplies management system].

    PubMed

    Zhong, Jianping; Shen, Beijun; Zhu, Huili

    2012-11-01

    This paper adopts advanced information technology to manage medical supplies, in order to improve the level of medical supplies management and reduce material costs. It develops a Medical Supplies Management System with a mixed B/S and C/S structure, optimizing the material management process, building a performance evaluation model for large equipment, providing an interface solution with the HIS, and realizing real-time reporting of high-value material consumption. The medical materials are managed throughout their full life cycle. The material consumption of the clinical departments is monitored in real time. Through closed-loop management, with pre-event budgeting, mid-event control, and after-event analysis, the system achieves its ultimate purpose of management that yields benefit.

  9. The Capabilities and Applications of FY-3A/B SEM on Monitoring Space Weather Events

    NASA Astrophysics Data System (ADS)

    Huang, C.; Li, J.; Yu, T.; Xue, B.; Wang, C.; Zhang, X.; Cao, G.; Liu, D.; Tang, W.

    2012-12-01

    The Space Environment Monitor (SEM), on board the Chinese meteorological satellites FengYun-3A/B, has the ability to measure proton flux in the 3-300 MeV energy range and electron flux in the 0.15-5.7 MeV energy range. SEM can also detect heavy-ion compositions, satellite surface potential, the radiation dose in sensors, and single events. The space environment information derived from SEM can be utilized for satellite security design, scientific studies, development of radiation-belt models, and space weather monitoring and disaster warning. In this study, the SEM's instrument characteristics are introduced and the post-launch calibration algorithm is presented. The applications in monitoring space weather events and the service for manned spaceflights are also demonstrated. Protons with particle energy over 10 MeV are called "killer particles". These particles may damage the satellite and cause disruption of the satellite's systems. The proton flux in the 10-26 MeV energy band reached 5000 in the solar proton event caused by a solar flare with a CME during the period 2012.01.23 to 2012.01.27, as shown in the figure. [Figure: comparisons of heavy ions, 2010.11.11-2010.12.15]

  10. Trial of real-time locating and messaging system with Bluetooth low energy.

    PubMed

    Arisaka, Naoya; Mamorita, Noritaka; Isonaka, Risa; Kawakami, Tadashi; Takeuchi, Akihiro

    2016-09-14

    Hospital real-time location systems (RTLS) increase efficiency and reduce operational costs, but room-access tags are necessary. We developed three iPhone 5 applications for an RTLS and communications using Bluetooth low energy (BLE): Peripheral device tags, Central beacons, and a Monitor. A Peripheral communicated with a Central using BLE. The Central communicated with a Monitor using sockets on TCP/IP (Transmission Control Protocol/Internet Protocol) via a WLAN (wireless local area network). To determine a BLE threshold level for the received signal strength indicator (RSSI), relationships between signal strength and distance were measured in our laboratory and on the terrace. The BLE RSSI threshold was set at -70 dB, corresponding to about 10 m. While an individual with a Peripheral moved around in a concrete building, the Peripheral was captured in a few 10-sec units at about 10 m from a Central. The Central and Monitor showed and saved the approach events, location, and the Peripheral's nickname sequentially in real time. Remote Centrals could also interactively communicate with Peripherals through Monitors that found the nickname in the event database. The trial applications using BLE on iPhones worked well for patient tracking and messaging in indoor environments.
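
    The RSSI-based presence rule can be sketched with the standard log-distance path-loss model. The calibration constants below (RSSI at 1 m, path-loss exponent) are assumptions that would need to be measured, much as the authors calibrated against distance in their laboratory and terrace tests; only the -70 dB capture threshold comes from the abstract.

    ```python
    # Log-distance path-loss sketch: distance grows by 10x for each
    # 10*n dB drop below the reference RSSI measured at 1 m.

    def rssi_to_distance(rssi_db, rssi_at_1m=-50.0, path_loss_exp=2.0):
        """Rough distance estimate (meters) from an RSSI reading (assumed constants)."""
        return 10 ** ((rssi_at_1m - rssi_db) / (10 * path_loss_exp))

    def in_range(rssi_db, threshold_db=-70.0):
        # The trial's capture rule: treat the Peripheral as present when
        # RSSI meets the -70 dB threshold (about 10 m).
        return rssi_db >= threshold_db
    ```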

  11. Real time monitoring of induced seismicity in the Insheim and Landau deep geothermal reservoirs, Upper Rhine Graben, using the new SeisComP3 cross-correlation detector

    NASA Astrophysics Data System (ADS)

    Vasterling, Margarete; Wegler, Ulrich; Bruestle, Andrea; Becker, Jan

    2016-04-01

    Real-time information on the locations and magnitudes of induced earthquakes is essential for response plans based on the magnitude-frequency distribution. We developed and tested a real-time cross-correlation detector focusing on induced microseismicity in deep geothermal reservoirs. The incoming seismological data are cross-correlated in real time with a set of known master events. We use the envelopes of the seismograms rather than the seismograms themselves to account for small changes in the source locations or in the focal mechanisms. Two different detection conditions are implemented: after first passing a single-trace correlation condition, a network correlation is then calculated taking the amplitude information of the seismic network into account. The magnitude is estimated using the ratio of the maximum amplitudes of the master event and the detected event. The detector is implemented as a real-time tool and put into practice as a SeisComP3 module, an established open-source package for seismological real-time data handling and analysis. We validated the reliability and robustness of the detector by an offline playback test using four months of data from monitoring the power plant in Insheim (Upper Rhine Graben, SW Germany). Subsequently, in October 2013, the detector was installed as a real-time monitoring system within the project "MAGS2 - Microseismic Activity of Geothermal Systems". Master events from the two neighboring geothermal power plants in Insheim and Landau and from two nearby quarries are defined. After detection, manual phase determination and event location are performed at the local seismological survey of the Geological Survey and Mining Authority of Rhineland-Palatinate. By November 2015 the detector had identified 454 events, of which 95% were assigned correctly to the respective source; 5% were misdetections caused by local tectonic events.
To evaluate the completeness of the automatically obtained catalogue, it is compared to the event catalogue of the Seismological Service of Southwestern Germany and to the events reported by the company tasked with seismic monitoring of the Insheim power plant. Events missed by the cross-correlation detector are generally very small; they are registered at too few stations to meet the detection criteria. Most of these small events were not locatable. The automatic catalogue has a magnitude of completeness around 0.0 and is significantly more detailed than the catalogue from standard processing of the Seismological Service of Southwestern Germany for this region. For events in the magnitude range of the master event, the magnitude estimated from the amplitude ratio reproduces the local magnitude well; for weaker events there tends to be a small offset. Altogether, the developed real-time cross-correlation detector provides robust detections with reliable association of the events to their respective sources and valid magnitude estimates. Thus, it provides input parameters for the mitigation of seismic hazard by using response plans in real time.
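
    The two central ideas, envelope correlation against a master event and magnitude from an amplitude ratio, can be sketched as follows. The normalized correlation, the 0.8 detection threshold, and the toy signals are illustrative assumptions, not the SeisComP3 module's implementation.

    ```python
    # Sketch of envelope cross-correlation detection and amplitude-ratio
    # magnitude estimation for master-event matching.

    import math

    def envelope(signal):
        """Crude envelope: absolute value of each sample (illustrative)."""
        return [abs(x) for x in signal]

    def norm_corr(a, b):
        num = sum(x * y for x, y in zip(a, b))
        den = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
        return num / den if den else 0.0

    def detect(incoming, master, threshold=0.8):
        """Flag a detection when the envelopes correlate strongly enough."""
        return norm_corr(envelope(incoming), envelope(master)) >= threshold

    def magnitude(master_mag, master_amp, event_amp):
        # Local magnitude scales with log10 of the maximum-amplitude ratio.
        return master_mag + math.log10(event_amp / master_amp)
    ```

    Correlating envelopes rather than raw waveforms is what lets the detector tolerate small shifts in source location or focal mechanism, as the abstract notes.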

  12. Integrated software system for improving medical equipment management.

    PubMed

    Bliznakov, Z; Pappous, G; Bliznakova, K; Pallikarakis, N

    2003-01-01

    The evolution of biomedical technology has led to an extraordinary use of medical devices in health care delivery. During the last decade, clinical engineering departments (CEDs) turned toward computerization and the application of specific software systems for medical equipment management in order to improve their services and monitor outcomes. Recently, much emphasis has been given to patient safety. Through its Medical Device Directives, the European Union has required all member nations to use a vigilance system to prevent the recurrence of adverse events that could lead to injuries or death of patients or personnel as a result of equipment malfunction or improper use. The World Health Organization also has made this issue a high priority and has prepared a number of actions and recommendations. In the present work, a new integrated, Windows-oriented system is proposed, addressing all tasks of CEDs but also offering a global approach to their management needs, including vigilance. The system architecture is based on a star model, consisting of a central core module and peripheral units. Its development has been based on the integration of 3 software modules, each one addressing specific predefined tasks. The main features of this system include equipment acquisition and replacement management, inventory archiving and monitoring, follow-up on scheduled maintenance, corrective maintenance, user training, data analysis, and reports. It also incorporates vigilance monitoring and information exchange for adverse events, together with a specific application for quality-control procedures. The system offers clinical engineers the ability to monitor and evaluate the quality and cost-effectiveness of the service provided by means of quality and cost indicators. Particular emphasis has been placed on the use of harmonized standards with regard to medical device nomenclature and classification.
The system's practical applications have been demonstrated through a pilot evaluation trial.

  13. Smart sensor technology for advanced launch vehicles

    NASA Astrophysics Data System (ADS)

    Schoess, Jeff

    1989-07-01

    Next-generation advanced launch vehicles will require improved use of sensor data and the management of multisensor resources to achieve automated preflight checkout, prelaunch readiness assessment and vehicle inflight condition monitoring. Smart sensor technology is a key component in meeting these needs. This paper describes the development of a smart sensor-based condition monitoring system concept referred to as the Distributed Sensor Architecture. A significant event and anomaly detection scheme that provides real-time condition assessment and fault diagnosis of advanced launch system rocket engines is described. The design and flight test of a smart autonomous sensor for Space Shuttle structural integrity health monitoring is presented.

  14. Event-Based Surveillance During EXPO Milan 2015: Rationale, Tools, Procedures, and Initial Results

    PubMed Central

    Manso, Martina Del; Caporali, Maria Grazia; Napoli, Christian; Linge, Jens P.; Mantica, Eleonora; Verile, Marco; Piatti, Alessandra; Pompa, Maria Grazia; Vellucci, Loredana; Costanzo, Virgilio; Bastiampillai, Anan Judina; Gabrielli, Eugenia; Gramegna, Maria; Declich, Silvia

    2016-01-01

    More than 21 million participants attended EXPO Milan from May to October 2015, making it one of the largest protracted mass gathering events in Europe. Given the expected national and international population movement and health security issues associated with this event, Italy fully implemented, for the first time, an event-based surveillance (EBS) system focusing on naturally occurring infectious diseases and the monitoring of biological agents with potential for intentional release. The system started its pilot phase in March 2015 and was fully operational between April and November 2015. In order to set the specific objectives of the EBS system, and its complementary role to indicator-based surveillance, we defined a list of priority diseases and conditions. This list was designed on the basis of the probability and possible public health impact of infectious disease transmission, existing statutory surveillance systems in place, and any surveillance enhancements during the mass gathering event. This article reports the methodology used to design the EBS system for EXPO Milan and the results of 8 months of surveillance. PMID:27314656

  15. The use of a computerized database to monitor vaccine safety in Viet Nam.

    PubMed Central

    Ali, Mohammad; Canh, Gia Do; Clemens, John D.; Park, Jin-Kyung; von Seidlein, Lorenz; Minh, Tan Truong; Thiem, Dinh Vu; Tho, Huu Le; Trach, Duc Dang

    2005-01-01

    Health information systems to monitor vaccine safety are used in industrialized countries to detect adverse medical events related to vaccinations or to prove the safety of vaccines. There are no such information systems in the developing world, but they are urgently needed. A large linked database for the monitoring of vaccine-related adverse events has been established in Khanh Hoa province, Viet Nam. Data collected during the first 2 years of surveillance, a period which included a mass measles vaccination campaign, were used to evaluate the system. For this purpose the discharge diagnoses of individuals admitted to polyclinics and hospitals were coded according to the International Classification of Diseases (ICD)-10 guidelines and linked in a dynamic population database with vaccination histories. A case-series analysis was applied to the cohort of children vaccinated during the mass measles vaccination campaign. The study recorded 107,022 immunizations in a catchment area with a population of 357,458 and confirmed vaccine coverage of 87% or higher for completed routine childhood vaccinations. The measles vaccination campaign immunized at least 86% of the targeted children aged 9 months to 10 years. No medical event was detected significantly more frequently during the 14 days after measles vaccination than before it. The experience in Viet Nam confirmed the safety of a measles vaccination campaign and shows that it is feasible to establish health information systems such as a large linked database which can provide reliable data in a developing country for a modest increase in use of resources. PMID:16193545
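
    The case-series comparison described above reduces to comparing the event rate in a fixed post-vaccination risk window with the rate in the remaining control time. The sketch below uses the paper's 14-day risk window, but the counts and the simple rate-ratio form are invented for illustration; the actual analysis involves more careful person-time accounting.

    ```python
    # Illustrative self-controlled case-series calculation: relative incidence
    # of adverse events in a post-vaccination risk window vs. control time.

    def relative_incidence(events_in_window, window_days, events_outside, outside_days):
        """Ratio of the risk-window event rate to the control-period event rate."""
        risk_rate = events_in_window / window_days
        control_rate = events_outside / outside_days
        return risk_rate / control_rate

    # e.g. 2 events in the 14 days after vaccination vs. 26 events over 182
    # control days gives a relative incidence of 1.0, i.e. no safety signal.
    ```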

  16. Pilot evaluation of electricity-reliability and power-quality monitoring in California's Silicon Valley with the I-Grid(R) system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Eto, Joseph; Divan, Deepak; Brumsickle, William

    2004-02-01

    Power-quality events are of increasing concern for the economy because today's equipment, particularly computers and automated manufacturing devices, is susceptible to these imperceptible voltage changes. A small variation in voltage can cause this equipment to shut down for long periods, resulting in significant business losses. Tiny variations in power quality are difficult to detect except with expensive monitoring equipment used by trained technicians, so many electricity customers are unaware of the role of power-quality events in equipment malfunctioning. This report describes the findings from a pilot study coordinated through the Silicon Valley Manufacturers Group in California to explore the capabilities of I-Grid(R), a new power-quality monitoring system. This system is designed to improve the accessibility of power-quality information and to increase understanding of the growing importance of electricity reliability and power quality to the economy. The study used data collected by I-Grid sensors at seven Silicon Valley firms to investigate the impacts of power quality on individual study participants as well as to explore the capabilities of the I-Grid system to detect events on the larger electricity grid by means of correlation of data from the sensors at the different sites. In addition, study participants were interviewed about the value they place on power quality and their efforts to address electricity-reliability and power-quality problems. Issues were identified that should be taken into consideration in developing a larger, potentially nationwide, network of power-quality sensors.

  17. Privacy issues and the monitoring of sumatriptan in the New Zealand Intensive Medicines Monitoring Programme.

    PubMed

    Coulter, D M

    2001-12-01

    The purpose of this paper is to describe how the New Zealand (NZ) Intensive Medicines Monitoring Programme (IMMP) functions in relation to NZ privacy laws and to describe the attitudes of patients to drug safety monitoring and the privacy of their personal and health information. The IMMP undertakes prospective observational event monitoring cohort studies on new drugs. The cohorts are established from prescription data and the events are obtained using prescription event monitoring and spontaneous reporting. Personal details, prescribing history of the monitored drugs and adverse events data are stored in databases long term. The NZ Health Information Privacy Code is outlined and the monitoring of sumatriptan is used to illustrate how the IMMP functions in relation to the Code. Patient responses to the programme are described. Sumatriptan was monitored in 14,964 patients and 107,646 prescriptions were recorded. There were 2344 reports received describing 3987 adverse events. A majority of the patients were involved in the recording of events data either personally or by telephone interview. There were no objections to the monitoring process on privacy grounds. Given the fact that all reasonable precautions are taken to ensure privacy, patients perceive drug safety to have greater priority than any slight risk of breach of confidentiality concerning their personal details and health information.

  18. Timber Mountain Precipitation Monitoring Station

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lyles, Brad; McCurdy, Greg; Chapman, Jenny

    2012-01-01

    A precipitation monitoring station was placed on the west flank of Timber Mountain during the year 2010. It is located in an isolated highland area near the western border of the Nevada National Security Site (NNSS), south of Pahute Mesa. The cost of the equipment, permitting, and installation was provided by the Environmental Monitoring Systems Initiative (EMSI) project. Data collection, analysis, and maintenance of the station during fiscal year 2011 was funded by the U.S. Department of Energy, National Nuclear Security Administration, Nevada Site Office Environmental Restoration, Soils Activity. The station is located near the western headwaters of Forty Mile Wash on the Nevada Test and Training Range (NTTR). Overland flows from precipitation events that occur in the Timber Mountain high elevation area cross several of the contaminated Soils project CAU (Corrective Action Unit) sites located in the Forty Mile Wash watershed. Rain-on-snow events in the early winter and spring around Timber Mountain have contributed to several significant flow events in Forty Mile Wash. The data from the new precipitation gauge at Timber Mountain will provide important information for determining runoff response to precipitation events in this area of the NNSS. Timber Mountain is also a groundwater recharge area, and estimation of recharge from precipitation was important for the EMSI project in determining groundwater flowpaths and designing effective groundwater monitoring for Yucca Mountain. Recharge estimation additionally provides benefit to the Underground Test Area Sub-project analysis of groundwater flow direction and velocity from nuclear test areas on Pahute Mesa. Additionally, this site provides data that has been used during wild fire events and provided a singular monitoring location of the extreme precipitation events during December 2010 (see data section for more details). 
This letter report provides a summary of the site location, equipment, and data collected in fiscal year 2011.

  19. Demonstration of a Novel Synchrophasor-based Situational Awareness System: Wide Area Power System Visualization, On-line Event Replay and Early Warning of Grid Problems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rosso, A.

    Since the large Northeastern power system blackout on August 14, 2003, U.S. electric utilities have devoted substantial effort to preventing power system cascading outages. Two of the main causes of the August 14, 2003 blackout were inadequate situational awareness and inadequate operator training. In addition to enhancements of the infrastructure of the interconnected power systems, more research and development of advanced power system applications are required to improve wide-area security monitoring, operation, and planning in order to prevent large-scale cascading outages of interconnected power systems. It is critically important to improve the wide-area situational awareness of the operators, operational engineers, and regional reliability coordinators of large interconnected systems. With the installation of a large number of phasor measurement units (PMUs) and the related communication infrastructure, it will be possible to improve the operators' situational awareness and to quickly identify the sequence of events during a large system disturbance for post-event analysis using real-time or historical synchrophasor data. The purpose of this project was to develop and demonstrate a novel synchrophasor-based comprehensive situational awareness system for control centers of power transmission systems. The developed system, named WASA (Wide Area Situation Awareness), is intended to improve situational awareness at control centers of power system operators and regional reliability coordinators. It consists of the following main software modules: • Wide-area visualizations of real-time frequency, voltage, and phase angle measurements and their contour displays for security monitoring. • Online detection and location of a major event (location, time, size, and type, such as generator or line outage). • Near-real-time event replay (in seconds) after a major event occurs. • Early warning of potential wide-area stability problems. 
The system has been deployed and demonstrated at the Tennessee Valley Authority (TVA) and ISO New England using real-time synchrophasor data from openPDC. Apart from the software product, the outcome of this project consists of a set of technical reports and papers describing the mathematical foundations and computational approaches of the different tools and modules, implementation issues and considerations, lessons learned, and the results of validation processes.

  20. Hail Disrometer Array for Launch Systems Support

    NASA Technical Reports Server (NTRS)

    Lane, John E.; Sharp, David W.; Kasparis, Takis C.; Doesken, Nolan J.

    2008-01-01

    Prior to launch, the space shuttle might be described as a very large thermos bottle containing substantial quantities of cryogenic fuels. Because thermal insulation is a critical design requirement, the external wall of the launch vehicle fuel tank is covered with an insulating foam layer. This foam is fragile and can be damaged by very minor impacts, such as that from small- to medium-size hail, which may go unnoticed. In May 1999, hail damage to the top of the External Tank (ET) of STS-96 required a rollback from the launch pad to the Vehicle Assembly Building (VAB) for repair of the insulating foam. Because of the potential for hail damage to the ET while exposed to the weather, a vigilant hail sentry system using impact transducers was developed as a hail damage warning system and to record and quantify hail events. The Kennedy Space Center (KSC) Hail Monitor System, a joint effort of the NASA and University Affiliated Spaceport Technology Development Contract (USTDC) Physics Labs, was first deployed for operational testing in the fall of 2006. Volunteers from the Community Collaborative Rain, Hail, and Snow Network (CoCoRaHS), in conjunction with Colorado State University, were and continue to be active in testing duplicate hail monitor systems at sites in the hail-prone high plains of Colorado. The KSC Hail Monitor System (HMS), consisting of three stations positioned approximately 500 ft from the launch pad and forming an approximate equilateral triangle (see Figure 1), was deployed to Pad 39B for support of STS-115. Two months later, the HMS was deployed to Pad 39A for support of STS-116. During support of STS-117 in late February 2007, an unusual hail event occurred in the immediate vicinity of the exposed space shuttle and launch pad. Hail data from this event were collected by the HMS and analyzed. Support of STS-118 revealed another important application of the hail monitor system. 
Ground Instrumentation personnel check the hail monitors daily when a vehicle is on the launch pad, with special attention after any storm suspected of containing hail. If no hail is recorded by the HMS, the vehicle and pad inspection team has no need to conduct a thorough inspection of the vehicle immediately following a storm. On the afternoon of July 13, 2007, hail on the ground was reported by observers at the VAB, about three miles west of Pad 39A, as well as at several other locations around Kennedy Space Center. The HMS showed no impact detections, indicating that the shuttle had not been damaged by any of the numerous hail events which occurred that day.

  1. Detection, location, and characterization of hydroacoustic signals using seafloor cable networks offshore Japan (Invited)

    NASA Astrophysics Data System (ADS)

    Sugioka, H.; Suyehiro, K.; Shinohara, M.

    2009-12-01

    Hydroacoustic monitoring by the International Monitoring System (IMS) for the Comprehensive Nuclear-Test-Ban Treaty (CTBT) verification system utilizes hydrophone stations and seismic stations called T-phase stations for worldwide detection. Signals of natural origin include those from earthquakes, submarine volcanic eruptions, and whale calls; artificial sources include non-nuclear explosions and air-gun shots. It is important for the IMS to detect and locate hydroacoustic events with sufficient accuracy and to correctly characterize the signals and identify their sources. As a number of seafloor cable networks are operated offshore the Japanese islands, mainly facing the Pacific Ocean, for monitoring regional seismicity, the data from these stations (pressure gauges, hydrophones, and seismic sensors) may be utilized to verify and increase the capability of the IMS. We use these data to compare selected event parameters with those from IMS Pacific stations over the period from 2004 to the present. These anomalous examples, together with dynamite shots used for seismic crustal structure studies and other natural sources, will be presented in order to help improve the IMS verification capabilities for detection, location, and characterization of anomalous signals. A seafloor cable network composed of three hydrophones and six seismometers, together with a temporary dense seismic array, detected and located hydroacoustic events offshore the Japanese islands on 12 March 2008, which had been reported by the IMS. We detected not only the hydroacoustic waves reverberating between the sea surface and the sea bottom but also the associated seismic waves traveling through the crust. The determined source of the seismic waves nearly coincides with that of the hydroacoustic waves, suggesting that the seismic waves are converted very close to the origin of the hydroacoustic source. 
On 16 March 2009 we also detected signals very similar to those associated with the 12 March 2008 event.

  2. An Early Warning System for Identification and Monitoring of Disturbances to Forest Ecosystems

    NASA Astrophysics Data System (ADS)

    Marshall, A. A.; Hoffman, F. M.; Kumar, J.; Hargrove, W. W.; Spruce, J.; Mills, R. T.

    2011-12-01

    Forest ecosystems are susceptible to damage due to threat events like wildfires, insect and disease attacks, extreme weather events, land use change, and long-term climate change. Early identification of such events is desired to devise and implement a protective response. The mission of the USDA Forest Service is to sustain the health, diversity, and productivity of the nation's forests. However, limited resources for aerial surveys and ground-based inspections are insufficient for monitoring the large areas covered by the U.S. forests. The USDA Forest Service, Oak Ridge National Laboratory, and NASA Stennis Space Center are developing an early warning system for the continuous tracking and long-term monitoring of disturbances and responses in forest ecosystems using high resolution satellite remote sensing data. Geospatiotemporal data mining techniques were developed and applied to normalized difference vegetation index (NDVI) products derived from the Moderate Resolution Imaging Spectroradiometer (MODIS) MOD 13 data at 250 m resolution on eight day intervals. Representative phenologically similar regions, or phenoregions, were developed for the conterminous United States (CONUS) by applying a k-means clustering algorithm to the NDVI data spanning the full eight years of the MODIS record. Annual changes in the phenoregions were quantitatively analyzed to identify the significant changes in phenological behavior. This methodology was successfully applied for identification of various forest disturbance events, including wildfire, tree mortality due to Mountain Pine Beetle, and other insect infestation and diseases, as well as extreme events like storms and hurricanes in the United States. Where possible, the results were validated and quantitatively compared with aerial and ground-based survey data available from different agencies. 
This system was able to identify most of the disturbances reported by aerial and ground-based surveys, and it also identified affected areas that were not covered by any of the surveys. Analysis results and validation data will be presented.
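The phenoregion step described above (clustering multi-year NDVI trajectories, then watching for year-to-year changes in cluster membership) can be sketched in miniature. This is only an illustration with a plain k-means over per-pixel trajectories, not the CONUS-scale MODIS pipeline; the function name and the deterministic initialization are assumptions made here for reproducibility.

```python
import numpy as np

def phenoregions(ndvi, k, iters=50):
    """Cluster per-pixel NDVI time series into k phenoregions.

    ndvi: (n_pixels, n_timesteps) array, e.g. 46 eight-day composites
    per year stacked over several years. Initialization from evenly
    spaced samples is an assumption for determinism, not the paper's
    method.
    """
    idx = np.linspace(0, len(ndvi) - 1, k).astype(int)
    centers = ndvi[idx].copy()
    for _ in range(iters):
        # assign each pixel trajectory to its nearest centroid
        d = ((ndvi[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)
        labels = d.argmin(axis=1)
        # move each centroid to the mean of its assigned trajectories
        for j in range(k):
            if np.any(labels == j):
                centers[j] = ndvi[labels == j].mean(axis=0)
    return labels, centers
```

Disturbance detection then reduces to asking whether a pixel's cluster assignment, or its distance to its own centroid, changes sharply from one year to the next.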

  3. The Edison Environmental Center Permeable Pavement Site: Initial Results from a Stormwater Control Designed for Monitoring

    EPA Science Inventory

    There are few detailed studies of full-scale, replicated, actively-used permeable pavement systems. Practitioners need additional studies of permeable pavement systems in its intended application (parking lot, roadway, etc.) across a range of climatic events, daily usage conditio...

  4. Electronic circuit detects left ventricular ejection events in cardiovascular system

    NASA Technical Reports Server (NTRS)

    Gebben, V. D.; Webb, J. A., Jr.

    1972-01-01

    Electronic circuit processes arterial blood pressure waveform to produce discrete signals that coincide with beginning and end of left ventricular ejection. Output signals provide timing signals for computers that monitor cardiovascular systems. Circuit operates reliably for heart rates between 50 and 200 beats per minute.
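A software analogue of this circuit marks ejection onset where the pressure derivative first crosses a positive slope threshold. The sketch below is illustrative only (function name, threshold, and sampling rate are assumptions), and it omits the end-of-ejection logic, which in practice keys on the dicrotic notch of the arterial waveform.

```python
def ejection_onsets(pressure, fs, slope_thresh):
    """Return sample indices where dP/dt first exceeds slope_thresh,
    i.e. the start of the systolic upstroke.

    pressure: list of arterial pressure samples at rate fs (Hz).
    Detecting the end of ejection (the dicrotic notch) would need an
    additional local-minimum search, omitted here.
    """
    onsets = []
    for i in range(1, len(pressure)):
        dpdt = (pressure[i] - pressure[i - 1]) * fs
        prev = (pressure[i - 1] - pressure[i - 2]) * fs if i >= 2 else 0.0
        # rising-edge condition: the threshold is crossed at this sample
        if dpdt >= slope_thresh and prev < slope_thresh:
            onsets.append(i)
    return onsets
```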

  5. Strategies for monitoring the bacteriological quality of water supply in distribution systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Geldreich, E.E.; Goodrich, J.A.; Clark, R.M.

    1989-01-01

    Monitoring strategies for characterizing the bacteriological quality of water in the distribution system require a complete understanding of a variety of interrelated aspects that include treated water quality, water-supply retention in storage, and infrastructure deterioration in the distribution system. A study of field data from several water-supply utilities was used to highlight some innovative interpretations of compliance monitoring data. Major findings include: the use of a 5% coliform frequency-of-occurrence limit highlights compliance significance in those situations where there are clusters of positive samples containing fewer than 4 coliforms per 100 mL. Unfortunately, this presence/absence concept does not provide any indication of the magnitude of a contamination event.
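The frequency-of-occurrence limit reduces to a simple proportion over presence/absence results. A minimal sketch (the function name is an assumption) that also makes the abstract's caveat concrete: the magnitude of each positive is invisible to the metric.

```python
def coliform_frequency(samples):
    """Fraction of monitoring-period samples that are coliform-positive.

    samples: iterable of booleans (True = coliform detected). Because
    this is presence/absence only, a sample with 1 coliform per 100 mL
    and one with 400 per 100 mL each count as a single positive.
    """
    samples = list(samples)
    return sum(samples) / len(samples)
```

For example, 3 positives in 40 samples gives 7.5%, exceeding a 5% limit even if every positive sample held fewer than 4 coliforms per 100 mL.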

  6. No moving parts safe & arm apparatus and method with monitoring and built-in-test for optical firing of explosive systems

    DOEpatents

    Hendrix, J.L.

    1995-04-11

    A laser initiated ordnance controller apparatus which provides a safe and arm scheme with no moving parts. The safe & arm apparatus provides isolation of firing energy to explosive devices using a combination of polarization isolation and control through acousto-optical deviation of laser energy pulses. The apparatus provides constant monitoring of the system's status and performs 100% built-in-test at any time prior to ordnance ignition without the risk of premature ignition or detonation. The apparatus has a computer controller, a solid state laser, an acousto-optic deflector and RF drive circuitry, built-in-test optics and electronics, and system monitoring capabilities. The optical system is completed from the laser beam power source to the pyrotechnic ordnance through fiber optic cabling, optical splitters and optical connectors. During operation of the apparatus, a command is provided by the computer controller and, simultaneous with laser flashlamp fire, the safe & arm device is opened for approximately 200 microseconds which allows the laser pulse to transmit through the device. The arm signal also energizes the laser power supply and activates the acousto-optical deflector. When the correct fire format command is received, the acousto-optic deflector moves to the selected event channel, and the channel is verified to ensure the system is pointing to the correct position. Laser energy is transmitted through the fiber where an ignitor or detonator designed to be sensitive to optical pulses is fired at the end of the fiber channel. Simultaneous event channels may also be utilized by optically splitting a single event channel. The built-in-test may be performed anytime prior to ordnance ignition. 6 figures.

  7. No moving parts safe & arm apparatus and method with monitoring and built-in-test for optical firing of explosive systems

    DOEpatents

    Hendrix, James L.

    1995-01-01

    A laser initiated ordnance controller apparatus which provides a safe and arm scheme with no moving parts. The safe & arm apparatus provides isolation of firing energy to explosive devices using a combination of polarization isolation and control through acousto-optical deviation of laser energy pulses. The apparatus provides constant monitoring of the system's status and performs 100% built-in-test at any time prior to ordnance ignition without the risk of premature ignition or detonation. The apparatus has a computer controller, a solid state laser, an acousto-optic deflector and RF drive circuitry, built-in-test optics and electronics, and system monitoring capabilities. The optical system is completed from the laser beam power source to the pyrotechnic ordnance through fiber optic cabling, optical splitters and optical connectors. During operation of the apparatus, a command is provided by the computer controller and, simultaneous with laser flashlamp fire, the safe & arm device is opened for approximately 200 microseconds which allows the laser pulse to transmit through the device. The arm signal also energizes the laser power supply and activates the acousto-optical deflector. When the correct fire format command is received, the acousto-optic deflector moves to the selected event channel, and the channel is verified to ensure the system is pointing to the correct position. Laser energy is transmitted through the fiber where an ignitor or detonator designed to be sensitive to optical pulses is fired at the end of the fiber channel. Simultaneous event channels may also be utilized by optically splitting a single event channel. The built-in-test may be performed anytime prior to ordnance ignition.

  8. The value of Doppler LiDAR systems to monitor turbulence intensity during storm events in order to enhance aviation safety in Iceland

    NASA Astrophysics Data System (ADS)

    Yang, Shu; Nína Petersen, Guðrún; Finger, David C.

    2017-04-01

    Turbulence and wind shear are major natural hazards for aviation safety in Iceland. The temporal and spatial scales of atmospheric turbulence are very dynamic, requiring an adequate method to detect and monitor turbulence at high resolution. The Doppler Light Detection and Ranging (LiDAR) system can provide continuous information about the wind field using the Doppler effect from emitted light signals. In this study, we use Leosphere Windcube 200s LiDAR systems stationed near Reykjavik City Airport and at Keflavik International Airport, Iceland, to evaluate turbulence intensity by estimating the eddy dissipation rate (EDR). For this purpose, we retrieved radial wind velocity observations from Velocity Azimuth Display (VAD) scans (360° scans at 15° and 75° elevation angles) to compute EDR. The method was used to monitor and characterize storm events in fall 2016 and the following winter. The preliminary results reveal that the LiDAR observations can detect and quantify atmospheric turbulence with high spatial and temporal resolution. This finding is an important step towards enhanced aviation safety in a subpolar climate characterized by severe wind turbulence.
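A common route from radial velocities to EDR is Kolmogorov inertial-range scaling of the second-order structure function, D(r) = C_k ε^(2/3) r^(2/3). The sketch below estimates ε from adjacent range gates only; it is a simplification of operational retrievals (no noise correction, no averaging over the VAD azimuths), and the Kolmogorov constant value is an assumption.

```python
import numpy as np

def edr_estimate(v, dr, c_k=2.0):
    """Estimate the eddy dissipation rate eps (m^2 s^-3) from a profile
    of radial velocities v (m/s) at range-gate spacing dr (m), by
    inverting D(dr) = c_k * eps**(2/3) * dr**(2/3). c_k ~ 2 is an
    assumed constant for radial-velocity increments.
    """
    d = np.mean(np.diff(v) ** 2)  # structure function at one lag, dr
    return (d / (c_k * dr ** (2.0 / 3.0))) ** 1.5
```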

  9. Bayesian Monitoring Systems for the CTBT: Historical Development and New Results

    NASA Astrophysics Data System (ADS)

    Russell, S.; Arora, N. S.; Moore, D.

    2016-12-01

    A project at Berkeley, begun in 2009 in collaboration with CTBTO and more recently with LLNL, has reformulated the global seismic monitoring problem in a Bayesian framework. A first-generation system, NETVISA, has been built comprising a spatial event prior and generative models of event transmission and detection, as well as a Monte Carlo inference algorithm. The probabilistic model allows for seamless integration of various disparate sources of information, including negative information (the absence of detections). Working from arrivals extracted by traditional station processing from International Monitoring System (IMS) data, NETVISA achieves a reduction of around 60% in the number of missed events compared with the currently deployed network processing system. It also finds many events that are missed by the human analysts who postprocess the IMS output. Recent improvements include the integration of models for infrasound and hydroacoustic detections and a global depth model for natural seismicity trained from ISC data. NETVISA is now fully compatible with the CTBTO operating environment. A second-generation model called SIGVISA extends NETVISA's generative model all the way from events to raw signal data, avoiding the error-prone bottom-up detection phase of station processing. SIGVISA's model automatically captures the phenomena underlying existing detection and location techniques such as multilateration, waveform correlation matching, and double-differencing, and integrates them into a global inference process that also (like NETVISA) handles de novo events. Initial results for the Western US in early 2008 (when the transportable US Array was operating) show that SIGVISA finds, from IMS data only, more than twice the number of events recorded in the CTBTO Late Event Bulletin (LEB). For mb 1.0-2.5, the ratio is more than 10; put another way, for this data set, SIGVISA lowers the detection threshold by roughly one magnitude compared to the LEB. 
The broader message of this work is that probabilistic inference based on a vertically integrated generative model that directly expresses geophysical knowledge can be a much more effective approach for interpreting scientific data than the traditional bottom-up processing pipeline.
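The value of negative information can be seen in a one-event toy Bayes update: a station that should have detected the event but did not pulls the posterior down. This is only a cartoon of the NETVISA approach, which has priors over location, time, and magnitude plus physical travel-time and amplitude models; all names and probabilities below are illustrative.

```python
def event_posterior(prior, p_det, p_false, observed):
    """P(event | station reports) for one candidate event, assuming
    conditionally independent stations.

    p_det[i]   = P(detection at station i | event occurred)
    p_false[i] = P(detection at station i | no event)
    observed[i] = whether station i actually reported a detection.
    """
    like_event, like_noise = 1.0, 1.0
    for p, q, d in zip(p_det, p_false, observed):
        like_event *= p if d else (1.0 - p)  # non-detections count too
        like_noise *= q if d else (1.0 - q)
    num = prior * like_event
    return num / (num + (1.0 - prior) * like_noise)
```

With prior 0.5, a sensitive station's detection raises the posterior, and its silence lowers it below the prior, which is exactly the "absence of detections" point made above.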

  10. A Low-Cost, Reliable, High-Throughput System for Rodent Behavioral Phenotyping in a Home Cage Environment

    PubMed Central

    Parkison, Steven A.; Carlson, Jay D.; Chaudoin, Tammy R.; Hoke, Traci A.; Schenk, A. Katrin; Goulding, Evan H.; Pérez, Lance C.; Bonasera, Stephen J.

    2016-01-01

    Inexpensive, high-throughput, low maintenance systems for precise temporal and spatial measurement of mouse home cage behavior (including movement, feeding, and drinking) are required to evaluate products from large scale pharmaceutical design and genetic lesion programs. These measurements are also required to interpret results from more focused behavioral assays. We describe the design and validation of a highly-scalable, reliable mouse home cage behavioral monitoring system modeled on a previously described, one-of-a-kind system [1]. Mouse position was determined by solving static equilibrium equations describing the force and torques acting on the system strain gauges; feeding events were detected by a photobeam across the food hopper, and drinking events were detected by a capacitive lick sensor. Validation studies show excellent agreement between mouse position and drinking events measured by the system compared with video-based observation – a gold standard in neuroscience. PMID:23366406
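The static-equilibrium step amounts to a torque balance: for a point load W on a platform supported by load cells, ΣF_i = W and ΣF_i·r_i = W·r, so the animal sits at the force-weighted centroid of the sensor positions. A minimal sketch, where the sensor layout and function name are assumptions rather than the published design:

```python
def mouse_position(forces, positions):
    """Solve the static force/torque balance for a point load.

    forces: tared load-cell readings; positions: (x, y) of each cell.
    Returns the load location as the force-weighted centroid of the
    sensor positions.
    """
    w = sum(forces)
    x = sum(f * px for f, (px, _) in zip(forces, positions)) / w
    y = sum(f * py for f, (_, py) in zip(forces, positions)) / w
    return x, y
```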

  11. Automated terrestrial laser scanning with near-real-time change detection - monitoring of the Séchilienne landslide

    NASA Astrophysics Data System (ADS)

    Kromer, Ryan A.; Abellán, Antonio; Hutchinson, D. Jean; Lato, Matt; Chanut, Marie-Aurelie; Dubois, Laurent; Jaboyedoff, Michel

    2017-05-01

    We present an automated terrestrial laser scanning (ATLS) system with automatic near-real-time change detection processing. The ATLS system was tested on the Séchilienne landslide in France for a 6-week period with data collected at 30 min intervals. The purpose of developing the system was to fill the gap of high-temporal-resolution TLS monitoring studies of earth surface processes and to offer a cost-effective, light, portable alternative to ground-based interferometric synthetic aperture radar (GB-InSAR) deformation monitoring. During the study, we detected the flux of talus, displacement of the landslide and pre-failure deformation of discrete rockfall events. Additionally, we found the ATLS system to be an effective tool in monitoring landslide and rockfall processes despite missing points due to poor atmospheric conditions or rainfall. Furthermore, such a system has the potential to help us better understand a wide variety of slope processes at high levels of temporal detail.
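The change-detection core of such a pipeline compares each point of a new scan against the previous epoch. The brute-force sketch below flags points whose nearest-neighbour distance exceeds a threshold; production TLS processing uses kd-trees and roughness-aware comparisons (e.g. M3C2-style methods), so this is a small-cloud illustration with assumed names.

```python
import numpy as np

def change_mask(reference, scan, threshold):
    """Boolean mask over scan points: True where the nearest reference
    point is farther than threshold (same units as the coordinates).

    reference, scan: (n, 3) arrays from two epochs, assumed already
    co-registered. Brute force, O(n_scan * n_ref): small clouds only.
    """
    d = np.linalg.norm(scan[:, None, :] - reference[None, :, :], axis=-1)
    return d.min(axis=1) > threshold
```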

  12. Provider experiences with negative-pressure wound therapy systems.

    PubMed

    Kaufman-Rivi, Diana; Hazlett, Antoinette C; Hardy, Mary Anne; Smith, Jacquelyn M; Seid, Heather B

    2013-07-01

    MedWatch, the Food and Drug Administration's (FDA's) nationwide adverse event reporting system, serves to monitor device performance after a medical device is approved or cleared for market. Through the MedWatch adverse event reporting system, the FDA receives Medical Device Reports of deaths and serious injuries with negative-pressure wound therapy (NPWT) systems, many of which are used in homes and in extended-care facilities. In response to reported events, this study was conducted to obtain additional information about device issues that healthcare professionals face in these settings, as well as challenges that caregivers might encounter using this technology at home. The study was exploratory and descriptive in nature. The FDA surveyed wound care specialists and professional home healthcare providers to learn about users' experiences with NPWT. In the first phase of the study, a semistructured questionnaire was developed for telephone interviews and self-administration. In the second phase, a web-based survey was adapted from the semistructured instrument. Respondent concerns primarily centered on issues not directly related to the NPWT devices: NPWT prescription, provider education in addition to patient training and appropriate wound management practices, notably ongoing wound assessment, and patient monitoring. Overall, respondents thought that there was a definite benefit to NPWT, regardless of the care setting, and that it was a safe therapy when prescribed and administered appropriately.

  13. Special event discrimination analysis: The TEXAR blind test and identification of the August 16, 1997 Kara Sea event. Final report, 13 September 1995--31 January 1998

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Baumgardt, D.

    1998-03-31

    The International Monitoring System (IMS) for the Comprehensive Test Ban Treaty (CTBT) faces the serious challenge of being able to accurately and reliably identify seismic events in any region of the world. Extensive research has been performed in recent years on developing discrimination techniques which appear to classify seismic events into broad categories of source types, such as nuclear explosion, earthquake, and mine blast. This report examines in detail the effectiveness of regional discrimination procedures, the application of waveform discriminants to Special Event identification, and the issue of discriminant transportability.

  14. Reprogrammable field programmable gate array with integrated system for mitigating effects of single event upsets

    NASA Technical Reports Server (NTRS)

    Ng, Tak-kwong (Inventor); Herath, Jeffrey A. (Inventor)

    2010-01-01

    An integrated system mitigates the effects of a single event upset (SEU) on a reprogrammable field programmable gate array (RFPGA). The system includes (i) a RFPGA having an internal configuration memory, and (ii) a memory for storing a configuration associated with the RFPGA. Logic circuitry programmed into the RFPGA and coupled to the memory reloads a portion of the configuration from the memory into the RFPGA's internal configuration memory at predetermined times. Additional SEU mitigation can be provided by logic circuitry on the RFPGA that monitors and maintains synchronized operation of the RFPGA's digital clock managers.
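The reload strategy described here is commonly called configuration scrubbing. A sketch of the control loop follows; the read/write callbacks are hypothetical stand-ins for the device's configuration-port access, not an API from the patent.

```python
def scrub(read_frame, write_frame, golden, frame_ids):
    """Compare each configuration frame against a golden copy and
    rewrite any frame that differs, so a single-event upset cannot
    persist past one scrub cycle. Returns the ids of repaired frames.
    """
    repaired = []
    for i in frame_ids:
        if read_frame(i) != golden[i]:
            write_frame(i, golden[i])  # reload the known-good frame
            repaired.append(i)
    return repaired
```

In the described system the reload runs at predetermined times, so a timer driving this loop over portions of the configuration would complete the analogy.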

  15. A head-mounted display-based personal integrated-image monitoring system for transurethral resection of the prostate.

    PubMed

    Yoshida, Soichiro; Kihara, Kazunori; Takeshita, Hideki; Fujii, Yasuhisa

    2014-12-01

    The head-mounted display (HMD) is a new image monitoring system. We developed the Personal Integrated-image Monitoring System (PIM System) using the HMD (HMZ-T2, Sony Corporation, Tokyo, Japan) in combination with video splitters and multiplexers as a surgical guide system for transurethral resection of the prostate (TURP). The imaging information obtained from the cystoscope, the transurethral ultrasonography (TRUS), the video camera attached to the HMD, and the patient's vital signs monitor was split and integrated by the PIM System, and a composite image was displayed by the HMD using a four-split screen technique. Wearing the HMD, the lead surgeon and the assistant could simultaneously and continuously monitor the same information displayed by the HMD in an ergonomically efficient posture. Each participant could independently rearrange the images comprising the composite image depending on the current step of the procedure. Two benign prostatic hyperplasia (BPH) patients underwent TURP performed by surgeons guided by this system. In both cases, the TURP procedure was performed successfully, and the postoperative clinical courses included no remarkable adverse events. During the procedure, none of the participants experienced any HMD-wear-related adverse effects or reported any discomfort.

  16. Three years of operational experience from Schauinsland CTBT monitoring station.

    PubMed

    Zähringer, M; Bieringer, J; Schlosser, C

    2008-04-01

    Data from three years of operation of a low-level aerosol sampler and analyzer (RASA) at the Schauinsland monitoring station are reported. The system is part of the International Monitoring System (IMS) for verification of the Comprehensive Nuclear-Test-Ban Treaty (CTBT). The fully automatic system is capable of measuring aerosol-borne gamma emitters with high sensitivity and routinely quantifies 7Be and 212Pb. The system achieved a high level of data availability, 90%, within the reporting period. A daily screening process yielded 66 tentative identifications of verification-relevant radionuclides since the system entered IMS operation in February 2004. Two of these were real events attributable to a plausible source; the remaining 64 cases can consistently be explained by detector background and statistical phenomena. Inter-comparison with data from a weekly sampler operated at the same station shows instabilities in the calibration during the test phase and good agreement since certification of the system.

  17. Analysis of extreme rain and flood events using a regional hydrologically enhanced hydrometeorological system

    NASA Astrophysics Data System (ADS)

    Yucel, Ismail; Onen, Alper

    2013-04-01

    Evidence shows that global warming, or climate change, has a direct influence on changes in precipitation and the hydrological cycle. Extreme weather events such as heavy rainfall and flooding are projected to become much more frequent as the climate warms. Regional hydrometeorological models, which couple the atmosphere with physically based, gridded surface hydrology, provide efficient predictions for extreme hydrological events. Such a modeling system can be used for flood forecasting and warning because it provides continuous monitoring of precipitation over large areas at high spatial resolution. This study examines the performance of the Weather Research and Forecasting (WRF-Hydro) model, which performs terrain, sub-terrain, and channel routing, in producing streamflow from WRF-derived forcing of extreme precipitation events. The capability of the system with different options, such as data assimilation, is tested for a number of flood events observed in basins of the western Black Sea region in Turkey. Rainfall event structures and the associated flood responses are evaluated against gauge and satellite-derived precipitation and measured streamflow values. The modeling system shows skill in capturing the spatial and temporal structure of extreme rainfall events and the resulting flood hydrographs. High-resolution routing modules activated in the model enhance the simulated discharges.

  18. Detecting NEO Impacts using the International Monitoring System

    NASA Astrophysics Data System (ADS)

    Brown, Peter G.; Dube, Kimberlee; Silber, Elizabeth

    2014-11-01

    As part of the verification regime for the Comprehensive Nuclear-Test-Ban Treaty, an International Monitoring System (IMS) consisting of seismic, hydroacoustic, infrasound and radionuclide technologies has been deployed globally beginning in the late 1990s. The infrasound network sub-component of the IMS consists of 47 active stations as of mid-2014. These microbarograph arrays detect coherent infrasonic signals from a range of sources including volcanoes, man-made explosions and bolides. Bolide detections from IMS stations have been reported since ~2000, but with the maturation of the network over the last several years the rate of detections has increased substantially. Presently the IMS performs semi-automated near real-time global event identification on timescales of 6-12 hours as well as analyst-verified event identification with time lags of several weeks. Here we report on infrasound events identified by the IMS between 2010 and 2014 which are likely bolide impacts. Identification in this context refers to an event being included in one of the event bulletins issued by the IMS. In this untargeted study we find that the IMS globally identifies approximately 16 events per year which are likely bolide impacts. Using US Government sensor detections of fireballs released since the beginning of 2014 (as given at http://neo.jpl.nasa.gov/fireballs/ ), we find in a complementary targeted survey that the current IMS system is able to identify ~25% of fireballs with E > 0.1 kT energy. Using all 16 US Government sensor fireballs listed as of July 31, 2014, we are able to detect infrasound from 75% of these events on at least one IMS station. The high ratio of detection to identification is a product of the stricter criteria adopted by the IMS for inclusion in an event bulletin as compared to simple station detection. We discuss energy comparisons between infrasound-estimated energies, based on amplitudes and periods, and estimates provided by US Government sensors. 
Specific impact events of interest will be discussed as well as the utility of the global IMS infrasound system for location and timing of future NEAs detected prior to impact.

  19. Time vs. Money: A Quantitative Evaluation of Monitoring Frequency vs. Monitoring Duration.

    PubMed

    McHugh, Thomas E; Kulkarni, Poonam R; Newell, Charles J

    2016-09-01

    The National Research Council has estimated that over 126,000 contaminated groundwater sites are unlikely to achieve low µg/L clean-up goals in the foreseeable future. At these sites, cost-effective, long-term monitoring schemes are needed in order to understand the long-term changes in contaminant concentrations. Current monitoring optimization schemes rely on site-specific evaluations to optimize groundwater monitoring frequency. However, when using linear regression to estimate the long-term zero-order or first-order contaminant attenuation rate, the effect of monitoring frequency and monitoring duration on the accuracy and confidence for the estimated attenuation rate is not site-specific. For a fixed number of monitoring events, doubling the time between monitoring events (e.g., changing from quarterly monitoring to semi-annual monitoring) will double the accuracy of the estimated attenuation rate. For a fixed monitoring frequency (e.g., semi-annual monitoring), increasing the number of monitoring events by 60% will double the accuracy of the estimated attenuation rate. Combining these two factors, doubling the time between monitoring events (e.g., quarterly monitoring to semi-annual monitoring) while decreasing the total number of monitoring events by 38% will result in no change in the accuracy of the estimated attenuation rate. However, the time required to collect this dataset will increase by 25%. Understanding that the trade-off between monitoring frequency and monitoring duration is not site-specific should simplify the process of optimizing groundwater monitoring frequency at contaminated groundwater sites. © 2016 The Authors. Groundwater published by Wiley Periodicals, Inc. on behalf of National Ground Water Association.
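The scaling described above follows from the textbook standard error of an ordinary-least-squares slope with evenly spaced samples. A minimal sketch reproducing the stated trade-offs, assuming Gaussian residual noise and an illustrative baseline of 20 quarterly events (numbers are not from the study):

```python
import math

def slope_se(n, dt, sigma=1.0):
    """Standard error of an OLS slope for n evenly spaced samples with
    spacing dt and residual noise sigma. Textbook formula:
    SE = sigma / sqrt(sum((t_i - t_bar)^2)), and for evenly spaced t_i
    the sum equals dt^2 * n * (n^2 - 1) / 12."""
    ssx = dt ** 2 * n * (n ** 2 - 1) / 12.0
    return sigma / math.sqrt(ssx)

base = slope_se(n=20, dt=0.25)   # quarterly monitoring, 20 events

# Doubling the spacing (semi-annual) with the same event count
# exactly halves the SE, i.e. doubles the accuracy.
semi = slope_se(n=20, dt=0.5)

# Keeping quarterly spacing but taking 60% more events (32 vs 20)
# also roughly halves the SE.
more = slope_se(n=32, dt=0.25)

# Doubling the spacing while cutting the event count ~40% (12 vs 20)
# leaves the SE nearly unchanged, but the total duration n*dt rises
# from 5.0 to 6.0 years.
combo = slope_se(n=12, dt=0.5)
```

The exact percentages quoted in the abstract (38% fewer events, 25% longer) follow from the same formula with non-integer event counts; integer counts give the nearby values used here.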

  20. Smart acoustic emission system for wireless monitoring of concrete structures

    NASA Astrophysics Data System (ADS)

    Yoon, Dong-Jin; Kim, Young-Gil; Kim, Chi-Yeop; Seo, Dae-Cheol

    2008-03-01

    Acoustic emission (AE) has emerged as a powerful nondestructive tool to detect preexisting defects or to characterize failure mechanisms. Recently, this technique, which is in essence in-situ monitoring of internal damage in materials or structures, has become increasingly popular for monitoring the integrity of large structures. Concrete is one of the most widely used materials for constructing civil structures. From the nondestructive evaluation point of view, many AE signals are generated in concrete structures under loading whether or not crack development is active. It is also necessary to find symptoms of damage propagation before catastrophic failure through continuous monitoring. Therefore, in this work we carried out a practical study to fabricate a compact wireless AE sensor and to develop a diagnosis system. First, this study aims to identify the differences between AE event patterns caused by real damage sources and those caused by other, normal sources. Second, it focuses on developing an acoustic emission diagnosis system for assessing the deterioration of concrete structures such as bridges, dams, building slabs, and tunnels. Third, a wireless acoustic emission system was developed for application to the monitoring of concrete structures. From previous laboratory studies, such as AE event pattern analysis under various loading conditions, we confirmed that AE analysis provides a promising approach for estimating the condition of damage and distress in concrete structures. In this work, an algorithm for determining the damage status of concrete structures was developed and typical criteria for decision making were also suggested. For future wireless monitoring applications, a low-power, compact, and robust wireless acoustic emission sensor module was developed and applied to a concrete beam for performance testing. 
Finally, based on the self-developed diagnosis algorithm and the compact wireless AE sensor, a new AE system for practical diagnosis was demonstrated for assessing damage and distress in concrete structures.

  1. Continuous event monitoring via a Bayesian predictive approach.

    PubMed

    Di, Jianing; Wang, Daniel; Brashear, H Robert; Dragalin, Vladimir; Krams, Michael

    2016-01-01

    In clinical trials, continuous monitoring of event incidence rate plays a critical role in making timely decisions affecting trial outcome. For example, continuous monitoring of adverse events protects the safety of trial participants, while continuous monitoring of efficacy events helps identify early signals of efficacy or futility. Because the endpoint of interest is often the event incidence associated with a given length of treatment duration (e.g., incidence proportion of an adverse event with 2 years of dosing), assessing the event proportion before reaching the intended treatment duration becomes challenging, especially when the event onset profile evolves over time with accumulated exposure. In particular, in the earlier part of the study, ignoring censored subjects may result in significant bias in estimating the cumulative event incidence rate. Such a problem is addressed using a predictive approach in the Bayesian framework. In the proposed approach, experts' prior knowledge about both the frequency and timing of the event occurrence is combined with observed data. More specifically, during any interim look, each event-free subject will be counted with a probability that is derived using prior knowledge. The proposed approach is particularly useful in early stage studies for signal detection based on limited information. But it can also be used as a tool for safety monitoring (e.g., data monitoring committee) during later stage trials. Application of the approach is illustrated using a case study where the incidence rate of an adverse event is continuously monitored during an Alzheimer's disease clinical trial. The performance of the proposed approach is also assessed and compared with other Bayesian and frequentist methods via simulation. Copyright © 2015 John Wiley & Sons, Ltd.
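The core counting idea, weighting each censored, event-free subject by the prior probability of a later event, can be sketched as follows. The exponential onset-time profile, the function name, and all parameter values are illustrative assumptions, not the authors' exact model:

```python
import math

def predicted_event_count(observed_events, censored_times, horizon,
                          p_event, tau):
    """Expected number of events by the intended treatment duration
    (horizon). Each event-free subject censored at time t is counted
    with the prior probability that the event occurs in (t, horizon],
    derived here from an assumed exponential onset profile with mean
    tau, where p_event is the prior probability a subject has the
    event at all by the horizon."""
    # Onset-time CDF, normalized so F(horizon) = 1
    F = lambda x: (1 - math.exp(-x / tau)) / (1 - math.exp(-horizon / tau))
    expected = float(observed_events)
    for t in censored_times:
        # P(event by horizon | no event by t) under the prior
        p_cond = p_event * (1 - F(t)) / (1 - p_event * F(t))
        expected += p_cond
    return expected
```

Dividing the result by the number of enrolled subjects gives an interim estimate of the cumulative incidence proportion; subjects censored later in follow-up contribute progressively less, which is the bias correction the abstract describes.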

  2. A Review of Wearable Technologies for Elderly Care that Can Accurately Track Indoor Position, Recognize Physical Activities and Monitor Vital Signs in Real Time

    PubMed Central

    Wang, Zhihua; Yang, Zhaochu; Dong, Tao

    2017-01-01

    Rapid growth of the aged population has caused an immense increase in the demand for healthcare services. Generally, the elderly are more prone to health problems compared to other age groups. With effective monitoring and alarm systems, the adverse effects of unpredictable events such as sudden illnesses, falls, and so on can be ameliorated to some extent. Recently, advances in wearable and sensor technologies have improved the prospects of these service systems for assisting elderly people. In this article, we review state-of-the-art wearable technologies that can be used for elderly care. These technologies are categorized into three types: indoor positioning, activity recognition and real time vital sign monitoring. Positioning is the process of accurate localization and is particularly important for elderly people so that they can be found in a timely manner. Activity recognition not only helps ensure that sudden events (e.g., falls) will raise alarms but also functions as a feasible way to guide people’s activities so that they avoid dangerous behaviors. Since most elderly people suffer from age-related problems, some vital signs that can be monitored comfortably and continuously via existing techniques are also summarized. Finally, we discuss a series of considerations and future trends with regard to the construction of a “smart clothing” system. PMID:28208620

  3. A Review of Wearable Technologies for Elderly Care that Can Accurately Track Indoor Position, Recognize Physical Activities and Monitor Vital Signs in Real Time.

    PubMed

    Wang, Zhihua; Yang, Zhaochu; Dong, Tao

    2017-02-10

    Rapid growth of the aged population has caused an immense increase in the demand for healthcare services. Generally, the elderly are more prone to health problems compared to other age groups. With effective monitoring and alarm systems, the adverse effects of unpredictable events such as sudden illnesses, falls, and so on can be ameliorated to some extent. Recently, advances in wearable and sensor technologies have improved the prospects of these service systems for assisting elderly people. In this article, we review state-of-the-art wearable technologies that can be used for elderly care. These technologies are categorized into three types: indoor positioning, activity recognition and real time vital sign monitoring. Positioning is the process of accurate localization and is particularly important for elderly people so that they can be found in a timely manner. Activity recognition not only helps ensure that sudden events (e.g., falls) will raise alarms but also functions as a feasible way to guide people's activities so that they avoid dangerous behaviors. Since most elderly people suffer from age-related problems, some vital signs that can be monitored comfortably and continuously via existing techniques are also summarized. Finally, we discuss a series of considerations and future trends with regard to the construction of a "smart clothing" system.

  4. Towards a Satellite-Based Near Real-Time Monitoring System for Water Quality; September 27th 2017

    EPA Science Inventory

    Declining water quality in inland and coastal systems has become, and will continue to be, a major environmental, social and economic problem as human populations increase, agricultural activities expand, and climate change effects on hydrological cycles and extreme events become...

  5. The Edison Environmental Center Permeable Pavement Site: Initial Results from a Stormwater Control Designed for Monitoring - Paper

    EPA Science Inventory

    There exist few detailed studies of full-scale, replicated, actively-used permeable pavement systems. Practitioners need additional studies of permeable pavement systems in its intended application (parking lot, roadway, etc.) across a range of climatic events, daily usage condit...

  6. The Edison Environmental Center Permeable Pavement Site: Initial Results from a Stormwater Control Designed for Monitoring - Slides

    EPA Science Inventory

    There exist few detailed studies of full-scale, replicated, actively-used permeable pavement systems. Practitioners need additional studies of permeable pavement systems in its intended application (parking lot, roadway, etc.) across a range of climatic events, daily usage condit...

  7. Monitoring the Microgravity Environment Quality On-board the International Space Station Using Soft Computing Techniques. Part 2; Preliminary System Performance Results

    NASA Technical Reports Server (NTRS)

    Jules, Kenol; Lin, Paul P.; Weiss, Daniel S.

    2002-01-01

    This paper presents preliminary performance results for the artificial intelligence monitoring system in full operational mode, using near real-time acceleration data downlinked from the International Space Station. A preliminary microgravity environment characterization analysis for the International Space Station (Increment-2), obtained using the monitoring system, is presented. Also presented is a comparison between the system's predicted performance, based on ground test data for the U.S. laboratory module "Destiny", and actual on-orbit performance, using measured acceleration data from the U.S. laboratory module of the International Space Station. Finally, preliminary on-orbit disturbance magnitude levels are presented for the Experiment of Physics of Colloids in Space and compared with ground test data. The ground test data for the Experiment of Physics of Colloids in Space were acquired from the Microgravity Emission Laboratory at the NASA Glenn Research Center, Cleveland, Ohio. The artificial intelligence system was developed by the NASA Glenn Principal Investigator Microgravity Services Project to help principal investigator teams identify the primary vibratory disturbance sources active at any moment on board the International Space Station that might impact the microgravity environment to which their experiments are exposed. From the Principal Investigator Microgravity Services web site, the principal investigator teams can monitor, via a dynamic graphical display implemented in Java, in near real time, which events are active (crew activities, pumps, fans, centrifuges, compressors, crew exercise, structural modes, etc.) and decide whether or not to run their experiments, whenever that is an option, based on the acceleration magnitude and frequency sensitivity associated with each experiment. This monitoring system primarily detects vibratory disturbance sources. 
The system has built-in capability to detect both known and unknown vibratory disturbance sources. Several soft computing techniques, including Kohonen's Self-Organizing Feature Map, Learning Vector Quantization, back-propagation neural networks, and fuzzy logic, were used to design the system.
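Of the techniques listed, Learning Vector Quantization ultimately classifies an observed feature vector by its nearest labeled prototype. A minimal nearest-prototype sketch of that classification step; the disturbance labels and feature values are illustrative assumptions, not the system's actual prototypes:

```python
def classify_disturbance(feature, prototypes):
    """Return the label of the prototype closest (squared Euclidean
    distance) to the observed feature vector, e.g. (peak frequency in
    Hz, normalized amplitude). In LVQ the prototype vectors themselves
    would first be learned from labeled training data."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(prototypes, key=lambda label: dist2(feature, prototypes[label]))

# Illustrative prototypes for two on-board disturbance sources
prototypes = {"pump": [60.0, 0.2], "crew_exercise": [2.5, 1.0]}
```

An unknown disturbance would correspond to a feature vector far from every learned prototype, which is one way such a system can flag sources it was not trained on.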

  8. Monitoring the CMS strip tracker readout system

    NASA Astrophysics Data System (ADS)

    Mersi, S.; Bainbridge, R.; Baulieu, G.; Bel, S.; Cole, J.; Cripps, N.; Delaere, C.; Drouhin, F.; Fulcher, J.; Giassi, A.; Gross, L.; Hahn, K.; Mirabito, L.; Nikolic, M.; Tkaczyk, S.; Wingham, M.

    2008-07-01

    The CMS Silicon Strip Tracker at the LHC comprises a sensitive area of approximately 200 m2 and 10 million readout channels. Its data acquisition system is based around a custom analogue front-end chip. Both the control and the readout of the front-end electronics are performed by off-detector VME boards in the counting room, which digitise the raw event data and perform zero-suppression and formatting. The data acquisition system uses the CMS online software framework to configure, control and monitor the hardware components and steer the data acquisition. The first data analysis is performed online within the official CMS reconstruction framework, which provides many services, such as distributed analysis, access to geometry and conditions data, and a Data Quality Monitoring tool based on the online physics reconstruction. The data acquisition monitoring of the Strip Tracker uses both the data acquisition and the reconstruction software frameworks in order to provide real-time feedback to shifters on the operational state of the detector, to archive data for later analysis, and possibly to trigger automatic recovery actions in case of errors. Here we review the proposed architecture of the monitoring system, describe its software components, which are already in place, and the various monitoring streams available, and report our experience of operating and monitoring a large-scale system.

  9. Assessment of SRS ambient air monitoring network

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Abbott, K.; Jannik, T.

    Three methodologies have been used to assess the effectiveness of the existing ambient air monitoring system in place at the Savannah River Site (SRS) in Aiken, SC. Effectiveness was measured using two metrics that have been utilized in previous quantifications of air-monitoring network performance: frequency of detection (a measurement of how frequently a minimum number of samplers within the network detect an event) and network intensity (a measurement of how consistent each sampler within the network is at detecting events). In addition to determining the effectiveness of the current system, the objective of this assessment was to determine what, if any, changes could make the system more effective. The methodologies included (1) the Waite method of determining sampler distribution, (2) the CAP88-PC annual dose model, and (3) a puff/plume transport model used to predict air concentrations at sampler locations. Comparing data collected from air samplers at SRS in 2015 with the predictions of these methodologies determined that the frequency of detection for the current system is 79.2%, with sampler efficiencies ranging from 5% to 45% and a mean network intensity of 21.5%. One of the air monitoring stations had an efficiency of less than 10% and detected releases during just one sampling period of the entire year, adding little to the overall network intensity. By moving or removing this sampler, the mean network intensity increased to about 23%. Further work on increasing the network intensity and simulating accident scenarios to further test the ambient air system at SRS is planned.
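Both metrics can be computed directly from a matrix of per-event sampler detections. A minimal sketch under that assumption; the detection data and the function name are illustrative, not SRS values:

```python
def network_metrics(detections, min_samplers=1):
    """detections: one list of booleans per event, one flag per sampler.
    Returns (frequency_of_detection, per-sampler efficiencies,
    mean network intensity).
    - frequency of detection: fraction of events seen by at least
      min_samplers samplers
    - sampler efficiency: fraction of events each sampler detects
    - network intensity: mean of the sampler efficiencies"""
    n_events = len(detections)
    n_samplers = len(detections[0])
    freq = sum(1 for ev in detections if sum(ev) >= min_samplers) / n_events
    eff = [sum(ev[j] for ev in detections) / n_events
           for j in range(n_samplers)]
    intensity = sum(eff) / n_samplers
    return freq, eff, intensity
```

Removing a consistently low-efficiency sampler raises the mean of the remaining efficiencies, which is the effect the assessment reports for the under-performing station.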

  10. Agents for Plan Monitoring and Repair

    DTIC Science & Technology

    2003-04-01

    events requires time and effort. In this paper, we describe how Heracles and Theseus, two information gathering and monitoring tools that we built...on an information agent platform, called Theseus, that provides the technology for efficiently executing agents for information gathering and...we can easily define a system for interactively planning a trip. The second is the Theseus information agent platform [Barish et al., 2000], which

  11. Science Goal Driven Observing: A Step Towards Maximizing Science Returns and Spacecraft Autonomy

    NASA Technical Reports Server (NTRS)

    Koratkar, Anuradha; Grosvenor, Sandy; Jones, Jeremy; Memarsadeghi, Nargess; Wolf, Karl

    2002-01-01

    In the coming decade, the drive to increase the scientific returns on capital investment and to reduce costs will force automation to be implemented in many of the scientific tasks that have traditionally been manually overseen. Thus, spacecraft autonomy will become an even greater part of mission operations. While recent missions have made great strides in the ability to autonomously monitor and react to the changing health and physical status of spacecraft, little progress has been made in responding quickly to science-driven events. The new generation of space-based telescopes/observatories will see deeper, with greater clarity, and they will generate data at an unprecedented rate. Yet, while onboard data processing and storage capability will increase rapidly, bandwidth for downloading data will not increase as fast and can become a significant bottleneck and cost of a science program. For observations of inherently variable targets and targets of opportunity, the ability to recognize early that an observation will not meet the science goals of variability or minimum brightness, and to react accordingly, can have a major positive impact on the overall scientific returns of an observatory and on its operational costs. If the observatory can reprioritize the schedule to focus on alternate targets, discard uninteresting observations prior to downloading, or download them at a reduced resolution, its overall efficiency will be dramatically increased. We are investigating and developing tools for a science goal monitoring (SGM) system. The SGM will have an interface to help capture higher-level science goals from scientists and translate them into a flexible observing strategy that SGM can execute and monitor. SGM will then monitor the incoming data stream and interface with data processing systems to recognize significant events. 
When an event occurs, the system will use the science goals given to it to reprioritize observations, and react appropriately and/or communicate with ground systems - both human and machine - for confirmation and/or further high-priority analyses.

  12. Science Goal Monitor: Science Goal Driven Automation for NASA Missions

    NASA Technical Reports Server (NTRS)

    Koratkar, Anuradha; Grosvenor, Sandy; Jung, John; Pell, Melissa; Matusow, David; Bailyn, Charles

    2004-01-01

    Infusion of automation technologies into NASA's future missions will be essential because of the need to: (1) effectively handle an exponentially increasing volume of scientific data, (2) successfully meet dynamic, opportunistic scientific goals and objectives, and (3) substantially reduce mission operations staff and costs. While much effort has gone into automating routine spacecraft operations to reduce human workload and hence costs, applying intelligent automation to the science side, i.e., science data acquisition, data analysis and reaction to that data analysis in a timely and still scientifically valid manner, has been relatively under-emphasized. In order to introduce science-driven automation in missions, we must be able to capture and interpret the science goals of observing programs, represent those goals in a machine-interpretable language, and allow a spacecraft's onboard systems to react autonomously to the scientist's goals. In short, we must teach our platforms to dynamically understand, recognize, and react to the scientists' goals. The Science Goal Monitor (SGM) project at NASA Goddard Space Flight Center is a prototype software tool being developed to determine the best strategies for implementing science goal driven automation in missions. The tools being developed in SGM improve the ability to monitor and react to the changing status of scientific events. The SGM system enables scientists to specify what to look for and how to react in descriptive rather than technical terms. The system monitors streams of science data to identify occurrences of key events previously specified by the scientist. When an event occurs, the system autonomously coordinates the execution of the scientist's desired reactions. 
Through SGM, we will improve our understanding of the capabilities needed onboard for success, develop metrics to understand the potential increase in science returns, and develop an operational prototype so that the perceived risks associated with increased use of automation can be reduced.

  13. Geodetic Space Weather Monitoring by means of Ionosphere Modelling

    NASA Astrophysics Data System (ADS)

    Schmidt, Michael

    2017-04-01

    The term space weather denotes physical processes and phenomena in space caused by radiation of energy, mainly from the Sun. Manifestations of space weather are (1) variations of the Earth's magnetic field, (2) the polar lights in the northern and southern hemispheres, (3) variations within the ionosphere, the part of the upper atmosphere characterized by the existence of free electrons and ions, (4) the solar wind, i.e. the permanent emission of electrons and photons, (5) the interplanetary magnetic field, and (6) electric currents, e.g. in the Van Allen radiation belts. Ionosphere disturbances are often caused by so-called solar storms. A solar storm comprises solar events such as solar flares and coronal mass ejections (CMEs), which have different effects on the Earth. Solar flares may cause disturbances in positioning, navigation and communication. CMEs can cause severe disturbances and, in extreme cases, damage or even destruction of modern infrastructure. Examples are interruptions to satellite services, including the global navigation satellite systems (GNSS), communication systems, Earth observation and imaging systems, or a potential failure of power networks. Currently, measurements from solar satellite missions such as STEREO and SOHO are used to forecast solar events. Besides these measurements, the Earth's ionosphere plays another key role in monitoring space weather, because it responds to solar storms with an increase of the electron density. Space-geodetic observation techniques, such as terrestrial GNSS, satellite altimetry, space-borne GPS (radio occultation), DORIS and VLBI, provide valuable global information about the state of the ionosphere. Additionally, geodesy has a long history and extensive experience in developing and using sophisticated analysis and combination techniques as well as empirical and physical modelling approaches. 
Consequently, geodesy is predestined to strongly support space weather monitoring by modelling the ionosphere and detecting and forecasting its disturbances. At present a number of nations, such as the US, UK, Japan, Canada and China, are taking the threats from extreme space weather events seriously and support the development of observing strategies and fundamental research. However, (extreme) space weather events are, in all their consequences for a modern, highly technologized society, global problems which have to be treated globally rather than regionally or even nationally. Consequently, space weather monitoring must include (1) all space-geodetic observation techniques and (2) geodetic evaluation methods such as data combination, real-time modelling and forecasting. In other words, geodetic space weather monitoring embodies the basic ideas of GGOS and will provide products such as forecasts of severe solar events in order to initiate the activities necessary to protect the infrastructure of modern society.

  14. A Systematic Review of Economic Evaluations of Pacemaker Telemonitoring Systems.

    PubMed

    López-Villegas, Antonio; Catalán-Matamoros, Daniel; Martín-Saborido, Carlos; Villegas-Tripiana, Irene; Robles-Musso, Emilio

    2016-02-01

    Over the last decade, telemedicine applied to pacemaker monitoring has undergone extraordinary growth. It is not known if telemonitoring is more or less efficient than conventional monitoring. The aim of this study was to carry out a systematic review analyzing the available evidence on resource use and health outcomes in both follow-up modalities. We searched 11 databases and included studies published up until November 2014. The inclusion criteria were: a) experimental or observational design; b) studies based on complete economic evaluations; c) patients with pacemakers, and d) telemonitoring compared with conventional hospital monitoring. Seven studies met the inclusion criteria, providing information on 2852 patients, with a mean age of 81 years. The main indication for device implantation was atrioventricular block. With telemonitoring, cardiovascular events were detected and treated 2 months earlier than with conventional monitoring, thus reducing length of hospital stay by 34% and reducing routine and emergency hospital visits as well. There were no significant intergroup differences in perceived quality of life or number of adverse events. The cost of telemonitoring was 60% lower than that of conventional hospital monitoring. Compared with conventional monitoring, cardiovascular events were detected earlier and the number of hospitalizations and hospital visits was reduced with pacemaker telemonitoring. In addition, the costs associated with follow-up were lower with telemonitoring. Copyright © 2015 Sociedad Española de Cardiología. Published by Elsevier España, S.L.U. All rights reserved.

  15. Potential of turbidity monitoring for real time control of pollutant discharge in sewers during rainfall events.

    PubMed

    Lacour, C; Joannis, C; Gromaire, M-C; Chebbo, G

    2009-01-01

    Turbidity sensors can be used to continuously monitor the evolution of pollutant mass discharge. For two sites within the Paris combined sewer system, continuous turbidity, conductivity and flow data were recorded at one-minute time intervals over a one-year period. This paper is intended to highlight the variability in turbidity dynamics during wet weather. For each storm event, turbidity response aspects were analysed through different classifications. The correlation between classification and common parameters, such as the antecedent dry weather period, total event volume per impervious hectare and both the mean and maximum hydraulic flow for each event, was also studied. Moreover, the dynamics of flow and turbidity signals were compared at the event scale. No simple relation between turbidity responses, hydraulic flow dynamics and the chosen parameters was derived from this effort. Knowledge of turbidity dynamics could therefore potentially improve wet weather management, especially when using pollution-based real-time control (P-RTC) since turbidity contains information not included in hydraulic flow dynamics and not readily predictable from such dynamics.

  16. Comparing near-regional and local measurements of infrasound from Mount Erebus, Antarctica: Implications for monitoring

    NASA Astrophysics Data System (ADS)

    Dabrowa, A. L.; Green, D. N.; Johnson, J. B.; Phillips, J. C.; Rust, A. C.

    2014-11-01

    Local (100 s of metres from vent) monitoring of volcanic infrasound is a common tool at volcanoes characterized by frequent low-magnitude eruptions, but it is generally not safe or practical to have sensors so close to the vent during more intense eruptions. To investigate the potential and limitations of monitoring at near-regional ranges (10 s of km) we studied infrasound detection and propagation at Mount Erebus, Antarctica. This site has both a good local monitoring network and an additional International Monitoring System infrasound array, IS55, located 25 km away. We compared data recorded at IS55 with a set of 117 known Strombolian events that were recorded with the local network in January 2006. 75% of these events were identified at IS55 by an analyst looking for a pressure transient coincident with an F-statistic detection, which identifies coherent infrasound signals. With the data from January 2006, we developed and calibrated an automated signal-detection algorithm based on threshold values of both the F-statistic and the correlation coefficient. Application of the algorithm across IS55 data for all of 2006 identified infrasonic signals expected to be Strombolian explosions, and proved reliable for indicating trends in eruption frequency. However, detectability at IS55 of known Strombolian events depended strongly on the local signal amplitude: 90% of events with local amplitudes > 25 Pa were identified at IS55, compared to only 26% of events with local amplitudes < 25 Pa. Event detection was also affected by considerable variation in amplitude decay rates between the local and near-regional sensors. Amplitudes recorded at IS55 varied between 3% and 180% of the amplitude expected assuming hemispherical spreading, indicating that amplitudes recorded at near-regional ranges to Erebus are unreliable indicators of event magnitude. 
Comparing amplitude decay rates with locally collected radiosonde data indicates a close relationship between recorded amplitude and the effective sound speed structure of the lower atmosphere. At times of increased sound speed gradient, higher amplitude decay rates are observed, consistent with increased upward refraction of acoustic energy along the propagation path. This study indicates that whilst monitoring activity levels at near-regional ranges can be successful, the variable amplitude decay rate means that quantitative analysis of infrasound data for eruption intensity and magnitude is not advisable without consideration of the local atmospheric sound speed structure.
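The two-threshold detection scheme described in this abstract (an event is declared only when both the F-statistic and the correlation coefficient exceed calibrated values) can be sketched as follows; the thresholds and the per-window statistics are illustrative placeholders, not the calibrated values from the study.

```python
# Sketch of a two-threshold infrasound event detector, loosely following the
# approach described above. The threshold values are illustrative, not the
# calibrated values from the Erebus study.

def detect_events(windows, f_threshold=2.0, corr_threshold=0.5):
    """Return indices of analysis windows whose F-statistic AND
    correlation coefficient both exceed their thresholds."""
    detections = []
    for i, (f_stat, corr) in enumerate(windows):
        if f_stat >= f_threshold and corr >= corr_threshold:
            detections.append(i)
    return detections

# Example: four analysis windows as (F-statistic, correlation) pairs.
windows = [(0.8, 0.2), (3.1, 0.7), (2.5, 0.3), (4.0, 0.9)]
print(detect_events(windows))  # windows 1 and 3 pass both tests
```

Requiring both statistics to pass mirrors the study's finding that an F-statistic detection alone is not enough: the correlation test suppresses incoherent noise that happens to produce a high F value.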

  17. Simulation of Greenhouse Climate Monitoring and Control with Wireless Sensor Network and Event-Based Control

    PubMed Central

    Pawlowski, Andrzej; Guzman, Jose Luis; Rodríguez, Francisco; Berenguel, Manuel; Sánchez, José; Dormido, Sebastián

    2009-01-01

Monitoring and control of the greenhouse environment play a decisive role in greenhouse production processes. Assurance of optimal climate conditions has a direct influence on crop growth performance, but it usually increases the required equipment cost. Traditionally, greenhouse installations have required a great effort to connect and distribute all the sensors and data acquisition systems. These installations need many data and power wires to be distributed along the greenhouses, making the system complex and expensive. For this reason, and others such as the unavailability of distributed actuators, individual sensors are usually located at a fixed point that is selected as representative of the overall greenhouse dynamics. On the other hand, the actuation system in greenhouses is usually composed of mechanical devices controlled by relays, and it is desirable to reduce the number of commutations of the control signals from safety and economic points of view. To address these drawbacks, this paper describes how greenhouse climate control can be represented as an event-based system in combination with wireless sensor networks, where variables with low-frequency dynamics have to be controlled and control actions are mainly calculated in response to events produced by external disturbances. The proposed control system saves costs by minimizing wear and prolonging actuator life, while maintaining promising performance. Analysis and conclusions are given by means of simulation results. PMID:22389597
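The event-based control idea described above, where a new control action is computed only when a disturbance drives a monitored variable outside a tolerance band rather than at every sampling instant, can be sketched as a simple send-on-delta trigger; the temperature values and band width below are hypothetical.

```python
# Minimal send-on-delta event trigger: a control update fires only when the
# measured climate variable drifts more than `delta` from its value at the
# last update, reducing actuator commutations between events.

def event_triggered_updates(samples, delta):
    """Return the indices at which a control update would fire."""
    updates = [0]                 # always act on the first sample
    last = samples[0]
    for i, x in enumerate(samples[1:], start=1):
        if abs(x - last) > delta:
            updates.append(i)     # disturbance exceeded the band: act
            last = x
    return updates

# Greenhouse temperature drifting slowly, then hit by a disturbance:
temps = [20.0, 20.2, 20.3, 22.5, 22.6, 22.4]
print(event_triggered_updates(temps, delta=1.0))  # fires at samples 0 and 3
```

With a periodic controller the relays would commutate at all six samples; here they act only twice, which is the wear saving the abstract refers to.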

  18. Simulation of greenhouse climate monitoring and control with wireless sensor network and event-based control.

    PubMed

    Pawlowski, Andrzej; Guzman, Jose Luis; Rodríguez, Francisco; Berenguel, Manuel; Sánchez, José; Dormido, Sebastián

    2009-01-01

Monitoring and control of the greenhouse environment play a decisive role in greenhouse production processes. Assurance of optimal climate conditions has a direct influence on crop growth performance, but it usually increases the required equipment cost. Traditionally, greenhouse installations have required a great effort to connect and distribute all the sensors and data acquisition systems. These installations need many data and power wires to be distributed along the greenhouses, making the system complex and expensive. For this reason, and others such as the unavailability of distributed actuators, individual sensors are usually located at a fixed point that is selected as representative of the overall greenhouse dynamics. On the other hand, the actuation system in greenhouses is usually composed of mechanical devices controlled by relays, and it is desirable to reduce the number of commutations of the control signals from safety and economic points of view. To address these drawbacks, this paper describes how greenhouse climate control can be represented as an event-based system in combination with wireless sensor networks, where variables with low-frequency dynamics have to be controlled and control actions are mainly calculated in response to events produced by external disturbances. The proposed control system saves costs by minimizing wear and prolonging actuator life, while maintaining promising performance. Analysis and conclusions are given by means of simulation results.

  19. Monitoring Drought Conditions in the Navajo Nation Using NASA Earth Observations

    NASA Technical Reports Server (NTRS)

    Ly, Vickie; Gao, Michael; Cary, Cheryl; Turnbull-Appell, Sophie; Surunis, Anton

    2016-01-01

The Navajo Nation, a 65,700 sq km Native American territory located in the southwestern United States, has been increasingly impacted by severe drought events and changes in climate. These events are coupled with a lack of domestic water infrastructure and economic resources, leaving approximately one-third of the population without access to potable water in their homes. Current methods of monitoring drought depend on state-based monthly Standardized Precipitation Index value maps calculated by the Western Regional Climate Center. However, these maps do not provide the spatial resolution needed to illustrate differences in drought severity across the vast Nation. To better understand and monitor drought events and drought regime changes in the Navajo Nation, this project created a geodatabase of historical climate information specific to the area, and a decision support tool to calculate average Standardized Precipitation Index values for user-specified areas. The tool and geodatabase use Tropical Rainfall Measuring Mission (TRMM) and Global Precipitation Measurement (GPM) observed precipitation data and Parameter-elevation Relationships on Independent Slopes Model modeled historical precipitation data, as well as NASA's modeled Land Data Assimilation Systems deep soil moisture, evaporation, and transpiration data products. The geodatabase and decision support tool will allow resource managers in the Navajo Nation to utilize current and future NASA Earth observation data for increased decision-making capacity regarding future climate change impacts on water resources.
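As a rough illustration of the index the decision support tool computes: the Standardized Precipitation Index expresses a precipitation total as a standardized anomaly of the historical record. The operational SPI fits a gamma distribution before standardizing; the sketch below uses a plain z-score as a simplified stand-in, with made-up numbers.

```python
# Illustrative standardized-precipitation calculation. The operational SPI
# fits a gamma distribution to the precipitation record and transforms it to
# a standard normal; this sketch uses a simple z-score as a stand-in to show
# the shape of the computation.
import statistics

def standardized_index(record, current):
    """(current - mean) / sample stdev of the historical record."""
    mu = statistics.mean(record)
    sigma = statistics.stdev(record)
    return (current - mu) / sigma

history = [80.0, 95.0, 100.0, 105.0, 120.0]   # e.g. monthly totals, mm
print(round(standardized_index(history, 60.0), 2))  # dry month -> negative
```

Averaging such values over a user-drawn polygon, rather than a whole state, is what gives the tool the finer spatial resolution the abstract calls for.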

  20. Tsunami Forecasting and Monitoring in New Zealand

    NASA Astrophysics Data System (ADS)

    Power, William; Gale, Nora

    2011-06-01

New Zealand is exposed to tsunami threats from several sources that vary significantly in their potential impact and travel time. One route for reducing the risk from these tsunami sources is to provide advance warning based on forecasting and monitoring of events in progress. In this paper the National Tsunami Warning System framework, including the responsibilities of key organisations and the procedures that they follow in the event of a tsunami threatening New Zealand, is summarised. A method for forecasting threat levels based on tsunami models is presented, similar in many respects to that developed for Australia by Allen and Greenslade (Nat Hazards 46:35-52, 2008), together with a simple system for easy access to the threat-level forecasts using a clickable pdf file. Once a tsunami enters or initiates within New Zealand waters, its progress and evolution can be monitored in real time using a newly established network of online tsunami gauge sensors placed at strategic locations around the New Zealand coasts and offshore islands. Information from these gauges can be used to validate and revise forecasts, and assist in making the all-clear decision.

  1. High resolution solar observations in the context of space weather prediction

    NASA Astrophysics Data System (ADS)

    Yang, Guo

Space weather has a great impact on the Earth and human life. It is important to study and monitor active regions on the solar surface and ultimately to predict space weather based on the Sun's activity. In this study, a system that uses the full power of speckle masking imaging by parallel processing to obtain high-spatial-resolution images of the solar surface in near real-time has been developed and built. The application of this system greatly improves the ability to monitor the evolution of solar active regions and to predict the adverse effects of space weather. The data obtained by this system have also been used to study fine structures on the solar surface and their effects on the upper solar atmosphere. A solar active region has been studied using high resolution data obtained by speckle masking imaging. The evolution of a pore in an active region is presented, the formation of a rudimentary penumbra is studied, and the effects of changing magnetic fields on the upper atmosphere are discussed. Coronal Mass Ejections (CMEs) have a great impact on space weather. To study the relationship between CMEs and filament disappearance, a list of 431 filament and prominence disappearance events has been compiled. Comparison of this list with CME data obtained by satellite has shown that most filament disappearances seem to have no corresponding CME events. Even for the limb events, only thirty percent of filament disappearances are associated with CMEs. A CME event observed on March 20, 2000 has been studied in detail. This event did not show the three-part structure of typical CMEs. The kinematical and morphological properties of this event were examined.

  2. Assessing the Utility of a Satellite-Based Flood Inundation and Socio-Economic Impact Tool for the Lower Mekong River Basin

    NASA Astrophysics Data System (ADS)

    Ahamed, A.; Bolten, J. D.

    2016-12-01

Flood disaster events in Southeast Asia result in significant loss of life and economic damage. Remote sensing information systems designed to monitor floods and assess their severity can help governments and international agencies formulate an effective response before and during flood events, and ultimately alleviate impacts to population, infrastructure, and agriculture. Recent examples of destructive flood events in the Lower Mekong River Basin occurred in 2000, 2011, and 2013. Floods can be particularly costly in the developing countries of Southeast Asia where large portions of the population live on or near the floodplain (Jonkman, 2005; Kirsch et al., 2012; Long and Trong, 2001; Stromberg, 2007). Regional studies (Knox, 1993; Mirza, 2002; Schiermeier, 2011; Västilä et al., 2010) and Intergovernmental Panel on Climate Change (IPCC, 2007) projections suggest that precipitation extremes and flood frequency are increasing. Thus, improved systems to rapidly monitor flooding in vulnerable areas are needed. This study determines surface water extent for current and historic flood events by using stacks of historic multispectral Moderate-resolution Imaging Spectroradiometer (MODIS) 250-meter imagery and the spectral Normalized Difference Vegetation Index (NDVI) signatures of permanent water bodies (MOD44W). Supporting software tools automatically assess flood impacts to population and infrastructure to provide a rapid first set of impact numbers generated hours after the onset of an event. The near real-time component uses twice-daily imagery acquired at 3-hour latency, and performs image compositing routines to minimize cloud cover. Case studies for historic flood events are presented. Results suggest that near real-time remote sensing-based observation and impact assessment systems can serve as effective regional decision support tools for governments, international agencies, and disaster responders.
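The surface-water mapping step rests on the fact that open water reflects very little in the near-infrared, so its NDVI is near or below zero. A minimal sketch of that test, with an illustrative threshold rather than the one used in the study:

```python
# Sketch of NDVI-based water detection from red and near-infrared (NIR)
# reflectances. The threshold of 0.0 is an illustrative choice, not the
# value calibrated against the MOD44W permanent-water signatures.

def ndvi(nir, red):
    """Normalized Difference Vegetation Index."""
    return (nir - red) / (nir + red)

def is_water(nir, red, threshold=0.0):
    """Flag a pixel as water when its NDVI falls below the threshold."""
    return ndvi(nir, red) < threshold

pixels = [(0.05, 0.10), (0.45, 0.10)]  # (NIR, red) reflectance pairs
print([is_water(n, r) for n, r in pixels])  # water pixel, vegetated pixel
```

Flood extent then falls out by differencing the per-date water mask against the permanent-water mask, so only newly inundated pixels count toward impact numbers.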

  3. The role of Environmental Health System air quality monitors in Space Station Contingency Operations

    NASA Technical Reports Server (NTRS)

    Limero, Thomas F.; Wilson, Steve; Perlot, Susan; James, John

    1992-01-01

    This paper describes the Space Station Freedom (SSF) Environmental Health System's air-quality monitoring strategy and instrumentation. A two-tier system has been developed, consisting of first-alert instruments that warn the crew of airborne contamination and a volatile organic analyzer that can identify volatile organic contaminants in near-real time. The strategy for air quality monitoring on SSF is designed to provide early detection so that the contamination can be confined to one module and so that crew health and safety can be protected throughout the contingency event. The use of air-quality monitors in fixed and portable modes will be presented as a means of following the progress of decontamination efforts and ensuring acceptable air quality in a module after an incident. The technology of each instrument will be reviewed briefly; the main focus of this paper, however, will be the use of air-quality monitors before, during, and after contingency incidents.

  4. Monitoring of EPIC 204278916 requested

    NASA Astrophysics Data System (ADS)

    Waagen, Elizabeth O.

    2017-04-01

    Dr. Carlo Manara (ESA Science and Technology SCI-S, the Netherlands) and colleagues have requested AAVSO assistance in monitoring the young, disk-bearing low-mass (M type) pre-main-sequence star EPIC 204278916 (2MASS J16020757-2257467). Dr. Manara reports that this star showed "a very interesting dimming event in August-September 2014 which may be caused by transiting material (exo-comets like) (Scaringi et al., 2016MNRAS.463.2265S, https://ui.adsabs.harvard.edu/#abs/2016MNRAS.tmp.1267S/abstract). It would be very useful to know whether this event has any periodicity in order to constrain the possible scenario...The major dimming [up to 65%] we see is 1.2 mag in V, others are 0.5-0.8 mag" in V. He also notes that "the dimming event we saw lasted for some 25 days, although the most extreme event was 1 day long. Based on the noisy WASP data we have [there are] some suggestions that the event happens every 100 days, but we are not sure about it." Manara requests ongoing monitoring of this system to look for additional dimming events and to observe any that are seen, so that he and his colleagues may determine if periodicity exists in these events and to study its nature. Beginning now and continuing until further notice, nightly observations in V are requested. Weekly observations in B are also requested. If a dimming event occurs, observations in V and B at a higher cadence are requested. Finder charts with sequence may be created using the AAVSO Variable Star Plotter (https://www.aavso.org/vsp). Observations should be submitted to the AAVSO International Database. See full Alert Notice for more details.

  5. Monitoring Effects of Climatic stresses on a Papyrus Wetland System in Eastern Uganda Using Times Series of Remotely Sensed Data

    NASA Astrophysics Data System (ADS)

    Kayendeke, Ellen; French, Helen K.; Kansiime, Frank; Bamutaze, Yazidhi

    2017-04-01

Papyrus wetlands, predominant in southern, central and eastern Africa, are important in supporting community livelihoods since they provide land for agriculture, materials for building and craft making, as well as services of water purification and water storage. Papyrus wetlands are dominated by the sedge Cyperus papyrus, which is rooted at wetland edges but floats in open water with the help of a root mat composed of intermingled roots and rhizomes. The hypothesis is that the papyrus mat structure reduces flow velocity and increases storage volume during storm events, which not only helps to mitigate flood events but also aids in storing excess water that can be utilised during the dry seasons. However, due to sparse gauging, there is inadequate meteorological and hydrological data for continuous monitoring of the hydrological functioning of papyrus systems. The objective of this study was to assess the potential of utilising freely available remote sensing data (MODIS, Landsat, and Sentinel-1) for cost-effective monitoring of papyrus wetland systems and their response to climatic stresses. This was done through segmentation of MODIS NDVI and Landsat-derived NDWI datasets, as well as classification of Sentinel-1 images taken in wet and dry seasons of 2015 and 2016. The classified maps were used as proxies for changes in hydrological conditions with time. The preliminary results show that it is possible to monitor changes in biomass, wetland inundation extent, flooded areas, as well as changes in moisture content in surrounding agricultural areas in the different seasons. Therefore, we propose that remote sensing data, when complemented with available meteorological data, is a useful resource for monitoring changes in papyrus wetland systems as a result of climatic and human-induced stresses.

  6. A fault-tolerant intelligent robotic control system

    NASA Technical Reports Server (NTRS)

    Marzwell, Neville I.; Tso, Kam Sing

    1993-01-01

    This paper describes the concept, design, and features of a fault-tolerant intelligent robotic control system being developed for space and commercial applications that require high dependability. The comprehensive strategy integrates system level hardware/software fault tolerance with task level handling of uncertainties and unexpected events for robotic control. The underlying architecture for system level fault tolerance is the distributed recovery block which protects against application software, system software, hardware, and network failures. Task level fault tolerance provisions are implemented in a knowledge-based system which utilizes advanced automation techniques such as rule-based and model-based reasoning to monitor, diagnose, and recover from unexpected events. The two level design provides tolerance of two or more faults occurring serially at any level of command, control, sensing, or actuation. The potential benefits of such a fault tolerant robotic control system include: (1) a minimized potential for damage to humans, the work site, and the robot itself; (2) continuous operation with a minimum of uncommanded motion in the presence of failures; and (3) more reliable autonomous operation providing increased efficiency in the execution of robotic tasks and decreased demand on human operators for controlling and monitoring the robotic servicing routines.
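The distributed recovery block underlying the system-level fault tolerance described above follows the classic recovery-block pattern: run a primary routine, check its output with an acceptance test, and fall back to an alternate routine if the test fails. A toy sketch (the routines and the acceptance test are hypothetical, not the paper's implementation):

```python
# Toy recovery-block pattern: primary routine, acceptance test, alternate.
# The controller functions below are made up for illustration.

def recovery_block(primary, alternate, acceptance_test, state):
    """Run primary; if its output fails the acceptance test, run alternate."""
    result = primary(state)
    if acceptance_test(result):
        return result
    return alternate(state)   # fall back when the primary's output is rejected

def primary(x):       # faulty "controller": flips the sign of its input
    return -x

def alternate(x):     # conservative fallback routine
    return abs(x)

def accept(result):   # acceptance test: output must be non-negative
    return result >= 0

print(recovery_block(primary, alternate, accept, 5))   # -5 rejected -> fallback
print(recovery_block(primary, alternate, accept, -5))  # primary output accepted
```

The "distributed" variant replicates this structure across nodes so that hardware and network failures, not just software faults, trigger the fallback path.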

  7. The upgraded data acquisition system for beam loss monitoring at the Fermilab Tevatron and Main Injector

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Baumbaugh, A.; Briegel, C.; Brown, B.C.

    2011-11-01

A VME-based data acquisition system for beam-loss monitors has been developed and is in use in the Tevatron and Main Injector accelerators at the Fermilab complex. The need for enhanced beam-loss protection when the Tevatron is operating in collider mode was the main driving force for the new design. Prior to the implementation of the present system, the beam-loss monitor system was disabled during collider operation and protection of the Tevatron magnets relied on the quench protection system. The new Beam-Loss Monitor system allows appropriate abort logic and thresholds to be set over the full set of collider operating conditions. The system also records a history of beam-loss data prior to a beam-abort event for post-abort analysis. Installation of the Main Injector system occurred in the fall of 2006 and the Tevatron system in the summer of 2007. Both systems were fully operational by the summer of 2008. In this paper we report on the overall system design, provide a description of its normal operation, and show a number of examples of its use in both the Main Injector and Tevatron.

  8. The upgraded data acquisition system for beam loss monitoring at the Fermilab Tevatron and Main Injector

    NASA Astrophysics Data System (ADS)

    Baumbaugh, A.; Briegel, C.; Brown, B. C.; Capista, D.; Drennan, C.; Fellenz, B.; Knickerbocker, K.; Lewis, J. D.; Marchionni, A.; Needles, C.; Olson, M.; Pordes, S.; Shi, Z.; Still, D.; Thurman-Keup, R.; Utes, M.; Wu, J.

    2011-11-01

A VME-based data acquisition system for beam-loss monitors has been developed and is in use in the Tevatron and Main Injector accelerators at the Fermilab complex. The need for enhanced beam-loss protection when the Tevatron is operating in collider mode was the main driving force for the new design. Prior to the implementation of the present system, the beam-loss monitor system was disabled during collider operation and protection of the Tevatron magnets relied on the quench protection system. The new Beam-Loss Monitor system allows appropriate abort logic and thresholds to be set over the full set of collider operating conditions. The system also records a history of beam-loss data prior to a beam-abort event for post-abort analysis. Installation of the Main Injector system occurred in the fall of 2006 and the Tevatron system in the summer of 2007. Both systems were fully operational by the summer of 2008. In this paper we report on the overall system design, provide a description of its normal operation, and show a number of examples of its use in both the Main Injector and Tevatron.

  9. Early prediction of eruption site using lightning location data: An operational real-time system in Iceland

    NASA Astrophysics Data System (ADS)

    Arason, Þórður; Bjornsson, Halldór; Nína Petersen, Guðrún

    2013-04-01

Eruption of subglacial volcanoes may lead to catastrophic floods, and thus early determination of the exact eruption site may be critical to civil protection evacuation plans. A system is being developed that automatically monitors and analyses volcanic lightning in Iceland. The system predicts the eruption site location from mean lightning locations, taking into account upper-level wind. In estimating mean lightning locations, outliers are automatically omitted. A simple wind correction is performed based on the vector wind at the 500 hPa pressure level in the latest radiosonde from Keflavík airport. The system automatically creates a web page with maps and tables showing individual lightning locations and mean locations with and without wind corrections, along with estimates of uncertainty. A dormant automatic monitoring system, waiting for a rare event, potentially for several years, is quite susceptible to degeneration during the waiting period, e.g. due to computer or other IT-system upgrades. However, ordinary weather thunderstorms in Iceland should initiate the same special monitoring and automatic analysis of this system as during a volcanic eruption, and such ordinary thunderstorm events will be used to detect anomalies and malfunctions in the system. The essential elements of this system will be described. An example is presented of how the system would have worked during the first hours of the Grímsvötn 2011 eruption. In that case the exact eruption site, within the Grímsvötn caldera, was first known about 15 hours into the eruption.
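The location estimate described above combines three steps: discard outlying strikes, average the remainder, and shift the mean upwind using the 500 hPa wind. A simplified sketch, using flat x/y coordinates in km and entirely made-up numbers and tolerances:

```python
# Illustrative sketch of the eruption-site estimate: average lightning strike
# locations after discarding outliers, then shift the mean upwind by the
# distance the charged plume has drifted. All values are invented.
import statistics

def mean_location(points, max_dev_km=10.0):
    """Mean of points, excluding those far from the median in x or y."""
    mx = statistics.median(p[0] for p in points)
    my = statistics.median(p[1] for p in points)
    kept = [p for p in points
            if abs(p[0] - mx) <= max_dev_km and abs(p[1] - my) <= max_dev_km]
    return (sum(p[0] for p in kept) / len(kept),
            sum(p[1] for p in kept) / len(kept))

def wind_corrected(mean_xy, wind_uv, drift_time_s):
    """Shift the mean upwind (against the wind vector, in km/s)."""
    u, v = wind_uv
    x, y = mean_xy
    return (x - u * drift_time_s, y - v * drift_time_s)

strikes = [(1.0, 1.0), (2.0, 2.0), (3.0, 3.0), (90.0, 90.0)]  # one outlier
m = mean_location(strikes)
print(m)                                    # outlier discarded -> (2.0, 2.0)
print(wind_corrected(m, (0.01, 0.0), 100))  # shifted 1 km upwind in x
```

Median-based outlier rejection is one plausible choice here; the operational system's actual rejection rule is not specified in the abstract.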

  10. A new approach to generating research-quality phenology data: The USA National Phenology Monitoring System

    NASA Astrophysics Data System (ADS)

    Denny, E. G.; Miller-Rushing, A. J.; Haggerty, B. P.; Wilson, B. E.

    2009-12-01

The USA National Phenology Network has recently initiated a national effort to encourage people at different levels of expertise, from backyard naturalists to professional scientists, to observe phenological events and contribute to a national database that will be used to greatly improve our understanding of spatio-temporal variation in phenology and associated phenological responses to climate change. Traditional phenological observation protocols identify specific single dates at which individual phenological events are observed, but the scientific usefulness of long-term phenological observations can be improved with a more carefully structured protocol. At the USA-NPN we have developed a new approach that directs observers to record each day that they observe an individual plant, and to assess and report the state of specific life stages (or phenophases) as occurring or not occurring on that plant for each observation date. Evaluation is phrased in terms of simple, easy-to-understand questions (e.g. “Do you see open flowers?”), which makes it very appropriate for a broad audience. From this method, a rich dataset of phenological metrics can be extracted, including the duration of a phenophase (e.g. open flowers), the beginning and end points of a phenophase (e.g. traditional phenological events such as first flower and last flower), multiple distinct occurrences of phenophases within a single growing season (e.g. multiple flowering events, common in drought-prone regions), as well as quantification of sampling frequency and observational uncertainties. The system also includes a mechanism for translating phenophase start and end points into standard traditional phenological events, to facilitate comparison of contemporary data collected with this new “phenophase status” monitoring approach to historical datasets collected with the “phenological event” monitoring approach. 
These features greatly enhance the utility of the resulting data for statistical analyses addressing questions such as how phenological events vary in time and space, and in response to global change.
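The phenophase-status protocol yields, for each plant, a series of dated yes/no answers from which the traditional event metrics can be derived. A minimal sketch of that extraction (the record layout and the inclusive duration convention are illustrative choices, not the USA-NPN's specification):

```python
# Sketch of deriving traditional phenological events from "phenophase
# status" records: each observation is (day_of_year, phenophase_seen).
# Onset, end, and duration fall out of the yes/no time series directly.

def phenophase_metrics(observations):
    """Return (first_yes, last_yes, duration_days) for one phenophase,
    or None if the phenophase was never observed as occurring."""
    yes_days = [day for day, status in observations if status]
    if not yes_days:
        return None
    onset, end = min(yes_days), max(yes_days)
    return (onset, end, end - onset + 1)   # inclusive duration

# Daily "Do you see open flowers?" answers for one plant:
obs = [(120, False), (125, True), (130, True), (138, True), (142, False)]
print(phenophase_metrics(obs))  # (125, 138, 14)
```

The bracketing "no" observations are what make the onset and end estimates meaningful: they bound the uncertainty that a bare first-flower date from a traditional protocol cannot express.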

  11. Vaxtracker: Active on-line surveillance for adverse events following inactivated influenza vaccine in children.

    PubMed

    Cashman, Patrick; Moberley, Sarah; Dalton, Craig; Stephenson, Jody; Elvidge, Elissa; Butler, Michelle; Durrheim, David N

    2014-09-22

Vaxtracker is a web-based survey for active post-marketing surveillance of Adverse Events Following Immunisation. It is designed to efficiently monitor the safety of new vaccines by early signal detection of serious adverse events. The Vaxtracker system automates contact with the parents or carers of immunised children by email and/or SMS message to their smart phone. A hyperlink in the email and text messages leads to a web-based survey exploring adverse events following the immunisation. The Vaxtracker concept was developed during 2011 (n=21), and piloted during the 2012 (n=200) and 2013 (n=477) influenza seasons for children receiving inactivated influenza vaccine (IIV) in the Hunter New England Local Health District, New South Wales, Australia. Survey results were reviewed by surveillance staff to detect any safety signals and compare adverse event frequencies among the different influenza vaccines administered. In 2012, 57% (113/200) of participants responded to the online survey, and in 2013, 61% (290/477) responded. Vaxtracker appears to be an effective method for actively monitoring adverse events following influenza vaccination in children. Crown Copyright © 2014. Published by Elsevier Ltd. All rights reserved.

  12. Pervasive monitoring--an intelligent sensor pod approach for standardised measurement infrastructures.

    PubMed

    Resch, Bernd; Mittlboeck, Manfred; Lippautz, Michael

    2010-01-01

Geo-sensor networks have traditionally been built up as closed monolithic systems, thus limiting trans-domain usage of real-time measurements. This paper presents the technical infrastructure of a standardised embedded sensing device, which has been developed in the course of the Live Geography approach. The sensor pod implements data provision standards of the Sensor Web Enablement initiative, including an event-based alerting mechanism and location-aware Complex Event Processing functionality for detection of threshold transgression and quality assurance. The goal of this research is for the resultant, highly flexible sensing architecture to bring sensor network applications a step closer to realising the vision of a "digital skin for planet earth". The developed infrastructure can potentially have far-reaching impacts on sensor-based monitoring systems through the deployment of ubiquitous and fine-grained sensor networks. This in turn allows for the straightforward use of live sensor data in existing spatial decision support systems to enable better-informed decision-making.

  13. Key design elements of a data utility for national biosurveillance: event-driven architecture, caching, and Web service model.

    PubMed

    Tsui, Fu-Chiang; Espino, Jeremy U; Weng, Yan; Choudary, Arvinder; Su, Hoah-Der; Wagner, Michael M

    2005-01-01

    The National Retail Data Monitor (NRDM) has monitored over-the-counter (OTC) medication sales in the United States since December 2002. The NRDM collects data from over 18,600 retail stores and processes over 0.6 million sales records per day. This paper describes key architectural features that we have found necessary for a data utility component in a national biosurveillance system. These elements include event-driven architecture to provide analyses of data in near real time, multiple levels of caching to improve query response time, high availability through the use of clustered servers, scalable data storage through the use of storage area networks and a web-service function for interoperation with affiliated systems. The methods and architectural principles are relevant to the design of any production data utility for public health surveillance-systems that collect data from multiple sources in near real time for use by analytic programs and user interfaces that have substantial requirements for time-series data aggregated in multiple dimensions.

  14. Rendering visual events as sounds: Spatial attention capture by auditory augmented reality.

    PubMed

    Stone, Scott A; Tata, Matthew S

    2017-01-01

Many salient visual events tend to coincide with auditory events, such as seeing and hearing a car pass by. Information from the visual and auditory senses can be used to create a stable percept of the stimulus. Having access to related, coincident visual and auditory information can help with spatial tasks such as localization. However, not all visual information has analogous auditory percepts, such as viewing a computer monitor. Here, we describe a system capable of detecting salient visual events and rendering them as localizable auditory events. The system uses a neuromorphic camera (DAVIS 240B) to detect logarithmic changes of brightness intensity in the scene, which can be interpreted as salient visual events. Participants were blindfolded and asked to use the device to detect new objects in the scene, as well as to determine the direction of motion of a moving visual object. Results suggest the system is robust enough to allow for the simple detection of new salient stimuli, as well as accurate encoding of the direction of visual motion. Future successes are probable, as neuromorphic devices are likely to become faster and smaller, making this system much more feasible.

  15. Rendering visual events as sounds: Spatial attention capture by auditory augmented reality

    PubMed Central

    Tata, Matthew S.

    2017-01-01

Many salient visual events tend to coincide with auditory events, such as seeing and hearing a car pass by. Information from the visual and auditory senses can be used to create a stable percept of the stimulus. Having access to related, coincident visual and auditory information can help with spatial tasks such as localization. However, not all visual information has analogous auditory percepts, such as viewing a computer monitor. Here, we describe a system capable of detecting salient visual events and rendering them as localizable auditory events. The system uses a neuromorphic camera (DAVIS 240B) to detect logarithmic changes of brightness intensity in the scene, which can be interpreted as salient visual events. Participants were blindfolded and asked to use the device to detect new objects in the scene, as well as to determine the direction of motion of a moving visual object. Results suggest the system is robust enough to allow for the simple detection of new salient stimuli, as well as accurate encoding of the direction of visual motion. Future successes are probable, as neuromorphic devices are likely to become faster and smaller, making this system much more feasible. PMID:28792518

  16. Integrating Remote Sensing Data, Hybrid-Cloud Computing, and Event Notifications for Advanced Rapid Imaging & Analysis (Invited)

    NASA Astrophysics Data System (ADS)

    Hua, H.; Owen, S. E.; Yun, S.; Lundgren, P.; Fielding, E. J.; Agram, P.; Manipon, G.; Stough, T. M.; Simons, M.; Rosen, P. A.; Wilson, B. D.; Poland, M. P.; Cervelli, P. F.; Cruz, J.

    2013-12-01

    Space-based geodetic measurement techniques such as Interferometric Synthetic Aperture Radar (InSAR) and Continuous Global Positioning System (CGPS) are now important elements in our toolset for monitoring earthquake-generating faults, volcanic eruptions, hurricane damage, landslides, reservoir subsidence, and other natural and man-made hazards. Geodetic imaging's unique ability to capture surface deformation with high spatial and temporal resolution has revolutionized both earthquake science and volcanology. Continuous monitoring of surface deformation and surface change before, during, and after natural hazards improves decision-making through better forecasts, increased situational awareness, and more informed recovery. However, analyses of InSAR and GPS data sets are currently handcrafted following events and are not generated rapidly and reliably enough for use in operational response to natural disasters. Additionally, the sheer data volume of a continuous stream of InSAR data sets presents a bottleneck: it has been estimated that continuous processing of InSAR coverage of California alone over 3 years would reach PB-scale data volumes. Our Advanced Rapid Imaging and Analysis for Monitoring Hazards (ARIA-MH) science data system enables both science and decision-making communities to monitor areas of interest with derived geodetic data products via seamless data preparation, processing, discovery, and access. We will present our findings on the use of hybrid-cloud computing to improve the timely processing and delivery of geodetic data products, on integrating event notifications from the USGS to speed processing for response, and on providing browse results for quick looks with other tools for integrative analysis.

  17. Geological hazard monitoring system in Georgia

    NASA Astrophysics Data System (ADS)

    Gaprindashvili, George

    2017-04-01

    Georgia belongs to one of the world's most complex mountainous regions in terms of the scale and frequency of geological processes and the damage they cause to population, farmland, and infrastructure. Geological hazards (landslides, debris flows/mudflows, rockfalls, erosion, etc.) affect many populated areas, agricultural fields, roads, oil and gas pipelines, high-voltage electric power transmission towers, hydraulic structures, and tourist complexes. Landslides occur in almost all geomorphological zones, resulting in wide differentiation in failure types and mechanisms and in the size-frequency distribution. In Georgia, geological hazards are triggered by: 1. highly intense earthquakes; 2. meteorological events that provoke disaster processes against the background of global climate change; 3. large-scale human impact on the environment. The prediction and monitoring of geological hazards is a very broad theme involving researchers from many fields, and monitoring is essential to prevent and mitigate these hazards. In recent years several monitoring systems, such as ground-based geodetic techniques and a debris-flow Early Warning System (EWS), were installed in Georgia at highly susceptible landslide and debris-flow sites. This work presents a description of the geological hazard monitoring system in Georgia.

  18. The Chandra Monitoring System

    NASA Astrophysics Data System (ADS)

    Wolk, S. J.; Petreshock, J. G.; Allen, P.; Bartholowmew, R. T.; Isobe, T.; Cresitello-Dittmar, M.; Dewey, D.

    The NASA Great Observatory Chandra was launched July 23, 1999 aboard the space shuttle Columbia. The Chandra Science Center (CXC) runs a monitoring and trends analysis program to maximize the science return from this mission. At the time of the launch, the monitoring portion of this system was in place. The system is a collection of multiple threads and programming methodologies acting cohesively. Real-time data are passed to the CXC. Our real-time tool, ACORN (A Comprehensive object-ORiented Necessity), performs limit checking of performance-related hardware. Chandra is in ground contact less than 3 hours a day, so the bulk of the monitoring must take place on data dumped by the spacecraft. To do this, we have written several tools which run off the CXC data system pipelines. MTA_MONITOR_STATIC limit-checks FITS files containing hardware data. MTA_EVENT_MON and MTA_GRAT_MON create quick-look data for the focal plane instruments and the transmission gratings. When instruments violate their operational limits, the responsible scientists are notified by email and problem tracking is initiated. Output from all these codes is distributed to CXC scientists via an HTML interface.

  19. The immunization data quality audit: verifying the quality and consistency of immunization monitoring systems.

    PubMed Central

    Ronveaux, O.; Rickert, D.; Hadler, S.; Groom, H.; Lloyd, J.; Bchir, A.; Birmingham, M.

    2005-01-01

    OBJECTIVE: To evaluate the consistency and quality of immunization monitoring systems in 27 countries during 2002-03 using standardized data quality audits (DQAs) that had been launched within the framework of the Global Alliance for Vaccines and Immunization. METHODS: The consistency of reporting systems was estimated by determining the proportion of third doses of diphtheria-tetanus-pertussis (DTP-3) vaccine reported as being administered that could be verified by written documentation at health facilities and districts. The quality of monitoring systems was measured using quality indices for different components of the monitoring systems. These indices were applied to each level of the health service (health unit, district and national). FINDINGS: The proportion of verified DTP-3 doses was lower than 85% in 16 countries. Difficulties in verifying the doses administered often arose at the peripheral level of the health service, usually as the result of discrepancies in information between health units and their corresponding districts or because completed recording forms were not available from health units. All countries had weaknesses in their monitoring systems; these included the inconsistent use of monitoring charts; inadequate monitoring of vaccine stocks, injection supplies and adverse events; unsafe computer practices; and poor monitoring of completeness and timeliness of reporting. CONCLUSION: Inconsistencies in immunization data occur in many countries, hampering their ability to manage their immunization programmes. Countries should use these findings to strengthen monitoring systems so that data can reliably guide programme activities. The DQA is an innovative tool that provides a way to independently assess the quality of immunization monitoring systems at all levels of a health service and serves as a point of entry to make improvements. It provides a useful example for other global health initiatives. PMID:16175824
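    The audit's core consistency check is a simple verification ratio compared against the 85% benchmark. A minimal sketch of that arithmetic (the function name and example counts are ours, not the study's):

    ```python
    def verification_factor(doses_verified, doses_reported):
        """DQA verification factor: the share of reported DTP-3 doses that
        can be confirmed from written records at health facilities."""
        return doses_verified / doses_reported

    # A district reporting 1000 doses of which 820 are documented falls
    # below the 85% benchmark used in the audit.
    vf = verification_factor(820, 1000)  # 0.82
    ```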

  20. Unplanned releases and injuries associated with aerial application of chemicals, 1995-2002.

    PubMed

    Rice, Nancy; Messing, Rita; Souther, Larry; Berkowitz, Zahava

    2005-11-01

    For this article, records of the Hazardous Substances Emergency Events Surveillance (HSEES) system were reviewed to identify and describe acute, unplanned releases of agricultural chemicals and associated injuries related to aerial application during 1995-2002. Records of aerial-application accidents from the National Transportation Safety Board were also reviewed. Of the 54,090 events in the HSEES system for 1995-2002, 91 were identified as aerial-application events. The most commonly released substance was malathion. There were 56 victims; 12 died, and 34 required treatment at a hospital. A higher percentage of HSEES aerial-applicator events involved injury and death than did other HSEES transportation events. The relatively high number of injuries and fatalities underscores the need for precautions such as monitoring and limiting pilot cumulative exposures to pesticides, and using appropriate personal protective equipment and decontamination equipment. Emergency responders should be educated about the hazards associated with chemicals at aerial-application crash sites.

  1. Variations of seismic parameters during different activity levels of the Soufriere Hills Volcano, Montserrat

    NASA Astrophysics Data System (ADS)

    Powell, T.; Neuberg, J.

    2003-04-01

    The low-frequency seismic events on Montserrat are linked to conduit resonance and the pressurisation of the volcanic system. Analysis of these events tells us more about the behaviour of the volcanic system and provides a monitoring and interpretation tool. We have written an Automated Event Classification Algorithm Program (AECAP), which finds and classifies seismic events and calculates seismic parameters such as energy, intermittency, peak frequency and event duration. Comparison of low-frequency energy with the tilt cycles in 1997 allows us to link pressurisation of the volcano with seismic behaviour. An empirical relationship provides us with an estimate of pressurisation through released seismic energy. During 1997, the activity of the volcano varied considerably. We compare seismic parameters from quiet periods to those from active periods and investigate how the relationships between these parameters change. These changes are then used to constrain models of magmatic processes during different stages of volcanic activity.
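    AECAP itself is not described in detail here, but the parameters it computes (energy, peak frequency, duration) follow standard signal-processing definitions. A minimal sketch under those assumptions (function name ours; energy taken as the discretely integrated squared amplitude):

    ```python
    import numpy as np

    def event_parameters(trace, dt):
        """Compute simple seismic event parameters from one event window.

        trace : 1-D array of ground-motion samples for a detected event
        dt    : sample interval in seconds
        Returns (energy, peak_frequency_hz, duration_s).
        """
        # Radiated energy is proportional to the time-integral of the
        # squared amplitude; here approximated by a discrete sum.
        energy = np.sum(trace ** 2) * dt

        # Peak frequency from the amplitude spectrum of the window.
        spectrum = np.abs(np.fft.rfft(trace))
        freqs = np.fft.rfftfreq(len(trace), d=dt)
        peak_frequency = freqs[np.argmax(spectrum)]

        duration = len(trace) * dt
        return energy, peak_frequency, duration

    # A pure 2 Hz test tone sampled at 100 Hz for 10 s should report a
    # 2 Hz peak frequency and a 10 s duration.
    t = np.arange(0, 10, 0.01)
    energy, f_peak, dur = event_parameters(np.sin(2 * np.pi * 2 * t), 0.01)
    ```

    Real low-frequency volcano-seismic events would of course first be windowed out of continuous data by the detection stage.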

  2. Using movement and intentions to understand human activity.

    PubMed

    Zacks, Jeffrey M; Kumar, Shawn; Abrams, Richard A; Mehta, Ritesh

    2009-08-01

    During perception, people segment continuous activity into discrete events. They do so in part by monitoring changes in features of an ongoing activity. Characterizing these features is important for theories of event perception and may be helpful for designing information systems. The three experiments reported here asked whether the body movements of an actor predict when viewers will perceive event boundaries. Body movements were recorded using a magnetic motion tracking system and compared with viewers' segmentation of his activity into events. Changes in movement features were strongly associated with segmentation. This was more true for fine-grained than for coarse-grained boundaries, and was strengthened when the stimulus displays were reduced from live-action movies to simplified animations. These results suggest that movement variables play an important role in the process of segmenting activity into meaningful events, and that the influence of movement on segmentation depends on the availability of other information sources.

  3. Historical Radiological Event Monitoring

    EPA Pesticide Factsheets

    During and after radiological events EPA's RadNet monitors the environment for radiation. EPA monitored environmental radiation levels during and after Chernobyl, Fukushima and other international and domestic radiological incidents.

  4. Advanced earthquake monitoring system for U.S. Department of Veterans Affairs medical buildings--instrumentation

    USGS Publications Warehouse

    Kalkan, Erol; Banga, Krishna; Ulusoy, Hasan S.; Fletcher, Jon Peter B.; Leith, William S.; Reza, Shahneam; Cheng, Timothy

    2012-01-01

    In collaboration with the U.S. Department of Veterans Affairs (VA), the National Strong Motion Project (NSMP; http://nsmp.wr.usgs.gov/) of the U.S. Geological Survey has been installing sophisticated seismic systems that will monitor the structural integrity of 28 VA hospital buildings located in seismically active regions of the conterminous United States, Alaska, and Puerto Rico during earthquake shaking. These advanced monitoring systems, which combine the use of sensitive accelerometers and real-time computer calculations, are designed to determine the structural health of each hospital building rapidly after an event, helping the VA to ensure the safety of patients and staff. This report presents the instrumentation component of this project by providing details of each hospital building, including a summary of its structural, geotechnical, and seismic hazard information, as well as instrumentation objectives and design. The structural-health monitoring component of the project, including data retrieval and processing, damage detection and localization, automated alerting system, and finally data dissemination, will be presented in a separate report.

  5. Spatially and temporally variable urinary N loads deposited by lactating cows on a grazing system dairy farm.

    PubMed

    Ahmed, Awais; Sohi, Rajneet; Roohi, Rakhshan; Jois, Markandeya; Raedts, Peter; Aarons, Sharon R

    2018-06-01

    Feed nitrogen (N) intakes in Australian grazing systems average 545 g cow⁻¹ day⁻¹, indicating that urinary N is likely to be the dominant form excreted. Grazing animals spend disproportionate amounts of time in places on dairy farms where N accumulation is likely to occur. We attached sensors to grazing cows to measure urine volume and N concentration, as well as global positioning system sensors to monitor the time the cows spent in different places on the farm and the location of urination events. The cows were monitored for up to 72 h in each of two seasons. More urination events and greater urine volumes per event were recorded in spring 2014 (3.1 L) compared with winter 2015 (1.4 L), most likely influenced by environmental conditions and the greater spring rainfall observed. Mean (range) N concentration (0.71%; 0.02 to 1.52%) and N load (12.8 g cow⁻¹ event⁻¹; 0.3 to 64.5 g cow⁻¹ event⁻¹) did not differ over the two monitoring periods. However, mean (range) daily N load was greater in spring (277 g cow⁻¹ day⁻¹; 200 to 346 g cow⁻¹ day⁻¹) than in winter (90 g cow⁻¹ day⁻¹; 44 to 116 g cow⁻¹ day⁻¹) due to the influence of urine volume. Relatively more time was spent in paddocks overnight (13.3 h) than in paddocks between morning and evening milking (6.4 h), compared with the mean numbers of urinations in these places (6.4 and 3.8, respectively). The mean N load deposited overnight in paddocks (89.6 g cow⁻¹) was more than twice that deposited in paddocks during the day (43.8 g cow⁻¹), due to the greater N load per event overnight, and was more closely linked to the relative difference in time spent in paddocks than to the number of urination events. These data suggest that routinely holding cows in the same paddocks overnight will lead to high urinary N depositions, increasing the potential for N losses from these places. Further research using this technology is required to acquire farm- and environment-specific urinary data to improve N management.
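    The per-event load behind these figures is simple mass balance: load equals urine volume times N concentration. A hedged sketch of that arithmetic (function name ours; 1% w/v corresponds to 10 g/L):

    ```python
    def n_load_g(volume_l, n_concentration_pct):
        """Urinary N load per urination event.

        volume_l             : urine volume in litres
        n_concentration_pct  : N concentration in % w/v (1% = 10 g/L)
        Returns grams of N deposited in the event.
        """
        return volume_l * n_concentration_pct * 10.0

    # Using the reported spring means (3.1 L, 0.71% N) gives about 22 g,
    # which exceeds the reported mean per-event load (12.8 g): a reminder
    # that the product of two means generally differs from the mean of
    # the per-event products.
    load = n_load_g(3.1, 0.71)
    ```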

  6. 11-14 November 2012 Umbria Region (Central Italy) flood event: from prediction to management for civil protection purposes

    NASA Astrophysics Data System (ADS)

    Berni, Nicola; Pandolfo, Claudia; Stelluti, Marco; Zauri, Renato; Ponziani, Francesco; Francioni, Marco; Governatori Leonardi, Federico; Formica, Alessandro; Natazzi, Loredana; Costantini, Sandro

    2013-04-01

    Following the laws and regulations concerning the management of extreme natural events, the Italian national hydrometeorological early warning system is composed of 21 regional offices (Functional Centres - CF). The Umbria Region CF, located in Central Italy, provides early warning, monitoring and decision support systems (DSS) when significant flood/landslide events occur. The alert system is based on hydrometric and rainfall thresholds, with detailed procedures for the management of critical events that define the roles of the authorities and institutions involved. For the real-time flood forecasting system, several operational hydrological and hydraulic models were developed and implemented at the CF for "dynamic" hazard/risk scenario assessment in support of Civil Protection DSS, useful also for the development of Flood Risk Management Plans according to the European "Floods Directive" 2007/60. In the period 11th-14th November 2012, a significant flood event occurred in Umbria (as well as in Tuscany and northern Lazio). The territory was affected by intense and persistent rainfall; the hydro-meteorological monitoring network locally recorded rainfall depths of over 300 mm in 72 hours and, generally, values greater than the seasonal averages across the whole region. In the most affected area the recorded rainfall depths correspond to a 100-year return period: one-third of the mean annual precipitation fell in 2-3 days. Almost all rivers in Umbria exceeded their hydrometric thresholds, and several overflowed. Furthermore, in some cases water levels were the highest ever recorded by the hydrometric network. As in the major flood events of recent years, dams (the Montedoglio and Corbara dams along the Tiber River and the Casanuova dam along the Chiascio River) and other hydraulic works for flood defense (e.g. along the Chiani stream) played a very important mitigation role, storing large water volumes and preventing the overlap of peak discharges downstream. 
    During the event many emergency interventions were necessary. There were no casualties among the population, but many landslides and floods occurred, causing over 240 million Euros of damage (to hydraulic works, infrastructure, public and commercial facilities, residential buildings, agriculture, etc.), enough to induce the Regional Administration to request a declaration of a state of emergency from the National Government. The day before the beginning of the event (10th November), QPF values were high enough to activate the "Attention" phase of the Regional Civil Protection System, and during the critical phases the CF provided 24-hour decision support, also through its official web site (www.cfumbria.it), which proved very useful for monitoring and data/info dissemination from the national down to the municipal level. The thresholds showed good agreement with direct observations from territorial monitoring teams, and the alert system was put to the test. The purpose of this work is to highlight what worked well and what did not, in order to improve the early warning and DSS for Civil Protection purposes.

  7. Induced earthquake during the 2016 Kumamoto earthquake (Mw7.0): Importance of real-time shake monitoring for Earthquake Early Warning

    NASA Astrophysics Data System (ADS)

    Hoshiba, M.; Ogiso, M.

    2016-12-01

    The sequence of 2016 Kumamoto earthquakes (Mw6.2 on April 14, Mw7.0 on April 16, and many aftershocks) caused devastating damage in Kumamoto and Oita prefectures, Japan. During the Mw7.0 event, just after the direct S waves passed central Oita, another M6-class event occurred there, more than 80 km away from the Mw7.0 rupture. The M6 event is interpreted as an induced earthquake, but it brought stronger shaking to central Oita than the Mw7.0 event itself. We discuss the induced earthquake from the viewpoint of Earthquake Early Warning (EEW). In terms of ground shaking such as PGA and PGV, the Mw7.0 event produced much smaller values at central Oita than the M6 induced earthquake (for example, about 1/8 of the PGA at station OIT009), so the two events are easy to discriminate. However, the PGD of the Mw7.0 event is larger than that of the induced earthquake, and it appears just before the occurrence of the induced earthquake. It is quite difficult to recognize the induced earthquake from displacement waveforms alone, because the displacement is strongly contaminated by that of the preceding Mw7.0 event. In many EEW methods (including the current JMA EEW system), magnitude is used to predict ground shaking through a Ground Motion Prediction Equation (GMPE), and the magnitude is often estimated from displacement. However, displacement magnitude is not necessarily the best quantity for predicting ground shaking such as PGA and PGV. In the case of the earthquake induced during the Kumamoto sequence, displacement magnitude could not be estimated because of the strong contamination; indeed, the JMA EEW system did not recognize the induced earthquake. One of the important lessons learned from eight years of EEW operation is the issue of multiple simultaneous earthquakes, such as the aftershocks of the 2011 Mw9.0 Tohoku earthquake. 
    Based on this lesson, we have proposed enhancing real-time monitoring of ground shaking itself instead of rapidly estimating hypocenter location and magnitude. Because the goal of EEW is to predict ground shaking, we should focus more on monitoring the shaking directly. The experience of the induced earthquake also underscores the importance of real-time monitoring of ground shaking for making EEW faster and more precise.
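    The idea of monitoring shaking itself, rather than first estimating a source, can be illustrated with a toy real-time monitor that tracks running peak acceleration and alerts on a threshold crossing. This is a generic sketch with our own names and units (gal), not the JMA implementation:

    ```python
    def monitor_shaking(accel_gal, threshold_gal):
        """Shaking-based alerting: track the running peak ground
        acceleration (PGA) of a sample stream and record the first
        sample at which shaking exceeds the alert threshold, without
        any magnitude or hypocenter estimate.

        Returns (pga_gal, first_alert_index_or_None).
        """
        pga = 0.0
        alert_index = None
        for i, a in enumerate(accel_gal):
            amp = abs(a)
            pga = max(pga, amp)
            if alert_index is None and amp >= threshold_gal:
                alert_index = i
        return pga, alert_index

    # A stream whose third sample exceeds 100 gal triggers at index 2;
    # the running PGA captures the later, larger negative swing.
    pga, idx = monitor_shaking([2.0, 5.0, 120.0, 80.0, -150.0], 100.0)
    ```

    A source-based system would instead wait for a magnitude estimate, which is exactly what failed for the contaminated displacement records described above.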

  8. Diagnostic yield and optimal duration of continuous-loop event monitoring for the diagnosis of palpitations. A cost-effectiveness analysis

    NASA Technical Reports Server (NTRS)

    Zimetbaum, P. J.; Kim, K. Y.; Josephson, M. E.; Goldberger, A. L.; Cohen, D. J.

    1998-01-01

    BACKGROUND: Continuous-loop event recorders are widely used for the evaluation of palpitations, but the optimal duration of monitoring is unknown. OBJECTIVE: To determine the yield, timing, and incremental cost-effectiveness of each week of event monitoring for palpitations. DESIGN: Prospective cohort study. PATIENTS: 105 consecutive outpatients referred for the placement of a continuous-loop event recorder for the evaluation of palpitations. MEASUREMENTS: Diagnostic yield, incremental cost, and cost-effectiveness for each week of monitoring. RESULTS: The diagnostic yield of continuous-loop event recorders was 1.04 diagnoses per patient in week 1, 0.15 diagnoses per patient in week 2, and 0.01 diagnoses per patient in week 3 and beyond. Over time, the cost-effectiveness ratio increased from $98 per new diagnosis in week 1 to $576 per new diagnosis in week 2 and $5832 per new diagnosis in week 3. CONCLUSIONS: In patients referred for evaluation of palpitations, the diagnostic yield of continuous-loop event recording decreases rapidly after 2 weeks of monitoring. A 2-week monitoring period is reasonably cost-effective for most patients and should be the standard period for continuous-loop event recording for the evaluation of palpitations.
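    The week-by-week comparison above rests on the standard incremental cost-effectiveness ratio: added cost divided by added diagnoses. A minimal sketch with illustrative inputs (the dollar and yield figures in the example are ours, not the study's cost inputs):

    ```python
    def incremental_cost_effectiveness(extra_cost_per_patient, extra_diagnoses_per_patient):
        """Incremental cost-effectiveness ratio (ICER): the additional
        cost incurred per additional diagnosis gained by extending
        monitoring by one more week."""
        return extra_cost_per_patient / extra_diagnoses_per_patient

    # If a hypothetical second week of monitoring adds $90 per patient
    # and yields 0.15 additional diagnoses per patient, each extra
    # diagnosis costs $600.
    icer_week2 = incremental_cost_effectiveness(90.0, 0.15)
    ```

    The sharply falling yield (1.04 to 0.15 to 0.01 diagnoses per patient) is what drives the ratio from tens of dollars to thousands of dollars per diagnosis in later weeks.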

  9. A multidisciplinary system for monitoring and forecasting Etna volcanic plumes

    NASA Astrophysics Data System (ADS)

    Coltelli, Mauro; Prestifilippo, Michele; Spata, Gaetano; Scollo, Simona; Andronico, Daniele

    2010-05-01

    One of the most active volcanoes in the world is Mt. Etna, in Italy, characterized by frequent explosive activity from the central craters and from fractures opened along the volcano's flanks, which in recent years has repeatedly damaged aviation operations and forced the closure of the Catania International Airport. To give precise warnings to the aviation authorities and air traffic controllers, and to assist the work of the VAACs, a novel system for monitoring and forecasting Etna's volcanic plumes was developed at the Istituto Nazionale di Geofisica e Vulcanologia, Sezione di Catania, the institution responsible for the surveillance of Etna. Monitoring is carried out using multispectral infrared measurements from the Spin Enhanced Visible and Infrared Imager (SEVIRI) on board the Meteosat Second Generation geosynchronous satellite, which can track the volcanic plume with high time resolution; visual and thermal cameras used to monitor the explosive activity; three continuous-wave X-band disdrometers which detect ash dispersal and fallout; sounding balloons used to evaluate the atmospheric fields; and, finally, field data collected after the end of each eruptive event, needed to extract important features of the explosive activity. Forecasting is carried out daily using automatic procedures which download weather forecast data from meteorological mesoscale models run by the Italian Air Force national Meteorological Office and by the hydrometeorological service of ARPA-SIM; run four different tephra dispersal models using input parameters obtained from the analysis of deposits collected within a few hours of eruptive events similar to the 22 July 1998, 21-24 July 2001 and 2002-03 Etna eruptions; plot hazard maps on the ground and in the air; and finally publish them on a web site dedicated to the Italian Civil Protection. The system was tested successfully during several explosive events at Etna in 2006, 2007 and 2008. 
    These events produced eruption columns up to several kilometers above sea level and, on the basis of parameters such as mass eruption rate and total grain-size distribution, exhibited different explosive styles. The monitoring and forecasting system continues to be developed through the installation of new instruments able to detect different features of the volcanic plumes (e.g. dispersal and sedimentation processes) in order to reduce the uncertainty of the input parameters used in the modeling, which is crucial for reliable forecasting. We show that multidisciplinary approaches can provide genuinely useful information on the presence of volcanic ash and consequently help prevent damage and airport disruptions.

  10. A Discrete Events Delay Differential System Model for Transmission of Vancomycin-Resistant Enterococcus (VRE) in Hospitals

    DTIC Science & Technology

    2010-09-19

    estimated directly from the surveillance data. Infection control measures were implemented in the form of health care worker hand-hygiene before and after...hospital infections, is used to motivate possibilities of modeling nosocomial infection dynamics. This is done in the context of hospital monitoring and...model development. Key Words: Delay equations, discrete events, nosocomial infection dynamics, surveillance data, inverse problems, parameter

  11. Field and modelling investigations of fresh-water plume behaviour in response to infrequent high-precipitation events, Sydney Estuary, Australia

    NASA Astrophysics Data System (ADS)

    Lee, Serena B.; Birch, Gavin F.; Lemckert, Charles J.

    2011-05-01

    Runoff from the urban environment is a major contributor of non-point source contamination for many estuaries, yet the ultimate fate of this stormwater within the estuary is frequently unknown in detail. The relationship between catchment rainfall and estuarine response within the Sydney Estuary (Australia) was investigated in the present study. A verified hydrodynamic model (Environmental Fluid Dynamics Computer Code) was utilised in concert with measured salinity data and rainfall measurements to determine the relationship between rainfall and discharge to the estuary, with particular attention paid to a significant high-precipitation event. A simplified rational method for calculating runoff based upon daily rainfall, subcatchment area and runoff coefficients was found to replicate discharge into the estuary associated with the monitored event. Determining fresh-water supply based upon estuary conditions is a novel technique which may assist those researching systems where field-measured runoff data are not available and where only limited field-measured information on catchment characteristics is obtainable. The study concluded that since the monitored fresh-water plume broke down within the estuary, contaminants associated with stormwater runoff due to high-precipitation events (daily rainfall > 50 mm) were retained within the system for longer than was previously recognised.
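    The simplified rational method mentioned above reduces to runoff volume = C × rainfall depth × area. A sketch of that calculation (function name and example values are ours; the study's coefficients per subcatchment are not given here):

    ```python
    def rational_runoff_m3(daily_rainfall_mm, area_m2, runoff_coefficient):
        """Simplified rational method: daily runoff volume discharged
        to the estuary from one subcatchment.

        runoff volume (m^3) = C * rainfall depth (m) * area (m^2)
        where C is the dimensionless runoff coefficient (0..1).
        """
        return runoff_coefficient * (daily_rainfall_mm / 1000.0) * area_m2

    # 50 mm of daily rain on a 2 km^2 subcatchment with C = 0.6
    vol = rational_runoff_m3(50.0, 2_000_000.0, 0.6)  # ≈ 60,000 m^3
    ```

    Summing this over subcatchments gives the estuary inflow that the hydrodynamic model was driven with.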

  12. An Overview of the Runtime Verification Tool Java PathExplorer

    NASA Technical Reports Server (NTRS)

    Havelund, Klaus; Rosu, Grigore; Clancy, Daniel (Technical Monitor)

    2002-01-01

    We present an overview of the Java PathExplorer runtime verification tool, in short referred to as JPAX. JPAX can monitor the execution of a Java program and check that it conforms with a set of user provided properties formulated in temporal logic. JPAX can in addition analyze the program for concurrency errors such as deadlocks and data races. The concurrency analysis requires no user provided specification. The tool facilitates automated instrumentation of a program's bytecode, which when executed will emit an event stream, the execution trace, to an observer. The observer dispatches the incoming event stream to a set of observer processes, each performing a specialized analysis, such as the temporal logic verification, the deadlock analysis and the data race analysis. Temporal logic specifications can be formulated by the user in the Maude rewriting logic, where Maude is a high-speed rewriting system for equational logic, but here extended with executable temporal logic. The Maude rewriting engine is then activated as an event driven monitoring process. Alternatively, temporal specifications can be translated into efficient automata, which check the event stream. JPAX can be used during program testing to gain increased information about program executions, and can potentially furthermore be applied during operation to survey safety critical systems.
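    The automaton-based checking described above can be illustrated with a tiny monitor for a single temporal property over a finite event trace. This is a generic sketch in the spirit of runtime verification, not JPAX's actual API (function name and event names are ours):

    ```python
    def check_response(trace, trigger, response):
        """Monitor the temporal property "always(trigger -> eventually
        response)" over a finite execution trace, as a two-state
        automaton driven by the event stream.

        Returns True if every trigger event observed in the trace is
        eventually followed by a response event.
        """
        pending = False  # automaton state: a trigger awaits its response
        for event in trace:
            if event == trigger:
                pending = True
            elif event == response:
                pending = False
        # The property holds iff no trigger is still pending at the end.
        return not pending

    # A lock that is eventually released satisfies the property; a
    # trace ending with an unmatched lock violates it.
    ok = check_response(["lock", "work", "unlock"], "lock", "unlock")
    ```

    A real monitor like JPAX consumes the instrumented program's emitted event stream incrementally rather than a completed list, but the state-update logic is the same.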

  13. Pesticide leaching by agricultural drainage in sloping, mid-textured soil conditions - the role of runoff components.

    PubMed

    Zajíček, Antonín; Fučík, Petr; Kaplická, Markéta; Liška, Marek; Maxová, Jana; Dobiáš, Jakub

    2018-04-01

    Dynamics of pesticides and their metabolites in drainage waters during baseflow periods and rainfall-runoff events (RREs) were studied from 2014 to 2016 at three small, tile-drained agricultural catchments in the Bohemian-Moravian Highlands, Czech Republic. Drainage systems in this region are typically built on slopes, with a considerable proportion of drainage runoff originating outside the drained area itself. Continuous monitoring was performed by automated samplers, and the event hydrograph was separated using ¹⁸O and ²H isotopes and drainage water temperature. Results showed that drainage systems represent a significant pathway for pesticide leaching from agricultural land. Leaching of pesticide metabolites was mainly associated with baseflow and shallow interflow; water from causal precipitation diluted their concentrations. The prerequisites for leaching of parent compounds were a rainfall-runoff event occurring shortly after spraying and the presence of event water in the runoff. When such situations occurred in sequence, pesticide concentrations in drainage water were high and the pesticide load reached several grams in a few hours. These results introduce new insights into the processes of pesticide movement in small, tile-drained catchments and emphasize the need to incorporate drainage hydrology and flow-triggered sampling into monitoring programmes in larger catchments as well as into environment-conservation policy.
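    The isotope-based hydrograph separation mentioned above is conventionally a two-component mixing model. A minimal sketch of that standard calculation (tracer values in the example are illustrative, not the study's measurements):

    ```python
    def event_water_fraction(c_total, c_pre, c_event):
        """Two-component isotope hydrograph separation.

        Tracer mass balance (e.g. for delta-18O):
            Q_t * C_t = Q_e * C_e + Q_p * C_p,  with  Q_t = Q_e + Q_p,
        gives the event-water fraction of total flow:
            Q_e / Q_t = (C_t - C_p) / (C_e - C_p).
        """
        return (c_total - c_pre) / (c_event - c_pre)

    # Illustrative delta-18O values (permil): drain flow -9, pre-event
    # groundwater -8, event (rain) water -12 -> one quarter event water.
    frac = event_water_fraction(-9.0, -8.0, -12.0)  # 0.25
    ```

    A large event-water fraction shortly after spraying is exactly the condition the study identifies for high parent-compound loads.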

  14. Visual Sensing for Urban Flood Monitoring

    PubMed Central

    Lo, Shi-Wei; Wu, Jyh-Horng; Lin, Fang-Pang; Hsu, Ching-Han

    2015-01-01

    With increasing climatic extremes, the frequency and severity of urban flood events have intensified worldwide. In this study, image-based automated monitoring of flood formation and analysis of water-level fluctuation are proposed as value-added intelligent sensing applications that turn a passive monitoring camera into a visual sensor. Combined with the proposed visual sensing method, traditional hydrological monitoring cameras gain the ability to sense and analyze the local situation during flood events. This addresses the current problem that image-based flood monitoring relies heavily on continuous manned observation. Conventional sensing networks can only offer one-dimensional physical parameters measured by gauge sensors, whereas visual sensors can acquire dynamic image information of monitored sites and provide disaster prevention agencies with actual field information for decision-making to relieve flood hazards. The visual sensing method established in this study provides spatiotemporal information that can be used for automated remote analysis for monitoring urban floods. This paper focuses on the determination of flood formation based on image-processing techniques. The experimental results suggest that the visual sensing approach may be a reliable way to determine water fluctuation and to measure its elevation and flood intrusion with respect to real-world coordinates. The performance of the proposed method has been confirmed; it has the capability to monitor and analyze flood status, and therefore it can serve as an active flood warning system. PMID:26287201

  15. Pickless event detection and location: The waveform correlation event detection system (WCEDS) revisited

    DOE PAGES

    Arrowsmith, Stephen John; Young, Christopher J.; Ballard, Sanford; ...

    2016-01-01

    The standard paradigm for seismic event monitoring breaks the event detection problem down into a series of processing stages that can be categorized at the highest level into station-level and network-level processing algorithms (e.g., Le Bras and Wuster, 2002). At the station level, waveforms are typically processed to detect signals and identify phases, which may subsequently be updated based on network processing. At the network level, phase picks are associated to form events, which are subsequently located. Furthermore, waveforms are typically directly exploited only at the station level, while network-level operations rely on earth models to associate and locate the events that generated the phase picks.
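    A pickless, waveform-driven detector of the kind WCEDS represents can be illustrated by shifting each station's trace by a model-predicted travel time for a candidate source location and stacking: a real event aligns coherently and produces a large stack peak, with no phase picks required. This is a sketch of the general idea on synthetic impulses, not the WCEDS implementation:

```python
def shift_and_stack(traces, delays):
    """Align each station trace by its predicted travel-time delay
    (in samples) for one candidate source location, then stack."""
    n = len(traces[0])
    stack = [0.0] * n
    for trace, d in zip(traces, delays):
        for i in range(n):
            stack[i] += trace[(i + d) % n]  # shift left by d samples
    return stack

# Synthetic impulses: each station records the event at sample 50 + delay
delays = [0, 5, 9]
traces = []
for d in delays:
    t = [0.0] * 100
    t[50 + d] = 1.0
    traces.append(t)

stack = shift_and_stack(traces, delays)
# Coherent peak of height 3 (all three stations) at the true origin sample
assert stack.index(max(stack)) == 50 and stack[50] == 3.0
```

    In practice this stack is evaluated over a grid of candidate locations and origin times; declaring a detection wherever the stack exceeds a threshold merges detection, association, and location into one step.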

  16. Final Technical Report. Project Boeing SGS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bell, Thomas E.

    Boeing and its partner, PJM Interconnection, teamed to bring advanced “defense-grade” technologies for cyber security to the US regional power grid through demonstration in PJM’s energy management environment. Under this cooperative project with the Department of Energy, Boeing and PJM have developed and demonstrated a host of technologies specifically tailored to the needs of PJM and the electric sector as a whole. The team has demonstrated to the energy industry a combination of processes, techniques and technologies that have been successfully implemented in the commercial, defense, and intelligence communities to identify, mitigate and continuously monitor the cyber security of critical systems. Guided by the results of a Cyber Security Risk-Based Assessment completed in Phase I, the Boeing-PJM team has completed multiple iterations through the Phase II Development and Phase III Deployment phases. Multiple cyber security solutions have been completed across a variety of controls including: Application Security, Enhanced Malware Detection, Security Incident and Event Management (SIEM) Optimization, Continuous Vulnerability Monitoring, SCADA Monitoring/Intrusion Detection, Operational Resiliency, Cyber Range simulations and hands-on cyber security personnel training. All of the developed and demonstrated solutions are suitable for replication across the electric sector and/or the energy sector as a whole. 
Benefits identified include: Improved malware and intrusion detection capability on critical SCADA networks, including behavioral-based alerts resulting in improved zero-day threat protection; Improved Security Incident and Event Management system resulting in better threat visibility, thus increasing the likelihood of detecting a serious event; Improved malware detection and zero-day threat response capability; Improved ability to systematically evaluate and secure in-house and vendor-sourced software applications; Improved ability to continuously monitor and maintain secure configuration of network devices resulting in reduced vulnerabilities for potential exploitation; Improved overall cyber security situational awareness through the integration of multiple discrete security technologies into a single cyber security reporting console; Improved ability to maintain the resiliency of critical systems in the face of a targeted cyber attack or other significant event; Improved ability to model complex networks for penetration testing and advanced training of cyber security personnel

  17. The use of a medical dictionary for regulatory activities terminology (MedDRA) in prescription-event monitoring in Japan (J-PEM).

    PubMed

    Yokotsuka, M; Aoyama, M; Kubota, K

    2000-07-01

    The Medical Dictionary for Regulatory Activities Terminology (MedDRA) version 2.1 (V2.1) was released in March 1999 accompanied by the MedDRA/J V2.1J specifically for Japanese users. In prescription-event monitoring in Japan (J-PEM), we have employed the MedDRA/J for data entry, signal generation and event listing. In J-PEM, the lowest level terms (LLTs) in the MedDRA/J are used in data entry because the richness of LLTs is judged to be advantageous. A signal is generated normally at the preferred term (PT) level, but it has been found that various reporters describe the same event using descriptions that are potentially encoded by LLTs under different PTs. In addition, some PTs are considered too specific to generate the proper signal. In the system used in J-PEM, when an LLT is selected as a candidate to encode an event, another LLT under a different PT, if any, is displayed on the computer screen so that it may be coded instead of, or in addition to, the candidate LLT. The five-level structure of the MedDRA is used when listing events but some modification is required to generate a functional event list.

  18. A Framework for Resilient Remote Monitoring

    DTIC Science & Technology

    2014-08-01

    of low-level observables are available, audited, and recorded. This establishes the need for a remote monitoring framework that can integrate with...Security, WS-Policy, SAML, XML Signature, and XML Encryption. Pearson Higher Education, 2004. [3] OMG, “Common Secure Interoperability Protocol...www.darpa.mil/Our_Work/I2O/Programs/Integrated_Cyber_Analysis_System_%28ICAS%29.aspx. [8] D. Miller and B. Pearson, Security information and event man

  19. The U.S. Response to NEOS: Avoiding a Black Swan Event

    DTIC Science & Technology

    2016-09-01

    Ibid. 224 Rich, “Major Earthquake Scenario.” 44 number of different radio frequencies.225 Employing a uniform system increases communications ...officials from using the radio as a means of communicating important information to tsunami victims. Overall, Japan’s strategy for applying the lessons...MONITORING AGENCY NAME(S) AND ADDRESS(ES) N/A 10. SPONSORING / MONITORING AGENCY REPORT NUMBER 11. SUPPLEMENTARY NOTES The views expressed in this

  20. Ultrafast table-top dynamic radiography of spontaneous or stimulated events

    DOEpatents

    Smilowitz, Laura; Henson, Bryan

    2018-01-16

    Disclosed herein are representative embodiments of methods, apparatus, and systems for performing radiography. For example, certain embodiments concern X-ray radiography of spontaneous events. Particular embodiments of the disclosed technology provide continuous high-speed x-ray imaging of spontaneous dynamic events, such as explosions, reaction-front propagation, and even material failure. Further, in certain embodiments, x-ray activation and data collection activation are triggered by the object itself that is under observation (e.g., triggered by a change of state detected by one or more sensors monitoring the object itself).

  1. Analysis of the Transport and Fate of Metals Released From ...

    EPA Pesticide Factsheets

    This project’s objectives were to provide timely analysis of water quality following the release of acid mine drainage from the Gold King Mine into the Animas and San Juan Rivers, in order to 1) generate a comprehensive picture of the plume at the river-system level, 2) help inform future monitoring efforts, and 3) predict potential secondary effects that could occur from materials that may remain stored within the system. The project focuses on assessing metals contamination during the plume and in the first month following the event.

  2. Dynamic Fault Detection Chassis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mize, Jeffery J

    2007-01-01

    The high-frequency switching megawatt-class High Voltage Converter Modulator (HVCM) developed by Los Alamos National Laboratory for the Oak Ridge National Laboratory's Spallation Neutron Source (SNS) is now in operation. One of the major problems with the modulator systems is shoot-thru conditions that can occur in an IGBT H-bridge topology, resulting in large fault currents and device failure in a few microseconds. The Dynamic Fault Detection Chassis (DFDC) is a fault monitoring system; it monitors transformer flux saturation using a window comparator and dV/dt events on the cathode voltage caused by any abnormality such as capacitor breakdown, transformer primary turns shorts, or dielectric breakdown between the transformer primary and secondary. If faults are detected, the DFDC will inhibit the IGBT gate drives and shut the system down, significantly reducing the possibility of a shoot-thru condition or other equipment-damaging events. In this paper, we will present system integration considerations, performance characteristics of the DFDC, and discuss its ability to significantly reduce costly down time for the entire facility.
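    The fault logic described, a window comparator on transformer flux plus a dV/dt check on the cathode voltage, can be sketched as below. The threshold values are hypothetical placeholders, not DFDC specifications:

```python
def fault_check(flux, v_cathode_prev, v_cathode, dt,
                flux_limits=(-0.9, 0.9), dvdt_limit=5.0e9):
    """Window comparator on normalized transformer flux plus a dV/dt
    check on the cathode voltage (V/s); True -> inhibit IGBT gate drives."""
    flux_fault = not (flux_limits[0] <= flux <= flux_limits[1])
    dvdt_fault = abs(v_cathode - v_cathode_prev) / dt > dvdt_limit
    return flux_fault or dvdt_fault

# Flux outside the comparator window -> fault
assert fault_check(0.95, 0.0, 0.0, 1e-6) is True
# Normal flux, modest voltage step -> no fault
assert fault_check(0.5, 0.0, 100.0, 1e-6) is False
# Normal flux, violent cathode-voltage step -> dV/dt fault
assert fault_check(0.5, 0.0, 1.0e4, 1e-6) is True
```

    In the hardware this decision is made by analog comparators in microseconds; the sketch only captures the decision rule, not the timing.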

  3. Converter Compressor Building, SWMU 089, Hot Spot Areas 1, 2, and 5 Operations, Maintenance, and Monitoring Report, Kennedy Space Center, Florida

    NASA Technical Reports Server (NTRS)

    Wilson, Deborah M.

    2015-01-01

    This Operations, Maintenance, and Monitoring Report (OMMR) presents the findings, observations, and results from operation of the air sparging (AS) interim measure (IM) for Hot Spot (HS) Areas 1, 2, and 5 at the Converter Compressor Building (CCB) located at Kennedy Space Center (KSC), Florida. The objective of the IM at CCB HS Areas 1, 2, and 5 is to decrease concentrations of volatile organic compounds (VOCs) in groundwater in the treatment zones via AS to levels that will enable a transition to a monitored natural attenuation (MNA) phase. This OMMR presents system operations and maintenance (O&M) information and performance monitoring results since full-scale O&M began in June 2014 (2 months after initial system startup in April 2014), including quarterly performance monitoring events in July and October 2014 and January and May 2015. Based on the results to date, the AS system is operating as designed and is meeting the performance criteria and IM objective. The performance monitoring network is adequately constructed for assessment of IM performance at CCB HS Areas 1, 2, and 5. At the March 2014 KSC Remediation Team (KSCRT) Meeting, team consensus was reached on the design prepared for expansion of the system to treat the HS 4 area, and at the November 2014 KSCRT Meeting, team consensus was reached that HS 3 was adequately delineated horizontally and vertically and that AS should be selected as the remedial approach for HS 3. At the July 2015 KSCRT meeting, team consensus was reached to continue IM operations in all zones until the HS 3 and 4 systems are operational; once the HS 3 and 4 zones are operational, operations will be discontinued in the HS 1, 2, and 5 zones where concentrations are less than GCTLs, to observe whether rebounding occurs. Team consensus was also reached to continue quarterly performance monitoring to determine whether operational zones achieve GCTLs and to continue annual IGWM of CCB-MW0012, CCB-MW0013, and CCB-MW0056, located south of the treatment area. 
The next performance monitoring event is scheduled for July 2015.

  4. Cortical spreading depression occurs during elective neurosurgical procedures.

    PubMed

    Carlson, Andrew P; William Shuttleworth, C; Mead, Brittany; Burlbaw, Brittany; Krasberg, Mark; Yonas, Howard

    2017-01-01

    OBJECTIVE Cortical spreading depression (CSD) has been observed with relatively high frequency in the period following human brain injury, including traumatic brain injury and ischemic/hemorrhagic stroke. These events are characterized by loss of ionic gradients through massive cellular depolarization, neuronal dysfunction (depression of electrocorticographic [ECoG] activity) and slow spread (2-5 mm/min) across the cortical surface. Previous data obtained in animals have suggested that even in the absence of underlying injury, neurosurgical manipulation can induce CSD and could potentially be a modifiable factor in neurosurgical injury. The authors report their initial experience with direct intraoperative ECoG monitoring for CSD. METHODS The authors prospectively enrolled patients undergoing elective craniotomy for supratentorial lesions in cases in which the surgical procedure was expected to last > 2 hours. These patients were monitored for CSD from the time of dural opening through the time of dural closure, using a standard 1 × 6 platinum electrode coupled with an AC or full-spectrum DC amplifier. The data were processed using standard techniques to evaluate for slow potential changes coupled with suppression of high-frequency ECoG propagating across the electrodes. Data were compared with CSD validated in previous intensive care unit (ICU) studies, to evaluate recording conditions most likely to permit CSD detection, and identify likely events during the course of neurosurgical procedures using standard criteria. RESULTS Eleven patients underwent ECoG monitoring during elective neurosurgical procedures. During the periods of monitoring, 2 definite CSDs were observed to occur in 1 patient and 8 suspicious events were detected in 4 patients. In other patients, either no events were observed or artifact limited interpretation of the data. The DC-coupled amplifier system represented an improvement in stability of data compared with AC-coupled systems. 
Compared with more widely used postoperative ICU monitoring, there were additional challenges with artifact from saturation during bipolar cautery as well as additional noise peaks detected. CONCLUSIONS CSD can occur during elective neurosurgical procedures even in brain regions distant from the immediate operative site. ECoG monitoring with a DC-coupled full-spectrum amplifier seemed to provide the most stable signal despite the significant challenges of the operating room environment. CSD may be responsible for some cases of secondary surgical injury. Though further studies on outcomes related to the occurrence of these events are needed, efforts to decrease the occurrence of CSD by modification of the anesthetic regimen may represent a novel target for study to increase the safety of neurosurgical procedures.

  5. Design and Deployment of a Pediatric Cardiac Arrest Surveillance System

    PubMed Central

    Newton, Heather Marie; McNamara, Leann; Engorn, Branden Michael; Jones, Kareen; Bernier, Meghan; Dodge, Pamela; Salamone, Cheryl; Bhalala, Utpal; Jeffers, Justin M.; Engineer, Lilly; Diener-West, Marie; Hunt, Elizabeth Anne

    2018-01-01

    Objective We aimed to increase detection of pediatric cardiopulmonary resuscitation (CPR) events and collection of physiologic and performance data for use in quality improvement (QI) efforts. Materials and Methods We developed a workflow-driven surveillance system that leveraged organizational information technology systems to trigger CPR detection and analysis processes. We characterized detection by notification source, type, location, and year, and compared it to previous methods of detection. Results From 1/1/2013 through 12/31/2015, there were 2,986 unique notifications associated with 2,145 events, 317 requiring CPR. PICU and PEDS-ED accounted for 65% of CPR events, whereas floor care areas were responsible for only 3% of events. 100% of PEDS-OR and >70% of PICU CPR events would not have been included in QI efforts. Performance data from both defibrillator and bedside monitor increased annually (2013: 1%; 2014: 18%; 2015: 27%). Discussion After deployment of this system, detection increased ∼9-fold and performance data collection increased annually. Had the system not been deployed, 100% of PEDS-OR and 50–70% of PICU, NICU, and PEDS-ED events would have been missed. Conclusion By leveraging hospital information technology and medical device data, identification of pediatric cardiac arrest with an associated increase in the proportion of objective performance data captured is possible. PMID:29854451

  6. Design and Deployment of a Pediatric Cardiac Arrest Surveillance System.

    PubMed

    Duval-Arnould, Jordan Michel; Newton, Heather Marie; McNamara, Leann; Engorn, Branden Michael; Jones, Kareen; Bernier, Meghan; Dodge, Pamela; Salamone, Cheryl; Bhalala, Utpal; Jeffers, Justin M; Engineer, Lilly; Diener-West, Marie; Hunt, Elizabeth Anne

    2018-01-01

    We aimed to increase detection of pediatric cardiopulmonary resuscitation (CPR) events and collection of physiologic and performance data for use in quality improvement (QI) efforts. We developed a workflow-driven surveillance system that leveraged organizational information technology systems to trigger CPR detection and analysis processes. We characterized detection by notification source, type, location, and year, and compared it to previous methods of detection. From 1/1/2013 through 12/31/2015, there were 2,986 unique notifications associated with 2,145 events, 317 requiring CPR. PICU and PEDS-ED accounted for 65% of CPR events, whereas floor care areas were responsible for only 3% of events. 100% of PEDS-OR and >70% of PICU CPR events would not have been included in QI efforts. Performance data from both defibrillator and bedside monitor increased annually (2013: 1%; 2014: 18%; 2015: 27%). After deployment of this system, detection increased ∼9-fold and performance data collection increased annually. Had the system not been deployed, 100% of PEDS-OR and 50-70% of PICU, NICU, and PEDS-ED events would have been missed. By leveraging hospital information technology and medical device data, identification of pediatric cardiac arrest with an associated increase in the proportion of objective performance data captured is possible.

  7. A model of human decision making in multiple process monitoring situations

    NASA Technical Reports Server (NTRS)

    Greenstein, J. S.; Rouse, W. B.

    1982-01-01

    Human decision making in multiple process monitoring situations is considered. It is proposed that human decision making in many multiple process monitoring situations can be modeled in terms of the human's detection of process-related events and his allocation of attention among processes once he believes events have occurred. A mathematical model of human event detection and attention allocation performance in multiple process monitoring situations is developed. An assumption made in developing the model is that, in attempting to detect events, the human generates estimates of the probabilities that events have occurred. An elementary pattern recognition technique, discriminant analysis, is used to model the human's generation of these probability estimates. The performance of the model is compared to that of four subjects in a multiple process monitoring situation requiring allocation of attention among processes.
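    The discriminant-analysis step, turning an observation into an event probability, can be illustrated with a one-dimensional two-class linear discriminant (shared-variance Gaussians). All parameter values here are illustrative, not taken from the paper:

```python
import math

def event_probability(x, mu_event, mu_normal, sigma, prior_event=0.5):
    """Posterior P(event | observation x) from a 1-D linear discriminant:
    two Gaussian classes with a shared variance."""
    def lik(mu):
        return math.exp(-((x - mu) ** 2) / (2 * sigma ** 2))
    pe = prior_event * lik(mu_event)
    pn = (1 - prior_event) * lik(mu_normal)
    return pe / (pe + pn)

# An observation midway between the class means is maximally ambiguous
assert abs(event_probability(0.5, 1.0, 0.0, 1.0) - 0.5) < 1e-9
# An observation at the event-class mean favors "event occurred"
assert event_probability(1.0, 1.0, 0.0, 1.0) > 0.5
```

    In the model, such per-process probabilities then drive the allocation of attention among the monitored processes.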

  8. HIPAA-compliant automatic monitoring system for RIS-integrated PACS operation

    NASA Astrophysics Data System (ADS)

    Jin, Jin; Zhang, Jianguo; Chen, Xiaomeng; Sun, Jianyong; Yang, Yuanyuan; Liang, Chenwen; Feng, Jie; Sheng, Liwei; Huang, H. K.

    2006-03-01

    As a governmental regulation, the Health Insurance Portability and Accountability Act (HIPAA) was issued to protect the privacy of health information that identifies individuals who are living or deceased. HIPAA requires security services supporting implementation features: access control, audit controls, authorization control, data authentication, and entity authentication. These controls, proposed in the HIPAA Security Standards, are implemented here as audit trails. Audit trails can be used for surveillance purposes, to detect when interesting events might be happening that warrant further investigation, or forensically, after the detection of a security breach, to determine what went wrong and who or what was at fault. In order to provide security control services and to achieve high and continuous availability, we designed a HIPAA-compliant automatic monitoring system for RIS-integrated PACS operation. The system consists of two parts: monitoring agents running in each PACS component computer and a Monitor Server running in a remote computer. Monitoring agents are deployed on all computer nodes in the RIS-integrated PACS system to collect the audit trail messages defined by Supplement 95 of the DICOM standard (Audit Trail Messages). The Monitor Server then gathers all audit messages and processes them to provide security information at three levels: system resources, PACS/RIS applications, and user/patient data access. RIS-integrated PACS managers can now monitor and control the entire RIS-integrated PACS operation through a web service provided by the Monitor Server. This paper presents the design of a HIPAA-compliant automatic monitoring system for RIS-integrated PACS operation, and gives preliminary results obtained with this monitoring system on a clinical RIS-integrated PACS.
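    The Monitor Server's grouping of agent-reported audit messages into three reporting levels might be sketched as below. The category names and message fields are hypothetical; DICOM Supplement 95 defines the real audit message schema:

```python
# Hypothetical mapping from a message's category field to the three
# reporting levels described in the abstract.
AUDIT_LEVELS = {
    "SystemResource": "system resources",
    "Application":    "PACS/RIS applications",
    "DataAccess":     "user/patient data access",
}

def summarize(messages):
    """Count agent-reported audit messages per reporting level."""
    summary = {level: 0 for level in AUDIT_LEVELS.values()}
    for msg in messages:
        summary[AUDIT_LEVELS[msg["category"]]] += 1
    return summary

msgs = [{"category": "DataAccess"}, {"category": "Application"},
        {"category": "DataAccess"}]
assert summarize(msgs)["user/patient data access"] == 2
```

    A real Monitor Server would of course parse the full DICOM audit XML and retain message details; the sketch only shows the three-level roll-up.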

  9. Investigating volcanic hazard in Cape Verde Islands through geophysical monitoring: network description and first results

    NASA Astrophysics Data System (ADS)

    Faria, B.; Fonseca, J. F. B. D.

    2014-02-01

    We describe a new geophysical network deployed in the Cape Verde Archipelago for the assessment and monitoring of volcanic hazards as well as the first results from the network. Across the archipelago, the ages of volcanic activity range from ca. 20 Ma to present. In general, older islands are in the east and younger ones are in the west, but there is no clear age progression of eruptive activity as widely separated islands have erupted contemporaneously on geological timescales. The overall magmatic rate is low, and there are indications that eruptive activity is episodic, with intervals between episodes of intense activity ranging from 1 to 4 Ma. Although only Fogo Island has experienced eruptions (mainly effusive) in the historic period (last 550 yr), Brava and Santo Antão have experienced numerous geologically recent eruptions, including violent explosive eruptions, and show felt seismic activity and geothermal activity. Evidence for recent volcanism in the other islands is more limited and the emphasis has therefore been on monitoring of the three critical islands of Fogo, Brava and Santo Antão, where volcanic hazard levels are highest. Geophysical monitoring of all three islands is now in operation. The first results show that on Fogo, the seismic activity is dominated by hydrothermal events and volcano-tectonic events that may be related to settling of the edifice after the 1995 eruption; in Brava by volcano-tectonic events (mostly offshore), and in Santo Antão by volcano-tectonic events, medium-frequency events and harmonic tremor. Both in Brava and in Santo Antão, the recorded seismicity indicates that relatively shallow magmatic systems are present and causing deformation of the edifices that may include episodes of dike intrusion.

  10. Investigating volcanic hazard in Cape Verde Islands through geophysical monitoring: network description and first results

    NASA Astrophysics Data System (ADS)

    Faria, B.; Fonseca, J. F. B. D.

    2013-09-01

    We describe a new geophysical network deployed in the Cape Verde archipelago for the assessment and monitoring of volcanic hazards, and the first results from the network. Across the archipelago, the ages of volcanic activity range from ca. 20 Ma to present. In general, older islands are in the east and younger ones are in the west, but there is no clear age progression and widely-separated islands have erupted contemporaneously on geological time scales. The overall magmatic rate is low, and there are indications that eruptive activity is episodic, with intervals between episodes of intense activity ranging from 1 to 4 Ma. Although only Fogo island has experienced eruptions (mainly effusive) in the historic period (last 550 yr), Brava and Santo Antão have experienced numerous geologically recent eruptions including violent explosive eruptions, and show felt seismic activity and geothermal activity. Evidence for recent volcanism in the other islands is more limited and the emphasis has therefore been on monitoring of the three critical islands of Fogo, Brava and Santo Antão, where volcanic hazard levels are highest. Geophysical monitoring of all three islands is now in operation. The first results show that in Fogo the seismic activity is dominated by hydrothermal events and volcano-tectonic events that may be related to settling of the edifice after the 1995 eruption; in Brava by volcano-tectonic events (mostly offshore), and in Santo Antão by volcano-tectonic events, medium frequency events and harmonic tremor. Both in Brava and in Santo Antão, the recorded seismicity indicates that relatively shallow magmatic systems are present and causing deformation of the edifices that may include episodes of dike intrusion.

  11. AGILE/GRID Science Alert Monitoring System: The Workflow and the Crab Flare Case

    NASA Astrophysics Data System (ADS)

    Bulgarelli, A.; Trifoglio, M.; Gianotti, F.; Tavani, M.; Conforti, V.; Parmiggiani, N.

    2013-10-01

    During the first five years of the AGILE mission we have observed many gamma-ray transients of Galactic and extragalactic origin. A fast reaction to unexpected transient events is a crucial part of the AGILE monitoring program, because the follow-up of astrophysical transients is a key point for this space mission. We present the workflow and the software developed by the AGILE Team to perform the automatic analysis for the detection of gamma-ray transients. In addition, an App for iPhone will be released enabling the Team to access the monitoring system through mobile phones. In September 2010 the science alert monitoring system presented in this paper recorded a transient phenomenon from the Crab Nebula, generating an automated alert sent via email and SMS two hours after the end of an AGILE satellite orbit, i.e. two hours after the Crab flare itself: for this discovery AGILE won the 2012 Bruno Rossi Prize. The alert system is designed for maximum speed, and in this case, as in many others, AGILE has demonstrated that the reaction speed of the monitoring system is crucial for the scientific return of the mission.

  12. Soft real-time alarm messages for ATLAS TDAQ

    NASA Astrophysics Data System (ADS)

    Darlea, G.; Al Shabibi, A.; Martin, B.; Lehmann Miotto, G.

    2010-05-01

    The ATLAS TDAQ network consists of three separate Ethernet-based networks (Data, Control and Management) with over 2000 end-nodes. The TDAQ system has to be aware of meaningful network failures and events in order to take effective recovery actions. The first stage of the process is implemented with Spectrum, a commercial network management tool. Spectrum detects and registers all network events, then publishes the information via a CORBA programming interface. A gateway program (called NSG, the Network Service Gateway) connects to Spectrum through CORBA and exposes to its clients a Java RMI interface. This interface implements a callback mechanism that allows the clients to subscribe for monitoring "interesting" parts of the network. The last stage of the TDAQ network monitoring tool is implemented in a module named DNC (DAQ to Network Connection), which filters the events that are to be reported to the TDAQ system: it subscribes to the gateway only for the machines that are currently active in the system and it forwards only the alarms that are considered important for the current TDAQ data-taking session. The network information is then synthesized and presented in a human-readable format. These messages can be further processed by the shifter in charge, a network expert, or the Online Expert System. This article aims to describe the different mechanisms of the chain that transports the network events to the front-end user, as well as the constraints and rules that govern the filtering and the final format of the alarm messages.
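    The DNC's filtering rule, forward only alarms that concern machines active in the current data-taking session and only severities that matter, can be sketched as follows. The field names and severity labels are assumptions for illustration:

```python
def filter_alarms(alarms, active_nodes,
                  important_severities=frozenset({"critical", "major"})):
    """Keep only alarms for currently active nodes whose severity
    is considered important for the current data-taking session."""
    return [a for a in alarms
            if a["node"] in active_nodes
            and a["severity"] in important_severities]

alarms = [
    {"node": "ros-42",  "severity": "critical", "text": "link down"},
    {"node": "ros-42",  "severity": "info",     "text": "port flap"},
    {"node": "spare-7", "severity": "critical", "text": "link down"},
]
# Only the critical alarm on the active node survives the filter
assert len(filter_alarms(alarms, active_nodes={"ros-42"})) == 1
```

    Filtering at this stage keeps the shifter's display free of alarms about idle spares and low-severity chatter, which is exactly the role the abstract assigns to the DNC.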

  13. Misclassification of OSA severity with automated scoring of home sleep recordings.

    PubMed

    Aurora, R Nisha; Swartz, Rachel; Punjabi, Naresh M

    2015-03-01

    The advent of home sleep testing has allowed for the development of an ambulatory care model for OSA that most health-care providers can easily deploy. Although automated algorithms that accompany home sleep monitors can identify and classify disordered breathing events, it is unclear whether manual scoring followed by expert review of home sleep recordings is of any value. Thus, this study examined the agreement between automated and manual scoring of home sleep recordings. Two type 3 monitors (ApneaLink Plus [ResMed] and Embletta [Embla Systems]) were examined in distinct study samples. Data from manual and automated scoring were available for 200 subjects. Two thresholds for oxygen desaturation (≥ 3% and ≥ 4%) were used to define disordered breathing events. Agreement between manual and automated scoring was examined using Pearson correlation coefficients and Bland-Altman analyses. Automated scoring consistently underscored disordered breathing events compared with manual scoring for both sleep monitors irrespective of whether a ≥ 3% or ≥ 4% oxygen desaturation threshold was used to define the apnea-hypopnea index (AHI). For the ApneaLink Plus monitor, Bland-Altman analyses revealed an average AHI difference between manual and automated scoring of 6.1 (95% CI, 4.9-7.3) and 4.6 (95% CI, 3.5-5.6) events/h for the ≥ 3% and ≥ 4% oxygen desaturation thresholds, respectively. Similarly for the Embletta monitor, the average difference between manual and automated scoring was 5.3 (95% CI, 3.2-7.3) and 8.4 (95% CI, 7.2-9.6) events/h, respectively. Although agreement between automated and manual scoring of home sleep recordings varies based on the device used, modest agreement was observed between the two approaches. However, manual review of home sleep test recordings can decrease the misclassification of OSA severity, particularly for those with mild disease. ClinicalTrials.gov; No.: NCT01503164; www.clinicaltrials.gov.
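    The statistic reported above, the mean manual-minus-automated AHI difference with a 95% CI for the mean (not limits of agreement), can be reproduced on toy data. The AHI values below are illustrative, not study data:

```python
import math

def bland_altman_mean_diff(manual, automated):
    """Mean of the paired manual-minus-automated differences, with a
    95% confidence interval for that mean."""
    diffs = [m - a for m, a in zip(manual, automated)]
    n = len(diffs)
    mean = sum(diffs) / n
    sd = math.sqrt(sum((d - mean) ** 2 for d in diffs) / (n - 1))
    half_width = 1.96 * sd / math.sqrt(n)
    return mean, (mean - half_width, mean + half_width)

manual = [10.0, 12.0, 8.0, 14.0]   # illustrative AHI values (events/h)
automated = [6.0, 7.0, 2.0, 9.0]
mean, ci = bland_altman_mean_diff(manual, automated)
assert abs(mean - 5.0) < 1e-9      # automated under-scores by 5 events/h here
```

    A positive mean difference, as in the study, indicates that automated scoring reports fewer disordered breathing events per hour than manual scoring.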

  14. Misclassification of OSA Severity With Automated Scoring of Home Sleep Recordings

    PubMed Central

    Aurora, R. Nisha; Swartz, Rachel

    2015-01-01

    BACKGROUND: The advent of home sleep testing has allowed for the development of an ambulatory care model for OSA that most health-care providers can easily deploy. Although automated algorithms that accompany home sleep monitors can identify and classify disordered breathing events, it is unclear whether manual scoring followed by expert review of home sleep recordings is of any value. Thus, this study examined the agreement between automated and manual scoring of home sleep recordings. METHODS: Two type 3 monitors (ApneaLink Plus [ResMed] and Embletta [Embla Systems]) were examined in distinct study samples. Data from manual and automated scoring were available for 200 subjects. Two thresholds for oxygen desaturation (≥ 3% and ≥ 4%) were used to define disordered breathing events. Agreement between manual and automated scoring was examined using Pearson correlation coefficients and Bland-Altman analyses. RESULTS: Automated scoring consistently underscored disordered breathing events compared with manual scoring for both sleep monitors irrespective of whether a ≥ 3% or ≥ 4% oxygen desaturation threshold was used to define the apnea-hypopnea index (AHI). For the ApneaLink Plus monitor, Bland-Altman analyses revealed an average AHI difference between manual and automated scoring of 6.1 (95% CI, 4.9-7.3) and 4.6 (95% CI, 3.5-5.6) events/h for the ≥ 3% and ≥ 4% oxygen desaturation thresholds, respectively. Similarly for the Embletta monitor, the average difference between manual and automated scoring was 5.3 (95% CI, 3.2-7.3) and 8.4 (95% CI, 7.2-9.6) events/h, respectively. CONCLUSIONS: Although agreement between automated and manual scoring of home sleep recordings varies based on the device used, modest agreement was observed between the two approaches. However, manual review of home sleep test recordings can decrease the misclassification of OSA severity, particularly for those with mild disease. 
TRIAL REGISTRY: ClinicalTrials.gov; No.: NCT01503164; www.clinicaltrials.gov PMID:25411804
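The Bland-Altman comparison of manual vs. automated scoring described above can be sketched in a few lines. The paired AHI values below are hypothetical illustrations, not data from the study.

```python
import statistics

def bland_altman(manual, automated):
    """Return the mean difference (bias) and 95% limits of agreement
    between two paired series of AHI scores (events/h)."""
    diffs = [m - a for m, a in zip(manual, automated)]
    bias = statistics.mean(diffs)
    sd = statistics.stdev(diffs)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

# Hypothetical paired AHI scores for six subjects (events/h)
manual    = [12.0, 25.0, 7.5, 40.0, 18.0, 30.0]
automated = [ 8.0, 20.0, 5.0, 33.0, 14.0, 24.0]

bias, (lo, hi) = bland_altman(manual, automated)
print(f"bias = {bias:.1f} events/h, limits of agreement = ({lo:.1f}, {hi:.1f})")
```

A positive bias, as in the study, means automated scoring reports fewer events per hour than manual scoring.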

  15. Online monitoring of seismic damage in water distribution systems

    NASA Astrophysics Data System (ADS)

    Liang, Jianwen; Xiao, Di; Zhao, Xinhua; Zhang, Hongwei

    2004-07-01

    Water distribution systems can be damaged by earthquakes, and the damage cannot easily be located, especially immediately after an event. Earthquake experience shows that accurate and quick location of seismic damage is critical to the emergency response of water distribution systems. This paper develops a methodology for locating seismic damage (multiple breaks) in a water distribution system by monitoring water pressure online at a limited number of positions in the system. For online monitoring, supervisory control and data acquisition (SCADA) technology is well suited. A neural network-based inverse analysis method is constructed to locate the seismic damage from the variation in water pressure. The neural network is trained on analytically simulated data from the water distribution system and validated on a set of data never used in training. The methodology is found to provide an effective and practical way to locate seismic damage in a water distribution system accurately and quickly.
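The core idea of the inverse analysis can be illustrated with a toy example. The paper trains a neural network on analytically simulated pressure data; the sketch below substitutes a nearest-neighbour lookup over hypothetical simulated "signatures" for the trained network, purely to show the pressure-pattern-to-break-location mapping.

```python
import math

# Hypothetical simulated training data: candidate break location ->
# pressure drops (m of head) observed at three monitoring points.
signatures = {
    "pipe_A": (0.8, 0.3, 0.1),
    "pipe_B": (0.2, 0.9, 0.4),
    "pipe_C": (0.1, 0.4, 0.7),
}

def locate_break(observed):
    """Return the candidate break location whose simulated pressure
    signature is closest (Euclidean distance) to the observation."""
    return min(signatures,
               key=lambda loc: math.dist(signatures[loc], observed))

print(locate_break((0.15, 0.45, 0.65)))
```

A trained network generalizes between simulated break scenarios, which a lookup cannot; the sketch only shows the shape of the inverse mapping.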

  16. Seasonal and spatial variability of nitrosamines and their precursor sources at a large-scale urban drinking water system.

    PubMed

    Woods, Gwen C; Trenholm, Rebecca A; Hale, Bruce; Campbell, Zeke; Dickenson, Eric R V

    2015-07-01

    Nitrosamines are considered to pose greater health risks than currently regulated DBPs and are consequently listed as priority pollutants by the EPA, with potential for future regulation. Denver Water, as part of the EPA's Unregulated Contaminant Monitoring Rule 2 (UCMR2) monitoring campaign, found detectable levels of N-nitrosodimethylamine (NDMA) at all sites of maximum residency within the distribution system. To better understand the occurrence of nitrosamines and nitrosamine precursors, Denver Water undertook a comprehensive year-long monitoring campaign. Samples were taken every two weeks to monitor for NDMA in the distribution system, and quarterly sampling events further examined 9 nitrosamines and nitrosamine precursors throughout the treatment and distribution systems. NDMA levels within the distribution system were typically low (<1.3 to 7.2 ng/L), with a remote distribution site (frequently >200 h of residency) experiencing the highest concentrations found. Eight other nitrosamines (N-nitrosomethylethylamine, N-nitrosodiethylamine, N-nitroso-di-n-propylamine, N-nitroso-di-n-butylamine, N-nitroso-di-phenylamine, N-nitrosopyrrolidine, N-nitrosopiperidine, N-nitrosomorpholine) were also monitored, but neither these 8 nor their precursors [as estimated with formation potential (FP) tests] were detected in raw, partially treated, or distribution samples. Throughout the year, there was evidence that seasonality may impact NDMA formation, with lower temperatures (~5-10°C) producing greater NDMA formation than warmer months. The year of sampling further provided evidence that water quality and weather events may impact NDMA precursor loads. Precursor loading estimates demonstrated that NDMA precursors increased during treatment (potentially from cationic polymer coagulant aids). The precursor analysis also provided evidence that precursors may have increased further within the distribution system itself. 
This comprehensive study of a large-scale drinking water system provides insight into the variability of NDMA occurrence in a chloraminated system, which may be impacted by seasonality, water quality changes and/or the varied origins of NDMA precursors within a given system. Copyright © 2015 Elsevier B.V. All rights reserved.

  17. On the monitoring and prediction of flash floods in small and medium-sized catchments - the EXTRUSO project

    NASA Astrophysics Data System (ADS)

    Wiemann, Stefan; Eltner, Anette; Sardemann, Hannes; Spieler, Diana; Singer, Thomas; Thanh Luong, Thi; Janabi, Firas Al; Schütze, Niels; Bernard, Lars; Bernhofer, Christian; Maas, Hans-Gerd

    2017-04-01

    Flash floods regularly cause severe socio-economic damage worldwide. In parallel, climate change is very likely to increase the number of such events, owing to an increasing frequency of extreme precipitation events (EASAC 2013). Whereas recent work primarily addresses the resilience of large catchment areas, the major impact of hydro-meteorological extremes caused by heavy precipitation falls on small areas. These are very difficult to observe and predict because of sparse monitoring networks and few means of hydro-meteorological modelling, especially in small catchment areas. The objective of the EXTRUSO project is to identify and implement appropriate means to close this gap through an interdisciplinary approach, combining research expertise from meteorology, hydrology, photogrammetry and geoinformatics. The project targets innovative techniques for spatio-temporally densified monitoring and simulation for the analysis, prediction and warning of local hydro-meteorological extreme events. The following four aspects are of particular interest: 1. The monitoring, analysis and combination of relevant hydro-meteorological parameters from various sources, including existing monitoring networks, ground radar, specific low-cost sensors and crowdsourcing. 2. The determination of relevant hydro-morphological parameters from different photogrammetric sensors (e.g. camera, laser scanner) and sensor platforms (e.g. UAV (unmanned aerial vehicle) and UWV (unmanned water vehicle)). 3. The continuous hydro-meteorological modelling of precipitation, soil moisture and water flows by means of conceptual and data-driven modelling. 4. The development of a collaborative, web-based service infrastructure as an information and communication point, especially in the case of an extreme event. 
There are three major applications for the planned information system: first, warning the population of local extreme events in potentially affected areas; second, supporting decision makers and emergency responders in the case of an event; and third, developing open, interoperable tools that other researchers can apply and develop further. The test area of the project is the Free State of Saxony (Germany), with a number of small and medium catchment areas. However, the whole system, comprising models, tools and sensor setups, is planned to be transferred to and tested in other areas within and outside Europe as well. The team working on the project consists of eight researchers, including five PhD students and three postdocs. The EXTRUSO project is funded by the European Social Fund (ESF, grant nr. 100270097) with a project duration of three years, until June 2019. EASAC (2013): Trends in extreme weather events in Europe: implications for national and European Union adaptation strategies. European Academies Science Advisory Council. Policy report 22, November 2013

  18. Heart rate informed artificial pancreas system enhances glycemic control during exercise in adolescents with T1D.

    PubMed

    DeBoer, Mark D; Cherñavvsky, Daniel R; Topchyan, Katarina; Kovatchev, Boris P; Francis, Gary L; Breton, Marc D

    2017-11-01

    To evaluate the safety and performance of using a heart rate (HR) monitor to inform an artificial pancreas (AP) system during exercise among adolescents with type 1 diabetes (T1D). In a randomized, cross-over trial, adolescents with T1D aged 13-18 years were enrolled to receive, on separate days, either the unmodified UVa AP (stdAP) or an AP system connected to a portable HR monitor (AP-HR) that triggered an exercise algorithm for blood glucose (BG) control. During admissions, participants underwent a structured exercise regimen. Hypoglycemic events and CGM tracings were compared between the two admissions, during exercise and for the full 24-hour period. Eighteen participants completed the trial. While the number of hypoglycemic events during exercise and rest did not differ between visits (0.39 AP-HR vs. 0.50 stdAP), time below 70 mg/dL was lower on AP-HR than on stdAP: 0.5±2.1% vs. 7.4±12.5% (P = 0.028). Time with BG within 70-180 mg/dL was higher for the AP-HR admission than for stdAP during the exercise portion and overall (96% vs. 87%, and 77% vs. 74%), but these differences did not reach statistical significance (P = 0.075 and P = 0.366). Heart rate signals can be safely and efficaciously integrated into a wireless AP system to inform it of physical activity. While exercise contributes to hypoglycemia among adolescents even when using an AP system, informing the system of exercise via an HR monitor improved time <70 mg/dL. Nonetheless, it did not significantly reduce the total number of hypoglycemic events, which was low in both groups. © 2016 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
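The time-in-range outcomes reported above are simple percentages over a CGM trace. A minimal sketch, using a hypothetical 5-minute CGM trace rather than study data:

```python
def time_in_ranges(bg_readings):
    """Percentage of CGM readings below 70, within 70-180, and above
    180 mg/dL (the boundaries used in the abstract above)."""
    n = len(bg_readings)
    low  = sum(bg < 70 for bg in bg_readings)
    in_r = sum(70 <= bg <= 180 for bg in bg_readings)
    high = n - low - in_r
    return tuple(round(100 * x / n, 1) for x in (low, in_r, high))

# Hypothetical CGM trace sampled every 5 minutes during exercise (mg/dL)
trace = [110, 95, 82, 74, 68, 72, 90, 120, 150, 175]
print(time_in_ranges(trace))
```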

  19. Real-time monitoring of Lévy flights in a single quantum system

    NASA Astrophysics Data System (ADS)

    Issler, M.; Höller, J.; Imamoǧlu, A.

    2016-02-01

    Lévy flights are random walks whose dynamics is dominated by rare events. Even though they have been studied in vastly different physical systems, their observation in a single quantum system has remained elusive. Here we analyze a periodically driven open central spin system and demonstrate theoretically that the dynamics of the spin environment exhibits Lévy flights. For the particular realization in a single-electron charged quantum dot driven by periodic resonant laser pulses, we use Monte Carlo simulations to confirm that the long waiting times between successive nuclear spin-flip events are governed by a power-law distribution; the corresponding exponent η = -3/2 can be directly measured in real time by observing the waiting-time distribution of successive photon emission events. Remarkably, the dominant intrinsic limitation of the scheme, arising from nuclear quadrupole coupling, can be minimized by adjusting the magnetic field or by implementing spin echo.
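A Monte Carlo sketch of power-law-distributed waiting times with exponent -3/2, in the spirit of the simulations described above (the minimum waiting time and sample count are arbitrary choices, not parameters from the paper). Sampling uses the inverse-CDF method, and the exponent is recovered with the maximum-likelihood (Hill-type) estimator.

```python
import math
import random

random.seed(42)
T_MIN = 1.0   # arbitrary shortest waiting time

def sample_waiting_time():
    # For p(t) ~ t**-1.5 (t >= T_MIN), the inverse CDF is T_MIN*(1-u)**-2
    return T_MIN * (1.0 - random.random()) ** -2

times = [sample_waiting_time() for _ in range(100_000)]

# MLE for the exponent alpha in p(t) ~ t**-alpha
alpha = 1.0 + len(times) / sum(math.log(t / T_MIN) for t in times)
print(f"estimated exponent: -{alpha:.2f}")   # should be close to -1.5
```

The heavy tail means a few very long waiting times dominate the total elapsed time, which is the hallmark of a Lévy flight.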

  20. Design and implementation of the GLIF3 guideline execution engine.

    PubMed

    Wang, Dongwen; Peleg, Mor; Tu, Samson W; Boxwala, Aziz A; Ogunyemi, Omolola; Zeng, Qing; Greenes, Robert A; Patel, Vimla L; Shortliffe, Edward H

    2004-10-01

    We have developed the GLIF3 Guideline Execution Engine (GLEE) as a tool for executing guidelines encoded in the GLIF3 format. In addition to serving as an interface to the GLIF3 guideline representation model to support the specified functions, GLEE provides defined interfaces to electronic medical records (EMRs) and other clinical applications to facilitate its integration with the clinical information system at a local institution. The execution model of GLEE takes the "system suggests, user controls" approach. A tracing system is used to record an individual patient's state when a guideline is applied to that patient. GLEE can also support an event-driven execution model once it is linked to the clinical event monitor in a local environment. Evaluation has shown that GLEE can be used effectively for proper execution of guidelines encoded in the GLIF3 format. When using it to execute each guideline in the evaluation, GLEE's performance duplicated that of the reference systems implementing the same guideline but taking different approaches. The execution flexibility and generality provided by GLEE, and its integration with a local environment, need to be further evaluated in clinical settings. Integration of GLEE with a specific event-monitoring and order-entry environment is the next step of our work to demonstrate its use for clinical decision support. Potential uses of GLEE also include quality assurance, guideline development, and medical education.

  1. Real-time Geographic Information System (GIS) for Monitoring the Area of Potential Water Level Using Rule Based System

    NASA Astrophysics Data System (ADS)

    Anugrah, Wirdah; Suryono; Suseno, Jatmiko Endro

    2018-02-01

    Management of water resources based on a Geographic Information System (GIS) can provide substantial benefits for water-availability planning. Monitoring the potential water level is needed in development, agriculture, energy and other sectors. This research develops a web-based water resource information system that uses a real-time GIS concept to monitor the potential water level of an area by applying the rule-based system method. The GIS consists of hardware, software, and a database. Following a web-based GIS architecture, this study uses a set of networked computers running the Apache web server and the PHP programming language with a MySQL database. An ultrasound wireless sensor system is used as the water-level data input; it also provides time and geographic-location information. The GIS maps the five sensor locations and is processed through a rule-based system to determine the area's potential water level. Water-level monitoring results can be displayed on thematic maps by overlaying more than one layer, as tables generated from the database, and as graphs based on event times and water-level values.
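The rule-based classification step can be sketched as an ordered list of threshold rules, the first match winning. The categories and thresholds below are illustrative assumptions, not the ones used by the system described above.

```python
# Ordered rules: each maps a water-level reading (cm) to a category.
RULES = [
    (lambda level: level < 50,  "normal"),
    (lambda level: level < 100, "alert"),
    (lambda level: level < 150, "warning"),
    (lambda level: True,        "danger"),
]

def classify(level_cm):
    """Return the first matching category for a water-level reading."""
    for condition, category in RULES:
        if condition(level_cm):
            return category

# Hypothetical latest readings from three sensor stations
readings = {"sensor_1": 30, "sensor_2": 120, "sensor_3": 160}
status = {sid: classify(level) for sid, level in readings.items()}
print(status)
```

In the deployed system each reading would also carry a timestamp and coordinates, so the resulting category can be drawn as a layer on the thematic map.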

  2. Detection, Location, and Characterization of Hydroacoustic Signals Using Seafloor Cable Networks Offshore Japan

    NASA Astrophysics Data System (ADS)

    Suyehiro, K.; Sugioka, H.; Watanabe, T.

    2008-12-01

    The hydroacoustic monitoring component of the International Monitoring System (IMS) for the CTBT (Comprehensive Nuclear-Test-Ban Treaty) verification system utilizes hydrophone stations (6) and seismic stations (5, called T-phase stations) for worldwide detection. Conspicuous signals of natural origin include those from earthquakes, volcanic eruptions, or whale calls. Among artificial sources are non-nuclear explosions and airgun shots. It is important for the IMS to detect and locate hydroacoustic events with sufficient accuracy and to correctly characterize the signals and identify the source. As a number of seafloor cable networks are operated offshore the Japanese islands, mostly facing the Pacific Ocean, for monitoring regional seismicity, the data from these stations (pressure and seismic sensors) may be utilized to increase the capability of the IMS. We use these data to compare some selected event parameters with those from the IMS. In particular, there have been several unconventional acoustic signals in the western Pacific, which were also captured by IMS hydrophones across the Pacific in the period from 2007 to the present. These anomalous examples, along with dynamite shots used for seismic crustal-structure studies and other natural sources, will be presented in order to help improve the IMS verification capabilities for detection, location and characterization of anomalous signals.

  3. Visual Indicators on Vaccine Boxes as Early Warning Tools to Identify Potential Freeze Damage.

    PubMed

    Angoff, Ronald; Wood, Jillian; Chernock, Maria C; Tipping, Diane

    2015-07-01

    The aim of this study was to determine whether the use of visual freeze indicators on vaccines would assist health care providers in identifying vaccines that may have been exposed to potentially damaging temperatures. Twenty-seven sites in Connecticut involved in the Vaccines for Children Program participated. In addition to standard procedures, visual freeze indicators (FREEZEmarker® L; Temptime Corporation, Morris Plains, NJ) were affixed to each box of vaccine that required refrigeration but must not be frozen. Temperatures were monitored twice daily. During the 24 weeks, the 27 sites experienced triggered visual freeze indicator events in 40 of the 45 refrigerators. A total of 66 triggered freeze indicator events occurred across all 4 types of refrigerators used. Only 1 of the freeze events was identified by a temperature-monitoring device. Temperatures recorded on vaccine data logs before freeze indicator events were within the 35°F to 46°F (2°C to 8°C) range in all but 1 instance. A total of 46,954 doses of freeze-sensitive vaccine were stored at the time of a visual freeze indicator event. Triggered visual freeze indicators were found on boxes containing 6566 doses (14.0% of total doses). Of all doses stored, 14,323 doses (30.5%) were of highly freeze-sensitive vaccine; 1789 of these doses (12.5%) had triggered indicators on the boxes. Visual freeze indicators are useful in the early identification of freeze events involving vaccines. Consideration should be given to including these devices as a component of the temperature-monitoring system for vaccines.
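The dose percentages reported above follow directly from the counts and can be verified in a few lines:

```python
# Dose counts from the abstract
total_doses = 46954   # freeze-sensitive doses stored at indicator events
triggered   = 6566    # doses in boxes with triggered indicators
hfs_doses   = 14323   # highly freeze-sensitive doses
hfs_trigger = 1789    # highly freeze-sensitive doses with triggered indicators

pct_triggered = round(100 * triggered / total_doses, 1)   # 14.0%
pct_hfs_share = round(100 * hfs_doses / total_doses, 1)   # 30.5%
pct_hfs_trig  = round(100 * hfs_trigger / hfs_doses, 1)   # 12.5%
print(pct_triggered, pct_hfs_share, pct_hfs_trig)
```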

  4. Design, development, and field demonstration of a remotely deployable water quality monitoring system

    NASA Technical Reports Server (NTRS)

    Wallace, J. W.; Lovelady, R. W.; Ferguson, R. L.

    1981-01-01

    A prototype water quality monitoring system is described which offers almost continuous in situ monitoring. The two-man portable system features: (1) a microprocessor controlled central processing unit which allows preprogrammed sampling schedules and reprogramming in situ; (2) a subsurface unit for multiple depth capability and security from vandalism; (3) an acoustic data link for communications between the subsurface unit and the surface control unit; (4) eight water quality parameter sensors; (5) a nonvolatile magnetic bubble memory which prevents data loss in the event of power interruption; (6) a rechargeable power supply sufficient for 2 weeks of unattended operation; (7) a water sampler which can collect samples for laboratory analysis; (8) data output in direct engineering units on printed tape or through a computer compatible link; (9) internal electronic calibration eliminating external sensor adjustment; and (10) acoustic location and recovery systems. Data obtained in Saginaw Bay, Lake Huron are tabulated.

  5. Sewage Monitors

    NASA Technical Reports Server (NTRS)

    1987-01-01

    Every U.S. municipality must determine how much waste water it is processing and, more importantly, how much is going unprocessed into lakes and streams, either because of leaks in the sewer system or because the city's sewage facilities receive more sewer flow than they were designed to handle. ADS Environmental Services, Inc.'s development of the Quadrascan Flow Monitoring System met the need for an accurate method of data collection. The system consists of a series of monitoring sensors and microcomputers that continually measure water depth at particular sewer locations and report their findings to a central computer. This provides precise information to city managers on overall flow, flow in any section of the city, the location and severity of leaks, and warnings of potential overload. The core technology has since been expanded with both technical improvements and functionality for new applications, including event alarming and control for critical collection-system management problems.

  6. Intelligent MONitoring System for antiviral pharmacotherapy in patients with chronic hepatitis C (SiMON-VC).

    PubMed

    Margusino-Framiñán, Luis; Cid-Silva, Purificación; Mena-de-Cea, Álvaro; Sanclaudio-Luhía, Ana Isabel; Castro-Castro, José Antonio; Vázquez-González, Guillermo; Martín-Herranz, Isabel

    2017-01-01

    Two of the six strategic axes of pharmaceutical care in our hospital are quality and safety of care and the incorporation of information technologies. On this basis, an information system, SiMON-VC, was developed in the outpatient setting for the pharmaceutical care of patients with chronic hepatitis C, intended to improve the quality and safety of their pharmacotherapy. The objective of this paper is to describe the requirements, structure and features of SiMON-VC. The requirements were that the information system would automatically capture all critical data from electronic clinical records at each visit to the Outpatient Pharmacy Unit, allow the generation of events and alerts, document the pharmaceutical care provided, and allow the use of data for research purposes. To meet these requirements, 5 sections were structured for each patient in SiMON-VC: Main Record, Events, Notes, Monitoring Graphs and Tables, and Follow-up. Each section presents a number of tabs with the coded data needed to monitor patients in the outpatient unit. The system automatically generates alerts for assisted prescription validation and for the efficacy and safety of the antivirals used to treat this disease. It features a fully versatile Indicator Control Panel, where monitoring standards and alerts can be set over time. It allows the generation of reports and their export to the electronic clinical record, and data can be exported to the usual operating systems through Big Data and Business Intelligence tools. In summary, SiMON-VC improves the quality of pharmaceutical care provided in the outpatient pharmacy unit to patients with chronic hepatitis C, increasing the safety of antiviral therapy. Copyright AULA MEDICA EDICIONES 2014. Published by AULA MEDICA. All rights reserved.

  7. Paroxysmal events during prolonged video electroencephalography monitoring in refractory epilepsy.

    PubMed

    Sanabria-Castro, A; Henríquez-Varela, F; Monge-Bonilla, C; Lara-Maier, S; Sittenfeld-Appel, M

    2017-03-16

    Given that epileptic seizures and non-epileptic paroxysmal events have similar clinical manifestations, using specific diagnostic methods is crucial, especially in patients with drug-resistant epilepsy. Prolonged video electroencephalography monitoring during epileptic seizures reveals epileptiform discharges and has become an essential procedure for epilepsy diagnosis. The main purpose of this study is to characterise paroxysmal events and compare patterns in patients with refractory epilepsy. We conducted a retrospective analysis of medical records from 91 patients diagnosed with refractory epilepsy who underwent prolonged video electroencephalography monitoring during hospitalisation. During monitoring, 76.9% of the patients (n=70) had paroxysmal events. The mean number of events was 3.4±2.7; the duration of these events was highly variable. Most patients (80%) experienced seizures during wakefulness. The most common events were focal seizures with altered levels of consciousness, progressive bilateral generalised seizures, and psychogenic non-epileptic seizures. Across all paroxysmal events, no differences were observed in the number or type of events by sex, in duration by sex or age at onset, or in the number of events by type of event. Psychogenic non-epileptic seizures were predominantly registered during wakefulness, lasted longer, started at older ages, and were more frequent in women. Paroxysmal events recorded during prolonged video electroencephalography monitoring in patients with refractory epilepsy show patterns and characteristics similar to those reported elsewhere. Copyright © 2017 The Author(s). Published by Elsevier España, S.L.U. All rights reserved.

  8. Introduction to monitoring dynamic environmental phenomena of the world using satellite data collection systems, 1978

    USGS Publications Warehouse

    Carter, William Douglas; Paulson, Richard W.

    1979-01-01

    The rapid development of satellite technology, especially in the area of radio transmission and imaging systems, makes it possible to monitor dynamic surface phenomena of the Earth in considerable detail. The monitoring systems that have been developed are compatible with standard monitoring systems such as snow, stream, and rain gages; wind, temperature and humidity measuring instruments; tiltmeters and seismic event counters. Supported by appropriate power, radios and antennae, remote stations can be left unattended for at least 1 year and consistently relay local information via polar orbiting or geostationary satellites. These data, in conjunction with timely Landsat images, can provide a basis for more accurate estimates on snowfall, water runoff, reservoir level changes, flooding, drought effects, and vegetation trends and may be of help in forecasting volcanic eruptions. These types of information are critical for resource inventory and development, especially in developing countries where remote regions are commonly difficult to access. This paper introduces the reader to the systems available, describes their features and limitations, and provides suggestions on how to employ them. An extensive bibliography is provided for those who wish more information.

  9. Design and implementation of a status at a glance user interface for a power distribution expert system

    NASA Technical Reports Server (NTRS)

    Liberman, Eugene M.; Manner, David B.; Dolce, James L.; Mellor, Pamela A.

    1993-01-01

    A user interface to the power distribution expert system for Space Station Freedom is discussed. The importance of features which simplify assessing system status and which minimize navigating through layers of information are examined. Design rationale and implementation choices are also presented. The amalgamation of such design features as message linking arrows, reduced information content screens, high salience anomaly icons, and color choices with failure detection and diagnostic explanation from an expert system is shown to provide an effective status-at-a-glance monitoring system for power distribution. This user interface design offers diagnostic reasoning without compromising the monitoring of current events. The display can convey complex concepts in terms that are clear to its users.

  10. The Future of the Perfusion Record: Automated Data Collection vs. Manual Recording

    PubMed Central

    Ottens, Jane; Baker, Robert A.; Newland, Richard F.; Mazzone, Annette

    2005-01-01

    Abstract: The perfusion record, whether manually recorded or computer generated, is a legal representation of the procedure. The handwritten perfusion record has been the most common method of recording events that occur during cardiopulmonary bypass. This record stands in marked contrast to the integrated data management systems available, which provide continuous collection of data automatically or by means of a few keystrokes. Additionally, an increasing number of monitoring devices are available to assist in the management of patients on bypass. These devices are becoming more complex and provide more data for the perfusionist to monitor and record. Most of the data from these devices can be downloaded automatically into online data management systems, allowing more time for the perfusionist to concentrate on the patient while simultaneously producing a more accurate record. In this prospective report, we compared 17 cases that were recorded using both manual and electronic data collection techniques. The perfusionist in charge of the case recorded the perfusion using the manual technique while a second perfusionist entered relevant events on the electronic record generated by the Stockert S3 Data Management System/Data Bahn (Munich, Germany). Analysis of the two types of perfusion records showed significant variations in the recorded information. The areas showing the most inconsistency included measurement of perfusion pressures, flow, blood temperatures, cardioplegia delivery details, and the recording of events, with the electronic record superior in the integrity of the data. In addition, the limitations of the electronic system were shown by the lack of electronic gas-flow data in our hardware. Our results confirm the importance of accurate methods of recording perfusion events. The use of an automated system provides the opportunity to minimize transcription error and bias. 
This study highlights the limitation of spot recording of perfusion events in the overall record keeping for perfusion management. PMID:16524151

  11. SCADA data and the quantification of hazardous events for QMRA.

    PubMed

    Nilsson, P; Roser, D; Thorwaldsdotter, R; Petterson, S; Davies, C; Signor, R; Bergstedt, O; Ashbolt, N

    2007-01-01

    The objective of this study was to assess the use of on-line monitoring to support the QMRA at water treatment plants studied in the EU MicroRisk project. SCADA data were obtained from three Catchment-to-Tap Systems (CTS) along with system descriptions, diary records, grab sample data and deviation reports. Particular attention was paid to estimating hazardous event frequency, duration and magnitude. Using Shewhart and CUSUM methods, we identified 'change-points' corresponding to events of between 10 min and more than 1 month in duration in the time-series data. Our analysis confirmed that it is possible to quantify hazardous event durations from turbidity, chlorine residual and pH records and to distinguish them from non-hazardous variability in the time-series dataset. The durations of most 'events' were short (0.5-2.3 h). These data were combined with QMRA to estimate the pathogen infection risk arising from events such as chlorination failure. While analysis of SCADA data alone could provisionally identify events, its interpretation was severely constrained in the absence of diary records and other system information. SCADA data analysis should therefore complement traditional water sampling rather than replace it. More work on on-line data management, quality control and interpretation is needed before it can be used routinely for event characterization.
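A one-sided CUSUM over a SCADA trace, of the kind used for change-point detection above, can be sketched in a few lines. The turbidity values, target mean, slack and threshold below are illustrative, not values from the study.

```python
def cusum_upward(series, target, slack, threshold):
    """Flag indices where the cumulative sum of positive deviations
    from `target` (less a `slack` allowance) exceeds `threshold`."""
    s, alarms = 0.0, []
    for i, x in enumerate(series):
        s = max(0.0, s + (x - target - slack))
        if s > threshold:
            alarms.append(i)
            s = 0.0            # reset after signalling an event
    return alarms

# Hypothetical turbidity readings (NTU): stable, then a short event
turbidity = [0.1, 0.2, 0.1, 0.2, 0.9, 1.1, 1.0, 0.2, 0.1]
print(cusum_upward(turbidity, target=0.15, slack=0.05, threshold=1.0))
```

Because the statistic accumulates small sustained deviations, CUSUM flags the onset of an event rather than isolated noisy readings, which is what makes it useful for separating hazardous events from routine variability.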

  12. Single-event and total-dose effects in geo-stationary transfer orbit during solar-activity maximum period measured by the Tsubasa satellite

    NASA Astrophysics Data System (ADS)

    Koshiishi, H.; Kimoto, Y.; Matsumoto, H.; Goka, T.

    The Tsubasa satellite, developed by the Japan Aerospace Exploration Agency, was launched in February 2002 into geostationary transfer orbit (GTO; perigee 500 km, apogee 36,000 km) and operated well until September 2003. The objective of this satellite was to verify the function of commercial parts and new technologies of bus-system components in space. Thus, the on-board experiments were conducted in the more severe radiation environment of GTO rather than in geostationary Earth orbit (GEO) or low Earth orbit (LEO). The Space Environment Data Acquisition equipment (SEDA) on board the Tsubasa satellite had the Single-event Upset Monitor (SUM) and the DOSimeter (DOS) to evaluate the influence on electronic devices of the radiation environment, which was also measured by the particle detectors of the SEDA: the Standard DOse Monitor (SDOM) for measurements of light particles and the Heavy Ion Telescope (HIT) for measurements of heavy ions. The SUM monitored single-event upsets and single-event latch-ups occurring in a test sample of two 64-Mbit DRAMs. The DOS measured accumulated radiation dose at fifty-six locations in the body of the Tsubasa satellite. Using the data obtained by these instruments, single-event and total-dose effects in GTO during the solar-activity maximum period, especially their rapid changes due to solar flares and CMEs in the region from L = 1.1 through L = 11, are discussed in this paper.

  13. A Decision Support System for Tele-Monitoring COPD-Related Worrisome Events.

    PubMed

    Merone, Mario; Pedone, Claudio; Capasso, Giuseppe; Incalzi, Raffaele Antonelli; Soda, Paolo

    2017-03-01

    Chronic Obstructive Pulmonary Disease (COPD) is a preventable, treatable, and slowly progressive disease whose course is aggravated by periodic worsening of symptoms and lung function lasting for several days. The development of home telemonitoring systems has made it possible to collect symptoms and physiological data in electronic records, boosting the development of decision support systems (DSSs). Current DSSs work with physiological measurements collected by means of several measuring and communication devices, as well as with symptoms gathered by questionnaires submitted to COPD subjects. However, this contrasts with the advice of the World Health Organization and the Global initiative for chronic Obstructive Lung Disease, which recommend avoiding invasive or complex daily measurements. For these reasons, this manuscript presents a DSS that detects the onset of worrisome events in COPD subjects. It uses heart rate and oxygen saturation, both of which can be collected via a pulse oximeter. The DSS consists of a binary finite state machine whose training stage allows subject-specific personalization of the predictive model, triggering warnings and alarms as the health status evolves over time. Experiments on data collected from 22 COPD patients tele-monitored at home for six months show that the system's recognition performance is better than that achieved by medical experts. Furthermore, the support offered by the system in the decision-making process increases agreement between the specialists, with a large impact on the recognition of worrisome events.
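A minimal sketch of a two-state machine over pulse-oximeter samples, loosely following the warning/alarm idea above. The states, thresholds and sample values are illustrative assumptions; the paper's model is personalised per subject during a training stage.

```python
def monitor(samples, spo2_alarm=88, hr_alarm=110):
    """Walk through (SpO2 %, heart-rate bpm) samples; the state flips
    to 'worrisome' while either reading crosses its threshold, and
    every state change is recorded as a warning/all-clear transition."""
    state, transitions = "stable", []
    for spo2, hr in samples:
        new_state = "worrisome" if (spo2 < spo2_alarm or hr > hr_alarm) else "stable"
        if new_state != state:
            transitions.append(new_state)
            state = new_state
    return state, transitions

# Hypothetical daily samples: two normal days, a worrisome dip, recovery
samples = [(95, 80), (93, 85), (86, 112), (87, 108), (94, 90)]
print(monitor(samples))
```

A personalised model would learn the thresholds (and possibly hysteresis between the states) from each subject's own baseline rather than using fixed population values.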

  14. Validation of reactive gases and aerosols in the MACC global analysis and forecast system

    NASA Astrophysics Data System (ADS)

    Eskes, H.; Huijnen, V.; Arola, A.; Benedictow, A.; Blechschmidt, A.-M.; Botek, E.; Boucher, O.; Bouarar, I.; Chabrillat, S.; Cuevas, E.; Engelen, R.; Flentje, H.; Gaudel, A.; Griesfeller, J.; Jones, L.; Kapsomenakis, J.; Katragkou, E.; Kinne, S.; Langerock, B.; Razinger, M.; Richter, A.; Schultz, M.; Schulz, M.; Sudarchikova, N.; Thouret, V.; Vrekoussis, M.; Wagner, A.; Zerefos, C.

    2015-02-01

    The European MACC (Monitoring Atmospheric Composition and Climate) project is preparing the operational Copernicus Atmosphere Monitoring Service (CAMS), one of the services of the European Copernicus Programme on Earth observation and environmental services. MACC uses data assimilation to combine in-situ and remote sensing observations with global and regional models of atmospheric reactive gases, aerosols and greenhouse gases, and is based on the Integrated Forecast System of the ECMWF. The global component of the MACC service has a dedicated validation activity to document the quality of the atmospheric composition products. In this paper we discuss the approach to validation that has been developed over the past three years. Topics discussed are the validation requirements, the operational aspects, the measurement data sets used, the structure of the validation reports, the models and assimilation systems validated, the procedure to introduce new upgrades, and the scoring methods. One specific target of the MACC system concerns forecasting special events with high pollution concentrations. Such events receive extra attention in the validation process. Finally, a summary is provided of the results from the validation of the latest set of daily global analysis and forecast products from the MACC system reported in November 2014.

  15. Expanding veterinary biosurveillance in Washington, DC: The creation and utilization of an electronic-based online veterinary surveillance system.

    PubMed

    Hennenfent, Andrew; DelVento, Vito; Davies-Cole, John; Johnson-Clarke, Fern

    2017-03-01

    To enhance the early detection of emerging infectious diseases and bioterrorism events using companion animal-based surveillance. Washington, DC, small animal veterinary facilities (n=17) were surveyed to determine interest in conducting infectious disease surveillance. Using these results, an electronic-based online reporting system was developed and launched in August 2015 to monitor rates of canine influenza, canine leptospirosis, antibiotic-resistant infections, canine parvovirus, and syndromic disease trends. Nine of the 10 facilities that responded expressed interest in conducting surveillance. In September 2015, 17 canine parvovirus cases were reported. In response, a campaign encouraging regular veterinary preventative care was launched and featured on local media platforms. Additionally, during the system's first year of operation it detected 5 canine leptospirosis cases and 2 antibiotic-resistant infections. No canine influenza cases were reported, and syndromic surveillance compliance varied, peaking during National Special Security Events. Small animal veterinarians and the general public are interested in companion animal disease surveillance. The system described can serve as a model for establishing similar systems to monitor disease trends of public health importance in pet populations and enhance biosurveillance capabilities. Copyright © 2017 Elsevier B.V. All rights reserved.

  16. Fire monitoring capability of the joint Landsat and Sentinel 2 constellation

    NASA Astrophysics Data System (ADS)

    Murphy, S.; Wright, R.

    2017-12-01

    Fires are a global hazard. Landsat and Sentinel 2 together can monitor the Earth's surface every 2-4 days. This provides an important opportunity to complement the operational (lower resolution) fire monitoring systems. Landsat-class sensors can detect small fires that would be missed by MODIS-class sensors, and all large fires start out as small fires. We analyze fire patterns in California from 1984 to 2017 and compare the performance of Landsat-class and MODIS-class sensors. Had an operational Landsat-Sentinel 2 fire detection system been in place at the time of the Soberanes fire (August 2016), the cost of suppressing this fire (US $236 million) could potentially have been reduced by an order of magnitude.

  17. Implementation of a computer-assisted monitoring system for the detection of adverse drug reactions in gastroenterology.

    PubMed

    Dormann, H; Criegee-Rieck, M; Neubert, A; Egger, T; Levy, M; Hahn, E G; Brune, K

    2004-02-01

    To investigate the effectiveness of a computer monitoring system that detects adverse drug reactions (ADRs) by laboratory signals in gastroenterology. A prospective, 6-month, pharmaco-epidemiological survey was carried out on a gastroenterological ward at the University Hospital Erlangen-Nuremberg. Two methods were used to identify ADRs. (i) All charts were reviewed daily by physicians and clinical pharmacists. (ii) A computer monitoring system generated a daily list of automatic laboratory signals and alerts of ADRs, including patient data and dates of events. One hundred and nine ADRs were detected in 474 admissions (377 patients). The computer monitoring system generated 4454 automatic laboratory signals from 39 819 laboratory parameters tested, and issued 2328 alerts, 914 (39%) of which were associated with ADRs; 574 (25%) were associated with ADR-positive admissions. Of all the alerts generated, signals of hepatotoxicity (1255), followed by coagulation disorders (407) and haematological toxicity (207), were prevalent. Correspondingly, the prevailing ADRs were concerned with the metabolic and hepato-gastrointestinal system (61). The sensitivity was 91%: 69 of 76 ADR-positive patients were indicated by an alert. The specificity of alerts was increased from 23% to 76% after implementation of an automatic laboratory signal trend monitoring algorithm. This study shows that a computer monitoring system is a useful tool for the systematic and automated detection of ADRs in gastroenterological patients.
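
    The reported sensitivity can be reproduced directly from the figures given in the abstract (69 of 76 ADR-positive patients were flagged by an alert). A minimal sketch of the two standard alert-performance metrics:

```python
def sensitivity(true_pos: int, false_neg: int) -> float:
    """Fraction of ADR-positive patients indicated by an alert."""
    return true_pos / (true_pos + false_neg)

def specificity(true_neg: int, false_pos: int) -> float:
    """Fraction of ADR-negative cases with no alert raised."""
    return true_neg / (true_neg + false_pos)

# From the abstract: 69 of the 76 ADR-positive patients were flagged,
# so 7 were missed (false negatives).
print(round(100 * sensitivity(69, 76 - 69)))  # 91
```

    The abstract's specificity figures (23% rising to 76% after trend monitoring) would be computed the same way from the alert-negative counts, which are not broken out in the text.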

  18. Comprehensive Seismological Monitoring of Geomorphic Processes in Taiwan

    NASA Astrophysics Data System (ADS)

    Chao, W. A.; Chen, C. H.

    2016-12-01

    Geomorphic processes such as hillslope mass wasting and river sediment transport are important for studying landscape dynamics. Mass movements induced by geomorphic events can generate seismic waves and be recorded by seismometers. Recent studies demonstrate that seismic monitoring techniques not only fully map the spatiotemporal patterns of geomorphic activity but also allow for exploration of the dynamic links between hillslope failures and channel processes, which may not be resolved by conventional techniques (e.g., optical remote sensing). We have recently developed a real-time landquake monitoring system (RLMS; here we use the term `landquake' to represent all hillslope failures such as rockfall, rock avalanche and landslide), which has been continuously monitoring landquake activity in Taiwan since June 2015 based on broadband seismic records, yielding source information (e.g., location, occurrence time, magnitude and mechanism) for large events (http://140.112.57.117/main.html). Several seismic arrays have also been deployed over the past few years around the catchments and along the river channels in Taiwan for monitoring erosion processes at catchment scale, improving the spatiotemporal resolution in exploring the interaction between geomorphic events and specific meteorological conditions. Based on a forward model accounting for the impulsive impacts of saltating particles, we can further invert for the sediment load flux, a critical parameter in landscape evolution studies, by fitting the seismic observations alone. To test the validity of the seismologically determined sediment load flux, we conduct a series of controlled dam-break experiments, which have the advantage of tightly constraining the spatiotemporal variations of the sediment transport. Incorporating the seismological constraints on geomorphic processes with the effects of tectonic and/or climatic perturbations can provide valuable quantitative information for a more complete understanding and modeling of the dynamics of erosional mountain landscapes. Comprehensive seismic monitoring also yields important information for the evaluation, assessment and emergency response to hazardous geomorphic events.

  19. Classification and definition of misuse, abuse, and related events in clinical trials: ACTTION systematic review and recommendations.

    PubMed

    Smith, Shannon M; Dart, Richard C; Katz, Nathaniel P; Paillard, Florence; Adams, Edgar H; Comer, Sandra D; Degroot, Aldemar; Edwards, Robert R; Haddox, J David; Jaffe, Jerome H; Jones, Christopher M; Kleber, Herbert D; Kopecky, Ernest A; Markman, John D; Montoya, Ivan D; O'Brien, Charles; Roland, Carl L; Stanton, Marsha; Strain, Eric C; Vorsanger, Gary; Wasan, Ajay D; Weiss, Roger D; Turk, Dennis C; Dworkin, Robert H

    2013-11-01

    As the nontherapeutic use of prescription medications escalates, serious associated consequences have also increased. This makes it essential to estimate misuse, abuse, and related events (MAREs) in the development and postmarketing adverse event surveillance and monitoring of prescription drugs accurately. However, classifications and definitions to describe prescription drug MAREs differ depending on the purpose of the classification system, may apply to single events or ongoing patterns of inappropriate use, and are not standardized or systematically employed, thereby complicating the ability to assess MARE occurrence adequately. In a systematic review of existing prescription drug MARE terminology and definitions from consensus efforts, review articles, and major institutions and agencies, MARE terms were often defined inconsistently or idiosyncratically, or had definitions that overlapped with other MARE terms. The Analgesic, Anesthetic, and Addiction Clinical Trials, Translations, Innovations, Opportunities, and Networks (ACTTION) public-private partnership convened an expert panel to develop mutually exclusive and exhaustive consensus classifications and definitions of MAREs occurring in clinical trials of analgesic medications to increase accuracy and consistency in characterizing their occurrence and prevalence in clinical trials. The proposed ACTTION classifications and definitions are designed as a first step in a system to adjudicate MAREs that occur in analgesic clinical trials and postmarketing adverse event surveillance and monitoring, which can be used in conjunction with other methods of assessing a treatment's abuse potential. Copyright © 2013 International Association for the Study of Pain. All rights reserved.

  20. Detection of planets in extremely weak central perturbation microlensing events via next-generation ground-based surveys

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chung, Sun-Ju; Lee, Chung-Uk; Koo, Jae-Rim, E-mail: sjchung@kasi.re.kr, E-mail: leecu@kasi.re.kr, E-mail: koojr@kasi.re.kr

    2014-04-20

    Even though the recently discovered high-magnification event MOA-2010-BLG-311 had complete coverage over its peak, confident planet detection did not happen due to extremely weak central perturbations (EWCPs, fractional deviations of ≲2%). For confident detection of planets in EWCP events, it is necessary to have both higher-cadence monitoring and higher photometric accuracy than those of current follow-up observation systems. The next-generation ground-based observation project, the Korea Microlensing Telescope Network (KMTNet), satisfies these conditions. We estimate the probability of occurrence of EWCP events with fractional deviations of ≤2% in high-magnification events and the efficiency of detecting planets in the EWCP events using the KMTNet. From this study, we find that EWCP events occur with a frequency of >50% in the case of ≲100 M_E planets with separations of 0.2 AU ≲ d ≲ 20 AU. We find that for main-sequence and sub-giant source stars, ≳1 M_E planets in EWCP events with deviations ≤2% can be detected with frequency >50% in a certain range that changes with the planet mass. However, it is difficult to detect planets in EWCP events of bright stars like giants, because KMTNet saturates easily around the peak of such events owing to its constant exposure time. EWCP events are caused by close, intermediate, and wide planetary systems with low-mass planets and by close and wide planetary systems with massive planets. Therefore, we expect that a much greater variety of planetary systems than those already detected, which are mostly intermediate planetary systems regardless of the planet mass, will be detected in the near future.

  1. On-line tritium production monitor

    DOEpatents

    Mihalczo, John T.

    1993-01-01

    A scintillation optical fiber system for the on-line monitoring of nuclear reactions in an event-by-event manner is described. In the measurement of tritium production, one or more optical fibers are coated with enriched ⁶Li and connected to standard scintillation counter circuitry. A neutron-generated ⁶Li(n,α)T reaction occurs in the coated surface of the ⁶Li-coated fiber to produce energetic alpha and triton particles, one of which enters the optical fiber and produces scintillation light that travels through the fiber to the counting circuit. The coated optical fibers can be provided with position sensitivity by placing a mirror at the free end of the fibers or by using pulse counting circuits at both ends of the fibers.

  2. On-line tritium production monitor

    DOEpatents

    Mihalczo, J.T.

    1993-11-23

    A scintillation optical fiber system for the on-line monitoring of nuclear reactions in an event-by-event manner is described. In the measurement of tritium production, one or more optical fibers are coated with enriched ⁶Li and connected to standard scintillation counter circuitry. A neutron-generated ⁶Li(n,α)T reaction occurs in the coated surface of the ⁶Li-coated fiber to produce energetic alpha and triton particles, one of which enters the optical fiber and produces scintillation light that travels through the fiber to the counting circuit. The coated optical fibers can be provided with position sensitivity by placing a mirror at the free end of the fibers or by using pulse counting circuits at both ends of the fibers. 5 figures.

  3. Development of a process-oriented vulnerability concept for water travel time in karst aquifers-case study of Tanour and Rasoun springs catchment area.

    NASA Astrophysics Data System (ADS)

    Hamdan, Ibraheem; Sauter, Martin; Ptak, Thomas; Wiegand, Bettina; Margane, Armin; Toll, Mathias

    2017-04-01

    Key words: Karst aquifer, water travel time, vulnerability assessment, Jordan. Understanding groundwater pathways and movement through karst aquifers, and the response of karst aquifers to precipitation events, especially in arid to semi-arid areas, is fundamental to evaluating pollution risks from point and non-point sources. Despite their great importance for drinking-water supply, karst aquifers are highly sensitive to contamination because of the fast connections between the land surface and the groundwater (through karst features), which makes groundwater quality issues within karst systems very complicated. Within this study, different methods and approaches were developed and applied to characterise the karst aquifer system of the Tanour and Rasoun springs (NW Jordan) and the flow dynamics within the aquifer, and to develop a process-oriented method for vulnerability assessment based on the monitoring of spatially variable parameters of water travel time in the karst aquifer. In general, this study aims to achieve two main objectives: 1. Characterization of the karst aquifer system and flow dynamics. 2. Development of a process-oriented method for vulnerability assessment based on spatially variable parameters of travel time. To achieve these aims, several approaches were applied, starting with the geological and hydrogeological characterization of the karst aquifer and its vulnerability to pollutants, then using various procedures and monitored parameters to determine the water travel time within the aquifer and to investigate its response to precipitation events, and finally studying the aquifer's response to pollution events. The integrated breakthrough signals obtained from the applied methods, including the use of stable isotopes of oxygen and hydrogen, the monitoring of multiple qualitative and quantitative parameters with automated probes and data loggers, and the development of a travel-time-based, physics-based vulnerability assessment method, show good agreement, demonstrating that these methods are applicable for determining water travel time in karst aquifers and for investigating the aquifer's response to precipitation and pollution events.

  4. Using Statistical Process Control for detecting anomalies in multivariate spatiotemporal Earth Observations

    NASA Astrophysics Data System (ADS)

    Flach, Milan; Mahecha, Miguel; Gans, Fabian; Rodner, Erik; Bodesheim, Paul; Guanche-Garcia, Yanira; Brenning, Alexander; Denzler, Joachim; Reichstein, Markus

    2016-04-01

    The number of available Earth observations (EOs) is currently increasing substantially. Detecting anomalous patterns in these multivariate time series is an important step in identifying changes in the underlying dynamical system. Likewise, data quality issues might result in anomalous multivariate data constellations and have to be identified before corrupting subsequent analyses. In industrial applications, a common strategy is to monitor production chains with several sensors coupled to some statistical process control (SPC) algorithm. The basic idea is to raise an alarm when these sensor data depict some anomalous pattern according to the SPC, i.e. the production chain is considered 'out of control'. In fact, such industrial applications are conceptually similar to the on-line monitoring of EOs. However, algorithms used in the context of SPC or process monitoring are rarely considered for supervising multivariate spatio-temporal Earth observations. The objective of this study is to exploit the potential and transferability of SPC concepts to Earth system applications. We compare a range of different algorithms typically applied by SPC systems and evaluate their capability to detect, e.g., known extreme events in land surface processes. Specifically, two main issues are addressed: (1) identifying the most suitable combination of data pre-processing and detection algorithm for a specific type of event, and (2) analyzing the limits of the individual approaches with respect to the magnitude and spatio-temporal size of the event as well as the data's signal-to-noise ratio. Extensive artificial data sets that represent the typical properties of Earth observations are used in this study. Our results show that the majority of the algorithms used can be considered for the detection of multivariate spatiotemporal events and directly transferred to real Earth observation data as currently assembled in different projects at the European scale, e.g. http://baci-h2020.eu/index.php/ and http://earthsystemdatacube.net/. Known anomalies such as the Russian heatwave are detected, as are anomalies that are not detectable with univariate methods.
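
    The abstract does not name the specific SPC algorithms compared, but a classic multivariate control chart in this family is Hotelling's T², which flags observations whose Mahalanobis-type distance from the mean is unusually large. A minimal sketch on synthetic data (the injected anomaly and the empirical 99th-percentile control limit are illustrative choices, not the study's configuration):

```python
import numpy as np

def hotelling_t2(X: np.ndarray) -> np.ndarray:
    """Hotelling's T^2 statistic for each row of X (n observations x p
    variables), a standard multivariate SPC chart statistic."""
    mu = X.mean(axis=0)
    cov = np.cov(X, rowvar=False)
    inv = np.linalg.inv(cov)
    diff = X - mu
    # T^2_i = (x_i - mu)^T  Sigma^{-1}  (x_i - mu)
    return np.einsum("ij,jk,ik->i", diff, inv, diff)

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))       # in-control multivariate "sensor" data
X[100] += 6.0                       # inject one multivariate anomaly
t2 = hotelling_t2(X)
threshold = np.percentile(t2, 99)   # empirical control limit
print(int(t2.argmax()))             # index of the most anomalous sample
```

    In a real deployment the control limit would come from a chi-squared reference distribution or an in-control training period rather than from the monitored data itself.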

  5. Event-triggered fault detection for a class of discrete-time linear systems using interval observers.

    PubMed

    Zhang, Zhi-Hui; Yang, Guang-Hong

    2017-05-01

    This paper provides a novel event-triggered fault detection (FD) scheme for discrete-time linear systems. First, an event-triggered interval observer is proposed to generate the upper and lower residuals by taking into account the influence of the disturbances and the event error. Second, the robustness of the residual interval against the disturbances and the fault sensitivity are improved by introducing l1 and H∞ performances. Third, dilated linear matrix inequalities are used to decouple the Lyapunov matrices from the system matrices. The nonnegative conditions for the estimation error variables are presented with the aid of slack matrix variables. This technique allows considering a more general Lyapunov function. Furthermore, the FD decision scheme is proposed by monitoring whether the zero value belongs to the residual interval. It is shown that the information communication burden is reduced by designing the event-triggering mechanism, while the FD performance can still be guaranteed. Finally, simulation results demonstrate the effectiveness of the proposed method. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.
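
    The FD decision rule itself is simple to state: in the fault-free case the interval observer guarantees that the residual interval contains zero, so a fault is declared as soon as zero falls outside [lower, upper]. A minimal sketch of just this decision step (the observer design that produces the bounds is omitted here):

```python
def fault_detected(r_lower: float, r_upper: float) -> bool:
    """Decision logic described in the abstract: fault-free operation
    implies 0 lies inside the residual interval, so a fault is declared
    exactly when the interval excludes zero."""
    return not (r_lower <= 0.0 <= r_upper)

print(fault_detected(-0.3, 0.4))  # False: zero inside interval, no fault
print(fault_detected(0.1, 0.7))   # True: interval excludes zero -> fault
```

    The event-triggering mechanism of the paper would additionally decide when new measurements are transmitted to update these bounds; that part is not sketched here.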

  6. Monitoring Natural Events Globally in Near Real-Time Using NASA's Open Web Services and Tools

    NASA Technical Reports Server (NTRS)

    Boller, Ryan A.; Ward, Kevin Alan; Murphy, Kevin J.

    2015-01-01

    Since 1960, NASA has been making global measurements of the Earth from a multitude of space-based missions, many of which can be useful for monitoring natural events. In recent years, these measurements have been made available in near real-time, making it possible to use them to also aid in managing the response to natural events. We present the challenges and ongoing solutions to using NASA satellite data for monitoring and managing these events.

  7. Evidence and Perspectives on the 24-hour Management of Hypertension: Hemodynamic Biomarker-Initiated 'Anticipation Medicine' for Zero Cardiovascular Event.

    PubMed

    Kario, Kazuomi

    There are notable differences between Asians and Westerners regarding hypertension (HTN) and the relationship between HTN and cardiovascular disease (CVD). Asians show greater morning surges in blood pressure (BP) and a steeper slope in the link between higher BP and the risk of CVD events. It is thus particularly important for Asian hypertensives to achieve 24-h BP control, including morning and night-time control. There are three components of 'perfect 24-h BP control': the 24-h BP level, nocturnal BP dipping, and BP variability (BPV), such as the morning BP surge, which can be assessed by ambulatory BP monitoring. The morning BP-guided approach using home BP monitoring (HBPM) is the first step toward perfect 24-h BP control, followed by the control of nocturnal HTN. We have been developing new HBPM devices that can measure nocturnal BP. BPV spans time phases from the shortest beat-by-beat changes through positional, diurnal, day-by-day, visit-to-visit, and seasonal to yearly changes. The synergistic resonance of these types of BPV would produce a great dynamic BP surge (resonance hypothesis), which triggers a CVD event, especially in high-risk patients with systemic hemodynamic atherothrombotic syndrome (SHATS). In the future, the innovative management of HTN based on the simultaneous assessment of the resonance of all of the BPV phenotypes, using a beat-by-beat wearable 'surge' BP monitoring device (WSP) and an information and communication technology (ICT)-based data analysis system, will produce a paradigm shift from 'dots' BP management to 'seamless' ultimate individualized 'anticipation medicine' for reaching a zero CVD event rate. Copyright © 2016 The Author. Published by Elsevier Inc. All rights reserved.

  8. Ongoing right ventricular hemodynamics in heart failure: clinical value of measurements derived from an implantable monitoring system.

    PubMed

    Adamson, Philip B; Magalski, Anthony; Braunschweig, Frieder; Böhm, Michael; Reynolds, Dwight; Steinhaus, David; Luby, Allyson; Linde, Cecilia; Ryden, Lars; Cremers, Bodo; Takle, Teri; Bennett, Tom

    2003-02-19

    This study examined the characteristics of continuously measured right ventricular (RV) hemodynamic information derived from an implantable hemodynamic monitor (IHM) in heart failure patients. Hemodynamic monitoring might improve the day-to-day management of patients with chronic heart failure (CHF). Little is known about the characteristics of long-term hemodynamic information in patients with CHF or how such information relates to meaningful clinical events. Thirty-two patients with CHF received a permanent RV IHM system similar to a single-lead pacemaker. Right ventricular systolic and diastolic pressures, heart rate, and pressure derivatives were continuously measured for nine months without using the data for clinical decision-making or management of patients. Data were then made available to clinical providers, and the patients were followed up for 17 months. Pressure characteristics during optimal volume, clinically determined volume-overload exacerbations, and volume depletion events were examined. The effect of IHM on hospitalizations was examined using the patients' historical controls. Long-term RV pressure measurements had either marked variability or minimal time-related changes. During 36 volume-overload events, RV systolic pressures increased by 25 +/- 4% (p < 0.05) and heart rate increased by 11 +/- 2% (p < 0.05). Pressure increases occurred in 9 of 12 events 4 +/- 2 days before the exacerbations requiring hospitalization. Hospitalizations before using IHM data for clinical management averaged 1.08 per patient year and decreased to 0.47 per patient-year (57% reduction, p < 0.01) after hemodynamic data were used. Long-term ambulatory pressure measurements from an IHM may be helpful in guiding day-to-day clinical management, with a potentially favorable impact on CHF hospitalizations.

  9. Automated Car Park Management System

    NASA Astrophysics Data System (ADS)

    Fabros, J. P.; Tabañag, D.; Espra, A.; Gerasta, O. J.

    2015-06-01

    This study aims to develop a prototype for an Automated Car Park Management System that will increase the quality of service of parking lots through the integration of a smart system that assists motorists in finding a vacant parking slot. The research was based on implementing an operating system and a monitoring system for the parking facility without the use of manpower. This includes the Parking Guidance and Information System concept, which efficiently assists motorists and ensures the safety of the vehicles and the valuables inside them. For monitoring, Optical Character Recognition was employed to monitor and list all the cars entering the parking area. All parking events in this system are visible via a MATLAB GUI, which contains time-in, time-out, and time-consumed information as well as the lot number where the car parks. The system also implements a payment method via a coin-slot operation to control the exit gate. The Automated Car Park Management System was successfully built by utilizing microcontrollers: one PIC18F4550, two PIC16F84s, and one PIC16F628A.
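
    The event record shown in the MATLAB GUI (time-in, time-out, time consumed, lot number) together with the coin-slot payment suggests simple per-event bookkeeping. The sketch below is hypothetical: the field names and the flat hourly tariff are assumptions for illustration, not details taken from the paper.

```python
from datetime import datetime

# Assumed flat tariff (currency units per hour); illustrative only.
RATE_PER_HOUR = 1.0

def close_parking_event(plate: str, lot: int,
                        time_in: datetime, time_out: datetime) -> dict:
    """Build a record mirroring the GUI fields described in the abstract:
    time-in, time-out, time consumed, and lot number, plus a fee that a
    coin-slot mechanism would collect at the exit gate."""
    hours = (time_out - time_in).total_seconds() / 3600.0
    return {"plate": plate, "lot": lot,
            "time_in": time_in.isoformat(),
            "time_out": time_out.isoformat(),
            "hours": round(hours, 2),
            "fee": round(hours * RATE_PER_HOUR, 2)}

event = close_parking_event("ABC123", 7,
                            datetime(2015, 6, 1, 9, 0),
                            datetime(2015, 6, 1, 11, 30))
print(event["hours"], event["fee"])  # 2.5 2.5
```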

  10. High-Speed Observer: Automated Streak Detection in SSME Plumes

    NASA Technical Reports Server (NTRS)

    Rieckoff, T. J.; Covan, M.; OFarrell, J. M.

    2001-01-01

    A high frame rate digital video camera installed on test stands at Stennis Space Center has been used to capture images of Space Shuttle main engine plumes during test. These plume images are processed in real time to detect and differentiate anomalous plume events occurring during a time interval on the order of 5 msec. Such speed yields near instantaneous availability of information concerning the state of the hardware. This information can be monitored by the test conductor or by other computer systems, such as the integrated health monitoring system processors, for possible test shutdown before occurrence of a catastrophic engine failure.

  11. Development of a Test Protocol for Spacecraft Post-Fire Atmospheric Cleanup and Monitoring

    NASA Technical Reports Server (NTRS)

    Zuniga, David; Hornung, Steven D.; Haas, Jon P.; Graf, John C.

    2009-01-01

    Detecting and extinguishing fires, along with post-fire atmospheric cleaning and monitoring, are vital components of a spacecraft fire response system. Preliminary efforts focused on the evaluation of these technologies under realistic conditions are described in this paper. While the primary objective of testing is to determine a smoke mitigation filter's performance, supplemental evaluations of handheld commercial off-the-shelf (COTS) atmospheric monitoring devices (combustion product monitors) in the smoke-filled chamber are also conducted. The test chamber consists of a 1.4 cubic meter (50 cu. ft.) volume containing a smoke generator. The fuel used to generate the smoke is a mixture of polymers in quantities representative of the materials involved in a circuit board fire, taken as a typical spacecraft fire. Two fire conditions were examined: no flame and flame. No-flame events are produced by pyrolyzing the fuel mixture in a quartz tube furnace with forced ventilation to produce a white, lingering-type smoke. Flame events ignite the smoke at the outlet of the tube furnace, producing combustion characterized by a less opaque smoke with black soot. Electrochemical sensor measurements showed that carbon monoxide is a major indicator of each fire. Acid gas measurements were recorded, but cross-interferents are currently uncharacterized. Electrochemical sensor measurements and sample acquisition techniques from photoacoustic sensors are being improved. Overall, this research shows that fire characterization using traditional analytical chemistry techniques is required to verify measurements recorded using COTS atmospheric monitoring devices.

  12. Results of remote follow-up and monitoring in young patients with cardiac implantable electronic devices.

    PubMed

    Silvetti, Massimo S; Saputo, Fabio A; Palmieri, Rosalinda; Placidi, Silvia; Santucci, Lorenzo; Di Mambro, Corrado; Righi, Daniela; Drago, Fabrizio

    2016-01-01

    Remote monitoring is increasingly used in the follow-up of patients with cardiac implantable electronic devices. Data on paediatric populations are still lacking. The aim of our study was to follow up young patients both in-hospital and remotely to enhance device surveillance. This is an observational registry collecting data on consecutive patients followed up with the CareLink system. Inclusion criteria were an implanted Medtronic device and the patient's willingness to receive CareLink. Patients were stratified according to age and the presence of congenital/structural heart defects (CHD). A total of 221 patients with a device (200 pacemakers, 19 implantable cardioverter-defibrillators, and 2 loop recorders) were enrolled (median age 17 years, range 1-40); 58% of patients were younger than 18 years of age and 73% had CHD. During a follow-up of 12 months (range 4-18), 1361 transmissions (8.9% unscheduled) were reviewed by technicians. Time for review was 6 ± 2 minutes (mean ± standard deviation). Missed transmissions amounted to 10.1%. Events were documented in 45% of transmissions, with 2.7% yellow alerts and 0.6% red alerts sent by wireless devices. No significant differences were found in transmission results according to age or presence of CHD. Physicians reviewed 6.3% of transmissions, 29 patients were contacted by phone, and 12 patients underwent unscheduled in-hospital visits. Event recognition with remote monitoring occurred 76 days (range 16-150) earlier than the next scheduled in-office follow-up. Remote follow-up/monitoring with the CareLink system is useful to enhance device surveillance in young patients. The majority of events were not clinically relevant, and the remainder led to timely management of problems.

  13. Exploiting semantics for sensor re-calibration in event detection systems

    NASA Astrophysics Data System (ADS)

    Vaisenberg, Ronen; Ji, Shengyue; Hore, Bijit; Mehrotra, Sharad; Venkatasubramanian, Nalini

    2008-01-01

    Event detection from a video stream is becoming an important and challenging task in surveillance and sentient systems. While computer vision has been extensively studied to solve different kinds of detection problems over time, it remains a hard problem, and even in a controlled environment only simple events can be detected with a high degree of accuracy. Instead of struggling to improve event detection using image processing alone, we bring in semantics to direct traditional image processing. Semantics are the underlying facts that hide beneath video frames and cannot be "seen" directly by image processing. In this work we demonstrate that time-sequence semantics can be exploited to guide unsupervised re-calibration of the event detection system. We present an instantiation of our ideas using an appliance as an example (coffee pot level detection based on video data) to show that semantics can guide the re-calibration of the detection model. This work exploits time-sequence semantics to detect when re-calibration is required, to automatically relearn a new detection model for the newly evolved system state, and to resume monitoring with a higher rate of accuracy.

  14. Continuous glucose monitoring: quality of hypoglycaemia detection.

    PubMed

    Zijlstra, E; Heise, T; Nosek, L; Heinemann, L; Heckermann, S

    2013-02-01

    To evaluate the accuracy of a widely used continuous glucose monitoring (CGM) system and its ability to detect hypoglycaemic events. A total of 18 patients with type 1 diabetes mellitus used continuous glucose monitoring (Guardian REAL-Time CGMS) during two 9-day in-house periods. A hypoglycaemic threshold alarm alerted patients to sensor readings <70 mg/dl. Continuous glucose monitoring sensor readings were compared to laboratory reference measurements taken every 4 h and in case of a hypoglycaemic alarm. A total of 2317 paired data points were evaluated. Overall, the mean absolute relative difference (MARD) was 16.7%. The percentage of data points in the clinically accurate or acceptable Clarke Error Grid zones A + B was 94.6%. In the hypoglycaemic range, accuracy worsened (MARD 38.8%), leading to a failure to detect more than half of the true hypoglycaemic events (sensitivity 37.5%). Furthermore, more than half of the alarms warning patients of hypoglycaemia were false (false alert rate 53.3%). Above the low alert threshold, the sensor confirmed 2077 of 2182 reference values (specificity 95.2%). Patients using continuous glucose monitoring should be aware of its limited ability to accurately detect hypoglycaemia. © 2012 Blackwell Publishing Ltd.
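    As a rough sketch of how the accuracy figures above are typically computed from paired sensor/reference readings (this is not the study's code; the threshold matches the abstract, but the sample values are purely illustrative), MARD, hypoglycaemia sensitivity, specificity, and false alert rate can be derived as follows:

```python
HYPO_THRESHOLD = 70  # mg/dl, low-alert threshold used in the study

def cgm_accuracy(pairs, threshold=HYPO_THRESHOLD):
    """pairs: list of (sensor_mg_dl, reference_mg_dl) tuples."""
    # Mean absolute relative difference, in percent
    mard = sum(abs(s - r) / r for s, r in pairs) / len(pairs) * 100

    true_hypo = [(s, r) for s, r in pairs if r < threshold]   # real events
    alarms    = [(s, r) for s, r in pairs if s < threshold]   # sensor alerts
    non_hypo  = [(s, r) for s, r in pairs if r >= threshold]

    sensitivity = (sum(1 for s, r in true_hypo if s < threshold)
                   / len(true_hypo) * 100) if true_hypo else None
    false_alert = (sum(1 for s, r in alarms if r >= threshold)
                   / len(alarms) * 100) if alarms else None
    specificity = (sum(1 for s, r in non_hypo if s >= threshold)
                   / len(non_hypo) * 100) if non_hypo else None
    return mard, sensitivity, specificity, false_alert

# Illustrative (sensor, reference) pairs, not data from the study
pairs = [(110, 100), (60, 65), (90, 62), (55, 80), (150, 140)]
mard, sens, spec, fa = cgm_accuracy(pairs)
```

    With these toy pairs, one of the two true hypoglycaemic readings is missed and one of the two alarms is false, mirroring the kind of degradation the study reports in the hypoglycaemic range.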

  15. Remote control improves quality of life in elderly pacemaker patients versus standard ambulatory-based follow-up.

    PubMed

    Comoretto, Rosanna Irene; Facchin, Domenico; Ghidina, Marco; Proclemer, Alessandro; Gregori, Dario

    2017-08-01

    Health-related quality of life (HRQoL) improves shortly after pacemaker (PM) implantation. No studies have investigated the HRQoL trend for elderly patients followed up with a remote device monitoring system. Using the EuroQol-5D Questionnaire and the PM-specific Assessment of Quality of Life and Related Events (Aquarel) Questionnaire, HRQoL was measured at baseline and then repeatedly during the 6 months following PM implantation in a cohort of 42 consecutive patients. Twenty-five patients were followed up with standard outpatient visits, while 17 used a remote monitoring system. Aquarel scores were significantly higher in patients with the remote device monitoring system on the chest discomfort and arrhythmia subscales in the first month after PM implantation and remained stable until 6 months. Remote monitoring affected the rate of HRQoL improvement in the first 3 months after pacemaker implantation more than ambulatory follow-up did. Remote device monitoring has a significant impact on HRQoL in pacemaker patients, increasing its levels up to 6 months after implantation. © 2017 John Wiley & Sons, Ltd.

  16. Machine Learning and Data Mining for Comprehensive Test Ban Treaty Monitoring

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Russell, S; Vaidya, S

    2009-07-30

    The Comprehensive Test Ban Treaty (CTBT) is gaining renewed attention in light of growing worldwide interest in mitigating risks of nuclear weapons proliferation and testing. Since the International Monitoring System (IMS) installed the first suite of sensors in the late 1990s, the IMS network has steadily progressed, providing valuable support for event diagnostics. This progress was highlighted at the recent International Scientific Studies (ISS) Conference in Vienna in June 2009, where scientists and domain experts met with policy makers to assess the current status of the CTBT Verification System. A strategic theme within the ISS Conference centered on exploring opportunities for further enhancing the detection and localization accuracy of low magnitude events by drawing upon modern tools and techniques for machine learning and large-scale data analysis. Several promising approaches for data exploitation were presented at the Conference. These are summarized in a companion report. In this paper, we introduce essential concepts in machine learning and assess techniques which could provide both incremental and comprehensive value for event discrimination by increasing the accuracy of the final data product, refining On-Site-Inspection (OSI) conclusions, and potentially reducing the cost of future network operations.

  17. Monitoring data transfer latency in CMS computing operations

    DOE PAGES

    Bonacorsi, Daniele; Diotalevi, Tommaso; Magini, Nicolo; ...

    2015-12-23

    During the first LHC run, the CMS experiment collected tens of Petabytes of collision and simulated data, which need to be distributed among dozens of computing centres with low latency in order to make efficient use of the resources. While the desired level of throughput has been successfully achieved, it is still common to observe transfer workflows that cannot reach full completion in a timely manner due to a small fraction of stuck files which require operator intervention. For this reason, in 2012 the CMS transfer management system, PhEDEx, was instrumented with a monitoring system to measure file transfer latencies, and to predict the completion time for the transfer of a data set. The operators can detect abnormal patterns in transfer latencies while the transfer is still in progress, and monitor the long-term performance of the transfer infrastructure to plan the data placement strategy. Based on the data collected for one year with the latency monitoring system, we present a study on the different factors that contribute to transfer completion time. As case studies, we analyze several typical CMS transfer workflows, such as distribution of collision event data from CERN or upload of simulated event data from the Tier-2 centres to the archival Tier-1 centres. For each workflow, we present the typical patterns of transfer latencies that have been identified with the latency monitor. We identify the areas in PhEDEx where a development effort can reduce the latency, and we show how we are able to detect stuck transfers which need operator intervention. Lastly, we propose a set of metrics to alert about stuck subscriptions and prompt for manual intervention, with the aim of improving transfer completion times.
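    The stuck-transfer detection described above can be sketched as a simple latency-outlier check: a file still in flight far longer than the typical completion time of its peers is flagged for operator attention. This is a hypothetical illustration, not the PhEDEx implementation; the function name, data shapes, and threshold factor are all assumptions.

```python
from statistics import median

def stuck_files(completed_secs, in_flight_secs, factor=5.0):
    """completed_secs: durations (s) of finished transfers in a workflow;
    in_flight_secs: {filename: elapsed seconds} for ongoing transfers.
    A file is flagged when it exceeds `factor` times the median duration."""
    if not completed_secs:
        return []
    typical = median(completed_secs)
    return [name for name, t in in_flight_secs.items()
            if t > factor * typical]

# Illustrative data: most files finish in ~2 minutes, one is stuck
done = [120, 150, 90, 200, 110]
ongoing = {"fileA": 130, "fileB": 4000}
print(stuck_files(done, ongoing))  # fileB exceeds 5x the median (120 s)
```

    A real system would also distinguish slow-but-progressing transfers from genuinely stalled ones, e.g. by tracking bytes transferred over time rather than elapsed time alone.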

  19. Automatic Detection and Classification of Audio Events for Road Surveillance Applications.

    PubMed

    Almaadeed, Noor; Asim, Muhammad; Al-Maadeed, Somaya; Bouridane, Ahmed; Beghdadi, Azeddine

    2018-06-06

    This work investigates the problem of detecting hazardous events on roads by designing an audio surveillance system that automatically detects perilous situations such as car crashes and tire skidding. In recent years, several visual surveillance systems have been proposed for road monitoring to detect accidents, with the aim of improving safety procedures in emergency cases. However, visual information alone cannot detect certain events such as car crashes and tire skidding, especially under adverse and visually cluttered weather conditions such as snowfall, rain, and fog. Consequently, the incorporation of microphones and audio event detectors based on audio processing can significantly enhance the detection accuracy of such surveillance systems. This paper proposes to combine time-domain, frequency-domain, and joint time-frequency features extracted from a class of quadratic time-frequency distributions (QTFDs) to detect events on roads through audio analysis and processing. Experiments were carried out using a publicly available dataset. The experimental results confirm the effectiveness of the proposed approach for detecting hazardous events on roads, as demonstrated by a 7% improvement in accuracy over methods that use individual temporal and spectral features.
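    The temporal and spectral features mentioned above can be illustrated with a minimal sketch computing short-time energy, zero-crossing rate, and spectral centroid for one audio frame. This is only indicative: the paper's QTFD-based joint time-frequency features are considerably richer, and the naive DFT below is for clarity, not speed.

```python
import math

def frame_features(frame, sample_rate):
    """Return (energy, zero-crossing rate, spectral centroid) for one frame."""
    n = len(frame)
    energy = sum(x * x for x in frame) / n            # time-domain feature
    zcr = sum(1 for a, b in zip(frame, frame[1:])     # sign changes per sample
              if a * b < 0) / (n - 1)
    # Crude O(n^2) DFT magnitude spectrum for the centroid (illustrative only)
    mags = []
    for k in range(n // 2):
        re = sum(x * math.cos(2 * math.pi * k * i / n)
                 for i, x in enumerate(frame))
        im = -sum(x * math.sin(2 * math.pi * k * i / n)
                  for i, x in enumerate(frame))
        mags.append(math.hypot(re, im))
    freqs = [k * sample_rate / n for k in range(n // 2)]
    total = sum(mags)
    centroid = sum(f * m for f, m in zip(freqs, mags)) / total if total else 0.0
    return energy, zcr, centroid

# A pure 4 Hz tone sampled at 64 Hz should yield a centroid near 4 Hz
frame = [math.sin(2 * math.pi * 4 * i / 64) for i in range(64)]
energy, zcr, centroid = frame_features(frame, sample_rate=64)
```

    A classifier would concatenate such features per frame (typically via a library FFT) and feed them to a standard learner; an impulsive crash raises short-time energy sharply, while tire skidding shifts the spectral centroid.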

  20. DiAs Web Monitoring: A Real-Time Remote Monitoring System Designed for Artificial Pancreas Outpatient Trials

    PubMed Central

    Place, Jérôme; Robert, Antoine; Ben Brahim, Najib; Keith-Hynes, Patrick; Farret, Anne; Pelletier, Marie-Josée; Buckingham, Bruce; Breton, Marc; Kovatchev, Boris; Renard, Eric

    2013-01-01

    Background Developments in an artificial pancreas (AP) for patients with type 1 diabetes have allowed a move toward performing outpatient clinical trials. A "home-like" environment implies specific protocol and system adaptations, among which the introduction of remote monitoring is meaningful. We present a novel tool allowing the monitoring of AP use by multiple patients in home-like settings. Methods We investigated existing systems, performed interviews of experienced clinical teams, listed required features, and drew several mockups of the user interface. The resulting application was tested on the bench before it was used in three outpatient studies representing 3480 h of remote monitoring. Results Our tool, called DiAs Web Monitoring (DWM), is a web-based application that ensures reception, storage, and display of data sent by AP systems. Continuous glucose monitoring (CGM) and insulin delivery data are presented in a colored chart to facilitate reading and interpretation. Several subjects can be monitored simultaneously on the same screen, and alerts are triggered to help detect events such as hypoglycemia or CGM failures. In the third trial, DWM received approximately 460 data items per subject per hour: 77% for log messages, 5% for CGM data. More than 97% of transmissions were achieved in less than 5 min. Conclusions Transition from a hospital setting to home-like conditions requires specific AP supervision to which remote monitoring systems can contribute valuably. DiAs Web Monitoring worked properly when tested in our outpatient studies. It could facilitate subject monitoring and even accelerate medical and technical assessment of the AP. It should now be adapted for long-term studies with an enhanced notification feature. J Diabetes Sci Technol 2013;7(6):1427–1435 PMID:24351169

  2. Adapting current Arden Syntax knowledge for an object oriented event monitor.

    PubMed

    Choi, Jeeyae; Lussier, Yves A; Mendoça, Eneida A

    2003-01-01

    Arden Syntax for Medical Logic Modules (MLMs) was designed in 1989 for writing and sharing task-specific health knowledge. Several researchers have developed frameworks to improve the sharability and adaptability of Arden Syntax MLMs, an issue known as the "curly braces" problem. Karadimas et al proposed an Arden Syntax MLM-based decision support system that uses an object oriented model and the dynamic linking features of the Java platform. Peleg et al proposed creating a Guideline Expression Language (GEL) based on Arden Syntax's logic grammar. The New York Presbyterian Hospital (NYPH) has a collection of about 200 MLMs. In the process of adapting the current MLMs for an object-oriented event monitor, we identified two problems that may influence the "curly braces" one: (1) the query expressions within the curly braces of Arden Syntax used in our institution are cryptic to physicians, institution-dependent, and written ineffectively (unpublished results), and (2) the events are coded individually within the curly braces, sometimes resulting in a large number of events (up to 200).

  3. New approach to information fusion for Lipschitz classifiers ensembles: Application in multi-channel C-OTDR-monitoring systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Timofeev, Andrey V.; Egorov, Dmitry V.

    This paper presents new results concerning selection of an optimal information fusion formula for an ensemble of Lipschitz classifiers. The goal of information fusion is to create an integral classifier which could provide better generalization ability of the ensemble while achieving a practically acceptable level of effectiveness. The problem of information fusion is very relevant for data processing in multi-channel C-OTDR monitoring systems. In this case we have to effectively classify targeted events which appear in the vicinity of the monitored object. Solution of this problem is based on usage of an ensemble of Lipschitz classifiers, each of which corresponds to a respective channel. We suggest a new method for information fusion for an ensemble of Lipschitz classifiers, called "Weighing Inversely as Lipschitz Constants" (WILC). Results of practical usage of the WILC method in multi-channel C-OTDR monitoring systems are presented.
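    The WILC idea, weighting each per-channel classifier inversely to its Lipschitz constant before fusing decision scores (so that smoother classifiers get more say), can be sketched as follows. The constants and scores are illustrative, and the paper's exact fusion formula may differ.

```python
def wilc_fuse(scores, lipschitz_constants):
    """scores: per-channel decision scores in [0, 1];
    lipschitz_constants: one positive constant per channel.
    Returns a fused score weighted inversely to each constant."""
    inv = [1.0 / L for L in lipschitz_constants]
    total = sum(inv)
    weights = [w / total for w in inv]  # normalized to sum to 1
    return sum(w * s for w, s in zip(weights, scores))

scores = [0.9, 0.4, 0.8]   # decision scores from three channels
consts = [2.0, 8.0, 4.0]   # estimated Lipschitz constants per channel
fused = wilc_fuse(scores, consts)
```

    Here the channel with the smallest constant (2.0) dominates the fused score, reflecting the assumption that a classifier with a small Lipschitz constant is less sensitive to input perturbations.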

  4. Analysis of data from sensitive U.S. monitoring stations for the Fukushima Dai-ichi nuclear reactor accident.

    PubMed

    Biegalski, S R; Bowyer, T W; Eslinger, P W; Friese, J A; Greenwood, L R; Haas, D A; Hayes, J C; Hoffman, I; Keillor, M; Miley, H S; Moring, M

    2012-12-01

    The March 11, 2011 9.0 magnitude undersea megathrust earthquake off the coast of Japan and subsequent tsunami waves triggered a major nuclear event at the Fukushima Dai-ichi nuclear power station. At the time of the event, units 1, 2, and 3 were operating and units 4, 5, and 6 were in a shutdown condition for maintenance. Loss of cooling capacity to the plants along with structural damage caused by the earthquake and tsunami resulted in a breach of the nuclear fuel integrity and release of radioactive fission products to the environment. Fission products started to arrive in the United States via atmospheric transport on March 15, 2011 and peaked by March 23, 2011. Atmospheric activity concentrations of (131)I reached levels of 3.0×10(-2) Bqm(-3) in Melbourne, FL. The noble gas (133)Xe reached atmospheric activity concentrations in Ashland, KS of 17 Bqm(-3). While these levels are not health concerns, they were well above the detection capability of the radionuclide monitoring systems within the International Monitoring System of the Comprehensive Nuclear-Test-Ban Treaty. Copyright © 2011 Elsevier Ltd. All rights reserved.

  5. SIG-VISA: Signal-based Vertically Integrated Seismic Monitoring

    NASA Astrophysics Data System (ADS)

    Moore, D.; Mayeda, K. M.; Myers, S. C.; Russell, S.

    2013-12-01

    Traditional seismic monitoring systems rely on discrete detections produced by station processing software; however, while such detections may constitute a useful summary of station activity, they discard large amounts of information present in the original recorded signal. We present SIG-VISA (Signal-based Vertically Integrated Seismic Analysis), a system for seismic monitoring through Bayesian inference on seismic signals. By directly modeling the recorded signal, our approach incorporates additional information unavailable to detection-based methods, enabling higher sensitivity and more accurate localization using techniques such as waveform matching. SIG-VISA's Bayesian forward model of seismic signal envelopes includes physically-derived models of travel times and source characteristics as well as Gaussian process (kriging) statistical models of signal properties that combine interpolation of historical data with extrapolation of learned physical trends. Applying Bayesian inference, we evaluate the model on earthquakes as well as the 2009 DPRK test event, demonstrating a waveform matching effect as part of the probabilistic inference, along with results on event localization and sensitivity. In particular, we demonstrate increased sensitivity from signal-based modeling, in which the SIG-VISA signal model finds statistical evidence for arrivals even at stations for which the IMS station processing failed to register any detection.

  7. Use of a Smartphone for Collecting Data on River Discharge and Communication of Flood Risk.

    NASA Astrophysics Data System (ADS)

    Pena-Haro, S.; Lüthi, B.; Philippe, T.

    2015-12-01

    Although many developed countries have well-established river monitoring and flood early warning systems, populations in developing countries remain largely unprotected against flood events. Moreover, future climate change is likely to increase the intensity and frequency of extreme weather events, so greater impacts on these populations can be expected. There are different types of flood forecasting systems; some are based on hydrologic models fed with rainfall predictions and observed river levels. Flood hazard maps are also used to increase preparedness for an extreme event, but these maps are static: they do not incorporate daily changing river-stage conditions. In developing countries especially, data on river stages are scarce. Traditional fixed monitoring systems do not scale in terms of cost, repair as well as operation and maintenance are difficult, and vandalism poses additional challenges. There is therefore a need for cheaper, easy-to-use systems for collecting information on river stage and discharge. We have developed a mobile device application for determining the water stage and discharge of open channels (e.g. rivers, artificial channels, irrigation furrows). Via image processing, the water level and surface velocity are measured; combining this information with prior knowledge of the channel geometry, the discharge is estimated. River stage and discharge measurement via smartphones provides a non-intrusive, accurate, and cost-effective monitoring method. No permanent installations, which can be washed away in a flood, are needed. The only requirement is that the field of view contain two reference markers of known scale and known position relative to the channel geometry, so operation and maintenance costs are very low. Another advantage of using smartphones is that the collected data can be immediately sent via SMS to a central database. 
    This information can be easily gathered for use within models and redistributed, using the same channels, among interested stakeholders and the community.
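    The estimation step described above, combining the image-derived water level and surface velocity with known channel geometry, can be sketched under simplifying assumptions: a rectangular cross-section and a surface-to-mean velocity coefficient of about 0.85, both chosen for illustration rather than taken from the paper.

```python
def discharge_rectangular(level_m, surface_velocity_ms,
                          channel_width_m, alpha=0.85):
    """Estimate discharge Q = A * v_mean for a rectangular channel,
    with the mean velocity approximated as alpha * surface velocity."""
    area = channel_width_m * level_m            # wetted cross-section, m^2
    return area * alpha * surface_velocity_ms   # discharge, m^3/s

# Illustrative readings: 1.2 m stage, 0.9 m/s surface velocity, 5 m width
q = discharge_rectangular(level_m=1.2, surface_velocity_ms=0.9,
                          channel_width_m=5.0)
```

    For a natural river cross-section, the area term would come from a surveyed stage-area relation rather than a simple width-times-depth product.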

  8. Citizen Science Seismic Stations for Monitoring Regional and Local Events

    NASA Astrophysics Data System (ADS)

    Zucca, J. J.; Myers, S.; Srikrishna, D.

    2016-12-01

    The earth has tens of thousands of seismometers installed on its surface or in boreholes, operated by many organizations for many purposes including the study of earthquakes, volcanoes, and nuclear explosions. Although global networks such as the Global Seismic Network and the International Monitoring System do an excellent job of monitoring nuclear test explosions and other seismic events, their thresholds could be lowered with the addition of more stations. In recent years there has been interest in citizen-science approaches to augment government-sponsored monitoring networks (see, for example, Stubbs and Drell, 2013). A modestly priced seismic station purchasable by citizen scientists could enhance regional and local coverage of the GSN, IMS, and other networks if those stations are of high enough quality and distributed optimally. In this paper we present a minimum set of hardware and software specifications that a citizen seismograph station would need in order to add value to global networks. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344.

  9. The Quasi-Eulerian Hydrophone: A New Approach for Ocean Acoustics

    NASA Astrophysics Data System (ADS)

    Matsumoto, H.; Dziak, R. P.; Fowler, M. J.; Hammond, S. R.; Meinig, C.

    2005-12-01

    For the last 10 years, Oregon State University and NOAA/Pacific Marine Environmental Laboratory have successfully operated and maintained autonomous hydrophone arrays to monitor low frequency acoustic energy of earthquakes and marine mammal calls in remote ocean areas where no historical record existed. These hydrophones are moored at mid-water depth and require a routine servicing cruise to retrieve the stored data. The system is robust, but it is not real-time, and it takes up to a year before acoustic events can be identified from the raw acoustic data. As a result, we frequently miss opportunities to observe ocean acoustic events as they occur. A new type of autonomous hydrophone called a Quasi-Eulerian hydrophone (QUEphone) is under development at OSU/PMEL. This instrument allows near-real-time monitoring of a selected study area. It is a tether-free float with a built-in hydrophone monitoring system and a buoyancy controller. It is capable of repeat ascent/descent cycles in up to 2000 m of water. In contrast to the conventional Lagrangian float, the QUEphone stays in the same area by maintaining negative buoyancy and remaining on the seafloor for most of its life span. While on the seafloor the QUEphone runs an intelligent event detection algorithm, and upon detection of a significant number of events will surface to transmit a small data file to shore. We have conducted brief test deployments of the QUEphone in both a fresh-water lake and marine waters off the Oregon coast, and the results of these tests will be discussed and compared with other hydrophone data. Once fully developed, the QUEphone is expected to provide near real-time analysis capability for earthquakes that affect seafloor hydrothermal vents and their associated ecosystems. Such fast reaction will allow for a rapid response to seismic events, enabling researchers to examine how changes in hydrothermal activity affect deep-ocean vent ecosystems.

  10. Application of displacement monitoring system on high temperature steam pipe

    NASA Astrophysics Data System (ADS)

    Ghaffar, M. H. A.; Husin, S.; Baek, J. E.

    2017-10-01

    High-energy piping systems of power plants, such as Main Steam (MS) or Hot Reheat (HR) pipes, operate at high temperature and high pressure under base and cyclic loads. In the event of a transient condition, a pipe can deflect dramatically, causing high stress in the pipe and leading to failure of the piping system. Periodic monitoring and walk downs can identify abnormalities, but limitations exist in the standard walk down practice. This paper provides a study of pipe displacement monitoring on the MS pipe of a coal-fired power plant to continuously capture pipe movement behaviour at different loads using a 3-Dimensional Displacement Measuring System (3DDMS). The displacement trending at Locations 5 and 6 (north and south) demonstrated that the pipes displace less than 25% of the design movement. Synchronisation analysis determined that the actual movement difference between the Location 7 (north) and Location 8 (south) pipes exceeded the design movement difference. A visual survey at locations with significant displacement trending revealed issues of hydraulic snubber and piping interference. The study demonstrated that displacement monitoring is able to capture pipe movement at all times, allowing engineers to monitor pipe movement behaviour and aiding in identifying issues early for remedial action.

  11. Vaccine safety monitoring systems in developing countries: an example of the Vietnam model.

    PubMed

    Ali, Mohammad; Rath, Barbara; Thiem, Vu Dinh

    2015-01-01

    Few health intervention programs have been as successful as vaccination programs with respect to preventing morbidity and mortality in developing countries. However, the success of a vaccination program is threatened by rumors and misunderstanding about the risks of vaccines. It is short-sighted to plan the introduction of vaccines into developing countries unless effective vaccine safety monitoring systems are in place. Such systems, which track adverse events following immunization (AEFI), are currently lacking in most developing countries; therefore, any rumor may affect the entire vaccination program. Public health authorities should implement vaccine safety monitoring systems and disseminate safety issues in a proactive mode. Effective safety surveillance systems should allow for the conduct of both traditional and alternative epidemiologic studies through the use of prospective data sets. The vaccine safety data link implemented in Vietnam in mid-2002 indicates that it is feasible to establish a vaccine safety monitoring system for the communication of vaccine safety in developing countries. The data link provided the investigators an opportunity to evaluate AEFI related to measles vaccine. Implementing such a vaccine safety monitoring system is useful in all developing countries. The system should enable objective and clear communication regarding vaccine safety issues, and the data should be reported to the public on a regular basis to maintain confidence in vaccination programs.

  12. Discrimination of tsunamigenic earthquakes by ionospheric sounding using GNSS observations of total electron content from the Sumatran GPS Array

    NASA Astrophysics Data System (ADS)

    Manta, F.; Feng, L.; Occhipinti, G.; Taisne, B.; Hill, E.

    2017-12-01

    Tsunami earthquakes generate tsunamis larger than expected for their seismic magnitude. They rupture the shallow megathrust, which is usually at significant distance from land-based monitoring networks. This distance presents a challenge in accurately estimating the magnitude and source extent of tsunami earthquakes. Whether these parameters can be estimated reliably is critical to the success of tsunami early warning systems. In this work, we investigate the potential role of using GNSS-observed ionospheric total electron content (TEC) to discriminate tsunami earthquakes, by introducing for the first time the TEC Intensity Index (TECII) for rapidly identify tsunamigenic earthquakes. We examine two Mw 7.8 megathrust events along the Sumatran subduction zone with data from the Sumatran GPS Array (SuGAr). Both events triggered a tsunami alert that was canceled later. The Banyaks event (April 6th, 2010) did not generate a tsunami and caused only minor earthquake-related damage to infrastructure. On the contrary, the Mentawai event (October 25th, 2010) produced a large tsunami with run-up heights of >16 m along the southwestern coasts of the Pagai Islands. The tsunami claimed more than 400 lives. The primary difference between the two events was the depth of rupture: the Mentawai event ruptured a very shallow (<6 km) portion of the Sunda megathrust, while the Banyaks event ruptured a deeper portion (20-30 km). While we identify only a minor ionospheric signature of the Banyaks event (TECII = 1.05), we identify a strong characteristic acoustic-gravity wave only 8 minutes after the Mentawai earthquake (TECII = 1.14) and a characteristic signature of a tsunami 40 minutes after the event. These two signals reveal the large surface displacement at the rupture, and the consequent destructive tsunami. 
This comparative study of two earthquakes of the same magnitude at different depths highlights the potential contribution of GNSS ionospheric monitoring to tsunami early warning systems.
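The abstract does not spell out how the TECII is computed, but the reported values (1.05 vs 1.14, with larger meaning more tsunamigenic) suggest a ratio-style disturbance index. A minimal sketch, assuming (purely for illustration) that the index compares the peak TEC perturbation in an event window against a quiet reference window:

```python
import numpy as np

def tec_intensity_index(tec, quiet_slice, event_slice):
    """Hypothetical stand-in for the TEC Intensity Index (TECII): ratio of
    the peak absolute TEC perturbation in the event window to the peak
    perturbation in a quiet reference window. The actual TECII formula is
    not given in the abstract; this is illustrative only."""
    tec = np.asarray(tec, dtype=float)
    background = np.median(tec)                      # crude background TEC level
    quiet = np.max(np.abs(tec[quiet_slice] - background))
    event = np.max(np.abs(tec[event_slice] - background))
    return event / quiet                             # > 1: event window more disturbed
```

A real implementation would work on detrended slant TEC per satellite-receiver pair, but the ratio structure conveys why a shallow rupture with strong surface displacement yields a higher index.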

  13. A Low-Power Wireless Sensor Network for Monitoring the Microcrack Initiations in Aerospace Composites

    NASA Astrophysics Data System (ADS)

    Li, Jian; Plotnikov, Yuri; Lin, Wendy W.

    2008-02-01

    A low-power wireless sensor network was developed to monitor microcrack events in aerospace composites. Microcracks in composites mostly result from stress loading or from temperature and/or humidity cycles. Generally, a single microcrack is too small to be detected by conventional techniques such as X-ray or ultrasonic C-scan. The developed sensor network is designed to capture the acoustic signals released by microcracking events in real time. It comprises a receiving station and a series of sensor nodes. Each sensor node includes two acoustic emission transducers as well as two signal amplification and data acquisition channels. Much of our development effort has been focused on reducing the power consumption of each node and improving the detection reliability for each event. Each sensor node is battery-powered and works in a sleep mode most of the time. Once a microcrack is initiated in the composite, the acoustic signal triggers the node and wakes it up. The node then reacts within several microseconds and digitizes the signal. The digitized data is sent to the station wirelessly. The developed wireless sensor network system has been validated with microscopy of microcracked samples after temperature and humidity cycling and has proved to be an effective tool for microcrack detection. Furthermore, our low-power design and sophisticated wireless transmission mechanism give the system great potential for field structural health monitoring applications.
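The sleep/trigger/digitize/transmit duty cycle described above can be sketched as a small state machine. The class and method names below are hypothetical (the abstract does not describe the node firmware); the sketch only illustrates how a threshold trigger gates acquisition so the node sleeps between events:

```python
class SensorNode:
    """Illustrative model of the wake-on-event duty cycle: the node sleeps
    until an acoustic burst exceeds the trigger threshold, then digitizes
    and forwards the burst before returning to sleep."""

    def __init__(self, threshold):
        self.threshold = threshold
        self.awake = False
        self.transmitted = []          # packets sent to the receiving station

    def analog_trigger(self, amplitude):
        # Comparator wakes the node only on a sufficiently large burst;
        # below threshold it stays in low-power sleep mode.
        if amplitude >= self.threshold:
            self.awake = True
        return self.awake

    def capture_and_send(self, waveform):
        # Digitize the triggering burst and send it wirelessly, then sleep.
        if not self.awake:
            return None
        packet = {"n_samples": len(waveform),
                  "peak": max(abs(s) for s in waveform)}
        self.transmitted.append(packet)
        self.awake = False
        return packet
```

The design choice this mirrors is the one the authors emphasize: keeping the radio and ADC off until a hardware trigger fires is what makes battery operation practical.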

  14. Versatile Mobile and Stationary Low-Cost Approaches for Hydrological Measurements

    NASA Astrophysics Data System (ADS)

    Kröhnert, M.; Eltner, A.

    2018-05-01

    In recent decades, an increase in the number of extreme precipitation events has been observed, which leads to increasing risks of flash floods and landslides. Conventional gauging stations are indispensable for monitoring and prediction, but they are expensive to construct, manage, and maintain. Thus, the density of observation networks is rather low, leading to insufficient spatio-temporal resolution to capture hydrological extreme events that occur with short response times, especially in small-scale catchments. Smaller creeks and rivers require permanent observation as well, to allow for a better understanding of the underlying processes and to enhance forecasting reliability. Today's smartphones, with built-in cameras, positioning sensors, and powerful processing units, may serve as widespread measurement devices for event-based water gauging during floods. With the aid of volunteered geographic information (VGI), the hydrological network of water gauges can be highly densified in its spatial and temporal domain, even for currently unobserved catchments. Furthermore, stationary low-cost solutions based on Raspberry Pi imaging systems are versatile for permanent monitoring of hydrological parameters. Both complementary systems, i.e. smartphone and Raspberry Pi camera, share the same methodology to extract water levels automatically, which is explained in the paper in detail. The annotation of 3D reference data by 2D image measurements is addressed depending on the camera setup and the river section to be monitored. Accuracies for water stage measurements are in the range of several millimetres up to a few centimetres.

  15. Near-Earth space hazards and their detection (Scientific session of the Physical Sciences Division of the Russian Academy of Sciences, 27 March 2013)

    NASA Astrophysics Data System (ADS)

    2013-08-01

    A scientific session of the Physical Sciences Division of the Russian Academy of Sciences (RAS), titled "Near-Earth space hazards and their detection", was held on 27 March 2013 at the conference hall of the Lebedev Physical Institute, RAS. The agenda posted on the website of the Physical Sciences Division, RAS, http://www.gpad.ac.ru, included the following reports: (1) Emel'yanenko V V, Shustov B M (Institute of Astronomy, RAS, Moscow) "The Chelyabinsk event and the asteroid-comet hazard"; (2) Chugai N N (Institute of Astronomy, RAS, Moscow) "A physical model of the Chelyabinsk event"; (3) Lipunov V M (Lomonosov Moscow State University, Sternberg Astronomical Institute, Moscow) "MASTER global network of optical monitoring"; (4) Beskin G M (Special Astrophysical Observatory, RAS, Arkhyz, Karachai-Cirkassian Republic) "Wide-field optical monitoring systems with subsecond time resolution for the detection and study of cosmic threats". The expanded papers written on the basis of oral reports 1 and 4 are given below. • The Chelyabinsk event and the asteroid-comet hazard, V V Emel'yanenko, B M Shustov Physics-Uspekhi, 2013, Volume 56, Number 8, Pages 833-836 • Wide-field subsecond temporal resolution optical monitoring systems for the detection and study of cosmic hazards, G M Beskin, S V Karpov, V L Plokhotnichenko, S F Bondar, A V Perkov, E A Ivanov, E V Katkova, V V Sasyuk, A Shearer Physics-Uspekhi, 2013, Volume 56, Number 8, Pages 836-842

  16. Technical Challenges for a Comprehensive Test Ban: A historical perspective to frame the future (Invited)

    NASA Astrophysics Data System (ADS)

    Wallace, T. C.

    2013-12-01

    In the summer of 1958 scientists from the Soviet bloc and the US allies met in Geneva to discuss what it would take to monitor a forerunner to a Comprehensive Test Ban Treaty at the 'Conference of Experts to Study the Possibility of Detecting Violations of a Possible Agreement on Suspension of Nuclear Tests'. Although armed with a limited resume of observations, the conference recommended a multi-phenomenology approach (air sampling, acoustics, seismic, and electromagnetic) deployed in a network of 170 sites scattered across the Northern Hemisphere, and hypothesized a detection threshold of 1 kt for atmospheric tests and 5 kt for underground explosions. The conference recommendations spurred vigorous debate, with strong disagreement over the stated detection hypothesis. Nevertheless, the technical challenges posed led to a very focused effort to improve facilities and methodologies and, most importantly, research and development on event detection, location, and identification. In the ensuing 50 years the various challenges arose and were eventually 'solved'; these included quantifying yield determination to enter a Limited Threshold Test Ban, monitoring broad areas of emerging nuclear nations, and, after the mid-1990s, lowering the global detection threshold to sub-kiloton levels for underground tests. Today there is both an international monitoring regime (i.e., the International Monitoring System, or IMS) and a group of countries that have their own national technical means (NTM). The challenges for the international regime are evolving; the IMS has established itself as a very credible monitoring system, but the demand of a CTBT to detect and identify a 'nuclear test' of diminished size (zero yield) poses new technical hurdles. These include signal processing and understanding limits of resolution, location accuracy, integration of heterogeneous data, and accurately characterizing anomalous events.
It is possible to extrapolate past technical advances to predict what should be available by 2020: detection of coupled explosions down to hundreds of tons for all continental areas, as well as a probabilistic assessment of event identification.

  17. Insights in nutrient sources and transport from high-frequency monitoring at the outlet pumping station of an agricultural lowland polder catchment

    NASA Astrophysics Data System (ADS)

    Rozemeijer, J.; Van der Grift, B.; Broers, H. P.; Berendrecht, W.; Oste, L.; Griffioen, J.

    2015-12-01

    In this study, we present new insights into nutrient sources and transport processes in an agriculture-dominated lowland water system based on high-frequency monitoring technology. Starting in October 2014, we have collected semi-continuous measurements of the TP and NO3 concentrations, conductivity, and water temperature at a large-scale pumping station at the outlet of a 576 km2 polder catchment. The semi-continuous measurements complement a water quality monitoring program at six locations within the drainage area based on conventional monthly or biweekly grab sampling. The NO3 and TP concentrations at the pumping station varied between 0.5 and 10 mgN/L and 0.1 and 0.5 mgP/L. The seasonal trends and short-scale concentration dynamics clearly indicated that most of the NO3 loads at the pumping station originated from subsurface drain tubes that were active after intensive rainfall events during the winter months. A transfer function-noise model of hourly NO3 concentrations reveals that a large part of the dynamics in NO3 concentrations during the winter months can be predicted using rainfall data. In February, however, NO3 concentrations were higher than predicted due to direct losses after the first manure application. The TP concentration almost doubled during operation of the pumping station. This highlights resuspension of particulate P from channel bed sediments induced by the higher flow velocities during pumping. Rainfall events that caused peaks in NO3 concentrations did not result in TP concentration peaks. Direct effects of run-off, with an associated increase in the TP concentration and decrease in the NO3 concentration, were only observed during a rainfall event at the end of a freeze-thaw cycle.
The high-frequency monitoring at the outlet of an agriculture-dominated lowland water system, in combination with low-frequency monitoring within the area, provided insight into nutrient sources and transport processes that are highly relevant for water quality management.
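The transfer function-noise idea mentioned above can be sketched with a first-order impulse response driven by hourly rainfall: each rain pulse raises the predicted NO3 response, which then decays as drain flow recedes. The gain and decay values below are illustrative placeholders, not the coefficients fitted in the study:

```python
import numpy as np

def tfn_predict(rainfall, gain=0.4, decay=0.8):
    """Minimal transfer function sketch (noise term omitted): the NO3
    response is modeled as an exponentially decaying reaction to hourly
    rainfall input. gain/decay are illustrative, not fitted values."""
    response = np.zeros(len(rainfall), dtype=float)
    for t in range(len(rainfall)):
        prev = response[t - 1] if t > 0 else 0.0
        response[t] = decay * prev + gain * rainfall[t]   # AR(1)-style recursion
    return response
```

A fitted version would add a noise model for the residuals; the February manure-application episode in the abstract is exactly the kind of deviation such residuals would expose.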

  18. Preliminary results of the FVS gypsy moth event monitor using remeasurement plot data from Northern West Virginia

    Treesearch

    Matthew P. Perkowski; John R. Brooks; Kurt W. Gottschalk

    2008-01-01

    Predictions based on the Gypsy Moth Event Monitor were compared to remeasurement plot data from stands receiving gypsy moth defoliation. These stands were part of a silvicultural treatment study located in northern West Virginia that included a sanitation thinning, a presalvage thinning, and paired no-treatment controls. In all cases the event monitor underpredicted...

  19. Laser measurement of respiration activity in preterm infants: Monitoring of peculiar events

    NASA Astrophysics Data System (ADS)

    Scalise, L.; Marchionni, P.; Ercoli, I.; Tomasini, E. P.

    2012-09-01

    The Neonatal Intensive Care Unit (NICU) is the part of a pediatric hospital dedicated to the care of ill or pre-term patients. NICU patients are underweight and most of the time need cardiac and respiratory support therapies; they are placed in incubators or in cribs that maintain target environmental and body temperatures and protect the patients from bacteria and viruses. Patients are continuously monitored for long periods of time (days or weeks) due to their possible health conditions. The most common vital signs monitored are respiration rate, heart rate, body temperature, blood saturation, etc. Most of the devices used to transduce these quantities into electronic signals, such as the spirometer or the electrocardiograph (ECG), are in direct contact with the patient and are, particularly for these patients, largely invasive. In this paper, we propose a novel measurement system for non-contact and non-invasive assessment of respiration activity, with particular reference to the detection of peculiar respiration events of extreme interest in intensive care units, such as irregular inspiration/expiration acts, hiccups, and apneas. The sensing device proposed is the Laser Doppler Vibrometer (LDVi), a non-contact optical measurement system for the assessment of a surface's velocity and displacement. In the past it has been demonstrated to be suitable for measuring heart rate (HR) and respiration rate (RR) in adults and in preterm infants through chest-wall displacements. The measurement system is composed of an LDVi system and a data acquisition board installed on a PC, with no direct contact with the patient. Tests have been conducted on 20 NICU patients, for a total of 7219 data samples. Results show very high correlation (R=0.99) with the reference instrument used for patient monitoring (mechanical ventilator), with an uncertainty < ±7 ms (k=2).
Moreover, during the tests, some peculiar respiration events were recorded in 6 of the monitored patients: irregular inspiration/expiration acts (4), hiccups (1), and apneas (1). The proposed measurement method proves able to precisely detect the instantaneous respiration frequency of the patient while avoiding patient contact. Moreover, the collected signal allows identification of irregularities of different severity in the patient's respiration, enabling optimized action by the caregivers.

  20. Security Information and Event Management Tools and Insider Threat Detection

    DTIC Science & Technology

    2013-09-01

    Orebaugh, A., Scholl, M., & Stine, K. (2011, September). Information security continuous monitoring (ISCM) for federal information systems and...E., Conway, T., Keverline, S., Williams, M., Capelli, D., Willke, B., & Moore, A. (2008, January). Insider threat study: illicit cyber activity in

  1. Increase of transient lower esophageal sphincter relaxation associated with cascade stomach

    PubMed Central

    Kawada, Akiyo; Kusano, Motoyasu; Hosaka, Hiroko; Kuribayashi, Shiko; Shimoyama, Yasuyuki; Kawamura, Osamu; Akiyama, Junichi; Yamada, Masanobu; Akuzawa, Masako

    2017-01-01

    We previously reported that cascade stomach was associated with reflux symptoms and esophagitis. Delayed gastric emptying has been believed to initiate transient lower esophageal sphincter relaxation (TLESR). We hypothesized that cascade stomach may be associated with frequent TLESR together with delayed gastric emptying. Eleven subjects with cascade stomach and 11 subjects without cascade stomach were enrolled. Postprandial gastroesophageal manometry and gastric emptying using a continuous 13C breath system were measured simultaneously after a liquid test meal. TLESR events were counted in the early period (0–60 min), the late period (60–120 min), and the total monitoring period. Three parameters of gastric emptying were calculated: the half emptying time, lag time, and gastric emptying coefficient. The median (interquartile range) frequency of TLESR events in the cascade stomach versus non-cascade stomach groups was 6.0 (4.6) vs 5.0 (3.0) in the early period, 5.0 (3.2) vs 3.0 (1.8) in the late period, and 10.0 (6.2) vs 8.0 (5.0) in the total monitoring period. TLESR events were significantly more frequent in the cascade stomach group during the late and total monitoring periods. In contrast, gastric emptying parameters showed no significant differences between the two groups. We concluded that TLESR events were significantly more frequent in persons with cascade stomach without delayed gastric emptying. PMID:28584403

  2. "MedTRIS" (Medical Triage and Registration Informatics System): A Web-based Client Server System for the Registration of Patients Being Treated in First Aid Posts at Public Events and Mass Gatherings.

    PubMed

    Gogaert, Stefan; Vande Veegaete, Axel; Scholliers, Annelies; Vandekerckhove, Philippe

    2016-10-01

    First aid (FA) services are provisioned on-site as a preventive measure at most public events. In Flanders, Belgium, the Belgian Red Cross-Flanders (BRCF) is the major provider of these FA services, with volunteers being deployed at approximately 10,000 public events annually. The BRCF has systematically registered information on the patients being treated in FA posts at major events and mass gatherings during the last 10 years. This information has been collected in a web-based client server system called "MedTRIS" (Medical Triage and Registration Informatics System). MedTRIS contains data on more than 200,000 patients at 335 mass events. This report describes the MedTRIS architecture, the data collected, and how the system operates in the field. This database consolidates different types of information with regard to FA interventions in a standardized way for a variety of public events. MedTRIS allows close monitoring in "real time" of the situation at mass gatherings and immediate intervention when necessary; allows more accurate prediction of the resources needed; allows validation of conceptual and predictive models for medical resources at (mass) public events; and can contribute to the definition of a standardized minimum data set (MDS) for mass-gathering health research and evaluation. Gogaert S, Vande Veegaete A, Scholliers A, Vandekerckhove P. "MedTRIS" (Medical Triage and Registration Informatics System): a web-based client server system for the registration of patients being treated in first aid posts at public events and mass gatherings. Prehosp Disaster Med. 2016;31(5):557-562.

  3. Extreme seismicity and disaster risks: Hazard versus vulnerability (Invited)

    NASA Astrophysics Data System (ADS)

    Ismail-Zadeh, A.

    2013-12-01

    Although the extreme nature of earthquakes has been known for millennia due to the resultant devastation from many of them, the vulnerability of our civilization to extreme seismic events is still growing. This is partly because of the increase in the number of high-risk objects and the clustering of populations and infrastructure in areas prone to seismic hazards. Today an earthquake may affect several hundred thousand lives and cause damage of up to a hundred billion dollars; it can trigger an ecological catastrophe if it occurs in close vicinity to a nuclear power plant. Two types of extreme natural events can be distinguished: (i) large-magnitude, low-probability events, and (ii) events leading to disasters. Although the first type may affect earthquake-prone countries directly or indirectly (via tsunamis, landslides, etc.), the second type occurs mainly in economically less-developed countries where vulnerability is high and resilience is low. Although earthquake hazards cannot be reduced, vulnerability to extreme events can be diminished by monitoring human systems and by relevant laws preventing an increase in vulnerability. Significant new knowledge should be gained on extreme seismicity through observations, monitoring, analysis, modeling, comprehensive hazard assessment, prediction, and interpretation to assist in disaster risk analysis. Advanced disaster risk communication skills should be developed to link scientists, emergency management authorities, and the public. Natural, social, economic, and political reasons leading to disasters due to earthquakes will be discussed.

  4. Monitoring tools of COMPASS experiment at CERN

    NASA Astrophysics Data System (ADS)

    Bodlak, M.; Frolov, V.; Huber, S.; Jary, V.; Konorov, I.; Levit, D.; Novy, J.; Salac, R.; Tomsa, J.; Virius, M.

    2015-12-01

    This paper briefly introduces the data acquisition system of the COMPASS experiment and focuses mainly on the part responsible for monitoring the nodes in the whole newly developed data acquisition system of this experiment. COMPASS is a high-energy particle physics experiment with a fixed target located at the SPS of the CERN laboratory in Geneva, Switzerland. The hardware of the data acquisition system has been upgraded to use FPGA cards that are responsible for data multiplexing and event building. The software counterpart of the system includes several processes deployed in a heterogeneous network environment. Two processes, namely the Message Logger and the Message Browser, take care of monitoring. These tools handle messages generated by nodes in the system. While the Message Logger collects and saves messages to the database, the Message Browser serves as a graphical interface over the database containing these messages. For better performance, certain database optimizations have been used. Lastly, results of performance tests are presented.
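The Message Logger/Message Browser split described above is a common pattern: one process persists node messages, another provides filtered views over the same store. A minimal sketch under assumed names (the actual COMPASS schema and APIs are not given in the abstract), using an SQLite store for self-containment:

```python
import sqlite3

class MessageLogger:
    """Illustrative Message Logger role: collect messages from DAQ nodes
    and persist them; browse() stands in for the Message Browser's
    filtered view over the same database."""

    def __init__(self, path=":memory:"):
        self.db = sqlite3.connect(path)
        self.db.execute(
            "CREATE TABLE IF NOT EXISTS log (node TEXT, level TEXT, message TEXT)")

    def collect(self, node, level, message):
        # Logger side: append each incoming node message to the database.
        self.db.execute("INSERT INTO log VALUES (?, ?, ?)",
                        (node, level, message))
        self.db.commit()

    def browse(self, level=None):
        # Browser side: return all messages, or only those at one severity.
        if level is None:
            return self.db.execute("SELECT * FROM log").fetchall()
        return self.db.execute("SELECT * FROM log WHERE level = ?",
                               (level,)).fetchall()
```

Decoupling collection from browsing through the database is what lets the two processes run independently, which matches the paper's motivation for database-level optimizations.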

  5. Algorithmic network monitoring for a modern water utility: a case study in Jerusalem.

    PubMed

    Armon, A; Gutner, S; Rosenberg, A; Scolnicov, H

    2011-01-01

    We report on the design, deployment, and use of TaKaDu, a real-time algorithmic water infrastructure monitoring solution, with a strong focus on water loss reduction and control. TaKaDu is provided as a commercial service to several customers worldwide. It has been in use at HaGihon, the Jerusalem utility, since mid-2009. Water utilities collect considerable real-time data from their networks, e.g. by means of a SCADA system and sensors measuring flow, pressure, and other data. We discuss how an algorithmic statistical solution analyses this wealth of raw data, flexibly using many types of input and picking out and reporting significant events and failures in the network. Of particular interest to most water utilities is the early detection capability for invisible leaks, which is also a means of preventing large visible bursts. The system also detects sensor and SCADA failures, various water quality issues, DMA boundary breaches, unrecorded or unintended network changes (like a valve or pump state change), and other events, including types unforeseen during system design. We discuss results from use at HaGihon, showing clear operational value.
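The statistical engine itself is proprietary and not described in the abstract, but the core idea of flagging readings that deviate from a sensor's learned baseline can be illustrated with a simple rolling z-score test (an assumed stand-in, far cruder than the real multi-input analysis):

```python
import numpy as np

def flag_anomalies(flow, window=24, z_thresh=3.0):
    """Illustrative stand-in for statistical network monitoring: flag a
    flow reading when it deviates from the trailing-window mean by more
    than z_thresh standard deviations (e.g. a hidden leak raising night
    flow, or a stuck sensor)."""
    flow = np.asarray(flow, dtype=float)
    flags = np.zeros(len(flow), dtype=bool)
    for t in range(window, len(flow)):
        hist = flow[t - window:t]          # trailing baseline window
        mu, sigma = hist.mean(), hist.std()
        if sigma > 0 and abs(flow[t] - mu) > z_thresh * sigma:
            flags[t] = True
    return flags
```

A production system would model daily/weekly seasonality and correlate multiple sensors before alerting, which is how false positives from normal demand swings are kept down.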

  6. Key Design Elements of a Data Utility for National Biosurveillance: Event-driven Architecture, Caching, and Web Service Model

    PubMed Central

    Tsui, Fu-Chiang; Espino, Jeremy U.; Weng, Yan; Choudary, Arvinder; Su, Hoah-Der; Wagner, Michael M.

    2005-01-01

    The National Retail Data Monitor (NRDM) has monitored over-the-counter (OTC) medication sales in the United States since December 2002. The NRDM collects data from over 18,600 retail stores and processes over 0.6 million sales records per day. This paper describes key architectural features that we have found necessary for a data utility component in a national biosurveillance system. These elements include event-driven architecture to provide analyses of data in near real time, multiple levels of caching to improve query response time, high availability through the use of clustered servers, scalable data storage through the use of storage area networks and a web-service function for interoperation with affiliated systems. The methods and architectural principles are relevant to the design of any production data utility for public health surveillance—systems that collect data from multiple sources in near real time for use by analytic programs and user interfaces that have substantial requirements for time-series data aggregated in multiple dimensions. PMID:16779138
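The event-driven plus caching combination described above can be sketched as incremental aggregate maintenance: each arriving sales record updates precomputed totals, so time-series queries never scan the raw records. Class and field names are illustrative assumptions, not the NRDM's actual internals:

```python
from collections import defaultdict

class SalesAggregateCache:
    """Sketch of event-driven caching: incoming OTC sales records update
    daily aggregates as they arrive, so surveillance queries are answered
    from the cache in near real time."""

    def __init__(self):
        # (store, date, category) -> units sold
        self.daily_totals = defaultdict(int)

    def on_sales_record(self, store, date, category, units):
        # Event-driven path: constant-time update per incoming record.
        self.daily_totals[(store, date, category)] += units

    def query(self, date, category):
        # Query path: sum cached aggregates instead of scanning records.
        return sum(v for (s, d, c), v in self.daily_totals.items()
                   if d == date and c == category)
```

At NRDM scale (hundreds of thousands of records per day), paying a small cost per event to keep aggregates warm is what makes interactive query response times achievable.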

  7. Wireless Subsurface Microsensors for Health Monitoring of Thermal Protection Systems on Hypersonic Vehicles

    NASA Technical Reports Server (NTRS)

    Milos, Frank S.; Watters, David G.; Pallix, Joan B.; Bahr, Alfred J.; Huestis, David L.; Arnold, Jim (Technical Monitor)

    2001-01-01

    Health diagnostics is an area where major improvements have been identified for potential implementation into the design of new reusable launch vehicles in order to reduce life cycle costs, to increase safety margins, and to improve mission reliability. NASA Ames is leading the effort to develop inspection and health management technologies for thermal protection systems. This paper summarizes a joint project between NASA Ames and SRI International to develop 'SensorTags,' radio frequency identification devices coupled with event-recording sensors, that can be embedded in the thermal protection system to monitor temperature or other quantities of interest. Two prototype SensorTag designs containing thermal fuses to indicate a temperature overlimit are presented and discussed.

  8. Integrated wireless sensor network for monitoring pregnant women.

    PubMed

    Niţulescu, Adina; Crişan-Vida, Mihaela; Stoicu-Tivadar, Lăcrămioara; Bernad, Elena

    2015-01-01

    The paper presents an integrated monitoring system for pregnant women in the third trimester using a mobile cardiotocograph and body sensors. The medical staff thus has a useful tool to detect abnormalities and prevent unfortunate events in time. The mobile cardiotocograph sends data in real time to a smartphone that communicates the information to a cloud. The physician accesses the data using the hospital ObgGyn application. The advantage of using this system is that the pregnant woman can follow her pregnancy status evolution from home, while the physician receives alarms from the system if the data are not in the normal range and has information about the health status available at any time and location.
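The alarm rule described above (alert when monitored data leave the normal range) can be sketched as a simple range check. The parameter names and limits below are illustrative assumptions, not values from the paper:

```python
def check_vitals(readings, limits):
    """Hedged sketch of the out-of-range alarm: return an alert for every
    monitored parameter whose value falls outside its normal range.
    Parameter names and limits are hypothetical examples."""
    alerts = []
    for name, value in readings.items():
        lo, hi = limits[name]
        if not (lo <= value <= hi):
            alerts.append((name, value))   # e.g. forwarded to the physician
    return alerts
```

In the system described, such checks would run server-side on the cloud data so the physician is alerted regardless of location.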

  9. Tools to manage the enterprise-wide picture archiving and communications system environment.

    PubMed

    Lannum, L M; Gumpf, S; Piraino, D

    2001-06-01

    The presentation will focus on the implementation and utilization of a central picture archiving and communications system (PACS) network-monitoring tool that allows for enterprise-wide operations management and support of the image distribution network. The MagicWatch (Siemens, Iselin, NJ) PACS/radiology information system (RIS) monitoring station from Siemens has allowed our organization to create a service support structure that has given us proactive control of our environment and has allowed us to meet the service level performance expectations of the users. The Radiology Help Desk has used the MagicWatch PACS monitoring station as an applications support tool that has allowed the group to monitor network activity and individual systems performance at each node. Fast and timely recognition of the effects of single events within the PACS/RIS environment has allowed the group to proactively recognize possible performance issues and resolve problems. The PACS/operations group performs network management control, image storage management, and software distribution management from a single, central point in the enterprise. The MagicWatch station allows for the complete automation of software distribution, installation, and configuration process across all the nodes in the system. The tool has allowed for the standardization of the workstations and provides a central configuration control for the establishment and maintenance of the system standards. This report will describe the PACS management and operation prior to the implementation of the MagicWatch PACS monitoring station and will highlight the operational benefits of a centralized network and system-monitoring tool.

  10. Unified Desktop for Monitoring & Control Applications - The Open Navigator Framework Applied for Control Centre and EGSE Applications

    NASA Astrophysics Data System (ADS)

    Brauer, U.

    2007-08-01

    The Open Navigator Framework (ONF) was developed to provide a unified and scalable platform for user interface integration. The main objective of the framework was to raise the usability of monitoring and control consoles and to provide reuse of software components in different application areas. ONF is currently applied for the Columbus onboard crew interface, the commanding application for the Columbus Control Centre, the Columbus user facilities' specialized user interfaces, the Mission Execution Crew Assistant (MECA) study, and EADS Astrium internal R&D projects. ONF provides well-documented and proven middleware for GUI components (Java plugin interface, simplified concept similar to Eclipse). The overall application configuration is performed within a graphical user interface for layout and component selection; the end user does not have to work in the underlying XML configuration files. ONF was optimized to provide harmonized user interfaces for monitoring and command consoles. It provides many convenience functions designed together with flight controllers and onboard crew: user-defined workspaces, incl. support for multiple screens; an efficient communication mechanism between the components; integrated web browsing and documentation search & viewing; consistent and integrated menus and shortcuts; common logging and application configuration (properties); and a supervision interface for remote plugin GUI access (web based). A large number of operationally proven ONF components have been developed: Command Stack & History (release of commands and follow-up of command acknowledges); System Message Panel (browse, filter and search system messages/events); Unified Synoptic System (generic synoptic display system); Situational Awareness (show overall subsystem status based on monitoring of key parameters); System Model Browser (browse mission database definitions: measurements, commands, events); Flight Procedure Executor (execute checklist and logical-flow interactive procedures); Web Browser (integrated browsing of reference documentation and operations data); Timeline Viewer (view the master timeline as a Gantt chart); Search (local search of operations products, e.g. documentation, procedures, displays). All GUI components access the underlying spacecraft data (commanding, reporting data, events, command history) via a common library providing adaptors for the current deployments (Columbus MCS, Columbus onboard Data Management System, Columbus Trainer raw packet protocol). New adaptors are easy to develop. Currently an adaptor to SCOS 2000 is being developed as part of a study for the ESTEC standardization section ("USS for ESTEC Reference Facility").
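The plugin idea at the heart of ONF (components registered with a framework that fans spacecraft events out to them) can be sketched language-agnostically. ONF itself defines a Java plugin interface; the Python version below is only an illustrative model of the contract, with all names assumed:

```python
class MonitoringPlugin:
    """Hypothetical plugin contract in the spirit of ONF's component
    model: each GUI component implements on_event() and is driven by the
    framework's common communication mechanism."""
    name = "unnamed"

    def on_event(self, event):
        raise NotImplementedError

class Framework:
    def __init__(self):
        self.plugins = []

    def register(self, plugin):
        # Components are selected/configured at integration time.
        self.plugins.append(plugin)

    def dispatch(self, event):
        # Fan each spacecraft event out to every registered component.
        return [p.on_event(event) for p in self.plugins]
```

The payoff of such a contract is the one the abstract emphasizes: the same components (command stack, message panel, synoptics) can be reused across control-centre and onboard deployments by swapping the data adaptor underneath.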

  11. Monitoring and Characterizing the Geysering and Seismic Activity at the Lusi Mud Eruption Site, East Java, Indonesia

    NASA Astrophysics Data System (ADS)

    Karyono, Karyono; Obermann, Anne; Mazzini, Adriano; Lupi, Matteo; Syafri, Ildrem; Abdurrokhim, Abdurrokhim; Masturyono, Masturyono; Hadi, Soffian

    2016-04-01

    The Lusi eruption began on May 29, 2006 in the northeast of Java Island, Indonesia, and is still active to date. Lusi is a newborn sedimentary-hosted hydrothermal system characterized by continuous expulsion of liquefied mud and breccias and by geysering activity. Lusi is located upon the Watukosek fault system, a left-lateral wrench system connecting the volcanic arc and the backarc basin. This fault system is still periodically reactivated, as shown by field data. In the framework of the Lusi Lab project (ERC grant n° 308126) we conducted several types of monitoring. Based on camera observations, we characterized the Lusi erupting activity by four main behaviors occurring cyclically: (1) regular activity, which consists in the constant emission of water and mud breccias (i.e. viscous mud containing clay, silt, sand and clasts) associated with the constant expulsion of gas (mainly aqueous vapor with minor amounts of CO2 and CH4); (2) a geysering phase with intense bubbling, consisting in reduced vapor emission and more powerful bursting events that do not seem to have a regular pattern; (3) a geysering phase with intense vapor and degassing discharge and a typically dense plume that propagates up to 100 m in height; (4) a quiescent phase marking the end of the geysering activity (and of the observed cycle), with no gas emissions or bursts observed. To investigate the possible seismic activity beneath Lusi and the mechanisms controlling Lusi's pulsating behaviour, we deployed a network of 5 seismic stations and an HD camera around the Lusi crater. We characterize the observed types of seismic activity as tremor and volcano-tectonic events. Lusi tremor events occur in the 5-10 Hz frequency band, while volcano-tectonic events are abundant at higher frequencies, from 5 Hz up to 25 Hz. We coupled the seismic monitoring with the images collected with the HD camera to study the correlation between the seismic tremor and the different phases of the geysering activity.
Key words: Lusi mud eruption, geysering activity, seismic activity
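    The tremor vs. volcano-tectonic distinction in the abstract above rests on frequency content. As an illustration only (not from the paper), the sketch below classifies a seismic window by its dominant frequency using a naive DFT; the `classify_event` helper and its single-band threshold are hypothetical simplifications of the 5-10 Hz tremor band the authors describe.

```python
import math

def dominant_frequency(samples, sample_rate):
    """Return the frequency (Hz) of the strongest component via a naive DFT."""
    n = len(samples)
    best_freq, best_power = 0.0, 0.0
    for k in range(1, n // 2):  # skip the DC component
        re = sum(s * math.cos(2 * math.pi * k * i / n) for i, s in enumerate(samples))
        im = sum(-s * math.sin(2 * math.pi * k * i / n) for i, s in enumerate(samples))
        power = re * re + im * im
        if power > best_power:
            best_power, best_freq = power, k * sample_rate / n
    return best_freq

def classify_event(samples, sample_rate, tremor_band=(5.0, 10.0)):
    """Label a window 'tremor' if its dominant frequency falls in the tremor
    band, otherwise 'volcano-tectonic' (VT energy extends up to ~25 Hz)."""
    lo, hi = tremor_band
    f = dominant_frequency(samples, sample_rate)
    return "tremor" if lo <= f <= hi else "volcano-tectonic"

# Synthetic 1-s windows sampled at 100 Hz: a 7 Hz tremor-like signal
# and a 20 Hz VT-like signal.
rate = 100
tremor = [math.sin(2 * math.pi * 7 * i / rate) for i in range(rate)]
vt = [math.sin(2 * math.pi * 20 * i / rate) for i in range(rate)]
print(classify_event(tremor, rate))  # tremor
print(classify_event(vt, rate))      # volcano-tectonic
```

    A production classifier would of course work on band-passed, windowed data rather than a single dominant-frequency estimate.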

  12. Technological advances in perioperative monitoring: Current concepts and clinical perspectives

    PubMed Central

    Chilkoti, Geetanjali; Wadhwa, Rachna; Saxena, Ashok Kumar

    2015-01-01

    Minimal mandatory monitoring in the perioperative period, as recommended by the Association of Anaesthetists of Great Britain and Ireland and the American Society of Anesthesiologists, is universally acknowledged and has become an integral part of anesthesia practice. The technologies in perioperative monitoring have advanced, and their availability and clinical applications have multiplied exponentially. Newer monitoring techniques include depth-of-anesthesia monitoring, goal-directed fluid therapy, transesophageal echocardiography, advanced neurological monitoring, improved alarm systems, and technological advances in objective pain assessment. Factors that need to be considered with the use of these improved monitoring techniques are their validation data, effect on patient outcome, safety profile, cost-effectiveness, awareness of possible adverse events, knowledge of the underlying technical principles, and ease of routine handling. In this review, we discuss the new monitoring techniques in anesthesia, their advantages, deficiencies and limitations, their comparison to conventional methods, and their effect, if any, on patient outcome. PMID:25788767

  13. Technological advances in perioperative monitoring: Current concepts and clinical perspectives.

    PubMed

    Chilkoti, Geetanjali; Wadhwa, Rachna; Saxena, Ashok Kumar

    2015-01-01

    Minimal mandatory monitoring in the perioperative period, as recommended by the Association of Anaesthetists of Great Britain and Ireland and the American Society of Anesthesiologists, is universally acknowledged and has become an integral part of anesthesia practice. The technologies in perioperative monitoring have advanced, and their availability and clinical applications have multiplied exponentially. Newer monitoring techniques include depth-of-anesthesia monitoring, goal-directed fluid therapy, transesophageal echocardiography, advanced neurological monitoring, improved alarm systems, and technological advances in objective pain assessment. Factors that need to be considered with the use of these improved monitoring techniques are their validation data, effect on patient outcome, safety profile, cost-effectiveness, awareness of possible adverse events, knowledge of the underlying technical principles, and ease of routine handling. In this review, we discuss the new monitoring techniques in anesthesia, their advantages, deficiencies and limitations, their comparison to conventional methods, and their effect, if any, on patient outcome.

  14. Aligning observed and modelled behaviour based on workflow decomposition

    NASA Astrophysics Data System (ADS)

    Wang, Lu; Du, YuYue; Liu, Wei

    2017-09-01

    When business processes are largely supported by information systems, the availability of event logs generated by these systems, as well as the need for appropriate process models, is increasing. Business processes can be discovered, monitored and enhanced by extracting process-related information. However, some events cannot be correctly identified because of the explosion in the volume of event logs. Therefore, a new process mining technique based on a workflow decomposition method is proposed in this paper. Petri nets (PNs) are used to describe business processes, and conformance checking of event logs against process models is then investigated. A decomposition approach is proposed to divide large process models and event logs into several separate parts that can be analysed independently, while an alignment approach based on the state equation method of PN theory enhances the performance of conformance checking. Both approaches are implemented in the ProM process mining framework. The correctness and effectiveness of the proposed methods are illustrated through experiments.
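    To make the conformance-checking idea concrete: a minimal, hypothetical sketch (not the paper's algorithm) of token-based replay on a toy Petri net, where a trace's fitness drops whenever a transition must fire without its input tokens. The `NET` structure and transition names are invented for illustration.

```python
# Minimal Petri net as dicts: each transition consumes from and produces to places.
NET = {
    "register": {"consume": ["start"], "produce": ["p1"]},
    "check":    {"consume": ["p1"],    "produce": ["p2"]},
    "decide":   {"consume": ["p2"],    "produce": ["end"]},
}

def replay(trace, net, initial_marking={"start": 1}):
    """Token replay: fire each event's transition, counting tokens that had
    to be invented; returns (produced, consumed, missing)."""
    marking = dict(initial_marking)  # copy, so the default is never mutated
    produced = consumed = missing = 0
    for event in trace:
        t = net[event]
        for place in t["consume"]:
            if marking.get(place, 0) > 0:
                marking[place] -= 1
            else:
                missing += 1       # deviation: required token was absent
            consumed += 1
        for place in t["produce"]:
            marking[place] = marking.get(place, 0) + 1
            produced += 1
    return produced, consumed, missing

def fitness(trace, net):
    """Simplified token-based fitness: 1 - missing/consumed."""
    _, consumed, missing = replay(trace, net)
    return 1.0 - missing / consumed if consumed else 1.0

print(fitness(["register", "check", "decide"], NET))  # 1.0: conforming trace
print(fitness(["register", "decide"], NET))           # 0.5: skipped "check"
```

    The paper's state-equation-based alignment is more powerful than plain replay; this sketch only shows what "conformance of a log against a PN model" means operationally.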

  15. artdaq: DAQ software development made simple

    NASA Astrophysics Data System (ADS)

    Biery, Kurt; Flumerfelt, Eric; Freeman, John; Ketchum, Wesley; Lukhanin, Gennadiy; Rechenmacher, Ron

    2017-10-01

    For a few years now, the artdaq data acquisition software toolkit has provided numerous experiments with ready-to-use components which allow for rapid development and deployment of DAQ systems. Developed within the Fermilab Scientific Computing Division, artdaq provides data transfer, event building, run control, and event analysis functionality. This latter feature includes built-in support for the art event analysis framework, allowing experiments to run art modules for real-time filtering, compression, disk writing and online monitoring. Since art, also developed at Fermilab, is used for offline analysis as well, a major advantage of artdaq is that it allows developers to easily switch between developing online and offline software. artdaq continues to be improved. Support has been added for an alternate mode of running whereby data from some subdetector components are streamed only if requested; this option will reduce unnecessary DAQ throughput. Real-time reporting of DAQ metrics has been implemented, along with the flexibility to choose the format through which experiments receive the reports; these formats include the Ganglia, Graphite and syslog software packages, along with flat ASCII files. Additionally, work has been performed investigating more flexible modes of online monitoring, including the capability to run multiple online monitoring processes on different hosts, each running its own set of art modules. Finally, a web-based GUI through which users can configure the details of their DAQ system has been implemented, increasing the system's ease of use. Already successfully deployed on the LArIAT, DarkSide-50, DUNE 35ton and Mu2e experiments, artdaq will be employed for SBND and is a strong candidate for use on ICARUS and protoDUNE. With each experiment come new ideas for how artdaq can be made more flexible and powerful. The above improvements will be described, along with potential ideas for the future.
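    The "event building" step mentioned above can be illustrated with a hypothetical sketch (not artdaq code): fragments tagged with an event number arrive from multiple subdetector sources, possibly out of order, and a complete event is emitted once every expected source has contributed.

```python
from collections import defaultdict

def build_events(fragments, fragments_per_event):
    """Group data fragments by event number; emit an event only once all
    expected fragments (one per subdetector source) have arrived."""
    pending = defaultdict(dict)  # event number -> {source_id: payload}
    complete = []
    for ev, src, payload in fragments:
        pending[ev][src] = payload
        if len(pending[ev]) == fragments_per_event:
            complete.append((ev, pending.pop(ev)))
    return complete

# Fragments from two hypothetical sources, arriving interleaved:
stream = [
    (1, "tpc", b"\x01"), (2, "tpc", b"\x03"),
    (1, "pmt", b"\x02"), (2, "pmt", b"\x04"),
]
events = build_events(stream, fragments_per_event=2)
print([ev for ev, _ in events])  # [1, 2]
```

    A real event builder additionally handles timeouts for incomplete events and moves data between processes; this sketch only shows the grouping logic.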

  16. A multiparameter wearable physiologic monitoring system for space and terrestrial applications

    NASA Technical Reports Server (NTRS)

    Mundt, Carsten W.; Montgomery, Kevin N.; Udoh, Usen E.; Barker, Valerie N.; Thonier, Guillaume C.; Tellier, Arnaud M.; Ricks, Robert D.; Darling, Robert B.; Cagle, Yvonne D.; Cabrol, Nathalie A.; et al.

    2005-01-01

    A novel, unobtrusive and wearable, multiparameter ambulatory physiologic monitoring system for space and terrestrial applications, termed LifeGuard, is presented. The core element is a wearable monitor, the crew physiologic observation device (CPOD), that provides the capability to continuously record two standard electrocardiogram leads, respiration rate via impedance plethysmography, heart rate, hemoglobin oxygen saturation, ambient or body temperature, three axes of acceleration, and blood pressure. These parameters can be digitally recorded with high fidelity over a 9-h period with precise time stamps and user-defined event markers. Data can be continuously streamed to a base station using a built-in Bluetooth RF link or stored in 32 MB of on-board flash memory and downloaded to a personal computer using a serial port. The device is powered by two AAA batteries. The design, laboratory, and field testing of the wearable monitors are described.

  17. Anomaly Detection Techniques with Real Test Data from a Spinning Turbine Engine-Like Rotor

    NASA Technical Reports Server (NTRS)

    Abdul-Aziz, Ali; Woike, Mark R.; Oza, Nikunj C.; Matthews, Bryan L.

    2012-01-01

    Online detection techniques to monitor the health of rotating engine components are becoming increasingly attractive to aircraft engine manufacturers as a means to increase operational safety and lower maintenance costs. Health monitoring remains challenging to implement, especially in the presence of scattered loading conditions and variations in crack size, component geometry, and material properties. The current trend, however, is to utilize noninvasive health monitoring or nondestructive techniques to detect hidden flaws and mini-cracks before any catastrophic event occurs. These techniques go further, evaluating material discontinuities and other anomalies that have grown to the level of critical defects that can lead to failure. Generally, health monitoring is highly dependent on sensor systems capable of performing in various engine environmental conditions and able to transmit a signal upon a predetermined crack length, while remaining neutral with respect to the overall performance of the engine system.
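    As a generic illustration of online anomaly detection on a sensor stream (not the authors' method, and using invented numbers), the sketch below flags any reading that deviates from a rolling baseline by more than a z-score threshold:

```python
import statistics

def detect_anomalies(readings, window=20, threshold=3.0):
    """Flag readings more than `threshold` standard deviations away from
    the mean of the preceding `window` samples (simple online z-score)."""
    anomalies = []
    for i in range(window, len(readings)):
        baseline = readings[i - window:i]
        mu = statistics.fmean(baseline)
        sigma = statistics.stdev(baseline)
        if sigma > 0 and abs(readings[i] - mu) / sigma > threshold:
            anomalies.append(i)
    return anomalies

# A steady vibration-like signal with one injected spike
# (e.g. a hypothetical crack-induced transient):
signal = [1.0, 1.1, 0.9, 1.0, 1.05, 0.95] * 10
signal[30] = 5.0
print(detect_anomalies(signal))  # [30]
```

    Rotor health monitoring in practice uses far richer features (orders, blade-tip timing, trained classifiers), but the rolling-baseline idea is the common starting point.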

  18. Development of a GIS-based integrated framework for coastal seiches monitoring and forecasting: A North Jiangsu shoal case study

    NASA Astrophysics Data System (ADS)

    Qin, Rufu; Lin, Liangzhao

    2017-06-01

    Coastal seiches have become an increasingly important issue in coastal science and present many challenges, particularly when attempting to provide warning services. This paper presents the methodologies, techniques and integrated services adopted for the design and implementation of a Seiches Monitoring and Forecasting Integration Framework (SMAF-IF). The SMAF-IF is an integrated system combining different types of sensors and numerical models with Geographic Information System (GIS) and web techniques, and focuses on the detection of coastal seiche events and early warning in the North Jiangsu shoal, China. The in situ sensors perform automatic and continuous monitoring of the marine environment, while the numerical models provide estimates of meteorological and physical oceanographic parameters. Model-output processing software was developed in C# using ArcGIS Engine functions; it automatically generates visualization maps and warning information. Leveraging the ArcGIS Flex API and ASP.NET web services, a web-based GIS framework was designed to facilitate quasi-real-time data access, interactive visualization and analysis, and the provision of early warning services to end users. The integrated framework proposed in this study enables decision-makers and the public to respond quickly to emergency coastal seiche events and allows easy adaptation to other regions and scientific domains related to real-time monitoring and forecasting.
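    The warning-generation step of such a framework reduces, at its simplest, to mapping an observed oscillation amplitude to an alert tier. The sketch below is purely illustrative: the tier names, thresholds, and `seiche_warning` function are invented, not taken from the SMAF-IF.

```python
def seiche_warning(levels, baseline, amplitude_threshold=0.5):
    """Map a window of water-level readings (metres) to an alert tier based
    on the peak deviation from the local baseline level."""
    amplitude = max(abs(x - baseline) for x in levels)
    if amplitude >= 2 * amplitude_threshold:
        return "red"       # hypothetical tier names
    if amplitude >= amplitude_threshold:
        return "yellow"
    return "none"

print(seiche_warning([2.0, 2.3, 1.8, 2.1], baseline=2.0))  # none
print(seiche_warning([2.0, 2.7, 1.4, 2.2], baseline=2.0))  # yellow
```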

  19. Big Data solution for CTBT monitoring: CEA-IDC joint global cross correlation project

    NASA Astrophysics Data System (ADS)

    Bobrov, Dmitry; Bell, Randy; Brachet, Nicolas; Gaillard, Pierre; Kitov, Ivan; Rozhkov, Mikhail

    2014-05-01

    Waveform cross-correlation, when applied to historical datasets of seismic records, provides dramatic improvements in the detection, location, and magnitude estimation of natural and manmade seismic events. With correlation techniques, the amplitude threshold of signal detection can be reduced globally by a factor of 2 to 3 relative to the currently standard beamforming and STA/LTA detectors. The gain in sensitivity corresponds to a body-wave magnitude reduction of 0.3 to 0.4 units and doubles the number of events meeting high-quality requirements (e.g., detection by three or more seismic stations of the International Monitoring System (IMS)). This gain is crucial for seismic monitoring under the Comprehensive Nuclear-Test-Ban Treaty. The International Data Centre (IDC) dataset includes more than 450,000 seismic events, tens of millions of raw detections, and continuous seismic data from the primary IMS stations since 2000. This high-quality dataset is a natural candidate for an extensive cross-correlation study and the basis of further enhancements in monitoring capabilities; without this historical dataset recorded by the permanent IMS Seismic Network, such improvements would not be feasible. However, due to the mismatch between the volume of data and the performance of a standard information technology infrastructure, it is impossible to process all the data within a tolerable elapsed time. To tackle this "Big Data" problem, the CEA/DASE is part of the French project "DataScale". One objective is to reanalyze 10 years of waveform data from the IMS network with the cross-correlation technique on a dedicated High Performance Computing (HPC) infrastructure operated by the Centre de Calcul Recherche et Technologie (CCRT) at the CEA of Bruyères-le-Châtel. 
Within 2 years we plan to enhance the detection and phase-association algorithms (also using machine learning and automatic classification) and to process about 30 terabytes of data provided by the IDC to update the world seismicity map. From the new events and those in the IDC Reviewed Event Bulletin, we will automatically create sets of master-event templates to be used for global event location by the CTBTO and CEA.
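    The core of template-based detection described above can be sketched in a few lines: slide a master-event template along a continuous trace and report every offset where the normalized correlation coefficient exceeds a threshold. This toy version (invented data, no filtering or multi-channel stacking) only illustrates the principle:

```python
import math

def normalized_cc(template, trace, offset):
    """Zero-lag normalized cross-correlation of `template` against the
    window of `trace` starting at `offset`."""
    window = trace[offset:offset + len(template)]
    num = sum(t * w for t, w in zip(template, window))
    den = math.sqrt(sum(t * t for t in template) * sum(w * w for w in window))
    return num / den if den else 0.0

def detect(template, trace, threshold=0.8):
    """Slide the master-event template along the trace; return offsets whose
    correlation coefficient exceeds the detection threshold."""
    return [i for i in range(len(trace) - len(template) + 1)
            if normalized_cc(template, trace, i) >= threshold]

# A toy "master event" buried in an otherwise quiet trace at sample 5:
template = [0.0, 1.0, -1.0, 0.5]
trace = [0.0] * 5 + [0.0, 1.0, -1.0, 0.5] + [0.0] * 5
print(detect(template, trace))  # [5]
```

    The sensitivity gain quoted in the abstract comes from exactly this matched-filter property: a correlated template pulls a known waveform shape out of noise that would defeat an amplitude-based STA/LTA trigger.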

  20. Clinical Impact of Accurate Point-of-Care Glucose Monitoring for Tight Glycemic Control in Severely Burned Children.

    PubMed

    Tran, Nam K; Godwin, Zachary R; Steele, Amanda N; Wolf, Steven E; Palmieri, Tina L

    2016-09-01

    The goal of this study was to retrospectively evaluate the clinical impact of an accurate autocorrecting blood glucose monitoring system in children with severe burns. Blood glucose monitoring system accuracy is essential for providing appropriate intensive insulin therapy and achieving tight glycemic control in critically ill patients. Unfortunately, few comparison studies have been performed to evaluate the clinical impact of accurate blood glucose monitoring system monitoring in the high-risk pediatric burn population. Retrospective analysis of an electronic health record system. Pediatric burn ICU at an academic medical center. Children (aged < 18 yr) with severe burns (≥ 20% total body surface area) receiving intensive insulin therapy guided by either a noncorrecting (blood glucose monitoring system-1) or an autocorrecting blood glucose monitoring system (blood glucose monitoring system-2). Patient demographics, insulin rates, and blood glucose monitoring system measurements were collected. The frequency of hypoglycemia and glycemic variability was compared between the two blood glucose monitoring system groups. A total of 122 patient charts from 2001 to 2014 were reviewed. Sixty-three patients received intensive insulin therapy using blood glucose monitoring system-1 and 59 via blood glucose monitoring system-2. Patient demographics were similar between the two groups. Mean insulin infusion rates (5.1 ± 3.8 U/hr; n = 535 paired measurements vs 2.4 ± 1.3 U/hr; n = 511 paired measurements; p < 0.001), glycemic variability, and frequency of hypoglycemic events (90 vs 12; p < 0.001) were significantly higher in blood glucose monitoring system-1-treated patients. Compared with laboratory measurements, blood glucose monitoring system-2 yielded the most accurate results (mean ± SD bias: -1.7 ± 6.9 mg/dL [-0.09 ± 0.4 mmol/L] vs 7.4 ± 13.5 mg/dL [0.4 ± 0.7 mmol/L]). 
Blood glucose monitoring system-2 patients achieved glycemic control more quickly (5.7 ± 4.3 vs 13.1 ± 6.9 hr; p < 0.001) and stayed within the target glycemic control range longer than blood glucose monitoring system-1 patients (85.2% ± 13.9% vs 57.9% ± 29.1%; p < 0.001). An accurate autocorrecting blood glucose monitoring system optimizes intensive insulin therapy, improves tight glycemic control, and reduces the risk of hypoglycemia and glycemic variability. The use of an autocorrecting blood glucose monitoring system for intensive insulin therapy may improve glycemic control in severely burned children.
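    The study's two headline metrics, time within the target range and hypoglycemic event count, are simple to compute from a reading series. The sketch below uses invented readings and thresholds (the paper's actual target band is not restated here):

```python
def glycemic_summary(readings, target=(80, 150), hypo_threshold=60):
    """Summarize glucose readings (mg/dL): percent of readings inside the
    target band and the count of hypoglycemic readings."""
    in_range = sum(target[0] <= g <= target[1] for g in readings)
    hypo_events = sum(g < hypo_threshold for g in readings)
    return {
        "time_in_range_pct": 100.0 * in_range / len(readings),
        "hypoglycemic_events": hypo_events,
    }

readings = [95, 110, 145, 160, 130, 55, 100, 120]
print(glycemic_summary(readings))
# {'time_in_range_pct': 75.0, 'hypoglycemic_events': 1}
```

    True time-in-range weights each reading by the interval it covers; with evenly spaced measurements the two coincide, which is what this sketch assumes.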
