Automatic event recognition and anomaly detection with attribute grammar by learning scene semantics
NASA Astrophysics Data System (ADS)
Qi, Lin; Yao, Zhenyu; Li, Li; Dong, Junyu
2007-11-01
In this paper we present a novel framework for automatic event recognition and abnormal-behavior detection with attribute grammars, based on learned scene semantics. The framework combines learning scene semantics through trajectory analysis with an attribute-grammar-based event representation. Scene and event information is learned automatically, and behaviors that violate the scene semantics or the event grammar rules are detected as abnormal. The method thus provides an approach to understanding video scenes; furthermore, this prior knowledge increases the accuracy of abnormal event detection.
NASA Astrophysics Data System (ADS)
Patton, J.; Yeck, W.; Benz, H.
2017-12-01
The U.S. Geological Survey National Earthquake Information Center (USGS NEIC) is implementing and integrating new signal detection methods such as subspace correlation, continuous beamforming, multi-band picking, and automatic phase identification into near-real-time monitoring operations. Leveraging the additional information from these techniques helps the NEIC utilize a large and varied network on local to global scales. The NEIC is developing an ordered, rapid, robust, and decentralized framework for distributing seismic detection data, together with a set of formalized formatting standards. These frameworks and standards enable the NEIC to implement a seismic event detection framework that supports basic tasks, including automatic arrival-time picking, social-media-based event detection, and automatic association of different seismic detections into seismic earthquake events. In addition, this framework enables retrospective detection processing such as automated S-wave arrival-time picking for a detected event, discrimination and classification of detected events by type, back-azimuth and slowness calculations, and checks of aftershock and induced-sequence detection completeness. These processes and infrastructure improve the NEIC's capabilities, accuracy, and speed of response, and provide an improved, convenient structure for access to automatic detection data for both research and algorithm development.
NASA Astrophysics Data System (ADS)
Knox, H. A.; Draelos, T.; Young, C. J.; Lawry, B.; Chael, E. P.; Faust, A.; Peterson, M. G.
2015-12-01
The quality of automatic detections from seismic sensor networks depends on a large number of data-processing parameters that interact in complex ways. The largely manual process of identifying effective parameters is painstaking and does not guarantee that the resulting controls are optimal configuration settings, yet superior automatic detection of seismic events hinges on these parameters. We present an automated sensor tuning (AST) system that learns near-optimal parameter settings for each event type using neuro-dynamic programming (reinforcement learning) trained with historical data. AST learns to test the raw signal against all event settings and automatically self-tunes to an emerging event in real time. The overall goal is to reduce both the number of missed legitimate events and the number of false detections; reducing false alarms early in the seismic processing pipeline has a significant impact on this goal. Applicable both to boosting the performance of existing sensors and to new sensor deployments, this system provides an important new method to automatically tune complex remote sensing systems. Systems tuned in this way achieve better performance than is currently possible by manual tuning, with much less time and effort devoted to the tuning process. Using ground truth on detections in seismic waveforms from a network of stations, we show that AST increases the probability of detection while decreasing false alarms.
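As a rough illustration of the idea behind AST (not the authors' system), the sketch below uses a simple tabular epsilon-greedy reinforcement-learning loop to learn which detector parameter setting maximizes a reward of legitimate detections minus false alarms. The `evaluate` function, the candidate thresholds, and all numbers are invented stand-ins for scoring against historic ground-truth data.

```python
import random

def tune_detector(settings, evaluate, episodes=1000, epsilon=0.3, alpha=0.2, seed=1):
    """Tabular reinforcement-learning loop: estimate the expected reward
    (legitimate detections minus false alarms) of each candidate detector
    parameter setting, then return the best one."""
    rng = random.Random(seed)
    q = {s: 0.0 for s in settings}          # learned value of each setting
    for _ in range(episodes):
        # epsilon-greedy: mostly exploit the best-known setting, sometimes explore
        if rng.random() < epsilon:
            s = rng.choice(settings)
        else:
            s = max(q, key=q.get)
        reward = evaluate(s, rng)
        q[s] += alpha * (reward - q[s])     # incremental running-mean update
    return max(q, key=q.get), q

# Toy evaluation standing in for historic data: a detection threshold of 3.0
# gives the best trade-off between missed events and false alarms.
def evaluate(threshold, rng):
    hits = 10.0 - 2.0 * abs(threshold - 3.0)      # fewer hits away from the optimum
    false_alarms = max(0.0, 4.0 - threshold)      # low thresholds trigger on noise
    return hits - false_alarms + rng.gauss(0.0, 0.1)

best, q = tune_detector([1.0, 2.0, 3.0, 4.0, 5.0], evaluate)
```

Against this toy objective the loop settles on the threshold that jointly minimizes missed detections and false alarms, which is the trade-off the abstract describes.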
Sources of Infrasound events listed in IDC Reviewed Event Bulletin
NASA Astrophysics Data System (ADS)
Bittner, Paulina; Polich, Paul; Gore, Jane; Ali, Sherif; Medinskaya, Tatiana; Mialle, Pierrick
2017-04-01
Until 2003, two waveform technologies, seismic and hydroacoustic, were used to detect and locate events included in the International Data Centre (IDC) Reviewed Event Bulletin (REB). The first atmospheric event was published in the REB in 2003; however, automatic processing required significant improvements to reduce the number of false events. At the beginning of 2010 the infrasound technology was reintroduced into IDC operations and has since contributed to both the automatic and the reviewed IDC bulletins. The primary contribution of infrasound technology is the detection of atmospheric events. Such events may also be observed at seismic stations, which significantly improves event location. Example sources of REB events detected by the International Monitoring System (IMS) infrasound network are fireballs (e.g. the Bangkok fireball, 2015), volcanic eruptions (e.g. Calbuco, Chile, 2015) and large surface explosions (e.g. Tianjin, China, 2015). Quarry blasts (e.g. Zheleznogorsk) and large earthquakes (e.g. Italy, 2016) are events primarily recorded at seismic stations of the IMS network but often also detected at infrasound stations. In the case of earthquakes, analysis of infrasound signals may help to estimate the area affected by ground vibration, while infrasound associations to quarry-blast events may help to obtain a better source location. The role of IDC analysts is to verify and improve the locations of events detected by the automatic system and to add events that the automatic process missed. Open-source materials may help to identify the nature of some events, and well-recorded examples may be added to the Reference Infrasound Event Database to support the analysis process. This presentation will provide examples of events generated by different sources that were included in the IDC bulletins.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ding, Fei; Jiang, Huaiguang; Tan, Jin
This paper proposes an event-driven approach for reconfiguring distribution systems automatically. Specifically, optimal synchrophasor sensor placement (OSSP) is used to reduce the number of synchrophasor sensors while keeping the whole system observable. A wavelet-based event detection and location approach is then used to detect and locate the event, which acts as a trigger for network reconfiguration. With the detected information, the system is reconfigured using a hierarchical decentralized approach to find the new optimal topology. In this manner, whenever an event happens, the distribution network can be reconfigured automatically based on real-time information that is observable and detectable.
NASA Astrophysics Data System (ADS)
Reynen, Andrew; Audet, Pascal
2017-09-01
A new method using a machine learning technique is applied to event classification and detection at seismic networks; it is applicable to a variety of network sizes and settings. The algorithm makes use of a small catalogue of known observations across the entire network. Two attributes, the polarization and the frequency content, are used as input to regression. These attributes are extracted at predicted arrival times for P and S waves using only an approximate velocity model, since the attributes are calculated over large time spans. This method of waveform characterization is shown to distinguish between blasts and earthquakes with 99 per cent accuracy using a network of 13 stations located in Southern California. The combination of machine learning with generalized waveform features is further applied to event detection in Oklahoma, United States. The event detection algorithm makes use of a pair of unique seismic phases to locate events, with a precision directly related to the sampling rate of the generalized waveform features. Over a week of data from 30 stations in Oklahoma is used to automatically detect 25 times more events than in the catalogue of the local geological survey, with a false detection rate of less than 2 per cent. This method provides a highly confident way of detecting and locating events; a large number of seismic events can be detected automatically with few false alarms, allowing for a larger automatic event catalogue with a high degree of trust.
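To make the two-attribute idea concrete, here is a deliberately simplified sketch (not the authors' method or features): a zero-crossing rate stands in for frequency content, a vertical-to-total energy ratio stands in for polarization, and a nearest-centroid rule stands in for the regression. All signals and labels are synthetic.

```python
import math

def attributes(z, n, e):
    """Two crude stand-ins for the attributes named above: frequency content
    (zero-crossing rate of the vertical trace) and polarization
    (vertical-to-total energy ratio). Both are illustrative proxies only."""
    zcr = sum(1 for a, b in zip(z, z[1:]) if a * b < 0) / (len(z) - 1)
    ez = sum(v * v for v in z)
    eh = sum(v * v for v in n) + sum(v * v for v in e)
    return (zcr, ez / (ez + eh))

def nearest_centroid(train, sample):
    """Classify a (z, n, e) waveform by distance to per-class attribute centroids."""
    cents = {
        label: tuple(sum(f[i] for f in feats) / len(feats) for i in range(2))
        for label, feats in train.items()
    }
    return min(cents, key=lambda lab: math.dist(cents[lab], attributes(*sample)))

# Synthetic training set: one low-frequency "earthquake" and one
# high-frequency "blast" waveform (the same trace reused on all components).
eq = [math.sin(0.2 * i) for i in range(200)]
bl = [math.sin(1.5 * i) for i in range(200)]
train = {"earthquake": [attributes(eq, eq, eq)], "blast": [attributes(bl, bl, bl)]}

unknown = [math.sin(1.4 * i) for i in range(200)]
label = nearest_centroid(train, (unknown, unknown, unknown))
```

The unknown high-frequency trace lands nearer the blast centroid in attribute space, mirroring the blast-versus-earthquake discrimination described in the abstract.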
Gurulingappa, Harsha; Toldo, Luca; Rajput, Abdul Mateen; Kors, Jan A; Taweel, Adel; Tayrouz, Yorki
2013-11-01
The aim of this study was to assess the impact of automatically detected adverse-event signals from text and open-source data on the prediction of drug label changes. Open-source adverse-effect data were collected from the FAERS, Yellow Card and SIDER databases. A shallow linguistic relation extraction system (JSRE) was applied to extract adverse effects from MEDLINE case reports. A statistical approach was applied to the extracted datasets for signal detection and the subsequent prediction of label changes issued for 29 drugs by the UK Regulatory Authority in 2009. 76% of drug label changes were automatically predicted; of these, 6% were detected only by text mining. JSRE enabled the precise identification of four adverse drug events from MEDLINE that were otherwise undetectable. Changes in drug labels can thus be predicted automatically using data- and text-mining techniques; text-mining technology is mature and well placed to support pharmacovigilance tasks. Copyright © 2013 John Wiley & Sons, Ltd.
Automatic processing of induced events in the geothermal reservoirs Landau and Insheim, Germany
NASA Astrophysics Data System (ADS)
Olbert, Kai; Küperkoch, Ludger; Meier, Thomas
2016-04-01
Induced events can pose a risk to local infrastructure, a risk that needs to be understood and evaluated; they also represent an opportunity to learn more about reservoir behavior and characteristics. Prior to analysis, the waveform data must be processed consistently and accurately to avoid erroneous interpretations. Within the framework of the MAGS2 project, an automatic off-line event detection and phase-onset-time determination algorithm is applied to induced seismic events in the geothermal systems in Landau and Insheim, Germany. The off-line detection algorithm is based on cross-correlation of continuous data from the local seismic network with master events. It distinguishes events between the different reservoirs as well as within individual reservoirs, and it provides location and magnitude estimates. Data from 2007 to 2014 are processed and compared with other detections obtained using the SeisComp3 cross-correlation detector and an STA/LTA detector. The detected events are analyzed for spatial and temporal clustering, and their number is compared to the existing detection lists. The automatic phase-picking algorithm combines an AR-AIC approach with a cost function to find precise P1- and S1-phase onset times that can be used for localization and tomography studies. 800 induced events are processed, yielding 5000 P1- and 6000 S1-picks. The phase onset times show high precision, with mean residuals relative to manual picks of 0 s (P1) to 0.04 s (S1) and standard deviations below ±0.05 s. The automatic picks are then used to relocate a selected number of events in order to evaluate their influence on location precision.
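The core of a master-event cross-correlation detector of the kind described above can be sketched in a few lines (a generic illustration, not the MAGS2 implementation): slide the master-event template over the continuous record and flag every window whose normalized correlation coefficient exceeds a threshold. Template, data, and threshold below are synthetic.

```python
import math

def xcorr_detect(data, template, threshold=0.8):
    """Slide a master-event template over continuous data and flag windows whose
    normalized cross-correlation coefficient exceeds the threshold."""
    m = len(template)
    tmean = sum(template) / m
    t0 = [v - tmean for v in template]
    tnorm = math.sqrt(sum(v * v for v in t0))
    hits = []
    for i in range(len(data) - m + 1):
        w = data[i:i + m]
        wmean = sum(w) / m
        w0 = [v - wmean for v in w]
        wnorm = math.sqrt(sum(v * v for v in w0))
        if wnorm == 0.0:
            continue                      # flat window: correlation undefined
        cc = sum(a * b for a, b in zip(t0, w0)) / (tnorm * wnorm)
        if cc >= threshold:
            hits.append((i, cc))
    return hits

# A scaled copy of the master event buried at sample 40 of an otherwise
# quiet record is recovered with correlation ~1 regardless of its amplitude.
template = [0.0, 1.0, 0.0, -1.0, 0.0, 2.0, -2.0, 0.0]
data = [0.0] * 100
for k, v in enumerate(template):
    data[40 + k] = 2.5 * v
hits = xcorr_detect(data, template)
```

Because the coefficient is normalized, the detector is insensitive to event magnitude, which is what lets a single master event find much smaller repeats within the same reservoir.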
Sandberg, Warren S; Häkkinen, Matti; Egan, Marie; Curran, Paige K; Fairbrother, Pamela; Choquette, Ken; Daily, Bethany; Sarkka, Jukka-Pekka; Rattner, David
2005-09-01
When procedures and processes to assure patient location based on human performance do not work as expected, patients are brought incrementally closer to a possible "wrong patient-wrong procedure" error. We developed a system for automated patient location monitoring and management. Real-time data from an active infrared/radio-frequency identification tracking system provide patient location data that are robust and can be compared with an "expected process" model to automatically flag wrong-location events as soon as they occur. The system also generates messages that are automatically sent to process managers via the hospital paging system, creating an active alerting function to annunciate errors. We deployed the system to detect and annunciate "patient-in-wrong-OR" events. The system detected all wrong-operating-room (OR) events, and all wrong-OR locations were correctly assigned within 0.50 ± 0.28 minutes (mean ± SD), corresponding to the measured latency of the tracking system. All wrong-OR events were correctly annunciated via the paging function. This experiment demonstrates that current technology can automatically collect sufficient data to remotely monitor patient flow through a hospital, provide decision support based on predefined rules, and automatically notify stakeholders of errors.
Strategies for automatic processing of large aftershock sequences
NASA Astrophysics Data System (ADS)
Kvaerna, T.; Gibbons, S. J.
2017-12-01
Aftershock sequences following major earthquakes present great challenges to seismic bulletin generation. The analyst resources needed to locate events increase with increased event numbers as the quality of underlying, fully automatic, event lists deteriorates. While current pipelines, designed a generation ago, are usually limited to single passes over the raw data, modern systems also allow multiple passes. Processing the raw data from each station currently generates parametric data streams that are later subject to phase-association algorithms which form event hypotheses. We consider a major earthquake scenario and propose to define a region of likely aftershock activity in which we will detect and accurately locate events using a separate, specially targeted, semi-automatic process. This effort may use either pattern detectors or more general algorithms that cover wider source regions without requiring waveform similarity. An iterative procedure to generate automatic bulletins would incorporate all the aftershock event hypotheses generated by the auxiliary process, and filter all phases from these events from the original detection lists prior to a new iteration of the global phase-association algorithm.
NASA Astrophysics Data System (ADS)
Kortström, Jari; Tiira, Timo; Kaisko, Outi
2016-03-01
The Institute of Seismology of the University of Helsinki is building a new local seismic network, called the OBF network, around a planned nuclear power plant in Northern Ostrobothnia, Finland. The network will consist of nine new stations and one existing station. It should be dense enough to provide azimuthal coverage better than 180° and automatic detection capability down to ML -0.1 within a radius of 25 km from the site. Construction began in 2012, and the first four stations started operating at the end of May 2013. We applied an automatic seismic signal detection and event location system to a network of 13 stations consisting of the four new stations and the nearest stations of the Finnish and Swedish national seismic networks. Between the end of May and December 2013 the network detected 214 events inside the predefined area of 50 km radius surrounding the planned nuclear power plant site. Of those detections, 120 were identified as spurious events, and a total of 74 events were associated with known quarries and mining areas. The average location error, calculated as the difference between the locations announced by environmental authorities and companies and the automatic locations, was 2.9 km. During the same period, eight earthquakes in the magnitude range 0.1-1.0 occurred within the area, of which seven could be detected automatically. The results from the phase 1 stations of the OBF network indicate that the planned network can achieve its goals.
Automatic detection of snow avalanches in continuous seismic data using hidden Markov models
NASA Astrophysics Data System (ADS)
Heck, Matthias; Hammer, Conny; van Herwijnen, Alec; Schweizer, Jürg; Fäh, Donat
2018-01-01
Snow avalanches, like many other mass movements, generate seismic signals, and detecting avalanches by seismic monitoring is highly relevant for assessing avalanche danger. In contrast to other seismic events, signals generated by avalanches have no characteristic first arrival, nor is it possible to identify distinct wave phases; in addition, the moving-source character of avalanches increases the intricacy of the signals. Although seismic signals produced by avalanches can be detected visually, reliable automatic detection methods for all types of avalanches do not yet exist. We therefore evaluate whether hidden Markov models (HMMs) are suitable for the automatic detection of avalanches in continuous seismic data. We analyzed data recorded during the winter season of 2010 by a seismic array deployed in an avalanche starting zone above Davos, Switzerland. We re-evaluated a reference catalogue containing 385 events by grouping the events into seven probability classes. Since most of the data consist of noise, we first applied a simple amplitude threshold to reduce the amount of data. As the first classification results were unsatisfactory, we analyzed the temporal behavior of the seismic signals for the whole data set and found a high variability in the signals. We therefore applied further post-processing steps to reduce the number of false alarms: defining a minimal duration for a detected event, implementing a voting-based approach, and analyzing the coherence of the detected events. We obtained the best classification results for events detected by at least five sensors and with a minimal duration of 12 s. These processing steps allowed us to identify two periods of high avalanche activity, suggesting that HMMs are suitable for the automatic detection of avalanches in seismic data. However, our results also showed that more sensitive sensors and more appropriate sensor locations are needed to improve the signal-to-noise ratio of the signals and, in turn, the classification.
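The two post-processing rules reported above (at least five sensors, minimal duration of 12 s) can be sketched as a simple filter over per-sensor detections; this is a generic illustration of the voting idea, not the authors' code, and the data format is an assumption.

```python
def group_and_filter(detections, min_duration=12.0, min_sensors=5):
    """Post-processing of per-sensor HMM detections as described above: merge
    overlapping detections into candidate events, then keep only events seen
    by at least `min_sensors` sensors and lasting at least `min_duration`
    seconds. `detections` holds (sensor_id, start_s, end_s) tuples."""
    events, current = [], None
    for sensor, start, end in sorted(detections, key=lambda d: d[1]):
        if current is not None and start <= current["end"]:
            current["end"] = max(current["end"], end)   # extend the open event
            current["sensors"].add(sensor)
        else:
            current = {"start": start, "end": end, "sensors": {sensor}}
            events.append(current)
    return [ev for ev in events
            if len(ev["sensors"]) >= min_sensors
            and ev["end"] - ev["start"] >= min_duration]

# An avalanche seen by six sensors survives; a short two-sensor burst does not.
dets = [(i, 100.0 + 0.5 * i, 113.0 + 0.5 * i) for i in range(6)]
dets += [(0, 200.0, 205.0), (1, 201.0, 206.0)]
kept = group_and_filter(dets)
```

The voting step exploits the array geometry: a real avalanche is coherent across sensors, while local noise bursts rarely trigger five sensors for 12 s at once.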
A Risk Assessment System with Automatic Extraction of Event Types
NASA Astrophysics Data System (ADS)
Capet, Philippe; Delavallade, Thomas; Nakamura, Takuya; Sandor, Agnes; Tarsitano, Cedric; Voyatzi, Stavroula
In this article we describe the joint effort of experts in linguistics, information extraction and risk assessment to integrate EventSpotter, an automatic event extraction engine, into ADAC, an automated early-warning system. By detecting weak signals of emerging risks as early as possible, ADAC provides a dynamic, synthetic picture of situations involving risk. The ADAC system calculates risk on the basis of fuzzy-logic rules operating on a template graph whose leaves are event types. EventSpotter is based on a general-purpose natural language dependency parser, XIP, enhanced with domain-specific lexical resources (Lexicon-Grammar). Its role is to automatically feed the leaves with input data.
Automatic Detection and Vulnerability Analysis of Areas Endangered by Heavy Rain
NASA Astrophysics Data System (ADS)
Krauß, Thomas; Fischer, Peter
2016-08-01
In this paper we present a new method for the fully automatic detection and derivation of areas endangered by heavy rainfall, based only on digital elevation models. News reports show that the majority of occurring natural hazards are flood events, and many flood prediction systems have accordingly been developed. Most existing systems for deriving areas endangered by flooding, however, are based only on horizontal and vertical distances to existing rivers and lakes, and typically do not take into account dangers arising directly from heavy rain events. In a study we conducted together with a German insurance company, a new approach for detecting areas endangered by heavy rain was shown to give a high correlation between the derived endangered areas and the losses claimed at the insurance company. Here we describe three methods for classifying digital terrain models, analyze their usability for automatic detection and vulnerability analysis of areas endangered by heavy rainfall, and evaluate the results using the available insurance data.
Automatic Detection and Classification of Audio Events for Road Surveillance Applications.
Almaadeed, Noor; Asim, Muhammad; Al-Maadeed, Somaya; Bouridane, Ahmed; Beghdadi, Azeddine
2018-06-06
This work investigates the problem of detecting hazardous events on roads by designing an audio surveillance system that automatically detects perilous situations such as car crashes and tire skidding. In recent years, several visual surveillance systems have been proposed for road monitoring to detect accidents, with the aim of improving safety procedures in emergency cases. However, visual information alone cannot detect certain events such as car crashes and tire skidding, especially under adverse and visually cluttered weather conditions such as snowfall, rain, and fog. Consequently, incorporating microphones and audio event detectors based on audio processing can significantly enhance the detection accuracy of such surveillance systems. This paper proposes to combine time-domain, frequency-domain, and joint time-frequency features extracted from a class of quadratic time-frequency distributions (QTFDs) to detect events on roads through audio analysis and processing. Experiments were carried out using a publicly available dataset. The experimental results confirm the effectiveness of the proposed approach for detecting hazardous events on roads, as demonstrated by a 7% improvement in accuracy over methods that use individual temporal and spectral features.
NASA Astrophysics Data System (ADS)
Wang, L.; Toshioka, T.; Nakajima, T.; Narita, A.; Xue, Z.
2017-12-01
In recent years, more and more Carbon Capture and Storage (CCS) studies have focused on seismicity monitoring. For the safety management of geological CO2 storage at Tomakomai, Hokkaido, Japan, an Advanced Traffic Light System (ATLS) combining different seismic information (magnitudes, phases, distributions, etc.) has been proposed for injection control. The primary task for the ATLS is the detection of seismic events in a long-term, continuously recorded time series. Since the time-varying signal-to-noise ratio (SNR) of a long-term record and the uneven energy distribution of seismic event waveforms make automatic seismic detection difficult, in this work an improved probabilistic autoregressive (AR) method for automatic seismic event detection is applied. This algorithm, called sequentially discounting AR learning (SDAR), identifies effective seismic events in the time series through change point detection (CPD) on the seismic record. In this method, an anomalous signal (a seismic event) is treated as a change point in the time series (the seismic record): the statistical model of the signal in the neighborhood of the event changes because of the event's occurrence, so SDAR aims to find the statistical irregularities of the record through CPD. SDAR has three advantages. (1) Noise robustness: SDAR does not use waveform attributes (such as amplitude, energy, or polarization) for signal detection, making it an appropriate technique for low-SNR data. (2) Real-time estimation: when new data appear in the record, the probability distribution models are automatically updated by SDAR for on-line processing. (3) Discounting: SDAR introduces a discounting parameter to decrease the influence of past statistics on future data, which makes it robust for non-stationary signal processing. With these three advantages, the SDAR method can handle non-stationary, time-varying long-term series and achieve real-time monitoring. Finally, we apply SDAR to a synthetic model and to Tomakomai Ocean Bottom Cable (OBC) baseline data to demonstrate the feasibility and advantages of our method.
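A minimal sketch of the SDAR idea (assuming an AR(1) model; the published algorithm uses higher orders and more careful estimators) shows all three advertised properties: it scores statistical surprise rather than amplitude, updates its model online as each sample arrives, and discounts old statistics by a factor `r`.

```python
import math

def sdar_scores(x, r=0.02):
    """Sequentially discounting AR(1) learner: maintain discounted estimates of
    the mean, lag-0/lag-1 covariances and prediction-error variance, and score
    every sample by its negative log-likelihood under the current model.
    A high score marks a statistical irregularity, i.e. a change point."""
    mu, c0, c1, sigma2 = 0.0, 1.0, 0.0, 1.0
    scores = [0.0]
    for t in range(1, len(x)):
        a = c1 / c0 if c0 > 0.0 else 0.0            # AR(1) coefficient
        err = x[t] - (mu + a * (x[t - 1] - mu))     # one-step prediction error
        scores.append(0.5 * math.log(2.0 * math.pi * sigma2)
                      + err * err / (2.0 * sigma2))
        # discounted updates: recent samples outweigh old ones by factor (1 - r)
        mu = (1.0 - r) * mu + r * x[t]
        c0 = (1.0 - r) * c0 + r * (x[t] - mu) ** 2
        c1 = (1.0 - r) * c1 + r * (x[t] - mu) * (x[t - 1] - mu)
        sigma2 = (1.0 - r) * sigma2 + r * err * err
    return scores

# Quiet low-amplitude record followed by a sudden high-amplitude event at
# sample 300: the score spikes at the statistical change, not before.
x = [0.1 * math.sin(0.3 * i) for i in range(300)]
x += [2.0 * math.sin(0.5 * i) for i in range(300, 360)]
scores = sdar_scores(x)
onset = max(range(len(scores)), key=lambda i: scores[i])
```

Because the score compares each sample against the learned local model rather than a fixed amplitude threshold, the same onset would be flagged even if the whole record were scaled down into the noise floor.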
Iterative Strategies for Aftershock Classification in Automatic Seismic Processing Pipelines
NASA Astrophysics Data System (ADS)
Gibbons, Steven J.; Kværna, Tormod; Harris, David B.; Dodge, Douglas A.
2016-04-01
Aftershock sequences following very large earthquakes present enormous challenges to near-real-time generation of seismic bulletins. The increase in analyst resources needed to relocate an inflated number of events is compounded by failures of phase-association algorithms and a significant deterioration in the quality of the underlying fully automatic event bulletins. Current processing pipelines were designed a generation ago and, owing to the computational limitations of the time, are usually limited to single passes over the raw data; with current processing capability, multiple passes over the data are feasible. Processing the raw data at each station generates parametric data streams which are then scanned by a phase-association algorithm to form event hypotheses. We consider the scenario in which a large earthquake has occurred and propose to define a region of likely aftershock activity in which events are detected and accurately located using a separate, specially targeted, semi-automatic process. This effort may focus on so-called pattern detectors, but here we demonstrate a more general grid-search algorithm which can cover wider source regions without requiring waveform similarity. Given many well-located aftershocks within our source region, we may remove all associated phases from the original detection lists prior to a new iteration of the phase-association algorithm. We provide a proof-of-concept example for the 2015 Gorkha sequence, Nepal, recorded on seismic arrays of the International Monitoring System. Even with very conservative conditions for defining event hypotheses within the aftershock source region, we can automatically remove over half of the original detections that could have been generated by Nepal earthquakes, reducing the likelihood of false associations and spurious event hypotheses. Further reductions in the number of detections in the parametric data streams are likely using correlation and subspace detectors and/or empirical matched field processing.
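The phase-removal step between iterations can be pictured as a simple filter over the station detection lists; this is an illustrative sketch with hypothetical station names and a made-up tolerance, not the pipeline's actual data structures.

```python
def filter_aftershock_phases(detections, predicted_arrivals, tol=2.0):
    """Drop every station detection that lies within `tol` seconds of a phase
    arrival predicted for an already-located aftershock, so the next pass of
    the association algorithm never sees it. `detections` holds
    (station, time_s) pairs; `predicted_arrivals` is one dict per aftershock
    mapping station -> list of predicted arrival times."""
    kept = []
    for station, t in detections:
        matched = any(abs(t - p) <= tol
                      for event in predicted_arrivals
                      for p in event.get(station, []))
        if not matched:
            kept.append((station, t))
    return kept

# One located aftershock explains two of the three detections; only the
# unexplained arrival is passed on to the next association iteration.
detections = [("ARCES", 100.0), ("ARCES", 400.0), ("NOA", 101.5)]
aftershock = {"ARCES": [99.0], "NOA": [101.0]}
remaining = filter_aftershock_phases(detections, [aftershock])
```

Shrinking the detection lists this way is what reduces the chance that the global associator builds spurious events out of leftover aftershock phases.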
Development of an algorithm for automatic detection and rating of squeak and rattle events
NASA Astrophysics Data System (ADS)
Chandrika, Unnikrishnan Kuttan; Kim, Jay H.
2010-10-01
A new algorithm for automatic detection and rating of squeak and rattle (S&R) events was developed. The algorithm utilizes the perceived transient loudness (PTL), which approximates the human perception of a transient noise. First, instantaneous specific loudness time histories are calculated over the 1-24 bark range by applying the analytic wavelet transform and the Zwicker loudness transform to the recorded noise. Transient specific loudness time histories are then obtained by removing estimated contributions of the background noise from the instantaneous specific loudness time histories, and are summed to obtain the transient loudness time history. Finally, the PTL time history is obtained by applying Glasberg and Moore temporal integration to the transient loudness time history. Detection of S&R events utilizes the PTL time history obtained by summing only the 18-24 bark components, to take advantage of the high signal-to-noise ratio in the high-frequency range. An S&R event is identified when the PTL time history exceeds a detection threshold pre-determined by a jury test, and the maximum value of the PTL time history is used for rating the event. Another jury test showed that rating performs much better when the PTL time history is obtained by summing all frequency components; rating of S&R events therefore utilizes this modified PTL time history. Two additional jury tests were conducted to validate the developed detection and rating methods. The algorithm developed in this work enables automatic detection and rating of S&R events with good accuracy and a minimal possibility of false alarms.
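The final detection-and-rating stage described above can be sketched as follows (assuming the loudness time histories are already computed; all thresholds and values are invented, not the jury-determined ones): sum the 18-24 bark bands into a high-band PTL, threshold it, and rate each event by its peak.

```python
def detect_sr_events(specific_loudness, threshold, bark_lo=18, bark_hi=24):
    """Final stages of the pipeline described above: sum the transient specific
    loudness over the 18-24 bark bands to get a high-band PTL time history,
    flag runs of samples above a detection threshold, and rate each event by
    its peak PTL. `specific_loudness[t][b]` is the loudness of bark band b+1
    at time step t."""
    ptl = [sum(frame[bark_lo - 1:bark_hi]) for frame in specific_loudness]
    events, start = [], None
    for t, v in enumerate(ptl):
        if v > threshold and start is None:
            start = t                                   # event onset
        elif v <= threshold and start is not None:
            events.append({"start": start, "end": t, "rating": max(ptl[start:t])})
            start = None
    if start is not None:                               # event still open at end
        events.append({"start": start, "end": len(ptl), "rating": max(ptl[start:])})
    return events

# Ten frames of faint background; a transient in the high bands at frames 5-6.
frames = [[0.01] * 24 for _ in range(10)]
for t in (5, 6):
    for b in range(17, 24):
        frames[t][b] = 0.5
events = detect_sr_events(frames, threshold=1.0)
```

Restricting detection to the 18-24 bark bands mirrors the paper's observation that the high-frequency range offers the best signal-to-noise ratio for S&R transients.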
NASA Astrophysics Data System (ADS)
Kushida, N.; Kebede, F.; Feitio, P.; Le Bras, R.
2016-12-01
The Preparatory Commission for the Comprehensive Nuclear-Test-Ban Treaty Organization (CTBTO) has been developing and testing NET-VISA (Arora et al., 2013), a Bayesian automatic event detection and localization program, and evaluating its performance in a realistic operational mode. In our preliminary testing at the CTBTO, NET-VISA shows better performance than the currently operating automatic localization program. However, given the CTBTO's role and its international context, a new technology should be introduced cautiously when it replaces a key piece of the automatic processing. We integrated the results of NET-VISA into the Analyst Review Station, which is used extensively by the analysts, so that they can check the accuracy and robustness of the Bayesian approach. We expect the analysts' workload to be reduced because of the better performance of NET-VISA in finding missed events and assembling a more complete set of stations than the current system, which has been operating for nearly twenty years. The results of a series of tests indicate that the expectations raised by the automatic tests, which show an overall overlap improvement of 11%, meaning that the missed-events rate is cut by 42%, hold for the integrated interactive module as well. Analysts find new events, beyond those analyzed through the standard procedures, that qualify for the CTBTO Reviewed Event Bulletin. Arora, N., Russell, S., and Sudderth, E., NET-VISA: Network Processing Vertically Integrated Seismic Analysis, 2013, Bull. Seismol. Soc. Am., 103, 709-729.
Automatic fall detection using wearable biomedical signal measurement terminal.
Nguyen, Thuy-Trang; Cho, Myeong-Chan; Lee, Tae-Soo
2009-01-01
In our study, we developed a mobile waist-mounted device that monitors the subject's acceleration signal, detects fall events in real time with high accuracy, and automatically sends an emergency message to a remote server via a CDMA module. When a fall event happens, the system also generates a 50 Hz alarm sound to alert other people until the subject can sit up or stand up. A Kionix KXM52-1050 tri-axial accelerometer and a Bellwave BSM856 CDMA standalone modem were used to detect and manage fall events. We used not only a simple threshold algorithm but also several supporting methods to increase the accuracy of our system (nearly 100% in a laboratory environment). Timely fall detection can prevent regrettable deaths due to the long-lie effect and therefore increases the independence of elderly people in an unsupervised living environment.
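A threshold rule of the kind mentioned above might look like the following sketch; the specific thresholds and the post-impact stillness check are illustrative assumptions, not the paper's calibrated algorithm.

```python
import math

def detect_fall(samples, impact_g=2.5, still_g=0.3, still_window=10):
    """Flag a fall when the acceleration magnitude spikes above `impact_g`
    and the following `still_window` samples stay near 1 g, i.e. the subject
    is lying still after the impact (the long-lie situation the abstract
    warns about). `samples` holds (x, y, z) accelerations in g."""
    mags = [math.sqrt(x * x + y * y + z * z) for x, y, z in samples]
    for i, m in enumerate(mags):
        if m >= impact_g:
            after = mags[i + 1:i + 1 + still_window]
            if len(after) == still_window and all(abs(a - 1.0) < still_g for a in after):
                return i                    # index of the impact sample
    return None

# Normal activity (~1 g), an impact spike, then lying still: fall at sample 20.
walking = [(0.0, 0.0, 1.0)] * 20
samples = walking + [(0.0, 0.0, 3.0)] + [(0.0, 0.0, 1.02)] * 15
```

The stillness check is one of the "supporting methods" a pure spike threshold needs: jumping or sitting down hard also produces impact-sized spikes but is followed by continued motion rather than a long lie.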
Introduction of the ASGARD Code
NASA Technical Reports Server (NTRS)
Bethge, Christian; Winebarger, Amy; Tiwari, Sanjiv; Fayock, Brian
2017-01-01
ASGARD stands for 'Automated Selection and Grouping of events in AIA Regional Data'. The code is a refinement of the event detection method in Ugarte-Urra & Warren (2014). It is intended to automatically detect and group brightenings ('events') in the AIA EUV channels, to record event parameters, and to find related events over multiple channels. Ultimately, the goal is to automatically determine heating and cooling timescales in the corona and to significantly increase statistics in this respect. The code is written in IDL and requires the SolarSoft library. It is parallelized and can run with multiple CPUs. Input files are regions of interest (ROIs) in time series of AIA images from the JSOC cutout service (http://jsoc.stanford.edu/ajax/exportdata.html). The ROIs need to be tracked, co-registered, and limited in time (typically 12 hours).
Alsep data processing: How we processed Apollo Lunar Seismic Data
NASA Technical Reports Server (NTRS)
Latham, G. V.; Nakamura, Y.; Dorman, H. J.
1979-01-01
The Apollo lunar seismic station network gathered data continuously at a rate of 3 × 10⁸ bits per day for nearly eight years until its termination in September 1977. The data were processed and analyzed using a PDP-15 minicomputer. On average, 1500 long-period seismic events were detected yearly. Automatic event detection and identification schemes proved unsuccessful because of occasional high noise levels and, above all, the risk of overlooking unusual natural events. The processing procedures finally settled on consist of first plotting all the data on a compressed time scale, visually picking events from the plots, transferring event data to separate sets of tapes, and performing detailed analyses on the latter. Many problems remain, especially in the automatic processing of extraterrestrial seismic signals.
Contribution of Infrasound to IDC Reviewed Event Bulletin
NASA Astrophysics Data System (ADS)
Bittner, Paulina; Polich, Paul; Gore, Jane; Ali, Sherif Mohamed; Medinskaya, Tatiana; Mialle, Pierrick
2016-04-01
Until 2003, two waveform technologies, i.e., seismic and hydroacoustic, were used to detect and locate events included in the International Data Centre (IDC) Reviewed Event Bulletin (REB). The first atmospheric event was published in the REB in 2003, but infrasound detections could not be used by the Global Association (GA) software due to the unmanageably high number of spurious associations. Offline improvements to the automatic processing took place to reduce the number of false detections to a reasonable level. In February 2010 the infrasound technology was reintroduced into IDC operations and has since contributed to both automatic and reviewed IDC bulletins. The primary contribution of infrasound technology is to detect atmospheric events. These events may also be observed at seismic stations, which significantly improves event location. Examples of REB events detected by the International Monitoring System (IMS) infrasound network include fireballs (e.g., the Bangkok fireball, 2015), volcanic eruptions (e.g., Calbuco, Chile, 2015), and large surface explosions (e.g., Tianjin, China, 2015). Quarry blasts and large earthquakes belong to events primarily recorded at seismic stations of the IMS network but are often detected at infrasound stations as well. The presence of an infrasound detection associated with an event from a mining area indicates a surface explosion. Satellite imaging and a database of active mines can be used to confirm the origin of such events. This presentation will summarize the contribution of 6 years of infrasound data to IDC bulletins and provide examples of events recorded by the IMS infrasound network. Results of this study may help to improve the location of small events with observations from infrasound stations.
NASA Astrophysics Data System (ADS)
Custodio, S.; Matos, C.; Grigoli, F.; Cesca, S.; Heimann, S.; Rio, I.
2015-12-01
Seismic data processing is currently undergoing a step change, benefitting from high-volume datasets and advanced computing power. In the last decade, a permanent seismic network of 30 broadband stations, complemented by dense temporary deployments, covered mainland Portugal. This outstanding regional coverage now enables the computation of a high-resolution image of the seismicity of Portugal, which contributes to fitting together the pieces of the regional seismo-tectonic puzzle. Although traditional manual inspections are valuable for refining automatic results, they are impracticable with the big data volumes now available. When conducted alone they are also less objective, since the criteria are defined by the analyst. In this work we present CatchPy, a scanning algorithm to detect earthquakes in continuous datasets. Our main goal is to implement an automatic earthquake detection and location routine in order to have a tool to quickly process large data sets while at the same time detecting low-magnitude earthquakes (i.e., lowering the detection threshold). CatchPy is designed to produce an event database that can be easily located using existing location codes (e.g., Grigoli et al. 2013, 2014). We use CatchPy to perform automatic detection and location of earthquakes that occurred in the Alentejo region (southern Portugal), taking advantage of a dense seismic network deployed in the region for two years during the DOCTAR experiment. Results show that our automatic procedure is particularly suitable for small-aperture networks. The event detection is performed by continuously computing the short-term-average/long-term-average of two different characteristic functions (CFs). For the P phases we use a CF based on the vertical energy trace, while for S phases we use a CF based on the maximum eigenvalue of the instantaneous covariance matrix (Vidale 1991).
Seismic event location is performed by waveform coherence analysis, scanning different hypocentral coordinates (Grigoli et al. 2013, 2014). The reliability of automatic detections, phase pickings, and locations is tested through quantitative comparison with manual results. This work is supported by project QuakeLoc, reference PTDC/GEO-FIQ/3522/2012.
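The STA/LTA trigger at the heart of this style of detector can be sketched as follows. Window lengths and the trigger threshold are illustrative; the characteristic function for P phases would be the squared vertical trace, as described above.

```python
def sta_lta(cf, n_sta, n_lta):
    """Short-term-average / long-term-average ratio of a characteristic
    function (CF), e.g. cf[i] = z[i]**2 for the vertical energy trace.
    Entries before a full LTA window is available are 0."""
    ratios = [0.0] * len(cf)
    for i in range(n_lta, len(cf)):
        sta = sum(cf[i - n_sta:i]) / n_sta
        lta = sum(cf[i - n_lta:i]) / n_lta
        ratios[i] = sta / lta if lta > 0 else 0.0
    return ratios

def detect(cf, n_sta=5, n_lta=50, threshold=4.0):
    """Indices where the STA/LTA ratio first crosses the trigger threshold."""
    r = sta_lta(cf, n_sta, n_lta)
    return [i for i in range(1, len(r)) if r[i] >= threshold > r[i - 1]]
```

A sudden rise in CF energy makes the short-term average outrun the long-term average, so the ratio spikes at phase onsets while slowly varying noise leaves it near 1.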
Bridging the semantic gap in sports
NASA Astrophysics Data System (ADS)
Li, Baoxin; Errico, James; Pan, Hao; Sezan, M. Ibrahim
2003-01-01
One of the major challenges facing current media management systems and the related applications is the so-called "semantic gap" between the rich meaning that a user desires and the shallowness of the content descriptions that are automatically extracted from the media. In this paper, we address the problem of bridging this gap in the sports domain. We propose a general framework for indexing and summarizing sports broadcast programs. The framework is based on a high-level model of sports broadcast video using the concept of an event, defined according to domain-specific knowledge for different types of sports. Within this general framework, we develop automatic event detection algorithms that are based on automatic analysis of the visual and aural signals in the media. We have successfully applied the event detection algorithms to different types of sports including American football, baseball, Japanese sumo wrestling, and soccer. Event modeling and detection contribute to the reduction of the semantic gap by providing rudimentary semantic information obtained through media analysis. We further propose a novel approach, which makes use of independently generated rich textual metadata, to fill the gap completely through synchronization of the information-laden textual data with the basic event segments. An MPEG-7 compliant prototype browsing system has been implemented to demonstrate semantic retrieval and summarization of sports video.
NASA Astrophysics Data System (ADS)
Ziegler, A.; Balch, R. S.; Knox, H. A.; Van Wijk, J. W.; Draelos, T.; Peterson, M. G.
2016-12-01
We present results (e.g., seismic detections and STA/LTA detection parameters) from a continuous downhole seismic array in the Farnsworth Field, an oil field in Northern Texas that hosts an ongoing carbon capture, utilization, and storage project. Specifically, we evaluate data from a passive vertical monitoring array consisting of 16 levels of 3-component 15 Hz geophones installed in the field and continuously recording since January 2014. This detection database is directly compared to ancillary data (i.e., wellbore pressure) to determine whether there is any relationship between seismic observables and CO2 injection and pressure maintenance in the field. Of particular interest is the detection of relatively low-amplitude signals constituting long-period long-duration (LPLD) events that may be associated with slow shear slip analogous to low-frequency tectonic tremor. While this category of seismic event provides great insight into the dynamic behavior of the pressurized subsurface, it is inherently difficult to detect. To automatically detect seismic events using effective data processing parameters, an automated sensor tuning (AST) algorithm developed by Sandia National Laboratories is being utilized. AST exploits ideas from neuro-dynamic programming (reinforcement learning) to automatically self-tune and determine optimal detection parameter settings. AST adapts in near real time to changing conditions and automatically self-tunes a signal detector to identify (detect) only signals from events of interest, leading to a reduction in both the number of missed legitimate event detections and the number of false event detections. Funding for this project is provided by the U.S. Department of Energy's (DOE) National Energy Technology Laboratory (NETL) through the Southwest Regional Partnership on Carbon Sequestration (SWP) under Award No. DE-FC26-05NT42591. Additional support has been provided by site operator Chaparral Energy, L.L.C. and Schlumberger Carbon Services.
Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.
Automatic detection of lexical change: an auditory event-related potential study.
Muller-Gass, Alexandra; Roye, Anja; Kirmse, Ursula; Saupe, Katja; Jacobsen, Thomas; Schröger, Erich
2007-10-29
We investigated the detection of rare task-irrelevant changes in the lexical status of speech stimuli. Participants performed a nonlinguistic task on word and pseudoword stimuli that occurred, in separate conditions, rarely or frequently. Task performance for pseudowords deteriorated relative to words, suggesting unintentional lexical analysis. Furthermore, rare word and pseudoword changes had a similar effect on the event-related potentials, starting as early as 165 ms. This is the first demonstration of the automatic detection of a change in lexical status that is not based on a co-occurring acoustic change. We propose that, following lexical analysis of the incoming stimuli, a mental representation of the lexical regularity is formed and used as a template against which lexical change can be detected.
ERIC Educational Resources Information Center
Taft, Laritza M.
2010-01-01
In its report "To Err Is Human," the Institute of Medicine recommended the implementation of internal and external, voluntary and mandatory automatic reporting systems to increase the detection of adverse events. Knowledge Discovery in Databases (KDD) allows the detection of patterns and trends that would be hidden or less detectable if analyzed by…
Adaptive Sensor Tuning for Seismic Event Detection in Environment with Electromagnetic Noise
NASA Astrophysics Data System (ADS)
Ziegler, Abra E.
The goal of this research is to detect possible microseismic events at a carbon sequestration site. Data recorded on a continuous downhole microseismic array in the Farnsworth Field, an oil field in Northern Texas that hosts an ongoing carbon capture, utilization, and storage project, were evaluated using machine learning and reinforcement learning techniques to determine their effectiveness at seismic event detection on a dataset with electromagnetic noise. The data were recorded from a passive vertical monitoring array consisting of 16 levels of 3-component 15 Hz geophones installed in the field and continuously recording since January 2014. Electromagnetic and other noise recorded on the array has significantly impacted the utility of the data and it was necessary to characterize and filter the noise in order to attempt event detection. Traditional detection methods using short-term average/long-term average (STA/LTA) algorithms were evaluated and determined to be ineffective because of changing noise levels. To improve the performance of event detection and automatically and dynamically detect seismic events using effective data processing parameters, an adaptive sensor tuning (AST) algorithm developed by Sandia National Laboratories was utilized. AST exploits neuro-dynamic programming (reinforcement learning) trained with historic event data to automatically self-tune and determine optimal detection parameter settings. The key metric that guides the AST algorithm is consistency of each sensor with its nearest neighbors: parameters are automatically adjusted on a per station basis to be more or less sensitive to produce consistent agreement of detections in its neighborhood. The effects that changes in neighborhood configuration have on signal detection were explored, as it was determined that neighborhood-based detections significantly reduce the number of both missed and false detections in ground-truthed data. 
The performance of the AST algorithm was quantitatively evaluated during a variety of noise conditions, and seismic detections were identified using AST and compared to ancillary injection data. During a period of CO2 injection in a well near the monitoring array, 82% of seismic events were accurately detected, 13% of events were missed, and 5% of detections were determined to be false. Additionally, seismic risk was evaluated from the stress field and faulting regime at FWU to determine the likelihood of pressure perturbations triggering slip on previously mapped faults. Faults oriented NW-SE were identified as requiring the smallest pore-pressure changes to trigger slip; faults oriented N-S may also be reactivated, although this is less likely.
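The neighborhood-consistency principle described above can be illustrated with a toy tuning pass: a station that misses events its neighbors see gets a lower threshold, and one that triggers on events its neighbors do not see gets a higher one. The data structures, update rule, and step size here are our own illustration, not Sandia's AST implementation.

```python
def tune_thresholds(detections, neighbors, thresholds, step=0.1):
    """One illustrative tuning pass toward neighborhood consistency.
    detections: {station: set of event ids the station triggered on}
    neighbors:  {station: list of neighboring stations}
    thresholds: {station: current trigger threshold}
    Returns a new thresholds dict."""
    new = dict(thresholds)
    for sta, nbrs in neighbors.items():
        if not nbrs:
            continue
        # events that a majority of this station's neighbors detected
        counts = {}
        for n in nbrs:
            for ev in detections.get(n, set()):
                counts[ev] = counts.get(ev, 0) + 1
        majority = {ev for ev, c in counts.items() if c * 2 >= len(nbrs)}
        mine = detections.get(sta, set())
        missed = len(majority - mine)   # neighbors saw it, we did not
        false = len(mine - majority)    # we saw it, neighbors did not
        # lower threshold if we miss real events, raise it if we over-trigger
        new[sta] = thresholds[sta] + step * (false - missed)
    return new
```

Iterating such a pass over historic event data is, loosely, what a reinforcement-learning tuner automates: the reward is agreement with the neighborhood, and the action is the per-station parameter adjustment.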
Station Set Residual: Event Classification Using Historical Distribution of Observing Stations
NASA Astrophysics Data System (ADS)
Procopio, Mike; Lewis, Jennifer; Young, Chris
2010-05-01
Analysts working at the International Data Centre in support of treaty monitoring through the Comprehensive Nuclear-Test-Ban Treaty Organization spend a significant amount of time reviewing hypothesized seismic events produced by an automatic processing system. When reviewing these events to determine their legitimacy, analysts take a variety of approaches that rely heavily on training and past experience. One method used by analysts to gauge the validity of an event involves examining the set of stations involved in the detection of an event. In particular, leveraging past experience, an analyst can say that an event located in a certain part of the world is expected to be detected by Stations A, B, and C. Implicit in this statement is that such an event would usually not be detected by Stations X, Y, or Z. For some well understood parts of the world, the absence of one or more "expected" stations—or the presence of one or more "unexpected" stations—is correlated with a hypothesized event's legitimacy and to its survival to the event bulletin. The primary objective of this research is to formalize and quantify the difference between the observed set of stations detecting some hypothesized event, versus the expected set of stations historically associated with detecting similar nearby events close in magnitude. This Station Set Residual can be quantified in many ways, some of which are correlated with the analysts' determination of whether or not the event is valid. We propose that this Station Set Residual score can be used to screen out certain classes of "false" events produced by automatic processing with a high degree of confidence, reducing the analyst burden. Moreover, we propose that the visualization of the historically expected distribution of detecting stations can be immediately useful as an analyst aid during their review process.
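One simple way to quantify the residual described above is a Jaccard-style distance between the observed and historically expected station sets; the abstract does not specify the actual scoring function, so this is an illustrative stand-in.

```python
def station_set_residual(observed, expected):
    """Score disagreement between the stations that detected a hypothesized
    event and the stations that historically detect similar nearby events.
    0.0 = perfect match, 1.0 = no overlap (a Jaccard-distance stand-in)."""
    observed, expected = set(observed), set(expected)
    union = observed | expected
    if not union:
        return 0.0
    return 1.0 - len(observed & expected) / len(union)
```

Events whose residual exceeds a calibrated cutoff, or whose missing stations include historically reliable detectors, would then be candidates for screening before analyst review.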
Embedded security system for multi-modal surveillance in a railway carriage
NASA Astrophysics Data System (ADS)
Zouaoui, Rhalem; Audigier, Romaric; Ambellouis, Sébastien; Capman, François; Benhadda, Hamid; Joudrier, Stéphanie; Sodoyer, David; Lamarque, Thierry
2015-10-01
Public transport security is one of the main priorities of the public authorities when fighting against crime and terrorism. In this context, there is a great demand for autonomous systems able to detect abnormal events such as violent acts aboard passenger cars and intrusions when the train is parked at the depot. To this end, we present an innovative approach which aims at providing efficient automatic event detection by fusing video and audio analytics and reducing the false alarm rate compared to classical stand-alone video detection. The multi-modal system is composed of two microphones and one camera and integrates onboard video and audio analytics and fusion capabilities. On the one hand, for detecting intrusion, the system relies on the fusion of "unusual" audio events detection with intrusion detections from video processing. The audio analysis consists in modeling the normal ambience and detecting deviation from the trained models during testing. This unsupervised approach is based on clustering of automatically extracted segments of acoustic features and statistical Gaussian Mixture Model (GMM) modeling of each cluster. The intrusion detection is based on the three-dimensional (3D) detection and tracking of individuals in the videos. On the other hand, for violent events detection, the system fuses unsupervised and supervised audio algorithms with video event detection. The supervised audio technique detects specific events such as shouts. A GMM is used to catch the formant structure of a shout signal. Video analytics use an original approach for detecting aggressive motion by focusing on erratic motion patterns specific to violent events. As data with violent events is not easily available, a normality model with structured motions from non-violent videos is learned for one-class classification. A fusion algorithm based on Dempster-Shafer's theory analyses the asynchronous detection outputs and computes the degree of belief of each probable event.
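The unsupervised ambience-modeling step can be sketched with a deliberately simplified stand-in: a single Gaussian per acoustic feature instead of the clustered GMMs the system actually uses. A frame that falls far from the trained ambience is flagged as an unusual audio event; the 3-sigma cutoff is illustrative.

```python
import statistics

def train_ambience(frames):
    """Model 'normal' ambience from scalar acoustic features (e.g. frame
    energy) as a single Gaussian -- a stand-in for the paper's GMMs."""
    return statistics.mean(frames), statistics.stdev(frames)

def is_unusual(frame, model, n_sigma=3.0):
    """Flag a frame whose deviation from the ambience mean exceeds
    n_sigma standard deviations."""
    mean, std = model
    return abs(frame - mean) > n_sigma * std
```

In the full system, such audio flags would be fused with the video intrusion detections (here via Dempster-Shafer belief combination) rather than acted on alone.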
Support Vector Machine Model for Automatic Detection and Classification of Seismic Events
NASA Astrophysics Data System (ADS)
Barros, Vesna; Barros, Lucas
2016-04-01
The automated processing of multiple seismic signals to detect, localize, and classify seismic events is a central tool in both natural hazards monitoring and nuclear treaty verification. However, false detections and missed detections caused by station noise and incorrect classification of arrivals are still an issue, and events are often unclassified or poorly classified. Thus, machine learning techniques can be used in automatic processing to classify the huge database of seismic recordings and provide more confidence in the final output. Applied in the context of the International Monitoring System (IMS) - a global sensor network developed for the Comprehensive Nuclear-Test-Ban Treaty (CTBT) - we propose a fully automatic method for seismic event detection and classification based on a supervised pattern recognition technique called the Support Vector Machine (SVM). According to Kortström et al. (2015), the advantages of using SVM are its ability to handle a large number of features and its effectiveness in high-dimensional spaces. Our objective is to detect seismic events from one IMS seismic station located in an area of high seismicity and mining activity and classify them as earthquakes or quarry blasts. We expect to create a flexible and easily adjustable SVM method that can be applied to different regions and datasets. Taken a step further, accurate results for seismic stations could lead to a modification of the model and its parameters to make it applicable to other waveform technologies used to monitor nuclear explosions, such as infrasound and hydroacoustic waveforms. As authorized users, we have direct access to all IMS data and bulletins through a secure signatory account. A set of significant seismic waveforms containing different types of events (e.g., earthquakes, quarry blasts) and noise is being analysed to train the model and learn the typical signal pattern of these events.
Moreover, comparing the performance of the support-vector network to various classical learning algorithms previously used in seismic detection and classification is an essential final step in analyzing the advantages and disadvantages of the model.
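As a concrete sketch of the classifier (not the authors' code), here is a minimal linear SVM trained with Pegasos-style sub-gradient descent on the hinge loss. The feature vectors and the labels (+1 earthquake, -1 quarry blast) are illustrative; a production system would use an optimized SVM library, typically with kernels.

```python
import random

def train_svm(xs, ys, lam=0.01, epochs=200, seed=0):
    """Pegasos-style stochastic sub-gradient descent for a linear SVM
    (hinge loss + L2 regularization; no bias term, for simplicity).
    xs: list of feature vectors; ys: labels in {+1, -1}."""
    rng = random.Random(seed)
    w = [0.0] * len(xs[0])
    t = 0
    for _ in range(epochs):
        order = list(range(len(xs)))
        rng.shuffle(order)
        for i in order:
            t += 1
            eta = 1.0 / (lam * t)                  # decaying step size
            margin = ys[i] * sum(wj * xj for wj, xj in zip(w, xs[i]))
            if margin < 1:                          # hinge-loss violation
                w = [(1 - eta * lam) * wj + eta * ys[i] * xj
                     for wj, xj in zip(w, xs[i])]
            else:                                   # regularization shrinkage only
                w = [(1 - eta * lam) * wj for wj in w]
    return w

def predict(w, x):
    """Sign of the decision function: +1 earthquake, -1 quarry blast."""
    return 1 if sum(wj * xj for wj, xj in zip(w, x)) >= 0 else -1
```

Real inputs would be waveform-derived features (e.g. spectral ratios or amplitude measures per station); the training loop itself is what the SVM approach contributes.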
Self-similarity Clustering Event Detection Based on Triggers Guidance
NASA Astrophysics Data System (ADS)
Zhang, Xianfei; Li, Bicheng; Tian, Yuxuan
Traditional methods of Event Detection and Characterization (EDC) treat event detection as a classification problem. They use words as samples to train a classifier, which can lead to an imbalance between positive and negative samples. This approach also suffers from data sparseness when the corpus is small. Rather than classifying events with words as samples, this paper clusters events when judging event types. It uses self-similarity to converge on the value of K in the K-means algorithm under the guidance of event triggers, and optimizes the clustering algorithm. Then, combining named entities with their relative position information, the new method pinpoints the exact event type. The new method avoids the dependence on event templates found in traditional methods, and its event detection results can be readily used in automatic text summarization, text retrieval, and topic detection and tracking.
Polepalli Ramesh, Balaji; Belknap, Steven M; Li, Zuofeng; Frid, Nadya; West, Dennis P
2014-01-01
Background The Food and Drug Administration's (FDA) Adverse Event Reporting System (FAERS) is a repository of spontaneously reported adverse drug events (ADEs) for FDA-approved prescription drugs. FAERS reports include both structured reports and unstructured narratives. The narratives often include essential information for evaluation of the severity, causality, and description of ADEs that is not present in the structured data. The timely identification of unknown toxicities of prescription drugs is an important, unsolved problem. Objective The objective of this study was to develop an annotated corpus of FAERS narratives and a biomedical named entity tagger to automatically identify ADE-related information in the FAERS narratives. Methods We developed an annotation guideline and annotated medication information and adverse event related entities in 122 FAERS narratives comprising approximately 23,000 word tokens. A named entity tagger using supervised machine learning approaches was built for detecting medication information and adverse event entities using various categories of features. Results The annotated corpus had an agreement of over 0.9 Cohen's kappa for medication and adverse event entities. The best performing tagger achieves an overall performance of 0.73 F1 score for detection of medication, adverse event, and other named entities. Conclusions In this study, we developed an annotated corpus of FAERS narratives and machine learning based models for automatically extracting medication and adverse event information from the FAERS narratives. Our study is an important step towards enriching the FAERS data for postmarketing pharmacovigilance. PMID:25600332
RoboTAP: Target priorities for robotic microlensing observations
NASA Astrophysics Data System (ADS)
Hundertmark, M.; Street, R. A.; Tsapras, Y.; Bachelet, E.; Dominik, M.; Horne, K.; Bozza, V.; Bramich, D. M.; Cassan, A.; D'Ago, G.; Figuera Jaimes, R.; Kains, N.; Ranc, C.; Schmidt, R. W.; Snodgrass, C.; Wambsganss, J.; Steele, I. A.; Mao, S.; Ment, K.; Menzies, J.; Li, Z.; Cross, S.; Maoz, D.; Shvartzvald, Y.
2018-01-01
Context. The ability to automatically select scientifically-important transient events from an alert stream of many such events, and to conduct follow-up observations in response, will become increasingly important in astronomy. With wide-angle time domain surveys pushing to fainter limiting magnitudes, the capability to follow-up on transient alerts far exceeds our follow-up telescope resources, and effective target prioritization becomes essential. The RoboNet-II microlensing program is a pathfinder project, which has developed an automated target selection process (RoboTAP) for gravitational microlensing events, which are observed in real time using the Las Cumbres Observatory telescope network. Aims: Follow-up telescopes typically have a much smaller field of view compared to surveys, therefore the most promising microlensing events must be automatically selected at any given time from an annual sample exceeding 2000 events. The main challenge is to select between events with a high planet detection sensitivity, with the aim of detecting many planets and characterizing planetary anomalies. Methods: Our target selection algorithm is a hybrid system based on estimates of the planet detection zones around a microlens. It follows automatic anomaly alerts and respects the expected survey coverage of specific events. Results: We introduce the RoboTAP algorithm, whose purpose is to select and prioritize microlensing events with high sensitivity to planetary companions. In this work, we determine the planet sensitivity of the RoboNet follow-up program and provide a working example of how a broker can be designed for a real-life transient science program conducting follow-up observations in response to alerts; we explore the issues that will confront similar programs being developed for the Large Synoptic Survey Telescope (LSST) and other time domain surveys.
National Earthquake Information Center Seismic Event Detections on Multiple Scales
NASA Astrophysics Data System (ADS)
Patton, J.; Yeck, W. L.; Benz, H.; Earle, P. S.; Soto-Cordero, L.; Johnson, C. E.
2017-12-01
The U.S. Geological Survey National Earthquake Information Center (NEIC) monitors seismicity on local, regional, and global scales using automatic picks from more than 2,000 near-real-time seismic stations. This presents unique challenges in automated event detection due to the high variability in data quality, network geometries and density, and distance-dependent variability in observed seismic signals. To lower the overall detection threshold while minimizing false detection rates, NEIC has begun to test the incorporation of new detection and picking algorithms, including multiband (Lomax et al., 2012) and kurtosis (Baillard et al., 2014) pickers, and a new Bayesian associator (Glass 3.0). The Glass 3.0 associator allows for simultaneous processing of variably scaled detection grids, each with a unique set of nucleation criteria (e.g., nucleation threshold, minimum associated picks, nucleation phases) to meet specific monitoring goals. We test the efficacy of these new tools on event detection in networks of various scales and geometries, compare our results with previous catalogs, and discuss lessons learned. For example, we find that on local and regional scales, rapid nucleation of small events may require event nucleation with both P and higher-amplitude secondary phases (e.g., S or Lg). We provide examples of the implementation of a scale-independent associator for an induced seismicity sequence (local scale), a large aftershock sequence (regional scale), and for monitoring global seismicity. Baillard, C., Crawford, W. C., Ballu, V., Hibert, C., & Mangeney, A. (2014). An automatic kurtosis-based P- and S-phase picker designed for local seismic networks. Bulletin of the Seismological Society of America, 104(1), 394-409. Lomax, A., Satriano, C., & Vassallo, M. (2012). Automatic picker developments and optimization: FilterPicker - a robust, broadband picker for real-time seismic monitoring and earthquake early warning. Seism. Res. Lett., 83, 531-540, doi: 10.1785/gssrl.83.3.531.
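The kurtosis picker cited above exploits the fact that an impulsive phase onset makes the sample distribution in a sliding window strongly non-Gaussian. The following is a much-simplified sketch of that idea, not the published algorithm (which processes the kurtosis characteristic function further before picking); the window length and threshold are illustrative.

```python
def kurtosis(window):
    """Sample excess kurtosis of a window of samples (0 for a Gaussian)."""
    n = len(window)
    mean = sum(window) / n
    var = sum((x - mean) ** 2 for x in window) / n
    if var == 0:
        return 0.0
    m4 = sum((x - mean) ** 4 for x in window) / n
    return m4 / (var * var) - 3.0

def kurtosis_picks(trace, win=20, threshold=1.5):
    """Indices where sliding-window kurtosis first exceeds the threshold:
    the impulsive onset of a phase makes the window heavy-tailed."""
    ks = [0.0] * len(trace)
    for i in range(win, len(trace)):
        ks[i] = kurtosis(trace[i - win:i])
    return [i for i in range(1, len(ks))
            if ks[i] >= threshold > ks[i - 1]]
```

Smooth background signal keeps the excess kurtosis near or below zero, so a single sharp arrival stands out strongly in the characteristic function.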
Automated Electroglottographic Inflection Events Detection. A Pilot Study.
Codino, Juliana; Torres, María Eugenia; Rubin, Adam; Jackson-Menaldi, Cristina
2016-11-01
Vocal-fold vibration can be analyzed in a noninvasive way by registering impedance changes within the glottis, through electroglottography. The morphology of the electroglottographic (EGG) signal is related to different vibratory patterns. In the literature, a characteristic knee in the descending portion of the signal has been reported. Some EGG signals do not exhibit this particular knee and have other types of events (inflection events) throughout the ascending and/or descending portion of the vibratory cycle. The goal of this work is to propose an automatic method to identify and classify these events. A computational algorithm was developed based on the mathematical properties of the EGG signal, which detects and reports events throughout the contact phase. Retrospective analysis of EGG signals obtained during routine voice evaluation of adult individuals with a variety of voice disorders was performed using the algorithm as well as human raters. Two judges, both experts in clinical voice analysis, and three general speech pathologists performed manual and visual evaluation of the sample set. The results obtained by the automatic method were compared with those of the human raters. Statistical analysis revealed a significant level of agreement. This automatic tool could allow professionals in the clinical setting to obtain an automatic quantitative and qualitative report of such events present in a voice sample, without having to manually analyze the whole EGG signal. In addition, it might provide the speech pathologist with more information that would complement the standard voice evaluation. It could also be a valuable tool in voice research.
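A generic way to locate such inflection events, assuming access to the digitized EGG trace, is to look for sign changes in the discrete second difference (curvature) of the signal. The abstract does not detail the published method's actual mathematical criteria, so this is an illustrative sketch only.

```python
def inflection_events(signal, eps=1e-6):
    """Indices where the second difference of the signal changes sign,
    i.e. candidate inflection events in the EGG cycle.  Returns the index
    of the sample just after each curvature sign change; eps ignores
    flips caused by numerical noise."""
    d2 = [signal[k + 1] - 2 * signal[k] + signal[k - 1]
          for k in range(1, len(signal) - 1)]     # d2[k] ~ curvature at k+1
    events = []
    for k in range(1, len(d2)):
        if d2[k - 1] > eps and d2[k] < -eps:
            events.append(k + 1)                  # concave-up -> concave-down
        elif d2[k - 1] < -eps and d2[k] > eps:
            events.append(k + 1)                  # concave-down -> concave-up
    return events
```

Restricting the search to the ascending or descending portion of each vibratory cycle, as the paper does, would be a natural next step on top of this primitive.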
Introduction of the ASGARD code (Automated Selection and Grouping of events in AIA Regional Data)
NASA Astrophysics Data System (ADS)
Bethge, Christian; Winebarger, Amy; Tiwari, Sanjiv K.; Fayock, Brian
2017-08-01
We have developed the ASGARD code to automatically detect and group brightenings ("events") in AIA data. The event selection and grouping can be optimized for the respective dataset with a multitude of control parameters. The code was initially written for IRIS data, but has since been optimized for AIA. However, the underlying algorithm is not limited to either and could be used for other data as well. Results from datasets in various AIA channels show that brightenings are reliably detected and that coherent coronal structures can be isolated by using the obtained information about the start, peak, and end times of events. We are presently working on a follow-up algorithm to automatically determine the heating and cooling timescales of coronal structures. This will be done by correlating the information from different AIA channels with different temperature responses. We will present the code and preliminary results.
Real-time Automatic Detectors of P and S Waves Using Singular Values Decomposition
NASA Astrophysics Data System (ADS)
Kurzon, I.; Vernon, F.; Rosenberger, A.; Ben-Zion, Y.
2013-12-01
We implement a new method for the automatic detection of the primary P and S phases using Singular Value Decomposition (SVD) analysis. The method is based on a real-time iteration algorithm of Rosenberger (2010) for the SVD of three-component seismograms. Rosenberger's algorithm identifies the incidence angle by applying SVD and separates the waveforms into their P and S components. We use the same algorithm with the modification that we filter the waveforms prior to the SVD, and then apply SNR (signal-to-noise ratio) detectors for picking the P and S arrivals on the new filtered, SVD-separated channels. A recent deployment in the San Jacinto Fault Zone (SJFZ) area provides a very dense seismic network that allows us to test the detection algorithm in diverse settings, such as events with different source mechanisms, stations with different site characteristics, and ray paths that diverge from the SVD approximation used in the algorithm (e.g., rays propagating within the fault and recorded on linear arrays crossing the fault). We have found that a Butterworth band-pass filter of 2-30 Hz, with four poles at each of the corner frequencies, shows the best performance across a large variety of events and stations within the SJFZ. Using the SVD detectors we obtain a similar number of P and S picks, which is rare for ordinary SNR detectors. For the actual real-time operation of the ANZA and SJFZ real-time seismic networks, the above filter (2-30 Hz) also shows very impressive performance, tested on many events and several aftershock sequences in the region, from the MW 5.2 of June 2005, through the MW 5.4 of July 2010, to the MW 4.7 of March 2013. Here we show the results of testing the detectors on the most complex and intense aftershock sequence, the MW 5.2 of June 2005, in which during the very first hour there were ~4 events per minute.
This aftershock sequence was thoroughly reviewed by several analysts, who identified 294 events in the first hour, located in a condensed cluster around the main shock. We used this hour of events to fine-tune the automatic SVD detection, association and location of the real-time system, reaching 37% automatic identification and location of events with a minimum of 10 stations per event; all events fell within the same condensed cluster and there were no false events or large offsets in their locations. An ordinary SNR detector did not exceed 11% success with a minimum of 8 stations per event, produced 2 false events, and gave a wider spread of events (not within the reviewed cluster). One of the main advantages of the SVD detectors for real-time operations is the actual separation between the P and S components, which significantly reduces the noise of picks detected by ordinary SNR detectors. The new method has been applied to a significant number of events within the SJFZ over the past 8 years, and is now in the final stage of real-time implementation at UCSD for the ANZA and SJFZ networks, tuned for automatic detection and location of local events.
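The filter, SVD-separate, then SNR-pick chain described in this abstract can be sketched as follows. This is a hedged illustration only: the streaming SVD of Rosenberger (2010) is replaced by a batch SVD over the whole window, the 2-30 Hz four-pole Butterworth follows the abstract, and the synthetic trace, window lengths and STA/LTA threshold are invented for the example.

```python
# Sketch of the filter -> SVD separation -> SNR picking chain.
# The synthetic P onset at 10 s on the vertical channel is illustrative.
import numpy as np
from scipy.signal import butter, sosfiltfilt

def bandpass(x, fs, lo=2.0, hi=30.0, order=4):
    """Zero-phase Butterworth band-pass, four poles at each corner."""
    sos = butter(order, [lo, hi], btype="band", fs=fs, output="sos")
    return sosfiltfilt(sos, x, axis=-1)

def svd_separate(zne):
    """Split 3-C data (3 x N) into P-like and S-like channels.

    The leading left singular vector approximates the dominant (P)
    polarization; projecting onto it gives the P channel, and the
    residual energy gives the S channel.
    """
    u, s, vt = np.linalg.svd(zne, full_matrices=False)
    p_dir = u[:, 0]                      # dominant polarization direction
    p_chan = p_dir @ zne                 # projection onto P polarization
    s_chan = np.linalg.norm(zne - np.outer(p_dir, p_chan), axis=0)
    return p_chan, s_chan

def snr_pick(x, fs, sta=0.5, lta=5.0, thresh=5.0):
    """Return the first sample where STA/LTA exceeds thresh, else None."""
    ns, nl = int(sta * fs), int(lta * fs)
    e = x ** 2
    for i in range(nl + ns, len(x)):
        ratio = e[i - ns:i].mean() / (e[i - nl - ns:i - ns].mean() + 1e-20)
        if ratio > thresh:
            return i
    return None

fs = 100.0
t = np.arange(0, 30.0, 1 / fs)
rng = np.random.default_rng(0)
zne = 0.05 * rng.standard_normal((3, t.size))
onset = int(10.0 * fs)                   # synthetic P onset at 10 s
zne[0, onset:onset + 200] += np.sin(2 * np.pi * 8 * t[:200])

p_chan, s_chan = svd_separate(bandpass(zne, fs))
pick = snr_pick(p_chan, fs)
```

Running this picks the synthetic P arrival on the SVD-separated P channel; a real implementation would run the SVD recursively sample by sample rather than in batch.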
Automatic Detection and Classification of Unsafe Events During Power Wheelchair Use.
Pineau, Joelle; Moghaddam, Athena K; Yuen, Hiu Kim; Archambault, Philippe S; Routhier, François; Michaud, François; Boissy, Patrick
2014-01-01
Using a powered wheelchair (PW) is a complex task requiring advanced perceptual and motor control skills. Unfortunately, PW incidents and accidents are not uncommon and their consequences can be serious. The objective of this paper is to develop technological tools that can be used to characterize a wheelchair user's driving behavior under various settings. In the experiments conducted, PWs are outfitted with a datalogging platform that records, in real-time, the 3-D acceleration of the PW. Data collection was conducted over 35 different activities, designed to capture a spectrum of PW driving events performed at different speeds (collisions with fixed or moving objects, rolling on an inclined plane, and rolling across multiple types of obstacles). The data was processed using time-series analysis and data mining techniques to automatically detect and identify the different events. We compared the classification accuracy using four different types of time-series features: 1) time-delay embeddings; 2) time-domain characterization; 3) frequency-domain features; and 4) wavelet transforms. In the analysis, we compared the classification accuracy obtained when distinguishing between safe and unsafe events during each of the 35 different activities. For the purposes of this study, unsafe events were defined as activities containing collisions against objects at different speeds, and the remainder were defined as safe events. We were able to accurately detect 98% of unsafe events, with a low (12%) false positive rate, using only five examples of each activity. This proof-of-concept study shows that the proposed approach has the potential of capturing, based on limited input from embedded sensors, contextual information on PW use, and of automatically characterizing a user's PW driving behavior.
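The windowed-feature pipeline described above can be sketched in a few lines. This is a hedged toy: time- and frequency-domain features are extracted from 3-axis acceleration windows, but a nearest-centroid rule stands in for the paper's SVM and time-delay-embedding machinery, and the synthetic "collision" (a sharp spike) is invented for the example.

```python
# Toy safe/unsafe classification from 3-axis acceleration windows.
import numpy as np

def features(window):
    """Time- and frequency-domain features for one (3, N) accel window."""
    mag = np.linalg.norm(window, axis=0)          # acceleration magnitude
    spec = np.abs(np.fft.rfft(mag))
    return np.array([
        mag.mean(), mag.std(), mag.max(),          # time-domain features
        spec[1:].max(),                            # peak spectral magnitude
    ])

def fit_centroids(windows, labels):
    """One feature centroid per class; a stand-in for training an SVM."""
    X = np.array([features(w) for w in windows])
    return {c: X[labels == c].mean(axis=0) for c in np.unique(labels)}

def predict(centroids, window):
    f = features(window)
    return min(centroids, key=lambda c: np.linalg.norm(f - centroids[c]))

rng = np.random.default_rng(1)
def make(unsafe):
    w = 0.1 * rng.standard_normal((3, 256))        # normal driving noise
    if unsafe:                                     # collision: sharp spike
        w[:, 120:126] += 5.0
    return w

train = [make(u) for u in (0, 0, 0, 1, 1, 1)]      # five-ish examples/class
labels = np.array([0, 0, 0, 1, 1, 1])
cent = fit_centroids(train, labels)
pred_unsafe = predict(cent, make(True))
pred_safe = predict(cent, make(False))
```

A real system would add wavelet and time-delay-embedding features and a trained SVM, as the paper does; the structure (window, featurize, classify) is the same.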
Comparing Automatic CME Detections in Multiple LASCO and SECCHI Catalogs
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hess, Phillip; Colaninno, Robin C., E-mail: phillip.hess.ctr@nrl.navy.mil, E-mail: robin.colaninno@nrl.navy.mil
With the creation of numerous automatic detection algorithms, a number of different catalogs of coronal mass ejections (CMEs) spanning the entirety of the Solar and Heliospheric Observatory (SOHO) Large Angle Spectrometric Coronagraph (LASCO) mission have been created. Some of these catalogs have been further expanded for use on data from the Solar Terrestrial Relations Observatory (STEREO) Sun Earth Connection Coronal and Heliospheric Investigation (SECCHI) as well. We compare the results from different automatic detection catalogs (Solar Eruption Event Detection System (SEEDS), Computer Aided CME Tracking (CACTus), and Coronal Image Processing (CORIMP)) to ensure the consistency of detections in each. Over the entire span of the LASCO catalogs, the automatic catalogs are well correlated with one another, to a level greater than 0.88. Focusing on just periods of higher activity, these correlations remain above 0.7. We establish the difficulty in comparing detections over the course of LASCO observations due to the change in the instrument image cadence in 2010. Without adjusting catalogs for the cadence, CME detection rates show a large spike in cycle 24, despite a notable drop in other indices of solar activity. The output from SEEDS, using a consistent image cadence, shows that the CME rate has not significantly changed relative to sunspot number in cycle 24. These data, and mass calculations from CORIMP, lead us to conclude that any apparent increase in CME rate is a result of the change in cadence. We study detection characteristics of CMEs, discussing potential physical changes in events between cycles 23 and 24. We establish that, for detected CMEs, physical parameters can also be sensitive to the cadence.
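The catalog-comparison logic above (correlate monthly counts, then normalize by cadence before comparing rates across the 2010 cadence change) can be illustrated with made-up numbers. Every value below is invented; the point is only that dividing counts by images per month removes the spurious rate jump a cadence doubling produces.

```python
# Illustration: correlate two catalogs' monthly CME counts, then show
# how a cadence change inflates raw counts and how per-image rates fix it.
import numpy as np

rng = np.random.default_rng(2)
true_rate = 50 + 30 * np.sin(np.linspace(0, 4 * np.pi, 120))  # toy cycle
cat_a = rng.poisson(true_rate)            # catalog A detections/month
cat_b = rng.poisson(0.8 * true_rate)      # catalog B: lower efficiency

corr = np.corrcoef(cat_a, cat_b)[0, 1]    # catalogs track each other

# Doubling the image cadence after month 60 inflates raw counts in a
# cadence-sensitive catalog; dividing by images/month undoes it.
images = np.where(np.arange(120) < 60, 1.0, 2.0)
inflated = rng.poisson(true_rate * images)
per_image_rate = inflated / images
```

With a consistent normalization, the two halves of the series have comparable rates even though the raw counts double, which mirrors the paper's conclusion about the apparent cycle-24 spike.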
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brunton, Steven
Optical systems provide valuable information for evaluating interactions and associations between organisms and MHK energy converters and for capturing potentially rare encounters between marine organisms and MHK devices. The deluge of optical data from cabled monitoring packages makes expert review time-consuming and expensive. We propose algorithms and a processing framework to automatically extract events of interest from underwater video. The open-source software framework consists of background subtraction, filtering, feature extraction and hierarchical classification algorithms. This classification pipeline was validated on real-world data collected with an experimental underwater monitoring package. An event detection rate of 100% was achieved using robust principal components analysis (RPCA), Fourier feature extraction and a support vector machine (SVM) binary classifier. The detected events were then further classified into more complex classes: algae | invertebrate | vertebrate, one species | multiple species of fish, and interest rank. Greater than 80% accuracy was achieved using a combination of machine learning techniques.
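The detection stage of such a pipeline can be reduced to a few lines. This is a toy version only: a per-pixel median background stands in for the robust-PCA low-rank model, the "organism" is a synthetic bright blob, and the SVM classification stage is omitted.

```python
# Toy underwater-video event detection: estimate a static background,
# subtract it, and flag frames whose residual energy exceeds a threshold.
import numpy as np

rng = np.random.default_rng(3)
frames = 10 + rng.standard_normal((100, 32, 32))   # noisy static scene
for k in range(40, 50):                            # blob crosses frames 40-49
    frames[k, 10:16, k - 30:k - 24] += 8.0

background = np.median(frames, axis=0)             # low-rank stand-in (RPCA in the paper)
residual = np.abs(frames - background)
energy = residual.reshape(len(frames), -1).sum(axis=1)

# Threshold from event-free frames; frames above it are detected events.
threshold = energy[:30].mean() + 5 * energy[:30].std()
events = np.flatnonzero(energy > threshold)
```

In the real framework the flagged frames would then be passed to feature extraction and the hierarchical SVM classifiers.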
Iterative Strategies for Aftershock Classification in Automatic Seismic Processing Pipelines
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gibbons, Steven J.; Kvaerna, Tormod; Harris, David B.
We report that aftershock sequences following very large earthquakes present enormous challenges to the near-real-time generation of seismic bulletins. The increase in analyst resources needed to relocate an inflated number of events is compounded by failures of phase-association algorithms and a significant deterioration in the quality of the underlying, fully automatic event bulletins. Current processing pipelines were designed a generation ago and, due to computational limitations of the time, are usually limited to single passes over the raw data. With current processing capability, multiple passes over the data are feasible. Processing the raw data at each station currently generates parametric data streams that are then scanned by a phase-association algorithm to form event hypotheses. We consider the scenario in which a large earthquake has occurred and propose to define a region of likely aftershock activity in which events are detected and accurately located, using a separate, specially targeted semiautomatic process. This effort may focus on so-called pattern detectors, but here we demonstrate a more general grid-search algorithm that may cover wider source regions without requiring waveform similarity. Given many well-located aftershocks within our source region, we may remove all associated phases from the original detection lists prior to a new iteration of the phase-association algorithm. We provide a proof-of-concept example for the 2015 Gorkha sequence, Nepal, recorded on seismic arrays of the International Monitoring System. Even with very conservative conditions for defining event hypotheses within the aftershock source region, we can automatically remove about half of the original detections that could have been generated by Nepal earthquakes and reduce the likelihood of false associations and spurious event hypotheses.
Lastly, further reductions in the number of detections in the parametric data streams are likely, using correlation and subspace detectors and/or empirical matched field processing.
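The core iterative step (strip detections explained by well-located aftershocks before re-running phase association) is schematically simple. The sketch below uses a flat constant-velocity travel-time stand-in and invented station/event coordinates; a real pipeline would use proper travel-time tables and phase labels.

```python
# Schematic of iterative aftershock stripping: detections within a time
# tolerance of any predicted arrival from located aftershocks are removed
# from the parametric stream before the associator runs again.
import numpy as np

V = 8.0  # km/s, crude constant P-velocity stand-in

def predicted_arrival(event, station):
    """Origin time plus straight-line travel time (toy model)."""
    t0, ex, ey = event
    sx, sy = station
    return t0 + np.hypot(ex - sx, ey - sy) / V

def strip_associated(detections, events, stations, tol=1.5):
    """Keep only detections not within tol seconds of a predicted arrival."""
    keep = []
    for station_id, t in detections:
        preds = [predicted_arrival(ev, stations[station_id]) for ev in events]
        if min(abs(t - p) for p in preds) > tol:
            keep.append((station_id, t))
    return keep

stations = {0: (0.0, 0.0), 1: (300.0, 0.0), 2: (0.0, 400.0)}
aftershock = (100.0, 50.0, 60.0)   # origin time, x, y (invented)
detections = (
    [(s, predicted_arrival(aftershock, xy) + 0.3) for s, xy in stations.items()]
    + [(0, 500.0), (2, 720.0)]     # unrelated arrivals survive the strip
)
remaining = strip_associated(detections, [aftershock], stations)
```

After stripping, the (much shorter) detection list is handed back to the phase associator, which now has far fewer opportunities for false associations.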
Real-time Flare Detection in Ground-Based Hα Imaging at Kanzelhöhe Observatory
NASA Astrophysics Data System (ADS)
Pötzi, W.; Veronig, A. M.; Riegler, G.; Amerstorfer, U.; Pock, T.; Temmer, M.; Polanec, W.; Baumgartner, D. J.
2015-03-01
Kanzelhöhe Observatory (KSO) regularly performs high-cadence full-disk imaging of the solar chromosphere in the Hα and Ca ii K spectral lines as well as of the solar photosphere in white light. In the frame of ESA's (European Space Agency) Space Situational Awareness (SSA) program, a new system for real-time Hα data provision and automatic flare detection was developed at KSO. The data and events detected are published in near real-time at ESA's SSA Space Weather portal (http://swe.ssa.esa.int/web/guest/kso-federated). In this article, we describe the Hα instrument, the image-recognition algorithms we developed, and their implementation into the KSO Hα observing system. We also present the evaluation results of the real-time data provision and flare detection for a period of five months. The Hα data provision worked for 99.96% of the images, with a mean time lag of four seconds between image recording and online provision. Within the given criteria for the automatic image-recognition system (at least three Hα images are needed for a positive detection), all flares with an area ≥ 50 micro-hemispheres that were located within 60° of the solar center and occurred during the KSO observing times were detected: 87 events in total. The automatically determined flare importance and brightness classes were correct in ∼85% of cases. The mean flare positions in heliographic longitude and latitude were correct to within ∼1°. The medians of the absolute differences between the automatically detected flare start and peak times and those in the official NOAA (and KSO) visual flare reports were 3 min and 1 min, respectively.
Fehre, Karsten; Plössnig, Manuela; Schuler, Jochen; Hofer-Dückelmann, Christina; Rappelsberger, Andrea; Adlassnig, Klaus-Peter
2015-01-01
The detection of adverse drug events (ADEs) is an important aspect of improving patient safety. The iMedication system employs predefined triggers associated with significant events in a patient's clinical data to automatically detect possible ADEs. We defined four clinically relevant conditions: hyperkalemia, hyponatremia, renal failure, and over-anticoagulation. These are among the most relevant ADEs in internal medicine and geriatric wards. For each patient, ADE risk scores for all four situations are calculated and compared against a threshold, and the patient is judged to require monitoring or reporting. A ward-based cockpit view summarizes the results.
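The trigger idea above (per-condition risk score, threshold, then a monitor/report decision) can be sketched as follows. All score formulas, weights, thresholds and lab names are invented for illustration and are not taken from the iMedication system.

```python
# Hedged sketch of threshold-based ADE triggering for four conditions.
def ade_risk_scores(labs):
    """Map lab values to rough 0..1 risk scores (invented formulas)."""
    clip = lambda v: min(1.0, max(0.0, v))
    return {
        "hyperkalemia": clip((labs["potassium"] - 5.0) / 2.0),
        "hyponatremia": clip((135 - labs["sodium"]) / 20.0),
        "renal_failure": clip((labs["creatinine"] - 1.2) / 3.0),
        "over_anticoagulation": clip((labs["inr"] - 3.0) / 3.0),
    }

def triage(scores, monitor=0.2, report=0.6):
    """Per-condition decision for the ward cockpit view."""
    return {c: ("report" if s >= report else "monitor" if s >= monitor else "ok")
            for c, s in scores.items()}

labs = {"potassium": 6.1, "sodium": 133, "creatinine": 1.0, "inr": 2.5}
status = triage(ade_risk_scores(labs))
```

With these invented numbers the elevated potassium is flagged for monitoring while the other three conditions remain unremarkable, mirroring the monitor/report split described in the abstract.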
NASA Astrophysics Data System (ADS)
Roessler, D.; Weber, B.; Ellguth, E.; Spazier, J.
2017-12-01
The geometry of seismic monitoring networks, site conditions and data availability, as well as monitoring targets and strategies, typically impose trade-offs between data quality, earthquake detection sensitivity, false detections and alert times. Network detection capabilities typically change with alteration of the seismic noise level by human activity or by varying weather and sea conditions. To give helpful information to operators and maintenance coordinators, gempa developed a range of tools to evaluate earthquake detection and network performance, including qceval, npeval and sceval. qceval is a module that analyzes waveform quality parameters in real-time and deactivates or reactivates data streams for automatic processing based on waveform quality thresholds. For example, thresholds can be defined for latency, delay, timing quality, spike and gap counts, and rms. As changes in the automatic processing have a direct influence on detection quality and speed, another tool called "npeval" was designed to calculate in real-time the expected time needed to detect and locate earthquakes by evaluating the effective network geometry. The effective network geometry is derived from the configuration of stations participating in the detection. The detection times are shown as an additional layer on the map and updated in real-time as soon as the effective network geometry changes. Yet another new tool, "sceval", is an automatic module that classifies located seismic events (Origins) in real-time. sceval evaluates the spatial distribution of the stations contributing to an Origin. It confirms or rejects the status of Origins, adds comments, or leaves the Origin unclassified. The comments are passed to an additional sceval plug-in where the end user can customize event types. This unique identification of real and fake events in earthquake catalogues makes it possible to lower network detection thresholds.
In real-time monitoring situations operators can limit the processing to events with unclassified Origins, reducing their workload. Classified Origins can be treated specifically by other procedures. These modules have been calibrated and fully tested by several complex seismic monitoring networks in the region of Indonesia and Northern Chile.
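The qceval idea described above is essentially a per-stream gate over quality metrics. The sketch below uses the metric names from the abstract, but every threshold value and the data structures are invented; gempa's actual configuration and API are not public in this text.

```python
# Sketch of quality-based stream gating: a stream is enabled for
# automatic processing only if every quality metric passes its threshold.
THRESHOLDS = {"latency_s": 30.0, "delay_s": 10.0, "timing_quality": 60.0,
              "gaps": 5, "spikes": 3, "rms": 1e6}          # invented values

def stream_ok(m):
    """True if the stream passes every configured quality threshold."""
    return (m["latency_s"] <= THRESHOLDS["latency_s"]
            and m["delay_s"] <= THRESHOLDS["delay_s"]
            and m["timing_quality"] >= THRESHOLDS["timing_quality"]
            and m["gaps"] <= THRESHOLDS["gaps"]
            and m["spikes"] <= THRESHOLDS["spikes"]
            and m["rms"] <= THRESHOLDS["rms"])

def update_active(streams):
    """Set of stream ids currently enabled for automatic processing."""
    return {sid for sid, m in streams.items() if stream_ok(m)}

streams = {
    "STA1": {"latency_s": 2.0, "delay_s": 1.0, "timing_quality": 90.0,
             "gaps": 0, "spikes": 0, "rms": 1200.0},
    "STA2": {"latency_s": 120.0, "delay_s": 1.0, "timing_quality": 90.0,
             "gaps": 0, "spikes": 0, "rms": 1200.0},   # too late: deactivated
}
active = update_active(streams)
```

Re-running `update_active` whenever metrics refresh gives the deactivate/reactivate behavior the abstract describes, and the resulting active set is exactly the "effective network geometry" that npeval would then evaluate.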
Automatic Classification of Extensive Aftershock Sequences Using Empirical Matched Field Processing
NASA Astrophysics Data System (ADS)
Gibbons, Steven J.; Harris, David B.; Kværna, Tormod; Dodge, Douglas A.
2013-04-01
The aftershock sequences that follow large earthquakes create considerable problems for data centers attempting to produce comprehensive event bulletins in near real-time. The greatly increased number of events requiring processing can overwhelm analyst resources and reduce the capacity for analyzing events of monitoring interest. This exacerbates a potentially reduced detection capability at key stations, due to the noise generated by the sequence, and a deterioration in the quality of the fully automatic preliminary event bulletins caused by the difficulty of associating the vast numbers of closely spaced arrivals over the network. Considerable success has been enjoyed by waveform correlation methods for the automatic identification of groups of events belonging to the same geographical source region, facilitating the more time-efficient analysis of event ensembles as opposed to individual events. There are, however, formidable challenges associated with the automation of correlation procedures. The signal generated by a very large earthquake seldom correlates well enough with the signals generated by far smaller aftershocks for a correlation detector to produce statistically significant triggers at the correct times. Correlation between events within clusters of aftershocks is significantly better, although the issues of when and how to initiate new pattern detectors are still being investigated. Empirical Matched Field Processing (EMFP) is a highly promising method for detecting event waveforms suitable as templates for correlation detectors. EMFP is a quasi-frequency-domain technique that calibrates the spatial structure of a wavefront crossing a seismic array in a collection of narrow frequency bands. The amplitude and phase weights that result are applied in a frequency-domain beamforming operation that compensates for scattering and refraction effects not properly modeled by plane-wave beams.
It has been demonstrated to outperform waveform correlation as a classifier of ripple-fired mining blasts since the narrowband procedure is insensitive to differences in the source-time functions. For sequences in which the spectral content and time-histories of the signals from the main shock and aftershocks vary greatly, the spatial structure calibrated by EMFP is an invariant that permits reliable detection of events in the specific source region. Examples from the 2005 Kashmir and 2011 Van earthquakes demonstrate how EMFP templates from the main events detect arrivals from the aftershock sequences with high sensitivity and exceptionally low false alarm rates. Classical waveform correlation detectors are demonstrated to fail for these examples. Even arrivals with SNR below unity can produce significant EMFP triggers as the spatial pattern of the incoming wavefront is identified, leading to robust detections at a greater number of stations and potentially more reliable automatic bulletins. False EMFP triggers are readily screened by scanning a space of phase shifts relative to the imposed template. EMFP has the potential to produce a rapid and robust overview of the evolving aftershock sequence such that correlation and subspace detectors can be applied semi-autonomously, with well-chosen parameter specifications, to identify and classify clusters of very closely spaced aftershocks.
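A heavily simplified illustration of the matched-field idea follows: narrowband phase weights are "calibrated" from a template event's per-band array snapshots and then applied, beamforming-style, to new data, so a wavefront with the same spatial structure stacks coherently even when its source-time function differs. The array geometry, band count, noise levels and signal model below are all toy assumptions, not the EMFP implementation of the paper.

```python
# Toy empirical matched-field detector: calibrate per-band phase weights
# from a template event, then score new snapshots by stacked beam power.
import numpy as np

rng = np.random.default_rng(4)
n_chan, n_bands = 16, 10

def snapshots(phases, amp=1.0):
    """Per-band complex array snapshots with a given inter-channel phase
    structure (the quantity EMFP calibrates), plus additive noise."""
    return (amp * np.exp(1j * phases)
            + 0.1 * (rng.standard_normal((n_bands, n_chan))
                     + 1j * rng.standard_normal((n_bands, n_chan))))

# Calibration: the template event defines empirical steering vectors.
true_phase = rng.uniform(-np.pi, np.pi, (n_bands, n_chan))
template = snapshots(true_phase)
weights = template / np.abs(template)          # unit-modulus phase weights

def emfp_power(obs):
    """Average matched-field beam power across bands (high when matched)."""
    beams = (weights.conj() * obs).sum(axis=1) / n_chan
    return float(np.mean(np.abs(beams) ** 2))

same_region = snapshots(true_phase, amp=0.5)   # weaker aftershock, same path
other = snapshots(rng.uniform(-np.pi, np.pi, (n_bands, n_chan)))
p_match = emfp_power(same_region)
p_other = emfp_power(other)
```

Even at half the template amplitude, the same-region event scores well above the mismatched one, which is the property that lets EMFP detect aftershocks whose waveforms do not correlate with the main shock.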
Real-time monitoring of clinical processes using complex event processing and transition systems.
Meinecke, Sebastian
2014-01-01
Dependencies between tasks in clinical processes are often complex and error-prone. Our aim is to describe a new approach for the automatic derivation of clinical events identified via the behaviour of IT systems using Complex Event Processing. Furthermore we map these events on transition systems to monitor crucial clinical processes in real-time for preventing and detecting erroneous situations.
Machine intelligence-based decision-making (MIND) for automatic anomaly detection
NASA Astrophysics Data System (ADS)
Prasad, Nadipuram R.; King, Jason C.; Lu, Thomas
2007-04-01
Any event deemed out-of-the-ordinary may be called an anomaly. Anomalies, by virtue of their definition, are events that occur spontaneously with no prior indication of their existence or appearance. The effects of an anomaly are typically unknown until it actually occurs, and these effects aggregate in time to show a noticeable change from the original behavior. An evolved behavior would in general be very difficult to correct unless the anomalous event that caused it can be detected early and any consequence attributed to the specific anomaly. Substantial time and effort are required to back-track the cause of abnormal behavior and to recreate the event sequence leading to it. There is therefore a critical need to automatically detect anomalous behavior as and when it occurs, and to do so with the operator in the loop. Human-machine interaction results in better machine learning and a better decision-support mechanism. This is the fundamental concept of intelligent control, where machine learning is enhanced by interaction with human operators, and vice versa. The paper discusses a revolutionary framework for the characterization, detection, identification, learning, and modeling of anomalous behavior in observed phenomena arising from a large class of unknown and uncertain dynamical systems.
Using waveform cross correlation for automatic recovery of aftershock sequences
NASA Astrophysics Data System (ADS)
Bobrov, Dmitry; Kitov, Ivan; Rozhkov, Mikhail
2017-04-01
Aftershock sequences of the largest earthquakes are difficult to recover. There can be several hundred mid-sized aftershocks per hour within a few hundred km of each other, recorded by the same stations. Moreover, these events generate thousands of reflected/refracted phases having azimuth and slowness close to those of the P-waves. Therefore, aftershock sequences with thousands of events represent a major challenge for automatic and interactive processing at the International Data Centre (IDC) of the Comprehensive Nuclear-Test-Ban Treaty Organization (CTBTO). Standard methods of detection and phase association do not use all the information contained in the signals. As a result, wrong association of first and later phases, both regular and site-specific, produces an enormous number of wrong event hypotheses and destroys valid event hypotheses in automatic IDC processing. In turn, IDC analysts have to reject false hypotheses and recreate valid ones, wasting precious human resources. At the current level of IDC catalogue completeness, the method of waveform cross correlation (WCC) can resolve most of the detection and association problems by fully utilizing the similarity of waveforms generated by aftershocks. Array seismic stations of the International Monitoring System (IMS) can enhance the performance of the WCC method: they reduce station-specific detection thresholds, allow accurate estimates of signal attributes, including relative magnitude, and effectively suppress irrelevant arrivals. We have developed and tested a prototype of an aftershock tool matching all IDC processing requirements and merged it with the current IDC pipeline.
This tool includes the creation of master events consisting of real or synthetic waveform templates at ten or more IMS stations; cross correlation (CC) of real-time waveforms with these templates; association of arrivals detected on CC-traces into event hypotheses; building of events matching the IDC quality criteria; and resolution of conflicts between event hypotheses created by neighboring master events. The final cross-correlation standard event list (XSEL) is a starting point for interactive analysis with standard tools. We present selected results for the biggest earthquakes, such as Sumatra 2004 and Tohoku 2011, as well as for several smaller events with hundreds of aftershocks. The sensitivity and resolution of the aftershock tool are demonstrated on an mb 2.2 aftershock found after the 9 September 2016 DPRK test.
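The cross-correlation step at the heart of such a tool can be sketched on one channel. This is a minimal illustration, not the IDC implementation: a master-event template is slid along a continuous trace and normalized correlation peaks above a threshold are declared detections; the real pipeline correlates at many array stations and then associates the CC triggers.

```python
# Single-channel waveform cross-correlation detector with a synthetic
# master event and two weaker "aftershock" repeats buried in noise.
import numpy as np

def normalized_cc(trace, template):
    """Normalized cross-correlation of template at every lag."""
    nt = len(template)
    tem = template - template.mean()
    tem = tem / np.linalg.norm(tem)
    out = np.empty(len(trace) - nt + 1)
    for i in range(out.size):
        win = trace[i:i + nt] - trace[i:i + nt].mean()
        out[i] = tem @ win / (np.linalg.norm(win) + 1e-20)
    return out

rng = np.random.default_rng(5)
master = np.sin(2 * np.pi * np.linspace(0, 4, 200)) * np.hanning(200)
trace = 0.2 * rng.standard_normal(5000)
trace[1000:1200] += 0.5 * master        # weak repeat of the master event
trace[3500:3700] += 0.8 * master        # another aftershock

cc = normalized_cc(trace, master)
detections = np.flatnonzero(cc > 0.6)
```

Both buried repeats are recovered even though their amplitudes differ from the master, which is exactly why WCC outperforms plain SNR detection on similar-waveform aftershocks.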
Automatic identification of alpine mass movements based on seismic and infrasound signals
NASA Astrophysics Data System (ADS)
Schimmel, Andreas; Hübl, Johannes
2017-04-01
The automatic detection and identification of alpine mass movements such as debris flows, debris floods or landslides is of increasing importance for mitigation measures in the densely populated and intensively used alpine regions. Since these mass-movement processes emit characteristic seismic and acoustic waves in the low frequency range, such events can be detected and identified based on these signals. Several approaches for detection and warning systems based on seismic or infrasound signals have already been developed. However, a combination of both methods, which can increase detection probability and reduce false alarms, is currently used very rarely, and it offers a promising basis for an automatic detection and identification system. This work therefore presents an approach for a detection and identification system based on a combination of seismic and infrasound sensors, which can detect sediment-related mass movements from a remote location unaffected by the process. The system is based on one infrasound sensor and one geophone, placed co-located, and a microcontroller on which a specially designed detection algorithm is executed that can detect mass movements in real time directly at the sensor site. Furthermore, this work aims to extract more information from the seismic and infrasound spectra produced by different sediment-related mass movements in order to identify the process type and estimate the magnitude of the event. The system is currently installed and tested at five test sites in Austria, two in Italy, one in Switzerland and one in Germany. This high number of test sites is used to build a large database of very different events, which will be the basis for a new identification method for alpine mass movements. These tests show promising results, and the system thus provides an easy-to-install and inexpensive approach for a detection and warning system.
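The false-alarm reduction from combining the two sensors comes down to a coincidence requirement: independent triggers on the geophone and the infrasound channel are only promoted to an event when both fire within a short window. The sketch below uses invented signals, thresholds and window lengths, not the algorithm deployed at the test sites.

```python
# Toy seismic + infrasound co-detection: STA/LTA triggers per channel,
# then a coincidence window suppresses single-sensor false alarms.
import numpy as np

def sta_lta_triggers(x, fs, sta=1.0, lta=30.0, thresh=4.0):
    """Trigger times (s) where the STA/LTA ratio exceeds thresh."""
    ns, nl = int(sta * fs), int(lta * fs)
    e = x ** 2
    trig = []
    for i in range(nl + ns, len(x), ns):           # step by one STA window
        ratio = e[i - ns:i].mean() / (e[i - nl - ns:i - ns].mean() + 1e-20)
        if ratio > thresh:
            trig.append(i / fs)
    return trig

def coincident(seis_trigs, infra_trigs, window=5.0):
    """Keep seismic triggers confirmed by an infrasound trigger."""
    return [t for t in seis_trigs
            if any(abs(t - u) <= window for u in infra_trigs)]

fs = 50.0
n = int(600 * fs)
rng = np.random.default_rng(6)
seis = 0.1 * rng.standard_normal(n)
infra = 0.1 * rng.standard_normal(n)
ev = slice(int(300 * fs), int(320 * fs))           # "debris flow" at t=300 s
seis[ev] += 0.5 * rng.standard_normal(ev.stop - ev.start)
infra[ev] += 0.5 * rng.standard_normal(ev.stop - ev.start)
seis[int(100 * fs):int(102 * fs)] += 1.0           # seismic-only disturbance

events = coincident(sta_lta_triggers(seis, fs), sta_lta_triggers(infra, fs))
```

The seismic-only disturbance at t = 100 s triggers the geophone channel but is rejected by the coincidence test, while the event seen on both channels survives.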
Automatic analysis of the 2015 Gorkha earthquake aftershock sequence.
NASA Astrophysics Data System (ADS)
Baillard, C.; Lyon-Caen, H.; Bollinger, L.; Rietbrock, A.; Letort, J.; Adhikari, L. B.
2016-12-01
The Mw 7.8 Gorkha earthquake, which partially ruptured the Main Himalayan Thrust north of Kathmandu on 25 April 2015, was the largest and most catastrophic earthquake to strike Nepal since the great M8.4 earthquake of 1934. This mainshock was followed by multiple aftershocks, among them two notable events that occurred on 12 May with magnitudes of Mw 7.3 and Mw 6.3. Because of these recent events it became essential for the authorities and for the scientific community to better evaluate the seismic risk in the region through a detailed analysis of the earthquake catalog, including the spatio-temporal distribution of the Gorkha aftershock sequence. Here we complement this first study with a microseismic study using seismic data from the eastern part of the Nepalese Seismological Center network together with one broadband station in Everest. Our primary goal is to deliver an accurate catalog of the aftershock sequence. Because of the exceptional number of events detected, we performed an automatic picking/locating procedure which can be split into four steps: 1) coarse picking of the onsets using a classical STA/LTA picker; 2) phase association of the picked onsets to detect and declare seismic events; 3) kurtosis-based pick refinement around theoretical arrival times to increase picking and location accuracy; and 4) local magnitude calculation based on waveform amplitudes. This procedure is time-efficient (∼1 s per event), considerably reduces the location uncertainties (∼2 to 5 km errors) and increases the number of events detected compared to manual processing. Indeed, the automatic detection rate is 10 times higher than the manual detection rate. By comparison with the USGS catalog we were able to derive a new attenuation law for computing local magnitudes in the region.
A detailed analysis of the seismicity shows a clear migration toward the east of the region and a sudden decrease of seismicity 100 km east of Kathmandu, which may reveal the presence of a tectonic feature acting as a seismic barrier. Comparison of the aftershock distribution with respect to the coseismic slip distribution will be discussed.
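Steps 1 and 3 of the four-step procedure above can be sketched together: a coarse STA/LTA trigger is refined by locating the steepest rise of a sliding-window kurtosis (the transition from Gaussian noise to signal produces a sharp kurtosis jump as the first strong samples enter the window). The synthetic data, window lengths and thresholds are invented for the example; the real pipeline also performs phase association and magnitude estimation.

```python
# Coarse STA/LTA pick followed by kurtosis-gradient refinement.
import numpy as np

def coarse_stalta(x, fs, sta=0.5, lta=10.0, thresh=4.0):
    """First sample where the STA/LTA ratio exceeds thresh, else None."""
    ns, nl = int(sta * fs), int(lta * fs)
    e = x ** 2
    for i in range(nl + ns, len(x)):
        if e[i - ns:i].mean() / (e[i - nl - ns:i - ns].mean() + 1e-20) > thresh:
            return i
    return None

def _kurt(w):
    c = w - w.mean()
    s2 = (c ** 2).mean()
    return (c ** 4).mean() / (s2 ** 2 + 1e-20)

def kurtosis_refine(x, fs, coarse, half=2.0, win=1.0):
    """Refine the pick to the steepest rise of windowed kurtosis."""
    nw = int(win * fs)
    lo = max(nw, coarse - int(half * fs))
    hi = min(len(x), coarse + int(half * fs))
    kur = np.array([_kurt(x[i - nw:i]) for i in range(lo, hi)])
    return lo + int(np.argmax(np.diff(kur))) + 1

fs = 100.0
rng = np.random.default_rng(7)
x = 0.1 * rng.standard_normal(6000)
onset = 3000                                     # synthetic onset at 30 s
x[onset:onset + 300] += (np.sin(2 * np.pi * 10 * np.arange(300) / fs)
                         * np.linspace(1, 0, 300))

coarse = coarse_stalta(x, fs)
refined = kurtosis_refine(x, fs, coarse)
```

The refined pick lands within a few samples of the true onset, which is the accuracy gain the abstract attributes to the kurtosis step.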
Automatic Near-Real-Time Detection of CMEs in Mauna Loa K-Cor Coronagraph Images
NASA Astrophysics Data System (ADS)
Thompson, W. T.; St. Cyr, O. C.; Burkepile, J. T.; Posner, A.
2017-10-01
A simple algorithm has been developed to detect the onset of coronal mass ejections (CMEs), together with speed estimates, in near-real time using linearly polarized white-light solar coronal images from the Mauna Loa Solar Observatory K-Cor telescope. Ground observations in the low corona can warn of CMEs well before they appear in space coronagraphs. The algorithm used is a variation on the Solar Eruptive Event Detection System developed at George Mason University. It was tested against K-Cor data taken between 29 April 2014 and 20 February 2017, on days identified as containing CMEs. This resulted in testing of 139 days' worth of data containing 171 CMEs. The detection rate varied from close to 80% when solar activity was high down to as low as 20-30% when activity was low. The difference in effectiveness with solar cycle is attributed to the relative prevalence of strong CMEs between active and quiet periods. There were also 12 false detections, leading to an average false detection rate of 8.6%. The K-Cor data were also compared with major solar energetic particle (SEP) storms during this time period. There were three SEP events detected either at Earth or at one of the two STEREO spacecraft when K-Cor was observing during the relevant time period. The algorithm successfully generated alerts for two of these events, with lead times of 1-3 h before the SEP onset at 1 AU. The third event was not detected by the automatic algorithm because of the unusually broad width in position angle.
Visual content highlighting via automatic extraction of embedded captions on MPEG compressed video
NASA Astrophysics Data System (ADS)
Yeo, Boon-Lock; Liu, Bede
1996-03-01
Embedded captions in TV programs such as news broadcasts, documentaries and coverage of sports events provide important information on the underlying events. In digital video libraries, such captions represent a highly condensed form of key information on the contents of the video. In this paper we propose a scheme to automatically detect the presence of captions embedded in video frames. The proposed method operates on reduced image sequences which are efficiently reconstructed from compressed MPEG video and thus does not require full frame decompression. The detection, extraction and analysis of embedded captions help to capture the highlights of visual contents in video documents for better organization of video, to present succinctly the important messages embedded in the images, and to facilitate browsing, searching and retrieval of relevant clips.
High-speed event detector for embedded nanopore bio-systems.
Huang, Yiyun; Magierowski, Sebastian; Ghafar-Zadeh, Ebrahim; Wang, Chengjie
2015-08-01
Biological measurements of microscopic phenomena often deal with discrete-event signals. The ability to automatically carry out such measurements at high-speed in a miniature embedded system is desirable but compromised by high-frequency noise along with practical constraints on filter quality and sampler resolution. This paper presents a real-time event-detection method in the context of nanopore sensing that helps to mitigate these drawbacks and allows accurate signal processing in an embedded system. Simulations show at least a 10× improvement over existing on-line detection methods.
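The discrete-event problem described above can be illustrated with a toy nanopore-style trace: a noisy baseline current with rectangular blockade events is smoothed, and events are declared where the current stays below an adaptive threshold for a minimum duration. The baseline, blockade depth, threshold rule and durations are all invented; the paper's actual on-line detector is a different, hardware-oriented algorithm.

```python
# Toy blockade-event detector for a noisy current trace.
import numpy as np

def detect_events(current, baseline, sigma, k=5.0, min_len=5):
    """(start, end) pairs where current < baseline - k*sigma for >= min_len."""
    below = current < baseline - k * sigma
    events, start = [], None
    for i, b in enumerate(below):
        if b and start is None:
            start = i
        elif not b and start is not None:
            if i - start >= min_len:
                events.append((start, i))
            start = None
    if start is not None and len(below) - start >= min_len:
        events.append((start, len(below)))
    return events

rng = np.random.default_rng(8)
n = 20000
current = 100.0 + 2.0 * rng.standard_normal(n)     # pA, open-pore baseline
for s, d in [(3000, 120), (9000, 40), (15000, 400)]:
    current[s:s + d] -= 30.0                        # blockade events

smooth = np.convolve(current, np.ones(5) / 5, mode="same")
events = detect_events(smooth, baseline=100.0, sigma=2.0 / np.sqrt(5))
```

The `min_len` guard plays the role the paper's filtering plays: it keeps brief noise excursions (including the smoothing artifacts at the trace edges) from being counted as events, while the dwell times of real blockades are preserved.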
NASA Astrophysics Data System (ADS)
Mor, Ilan; Vartsky, David; Dangendorf, Volker; Tittelmeier, Kai; Weierganz, Mathias; Goldberg, Mark Benjamin; Bar, Doron; Brandis, Michal
2018-06-01
We describe an analysis procedure for automatic unambiguous detection of fast-neutron-induced recoil proton tracks in a micro-capillary array filled with organic liquid scintillator. The detector is viewed by an intensified CCD camera. This imaging neutron detector possesses the capability to perform high position-resolution (few tens of μm), energy-dispersive transmission-imaging using ns-pulsed beams. However, when operated with CW or DC beams, it also features medium-quality spectroscopic capabilities for incident neutrons in the energy range 2-20 MeV. In addition to the recoil proton events, which display a continuous extended track structure, the raw images exhibit complex ion tracks from nuclear interactions of fast neutrons in the scintillator, the capillaries' quartz matrix, and the CCD. Moreover, as expected, one also observes a multitude of isolated scintillation spots of varying intensity (henceforth denoted "blobs") that originate from several different sources, such as fragmented proton tracks, gamma rays, and heavy-ion reactions, as well as events and noise that occur in the image intensifier and CCD. In order to identify the continuous-track recoil proton events and distinguish them from all these background events, a rapid, computerized and automatic track-recognition procedure was developed. Based on an appropriately weighted analysis of track parameters such as length, width, area, and overall light intensity, the method is capable of distinguishing a single continuous-track recoil proton from the several thousand background events typically found in each CCD frame.
Chen, Yen-Lin; Liang, Wen-Yew; Chiang, Chuan-Yen; Hsieh, Tung-Ju; Lee, Da-Cheng; Yuan, Shyan-Ming; Chang, Yang-Lang
2011-01-01
This study presents efficient vision-based finger detection, tracking, and event identification techniques and a low-cost hardware framework for multi-touch sensing and display applications. The proposed approach uses a fast bright-blob segmentation process based on automatic multilevel histogram thresholding to extract the pixels of touch blobs obtained from scattered infrared lights captured by a video camera. The advantage of this automatic multilevel thresholding approach is its robustness and adaptability when dealing with various ambient lighting conditions and spurious infrared noises. To extract the connected components of these touch blobs, a connected-component analysis procedure is applied to the bright pixels acquired by the previous stage. After extracting the touch blobs from each of the captured image frames, a blob tracking and event recognition process analyzes the spatial and temporal information of these touch blobs from consecutive frames to determine the possible touch events and actions performed by users. This process also refines the detection results and corrects for errors and occlusions caused by noise and errors during the blob extraction process. The proposed blob tracking and touch event recognition process includes two phases. First, the phase of blob tracking associates the motion correspondence of blobs in succeeding frames by analyzing their spatial and temporal features. The touch event recognition process can identify meaningful touch events based on the motion information of touch blobs, such as finger moving, rotating, pressing, hovering, and clicking actions. Experimental results demonstrate that the proposed vision-based finger detection, tracking, and event identification system is feasible and effective for multi-touch sensing applications in various operational environments and conditions. PMID:22163990
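The blob-extraction and connected-component steps described above can be sketched in outline. This is a simplified, hypothetical illustration: it uses a single fixed brightness threshold in place of the paper's automatic multilevel histogram thresholding, and a breadth-first flood fill for the connected-component analysis.

```python
from collections import deque

def extract_blobs(image, threshold):
    """Label 4-connected components of pixels brighter than `threshold`.

    Returns a list of blobs, each a list of (row, col) pixel coordinates.
    """
    rows, cols = len(image), len(image[0])
    visited = [[False] * cols for _ in range(rows)]
    blobs = []
    for r in range(rows):
        for c in range(cols):
            if image[r][c] > threshold and not visited[r][c]:
                # Breadth-first flood fill collects one connected blob.
                blob, queue = [], deque([(r, c)])
                visited[r][c] = True
                while queue:
                    y, x = queue.popleft()
                    blob.append((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < rows and 0 <= nx < cols
                                and image[ny][nx] > threshold
                                and not visited[ny][nx]):
                            visited[ny][nx] = True
                            queue.append((ny, nx))
                blobs.append(blob)
    return blobs

# Toy infrared frame with two separate bright touch regions.
frame = [
    [0, 9, 9, 0, 0],
    [0, 9, 0, 0, 8],
    [0, 0, 0, 0, 8],
]
touches = extract_blobs(frame, threshold=5)
print(len(touches))
```

Each returned blob could then feed the tracking stage, which matches blobs across consecutive frames by spatial proximity.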
Vertically Integrated Seismological Analysis II: Inference
NASA Astrophysics Data System (ADS)
Arora, N. S.; Russell, S.; Sudderth, E.
2009-12-01
Methods for automatically associating detected waveform features with hypothesized seismic events, and localizing those events, are a critical component of efforts to verify the Comprehensive Test Ban Treaty (CTBT). As outlined in our companion abstract, we have developed a hierarchical model which views detection, association, and localization as an integrated probabilistic inference problem. In this abstract, we provide more details on the Markov chain Monte Carlo (MCMC) methods used to solve this inference task. MCMC generates samples from a posterior distribution π(x) over possible worlds x by defining a Markov chain whose states are the worlds x, and whose stationary distribution is π(x). In the Metropolis-Hastings (M-H) method, transitions in the Markov chain are constructed in two steps. First, given the current state x, a candidate next state x′ is generated from a proposal distribution q(x′ | x), which may be (more or less) arbitrary. Second, the transition to x′ is not automatic, but occurs with an acceptance probability α(x′ | x) = min(1, π(x′)q(x | x′)/π(x)q(x′ | x)). The seismic event model outlined in our companion abstract is quite similar to those used in multitarget tracking, for which MCMC has proved very effective. In this model, each world x is defined by a collection of events, a list of properties characterizing those events (times, locations, magnitudes, and types), and the association of each event to a set of observed detections. The target distribution is π(x) = P(x | y), the posterior distribution over worlds x given the observed waveform data y at all stations. Proposal distributions then implement several types of moves between worlds. For example, birth moves create new events; death moves delete existing events; split moves partition the detections for an event into two new events; merge moves combine event pairs; swap moves modify the properties and associations for pairs of events.
Importantly, the rules for accepting such complex moves need not be hand-designed. Instead, they are automatically determined by the underlying probabilistic model, which is in turn calibrated via historical data and scientific knowledge. Consider a small seismic event which generates weak signals at several different stations, which might independently be mistaken for noise. A birth move may nevertheless hypothesize an event jointly explaining these detections. If the corresponding waveform data then aligns with the seismological knowledge encoded in the probabilistic model, the event may be detected even though no single station observes it unambiguously. Alternatively, if a large outlier reading is produced at a single station, moves which instantiate a corresponding (false) event would be rejected because of the absence of plausible detections at other sensors. More broadly, one of the main advantages of our MCMC approach is its consistent handling of the relative uncertainties in different information sources. By avoiding low-level thresholds, we expect to improve accuracy and robustness. At the conference, we will present results quantitatively validating our approach, using ground-truth associations and locations provided either by simulation or human analysts.
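The Metropolis-Hastings acceptance rule quoted above can be demonstrated on a toy problem. This sketch is not the authors' seismic model: it samples a small unnormalised discrete distribution with a symmetric random-walk proposal, so the Hastings ratio q(x | x′)/q(x′ | x) cancels and only the ratio of target densities remains.

```python
import random

def metropolis_hastings(target_pdf, propose, x0, n_steps, seed=0):
    """Generic Metropolis-Hastings sampler with a symmetric proposal.

    `target_pdf` need only be known up to a normalising constant, just
    as the posterior P(x | y) over candidate event worlds is above.
    """
    rng = random.Random(seed)
    x = x0
    samples = []
    for _ in range(n_steps):
        x_new = propose(x, rng)
        # Symmetric proposal: q(x | x') = q(x' | x), so alpha reduces
        # to the ratio of (unnormalised) target densities.
        alpha = min(1.0, target_pdf(x_new) / target_pdf(x))
        if rng.random() < alpha:
            x = x_new
        samples.append(x)
    return samples

# Toy target: unnormalised weights over integer states 0..4.
weights = {0: 1.0, 1: 2.0, 2: 4.0, 3: 2.0, 4: 1.0}
target = lambda x: weights[x]
# Random walk on neighbouring states, clamped at the boundaries.
propose = lambda x, rng: max(0, min(4, x + rng.choice([-1, 1])))

chain = metropolis_hastings(target, propose, x0=0, n_steps=20000)
```

Over many steps the chain visits each state in proportion to its weight, with the mode (state 2) visited most often; birth/death/split/merge moves in the event model play the role of the proposal here.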
Automated detection of epileptic ripples in MEG using beamformer-based virtual sensors
NASA Astrophysics Data System (ADS)
Migliorelli, Carolina; Alonso, Joan F.; Romero, Sergio; Nowak, Rafał; Russi, Antonio; Mañanas, Miguel A.
2017-08-01
Objective. In epilepsy, high-frequency oscillations (HFOs) are closely linked to the seizure onset zone (SOZ). The detection of HFOs in the noninvasive signals from scalp electroencephalography (EEG) and magnetoencephalography (MEG) is still a challenging task. The aim of this study was to automate the detection of ripples in MEG signals by reducing the high-frequency noise using beamformer-based virtual sensors (VSs) and applying an automatic procedure for exploring the time-frequency content of the detected events. Approach. Two hundred seconds of MEG signal and simultaneous iEEG were selected from nine patients with refractory epilepsy. A two-stage algorithm was implemented. Firstly, beamforming was applied to the whole head to delimit the region of interest (ROI) within a coarse grid of MEG-VS. Secondly, a beamformer using a finer grid in the ROI was computed. The automatic detection of ripples was performed using the time-frequency response provided by the Stockwell transform. Performance was evaluated through comparisons with simultaneous iEEG signals. Main results. ROIs were located within the seizure-generating lobes in the nine subjects. Precision and sensitivity values were 79.18% and 68.88%, respectively, by considering iEEG-detected events as benchmarks. A higher number of ripples were detected inside the ROI compared to the same region in the contralateral lobe. Significance. The evaluation of interictal ripples using non-invasive techniques can help in the delimitation of the epileptogenic zone and guide placement of intracranial electrodes. This is the first study that automatically detects ripples in MEG in the time domain located within the clinically expected epileptic area taking into account the time-frequency characteristics of the events through the whole signal spectrum. The algorithm was tested against intracranial recordings, the current gold standard.
Further studies should explore this approach to enable the localization of noninvasively recorded HFOs to help during pre-surgical planning and to reduce the need for invasive diagnostics.
Machine learning for the automatic detection of anomalous events
NASA Astrophysics Data System (ADS)
Fisher, Wendy D.
In this dissertation, we describe our research contributions for a novel approach to the application of machine learning for the automatic detection of anomalous events. We work in two different domains to ensure a robust data-driven workflow that could be generalized for monitoring other systems. Specifically, in our first domain, we begin with the identification of internal erosion events in earth dams and levees (EDLs) using geophysical data collected from sensors located on the surface of the levee. As EDLs across the globe reach the end of their design lives, effectively monitoring their structural integrity is of critical importance. The second domain of interest is related to mobile telecommunications, where we investigate a system for automatically detecting non-commercial base station routers (BSRs) operating in protected frequency space. The presence of non-commercial BSRs can disrupt the connectivity of end users, cause service issues for the commercial providers, and introduce significant security concerns. We provide our motivation, experimentation, and results from investigating a generalized novel data-driven workflow using several machine learning techniques. In Chapter 2, we present results from our performance study that uses popular unsupervised clustering algorithms to gain insights to our real-world problems, and evaluate our results using internal and external validation techniques. Using EDL passive seismic data from an experimental laboratory earth embankment, results consistently show a clear separation of events from non-events in four of the five clustering algorithms applied. Chapter 3 uses a multivariate Gaussian machine learning model to identify anomalies in our experimental data sets. For the EDL work, we used experimental data from two different laboratory earth embankments. Additionally, we explore five wavelet transform methods for signal denoising. The best performance is achieved with the Haar wavelets. 
We achieve up to 97.3% overall accuracy and less than 1.4% false negatives in anomaly detection. In Chapter 4, we research using two-class and one-class support vector machines (SVMs) for an effective anomaly detection system. We again use the two different EDL data sets from experimental laboratory earth embankments (each having approximately 80% normal and 20% anomalies) to ensure our workflow is robust enough to work with multiple data sets and different types of anomalous events (e.g., cracks and piping). We apply Haar wavelet-denoising techniques and extract nine spectral features from decomposed segments of the time series data. The two-class SVM with 10-fold cross validation achieved over 94% overall accuracy and 96% F1-score. Our approach provides a means for automatically identifying anomalous events using various machine learning techniques. Detecting internal erosion events in aging EDLs, earlier than is currently possible, can allow more time to prevent or mitigate catastrophic failures. Results show that we can successfully separate normal from anomalous data observations in passive seismic data, and provide a step towards techniques for continuous real-time monitoring of EDL health. Our lightweight non-commercial BSR detection system also has promise in separating commercial from non-commercial BSR scans without the need for prior geographic location information, extensive time-lapse surveys, or a database of known commercial carriers. (Abstract shortened by ProQuest.).
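The multivariate Gaussian anomaly detector of Chapter 3 can be sketched as follows. This is an illustrative simplification, not the dissertation's implementation: it assumes a diagonal covariance (independent features) and a hand-picked log-density threshold, and it omits the wavelet-denoising and spectral-feature-extraction stages.

```python
import math

def fit_gaussian(train):
    """Fit an axis-aligned (diagonal-covariance) Gaussian to normal data."""
    n, d = len(train), len(train[0])
    mu = [sum(row[j] for row in train) / n for j in range(d)]
    var = [sum((row[j] - mu[j]) ** 2 for row in train) / n for j in range(d)]
    return mu, var

def log_density(x, mu, var):
    """Log-density of x under the fitted independent-Gaussian model."""
    return sum(
        -0.5 * math.log(2 * math.pi * v) - (xi - m) ** 2 / (2 * v)
        for xi, m, v in zip(x, mu, var)
    )

def detect_anomalies(train, test, epsilon):
    """Flag test points whose log-density under the model falls below epsilon."""
    mu, var = fit_gaussian(train)
    return [log_density(x, mu, var) < epsilon for x in test]

# Hypothetical feature vectors: normal points cluster near (1.0, 2.0).
normal = [[1.0, 2.0], [1.1, 2.1], [0.9, 1.9], [1.0, 2.1], [1.1, 1.9]]
flags = detect_anomalies(normal, [[1.0, 2.0], [5.0, -3.0]], epsilon=-10.0)
print(flags)
```

In practice the threshold epsilon would be chosen on a labelled validation set, trading false negatives against false alarms.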
Combating Terrorism Technology Support Office 2006 Review
2006-01-01
emplaced beyond the control point, activated manually or automatically, with warning lights and an audible alarm to alert innocent pedestrians. The...throughout a vehicle. When a tamper event is detected, SERVANT automatically records sensor data and surveillance video and sends an alert to the security...exposure to organophosphate nerve agents, botulinum toxin, cyanide, and carbon monoxide and will be packaged into a portable, lightweight, mobile hand
Sommermeyer, Dirk; Zou, Ding; Grote, Ludger; Hedner, Jan
2012-10-15
To assess the accuracy of novel algorithms using an oximeter-based finger plethysmographic signal in combination with a nasal cannula for the detection and differentiation of central and obstructive apneas. The validity of single pulse oximetry to detect respiratory disturbance events was also studied. Patients recruited from four sleep laboratories underwent an ambulatory overnight cardiorespiratory polygraphy recording. The nasal flow and photoplethysmographic signals of the recording were analyzed by automated algorithms. The apnea hypopnea index (AHI(auto)) was calculated using both signals, and a respiratory disturbance index (RDI(auto)) was calculated from photoplethysmography alone. Apnea events were classified into obstructive and central types using the oximeter-derived pulse wave signal and compared with manual scoring. Sixty-six subjects (42 males, age 54 ± 14 yrs, body mass index 28.5 ± 5.9 kg/m(2)) were included in the analysis. AHI(manual) (19.4 ± 18.5 events/h) correlated highly significantly with AHI(auto) (19.9 ± 16.5 events/h) and RDI(auto) (20.4 ± 17.2 events/h); the correlation coefficients were r = 0.94 and 0.95, respectively (p < 0.001), with a mean difference of -0.5 ± 6.6 and -1.0 ± 6.1 events/h. The automatic analysis of AHI(auto) and RDI(auto) detected sleep apnea (cutoff AHI(manual) ≥ 15 events/h) with a sensitivity/specificity of 0.90/0.97 and 0.86/0.94, respectively. The automated obstructive/central apnea indices correlated closely with manual scoring (r = 0.87 and 0.95, p < 0.001), with mean differences of -4.3 ± 7.9 and 0.3 ± 1.5 events/h, respectively. Automatic analysis based on routine pulse oximetry alone may be used to detect sleep disordered breathing with accuracy. In addition, the combination of photoplethysmographic signals with a nasal flow signal provides an accurate distinction between obstructive and central apneic events during sleep.
Perspectives of Cross-Correlation in Seismic Monitoring at the International Data Centre
NASA Astrophysics Data System (ADS)
Bobrov, Dmitry; Kitov, Ivan; Zerbo, Lassina
2014-03-01
We demonstrate that several techniques based on waveform cross-correlation are able to significantly reduce the detection threshold of seismic sources worldwide and to improve the reliability of arrivals by a more accurate estimation of their defining parameters. A master event and the events it can find using waveform cross-correlation at array stations of the International Monitoring System (IMS) have to be close. For the purposes of the International Data Centre (IDC), one can use the spatial closeness of the master and slave events in order to construct a new automatic processing pipeline: all qualified arrivals detected using cross-correlation are associated with events matching the current IDC event definition criteria (EDC) in a local association procedure. Considering the repeating character of global seismicity, more than 90 % of events in the reviewed event bulletin (REB) can be built in this automatic processing. Due to the reduced detection threshold, waveform cross-correlation may increase the number of valid REB events by a factor of 1.5-2.0. Therefore, the new pipeline may produce a more comprehensive bulletin than the current pipeline—the goal of seismic monitoring. The analysts' experience with the cross correlation event list (XSEL) shows that the workload of interactive processing might be reduced by a factor of two or even more. Since cross-correlation produces a comprehensive list of detections for a given master event, no additional arrivals from primary stations are expected to be associated with the XSEL events. The number of false alarms, relative to the number of events rejected from the standard event list 3 (SEL3) in the current interactive processing, can also be reduced by the use of several powerful filters. The principal filter is the difference between the arrival times of the master and newly built events at three or more primary stations, which should lie in a narrow range of a few seconds.
In this study, one event at a distance of about 2,000 km from the main shock was formed by three stations, with the stations and both events on the same great circle. Such spurious events are rejected by checking consistency between detections at stations at different back azimuths from the source region. Two additional effective pre-filters are f-k analysis and F prob based on correlation traces instead of original waveforms. Overall, waveform cross-correlation is able to improve the REB completeness, to reduce the workload related to IDC interactive analysis, and to provide a precise tool for quality check for both arrivals and events. Some major improvements in automatic and interactive processing achieved by cross-correlation are illustrated using an aftershock sequence from a large continental earthquake. Exploring this sequence, we describe schematically the next steps for the development of a processing pipeline parallel to the existing IDC one in order to improve the quality of the REB together with the reduction of the magnitude threshold.
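The core master-event matching step, scanning a template waveform along a continuous record with normalized cross-correlation, might be sketched as below. The traces, window length, and detection threshold here are illustrative; IDC processing operates on array-station data with many additional filters.

```python
import math

def normalized_cc(template, trace):
    """Normalised cross-correlation of `template` at every lag in `trace`."""
    m = len(template)
    t_mean = sum(template) / m
    t0 = [v - t_mean for v in template]
    t_norm = math.sqrt(sum(v * v for v in t0))
    ccs = []
    for lag in range(len(trace) - m + 1):
        w = trace[lag:lag + m]
        w_mean = sum(w) / m
        w0 = [v - w_mean for v in w]
        w_norm = math.sqrt(sum(v * v for v in w0))
        if w_norm == 0:
            ccs.append(0.0)
            continue
        ccs.append(sum(a * b for a, b in zip(t0, w0)) / (t_norm * w_norm))
    return ccs

def detect(template, trace, threshold=0.9):
    """Lags at which the master template matches the continuous trace."""
    return [lag for lag, cc in enumerate(normalized_cc(template, trace))
            if cc >= threshold]

master = [0.0, 1.0, -1.0, 0.5]
# A continuous record containing a scaled repeat of the master waveform.
record = [0.0, 0.0, 0.0, 2.0, -2.0, 1.0, 0.0, 0.0]
print(detect(master, record))
```

Because the correlation is normalised, the scaled repeat at lag 2 still scores near 1.0, which is what lets cross-correlation find events well below the conventional detection threshold.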
Eggerth, Alphons; Modre-Osprian, Robert; Hayn, Dieter; Kastner, Peter; Pölzl, Gerhard; Schreier, Günter
2017-01-01
Automatic event detection is used in telemedicine-based heart failure (HF) disease management programs to support physicians and nurses in monitoring patients' health data. We analysed the performance of automatic event detection algorithms for predicting HF-related hospitalisations or diuretic dose increases. Rule-of-Thumb (RoT) and Moving Average Convergence Divergence (MACD) algorithms were applied to body weight data from 106 heart failure patients of the HerzMobil-Tirol disease management program. The evaluation criteria were based on the Youden index and ROC curves. Analysis of data from 1460 monitoring weeks with 54 events showed a maximum Youden index of 0.19 for both MACD and RoT at a specificity > 0.90. Comparison of the two algorithms on real-world monitoring data showed similar results regarding total and limited AUC. An improvement in sensitivity might be possible by including additional health data (e.g. vital signs and self-reported well-being), because body weight variations are evidently not the only cause of HF-related hospitalisations or diuretic dose increases.
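A minimal version of a MACD-style body-weight trend detector might look as follows. The EMA spans, threshold, and alerting rule are hypothetical choices for illustration, not the parameters used in the HerzMobil-Tirol program.

```python
def ema(series, span):
    """Exponential moving average with smoothing factor 2 / (span + 1)."""
    alpha = 2.0 / (span + 1)
    out = [series[0]]
    for v in series[1:]:
        out.append(alpha * v + (1 - alpha) * out[-1])
    return out

def macd_alerts(daily_weights, fast=3, slow=7, threshold=0.5):
    """Days on which the MACD (fast EMA minus slow EMA) of daily body
    weight exceeds `threshold`, flagging a sustained upward trend."""
    fast_ema = ema(daily_weights, fast)
    slow_ema = ema(daily_weights, slow)
    macd = [f - s for f, s in zip(fast_ema, slow_ema)]
    return [day for day, m in enumerate(macd) if m > threshold]

# Stable weight for ten days, then a rapid 3 kg gain.
weights = [80.0] * 10 + [80.5, 81.2, 82.0, 82.8, 83.0]
alerts = macd_alerts(weights)
print(alerts)
```

Because the fast EMA reacts before the slow one, the MACD turns positive only during the gain period, which is the behaviour a hospitalisation-risk alert would key on.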
Park, Jong-Uk; Lee, Hyo-Ki; Lee, Junghun; Urtnasan, Erdenebayar; Kim, Hojoong; Lee, Kyoung-Joung
2015-09-01
This study proposes a method of automatically classifying sleep apnea/hypopnea events based on sleep states and the severity of sleep-disordered breathing (SDB) using photoplethysmogram (PPG) and oxygen saturation (SpO2) signals acquired from a pulse oximeter. The PPG was used to classify sleep state, while the severity of SDB was estimated by detecting events of SpO2 oxygen desaturation. Furthermore, we classified sleep apnea/hypopnea events by applying different categorisations according to the severity of SDB based on a support vector machine. The classification results showed sensitivities and positive predictive values of 74.2% and 87.5% for apnea, 87.5% and 63.4% for hypopnea, and 92.4% and 92.8% for apnea + hypopnea, respectively. These results represent better or comparable outcomes compared with those of previous studies. In addition, our classification method reliably detected sleep apnea/hypopnea events across a variety of patient groups, without bias toward any particular group. Therefore, this method has the potential to diagnose SDB more reliably and conveniently using a pulse oximeter.
Automatic Event Detection in Search for Inter-Moss Loops in IRIS Si IV Slit-Jaw Images
NASA Technical Reports Server (NTRS)
Fayock, Brian; Winebarger, Amy R.; De Pontieu, Bart
2015-01-01
The high-resolution capabilities of the Interface Region Imaging Spectrometer (IRIS) mission have allowed the exploration of the finer details of the solar magnetic structure from the chromosphere to the lower corona that have previously been unresolved. Of particular interest to us are the relatively short-lived, low-lying magnetic loops that have foot points in neighboring moss regions. These inter-moss loops have also appeared in several AIA pass bands, which are generally associated with temperatures that are at least an order of magnitude higher than that of the Si IV emission seen in the 1400 angstrom pass band of IRIS. While the emission lines seen in these pass bands can be associated with a range of temperatures, the simultaneous appearance of these loops in IRIS 1400 and AIA 171, 193, and 211 suggests that they are not in ionization equilibrium. To study these structures in detail, we have developed a series of algorithms to automatically detect signal brightening or events on a pixel-by-pixel basis and group them together as structures for each of the above data sets. These algorithms have successfully picked out all activity fitting certain adjustable criteria. The resulting groups of events are then statistically analyzed to determine which characteristics can be used to distinguish the inter-moss loops from all other structures. While a few characteristic histograms reveal that manually selected inter-moss loops lie outside the norm, a combination of several characteristics will need to be used to determine the statistical likelihood that a group of events should be categorized automatically as a loop of interest. The goal of this project is to be able to automatically pick out inter-moss loops from an entire data set and calculate the characteristics that have previously been determined manually, such as length, intensity, and lifetime. We will discuss the algorithms, preliminary results, and current progress of automatic characterization.
Automatic recognition of coronal type II radio bursts: The ARBIS 2 method and first observations
NASA Astrophysics Data System (ADS)
Lobzin, Vasili; Cairns, Iver; Robinson, Peter; Steward, Graham; Patterson, Garth
Major space weather events such as solar flares and coronal mass ejections are usually accompanied by solar radio bursts, which can potentially be used for real-time space weather forecasts. Type II radio bursts are produced near the local plasma frequency and its harmonic by fast electrons accelerated by a shock wave moving through the corona and solar wind with a typical speed of 1000 km s-1. The coronal bursts have dynamic spectra with frequency gradually falling with time and durations of several minutes. We present a new method developed to detect type II coronal radio bursts automatically and describe its implementation in an extended Automated Radio Burst Identification System (ARBIS 2). Preliminary tests of the method with spectra obtained in 2002 show that the performance of the current implementation is quite high, ~80%, while the probability of false positives is reasonably low, with one false positive per 100-200 hr for high solar activity and less than one false event per 10000 hr for low solar activity periods. The first automatically detected coronal type II radio bursts are also presented. ARBIS 2 is now operational with IPS Radio and Space Services, providing email alerts and event lists internationally.
Global Infrasound Association Based on Probabilistic Clutter Categorization
NASA Astrophysics Data System (ADS)
Arora, Nimar; Mialle, Pierrick
2016-04-01
The IDC continuously advances its methods and improves its automatic system for infrasound technology. The IDC focuses on enhancing the automatic system for the identification of valid signals and the optimization of the network detection threshold by identifying ways to refine signal characterization methodology and association criteria. An objective of this study is to reduce the number of associated infrasound arrivals that are rejected from the automatic bulletins when generating the reviewed event bulletins. Indeed, a considerable number of signal detections are due to local clutter sources such as microbaroms, waterfalls, dams, gas flares, and surf (ocean breaking waves). These sources are either too diffuse or too local to form events. Worse still, the repetitive nature of this clutter leads to a large number of false event hypotheses due to the random matching of clutter at multiple stations. Previous studies, for example [1], have worked on categorization of clutter using long-term trends on detection azimuth, frequency, and amplitude at each station. In this work we continue the same line of reasoning to build a probabilistic model of clutter that is used as part of NETVISA [2], a Bayesian approach to network processing. The resulting model is a fusion of seismic, hydroacoustic and infrasound processing built on a unified probabilistic framework. References: [1] Infrasound categorization: Towards a statistics-based approach. J. Vergoz, P. Gaillard, A. Le Pichon, N. Brachet, and L. Ceranna. ITW 2011. [2] NETVISA: Network Processing Vertically Integrated Seismic Analysis. N. S. Arora, S. Russell, and E. Sudderth. BSSA 2013.
Automatic Multi-sensor Data Quality Checking and Event Detection for Environmental Sensing
NASA Astrophysics Data System (ADS)
LIU, Q.; Zhang, Y.; Zhao, Y.; Gao, D.; Gallaher, D. W.; Lv, Q.; Shang, L.
2017-12-01
With the advances in sensing technologies, large-scale environmental sensing infrastructures are pervasively deployed to continuously collect data for various research and application fields, such as air quality study and weather condition monitoring. In such infrastructures, many sensor nodes are distributed in a specific area and each individual sensor node is capable of measuring several parameters (e.g., humidity, temperature, and pressure), providing massive data for natural event detection and analysis. However, due to the dynamics of the ambient environment, sensor data can be contaminated by errors or noise. Thus, data quality is still a primary concern for scientists before drawing any reliable scientific conclusions. To help researchers identify potential data quality issues and detect meaningful natural events, this work proposes a novel algorithm to automatically identify and rank anomalous time windows from multiple sensor data streams. More specifically, (1) the algorithm adaptively learns the characteristics of normal evolving time series and (2) models the spatial-temporal relationship among multiple sensor nodes to infer the anomaly likelihood of a time series window for a particular parameter in a sensor node. Case studies using different data sets are presented and the experimental results demonstrate that the proposed algorithm can effectively identify anomalous time windows, which may result from data quality issues and natural events.
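One simple way to realize the spatial part of such an anomaly-likelihood model is to compare each sensor's window statistics against a robust reference built from its neighbors. The sketch below scores each (sensor, window) pair by its deviation from the median of the other sensors' window means; the windowing and scoring rule are illustrative assumptions, not the paper's algorithm.

```python
def window_anomaly_scores(streams, window):
    """Score each time window of each sensor stream by its deviation from
    the median of the other sensors over the same window (spatial check)."""
    n_sensors = len(streams)
    n_windows = len(streams[0]) // window
    scores = {}
    for s in range(n_sensors):
        for w in range(n_windows):
            seg = streams[s][w * window:(w + 1) * window]
            mean_s = sum(seg) / window
            # Median of the other sensors' window means: a robust
            # reference that one drifting sensor cannot drag along.
            others = sorted(
                sum(streams[o][w * window:(w + 1) * window]) / window
                for o in range(n_sensors) if o != s
            )
            mid = len(others) // 2
            median = (others[mid] if len(others) % 2
                      else (others[mid - 1] + others[mid]) / 2)
            scores[(s, w)] = abs(mean_s - median)
    return scores

# Three co-located temperature sensors; sensor 2 drifts in window 1.
streams = [
    [20.0, 20.1, 20.0, 20.2, 20.1, 20.0],
    [20.1, 20.0, 20.1, 20.1, 20.0, 20.2],
    [20.0, 20.1, 20.0, 27.0, 27.5, 28.0],
]
scores = window_anomaly_scores(streams, window=3)
worst = max(scores, key=scores.get)
print(worst)
```

Ranking the scores then surfaces the most suspicious windows first, leaving the human (or a temporal model, as in the paper) to decide whether each is a sensor fault or a real event.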
Automatic Detection of Pitching and Throwing Events in Baseball With Inertial Measurement Sensors.
Murray, Nick B; Black, Georgia M; Whiteley, Rod J; Gahan, Peter; Cole, Michael H; Utting, Andy; Gabbett, Tim J
2017-04-01
Throwing loads are known to be closely related to injury risk. However, for logistic reasons, typically only pitchers have their throws counted, and then only during innings. Accordingly, all other throws made are not counted, so estimates of throws made by players may be inaccurately recorded and underreported. A potential solution to this is the use of wearable microtechnology to automatically detect, quantify, and report pitch counts in baseball. This study investigated the accuracy of detection of baseball pitching and throwing in both practice and competition using a commercially available wearable microtechnology unit. Seventeen elite youth baseball players (mean ± SD age 16.5 ± 0.8 y, height 184.1 ± 5.5 cm, mass 78.3 ± 7.7 kg) participated in this study. Participants performed pitching, fielding, and throwing during practice and competition while wearing a microtechnology unit. Sensitivity and specificity of a pitching and throwing algorithm were determined by comparing automatic measures (ie, microtechnology unit) with direct measures (ie, manually recorded pitching counts). The pitching and throwing algorithm was sensitive during both practice (100%) and competition (100%). Specificity was poorer during both practice (79.8%) and competition (74.4%). These findings demonstrate that the microtechnology unit is sensitive to detect pitching and throwing events, but further development of the pitching algorithm is required to accurately and consistently quantify throwing loads using microtechnology.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Morellas, Vassilios; Johnson, Andrew; Johnston, Chris
2006-07-01
Thermal imaging is a real-world technology proven to bring confidence to daytime, night-time, and all-weather security surveillance. Automatic image-processing intrusion detection algorithms are likewise a real-world technology proven to bring confidence to surveillance security solutions. Together, day, night, and all-weather video imagery sensors and automated intrusion detection software create the power to protect early against crime, providing real-time global homeland protection, rather than simply being able to monitor and record activities for post-event analysis. These solutions, whether providing automatic security surveillance at airports (to automatically detect unauthorized aircraft takeoff and landing activities) or at high-risk private, public, or government facilities (to automatically detect unauthorized people or vehicle intrusion activities), are on the move to provide end users the power to protect people, capital equipment, and intellectual property against acts of vandalism and terrorism. As with any technology, infrared sensors and automatic image intrusion detection systems for global homeland security protection have clear technological strengths and limitations compared to other more common day and night vision technologies or more traditional manual man-in-the-loop intrusion detection security systems. This paper addresses these strengths and limitations. False Alarm Rate (FAR) and False Positive Rate (FPR) are examples of key customer system acceptability metrics, while Noise Equivalent Temperature Difference (NETD) and Minimum Resolvable Temperature are examples of sensor-level performance acceptability metrics.
Automatic violence detection in digital movies
NASA Astrophysics Data System (ADS)
Fischer, Stephan
1996-11-01
Research on computer-based recognition of violence is scant. We are working on the automatic recognition of violence in digital movies, a first step towards the goal of a computer-assisted system capable of protecting children against TV programs containing a great deal of violence. In the video domain we run collision detection and model mapping to locate human figures, while in the audio domain we create and compare fingerprints to find certain events. This article centers on the recognition of fist-fights in the video domain and on the recognition of shots, explosions and cries in the audio domain.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Peterson, Matthew; Draelos, Timothy; Knox, Hunter
2017-05-02
The AST software includes numeric methods to 1) adjust STA/LTA signal detector trigger level (TL) values and 2) filter detections for a network of sensors. AST adapts TL values to the current state of the environment by leveraging cooperation within a neighborhood of sensors. The key metric that guides the dynamic tuning is consistency of each sensor with its nearest neighbors: TL values are automatically adjusted on a per-station basis to be more or less sensitive, so as to produce consistent agreement of detections in its neighborhood. The AST algorithm adapts in near real-time to changing conditions in an attempt to automatically self-tune a signal detector to identify (detect) only signals from events of interest.
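A minimal sketch of the two ingredients described above: a classic STA/LTA energy-ratio detector, plus a toy neighborhood-consistency rule for adjusting the trigger level. The adjustment rule shown is our illustrative assumption, not the AST algorithm itself:

```python
import numpy as np

def sta_lta(x, n_sta=20, n_lta=200):
    """STA/LTA ratio of signal energy; the long-term window immediately
    precedes the short-term window, the usual detector geometry."""
    e = np.asarray(x, dtype=float) ** 2
    ratio = np.zeros(len(e))
    for i in range(n_lta + n_sta, len(e)):
        sta = e[i - n_sta:i].mean()
        lta = e[i - n_sta - n_lta:i - n_sta].mean()
        ratio[i] = sta / lta if lta > 0 else 0.0
    return ratio

def adjust_trigger_level(tl, my_detections, neighbor_detections, step=0.1):
    """Toy neighborhood-consistency rule (our assumption): desensitize a
    station that triggers when its neighbors do not, and sensitize one
    that misses events its neighborhood agrees on."""
    if my_detections > neighbor_detections:
        return tl + step               # too sensitive: raise trigger level
    if my_detections < neighbor_detections:
        return max(1.0, tl - step)     # missing events: lower it
    return tl

# Synthetic trace: Gaussian noise with a transient burst at sample 1200.
rng = np.random.default_rng(0)
x = rng.normal(0, 1, 2000)
x[1200:1260] += 8 * np.sin(np.linspace(0, 20, 60))
r = sta_lta(x)
print("peak STA/LTA near burst:", round(r[1200:1300].max(), 1))
```

Declaring a detection whenever the ratio exceeds the trigger level, and then feeding per-station detection counts through the adjustment rule, gives the self-tuning loop in miniature.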
Sommermeyer, Dirk; Zou, Ding; Grote, Ludger; Hedner, Jan
2012-01-01
Study Objective: To assess the accuracy of novel algorithms using an oximeter-based finger plethysmographic signal in combination with a nasal cannula for the detection and differentiation of central and obstructive apneas. The validity of single pulse oximetry to detect respiratory disturbance events was also studied. Methods: Patients recruited from four sleep laboratories underwent an ambulatory overnight cardiorespiratory polygraphy recording. The nasal flow and photoplethysmographic signals of the recording were analyzed by automated algorithms. The apnea hypopnea index (AHIauto) was calculated using both signals, and a respiratory disturbance index (RDIauto) was calculated from photoplethysmography alone. Apnea events were classified into obstructive and central types using the oximeter derived pulse wave signal and compared with manual scoring. Results: Sixty-six subjects (42 males, age 54 ± 14 yrs, body mass index 28.5 ± 5.9 kg/m2) were included in the analysis. AHImanual (19.4 ± 18.5 events/h) correlated highly significantly with AHIauto (19.9 ± 16.5 events/h) and RDIauto (20.4 ± 17.2 events/h); the correlation coefficients were r = 0.94 and 0.95, respectively (p < 0.001) with a mean difference of −0.5 ± 6.6 and −1.0 ± 6.1 events/h. The automatic analysis of AHIauto and RDIauto detected sleep apnea (cutoff AHImanual ≥ 15 events/h) with a sensitivity/specificity of 0.90/0.97 and 0.86/0.94, respectively. The automated obstructive/central apnea indices correlated closely with manual scoring (r = 0.87 and 0.95, p < 0.001) with mean difference of −4.3 ± 7.9 and 0.3 ± 1.5 events/h, respectively. Conclusions: Automatic analysis based on routine pulse oximetry alone may be used to detect sleep disordered breathing with accuracy. In addition, the combination of photoplethysmographic signals with a nasal flow signal provides an accurate distinction between obstructive and central apneic events during sleep. Citation: Sommermeyer D; Zou D; Grote L; Hedner J. 
Detection of sleep disordered breathing and its central/obstructive character using nasal cannula and finger pulse oximeter. J Clin Sleep Med 2012;8(5):527-533. PMID:23066364
Analysis of the QRS complex for apnea-bradycardia characterization in preterm infants
Altuve, Miguel; Carrault, Guy; Cruz, Julio; Beuchée, Alain; Pladys, Patrick; Hernandez, Alfredo I.
2009-01-01
This work presents an analysis of the information content of new features derived from the electrocardiogram (ECG) for the characterization of apnea-bradycardia events in preterm infants. Automatic beat detection and segmentation methods have been adapted to the ECG signals from preterm infants, through the application of two evolutionary algorithms. ECG data acquired from 32 preterm infants with persistent apnea-bradycardia have been used for quantitative evaluation. The adaptation procedure led to an improved sensitivity and positive predictive value, and a reduced jitter for the detection of the R-wave, QRS onset, QRS offset, and iso-electric level. Additionally, time series representing the RR interval, R-wave amplitude and QRS duration, were automatically extracted for periods at rest, before, during and after apnea-bradycardia episodes. Significant variations (p<0.05) were observed for all time-series when comparing the difference between values at rest versus values just before the bradycardia event, with the difference between values at rest versus values during the bradycardia event. These results reveal changes in the R-wave amplitude and QRS duration, appearing at the onset and termination of apnea-bradycardia episodes, which could be potentially useful for the early detection and characterization of these episodes. PMID:19963984
Automatic Single Event Effects Sensitivity Analysis of a 13-Bit Successive Approximation ADC
NASA Astrophysics Data System (ADS)
Márquez, F.; Muñoz, F.; Palomo, F. R.; Sanz, L.; López-Morillo, E.; Aguirre, M. A.; Jiménez, A.
2015-08-01
This paper presents the Analog Fault Tolerant University of Seville Debugging System (AFTU), a tool to evaluate the Single-Event Effect (SEE) sensitivity of analog/mixed-signal microelectronic circuits at transistor level. As analog cells can behave in an unpredictable way when particles strike critical areas, designers need a software tool that allows an automatic and exhaustive analysis of Single-Event Effect influence. AFTU takes the test-bench SPECTRE design, emulates radiation conditions, and automatically evaluates vulnerabilities using user-defined heuristics. To illustrate the utility of the tool, the SEE sensitivity of a 13-bit Successive Approximation Analog-to-Digital Converter (ADC) has been analysed. This circuit was selected not only because it was designed for space applications, but also because a manual SEE sensitivity analysis would be too time-consuming. After a user-defined test campaign, it was found that some voltage transients were propagated to a node where a parasitic diode was activated, affecting the offset cancellation and therefore the whole resolution of the ADC. A simple modification of the scheme solved the problem, as was verified with another automatic SEE sensitivity analysis.
Boriani, Giuseppe; Da Costa, Antoine; Ricci, Renato Pietro; Quesada, Aurelio; Favale, Stefano; Iacopino, Saverio; Romeo, Francesco; Risi, Arnaldo; Mangoni di S Stefano, Lorenza; Navarro, Xavier; Biffi, Mauro; Santini, Massimo; Burri, Haran
2013-08-21
Remote monitoring (RM) in patients with advanced heart failure and cardiac resynchronization therapy defibrillators (CRT-D) may reduce delays in clinical decisions by transmitting automatic alerts. However, this strategy has never been tested specifically in this patient population, with alerts for lung fluid overload, and in a European setting. The main objective of Phase 1 (presented here) is to evaluate if RM strategy is able to reduce time from device-detected events to clinical decisions. In this multicenter randomized controlled trial, patients with moderate to severe heart failure implanted with CRT-D devices were randomized to a Remote group (with remote follow-up and wireless automatic alerts) or to a Control group (with standard follow-up without alerts). The primary endpoint of Phase 1 was the delay between an alert event and clinical decisions related to the event in the first 154 enrolled patients followed for 1 year. The median delay from device-detected events to clinical decisions was considerably shorter in the Remote group compared to the Control group: 2 (25(th)-75(th) percentile, 1-4) days vs 29 (25(th)-75(th) percentile, 3-51) days respectively, P=.004. In-hospital visits were reduced in the Remote group (2.0 visits/patient/year vs 3.2 visits/patient/year in the Control group, 37.5% relative reduction, P<.001). Automatic alerts were successfully transmitted in 93% of events occurring outside the hospital in the Remote group. The annual rate of all-cause hospitalizations per patient did not differ between the two groups (P=.65). RM in CRT-D patients with advanced heart failure allows physicians to promptly react to clinically relevant automatic alerts and significantly reduces the burden of in-hospital visits. Clinicaltrials.gov NCT00885677; http://clinicaltrials.gov/show/NCT00885677 (Archived by WebCite at http://www.webcitation.org/6IkcCJ7NF).
Da Costa, Antoine; Ricci, Renato Pietro; Quesada, Aurelio; Favale, Stefano; Iacopino, Saverio; Romeo, Francesco; Risi, Arnaldo; Mangoni di S Stefano, Lorenza; Navarro, Xavier; Biffi, Mauro; Santini, Massimo; Burri, Haran
2013-01-01
Background Remote monitoring (RM) in patients with advanced heart failure and cardiac resynchronization therapy defibrillators (CRT-D) may reduce delays in clinical decisions by transmitting automatic alerts. However, this strategy has never been tested specifically in this patient population, with alerts for lung fluid overload, and in a European setting. Objective The main objective of Phase 1 (presented here) is to evaluate if RM strategy is able to reduce time from device-detected events to clinical decisions. Methods In this multicenter randomized controlled trial, patients with moderate to severe heart failure implanted with CRT-D devices were randomized to a Remote group (with remote follow-up and wireless automatic alerts) or to a Control group (with standard follow-up without alerts). The primary endpoint of Phase 1 was the delay between an alert event and clinical decisions related to the event in the first 154 enrolled patients followed for 1 year. Results The median delay from device-detected events to clinical decisions was considerably shorter in the Remote group compared to the Control group: 2 (25th-75th percentile, 1-4) days vs 29 (25th-75th percentile, 3-51) days respectively, P=.004. In-hospital visits were reduced in the Remote group (2.0 visits/patient/year vs 3.2 visits/patient/year in the Control group, 37.5% relative reduction, P<.001). Automatic alerts were successfully transmitted in 93% of events occurring outside the hospital in the Remote group. The annual rate of all-cause hospitalizations per patient did not differ between the two groups (P=.65). Conclusions RM in CRT-D patients with advanced heart failure allows physicians to promptly react to clinically relevant automatic alerts and significantly reduces the burden of in-hospital visits. Trial Registration Clinicaltrials.gov NCT00885677; http://clinicaltrials.gov/show/NCT00885677 (Archived by WebCite at http://www.webcitation.org/6IkcCJ7NF). PMID:23965236
A Patch-Based Method for Repetitive and Transient Event Detection in Fluorescence Imaging
Boulanger, Jérôme; Gidon, Alexandre; Kervran, Charles; Salamero, Jean
2010-01-01
Automatic detection and characterization of molecular behavior in large data sets obtained by fast imaging in advanced light microscopy become key issues to decipher the dynamic architectures and their coordination in the living cell. Automatic quantification of the number of sudden and transient events observed in fluorescence microscopy is discussed in this paper. We propose a calibrated method based on the comparison of image patches expected to distinguish sudden appearing/vanishing fluorescent spots from other motion behaviors such as lateral movements. We analyze the performances of two statistical control procedures and compare the proposed approach to a frame difference approach using the same controls on a benchmark of synthetic image sequences. We have then selected a molecular model related to membrane trafficking and considered real image sequences obtained in cells stably expressing an endocytic-recycling trans-membrane protein, the Langerin-YFP, for validation. With this model, we targeted the efficient detection of fast and transient local fluorescence concentration arising in image sequences from a data base provided by two different microscopy modalities, wide field (WF) video microscopy using maximum intensity projection along the axial direction and total internal reflection fluorescence microscopy. Finally, the proposed detection method is briefly used to statistically explore the effect of several perturbations on the rate of transient events detected on the pilot biological model. PMID:20976222
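The core patch-comparison idea can be sketched as follows: a spot that merely moved laterally finds a well-matching displaced patch in the previous frame, while a spot that suddenly appeared does not. This is a toy illustration of the distinction, not the paper's calibrated statistical-control method:

```python
import numpy as np

def transient_score(prev, curr, y, x, size=8, search=4):
    """Minimum sum-of-squared-differences between the patch at (y, x)
    in the current frame and all nearby patches in the previous frame.
    Low score: the spot existed before (lateral movement).
    High score: the spot appeared suddenly (transient event)."""
    p = curr[y:y + size, x:x + size].astype(float)
    best = np.inf
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            yy, xx = y + dy, x + dx
            if yy < 0 or xx < 0 or yy + size > prev.shape[0] or xx + size > prev.shape[1]:
                continue
            q = prev[yy:yy + size, xx:xx + size].astype(float)
            best = min(best, float(((p - q) ** 2).sum()))
    return best

prev = np.zeros((32, 32)); prev[10:14, 10:14] = 1.0      # spot in frame t-1
moved = np.zeros((32, 32)); moved[12:16, 12:16] = 1.0    # same spot, shifted
appeared = np.zeros((32, 32)); appeared[24:28, 24:28] = 1.0  # brand-new spot
print(transient_score(prev, moved, 12, 12),     # low: lateral movement
      transient_score(prev, appeared, 24, 24))  # high: sudden appearance
# → 0.0 16.0
```

A plain frame-difference detector would score both cases identically, which is exactly the failure mode the patch-based comparison avoids.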
NASA Astrophysics Data System (ADS)
Larnier, H.; Sailhac, P.; Chambodut, A.
2018-01-01
Atmospheric electromagnetic waves created by global lightning activity contain information about electrical processes of the inner and the outer Earth. Large signal-to-noise ratio events are particularly interesting because they convey information about electromagnetic properties along their path. We introduce a new methodology to automatically detect and characterize lightning-based waves using a time-frequency decomposition obtained through the application of the continuous wavelet transform. We focus specifically on three types of sources, namely atmospherics, slow tails and whistlers, which cover the frequency range 10 Hz to 10 kHz. Each wave has distinguishable characteristics in the time-frequency domain due to source shape and dispersion processes. Our methodology allows automatic detection of each type of event in the time-frequency decomposition thanks to their specific signatures. Horizontal polarization attributes are also recovered in the time-frequency domain. This procedure is first applied to synthetic extremely low frequency time-series with different signal-to-noise ratios to test for robustness. We then apply it to real data: three stations of audio-magnetotelluric data acquired in Guadeloupe, an overseas French territory. Most of the analysed atmospherics and slow tails display linear polarization, whereas the analysed whistlers are elliptically polarized. The diversity of lightning activity is finally analysed in an audio-magnetotelluric data processing framework, as used in subsurface prospecting, through estimation of the impedance response functions. We show that audio-magnetotelluric processing results depend mainly on the frequency content of the electromagnetic waves observed in the processed time-series, with an emphasis on the difference between morning and afternoon acquisition. 
Our new methodology based on the time-frequency signature of lightning-induced electromagnetic waves allows automatic detection and characterization of events in audio-magnetotelluric time-series, providing the means to assess quality of response functions obtained through processing.
Exploiting semantics for sensor re-calibration in event detection systems
NASA Astrophysics Data System (ADS)
Vaisenberg, Ronen; Ji, Shengyue; Hore, Bijit; Mehrotra, Sharad; Venkatasubramanian, Nalini
2008-01-01
Event detection from a video stream is becoming an important and challenging task in surveillance and sentient systems. While computer vision has been extensively studied to solve different kinds of detection problems over time, it remains a hard problem, and even in a controlled environment only simple events can be detected with a high degree of accuracy. Instead of struggling to improve event detection using image processing alone, we bring in semantics to direct traditional image processing. Semantics are the underlying facts that hide beneath video frames, which cannot be "seen" directly by image processing. In this work we demonstrate that time-sequence semantics can be exploited to guide unsupervised re-calibration of the event detection system. We present an instantiation of our ideas using an appliance as an example--coffee pot level detection based on video data--to show that semantics can guide the re-calibration of the detection model. This work exploits time-sequence semantics to detect when re-calibration is required, to automatically relearn a new detection model for the newly evolved system state, and to resume monitoring with a higher rate of accuracy.
NASA Astrophysics Data System (ADS)
Jia, Rui-Sheng; Sun, Hong-Mei; Peng, Yan-Jun; Liang, Yong-Quan; Lu, Xin-Ming
2017-07-01
Microseismic monitoring is an effective means of providing early warning of rock or coal dynamic disasters, and its first step is microseismic event detection, although low-SNR microseismic signals often cannot be detected effectively by routine methods. To solve this problem, this paper applies permutation entropy and a support vector machine to detect low-SNR microseismic events. First, a signal feature extraction method based on multi-scale permutation entropy is proposed by studying the influence of the scale factor on the signal permutation entropy. Second, a detection model for low-SNR microseismic events based on the least squares support vector machine is built by performing a multi-scale permutation entropy calculation for the collected vibration signals and constructing a feature vector set from the signals. Finally, a comparative analysis of microseismic events and noise signals in the experiment proves that the differing characteristics of the two can be fully expressed using multi-scale permutation entropy. The detection model of microseismic events combined with the support vector machine, which has the features of high classification accuracy and fast real-time computation, can meet the requirements of online, real-time extraction of microseismic events.
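A minimal sketch of the feature-extraction step, multi-scale permutation entropy (the SVM classification stage is omitted, and the embedding parameters below are illustrative):

```python
import math
import numpy as np

def permutation_entropy(x, m=3, delay=1):
    """Normalized permutation entropy (Bandt-Pompe): Shannon entropy of
    the ordinal patterns of length m, divided by log(m!)."""
    x = np.asarray(x, dtype=float)
    counts = {}
    for i in range(len(x) - (m - 1) * delay):
        pat = tuple(np.argsort(x[i:i + m * delay:delay]))
        counts[pat] = counts.get(pat, 0) + 1
    total = sum(counts.values())
    h = -sum((c / total) * math.log(c / total) for c in counts.values())
    return h / math.log(math.factorial(m))

def coarse_grain(x, scale):
    """Non-overlapping averaging used for multi-scale analysis."""
    x = np.asarray(x, dtype=float)
    n = len(x) // scale
    return x[:n * scale].reshape(n, scale).mean(axis=1)

def multiscale_pe(x, m=3, scales=(1, 2, 3, 4, 5)):
    """Feature vector of permutation entropies across scales; in the
    paper this vector feeds an SVM, which is not shown here."""
    return [permutation_entropy(coarse_grain(x, s), m) for s in scales]

rng = np.random.default_rng(1)
noise = rng.normal(size=2000)                      # irregular: PE near 1
tone = np.sin(np.linspace(0, 40 * np.pi, 2000))    # regular: low PE
print(round(multiscale_pe(noise)[0], 3), round(multiscale_pe(tone)[0], 3))
```

The separation between near-maximal entropy for noise and low entropy for a structured waveform is what makes the feature vector discriminative for a downstream classifier.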
Variable Discretisation for Anomaly Detection using Bayesian Networks
2017-01-01
Jonathan Legg, National Security and ISR Division, Defence Science and Technology Group, report DST-Group-TR-3328 (UNCLASSIFIED). From the abstract: Anomaly detection is the process by which low probability events are automatically found against a... From the introduction: Bayesian network implementations usually require each variable to take on a finite number of mutually...
Automatic near-real-time detection of CMEs in Mauna Loa K-Cor coronagraph images
NASA Astrophysics Data System (ADS)
Thompson, William T.; St. Cyr, Orville Chris; Burkepile, Joan; Posner, Arik
2017-08-01
A simple algorithm has been developed to detect the onset of coronal mass ejections (CMEs), together with an estimate of their speed, in near-real-time, using images of the linearly polarized white-light solar corona taken by the K-Cor telescope at the Mauna Loa Solar Observatory (MLSO). The algorithm used is a variation on the Solar Eruptive Event Detection System (SEEDS) developed at George Mason University. The algorithm was tested against K-Cor data taken between 29 April 2014 and 20 February 2017, on days that the MLSO website marked as containing CMEs. This resulted in testing of 139 days' worth of data containing 171 CMEs. The detection rate varied from close to 80% in 2014-2015, when solar activity was high, down to as low as 20-30% in 2017, when activity was low. The difference in effectiveness with solar cycle is attributed to the difference in relative prevalence of strong CMEs between active and quiet periods. There were also twelve false detections during this time period, leading to an average false detection rate of 8.6% on any given day. However, half of the false detections were clustered into two short periods of a few days each, when special conditions prevailed that increased the false detection rate. The K-Cor data were also compared with major Solar Energetic Particle (SEP) storms during this time period. There were three SEP events detected either at Earth or at one of the two STEREO spacecraft while K-Cor was observing during the relevant time period. The K-Cor CME detection algorithm successfully generated alerts for two of these events, with lead times of 1-3 hours before the SEP onset at 1 AU. The third event was not detected by the automatic algorithm because of the unusually broad width of the CME in position angle.
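A toy running-difference detector in the spirit of SEEDS can illustrate the idea. This is a sketch under our own simplifying assumptions, not the K-Cor pipeline; images are assumed already unwrapped to position angle versus radius:

```python
import numpy as np

def detect_cme(frames, n_sigma=6.0, min_angles=10):
    """Flag a frame when its brightness increase over the previous frame
    is anomalous across enough position angles, as a CME front is.
    frames: list of 2D arrays indexed (position angle, radius)."""
    alerts = []
    for k in range(1, len(frames)):
        diff = frames[k] - frames[k - 1]             # running difference
        noise = np.std(frames[k - 1])                # crude noise estimate
        hot_angles = diff.max(axis=1) > n_sigma * noise  # per-PA max jump
        if hot_angles.sum() >= min_angles:
            alerts.append(k)
    return alerts

# Synthetic sequence: 360 position angles x 100 radial bins of noise,
# with a bright front spanning 40 degrees of position angle in frame 3.
rng = np.random.default_rng(2)
frames = [rng.normal(0.0, 1.0, (360, 100)) for _ in range(5)]
frames[3][100:140, 20:60] += 30.0
print(detect_cme(frames))   # → [3]
```

Requiring the brightening to span many position angles is what suppresses single-pixel noise spikes; the trade-off against faint, narrow CMEs mirrors the detection-rate drop at solar minimum reported above.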
On comprehensive recovery of an aftershock sequence with cross correlation
NASA Astrophysics Data System (ADS)
Kitov, I.; Bobrov, D.; Coyne, J.; Turyomurugyendo, G.
2012-04-01
We have introduced cross correlation between seismic waveforms as a technique for signal detection and automatic event building at the International Data Centre (IDC) of the Comprehensive Nuclear-Test-Ban Treaty Organization. The intuition behind signal detection is simple: small and mid-sized seismic events close in space should produce similar signals at the same seismic stations. Equivalently, these signals have to be characterized by a high cross correlation coefficient. For array stations with many individual sensors distributed over a large area, signals from events at distances beyond, say, 50 km are subject to destructive interference when cross correlated, due to changing time delays between the various channels. Thus, any cross correlation coefficient above some predefined threshold can be considered a signature of a valid signal. With a dense grid of master events (spacing between adjacent masters of 20 km to 50 km, corresponding to the statistically estimated correlation distance) with high-quality (signal-to-noise ratio above 10) template waveforms at primary array stations of the International Monitoring System, one can detect signals from, and then build, natural and manmade seismic events close to the master ones. The use of cross correlation allows detecting smaller signals (sometimes below noise level) than the current IDC detecting techniques provide. As a result it is possible to automatically build from 50% to 100% more valid seismic events than are included in the Reviewed Event Bulletin (REB). We have developed a tentative pipeline for automatic processing at the IDC. It includes three major stages. First, we calculate the cross correlation coefficient between a given master event and continuous waveforms at the same stations, and carry out signal detection based on the statistical behavior of the signal-to-noise ratio of the cross correlation coefficient. 
Second, a thorough screening is performed for all obtained signals using f-k analysis and F-statistics applied to the cross-correlation traces at individual channels of all included array stations. Third, local (i.e. confined to the correlation distance around the master event) association of origin times of all qualified signals is performed. These origin times are calculated from the arrival times of the signals, reduced to origin times by the travel times from the master event. An aftershock sequence of a mid-size earthquake is an ideal case to test cross correlation techniques for automatic event building: all events should be close to the mainshock and occur within several days. Here we analyse the aftershock sequence of an earthquake in the North Atlantic Ocean with mb(IDC)=4.79. The REB includes 38 events at distances less than 150 km from the mainshock. Our ultimate goal is to exercise the complete iterative procedure to find all possible aftershocks. We start with the mainshock and recover ten aftershocks with the largest number of stations to produce an initial set of master events with the highest quality templates. Then we find all aftershocks in the REB and many additional events which were not originally found by the IDC. Using all events found after the first iteration as master events, we find new events, which are also used in the next iteration. The iterative process stops when no new events can be found. In that sense the final set of aftershocks obtained with cross correlation is a comprehensive one.
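The first stage, detection by cross correlation against a master-event template, can be sketched for a single channel as follows (a minimal matched-filter illustration, not the IDC pipeline):

```python
import numpy as np

def correlation_detector(trace, template, threshold=0.7):
    """Slide a master-event template along a continuous trace and return
    (offset, cc) pairs where the normalized cross correlation coefficient
    exceeds the threshold."""
    n = len(template)
    t = template - template.mean()
    t_norm = np.linalg.norm(t)
    detections = []
    for i in range(len(trace) - n + 1):
        w = trace[i:i + n]
        w = w - w.mean()
        denom = np.linalg.norm(w) * t_norm
        if denom == 0:
            continue
        cc = float(np.dot(w, t) / denom)
        if cc > threshold:
            detections.append((i, cc))
    return detections

# Master template: a windowed wavelet; trace: noise with a buried replica.
rng = np.random.default_rng(3)
template = np.sin(np.linspace(0, 6 * np.pi, 120)) * np.hanning(120)
trace = rng.normal(0, 0.3, 3000)
trace[800:920] += template          # replica hidden below the noise peaks
hits = correlation_detector(trace, template, threshold=0.7)
print(max(hits, key=lambda h: h[1])[0] if hits else None)
```

Because the correlation coefficient is normalized, the replica is recovered even though its amplitude is comparable to the noise, which is the property that lets this approach build events below the conventional detection threshold.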
Using Antelope and Seiscomp in the framework of the Romanian Seismic Network
NASA Astrophysics Data System (ADS)
Marius Craiu, George; Craiu, Andreea; Marmureanu, Alexandru; Neagoe, Cristian
2014-05-01
The National Institute for Earth Physics (NIEP) operates a real-time seismic network designed to monitor the seismic activity on Romanian territory, dominated by the Vrancea intermediate-depth (60-200 km) earthquakes. The NIEP real-time network currently consists of 102 stations and two seismic arrays equipped with different high-quality digitizers (Kinemetrics K2, Quanterra Q330, Quanterra Q330HR, PS6-26, Basalt), broadband and short-period seismometers (CMG3ESP, CMG40T, KS2000, KS54000, CMG3T, STS2, SH-1, S13, Mark L4C, Ranger, GS21, Mark 22) and acceleration sensors (Kinemetrics EpiSensor). The primary goal of the real-time seismic network is to provide earthquake parameters from many broadband stations with a high dynamic range, for more rapid and accurate computation of earthquake locations and magnitudes. Data are acquired through the SeedLink and Antelope program packages; the completely automated Antelope seismological system is run at the Data Center in Măgurele. The Antelope data acquisition and processing software is used for real-time processing and post-processing. The Antelope real-time system provides automatic event detection, arrival picking, event location, and magnitude calculation. It also provides graphical displays and automatic location in near real time after a local, regional or teleseismic event has occurred. SeisComP3 is another automated system run at the NIEP, which provides the following features: data acquisition, data quality control, real-time data exchange and processing, network status monitoring, issuing event alerts, waveform archiving and data distribution, automatic event detection and location, and easy access to relevant information about stations, waveforms, and recent earthquakes. The main goal of this paper is to compare these two data acquisition systems in order to improve their detection capabilities, location accuracy, and magnitude and depth determination, and to reduce the RMS and other location errors.
Novel Hierarchical Fall Detection Algorithm Using a Multiphase Fall Model.
Hsieh, Chia-Yeh; Liu, Kai-Chun; Huang, Chih-Ning; Chu, Woei-Chyn; Chan, Chia-Tai
2017-02-08
Falls are the primary cause of accidents for the elderly in the living environment. Reducing hazards in the living environment and performing exercises for training balance and muscles are the common strategies for fall prevention. However, falls cannot be avoided completely; fall detection provides an alarm that can decrease injuries or death caused by the lack of rescue. The automatic fall detection system has opportunities to provide real-time emergency alarms for improving the safety and quality of home healthcare services. Two common technical challenges are also tackled in order to provide a reliable fall detection algorithm, including variability and ambiguity. We propose a novel hierarchical fall detection algorithm involving threshold-based and knowledge-based approaches to detect a fall event. The threshold-based approach efficiently supports the detection and identification of fall events from continuous sensor data. A multiphase fall model is utilized, including free fall, impact, and rest phases for the knowledge-based approach, which identifies fall events and has the potential to deal with the aforementioned technical challenges of a fall detection system. Seven kinds of falls and seven types of daily activities arranged in an experiment are used to explore the performance of the proposed fall detection algorithm. The overall performances of the sensitivity, specificity, precision, and accuracy using a knowledge-based algorithm are 99.79%, 98.74%, 99.05% and 99.33%, respectively. The results show that the proposed novel hierarchical fall detection algorithm can cope with the variability and ambiguity of the technical challenges and fulfill the reliability, adaptability, and flexibility requirements of an automatic fall detection system with respect to the individual differences.
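The threshold-based part of the multiphase model (free fall, then impact, then rest) can be sketched as follows; all thresholds below are illustrative assumptions, not the paper's tuned values:

```python
import numpy as np

G = 9.81  # gravity, m/s^2

def detect_fall(acc_mag, fs=100,
                free_fall_thr=0.5 * G, impact_thr=3.0 * G,
                rest_tol=0.3 * G, rest_window=1.0, max_gap=1.0):
    """Multiphase threshold sketch: a fall is a free-fall phase (magnitude
    well below 1 g), followed within max_gap seconds by an impact spike,
    followed by a rest phase near 1 g.  Returns the impact sample index,
    or None if no fall is found.  acc_mag is in m/s^2."""
    acc = np.asarray(acc_mag, dtype=float)
    for i in np.where(acc < free_fall_thr)[0]:           # free-fall candidates
        j_end = min(len(acc), i + int(max_gap * fs))
        impacts = np.where(acc[i:j_end] > impact_thr)[0]  # impact spike after it?
        if len(impacts) == 0:
            continue
        j = i + impacts[0]
        k0 = j + int(0.5 * fs)                            # settle, then check rest
        k1 = k0 + int(rest_window * fs)
        if k1 <= len(acc) and np.all(np.abs(acc[k0:k1] - G) < rest_tol):
            return int(j)
    return None

# Synthetic magnitude trace at 100 Hz: standing, free fall, impact, lying still.
fs = 100
acc = np.full(400, G)
acc[100:140] = 1.0        # free fall: near 0 g for 0.4 s
acc[140:145] = 40.0       # impact spike: ~4 g
print(detect_fall(acc, fs=fs))   # → 140
```

Requiring all three phases in order is what separates a genuine fall from the ambiguous single-threshold cases (sitting down hard triggers the impact phase alone; jumping triggers free fall alone).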
Novel Hierarchical Fall Detection Algorithm Using a Multiphase Fall Model
Hsieh, Chia-Yeh; Liu, Kai-Chun; Huang, Chih-Ning; Chu, Woei-Chyn; Chan, Chia-Tai
2017-01-01
Falls are the primary cause of accidents for the elderly in the living environment. Reducing hazards in the living environment and performing exercises for training balance and muscles are the common strategies for fall prevention. However, falls cannot be avoided completely; fall detection provides an alarm that can decrease injuries or death caused by the lack of rescue. The automatic fall detection system has opportunities to provide real-time emergency alarms for improving the safety and quality of home healthcare services. Two common technical challenges are also tackled in order to provide a reliable fall detection algorithm, including variability and ambiguity. We propose a novel hierarchical fall detection algorithm involving threshold-based and knowledge-based approaches to detect a fall event. The threshold-based approach efficiently supports the detection and identification of fall events from continuous sensor data. A multiphase fall model is utilized, including free fall, impact, and rest phases for the knowledge-based approach, which identifies fall events and has the potential to deal with the aforementioned technical challenges of a fall detection system. Seven kinds of falls and seven types of daily activities arranged in an experiment are used to explore the performance of the proposed fall detection algorithm. The overall performances of the sensitivity, specificity, precision, and accuracy using a knowledge-based algorithm are 99.79%, 98.74%, 99.05% and 99.33%, respectively. The results show that the proposed novel hierarchical fall detection algorithm can cope with the variability and ambiguity of the technical challenges and fulfill the reliability, adaptability, and flexibility requirements of an automatic fall detection system with respect to the individual differences. PMID:28208694
Detection of dominant flow and abnormal events in surveillance video
NASA Astrophysics Data System (ADS)
Kwak, Sooyeong; Byun, Hyeran
2011-02-01
We propose an algorithm for abnormal event detection in surveillance video. The proposed algorithm is based on a semi-unsupervised learning method, a feature-based approach that does not detect moving objects individually. It identifies dominant flow in crowded environments without individual object tracking, using a latent Dirichlet allocation model. It can also automatically detect and localize an abnormally moving object in real-life video. Performance tests were run on several real-life databases, and the results show that the proposed algorithm can efficiently detect abnormally moving objects in real time. The algorithm can be applied to any situation in which abnormality is defined by direction or speed of motion.
NASA Astrophysics Data System (ADS)
Fink, Wolfgang; Brooks, Alexander J.-W.; Tarbell, Mark A.; Dohm, James M.
2017-05-01
Autonomous reconnaissance missions are called for in extreme environments, as well as in potentially hazardous (e.g., the theatre, disaster-stricken areas, etc.) or inaccessible operational areas (e.g., planetary surfaces, space). Such future missions will require increasing degrees of operational autonomy, especially when following up on transient events. Operational autonomy encompasses: (1) Automatic characterization of operational areas from different vantages (i.e., spaceborne, airborne, surface, subsurface); (2) automatic sensor deployment and data gathering; (3) automatic feature extraction including anomaly detection and region-of-interest identification; (4) automatic target prediction and prioritization; (5) and subsequent automatic (re-)deployment and navigation of robotic agents. This paper reports on progress towards several aspects of autonomous C4ISR systems, including: Caltech-patented and NASA award-winning multi-tiered mission paradigm, robotic platform development (air, ground, water-based), robotic behavior motifs as the building blocks for autonomous tele-commanding, and autonomous decision making based on a Caltech-patented framework comprising sensor-data-fusion (feature-vectors), anomaly detection (clustering and principal component analysis), and target prioritization (hypothetical probing).
Vitikainen, Anne-Mari; Mäkelä, Elina; Lioumis, Pantelis; Jousmäki, Veikko; Mäkelä, Jyrki P
2015-09-30
The use of navigated repetitive transcranial magnetic stimulation (rTMS) for mapping speech-related brain areas has recently been shown to be useful in the preoperative workup of epilepsy and tumor patients. However, substantial inter- and intraobserver variability and non-optimal replicability of the rTMS results have been reported, and a need for further development of the methodology is recognized. In TMS motor cortex mappings the evoked responses can be quantitatively monitored by electromyographic recordings; however, no comparably accessible setup exists for speech mappings. We present an accelerometer-based setup for detecting vocalization-related larynx vibrations, combined with an automatic routine for voice onset detection, for rTMS speech mapping using a naming task. The results produced by the automatic routine were compared with manually reviewed video recordings. The new method was applied in routine navigated rTMS speech mapping for 12 consecutive patients during preoperative workup for epilepsy or tumor surgery. The automatic routine correctly detected 96% of the voice onsets, yielding 96% sensitivity and 71% specificity. The majority (63%) of the misdetections were related to visible throat movements, extra voices before the response, or delayed naming of the previous stimuli. No-response errors were correctly detected in 88% of events. The proposed setup for automatic detection of voice onsets provides quantitative additional data for analyzing rTMS-induced speech response modifications. The objectively defined speech response latencies increase the repeatability, reliability and stratification of the rTMS results.
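The abstract does not specify the voice-onset routine in detail; a minimal energy-threshold sketch of the general idea, with invented frame length, baseline window, and threshold factor, could look like this: the onset is the first frame whose short-time energy exceeds a multiple of the pre-stimulus baseline.

```python
import numpy as np

# Hedged sketch of an automatic voice-onset detector: frame the
# accelerometer trace, estimate a pre-stimulus baseline energy, and
# report the first frame exceeding k times that baseline.
def voice_onset(signal, fs, frame_ms=10, k=5.0, baseline_ms=200):
    n = int(fs * frame_ms / 1000)
    frames = signal[: len(signal) // n * n].reshape(-1, n)
    energy = (frames ** 2).mean(axis=1)
    baseline = energy[: int(baseline_ms / frame_ms)].mean()
    hits = np.where(energy > k * baseline)[0]
    return hits[0] * n / fs if hits.size else None

rng = np.random.default_rng(0)
fs = 8000
sig = rng.normal(0, 0.01, fs)                # 1 s of baseline noise
sig[4000:6000] += rng.normal(0, 0.2, 2000)   # vocalization from 0.5 s
print(round(voice_onset(sig, fs), 3))        # onset near 0.5 s
```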
Diagnosis diagrams for passing signals on an automatic block signaling railway section
NASA Astrophysics Data System (ADS)
Spunei, E.; Piroi, I.; Chioncel, C. P.; Piroi, F.
2018-01-01
This work presents a diagnosis method for railway traffic security installations. More specifically, the authors present a series of diagnosis charts for passing signals on a railway section equipped with an automatic block signaling installation. These charts are based on the operational electrical schematics and are subsequently used to develop a diagnosis software package. The resulting software package contributes substantially to reducing failure detection and repair times for these types of installation faults. Its use also prevents wrong decisions in the fault detection process, decisions that may lead to longer repair times and, sometimes, to railway traffic events.
Detection of cough signals in continuous audio recordings using hidden Markov models.
Matos, Sergio; Birring, Surinder S; Pavord, Ian D; Evans, David H
2006-06-01
Cough is a common symptom of many respiratory diseases. The evaluation of its intensity and frequency of occurrence could provide valuable clinical information in the assessment of patients with chronic cough. In this paper we propose the use of hidden Markov models (HMMs) to automatically detect cough sounds from continuous ambulatory recordings. The recording system consists of a digital sound recorder and a microphone attached to the patient's chest. The recognition algorithm follows a keyword-spotting approach, with cough sounds representing the keywords. It was trained on 821 min selected from 10 ambulatory recordings, including 2473 manually labeled cough events, and tested on a database of nine recordings from separate patients with a total recording time of 3060 min and comprising 2155 cough events. The average detection rate was 82% at a false alarm rate of seven events/h, when considering only events above an energy threshold relative to each recording's average energy. These results suggest that HMMs can be applied to the detection of cough sounds from ambulatory patients. A postprocessing stage to perform a more detailed analysis on the detected events is under development, and could allow the rejection of some of the incorrectly detected events.
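The keyword-spotting decision described above can be illustrated with a self-contained toy: a minimal discrete-observation forward algorithm scores a segment under a "cough" model and a "background" model and keeps whichever explains it better. The hand-set models and binary symbols below are stand-ins for acoustically trained HMMs over spectral features.

```python
import numpy as np

# Minimal discrete HMM scored with the forward algorithm (scaled).
def forward_loglik(obs, start, trans, emit):
    alpha = start * emit[:, obs[0]]
    loglik = np.log(alpha.sum()); alpha /= alpha.sum()
    for o in obs[1:]:
        alpha = (alpha @ trans) * emit[:, o]
        loglik += np.log(alpha.sum()); alpha /= alpha.sum()
    return loglik

# Two hand-set models sharing the same transitions: the "cough" model
# favours bursts of symbol 1 (high energy), "background" favours symbol 0.
start = np.array([1.0, 0.0])
trans = np.array([[0.7, 0.3], [0.3, 0.7]])
cough_emit = np.array([[0.2, 0.8], [0.8, 0.2]])
noise_emit = np.array([[0.9, 0.1], [0.9, 0.1]])

def is_cough(obs):
    # Keyword-spotting decision: which model explains the segment better?
    return forward_loglik(obs, start, trans, cough_emit) > \
           forward_loglik(obs, start, trans, noise_emit)

print(is_cough([1, 1, 0, 1, 1]))  # burst of high-energy frames
```

A real detector would use continuous-observation HMMs over MFCC-like features and a background/filler model, as in standard keyword spotting.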
2014-01-01
Background Adverse drug reactions and adverse drug events (ADEs) are major public health issues. Many different prospective tools for the automated detection of ADEs in hospital databases have been developed and evaluated. The objective of the present study was to evaluate an automated method for the retrospective detection of ADEs with hyperkalaemia during inpatient stays. Methods We used a set of complex detection rules to take account of the patient’s clinical and biological context and the chronological relationship between the causes and the expected outcome. The dataset consisted of 3,444 inpatient stays in a French general hospital. An automated review was performed for all data and the results were compared with those of an expert chart review. The complex detection rules’ analytical quality was evaluated for ADEs. Results In terms of recall, 89.5% of ADEs with hyperkalaemia “with or without an abnormal symptom” were automatically identified (including all three serious ADEs). In terms of precision, 63.7% of the automatically identified ADEs with hyperkalaemia were true ADEs. Conclusions The use of context-sensitive rules appears to improve the automated detection of ADEs with hyperkalaemia. This type of tool may have an important role in pharmacoepidemiology via the routine analysis of large inter-hospital databases. PMID:25212108
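The study's actual rule set is not reproduced in the abstract; a hedged sketch of one context-sensitive rule in its spirit, flagging a potential ADE when serum potassium is high and a potassium-raising drug preceded it, might look as follows. The threshold, drug list, and causal window are illustrative, not the study's values.

```python
from datetime import datetime, timedelta

K_THRESHOLD = 5.3           # mmol/L, illustrative hyperkalaemia cut-off
WINDOW = timedelta(days=3)  # illustrative causal window
RISK_DRUGS = {"spironolactone", "potassium chloride", "enalapril"}

def flag_ade(drug_events, lab_events):
    """drug_events: [(time, drug)]; lab_events: [(time, K in mmol/L)]."""
    alerts = []
    for t_lab, k in lab_events:
        if k < K_THRESHOLD:
            continue
        # Chronological context: a risk drug given within the window
        # before the abnormal lab value.
        if any(d in RISK_DRUGS and timedelta(0) <= t_lab - t_d <= WINDOW
               for t_d, d in drug_events):
            alerts.append((t_lab, k))
    return alerts

drugs = [(datetime(2014, 3, 1, 8), "spironolactone")]
labs = [(datetime(2014, 3, 2, 7), 5.9), (datetime(2014, 3, 10, 7), 6.0)]
print(flag_ade(drugs, labs))  # only the value inside the window is flagged
```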
Ficheur, Grégoire; Chazard, Emmanuel; Beuscart, Jean-Baptiste; Merlin, Béatrice; Luyckx, Michel; Beuscart, Régis
2014-09-12
NASA Astrophysics Data System (ADS)
Kitov, I.; Bobrov, D.; Rozhkov, M.
2016-12-01
Aftershocks of larger earthquakes are an important source of information on the distribution and evolution of stresses and deformations in the pre-seismic, co-seismic and post-seismic phases. For the International Data Centre (IDC) of the Comprehensive Nuclear-Test-Ban Treaty Organization (CTBTO), the largest aftershock sequences are also a challenge for automatic and interactive processing. The rate of events recorded by two or more seismic stations of the International Monitoring System (IMS) from a relatively small aftershock area may reach hundreds per hour (e.g. Sumatra 2004 and Tohoku 2011). Moreover, there are thousands of reflected/refracted phases per hour with azimuth and slowness within the uncertainty limits of the first P-waves. Misassociation of these later phases, both regular and site-specific, as the first P-wave results in the creation of numerous wrong event hypotheses in the automatic IDC pipeline. In turn, interactive review of such wrong hypotheses is a direct waste of analysts' resources. Waveform cross correlation (WCC) is a powerful tool to separate coda phases from actual P-wave arrivals and to fully exploit the repetitive character of waveforms generated by events close in space. Array seismic stations of the IMS enhance the performance of WCC in two important respects: they reduce the detection threshold and effectively suppress arrivals from all sources except master events. An IDC-specific aftershock tool has been developed and merged with the standard IDC pipeline. The tool includes several procedures: creation of master events consisting of waveform templates at ten or more IMS stations; cross correlation (CC) of real-time waveforms with these templates; association of arrivals detected on the CC traces into event hypotheses; building events that match IDC quality criteria; and resolution of conflicts between event hypotheses created by neighboring master events. The final cross-correlation standard event list (XSEL) is the starting point of interactive analysis.
Since global monitoring of underground nuclear tests is based on historical and synthetic data, each aftershock sequence can be tested for CTBT violations that use large earthquakes as an evasion scenario.
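The core of such a cross-correlation detector is template matching: slide a master-event waveform along the continuous trace and declare a detection wherever the normalized correlation coefficient is high. The following numpy sketch is a hedged, single-channel illustration; real pipelines stack correlations over array channels and stations.

```python
import numpy as np

def cc_detect(trace, template, threshold=0.8):
    # Normalized cross-correlation of a master-event template against
    # continuous data; returns candidate detection indices and the CC trace.
    n = len(template)
    t = (template - template.mean()) / template.std()
    cc = np.empty(len(trace) - n + 1)
    for i in range(len(cc)):
        w = trace[i:i + n]
        w = (w - w.mean()) / (w.std() + 1e-12)
        cc[i] = (w @ t) / n
    return np.where(cc > threshold)[0], cc

rng = np.random.default_rng(0)
template = np.sin(np.linspace(0, 6 * np.pi, 100)) * np.hanning(100)
trace = rng.normal(0, 0.2, 1000)
trace[400:500] += template          # buried repeat of the master event
picks, cc = cc_detect(trace, template)
print(picks)                        # indices clustered around sample 400
```

The threshold of 0.8 is illustrative; operational systems calibrate it against the noise distribution of the CC trace.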
ERIC Educational Resources Information Center
Shalgi, Shani; Deouell, Leon Y.
2007-01-01
Automatic change detection is a fundamental capacity of the human brain. In audition, this capacity is indexed by the mismatch negativity (MMN) event-related potential, which is putatively supported by a network consisting of superior temporal and frontal nodes. The aim of this study was to elucidate the roles of these nodes within the neural…
A consistent and uniform research earthquake catalog for the AlpArray region: preliminary results.
NASA Astrophysics Data System (ADS)
Molinari, I.; Bagagli, M.; Kissling, E. H.; Diehl, T.; Clinton, J. F.; Giardini, D.; Wiemer, S.
2017-12-01
The AlpArray initiative (www.alparray.ethz.ch) is a large-scale European collaboration (about 50 institutes involved) to study the entire Alpine orogen at high resolution with a variety of geoscientific methods. AlpArray provides unprecedentedly uniform station coverage for the region, with more than 650 broadband seismic stations, 300 of which are temporary. The AlpArray Seismic Network (AASN), a joint effort of 25 institutes from 10 nations, has been operating since January 2016 and is expected to continue until the end of 2018. In this study, we establish a uniform earthquake catalog for the Greater Alpine region during the operation period of the AASN, with a target completeness of M2.5. The catalog has two main goals: 1) calculation of consistent and precise hypocenter locations, and 2) provision of preliminary but uniform magnitude estimates across the region. The procedure is based on automatic high-quality P- and S-wave pickers that provide consistent phase arrival times together with a picking-quality assessment. First, we detect all events in the region in 2016/2017 using an STA/LTA-based detector. Among the detected events, we select 50 geographically homogeneously distributed events with magnitudes ≥2.5 that are representative of the entire catalog. We manually pick the selected events to establish a consistent P- and S-phase reference data set, including arrival-time uncertainties. The reference data are used to tune the automatic pickers and to assess their performance. In a first iteration, a simple P-picker algorithm is applied to the entire dataset, providing initial picks for the advanced MannekenPix (MPX) algorithm. In a second iteration, the MPX picker provides consistent and reliable automatic first-arrival P picks together with a pick-quality estimate. The derived automatic P picks are then used as initial values for a multi-component S-phase picking algorithm.
Subsequently, the automatic picks of all well-locatable earthquakes will be used to calculate final minimum 1D P and S velocity models for the region with appropriate station corrections. Finally, all events are relocated with the NonLinLoc algorithm in combination with the updated 1D models. The proposed procedure represents the first step towards a uniform earthquake catalog for the entire Greater Alpine region using the AASN.
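The STA/LTA detector used for the initial event scan is a standard trigger: the ratio of a short-term to a long-term average of signal energy spikes at an impulsive arrival. A minimal numpy version with trailing windows follows; the window lengths and the trigger threshold of 4 are illustrative choices, not the study's parameters.

```python
import numpy as np

def sta_lta(x, n_sta=20, n_lta=200):
    # Trailing-window STA/LTA via cumulative sums of signal energy.
    e = np.concatenate([np.zeros(1), np.cumsum(x ** 2)])
    sta = (e[n_sta:] - e[:-n_sta]) / n_sta    # short trailing window
    lta = (e[n_lta:] - e[:-n_lta]) / n_lta    # long trailing window
    m = min(len(sta), len(lta))               # align by window end
    return sta[-m:] / (lta[-m:] + 1e-12)

rng = np.random.default_rng(0)
trace = rng.normal(0, 1.0, 3000)
trace[1500:1600] += rng.normal(0, 6.0, 100)   # impulsive arrival

ratio = sta_lta(trace)
peak_sample = int(ratio.argmax()) + 200 - 1   # map back to trace samples
print(peak_sample)                            # near the onset at sample 1500
```

Libraries such as ObsPy provide tuned implementations of this trigger for production use.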
AN AUTOMATIC DETECTION METHOD FOR EXTREME-ULTRAVIOLET DIMMINGS ASSOCIATED WITH SMALL-SCALE ERUPTION
DOE Office of Scientific and Technical Information (OSTI.GOV)
Alipour, N.; Safari, H.; Innes, D. E.
2012-02-10
Small-scale extreme-ultraviolet (EUV) dimming often surrounds sites of energy release in the quiet Sun. This paper describes a method for the automatic detection of these small-scale EUV dimmings using a feature-based classifier. The method is demonstrated using sequences of 171 Å images taken by the STEREO/Extreme UltraViolet Imager (EUVI) on 2007 June 13 and by the Solar Dynamics Observatory/Atmospheric Imaging Assembly (AIA) on 2010 August 27. The feature identification relies on recognizing structure in sequences of space-time 171 Å images using the Zernike moments of the images. The Zernike moments of space-time slices with events and non-events are distinctive enough to be separated using a support vector machine (SVM) classifier. The SVM is trained using 150 event and 700 non-event space-time slices. We find a total of 1217 events in the EUVI images and 2064 events in the AIA images on the days studied. Most of the events are found between latitudes −35° and +35°. The sizes and expansion speeds of the central dimming regions are extracted using a region-growing algorithm. The histograms of the sizes in both EUVI and AIA follow a steep power law with a slope of about −5. The AIA slope extends to smaller sizes before turning over. The mean velocity of 1325 dimming regions seen by AIA is found to be about 14 km s⁻¹.
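The classification stage of such a pipeline is straightforward to sketch: feature vectors (Zernike moments of space-time slices in the paper) are fed to an SVM trained on labeled event and non-event examples. In the hedged toy below the "moments" are synthetic clusters; a real run would compute Zernike moments from the EUV image sequences.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Synthetic stand-ins for Zernike-moment feature vectors: 150 event and
# 700 non-event space-time slices, mirroring the paper's training counts.
n_features = 16
events = rng.normal(1.0, 0.5, size=(150, n_features))
non_events = rng.normal(0.0, 0.5, size=(700, n_features))

X = np.vstack([events, non_events])
y = np.r_[np.ones(150), np.zeros(700)]

clf = SVC(kernel="rbf").fit(X, y)

probe = rng.normal(1.0, 0.5, size=(1, n_features))  # unseen event-like slice
print(clf.predict(probe))                           # classified as an event
```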
DOE Office of Scientific and Technical Information (OSTI.GOV)
Teuton, Jeremy R.; Griswold, Richard L.; Mehdi, Beata L.
Precise analysis of both (S)TEM images and video is a time- and labor-intensive process. As an example, determining when crystal growth and shrinkage occur during the dynamic process of Li dendrite deposition and stripping involves manually scanning through each frame in the video to extract a specific set of frames/images. For large numbers of images this process can be very time consuming, so a fast and accurate automated method is desirable. Given this need, we developed software that analyzes video compression statistics to detect and characterize events in large data sets. The software works by converting the data into a series of images, which it compresses into an MPEG-2 video using the open source "avconv" utility [1]. The software does not use the video itself, but rather analyzes the video statistics from the first pass of the video encoding, which avconv records in its log file. This file contains statistics for each frame of the video, including the frame quality, intra-texture and predicted-texture bits, and forward and backward motion vector resolution, among others. In all, avconv records 15 statistics for each frame. By combining different statistics, we have been able to detect events in various types of data. We have developed an interactive tool for exploring the data and the statistics that aids the analyst in selecting useful statistics for each analysis. Going forward, an algorithm for detecting and possibly describing events automatically can be written based on the relevant statistic(s) for each data type.
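The core idea, reading per-frame statistics from an encoder's first-pass log and flagging frames where a statistic jumps, can be sketched as below. The log line format shown is illustrative; a real avconv/ffmpeg pass-1 log must be parsed according to its actual layout.

```python
import re
import numpy as np

# Illustrative first-pass log: one line per frame with a few statistics.
log = """\
frame=0 quality=2.0 itex=1200 ptex=300 mv=50
frame=1 quality=2.1 itex=1210 ptex=310 mv=52
frame=2 quality=2.0 itex=1190 ptex=305 mv=49
frame=3 quality=6.8 itex=5400 ptex=2100 mv=480
frame=4 quality=2.1 itex=1230 ptex=320 mv=55
"""

def frame_stat(log_text, key):
    # Extract one statistic for every frame line.
    pat = re.compile(rf"{key}=([\d.]+)")
    return np.array([float(pat.search(line).group(1))
                     for line in log_text.splitlines()])

def flag_jumps(values, z=3.0):
    # Robust z-score against median/MAD, so the jump does not mask itself.
    med = np.median(values)
    mad = np.median(np.abs(values - med)) + 1e-12
    return np.where(np.abs(values - med) / mad > z)[0]

print(flag_jumps(frame_stat(log, "itex")))  # the frame with a texture spike
```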
Monitoring the Earth's Atmosphere with the Global IMS Infrasound Network
NASA Astrophysics Data System (ADS)
Brachet, Nicolas; Brown, David; Mialle, Pierrick; Le Bras, Ronan; Coyne, John; Given, Jeffrey
2010-05-01
The Comprehensive Nuclear-Test-Ban Treaty Organization (CTBTO) is tasked with monitoring compliance with the Comprehensive Nuclear-Test-Ban Treaty (CTBT), which bans nuclear weapon explosions underground, in the oceans, and in the atmosphere. The verification regime includes a globally distributed network of seismic, hydroacoustic, infrasound and radionuclide stations which collect and transmit data to the International Data Centre (IDC) in Vienna, Austria, shortly after the data are recorded at each station. The infrasound network defined in the Protocol of the CTBT comprises 60 infrasound array stations. Each array is built according to the same technical specifications; it is typically composed of 4 to 9 sensors with an aperture of 1 to 3 km. At the end of 2000, only one infrasound station was transmitting data to the IDC. Since then, 41 additional stations have been installed, and 70% of the infrasound network is currently certified and contributing data to the IDC. This constitutes the first global infrasound network ever built with such a large and uniform distribution of stations. Infrasound data at the IDC are processed at the station level using the Progressive Multi-Channel Correlation (PMCC) method for the detection and measurement of infrasound signals. The algorithm calculates the signal correlation between sensors of an infrasound array. If the signal is sufficiently correlated and consistent over an extended period of time and frequency range, a detection is created. Groups of detections are then categorized according to their propagation and waveform features, and a phase name is assigned for infrasound, seismic or noise detections. The categorization complements the PMCC algorithm so that false-alarm infrasound events do not overwhelm the IDC automatic association algorithm. Currently, 80 to 90% of the detections are identified as noise by the system.
Although the noise detections are not used to build events in the context of CTBT monitoring, they represent valuable data for other civil applications, such as the monitoring of natural hazards (volcanic activity, storm tracking) and climate change. Non-noise detections are used in network processing at the IDC along with the seismic and hydroacoustic technologies. The arrival phases detected with the three waveform technologies may be combined and used to locate events in an automatically generated bulletin of events. This automatic event bulletin is routinely reviewed by analysts during the interactive review process. However, the fusion of infrasound data with the other waveform technologies only recently (in early 2010) became part of the IDC operational system, after a software development and testing period that began in 2004. The build-up of the IMS infrasound network, the recent developments of the IDC infrasound software, and the progress accomplished during the last decade in the domain of real-time atmospheric modelling have allowed a better understanding of infrasound signals and the identification of a growing data set of ground-truth sources. These infrasonic sources are natural or man-made. Some of the detected signals are emitted by local or regional phenomena recorded by a single IMS infrasound station: man-made cultural activity, wind farms, aircraft, artillery exercises, ocean surf, thunderstorms, rumbling volcanoes, iceberg calving, aurora, and avalanches. Other signals may be recorded by several IMS infrasound stations at larger distances: ocean swell, sonic booms, and mountain-associated waves. Only a small fraction of events meet the event definition criteria relevant to the Treaty verification mission of the Organization. Candidate event types for the IDC Reviewed Event Bulletin include atmospheric or surface explosions, meteor explosions, rocket launches, signals from large earthquakes, and explosive volcanic eruptions.
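The PMCC principle, that a coherent wave crossing an array yields high pairwise correlations whose best time delays are mutually consistent (lag(1,3) ≈ lag(1,2) + lag(2,3)), can be illustrated with a hedged toy on three synthetic sensor traces. This is only the consistency idea; the actual PMCC algorithm works progressively over sub-arrays, time windows, and frequency bands.

```python
import numpy as np

def pairwise_lag(a, b, max_lag=20):
    # Best integer lag and its correlation coefficient between two traces.
    lags = range(-max_lag, max_lag + 1)
    cc = [np.corrcoef(a[max_lag + l:len(a) - max_lag + l],
                      b[max_lag:len(b) - max_lag])[0, 1] for l in lags]
    k = int(np.argmax(cc))
    return list(lags)[k], cc[k]

rng = np.random.default_rng(0)
sig = rng.normal(0, 1, 600)
# Three sensors see the same wavefront with increasing delay, plus noise.
s = [np.roll(sig, d) + rng.normal(0, 0.1, 600) for d in (0, 5, 9)]

l12, c12 = pairwise_lag(s[0], s[1])
l23, c23 = pairwise_lag(s[1], s[2])
l13, c13 = pairwise_lag(s[0], s[2])
consistent = abs(l13 - (l12 + l23)) <= 1   # closure of the delay triangle
print(l12, l23, l13, consistent)
```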
Monitoring Chewing and Eating in Free-Living Using Smart Eyeglasses.
Zhang, Rui; Amft, Oliver
2018-01-01
We propose 3-D-printed, personally fitted, regular-looking smart eyeglasses frames equipped with bilateral electromyography recording to monitor temporalis muscle activity for automatic dietary monitoring. The personal fitting supports electrode-skin contacts at the temple ear-bend and temple-end positions. We evaluated the smart monitoring eyeglasses in in-lab and free-living studies of food chewing and eating event detection with ten participants. The in-lab study was designed to explore three natural food hardness levels and to determine the parameters of an energy-based chewing cycle detection. Our free-living study investigated whether chewing monitoring and eating event detection using smart eyeglasses are feasible in free-living conditions. An eating event detection algorithm was developed to identify intake activities based on the estimated chewing rate. Results showed an average food hardness classification accuracy of 94%, and chewing cycle detection precision and recall above 90% for the in-lab study and above 77% for the free-living study, covering 122 hours of recordings. Eating event detection revealed 44 eating events with an average accuracy above 95%. We conclude that smart eyeglasses are suitable for monitoring chewing and eating events in free-living conditions and could even provide further insights into the wearer's natural chewing patterns.
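A minimal version of an energy-based chewing cycle detector of the kind mentioned above: rectify the EMG, smooth it into an envelope, and count threshold crossings as cycles. The sampling rate, smoothing window, and threshold below are illustrative, not the paper's tuned parameters.

```python
import numpy as np

def count_chewing_cycles(emg, fs=1000, thresh=0.5):
    # Rectify, smooth (0.1 s moving average), count rising edges.
    win = fs // 10
    envelope = np.convolve(np.abs(emg), np.ones(win) / win, "same")
    above = envelope > thresh
    onsets = np.where(above[1:] & ~above[:-1])[0]
    return len(onsets)

rng = np.random.default_rng(0)
fs, dur = 1000, 4
t = np.arange(fs * dur) / fs
# Synthetic EMG: 1.5 Hz chewing bursts riding on low background activity.
burst = (np.sin(2 * np.pi * 1.5 * t) > 0.6).astype(float)
emg = burst * rng.normal(0, 2.0, fs * dur) + rng.normal(0, 0.05, fs * dur)

print(count_chewing_cycles(emg))  # roughly one count per burst
```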
Extracting semantically enriched events from biomedical literature
2012-01-01
Background Research into event-based text mining from the biomedical literature has been growing in popularity to facilitate the development of advanced biomedical text mining systems. Such technology permits advanced search, which goes beyond document or sentence-based retrieval. However, existing event-based systems typically ignore additional information within the textual context of events that can determine, amongst other things, whether an event represents a fact, hypothesis, experimental result or analysis of results, whether it describes new or previously reported knowledge, and whether it is speculated or negated. We refer to such contextual information as meta-knowledge. The automatic recognition of such information can permit the training of systems allowing finer-grained searching of events according to the meta-knowledge that is associated with them. Results Based on a corpus of 1,000 MEDLINE abstracts, fully manually annotated with both events and associated meta-knowledge, we have constructed a machine learning-based system that automatically assigns meta-knowledge information to events. This system has been integrated into EventMine, a state-of-the-art event extraction system, in order to create a more advanced system (EventMine-MK) that not only extracts events from text automatically, but also assigns five different types of meta-knowledge to these events. The meta-knowledge assignment module of EventMine-MK performs with macro-averaged F-scores in the range of 57-87% on the BioNLP’09 Shared Task corpus. EventMine-MK has been evaluated on the BioNLP’09 Shared Task subtask of detecting negated and speculated events. Our results show that EventMine-MK can outperform other state-of-the-art systems that participated in this task. Conclusions We have constructed the first practical system that extracts both events and associated, detailed meta-knowledge information from biomedical literature. 
The automatically assigned meta-knowledge information can be used to refine search systems, in order to provide an extra search layer beyond entities and assertions, dealing with phenomena such as rhetorical intent, speculations, contradictions and negations. This finer grained search functionality can assist in several important tasks, e.g., database curation (by locating new experimental knowledge) and pathway enrichment (by providing information for inference). To allow easy integration into text mining systems, EventMine-MK is provided as a UIMA component that can be used in the interoperable text mining infrastructure, U-Compare. PMID:22621266
Extracting semantically enriched events from biomedical literature.
Miwa, Makoto; Thompson, Paul; McNaught, John; Kell, Douglas B; Ananiadou, Sophia
2012-05-23
Top-down and bottom-up modulation of brain structures involved in auditory discrimination.
Diekhof, Esther K; Biedermann, Franziska; Ruebsamen, Rudolf; Gruber, Oliver
2009-11-10
Auditory deviancy detection comprises both automatic and voluntary processing. Here, we investigated the neural correlates of different components of the sensory discrimination process using functional magnetic resonance imaging. Subliminal auditory processing of deviant events that were not detected led to activation in left superior temporal gyrus. On the other hand, both correct detection of deviancy and false alarms activated a frontoparietal network of attentional processing and response selection, i.e. this network was activated regardless of the physical presence of deviant events. Finally, activation in the putamen, anterior cingulate and middle temporal cortex depended on factual stimulus representations and occurred only during correct deviancy detection. These results indicate that sensory discrimination may rely on dynamic bottom-up and top-down interactions.
2008-10-01
Topics covered include anomaly detectors (ADs); Aeolos, a distributed intrusion detection and event correlation infrastructure; and STAND, a training-set sanitization technique applicable to ADs. Summary of findings: (a) automatic patch generation; (b) better patch management; (c) artificial diversity; (d) distributed anomaly detection.
Big Data solution for CTBT monitoring: CEA-IDC joint global cross correlation project
NASA Astrophysics Data System (ADS)
Bobrov, Dmitry; Bell, Randy; Brachet, Nicolas; Gaillard, Pierre; Kitov, Ivan; Rozhkov, Mikhail
2014-05-01
Waveform cross-correlation, when applied to historical datasets of seismic records, provides dramatic improvements in the detection, location, and magnitude estimation of natural and man-made seismic events. With correlation techniques, the amplitude threshold of signal detection can be reduced globally by a factor of 2 to 3 relative to the currently standard beamforming and STA/LTA detectors. The gain in sensitivity corresponds to a body-wave magnitude reduction of 0.3 to 0.4 units and doubles the number of events meeting high quality requirements (e.g. detection by three or more seismic stations of the International Monitoring System (IMS)). This gain is crucial for seismic monitoring under the Comprehensive Nuclear-Test-Ban Treaty. The International Data Centre (IDC) dataset includes more than 450,000 seismic events, tens of millions of raw detections, and continuous seismic data from the primary IMS stations since 2000. This high-quality dataset is a natural candidate for an extensive cross-correlation study and the basis of further enhancements in monitoring capabilities; without this historical dataset recorded by the permanent IMS seismic network, such improvements would not be feasible. However, due to the mismatch between the volume of data and the performance of standard information technology infrastructure, it is impossible to process all the data within a tolerable elapsed time. To tackle this problem, known as "Big Data", the CEA/DASE takes part in the French project "DataScale". One objective is to reanalyze 10 years of waveform data from the IMS network with the cross-correlation technique, using a dedicated high-performance computing (HPC) infrastructure operated by the Centre de Calcul Recherche et Technologie (CCRT) at the CEA of Bruyères-le-Châtel.
Within two years we plan to enhance the detection and phase association algorithms (also using machine learning and automatic classification) and to process about 30 terabytes of data provided by the IDC to update the world seismicity map. From the new events and those in the IDC Reviewed Event Bulletin, we will automatically create various sets of master-event templates that will be used for global event location by the CTBTO and CEA.
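The quoted correspondence between a factor-of-2-to-3 amplitude gain and a magnitude-threshold reduction of a few tenths follows from the logarithmic amplitude term in body-wave magnitude, mb ∝ log10(A/T), so Δmb = log10(gain); a short check:

```python
import math

# Halving the detectable amplitude lowers the magnitude threshold by
# log10(2) ≈ 0.30; a factor of 3 gives log10(3) ≈ 0.48, consistent with
# the roughly 0.3-0.4 units quoted above.
for factor in (2, 3):
    print(f"amplitude factor {factor}: delta mb = {math.log10(factor):.2f}")
```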
Workshop on Algorithms for Time-Series Analysis
NASA Astrophysics Data System (ADS)
Protopapas, Pavlos
2012-04-01
This Workshop covered the four major subjects listed below in two 90-minute sessions. Each talk or tutorial allowed questions, and concluded with a discussion. Classification: Automatic classification using machine-learning methods is becoming a standard in surveys that generate large datasets. Ashish Mahabal (Caltech) reviewed various methods, and presented examples of several applications. Time-Series Modelling: Suzanne Aigrain (Oxford University) discussed autoregressive models and multivariate approaches such as Gaussian Processes. Meta-classification/mixture of expert models: Karim Pichara (Pontificia Universidad Católica, Chile) described the substantial promise which machine-learning classification methods are now showing in automatic classification, and discussed how the various methods can be combined together. Event Detection: Pavlos Protopapas (Harvard) addressed methods of fast identification of events with low signal-to-noise ratios, enlarging on the characterization and statistical issues of low signal-to-noise ratios and rare events.
The chordate proteome history database.
Levasseur, Anthony; Paganini, Julien; Dainat, Jacques; Thompson, Julie D; Poch, Olivier; Pontarotti, Pierre; Gouret, Philippe
2012-01-01
The chordate proteome history database (http://ioda.univ-provence.fr) comprises some 20,000 evolutionary analyses of proteins from chordate species. Our main objective was to characterize and study the evolutionary histories of the chordate proteome, and in particular to detect genomic events and to enable automatic functional searches. Firstly, phylogenetic analyses based on high-quality multiple sequence alignments and a robust phylogenetic pipeline were performed for the whole protein and for each individual domain. Novel approaches were developed to identify orthologs/paralogs, and to predict gene duplication/gain/loss events and the occurrence of new protein architectures (domain gains, losses and shuffling). These important genetic events were localized on the phylogenetic trees and on the genomic sequence. Secondly, the phylogenetic trees were enhanced by the creation of phylogroups, whereby groups of orthologous sequences created using OrthoMCL were corrected based on the phylogenetic trees; gene family size and gene gain/loss in a given lineage could be deduced from the phylogroups. For each ortholog group obtained from the phylogenetic or the phylogroup analysis, functional information and expression data can be retrieved. Database searches can be performed easily using biological objects (a protein identifier, keyword or domain), but can also be based on events, e.g., domain exchange events can be retrieved. To our knowledge, this is the first database that links group clustering, phylogeny and automatic functional searches with the detection of important events occurring during genome evolution, such as the appearance of a new domain architecture.
NASA Astrophysics Data System (ADS)
Patanè, Domenico; Ferrari, Ferruccio; Giampiccolo, Elisabetta; Gresta, Stefano
Few automated data acquisition and processing systems operate on mainframes; some run on UNIX-based workstations and others on personal computers, equipped with either DOS/WINDOWS or UNIX-derived operating systems. Several large and complex software packages for automatic and interactive analysis of seismic data have been developed in recent years (mainly for UNIX-based systems). Some of these programs use a variety of artificial intelligence techniques. The first operational version of a new software package, named PC-Seism, for analyzing seismic data from a local network is presented in Patanè et al. (1999). This package, composed of three separate modules, provides an example of a new generation of visual object-oriented programs for interactive and automatic seismic data processing running on a personal computer. In this work, we mainly discuss the automatic procedures implemented in the ASDP (Automatic Seismic Data-Processing) module and its real-time application to data acquired by a seismic network running in eastern Sicily. This software uses a multi-algorithm approach and a new procedure, MSA (multi-station analysis), for signal detection, phase grouping and event identification and location. It is designed for efficient and accurate processing of local earthquake records provided by single-site and array stations. Results from ASDP processing of two different data sets recorded at Mt. Etna volcano by a regional network are analyzed to evaluate its performance. By comparing the ASDP pickings with those revised manually, the detection and subsequently the location capabilities of this software are assessed. The first data set is composed of 330 local earthquakes recorded in the Mt. Etna area during 1997 by the analog telemetry seismic network. The second data set comprises about 970 automatic locations of more than 2600 local events recorded by the present network at Mt. Etna during the last eruption (July 2001).
For the former data set, a comparison of the automatic results with the manual picks indicates that the ASDP module can accurately pick 80% of the P-waves and 65% of the S-waves. The on-line application to the latter data set shows that automatic locations are affected by larger errors, due to the preliminary setting of the configuration parameters in the program. However, both automatic ASDP and manual hypocenter locations are comparable within the estimated error bounds. New improvements of the PC-Seism software for on-line analysis are also discussed.
Progresses with Net-VISA on Global Infrasound Association
NASA Astrophysics Data System (ADS)
Mialle, Pierrick; Arora, Nimar
2017-04-01
Global Infrasound Association algorithms are an important area of active development at the International Data Centre (IDC). These algorithms play an important part in the automatic processing system for verification technologies. A key focus at the IDC is to enhance association and signal characterization methods by incorporating the identification of signals of interest and the optimization of the network detection threshold. The overall objective is to reduce the number of associated infrasound arrivals that are rejected from the automatic bulletins when generating the Reviewed Event Bulletins (REB), and hence reduce IDC analyst workload. Despite the good accuracy of the IDC categorization, a number of signal detections due to clutter sources such as microbaroms or surf are built into events. In this work we aim to optimize the association criteria based on knowledge acquired by the IDC in the last 6 years, and focus on the specificity of seismo-acoustic events. The resulting work has been incorporated into NETVISA [1], a Bayesian approach to network processing. The model that we propose is a fusion of seismic, hydroacoustic and infrasound processing built on a unified probabilistic framework. References: [1] NETVISA: Network Processing Vertically Integrated Seismic Analysis. N. S. Arora, S. Russell, and E. Sudderth. BSSA 2013
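The "unified probabilistic framework" above weighs competing hypotheses for each detection. As a minimal illustrative sketch (not NET-VISA's actual model; the prior and likelihood values are invented), Bayes' rule can separate an event-related arrival from clutter such as microbaroms:

```python
def posterior_event_probability(p_event, lik_event, lik_clutter):
    """Posterior probability that a detection belongs to a real event
    rather than clutter, via Bayes' rule over two hypotheses."""
    num = p_event * lik_event
    den = num + (1.0 - p_event) * lik_clutter
    return num / den

# A detection whose waveform likelihood is 5x higher under the event
# hypothesis, with a 10% prior of being event-related:
p = posterior_event_probability(0.10, 0.5, 0.1)
```

In a full association model this posterior would be combined across stations and phases before an arrival is accepted into a bulletin.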
Progresses with Net-VISA on Global Infrasound Association
NASA Astrophysics Data System (ADS)
Mialle, P.; Arora, N. S.
2016-12-01
Global Infrasound Association algorithms are an important area of active development at the International Data Centre (IDC). These algorithms play an important part in the automatic processing system for verification technologies. A key focus at the IDC is to enhance association and signal characterization methods by incorporating the identification of signals of interest and the optimization of the network detection threshold. The overall objective is to reduce the number of associated infrasound arrivals that are rejected from the automatic bulletins when generating the Reviewed Event Bulletins (REB), and hence reduce IDC analyst workload. Despite the good accuracy of the IDC categorization, a number of signal detections due to clutter sources such as microbaroms or surf are built into events. In this work we aim to optimize the association criteria based on knowledge acquired by the IDC in the last 6 years, and focus on the specificity of seismo-acoustic events. The resulting work has been incorporated into NETVISA [1], a Bayesian approach to network processing. The model that we propose is a fusion of seismic, hydroacoustic and infrasound processing built on a unified probabilistic framework. References: [1] NETVISA: Network Processing Vertically Integrated Seismic Analysis. N. S. Arora, S. Russell, and E. Sudderth. BSSA 2013
Liddell, Belinda J; Williams, Leanne M; Rathjen, Jennifer; Shevrin, Howard; Gordon, Evian
2004-04-01
Current theories of emotion suggest that threat-related stimuli are first processed via an automatically engaged neural mechanism, which occurs outside conscious awareness. This mechanism operates in conjunction with a slower and more comprehensive process that allows a detailed evaluation of the potentially harmful stimulus (LeDoux, 1998). We drew on the Halgren and Marinkovic (1995) model to examine these processes using event-related potentials (ERPs) within a backward masking paradigm. Stimuli used were faces with fear and neutral (as baseline control) expressions, presented above (supraliminal) and below (subliminal) the threshold for conscious detection. ERP data revealed a double dissociation for the supraliminal versus subliminal perception of fear. In the subliminal condition, responses to the perception of fear stimuli were enhanced relative to neutral for the N2 "excitatory" component, which is thought to represent orienting and automatic aspects of face processing. By contrast, supraliminal perception of fear was associated with relatively enhanced responses for the late P3 "inhibitory" component, implicated in the integration of emotional processes. These findings provide evidence in support of Halgren and Marinkovic's temporal model of emotion processing, and indicate that the neural mechanisms for appraising signals of threat may be initiated, not only automatically, but also without the need for conscious detection of these signals.
Automatic Earthquake Detection by Active Learning
NASA Astrophysics Data System (ADS)
Bergen, K.; Beroza, G. C.
2017-12-01
In recent years, advances in machine learning have transformed fields such as image recognition, natural language processing and recommender systems. Many of these performance gains have relied on the availability of large, labeled data sets to train high-accuracy models; labeled data sets are those for which each sample includes a target class label, such as waveforms tagged as either earthquakes or noise. Earthquake seismologists are increasingly leveraging machine learning and data mining techniques to detect and analyze weak earthquake signals in large seismic data sets. One of the challenges in applying machine learning to seismic data sets is the limited labeled data problem; learning algorithms need to be given examples of earthquake waveforms, but the number of known events, taken from earthquake catalogs, may be insufficient to build an accurate detector. Furthermore, earthquake catalogs are known to be incomplete, resulting in training data that may be biased towards larger events and contain inaccurate labels. This challenge is compounded by the class imbalance problem; the events of interest, earthquakes, are infrequent relative to noise in continuous data sets, and many learning algorithms perform poorly on rare classes. In this work, we investigate the use of active learning for automatic earthquake detection. Active learning is a type of semi-supervised machine learning that uses a human-in-the-loop approach to strategically supplement a small initial training set. The learning algorithm incorporates domain expertise through interaction between a human expert and the algorithm, with the algorithm actively posing queries to the user to improve detection performance. We demonstrate the potential of active machine learning to improve earthquake detection performance with limited available training data.
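The pool-based uncertainty-sampling loop described above can be sketched as follows. This is a hypothetical illustration, not the authors' detector: the 1-D threshold "classifier", the feature values, and the oracle labels are all invented.

```python
def train_threshold(X, y):
    # Trivial 1-D "classifier": threshold halfway between the class means
    noise = [x for x, lab in zip(X, y) if lab == 0]
    quakes = [x for x, lab in zip(X, y) if lab == 1]
    return (sum(noise) / len(noise) + sum(quakes) / len(quakes)) / 2

def most_uncertain(pool, threshold):
    # Uncertainty sampling: the unlabeled sample closest to the boundary
    return min(range(len(pool)), key=lambda i: abs(pool[i] - threshold))

# Labeled seeds: feature = e.g. some waveform amplitude statistic
X, y = [0.0, 1.0, 9.0, 10.0], [0, 0, 1, 1]
pool = [2.0, 5.2, 8.0]             # unlabeled continuous data
oracle = {2.0: 0, 5.2: 1, 8.0: 1}  # the human analyst's answers

for _ in range(2):                 # two active-learning rounds
    t = train_threshold(X, y)
    i = most_uncertain(pool, t)
    X.append(pool[i]); y.append(oracle[pool[i]])
    pool.pop(i)
```

Each round the algorithm queries the analyst only about the most ambiguous sample, which is the mechanism by which a small initial training set is strategically supplemented.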
NASA Astrophysics Data System (ADS)
Sleeman, Reinoud; van Eck, Torild
1999-06-01
The onset of a seismic signal is determined through joint AR modeling of the noise and the seismic signal, and the application of the Akaike Information Criterion (AIC) using the onset time as parameter. This so-called AR-AIC phase picker has been tested successfully and implemented on the Z-component of the broadband station HGN to provide automatic P-phase picks for a rapid warning system. The AR-AIC picker is shown to provide accurate and robust automatic picks on a large experimental database. Out of 1109 P-phase onsets with signal-to-noise ratio (SNR) above 1 from local, regional and teleseismic earthquakes, our implementation detects 71% and gives a mean difference with manual picks of 0.1 s. An optimal version of the well-established picker of Baer and Kradolfer [Baer, M., Kradolfer, U., An automatic phase picker for local and teleseismic events, Bull. Seism. Soc. Am. 77 (1987) 1437-1445] detects less than 41% and gives a mean difference with manual picks of 0.3 s using the same dataset.
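The AIC component of such a picker can be illustrated with a variance-based AIC onset estimator: each candidate split point models the trace as noise before it and signal after it, and the AIC minimum marks the onset. This is a simplified stand-in for the joint AR-AIC formulation above; the synthetic trace is invented.

```python
import math
import random

def aic_pick(x):
    """Variance-based AIC picker: AIC(k) treats x[:k] as noise and x[k:]
    as signal; the minimum of AIC over k estimates the onset sample."""
    n = len(x)
    def var(seg):
        m = sum(seg) / len(seg)
        return max(sum((s - m) ** 2 for s in seg) / len(seg), 1e-12)
    aic = [k * math.log(var(x[:k])) + (n - k - 1) * math.log(var(x[k:]))
           for k in range(2, n - 1)]
    return aic.index(min(aic)) + 2  # offset for the skipped first samples

# Synthetic trace: low-variance noise, then a higher-amplitude "P arrival"
random.seed(0)
trace = [random.gauss(0, 0.1) for _ in range(100)] + \
        [random.gauss(0, 1.0) for _ in range(100)]
onset = aic_pick(trace)
```

On real data the AR modeling step whitens the windows before the AIC is evaluated, which is what makes the picker robust at low SNR.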
Unification of automatic target tracking and automatic target recognition
NASA Astrophysics Data System (ADS)
Schachter, Bruce J.
2014-06-01
The subject being addressed is how an automatic target tracker (ATT) and an automatic target recognizer (ATR) can be fused together so tightly and so well that their distinctiveness becomes lost in the merger. This has historically not been the case outside of biology and a few academic papers. The biological model of ATT∪ATR arises from dynamic patterns of activity distributed across many neural circuits and structures (including the retina). The information that the brain receives from the eyes is "old news" at the time that it receives it. The eyes and brain forecast a tracked object's future position, rather than relying on received retinal position. Anticipation of the next moment - building up a consistent perception - is accomplished under difficult conditions: motion (eyes, head, body, scene background, target) and processing limitations (neural noise, delays, eye jitter, distractions). Not only does the human vision system surmount these problems, but it has innate mechanisms to exploit motion in support of target detection and classification. Biological vision doesn't normally operate on snapshots. Feature extraction, detection and recognition are spatiotemporal. When vision is viewed as a spatiotemporal process, target detection, recognition, tracking, event detection and activity recognition do not seem as distinct as they are in current ATT and ATR designs. They appear as similar mechanisms taking place at varying time scales. A framework is provided for unifying ATT and ATR.
NASA Astrophysics Data System (ADS)
Matos, Catarina; Grigoli, Francesco; Cesca, Simone; Custódio, Susana
2015-04-01
In the last decade a permanent seismic network of 30 broadband stations, complemented by dense temporary deployments, covered Portugal. This extraordinary network coverage now enables the computation of a high-resolution image of the seismicity of Portugal, which in turn will shed light on its seismotectonics. The large data volumes available cannot be analyzed by traditional time-consuming manual location procedures. In this presentation we show first results on the automatic detection and location of earthquakes that occurred in a selected region in the south of Portugal. Our main goal is to implement an automatic earthquake detection and location routine in order to have a tool to quickly process large data sets, while at the same time detecting low-magnitude earthquakes (i.e., lowering the detection threshold). We present a modified version of the automatic seismic event location by waveform coherency analysis developed by Grigoli et al. (2013, 2014), designed to perform earthquake detections and locations in continuous data. The event detection is performed by continuously computing the short-term-average/long-term-average of two different characteristic functions (CFs). For the P phases we used a CF based on the vertical energy trace, while for S phases we used a CF based on the maximum eigenvalue of the instantaneous covariance matrix (Vidale 1991). Seismic event detection and location is obtained by performing waveform coherence analysis scanning different hypocentral coordinates. We apply this technique to earthquakes in the Alentejo region (South Portugal), taking advantage of a small-aperture seismic network installed in the south of Portugal for two years (2010-2011) during the DOCTAR experiment. In addition to the good network coverage, the Alentejo region was chosen for its simple tectonic setting and also because the relationship between seismicity, tectonics and local lithospheric structure is intriguing and still poorly understood.
Inside the target area the seismicity clusters mainly within two clouds, oriented SE-NW and SW-NE. Should these clusters be seen as the expression of local active faults? Are they associated with lithological transitions? Or do the locations obtained from the previously sparse permanent network have large errors and generate artificial clusters? We present preliminary results from this study, and compare them with manual locations. This work is supported by project QuakeLoc, reference: PTDC/GEO-FIQ/3522/2012
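The STA/LTA detection step used on the characteristic functions above can be sketched as follows. The CF values, window lengths and trigger threshold here are invented for illustration; this is not the authors' implementation.

```python
def sta_lta(cf, n_sta, n_lta):
    """Classic short-term-average / long-term-average ratio over a
    characteristic function cf (e.g. a vertical-component energy trace)."""
    out = [0.0] * len(cf)
    for i in range(n_lta, len(cf)):
        sta = sum(cf[i - n_sta:i]) / n_sta   # short window: recent energy
        lta = sum(cf[i - n_lta:i]) / n_lta   # long window: background level
        out[i] = sta / lta if lta > 0 else 0.0
    return out

# Energy characteristic function: flat noise, then a burst at sample 60
cf = [1.0] * 60 + [25.0] * 10 + [1.0] * 30
ratio = sta_lta(cf, n_sta=5, n_lta=30)
trigger = next(i for i, r in enumerate(ratio) if r > 3.0)
```

In the coherency-based scheme described above, such CFs from many stations are then stacked over candidate hypocenters rather than triggered independently per station.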
Real World Experience With Ion Implant Fault Detection at Freescale Semiconductor
NASA Astrophysics Data System (ADS)
Sing, David C.; Breeden, Terry; Fakhreddine, Hassan; Gladwin, Steven; Locke, Jason; McHugh, Jim; Rendon, Michael
2006-11-01
The Freescale automatic fault detection and classification (FDC) system has logged data from over 3.5 million implants in the past two years. The Freescale FDC system is a low cost system which collects summary implant statistics at the conclusion of each implant run. The data is collected by either downloading implant data log files from the implant tool workstation, or by exporting summary implant statistics through the tool's automation interface. Compared to the traditional FDC systems which gather trace data from sensors on the tool as the implant proceeds, the Freescale FDC system cannot prevent scrap when a fault initially occurs, since the data is collected after the implant concludes. However, the system can prevent catastrophic scrap events due to faults which are not detected for days or weeks, leading to the loss of hundreds or thousands of wafers. At the Freescale ATMC facility, the practical applications of the FD system fall into two categories: PM trigger rules which monitor tool signals such as ion gauges and charge control signals, and scrap prevention rules which are designed to detect specific failure modes that have been correlated to yield loss and scrap. PM trigger rules are designed to detect shifts in tool signals which indicate normal aging of tool systems. For example, charging parameters gradually shift as flood gun assemblies age, and when charge control rules start to fail a flood gun PM is performed. Scrap prevention rules are deployed to detect events such as particle bursts and excessive beam noise, events which have been correlated to yield loss. The FDC system does have tool log-down capability, and scrap prevention rules often use this capability to automatically log the tool into a maintenance state while simultaneously paging the sustaining technician for data review and disposition of the affected product.
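A PM-trigger rule of the kind described, which flags a drifting tool signal once it stays out of limit for several consecutive implant runs, might look like the sketch below. The parameter names, limit and counts are invented, not Freescale's actual rules.

```python
def pm_trigger(history, limit, n_consecutive):
    """Flag a preventive-maintenance trigger when a per-run summary
    statistic (e.g. a charge-control parameter) exceeds its limit on
    N consecutive runs -- a drift signature of flood-gun aging."""
    run = 0
    for value in history:
        run = run + 1 if value > limit else 0
        if run >= n_consecutive:
            return True
    return False

# A slowly drifting charge-control signal crossing its control limit
charge_signal = [0.8, 0.9, 1.1, 1.2, 1.3]
flag = pm_trigger(charge_signal, limit=1.0, n_consecutive=3)
```

Requiring consecutive excursions, rather than a single one, is what distinguishes a PM trigger (gradual aging) from a scrap-prevention rule aimed at one-off events such as particle bursts.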
Research on time synchronization scheme of MES systems in manufacturing enterprise
NASA Astrophysics Data System (ADS)
Yuan, Yuan; Wu, Kun; Sui, Changhao; Gu, Jin
2018-04-01
With the spread of informatization and automated production in manufacturing enterprises, data interaction between business systems is increasingly frequent, and the required accuracy of time synchronization is therefore increasingly strict. However, NTP-based network time synchronization methods lack the corresponding redundancy and monitoring mechanisms. When a failure occurs, remedial operations can only be carried out after the event, which has a great effect on production data and system interaction. Based on this, the paper proposes an RHCS-based NTP server architecture that automatically detects NTP status and performs failover via scripts.
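The failover decision itself can be sketched as a pure selection rule over monitored server states. This is a hypothetical illustration of the redundancy idea; the paper's RHCS scripts, server names and thresholds are not given here.

```python
def pick_ntp_source(statuses, max_offset_ms=100.0):
    """Choose the first healthy NTP server from an ordered preference
    list; 'statuses' would be filled by a monitoring script polling
    each server (names and fields here are illustrative)."""
    for name, status in statuses:
        if status["reachable"] and abs(status["offset_ms"]) <= max_offset_ms:
            return name
    return None  # no healthy source: alert rather than silently drift

statuses = [
    ("ntp-primary",   {"reachable": False, "offset_ms": 0.0}),
    ("ntp-secondary", {"reachable": True,  "offset_ms": 12.5}),
]
active = pick_ntp_source(statuses)
```

Checking both reachability and offset guards against the failure mode the abstract describes, where a silently wrong time source corrupts production data until someone notices after the event.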
Li, Qi; Melton, Kristin; Lingren, Todd; Kirkendall, Eric S; Hall, Eric; Zhai, Haijun; Ni, Yizhao; Kaiser, Megan; Stoutenborough, Laura; Solti, Imre
2014-01-01
Although electronic health records (EHRs) have the potential to provide a foundation for quality and safety algorithms, few studies have measured their impact on automated adverse event (AE) and medical error (ME) detection within the neonatal intensive care unit (NICU) environment. This paper presents two phenotyping AE and ME detection algorithms (i.e., IV infiltrations, narcotic medication oversedation and dosing errors) and describes manual annotation of airway management and medication/fluid AEs from NICU EHRs. From 753 NICU patient EHRs from 2011, we developed two automatic AE/ME detection algorithms, and manually annotated 11 classes of AEs in 3263 clinical notes. Performance of the automatic AE/ME detection algorithms was compared to trigger tool and voluntary incident reporting results. AEs in clinical notes were double annotated and consensus achieved under neonatologist supervision. Sensitivity, positive predictive value (PPV), and specificity are reported. Twelve severe IV infiltrates were detected. The algorithm identified one more infiltrate than the trigger tool and eight more than incident reporting. One narcotic oversedation was detected, demonstrating 100% agreement with the trigger tool. Additionally, 17 narcotic medication MEs were detected, an increase of 16 cases over voluntary incident reporting. Automated AE/ME detection algorithms provide higher sensitivity and PPV than currently used trigger tools or voluntary incident-reporting systems, including identification of potential dosing and frequency errors that current methods are unequipped to detect. Published by the BMJ Publishing Group Limited.
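The reported metrics follow the standard confusion-matrix definitions; a small sketch (the counts below are illustrative, not the study's):

```python
def sensitivity_ppv(tp, fn, fp):
    """Sensitivity = TP / (TP + FN); positive predictive value
    (PPV, i.e. precision) = TP / (TP + FP)."""
    return tp / (tp + fn), tp / (tp + fp)

# Illustrative counts for an automated adverse-event detector:
# 12 true detections, 1 missed event, 3 false alarms
sens, ppv = sensitivity_ppv(tp=12, fn=1, fp=3)
```

For rare, safety-critical events both numbers matter: sensitivity bounds how many AEs slip through, while PPV bounds the manual review burden imposed by false alarms.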
The Swiss-Army-Knife Approach to the Nearly Automatic Analysis for Microearthquake Sequences.
NASA Astrophysics Data System (ADS)
Kraft, T.; Simon, V.; Tormann, T.; Diehl, T.; Herrmann, M.
2017-12-01
Many Swiss earthquake sequences have been studied using relative location techniques, which often allowed the active fault planes to be constrained and shed light on the tectonic processes that drove the seismicity. Yet, in the majority of cases the number of located earthquakes was too small to infer the details of the space-time evolution of the sequences, or their statistical properties. Therefore, it has mostly been impossible to resolve clear patterns in the seismicity of individual sequences, which are needed to improve our understanding of the mechanisms behind them. Here we present a nearly automatic workflow that combines well-established seismological analysis techniques and significantly improves the completeness of detected and located earthquakes of a sequence. We start from the manually timed routine catalog of the Swiss Seismological Service (SED), which contains the larger events of a sequence. From these well-analyzed earthquakes we dynamically assemble a template set and perform a matched-filter analysis on the station with the best SNR for the sequence and a recording history of at least 10-15 years, our typical analysis period. This usually allows us to detect events several orders of magnitude below the SED catalog detection threshold. The waveform similarity of the events is then further exploited to derive accurate and consistent magnitudes. The enhanced catalog is then analyzed statistically to derive high-resolution timelines of the a- and b-value and consequently the occurrence probability of larger events. Many of the detected events are strong enough to be located using double differences. No further manual interaction is needed; we simply time-shift the arrival-time pattern of the detecting template to the associated detection. Waveform similarity assures a good approximation of the expected arrival times, which we use to calculate event-pair arrival-time differences by cross correlation.
After an SNR and cycle-skipping quality check these are directly fed into hypoDD. Using this procedure we usually improve the number of well-relocated events by a factor of 2-5. We demonstrate the successful application of the workflow on natural sequences in Switzerland and present first results of the advanced analysis that was possible with the enhanced catalogs.
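The matched-filter stage above can be sketched with a normalized cross-correlation detector: a template waveform slides over the continuous trace, and correlation peaks mark candidate repeats. The template and trace below are synthetic; this is not the SED pipeline.

```python
def normalized_xcorr(template, trace):
    """Normalized cross-correlation of a template against every window
    of a continuous trace -- the core of matched-filter detection."""
    m = len(template)
    t_mean = sum(template) / m
    t0 = [v - t_mean for v in template]
    t_norm = sum(v * v for v in t0) ** 0.5
    out = []
    for lag in range(len(trace) - m + 1):
        w = trace[lag:lag + m]
        w_mean = sum(w) / m
        w0 = [v - w_mean for v in w]
        w_norm = sum(v * v for v in w0) ** 0.5 or 1e-12
        out.append(sum(a * b for a, b in zip(t0, w0)) / (t_norm * w_norm))
    return out

template = [0.0, 1.0, -1.0, 0.5]
# A scaled repeat of the template buried at lag 10 in a quiet trace
trace = [0.0] * 10 + [0.0, 2.0, -2.0, 1.0] + [0.0] * 10
cc = normalized_xcorr(template, trace)
best = max(range(len(cc)), key=lambda i: cc[i])
```

Because the correlation is amplitude-normalized, a small repeating event scores just as highly as its larger template, which is what pushes detection well below the routine catalog threshold.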
NASA Astrophysics Data System (ADS)
Ángel López Comino, José; Cesca, Simone; Heimann, Sebastian; Grigoli, Francesco; Milkereit, Claus; Dahm, Torsten; Zang, Arno
2017-04-01
A crucial issue in analysing induced seismicity during hydraulic fracturing is the detection and location of massive microseismic or acoustic emission (AE) activity with robust and sufficiently accurate automatic algorithms. Waveform stacking and coherence analysis have been tested for local seismic monitoring and mining-induced seismicity, improving on the classical detection and location methods (e.g. short-term-average/long-term-average and automatic picking of the P- and S-wave first arrivals). These techniques are here applied using a full-waveform approach for a hydraulic fracturing experiment (Nova project 54-14-1) that took place 410 m below surface in the Äspö Hard Rock Laboratory (Sweden). Continuous waveform recordings from a near-field network composed of eleven AE sensors are processed. The piezoelectric sensors have their highest sensitivity in the frequency range 1 to 100 kHz, but sampling rates were extended to 1 MHz. We present the results obtained during the conventional, continuous water-injection experiment HF2 (Hydraulic Fracture 2). The event detector is based on the stacking of characteristic functions. It follows a delay-and-stack approach, where the likelihood of the hypocenter location in a pre-selected seismogenic volume is mapped by assessing the coherence of the P onset times at different stations. A low detector threshold is chosen, in order not to lose weaker events. This approach also increases the number of false detections. Therefore, the dataset has been revised manually, and detected events classified in terms of true AE events related to the fracturing process, electronic noise related to 50 Hz overtones, long-period and other signals. The location of the AE events is further refined using a more accurate waveform stacking method which uses both P and S phases.
A 3D grid is generated around the hydraulic fracturing volume and we retrieve a multidimensional matrix whose absolute maximum corresponds to the spatial coordinates of the seismic event. The relative location accuracy is improved using a master-event approach to correct for travel-time perturbations. The master event is selected based on a good signal-to-noise ratio, leading to a robust location with small uncertainties. Relative magnitudes are finally estimated from the decay of the maximal recorded amplitude with distance from the AE location. The resulting catalogue is composed of more than 4000 AEs. Their hypocenters are spatially clustered in a planar region, resembling the main fracture plane; its orientation and size are estimated from the spatial distribution of AEs. This work is funded by the EU H2020 SHEER project. Nova project 54-14-1 was financially supported by the GFZ German Research Center for Geosciences (75%), the KIT Karlsruhe Institute of Technology (15%) and the Nova Center for University Studies, Research and Development (10%). An additional in-kind contribution of SKB for using Äspö Hard Rock Laboratory as a test site for geothermal research is gratefully acknowledged.
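The delay-and-stack grid search described above can be sketched in one dimension: for each candidate source position, each station's characteristic function is sampled at the predicted arrival time, and the position with the largest stack wins. The station geometry, velocity and impulsive CFs below are invented.

```python
def locate_by_stacking(cfs, station_x, grid_x, velocity, dt):
    """Delay-and-stack: sum each station's characteristic function at
    the predicted P arrival sample for every candidate source position;
    the grid point with the largest stack is the location estimate."""
    def stack_at(x):
        s = 0.0
        for cf, sx in zip(cfs, station_x):
            k = int(round(abs(x - sx) / velocity / dt))  # predicted sample
            if k < len(cf):
                s += cf[k]
        return s
    return max(grid_x, key=stack_at)

# Three stations, impulsive characteristic functions, true source at x = 30
station_x, velocity, dt = [0.0, 50.0, 100.0], 1000.0, 0.01
cfs = []
for sx in station_x:
    cf = [0.0] * 20
    cf[int(round(abs(30.0 - sx) / velocity / dt))] = 1.0  # synthetic onset
    cfs.append(cf)
best = locate_by_stacking(cfs, station_x, range(0, 101, 10), velocity, dt)
```

Only at the true source position do all predicted delays line up with the CF onsets, so the stack is coherent there and incoherent elsewhere; the 3D case simply extends the grid.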
Tackle and impact detection in elite Australian football using wearable microsensor technology.
Gastin, Paul B; McLean, Owen C; Breed, Ray V P; Spittle, Michael
2014-01-01
The effectiveness of a wearable microsensor device (MinimaxX(TM) S4, Catapult Innovations, Melbourne, VIC, Australia) to automatically detect tackles and impact events in elite Australian football (AF) was assessed during four matches. Video observation was used as the criterion measure. A total of 352 tackles were observed, with 78% correctly detected as tackles by the manufacturer's software. Tackles against (i.e. tackled by an opponent) were more accurately detected than tackles made (90% vs 66%). Of the 77 tackles that were not detected at all, the majority (74%) were categorised as low-intensity. In contrast, a total of 1510 "tackle" events were detected, with only 18% of these verified as tackles. A further 57% were from contested-ball situations involving player contact. The remaining 25% were in general play where no contact was evident; these were significantly lower in peak Player Load™ than those involving player contact (P < 0.01). The tackle detection algorithm, developed primarily for rugby, was not suitable for tackle detection in AF. The underlying sensor data may have the potential to detect a range of events within contact sports such as AF, yet to do so is a complex task and requires sophisticated sport- and event-specific algorithms.
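Accumulated-load metrics of the Player Load™ family are commonly described as the summed rate of change of tri-axial acceleration. The sketch below uses that commonly cited form; the exact commercial formula may differ, and the accelerometer values are invented.

```python
def player_load(ax, ay, az):
    """Accumulated accelerometer load: sum of the instantaneous rate of
    change of tri-axial acceleration, scaled by 1/100 (a commonly cited
    form of Catapult's PlayerLoad; the commercial formula may differ)."""
    total = 0.0
    for i in range(1, len(ax)):
        dx, dy, dz = ax[i] - ax[i-1], ay[i] - ay[i-1], az[i] - az[i-1]
        total += (dx * dx + dy * dy + dz * dz) ** 0.5
    return total / 100.0

still = player_load([0.0] * 10, [0.0] * 10, [1.0] * 10)      # no movement
impact = player_load([0, 3, 0, 0], [0, 4, 0, 0], [1, 1, 1, 1])  # sharp spike
```

Because the metric responds to any sharp acceleration change, contested-ball contact and tackles produce similar peaks, which is consistent with the high false-positive rate reported above.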
Investigating Montara platform oil spill accident by implementing RST-OIL approach.
NASA Astrophysics Data System (ADS)
Satriano, Valeria; Ciancia, Emanuele; Coviello, Irina; Di Polito, Carmine; Lacava, Teodosio; Pergola, Nicola; Tramutoli, Valerio
2016-04-01
Oil spills represent one of the most harmful events for marine ecosystems, and their timely detection is crucial for mitigation and management. The potential of satellite data for their detection and monitoring has been largely investigated. Traditional satellite techniques usually identify oil spill presence by applying a fixed threshold scheme only after the occurrence of an event, which makes them not well suited for prompt identification. The Robust Satellite Technique (RST) approach, in its oil spill detection version (RST-OIL), being based on the comparison of the latest satellite acquisition with its previously identified historical value, allows the automatic and near-real-time detection of events. Such a technique has already been successfully applied to data from different sources (AVHRR-Advanced Very High Resolution Radiometer and MODIS-Moderate Resolution Imaging Spectroradiometer), showing excellent performance in detecting oil spills during both day- and night-time conditions, with a high level of sensitivity (detection also of low-intensity events) and reliability (no false alarms on scene). In this paper, RST-OIL has been implemented on MODIS thermal infrared data for the analysis of the Montara platform (Timor Sea, Australia) oil spill disaster that occurred in August 2009. Preliminary achievements are presented and discussed.
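The RST comparison of the latest acquisition against a pixel's historical signal can be sketched as a standardized anomaly index. This is a simplification of the operational RST-OIL chain, and the brightness-temperature values are invented.

```python
def rst_index(current, history):
    """RST-style change index: the latest acquisition compared against
    the pixel's historical mean and standard deviation. Large negative
    or positive values flag anomalous scenes."""
    n = len(history)
    mean = sum(history) / n
    std = (sum((h - mean) ** 2 for h in history) / n) ** 0.5
    return (current - mean) / std

# Historical thermal-infrared values for one pixel vs an anomalous scene
history = [290.0, 291.0, 289.0, 290.5, 289.5]
index = rst_index(284.0, history)  # oil-covered water looks anomalous
```

Because the reference is the pixel's own multi-year statistics rather than a fixed global threshold, the same test adapts to site and season, which is what enables near-real-time detection without prior knowledge of an event.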
Assessing the severity of sleep apnea syndrome based on ballistocardiogram
Zhou, Xingshe; Zhao, Weichao; Liu, Fan; Ni, Hongbo; Yu, Zhiwen
2017-01-01
Background: Sleep Apnea Syndrome (SAS) is a common sleep-related breathing disorder, which affects about 4-7% of males and 2-4% of females worldwide. Different approaches have been adopted to diagnose SAS and measure its severity, including the gold-standard Polysomnography (PSG) in the sleep study field as well as several alternative techniques such as single-channel ECG, pulse oximetry and so on. However, many shortcomings still limit their generalization in the home environment. In this study, we aim to propose an efficient approach to automatically assess the severity of sleep apnea syndrome based on the ballistocardiogram (BCG) signal, which is non-intrusive and suitable for the home environment. Methods: We develop an unobtrusive sleep monitoring system to capture the BCG signals, based on which we put forward a three-stage sleep apnea syndrome severity assessment framework, i.e., data preprocessing, sleep-related breathing event (SBE) detection, and sleep apnea syndrome severity evaluation. First, in the data preprocessing stage, to overcome the limits of BCG signals (e.g., low precision and reliability), we utilize wavelet decomposition to obtain the outline information of heartbeats, and apply an RR correction algorithm to handle missing or spurious RR intervals. Afterwards, in the event detection stage, we propose an automatic sleep-related breathing event detection algorithm named Physio_ICSS based on the iterative cumulative sums of squares (i.e., the ICSS algorithm), which is originally used to detect structural breakpoints in a time series.
In particular, to efficiently detect sleep-related breathing events in the obtained time series of RR intervals, the proposed algorithm not only exploits the practical characteristics of sleep-related breathing events (e.g., the limits on duration and the sleep stages in which they can occur) but also overcomes the event segmentation issue of existing approaches (e.g., an equal-length segmentation method might divide one sleep-related breathing event into different fragments and lead to incorrect results). Finally, by fusing features extracted from multiple domains, we can identify sleep-related breathing events and assess the severity level of sleep apnea syndrome effectively. Conclusions: Experimental results on 136 individuals with different sleep apnea syndrome severities validate the effectiveness of the proposed framework, with an accuracy of 94.12% (128/136). PMID:28445548
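The statistic underlying the ICSS algorithm that Physio_ICSS builds on can be sketched as follows: the centered cumulative sum of squares D_k = C_k / C_n - k / n peaks in magnitude at a variance change point. The RR-interval-like series below is synthetic.

```python
def icss_changepoint(x):
    """Centered cumulative sum of squares (the core ICSS statistic):
    D_k = C_k / C_n - k / n, where C_k is the cumulative sum of squares.
    |D_k| peaks at a variance change point in the series."""
    n = len(x)
    c, total = [], 0.0
    for v in x:
        total += v * v
        c.append(total)
    d = [c[k] / c[-1] - (k + 1) / n for k in range(n)]
    return max(range(n), key=lambda k: abs(d[k]))

# Quiet breathing, then an apnea-driven rise in RR-interval variability
series = [0.1, -0.1] * 20 + [1.0, -1.0] * 20
k_hat = icss_changepoint(series)
```

The iterative version applies this test recursively to the sub-segments on either side of each detected breakpoint, which is how multiple events in one night's recording are segmented.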
Detection of goal events in soccer videos
NASA Astrophysics Data System (ADS)
Kim, Hyoung-Gook; Roeber, Steffen; Samour, Amjad; Sikora, Thomas
2005-01-01
In this paper, we present an automatic extraction of goal events in soccer videos using audio track features alone, without relying on expensive-to-compute video track features. The extracted goal events can be used for high-level indexing and selective browsing of soccer videos. The detection of soccer video highlights using audio content comprises three steps: 1) extraction of audio features from a video sequence, 2) detection of candidate highlight events based on the information provided by the feature extraction methods and a Hidden Markov Model (HMM), 3) goal event selection to finally determine the video intervals to be included in the summary. For this purpose we compared the performance of the well-known Mel-scale Frequency Cepstral Coefficients (MFCC) feature extraction method against the MPEG-7 Audio Spectrum Projection (ASP) feature extraction method based on three different decomposition methods, namely Principal Component Analysis (PCA), Independent Component Analysis (ICA) and Non-Negative Matrix Factorization (NMF). To evaluate our system we collected five soccer game videos from various sources; in total, seven hours of soccer games comprising eight gigabytes of data. One of the five soccer games is used as the training data (e.g., announcers' excited speech, ambient audience noise, audience clapping, environmental sounds). Our goal event detection results are encouraging.
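As a minimal stand-in for the audio feature extraction step (not MFCC or ASP; the frame length, threshold and samples are invented), frame-wise log energy already flags loud-crowd candidate intervals of the kind the HMM stage would then classify:

```python
import math

def short_time_energy(samples, frame):
    """Frame-wise log10 energy of an audio signal -- a crude proxy for
    the spectral features (MFCC / ASP) used to spot excited commentary
    and crowd reaction."""
    out = []
    for i in range(0, len(samples) - frame + 1, frame):
        e = sum(s * s for s in samples[i:i + frame]) / frame
        out.append(math.log10(e + 1e-12))  # floor avoids log10(0)
    return out

# Quiet crowd ambience followed by a loud "goal" cheer
audio = [0.01] * 400 + [0.8] * 200
energy = short_time_energy(audio, frame=100)
candidates = [i for i, e in enumerate(energy) if e > -2.0]
```

A real system replaces both the feature and the threshold with the trained HMM scoring described above, since loudness alone cannot distinguish a goal from, say, a near miss.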
Classifying seismic waveforms from scratch: a case study in the alpine environment
NASA Astrophysics Data System (ADS)
Hammer, C.; Ohrnberger, M.; Fäh, D.
2013-01-01
Nowadays, an increasing amount of seismic data is collected by daily observatory routines. The basic step in successfully analyzing those data is the correct detection of various event types. However, visual scanning is a time-consuming task. Applying standard detection techniques such as the STA/LTA trigger still requires manual control for classification. Here, we present a useful alternative. The incoming data stream is scanned automatically for events of interest. A stochastic classifier, called a hidden Markov model, is learned for each class of interest, enabling the recognition of highly variable waveforms. In contrast to other automatic techniques such as neural networks or support vector machines, the algorithm allows classification to start from scratch as soon as interesting events are identified. Neither the tedious process of collecting training samples nor a time-consuming configuration of the classifier is required. An approach originally introduced for the volcanic task force action makes it possible to learn classifier properties from a single waveform example and a few hours of background recording. Besides reducing the required workload, this also enables the detection of very rare events. The latter feature in particular is a milestone for the use of seismic devices in alpine warning systems. Furthermore, the system offers the opportunity to flag new signal classes that have not been defined before. We demonstrate the application of the classification system using a data set from the Swiss Seismological Survey, achieving very high recognition rates. In detail, we document all refinements of the classifier, providing a step-by-step guide for the fast setup of a well-working classification system.
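The STA/LTA trigger mentioned above as the standard baseline can be sketched in a few lines of NumPy; the window lengths are illustrative:

```python
import numpy as np

def sta_lta(x, nsta, nlta):
    """Ratio of the short-term to the long-term average of the squared
    signal. Returns an array the same length as x; entries before the
    first complete long-term window are zero."""
    sq = x.astype(float) ** 2
    # Cumulative sum lets each windowed mean be computed in O(1)
    csum = np.cumsum(np.concatenate(([0.0], sq)))
    sta = np.zeros(len(x))
    lta = np.zeros(len(x))
    sta[nsta-1:] = (csum[nsta:] - csum[:-nsta]) / nsta
    lta[nlta-1:] = (csum[nlta:] - csum[:-nlta]) / nlta
    ratio = np.zeros(len(x))
    valid = lta > 0
    ratio[valid] = sta[valid] / lta[valid]
    ratio[:nlta-1] = 0.0
    return ratio
```

A trigger then fires wherever the ratio exceeds a chosen threshold; as the abstract notes, every such trigger still needs classification, which is what the HMM approach automates.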
Acoustic Event Detection and Classification
NASA Astrophysics Data System (ADS)
Temko, Andrey; Nadeu, Climent; Macho, Dušan; Malkin, Robert; Zieger, Christian; Omologo, Maurizio
The human activity that takes place in meeting rooms or classrooms is reflected in a rich variety of acoustic events (AE), produced either by the human body or by objects handled by humans, so the determination of both the identity of sounds and their position in time may help to detect and describe that human activity. Indeed, speech is usually the most informative sound, but other kinds of AEs may also carry useful information, for example, clapping or laughing inside a speech, a strong yawn in the middle of a lecture, a chair moving or a door slam when the meeting has just started. Additionally, detection and classification of sounds other than speech may be useful to enhance the robustness of speech technologies like automatic speech recognition.
Infrasound Monitoring of Local, Regional and Global Events
2007-09-01
…detect and associate signals from the March 9th, 2005 eruption at Mount Saint Helens, and locate the event to within 5 km of the caldera. … are located within 5 km of the center of the caldera at Mount Saint Helens. Figure 4. Locations of grid nodes that were automatically associated … photograph, and are located within 5 km of the center of the caldera. 29th Monitoring Research Review: Ground-Based Nuclear Explosion Monitoring
Global Seismic Event Detection Using Surface Waves: 15 Possible Antarctic Glacial Sliding Events
NASA Astrophysics Data System (ADS)
Chen, X.; Shearer, P. M.; Walker, K. T.; Fricker, H. A.
2008-12-01
To identify overlooked or anomalous seismic events not listed in standard catalogs, we have developed an algorithm to detect and locate global seismic events using intermediate-period (35-70 s) surface waves. We apply our method to continuous vertical-component seismograms from the global seismic networks as archived in the IRIS UV FARM database from 1997 to 2007. We first bandpass filter the seismograms, apply automatic gain control, and compute envelope functions. We then examine 1654 target event locations defined at 5 degree intervals and stack the seismogram envelopes along the predicted Rayleigh-wave travel times. The resulting function has spatial and temporal peaks that indicate possible seismic events. We visually check these peaks using a graphical user interface to eliminate artifacts and assign an overall reliability grade (A, B or C) to the new events. We detect 78% of events in the Global Centroid Moment Tensor (CMT) catalog. However, we also find 840 new events not listed in the PDE, ISC and REB catalogs. Many of these new events were previously identified by Ekstrom (2006) using a different Rayleigh-wave detection scheme. Most of these new events are located along oceanic ridges and transform faults. Some new events can be associated with volcanic eruptions such as the 2000 Miyakejima sequence near Japan and others with apparent glacial sliding events in Greenland (Ekstrom et al., 2003). We focus our attention on 15 events detected from near the Antarctic coastline and relocate them using a cross-correlation approach. The events occur in 3 groups which are well-separated from areas of cataloged earthquake activity. We speculate that these are iceberg calving and/or glacial sliding events, and hope to test this by inverting for their source mechanisms and examining remote sensing data from their source regions.
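The envelope-and-stack step can be illustrated as follows. This is a minimal sketch (Hilbert-transform envelope, integer-sample shifts), not the authors' full processing chain, which also includes bandpass filtering and automatic gain control:

```python
import numpy as np
from scipy.signal import hilbert

def stack_envelopes(traces, delays, dt):
    """Shift each station's envelope back by its predicted Rayleigh-wave
    travel time (delays, in seconds) and average. A peak in the stack
    marks a candidate origin time for the trial source location.
    traces: (n_stations, n_samples); dt: sample interval in seconds."""
    n = traces.shape[1]
    stack = np.zeros(n)
    for tr, d in zip(traces, delays):
        env = np.abs(hilbert(tr))          # analytic-signal envelope
        shift = int(round(d / dt))
        stack[:n-shift] += env[shift:]     # align arrival to origin time
    return stack / len(traces)
```

Repeating this for each of the target grid locations produces the space-time detection function whose peaks are then screened by an analyst.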
NASA Astrophysics Data System (ADS)
Hosotani, Daisuke; Yoda, Ikushi; Hishiyama, Yoshiyuki; Sakaue, Katsuhiko
Many people are involved in accidents at railroad crossings every year, but there is no suitable sensor for detecting pedestrians. We are therefore developing a ubiquitous stereo-vision-based system for ensuring safety at railroad crossings. In this system, stereo cameras are installed at the corners and pointed toward the center of the railroad crossing to monitor the passage of people. The system determines automatically and in real time whether anyone or anything is inside the railroad crossing, and whether anyone remains in the crossing. The system can be configured to automatically switch over to a surveillance monitor or to connect to an emergency brake system in the event of trouble. We have developed an original stereo vision device and installed a remote-controlled experimental system, running the human detection algorithm, at a commercial railroad crossing. We then stored and analyzed image and tracking data over two years in order to standardize the system requirement specification.
Verifying the Comprehensive Nuclear-Test-Ban Treaty by Radioxenon Monitoring
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ringbom, Anders
2005-05-24
The current status of the ongoing establishment of a verification system for the Comprehensive Nuclear-Test-Ban Treaty using radioxenon detection is discussed. As an example of equipment used in this application the newly developed fully automatic noble gas sampling and detection system SAUNA is described, and data collected with this system are discussed. It is concluded that the most important remaining scientific challenges in the field concern event categorization and meteorological backtracking.
Solar Demon: near real-time solar eruptive event detection on SDO/AIA images
NASA Astrophysics Data System (ADS)
Kraaikamp, Emil; Verbeeck, Cis
Solar flares, dimmings and EUV waves have been observed routinely in extreme ultra-violet (EUV) images of the Sun since 1996. These events are closely associated with coronal mass ejections (CMEs), and therefore provide useful information for early space weather alerts. The Solar Dynamics Observatory/Atmospheric Imaging Assembly (SDO/AIA) generates such a massive dataset that it becomes impossible to find most of these eruptive events manually. Solar Demon is a set of automatic detection algorithms that attempts to solve this problem by providing both near real-time warnings of eruptive events and a catalog of characterized events. Solar Demon has been designed to detect and characterize dimmings, EUV waves, as well as solar flares in near real-time on SDO/AIA data. The detection modules are running continuously at the Royal Observatory of Belgium on both quick-look data and synoptic science data. The output of Solar Demon can be accessed in near real-time on the Solar Demon website, and includes images, movies, light curves, and the numerical evolution of several parameters. Solar Demon is the result of collaboration between the FP7 projects AFFECTS and COMESEP. Flare detections of Solar Demon are integrated into the COMESEP alert system. Here we present the Solar Demon detection algorithms and their output. We will focus on the algorithm and its operational implementation. Examples of interesting flare, dimming and EUV wave events, and general statistics of the detections made so far during solar cycle 24 will be presented as well.
Near-real time 3D probabilistic earthquakes locations at Mt. Etna volcano
NASA Astrophysics Data System (ADS)
Barberi, G.; D'Agostino, M.; Mostaccio, A.; Patane', D.; Tuve', T.
2012-04-01
Automatic procedures for locating earthquakes in quasi-real time must provide a good estimate of an earthquake's location within a few seconds after the event is first detected, and are strongly needed for seismic warning systems. The reliability of an automatic location algorithm is influenced by several factors, such as errors in picking seismic phases, network geometry, and velocity model uncertainties. On Mt. Etna, the seismic network is managed by INGV, and quasi-real-time earthquake locations are performed using an automatic picking algorithm based on short-term-average to long-term-average ratios (STA/LTA) calculated from an approximate squared envelope function of the seismogram, which furnishes a list of P-wave arrival times, together with the location algorithm Hypoellipse and a 1D velocity model. The main purpose of this work is to investigate the performance of a different automatic procedure to improve the quasi-real-time earthquake locations. Since automatic data processing may be affected by outliers (wrong picks), traditional earthquake location techniques based on a least-squares misfit function (L2 norm) often yield unstable and unreliable solutions. Moreover, on Mt. Etna the 1D model is often unable to represent the complex structure of the volcano (in particular its strong lateral heterogeneities), whereas the increasing accuracy of 3D velocity models for Mt. Etna in recent years now allows their use in routine earthquake locations. We therefore selected as reference locations all events that occurred on Mt. Etna in the last year (2011) and were automatically detected and located by the Hypoellipse code. Using this dataset (more than 300 events), we applied a nonlinear probabilistic earthquake location algorithm based on the Equal Differential Time (EDT) likelihood function (Font et al., 2004; Lomax, 2005), which is much more robust in the presence of outliers in the data.
Subsequently, by using a probabilistic nonlinear method (NonLinLoc; Lomax, 2001) and the 3D velocity model derived from the one developed by Patanè et al. (2006), integrated with that obtained by Chiarabba et al. (2004), we obtained the best possible constraint on the location of the foci, expressed as a probability density function (PDF) for the hypocenter location in 3D space. As expected, the obtained results, compared with the reference ones, show that the NonLinLoc software (applied to a 3D velocity model) is more reliable than the Hypoellipse code (applied to layered 1D velocity models), leading to more reliable automatic locations even when outliers are present.
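A minimal illustration of the EDT likelihood on a grid of trial hypocentres, assuming for simplicity a homogeneous velocity model (the study itself uses a 3D model and the NonLinLoc implementation):

```python
import numpy as np
from itertools import combinations

def edt_likelihood(obs_times, stations, grid, v=3.5, sigma=0.1):
    """Equal Differential Time likelihood on trial hypocentres.
    Because the origin time cancels in each differential time, a single
    outlier pick only suppresses the pairs it participates in, instead
    of dragging the whole L2 solution. stations and grid are (N, 3)
    arrays of coordinates in km; v is a homogeneous velocity in km/s."""
    scores = np.zeros(len(grid))
    for g, x in enumerate(grid):
        tt = np.linalg.norm(stations - x, axis=1) / v  # predicted travel times
        s = 0.0
        for i, j in combinations(range(len(stations)), 2):
            r = (obs_times[i] - obs_times[j]) - (tt[i] - tt[j])
            s += np.exp(-r * r / (2.0 * sigma * sigma))
        scores[g] = s
    return scores
```

The trial point with the highest score is the maximum-likelihood hypocentre; with one deliberately corrupted pick, the true location still wins because the uncorrupted station pairs all contribute near-unit weight there.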
Characterizing Interference in Radio Astronomy Observations through Active and Unsupervised Learning
NASA Technical Reports Server (NTRS)
Doran, G.
2013-01-01
In the process of observing signals from astronomical sources, radio astronomers must mitigate the effects of manmade radio sources such as cell phones, satellites, aircraft, and observatory equipment. Radio frequency interference (RFI) often occurs as short bursts (< 1 ms) across a broad range of frequencies, and can be confused with signals from sources of interest such as pulsars. With ever-increasing volumes of data being produced by observatories, automated strategies are required to detect, classify, and characterize these short "transient" RFI events. We investigate an active learning approach in which an astronomer labels events that are most confusing to a classifier, minimizing the human effort required for classification. We also explore the use of unsupervised clustering techniques, which automatically group events into classes without user input. We apply these techniques to data from the Parkes Multibeam Pulsar Survey to characterize several million detected RFI events from over a thousand hours of observation.
iMSRC: converting a standard automated microscope into an intelligent screening platform.
Carro, Angel; Perez-Martinez, Manuel; Soriano, Joaquim; Pisano, David G; Megias, Diego
2015-05-27
Microscopy in the context of biomedical research is demanding new tools to automatically detect and capture objects of interest. The few extant packages addressing this need, however, have enjoyed limited uptake due to complexity of use and installation. To overcome these drawbacks, we developed iMSRC, which combines ease of use and installation with high flexibility and enables applications such as rare event detection and high-resolution tissue sample screening, saving time and resources.
Closing the Loop in ICU Decision Support: Physiologic Event Detection, Alerts, and Documentation
Norris, Patrick R.; Dawant, Benoit M.
2002-01-01
Automated physiologic event detection and alerting is a challenging task in the ICU. Ideally care providers should be alerted only when events are clinically significant and there is opportunity for corrective action. However, the concepts of clinical significance and opportunity are difficult to define in automated systems, and effectiveness of alerting algorithms is difficult to measure. This paper describes recent efforts on the Simon project to capture information from ICU care providers about patient state and therapy in response to alerts, in order to assess the value of event definitions and progressively refine alerting algorithms. Event definitions for intracranial pressure and cerebral perfusion pressure were studied by implementing a reliable system to automatically deliver alerts to clinical users’ alphanumeric pagers, and to capture associated documentation about patient state and therapy when the alerts occurred. During a 6-month test period in the trauma ICU at Vanderbilt University Medical Center, 530 alerts were detected in 2280 hours of data spanning 14 patients. Clinical users electronically documented 81% of these alerts as they occurred. Retrospectively classifying documentation based on therapeutic actions taken, or reasons why actions were not taken, provided useful information about ways to potentially improve event definitions and enhance system utility.
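The idea of alerting only on clinically significant episodes can be sketched as a persistence condition on a threshold crossing; the threshold and duration below are illustrative placeholders, not the Simon project's event definitions:

```python
def sustained_alerts(values, times, threshold, min_duration):
    """Flag episodes where a signal stays above threshold for at least
    min_duration seconds (e.g. ICP above 20 mmHg for 5 minutes),
    rather than paging on every transient spike.
    Returns a list of (episode_start, episode_end) time pairs."""
    alerts = []
    start = None
    for t, v in zip(times, values):
        if v > threshold:
            if start is None:
                start = t            # episode begins
        else:
            if start is not None and t - start >= min_duration:
                alerts.append((start, t))
            start = None             # episode ends or never qualified
    if start is not None and times[-1] - start >= min_duration:
        alerts.append((start, times[-1]))
    return alerts
```

Each returned episode would then be delivered as a page, with the subsequent bedside documentation used, as in the paper, to judge whether the event definition was clinically useful.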
Detection of Epileptic Seizure Event and Onset Using EEG
Ahammad, Nabeel; Fathima, Thasneem; Joseph, Paul
2014-01-01
This study proposes a method for automatic detection of epileptic seizure events and onset using wavelet-based features and certain statistical features computed without wavelet decomposition. Normal and epileptic EEG signals were classified using a linear classifier. For seizure event detection, the Bonn University EEG database was used. Three types of EEG signals (recorded from healthy volunteers with eyes open, from epilepsy patients in the epileptogenic zone during a seizure-free interval, and from epilepsy patients during epileptic seizures) were classified. Important features such as energy, entropy, standard deviation, maximum, minimum, and mean at different subbands were computed, and classification was done using the linear classifier. The performance of the classifier was determined in terms of specificity, sensitivity, and accuracy. The overall accuracy was 84.2%. For seizure onset detection, the CHB-MIT scalp EEG database was used. Along with wavelet-based features, the interquartile range (IQR) and mean absolute deviation (MAD) without wavelet decomposition were extracted. Latency was used to study the performance of seizure onset detection. The classifier gave a sensitivity of 98.5% with an average latency of 1.76 seconds. PMID:24616892
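Several of the statistical features listed above, plus the IQR and MAD used for onset detection, can be computed per segment directly with NumPy. This is a generic sketch of such a feature extractor, not the paper's exact pipeline (which computes most features per wavelet subband):

```python
import numpy as np

def segment_features(x):
    """Statistical features of one EEG segment: energy, Shannon entropy
    of the normalized amplitude histogram, standard deviation, max,
    min, mean, interquartile range (IQR) and mean absolute deviation
    (MAD)."""
    hist, _ = np.histogram(x, bins=32)
    p = hist / hist.sum()
    p = p[p > 0]                       # drop empty bins before log
    q75, q25 = np.percentile(x, [75, 25])
    return {
        "energy": float(np.sum(x ** 2)),
        "entropy": float(-np.sum(p * np.log2(p))),
        "std": float(np.std(x)),
        "max": float(np.max(x)),
        "min": float(np.min(x)),
        "mean": float(np.mean(x)),
        "iqr": float(q75 - q25),
        "mad": float(np.mean(np.abs(x - np.mean(x)))),
    }
```

Stacking these dictionaries over all segments yields the feature matrix that a linear classifier, as in the study, would consume.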
Detection and Classification of Motor Vehicle Noise in a Forested Landscape
NASA Astrophysics Data System (ADS)
Brown, Casey L.; Reed, Sarah E.; Dietz, Matthew S.; Fristrup, Kurt M.
2013-11-01
Noise emanating from human activity has become a common addition to natural soundscapes and has the potential to harm wildlife and erode human enjoyment of nature. In particular, motor vehicles traveling along roads and trails produce high levels of both chronic and intermittent noise, eliciting varied responses from a wide range of animal species. Anthropogenic noise is especially conspicuous in natural areas where ambient background sound levels are low. In this article, we present an acoustic method to detect and analyze motor vehicle noise. Our approach uses inexpensive consumer products to record sound, sound analysis software to automatically detect sound events within continuous recordings and measure their acoustic properties, and statistical classification methods to categorize sound events. We describe an application of this approach to detect motor vehicle noise on paved, gravel, and natural-surface roads, and off-road vehicle trails in 36 sites distributed throughout a national forest in the Sierra Nevada, CA, USA. These low-cost, unobtrusive methods can be used by scientists and managers to detect anthropogenic noise events for many potential applications, including ecological research, transportation and recreation planning, and natural resource management.
Tweeting Earthquakes using TensorFlow
NASA Astrophysics Data System (ADS)
Casarotti, E.; Comunello, F.; Magnoni, F.
2016-12-01
The use of social media is emerging as a powerful tool for disseminating trusted information about earthquakes. Since 2009, the Twitter account @INGVterremoti has provided constant and timely details about M2+ seismic events detected by the Italian National Seismic Network, directly connected with the seismologists on duty at Istituto Nazionale di Geofisica e Vulcanologia (INGV). It currently reaches more than 150,000 followers. Nevertheless, since it provides only manually revised seismic parameters, its timing (approximately 10 to 20 minutes after an event) has come under evaluation. Undeniably, mobile internet, social network sites, and Twitter in particular require a more rapid, near-real-time reaction. During the last 36 months, INGV has tested tweeting the automatic detection of M3+ earthquakes, studying the reliability of the information both in terms of seismological accuracy and from the point of view of communication and social research. A set of quality parameters (i.e., number of seismic stations, gap, relative error of the location) has been identified to reduce false alarms and the uncertainty of the automatic detection. We present an experiment to further improve the reliability of this process using TensorFlow™ (an open source software library originally developed by researchers and engineers working on the Google Brain Team within Google's Machine Intelligence research organization).
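The quality-parameter gate described above can be sketched as a simple predicate. The parameter names follow the text (station count, azimuthal gap, relative location error), but the threshold values are purely illustrative, not INGV's operational settings:

```python
def worth_tweeting(n_stations, azimuthal_gap, rel_error, magnitude,
                   min_stations=10, max_gap=180.0, max_err=0.5, min_mag=3.0):
    """Gate an automatic solution before auto-publishing it: require a
    minimum magnitude, enough contributing stations, acceptable
    azimuthal coverage, and a small relative location error."""
    return (magnitude >= min_mag
            and n_stations >= min_stations
            and azimuthal_gap <= max_gap
            and rel_error <= max_err)
```

A learned model (the TensorFlow experiment mentioned above) would replace this hand-tuned rule with a classifier trained on past automatic solutions and their manual revisions.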
Evaluating the Reverse Time Migration Method on the dense Lapnet / Polenet seismic array in Europe
NASA Astrophysics Data System (ADS)
Dupont, Aurélien; Le Pichon, Alexis
2013-04-01
In this study, results obtained with the reverse time migration method are used as a benchmark to evaluate the method implemented by Walker et al. (2010, 2011). Explosion signals recorded by the USArray and extracted from the TAIRED catalogue (TA Infrasound Reference Event Database user community; Vernon et al., 2012) are investigated. The first is an explosion at Camp Minden, Louisiana (2012-10-16 04:25:00 UTC) and the second is a natural gas explosion near Price, Utah (2012-11-20 15:20:00 UTC). We compare our results to automatic solutions (www.iris.edu/spud/infrasoundevent). The good agreement between the two solutions validates our detection method. We then analyse data from the dense Lapnet/Polenet seismic network (Kozlovskaya et al., 2008). Detection and location in two-dimensional space and time of infrasound events, presumably due to acoustic-to-seismic coupling, during the 2007-2009 period in Europe are presented. The aim of this work is to integrate near-real-time network performance predictions at regional scales to improve automatic detection of infrasonic sources. Dense seismic networks provide a valuable tool for monitoring infrasonic phenomena, since seismic location has recently proved to be more accurate than infrasound location due to the large number of seismic sensors.
Improving the Detectability of the Catalan Seismic Network for Local Seismic Activity Monitoring
NASA Astrophysics Data System (ADS)
Jara, Jose Antonio; Frontera, Tànit; Batlló, Josep; Goula, Xavier
2016-04-01
The seismic survey of the territory of Catalonia is mainly performed by the regional seismic network operated by the Cartographic and Geologic Institute of Catalonia (ICGC). After successive deployments and upgrades, the current network consists of 16 permanent stations equipped with 3-component broadband seismometers (STS2, STS2.5, CMG3ESP and CMG3T), 24-bit digitizers (Nanometrics Trident) and VSAT telemetry. Data are continuously sent in real time via the Hispasat 1D satellite to the ICGC datacenter in Barcelona. Additionally, data from another 10 stations in neighboring areas (Spain, France and Andorra) have been continuously received since 2011 via Internet or VSAT, contributing both to detecting and to locating events affecting the region. More than 300 local events with Ml ≥ 0.7 have been detected and located in the region each year. Nevertheless, small-magnitude earthquakes, especially those located in the south and south-west of Catalonia, may still go undetected by the automatic detection system (DAS), based on Earthworm (USGS). Thus, in order to improve the detection and characterization of these missed events, one or two new stations should be installed. Before deciding where to install these new stations, the performance of each existing station is evaluated in terms of the fraction of events detected using the station's records, compared to the total number of catalogue events that occurred during the station's operation time from January 1, 2011 to December 31, 2014. These evaluations allow us to build an Event Detection Probability Map (EDPM), a tool required to simulate the EDPMs resulting from different network topology scenarios depending on where the new stations are sited, and essential to the decision-making process for increasing and optimizing the event detection probability of the seismic network.
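If per-station detection probabilities are estimated as described (the fraction of catalogue events each station recorded), a network-level detection probability for a trial topology can be combined under an independence assumption. This is a generic sketch of that combination, not ICGC's EDPM methodology:

```python
import numpy as np

def network_detection_prob(p_stations, min_detections=4):
    """Probability that at least `min_detections` stations detect an
    event, given independent per-station detection probabilities.
    Computes the Poisson-binomial distribution of the number of
    detecting stations by dynamic programming."""
    dist = np.zeros(len(p_stations) + 1)   # dist[k] = P(k detections)
    dist[0] = 1.0
    for p in p_stations:
        # Fold in one more station: either it detects (shift) or not
        dist[1:] = dist[1:] * (1 - p) + dist[:-1] * p
        dist[0] *= (1 - p)
    return float(dist[min_detections:].sum())
```

Evaluating this over a geographic grid, with per-station probabilities varying with distance to each grid cell, would produce one candidate-topology detection probability map to compare against alternatives.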
Ong, Lee-Ling S; Xinghua Zhang; Kundukad, Binu; Dauwels, Justin; Doyle, Patrick; Asada, H Harry
2016-08-01
An approach to automatically detecting bacteria division with temporal models is presented. To understand how bacteria migrate and proliferate to form complex multicellular behaviours such as biofilms, it is desirable to track individual bacteria and detect cell division events. Unlike eukaryotic cells, prokaryotic cells such as bacteria lack distinctive features, making bacterial division difficult to detect in a single image frame. Furthermore, bacteria may detach, migrate close to other bacteria, and orient themselves at an angle to the horizontal plane. Our system trains a hidden conditional random field (HCRF) model from tracked and aligned bacteria division sequences. The HCRF model classifies a set of image frames as division or otherwise. The performance of our HCRF model is compared with a hidden Markov model (HMM). The results show that an HCRF classifier outperforms an HMM classifier. From 2D bright-field microscopy data, it is a challenge to separate individual bacteria and associate observations to tracks. Automatic detection of sequences with bacteria division will improve tracking accuracy.
Wang, Yang; Wang, Xiaohua; Liu, Fangnan; Jiang, Xiaoning; Xiao, Yun; Dong, Xuehan; Kong, Xianglei; Yang, Xuemei; Tian, Donghua; Qu, Zhiyong
2016-01-01
Few studies have looked at the relationship between psychological factors and the mental health status of pregnant women in rural China. The current study aims to explore the potential mediating effect of negative automatic thoughts between negative life events and antenatal depression. Data were collected in June 2012 and October 2012, and 495 rural pregnant women were interviewed. Depressive symptoms were measured by the Edinburgh postnatal depression scale, stresses of pregnancy by the pregnancy pressure scale, negative automatic thoughts by the automatic thoughts questionnaire, and negative life events by the life events scale for pregnant women. We used logistic regression and path analysis to test the mediating effect. The prevalence of antenatal depression was 13.7%. In the logistic regression, the only socio-demographic and health behavior factor significantly related to antenatal depression was sleep quality. Negative life events were not associated with depression in the fully adjusted model. Path analysis showed that the eventual direct and general effects of negative automatic thoughts were 0.39 and 0.51, larger than the effects of negative life events. This study suggests a potentially significant mediating effect of negative automatic thoughts: negative life events appear to influence antenatal depression largely through negative automatic thoughts, so pregnant women with lower negative automatic thoughts scores may be less affected by negative life events.
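The mediation logic tested by the path analysis can be illustrated with a minimal single-mediator model fitted by ordinary least squares. This is a textbook sketch (Baron-Kenny-style decomposition), not the authors' model specification:

```python
import numpy as np

def mediation_effects(x, m, y):
    """Single-mediator path model fitted by OLS:
    m = a*x + const, y = c'*x + b*m + const.
    The indirect effect of x (life events) through m (automatic
    thoughts) is a*b; the total effect is c' + a*b."""
    def ols_slopes(cols, target):
        # Design matrix with intercept; return non-intercept coefficients
        X = np.column_stack([np.ones(len(target))] + list(cols))
        beta, *_ = np.linalg.lstsq(X, target, rcond=None)
        return beta[1:]
    a = ols_slopes([x], m)[0]           # path x -> m
    cprime, b = ols_slopes([x, m], y)   # x -> y given m, and m -> y
    return {"direct": cprime, "indirect": a * b, "total": cprime + a * b}
```

On data generated with known paths, the decomposition recovers them exactly; on survey data, one would additionally bootstrap a confidence interval for the indirect effect.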
Security Event Recognition for Visual Surveillance
NASA Astrophysics Data System (ADS)
Liao, W.; Yang, C.; Yang, M. Ying; Rosenhahn, B.
2017-05-01
With the rapidly increasing deployment of surveillance cameras, reliable methods for automatically analyzing surveillance video and recognizing special events are demanded by different practical applications. This paper proposes a novel, effective framework for security event analysis in surveillance videos. First, a convolutional neural network (CNN) framework is used to detect objects of interest in the given videos. Second, the owners of the objects are recognized and monitored in real time as well. If anyone moves an object, the system verifies whether that person is its owner. If not, the event is further analyzed and distinguished between two different scenes: moving the object away or stealing it. To validate the proposed approach, a new video dataset consisting of various scenarios is constructed for more complex tasks. For comparison purposes, experiments are also carried out on benchmark databases related to the task of abandoned luggage detection. The experimental results show that the proposed approach outperforms the state-of-the-art methods and is effective in recognizing complex security events.
NASA Astrophysics Data System (ADS)
Vasterling, Margarete; Wegler, Ulrich; Bruestle, Andrea; Becker, Jan
2016-04-01
Real-time information on the locations and magnitudes of induced earthquakes is essential for response plans based on the magnitude-frequency distribution. We developed and tested a real-time cross-correlation detector focusing on induced microseismicity in deep geothermal reservoirs. The incoming seismological data are cross-correlated in real time with a set of known master events. We use the envelopes of the seismograms rather than the seismograms themselves to account for small changes in the source locations or in the focal mechanisms. Two different detection conditions are implemented: after first passing a single-trace correlation condition, a network correlation is calculated taking the amplitude information of the seismic network into account. The magnitude is estimated from the ratio of the maximum amplitudes of the master event and the detected event. The detector is implemented as a real-time tool and put into practice as a SeisComp3 module, an established open-source software package for seismological real-time data handling and analysis. We validated the reliability and robustness of the detector in an offline playback test using four months of data from monitoring the power plant in Insheim (Upper Rhine Graben, SW Germany). Subsequently, in October 2013 the detector was installed as a real-time monitoring system within the project "MAGS2 - Microseismic Activity of Geothermal Systems". Master events from the two neighboring geothermal power plants in Insheim and Landau and from two nearby quarries are defined. After detection, manual phase determination and event location are performed at the local seismological survey of the Geological Survey and Mining Authority of Rhineland-Palatinate. Until November 2015 the detector identified 454 events, of which 95% were assigned correctly to the respective source. 5% were misdetections caused by local tectonic events.
To evaluate the completeness of the automatically obtained catalogue, it is compared to the event catalogue of the Seismological Service of Southwestern Germany and to the events reported by the company tasked with seismic monitoring of the Insheim power plant. Events missed by the cross-correlation detector are generally very small: they are registered at too few stations to meet the detection criteria, and most of them were not locatable. The automatic catalogue has a magnitude of completeness around 0.0 and is significantly more detailed than the catalogue from standard processing of the Seismological Service of Southwestern Germany for this region. For events in the magnitude range of the master event, the magnitude estimated from the amplitude ratio reproduces the local magnitude well; for weaker events there tends to be a small offset. Altogether, the developed real-time cross-correlation detector provides robust detections with reliable association of the events to the respective sources and valid magnitude estimates. It thus provides input parameters for the mitigation of seismic hazard by using response plans in real time.
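The two core operations, envelope correlation against a master event and magnitude estimation from the amplitude ratio, can be sketched as follows. The detector below is a simplified single-trace version of the two-stage (single-trace plus network) condition described above, with assumed threshold values:

```python
import numpy as np

def detect_by_master(env_stream, env_master):
    """Slide the master-event envelope over a continuous envelope and
    return (best_offset, best_correlation). Normalized correlation of
    envelopes, as in the text, tolerates small changes in location or
    focal mechanism better than raw waveform correlation."""
    m = env_master - env_master.mean()
    m /= np.linalg.norm(m)
    best, best_i = -1.0, -1
    for i in range(len(env_stream) - len(m) + 1):
        w = env_stream[i:i+len(m)] - env_stream[i:i+len(m)].mean()
        n = np.linalg.norm(w)
        if n == 0:
            continue                       # skip flat (empty) windows
        cc = float(m @ (w / n))
        if cc > best:
            best, best_i = cc, i
    return best_i, best

def magnitude_from_ratio(amp_detected, amp_master, mag_master):
    """Magnitude estimate from the maximum-amplitude ratio to the
    master event: M = M_master + log10(A / A_master)."""
    return mag_master + np.log10(amp_detected / amp_master)
```

In the operational system, a detection would require the correlation to exceed a threshold on one trace and then again across the network before the magnitude is estimated.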
NASA Astrophysics Data System (ADS)
Becker, Holger; Schattschneider, Sebastian; Klemm, Richard; Hlawatsch, Nadine; Gärtner, Claudia
2015-03-01
The continuous monitoring of the environment for lethal pathogens is a central task in the field of biothreat detection. Typical scenarios involve air sampling in locations such as public transport systems or large public events and a subsequent analysis of the samples by a portable instrument. Lab-on-a-chip technologies are one of the promising technological candidates for such a system. We have developed an integrated microfluidic system with automatic sampling for the detection of CBRNE-related pathogens. The chip implements a two-pronged analysis strategy: on the one hand, an immunological track using antibodies immobilized on a frit with subsequent photometric detection; on the other, a molecular biology approach using continuous-flow PCR with fluorescence end-point detection. The cartridge contains a two-component molded rotary valve to allow active fluid control and switching between channels. The accompanying instrument contains all elements for fluidic and valve actuation and thermal control, as well as the two detection modalities. Reagents are stored in dedicated reagent packs which are connected directly to the cartridge. With this system, we have been able to demonstrate the detection of a variety of pathogen species.
A novel automatic method for monitoring Tourette motor tics through a wearable device.
Bernabei, Michel; Preatoni, Ezio; Mendez, Martin; Piccini, Luca; Porta, Mauro; Andreoni, Giuseppe
2010-09-15
The aim of this study was to propose a novel automatic method for quantifying motor tics caused by the Tourette Syndrome (TS). In this preliminary report, the feasibility of the monitoring process was tested over a series of standard clinical trials in a population of 12 subjects affected by TS. A wearable instrument with an embedded three-axial accelerometer was used to detect and classify motor tics during standing and walking activities. An algorithm was devised to analyze acceleration data by eliminating noise, detecting peaks connected to pathological events, and classifying the intensity and frequency of motor tics into quantitative scores. These indexes were compared with the video-based ones provided by expert clinicians, which were taken as the gold standard. Sensitivity, specificity, and accuracy of tic detection were estimated, and an agreement analysis was performed through least-squares regression and the Bland-Altman test. The tic recognition algorithm showed sensitivity = 80.8% ± 8.5% (mean ± SD), specificity = 75.8% ± 17.3%, and accuracy = 80.5% ± 12.2%. The agreement study showed that automatic detection tended to overestimate the number of tics that occurred, although this appeared to be a systematic error due to the different recognition principles of the wearable and video-based systems. Furthermore, there was substantial concurrence with the gold standard in estimating the severity indexes. The proposed methodology gave promising performances in terms of automatic motor-tic detection and classification in a standard clinical context. The system may provide physicians with a quantitative aid for TS assessment. Further developments will focus on extending its application to everyday long-term monitoring outside clinical environments. © 2010 Movement Disorder Society.
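The peak-detection step of such an algorithm might look roughly like the sketch below; the threshold and refractory window are hypothetical tuning parameters, not the paper's calibrated values:

```python
def detect_tic_peaks(samples, threshold, refractory):
    """Flag samples whose 3-axis acceleration magnitude exceeds `threshold`,
    enforcing a refractory window (in samples) so a single tic is not
    counted twice. Both parameters are hypothetical tuning knobs."""
    events = []
    last = -refractory
    for i, (ax, ay, az) in enumerate(samples):
        mag = (ax * ax + ay * ay + az * az) ** 0.5
        if mag > threshold and i - last >= refractory:
            events.append(i)
            last = i
    return events
```

Counting and binning the returned indices per unit time would then yield the frequency score compared against the clinicians' video-based counts.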
Automatic Detection of Seizures with Applications
NASA Technical Reports Server (NTRS)
Olsen, Dale E.; Harris, John C.; Cutchis, Protagoras N.; Cristion, John A.; Lesser, Ronald P.; Webber, W. Robert S.
1993-01-01
There are an estimated two million people with epilepsy in the United States. Many of these people do not respond to anti-epileptic drug therapy. Two devices can be developed to assist in the treatment of epilepsy. The first is a microcomputer-based system designed to process massive amounts of electroencephalogram (EEG) data collected during long-term monitoring of patients for the purpose of diagnosing seizures, assessing the effectiveness of medical therapy, or selecting patients for epilepsy surgery. Such a device would select and display important EEG events. Currently many such events are missed. A second device could be implanted and would detect seizures and initiate therapy. Both of these devices require a reliable seizure detection algorithm. A new algorithm is described. It is believed to represent an improvement over existing seizure detection algorithms because better signal features were selected and better standardization methods were used.
NASA Astrophysics Data System (ADS)
Bossu, R.; Steed, R.; Mazet-Roux, G.; Roussel, F.; Frobert, L.
2015-12-01
Many seismic events are only picked up by seismometers, but the only earthquakes that really interest the public (and the authorities) are those which are felt by the population. It is not only a matter of magnitude: even a small-magnitude earthquake, if widely felt, can create a public desire for information. In LastQuake, felt events are automatically discriminated through the reactions of the population on the Internet, using three different and complementary methods. Twitter earthquake detection, initially developed by the USGS, detects surges in the number of tweets containing the word "earthquake" in different languages. Flashsourcing, developed by EMSC, detects traffic surges caused by eyewitnesses on its website - one of the top global earthquake information websites. Both detections typically happen within 2 minutes of an event's occurrence. Finally, an earthquake is also confirmed as being felt when at least 3 independent felt reports (questionnaires) are collected. LastQuake automatically merges seismic data, direct (crowdsourced) and indirect eyewitness contributions, damage scenarios and tsunami alerts to provide information on felt earthquakes and their effects in a time ranging from a few tens of seconds to 90 minutes. It is based on visual communication to erase language hurdles; for instance, it crowdsources felt reports through simple cartoons as well as geo-located pictures. It was massively adopted in Nepal within hours of the Gorkha earthquake and collected thousands of felt reports and more than 100 informative pictures. LastQuake is also a seismic risk reduction tool thanks to its very rapid information: when such information does not exist, people tend to call emergency services, crowds emerge and rumors spread. In its next release, LastQuake will also have "do/don't do" cartoons popping up after an earthquake to encourage appropriate behavior.
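A tweet-rate surge detector of the kind used for Twitter earthquake detection can be approximated with a sliding window; the window length, surge factor and baseline rate below are invented placeholders, not USGS or EMSC values:

```python
from collections import deque

def surge_detector(timestamps, window, factor, baseline_rate):
    """Return the times (seconds) at which the message count inside a
    sliding `window` exceeds `factor` times the count expected from
    `baseline_rate` (messages/s). All thresholds are hypothetical."""
    recent = deque()
    alerts = []
    for t in timestamps:  # timestamps must be sorted ascending
        recent.append(t)
        while recent and recent[0] < t - window:
            recent.popleft()
        if len(recent) > factor * baseline_rate * window:
            alerts.append(t)
    return alerts
```

With a 10 s window, a surge factor of 3 and a 1 msg/s baseline, a burst of 10 messages per second raises an alert within a few seconds of its onset.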
Meal Microstructure Characterization from Sensor-Based Food Intake Detection.
Doulah, Abul; Farooq, Muhammad; Yang, Xin; Parton, Jason; McCrory, Megan A; Higgins, Janine A; Sazonov, Edward
2017-01-01
To avoid the pitfalls of self-reported dietary intake, wearable sensors can be used. Many food ingestion sensors offer the ability to automatically detect food intake using time resolutions that range from 23 ms to 8 min. There is no defined standard time resolution to accurately measure ingestive behavior or a meal microstructure. This paper aims to estimate the time resolution needed to accurately represent the microstructure of meals, such as the duration of the eating episode, the duration of actual ingestion, and the number of eating events. Twelve participants wore the automatic ingestion monitor (AIM) and kept a standard diet diary to report their food intake in free-living conditions for 24 h. As a reference, participants were also asked to mark food intake with a push button sampled every 0.1 s. The duration of eating episodes, duration of ingestion, and number of eating events were computed from the food diary, the AIM, and the push button resampled at different time resolutions (0.1-30 s). ANOVA and multiple comparison tests showed that the duration of eating episodes estimated from the diary differed significantly from that estimated by the AIM and the push button (p-value < 0.001). There were no significant differences in the number of eating events for push-button resolutions of 0.1, 1, and 5 s, but there were significant differences at resolutions of 10-30 s (p-value < 0.05). The results suggest that the desired time resolution of sensor-based food intake detection should be ≤5 s to accurately detect meal microstructure. Furthermore, the AIM provides more accurate measurement of the eating episode duration than the diet diary.
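The effect of resampling on event counts can be illustrated as follows. Times are kept as integer deciseconds (0.1 s units) to avoid floating-point binning artifacts, and the merge rule (consecutive active bins form one event) is an assumption, not the paper's exact definition:

```python
def count_events(marks_ds, resolution_ds):
    """marks_ds: push-button press times in deciseconds (0.1 s units).
    resolution_ds: bin width in deciseconds. Marks landing in consecutive
    bins merge into a single eating event; a gap of one or more empty
    bins starts a new event."""
    bins = sorted({t // resolution_ds for t in marks_ds})
    if not bins:
        return 0
    return 1 + sum(1 for a, b in zip(bins, bins[1:]) if b > a + 1)
```

Two bites 3 s apart remain separate events at 0.1 s resolution but merge into one at 5 s resolution, which is exactly the kind of undercounting the paper reports at coarse resolutions.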
iMSRC: converting a standard automated microscope into an intelligent screening platform
Carro, Angel; Perez-Martinez, Manuel; Soriano, Joaquim; Pisano, David G.; Megias, Diego
2015-01-01
Microscopy in the context of biomedical research is demanding new tools to automatically detect and capture objects of interest. The few extant packages addressing this need, however, have enjoyed limited uptake due to complexity of use and installation. To overcome these drawbacks, we developed iMSRC, which combines ease of use and installation with high flexibility and enables applications such as rare event detection and high-resolution tissue sample screening, saving time and resources. PMID:26015081
Continuous robust sound event classification using time-frequency features and deep learning
McLoughlin, Ian; Zhang, Haomin; Xie, Zhipeng; Song, Yan; Xiao, Wei; Phan, Huy
2017-01-01
The automatic detection and recognition of sound events by computers is a requirement for a number of emerging sensing and human-computer interaction technologies. Recent advances in this field have been achieved by machine learning classifiers working in conjunction with time-frequency feature representations. This combination has achieved excellent accuracy for the classification of discrete sounds. The ability to recognise sounds under real-world noisy conditions, called robust sound event classification, is an especially challenging task that has attracted recent research attention. Another aspect of real-world conditions is the classification of continuous, occluded or overlapping sounds, rather than classification of short isolated sound recordings. This paper addresses the classification of noise-corrupted, occluded, overlapped, continuous sound recordings. It first proposes a standard evaluation task for such sounds based upon a common existing method for evaluating isolated sound classification. It then benchmarks several high-performing isolated sound classifiers, adapted to operate on continuous sound data by incorporating an energy-based event detection front end. Results are reported for each tested system using the new task, to provide the first analysis of their performance for continuous sound event detection. In addition, it proposes and evaluates a novel Bayesian-inspired front end for the segmentation and detection of continuous sound recordings prior to classification. PMID:28892478
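A minimal energy-based event detection front end of the kind benchmarked here might look like the following; the frame length and energy threshold are hypothetical:

```python
def energy_segments(signal, frame, threshold):
    """Energy-based front end: compute mean energy per non-overlapping
    frame, mark frames above `threshold` as active, and merge runs of
    active frames into (start, end) sample segments to hand to the
    classifier. Frame size and threshold are hypothetical tuning knobs."""
    segments = []
    start = None
    for i in range(0, len(signal) - frame + 1, frame):
        energy = sum(x * x for x in signal[i:i + frame]) / frame
        if energy > threshold and start is None:
            start = i
        elif energy <= threshold and start is not None:
            segments.append((start, i))
            start = None
    if start is not None:
        segments.append((start, len(signal)))
    return segments
```

Each returned segment would then be classified in isolation, which is how the isolated-sound classifiers are adapted to continuous input.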
SAMuS: Service-Oriented Architecture for Multisensor Surveillance in Smart Homes
Van de Walle, Rik
2014-01-01
The design of a service-oriented architecture for multisensor surveillance in smart homes is presented as an integrated solution enabling automatic deployment, dynamic selection, and composition of sensors. Sensors are implemented as Web-connected devices with a uniform Web API. RESTdesc is used to describe the sensors, and a novel solution is presented to automatically compose Web APIs that can be applied with existing Semantic Web reasoners. We evaluated the solution by building a smart Kinect sensor that is able to dynamically switch between IR and RGB and to optimize person detection by incorporating feedback from pressure sensors, thus demonstrating the collaboration among sensors to enhance detection of complex events. The performance results show that the platform scales to many Web APIs, as composition time remains limited to a few hundred milliseconds in almost all cases. PMID:24778579
A substitution method to improve completeness of events documentation in anesthesia records.
Lamer, Antoine; De Jonckheere, Julien; Marcilly, Romaric; Tavernier, Benoît; Vallet, Benoît; Jeanne, Mathieu; Logier, Régis
2015-12-01
Anesthesia information management systems (AIMS) are optimized to find and display the data and curves of one specific intervention, but not for retrospective analysis over a large volume of interventions. Such systems present two main limitations: (1) the transactional database architecture and (2) the incompleteness of documentation. To solve the architectural problem, data warehouses were developed to propose an architecture suitable for analysis. Incompleteness of documentation, however, remains unsolved. In this paper, we describe a method for determining substitution rules in order to detect missing anesthesia events in an anesthesia record. Our method is based on the principle that a missing event can be replaced by a substitute, defined as the nearest documented event. As an example, we focused on the automatic detection of the start and the end of the anesthesia procedure when these events were not documented by the clinicians. We applied our method to a set of records in order to evaluate (1) the event-detection accuracy and (2) the improvement in valid records. For the years 2010-2012, we obtained event detection with a precision of 0.00 (-2.22; 2.00) min for the start of anesthesia and 0.10 (0.00; 0.35) min for the end of anesthesia. We also increased data completeness for the start and end of anesthesia events by 21.1% (from 80.3% to 97.2% of the total database). This method appears efficient for replacing missing "start and end of anesthesia" events. It could also be used to replace other missing time events in this particular data warehouse as well as in other kinds of data warehouses.
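The substitution principle (replace a missing event with the nearest documented one) can be sketched as a lookup over an ordered fallback list; the event names and their ordering below are invented for illustration:

```python
def substitute_missing(record, target, substitution_order):
    """Return (timestamp, event_used) for `target`. If the target event
    is absent from the record, fall back to the first documented event
    in `substitution_order` - an ordered list of nearest-substitute
    event names (hypothetical rule derived from historical records)."""
    if target in record:
        return record[target], target
    for name in substitution_order:
        if name in record:
            return record[name], name
    return None, None

# Example: a record missing "start_of_anesthesia" falls back to "induction".
record = {"induction": 800, "intubation": 815}
print(substitute_missing(record, "start_of_anesthesia", ["induction", "first_drug"]))
```

In the paper's data-warehouse setting, the substitution order itself would be learned from records in which both the target and candidate events were documented.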
Considerations in Phase Estimation and Event Location Using Small-aperture Regional Seismic Arrays
NASA Astrophysics Data System (ADS)
Gibbons, Steven J.; Kværna, Tormod; Ringdal, Frode
2010-05-01
The global monitoring of earthquakes and explosions at decreasing magnitudes necessitates the fully automatic detection, location and classification of an ever increasing number of seismic events. Many seismic stations of the International Monitoring System are small-aperture arrays designed to optimize the detection and measurement of regional phases. Collaboration with operators of mines within regional distances of the ARCES array, together with waveform correlation techniques, has provided an unparalleled opportunity to assess the ability of a small-aperture array to provide robust and accurate direction and slowness estimates for phase arrivals resulting from well-constrained events at sites of repeating seismicity. A significant reason for the inaccuracy of current fully-automatic event location estimates is the use of f-k slowness estimates measured in variable frequency bands. The variability of slowness and azimuth measurements for a given phase from a given source region is reduced by the application of almost any constant frequency band. However, the frequency band resulting in the most stable estimates varies greatly from site to site. Situations are observed in which regional P-arrivals from two sites, far closer than the theoretical resolution of the array, result in highly distinct populations in slowness space. This means that the f-k estimates, even at relatively low frequencies, can be sensitive to source- and path-specific characteristics of the wavefield and should be treated with caution when inferring a geographical backazimuth under the assumption of a planar wavefront arriving along the great-circle path. Moreover, different frequency bands are associated with different biases, meaning that slowness and azimuth station corrections (commonly denoted SASCs) cannot be calibrated, and should not be used, without reference to the frequency band employed.
We demonstrate an example where fully-automatic locations based on a source-region-specific fixed-parameter template are more stable than the corresponding analyst-reviewed estimates. The reason is that the analyst selects a frequency band and analysis window which appear optimal for each event. In this case, the frequency band which produces the most consistent direction estimates has neither the best SNR nor the greatest beam-gain, and is therefore unlikely to be chosen by an analyst without calibration data.
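A toy time-domain version of the slowness grid search behind the f-k estimates discussed above is sketched below. Real array processing works on narrow-band filtered data in the frequency domain, so this is only a conceptual stand-in: each trace is shifted by the plane-wave delay s·r and the slowness maximizing stacked beam power wins.

```python
def fk_slowness(waveforms, coords, dt, s_range, n_grid=41):
    """Grid search over horizontal slowness (sx, sy) in s/km for an array
    with station coordinates in km. Shifts each trace by the plane-wave
    delay (circularly, for simplicity) and returns the slowness that
    maximizes the stacked beam power. A minimal conceptual stand-in for
    frequency-domain f-k analysis."""
    step = 2 * s_range / (n_grid - 1)
    grid = [-s_range + k * step for k in range(n_grid)]
    n = len(waveforms[0])
    best, best_power = (0.0, 0.0), -1.0
    for sx in grid:
        for sy in grid:
            beam = [0.0] * n
            for trace, (x, y) in zip(waveforms, coords):
                shift = round((sx * x + sy * y) / dt)  # delay in samples
                for k in range(n):
                    beam[k] += trace[(k + shift) % n]
            power = sum(b * b for b in beam)
            if power > best_power:
                best_power, best = power, (sx, sy)
    return best
```

The backazimuth then follows from the direction of the recovered slowness vector; the abstract's caution applies here too, since the result depends on the frequency content of the traces.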
NASA Astrophysics Data System (ADS)
Zollo, Aldo
2016-04-01
RISS S.r.l. is a spin-off company recently born from the initiative of the research group constituting the Seismology Laboratory of the Department of Physics of the University of Naples Federico II. RISS is an innovative start-up, built on the decade-long experience of its members in earthquake monitoring systems and seismic data analysis, with the major goal of transforming the most recent innovations of scientific research into technological products and prototypes. With this aim, RISS has recently started the development of new software offering an elegant solution to manage and analyse seismic data and to create automatic earthquake bulletins. The software was initially developed to manage data recorded at the ISNet network (Irpinia Seismic Network), a network of seismic stations deployed in the Southern Apennines along the active fault system responsible for the November 23, 1980, MS 6.9 Irpinia earthquake. The software, however, is fully exportable and can be used to manage data from different networks, with any kind of station geometry or network configuration, and is able to provide reliable estimates of earthquake source parameters whatever the background seismicity level of the area of interest. Here we present the real-time automated procedures and the analyses performed by the software package, which is essentially a chain of different modules, each aimed at the automatic computation of a specific source parameter. The P-wave arrival times are first detected on the real-time data stream, and the software then performs phase association and event binding. As soon as an event is automatically detected by the binder, the earthquake location coordinates and the origin time are rapidly estimated using a probabilistic, non-linear exploration algorithm. The software then automatically provides three different magnitude estimates.
First, the local magnitude (Ml) is computed using the peak-to-peak amplitude of the equivalent Wood-Anderson displacement recordings. The moment magnitude (Mw) is then estimated from the inversion of displacement spectra, and the duration magnitude (Md) is rapidly computed from a simple, automatic measurement of the seismic-wave coda duration. Starting from the magnitude estimates, other relevant quantities are also computed, such as the corner frequency, the seismic moment, the source radius and the seismic energy. Ground-shaking maps are produced on a Google map for peak ground acceleration (PGA), peak ground velocity (PGV) and instrumental intensity (in SHAKEMAP® format), together with a plot of the measured peak ground values. Furthermore, based on a specific decisional scheme, the software automatically discriminates between local earthquakes occurring within the network and regional/teleseismic events occurring outside it. Finally, for the largest events, if a sufficient number of P-wave polarity readings are available, the focal mechanism is also computed. For each event, all of the available information is stored in a local database and the results of the automatic analyses are published on an interactive web page. "The Bulletin" shows a map with the event location and stations, as well as a table listing all the events with their associated parameters. The catalogue fields are the event ID, the origin date and time, latitude, longitude, depth, Ml, Mw, Md, the number of triggered stations, the S-displacement spectra, and shaking maps. Some of these entries also provide additional information, such as the focal mechanism (when available). The picked traces are uploaded to the database and can be downloaded from the web interface of the Bulletin for more specific analysis.
This innovative software represents a smart solution, with a friendly and interactive interface, for high-level seismic data analysis, and it may be a relevant tool not only for seismologists but also for non-expert external users interested in seismological data. The software is a valid tool for the automatic analysis of background seismicity at different time scales and for the monitoring of both natural and induced seismicity.
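The local-magnitude step of such a chain can be illustrated with a Richter-style relation. The distance-correction coefficients below are the widely used Hutton & Boore (1987) Southern California values, standing in for whatever regional calibration ISNet actually uses:

```python
import math

def local_magnitude(wa_amplitude_mm, hypocentral_distance_km):
    """Richter-style local magnitude from the peak Wood-Anderson
    amplitude (mm) at a given hypocentral distance (km). Coefficients
    are the Hutton & Boore (1987) values, used here only as a generic
    example of a distance correction."""
    return (math.log10(wa_amplitude_mm)
            + 1.110 * math.log10(hypocentral_distance_km / 100.0)
            + 0.00189 * (hypocentral_distance_km - 100.0)
            + 3.0)
```

By construction, a 1 mm Wood-Anderson amplitude at 100 km corresponds to Ml 3.0; the per-station values would then be averaged over the triggered network.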
Automatic localization of backscattering events due to particulate in urban areas
NASA Astrophysics Data System (ADS)
Gaudio, P.; Gelfusa, M.; Malizia, Andrea; Parracino, Stefano; Richetta, M.; Murari, A.; Vega, J.
2014-10-01
Particulate matter (PM), emitted by vehicles in urban traffic, can greatly affect ambient air quality and has direct implications for both human health and infrastructure integrity. The consequences for society are significant and can also affect national health systems. Limits and thresholds for pollutants emitted by vehicles are typically regulated by government agencies. In the last few years, interest in PM emissions has grown substantially due to both air quality issues and global warming. Lidar and DIAL techniques are widely recognized as a cost-effective alternative for monitoring large regions of the atmosphere. To maximize the effectiveness of the measurements and to guarantee reliable, automatic monitoring of large areas, new data analysis techniques are required. In this paper, an original tool, the Universal Multi-Event Locator (UMEL), is applied to the problem of automatically identifying the time location of peaks in Lidar measurements for the detection of particulate matter emitted by anthropogenic sources such as vehicles. The method is based on Support Vector Regression and presents various advantages with respect to more traditional techniques. In particular, UMEL relies on the morphological properties of the signals and is therefore insensitive to the details of the noise present in the detection system. The approach is also fully general and purely software-based, and can therefore be applied to a large variety of problems without additional cost. The potential of the proposed technique is exemplified with data acquired during an experimental field campaign in Rome.
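UMEL itself is built on Support Vector Regression. As a simple stand-in for the idea of locating peaks by signal morphology rather than by noise statistics, the sketch below compares each sample with a local moving-average baseline; the window and prominence parameters are hypothetical:

```python
def locate_peaks(signal, window, prominence):
    """Locate backscatter peaks: a sample is a peak when it exceeds the
    local moving-average baseline by `prominence` and is the maximum of
    its window. This is a morphological stand-in, NOT UMEL's actual
    SVR-based method; window and prominence are hypothetical."""
    peaks = []
    half = window // 2
    for i in range(half, len(signal) - half):
        seg = signal[i - half:i + half + 1]
        baseline = sum(seg) / len(seg)
        if signal[i] - baseline > prominence and signal[i] == max(seg):
            peaks.append(i)
    return peaks
```

Because the decision depends on the local shape of the trace rather than on an absolute amplitude threshold, it degrades more gracefully when the noise floor drifts along the profile.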
Automatic detection of lift-off and touch-down of a pick-up walker using 3D kinematics.
Grootveld, L; Thies, S B; Ogden, D; Howard, D; Kenney, L P J
2014-02-01
Walking aids have been associated with falls, and it is believed that incorrect use limits their usefulness. Measures are therefore needed that characterize their stable use, and the classification of key events in walking-aid movement is the first step in their development. This study presents an automated algorithm for detection of lift-off (LO) and touch-down (TD) events of a pick-up walker. For algorithm design and initial testing, a single user performed trials in which the four individual walker feet lifted off the ground and touched down again in various sequences, and with different amounts of frame loading (Dataset_1). For further validation, ten healthy young subjects walked with the pick-up walker on flat ground (Dataset_2a) and on a narrow beam (Dataset_2b), to challenge balance. One 88-year-old walking frame user was also assessed. Kinematic data were collected with a 3D optoelectronic camera system. The algorithm detected over 93% of events (Dataset_1), and 95% and 92% in Datasets_2a and 2b, respectively. Of the various LO/TD sequences, those associated with natural progression resulted in up to 100% correctly identified events. For the 88-year-old walking frame user, 96% of LO events and 93% of TD events were detected, demonstrating the potential of the approach. Copyright © 2013 IPEM. Published by Elsevier Ltd. All rights reserved.
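A threshold-crossing version of LO/TD detection on one foot's vertical marker trajectory can be sketched as follows; the published algorithm is more elaborate, and the height threshold here is a hypothetical placeholder:

```python
def lift_events(heights, threshold):
    """Detect lift-off (LO) and touch-down (TD) of one walker foot from
    its vertical marker trajectory (e.g. mm above ground per frame):
    LO when the height rises through `threshold`, TD when it falls back.
    The threshold is a hypothetical tuning parameter."""
    events = []
    airborne = False
    for i, h in enumerate(heights):
        if not airborne and h > threshold:
            events.append(("LO", i))
            airborne = True
        elif airborne and h <= threshold:
            events.append(("TD", i))
            airborne = False
    return events
```

Running this per foot and interleaving the four event streams reconstructs the LO/TD sequences whose classification the study evaluates.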
Advanced Clinical Decision Support for Vaccine Adverse Event Detection and Reporting.
Baker, Meghan A; Kaelber, David C; Bar-Shain, David S; Moro, Pedro L; Zambarano, Bob; Mazza, Megan; Garcia, Crystal; Henry, Adam; Platt, Richard; Klompas, Michael
2015-09-15
Reporting of adverse events (AEs) following vaccination can help identify rare or unexpected complications of immunizations and aid in characterizing potential vaccine safety signals. We developed an open-source, generalizable clinical decision support system called Electronic Support for Public Health-Vaccine Adverse Event Reporting System (ESP-VAERS) to assist clinicians with AE detection and reporting. ESP-VAERS monitors patients' electronic health records for new diagnoses, changes in laboratory values, and new allergies following vaccinations. When suggestive events are found, ESP-VAERS sends the patient's clinician a secure electronic message with an invitation to affirm or refute the message, add comments, and submit an automated, prepopulated electronic report to VAERS. High-probability AEs are reported automatically if the clinician does not respond. We implemented ESP-VAERS in December 2012 throughout the MetroHealth System, an integrated healthcare system in Ohio. We queried the VAERS database to determine MetroHealth's baseline reporting rates from January 2009 to March 2012 and then assessed changes in reporting rates with ESP-VAERS. In the 8 months following implementation, 91 622 vaccinations were given. ESP-VAERS sent 1385 messages to responsible clinicians describing potential AEs. Clinicians opened 1304 (94.2%) messages, responded to 209 (15.1%), and confirmed 16 for transmission to VAERS. An additional 16 high-probability AEs were sent automatically. Reported events included seizure, pleural effusion, and lymphocytopenia. The odds of a VAERS report submission during the implementation period were 30.2 (95% confidence interval, 9.52-95.5) times greater than the odds during the comparable preimplementation period. An open-source, electronic health record-based clinical decision support system can increase AE detection and reporting rates in VAERS. © The Author 2015. 
Published by Oxford University Press on behalf of the Infectious Diseases Society of America. All rights reserved.
Information processing requirements for on-board monitoring of automatic landing
NASA Technical Reports Server (NTRS)
Sorensen, J. A.; Karmarkar, J. S.
1977-01-01
A systematic procedure is presented for determining the information processing requirements for on-board monitoring of automatic landing systems. The monitoring system detects landing anomalies through use of appropriate statistical tests. The time-to-correct aircraft perturbations is determined from covariance analyses using a sequence of suitable aircraft/autoland/pilot models. The covariance results are used to establish landing safety and a fault recovery operating envelope via an event outcome tree. This procedure is demonstrated with examples using the NASA Terminal Configured Vehicle (B-737 aircraft). The procedure can also be used to define decision height, assess monitoring implementation requirements, and evaluate alternate autoland configurations.
Deacon, D; Nousak, J M; Pilotti, M; Ritter, W; Yang, C M
1998-07-01
The effects of global and feature-specific probabilities of auditory stimuli were manipulated to determine their effects on the mismatch negativity (MMN) of the human event-related potential. The question of interest was whether the automatic comparison of stimuli indexed by the MMN was performed on representations of individual stimulus features or on gestalt representations of their combined attributes. The design of the study was such that both feature and gestalt representations could have been available to the comparator mechanism generating the MMN. The data were consistent with the interpretation that the MMN was generated following an analysis of stimulus features.
Automatic detection of freezing of gait events in patients with Parkinson's disease.
Tripoliti, Evanthia E; Tzallas, Alexandros T; Tsipouras, Markos G; Rigas, George; Bougia, Panagiota; Leontiou, Michael; Konitsiotis, Spiros; Chondrogiorgi, Maria; Tsouli, Sofia; Fotiadis, Dimitrios I
2013-04-01
The aim of this study is to detect freezing of gait (FoG) events in patients suffering from Parkinson's disease (PD) using signals received from wearable sensors (six accelerometers and two gyroscopes) placed on the patients' body. For this purpose, an automated methodology has been developed which consists of four stages. In the first stage, missing values due to signal loss or degradation are replaced; in the second stage, low-frequency components of the raw signal are removed; in the third stage, the entropy of the raw signal is calculated; and in the fourth stage, four classification algorithms (Naïve Bayes, Random Forests, Decision Trees and Random Tree) are tested in order to detect the FoG events. The methodology has been evaluated using several different configurations of sensors in order to identify the set of sensors that produces optimal FoG episode detection. Signals were recorded from five healthy subjects, five patients with PD who presented the symptom of FoG, and six patients with PD who did not present FoG events. The signals included 93 FoG events with a total duration of 405.6 s. The results indicate that the proposed methodology is able to detect FoG events with 81.94% sensitivity, 98.74% specificity, 96.11% accuracy and 98.6% area under the curve (AUC) using the signals from all sensors and the Random Forests classification algorithm. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.
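The entropy stage (third stage) can be illustrated with a histogram-based Shannon entropy of a signal window; the bin count is a hypothetical choice, not the paper's setting:

```python
import math

def shannon_entropy(samples, n_bins=16):
    """Histogram-based Shannon entropy (bits) of an acceleration window:
    irregular, spread-out signals score high, constant ones score 0.
    The bin count is a hypothetical choice."""
    lo, hi = min(samples), max(samples)
    if hi == lo:
        return 0.0
    width = (hi - lo) / n_bins
    counts = [0] * n_bins
    for x in samples:
        counts[min(int((x - lo) / width), n_bins - 1)] += 1
    total = len(samples)
    return -sum(c / total * math.log2(c / total) for c in counts if c)
```

Computed over sliding windows per sensor channel, such entropy values form the feature vectors fed to the four classifiers compared in the study.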
A new way of searching for transients: the ADWO method and its results
NASA Astrophysics Data System (ADS)
Bagoly, Z.; Szecsi, D.; Ripa, J.; Racz, I. I.; Csabai, I.; Dobos, L.; Horvath, I.; Balazs, L. G.; Toth, L. V.
2017-12-01
With the detection of gravitational wave emissions from merging compact objects, it is now more important than ever to effectively mine the data-sets of gamma-ray satellites for non-triggered, short-duration transients. Hence we developed a new method called the Automatized Detector Weight Optimization (ADWO), applicable to space-borne detectors such as Fermi's GBM and RHESSI's Ge detectors. Provided that the trigger time of an astrophysical event is well known (as in the case of a gravitational wave detection) but the detector response matrix is uncertain, ADWO combines the data of all detectors and energy channels to provide the best signal-to-noise ratio. We have used ADWO to search for potential electromagnetic counterparts of gravitational wave events, as well as to detect previously un-triggered short-duration GRBs in the data-sets.
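The matched-filter principle behind combining channels for best signal-to-noise ratio can be illustrated with a toy example. This is not ADWO itself (which optimizes weights over real detector data); it only shows that for independent channels, weights proportional to signal over noise variance beat equal weighting. All numbers are invented:

```python
import math

def combined_snr(weights, signal, noise_std):
    """SNR of the weighted sum of independent channels."""
    num = sum(w * s for w, s in zip(weights, signal))
    den = math.sqrt(sum((w * sd) ** 2 for w, sd in zip(weights, noise_std)))
    return num / den

def optimal_weights(signal, noise_std):
    """Matched-filter weights w_i ~ s_i / sigma_i^2 maximize the combined SNR."""
    return [s / sd ** 2 for s, sd in zip(signal, noise_std)]

signal = [3.0, 1.0, 0.2]   # mean excess counts per channel during the event
noise = [1.0, 0.5, 2.0]    # background standard deviation per channel
w_opt = optimal_weights(signal, noise)
equal = [1.0, 1.0, 1.0]
```

With these weights the combined SNR equals `sqrt(sum((s_i/sigma_i)**2))`, i.e. the quadrature sum of the per-channel SNRs, which no other weighting can exceed.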
NASA Astrophysics Data System (ADS)
Brax, Christoffer; Niklasson, Lars
2009-05-01
Maritime Domain Awareness (MDA) is important for both civilian and military applications. An important part of MDA is the detection of unusual vessel activities such as piracy, smuggling, poaching, collisions, etc. Today's interconnected sensor systems provide huge amounts of information over large geographical areas, which can push operators beyond their cognitive capacity and cause them to miss important events. We propose an agent-based situation management system that automatically analyses sensor information to detect unusual activities and anomalies. The system combines knowledge-based detection with data-driven anomaly detection. The system is evaluated using information from both radar and AIS sensors.
Brain correlates of automatic visual change detection.
Cléry, H; Andersson, F; Fonlupt, P; Gomot, M
2013-07-15
A number of studies support the presence of visual automatic detection of change, but little is known about the brain generators involved in such processing and about the modulation of brain activity according to the salience of the stimulus. The study presented here was designed to locate the brain activity elicited by unattended visual deviant and novel stimuli using fMRI. Seventeen adult participants were presented with a passive visual oddball sequence while performing a concurrent visual task. Variations in BOLD signal were observed in the modality-specific sensory cortex, but also in non-specific areas involved in preattentional processing of changing events. A degree-of-deviance effect was observed, since novel stimuli elicited more activity in the sensory occipital regions and at the medial frontal site than small changes. These findings could be compared to those obtained in the auditory modality and might suggest a "general" change detection process operating in several sensory modalities. Copyright © 2013 Elsevier Inc. All rights reserved.
Cyber Surveillance for Flood Disasters
Lo, Shi-Wei; Wu, Jyh-Horng; Lin, Fang-Pang; Hsu, Ching-Han
2015-01-01
Regional heavy rainfall is usually caused by the influence of extreme weather conditions. Instant heavy rainfall often results in the flooding of rivers and the neighboring low-lying areas, which is responsible for a large number of casualties and considerable property loss. The existing precipitation forecast systems mostly focus on the analysis and forecast of large-scale areas but do not provide precise instant automatic monitoring and alert feedback for individual river areas and sections. Therefore, in this paper, we propose an easy method to automatically monitor the flood object of a specific area, based on the currently widely used remote cyber surveillance systems and image processing methods, in order to obtain instant flooding and waterlogging event feedback. The intrusion detection mode of these surveillance systems is used in this study, wherein a flood is considered a possible invasion object. Through the detection and verification of flood objects, automatic flood risk-level monitoring of specific individual river segments, as well as the automatic urban inundation detection, has become possible. The proposed method can better meet the practical needs of disaster prevention than the method of large-area forecasting. It also has several other advantages, such as flexibility in location selection, no requirement of a standard water-level ruler, and a relatively large field of view, when compared with the traditional water-level measurements using video screens. The results can offer prompt reference for appropriate disaster warning actions in small areas, making them more accurate and effective. PMID:25621609
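The intrusion-detection idea, treating a flood as an "invading" object in the camera's monitored region, can be sketched with simple frame differencing against a dry background image. This is a hedged, minimal sketch of the concept only; the paper's actual image processing and verification steps are more involved, and the pixel rows and thresholds below are invented:

```python
def changed_fraction(background, frame, diff_thresh=30):
    """Fraction of pixels differing from the background by more than diff_thresh."""
    changed = sum(1 for b, f in zip(background, frame) if abs(b - f) > diff_thresh)
    return changed / len(background)

def flood_alert(background, frame, area_thresh=0.4):
    """Treat a large changed area in the monitored region as a possible flood object."""
    return changed_fraction(background, frame) > area_thresh

# Invented 1-D "region of interest" pixel intensities for illustration.
background = [10] * 100
dry_frame = [12] * 100               # small illumination change only
wet_frame = [10] * 50 + [90] * 50    # lower half covered by water
```

A small illumination shift stays below both thresholds, while water covering a large part of the region trips the alert.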
Dormann, H; Criegee-Rieck, M; Neubert, A; Egger, T; Levy, M; Hahn, E G; Brune, K
2004-02-01
To investigate the effectiveness of a computer monitoring system that detects adverse drug reactions (ADRs) by laboratory signals in gastroenterology. A prospective, 6-month, pharmaco-epidemiological survey was carried out on a gastroenterological ward at the University Hospital Erlangen-Nuremberg. Two methods were used to identify ADRs. (i) All charts were reviewed daily by physicians and clinical pharmacists. (ii) A computer monitoring system generated a daily list of automatic laboratory signals and alerts of ADRs, including patient data and dates of events. One hundred and nine ADRs were detected in 474 admissions (377 patients). The computer monitoring system generated 4454 automatic laboratory signals from 39 819 laboratory parameters tested, and issued 2328 alerts, 914 (39%) of which were associated with ADRs; 574 (25%) were associated with ADR-positive admissions. Of all the alerts generated, signals of hepatotoxicity (1255), followed by coagulation disorders (407) and haematological toxicity (207), were prevalent. Correspondingly, the prevailing ADRs were concerned with the metabolic and hepato-gastrointestinal system (61). The sensitivity was 91%: 69 of 76 ADR-positive patients were indicated by an alert. The specificity of alerts was increased from 23% to 76% after implementation of an automatic laboratory signal trend monitoring algorithm. This study shows that a computer monitoring system is a useful tool for the systematic and automated detection of ADRs in gastroenterological patients.
Forghani-Arani, Farnoush; Behura, Jyoti; Haines, Seth S.; Batzle, Mike
2013-01-01
In studies on heavy oil, shale reservoirs, tight gas and enhanced geothermal systems, the use of surface passive seismic data to monitor induced microseismicity due to fluid flow in the subsurface is becoming more common. However, in most studies passive seismic records contain days and months of data, and manually analysing the data can be expensive and inaccurate. Moreover, in the presence of noise, detecting the arrival of weak microseismic events becomes challenging. Hence, the use of an automated, accurate and computationally fast technique for event detection in passive seismic data is essential. The conventional automatic event-identification algorithm computes a running-window energy ratio of the short-term average to the long-term average of the passive seismic data for each trace. We show that for the common case of a low signal-to-noise ratio in surface passive records, the conventional method is not sufficiently effective at event identification. Here, we extend the conventional algorithm by introducing a technique based on the cross-correlation of the energy ratios computed by the conventional method. With our technique we can measure the similarities amongst the computed energy ratios at different traces. Our approach is successful at improving the detectability of events with a low signal-to-noise ratio that are not detectable with the conventional algorithm. Our algorithm also has the advantage of identifying whether an event is common to all stations (a regional event) or to a limited number of stations (a local event). We provide examples of applying our technique to synthetic data and to a field surface passive data set recorded at a geothermal site.
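The conventional short-term-average to long-term-average (STA/LTA) energy ratio that the authors build upon can be sketched as follows; the window lengths and amplitudes are invented for illustration:

```python
def sta_lta(trace, nsta, nlta):
    """Windowed STA/LTA of squared amplitudes (the conventional energy ratio).

    Returns one ratio per sample from index nlta onward; earlier samples
    lack a full long-term window.
    """
    energy = [x * x for x in trace]
    ratios = []
    for i in range(nlta, len(trace)):
        sta = sum(energy[i - nsta:i]) / nsta
        lta = sum(energy[i - nlta:i]) / nlta
        ratios.append(sta / lta if lta > 0 else 0.0)
    return ratios

# Quiet background with a short "event" burst (invented amplitudes).
trace = [0.1] * 200 + [2.0] * 20 + [0.1] * 80
ratios = sta_lta(trace, nsta=10, nlta=100)
peak = max(ratios)
```

The paper's extension then cross-correlates these per-trace ratio series across stations, so that a weak event coherent on many traces stands out even when each individual ratio barely exceeds the noise.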
Electrophysiological evidence of automatic early semantic processing.
Hinojosa, José A; Martín-Loeches, Manuel; Muñoz, Francisco; Casado, Pilar; Pozo, Miguel A
2004-01-01
This study investigates the automatic versus controlled nature of early semantic processing by means of the Recognition Potential (RP), an event-related potential response that reflects lexical selection processes. For this purpose, tasks differing in their processing requirements were used. Half of the participants performed a physical task involving a lower-upper case discrimination judgement (shallow processing requirements), whereas the other half carried out a semantic task consisting of detecting animal names (deep processing requirements). Stimuli were identical in the two tasks. Reaction time measures revealed that the physical task was easier to perform than the semantic task. However, RP effects elicited by the physical and semantic tasks did not differ in latency, amplitude, or topographic distribution. Thus, the results from the present study suggest that early semantic processing is automatically triggered whenever a linguistic stimulus enters the language processor.
The Event Detection and the Apparent Velocity Estimation Based on Computer Vision
NASA Astrophysics Data System (ADS)
Shimojo, M.
2012-08-01
The high spatial and temporal resolution data obtained by the telescopes aboard Hinode revealed new and interesting dynamics in the solar atmosphere. In order to detect such events and estimate their velocities automatically, we examined optical flow estimation methods based on OpenCV, the computer vision library. We applied the methods to a prominence eruption observed by NoRH and a polar X-ray jet observed by XRT. As a result, it is clear that the methods work well for solar images if the images are optimized for the methods. This indicates that the optical flow estimation methods in the OpenCV library are very useful for analyzing solar phenomena.
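The core idea of estimating apparent motion between two frames can be illustrated without OpenCV by a crude 1-D block-matching sketch: test candidate integer shifts and keep the one minimizing the mismatch. OpenCV's dense optical flow routines (which the paper actually evaluated) are far more sophisticated; this toy version, with invented frame data, only conveys the principle:

```python
def best_shift(frame_a, frame_b, max_shift=5):
    """Estimate the displacement of frame_b relative to frame_a by testing
    integer shifts and minimizing the mean squared difference over the
    overlapping samples, a 1-D analogue of block matching."""
    best, best_err = 0, float("inf")
    n = len(frame_a)
    for shift in range(-max_shift, max_shift + 1):
        err, count = 0.0, 0
        for i in range(n):
            j = i + shift
            if 0 <= j < n:
                err += (frame_a[i] - frame_b[j]) ** 2
                count += 1
        err /= count
        if err < best_err:
            best, best_err = shift, err
    return best

# A bright "feature" that moves 3 pixels between frames (invented data).
frame1 = [0] * 20 + [5, 9, 5] + [0] * 20
frame2 = [0] * 23 + [5, 9, 5] + [0] * 17
```

Dividing the recovered shift by the frame interval gives an apparent velocity, which is what the Hinode analysis extracts per image region.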
NASA Astrophysics Data System (ADS)
Lagos, Soledad R.; Velis, Danilo R.
2018-02-01
We perform the location of microseismic events generated in hydraulic fracturing monitoring scenarios using two global optimization techniques: Very Fast Simulated Annealing (VFSA) and Particle Swarm Optimization (PSO), and compare them against the classical grid search (GS). To this end, we present an integrated and optimized workflow that concatenates into an automated bash script the different steps that lead to the microseismic events location from raw 3C data. First, we carry out the automatic detection, denoising and identification of the P- and S-waves. Secondly, we estimate their corresponding backazimuths using polarization information, and propose a simple energy-based criterion to automatically decide which is the most reliable estimate. Finally, after taking proper care of the size of the search space using the backazimuth information, we perform the location using the aforementioned algorithms for 2D and 3D usual scenarios of hydraulic fracturing processes. We assess the impact of restricting the search space and show the advantages of using either VFSA or PSO over GS to attain significant speed-ups.
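The classical grid search (GS) baseline against which VFSA and PSO are compared can be sketched as a scan over candidate epicenters minimizing travel-time residuals. This is a simplified 2-D sketch with straight-ray travel times and an invented station geometry, not the paper's workflow:

```python
import math

def travel_time(src, sta, v=3.0):
    """Straight-ray travel time from source to station at velocity v (km/s)."""
    return math.hypot(src[0] - sta[0], src[1] - sta[1]) / v

def grid_search(stations, observed, step=0.5, extent=10.0):
    """Classical grid search: scan candidate locations and keep the one with
    the smallest sum of squared travel-time residuals."""
    best, best_cost = None, float("inf")
    npts = int(extent / step) + 1
    for ix in range(npts):
        for iy in range(npts):
            cand = (ix * step, iy * step)
            cost = sum((travel_time(cand, s) - t) ** 2
                       for s, t in zip(stations, observed))
            if cost < best_cost:
                best, best_cost = cand, cost
    return best

# Synthetic example: true source at (4.0, 6.0), four surface stations.
stations = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0), (10.0, 10.0)]
true_src = (4.0, 6.0)
observed = [travel_time(true_src, s) for s in stations]
loc = grid_search(stations, observed)
```

VFSA and PSO replace the exhaustive double loop with stochastic sampling of the same cost function, which is where the reported speed-ups come from, especially once backazimuth information shrinks the search space.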
Closing the loop in ICU decision support: physiologic event detection, alerts, and documentation.
Norris, P. R.; Dawant, B. M.
2001-01-01
Automated physiologic event detection and alerting is a challenging task in the ICU. Ideally, care providers should be alerted only when events are clinically significant and there is opportunity for corrective action. However, the concepts of clinical significance and opportunity are difficult to define in automated systems, and the effectiveness of alerting algorithms is difficult to measure. This paper describes recent efforts on the Simon project to capture information from ICU care providers about patient state and therapy in response to alerts, in order to assess the value of event definitions and progressively refine alerting algorithms. Event definitions for intracranial pressure and cerebral perfusion pressure were studied by implementing a reliable system to automatically deliver alerts to clinical users' alphanumeric pagers, and to capture associated documentation about patient state and therapy when the alerts occurred. During a 6-month test period in the trauma ICU at Vanderbilt University Medical Center, 530 alerts were detected in 2280 hours of data spanning 14 patients. Clinical users electronically documented 81% of these alerts as they occurred. Retrospectively classifying documentation based on therapeutic actions taken, or reasons why actions were not taken, provided useful information about ways to potentially improve event definitions and enhance system utility. PMID:11825238
Ho, Daniel W H; Sze, Karen M F; Ng, Irene O L
2015-08-28
Viral integration into the human genome upon infection is an important risk factor for various human malignancies. We developed a viral integration site detection tool called Virus-Clip, which makes use of information extracted from soft-clipped sequencing reads to identify the exact positions of the human and virus breakpoints of integration events. With initial read alignment to the virus reference genome and streamlined procedures, Virus-Clip delivers a simple, fast and memory-efficient solution to viral integration site detection. Moreover, it can also automatically annotate the integration events with the corresponding affected human genes. Virus-Clip has been verified using whole-transcriptome sequencing data, and its detection was validated to have satisfactory sensitivity and specificity, with marked improvement in performance compared to existing tools. It is applicable to versatile types of data, including whole-genome sequencing, whole-transcriptome sequencing, and targeted sequencing. Virus-Clip is available at http://web.hku.hk/~dwhho/Virus-Clip.zip.
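The general idea of reading a breakpoint off a soft-clipped alignment can be sketched from the SAM conventions: a leading `S` in the CIGAR string means the clipped bases end at the alignment start, while a trailing `S` places the breakpoint after the reference-consuming operations. This is a generic sketch of soft-clip handling, not Virus-Clip's implementation:

```python
import re

CIGAR_RE = re.compile(r"(\d+)([MIDNSHP=X])")

def softclip_breakpoint(pos, cigar):
    """Locate the reference breakpoint implied by a soft-clipped alignment.

    pos is the 1-based leftmost mapped position (as in SAM). Returns a
    (breakpoint, side) tuple, or None if the read has no terminal soft clip.
    """
    ops = CIGAR_RE.findall(cigar)
    if not ops:
        return None
    if ops[0][1] == "S":
        return pos, "left"
    if ops[-1][1] == "S":
        # M, D, N, =, X consume the reference; I, S, H, P do not.
        ref_len = sum(int(n) for n, op in ops if op in "MDN=X")
        return pos + ref_len, "right"
    return None
```

In a Virus-Clip-style workflow the clipped portion is what maps to the other genome (human or virus), so pairing the clip position with the realigned clip yields both sides of the integration junction.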
Varma, Niraj; Epstein, Andrew E; Schweikert, Robert; Michalski, Justin; Love, Charles J
2016-03-01
The incidence of unscheduled encounters and problem occurrence between ICD implant and first in-person evaluation (IPE) recommended at 12 weeks is unknown. Automatic remote home monitoring (HM) may be useful in this potentially unstable period. ICD patients were randomized 2:1 to HM enabled post-implant (n = 908) or to conventional monitoring (CM; n = 431). Groups were compared between implant and prior to first scheduled IPE for IPE incidence, causes, and actionability (reprogramming, system revision, medication changes) and event detection time. HM and CM patients were similar (mean age 63 years, 72% male, LVEF 29%, primary prevention 73%, DDD 57%). In the post-implant interval assessed (HM 100 ± 21.3 days vs. CM 101 ± 20.8 days, P = 0.54), 85.4% (776/908) HM patients and 87.7% CM (378/431) patients had no cause for IPE (P = 0.31). When IPE occurred, actionability in HM (64/177 [36.2%]) was greater versus CM (15/62 [24.2%], P = 0.12). Actionable items were discovered sooner with HM (P = 0.025). Device reprogramming or lead revision was triggered following 53/177 (29.9%) IPEs in HM versus 9/62 (14.5%) in CM (P = 0.018). Arrhythmia detection was enhanced by HM: 276 atrial and ventricular episodes were detected in 135 follow-ups in contrast to CM (65 episodes at 17 IPEs). More silent arrhythmic episodes were discovered by HM (7.2% vs. 1.5% [P = 0.15]). Since 27/42 (64.3%) IPEs driven by HM alerts were actionable, event notification was a valuable method for problem detection. Importantly, HM did not increase incidence of non-actionable IPEs (P = 0.72). Activation of automatic remote monitoring should be encouraged soon post-ICD implant. © 2015 Wiley Periodicals, Inc.
Complex effusive events at Kilauea as documented by the GOES satellite and remote video cameras
Harris, A.J.L.; Thornber, C.R.
1999-01-01
GOES provides thermal data for all of the Hawaiian volcanoes once every 15 min. We show how volcanic radiance time series produced from this data stream can be used as a simple measure of effusive activity. Two types of radiance trends in these time series can be used to monitor effusive activity: (a) gradual variations in radiance reveal steady flow-field extension and tube development; (b) discrete spikes correlate with short bursts of activity, such as lava fountaining or lava-lake overflows. We are confident that any effusive event covering more than 10,000 m2 of ground in less than 60 min will be unambiguously detectable using this approach. We demonstrate this capability using GOES, video camera and ground-based observational data for the current eruption of Kilauea volcano (Hawai'i). A GOES radiance time series was constructed from 3987 images between 19 June and 12 August 1997. This time series displayed 24 radiance spikes elevated more than two standard deviations above the mean; 19 of these are correlated with video-recorded short-burst effusive events. Less ambiguous events are interpreted, assessed and related to specific volcanic events by simultaneous use of permanently recording video camera data and ground-observer reports. The GOES radiance time series are automatically processed on data reception and made available in near-real-time, so such time series can contribute to three main monitoring functions: (a) automatic alerting of major effusive events; (b) event confirmation and assessment; and (c) establishing effusive event chronology.
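The spike criterion described above, radiance elevated more than two standard deviations above the series mean, is simple enough to sketch directly; the radiance values below are invented:

```python
import statistics

def radiance_spikes(series, k=2.0):
    """Indices where radiance exceeds the series mean by more than k standard
    deviations, mirroring the spike criterion used for the GOES time series."""
    mu = statistics.mean(series)
    sigma = statistics.pstdev(series)
    return [i for i, r in enumerate(series) if r > mu + k * sigma]

# Invented radiance values: steady background with two short bursts.
series = [5.0] * 40 + [25.0] + [5.0] * 30 + [30.0] + [5.0] * 28
spikes = radiance_spikes(series)
```

In operational use each flagged sample would then be checked against the video record and ground reports, as the paper describes, before being attributed to a specific effusive event.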
Hsu, Yi-Fang; Szűcs, Dénes
2012-02-15
Several functional magnetic resonance imaging (fMRI) studies have used neural adaptation paradigms to detect anatomical locations of brain activity related to number processing. However, currently not much is known about the temporal structure of number adaptation. In the present study, we used electroencephalography (EEG) to elucidate the time course of neural events in symbolic number adaptation. The numerical distance of deviants relative to standards was manipulated. In order to avoid perceptual confounds, all levels of deviants consisted of perceptually identical stimuli. Multiple successive numerical distance effects were detected in event-related potentials (ERPs). Analysis of oscillatory activity further showed at least two distinct stages of neural processes involved in the automatic analysis of numerical magnitude, with the earlier effect emerging at around 200 ms and the later effect appearing at around 400 ms. The findings support the hypothesis that numerical magnitude processing involves a succession of cognitive events. Crown Copyright © 2011. Published by Elsevier Inc. All rights reserved.
Automatic classification of seismic events within a regional seismograph network
NASA Astrophysics Data System (ADS)
Tiira, Timo; Kortström, Jari; Uski, Marja
2015-04-01
A fully automatic method for seismic event classification within a sparse regional seismograph network is presented. The tool is based on a supervised pattern recognition technique, the Support Vector Machine (SVM), trained here to distinguish weak local earthquakes from a bulk of human-made or spurious seismic events. The classification rules rely on differences in signal energy distribution between natural and artificial seismic sources. Seismic records are divided into four windows: P, P coda, S, and S coda. For each signal window the short-term average (STA) is computed in 20 narrow frequency bands between 1 and 41 Hz. The resulting 80 discrimination parameters are used as training data for the SVM. The SVM models are calculated for 19 on-line seismic stations in Finland. The event data are compiled mainly from fully automatic event solutions that are manually classified after the automatic location process. The station-specific SVM training events include 11-302 positive (earthquake) and 227-1048 negative (non-earthquake) examples. The best voting rules for combining results from different stations are determined during an independent testing period. Finally, the network processing rules are applied to an independent evaluation period comprising 4681 fully automatic event determinations, of which 98% have been manually identified as explosions or noise and 2% as earthquakes. The SVM method correctly identifies 94% of the non-earthquakes and all of the earthquakes. The results imply that the SVM tool can identify and filter out blasts and spurious events from fully automatic event solutions with a high level of confidence. The tool helps to reduce the workload in manual seismic analysis by leaving only ~5% of the automatic event determinations, i.e. the probable earthquakes, for more detailed seismological analysis. The approach presented is easy to adjust to the requirements of a denser or wider high-frequency network, once enough training examples for building a station-specific data set are available.
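The feature layout (four signal windows times 20 frequency bands, giving 80 parameters) can be sketched with a plain discrete Fourier transform. This is a hedged illustration only: the windows here are equal quarters of the record rather than windows anchored on picked P and S arrivals, the band averaging is simplified, and the SVM itself is omitted:

```python
import cmath

def band_averages(window, n_bands):
    """Mean spectral amplitude in n_bands equal-width frequency bands,
    a stand-in for the per-band short-term averages in the paper."""
    n = len(window)
    half = n // 2
    amps = []
    for k in range(half):
        coef = sum(window[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                   for t in range(n))
        amps.append(abs(coef) / n)
    per_band = half // n_bands
    return [sum(amps[b * per_band:(b + 1) * per_band]) / per_band
            for b in range(n_bands)]

def feature_vector(trace, n_bands=20):
    """Split the record into four equal windows (P, P coda, S, S coda in the
    paper) and concatenate their band averages: 4 x n_bands features."""
    q = len(trace) // 4
    feats = []
    for w in range(4):
        feats.extend(band_averages(trace[w * q:(w + 1) * q], n_bands))
    return feats
```

Vectors of this form, labeled earthquake or non-earthquake, are what the station-specific SVM models are trained on; explosions concentrate energy differently across the windows and bands than earthquakes do.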
NASA Astrophysics Data System (ADS)
Wyer, P.; Zurek, B.
2017-12-01
Extensive additions to the Royal Dutch Meteorological Institute (KNMI) seismic monitoring network over recent years have yielded corresponding gains in detection of low magnitude seismicity induced by production of the Groningen gas field. A review of the weakest events in the seismic catalog demonstrates that waveforms from individual stations in the 30 x 35 km network area overlap sufficiently for normalized analytic envelopes to be constructively stacked without compensation for moveout, detection of individual station triggers or the need for more advanced approaches such as template matching. This observation opens the possibility of updating the historical catalog to current detection levels without having to implement more computationally expensive steps when reprocessing the legacy continuous data. A more consistent long term catalog would better constrain the frequency-size distribution (Gutenberg-Richter relationship) and provide a richer dataset for calibration of geomechanical and seismological models. To test the viability of a direct stacking approach, normalized waveform envelopes are partitioned by station into two discrete RMS stacks. Candidate seismic events are then identified as simultaneous STA/LTA triggers on both stacks. This partitioning has a minor impact on signal, but avoids the majority of false detections otherwise obtained on a single stack. Undesired detection of anthropogenic sources and earthquakes occurring outside the field can be further minimized by tuning the waveform frequency filters and trigger configuration. After minimal optimization, data from as few as 14 legacy stations are sufficient for robust automatic detection of known events approaching ML0 from the recent catalog. Ongoing work will determine residual false detection rates and whether previously unknown past events can be detected with sensitivities comparable to the modern KNMI catalog.
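The direct stacking approach, normalizing per-station envelopes, RMS-stacking them without moveout correction, and requiring simultaneous triggers on two station partitions, can be sketched as follows. This is a minimal illustration with a crude envelope proxy and invented traces, not the KNMI processing chain:

```python
import math

def envelope(trace):
    """Crude analytic-envelope proxy: absolute value smoothed over 3 samples."""
    a = [abs(x) for x in trace]
    return [sum(a[max(0, i - 1):i + 2]) / len(a[max(0, i - 1):i + 2])
            for i in range(len(a))]

def normalized_stack(traces):
    """RMS stack of per-station envelopes, each normalized to unit maximum."""
    envs = []
    for tr in traces:
        e = envelope(tr)
        m = max(e) or 1.0
        envs.append([v / m for v in e])
    n = len(traces)
    return [math.sqrt(sum(e[i] ** 2 for e in envs) / n)
            for i in range(len(envs[0]))]

def coincident_trigger(stack_a, stack_b, thresh):
    """Candidate events: samples where both partitioned stacks exceed thresh."""
    return [i for i in range(len(stack_a))
            if stack_a[i] > thresh and stack_b[i] > thresh]

# Four invented station traces sharing a burst at samples 30-33.
traces = [[0.05] * 30 + [1.0] * 4 + [0.05] * 26 for _ in range(4)]
stack_a = normalized_stack(traces[:2])
stack_b = normalized_stack(traces[2:])
trigs = coincident_trigger(stack_a, stack_b, 0.5)
```

The two-partition coincidence requirement plays the role described in the abstract: a signal common to the network survives on both stacks, while a glitch on one station inflates only one of them.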
An Energy-Efficient Multi-Tier Architecture for Fall Detection Using Smartphones.
Guvensan, M Amac; Kansiz, A Oguz; Camgoz, N Cihan; Turkmen, H Irem; Yavuz, A Gokhan; Karsligil, M Elif
2017-06-23
Automatic detection of fall events is vital to providing fast medical assistance to the casualty, particularly when the injury causes loss of consciousness. Optimization of the energy consumption of mobile applications, especially those which run 24/7 in the background, is essential for longer use of smartphones. In order to improve energy-efficiency without compromising on fall detection performance, we propose a novel 3-tier architecture that combines simple thresholding methods with machine learning algorithms. The proposed method is implemented in a mobile application, called uSurvive, for Android smartphones. It runs as a background service, monitors the activities of a person in daily life, and automatically sends a notification to the appropriate authorities and/or user-defined contacts when it detects a fall. The performance of the proposed method was evaluated in terms of fall detection performance and energy consumption. Real-life performance tests conducted on two different models of smartphone demonstrate that our 3-tier architecture with feature reduction could save up to 62% of energy compared to machine-learning-only solutions. In addition to this energy saving, the hybrid method has 93% accuracy, which is superior to thresholding methods and better than machine-learning-only solutions.
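The tiered idea, running cheap threshold tests first and reserving an expensive classifier for the few windows that pass, can be sketched on accelerometer magnitudes. The thresholds, sample data, and tier logic below are invented for illustration and are not uSurvive's actual rules; a real third tier would invoke a trained model:

```python
def magnitude(ax, ay, az):
    """Total acceleration magnitude from a 3-axis accelerometer sample (in g)."""
    return (ax * ax + ay * ay + az * az) ** 0.5

def tier1_impact(sample, impact_g=2.5):
    """Tier 1: cheap threshold test; most non-fall activity stops here."""
    return magnitude(*sample) > impact_g

def tier2_inactivity(post_impact, still_g=1.2):
    """Tier 2: after an impact, check for a period of near-stillness."""
    return all(magnitude(*s) < still_g for s in post_impact)

def detect_fall(samples, window=5):
    """Run tiers in order; a trained classifier (tier 3) would only see the
    few windows that survive both cheap tiers, which is the energy saving."""
    for i, s in enumerate(samples):
        post = samples[i + 1:i + 1 + window]
        if len(post) == window and tier1_impact(s) and tier2_inactivity(post):
            return True
    return False

# Invented (ax, ay, az) streams: ordinary walking vs. an impact then stillness.
walk = [(0.0, 0.0, 1.0), (0.3, 0.2, 1.1)] * 10
fall = [(0.0, 0.0, 1.0)] * 3 + [(2.0, 2.0, 1.5)] + [(0.0, 0.0, 1.0)] * 6
```

Because tier 1 is a single comparison per sample, the device spends almost no energy on the vast majority of activity that never looks like an impact.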
Centralized Alert-Processing and Asset Planning for Sensorwebs
NASA Technical Reports Server (NTRS)
Castano, Rebecca; Chien, Steve A.; Rabideau, Gregg R.; Tang, Benyang
2010-01-01
A software program provides a Sensorweb architecture for alert-processing, event detection, asset allocation and planning, and visualization. It automatically tasks and re-tasks various types of assets, such as satellites and robotic vehicles, in response to alerts (fire, weather) extracted from various data sources, including low-level Webcam data. JPL has adapted considerable Sensorweb infrastructure that had been previously applied to NASA Earth Science applications. This NASA Earth Science Sensorweb has been in operational use since 2003 and has proven the reliability of the Sensorweb technologies for robust event detection and autonomous response using space and ground assets. Unique features of the software include flexibility across a range of detection and tasking methods, including those that require aggregation of data over spatial and temporal ranges; generality of the response structure to represent and implement a range of response campaigns; and the ability to respond rapidly.
46 CFR 161.002-9 - Automatic fire detecting system, power supply.
Code of Federal Regulations, 2011 CFR
2011-10-01
... 46 Shipping 6 2011-10-01 2011-10-01 false Automatic fire detecting system, power supply. 161.002-9 Section 161.002-9 Shipping COAST GUARD, DEPARTMENT OF HOMELAND SECURITY (CONTINUED) EQUIPMENT...-9 Automatic fire detecting system, power supply. The power supply for an automatic fire detecting...
Alvarado-Rojas, Catalina; Le Van Quyen, Michel; Valderrama, Mario
2016-01-01
High Frequency Oscillations (HFOs) in the brain have been associated with different physiological and pathological processes. In epilepsy, HFOs might reflect a mechanism of epileptic phenomena, serving as a biomarker of epileptogenesis and epileptogenicity. Despite the valuable information provided by HFOs, their correct identification is a challenging task. A comprehensive application, RIPPLELAB, was developed to facilitate the analysis of HFOs. RIPPLELAB provides a wide range of tools for manual and automatic detection of HFOs and for visual validation, all of them accessible from an intuitive graphical user interface. Four methods for automated detection—as well as several options for visualization and validation of detected events—were implemented and integrated in the application. Analysis of multiple files and channels is possible, and new options can be added by users. All features and capabilities implemented in RIPPLELAB for automatic detection were tested through the analysis of simulated signals and intracranial EEG recordings from epileptic patients (n = 16; 3,471 analyzed hours). Visual validation was also tested, and detected events were classified into different categories. Unlike other available software packages for EEG analysis, RIPPLELAB uniquely provides the appropriate graphical and algorithmic environment for HFO detection (visual and automatic) and validation, in such a way that the power of elaborate detection methods is available to a wide range of users (experts and non-experts) through the use of this application. We believe that this open-source tool will facilitate and promote collaboration between clinical and research centers working in the HFO field. The tool is available under public license and is accessible through a dedicated web site. PMID:27341033
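Energy-based automatic HFO detectors of the kind RIPPLELAB implements typically band-pass the signal and flag windows whose energy far exceeds a baseline. The sketch below assumes the input is already band-passed and uses a median-referenced RMS threshold; the window length, factor, and signal are invented, and real detectors differ in their baselines and duration criteria:

```python
def moving_rms(x, win):
    """Root-mean-square of the signal over a sliding window."""
    return [(sum(v * v for v in x[i:i + win]) / win) ** 0.5
            for i in range(len(x) - win + 1)]

def detect_hfo(filtered, win=10, k=5.0):
    """Flag window positions whose RMS exceeds k times the median RMS of the
    band-passed trace, one common energy-based HFO criterion."""
    rms = moving_rms(filtered, win)
    baseline = sorted(rms)[len(rms) // 2]
    return [i for i, r in enumerate(rms) if r > k * baseline]

# Invented band-passed trace: low-amplitude oscillation with a brief
# high-amplitude burst at samples 100-119.
filtered = [0.01 * ((-1) ** i) for i in range(200)]
for i in range(100, 120):
    filtered[i] = 0.5 * ((-1) ** i)
events = detect_hfo(filtered)
```

In a RIPPLELAB-style workflow, windows flagged this way would then be presented for visual validation rather than accepted automatically.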
Semiautomated tremor detection using a combined cross-correlation and neural network approach
Horstmann, Tobias; Harrington, Rebecca M.; Cochran, Elizabeth S.
2013-01-01
Despite observations of tectonic tremor in many locations around the globe, the emergent phase arrivals, low‒amplitude waveforms, and variable event durations make automatic detection a nontrivial task. In this study, we employ a new method to identify tremor in large data sets using a semiautomated technique. The method first reduces the data volume with an envelope cross‒correlation technique, followed by a Self‒Organizing Map (SOM) algorithm to identify and classify event types. The method detects tremor in an automated fashion after calibrating for a specific data set, hence we refer to it as being “semiautomated”. We apply the semiautomated detection algorithm to a newly acquired data set of waveforms from a temporary deployment of 13 seismometers near Cholame, California, from May 2010 to July 2011. We manually identify tremor events in a 3 week long test data set and compare to the SOM output and find a detection accuracy of 79.5%. Detection accuracy improves with increasing signal‒to‒noise ratios and number of available stations. We find detection completeness of 96% for tremor events with signal‒to‒noise ratios above 3 and optimal results when data from at least 10 stations are available. We compare the SOM algorithm to the envelope correlation method of Wech and Creager and find the SOM performs significantly better, at least for the data set examined here. Using the SOM algorithm, we detect 2606 tremor events with a cumulative signal duration of nearly 55 h during the 13 month deployment. Overall, the SOM algorithm is shown to be a flexible new method that utilizes characteristics of the waveforms to identify tremor from noise or other seismic signals.
NASA Astrophysics Data System (ADS)
Given, J. W.; Bobrov, D.; Kitov, I. O.; Spiliopoulos, S.
2012-12-01
The Technical Secretariat (TS) of the Comprehensive Nuclear Test-Ban Treaty Organization (CTBTO) will carry out the verification of the CTBT, which obligates each State Party not to carry out any nuclear explosion, regardless of its size and purpose. The International Data Centre (IDC) receives, collects, processes, analyses, reports on, and archives data from the International Monitoring System (IMS). The IDC is responsible for automatic and interactive processing of the IMS data and for standard IDC products. The IDC is also required by the Treaty to progressively enhance its technical capabilities. In this study, we use waveform cross correlation as a technique to improve the detection capability and reliability of the seismic part of the IMS. To quantitatively estimate the gain cross correlation provides over the current sensitivity of automatic and interactive processing, we compared two seismic bulletins built for the North Atlantic (NA): the Reviewed Event Bulletin (REB) issued by the International Data Centre and the cross-correlation Standard Event List (XSEL). The NA is an isolated region with earthquakes concentrated around the Mid-Atlantic Ridge, which avoids the influence of adjacent seismic regions on the final bulletins. We have cross correlated waveforms from ~1500 events reported in the REB since 2009. The resulting cross-correlation matrix revealed the best candidates for master events. High-quality signals (SNR > 5.0) recorded at eighteen array stations from approximately 50 master events evenly distributed over the seismically active zone in the NA were selected as templates. These templates have been used for continuous calculation of cross-correlation coefficients since 2011. All detections obtained by cross correlation are then used to build events according to the current IDC definition, i.e., at least three primary stations with accurate arrival times, azimuth, and slowness estimates. The qualified event hypotheses populate the XSEL.
To confirm the XSEL events not found in the REB, a portion of the newly built events was reviewed interactively by experienced analysts. The influence of all defining parameters (cross-correlation coefficient threshold and SNR, F-statistics and fk-analysis, azimuth and slowness estimates, relative magnitude, etc.) on the final XSEL has been studied by comparing the relevant frequency distributions for all detections with those for the detections associated with XSEL events. These distributions are also station and master dependent. This allows estimating thresholds for all defining parameters, which may be adjusted to balance the rates of missed events and false alarms.
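The master-event template matching at the heart of this pipeline can be sketched as a sliding normalized cross-correlation (an illustrative numpy reconstruction; the threshold value and the `detect` helper are assumptions, not IDC parameters):

```python
import numpy as np

def sliding_ncc(data, template):
    """Normalized cross-correlation of a master-event template against a
    continuous data stream; values near 1 flag likely repeats of the master."""
    n = len(template)
    t = (template - template.mean()) / (template.std() * n)
    out = np.empty(len(data) - n + 1)
    for i in range(len(out)):
        w = data[i:i + n]
        out[i] = np.dot(t, (w - w.mean()) / (w.std() + 1e-12))
    return out

def detect(data, template, threshold=0.7):
    """Sample indices where the correlation coefficient exceeds a threshold."""
    return np.where(sliding_ncc(data, template) >= threshold)[0]
```

Each threshold crossing is a candidate detection; in the operational setting these would then be associated across at least three primary stations before an event hypothesis is formed.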
Neural network pattern recognition of lingual-palatal pressure for automated detection of swallow.
Hadley, Aaron J; Krival, Kate R; Ridgel, Angela L; Hahn, Elizabeth C; Tyler, Dustin J
2015-04-01
We describe a novel device and method for real-time measurement of lingual-palatal pressure and automatic identification of the oral transfer phase of deglutition. Clinical measurement of the oral transport phase of swallowing is a complicated process requiring either placement of obstructive sensors or sitting within a fluoroscope or articulograph for recording. Existing detection algorithms distinguish oral events with EMG, sound, and pressure signals from the head and neck, but are imprecise and frequently result in false detections. We placed seven pressure sensors on a molded mouthpiece fitting over the upper teeth and hard palate and recorded pressure during a variety of swallow and non-swallow activities. Pressure measures and swallow times from 12 healthy subjects and 7 subjects with Parkinson's disease provided training data for a time-delay artificial neural network to categorize the recordings as swallow or non-swallow events. User-specific neural networks properly categorized 96% of swallow and non-swallow events, while a generalized population-trained network properly categorized 93% of swallow and non-swallow events across all recordings. Lingual-palatal pressure signals are sufficient to selectively and specifically recognize the initiation of swallowing in healthy and dysphagic patients.
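The delayed-input representation that a time-delay network consumes can be sketched as follows (illustrative only; the sensor count and lag depth are assumptions, not the device's values):

```python
import numpy as np

def time_delay_features(frames, n_lags=5):
    """Stack the current and previous multi-sensor pressure frames into one
    feature vector: the delayed-input representation that a time-delay
    neural network classifies as swallow vs. non-swallow."""
    n_frames, n_sensors = frames.shape
    out = np.empty((n_frames - n_lags + 1, n_sensors * n_lags))
    for t in range(n_lags - 1, n_frames):
        # Feature row = the last n_lags frames, flattened in time order.
        out[t - n_lags + 1] = frames[t - n_lags + 1:t + 1].ravel()
    return out
```

Each row then feeds a classifier, so the network sees a short sliding history of all sensors rather than a single instantaneous pressure frame.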
ADJUST: An automatic EEG artifact detector based on the joint use of spatial and temporal features.
Mognon, Andrea; Jovicich, Jorge; Bruzzone, Lorenzo; Buiatti, Marco
2011-02-01
A successful method for removing artifacts from electroencephalogram (EEG) recordings is Independent Component Analysis (ICA), but its implementation remains largely user-dependent. Here, we propose a completely automatic algorithm (ADJUST) that identifies artifacted independent components by combining stereotyped artifact-specific spatial and temporal features. Features were optimized to capture blinks, eye movements, and generic discontinuities on a feature selection dataset. Validation on a totally different EEG dataset shows that (1) ADJUST's classification of independent components largely matches a manual one by experts (agreement on 95.2% of the data variance), and (2) removal of the artifacted components detected by ADJUST leads to a neat reconstruction of visual and auditory event-related potentials from heavily artifacted data. These results demonstrate that ADJUST provides a fast, efficient, and automatic way to use ICA for artifact removal. Copyright © 2010 Society for Psychophysiological Research.
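One temporal feature of the kind ADJUST combines, excess kurtosis of an independent component's time course, can be sketched as below (a deliberate simplification: the threshold and the single-feature decision rule are assumptions, whereas ADJUST itself joins several spatial and temporal features):

```python
import numpy as np

def excess_kurtosis(x):
    """Excess kurtosis of a time course; blink-like components show rare,
    large deflections and hence strongly positive values."""
    z = (x - x.mean()) / x.std()
    return float((z ** 4).mean() - 3.0)

def flag_artifact_components(components, threshold=2.0):
    """Flag independent components whose temporal kurtosis exceeds a
    threshold (an illustrative stand-in for ADJUST's joint criteria)."""
    return [i for i, c in enumerate(components) if excess_kurtosis(c) > threshold]
```

A Gaussian-looking neural component sits near zero excess kurtosis, while a component dominated by occasional blink deflections scores far above the threshold.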
Active suppression of distractors that match the contents of visual working memory.
Sawaki, Risa; Luck, Steven J
2011-08-01
The biased competition theory proposes that items matching the contents of visual working memory will automatically have an advantage in the competition for attention. However, evidence for an automatic effect has been mixed, perhaps because the memory-driven attentional bias can be overcome by top-down suppression. To test this hypothesis, the Pd component of the event-related potential waveform was used as a marker of attentional suppression. While observers maintained a color in working memory, task-irrelevant probe arrays were presented that contained an item matching the color being held in memory. We found that the memory-matching probe elicited a Pd component, indicating that it was being actively suppressed. This result suggests that sensory inputs matching the information being held in visual working memory are automatically detected and generate an "attend-to-me" signal, but this signal can be overridden by an active suppression mechanism to prevent the actual capture of attention.
Track-based event recognition in a realistic crowded environment
NASA Astrophysics Data System (ADS)
van Huis, Jasper R.; Bouma, Henri; Baan, Jan; Burghouts, Gertjan J.; Eendebak, Pieter T.; den Hollander, Richard J. M.; Dijk, Judith; van Rest, Jeroen H.
2014-10-01
Automatic detection of abnormal behavior in CCTV cameras is important to improve the security in crowded environments, such as shopping malls, airports and railway stations. This behavior can be characterized at different time scales, e.g., by small-scale subtle and obvious actions or by large-scale walking patterns and interactions between people. For example, pickpocketing can be recognized by the actual snatch (small scale), by the thief following the victim, or by his interaction with an accomplice before and after the incident (longer time scale). This paper focuses on event recognition by detecting large-scale track-based patterns. Our event recognition method consists of several steps: pedestrian detection, object tracking, track-based feature computation and rule-based event classification. In the experiment, we focused on single track actions (walk, run, loiter, stop, turn) and track interactions (pass, meet, merge, split). The experiment includes a controlled setup, where 10 actors perform these actions. The method is also applied to all tracks that are generated in a crowded shopping mall in a selected time frame. The results show that most of the actions can be detected reliably (on average 90%) at a low false positive rate (1.1%), and that the interactions obtain lower detection rates (70% at 0.3% FP). This method may become one of the components that assists operators to find threatening behavior and enrich the selection of videos that are to be observed.
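The track-based feature computation and rule-based classification steps can be sketched as follows (a simplified illustration; the features and threshold values are assumptions, not the paper's):

```python
import numpy as np

def track_features(xy, dt=1.0):
    """Per-track features from a sequence of (x, y) positions in metres."""
    steps = np.diff(xy, axis=0)
    speeds = np.linalg.norm(steps, axis=1) / dt
    return {"mean_speed": float(speeds.mean()),
            "path_len": float(speeds.sum() * dt),
            "net_disp": float(np.linalg.norm(xy[-1] - xy[0]))}

def classify_track(f, run_speed=3.0, walk_speed=0.5):
    """Rule-based single-track labels; a loiterer covers much more path
    than net displacement at low speed."""
    if f["mean_speed"] >= run_speed:
        return "run"
    if f["mean_speed"] >= walk_speed:
        return "walk"
    if f["path_len"] > 3 * max(f["net_disp"], 1e-9):
        return "loiter"
    return "stop"
```

Interactions such as meet or split would be handled analogously with pairwise features (relative distance and heading over time) on top of these single-track rules.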
Discovering Event Structure in Continuous Narrative Perception and Memory.
Baldassano, Christopher; Chen, Janice; Zadbood, Asieh; Pillow, Jonathan W; Hasson, Uri; Norman, Kenneth A
2017-08-02
During realistic, continuous perception, humans automatically segment experiences into discrete events. Using a novel model of cortical event dynamics, we investigate how cortical structures generate event representations during narrative perception and how these events are stored to and retrieved from memory. Our data-driven approach allows us to detect event boundaries as shifts between stable patterns of brain activity without relying on stimulus annotations and reveals a nested hierarchy from short events in sensory regions to long events in high-order areas (including angular gyrus and posterior medial cortex), which represent abstract, multimodal situation models. High-order event boundaries are coupled to increases in hippocampal activity, which predict pattern reinstatement during later free recall. These areas also show evidence of anticipatory reinstatement as subjects listen to a familiar narrative. Based on these results, we propose that brain activity is naturally structured into nested events, which form the basis of long-term memory representations. Copyright © 2017 Elsevier Inc. All rights reserved.
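The core idea, detecting event boundaries as shifts between stable patterns of activity without stimulus annotations, can be sketched with a simple correlation-dip rule (a deliberately reduced stand-in for the paper's model; the threshold is an assumption):

```python
import numpy as np

def event_boundaries(patterns, threshold=0.6):
    """Mark a boundary wherever the spatial correlation between consecutive
    activity patterns drops below the threshold, i.e. where one stable
    pattern gives way to another."""
    bounds = []
    for t in range(1, len(patterns)):
        r = np.corrcoef(patterns[t - 1], patterns[t])[0, 1]
        if r < threshold:
            bounds.append(t)
    return bounds
```

Applying the same rule with region-specific thresholds or timescales would yield the nested hierarchy described above, with short events in sensory regions and longer events in high-order areas.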
Autonomous software: Myth or magic?
NASA Astrophysics Data System (ADS)
Allan, A.; Naylor, T.; Saunders, E. S.
2008-03-01
We discuss work by the eSTAR project which demonstrates a fully closed loop autonomous system for the follow up of possible micro-lensing anomalies. Not only are the initial micro-lensing detections followed up in real time, but ongoing events are prioritised and continually monitored, with the returned data being analysed automatically. If the "smart software" running the observing campaign detects a planet-like anomaly, further follow-up will be scheduled autonomously and other telescopes and telescope networks alerted to the possible planetary detection. We further discuss the implications of this, and how such projects can be used to build more general autonomous observing and control systems.
South-Central Tibetan Seismicity from HiCLIMB Seismic Array Data
NASA Astrophysics Data System (ADS)
Carpenter, S.; Nabelek, J.; Braunmiller, J.
2010-12-01
The HiCLIMB broadband passive seismic experiment (2002-2005) operated 233 sites along an 800-km-long north-south array extending from the Himalayan foreland into the Central Tibetan Plateau and a flanking 350x350 km lateral array in southern Tibet and eastern Nepal. We use data from the experiment's second phase (June 2004 to August 2005), when stations operated in Tibet, to locate earthquakes in south-central Tibet, a region with no permanent seismic network and largely unknown seismicity. We used the Antelope software for automatic detection and arrival time picking, event-arrival association and event location. Requiring a low detection and event association threshold initially resulted in ~110,000 declared events. The large database size rendered manual inspection infeasible, so we developed automated post-processing modules to weed out spurious detections and erroneous phase and event associations, which stemmed, e.g., from multiple coincident earthquakes within the array or misplaced seismicity from the great 2004 Sumatra earthquake. The resulting database contains ~32,000 events within 5° distance from the closest station. We consider ~7,600 events defined by more than 30 P and S arrivals well located and discuss them here. Seismicity in the subset correlates well with mapped faults and structures seen on satellite imagery, attesting to high location quality. This is confirmed by non-systematic, kilometer-scale differences between automatic and manual locations for selected events. Seismicity in south-central Tibet is intense north of the Yarlung-Tsangpo Suture. Almost 90% of events occurred in the Lhasa Terrane, mainly along north-south trending rifts. Vigorous activity (>4,800 events) accompanied two M>6 earthquakes in the Payang Basin (84°E), ~100 km west of the linear array.
The Tangra-Yum Co (86.5°E) and Pumqu-Xianza (88°E) rifts were very active (~1,000 events) without dominant main shocks, indicating swarm-like behavior possibly related to shallow magmatic or geothermal activity. Seismicity in the Qiangtang Terrane accounts for less than 10% of activity; it is distributed and, except for the Yibuk-Caka Rift (87°E), difficult to associate with known structures. The lower seismicity may be apparent, simply reflecting the larger distance to the array. Fewer than 5% of events occurred south of the Yarlung-Tsangpo Suture in the Tethyan Himalaya, the only region where, in addition to shallow seismicity, a significant number of deep (mantle) events was located. Hypocenter depths, particularly for shallow events, are usually not well constrained due to the array geometry and large distances to the closest sites. The nature of deep events inside the array, though, is resolved.
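The criterion used above to select the well-located subset (events defined by more than 30 P and S arrivals) amounts to a simple post-processing filter; a sketch (the event fields `n_p` and `n_s` are assumed names, not Antelope schema fields):

```python
def well_located(events, min_arrivals=30):
    """Keep only events defined by more than `min_arrivals` P and S
    arrivals combined; one of the automated weeding steps applied after
    low-threshold detection."""
    return [e for e in events if e["n_p"] + e["n_s"] > min_arrivals]
```

Filters of this kind are what reduced the ~110,000 initial declarations to the ~7,600 events discussed in the abstract.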
NASA Astrophysics Data System (ADS)
Pötzi, W.; Veronig, A. M.; Temmer, M.
2018-06-01
In the framework of the Space Situational Awareness program of the European Space Agency (ESA/SSA), an automatic flare detection system was developed at Kanzelhöhe Observatory (KSO). The system has been in operation since mid-2013. The event detection algorithm was upgraded in September 2017, and all data back to 2014 were reprocessed using the new algorithm. In order to evaluate both algorithms, we apply verification measures that are commonly used for forecast validation. To overcome the problem of rare events, which biases the verification measures, we introduce a new event-based method: we divide the timeline of the Hα observations into positive events (flaring periods) and negative events (quiet periods), independent of the length of each event. In total, 329 positive and negative events were detected between 2014 and 2016. The hit rate for the new algorithm reached 96% (just five events were missed), with a false-alarm ratio of 17%. This is a significant improvement over the original system, which had a hit rate of 85% and a false-alarm ratio of 33%. The true skill score and the Heidke skill score both reach values of 0.8 for the new algorithm; originally, they were at 0.5. The mean flare positions are accurate to within ±1 heliographic degree for both algorithms, and the peak times improve from a mean difference of 1.7 ± 2.9 minutes to 1.3 ± 2.3 minutes. The flare start times, which the original algorithm had determined systematically late by about 3 minutes, now match the visual inspection to within -0.47 ± 4.10 minutes.
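The verification measures quoted above follow directly from a 2x2 contingency table of detections versus observed events; a sketch (the example counts in the test are synthetic, not the KSO data):

```python
def verification_scores(hits, misses, false_alarms, correct_negatives):
    """Standard event-based verification measures from a 2x2 contingency
    table: probability of detection (hit rate), false-alarm ratio,
    true skill statistic, and Heidke skill score."""
    h, m, f, c = hits, misses, false_alarms, correct_negatives
    n = h + m + f + c
    pod = h / (h + m)                       # hit rate
    far = f / (h + f)                       # false-alarm ratio
    tss = pod - f / (f + c)                 # true skill statistic
    expected = ((h + m) * (h + f) + (c + m) * (c + f)) / n
    hss = (h + c - expected) / (n - expected)  # Heidke skill score
    return pod, far, tss, hss
```

Because both skill scores compare against chance-level agreement, they stay informative even when positive events are rare, which is exactly the bias the event-based method above is designed to counter.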
Post-processing of seismic parameter data based on valid seismic event determination
McEvilly, Thomas V.
1985-01-01
An automated seismic processing system and method are disclosed, including an array of CMOS microprocessors for unattended battery-powered processing of a multi-station network. According to a characterizing feature of the invention, each channel of the network is independently operable to automatically detect, measure times and amplitudes, and compute and fit Fast Fourier transforms (FFTs) for both P- and S-waves on analog seismic data after it has been sampled at a given rate. The measured parameter data from each channel are then reviewed for event validity by a central controlling microprocessor and, if determined by preset criteria to constitute a valid event, the parameter data are passed to an analysis computer for calculation of hypocenter location, running b-values, source parameters, event count, P-wave polarities, moment-tensor inversion, and Vp/Vs ratios. The in-field real-time analysis of data maximizes the efficiency of microearthquake surveys, allowing flexibility in experimental procedures with a minimum of traditional labor-intensive postprocessing. A unique consequence of the system is that none of the original data (i.e., the sensor analog output signals) are necessarily saved after computation; rather, the numerical parameters generated by the automatic analysis are the sole output of the automated seismic processor.
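The central controller's event-validity review can be sketched as a coincidence test across channels (illustrative only; the channel count and time window are assumptions, not the patent's preset criteria):

```python
def valid_event(channel_trigger_times, min_channels=4, window=5.0):
    """Declare a valid event only if at least `min_channels` channels
    trigger within `window` seconds of each other; otherwise the
    per-channel parameter data are discarded as spurious."""
    times = sorted(channel_trigger_times)
    for i in range(len(times) - min_channels + 1):
        if times[i + min_channels - 1] - times[i] <= window:
            return True
    return False
```

Only parameter sets passing such a gate would be forwarded to the analysis computer for location and source-parameter calculation.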
Automatic attention to emotional stimuli: neural correlates.
Carretié, Luis; Hinojosa, José A; Martín-Loeches, Manuel; Mercado, Francisco; Tapia, Manuel
2004-08-01
We investigated the capability of emotional and nonemotional visual stimulation to capture automatic attention, an aspect of the interaction between cognitive and emotional processes that has received scant attention from researchers. Event-related potentials were recorded from 37 subjects using a 60-electrode array, and were submitted to temporal and spatial principal component analyses to detect and quantify the main components, and to source localization software (LORETA) to determine their spatial origin. Stimuli capturing automatic attention were of three types: emotionally positive, emotionally negative, and nonemotional pictures. Results suggest that initially (P1: 105 msec after stimulus), automatic attention is captured by negative pictures, and not by positive or nonemotional ones. Later (P2: 180 msec), automatic attention remains captured by negative pictures, but also by positive ones. Finally (N2: 240 msec), attention is captured only by positive and nonemotional stimuli. Anatomically, this sequence is characterized by decreasing activation of the visual association cortex (VAC) and by the growing involvement, from dorsal to ventral areas, of the anterior cingulate cortex (ACC). Analyses suggest that the ACC and not the VAC is responsible for experimental effects described above. Intensity, latency, and location of neural activity related to automatic attention thus depend clearly on the stimulus emotional content and on its associated biological importance. Copyright 2004 Wiley-Liss, Inc.
A new moonquake catalog from Apollo 17 geophone data
NASA Astrophysics Data System (ADS)
Dimech, Jesse-Lee; Knapmeyer-Endrun, Brigitte; Weber, Renee
2017-04-01
New lunar seismic events have been detected on geophone data from the Apollo 17 Lunar Seismic Profile Experiment (LSPE). This dataset is already known to contain an abundance of thermal seismic events, and potentially some meteorite impacts, but prior to this study only 26 days of LSPE "listening mode" data had been analysed. In this new analysis, additional listening mode data collected between August 1976 and April 1977 are incorporated. To the authors' knowledge, these 8 months of data have not yet been used to detect seismic moonquake events. The geophones in question are situated adjacent to the Apollo 17 site in the Taurus-Littrow valley, about 5.5 km east of Lee-Lincoln scarp, and between the North and South Massifs. Any of these features are potential seismic sources. We have used an event-detection and classification technique based on Hidden Markov Models to automatically detect and categorize seismic signals, in order to objectively generate a seismic event catalog. Currently, 2.5 months of the 8-month listening mode dataset have been processed, totaling 14,338 detections. Of these, 672 detections (classification "n1") have a sharp onset with a steep risetime, suggesting they occur close to the recording geophone. These events almost all occur in association with lunar sunrise over the span of 1-2 days. One possibility is that these events originate from the nearby Apollo 17 lunar lander due to rapid heating at sunrise. A further 10,004 detections (classification "d1") show strong diurnal periodicity, with detections increasing during the lunar day and reaching a peak at sunset, and therefore probably represent thermal events from the lunar regolith immediately surrounding the Apollo 17 landing site. The final 3662 detections (classification "d2") have emergent onsets and relatively long durations. These detections have peaks associated with lunar sunrise and sunset, but also sometimes have peaks at seemingly random times.
Their source mechanism has not yet been investigated. It is possible that many of these are misclassified d1/n1 events, and further QC work needs to be undertaken. But it is also possible that many represent more distant thermal moonquakes, e.g., from the North and South Massifs, or even the ridge adjacent to the Lee-Lincoln scarp. The unknown event spikes will be the subject of closer inspection once the HMM technique has been refined.
Automatic behavior sensing for a bomb-detecting dog
NASA Astrophysics Data System (ADS)
Nguyen, Hoa G.; Nans, Adam; Talke, Kurt; Candela, Paul; Everett, H. R.
2015-05-01
Bomb-detecting dogs are trained to detect explosives through their sense of smell and often perform a specific behavior to indicate a possible bomb detection. This behavior is noticed by the dog handler, who confirms the probable explosives, determines the location, and forwards the information to an explosive ordnance disposal (EOD) team. To improve the speed and accuracy of this process and better integrate it with the EOD team's robotic explosive disposal operation, SPAWAR Systems Center Pacific has designed and prototyped an electronic dog collar that automatically tracks the dog's location and attitude, detects the indicative behavior, and records the data. To account for the differences between dogs, a 5-minute training routine can be executed before the mission to establish initial values for the k-means clustering algorithm that classifies a specific dog's behavior. The recorded data include the GPS location of the suspected bomb, the path the dog took to approach this location, and a video clip covering the detection event. The dog handler reviews and confirms the data before it is packaged up and forwarded on to the EOD team. The EOD team uses the video clip to better identify the type of bomb and for awareness of the surrounding environment before they arrive at the scene. Before the robotic neutralization operation commences at the site, the location and path data (which are supplied in a format understandable by the next-generation EOD robots, the Advanced EOD Robotic System) can be loaded into the robotic controller to automatically guide the robot to the bomb site. This paper describes the project with emphasis on the dog-collar hardware, behavior-classification software, and feasibility testing.
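The per-dog calibration step with k-means clustering can be sketched with a plain Lloyd's-algorithm implementation (illustrative only; the two-dimensional features and cluster count are assumptions, not the collar's actual feature set):

```python
import numpy as np

def kmeans(X, k, iters=50, seed=0):
    """Plain Lloyd's k-means: alternate nearest-center assignment and
    center updates; the training routine would fit this on a specific
    dog's motion features to establish its behaviour clusters."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return centers, labels

def classify(sample, centers):
    """Assign a new feature sample to its nearest behaviour cluster."""
    return int(np.argmin(((centers - sample) ** 2).sum(-1)))
```

At mission time, new sensor frames are classified against the calibrated centers, and frames falling into the "indication" cluster trigger the recording of location, path, and video.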
Matsumoto, Kayo; Hirayama, Chifumi; Sakuma, Yoko; Itoi, Yoichi; Sunadori, Asami; Kitamura, Junko; Nakahashi, Takeshi; Sugawara, Tamie; Ohkusa, Yasushi
2016-01-01
Objectives: Detecting outbreaks early and activating countermeasures based on such information is extremely important for infection control at childcare facilities. The Sumida ward began operating the Nursery School Absenteeism Surveillance System (NSASSy) in August 2013, and has since conducted real-time monitoring at nursery schools. The Public Health Center can detect outbreaks early and support appropriate intervention. This paper describes the experiences of the Sumida Public Health Center related to early detection and intervention since the initiation of the system. Methods: We investigated infectious disease outbreaks detected at 62 nursery schools in the Sumida ward, which were equipped with NSASSy, from early November 2013 through late March 2015. We classified the information sources of the detected outbreaks and the responses of the Public Health Center. The sources were (1) direct contact from nursery schools, (2) messages from public officers with jurisdiction over nursery schools, (3) automatic detection by NSASSy, and (4) manual detection by Public Health Center officers using NSASSy. The responses made by the health center were described and classified into 11 categories, including verification of the outbreak and advice for caregivers. Results: The numbers of outbreaks detected by the four information sources were 0, 25, 15, and 7 events, respectively, during the first 5 months after NSASSy began, and 5, 7, 53, and 25 events, respectively, during the subsequent 12 months. Detection via NSASSy (automatic or manual) thus accounted for 47% of detected outbreaks during the first 5 months and 87% during the following 12 months. The responses primarily consisted of confirming the situation and offering advice to caregivers. Conclusion: The Sumida Public Health Center achieved early detection through automatic or manual detection with NSASSy. The system has recently become an important detection resource and has contributed greatly to early detection.
Because the Public Health Center can use it to achieve real-time monitoring, they can recognize emergent situations and intervene earlier, and thereby give feedback to the nursery schools. The system can contribute to providing effective countermeasures in these settings.
Automatic detection of unattended changes in room acoustics.
Frey, Johannes Daniel; Wendt, Mike; Jacobsen, Thomas
2015-01-01
Previous research has shown that the human auditory system continuously monitors its acoustic environment, detecting a variety of irregularities (e.g., deviance from prior stimulation regularity in pitch, loudness, duration, and (perceived) sound source location). Detection of irregularities can be inferred from a component of the event-related brain potential (ERP), referred to as the mismatch negativity (MMN), even in conditions in which participants are instructed to ignore the auditory stimulation. The current study extends previous findings by demonstrating that auditory irregularities brought about by a change in room acoustics elicit an MMN in a passive oddball protocol (acoustic stimuli with differing room acoustics, but otherwise identical, were employed as standard and deviant stimuli), in which participants watched a fiction movie (silent, with subtitles). Only one out of 14 participants reported having become aware of the changing room acoustics or sound source location; the remainder reported no awareness of any change in the auditory stimulation. Together, these findings suggest automatic monitoring of room acoustics. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
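The MMN is read off a deviant-minus-standard difference wave computed from epoch averages; the averaging step can be sketched as follows (illustrative numpy, with synthetic epochs standing in for recorded EEG):

```python
import numpy as np

def mismatch_negativity(deviant_epochs, standard_epochs):
    """Deviant-minus-standard ERP difference wave; the MMN appears as a
    negative deflection in this average, typically ~100-250 ms
    post-stimulus, even when the sounds are ignored."""
    return np.mean(deviant_epochs, axis=0) - np.mean(standard_epochs, axis=0)
```

Averaging across many epochs cancels the unrelated EEG background, so even a small systematic response to the deviant room acoustics survives in the difference wave.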
NASA Astrophysics Data System (ADS)
Le Bras, R. J.; Arora, N. S.; Kushida, N.; Kebede, F.; Feitio, P.; Tomuta, E.
2017-12-01
The International Monitoring System of the Comprehensive Nuclear-Test-Ban Treaty Organization (CTBTO) has reached out to the broader scientific community through a series of conferences, the latest of which took place in June 2017 in Vienna, Austria. Stemming from this outreach effort, and following the inception of research and development efforts in 2009, the NET-VISA software, based on a Bayesian modelling approach, has been developed to improve the key step of automatic association of joint seismic, hydroacoustic, and infrasound detections. When compared with the current operational system, it has consistently been shown in off-line tests to improve the overlap with the analyst-reviewed Reviewed Event Bulletin (REB) by ten percent, for an average of 85% overlap, while the inconsistency rate remains essentially the same at about 50%. Testing by analysts in realistic conditions on a few days of data has also demonstrated the software's ability to find additional events that qualify for publication in the REB. Starting in August 2017, the automatic events produced by the software will be reviewed by analysts at the CTBTO, and we report on the initial evaluation of this introduction into operations.
Code of Federal Regulations, 2011 CFR
2011-10-01
§ 78.47-13 Fire detecting and manual alarm, automatic sprinkler, and smoke detecting alarm bells (Shipping; Coast Guard).
Code of Federal Regulations, 2012 CFR
2012-10-01
§ 78.47-13 Fire detecting and manual alarm, automatic sprinkler, and smoke detecting alarm bells (Shipping; Coast Guard).
Code of Federal Regulations, 2010 CFR
2010-10-01
§ 78.47-13 Fire detecting and manual alarm, automatic sprinkler, and smoke detecting alarm bells (Shipping; Coast Guard).
Code of Federal Regulations, 2014 CFR
2014-10-01
§ 78.47-13 Fire detecting and manual alarm, automatic sprinkler, and smoke detecting alarm bells (Shipping; Coast Guard).
Code of Federal Regulations, 2013 CFR
2013-10-01
§ 78.47-13 Fire detecting and manual alarm, automatic sprinkler, and smoke detecting alarm bells (Shipping; Coast Guard).
NASA Astrophysics Data System (ADS)
Le Bras, Ronan; Kushida, Noriyuki; Mialle, Pierrick; Tomuta, Elena; Arora, Nimar
2017-04-01
The Preparatory Commission for the Comprehensive Nuclear-Test-Ban Treaty Organization (CTBTO) has been developing a Bayesian method and software to perform the key step of automatic association of seismological, hydroacoustic, and infrasound (SHI) parametric data. In our preliminary testing at the CTBTO, NET-VISA shows much better performance than the currently operating automatic association module, with the rate of automatic events matching the analyst-reviewed events increased by 10%, signifying that the percentage of missed events is lowered by 40%. Initial tests involving analysts also showed that the new software will complete the automatic bulletins of the CTBTO by adding previously missed events. Because CTBTO products are widely distributed to its member States as well as throughout the seismological community, the introduction of a new technology must be carried out carefully, and the first step of operational integration is to use NET-VISA results within the interactive analysts' software so that the analysts can check the robustness of the Bayesian approach. We report on the latest results, both on the progress of automatic processing and on the initial introduction of NET-VISA results into the analyst review process.
Tervaniemi, Mari; Just, Viola; Koelsch, Stefan; Widmann, Andreas; Schröger, Erich
2005-02-01
Previously, professional violin players were found to automatically discriminate tiny pitch changes not discriminable by nonmusicians. The present study addressed pitch processing accuracy in musicians with expertise in playing a wide selection of instruments (e.g., piano, wind, and string instruments). Of specific interest was whether musicians with such divergent backgrounds also show facilitated accuracy at automatic and/or attentive levels of auditory processing. Thirteen professional musicians and 13 nonmusicians were presented with frequent standard sounds and rare deviant sounds (0.8, 2, or 4% higher in frequency). Auditory event-related potentials evoked by these sounds were recorded while the subjects first read a self-chosen book and second indicated behaviorally the detection of sounds with deviant frequency. Musicians detected the pitch changes faster and more accurately than nonmusicians. The N2b and P3 responses recorded during attentive listening had larger amplitudes in musicians than in nonmusicians. Interestingly, the superiority of musicians over nonmusicians in pitch discrimination accuracy was observed not only with the 0.8% but also with the 2% frequency changes. Moreover, nonmusicians also detected the smallest pitch changes of 0.8% quite reliably. However, the mismatch negativity (MMN) and P3a recorded during the reading condition did not differentiate musicians from nonmusicians. These results suggest that musical expertise may exert its effects mainly at attentive levels of processing and not necessarily already at preattentive levels.
Single Event Effect Testing of the Analog Devices ADV212
NASA Technical Reports Server (NTRS)
Wilcox, Ted; Campola, Michael; Kadari, Madhu; Nadendla, Seshagiri R.
2017-01-01
The Analog Devices ADV212 was initially tested for single event effects (SEE) at the Texas A&M University Cyclotron Facility (TAMU) in July of 2013. Testing revealed a sensitivity to device hang-ups classified as single event functional interrupts (SEFI), soft data errors classified as single event upsets (SEU), and, of particular concern, single event latch-ups (SEL). All error types occurred so frequently as to make accurate measurements of the exposure time, and thus total particle fluence, challenging. To mitigate some of the risk posed by single event latch-ups, circuitry was added to the electrical design to detect a high current event and automatically recycle power and reboot the device. An additional heavy-ion test was scheduled to validate the operation of the recovery circuitry and the continuing functionality of the ADV212 after a substantial number of latch-up events. As a secondary goal, more precise data would be gathered by an improved test method, described in this test report.
Homaeinezhad, M R; Erfanianmoshiri-Nejad, M; Naseri, H
2014-01-01
The goal of this study is to introduce a simple, standard, and safe procedure to detect and delineate the P and T waves of the electrocardiogram (ECG) signal in real conditions. The proposed method consists of four major steps: (1) a secure QRS detection and delineation algorithm, (2) a pattern recognition algorithm designed for distinguishing the various ECG clusters which take place between consecutive R-waves, (3) extracting a template of the dominant events of each cluster waveform and (4) application of correlation analysis in order to automatically delineate the P- and T-waves in noisy conditions. The performance characteristics of the proposed P and T detection-delineation algorithm are evaluated against various ECG signals whose quality is altered from the best to the worst case based on random-walk noise theory. The method is also applied to the MIT-BIH Arrhythmia and QT databases to compare parts of its performance characteristics with a number of P and T detection-delineation algorithms. The conducted evaluations indicate that in a signal with a low quality value of about 0.6, the proposed method detects the P and T events with sensitivity Se=85% and positive predictive value P+=89%, respectively. In addition, at the same quality, the average delineation errors associated with those ECG events are 45 and 63 ms, respectively. Stable delineation error, high detection accuracy and high noise tolerance were the most important aspects considered during development of the proposed method. © 2013 Elsevier Ltd. All rights reserved.
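Step (4) above, locating waves by correlating a learned template against the signal, can be sketched as follows. This is an illustrative reconstruction, not the authors' implementation; the function name, the fixed correlation threshold, and the use of plain Pearson correlation over a sliding window are assumptions.

```python
import numpy as np

def correlate_delineate(signal, template, threshold=0.8):
    """Slide a learned event template over the ECG and mark positions
    where the Pearson correlation of the window with the template
    exceeds the threshold. Returns sample indices of event centers."""
    n = len(template)
    t = (template - template.mean()) / template.std()
    hits = []
    for i in range(len(signal) - n + 1):
        w = signal[i:i + n]
        s = w.std()
        if s == 0:          # flat window: no wave activity here
            continue
        r = float(np.dot((w - w.mean()) / s, t)) / n
        if r >= threshold:
            hits.append(i + n // 2)
    return hits
```

Because the correlation is amplitude-normalized, the same template can flag low-amplitude P waves in noisy records, which matches the paper's motivation for a correlation-based final stage.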
NASA Astrophysics Data System (ADS)
Deuerlein, Jochen; Meyer-Harries, Lea; Guth, Nicolai
2017-07-01
Drinking water distribution networks are part of critical infrastructures and are exposed to a number of different risks. One of them is the risk of unintended or deliberate contamination of the drinking water within the pipe network. Over the past decade research has focused on the development of new sensors that are able to detect malicious substances in the network and early warning systems for contamination. In addition to the optimal placement of sensors, the automatic identification of the source of a contamination is an important component of an early warning and event management system for security enhancement of water supply networks. Many publications deal with the algorithmic development; however, only little information exists about the integration within a comprehensive real-time event detection and management system. In the following the analytical solution and the software implementation of a real-time source identification module and its integration within a web-based event management system are described. The development was part of the SAFEWATER project, which was funded under FP 7 of the European Commission.
Automatic near-real-time detection of CMEs in Mauna Loa K-Cor coronagraph images
NASA Astrophysics Data System (ADS)
Thompson, W. T.; St Cyr, O. C.; Burkepile, J.; Posner, A.
2017-12-01
A simple algorithm has been developed to detect the onset of coronal mass ejections (CMEs), together with an estimate of their speed, in near-real-time using images of the linearly polarized white-light solar corona taken by the K-Cor telescope at the Mauna Loa Solar Observatory (MLSO). The algorithm used is a variation on the Solar Eruptive Event Detection System (SEEDS) developed at George Mason University. The algorithm was tested against K-Cor data taken between 29 April 2014 and 20 February 2017, on days which the MLSO website marked as containing CMEs. This resulted in testing of 139 days worth of data containing 171 CMEs. The detection rate varied from close to 80% in 2014-2015 when solar activity was high, down to as low as 20-30% in 2017 when activity was low. The difference in effectiveness with solar cycle is attributed to the difference in relative prevalence of strong CMEs between active and quiet periods. There were also twelve false detections during this time period, leading to an average false detection rate of 8.6% on any given day. However, half of the false detections were clustered into two short periods of a few days each when special conditions prevailed to increase the false detection rate. The K-Cor data were also compared with major Solar Energetic Particle (SEP) storms during this time period. There were three SEP events detected either at Earth or at one of the two STEREO spacecraft where K-Cor was observing during the relevant time period. The K-Cor CME detection algorithm successfully generated alerts for two of these events, with lead times of 1-3 hours before the SEP onset at 1 AU. The third event was not detected by the automatic algorithm because of the unusually broad width of the CME in position angle.
Automatic Identification of Alpine Mass Movements by a Combination of Seismic and Infrasound Sensors
Hübl, Johannes; McArdell, Brian W.; Walter, Fabian
2018-01-01
The automatic detection and identification of alpine mass movements such as debris flows, debris floods, or landslides have been of increasing importance for devising mitigation measures in densely populated and intensively used alpine regions. Since these mass movements emit characteristic seismic and acoustic waves in the low-frequency range (<30 Hz), several approaches have already been developed for detection and warning systems based on these signals. However, a combination of the two methods, for improving detection probability and reducing false alarms, is still applied rarely. This paper presents an update and extension of a previously published approach for a detection and identification system based on a combination of seismic and infrasound sensors. Furthermore, this work evaluates the possible early warning times at several test sites and aims to analyze the seismic and infrasound spectral signature produced by different sediment-related mass movements to identify the process type and estimate the magnitude of the event. Thus, this study presents an initial method for estimating the peak discharge and total volume of debris flows based on infrasound data. Tests on several catchments show that this system can detect and identify mass movements in real time directly at the sensor site with high accuracy and a low false alarm ratio. PMID:29789449
An Energy-Efficient Multi-Tier Architecture for Fall Detection on Smartphones
Guvensan, M. Amac; Kansiz, A. Oguz; Camgoz, N. Cihan; Turkmen, H. Irem; Yavuz, A. Gokhan; Karsligil, M. Elif
2017-01-01
Automatic detection of fall events is vital to providing fast medical assistance to the casualty, particularly when the injury causes loss of consciousness. Optimization of the energy consumption of mobile applications, especially those which run 24/7 in the background, is essential for longer use of smartphones. In order to improve energy efficiency without compromising fall detection performance, we propose a novel 3-tier architecture that combines simple thresholding methods with machine learning algorithms. The proposed method is implemented in a mobile application, called uSurvive, for Android smartphones. It runs as a background service, monitors the activities of a person in daily life, and automatically sends a notification to the appropriate authorities and/or user-defined contacts when it detects a fall. The performance of the proposed method was evaluated in terms of fall detection performance and energy consumption. Real-life performance tests conducted on two different models of smartphone demonstrate that our 3-tier architecture with feature reduction could save up to 62% of energy compared to machine-learning-only solutions. In addition to this energy saving, the hybrid method achieves 93% accuracy, which is superior to thresholding methods and better than machine-learning-only solutions. PMID:28644378
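The tiered idea described above, where a cheap threshold gates the more expensive analysis, can be sketched roughly as follows. The threshold values, the post-impact variance feature, and the simple rule standing in for the final machine-learning classifier are all illustrative assumptions, not uSurvive's actual design.

```python
import math

IMPACT_G = 2.5        # tier-1 trigger threshold in g (assumed value)
STILLNESS_VAR = 0.15  # tier-3 post-impact movement threshold (assumed)

def magnitude(sample):
    """Total acceleration of a 3-axis accelerometer sample, in g."""
    x, y, z = sample
    return math.sqrt(x * x + y * y + z * z)

def detect_fall(window):
    """Tiered detection: a cheap threshold rejects most windows before
    any feature extraction or classification is attempted."""
    mags = [magnitude(s) for s in window]
    peak = max(mags)
    if peak < IMPACT_G:                           # tier 1: no impact-like spike
        return False
    post = mags[mags.index(peak) + 1:] or [peak]  # tier 2: post-impact samples
    mean = sum(post) / len(post)
    var = sum((m - mean) ** 2 for m in post) / len(post)
    return var < STILLNESS_VAR                    # tier 3: stand-in for the ML classifier
```

The energy saving comes from the control flow: the vast majority of windows exit at tier 1 after a single max() over magnitudes, so the costlier stages run only on rare candidate impacts.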
NASA Astrophysics Data System (ADS)
Karpov, S.; Beskin, G.; Biryukov, A.; Bondar, S.; Ivanov, E.; Katkova, E.; Orekhova, N.; Perkov, A.; Sasyuk, V.
2017-07-01
Here we present a summary of the first years of operation and the first results of a novel 9-channel wide-field optical monitoring system with sub-second temporal resolution, Mini-MegaTORTORA (MMT-9), which is now in operation at the Special Astrophysical Observatory in the Russian Caucasus. The system is able to observe the sky simultaneously in either a wide (900 square degrees) or narrow (100 square degrees) field of view, either in clear light or with any combination of color (Johnson-Cousins B, V or R) and polarimetric filters installed, with exposure times ranging from 0.1 s to hundreds of seconds. The real-time data analysis pipeline performs automatic detection of rapid transient events, both near-Earth and extragalactic. The objects routinely detected by MMT also include faint meteors and artificial satellites.
NASA Astrophysics Data System (ADS)
Karpov, S.; Beskin, G.; Biryukov, A.; Bondar, S.; Ivanov, E.; Katkova, E.; Orekhova, N.; Perkov, A.; Sasyuk, V.
2017-06-01
Here we present a summary of the first years of operation and the first results of a novel 9-channel wide-field optical monitoring system with sub-second temporal resolution, Mini-MegaTORTORA (MMT-9), which is now in operation at the Special Astrophysical Observatory in the Russian Caucasus. The system is able to observe the sky simultaneously in either a wide (˜900 square degrees) or narrow (˜100 square degrees) field of view, either in clear light or with any combination of color (Johnson-Cousins B, V or R) and polarimetric filters installed, with exposure times ranging from 0.1 s to hundreds of seconds. The real-time data analysis pipeline performs automatic detection of rapid transient events, both near-Earth and extragalactic. The objects routinely detected by MMT include faint meteors and artificial satellites.
An Ultralow-Power Sleep Spindle Detection System on Chip.
Iranmanesh, Saam; Rodriguez-Villegas, Esther
2017-08-01
This paper describes a full system-on-chip to automatically detect sleep spindle events from scalp EEG signals. These events, which are known to play an important role in memory consolidation during sleep, are also characteristic of a number of neurological diseases. The operation of the system is based on a previously reported algorithm, which used the Teager energy operator together with the Spectral Edge Frequency (SEF50), achieving more than 70% sensitivity and 98% specificity. The algorithm has now been converted into a customized analog hardware implementation in order to achieve extremely low levels of power. Experimental results prove that the system, which is fabricated in a 0.18 μm CMOS technology, is able to operate from a 1.25 V power supply consuming only 515 nW, with an accuracy that is comparable to its software counterpart.
Dudik, Joshua M; Kurosu, Atsuko; Coyle, James L; Sejdić, Ervin
2015-04-01
Cervical auscultation with high resolution sensors is currently under consideration as a method of automatically screening for specific swallowing abnormalities. To be clinically useful without human involvement, any devices based on cervical auscultation should be able to detect specified swallowing events in an automatic manner. In this paper, we comparatively analyze the density-based spatial clustering of applications with noise algorithm (DBSCAN), a k-means based algorithm, and an algorithm based on quadratic variation as methods of differentiating periods of swallowing activity from periods of time without swallows. These algorithms utilized swallowing vibration data exclusively and compared the results to a gold standard measure of swallowing duration. Data was collected from 23 subjects who were actively suffering from swallowing difficulties. Comparing the performance of the DBSCAN algorithm with a proven segmentation algorithm that utilizes k-means clustering demonstrated that the DBSCAN algorithm had a higher sensitivity and correctly segmented more swallows. Comparing its performance with a threshold-based algorithm that utilized the quadratic variation of the signal showed that the DBSCAN algorithm offered no direct increase in performance. However, it offered several other benefits including a faster run time and more consistent performance between patients. All algorithms showed noticeable differentiation from the endpoints provided by a videofluoroscopy examination as well as reduced sensitivity. In summary, we showed that the DBSCAN algorithm is a viable method for detecting the occurrence of a swallowing event using cervical auscultation signals, but significant work must be done to improve its performance before it can be implemented in an unsupervised manner. Copyright © 2015 Elsevier Ltd. All rights reserved.
Dudik, Joshua M.; Kurosu, Atsuko; Coyle, James L
2015-01-01
Background Cervical auscultation with high resolution sensors is currently under consideration as a method of automatically screening for specific swallowing abnormalities. To be clinically useful without human involvement, any devices based on cervical auscultation should be able to detect specified swallowing events in an automatic manner. Methods In this paper, we comparatively analyze the density-based spatial clustering of applications with noise algorithm (DBSCAN), a k-means based algorithm, and an algorithm based on quadratic variation as methods of differentiating periods of swallowing activity from periods of time without swallows. These algorithms utilized swallowing vibration data exclusively and compared the results to a gold standard measure of swallowing duration. Data was collected from 23 subjects who were actively suffering from swallowing difficulties. Results Comparing the performance of the DBSCAN algorithm with a proven segmentation algorithm that utilizes k-means clustering demonstrated that the DBSCAN algorithm had a higher sensitivity and correctly segmented more swallows. Comparing its performance with a threshold-based algorithm that utilized the quadratic variation of the signal showed that the DBSCAN algorithm offered no direct increase in performance. However, it offered several other benefits including a faster run time and more consistent performance between patients. All algorithms showed noticeable differentiation from the endpoints provided by a videofluoroscopy examination as well as reduced sensitivity. Conclusions In summary, we showed that the DBSCAN algorithm is a viable method for detecting the occurrence of a swallowing event using cervical auscultation signals, but significant work must be done to improve its performance before it can be implemented in an unsupervised manner. PMID:25658505
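In one dimension, DBSCAN's density-based clustering reduces to chaining sample times separated by at most eps and discarding sparse groups as noise. The following minimal sketch illustrates that idea for segmenting candidate swallow activity; the parameter values and the 1-D simplification itself are assumptions for illustration, not the paper's implementation.

```python
def segment_swallows(times, eps=0.5, min_pts=4):
    """Chain sorted activity times separated by at most `eps` seconds
    into groups, then keep only groups with at least `min_pts` samples
    as swallow segments; sparse groups are treated as noise, as DBSCAN
    would. Returns (start, end) pairs in seconds."""
    times = sorted(times)
    groups, current = [], [times[0]]
    for t in times[1:]:
        if t - current[-1] <= eps:
            current.append(t)
        else:
            groups.append(current)
            current = [t]
    groups.append(current)
    return [(g[0], g[-1]) for g in groups if len(g) >= min_pts]
```

Unlike a fixed amplitude threshold, this grouping adapts to each patient's activity density, which is consistent with the paper's observation of more consistent performance between patients.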
Fisher, Robert S; Afra, Pegah; Macken, Micheal; Minecan, Daniela N; Bagić, Anto; Benbadis, Selim R; Helmers, Sandra L; Sinha, Saurabh R; Slater, Jeremy; Treiman, David; Begnaud, Jason; Raman, Pradheep; Najimipour, Bita
2016-02-01
The Automatic Stimulation Mode (AutoStim) feature of the Model 106 Vagus Nerve Stimulation (VNS) Therapy System stimulates the left vagus nerve on detecting tachycardia. This study evaluates the performance and safety of the AutoStim feature during a 3-5-day Epilepsy Monitoring Unit (EMU) stay and the long-term clinical outcomes of the device stimulating in all modes. The E-37 protocol (NCT01846741) was a prospective, unblinded, U.S. multisite study of the AspireSR(®) in subjects with drug-resistant partial onset seizures and a history of ictal tachycardia. VNS Normal and Magnet Mode stimulation were present at all times except during the EMU stay. Outpatient visits at 3, 6, and 12 months tracked seizure frequency, severity, quality of life, and adverse events. Twenty implanted subjects (ages 21-69) experienced 89 seizures in the EMU. 28/38 (73.7%) of complex partial and secondarily generalized seizures exhibited a ≥20% increase in heart rate. 31/89 (34.8%) of seizures were treated by Automatic Stimulation on detection; 19/31 (61.3%) of those seizures ended during the stimulation, with a median time from stimulation onset to seizure end of 35 sec. Mean duty cycle at six months increased from 11% to 16%. At 12 months, quality of life and seizure severity scores improved, and the responder rate was 50%. Common adverse events were dysphonia (n = 7), convulsion (n = 6), and oropharyngeal pain (n = 3). The Model 106 performed as intended in the study population, was well tolerated, and was associated with clinical improvement from baseline. The study design did not allow determination of which factors were responsible for improvements. © 2015 The Authors. Neuromodulation: Technology at the Neural Interface published by Wiley Periodicals, Inc. on behalf of International Neuromodulation Society.
46 CFR 161.002-2 - Types of fire-protective systems.
Code of Federal Regulations, 2013 CFR
2013-10-01
..., but not be limited to, automatic fire and smoke detecting systems, manual fire alarm systems, sample extraction smoke detection systems, watchman's supervisory systems, and combinations of these systems. (b) Automatic fire detecting systems. For the purpose of this subpart, automatic fire and smoke detecting...
46 CFR 161.002-2 - Types of fire-protective systems.
Code of Federal Regulations, 2014 CFR
2014-10-01
..., but not be limited to, automatic fire and smoke detecting systems, manual fire alarm systems, sample extraction smoke detection systems, watchman's supervisory systems, and combinations of these systems. (b) Automatic fire detecting systems. For the purpose of this subpart, automatic fire and smoke detecting...
Color categories affect pre-attentive color perception.
Clifford, Alexandra; Holmes, Amanda; Davies, Ian R L; Franklin, Anna
2010-10-01
Categorical perception (CP) of color is the faster and/or more accurate discrimination of colors from different categories than equivalently spaced colors from the same category. Here, we investigate whether color CP at early stages of chromatic processing is independent of top-down modulation from attention. A visual oddball task was employed where frequent and infrequent colored stimuli were either same- or different-category, with chromatic differences equated across conditions. Stimuli were presented peripheral to a central distractor task to elicit an event-related potential (ERP) known as the visual mismatch negativity (vMMN). The vMMN is an index of automatic and pre-attentive visual change detection arising from generating loci in visual cortices. The results revealed a greater vMMN for different-category than same-category change detection when stimuli appeared in the lower visual field, and an absence of attention-related ERP components. The findings provide the first clear evidence for an automatic and pre-attentive categorical code for color. Copyright © 2010 Elsevier B.V. All rights reserved.
Proverbio, Alice Mado; Crotti, Nicola; Manfredi, Mirella; Adorni, Roberta; Zani, Alberto
2012-01-01
While the existence of a mirror neuron system (MNS) representing and mirroring simple purposeful actions (such as reaching) is known, neural mechanisms underlying the representation of complex actions (such as ballet, fencing, etc.) that are learned by imitation and exercise are not well understood. In this study, correct and incorrect basketball actions were visually presented to professional basketball players and naïve viewers while their EEG was recorded. The participants had to respond to rare targets (unanimated scenes). No category or group differences were found at perceptual level, ruling out the possibility that correct actions might be more visually familiar. Large, anterior N400 responses of event-related brain potentials to incorrectly performed basketball actions were recorded in skilled brains only. The swLORETA inverse solution for incorrect–correct contrast showed that the automatic detection of action ineffectiveness/incorrectness involved the fronto/parietal MNS, the cerebellum, the extra-striate body area, and the superior temporal sulcus. PMID:23181191
NASA Astrophysics Data System (ADS)
Vasterling, Margarete; Wegler, Ulrich; Becker, Jan; Brüstle, Andrea; Bischoff, Monika
2017-01-01
We develop and test a real-time envelope cross-correlation detector for use in seismic response plans to mitigate the hazard of induced seismicity. The incoming seismological data are cross-correlated in real time with a set of previously recorded master events. For robustness against small changes in the earthquake source locations or in the focal mechanisms, we cross-correlate the envelopes of the seismograms rather than the seismograms themselves. Two sequenced detection conditions are implemented: after passing a single-trace cross-correlation condition, a network cross-correlation is calculated taking amplitude ratios between stations into account. Besides detecting the earthquake and assigning it to the respective reservoir, real-time magnitudes are important for seismic response plans. We estimate the magnitudes of induced microseismicity using the relative amplitudes between the master event and the detected event. The real-time detector is implemented as a SeisComP3 module. We carry out offline and online performance tests using seismic monitoring data of the Insheim and Landau geothermal power plants (Upper Rhine Graben, Germany), also including blasts from a nearby quarry. The comparison of the automatic real-time catalogue with a manually processed catalogue shows that, with the implemented parameters, events are always correctly assigned to the respective reservoir (4 km distance between reservoirs) or the quarry (8 km and 10 km distance, respectively, from the reservoirs). The real-time catalogue achieves a magnitude of completeness around 0.0. Four per cent of the events assigned to the Insheim reservoir and zero per cent of the Landau events are misdetections. All wrong detections are local tectonic events, whereas none are caused by seismic noise.
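The single-trace stage of such a detector, normalized cross-correlation of a smoothed envelope against a master-event envelope, followed by a relative magnitude from the amplitude ratio, might look roughly like this sketch. The window length, detection threshold, and the one-magnitude-unit-per-decade amplitude scaling are assumptions for illustration, not the SeisComP3 module's actual parameters.

```python
import numpy as np

def envelope(x, win=20):
    """Smoothed absolute amplitude; envelopes tolerate small changes in
    source location or focal mechanism better than raw waveforms."""
    return np.convolve(np.abs(x), np.ones(win) / win, mode="same")

def detect(trace, master, master_mag, threshold=0.8):
    """Normalized cross-correlation of the trace envelope against a
    master-event envelope; on detection, return the best-matching offset
    and a magnitude estimated from the peak amplitude ratio."""
    e_m = envelope(master)
    e_m = (e_m - e_m.mean()) / e_m.std()
    e_t = envelope(trace)
    n = len(e_m)
    best_k, best_cc = -1, -1.0
    for k in range(len(e_t) - n + 1):
        w = e_t[k:k + n]
        s = w.std()
        if s == 0:          # dead stretch of trace, nothing to match
            continue
        cc = float(np.dot((w - w.mean()) / s, e_m)) / n
        if cc > best_cc:
            best_k, best_cc = k, cc
    if best_cc < threshold:
        return None
    ratio = np.abs(trace).max() / np.abs(master).max()
    return best_k, master_mag + np.log10(ratio)
```

In the paper's two-stage scheme, a hit from this per-trace test would then be checked by a network-wide correlation that also compares amplitude ratios between stations.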
Analyzing depression tendency of web posts using an event-driven depression tendency warning model.
Tung, Chiaming; Lu, Wenhsiang
2016-01-01
The Internet has become a platform to express individual moods/feelings of daily life, where authors share their thoughts in web blogs, micro-blogs, forums, bulletin board systems, or other media. In this work, we investigate text-mining technology to analyze and predict the depression tendency of web posts. In this paper, we defined depression factors, which include negative events, negative emotions, symptoms, and negative thoughts, from web posts. We proposed an enhanced event extraction (E3) method to automatically extract negative event terms. In addition, we also proposed an event-driven depression tendency warning (EDDTW) model to predict the depression tendency of web bloggers or post authors by analyzing their posted articles. We compare the performance of the proposed EDDTW model, the negative emotion evaluation (NEE) model, and the Diagnostic and Statistical Manual of Mental Disorders-based depression tendency evaluation method. The EDDTW model obtains the best recall rate and F-measure at 0.668 and 0.624, respectively, while the Diagnostic and Statistical Manual of Mental Disorders-based method achieves the best precision rate of 0.666. The main reason is that our enhanced event extraction method increases the recall rate by enlarging the negative event lexicon at the expense of precision. Our EDDTW model can also be used to track the change or trend of depression tendency for each post author. The depression tendency trend can help doctors diagnose and even track the depression of web post authors more efficiently. This paper presents an E3 method to automatically extract negative event terms in web posts. We also propose a new EDDTW model to predict the depression tendency of web posts and possibly help bloggers or post authors detect major depressive disorder early. Copyright © 2015 Elsevier B.V. All rights reserved.
Wang, Jingbo; Templeton, Dennise C.; Harris, David B.
2015-07-30
Using empirical matched field processing (MFP), we compare 4 yr of continuous seismic data to a set of 195 master templates from within an active geothermal field and identify over 140 per cent more events than were identified using traditional detection and location techniques alone. In managed underground reservoirs, a substantial fraction of seismic events can be excluded from the official catalogue due to an inability to clearly identify seismic-phase onsets. Empirical MFP can improve the effectiveness of current seismic detection and location methodologies by using conventionally located events with higher signal-to-noise ratios as master events to define wavefield templates that could then be used to map normally discarded indistinct seismicity. Since MFP does not require picking, it can be carried out automatically and rapidly once suitable templates are defined. In this application, we extend MFP by constructing local-distance empirical master templates using Southern California Earthquake Data Center archived waveform data of events originating within the Salton Sea Geothermal Field. We compare the empirical templates to continuous seismic data collected between 1 January 2008 and 31 December 2011. The empirical MFP method successfully identifies 6249 additional events, while the original catalogue reported 4352 events. The majority of these new events are lower-magnitude events with magnitudes between M0.2 and M0.8. Here, the increased spatial-temporal resolution of the microseismicity map within the geothermal field illustrates how empirical MFP, when combined with conventional methods, can significantly improve seismic network detection capabilities, which can aid in the long-term sustainability and monitoring of managed underground reservoirs.
Savino, Giovanni; Pierini, Marco; Thompson, Jason; Fitzharris, Michael; Lenné, Michael G
2016-11-16
Autonomous emergency braking (AEB) acts to slow down a vehicle when an unavoidable impending collision is detected. In addition to documented benefits when applied to passenger cars, AEB has also shown potential when applied to motorcycles (MAEB). However, the feasibility of MAEB as practically applied to motorcycles in the real world is not well understood. In this study we performed a field trial involving 16 riders on a test motorcycle subjected to automatic decelerations, thus simulating MAEB activation. The tests were conducted along a rectilinear path at nominal speed of 40 km/h and with mean deceleration of 0.15 g (15% of full braking) deployed at random times. Riders were also exposed to one final undeclared brake activation with the aim of providing genuinely unexpected automatic braking events. Participants were consistently able to manage automatic decelerations of the vehicle with minor to moderate effort. Results of undeclared activations were consistent with those of standard runs. This study demonstrated the feasibility of a moderate automatic deceleration in a scenario of motorcycle travelling in a straight path, supporting the notion that the application of AEB on motorcycles is practicable. Furthermore, the proposed field trial can be used as a reference for future regulation or consumer tests in order to address safety and acceptability of unexpected automatic decelerations on a motorcycle.
Computer systems for automatic earthquake detection
Stewart, S.W.
1974-01-01
U.S. Geological Survey seismologists in Menlo Park, California, are utilizing the speed, reliability, and efficiency of minicomputers to monitor seismograph stations and to automatically detect earthquakes. An earthquake detection computer system, believed to be the only one of its kind in operation, automatically reports about 90 percent of all local earthquakes recorded by a network of over 100 central California seismograph stations. The system also monitors the stations for signs of malfunction or abnormal operation. Before the automatic system was put in operation, all of the earthquakes recorded had to be detected by manually searching the records, a time-consuming process. With the automatic detection system, the stations are monitored continuously and efficiently.
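The abstract does not specify the detection algorithm, but automatic detectors of that era were typically built on a short-term/long-term average (STA/LTA) amplitude ratio; a minimal sketch of that classic idea, with assumed window lengths and trigger threshold:

```python
def sta_lta_trigger(samples, sta_n=5, lta_n=50, threshold=4.0):
    """Flag sample indices where the short-term average amplitude jumps
    to `threshold` times the long-term background average, the classic
    signature of an emergent earthquake arrival."""
    triggers = []
    for i in range(lta_n, len(samples)):
        sta = sum(abs(s) for s in samples[i - sta_n:i]) / sta_n
        lta = sum(abs(s) for s in samples[i - lta_n:i]) / lta_n
        if lta > 0 and sta / lta >= threshold:
            triggers.append(i)
    return triggers
```

Because the long-term average tracks the background noise level per station, the same trigger settings work across a network of stations with different noise conditions, which is essential for monitoring 100+ channels with one machine.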
NASA Astrophysics Data System (ADS)
Hibert, Clement; Malet, Jean-Philippe; Provost, Floriane; Michéa, David; Geertsema, Marten
2017-04-01
Detection of landslide occurrences and measurement of their dynamic properties during run-out is a high research priority but a logistical and technical challenge. Seismology has started to help in several important ways. Taking advantage of the densification of global, regional, and local networks of broadband seismic stations, recent advances now permit the seismic detection and location of landslides in near-real-time. This seismic detection could greatly increase the spatio-temporal resolution at which we study landslide triggering, which is critical to better understand the influence of external forcings such as rainfall and earthquakes. However, automatically detecting seismic signals generated by landslides still represents a challenge, especially for events with volumes below one million cubic meters. The low signal-to-noise ratio classically observed for landslide-generated seismic signals and the difficulty of discriminating these signals from those generated by regional earthquakes or by anthropogenic and natural noise are some of the obstacles that have to be circumvented. We present a new method for automatically constructing instrumental landslide catalogues from continuous seismic data. We developed a robust and versatile solution, which can be implemented in any context where a seismic detection of landslides or other mass movements is relevant. The method is based on a spectral detection of the seismic signals and the identification of the sources with a Random Forest algorithm. The spectral detection allows detecting signals with a low signal-to-noise ratio, while the Random Forest algorithm achieves a high rate of positive identification of the seismic signals generated by landslides and other seismic sources.
We present here the preliminary results of the application of this processing chain in two contexts: i) in the Himalaya, with the data acquired between 2002 and 2005 by the Hi-Climb network; ii) in Alaska, using data recorded by the permanent regional network and the USArray, which is currently being deployed in this region. The landslide seismic catalogues are compared to geomorphological catalogues in terms of number of events and dates when possible.
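The first stage of the chain described above, a spectral detection that flags low-SNR emergent energy before any classification, can be sketched as follows. The frame sizes and the median-based threshold are illustrative assumptions, and the subsequent Random Forest identification stage is not shown.

```python
import numpy as np

def spectral_detect(trace, frame=256, hop=128, k=5.0):
    """Flag frame start indices whose total spectral power exceeds k
    times the median frame power; emergent low-SNR energy raises the
    spectrum even when no sharp onset is visible in the waveform."""
    powers, starts = [], []
    for i in range(0, len(trace) - frame + 1, hop):
        seg = trace[i:i + frame] * np.hanning(frame)  # taper each frame
        powers.append(float(np.sum(np.abs(np.fft.rfft(seg)) ** 2)))
        starts.append(i)
    med = float(np.median(powers))  # robust background power estimate
    return [s for s, p in zip(starts, powers) if p > k * med]
```

Windows flagged this way would then be turned into feature vectors (duration, spectral shape, kurtosis, and so on) and passed to the Random Forest to separate landslides from earthquakes and noise.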
Extracting rate changes in transcriptional regulation from MEDLINE abstracts.
Liu, Wenting; Miao, Kui; Li, Guangxia; Chang, Kuiyu; Zheng, Jie; Rajapakse, Jagath C
2014-01-01
Time delays are important factors that are often neglected in gene regulatory network (GRN) inference models. Validating time delays from knowledge bases is a challenge since the vast majority of biological databases do not record temporal information of gene regulations. Biological knowledge and facts on gene regulations are typically extracted from bio-literature with specialized methods that depend on the regulation task. In this paper, we mine evidences for time delays related to the transcriptional regulation of yeast from the PubMed abstracts. Since the vast majority of abstracts lack quantitative time information, we can only collect qualitative evidences of time delays. Specifically, the speed-up or delay in transcriptional regulation rate can provide evidences for time delays (shorter or longer) in GRN. Thus, we focus on deriving events related to rate changes in transcriptional regulation. A corpus of yeast regulation related abstracts was manually labeled with such events. In order to capture these events automatically, we create an ontology of sub-processes that are likely to result in transcription rate changes by combining textual patterns and biological knowledge. We also propose effective feature extraction methods based on the created ontology to identify the direct evidences with specific details of these events. Our ontologies outperform existing state-of-the-art gene regulation ontologies in the automatic rule learning method applied to our corpus. The proposed deterministic ontology rule-based method can achieve comparable performance to the automatic rule learning method based on decision trees. This demonstrates the effectiveness of our ontology in identifying rate-changing events. We also tested the effectiveness of the proposed feature mining methods on detecting direct evidence of events. Experimental results show that the machine learning method on these features achieves an F1-score of 71.43%. 
The manually labeled corpus of events relating to rate changes in transcriptional regulation for yeast is available at https://sites.google.com/site/wentingntu/data. The created ontologies summarize both the biological causes of rate changes in transcriptional regulation and the corresponding positive and negative textual patterns from the corpus. They are demonstrated to be effective in identifying rate-changing events, which shows the benefit of combining textual patterns and biological knowledge when extracting complex biological events.
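The F1-score reported above is the harmonic mean of precision and recall. A minimal sketch of the computation from raw counts (the input counts are illustrative, not from the paper):

```python
def f1_score(tp, fp, fn):
    """F1 as the harmonic mean of precision and recall, the metric
    reported for the event-detection experiments.

    tp: true positives, fp: false positives, fn: false negatives.
    """
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)
```

An F1 of 71.43% corresponds, for example, to precision and recall both near 0.71, or to a less balanced mix of the two.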
Active suppression of distractors that match the contents of visual working memory
Sawaki, Risa; Luck, Steven J.
2011-01-01
The biased competition theory proposes that items matching the contents of visual working memory will automatically have an advantage in the competition for attention. However, evidence for an automatic effect has been mixed, perhaps because the memory-driven attentional bias can be overcome by top-down suppression. To test this hypothesis, the Pd component of the event-related potential waveform was used as a marker of attentional suppression. While observers maintained a color in working memory, task-irrelevant probe arrays were presented that contained an item matching the color being held in memory. We found that the memory-matching probe elicited a Pd component, indicating that it was being actively suppressed. This result suggests that sensory inputs matching the information being held in visual working memory are automatically detected and generate an “attend-to-me” signal, but this signal can be overridden by an active suppression mechanism to prevent the actual capture of attention. PMID:22053147
Oscillatory brain dynamics associated with the automatic processing of emotion in words.
Wang, Lin; Bastiaansen, Marcel
2014-10-01
This study examines the automaticity of processing the emotional aspects of words, and characterizes the oscillatory brain dynamics that accompany this automatic processing. Participants read emotionally negative, neutral and positive nouns while performing a color detection task in which only perceptual-level analysis was required. Event-related potentials and time-frequency representations were computed from the concurrently measured EEG. Negative words elicited a larger P2 and a larger late positivity than positive and neutral words, indicating deeper semantic/evaluative processing of negative words. In addition, sustained alpha power suppressions were found for the emotional compared to neutral words, in the time range from 500 to 1000 ms post-stimulus. These results suggest that sustained attention was allocated to the emotional words, whereas the attention allocated to the neutral words was released after an initial analysis. This seems to hold even when the emotional content of the words is task-irrelevant. Copyright © 2014 Elsevier Inc. All rights reserved.
Automatic machine learning based prediction of cardiovascular events in lung cancer screening data
NASA Astrophysics Data System (ADS)
de Vos, Bob D.; de Jong, Pim A.; Wolterink, Jelmer M.; Vliegenthart, Rozemarijn; Wielingen, Geoffrey V. F.; Viergever, Max A.; Išgum, Ivana
2015-03-01
Calcium burden determined in CT images acquired in lung cancer screening is a strong predictor of cardiovascular events (CVEs). This study investigated whether subjects undergoing such screening who are at risk of a CVE can be identified using automatic image analysis and subject characteristics. Moreover, the study examined whether these individuals can be identified using solely image information, or whether a combination of image and subject data is needed. A set of 3559 male subjects participating in the Dutch-Belgian lung cancer screening trial was included. Low-dose non-ECG-synchronized chest CT images acquired at baseline were analyzed (1834 scanned in the University Medical Center Groningen, 1725 in the University Medical Center Utrecht). Aortic and coronary calcifications were identified using previously developed automatic algorithms. A set of features describing the number, volume and size distribution of the detected calcifications was computed. The age of the participants was extracted from image headers. Features describing participants' smoking status, smoking history and past CVEs were obtained. CVEs that occurred within three years after imaging were used as the outcome. Support vector machine classification was performed employing different feature sets, using either only image features or a combination of image and subject-related characteristics. Classification based solely on the image features resulted in an area under the ROC curve (Az) of 0.69. A combination of image and subject features resulted in an Az of 0.71. The results demonstrate that subjects undergoing lung cancer screening who are at risk of a CVE can be identified using automatic image analysis. Adding subject information slightly improved the performance.
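The Az values quoted are areas under the ROC curve. Az can be computed directly from classifier scores via the rank-sum (Mann-Whitney) identity; this is a generic sketch with invented inputs, not code from the study:

```python
import numpy as np

def roc_auc(scores, labels):
    """Area under the ROC curve via the rank-sum identity:
    the probability that a randomly chosen positive case
    outscores a randomly chosen negative case (ties count half)."""
    scores = np.asarray(scores, float)
    labels = np.asarray(labels, bool)
    pos, neg = scores[labels], scores[~labels]
    wins = (pos[:, None] > neg[None, :]).sum() \
        + 0.5 * (pos[:, None] == neg[None, :]).sum()
    return wins / (len(pos) * len(neg))
```

With perfectly separated scores the function returns 1.0; chance-level classifiers hover near 0.5, the same scale on which the reported Az of 0.69 and 0.71 sit.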
flowAI: automatic and interactive anomaly discerning tools for flow cytometry data.
Monaco, Gianni; Chen, Hao; Poidinger, Michael; Chen, Jinmiao; de Magalhães, João Pedro; Larbi, Anis
2016-08-15
Flow cytometry (FCM) is widely used in both clinical and basic research to characterize cell phenotypes and functions. The latest FCM instruments analyze up to 20 markers of individual cells, producing high-dimensional data. This requires the use of the latest clustering and dimensionality reduction techniques to automatically segregate cell sub-populations in an unbiased manner. However, automated analyses may lead to false discoveries due to inter-sample differences in quality and properties. We present an R package, flowAI, containing two methods to clean FCM files of unwanted events: (i) an automatic method that adopts algorithms for the detection of anomalies and (ii) an interactive method with a graphical user interface implemented as an R Shiny application. The general approach behind the two methods consists of three key steps to check for and remove suspected anomalies that derive from (i) abrupt changes in the flow rate, (ii) instability of signal acquisition and (iii) outliers below the lower limit and margin events above the upper limit of the dynamic range. For each file analyzed, our software generates a summary of the quality assessment from the aforementioned steps. The software presented is an intuitive solution that seeks to improve the results not only of manual but, in particular, of automatic analyses of FCM data. R source code is available through Bioconductor: http://bioconductor.org/packages/flowAI/ CONTACTS: mongianni1@gmail.com or Anis_Larbi@immunol.a-star.edu.sg Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
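The first QC step above flags abrupt changes in the flow rate. A minimal stand-in for such a check, using a robust (median absolute deviation) z-score over binned event counts; this is illustrative only and is not the flowAI package's actual algorithm:

```python
import numpy as np

def flag_flow_rate_anomalies(events_per_bin, z=3.5):
    """Flag time bins whose event count deviates from the median by
    more than z robust standard deviations (MAD-based), mimicking the
    spirit of a flow-rate stability check."""
    x = np.asarray(events_per_bin, dtype=float)
    med = np.median(x)
    mad = np.median(np.abs(x - med))
    if mad == 0:  # perfectly flat flow rate: nothing to flag
        return np.zeros(len(x), dtype=bool)
    robust_z = 0.6745 * (x - med) / mad  # 0.6745 scales MAD to sigma
    return np.abs(robust_z) > z
```

A bin with a sudden burst of events (e.g. a clog clearing) stands out sharply against the MAD of the remaining bins, while normal counting noise stays below the threshold.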
Wolterink, Jelmer M; Leiner, Tim; de Vos, Bob D; Coatrieux, Jean-Louis; Kelm, B Michael; Kondo, Satoshi; Salgado, Rodrigo A; Shahzad, Rahil; Shu, Huazhong; Snoeren, Miranda; Takx, Richard A P; van Vliet, Lucas J; van Walsum, Theo; Willems, Tineke P; Yang, Guanyu; Zheng, Yefeng; Viergever, Max A; Išgum, Ivana
2016-05-01
The amount of coronary artery calcification (CAC) is a strong and independent predictor of cardiovascular disease (CVD) events. In clinical practice, CAC is manually identified and automatically quantified in cardiac CT using commercially available software. This is a tedious and time-consuming process in large-scale studies. Therefore, a number of automatic methods that require no interaction and semiautomatic methods that require very limited interaction for the identification of CAC in cardiac CT have been proposed. Thus far, a comparison of their performance has been lacking. The objective of this study was to perform an independent evaluation of (semi)automatic methods for CAC scoring in cardiac CT using a publicly available standardized framework. Cardiac CT exams of 72 patients distributed over four CVD risk categories were provided for (semi)automatic CAC scoring. Each exam consisted of a noncontrast-enhanced calcium scoring CT (CSCT) and a corresponding coronary CT angiography (CCTA) scan. The exams were acquired in four different hospitals using state-of-the-art equipment from four major CT scanner vendors. The data were divided into 32 training exams and 40 test exams. A reference standard for CAC in CSCT was defined by consensus of two experts following a clinical protocol. The framework organizers evaluated the performance of (semi)automatic methods on test CSCT scans, per lesion, artery, and patient. Five (semi)automatic methods were evaluated. Four methods used both CSCT and CCTA to identify CAC, and one method used only CSCT. The evaluated methods correctly detected between 52% and 94% of CAC lesions with positive predictive values between 65% and 96%. Lesions in distal coronary arteries were most commonly missed and aortic calcifications close to the coronary ostia were the most common false positive errors. The majority (between 88% and 98%) of correctly identified CAC lesions were assigned to the correct artery. 
Linearly weighted Cohen's kappa for patient CVD risk categorization by the evaluated methods ranged from 0.80 to 1.00. A publicly available standardized framework for the evaluation of (semi)automatic methods for CAC identification in cardiac CT is described. An evaluation of five (semi)automatic methods within this framework shows that automatic per patient CVD risk categorization is feasible. CAC lesions at ambiguous locations such as the coronary ostia remain challenging, but their detection had limited impact on CVD risk determination.
Pre-trained D-CNN models for detecting complex events in unconstrained videos
NASA Astrophysics Data System (ADS)
Robinson, Joseph P.; Fu, Yun
2016-05-01
Rapid event detection faces an emergent need to process large video collections; whether for surveillance videos or unconstrained web videos, the ability to automatically recognize high-level, complex events is a challenging task. Motivated by pre-existing methods being complex, computationally demanding, and often non-replicable, we designed a simple system that is quick, effective and carries minimal overhead in terms of memory and storage. Our system is clearly described, modular in nature, replicable on any desktop, and demonstrated with extensive experiments, backed by insightful analysis of different Convolutional Neural Networks (CNNs), both stand-alone and fused with others. With a large corpus of unconstrained, real-world video data, we examine the usefulness of different CNN models as feature extractors for modeling high-level events, i.e., pre-trained CNNs that differ in architectures, training data, and number of outputs. For each CNN, we sample frames at 1 fps from all training exemplars to train one-vs-rest SVMs for each event. To represent videos, frame-level features were fused using a variety of techniques, the best being to max-pool between predetermined shot boundaries and then average-pool to form the final video-level descriptor. Through extensive analysis, several insights were found on using pre-trained CNNs as off-the-shelf feature extractors for the task of event detection. Fusing SVMs of different CNNs revealed some combinations to be complementary. It was concluded that no single CNN works best for all events, as some events are more object-driven while others are more scene-based. Our top performance resulted from learning event-dependent weights for different CNNs.
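The best-performing pooling scheme described above (max-pool frame features within each shot, then average-pool across shots) can be sketched as follows; the feature array and shot boundaries are illustrative inputs, not the paper's data:

```python
import numpy as np

def video_descriptor(frame_feats, shot_bounds):
    """Pool frame-level CNN features into one video-level descriptor.

    frame_feats: (n_frames, dim) array of per-frame CNN features.
    shot_bounds: list of (start, end) frame-index pairs, one per shot.
    Max-pool within each shot, then average-pool across shots.
    """
    shot_feats = [frame_feats[s:e].max(axis=0) for s, e in shot_bounds]
    return np.mean(shot_feats, axis=0)
```

The resulting fixed-length vector is what would be fed to the per-event one-vs-rest SVMs.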
Schwartz, Frank L; Vernier, Stanley J; Shubrook, Jay H; Marling, Cynthia R
2010-11-01
We have developed a prototypical case-based reasoning system to enhance management of patients with type 1 diabetes mellitus (T1DM). The system is capable of automatically analyzing large volumes of life events, self-monitoring of blood glucose readings, continuous glucose monitoring system results, and insulin pump data to detect clinical problems. In a preliminary study, manual entry of large volumes of life-event and other data was too burdensome for patients. In this study, life-event and pump data collection were automated, and then the system was reevaluated. Twenty-three adult T1DM patients on insulin pumps completed the five-week study. A usual daily schedule was entered into the database, and patients were only required to upload their insulin pump data to Medtronic's CareLink® Web site weekly. Situation assessment routines were run weekly for each participant to detect possible problems, and once the trial was completed, the case-retrieval module was tested. Using the situation assessment routines previously developed, the system found 295 possible problems. The enhanced system detected only 2.6 problems per patient per week compared to 4.9 problems per patient per week in the preliminary study (p=.017). Problems detected by the system were correctly identified in 97.9% of the cases, and 96.1% of these were clinically useful. With less life-event data, the system is unable to detect certain clinical problems and detects fewer problems overall. Additional work is needed to provide device/software interfaces that allow patients to provide this data quickly and conveniently. © 2010 Diabetes Technology Society.
PAGER - Rapid Assessment of an Earthquake's Impact
Earle, Paul S.; Wald, David J.
2007-01-01
PAGER (Prompt Assessment of Global Earthquakes for Response) is an automated system to rapidly assess the number of people and regions exposed to severe shaking by an earthquake, and to inform emergency responders, government agencies, and the media of the scope of the potential disaster. PAGER monitors the U.S. Geological Survey's near real-time U.S. and global earthquake detections and automatically identifies events that are of societal importance, well in advance of ground-truth or news accounts.
Bartoletti, A; Bocconcelli, P; De Santo, T; Ghidini Ottonelli, A; Giuli, S; Massa, R; Svetlich, C; Tarsi, G; Corbucci, G; Tronconi, F; Vitale, E
2013-08-01
The aim of the study was to compare the diagnostic yield of implantable loop recorders (ILR) of two successive generations for the assessment of syncope. Data on patients who had undergone ILR implantation for unexplained syncope in four Italian public hospitals were retrospectively acquired from the Medtronic Clinical Service database. After implantation, routine follow-up examinations were performed every 90 days, while urgent examinations were carried out in the event of syncope recurrence. The following findings were regarded as diagnostic: ECG documentation of a syncope recurrence; documentation of any of the arrhythmias listed by the current guidelines as diagnostic findings, even if asymptomatic. Between November 2002 and March 2010, 107 patients received an ILR (40 Medtronic Reveal® Plus; 67 Medtronic Reveal® DX/XT) and underwent at least one follow-up examination. Diagnoses were made in 7 (17.5%) and 24 (35.8%) (P=0.043) patients, with a median time of 228 and 65 days, respectively. Three (42.9%) and 21 (87.5%) (P=0.029) diagnoses were based on automatically detected events, while adverse outcomes occurred in 6 and in 1 (P=0.01) patients, respectively. Our results show that the new-generation device offers a higher diagnostic yield, mainly as a result of its improved automatic detection function, and is associated with fewer adverse outcomes.
Building a time-saving and adaptable tool to report adverse drug events.
Parès, Yves; Declerck, Gunnar; Hussain, Sajjad; Ng, Romain; Jaulent, Marie-Christine
2013-01-01
The difficult task of detecting adverse drug events (ADEs) and the tedious process of building manual reports of ADE occurrences out of patient profiles result in a majority of adverse reactions not being reported to health regulatory authorities. The SALUS individual case safety report (ICSR) reporting tool, a component currently developed within the SALUS project, aims to support semi-automatic reporting of ADEs to regulatory authorities. In this paper, we present the initial design and current state of our ICSR reporting tool, which features: (i) automatic pre-population of reporting forms through extraction of the patient data contained in an Electronic Health Record (EHR); (ii) generation and electronic submission of the completed ICSRs by the physician to regulatory authorities; and (iii) integration of the reporting process into the physician's workflow to limit the disturbance. The objective is to increase the rates of ADE reporting and the quality of the reported data. The SALUS interoperability platform supports patient data extraction independently of the EHR data model in use and allows generation of reports in the format expected by regulatory authorities.
NASA Astrophysics Data System (ADS)
Heleno, S.; Matias, M.; Pina, P.; Sousa, A. J.
2015-09-01
A method for semi-automatic landslide detection, with the ability to separate source and run-out areas, is presented in this paper. It combines object-based image analysis and a Support Vector Machine classifier on a GeoEye-1 multispectral image, sensed 3 days after the major damaging landslide event that occurred on Madeira island (20 February 2010), with a pre-event LIDAR Digital Elevation Model. The testing is developed in a 15 km² study area, where 95% of the landslide scars are detected by this supervised approach. The classifier presents a good performance in the delineation of the overall landslide area. In addition, fair results are achieved in the separation of the source from the run-out landslide areas, although on less-illuminated slopes this discrimination is less effective than on sunnier, east-facing slopes.
NASA Astrophysics Data System (ADS)
Karpov, S.; Beskin, G.; Biryukov, A.; Bondar, S.; Ivanov, E.; Katkova, E.; Perkov, A.; Sasyuk, V.
2016-06-01
Here we present a summary of the first years of operation and first results of a novel 9-channel wide-field optical monitoring system with sub-second temporal resolution, Mini-MegaTORTORA (MMT-9), which is now in operation at the Special Astrophysical Observatory in the Russian Caucasus. The system is able to observe the sky simultaneously in either wide (~900 square degrees) or narrow (~100 square degrees) fields of view, either in clear light or with any combination of color (Johnson-Cousins B, V or R) and polarimetric filters installed, with exposure times ranging from 0.1 s to hundreds of seconds. The real-time data analysis pipeline performs automatic detection of rapid transient events, both near-Earth and extragalactic. The objects routinely detected by MMT include faint meteors and artificial satellites. The pipeline for longer-time-scale variability analysis is still in development.
Sequence of eruptive events in the Vesuvio area recorded in shallow-water Ionian Sea sediments
NASA Astrophysics Data System (ADS)
Taricco, C.; Alessio, S.; Vivaldo, G.
2008-01-01
The dating of the cores we drilled from the Gallipoli terrace in the Gulf of Taranto (Ionian Sea), previously obtained by tephroanalysis, is checked by applying a method to objectively recognize volcanic events. This automatic statistical procedure allows identifying pulse-like features in a series and evaluating quantitatively the confidence level at which the significant peaks are detected. We applied it to the 2000-year-long pyroxene series of the GT89-3 core, on which the dating is based. The method confirms the dating previously performed by detecting at a high confidence level the peaks originally used, and indicates a few possible undocumented eruptions. Moreover, a spectral analysis, focused on the long-term variability of the pyroxene series and performed with several advanced methods, reveals that the volcanic pulses are superimposed on a millennial trend and a 400-year oscillation.
An Analysis of Eruptions Detected by the LMSAL Eruption Patrol
NASA Astrophysics Data System (ADS)
Hurlburt, N. E.; Higgins, P. A.; Jaffey, S.
2014-12-01
Observations of the solar atmosphere reveal a wide range of real and apparent motions, from small-scale jets and spicules to global-scale coronal mass ejections. Identifying and characterizing these motions is essential to advancing our understanding of the drivers of space weather. Both automated and visual identification are used for CMEs. To date, their precursors, eruptions near the solar surface, have been identified primarily by visual inspection. Here we report on an analysis of the eruptions detected by the Eruption Patrol, a data mining module designed to automatically identify eruptions from data collected by the Solar Dynamics Observatory's Atmospheric Imaging Assembly (SDO/AIA). We describe the module and use it both to explore relations with other solar events recorded in the Heliophysics Event Knowledgebase and to identify and access data collected by the Interface Region Imaging Spectrograph (IRIS) and the Solar Optical Telescope (SOT) on Hinode for further analysis.
NASA Astrophysics Data System (ADS)
Le Bras, R.; Rozhkov, M.; Bobrov, D.; Kitov, I. O.; Sanina, I.
2017-12-01
The association of weak seismic signals generated by low-magnitude aftershocks of the DPRK underground tests into event hypotheses represents a challenge for routine automatic and interactive processing at the International Data Centre (IDC) of the Comprehensive Nuclear-Test-Ban Treaty Organization, due to the relatively low station density of the International Monitoring System (IMS) seismic network. Since 2011, as an alternative, the IDC has been testing various prototype techniques of signal detection and event creation based on waveform cross correlation. Using signals measured by seismic stations of the IMS from DPRK explosions as waveform templates, the IDC detected several small (estimated mb between 2.2 and 3.6) seismic events after the two DPRK tests conducted on September 9, 2016 and September 3, 2017. The obtained detections were associated into reliable event hypotheses and then used to locate these events relative to the epicenters of the DPRK explosions. We observe high similarity of the detected signals with the corresponding waveform templates. The newly found signals also correlate well among themselves. In addition, the signal-to-noise ratios (SNR) estimated from the traces of cross correlation coefficients increase with template length (from 5 s to 150 s), providing strong evidence in favour of their spatial closeness, which allows interpreting them as explosion aftershocks. We estimated the relative magnitudes of all aftershocks using the ratio of RMS amplitudes of the master and slave signals in the cross correlation windows characterized by the highest SNR. Additional waveform data from regional non-IMS stations MDJ and SEHB provide independent validation of these aftershock hypotheses. Since waveform templates from any single master event may perform poorly at some stations, we have also developed a method that jointly uses templates from the DPRK tests and the largest aftershocks to build more robust event hypotheses.
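The relative-magnitude step uses the RMS amplitude ratio between the master (template) signal and the detected signal in the best-correlating window. Under the common log10-ratio convention (an assumption here; the abstract does not spell out the formula), a sketch:

```python
import numpy as np

def relative_magnitude(master_mag, master_seg, slave_seg):
    """Estimate a detected event's magnitude from the master event's
    magnitude and the RMS amplitude ratio in the correlation window,
    using the common convention dM = log10(RMS_slave / RMS_master)."""
    rms = lambda x: np.sqrt(np.mean(np.square(x)))
    return master_mag + np.log10(rms(slave_seg) / rms(master_seg))
```

A tenfold amplitude ratio thus maps to a one-unit magnitude difference, consistent with the logarithmic definition of magnitude scales.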
Cai, Yi; Du, Jingcheng; Huang, Jing; Ellenberg, Susan S; Hennessy, Sean; Tao, Cui; Chen, Yong
2017-07-05
Identifying safety signals by manual review of individual reports in large surveillance databases is time consuming; such an approach is very unlikely to reveal complex relationships between medications and adverse events. Since the late 1990s, efforts have been made to develop data mining tools to systematically and automatically search for safety signals in surveillance databases. Influenza vaccines present special challenges to safety surveillance because the vaccine changes every year in response to the influenza strains predicted to be prevalent that year. Therefore, it may be expected that reporting rates of adverse events following flu vaccines (the number of reports for a specific vaccine-event combination divided by the number of reports for all vaccine-event combinations) vary substantially across reporting years. Current surveillance methods seldom consider these variations in signal detection, and reports from different years are typically collapsed together to conduct safety analyses. However, merging reports from different years ignores the potential heterogeneity of reporting rates across years and may miss important safety signals. Reports of adverse events between 1990 and 2013 were extracted from the Vaccine Adverse Event Reporting System (VAERS) database and formatted into a three-dimensional data array with type of vaccine, group of adverse events and reporting time as the three dimensions. We propose a random effects model to test the heterogeneity of reporting rates for a given vaccine-event combination across reporting years. The proposed method provides a rigorous statistical procedure to detect differences in reporting rates among years. We also introduce a new visualization tool to summarize the results of the proposed method when applied to multiple vaccine-adverse event combinations. We applied the proposed method to detect safety signals of FLU3, an influenza vaccine containing three flu strains, in the VAERS database.
We showed that it had high statistical power to detect variation in reporting rates across years. The identified vaccine-event combinations with significantly different reporting rates over the years suggest potential safety issues due to changes in the vaccines, which require further investigation. We developed a statistical model to detect safety signals arising from heterogeneity in the reporting rates of a given vaccine-event combination across reporting years. This method detects variation in reporting rates over years with high power. The temporal trend of reporting rates across years may reveal the impact of vaccine updates on the occurrence of adverse events and provide evidence for further investigation.
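The paper's random-effects model is not reproduced here. As a simpler illustration of what "testing heterogeneity of reporting rates across years" means, a Pearson chi-square homogeneity statistic over a 2-by-k table of yearly counts (the counts below are invented):

```python
def heterogeneity_chi2(event_counts, total_counts):
    """Pearson chi-square statistic for homogeneity of reporting rates
    across years. A simpler stand-in for the paper's random-effects
    test, shown only to illustrate the idea.

    event_counts[i]: reports of one vaccine-event combination in year i.
    total_counts[i]: all reports for that vaccine in year i.
    Compare the statistic against a chi-square distribution with
    (number of years - 1) degrees of freedom.
    """
    pooled = sum(event_counts) / sum(total_counts)
    stat = 0.0
    for e, n in zip(event_counts, total_counts):
        expected = n * pooled
        stat += (e - expected) ** 2 / expected            # "event" cell
        stat += (expected - e) ** 2 / (n - expected)      # "non-event" cell
    return stat
```

Constant reporting rates give a statistic near zero; a year with a sharply different rate inflates it well past the usual critical values, which is the kind of signal the random-effects model formalizes.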
Flow detection via sparse frame analysis for suspicious event recognition in infrared imagery
NASA Astrophysics Data System (ADS)
Fernandes, Henrique C.; Batista, Marcos A.; Barcelos, Celia A. Z.; Maldague, Xavier P. V.
2013-05-01
It is becoming increasingly evident that intelligent systems are very beneficial for society and that the further development of such systems is necessary to continue to improve society's quality of life. One area that has drawn the attention of recent research is the development of automatic surveillance systems. In our work we outline a system capable of monitoring an uncontrolled area (an outside parking lot) using infrared imagery and recognizing suspicious events in this area. The first step is to identify moving objects and segment them from the scene's background. Our approach is based on a dynamic background-subtraction technique which robustly adapts detection to illumination changes. To segment moving objects, only regions where movement is occurring are analyzed, ignoring the influence of pixels from regions without movement. Regions where movement is occurring are identified using flow detection via sparse frame analysis. During the tracking process the objects are classified into two categories, persons and vehicles, based on features such as size and velocity. The last step is to recognize suspicious events that may occur in the scene. Since the objects are correctly segmented and classified, it is possible to identify those events using features such as velocity and time spent motionless in one spot. In this paper we recognize the suspicious event "suspicion of theft of object(s) from inside a vehicle parked at spot X by a person", and results show that the use of flow detection increases the recognition of this suspicious event from 78.57% to 92.85%.
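The segmentation stage rests on adaptive background subtraction. A generic one-step sketch of that idea (not the paper's sparse-frame flow detector; parameters and array shapes are illustrative):

```python
import numpy as np

def segment_moving(frame, bg, alpha=0.05, thresh=25):
    """One step of adaptive background subtraction: pixels differing
    from the background model by more than `thresh` are foreground;
    background pixels are updated by a running average so the model
    tracks slow illumination changes, as the paper's approach requires.
    Returns (foreground_mask, updated_background)."""
    diff = np.abs(frame.astype(float) - bg)
    fg = diff > thresh
    # only background pixels feed the running average
    new_bg = np.where(fg, bg, (1 - alpha) * bg + alpha * frame)
    return fg, new_bg
```

Freezing the model at foreground pixels prevents a parked-then-moving object from being absorbed into the background, while the running average absorbs gradual lighting drift.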
Automatic detection of confusion in elderly users of a web-based health instruction video.
Postma-Nilsenová, Marie; Postma, Eric; Tates, Kiek
2015-06-01
Because of cognitive limitations and lower health literacy, many elderly patients have difficulty understanding verbal medical instructions. Automatic detection of facial movements provides a nonintrusive basis for building technological tools supporting confusion detection in healthcare delivery applications on the Internet. Twenty-four elderly participants (70-90 years old) were recorded while watching Web-based health instruction videos involving easy and complex medical terminology. Relevant fragments of the participants' facial expressions were rated by 40 medical students for perceived level of confusion and analyzed with automatic software for facial movement recognition. A computer classification of the automatically detected facial features performed more accurately and with a higher sensitivity than the human observers (automatic detection and classification, 64% accuracy, 0.64 sensitivity; human observers, 41% accuracy, 0.43 sensitivity). A drill-down analysis of cues to confusion indicated the importance of the eye and eyebrow region. Confusion caused by misunderstanding of medical terminology is signaled by facial cues that can be automatically detected with currently available facial expression detection technology. The findings are relevant for the development of Web-based services for healthcare consumers.
Automatic arrival time detection for earthquakes based on Modified Laplacian of Gaussian filter
NASA Astrophysics Data System (ADS)
Saad, Omar M.; Shalaby, Ahmed; Samy, Lotfy; Sayed, Mohammed S.
2018-04-01
Precise identification of an earthquake's onset time is imperative for the accurate determination of its location and of other parameters used in building seismic catalogues. The P-wave arrival of weak events or micro-earthquakes cannot be precisely determined due to background noise. In this paper, we propose a novel approach based on a Modified Laplacian of Gaussian (MLoG) filter to detect the onset time even at very low signal-to-noise ratios (SNRs). The proposed algorithm utilizes a denoising filter to smooth the background noise, employing the MLoG mask to filter the seismic data. Afterward, we apply a dual-threshold comparator to detect the onset time of the event. The results show that the proposed algorithm can accurately detect the onset time of micro-earthquakes at an SNR of -12 dB, achieving an onset time picking accuracy of 93% with a standard deviation error of 0.10 s on 407 field seismic waveforms. We also compare the results with the short-term/long-term average (STA/LTA) algorithm and the Akaike Information Criterion (AIC); the proposed algorithm outperforms both.
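The MLoG picker itself is not specified in the abstract, but the STA/LTA baseline it is compared against is standard and can be sketched as follows (window lengths and the threshold are typical illustrative values, not the paper's settings):

```python
import numpy as np

def sta_lta_pick(trace, fs, sta_win=0.1, lta_win=1.0, threshold=3.0):
    """Classic short-term/long-term average picker (the baseline, not
    the MLoG method). Returns the index of the first sample where the
    STA/LTA energy ratio exceeds `threshold`, or None.

    fs: sampling rate in Hz; sta_win/lta_win: window lengths in s.
    """
    sta_n, lta_n = int(sta_win * fs), int(lta_win * fs)
    energy = trace.astype(float) ** 2
    csum = np.concatenate(([0.0], np.cumsum(energy)))  # prefix sums
    for i in range(lta_n, len(trace) - sta_n):
        sta = (csum[i + sta_n] - csum[i]) / sta_n   # short window ahead
        lta = (csum[i] - csum[i - lta_n]) / lta_n   # long window behind
        if lta > 0 and sta / lta >= threshold:
            return i
    return None
```

On weak events the energy step is small relative to the noise floor, which is exactly where such threshold pickers degrade and denoising-based approaches like MLoG are claimed to help.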
Detecting Service Chains and Feature Interactions in Sensor-Driven Home Network Services
Inada, Takuya; Igaki, Hiroshi; Ikegami, Kosuke; Matsumoto, Shinsuke; Nakamura, Masahide; Kusumoto, Shinji
2012-01-01
Sensor-driven services often cause chain reactions, since one service may generate an environmental impact that automatically triggers another service. We first propose a framework that can formalize and detect such service chains based on ECA (event, condition, action) rules. Although the service chain can be a major source of feature interactions, not all service chains lead to harmful interactions. Therefore, we then propose a method that identifies feature interactions within the service chains. Specifically, we characterize the degree of deviation of every service chain by evaluating the gap between expected and actual service states. An experimental evaluation demonstrates that the proposed method successfully detects 11 service chains and 6 feature interactions within 7 practical sensor-driven services. PMID:23012499
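The ECA-based chain detection described above amounts to matching one rule's environmental effect against another rule's triggering event. A toy sketch with invented home-service rules (the rule names and effects are illustrative, not from the paper):

```python
# Hypothetical ECA rules: each service fires on an event and produces
# an environmental effect that may trigger another service.
rules = {
    "AirConditioner": {"event": "temp_high", "effect": "temp_low"},
    "Heater":         {"event": "temp_low",  "effect": "temp_high"},
    "Ventilator":     {"event": "co2_high",  "effect": "co2_low"},
}

def detect_chains(rules):
    """Return pairs (a, b) where service a's environmental effect
    matches service b's triggering event, i.e. a service chain."""
    return sorted(
        (a, b)
        for a, ra in rules.items()
        for b, rb in rules.items()
        if a != b and ra["effect"] == rb["event"]
    )
```

In this toy configuration the air conditioner and heater trigger each other, an oscillating chain that would be a candidate feature interaction under the paper's deviation measure.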
DOE Office of Scientific and Technical Information (OSTI.GOV)
Thompson, David R.; Wagstaff, Kiri L.; Majid, Walid A.
2011-07-10
Recent investigations reveal an important new class of transient radio phenomena that occur on submillisecond timescales. Often, transient surveys' data volumes are too large to archive exhaustively. Instead, an online automatic system must excise impulsive interference and detect candidate events in real time. This work presents a case study using data from multiple geographically distributed stations to perform simultaneous interference excision and transient detection. We present several algorithms that incorporate dedispersed data from multiple sites, and report experiments with a commensal real-time transient detection system on the Very Long Baseline Array. We test the system using observations of pulsar B0329+54. The multiple-station algorithms enhanced sensitivity for detection of individual pulses. These strategies could improve detection performance for a future generation of geographically distributed arrays such as the Australian Square Kilometre Array Pathfinder and the Square Kilometre Array.
Detection and analysis of microseismic events using a Matched Filtering Algorithm (MFA)
NASA Astrophysics Data System (ADS)
Caffagni, Enrico; Eaton, David W.; Jones, Joshua P.; van der Baan, Mirko
2016-07-01
A new Matched Filtering Algorithm (MFA) is proposed for detecting and analysing microseismic events recorded by downhole monitoring of hydraulic fracturing. This method requires a set of well-located template (`parent') events, which are obtained using conventional microseismic processing and selected on the basis of high signal-to-noise (S/N) ratio and representative spatial distribution of the recorded microseismicity. Detection and extraction of `child' events are based on stacked, multichannel cross-correlation of the continuous waveform data, using the parent events as reference signals. The location of a child event relative to its parent is determined using an automated process, by rotation of the multicomponent waveforms into the ray-centred co-ordinates of the parent and maximizing the energy of the stacked amplitude envelope within a search volume around the parent's hypocentre. After correction for geometrical spreading and attenuation, the relative magnitude of the child event is obtained automatically using the ratio of stacked envelope peak with respect to its parent. Since only a small number of parent events require interactive analysis such as picking P- and S-wave arrivals, the MFA approach offers the potential for significant reduction in effort for downhole microseismic processing. Our algorithm also facilitates the analysis of single-phase child events, that is, microseismic events for which only one of the S- or P-wave arrivals is evident due to unfavourable S/N conditions. A real-data example using microseismic monitoring data from four stages of an open-hole slickwater hydraulic fracture treatment in western Canada demonstrates that a sparse set of parents (in this case, 4.6 per cent of the originally located events) yields a significant (more than fourfold) increase in the number of located events compared with the original catalogue.
Moreover, analysis of the new MFA catalogue suggests that this approach leads to more robust interpretation of the induced microseismicity and novel insights into dynamic rupture processes based on the average temporal (foreshock-aftershock) relationship of child events to parents.
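The stacked multichannel cross-correlation at the heart of the MFA can be sketched as follows (a brute-force illustration, not the authors' implementation; array shapes and the threshold are assumptions):

```python
import numpy as np

def stacked_ncc_detect(data, template, threshold=0.9):
    """Slide a multichannel template along continuous data, average the
    per-channel normalized cross-correlation, and flag sample indices where
    the stack exceeds the detection threshold.
    data: (n_channels, n_samples); template: (n_channels, n_temp)."""
    n_ch, n = data.shape
    m = template.shape[1]
    cc = np.zeros(n - m + 1)
    for ch in range(n_ch):
        t = template[ch] - template[ch].mean()
        t /= np.linalg.norm(t) + 1e-12
        for i in range(n - m + 1):
            w = data[ch, i:i + m] - data[ch, i:i + m].mean()
            cc[i] += np.dot(w, t) / (np.linalg.norm(w) + 1e-12)
    cc /= n_ch                       # stack over channels
    return np.where(cc > threshold)[0]
```

A real implementation would compute the correlations in the frequency domain for speed; the O(n·m) loop here only shows the logic.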
Social-aware Event Handling within the FallRisk Project.
De Backere, Femke; Van den Bergh, Jan; Coppers, Sven; Elprama, Shirley; Nelis, Jelle; Verstichel, Stijn; Jacobs, An; Coninx, Karin; Ongenae, Femke; De Turck, Filip
2017-01-09
With the rise of the Internet of Things, wearables and smartphones are moving to the foreground. Ambient Assisted Living solutions are, for example, created to facilitate ageing in place. Fall detection systems are one example of such systems. Currently, there exists a wide variety of fall detection systems using different methodologies and technologies. However, these systems often do not take into account the fall handling process, which starts after a fall is identified, or the process consists only of sending a notification. The FallRisk system delivers an accurate analysis of incidents occurring in the homes of older adults using several sensors and smart devices. Moreover, the input from these devices can be used to create a social-aware event handling process, which leads to assisting the older adult as soon as possible and in the best possible way. The FallRisk system consists of several components, located in different places. When an incident is identified by the FallRisk system, the event handling process will be followed to assess the fall incident and select the most appropriate caregiver, based on the input of the smartphones of the caregivers. In this process, availability and location are automatically taken into account. The event handling process was evaluated during a decision tree workshop to verify whether current-day practices reflect the requirements of all the stakeholders. Other knowledge uncovered during this workshop can be taken into account to further improve the process. The FallRisk system offers a way to detect fall incidents accurately and uses context information to assign the incident to the most appropriate caregiver. This way, the consequences of the fall are minimized and help arrives on location as quickly as possible. It could be concluded that the current guidelines on fall handling reflect the needs of the stakeholders.
However, current technological evolutions, such as the uptake of wearables and smartphones, enable the improvement of these guidelines, for example through the automatic ordering of the caregivers based on their location and availability.
MIDAS: Software for the detection and analysis of lunar impact flashes
NASA Astrophysics Data System (ADS)
Madiedo, José M.; Ortiz, José L.; Morales, Nicolás; Cabrera-Caño, Jesús
2015-06-01
Since 2009 we have been running a project to identify flashes produced by the impact of meteoroids on the surface of the Moon. For this purpose we employ small telescopes and high-sensitivity CCD video cameras. To automatically identify these events, a software package called MIDAS was developed and tested. This package can also perform the photometric analysis of these flashes and estimate the value of the luminous efficiency. In addition, we have implemented in MIDAS a new method to establish the likely source of the meteoroids (known meteoroid stream or sporadic background). The main features of this computer program are analyzed here, and some examples of lunar impact events are presented.
Unsupervised EEG analysis for automated epileptic seizure detection
NASA Astrophysics Data System (ADS)
Birjandtalab, Javad; Pouyan, Maziyar Baran; Nourani, Mehrdad
2016-07-01
Epilepsy is a neurological disorder which can, if not controlled, potentially cause unexpected death. It is crucial to have accurate automatic pattern recognition and data mining techniques to detect the onset of seizures and inform care-givers to help the patients. EEG signals are the preferred biosignals for diagnosis of epileptic patients. Most of the existing pattern recognition techniques used in EEG analysis leverage supervised machine learning algorithms. Since seizure data are heavily under-represented, such techniques are not always practical, particularly when labeled data are not sufficiently available or when disease progression is rapid and the corresponding EEG footprint pattern is not robust. Furthermore, EEG pattern change is highly individual-dependent and requires experienced specialists to annotate seizure and non-seizure events. In this work, we present an unsupervised technique to discriminate seizure and non-seizure events. We employ the power spectral density of EEG signals in different frequency bands as informative features to accurately cluster seizure and non-seizure events. The experimental results so far indicate more than 90% accuracy in clustering seizure and non-seizure events without any prior knowledge of the patient's history.
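A minimal sketch of this pipeline, assuming FFT-based band-power features and a two-cluster k-means (the band edges, initialization, and epoch handling here are illustrative, not the paper's exact settings):

```python
import numpy as np

# Conventional clinical EEG bands (Hz)
BANDS = {"delta": (1, 4), "theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}

def band_powers(epoch, fs):
    """Total periodogram power of one EEG epoch in each frequency band."""
    freqs = np.fft.rfftfreq(len(epoch), 1.0 / fs)
    psd = np.abs(np.fft.rfft(epoch)) ** 2 / len(epoch)
    return np.array([psd[(freqs >= lo) & (freqs < hi)].sum()
                     for lo, hi in BANDS.values()])

def kmeans2(X, iters=50):
    """Two-cluster k-means with deterministic farthest-point initialization."""
    c = np.stack([X[0], X[np.argmax(((X - X[0]) ** 2).sum(1))]]).astype(float)
    for _ in range(iters):
        lab = np.argmin(((X[:, None, :] - c[None]) ** 2).sum(-1), axis=1)
        c = np.stack([X[lab == k].mean(0) if np.any(lab == k) else c[k]
                      for k in (0, 1)])
    return lab
```

With features this simple, epochs dominated by different rhythms separate cleanly; real seizure/non-seizure discrimination would need the paper's full feature set and validation.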
NASA Astrophysics Data System (ADS)
Provost, F.; Malet, J. P.; Hibert, C.; Doubre, C.
2017-12-01
The Super-Sauze landslide is a clay-rich landslide located in the Southern French Alps. The landslide exhibits a complex pattern of deformation: a large number of rockfalls are observed in the 100 m high main scarp, while the deformation of the upper part of the accumulated material is mainly affected by material shearing along stable in-situ crests. Several fissures are locally observed. The shallowest layer of the accumulated material tends to behave in a brittle manner but may undergo fluidization and/or rapid acceleration. Previous studies have demonstrated the presence of a rich endogenous micro-seismicity associated with the deformation of the landslide. However, the lack of long-term seismic records and suitable processing chains prevented a full interpretation of the links between the external forcings, the deformation and the recorded seismic signals. Since 2013, two permanent seismic arrays have been installed in the upper part of the landslide. We here present the methodology adopted to process this dataset. The processing chain consists of a set of methods for automatic and robust detection, classification and location of the recorded seismicity. Thousands of events are detected and further automatically classified. The classification method is based on the description of the signal through attributes (e.g. waveform, spectral content properties). These attributes are used as inputs to classify the signal using a Random Forest machine-learning algorithm into four classes: endogenous micro-quakes, rockfalls, regional earthquakes and natural/anthropogenic noise. The endogenous landslide sources (i.e. micro-quakes and rockfalls) are further located. The location method is adapted to the type of event. The micro-quakes are located with a 3D velocity model derived from a seismic tomography campaign and an optimization of the first-arrival picking with the inter-trace correlation of the P-wave arrivals.
The rockfalls are located by optimizing the inter-trace correlation of the whole signal. We analyze the temporal relationships of the endogenous seismic events with rainfall and landslide displacements. Sub-families of landslide micro-quakes are also identified and an interpretation of their source mechanism is proposed from their signal properties and spatial location.
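The attribute-based classification stage could look roughly like the sketch below, assuming a scikit-learn Random Forest and a toy attribute set (duration, peak amplitude, dominant frequency); the real attribute catalogue used for landslide seismicity is considerably richer:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def attributes(trace, fs):
    """Toy attribute vector: duration (s), peak amplitude, dominant frequency (Hz)."""
    freqs = np.fft.rfftfreq(len(trace), 1.0 / fs)
    spec = np.abs(np.fft.rfft(trace))
    return [len(trace) / fs, float(np.abs(trace).max()), float(freqs[spec.argmax()])]

def train_classifier(traces, labels, fs):
    """Fit a Random Forest on per-event attribute vectors."""
    X = np.array([attributes(tr, fs) for tr in traces])
    return RandomForestClassifier(n_estimators=100, random_state=0).fit(X, labels)
```

In practice one would use the four classes named in the abstract as labels and cross-validate; the two synthetic classes below only demonstrate the mechanics.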
Code of Federal Regulations, 2014 CFR
2014-10-01
... the main reservoir air pressure. The operating valve handle for such connection shall be conveniently... automatically prevent loss of pressure in event of failure of the main reservoir air pressure. (c) Power reverse... automatically prevent the loss of pressure in the event of a failure of main air pressure and have storage...
Code of Federal Regulations, 2011 CFR
2011-10-01
... the main reservoir air pressure. The operating valve handle for such connection shall be conveniently... automatically prevent loss of pressure in event of failure of the main reservoir air pressure. (c) Power reverse... automatically prevent the loss of pressure in the event of a failure of main air pressure and have storage...
Code of Federal Regulations, 2012 CFR
2012-10-01
... the main reservoir air pressure. The operating valve handle for such connection shall be conveniently... automatically prevent loss of pressure in event of failure of the main reservoir air pressure. (c) Power reverse... automatically prevent the loss of pressure in the event of a failure of main air pressure and have storage...
Code of Federal Regulations, 2013 CFR
2013-10-01
... the main reservoir air pressure. The operating valve handle for such connection shall be conveniently... automatically prevent loss of pressure in event of failure of the main reservoir air pressure. (c) Power reverse... automatically prevent the loss of pressure in the event of a failure of main air pressure and have storage...
Traulsen, I; Breitenberger, S; Auer, W; Stamer, E; Müller, K; Krieter, J
2016-06-01
Lameness is an important issue in group-housed sows. Automatic detection systems are a beneficial diagnostic tool to support management. The aim of the present study was to evaluate data of a positioning system including acceleration measurements to detect lameness in group-housed sows. Data were acquired at the Futterkamp research farm from May 2012 until April 2013. In the gestation unit, 212 group-housed sows were equipped with an ear sensor to sample position and acceleration per sow and second. Three activity indices were calculated per sow and day: path length walked by a sow during the day (Path), number of squares (25×25 cm) visited during the day (Square) and variance of the acceleration measurement during the day (Acc). In addition, data on lameness treatments of the sows and a weekly lameness score were used as reference systems. To determine the influence of a lameness event, all indices were analysed in a linear random regression model. Test day, parity class and day before treatment had a significant influence on all activity indices (P<0.05). In healthy sows, the indices Path and Square increased with increasing parity, whereas Acc slightly decreased. The indices Path and Square showed a decreasing trend in a 14-day period before a lameness treatment and, to a smaller extent, before a lameness score of 2 (severe lameness). For the acceleration index (Acc), there was no obvious difference between lame and non-lame periods. In conclusion, positioning and acceleration measurements with ear sensors can be used to describe the activity pattern of sows. However, improvements in sampling rate and analysis techniques should be made for a practical application as an automatic lameness detection system.
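The three indices are straightforward to compute from per-second position and acceleration samples; the sketch below assumes metre units and the paper's 25 × 25 cm grid, with everything else simplified:

```python
import numpy as np

def activity_indices(xy, acc, cell=0.25):
    """xy: (n, 2) positions in metres over one day; acc: (n,) acceleration.
    Returns (Path, Square, Acc): path length walked, number of distinct
    25 x 25 cm grid cells visited, and the variance of the acceleration."""
    steps = np.diff(xy, axis=0)
    path = float(np.hypot(steps[:, 0], steps[:, 1]).sum())
    cells = {(int(x // cell), int(y // cell)) for x, y in xy}
    return path, len(cells), float(np.var(acc))
```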
Gupta, Rahul; Audhkhasi, Kartik; Lee, Sungbok; Narayanan, Shrikanth
2017-01-01
Non-verbal communication involves encoding, transmission and decoding of non-lexical cues and is realized using vocal (e.g. prosody) or visual (e.g. gaze, body language) channels during conversation. These cues perform the function of maintaining conversational flow, expressing emotions, and marking personality and interpersonal attitude. In particular, non-verbal cues in speech such as paralanguage and non-verbal vocal events (e.g. laughters, sighs, cries) are used to nuance meaning and convey emotions, mood and attitude. For instance, laughter is associated with affective expressions while fillers (e.g. um, ah) are used to hold floor during a conversation. In this paper we present an automatic non-verbal vocal events detection system focusing on the detection of laughter and fillers. We extend our system presented during the Interspeech 2013 Social Signals Sub-challenge (the winning entry in the challenge) for frame-wise event detection and test several schemes for incorporating local context during detection. Specifically, we incorporate context at two separate levels in our system: (i) the raw frame-wise features and, (ii) the output decisions. Furthermore, our system processes the output probabilities based on a few heuristic rules in order to reduce erroneous frame-based predictions. Our overall system achieves an Area Under the Receiver Operating Characteristics curve of 95.3% for detecting laughter and 90.4% for fillers on the test set drawn from the data specifications of the Interspeech 2013 Social Signals Sub-challenge. We perform further analysis to understand the interrelation between the features and the obtained results. Specifically, we conduct a feature sensitivity analysis and correlate it with each feature's stand-alone performance. The observations suggest that the trained system is more sensitive to a feature carrying higher discriminability, with implications towards a better system design. PMID:28713197
Potas, Jason Robert; de Castro, Newton Gonçalves; Maddess, Ted; de Souza, Marcio Nogueira
2015-01-01
Experimental electrophysiological assessment of evoked responses from regenerating nerves is challenging due to the typical complex response of events dispersed over various latencies and poor signal-to-noise ratio. Our objective was to automate the detection of compound action potential events and derive their latencies and magnitudes using a simple cross-correlation template comparison approach. For this, we developed an algorithm called Waveform Similarity Analysis. To test the algorithm, challenging signals were generated in vivo by stimulating sural and sciatic nerves, whilst recording evoked potentials at the sciatic nerve and tibialis anterior muscle, respectively, in animals recovering from sciatic nerve transection. Our template for the algorithm was generated based on responses evoked from the intact side. We also simulated noisy signals and examined the output of the Waveform Similarity Analysis algorithm with imperfect templates. Signals were detected and quantified using Waveform Similarity Analysis, which was compared to event detection, latency and magnitude measurements of the same signals performed by a trained observer, a process we called Trained Eye Analysis. The Waveform Similarity Analysis algorithm could successfully detect and quantify simple or complex responses from nerve and muscle compound action potentials of intact or regenerated nerves. Even with an incorrectly specified template, Waveform Similarity Analysis outperformed Trained Eye Analysis for predicting signal amplitude, but produced consistent latency errors for the simulated signals examined. Compared to the trained eye, Waveform Similarity Analysis is automatic, objective, does not rely on the observer to identify and/or measure peaks, and can detect small clustered events even when signal-to-noise ratio is poor.
Waveform Similarity Analysis provides a simple, reliable and convenient approach to quantify latencies and magnitudes of complex waveforms and therefore serves as a useful tool for studying evoked compound action potentials in neural regeneration studies. PMID:26325291
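The cross-correlation template comparison behind an approach like Waveform Similarity Analysis can be illustrated as follows (a simplified single-channel sketch, not the published implementation; latencies are reported in samples and the threshold is an assumption):

```python
import numpy as np

def wsa_detect(signal, template, threshold=0.8):
    """Slide a mean-removed template along the recording; every local
    similarity peak above `threshold` counts as an event. Returns a list of
    (latency_in_samples, relative_magnitude) pairs, where magnitude is the
    ratio of the windowed signal norm to the template norm."""
    m = len(template)
    t = template - template.mean()
    tn = np.linalg.norm(t)
    sims = np.empty(len(signal) - m + 1)
    mags = np.empty_like(sims)
    for i in range(len(sims)):
        w = signal[i:i + m] - signal[i:i + m].mean()
        wn = np.linalg.norm(w) + 1e-12
        sims[i] = np.dot(w, t) / (wn * tn)   # correlation coefficient
        mags[i] = wn / tn                    # scale relative to template
    return [(i, float(mags[i])) for i in range(1, len(sims) - 1)
            if sims[i] > threshold and sims[i] >= sims[i - 1] and sims[i] >= sims[i + 1]]
```

Because the similarity is amplitude-normalized, small clustered events can still cross the threshold even when absolute amplitudes are low, which mirrors the advantage claimed over visual peak picking.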
Video-tracker trajectory analysis: who meets whom, when and where
NASA Astrophysics Data System (ADS)
Jäger, U.; Willersinn, D.
2010-04-01
Unveiling unusual or hostile events by observing manifold moving persons in a crowd is a challenging task for human operators, especially when sitting in front of monitor walls for hours. Typically, hostile events are rare. Thus, due to tiredness and negligence the operator may miss important events. In such situations, an automatic alarming system is able to support the human operator. The system incorporates a processing chain consisting of (1) people tracking, (2) event detection, (3) data retrieval, and (4) display of relevant video sequence overlaid by highlighted regions of interest. In this paper we focus on the event detection stage of the processing chain mentioned above. In our case, the selected event of interest is the encounter of people. Although based on a rather simple trajectory analysis, this kind of event embodies great practical importance because it paves the way to answer the question "who meets whom, when and where". This, in turn, forms the basis to detect potential situations where e.g. money, weapons, drugs etc. are handed over from one person to another in crowded environments like railway stations, airports or busy streets and places. The input to the trajectory analysis comes from a multi-object video-based tracking system developed at IOSB, which is able to track multiple individuals within a crowd in real-time [1]. From this we calculate the inter-distances between all persons on a frame-to-frame basis. We use a sequence of simple rules based on the individuals' kinematics to detect the event mentioned above and output the frame number, the persons' IDs from the tracker and the pixel coordinates of the meeting position. Using this information, a data retrieval system may extract the corresponding part of the recorded video image sequence and finally allows for replaying the selected video clip with a highlighted region of interest to attract the operator's attention for further visual inspection.
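A sketch of the inter-distance rule: flag a meeting when two track IDs stay within a distance threshold for a minimum number of consecutive frames. The thresholds and track layout below are assumptions, and the paper's kinematic rules are richer than this:

```python
import numpy as np
from itertools import combinations

def detect_meetings(tracks, d_max=1.0, min_frames=10):
    """tracks: dict id -> (n_frames, 2) array of positions per frame.
    Returns (id_a, id_b, start_frame, end_frame, meeting_xy) tuples for
    every pair that stays closer than d_max for at least min_frames."""
    events = []
    for a, b in combinations(sorted(tracks), 2):
        close = np.linalg.norm(tracks[a] - tracks[b], axis=1) < d_max
        start = None
        for f, c in enumerate(np.append(close, False)):  # sentinel closes runs
            if c and start is None:
                start = f
            elif not c and start is not None:
                if f - start >= min_frames:
                    mid = (start + f) // 2
                    xy = tuple((tracks[a][mid] + tracks[b][mid]) / 2)
                    events.append((a, b, start, f - 1, xy))
                start = None
    return events
```

The reported frame range, ID pair, and meeting position correspond directly to the "who, when and where" outputs described in the abstract.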
Causality Patterns for Detecting Adverse Drug Reactions From Social Media: Text Mining Approach.
Bollegala, Danushka; Maskell, Simon; Sloane, Richard; Hajne, Joanna; Pirmohamed, Munir
2018-05-09
Detecting adverse drug reactions (ADRs) is an important task that has direct implications for the use of that drug. If we can detect previously unknown ADRs as quickly as possible, then this information can be provided to the regulators, pharmaceutical companies, and health care organizations, thereby potentially reducing drug-related morbidity and saving lives of many patients. A promising approach for detecting ADRs is to use social media platforms such as Twitter and Facebook. A high level of correlation between a drug name and an event may be an indication of a potential adverse reaction associated with that drug. Although numerous association measures have been proposed by the signal detection community for identifying ADRs, these measures are limited in that they detect correlations but often ignore causality. This study aimed to propose a causality measure that can detect an adverse reaction that is caused by a drug rather than merely being a correlated signal. To the best of our knowledge, this was the first causality-sensitive approach for detecting ADRs from social media. Specifically, the relationship between a drug and an event was represented using a set of automatically extracted lexical patterns. We then learned the weights for the extracted lexical patterns that indicate their reliability for expressing an adverse reaction of a given drug. Our proposed method obtains an ADR detection accuracy of 74% on a large-scale manually annotated dataset of tweets, covering a standard set of drugs and adverse reactions. By using lexical patterns, we can accurately detect the causality between drugs and adverse reaction-related events. ©Danushka Bollegala, Simon Maskell, Richard Sloane, Joanna Hajne, Munir Pirmohamed. Originally published in JMIR Public Health and Surveillance (http://publichealth.jmir.org), 09.05.2018.
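The pattern-weighting idea can be caricatured as follows; the patterns and weights here are invented for illustration, whereas the paper learns the weights from annotated data:

```python
import re

# Illustrative pattern -> reliability-weight table. Causal phrasings get high
# weight; mere co-occurrence gets low weight. These values are made up.
PATTERN_WEIGHTS = {
    r"\b{drug}\b.*\bgave me\b.*\b{event}\b": 0.9,    # causal phrasing
    r"\b{event}\b.*\bafter taking\b.*\b{drug}\b": 0.8,
    r"\b{drug}\b.*\band\b.*\b{event}\b": 0.1,        # mere co-occurrence
}

def causality_score(text, drug, event):
    """Sum the weights of all lexical patterns matching the tweet text."""
    text = text.lower()
    score = 0.0
    for pat, w in PATTERN_WEIGHTS.items():
        if re.search(pat.format(drug=re.escape(drug), event=re.escape(event)), text):
            score += w
    return score
```

A causality-sensitive detector ranks the first tweet below well above the second, even though both mention the drug and the event.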
Patton, John M.; Guy, Michelle R.; Benz, Harley M.; Buland, Raymond P.; Erickson, Brian K.; Kragness, David S.
2016-08-18
This report provides an overview of the capabilities and design of Hydra, the global seismic monitoring and analysis system used for earthquake response and catalog production at the U.S. Geological Survey National Earthquake Information Center (NEIC). Hydra supports the NEIC’s worldwide earthquake monitoring mission in areas such as seismic event detection, seismic data insertion and storage, seismic data processing and analysis, and seismic data output. The Hydra system automatically identifies seismic phase arrival times and detects the occurrence of earthquakes in near-real time. The system integrates and inserts parametric and waveform seismic data into discrete events in a database for analysis. Hydra computes seismic event parameters, including locations, multiple magnitudes, moment tensors, and depth estimates. Hydra supports the NEIC’s 24/7 analyst staff with a suite of seismic analysis graphical user interfaces. In addition to the NEIC’s monitoring needs, the system supports the processing of aftershock and temporary deployment data, and supports the NEIC’s quality assurance procedures. The Hydra system continues to be developed to expand its seismic analysis and monitoring capabilities.
Differences between automatically detected and steady-state fractional flow reserve.
Härle, Tobias; Meyer, Sven; Vahldiek, Felix; Elsässer, Albrecht
2016-02-01
Measurement of fractional flow reserve (FFR) has become a standard diagnostic tool in the catheterization laboratory. FFR evaluation studies were based on pressure recordings during steady-state maximum hyperemia. Commercially available computer systems detect the lowest Pd/Pa ratio automatically, which might not always be measured during steady-state hyperemia. We sought to compare the automatically detected FFR and true steady-state FFR. Pressure measurement traces of 105 coronary lesions from 77 patients with intermediate coronary lesions or multivessel disease were reviewed. In all patients, hyperemia had been achieved by intravenous adenosine administration using a dosage of 140 µg/kg/min. In 42 lesions (40%) automatically detected FFR was lower than true steady-state FFR. Mean bias was 0.009 (standard deviation 0.015, limits of agreement -0.02, 0.037). In 4 lesions (3.8%) the two methods led to different treatment recommendations; in all 4 cases instantaneous wave-free ratio confirmed steady-state FFR. Automatically detected FFR was slightly lower than steady-state FFR in more than one-third of cases. Consequently, interpretation of automatically detected FFR values closely below the cutoff value requires special attention.
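The difference between the two readings can be illustrated with a sketch: the automatic value is the global minimum beat-wise Pd/Pa, while a steady-state value averages over a window of stable beats. The window length and stability tolerance below are assumptions, not clinical definitions:

```python
import numpy as np

def automatic_ffr(pd, pa):
    """Lowest beat-wise Pd/Pa anywhere in the recording (console-style)."""
    return float((np.asarray(pd) / np.asarray(pa)).min())

def steady_state_ffr(pd, pa, win=5, tol=0.02):
    """Lowest mean Pd/Pa over any `win` consecutive beats whose ratios vary
    by less than `tol` -- a simple stand-in for steady-state hyperemia."""
    ratio = np.asarray(pd) / np.asarray(pa)
    best = None
    for i in range(len(ratio) - win + 1):
        w = ratio[i:i + win]
        if w.max() - w.min() < tol:
            m = float(w.mean())
            best = m if best is None else min(best, m)
    return best
```

A single transient dip in Pd/Pa pulls the automatic value below the steady-state value, which is exactly the bias direction the study reports.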
Detection of rain events in radiological early warning networks with spectro-dosimetric systems
NASA Astrophysics Data System (ADS)
Dąbrowski, R.; Dombrowski, H.; Kessler, P.; Röttger, A.; Neumaier, S.
2017-10-01
Short-term pronounced increases of the ambient dose equivalent rate due to rainfall are a well-known phenomenon. Increases of the same order of magnitude or even below may also be caused by a nuclear or radiological event, i.e. by artificial radiation. Hence, it is important to be able to identify natural rain events in dosimetric early warning networks and to distinguish them from radiological events. Novel spectrometric systems based on scintillators may be used to differentiate between the two scenarios, because the measured gamma spectra provide significant nuclide-specific information. This paper describes three simple, automatic methods to check whether an Ḣ*(10) increase is caused by a rain event or by artificial radiation. These methods were applied to measurements of three spectrometric systems based on CeBr3, LaBr3 and SrI2 scintillation crystals, investigated and tested for their practicability at a free-field reference site of PTB.
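One plausible spectrum-based check (an illustration of the nuclide-specific idea, not necessarily one of the paper's three methods) exploits the fact that rain spikes come from washed-out radon progeny (Pb-214/Bi-214), so the excess counts should concentrate in the progeny photopeak windows. The window limits and threshold below are assumptions:

```python
import numpy as np

# Energy windows (keV) around prominent Pb-214/Bi-214 lines (approx. 352,
# 609, 1120, 1764 keV). Window widths and the 0.8 threshold are illustrative.
PROGENY_WINDOWS = [(330, 375), (580, 640), (1100, 1160), (1700, 1800)]

def is_rain_event(energies, baseline, current, frac_min=0.8):
    """energies: bin centers (keV); baseline/current: count spectra.
    Classify the dose-rate increase as rain-induced if most of the excess
    counts fall inside the radon-progeny windows."""
    excess = np.clip(current - baseline, 0, None)
    total = excess.sum()
    if total == 0:
        return False
    in_win = np.zeros(len(energies), dtype=bool)
    for lo, hi in PROGENY_WINDOWS:
        in_win |= (energies >= lo) & (energies < hi)
    return bool(excess[in_win].sum() / total >= frac_min)
```

An artificial source such as Cs-137 (662 keV) deposits its excess outside these windows and would be flagged as non-rain.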
[The effects of interpretation bias for social events and automatic thoughts on social anxiety].
Aizawa, Naoki
2015-08-01
Many studies have demonstrated that individuals with social anxiety interpret ambiguous social situations negatively. It is, however, not clear whether the interpretation bias discriminatively contributes to social anxiety in comparison with depressive automatic thoughts. The present study investigated the effects of negative interpretation bias and automatic thoughts on social anxiety. The Social Intent Interpretation-Questionnaire, which measures the tendency to interpret ambiguous social events as implying others' rejective intents, the short Japanese version of the Automatic Thoughts Questionnaire-Revised, and the Anthropophobic Tendency Scale were administered to 317 university students. Covariance structure analysis indicated that both rejective intent interpretation bias and negative automatic thoughts contributed to mental distress in social situations, mediated by a sense of powerlessness and excessive concern about self and others in social situations. Positive automatic thoughts reduced mental distress. These results indicate the importance of interpretation bias and negative automatic thoughts in the development and maintenance of social anxiety. Implications for understanding the cognitive features of social anxiety were discussed.
NASA Technical Reports Server (NTRS)
Simpson, Amy A.; Wilson, Jennifer G.; Brown, Robert G.
2015-01-01
Data from multiple sources is needed to investigate lightning characteristics over differing terrain (on-shore vs. off-shore) by comparing natural cloud-to-ground lightning behavior differences depending on the characteristics of attachment mediums. The KSC Lightning Research Database (KLRD) was created to reduce manual data entry time and aid research by combining information from various data sources into a single record for each unique lightning event of interest. The KLRD uses automatic data handling functions to import data from a lightning detection network and identify and record lightning events of interest. Additional automatic functions import data from the NASA Buoy 41009 (located approximately 20 miles off the coast) and the KSC Electric Field Mill network, then match these electric field mill values to the corresponding lightning events. The KLRD calculates distances between each lightning event and the various electric field mills, aids in identifying the location type for each stroke (i.e., on-shore vs. off-shore, etc.), provides statistics on the number of strokes per flash, and produces customizable reports for quick retrieval and logical display of data. Data from February 2014 to date cover 48 unique storm dates with 2295 flashes containing 5700 strokes, of which 2612 are off-shore and 1003 are on-shore. The number of strokes per flash ranges from 1 to 22. The ratio of single to subsequent stroke flashes is 1.29 for off-shore strokes and 2.19 for on-shore strokes.
Tu, Yiheng; Huang, Gan; Hung, Yeung Sam; Hu, Li; Hu, Yong; Zhang, Zhiguo
2013-01-01
Event-related potentials (ERPs) are widely used in brain-computer interface (BCI) systems as input signals conveying a subject's intention. A fast and reliable single-trial ERP detection method can be used to develop a BCI system with both high speed and high accuracy. However, most single-trial ERP detection methods are developed for offline EEG analysis and thus have a high computational complexity and need manual operations. Therefore, they are not applicable to practical BCI systems, which require a low-complexity and automatic ERP detection method. This work presents a joint spatial-time-frequency filter that combines common spatial patterns (CSP) and wavelet filtering (WF) for improving the signal-to-noise ratio (SNR) of visual evoked potentials (VEPs), which can lead to a single-trial ERP-based BCI.
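The CSP stage of such a joint filter can be sketched with a generalized eigendecomposition (the wavelet-filtering stage is omitted, and regularization and trial weighting are simplified away):

```python
import numpy as np
from scipy.linalg import eigh

def csp_filters(trials_a, trials_b, n_filters=4):
    """Classic CSP. trials_*: lists of (n_channels, n_samples) arrays.
    Returns (n_filters, n_channels) spatial filters whose projections
    maximize class-A variance relative to the total variance."""
    def mean_cov(trials):
        covs = []
        for x in trials:
            x = x - x.mean(axis=1, keepdims=True)
            c = x @ x.T
            covs.append(c / np.trace(c))  # trace-normalize each trial
        return np.mean(covs, axis=0)
    ca, cb = mean_cov(trials_a), mean_cov(trials_b)
    w, v = eigh(ca, ca + cb)              # generalized eigenproblem
    order = np.argsort(w)[::-1]           # largest variance ratio first
    return v[:, order[:n_filters]].T
```

In a full pipeline, the CSP-projected signals would then be passed through the wavelet filter before single-trial detection.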
USDA-ARS?s Scientific Manuscript database
Assimilating accurate behavioral events over a long period can be labor intensive and relatively expensive. If an automatic device could accurately record the duration and frequency for a given behavioral event, it would be a valuable alternative to the traditional use of human observers for behavio...
Automated detection and characterization of harmonic tremor in continuous seismic data
NASA Astrophysics Data System (ADS)
Roman, Diana C.
2017-06-01
Harmonic tremor is a common feature of volcanic, hydrothermal, and ice sheet seismicity and is thus an important proxy for monitoring changes in these systems. However, no automated methods for detecting harmonic tremor currently exist. Because harmonic tremor shares characteristics with speech and music, digital signal processing techniques for analyzing these signals can be adapted. I develop a novel pitch-detection-based algorithm to automatically identify occurrences of harmonic tremor and characterize their frequency content. The algorithm is applied to seismic data from Popocatepetl Volcano, Mexico, and benchmarked against a monthlong manually detected catalog of harmonic tremor events. During a period of heightened eruptive activity from December 2014 to May 2015, the algorithm detects 1465 min of harmonic tremor, which generally precede periods of heightened explosive activity. These results demonstrate the algorithm's ability to accurately characterize harmonic tremor while highlighting the need for additional work to understand its causes and implications at restless volcanoes.
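A pitch detector of the kind adapted from speech processing can be sketched with a short-window autocorrelation; the band limits and clarity threshold below are illustrative, not the algorithm's published settings:

```python
import numpy as np

def detect_pitch(x, fs, fmin=0.5, fmax=10.0, clarity=0.6):
    """Autocorrelation pitch detector for one analysis window: returns the
    fundamental frequency (Hz) if the window looks harmonic, else None.
    `clarity` is the minimum ratio of the best autocorrelation peak to the
    zero-lag value; the window must be longer than fs/fmin samples."""
    x = np.asarray(x, dtype=float)
    x = x - x.mean()
    ac = np.correlate(x, x, mode="full")[len(x) - 1:]
    lo, hi = int(fs / fmax), int(fs / fmin)
    lag = lo + int(np.argmax(ac[lo:hi]))
    if ac[lag] / ac[0] < clarity:
        return None
    return fs / lag
```

Running this over successive windows of continuous data yields the kind of time series of fundamental frequencies (or harmonic/non-harmonic flags) that an automated tremor catalog needs.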
NASA Astrophysics Data System (ADS)
Di Stefano, R.; Chiaraluce, L.; Valoroso, L.; Waldhauser, F.; Latorre, D.; Piccinini, D.; Tinti, E.
2014-12-01
The Alto Tiberina Near Fault Observatory (TABOO) in the upper Tiber Valley (northern Apennines) is an INGV research infrastructure devoted to the study of preparatory processes and deformation characteristics of the Alto Tiberina Fault (ATF), a 60 km long, low-angle normal fault active since the Quaternary. The TABOO seismic network, covering an area of 120 × 120 km, consists of 60 permanent stations, at the surface and in boreholes up to 250 m deep, equipped with three-component 0.5 s to 120 s velocimeters and strong-motion sensors. Continuous seismic recordings are transmitted in real time to the INGV, where we have set up an automatic procedure that produces high-resolution earthquake catalogues (locations, magnitudes, first-motion polarities) in near-real time. A sensitive event detection engine running on the continuous data stream is followed by advanced phase identification, arrival-time picking, and quality assessment algorithms (MPX). Pick weights are determined from a statistical analysis of a set of predictors designed to correctly apply an a priori chosen weighting scheme. The MPX results are used to routinely update earthquake catalogues based on a variety of (1D and 3D) velocity models and location techniques. We are also applying the DD-RT procedure, which uses cross-correlation and double-difference methods in real time to relocate events with high precision relative to a high-resolution background catalog. P- and S-onset and location information are used to automatically compute focal mechanisms and VP/VS variations in space and time, and to periodically update 3D VP and VP/VS tomographic models. We present results from four years of operation, during which this monitoring system analyzed over 1.2 million detections and recovered ~60,000 earthquakes at a detection threshold of ML 0.5.
The high-resolution information is being used to study changes in seismicity patterns and fault and rock properties along the ATF in space and time, and to elaborate ground shaking scenarios adopting diverse slip distributions and rupture directivity models.
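The abstract does not state which algorithm the detection engine uses; a conventional STA/LTA energy-ratio trigger, sketched here with assumed window lengths and threshold, illustrates the kind of sensitive detector commonly run on continuous streams like TABOO's.

```python
import numpy as np

def sta_lta_trigger(trace, fs, sta_win=1.0, lta_win=30.0, threshold=4.0):
    """Classic STA/LTA detector: flag samples where the ratio of a
    short-term average to a long-term average of signal energy exceeds
    a threshold. Returns the sample indices of trigger onsets."""
    energy = trace.astype(float) ** 2
    ns, nl = int(sta_win * fs), int(lta_win * fs)
    sta = np.convolve(energy, np.ones(ns) / ns, mode="same")
    lta = np.convolve(energy, np.ones(nl) / nl, mode="same")
    ratio = sta / np.maximum(lta, 1e-20)
    above = ratio > threshold
    onsets = np.flatnonzero(above[1:] & ~above[:-1]) + 1   # rising edges
    return onsets
```

The long-term average adapts the trigger to slowly varying background noise, so the same threshold works across quiet and noisy intervals.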
2016-06-01
TECHNICAL REPORT: Algorithm for Automatic Detection, Localization and Characterization of Magnetic Dipole Targets Using the Laser Scalar Gradiometer. Leon Vaizer, Jesse Angle, Neil... June 2016.
Kohlmetz, C; Altenmüller, E; Schuppert, M; Wieringa, B M; Münte, T F
2001-01-01
Music perception deficits following acute neurological damage are thought to be rare. Using a newly devised test battery of music-perception skills, however, we were able to identify, among a group of 12 patients with acute hemispheric stroke, six patients with music perception deficits (amusia), while the other six had no such deficits. In addition, we recorded event-related brain potentials (ERPs) in a passive listening task with frequent standard and infrequent pitch-deviant stimuli designed to elicit the mismatch negativity (MMN). The MMN in the patients with amusia was grossly reduced, while the non-amusic patients and control subjects had MMNs of equal size. These data show that amusia is quite common in unselected stroke patients. The MMN reduction suggests that amusia is related to unspecific automatic stimulus classification deficits in these patients.
Automatic Detection of Landslides at Stromboli Volcano
NASA Astrophysics Data System (ADS)
Giudicepietro, F.; Esposito, A. M.; D'Auria, L.; Peluso, R.; Martini, M.
2011-12-01
In this work we present an automatic system for landslide detection at Stromboli volcano that has proved effective both during the 2007 effusive eruption and in the recent (2 August 2011) volcanic activity. The study of landslides at Stromboli is important because they can be considered short-term precursors of effusive eruptions, as seen during the 2007 eruption, and because they make it possible to improve the monitoring of the gravitational instabilities of the Sciara del Fuoco flank. The proposed system uses a two-class MLP (Multi-Layer Perceptron) neural network to discriminate landslides from other seismic signals usually recorded at Stromboli, such as explosion-quakes and volcanic tremor. To train and test the network we used a dataset of 537 signals, including 267 landslides and 270 other events (130 explosion-quakes and 140 tremor signals). The network performance is 98.7% when averaged over different network configurations, and 99.5% for the best configuration. Based on the neural network response, the automatic system calculates a Landslide Percentage Index (LPI), defined from the number of signals identified as landslides by the network over a given time interval, in order to recognize anomalies in the landslide rate. The system was sensitive to the signals produced by the flow of the lava front during a recent mild effusive episode on the "La Sciara del Fuoco" slope.
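The LPI computation can be sketched as follows, assuming (as the abstract suggests but does not fully specify) that the index is the fraction of recent detections the network labels as landslides within a sliding window; the window length is an illustrative choice.

```python
import numpy as np

def landslide_percentage_index(labels, window=50):
    """Landslide Percentage Index over a sliding window: the fraction of
    detected signals that the classifier labelled landslide (1) rather
    than other event (0) among the last `window` detections."""
    labels = np.asarray(labels, dtype=float)
    out = np.empty(len(labels))
    for i in range(len(labels)):
        start = max(0, i - window + 1)
        out[i] = labels[start:i + 1].mean()
    return out
```

A sustained rise of the index above its background level would flag an anomalous landslide rate, e.g. during flank instability episodes.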
Semi-automatic mapping of cultural heritage from airborne laser scanning using deep learning
NASA Astrophysics Data System (ADS)
Due Trier, Øivind; Salberg, Arnt-Børre; Holger Pilø, Lars; Tonning, Christer; Marius Johansen, Hans; Aarsten, Dagrun
2016-04-01
This paper proposes to use deep learning to improve semi-automatic mapping of cultural heritage from airborne laser scanning (ALS) data. Automatic detection methods, based on traditional pattern recognition, have been applied in a number of cultural heritage mapping projects in Norway over the past five years. Automatic detection of pits and heaps has been combined with visual interpretation of the ALS data for the mapping of deer hunting systems, iron production sites, grave mounds and charcoal kilns. However, the performance of the automatic detection methods varies substantially between ALS datasets. For the mapping of deer hunting systems on flat gravel and sand sediment deposits, the automatic detection results were almost perfect. However, some false detections appeared in the terrain outside of the sediment deposits. These could be explained by other pit-like landscape features, such as parts of river courses, spaces between boulders, and modern terrain modifications. These were easy to spot during visual interpretation, however, and the number of missed individual pitfall traps was still low. For the mapping of grave mounds, the automatic method produced a large number of false detections, reducing the usefulness of the semi-automatic approach. The mound structure is a very common natural terrain feature, and the grave mounds are less distinct in shape than the pitfall traps. Still, applying automatic mound detection to an entire municipality did lead to a new discovery of an Iron Age grave field with more than 15 individual mounds. Automatic mound detection also proved useful for a detailed re-mapping of Norway's largest Iron Age graveyard, which contains almost 1000 individual graves. Combined pit and mound detection has been applied to the mapping of more than 1000 charcoal kilns that were used by an ironworks 350 to 200 years ago. The majority of charcoal kilns were indirectly detected as either pits on the circumference, a central mound, or both.
However, kilns with a flat interior and a shallow ditch along the circumference were often missed by the automatic detection method. The success of automatic detection seems to depend on two factors: (1) the density of ALS ground hits on the cultural heritage structures being sought, and (2) the extent to which these structures stand out from natural terrain structures. The first factor may, to some extent, be improved by using a higher number of ALS pulses per square meter. The second factor is difficult to change, and it also highlights another challenge: how to make a general automatic method that is applicable to all types of terrain within a country. The mixed experience with traditional pattern recognition for semi-automatic mapping of cultural heritage led us to consider deep learning as an alternative approach. The main principle is that a general feature detector is trained on a large image database and then tailored to a specific task using a modest number of images of true and false examples of the features being sought. Results of using deep learning are compared with previous results using traditional pattern recognition.
NASA Astrophysics Data System (ADS)
Piatyszek, E.; Voignier, P.; Graillot, D.
2000-05-01
One of the aims of sewer networks is the protection of the population against floods and the reduction of pollution discharged into the receiving water during rainy events. To meet these goals, managers have to instrument the sewer networks and set up real-time control systems. Unfortunately, a component fault (leading to intolerable behaviour of the system) or sensor fault (deteriorating the process view and disturbing the local automatic control) makes sewer network supervision delicate. In order to ensure adequate flow management during rainy events it is essential to set up procedures capable of detecting and diagnosing these anomalies. This article introduces a real-time fault detection method, applicable to sewer networks, for the follow-up of rainy events. The method consists of comparing the sensor response with a forecast of that response. The forecast is provided by a model, more precisely by a state estimator: a Kalman filter. The Kalman filter provides not only a flow estimate but also a quantity called the 'innovation'. In order to detect abnormal operations within the network, this innovation is analysed with Wald's binary sequential probability ratio test. Moreover, by crossing available information from several nodes of the network, a diagnosis of the detected anomalies is carried out. The method provided encouraging results during the analysis of several rain events on the sewer network of Seine-Saint-Denis County, France.
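A minimal sketch of the detection core, combining a scalar random-walk Kalman filter with Wald's SPRT on the normalized innovations. The noise variances, drift hypothesis, and error rates are illustrative assumptions, not values calibrated for the Seine-Saint-Denis network.

```python
import numpy as np

def innovation_sprt(observations, q=0.01, r=1.0, shift=2.0, a=0.05, b=0.05):
    """Track a measurement stream with a scalar random-walk Kalman filter
    and run Wald's sequential probability ratio test on the normalized
    innovations: H0 = zero-mean innovations (normal operation),
    H1 = innovations shifted by `shift` standard deviations (fault).
    Returns the sample indices at which a fault alarm is raised."""
    upper = np.log((1 - b) / a)          # accept H1 (alarm) above this
    lower = np.log(b / (1 - a))          # accept H0 below this (reset)
    x, p = observations[0], 1.0
    llr, alarms = 0.0, []
    for k, z in enumerate(observations[1:], start=1):
        p = p + q                        # predict (random-walk model)
        s = p + r                        # innovation variance
        nu = z - x                       # innovation
        x = x + (p / s) * nu             # measurement update
        p = (1 - p / s) * p
        e = nu / np.sqrt(s)              # normalized innovation
        llr += shift * e - 0.5 * shift ** 2   # LLR step: N(shift,1) vs N(0,1)
        if llr >= upper:
            alarms.append(k)
            llr = 0.0                    # restart the test after an alarm
        elif llr <= lower:
            llr = 0.0                    # strong evidence for H0: reset
    return alarms
```

Under H0 the normalized innovations are zero-mean white noise, so the log-likelihood ratio drifts downward and keeps resetting; a sustained model-data mismatch drives it above the upper Wald boundary and raises an alarm.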
Luo, Jiaying; Xiao, Sichang; Qiu, Zhihui; Song, Ning; Luo, Yuanming
2013-04-01
Whether the therapeutic nasal continuous positive airway pressure (CPAP) derived from manual titration is the same as that derived from automatic titration is controversial. The purpose of this study was to compare the therapeutic pressure derived from manual titration with that derived from automatic titration. Fifty-one patients with obstructive sleep apnoea (OSA) (mean apnoea/hypopnoea index (AHI) = 50.6 ± 18.6 events/h) who were newly diagnosed after an overnight full polysomnography and who were willing to accept CPAP as a long-term treatment were recruited for the study. Manual titration during full polysomnography monitoring and unattended automatic titration with an automatic CPAP device (REMstar Auto) were performed. A separate cohort study of one hundred patients with OSA (AHI = 54.3 ± 18.9 events/h) was also performed by observing the efficacy of CPAP derived from manual titration. The treatment pressure derived from automatic titration (9.8 ± 2.2 cmH2O) was significantly higher than that derived from manual titration (7.3 ± 1.5 cmH2O; P < 0.001) in the 51 patients. The cohort study of 100 patients showed that AHI was satisfactorily decreased after CPAP treatment using a pressure derived from manual titration (54.3 ± 18.9 events/h before treatment and 3.3 ± 1.7 events/h after treatment; P < 0.001). The results suggest that the automatic titration pressure derived from the REMstar Auto is usually higher than the pressure derived from manual titration. © 2013 The Authors. Respirology © 2013 Asian Pacific Society of Respirology.
Ray, Laura B.; Sockeel, Stéphane; Soon, Melissa; Bore, Arnaud; Myhr, Ayako; Stojanoski, Bobby; Cusack, Rhodri; Owen, Adrian M.; Doyon, Julien; Fogel, Stuart M.
2015-01-01
A spindle detection method was developed that: (1) extracts the signal of interest (i.e., spindle-related phasic changes in sigma) relative to ongoing “background” sigma activity using complex demodulation, (2) accounts for variations of spindle characteristics across the night, scalp derivations and between individuals, and (3) employs a minimum number of sometimes arbitrary, user-defined parameters. Complex demodulation was used to extract instantaneous power in the spindle band. To account for intra- and inter-individual differences, the signal was z-score transformed using a 60 s sliding window, per channel, over the course of the recording. Spindle events were detected with a z-score threshold corresponding to a low probability (e.g., 99th percentile). Spindle characteristics, such as amplitude, duration and oscillatory frequency, were derived for each individual spindle following detection, which permits spindles to be subsequently and flexibly categorized as slow or fast spindles from a single detection pass. Spindles were automatically detected in 15 young healthy subjects. Two experts manually identified spindles from C3 during Stage 2 sleep, from each recording; one employing conventional guidelines, and the other, identifying spindles with the aid of a sigma (11–16 Hz) filtered channel. These spindles were then compared between raters and to the automated detection to identify the presence of true positives, true negatives, false positives and false negatives. This method of automated spindle detection resolves or avoids many of the limitations that complicate automated spindle detection, and performs well compared to a group of non-experts, and importantly, has good external validity with respect to the extant literature in terms of the characteristics of automatically detected spindles. PMID:26441604
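The two stages described above can be sketched as follows; the demodulation frequency, low-pass width, window length, and threshold are illustrative stand-ins (the paper's detector works per channel on sigma-band EEG with a 60 s sliding z-score window).

```python
import numpy as np

def spindle_band_power(eeg, fs, f0=13.5, bw=5.0):
    """Instantaneous band power via complex demodulation: shift the band
    of interest to 0 Hz, low-pass with a moving average, and take the
    squared magnitude of the complex envelope."""
    n = np.arange(len(eeg))
    demod = eeg * np.exp(-2j * np.pi * f0 * n / fs)   # shift f0 to DC
    width = max(1, int(fs / bw))                      # crude low-pass
    envelope = np.convolve(demod, np.ones(width) / width, mode="same")
    return np.abs(envelope) ** 2

def detect_events(power, fs, window_s=60.0, z_thresh=2.33):
    """Flag samples whose power exceeds a z-score threshold computed in a
    sliding window, normalizing away slow shifts in background activity."""
    w = int(window_s * fs)
    flags = np.zeros(len(power), dtype=bool)
    for i in range(len(power)):
        seg = power[max(0, i - w):i + 1]
        mu, sd = seg.mean(), seg.std()
        flags[i] = sd > 0 and (power[i] - mu) / sd > z_thresh
    return flags
```

Because the threshold is a z-score on locally normalized power, the same cutoff accommodates intra- and inter-individual differences in absolute sigma amplitude, which is the key design point of the method.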
Deep Recurrent Neural Network-Based Autoencoders for Acoustic Novelty Detection
Vesperini, Fabio; Schuller, Björn
2017-01-01
In the emerging field of acoustic novelty detection, most research efforts are devoted to probabilistic approaches such as mixture models or state-space models. Only recent studies have introduced (pseudo-)generative models for acoustic novelty detection with recurrent neural networks in the form of an autoencoder. In these approaches, auditory spectral features of the next short-term frame are predicted from the previous frames by means of Long Short-Term Memory recurrent denoising autoencoders. The reconstruction error between the input and the output of the autoencoder is used as an activation signal to detect novel events. No studies have focused on comparing previous efforts to automatically recognize novel events from audio signals or on giving a broad, in-depth evaluation of recurrent neural network-based autoencoders. The present contribution aims to consistently evaluate our recent novel approaches to fill this gap in the literature and to provide insight through extensive evaluations carried out on three databases: A3Novelty, PASCAL CHiME, and PROMETHEUS. Besides providing an extensive analysis of novel and state-of-the-art methods, the article shows how RNN-based autoencoders outperform statistical approaches by up to an absolute improvement of 16.4% in average F-measure over the three databases. PMID:28182121
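The decision rule at the heart of these systems is reconstruction-error thresholding. The sketch below replaces the paper's LSTM denoising autoencoder with a linear (PCA) autoencoder purely to keep the example self-contained; the class name, component count, and percentile threshold are all assumptions, not the published setup.

```python
import numpy as np

class ReconstructionNoveltyDetector:
    """Novelty detection by reconstruction error: frames that a low-rank
    model of 'normal' audio features cannot reconstruct well are flagged
    as novel. A linear (PCA) autoencoder stands in for the recurrent
    autoencoders discussed in the abstract."""

    def __init__(self, n_components=4, percentile=99.0):
        self.k = n_components
        self.percentile = percentile

    def fit(self, frames):                    # frames: (n, d) normal data
        self.mean_ = frames.mean(axis=0)
        x = frames - self.mean_
        _, _, vt = np.linalg.svd(x, full_matrices=False)
        self.components_ = vt[:self.k]        # encoder/decoder weights
        self.threshold_ = np.percentile(self._errors(frames), self.percentile)

    def _errors(self, frames):
        x = frames - self.mean_
        recon = (x @ self.components_.T) @ self.components_
        return np.sum((x - recon) ** 2, axis=1)

    def predict(self, frames):                # True = novel event
        return self._errors(frames) > self.threshold_
```

The threshold is calibrated on normal training data only, so the detector needs no examples of novel events, which is the defining property of novelty (as opposed to classification) approaches.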
Bayesian Monitoring Systems for the CTBT: Historical Development and New Results
NASA Astrophysics Data System (ADS)
Russell, S.; Arora, N. S.; Moore, D.
2016-12-01
A project at Berkeley, begun in 2009 in collaboration with CTBTO and more recently with LLNL, has reformulated the global seismic monitoring problem in a Bayesian framework. A first-generation system, NETVISA, has been built comprising a spatial event prior and generative models of event transmission and detection, as well as a Monte Carlo inference algorithm. The probabilistic model allows for seamless integration of various disparate sources of information, including negative information (the absence of detections). Working from arrivals extracted by traditional station processing from International Monitoring System (IMS) data, NETVISA achieves a reduction of around 60% in the number of missed events compared with the currently deployed network processing system. It also finds many events that are missed by the human analysts who postprocess the IMS output. Recent improvements include the integration of models for infrasound and hydroacoustic detections and a global depth model for natural seismicity trained from ISC data. NETVISA is now fully compatible with the CTBTO operating environment. A second-generation model called SIGVISA extends NETVISA's generative model all the way from events to raw signal data, avoiding the error-prone bottom-up detection phase of station processing. SIGVISA's model automatically captures the phenomena underlying existing detection and location techniques such as multilateration, waveform correlation matching, and double-differencing, and integrates them into a global inference process that also (like NETVISA) handles de novo events. Initial results for the Western US in early 2008 (when the transportable US Array was operating) show that SIGVISA finds, from IMS data only, more than twice the number of events recorded in the CTBTO Late Event Bulletin (LEB). For mb 1.0-2.5, the ratio is more than 10; put another way, for this data set, SIGVISA lowers the detection threshold by roughly one magnitude compared to the LEB.
The broader message of this work is that probabilistic inference based on a vertically integrated generative model that directly expresses geophysical knowledge can be a much more effective approach for interpreting scientific data than the traditional bottom-up processing pipeline.
A comparison of earthquake backprojection imaging methods for dense local arrays
NASA Astrophysics Data System (ADS)
Beskardes, G. D.; Hole, J. A.; Wang, K.; Michaelides, M.; Wu, Q.; Chapman, M. C.; Davenport, K. K.; Brown, L. D.; Quiros, D. A.
2018-03-01
Backprojection imaging has recently become a practical method for local earthquake detection and location due to the deployment of densely sampled, continuously recorded, local seismograph arrays. While backprojection sometimes utilizes the full seismic waveform, the waveforms are often pre-processed and simplified to overcome imaging challenges. Real data issues include aliased station spacing, inadequate array aperture, inaccurate velocity model, low signal-to-noise ratio, large noise bursts and varying waveform polarity. We compare the performance of backprojection with four previously used data pre-processing methods: raw waveform, envelope, short-term averaging/long-term averaging and kurtosis. Our primary goal is to detect and locate events smaller than noise by stacking prior to detection to improve the signal-to-noise ratio. The objective is to identify an optimized strategy for automated imaging that is robust in the presence of real-data issues, has the lowest signal-to-noise thresholds for detection and for location, has the best spatial resolution of the source images, preserves magnitude, and considers computational cost. Imaging method performance is assessed using a real aftershock data set recorded by the dense AIDA array following the 2011 Virginia earthquake. Our comparisons show that raw-waveform backprojection provides the best spatial resolution, preserves magnitude and boosts signal to detect events smaller than noise, but is most sensitive to velocity error, polarity error and noise bursts. On the other hand, the other methods avoid polarity error and reduce sensitivity to velocity error, but sacrifice spatial resolution and cannot effectively reduce noise by stacking. Of these, only kurtosis is insensitive to large noise bursts while being as efficient as the raw-waveform method to lower the detection threshold; however, it does not preserve the magnitude information. 
For automatic detection and location of events in a large data set, we therefore recommend backprojecting kurtosis waveforms, followed by a second pass on the detected events using noise-filtered raw waveforms to achieve the best of all criteria.
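The shift-and-stack principle behind backprojection can be shown in one dimension with a homogeneous velocity; real applications use 3-D grids, a velocity model, and the pre-processed (e.g., kurtosis) waveforms discussed above. All parameters here are toy values.

```python
import numpy as np

def backproject(traces, station_x, grid_x, velocity, fs):
    """1-D toy backprojection: for each candidate source position, shift
    each station's waveform back by the predicted travel time and stack.
    The source appears as the grid point with the largest stack amplitude.
    Assumes a homogeneous medium (constant velocity)."""
    n_samples = traces.shape[1]
    image = np.zeros(len(grid_x))
    for gi, gx in enumerate(grid_x):
        stack = np.zeros(n_samples)
        for trace, sx in zip(traces, station_x):
            delay = int(round(abs(gx - sx) / velocity * fs))
            if delay < n_samples:
                # align the predicted arrival back to origin time
                stack[:n_samples - delay] += trace[delay:]
        image[gi] = np.max(np.abs(stack))
    return image
```

Because stacking happens before detection, coherent energy from an event smaller than the single-station noise level can still rise above the noise in the image, which is the motivation stated in the abstract.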
Automatic P-S phase picking procedure based on Kurtosis: Vanuatu region case study
NASA Astrophysics Data System (ADS)
Baillard, C.; Crawford, W. C.; Ballu, V.; Hibert, C.
2012-12-01
Automatic P and S phase picking is indispensable for large seismological data sets. Robust algorithms based on comparing short-term and long-term average ratios (Allen, 1982) are commonly used for event detection, but further improvements can be made in phase identification and picking. We present a picking scheme that consecutively applies Kurtosis-derived Characteristic Functions (CF) and Eigenvalue decompositions to 3-component seismic data to independently pick P and S arrivals. When the CF is computed over a sliding window of the signal, a sudden increase reveals a transition from a Gaussian to a non-Gaussian distribution, characterizing the phase onset (Saragiotis, 2002). One advantage of the method is that it requires far fewer adjustable parameters than competing methods. We modified the Kurtosis CF to improve pick precision by computing the CF over several frequency bandwidths, window sizes and smoothing parameters. Once phases were picked, we determined the onset type (P or S) using polarization parameters (rectilinearity, azimuth and dip) calculated from Eigenvalue decompositions of the covariance matrix (Cichowicz, 1993). Finally, we removed bad picks using a clustering procedure and the signal-to-noise ratio (SNR). A pick quality index was also assigned based on the SNR value. Amplitude calculation is integrated into the procedure to enable automatic magnitude calculation. We applied this procedure to data from a network of 30 wideband seismometers (including 10 ocean-bottom seismometers) in Vanuatu that ran for 10 months from May 2008 to February 2009. We manually picked the first 172 events of June, whose local magnitudes range from 0.7 to 3.7. We made a total of 1601 picks, 1094 P and 507 S. We then applied our automatic picking to the same dataset. 70% of the manually picked onsets were picked automatically.
For P-picks, the difference between manual and automatic picks is 0.01 ± 0.08 s overall; for the best quality picks (quality index 0: 64% of the P-picks) the difference is -0.01 ± 0.07 s. For S-picks, the difference is -0.09 ± 0.26 s overall and -0.06 ± 0.14 s for good quality picks (index 1: 26% of the S-picks). Residuals showed no dependence on the event magnitudes. The method independently picks P and S waves with good precision and only a few parameters to adjust for relatively small earthquakes (mostly ≤ 2 Ml). The automatic procedure was then applied to the whole dataset. Earthquake locations obtained by inverting onset arrivals revealed clustering and lineations that helped us to constrain the subduction plane. These key parameters will be integrated into 3D finite-difference modeling and compared with GPS data in order to better understand the complex geodynamic behavior of the Vanuatu region.
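A minimal version of a Kurtosis-derived characteristic function and onset pick, in the spirit of Saragiotis (2002); the window length, the use of excess kurtosis, and the maximum-slope pick rule are simplifications of the multi-bandwidth scheme described above.

```python
import numpy as np

def kurtosis_cf(trace, window):
    """Kurtosis-derived characteristic function: sliding-window excess
    kurtosis of the signal. Background noise is near-Gaussian (excess
    kurtosis ~ 0); an impulsive onset makes the windowed distribution
    heavy-tailed, so a sharp rise in the CF marks the arrival."""
    n = len(trace)
    cf = np.zeros(n)
    for i in range(window, n):
        w = trace[i - window:i]
        m = w.mean()
        s2 = ((w - m) ** 2).mean()
        if s2 > 0:
            cf[i] = ((w - m) ** 4).mean() / s2 ** 2 - 3.0
    return cf

def pick_onset(cf):
    """Pick the onset at the point of maximum positive slope of the CF."""
    return int(np.argmax(np.diff(cf)))
```

The CF jumps as soon as the first post-onset samples enter the sliding window, which is why kurtosis-based pickers tend to be precise at the arrival itself rather than at the peak of the phase.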
Spectrotemporal processing drives fast access to memory traces for spoken words.
Tavano, A; Grimm, S; Costa-Faidella, J; Slabu, L; Schröger, E; Escera, C
2012-05-01
The Mismatch Negativity (MMN) component of the event-related potentials is generated when a detectable spectrotemporal feature of the incoming sound does not match the sensory model set up by preceding repeated stimuli. MMN is enhanced at frontocentral scalp sites for deviant words when compared to acoustically similar deviant pseudowords, suggesting that automatic access to long-term memory traces for spoken words contributes to MMN generation. Does spectrotemporal feature matching also drive automatic lexical access? To test this, we recorded human auditory event-related potentials (ERPs) to disyllabic spoken words and pseudowords within a passive oddball paradigm. We first aimed at replicating the word-related MMN enhancement effect for Spanish, thereby adding to the available cross-linguistic evidence (e.g., Finnish, English). We then probed its resilience to spectrotemporal perturbation by inserting short (20 ms) and long (120 ms) silent gaps between first and second syllables of deviant and standard stimuli. A significantly enhanced, frontocentrally distributed MMN to deviant words was found for stimuli with no gap. The long gap yielded no deviant word MMN, showing that prior expectations of word form limits in a given language influence deviance detection processes. Crucially, the insertion of a short gap suppressed deviant word MMN enhancement at frontocentral sites. We propose that spectrotemporal point-wise matching constitutes a core mechanism for fast serial computations in audition and language, bridging sensory and long-term memory systems. Copyright © 2012 Elsevier Inc. All rights reserved.
Automatic Line Calling Badminton System
NASA Astrophysics Data System (ADS)
Affandi Saidi, Syahrul; Adawiyah Zulkiplee, Nurabeahtul; Muhammad, Nazmizan; Sarip, Mohd Sharizan Md
2018-05-01
A system and relevant method are described to detect whether a projectile impact occurs on one side of a boundary line or the other. The system employs the use of force sensing resistor-based sensors that may be designed in segments or assemblies and linked to a mechanism with a display. An impact classification system is provided for distinguishing between various events, including a footstep, ball impact and tennis racquet contact. A sensor monitoring system is provided for determining the condition of sensors and providing an error indication if sensor problems exist. A service detection system is provided when the system is used for tennis that permits activation of selected groups of sensors and deactivation of others.
Optimal filter parameters for low SNR seismograms as a function of station and event location
NASA Astrophysics Data System (ADS)
Leach, Richard R.; Dowla, Farid U.; Schultz, Craig A.
1999-06-01
Global seismic monitoring requires deployment of seismic sensors worldwide, in many areas that have not been studied or have few useable recordings. Using events with lower signal-to-noise ratios (SNR) would increase the amount of data from these regions. Lower SNR events can add significant numbers to data sets, but recordings of these events must be carefully filtered. For a given region, conventional methods of filter selection can be quite subjective and may require intensive analysis of many events. To reduce this laborious process, we have developed an automated method to provide optimal filters for low SNR regional or teleseismic events. As seismic signals are often localized in frequency and time with distinct time-frequency characteristics, our method is based on the decomposition of a time series into a set of subsignals, each representing a band with f/Δf constant (constant Q). The SNR is calculated on the pre-event noise and signal window. The band-pass signals with high SNR are used to indicate the cutoff limits for the optimized filter. Results indicate a significant improvement in SNR, particularly for low SNR events. The method provides an optimum filter which can be immediately applied to unknown regions. The filtered signals are used to map the seismic frequency response of a region and may provide improvements in travel-time picking, azimuth estimation, regional characterization, and event detection. For example, when an event is detected and a preliminary location is determined, the computer could automatically select optimal filter bands for data from non-reporting stations. Results are shown for a set of low SNR events as well as 379 regional and teleseismic events recorded at stations ABKT, KIV, and ANTO in the Middle East.
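The band-selection idea can be sketched as follows: split the spectrum into constant-Q (constant f/Δf) bands, estimate band-wise SNR from the pre-event noise and signal windows, and take the outer edges of the bands passing a threshold as the filter corners. The band count, frequency range, and 6 dB threshold are illustrative, not the paper's values.

```python
import numpy as np

def optimal_band(noise, signal, fs, fmin=0.5, fmax=16.0, n_bands=10, snr_db=6.0):
    """Pick band-pass corner frequencies from band-wise SNR measured in
    constant-Q bands (geometrically spaced edges, so f/df is constant).
    Returns (low_corner, high_corner) or None if no band passes."""
    edges = np.geomspace(fmin, fmax, n_bands + 1)
    freqs_n = np.fft.rfftfreq(len(noise), 1 / fs)
    freqs_s = np.fft.rfftfreq(len(signal), 1 / fs)
    pn = np.abs(np.fft.rfft(noise)) ** 2 / len(noise)
    ps = np.abs(np.fft.rfft(signal)) ** 2 / len(signal)
    keep = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        n_pow = pn[(freqs_n >= lo) & (freqs_n < hi)].mean()
        s_pow = ps[(freqs_s >= lo) & (freqs_s < hi)].mean()
        if 10 * np.log10(s_pow / n_pow) > snr_db:
            keep.append((lo, hi))
    if not keep:
        return None
    return keep[0][0], keep[-1][1]   # cutoff limits for the optimized filter
```

Because the decision uses only a pre-event noise window and a signal window, the same procedure can run automatically at stations in regions with no prior calibration, which is the use case the abstract describes.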
NASA Astrophysics Data System (ADS)
Bernardi, F.; Lomax, A.; Michelini, A.; Lauciani, V.; Piatanesi, A.; Lorito, S.
2015-09-01
In this paper we present and discuss the performance of the procedure for earthquake location and characterization implemented in the Italian Candidate Tsunami Service Provider at the Istituto Nazionale di Geofisica e Vulcanologia (INGV) in Rome. Following the ICG/NEAMTWS guidelines, the first tsunami warning messages are based only on seismic information, i.e., epicenter location, hypocenter depth, and magnitude, which are automatically computed by the software Early-est. Early-est is a package for rapid location and seismic/tsunamigenic characterization of earthquakes. The Early-est software package operates using offline-event or continuous-real-time seismic waveform data to perform trace processing and picking, and, at a regular report interval, phase association, event detection, hypocenter location, and event characterization. Early-est also provides mb, Mwp, and Mwpd magnitude estimations. mb magnitudes are preferred for events with Mwp ≲ 5.8, while Mwpd estimations are valid for events with Mwp ≳ 7.2. In this paper we present the earthquake parameters computed by Early-est between the beginning of March 2012 and the end of December 2014 on a global scale for events with magnitude M ≥ 5.5, and we also present the detection timeline. We compare the earthquake parameters automatically computed by Early-est with the same parameters listed in reference catalogs. Such reference catalogs are manually revised/verified by scientists. The goal of this work is to test the accuracy and reliability of the fully automatic locations provided by Early-est. In our analysis, the epicenter location, hypocenter depth and magnitude parameters do not differ significantly from the values in the reference catalogs. Both mb and Mwp magnitudes show differences to the reference catalogs. We thus derived correction functions in order to minimize the differences and correct biases between our values and the ones from the reference catalogs. 
Correction of the Mwp distance dependency is particularly relevant, since this magnitude refers to the larger and probably tsunamigenic earthquakes. Mwp values at stations with epicentral distance Δ ≲ 30° are significantly overestimated with respect to the CMT-global solutions, whereas Mwp values at stations with epicentral distance Δ ≳ 90° are slightly underestimated. After applying this distance correction, the Mwp provided by Early-est differs from CMT-global catalog values by about δMwp ≈ 0.0 ∓ 0.2. Early-est continuously acquires time-series data and updates the earthquake source parameters. Our analysis shows that the epicenter coordinates and the magnitude values converge toward stable values within less than 10 min (5 min in the Mediterranean region). We can thus compute Mwp magnitudes free of the short-epicentral-distance overestimation, and provide robust and reliable earthquake source parameters to compile tsunami warning messages within less than 15 min after the event origin time.
Explosion Source Location Study Using Collocated Acoustic and Seismic Networks in Israel
NASA Astrophysics Data System (ADS)
Pinsky, V.; Gitterman, Y.; Arrowsmith, S.; Ben-Horin, Y.
2013-12-01
We explore a joint analysis of seismic and infrasonic signals for improvement in automatic monitoring of small local/regional events, such as construction and quarry blasts, military chemical explosions, sonic booms, etc., using the collocated seismic and infrasonic networks recently built in Israel (ISIN) in the framework of a project sponsored by the Bi-national USA-Israel Science Foundation (BSF). The general target is to create an automatic system which will provide detection, location and identification of explosions in a real-time or close-to-real-time manner. At the moment the network comprises 15 stations hosting a microphone and a seismometer (or accelerometer), operated by the Geophysical Institute of Israel (GII), plus two infrasonic arrays operated by the National Data Center, Soreq: IOB in the south (Negev desert) and IMA in the north of Israel (Upper Galilee), collocated with the IMS seismic array MMAI. The study utilizes a ground-truth database of numerous Rotem phosphate quarry blasts, a number of controlled explosions for demolition of outdated ammunition, and experimental surface explosions for structure-protection research at the Sayarim Military Range. A special event, comprising four military explosions in a neighboring country, which provided both strong seismic (up to 400 km) and infrasound waves (up to 300 km), is also analyzed. For all of these events the ground-truth coordinates and/or the results of seismic location by the Israel Seismic Network (ISN) have been provided. For automatic event detection and phase picking we tested a new recursive picker based on a statistically optimal detector. The results were compared to the manual picks.
Several location techniques have been tested on the ground-truth event recordings, and the preliminary results were compared to the ground-truth locations: 1) a number of events were located by intersecting back-azimuths estimated with wide-band F-K analysis applied to the infrasonic phases at the two distant arrays; 2) a standard robust grid-search location procedure based on phase picks and a constant celerity per phase (tropospheric or stratospheric) was applied; 3) a joint coordinate grid-search procedure using array waveforms and phase picks was tested; 4) the Bayesian Infrasonic Source Localization (BISL) method, incorporating semi-empirical model-based prior information, was modified for an array+network configuration and applied to the ground-truth events. For this purpose we accumulated data from earlier observations of air-to-ground infrasonic phases to compute station-specific ground-truth celerity-range histograms (ssgtCRH) and/or model-based CRH (mbCRH), which substantially improve the location results. The mbCRH were built from local meteorological data and ray-tracing modeling in three azimuth ranges (quadrants North: 315°-45°, East: 45°-135°, South: 135°-225°), accounting for seasonal variations in wind directivity.
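Location technique 2), the robust grid search with a constant celerity per phase, can be sketched as follows. The station geometry, the L1 misfit, and the celerity value are illustrative assumptions rather than the study's actual configuration:

```python
import numpy as np

def grid_search_location(stations, arrival_times, celerity=0.30,
                         x_range=(0, 200), y_range=(0, 200), step=1.0):
    """Robust (L1) grid search for an infrasound source.
    stations: (N, 2) array of station x, y in km.
    arrival_times: N arrival times in s.
    celerity: assumed constant celerity in km/s (~0.30 km/s is a
    typical tropospheric value).  The unknown origin time is
    eliminated by subtracting the median residual at each trial node,
    and the L1 norm keeps outlier picks from dominating the misfit."""
    xs = np.arange(*x_range, step)
    ys = np.arange(*y_range, step)
    st = np.asarray(stations, float)
    t = np.asarray(arrival_times, float)
    best = (None, np.inf)
    for x in xs:
        for y in ys:
            dist = np.hypot(st[:, 0] - x, st[:, 1] - y)
            resid = t - dist / celerity
            resid -= np.median(resid)       # robust origin-time estimate
            misfit = np.sum(np.abs(resid))  # L1 norm for robustness
            if misfit < best[1]:
                best = ((x, y), misfit)
    return best[0]
```

A real implementation would use phase-dependent celerities (tropospheric vs. stratospheric) and a finer, possibly hierarchical, grid.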
Automatic multimodal detection for long-term seizure documentation in epilepsy.
Fürbass, F; Kampusch, S; Kaniusas, E; Koren, J; Pirker, S; Hopfengärtner, R; Stefan, H; Kluge, T; Baumgartner, C
2017-08-01
This study investigated the sensitivity and false detection rate of a multimodal automatic seizure detection algorithm and its applicability to reduced electrode montages for long-term seizure documentation in epilepsy patients. An automatic seizure detection algorithm based on EEG, EMG, and ECG signals was developed. EEG/ECG recordings of 92 patients from two epilepsy monitoring units, including 494 seizures, were used to assess detection performance. EMG data were extracted by bandpass filtering of the EEG signals. Sensitivity and false detection rate were evaluated for each signal modality and for reduced electrode montages. All focal seizures evolving to bilateral tonic-clonic seizures (BTCS, n=50) and 89% of focal seizures (FS, n=139) were detected. Average sensitivity was 94% in temporal lobe epilepsy (TLE) patients and 74% in extratemporal lobe epilepsy (XTLE) patients; overall detection sensitivity was 86%. The average false detection rate was 12.8 false detections per 24 h (FD/24 h) for TLE patients and 22 FD/24 h for XTLE patients. Using only 8 frontal and temporal electrodes reduced average sensitivity from 86% to 81%. Our automatic multimodal seizure detection algorithm shows high sensitivity with full and reduced electrode montages. Evaluation of different signal modalities and electrode montages paves the way for semi-automatic seizure documentation systems. Copyright © 2017 International Federation of Clinical Neurophysiology. Published by Elsevier B.V. All rights reserved.
A Hierarchical Convolutional Neural Network for vesicle fusion event classification.
Li, Haohan; Mao, Yunxiang; Yin, Zhaozheng; Xu, Yingke
2017-09-01
Quantitative analysis of vesicle exocytosis and classification of different modes of vesicle fusion from fluorescence microscopy are of primary importance for biomedical research. In this paper, we propose a novel Hierarchical Convolutional Neural Network (HCNN) method to automatically identify vesicle fusion events in time-lapse Total Internal Reflection Fluorescence Microscopy (TIRFM) image sequences. First, a detection and tracking method is developed to extract image patch sequences containing potential fusion events. Then, a Gaussian Mixture Model (GMM) is applied to each image patch of the patch sequence, with outliers rejected, for robust Gaussian fitting. By utilizing the high-level time-series intensity-change features introduced by the GMM and the visual appearance features embedded in key moments of the fusion process, the proposed HCNN architecture is able to classify each candidate patch sequence into three classes: full fusion event, partial fusion event, and non-fusion event. Finally, we validate the performance of our method on 9 challenging datasets annotated by cell biologists, and our method outperforms three previous methods. Copyright © 2017 Elsevier Ltd. All rights reserved.
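The GMM-derived time-series intensity features can be approximated with a much simpler moment-based Gaussian fit per patch. The sketch below is a stand-in for the paper's robust GMM with outlier rejection, intended only to show the kind of amplitude/spread trajectories such a classifier consumes:

```python
import numpy as np

def gaussian_intensity_series(patch_sequence):
    """For each image patch in a candidate fusion-event sequence,
    fit a single 2D Gaussian by intensity-weighted moments (a crude
    stand-in for a robust GMM fit) and return the peak amplitude and
    radial spread over time.  In a full fusion event the amplitude
    typically decays while the spread grows as the fluorophore
    diffuses into the membrane."""
    amplitudes, sigmas = [], []
    for patch in patch_sequence:
        p = patch - patch.min()           # remove background offset
        total = p.sum()
        if total == 0:
            amplitudes.append(0.0)
            sigmas.append(0.0)
            continue
        yy, xx = np.indices(p.shape)
        cy = (yy * p).sum() / total       # intensity-weighted centroid
        cx = (xx * p).sum() / total
        var = ((yy - cy) ** 2 + (xx - cx) ** 2) * p
        sigma = np.sqrt(var.sum() / total)
        amplitudes.append(float(patch.max()))
        sigmas.append(float(sigma))
    return np.array(amplitudes), np.array(sigmas)
```

The resulting (amplitude, sigma) time series would then be fed, alongside appearance features, to the classification stage.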
NASA Astrophysics Data System (ADS)
Letort, Jean; Guilbert, Jocelyn; Cotton, Fabrice; Bondár, István; Cano, Yoann; Vergoz, Julien
2015-06-01
The depth of an earthquake is difficult to estimate because of the trade-off between depth and origin-time estimates, and because it can be biased by lateral Earth heterogeneities. To address this challenge, we have developed a new, blind, and fully automatic teleseismic depth analysis whose results do not depend on epistemic uncertainties in depth-phase picking and identification. The method is a modification of the cepstral analysis of Letort et al. and Bonner et al., which detects surface-reflected (pP, sP) waves in a signal at teleseismic distances (30°-90°) through the spectral holes they imprint on the shape of the signal spectrum. The ability of our automatic method to improve depth estimates is demonstrated by relocating the recent moderate seismicity of the Guerrero subduction zone (Mexico): we estimated the depths of 152 events using teleseismic data from IRIS stations and arrays. One advantage of this method is that it can be applied to single stations (from IRIS) as well as to classical arrays. In the Guerrero area, our new cepstral analysis efficiently clusters event locations and provides an improved view of the geometry of the subduction. Moreover, we validated our method by relocating the same events with the new International Seismological Centre (ISC) locator algorithm, and by comparing our cepstral depths with the available Harvard Centroid Moment Tensor (CMT) solutions and the three available ground truth (GT5) events (whose lateral locations are assumed well constrained, with uncertainty <5 km) for this area. These comparisons indicate an overestimation of focal depths in the ISC catalogue for the deeper parts of the subduction, and they show a systematic bias between the estimated cepstral depths and the ISC-locator depths.
Using information from the CMT catalogue on the predominant focal mechanism in this area, this bias can be explained by the misidentification of sP phases as pP phases, which underscores the value of this new automatic cepstral analysis, as it is less sensitive to phase identification.
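The core of such a cepstral depth analysis, detecting the pP-P delay from spectral holes, can be sketched as below. A surface reflection delayed by τ carves periodic notches into the amplitude spectrum, which appear as a peak at quefrency τ in the cepstrum. The velocity conversion and search band are illustrative simplifications (real analyses use travel-time tables rather than a constant near-source velocity):

```python
import numpy as np

def cepstral_depth(signal, dt, v_p=6.5, min_delay=1.0, max_delay=20.0):
    """Estimate source depth from the pP-P delay via the cepstrum.
    The cepstrum (inverse FFT of the log amplitude spectrum) shows a
    peak at the quefrency equal to the echo delay tau.  Depth is then
    roughly tau * v_p / 2 for a vertically travelling pP, with v_p a
    crude near-source average P velocity in km/s (an assumption made
    here for illustration)."""
    spec = np.abs(np.fft.rfft(signal))
    log_spec = np.log(spec + 1e-12)       # avoid log(0)
    cep = np.abs(np.fft.irfft(log_spec))
    quef = np.arange(cep.size) * dt
    mask = (quef >= min_delay) & (quef <= max_delay)
    tau = quef[mask][np.argmax(cep[mask])]
    return tau, tau * v_p / 2.0
```

On a synthetic trace consisting of a direct pulse and a delayed, sign-flipped surface reflection, the cepstrum peak recovers the delay without any phase picking, which is the property the abstract emphasizes.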
Automatic Residential/Commercial Classification of Parcels with Solar Panel Detections
DOE Office of Scientific and Technical Information (OSTI.GOV)
Morton, April M; Omitaomu, Olufemi A; Kotikot, Susan
This software provides a computational method to automatically detect solar panels on rooftops to aid policy and financial assessment of solar distributed generation. The code automatically classifies parcels containing solar panels in the U.S. as residential or commercial. The user specifies an input dataset containing parcels and detected solar panels; the code then uses information about the parcels and solar panels to classify the rooftops as residential or commercial using machine-learning techniques. The zip file containing the code includes sample input and output datasets for the Boston and DC areas.
NASA Astrophysics Data System (ADS)
Matgen, Patrick; Giustarini, Laura; Hostache, Renaud
2012-10-01
This paper introduces an automatic flood mapping application hosted on the Grid Processing on Demand (G-POD) Fast Access to Imagery (Faire) environment of the European Space Agency. The main objective of the online application is to operationally deliver maps of flooded areas using both recent and historical acquisitions of SAR data. With the short-term target of exploiting data from the upcoming ESA Sentinel-1 SAR mission for flood applications, the flood mapping application consists of two building blocks: i) a set of query tools for selecting the "crisis image" and the optimal corresponding "reference image" from the G-POD archive, and ii) an algorithm for extracting flooded areas via change detection using the previously selected "crisis image" and "reference image". Stakeholders in flood management and service providers can log onto the flood mapping application for support in retrieving the most appropriate reference image from the rolling archive. Potential users will also be able to apply the implemented flood delineation algorithm, which combines histogram thresholding, region growing and change detection to enable automatic, objective and reliable flood extent extraction from SAR images. Both algorithms are computationally efficient and operate with minimum data requirements. The case study of the high-magnitude flood of July 2007 on the Severn River, UK, observed with a moderate-resolution SAR sensor as well as airborne photography, highlights the performance of the proposed online application. The flood mapping application on G-POD can be used sporadically, i.e. whenever a major flood event occurs and there is demand for SAR-based flood extent maps. In the long term, a potential extension of the application could consist of systematically extracting flooded areas from all SAR images acquired on a daily, weekly or monthly basis.
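A toy version of the delineation chain (histogram thresholding, region growing, change detection) might look like the following. The dB thresholds are illustrative, not the calibrated values of the operational algorithm:

```python
import numpy as np
from collections import deque

def flood_extent(crisis_db, reference_db, seed_thresh=-15.0, grow_thresh=-10.0):
    """Toy threshold + region-growing + change-detection chain for SAR
    flood mapping.  Pixels darker than seed_thresh (dB backscatter)
    seed a region grow that accepts 4-connected neighbours darker than
    grow_thresh; the final map keeps only pixels that darkened by more
    than 3 dB relative to the reference image (change detection).
    All threshold values here are illustrative assumptions."""
    crisis = np.asarray(crisis_db, float)
    ref = np.asarray(reference_db, float)
    flooded = crisis <= seed_thresh          # confident water seeds
    candidate = crisis <= grow_thresh        # pixels eligible to grow into
    queue = deque(zip(*np.nonzero(flooded)))
    while queue:
        i, j = queue.popleft()
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ni, nj = i + di, j + dj
            if (0 <= ni < crisis.shape[0] and 0 <= nj < crisis.shape[1]
                    and candidate[ni, nj] and not flooded[ni, nj]):
                flooded[ni, nj] = True
                queue.append((ni, nj))
    return flooded & (crisis < ref - 3.0)    # require significant darkening
```

The change-detection step is what separates permanent water and radar shadow (dark in both images) from actual flooding (dark only in the crisis image).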
NASA Astrophysics Data System (ADS)
Neagoe, Cristian; Grecu, Bogdan; Manea, Liviu
2016-04-01
The National Institute for Earth Physics (NIEP) operates a real-time seismic network designed to monitor the seismic activity on Romanian territory, which is dominated by intermediate-depth earthquakes (60-200 km) in the Vrancea area. The ability to reduce the impact of earthquakes on society depends on the availability of a large volume of high-quality observational data. The development of the network in recent years and advanced seismic acquisition are crucial to achieving this objective. The software package used to perform the automatic real-time locations is SeisComP3. An accurate choice of the SeisComP3 configuration parameters is necessary to ensure the best performance of the real-time system, i.e., the most accurate earthquake locations while avoiding false events. The aim of this study is to optimize the algorithms of the real-time system that detect and locate earthquakes in the monitored area. This goal is pursued by testing different parameters (e.g., STA/LTA settings, filters applied to the waveforms) on a data set of earthquakes representative of the local seismicity. The results are compared with the locations from the Romanian catalogue ROMPLUS.
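The STA/LTA parameter tuning discussed above operates on a detector of roughly this form. Window lengths and trigger thresholds here are generic starting points of the kind adjusted in such a study, not NIEP's operational settings:

```python
import numpy as np

def sta_lta_trigger(trace, dt, sta_win=1.0, lta_win=30.0, on=4.0, off=1.5):
    """Recursive STA/LTA trigger.  A short-term average (STA) of the
    rectified trace is divided by a long-term average (LTA); a trigger
    is declared when the ratio exceeds `on` and re-arms once it falls
    below `off`.  The first lta_win seconds are skipped to let the
    running averages settle.  Returns trigger-onset sample indices."""
    x = np.abs(np.asarray(trace, float))
    a_sta, a_lta = dt / sta_win, dt / lta_win
    sta = lta = x[0] + 1e-12
    warmup = int(lta_win / dt)
    triggers, active = [], False
    for i, v in enumerate(x):
        sta += a_sta * (v - sta)          # fast exponential average
        lta += a_lta * (v - lta)          # slow exponential average
        if i < warmup:
            continue
        ratio = sta / lta
        if not active and ratio > on:
            triggers.append(i)
            active = True
        elif active and ratio < off:
            active = False
    return triggers
```

Tuning consists of varying `sta_win`, `lta_win`, `on`, `off` and the pre-filter band, then scoring the detector against a catalogue of known events, which is essentially the optimization loop the abstract describes.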
Automatic RST-based system for a rapid detection of man-made disasters
NASA Astrophysics Data System (ADS)
Tramutoli, Valerio; Corrado, Rosita; Filizzola, Carolina; Livia Grimaldi, Caterina Sara; Mazzeo, Giuseppe; Marchese, Francesco; Pergola, Nicola
2010-05-01
Man-made disasters may injure citizens and damage critical infrastructure. When such disasters cannot be prevented or foreseen, the goal is at least to detect the accident rapidly, in order to intervene as soon as possible and minimize damage. In this context, the combination of a Robust Satellite Technique (RST), able to reliably identify actual accidents (i.e., avoiding false alarms), with satellite sensors of high temporal resolution promises both reliable and timely detection of abrupt Thermal Infrared (TIR) transients related to dangerous explosions. A processing chain based on the RST approach has been developed by the DIFA-UNIBAS team in the framework of the GMOSS and G-MOSAIC projects to automatically identify harmful events in MSG-SEVIRI images. Maps of thermal anomalies are generated every 15 minutes (i.e., the SEVIRI repetition rate) over a selected area, together with KML files (containing the latitude and longitude of the centre of each thermally anomalous SEVIRI pixel, the time of image acquisition, the relative intensity of the anomalies, etc.) for rapid visualization of the accident position, e.g. in Google Earth. Results for gas pipelines that recently exploded or were attacked in Russia and Iraq are presented in this work.
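Per pixel, an RST-style detection reduces to comparing the current TIR value against the historical mean and standard deviation for the same observational conditions (same place, same month and time slot). The sketch below assumes a pre-cleaned (cloud-free) historical stack and an illustrative threshold k = 3; the actual RST chain involves considerably more screening:

```python
import numpy as np

def rst_anomaly_map(current_tir, historical_tir, k=3.0):
    """RST-style change index: for each pixel, standardize the current
    TIR brightness temperature against the multi-year mean and standard
    deviation of historical images from the same observational slot.
    Pixels with (T - mean) / std > k flag a thermal anomaly; larger k
    trades sensitivity for a lower false-alarm rate."""
    hist = np.asarray(historical_tir, float)    # shape (n_images, H, W)
    mu = hist.mean(axis=0)
    sigma = hist.std(axis=0) + 1e-6             # avoid division by zero
    index = (np.asarray(current_tir, float) - mu) / sigma
    return index, index > k
```

Running this on each 15-minute SEVIRI acquisition and exporting the flagged pixel coordinates yields exactly the kind of anomaly map and KML point list the abstract describes.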
AMSNEXRAD-Automated detection of meteorite strewnfields in doppler weather radar
NASA Astrophysics Data System (ADS)
Hankey, Michael; Fries, Marc; Matson, Rob; Fries, Jeff
2017-09-01
For several years, meteorite recovery in the United States has been greatly enhanced by using Doppler weather radar images to determine possible fall zones for meteorites produced by witnessed fireballs. While most fireball events leave no record on Doppler radar, some large fireballs do. Based on the successful recovery of 10 meteorite falls 'under the radar', and the discovery of radar signatures for more than 10 historic falls, it is believed that meteoritic dust and/or actual meteorites falling to the ground have been recorded on Doppler weather radar (Fries et al., 2014). Until now, the process of detecting the radar signatures associated with meteorite falls has been a manual one, dependent on prior accurate knowledge of the fall time and estimated ground track. This manual detection process is labor intensive and can take several hours per event. Recent technological developments by NOAA now help enable the automation of these tasks. This, in combination with advancements by the American Meteor Society (Hankey et al., 2014) in the tracking and plotting of witnessed fireballs, has opened the possibility of automatic detection of meteorites in NEXRAD radar archives. Herein, the processes for fireball triangulation, search-area determination, radar interfacing, data extraction, storage, search, detection, and plotting are explained.
Towards a social and context-aware multi-sensor fall detection and risk assessment platform.
De Backere, F; Ongenae, F; Van den Abeele, F; Nelis, J; Bonte, P; Clement, E; Philpott, M; Hoebeke, J; Verstichel, S; Ackaert, A; De Turck, F
2015-09-01
For elderly people, fall incidents are life-changing events that lead to degradation or even loss of autonomy. Current fall detection systems are not integrated and are often associated with undetected falls and/or false alarms. In this paper, a social- and context-aware multi-sensor platform is presented, which integrates information gathered by a plethora of fall detection systems and sensors at the home of the elderly, using a cloud-based solution built on an ontology. Within the ontology, both static and dynamic information is captured to model the situation of a specific patient and his/her (in)formal caregivers. This integrated contextual information makes it possible to automatically and continuously assess the fall risk of the elderly, to detect falls more accurately and identify false alarms, and to automatically notify the appropriate caregiver, e.g., based on location or current task. The main advantage of the proposed platform is that multiple fall detection systems and sensors can be integrated, as they can easily be plugged in according to the specific needs of the patient. The combination of several systems and sensors leads to a more reliable system with better accuracy. The proof of concept was tested with a visualizer, which facilitates analysis of the data flow within the back-end, and with a portable testbed equipped with several different sensors. Copyright © 2014 Elsevier Ltd. All rights reserved.
Review of automatic detection of pig behaviours by using image analysis
NASA Astrophysics Data System (ADS)
Han, Shuqing; Zhang, Jianhua; Zhu, Mengshuai; Wu, Jianzhai; Kong, Fantao
2017-06-01
Automatic detection of lying, moving, feeding, drinking, and aggressive behaviours of pigs by means of image analysis can reduce the observation workload of farm staff. It would help staff detect diseases or injuries of pigs early during breeding and improve the management efficiency of the swine industry. This study describes the progress of image-based pig behaviour detection and advances in image segmentation of the pig body, segmentation of adhering pigs, and extraction of pig behaviour characteristic parameters. Challenges in achieving automatic detection of pig behaviours are summarized.
Ahrens, Merle-Marie; Veniero, Domenica; Gross, Joachim; Harvey, Monika; Thut, Gregor
2015-01-01
Many behaviourally relevant sensory events such as motion stimuli and speech have an intrinsic spatio-temporal structure. This will engage intentional and most likely unintentional (automatic) prediction mechanisms enhancing the perception of upcoming stimuli in the event stream. Here we sought to probe the anticipatory processes that are automatically driven by rhythmic input streams in terms of their spatial and temporal components. To this end, we employed an apparent visual motion paradigm testing the effects of pre-target motion on lateralized visual target discrimination. The motion stimuli either moved towards or away from peripheral target positions (valid vs. invalid spatial motion cueing) at a rhythmic or arrhythmic pace (valid vs. invalid temporal motion cueing). Crucially, we emphasized automatic motion-induced anticipatory processes by rendering the motion stimuli non-predictive of upcoming target position (by design) and task-irrelevant (by instruction), and by creating instead endogenous (orthogonal) expectations using symbolic cueing. Our data revealed that the apparent motion cues automatically engaged both spatial and temporal anticipatory processes, but that these processes were dissociated. We further found evidence for lateralisation of anticipatory temporal but not spatial processes. This indicates that distinct mechanisms may drive automatic spatial and temporal extrapolation of upcoming events from rhythmic event streams. This contrasts with previous findings that instead suggest an interaction between spatial and temporal attention processes when endogenously driven. Our results further highlight the need for isolating intentional from unintentional processes for better understanding the various anticipatory mechanisms engaged in processing behaviourally relevant stimuli with predictable spatio-temporal structure such as motion and speech. PMID:26623650
Van Weyenberg, Stephanie; Van Nuffel, Annelies; Lauwers, Ludwig; Vangeyte, Jürgen
2017-01-01
Simple Summary: Most prototypes of systems to automatically detect lameness in dairy cattle are still not available on the market. Estimating their potential adoption rate could support developers in defining development goals towards commercially viable and well-adopted systems. We simulated the potential market shares of such prototypes to assess the effect of altering system cost and detection performance on the potential adoption rate. We found that system cost and lameness detection performance indeed substantially influence the potential adoption rate. For farmers to prefer automatic detection over current visual detection, the utility that farmers attach to a system with specific characteristics must exceed that of visual detection. We therefore conclude that low system costs and high detection performance are required before automatic lameness detection systems become applicable in practice. Abstract: Most automatic lameness detection system prototypes have not yet been commercialized, and are hence not yet adopted in practice. Therefore, the objective of this study was to simulate the effect of detection performance (percentage of missed lame cows and percentage of false alarms) and system cost on the potential market share of three automatic lameness detection systems relative to visual detection: a system attached to the cow, a walkover system, and a camera system. Simulations were done using a utility model derived from survey responses of dairy farmers in Flanders, Belgium. Overall, systems attached to the cow had the largest market potential, but were still not competitive with visual detection. Increasing the detection performance or lowering the system cost led to higher market shares for automatic systems at the expense of visual detection. The willingness to pay was €2.57 per percentage point fewer missed lame cows, €1.65 per percentage point fewer false alerts, and €12.70 for lame-leg indication.
The presented results could be exploited by system designers to determine the effect of adjustments to the technology on a system’s potential adoption rate. PMID:28991188
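A market-share simulation of this kind is commonly built on a multinomial-logit utility model. The sketch below shows the mechanics; the coefficient values are chosen only to reproduce the reported willingness-to-pay ratios (e.g. €2.57 per percentage point fewer missed lame cows implies beta_missed / beta_cost = 2.57), not the study's estimated model:

```python
import numpy as np

def utility(cost_eur, pct_missed, pct_false, base=0.0,
            beta_cost=-0.01, beta_missed=-0.0257, beta_false=-0.0165):
    """Linear utility of one detection alternative.  Willingness to pay
    for an attribute is the ratio of its coefficient to the cost
    coefficient, e.g. 0.0257 / 0.01 = 2.57 EUR per % fewer missed
    lame cows.  All coefficient values here are illustrative."""
    return (base + beta_cost * cost_eur
            + beta_missed * pct_missed + beta_false * pct_false)

def market_shares(utilities):
    """Multinomial-logit choice probabilities: each alternative's
    predicted market share is exp(U_i) / sum_j exp(U_j)."""
    u = np.asarray(utilities, float)
    e = np.exp(u - u.max())        # subtract max for numerical stability
    return e / e.sum()
```

Sweeping `cost_eur`, `pct_missed` and `pct_false` for each system against a visual-detection baseline reproduces the kind of adoption-rate curves the study reports.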
Comparative study of predicted and experimentally detected interplanetary shocks
NASA Astrophysics Data System (ADS)
Kartalev, M. D.; Grigorov, K. G.; Smith, Z.; Dryer, M.; Fry, C. D.; Sun, Wei; Deehr, C. S.
2002-03-01
We compare the shock arrival times at 1 AU predicted in real time by the USAF/NOAA Shock Time of Arrival (STOA) and Interplanetary Shock Propagation Model (ISPM) models, and by the Exploration Physics International/University of Alaska Hakamada-Akasofu-Fry Solar Wind Model (HAF-v2), with a real-time analysis of ACE plasma and field data. The comparison uses an algorithm developed on the basis of wavelet data analysis and an MHD identification procedure. Shock parameters are estimated for selected "candidate events". An automatic web-based interface periodically processes the solar wind observations made by ACE at L1. Near-real-time results, as well as an archive of registered events of interest, are available on a dedicated web site. A number of events are considered. These studies are essential for validating real-time space weather forecasts made from solar data.
Amplitude modulation of alpha-band rhythm caused by mimic collision: MEG study.
Yokosawa, Koichi; Watanabe, Tatsuya; Kikuzawa, Daichi; Aoyama, Gakuto; Takahashi, Makoto; Kuriki, Shinya
2013-01-01
Detection of a collision risk and avoiding the collision are important for survival. We have been investigating neural responses when humans anticipate a collision or intend to take evasive action by applying collision-simulating images in a predictable manner. Collision-simulating images and control images were presented in random order to 9 healthy male volunteers. A cue signal was also given visually two seconds before each stimulus to enable each participant to anticipate the upcoming stimulus. Magnetoencephalograms (MEG) were recorded with a 76-ch helmet system. The amplitude of alpha band (8-13 Hz) rhythm when anticipating the upcoming collision-simulating image was significantly smaller than that when anticipating control images even just after the cue signal. This result demonstrates that anticipating a negative (dangerous) event induced event-related desynchronization (ERD) of alpha band activity, probably caused by attention. The results suggest the feasibility of detecting endogenous brain activities by monitoring alpha band rhythm and its possible applications to engineering systems, such as an automatic collision evasion system for automobiles.
Optical Meteor Systems Used by the NASA Meteoroid Environment Office
NASA Technical Reports Server (NTRS)
Kingery, A. M.; Blaauw, R. C.; Cooke, W. J.; Moser, D. E.
2015-01-01
The NASA Meteoroid Environment Office (MEO) uses two main meteor camera networks to characterize the meteoroid environment: an all-sky system and a wide-field system, studying cm- and mm-size meteors respectively. The NASA All Sky Fireball Network consists of fifteen meteor video cameras in the United States, with plans to expand to eighteen cameras by the end of 2015. The camera design and the All-Sky Guided and Real-time Detection (ASGARD) meteor detection software [1, 2] were adopted from the University of Western Ontario's Southern Ontario Meteor Network (SOMN). After seven years of operation, the network has detected over 12,000 multi-station meteors, including meteors from at least 53 different meteor showers. The network is used for speed distribution determination, characterization of meteor showers and sporadic sources, and for informing the public about bright meteor events. The NASA Wide Field Meteor Network was established in December 2012 with two cameras and expanded to eight cameras in December 2014. The two-camera configuration detected 5470 meteors over two years of operation, and the eight-camera configuration has detected 3423 meteors in its first five months of operation (Dec 12, 2014 - May 12, 2015). We expect to see over 10,000 meteors per year with the expanded system. The cameras have a 20-degree field of view and an approximate limiting meteor magnitude of +5. The network's primary goal is determining the nightly shower and sporadic meteor fluxes. Both camera networks function almost fully autonomously, with little human interaction required for upkeep and analysis. The cameras send their data to a central server for storage and automatic analysis, and every morning the server automatically generates an e-mail and web page containing an analysis of the previous night's events. The current status of the networks will be described, alongside preliminary results.
In addition, future projects, including CCD photometry and a broadband meteor color camera system, will be discussed.
NASA Astrophysics Data System (ADS)
Rodas, Claudio; Pulido, Manuel
2017-09-01
A climatological characterization of Rossby wave generation events in the middle atmosphere of the Southern Hemisphere is conducted using 20 years of Modern-Era Retrospective Analysis for Research and Applications (MERRA) reanalysis. An automatic detection technique for wave generation events is developed and applied to the MERRA reanalysis. Rossby wave generation events with wave periods of 1.25 to 5.5 days and zonal wave numbers from one to three dominate the Eliassen-Palm flux divergence around the stratopause at high latitudes in the examined 20-year period, producing an eastward forcing of the general circulation in that region between May and mid-August. Afterward, from mid-August to the final warming date, Rossby wave generation events are still present, but the Eliassen-Palm flux divergence at the polar stratopause is dominated by low-frequency Rossby waves that propagate up from the troposphere. The Rossby wave generation events are associated with potential vorticity gradient inversion, and so they are a manifestation of the dominant barotropic/baroclinic unstable modes, which grow at the cost of smearing out the negative meridional gradient of potential vorticity. The most likely region of wave generation is found between 60°S and 80°S at a height of 0.7 hPa, but events were detected from 40 hPa up to 0.3 hPa (the top of the examined region). The mean number of events per year is 24, the mean duration is 3.35 days, and the event duration follows an exponential distribution.
NASA Astrophysics Data System (ADS)
Wormanns, Dag; Fiebich, Martin; Saidi, Mustafa; Diederich, Stefan; Heindel, Walter
2001-05-01
The purpose of this study was to evaluate a computer-aided diagnosis (CAD) workstation with automatic detection of pulmonary nodules at low-dose spiral CT in a clinical setting for early detection of lung cancer. Two radiologists reported in consensus on 88 consecutive spiral CT examinations. All examinations were reviewed using a UNIX-based CAD workstation with a self-developed algorithm for automatic detection of pulmonary nodules, designed to detect nodules of at least 5 mm diameter. The results of automatic nodule detection were compared to the consensus reading of the two radiologists as the gold standard; additional CAD findings were regarded either as nodules initially missed by the radiologists or as false positives. A total of 153 nodules were detected with all modalities (diameter: 85 nodules <5 mm, 63 nodules 5-9 mm, 5 nodules >=10 mm), and reasons for failure of automatic nodule detection were assessed. Sensitivity of the radiologists for nodules >=5 mm was 85%; sensitivity of CAD was 38%. For nodules >=5 mm without pleural contact, sensitivity was 84% for the radiologists and 45% for CAD. CAD detected 15 (10%) real nodules not mentioned in the radiologists' report, among them 10 (15%) nodules with a diameter >=5 mm. Reasons for nodules missed by CAD included exclusion by morphological features during region analysis (33%), nodule density below the detection threshold (26%), pleural contact (33%), segmentation errors (5%) and other reasons (2%). CAD improves detection of pulmonary nodules at spiral CT significantly and is a valuable second opinion in a clinical setting for lung cancer screening. Optimization of the region analysis and an appropriate density threshold have the potential to further improve automatic nodule detection.
ERIC Educational Resources Information Center
Servant, Mathieu; Cassey, Peter; Woodman, Geoffrey F.; Logan, Gordon D.
2018-01-01
Automaticity allows us to perform tasks in a fast, efficient, and effortless manner after sufficient practice. Theories of automaticity propose that across practice processing transitions from being controlled by working memory to being controlled by long-term memory retrieval. Recent event-related potential (ERP) studies have sought to test this…
NASA Astrophysics Data System (ADS)
Hart, Andrew F.; Cinquini, Luca; Khudikyan, Shakeh E.; Thompson, David R.; Mattmann, Chris A.; Wagstaff, Kiri; Lazio, Joseph; Jones, Dayton
2015-01-01
“Fast radio transients” are defined here as bright millisecond pulses of radio-frequency energy. These short-duration pulses can be produced by known objects such as pulsars or potentially by more exotic objects such as evaporating black holes. The identification and verification of such an event would be of great scientific value. This is one major goal of the Very Long Baseline Array (VLBA) Fast Transient Experiment (V-FASTR), a software-based detection system installed at the VLBA. V-FASTR uses a “commensal” (piggy-back) approach, analyzing all array data continually during routine VLBA observations and identifying candidate fast transient events. Raw data can be stored from a buffer memory, which enables a comprehensive off-line analysis. This is invaluable for validating the astrophysical origin of any detection. Candidates discovered by the automatic system must be reviewed each day by analysts to identify any promising signals that warrant a more in-depth investigation. To support the timely analysis of fast transient detection candidates by V-FASTR scientists, we have developed a metadata-driven, collaborative candidate review framework. The framework consists of a software pipeline for metadata processing composed of both open source software components and project-specific code written expressly to extract and catalog metadata from the incoming V-FASTR data products, and a web-based data portal that facilitates browsing and inspection of the available metadata for candidate events extracted from the VLBA radio data.
Automatic spatiotemporal matching of detected pleural thickenings
NASA Astrophysics Data System (ADS)
Chaisaowong, Kraisorn; Keller, Simon Kai; Kraus, Thomas
2014-01-01
Pleural thickenings can be found in the lungs of asbestos-exposed patients. Non-invasive diagnosis, including CT imaging, can detect aggressive malignant pleural mesothelioma in its early stage. In order to create a quantitative documentation of automatically detected pleural thickenings over time, the differences in volume and thickness of the detected thickenings have to be calculated. Physicians usually estimate the change of each thickening by visual comparison, which provides neither quantitative nor qualitative measures. In this work, techniques for automatic spatiotemporal matching of the detected pleural thickenings at two points in time, based on semi-automatic registration, have been developed, implemented, and tested, so that the same thickening can be compared fully automatically. As a result, the mapping technique using principal component analysis turns out to be more advantageous than the feature-based mapping using the centroid and mean Hounsfield units of each thickening: sensitivity improved from 42.19% to 98.46%, while the accuracy of the feature-based mapping is only slightly higher (84.38% versus 76.19%).
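The PCA-based mapping can be pictured as matching thickenings by signatures built from their voxel clouds. The greedy nearest-signature matcher below is a deliberately simplified stand-in for the registration-based pipeline of the paper; the feature weighting is an assumption:

```python
import numpy as np

def pca_signature(voxels):
    """Signature of one thickening: its centroid plus the principal-axis
    spreads (square roots of the covariance eigenvalues, descending) of
    its voxel cloud.  Shape information makes the match less ambiguous
    than a centroid-plus-density feature alone."""
    v = np.asarray(voxels, float)
    centroid = v.mean(axis=0)
    cov = np.cov((v - centroid).T) if len(v) > 1 else np.zeros((3, 3))
    eigs = np.clip(np.linalg.eigvalsh(cov), 0, None)   # guard tiny negatives
    axes = np.sqrt(eigs[::-1])                         # descending order
    return np.concatenate([centroid, axes])

def match_thickenings(prev, curr, axis_weight=2.0):
    """Greedy nearest-signature matching of thickenings between two CT
    time points.  prev/curr are lists of (n_i, 3) voxel-coordinate
    arrays; returns index pairs (i_prev, j_curr)."""
    sig_p = [pca_signature(t) for t in prev]
    sig_c = [pca_signature(t) for t in curr]
    w = np.array([1, 1, 1, axis_weight, axis_weight, axis_weight])
    pairs, used = [], set()
    for i, sp in enumerate(sig_p):
        d = [np.inf if j in used else np.linalg.norm(w * (sp - sc))
             for j, sc in enumerate(sig_c)]
        j = int(np.argmin(d))
        if np.isfinite(d[j]):
            pairs.append((i, j))
            used.add(j)
    return pairs
```

Matched pairs can then be compared in volume and thickness to produce the quantitative follow-up documentation the abstract calls for.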
A vibration-based health monitoring program for a large and seismically vulnerable masonry dome
NASA Astrophysics Data System (ADS)
Pecorelli, M. L.; Ceravolo, R.; De Lucia, G.; Epicoco, R.
2017-05-01
Vibration-based health monitoring of monumental structures must rely on efficient and, as far as possible, automatic modal analysis procedures. The relatively low excitation energy provided by traffic, wind and other ambient sources is usually sufficient to detect structural changes, such as those produced by earthquakes and extreme events. Above all, in-operation modal analysis is a non-invasive diagnostic technique that can support optimal strategies for the preservation of architectural heritage, especially if complemented by model-driven procedures. In this paper, the preliminary steps towards a fully automated vibration-based monitoring of the world's largest masonry oval dome (internal axes of 37.23 by 24.89 m) are presented. More specifically, the paper reports on the signal treatment operations conducted to set up the permanent dynamic monitoring system of the dome and to realise a robust automatic identification procedure. Preliminary considerations on the effects of temperature on the dynamic parameters are finally reported.
10 CFR 1045.38 - Automatic declassification prohibition.
Code of Federal Regulations, 2012 CFR
2012-01-01
... Automatic declassification prohibition. (a) Documents containing RD and FRD remain classified until a... Energy Act, no date or event for automatic declassification ever applies to RD and FRD documents, even if such documents also contain NSI. (c) E.O. 12958 acknowledges that RD and FRD are exempt from all...
10 CFR 1045.38 - Automatic declassification prohibition.
Code of Federal Regulations, 2013 CFR
2013-01-01
... Automatic declassification prohibition. (a) Documents containing RD and FRD remain classified until a... Energy Act, no date or event for automatic declassification ever applies to RD and FRD documents, even if such documents also contain NSI. (c) E.O. 12958 acknowledges that RD and FRD are exempt from all...
10 CFR 1045.38 - Automatic declassification prohibition.
Code of Federal Regulations, 2014 CFR
2014-01-01
... Automatic declassification prohibition. (a) Documents containing RD and FRD remain classified until a... Energy Act, no date or event for automatic declassification ever applies to RD and FRD documents, even if such documents also contain NSI. (c) E.O. 12958 acknowledges that RD and FRD are exempt from all...
10 CFR 1045.38 - Automatic declassification prohibition.
Code of Federal Regulations, 2011 CFR
2011-01-01
... Automatic declassification prohibition. (a) Documents containing RD and FRD remain classified until a... Energy Act, no date or event for automatic declassification ever applies to RD and FRD documents, even if such documents also contain NSI. (c) E.O. 12958 acknowledges that RD and FRD are exempt from all...
10 CFR 1045.38 - Automatic declassification prohibition.
Code of Federal Regulations, 2010 CFR
2010-01-01
... Automatic declassification prohibition. (a) Documents containing RD and FRD remain classified until a... Energy Act, no date or event for automatic declassification ever applies to RD and FRD documents, even if such documents also contain NSI. (c) E.O. 12958 acknowledges that RD and FRD are exempt from all...
Event-Related Potential Evidence that Automatic Recollection Can Be Voluntarily Avoided
ERIC Educational Resources Information Center
Bergstrom, Zara M.; de Fockert, Jan; Richardson-Klavehn, Alan
2009-01-01
Voluntary control processes can be recruited to facilitate recollection in situations where a retrieval cue fails to automatically bring to mind a desired episodic memory. We investigated whether voluntary control processes can also stop recollection of unwanted memories that would otherwise have been automatically recollected. Participants were…
NASA Astrophysics Data System (ADS)
Krysta, Monika; Kushida, Noriyuki; Kotselko, Yuriy; Carter, Jerry
2016-04-01
The International Data Centre (IDC) of the Comprehensive Nuclear-Test-Ban Treaty Organization (CTBTO) has long explored possibilities for associating information from the four pillars of the CTBT monitoring and verification regime, namely the seismic, infrasound, hydroacoustic and radionuclide networks. Based on the concept of overlaying waveform events with the geographical regions constituting possible sources of the detected radionuclides, interactive and non-interactive tools were built in the past, and a design for a prototype Fused Event Bulletin was recently proposed on the same concept. One of the key design elements of the proposed approach is the ability to access fusion results from either the radionuclide or the waveform technology products, which become available on different time scales and through various automatic and interactive products. To accommodate these time scales, a dynamic product is envisioned that evolves as the results of the different technologies are processed and compiled. The product would be available through the Secure Web Portal (SWP). In this presentation we describe the implementation of the data fusion functionality in the test framework of the SWP and address possible refinements to the concepts already implemented.
Learned Helplessness at Fifty: Insights from Neuroscience
Maier, Steven F.; Seligman, Martin E. P.
2016-01-01
Learned helplessness, the failure to escape shock induced by uncontrollable aversive events, was discovered half a century ago. Seligman and Maier (1967) theorized that animals learned that outcomes were independent of their responses—that nothing they did mattered – and that this learning undermined trying to escape. The mechanism of learned helplessness is now very well-charted biologically and the original theory got it backwards. Passivity in response to shock is not learned. It is the default, unlearned response to prolonged aversive events and it is mediated by the serotonergic activity of the dorsal raphe nucleus, which in turn inhibits escape. This passivity can be overcome by learning control, with the activity of the medial prefrontal cortex, which subserves the detection of control leading to the automatic inhibition of the dorsal raphe nucleus. So animals learn that they can control aversive events, but the passive failure to learn to escape is an unlearned reaction to prolonged aversive stimulation. In addition, alterations of the ventromedial prefrontal cortex-dorsal raphe pathway can come to subserve the expectation of control. We speculate that default passivity and the compensating detection and expectation of control may have substantial implications for how to treat depression. PMID:27337390
Learned helplessness at fifty: Insights from neuroscience.
Maier, Steven F; Seligman, Martin E P
2016-07-01
Learned helplessness, the failure to escape shock induced by uncontrollable aversive events, was discovered half a century ago. Seligman and Maier (1967) theorized that animals learned that outcomes were independent of their responses-that nothing they did mattered-and that this learning undermined trying to escape. The mechanism of learned helplessness is now very well-charted biologically, and the original theory got it backward. Passivity in response to shock is not learned. It is the default, unlearned response to prolonged aversive events and it is mediated by the serotonergic activity of the dorsal raphe nucleus, which in turn inhibits escape. This passivity can be overcome by learning control, with the activity of the medial prefrontal cortex, which subserves the detection of control leading to the automatic inhibition of the dorsal raphe nucleus. So animals learn that they can control aversive events, but the passive failure to learn to escape is an unlearned reaction to prolonged aversive stimulation. In addition, alterations of the ventromedial prefrontal cortex-dorsal raphe pathway can come to subserve the expectation of control. We speculate that default passivity and the compensating detection and expectation of control may have substantial implications for how to treat depression. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
Trunk, Attila; Stefanics, Gábor; Zentai, Norbert; Kovács-Bálint, Zsófia; Thuróczy, György; Hernádi, István
2013-01-01
Potential effects of a 30 min exposure to third generation (3G) Universal Mobile Telecommunications System (UMTS) mobile phone-like electromagnetic fields (EMFs) were investigated on human brain electrical activity in two experiments. In the first experiment, spontaneous electroencephalography (sEEG) was analyzed (n = 17); in the second experiment, auditory event-related potentials (ERPs) and automatic deviance detection processes reflected by mismatch negativity (MMN) were investigated in a passive oddball paradigm (n = 26). Both sEEG and ERP experiments followed a double-blind protocol where subjects were exposed to either genuine or sham irradiation in two separate sessions. In both experiments, electroencephalograms (EEG) were recorded at midline electrode sites before and after exposure while subjects were watching a silent documentary. Spectral power of sEEG data was analyzed in the delta, theta, alpha, and beta frequency bands. In the ERP experiment, subjects were presented with a random series of standard (90%) and frequency-deviant (10%) tones in a passive binaural oddball paradigm. The amplitude and latency of the P50, N100, P200, MMN, and P3a components were analyzed. We found no measurable effects of a 30 min 3G mobile phone irradiation on the EEG spectral power in any frequency band studied. Also, we found no significant effects of EMF irradiation on the amplitude and latency of any of the ERP components. In summary, the present results do not support the notion that a 30 min unilateral 3G EMF exposure interferes with human sEEG activity, auditory evoked potentials or automatic deviance detection indexed by MMN. Copyright © 2012 Wiley Periodicals, Inc.
NASA Astrophysics Data System (ADS)
Applbaum, David; Dorman, Lev; Pustil'Nik, Lev; Sternlieb, Abraham; Zagnetko, Alexander; Zukerman, Igor
It is well known that during great SEP events, fluxes of energetic particles can be so large that the memory of computers and other electronics in space may be destroyed, and satellites and spacecraft may cease to function. According to the NOAA Space Weather Prediction Center, the following scales constitute dangerous solar radiation storms: S5, extreme (flux of particles with energy ∼10 MeV above 10^5); S4, severe (flux above 10^4); and S3, strong (flux above 10^3). In these periods it is necessary to switch off some of the electronics for a few hours. High-energy particles (those with a few GeV/nucleon and higher) are transported to Earth from the Sun faster (about 20 minutes after they accelerate and escape into the solar wind) than the main bulk of lower-energy particles (about 60 minutes later). Here we describe the principles and operational experience of the automatic "SEP-Search" program. A positive result, marking the exact beginning of an SEP event at the Emilio Segre Observatory (cutoff rigidity 10.8 GV), is now determined automatically by a simultaneous increase of 2.5 standard deviations in two sections of the neutron monitor. The "SEP-Search" program then uses 1-min data to check whether or not the observed increase reflects the beginning of a real SEP event, after which "SEP-Research" automatically starts to work online. We also determine the probabilities of false and missed alerts.
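The coincidence criterion (a simultaneous 2.5-standard-deviation increase in two monitor sections) can be sketched as follows; the baseline length and channel layout are illustrative assumptions, not details of the actual SEP-Search code.

```python
import numpy as np

def sep_alert(section1, section2, baseline_n=60, k=2.5):
    """Return sample indices (1-min data assumed) where BOTH monitor
    sections exceed their quiet-time baseline mean by k standard
    deviations simultaneously."""
    def zscore(x):
        base = x[:baseline_n]                     # quiet-time reference
        return (x - base.mean()) / base.std(ddof=1)
    hits = (zscore(section1) > k) & (zscore(section2) > k)
    return np.flatnonzero(hits)
```

Requiring the excursion in both sections at once is what suppresses false alerts from single-channel noise spikes.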
The algorithm for automatic detection of the calibration object
NASA Astrophysics Data System (ADS)
Artem, Kruglov; Irina, Ugfeld
2017-06-01
The problem of automatic image calibration is considered in this paper. The most challenging task of automatic calibration is proper detection of the calibration object. Solving this problem required the application of digital image processing methods and algorithms such as morphology, filtering, edge detection, and shape approximation. The step-by-step development of the algorithm and its adaptation to the specific conditions of log cuts in the image background is presented. Testing of the automatic calibration module was carried out under the production conditions of a logging enterprise. In these tests the calibration object was isolated automatically in 86.1% of cases on average, with no type I errors. The algorithm was implemented in the automatic calibration module within mobile software for log deck volume measurement.
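The kind of pipeline the abstract names (threshold, morphology, connected components, shape test) can be sketched minimally as below; the bounding-box circularity criterion and all parameters are assumptions for the example, not the authors' algorithm.

```python
import numpy as np
from scipy import ndimage

def find_calibration_disc(image, threshold=0.5, min_area=30):
    """Locate a bright, roughly circular calibration marker:
    threshold -> morphological opening (noise removal) -> connected
    components -> pick the blob whose bounding-box fill ratio is
    closest to that of an ideal disc (pi/4)."""
    mask = ndimage.binary_opening(image > threshold, structure=np.ones((3, 3)))
    labels, _ = ndimage.label(mask)
    target = np.pi / 4.0
    best, best_err = None, np.inf
    for i, sl in enumerate(ndimage.find_objects(labels), start=1):
        blob = labels[sl] == i
        area = blob.sum()
        if area < min_area:
            continue                       # too small: noise or debris
        err = abs(area / blob.size - target)
        if err < best_err:
            best, best_err = i, err
    if best is None:
        return None
    return ndimage.center_of_mass(labels == best)
```

The fill-ratio test is what rejects the bright but rectangular log cuts in the background, which the abstract identifies as the main source of confusion.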
VAP/VAT: video analytics platform and test bed for testing and deploying video analytics
NASA Astrophysics Data System (ADS)
Gorodnichy, Dmitry O.; Dubrofsky, Elan
2010-04-01
Deploying video analytics (VA) in operational environments is extremely challenging. This paper presents a methodological approach developed by the Video Surveillance and Biometrics Section (VSB) of the Science and Engineering Directorate (S&E) of the Canada Border Services Agency (CBSA) to resolve these problems. A three-phase approach to enable VA deployment within an operational agency is presented, and the Video Analytics Platform and Testbed (VAP/VAT) developed by the VSB section is introduced. In addition to allowing the integration of third-party and in-house-built VA codes into an existing video surveillance infrastructure, VAP/VAT also allows the agency to conduct an unbiased performance evaluation of the cameras and VA software available on the market. VAP/VAT consists of two components: EventCapture, which serves to automatically detect a "Visual Event", and EventBrowser, which serves to display and peruse the "Visual Details" captured at the "Visual Event". To deal with both open-architecture and closed-architecture cameras, two video-feed capture mechanisms have been developed within the EventCapture component: IPCamCapture and ScreenCapture.
ERIC Educational Resources Information Center
Liu, Hongyan; Hu, Zhiguo; Peng, Danling; Yang, Yanhui; Li, Kuncheng
2010-01-01
The brain activity associated with automatic semantic priming has been extensively studied. Thus far there has been no prior study that directly contrasts the neural mechanisms of semantic and affective priming. The present study employed event-related fMRI to examine the common and distinct neural bases underlying conceptual and affective priming…
Farquhar, J; Hill, N J
2013-04-01
Detecting event-related potentials (ERPs) from single trials is critical to the operation of many stimulus-driven brain-computer interface (BCI) systems. The low strength of the ERP signal compared to the noise (due to artifacts and BCI-irrelevant brain processes) makes this a challenging signal detection problem. Previous work has tended to focus on how best to detect a single ERP type (such as the visual oddball response). However, the underlying ERP detection problem is essentially the same regardless of stimulus modality (e.g., visual or tactile), ERP component (e.g., the P300 oddball response or the error potential), measurement system or electrode layout. To investigate whether a single ERP detection method might work for a wider range of ERP BCIs, we compare detection performance over a large corpus of more than 50 ERP BCI datasets whilst systematically varying the electrode montage, spectral filter, spatial filter and classifier training methods. We identify an interesting interaction between spatial whitening and regularised classification which made detection performance independent of the choice of spectral-filter low-pass frequency. Our results show that a pipeline consisting of spectral filtering, spatial whitening, and regularised classification gives near-maximal performance in all cases. Importantly, this pipeline is simple to implement and completely automatic, with no expert feature selection or parameter tuning required. Thus, we recommend this combination as a "best-practice" method for ERP detection problems.
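A minimal version of the recommended pipeline (spectral low-pass, spatial whitening, regularised linear classification) might look like the sketch below; the filter order, cut-off frequency, and ridge penalty are illustrative assumptions, not values from the paper.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def pipeline_fit(X, y, fs=100.0, cutoff=8.0, lam=1.0):
    """X: trials x channels x samples, y in {-1, +1}.
    Returns (filter+whitener, weights) for the three-stage ERP detector."""
    b, a = butter(4, cutoff / (fs / 2.0))               # spectral low-pass
    Xf = filtfilt(b, a, X, axis=-1)
    cov = np.mean([x @ x.T / x.shape[1] for x in Xf], axis=0)
    d, U = np.linalg.eigh(cov)                          # spatial whitening
    W = U @ np.diag(1.0 / np.sqrt(d + 1e-10)) @ U.T
    Xw = np.einsum('ij,njt->nit', W, Xf)
    F = np.hstack([Xw.reshape(len(Xw), -1), np.ones((len(Xw), 1))])
    w = np.linalg.solve(F.T @ F + lam * np.eye(F.shape[1]), F.T @ y)
    return (b, a, W), w                                 # ridge regression

def pipeline_predict(model, w, X):
    b, a, W = model
    Xw = np.einsum('ij,njt->nit', W, filtfilt(b, a, X, axis=-1))
    F = np.hstack([Xw.reshape(len(Xw), -1), np.ones((len(Xw), 1))])
    return np.sign(F @ w)
```

Whitening equalises the channel covariance, which is one plausible reading of why the paper finds regularised classification after whitening insensitive to the low-pass choice.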
Electrophysiological Correlates of Automatic Visual Change Detection in School-Age Children
ERIC Educational Resources Information Center
Clery, Helen; Roux, Sylvie; Besle, Julien; Giard, Marie-Helene; Bruneau, Nicole; Gomot, Marie
2012-01-01
Automatic stimulus-change detection is usually investigated in the auditory modality by studying Mismatch Negativity (MMN). Although the change-detection process occurs in all sensory modalities, little is known about visual deviance detection, particularly regarding the development of this brain function throughout childhood. The aim of the…
NASA Astrophysics Data System (ADS)
Liang, Sheng-Fu; Chen, Yi-Chun; Wang, Yu-Lin; Chen, Pin-Tzu; Yang, Chia-Hsiang; Chiueh, Herming
2013-08-01
Objective. Around 1% of the world's population is affected by epilepsy, and nearly 25% of patients cannot be treated effectively by available therapies. Closed-loop seizure-triggered stimulation provides a promising solution for these patients, and realization of fast, accurate, and energy-efficient seizure detection is the key to such implants. In this study, we propose a two-stage on-line seizure detection algorithm with low energy consumption for temporal lobe epilepsy (TLE). Approach. Multi-channel signals are processed through independent component analysis, and the most representative independent component (IC) is automatically selected to eliminate artifacts. Seizure-like intracranial electroencephalogram (iEEG) segments are rapidly detected in the first stage of the proposed method and confirmed in the second stage. The conditional activation of the second-stage signal processing reduces the computational effort, and hence the energy consumed, since most non-seizure events are filtered out in the first stage. Main results. Long-term iEEG recordings of 11 patients who suffered from TLE were analyzed via leave-one-out cross-validation. The proposed method has a detection accuracy of 95.24%, a false alarm rate of 0.09/h, and an average detection delay of 9.2 s. For the six patients with mesial TLE, a detection accuracy of 100.0%, a false alarm rate of 0.06/h, and an average detection delay of 4.8 s can be achieved. The hierarchical approach provides a 90% energy reduction, yielding an effective and energy-efficient implementation for real-time epileptic seizure detection. Significance. An on-line seizure detection method was developed that can be applied to monitor continuous iEEG signals of patients with TLE, together with an IC selection strategy that automatically determines the most seizure-related IC for seizure detection.
The system has advantages of (1) high detection accuracy, (2) low false alarm, (3) short detection latency, and (4) energy-efficient design for hardware implementation.
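The energy saving of a two-stage design comes from gating the expensive second stage on a cheap first-stage feature. A schematic sketch (the line-length feature, thresholds, and stage-2 check are illustrative assumptions, not the paper's algorithm):

```python
import numpy as np

def two_stage_detect(windows, cheap_thresh, costly_check):
    """Stage 1: cheap line-length screen on every window.
    Stage 2: costly confirmation, run only on windows that pass stage 1.
    Returns (detected window indices, number of stage-2 invocations)."""
    detections, stage2_calls = [], 0
    for i, w in enumerate(windows):
        line_length = np.abs(np.diff(w)).mean()   # cheap per-window feature
        if line_length < cheap_thresh:
            continue                              # rejected early, no stage 2
        stage2_calls += 1
        if costly_check(w):
            detections.append(i)
    return detections, stage2_calls
```

Counting `stage2_calls` against the total number of windows makes the claimed energy reduction directly measurable: the expensive path runs only on the rare seizure-like segments.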
Dolecheck, K A; Silvia, W J; Heersche, G; Chang, Y M; Ray, D L; Stone, A E; Wadsworth, B A; Bewley, J M
2015-12-01
This study included 2 objectives. The first objective was to describe estrus-related changes in parameters automatically recorded by the CowManager SensOor (Agis Automatisering, Harmelen, the Netherlands), DVM bolus (DVM Systems LLC, Greeley, CO), HR Tag (SCR Engineers Ltd., Netanya, Israel), IceQube (IceRobotics Ltd., Edinburgh, UK), and Track a Cow (Animart Inc., Beaver Dam, WI). This objective was accomplished using 35 cows in 3 groups between January and June 2013 at the University of Kentucky Coldstream Dairy. We used a modified Ovsynch with G7G protocol to partially synchronize ovulation, ending after the last PGF2α injection (d 0) to allow estrus expression. Visual observation for standing estrus was conducted for four 30-min periods at 0330, 1000, 1430, and 2200 h on d 2, 3, 4, and 5. Eighteen of the 35 cows stood to be mounted at least once during the observation period. These cows were used to compare differences between the 6 h before and after the first standing event (estrus) and the 2 wk preceding that period (nonestrus) for all technology parameters. Differences between estrus and nonestrus were observed for CowManager SensOor minutes feeding per hour, minutes of high ear activity per hour, and minutes ruminating per hour; twice-daily DVM bolus reticulorumen temperature; HR Tag neck activity per 2 h and minutes ruminating per 2 h; IceQube lying bouts per hour, minutes lying per hour, and number of steps per hour; and Track a Cow leg activity per hour and minutes lying per hour. No difference between estrus and nonestrus was observed for CowManager SensOor ear surface temperature per hour. The second objective of this study was to explore the estrus detection potential of machine-learning techniques using automatically collected data. Three machine-learning techniques (random forest, linear discriminant analysis, and neural network) were applied to automatically collected parameter data from the 18 cows observed in standing estrus.
Machine learning accuracy for all technologies ranged from 91.0 to 100.0%. When we compared visual observation with progesterone profiles of all 32 cows, we found 65.6% accuracy. Based on these results, machine-learning techniques have potential to be applied to automatically collected technology data for estrus detection. Copyright © 2015 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
Bicen, A Ozan; West, Leanne L; Cesar, Liliana; Inan, Omer T
2018-01-01
Intravenous (IV) therapy is prevalent in hospital settings, where fluids are typically delivered with an IV into a peripheral vein of the patient. IV infiltration is the inadvertent delivery of fluids into the extravascular space rather than into the vein; it requires urgent treatment to avoid scarring and severe tissue damage, and medical staff currently need to check patients for it periodically. In this paper, the performance of two non-invasive sensing modalities, electrical bioimpedance (EBI) and skin strain sensing, for the automatic detection of IV infiltration was investigated in an animal model. Infiltrations were physically simulated on the hind limb of anesthetized pigs, where the sensors for EBI and skin strain sensing were co-located. The obtained data were used to examine the ability to distinguish between infusion into the vein and an infiltration event using bioresistance and bioreactance (derived from EBI), as well as skin strain. Skin strain and bioresistance sensing could achieve detection rates greater than 0.9 for infiltration fluid volumes of 2 and 10 mL, respectively, at a given false positive (false alarm) rate of 0.05. Furthermore, the fusion of multiple sensing modalities could achieve a detection rate of 0.97 with a false alarm rate of 0.096 for a 5 mL infiltration volume. EBI and skin strain sensing can enable non-invasive, real-time IV infiltration detection systems, and fusing multiple sensing modalities can help detect an expanded range of leaking fluid volumes. The performance results and comparisons provided in this paper are an important step toward clinical translation of sensing technologies for detecting IV infiltration.
Bicen, A. Ozan; West, Leanne L.; Cesar, Liliana
2018-01-01
Intravenous (IV) therapy is prevalent in hospital settings, where fluids are typically delivered with an IV into a peripheral vein of the patient. IV infiltration is the inadvertent delivery of fluids into the extravascular space rather than into the vein; it requires urgent treatment to avoid scarring and severe tissue damage, and medical staff currently need to check patients for it periodically. In this paper, the performance of two non-invasive sensing modalities, electrical bioimpedance (EBI) and skin strain sensing, for the automatic detection of IV infiltration was investigated in an animal model. Infiltrations were physically simulated on the hind limb of anesthetized pigs, where the sensors for EBI and skin strain sensing were co-located. The obtained data were used to examine the ability to distinguish between infusion into the vein and an infiltration event using bioresistance and bioreactance (derived from EBI), as well as skin strain. Skin strain and bioresistance sensing could achieve detection rates greater than 0.9 for infiltration fluid volumes of 2 and 10 mL, respectively, at a given false positive (false alarm) rate of 0.05. Furthermore, the fusion of multiple sensing modalities could achieve a detection rate of 0.97 with a false alarm rate of 0.096 for a 5 mL infiltration volume. EBI and skin strain sensing can enable non-invasive, real-time IV infiltration detection systems, and fusing multiple sensing modalities can help detect an expanded range of leaking fluid volumes. The performance results and comparisons provided in this paper are an important step toward clinical translation of sensing technologies for detecting IV infiltration. PMID:29692956
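The operating points reported above (a detection rate at a fixed false-alarm rate, improved by fusing modalities) can be reproduced schematically. The Gaussian score distributions, the variable names (`ebi_*`, `strain_*`), and the simple sum-fusion below are assumptions for illustration, not the paper's statistical model.

```python
import numpy as np

def detection_rate_at_fa(pos_scores, neg_scores, fa_rate=0.05):
    """Choose the threshold that yields the requested false-alarm rate on
    the negative class, then report the detection (true-positive) rate."""
    threshold = np.quantile(neg_scores, 1.0 - fa_rate)
    return float(np.mean(pos_scores > threshold))

# synthetic per-modality scores: infiltration trials score higher on average
rng = np.random.default_rng(4)
n = 5000
ebi_pos, ebi_neg = rng.normal(2, 1, n), rng.normal(0, 1, n)        # modality 1
strain_pos, strain_neg = rng.normal(2, 1, n), rng.normal(0, 1, n)  # modality 2
fused_pos, fused_neg = ebi_pos + strain_pos, ebi_neg + strain_neg  # sum fusion
```

With independent noise in the two modalities, summing scores raises the effective signal-to-noise ratio, which is the intuition behind the fused detector outperforming either modality alone.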
Evaluating Southern Ocean Carbon Eddy-Pump From Biogeochemical-Argo Floats
NASA Astrophysics Data System (ADS)
Llort, Joan; Langlais, C.; Matear, R.; Moreau, S.; Lenton, A.; Strutton, Peter G.
2018-02-01
The vertical transport of surface water and carbon into the ocean's interior, known as subduction, is one of the main mechanisms through which the ocean influences Earth's climate. New instrumental approaches have shown the occurrence of localized and intermittent subduction episodes associated with small-scale ocean circulation features. These studies also revealed the importance of such events for the export of organic matter, the so-called eddy-pump. However, the transient and localized nature of episodic subduction has hindered its large-scale evaluation to date. In this work, we present an approach to detect subduction events at the scale of the Southern Ocean using measurements collected by biogeochemical autonomous floats (BGC-Argo). We show how subduction events can be automatically identified as anomalies of spiciness and apparent oxygen utilization (AOU) below the mixed layer. Applying this methodology to more than 4,000 profiles, we detected 40 subduction events unevenly distributed across the Southern Ocean. Events were more likely found in hot spots of eddy kinetic energy (EKE), downstream of major bathymetric features. Moreover, the bio-optical measurements provided by BGC-Argo allowed us to measure the amount of particulate organic carbon (POC) being subducted and to assess the contribution of these events to the total downward carbon flux at 100 m (EP100). We estimate that the eddy-pump represents less than 19% of EP100 in the Southern Ocean, although we observed particularly strong events able to locally double EP100. This approach provides a novel perspective on where episodic subduction occurs that will naturally improve as BGC-Argo observations continue to increase.
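The detection criterion (coincident spiciness and AOU anomalies below the mixed layer) can be expressed as a short sketch; the quadratic vertical-trend fit and the 3-sigma residual threshold are assumptions for the example, not the paper's exact method.

```python
import numpy as np

def find_subduction_events(depth, spice, aou, mld, k=3.0):
    """Return depths below the mixed layer where spiciness AND apparent
    oxygen utilisation both deviate from a smooth vertical trend by more
    than k standard deviations of the residual."""
    below = depth > mld
    def is_anomalous(profile):
        coeffs = np.polyfit(depth[below], profile[below], 2)
        residual = profile[below] - np.polyval(coeffs, depth[below])
        return np.abs(residual) > k * residual.std()
    hits = is_anomalous(spice) & is_anomalous(aou)     # require coincidence
    return depth[below][hits]
```

Requiring both tracers to be anomalous at the same depth is what distinguishes a parcel of recently subducted surface water from ordinary interior variability in either tracer alone.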
Application of image recognition-based automatic hyphae detection in fungal keratitis.
Wu, Xuelian; Tao, Yuan; Qiu, Qingchen; Wu, Xinyi
2018-03-01
The purpose of this study is to compare the accuracy of two methods for diagnosing fungal keratitis: automatic hyphae detection based on image recognition, and corneal smear examination. We evaluate the sensitivity and specificity of automatic image-recognition-based hyphae detection, analyze the consistency between clinical symptoms and hyphae density, and quantify that density with the automatic method. Our study included 56 cases of fungal keratitis (one eye each) and 23 cases of bacterial keratitis. All cases underwent routine slit-lamp biomicroscopy, corneal smear examination, microorganism culture, and assessment of in vivo confocal microscopy images before medical treatment was started. We then applied automatic hyphae detection based on image recognition to the in vivo confocal microscopy images to evaluate its sensitivity and specificity and compare it with corneal smear examination; next, we used a density index to assess the severity of infection and examined its correlation and consistency with the patients' clinical symptoms. The accuracy of this technology was superior to corneal smear examination (p < 0.05): its sensitivity was 89.29% and its specificity 95.65%, with an area under the ROC curve of 0.946. The correlation coefficient between severity grading by automatic hyphae detection and clinical grading was 0.87. Automatic hyphae detection based on image recognition thus identified fungal keratitis with high sensitivity and specificity, outperforming corneal smear examination.
Compared with conventional manual reading of confocal corneal images, the technology is accurate, stable, and does not rely on human expertise, which makes it most useful to clinicians unfamiliar with fungal keratitis. It can quantify and grade hyphae density and, being noninvasive, provides a timely, accurate, objective, and quantitative evaluation criterion for fungal keratitis.
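The reported figures follow directly from a confusion matrix. This generic helper (not the authors' code) shows how sensitivity 89.29% and specificity 95.65% correspond to 50 of 56 fungal cases and 22 of 23 bacterial cases being classified correctly; the counts are inferred from the percentages, not stated in the abstract.

```python
import numpy as np

def sensitivity_specificity(y_true, y_pred):
    """Sensitivity = TP/(TP+FN); specificity = TN/(TN+FP)."""
    tp = np.sum((y_true == 1) & (y_pred == 1))
    tn = np.sum((y_true == 0) & (y_pred == 0))
    fp = np.sum((y_true == 0) & (y_pred == 1))
    fn = np.sum((y_true == 1) & (y_pred == 0))
    return tp / (tp + fn), tn / (tn + fp)

# 56 fungal (positive) and 23 bacterial (negative) cases; 50 true positives
# and 22 true negatives inferred from the reported percentages
y_true = np.array([1] * 56 + [0] * 23)
y_pred = np.array([1] * 50 + [0] * 6 + [0] * 22 + [1] * 1)
```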
Yoshida, Wataru; Kezuka, Aki; Murakami, Yoshiyuki; Lee, Jinhee; Abe, Koichi; Motoki, Hiroaki; Matsuo, Takafumi; Shimura, Nobuaki; Noda, Mamoru; Igimi, Shizunobu; Ikebukuro, Kazunori
2013-11-01
An automatic polymerase chain reaction (PCR) product detection system for food safety monitoring, based on zinc finger (ZF) protein fused to luciferase, was developed. A ZF protein fused to luciferase binds specifically to its target double-stranded DNA sequence while retaining luciferase enzymatic activity; PCR products that contain the ZF recognition sequence can therefore be detected by measuring the luciferase activity of the fusion protein. We previously reported that PCR products from Legionella pneumophila and Escherichia coli (E. coli) O157 genomic DNA were detected by Zif268, a natural ZF protein, fused to luciferase. In this study, Zif268-luciferase was applied to detect the presence of Salmonella and coliforms, and an artificial zinc finger protein (B2) fused to luciferase was constructed for a Norovirus detection system. Because the luciferase activity detection assay requires several bound/free separation steps, an analyzer that performs these steps automatically was developed to detect PCR products using the ZF-luciferase fusion protein. With this automatic analyzer, target pathogenic genomes were specifically detected in the presence of other pathogenic genomes. Moreover, we succeeded in detecting 10 copies of E. coli BL21 without extraction of genomic DNA, and E. coli was detected with a logarithmic dependency in the range of 1.0×10 to 1.0×10^6 copies. Copyright © 2013 Elsevier B.V. All rights reserved.
[Application of automatic photography in Schistosoma japonicum miracidium hatching experiments].
Ming-Li, Zhou; Ai-Ling, Cai; Xue-Feng, Wang
2016-05-20
To explore the value of automatic photography in observing the results of Schistosoma japonicum miracidium hatching experiments, fresh S. japonicum eggs were added to cow feces, and the samples were divided into a low-infestation experimental group and a high-infestation group (40 samples each); a negative control group comprised 40 samples of cow feces without S. japonicum eggs. Conventional nylon-bag S. japonicum miracidium hatching experiments were performed. The process was observed with a flashlight and magnifying glass combined with automatic video recording (the automatic photography method) and, at the same time, by naked-eye observation, and the results were compared. In the low-infestation group, the miracidium-positive detection rates were 57.5% by naked-eye observation and 85.0% by automatic photography (χ² = 11.723, P < 0.05). In the high-infestation group, the rates were 97.5% and 100%, respectively (χ² = 1.253, P > 0.05). Across the two infested groups, the average positive detection rates were 77.5% and 92.5%, respectively (χ² = 6.894, P < 0.05). Automatic photography can therefore effectively improve the positive detection rate in S. japonicum miracidium hatching experiments.
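The group comparisons above are standard 2×2 chi-square tests on counts recovered from the percentages; they can be checked with `scipy.stats.chi2_contingency` (note that SciPy applies Yates' continuity correction to 2×2 tables by default, so its statistic may differ somewhat from the values quoted in the abstract).

```python
from scipy.stats import chi2_contingency

# low-infestation group: 23/40 positive by naked eye (57.5%) vs 34/40 by
# automatic photography (85.0%); counts derived from the percentages
low = [[23, 40 - 23], [34, 40 - 34]]
chi2_low, p_low, _, _ = chi2_contingency(low)

# high-infestation group: 39/40 (97.5%) vs 40/40 (100%)
high = [[39, 1], [40, 0]]
chi2_high, p_high, _, _ = chi2_contingency(high)
```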
NASA Astrophysics Data System (ADS)
Carestia, Mariachiara; Pizzoferrato, Roberto; Gelfusa, Michela; Cenciarelli, Orlando; Ludovici, Gian Marco; Gabriele, Jessica; Malizia, Andrea; Murari, Andrea; Vega, Jesus; Gaudio, Pasquale
2015-11-01
Biosecurity and biosafety are key concerns of modern society. Although nanomaterials are improving the capacities of point detectors, standoff detection still appears to be an open issue. Laser-induced fluorescence of biological agents (BAs) has proved to be one of the most promising optical techniques to achieve early standoff detection, but its strengths and weaknesses are still to be fully investigated. In particular, different BAs tend to have similar fluorescence spectra due to the ubiquity of biological endogenous fluorophores producing a signal in the UV range, making data analysis extremely challenging. The Universal Multi Event Locator (UMEL), a general method based on support vector regression, is commonly used to identify characteristic structures in arrays of data. In the first part of this work, we investigate fluorescence emission spectra of different simulants of BAs and apply UMEL for their automatic classification. In the second part of this work, we elaborate a strategy for the application of UMEL to the discrimination of different BAs' simulants spectra. Through this strategy, it has been possible to discriminate between these BAs' simulants despite the high similarity of their fluorescence spectra. These preliminary results support the use of SVR methods to classify BAs' spectral signatures.
Salmi, T; Sovijärvi, A R; Brander, P; Piirilä, P
1988-11-01
Reliable long-term assessment of cough is necessary in many clinical and scientific settings. A new method for long-term recording and automatic analysis of cough is presented. The method is based on simultaneous recording of two independent signals: high-pass filtered cough sounds and cough-induced fast movements of the body. The acoustic signals are recorded with a dynamic microphone in the acoustic focus of a glass fiber paraboloid mirror. Body movements are recorded with a static charge-sensitive bed located under an ordinary plastic foam mattress. The patient can be studied lying or sitting with no transducers or electrodes attached. A microcomputer is used for sampling of signals, detection of cough, statistical analyses, and on-line printing of results. The method was validated in seven adult patients with a total of 809 spontaneous cough events, using clinical observation as a reference. The sensitivity of the method to detect cough was 99.0 percent, and the positive predictivity was 98.1 percent. The system ignored speaking and snoring. The method provides a convenient means of reliable long-term follow-up of cough in clinical work and research.
A new type of tri-axial accelerometers with high dynamic range MEMS for earthquake early warning
NASA Astrophysics Data System (ADS)
Peng, Chaoyong; Chen, Yang; Chen, Quansheng; Yang, Jiansi; Wang, Hongti; Zhu, Xiaoyi; Xu, Zhiqiang; Zheng, Yu
2017-03-01
Earthquake Early Warning Systems (EEWS) have shown their efficiency for earthquake damage mitigation. With the progress of low-cost Micro-Electro-Mechanical Systems (MEMS), many types of MEMS-based accelerometers have been developed and widely used to deploy large-scale, dense seismic networks for EEWS. However, the noise performance of these commercially available MEMS devices is still insufficient for weak seismic signals, leading to large scatter in the estimation of early-warning parameters. In this study, we developed a new type of tri-axial accelerometer for EEWS based on high-dynamic-range MEMS with a low noise level. It is a MEMS-integrated data logger with built-in seismological processing. The device is built on a custom-tailored Linux 2.6.27 operating system, and seismic events are detected automatically with an STA/LTA algorithm. When a seismic event is detected, peak ground parameters of all data components are calculated at an interval of 1 s, and τc-Pd values are evaluated using the initial 3 s of the P wave. These values are then organized into a trigger packet actively sent to the processing center for combined event detection. The output data of all three components are calibrated to a sensitivity of 500 counts per cm/s². Several tests and a real field deployment were performed to assess the performance of this device. The results show that the dynamic range can reach 98 dB for the vertical component and 99 dB for the horizontal components, and the majority of bias temperature coefficients are lower than 200 μg/°C. In addition, the results of event detection and the real field deployment demonstrate its capabilities for EEWS and rapid intensity reporting.
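The STA/LTA trigger named in the abstract is a standard seismic detector: the ratio of a short-term average of signal energy to a long-term average rises sharply when a transient arrives. A minimal sketch on synthetic data (an illustrative reconstruction, not the device firmware; the window lengths and the threshold of 5 are arbitrary choices):

```python
import numpy as np

def sta_lta(x, n_sta, n_lta):
    """Causal STA/LTA ratio of signal energy; zero until a full LTA window exists."""
    e = np.asarray(x, dtype=float) ** 2
    c = np.concatenate(([0.0], np.cumsum(e)))
    ratio = np.zeros(len(e))
    for i in range(n_lta, len(e)):
        sta = (c[i + 1] - c[i + 1 - n_sta]) / n_sta   # short-term average
        lta = (c[i + 1] - c[i + 1 - n_lta]) / n_lta   # long-term average
        ratio[i] = sta / lta if lta > 0 else 0.0
    return ratio

# Synthetic record: background noise with a burst ("event") starting at sample 1200.
rng = np.random.default_rng(0)
x = rng.normal(0.0, 0.1, 2000)
x[1200:1300] += rng.normal(0.0, 2.0, 100)

ratio = sta_lta(x, n_sta=20, n_lta=400)
trigger = np.flatnonzero(ratio > 5.0)   # declare a trigger above threshold 5
```

On this synthetic trace the first threshold crossing lands at the burst onset; a real implementation would also debounce the trigger and hold off re-triggering for the duration of the event.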
An automated approach towards detecting complex behaviours in deep brain oscillations.
Mace, Michael; Yousif, Nada; Naushahi, Mohammad; Abdullah-Al-Mamun, Khondaker; Wang, Shouyan; Nandi, Dipankar; Vaidyanathan, Ravi
2014-03-15
Extracting event-related potentials (ERPs) from neurological rhythms is of fundamental importance in neuroscience research. Standard ERP techniques typically require the associated ERP waveform to have low variance, be shape and latency invariant and require many repeated trials. Additionally, the non-ERP part of the signal needs to be sampled from an uncorrelated Gaussian process. This limits methods of analysis to quantifying simple behaviours and movements only when multi-trial data-sets are available. We introduce a method for automatically detecting events associated with complex or large-scale behaviours, where the ERP need not conform to the aforementioned requirements. The algorithm is based on the calculation of a detection contour and adaptive threshold. These are combined using logical operations to produce a binary signal indicating the presence (or absence) of an event with the associated detection parameters tuned using a multi-objective genetic algorithm. To validate the proposed methodology, deep brain signals were recorded from implanted electrodes in patients with Parkinson's disease as they participated in a large movement-based behavioural paradigm. The experiment involved bilateral recordings of local field potentials from the sub-thalamic nucleus (STN) and pedunculopontine nucleus (PPN) during an orientation task. After tuning, the algorithm is able to extract events achieving training set sensitivities and specificities of [87.5 ± 6.5, 76.7 ± 12.8, 90.0 ± 4.1] and [92.6 ± 6.3, 86.0 ± 9.0, 29.8 ± 12.3] (mean ± 1 std) for the three subjects, averaged across the four neural sites. Furthermore, the methodology has the potential for utility in real-time applications as only a single-trial ERP is required. Copyright © 2013 Elsevier B.V. All rights reserved.
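The core of such a detector, a smoothed detection contour compared against an adaptive threshold plus a logical minimum-duration condition, can be illustrated in a few lines. This is a deliberately simplified stand-in (global median/MAD threshold, hand-picked parameters) for the paper's GA-tuned pipeline:

```python
import numpy as np

def detect_events(x, win=50, k=8.0, min_len=30):
    """Return (start, end) sample indices of segments whose smoothed energy
    contour exceeds an adaptive threshold for at least min_len samples."""
    contour = np.convolve(x ** 2, np.ones(win) / win, mode="same")
    med = np.median(contour)
    mad = np.median(np.abs(contour - med))
    binary = contour > med + k * mad            # adaptive threshold
    events, start = [], None
    for i, b in enumerate(binary):              # logical duration condition
        if b and start is None:
            start = i
        elif not b and start is not None:
            if i - start >= min_len:
                events.append((start, i))
            start = None
    if start is not None and len(binary) - start >= min_len:
        events.append((start, len(binary)))
    return events

# Synthetic single trial: one large event embedded in background noise.
rng = np.random.default_rng(4)
x = rng.normal(0.0, 0.1, 1500)
x[600:700] += rng.normal(0.0, 2.0, 100)
events = detect_events(x)
```

Because the whole decision is made on one trial, no averaging over repeated trials is needed, which is the property the paper exploits for real-time use.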
Roye, Anja; Jacobsen, Thomas; Schröger, Erich
2007-08-01
In this human event-related brain potential (ERP) study, we have used one's personal--relative to another person's--ringtone presented in a two-deviant passive oddball paradigm to investigate the long-term memory effects of self-selected personal significance of a sound on the automatic deviance detection and involuntary attention system. Our findings extend the knowledge of long-term effects usually reported in group approaches in the domains of speech, music, and environmental sounds. In addition to the usual mismatch negativity (MMN) and P3a component elicited by deviants in contrast to standard stimuli, we observed a posterior ERP deflection directly following the MMN for the personally significant deviant only. This specific impact of personal significance started around 200 ms after sound onset and involved neural generators that were different from the mere physical deviance detection mechanism. Whereas the early part of the P3a component was unaffected by personal significance, the late P3a was enhanced for the ERPs to the personally significant deviant, suggesting that this stimulus was more powerful in attracting attention involuntarily. Following the involuntary attention switch, the personally significant stimulus elicited a widely distributed negative deflection, probably reflecting further analysis of the significant sound involving evaluation of relevance or reorienting to the primary task. Our data show that the personal significance of mobile phone and text message technology, which has developed as a major medium of communication in our modern world, prompts the formation of individual memory representations, which affect the processing of sounds that are not in the focus of attention.
Zou, Yuan; Nathan, Viswam; Jafari, Roozbeh
2016-01-01
Electroencephalography (EEG) is the recording of electrical activity produced by the firing of neurons within the brain. These activities can be decoded by signal processing techniques. However, EEG recordings are always contaminated with artifacts which hinder the decoding process. Therefore, identifying and removing artifacts is an important step. Researchers often clean EEG recordings with assistance from independent component analysis (ICA), since it can decompose EEG recordings into a number of artifact-related and event-related potential (ERP)-related independent components. However, existing ICA-based artifact identification strategies mostly restrict themselves to a subset of artifacts, e.g., identifying eye movement artifacts only, and have not been shown to reliably identify artifacts caused by nonbiological origins like high-impedance electrodes. In this paper, we propose an automatic algorithm for the identification of general artifacts. The proposed algorithm consists of two parts: 1) an event-related feature-based clustering algorithm used to identify artifacts which have physiological origins; and 2) the electrode-scalp impedance information employed for identifying nonbiological artifacts. The results on EEG data collected from ten subjects show that our algorithm can effectively detect, separate, and remove both physiological and nonbiological artifacts. Qualitative evaluation of the reconstructed EEG signals demonstrates that our proposed method can effectively enhance the signal quality, especially the quality of ERPs, even for those that barely display ERPs in the raw EEG. The performance results also show that our proposed method can effectively identify artifacts and subsequently enhance the classification accuracies compared to four commonly used automatic artifact removal methods.
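The first stage of this kind of pipeline, decomposing multi-channel recordings with ICA and flagging components whose statistics look artifactual, can be sketched with scikit-learn on synthetic data. This toy uses excess kurtosis to flag a sparse, spiky "blink-like" component; the paper's actual algorithm uses event-related feature clustering plus electrode-scalp impedance information, which is not reproduced here:

```python
import numpy as np
from scipy.stats import kurtosis
from sklearn.decomposition import FastICA

rng = np.random.default_rng(1)
t = np.linspace(0.0, 10.0, 5000)
neural = np.sin(2 * np.pi * 10 * t)                  # ongoing 10 Hz rhythm
artifact = np.zeros_like(t)
artifact[rng.integers(0, t.size, 25)] = 8.0          # sparse spiky artifact

# Three synthetic "channels" mixing the two sources, plus sensor noise.
A = np.array([[1.0, 0.4], [0.8, 1.0], [0.3, 0.9]])
X = np.column_stack([neural, artifact]) @ A.T + rng.normal(0.0, 0.05, (t.size, 3))

ica = FastICA(n_components=2, random_state=0)
S = ica.fit_transform(X)                # estimated independent components

k = kurtosis(S, axis=0)                 # spiky components -> huge kurtosis
S_clean = S.copy()
S_clean[:, k > 10.0] = 0.0              # suppress the flagged artifact IC
X_clean = ica.inverse_transform(S_clean)
```

Reconstructing the channels from the retained components (`inverse_transform`) yields the cleaned recording, mirroring the detect-separate-remove sequence described in the abstract.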
Automatic thermographic image defect detection of composites
NASA Astrophysics Data System (ADS)
Luo, Bin; Liebenberg, Bjorn; Raymont, Jeff; Santospirito, SP
2011-05-01
Detecting defects, and especially reliably measuring defect sizes, are critical objectives in automatic NDT defect detection applications. In this work, the Sentence software is proposed for the analysis of pulsed thermography and near-IR images of composite materials. Furthermore, the Sentence software delivers an end-to-end, user-friendly platform for engineers to perform complete manual inspections, as well as tools that allow senior engineers to develop inspection templates and profiles, reducing the requisite thermographic skill level of the operating engineer. Finally, the Sentence software can also offer complete independence from operator decisions through its fully automated "Beep on Defect" detection functionality. The end-to-end automatic inspection system includes sub-systems for defining a panel profile, generating an inspection plan, controlling a robot arm, and capturing thermographic images to detect defects. A statistical model has been built to analyze the entire image, evaluate grey-scale ranges, import sentencing criteria, and automatically detect impact damage defects. A full-width-half-maximum algorithm is used to quantify flaw sizes. The identified defects are imported into the sentencing engine, which then sentences the inspection (automatically compares the analysis results against acceptance criteria) by judging the most significant defect or group of defects against the inspection standards.
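Full-width-half-maximum flaw sizing amounts to finding where a 1-D intensity profile across the indication drops to half of its peak. A minimal, hypothetical implementation (linear interpolation at the two half-maximum crossings; not the Sentence software's code):

```python
import numpy as np

def fwhm(profile):
    """Full width at half maximum of a 1-D intensity profile, in samples,
    with linear interpolation at the two half-maximum crossings."""
    p = np.asarray(profile, dtype=float)
    half = (p.max() + p.min()) / 2.0
    above = np.flatnonzero(p >= half)
    left, right = above[0], above[-1]

    def cross(i, j):
        # position where the profile crosses `half` between samples i and j
        return i + (half - p[i]) / (p[j] - p[i]) * (j - i)

    x_left = cross(left - 1, left) if left > 0 else float(left)
    x_right = cross(right + 1, right) if right < p.size - 1 else float(right)
    return x_right - x_left

# Gaussian test profile: theoretical FWHM = 2*sqrt(2*ln 2)*sigma, i.e. about
# 11.77 samples for sigma = 5.
x = np.arange(100)
profile = np.exp(-0.5 * ((x - 50) / 5.0) ** 2)
width = fwhm(profile)
```

In a defect image the profile would be a line cut through the indication in the subtracted or processed thermogram, and the returned width converts to physical flaw size via the pixel pitch.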
Sazonov, Edward S; Makeyev, Oleksandr; Schuckers, Stephanie; Lopez-Meyer, Paulo; Melanson, Edward L; Neuman, Michael R
2010-03-01
Our understanding of etiology of obesity and overweight is incomplete due to lack of objective and accurate methods for monitoring of ingestive behavior (MIB) in the free-living population. Our research has shown that frequency of swallowing may serve as a predictor for detecting food intake, differentiating liquids and solids, and estimating ingested mass. This paper proposes and compares two methods of acoustical swallowing detection from sounds contaminated by motion artifacts, speech, and external noise. Methods based on mel-scale Fourier spectrum, wavelet packets, and support vector machines are studied considering the effects of epoch size, level of decomposition, and lagging on classification accuracy. The methodology was tested on a large dataset (64.5 h with a total of 9966 swallows) collected from 20 human subjects with various degrees of adiposity. Average weighted epoch-recognition accuracy for intravisit individual models was 96.8%, which resulted in 84.7% average weighted accuracy in detection of swallowing events. These results suggest high efficiency of the proposed methodology in separation of swallowing sounds from artifacts that originate from respiration, intrinsic speech, head movements, food ingestion, and ambient noise. The recognition accuracy was not related to body mass index, suggesting that the methodology is suitable for obese individuals.
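The epoch-classification stage, spectral features per short epoch fed to a support vector machine, can be illustrated with scikit-learn. The features here are synthetic Gaussian stand-ins for mel-spectrum coefficients, so this shows only the classifier plumbing, not the paper's acoustics:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(2)

def epoch_features(n, centre):
    """Stand-in for per-epoch mel-spectrum features: Gaussian blobs."""
    return rng.normal(centre, 1.0, size=(n, 8))

X = np.vstack([epoch_features(200, 0.0),   # background / artifact epochs
               epoch_features(200, 1.5)])  # "swallow" epochs
y = np.array([0] * 200 + [1] * 200)

Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.25,
                                      random_state=0, stratify=y)
clf = SVC(kernel="rbf").fit(Xtr, ytr)
acc = clf.score(Xte, yte)   # epoch-level recognition accuracy
```

In the paper's setting, runs of consecutive positive epochs would then be merged into swallowing events, which is why epoch accuracy and event-detection accuracy are reported separately.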
De Angelis, Silvio; Fee, David; Haney, Matthew; Schneider, David
2012-01-01
In Alaska, where many active volcanoes exist without ground-based instrumentation, the use of techniques suitable for distant monitoring is pivotal. In this study we report regional-scale seismic and infrasound observations of volcanic activity at Mt. Cleveland between December 2011 and August 2012. During this period, twenty explosions were detected by infrasound sensors as far away as 1827 km from the active vent, and ground-coupled acoustic waves were recorded at seismic stations across the Aleutian Arc. Several events resulting from the explosive disruption of small lava domes within the summit crater were confirmed by analysis of satellite remote sensing data. However, many explosions eluded initial, automated analyses of satellite data due to poor weather conditions. Infrasound and seismic monitoring provided effective means for detecting these hidden events. We present results from the implementation of automatic infrasound and seismo-acoustic eruption detection algorithms, and review the challenges of real-time volcano monitoring operations in remote regions. We also model acoustic propagation in the Northern Pacific, showing how tropospheric ducting effects allow infrasound to travel long distances across the Aleutian Arc. The successful results of our investigation provide motivation for expanded efforts in infrasound monitoring across the Aleutians and contribute to our knowledge of the number and style of vulcanian eruptions at Mt. Cleveland.
Predictive modeling of structured electronic health records for adverse drug event detection.
Zhao, Jing; Henriksson, Aron; Asker, Lars; Boström, Henrik
2015-01-01
The digitization of healthcare data, resulting from the increasingly widespread adoption of electronic health records, has greatly facilitated its analysis by computational methods and thereby enabled large-scale secondary use thereof. This can be exploited to support public health activities such as pharmacovigilance, wherein the safety of drugs is monitored to inform regulatory decisions about sustained use. To that end, electronic health records have emerged as a potentially valuable data source, providing access to longitudinal observations of patient treatment and drug use. A nascent line of research concerns predictive modeling of healthcare data for the automatic detection of adverse drug events, which presents its own set of challenges: it is not yet clear how to represent the heterogeneous data types in a manner conducive to learning high-performing machine learning models. Datasets from an electronic health record database are used for learning predictive models with the purpose of detecting adverse drug events. The use and representation of two data types, as well as their combination, are studied: clinical codes, describing prescribed drugs and assigned diagnoses, and measurements. Feature selection is conducted on the various types of data to reduce dimensionality and sparsity, while allowing for an in-depth feature analysis of the usefulness of each data type and representation. Within each data type, combining multiple representations yields better predictive performance compared to using any single representation. The use of clinical codes for adverse drug event detection significantly outperforms the use of measurements; however, there is no significant difference over datasets between using only clinical codes and their combination with measurements. For certain adverse drug events, the combination does, however, outperform using only clinical codes. 
Feature selection leads to increased predictive performance for both data types, in isolation and combined. We have demonstrated how machine learning can be applied to electronic health records for the purpose of detecting adverse drug events and proposed solutions to some of the challenges this presents, including how to represent the various data types. Overall, clinical codes are more useful than measurements and, in specific cases, it is beneficial to combine the two.
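The modeling setup described above (sparse binary clinical-code features combined with dense measurements, univariate feature selection to tame dimensionality and sparsity, then a standard classifier) can be sketched on synthetic data with scikit-learn. All parameters below are illustrative assumptions, not the paper's configuration:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(3)
n = 400
# Sparse binary "clinical code" features; the first three drive the label.
codes = rng.binomial(1, 0.05, size=(n, 300))
y = (codes[:, :3].sum(axis=1) > 0).astype(int)   # synthetic ADE label
# Dense "measurement" features: weakly informative, mostly noise.
meas = rng.normal(0.0, 1.0, size=(n, 30)) + 0.3 * y[:, None]
X = np.hstack([codes, meas])                     # combined representation

model = make_pipeline(SelectKBest(f_classif, k=40),
                      RandomForestClassifier(n_estimators=100, random_state=0))
score = cross_val_score(model, X, y, cv=5, scoring="roc_auc").mean()
```

Putting the feature selector inside the pipeline keeps it inside each cross-validation fold, so selection is refit on training data only and the reported score is not optimistically biased.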
Automatic detection of typical dust devils from Mars landscape images
NASA Astrophysics Data System (ADS)
Ogohara, Kazunori; Watanabe, Takeru; Okumura, Susumu; Hatanaka, Yuji
2018-02-01
This paper presents an improved algorithm for automatic detection of Martian dust devils that successfully extracts tiny bright dust devils and obscured large dust devils from two subtracted landscape images. These dust devils are frequently observed using visible cameras onboard landers or rovers. Nevertheless, previous research on automated detection of dust devils has not focused on these common types of dust devils, but on dust devils that appear on images to be irregularly bright and large. In this study, we detect these common dust devils automatically using two kinds of parameter sets for thresholding when binarizing subtracted images. We automatically extract dust devils from 266 images taken by the Spirit rover to evaluate our algorithm. Taking dust devils detected by visual inspection to be ground truth, the precision, recall and F-measure values are 0.77, 0.86, and 0.81, respectively.
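The detection step, subtracting two co-registered landscape frames and binarising the difference with two parameter sets (one for tiny bright dust devils, one for larger obscured darker ones), reduces to a pair of thresholds on the difference image. A hypothetical sketch with invented thresholds and feature sizes:

```python
import numpy as np

def detect_changes(img_a, img_b, t_bright, t_dark):
    """Binarise the frame difference with two parameter sets: one threshold
    for tiny bright features and a separate one for extended darker features."""
    diff = img_b.astype(float) - img_a.astype(float)
    bright = diff > t_bright     # tiny bright dust devils
    dark = diff < -t_dark        # obscured, darker dust devils
    return bright | dark

# Two hypothetical co-registered frames differing by two features.
a = np.zeros((64, 64))
b = a.copy()
b[10:13, 10:13] = 40.0     # 3x3 bright feature
b[40:50, 40:50] = -15.0    # 10x10 darker feature

mask = detect_changes(a, b, t_bright=30.0, t_dark=10.0)
```

A real pipeline would follow the binarisation with connected-component labelling and size filtering before scoring candidates against ground truth with precision, recall, and F-measure as the abstract reports.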
Automatic detection of articulation disorders in children with cleft lip and palate.
Maier, Andreas; Hönig, Florian; Bocklet, Tobias; Nöth, Elmar; Stelzle, Florian; Nkenke, Emeka; Schuster, Maria
2009-11-01
Speech of children with cleft lip and palate (CLP) is sometimes still disordered even after adequate surgical and nonsurgical therapies. Such speech shows complex articulation disorders, which are usually assessed perceptually, consuming time and manpower. Hence, there is a need for an easy-to-apply and reliable automatic method. To create a reference for an automatic system, speech data of 58 children with CLP were assessed perceptually by experienced speech therapists for characteristic phonetic disorders at the phoneme level. The first part of the article aims to detect such characteristics by a semiautomatic procedure, and the second to evaluate a fully automatic, thus simple, procedure. The methods are based on a combination of speech processing algorithms. The semiautomatic method achieves moderate to good agreement (kappa approximately 0.6) for the detection of all phonetic disorders. On a speaker level, significant correlations of 0.89 between the perceptual evaluation and the automatic system are obtained. The fully automatic system yields a correlation on the speaker level of 0.81 to the perceptual evaluation. This correlation is in the range of the inter-rater correlation of the listeners. The automatic speech evaluation is able to detect phonetic disorders at an expert's level without any additional human postprocessing.
Stock, Ann-Kathrin; Steenbergen, Laura; Colzato, Lorenza; Beste, Christian
2016-12-01
Cognitive control is adaptive in the sense that it inhibits automatic processes to optimize goal-directed behavior, but high levels of control may also have detrimental effects when they suppress beneficial automatisms. Until now, the system neurophysiological mechanisms and functional neuroanatomy underlying these adverse effects of cognitive control have remained elusive. This question was examined by analyzing the automatic exploitation of a beneficial implicit predictive feature under conditions of high versus low cognitive control demands, combining event-related potentials (ERPs) and source localization. It was found that cognitive control prohibits the beneficial automatic exploitation of additional implicit information when task demands are high. Bottom-up perceptual and attentional selection processes (P1 and N1 ERPs) are not modulated by this, but the automatic exploitation of beneficial predictive information under low cognitive control demands was associated with larger response-locked P3 amplitudes and stronger activation of the right inferior frontal gyrus (rIFG, BA47). This suggests that the rIFG plays a key role in the detection of relevant task cues, the exploitation of alternative task sets, and the automatic (bottom-up) implementation and reprogramming of action plans. Moreover, N450 amplitudes were larger under high cognitive control demands, which was associated with activity differences in the right medial frontal gyrus (BA9). This most likely reflects a stronger exploitation of explicit task sets which hinders the exploration of the implicit beneficial information under high cognitive control demands. Hum Brain Mapp 37:4511-4522, 2016. © 2016 Wiley Periodicals, Inc.
SeeCoast: persistent surveillance and automated scene understanding for ports and coastal areas
NASA Astrophysics Data System (ADS)
Rhodes, Bradley J.; Bomberger, Neil A.; Freyman, Todd M.; Kreamer, William; Kirschner, Linda; L'Italien, Adam C.; Mungovan, Wendy; Stauffer, Chris; Stolzar, Lauren; Waxman, Allen M.; Seibert, Michael
2007-04-01
SeeCoast is a prototype US Coast Guard port and coastal area surveillance system that aims to reduce operator workload while maintaining optimal domain awareness by shifting operators' focus from having to detect events to analyzing and acting upon the knowledge derived from automatically detected anomalous activities. The automated scene understanding capability provided by the baseline SeeCoast system (as currently installed at the Joint Harbor Operations Center at Hampton Roads, VA) results from the integration of several components. Machine vision technology processes the real-time video streams provided by USCG cameras to generate vessel track and classification (based on vessel length) information. A multi-INT fusion component generates a single, coherent track picture by combining information available from the video processor with that from surface surveillance radars and AIS reports. Based on this track picture, vessel activity is analyzed by SeeCoast to detect user-defined unsafe, illegal, and threatening vessel activities using a rule-based pattern recognizer and to detect anomalous vessel activities on the basis of automatically learned behavior normalcy models. Operators can optionally guide the learning system in the form of examples and counter-examples of activities of interest, and refine the performance of the learning system by confirming alerts or indicating examples of false alarms. The fused track picture also provides a basis for automated control and tasking of cameras to detect vessels in motion. Real-time visualization combining the products of all SeeCoast components in a common operating picture is provided by a thin web-based client.
Using Activity-Related Behavioural Features towards More Effective Automatic Stress Detection
Giakoumis, Dimitris; Drosou, Anastasios; Cipresso, Pietro; Tzovaras, Dimitrios; Hassapis, George; Gaggioli, Andrea; Riva, Giuseppe
2012-01-01
This paper introduces activity-related behavioural features that can be automatically extracted from a computer system, with the aim to increase the effectiveness of automatic stress detection. The proposed features are based on processing of appropriate video and accelerometer recordings taken from the monitored subjects. For the purposes of the present study, an experiment was conducted that utilized a stress-induction protocol based on the Stroop colour word test. Video, accelerometer and biosignal (Electrocardiogram and Galvanic Skin Response) recordings were collected from nineteen participants. Then, an explorative study was conducted by following a methodology mainly based on spatiotemporal descriptors (Motion History Images) that are extracted from video sequences. A large set of activity-related behavioural features, potentially useful for automatic stress detection, were proposed and examined. Experimental evaluation showed that several of these behavioural features significantly correlate to self-reported stress. Moreover, it was found that the use of the proposed features can significantly enhance the performance of typical automatic stress detection systems, commonly based on biosignal processing. PMID:23028461
Antila, Kari; Nieminen, Heikki J; Sequeiros, Roberto Blanco; Ehnholm, Gösta
2014-07-01
Up to 25% of women suffer from uterine fibroids (UF) that cause infertility, pain, and discomfort. MR-guided high intensity focused ultrasound (MR-HIFU) is an emerging technique for noninvasive, computer-guided thermal ablation of UFs. The volume of induced necrosis is a predictor of the success of the treatment. However, accurate volume assessment by hand can be time-consuming, and quick tools produce biased results. Therefore, fast and reliable tools are required in order to estimate the technical treatment outcome during the therapy event so as to predict symptom relief. A novel technique has been developed for the segmentation and volume assessment of the treated region. Conventional algorithms typically require user interaction or a priori knowledge of the target. The developed algorithm exploits the treatment plan, i.e., the coordinates of the intended ablation, for fully automatic segmentation with no user input. A good similarity to an expert-segmented manual reference was achieved (Dice similarity coefficient = 0.880 ± 0.074). The average automatic segmentation time was 1.6 ± 0.7 min per patient, against the order of tens of minutes when done manually. The results suggest that the segmentation algorithm developed, requiring no user input, provides a feasible and practical approach for the automatic evaluation of the boundary and volume of the HIFU-treated region.
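The Dice similarity coefficient quoted above (0.880 ± 0.074) measures the overlap between the automatic segmentation and the manual reference. A minimal sketch of the computation on toy binary masks (the arrays here are invented for illustration, not the study's data):

```python
import numpy as np

def dice_coefficient(seg, ref):
    """Dice similarity coefficient between two binary masks:
    2*|A ∩ B| / (|A| + |B|), ranging from 0 (no overlap) to 1 (identical)."""
    seg = np.asarray(seg).astype(bool)
    ref = np.asarray(ref).astype(bool)
    total = seg.sum() + ref.sum()
    if total == 0:
        return 1.0  # two empty masks are considered identical
    return 2.0 * np.logical_and(seg, ref).sum() / total

# Toy 1-D "masks": automatic segmentation vs. manual reference
auto = np.array([0, 1, 1, 1, 0, 0])
manual = np.array([0, 1, 1, 0, 0, 0])
print(round(dice_coefficient(auto, manual), 3))  # 2*2/(3+2) = 0.8
```

The same function applies unchanged to 3-D voxel masks, since only the element-wise overlap and mask sizes enter the formula.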
Automatic identification of artifacts in electrodermal activity data.
Taylor, Sara; Jaques, Natasha; Chen, Weixuan; Fedor, Szymon; Sano, Akane; Picard, Rosalind
2015-01-01
Recently, wearable devices have allowed for long term, ambulatory measurement of electrodermal activity (EDA). Despite the fact that ambulatory recording can be noisy, and recording artifacts can easily be mistaken for a physiological response during analysis, to date there is no automatic method for detecting artifacts. This paper describes the development of a machine learning algorithm for automatically detecting EDA artifacts, and provides an empirical evaluation of classification performance. We have encoded our results into a freely available web-based tool for artifact and peak detection.
NASA Astrophysics Data System (ADS)
Vasheghani Farahani, Jamileh; Zare, Mehdi; Lucas, Caro
2012-04-01
This article presents an adaptive neuro-fuzzy inference system (ANFIS) for classification of low magnitude seismic events reported in Iran by the network of the Tehran Disaster Mitigation and Management Organization (TDMMO). ANFIS classifiers were used to detect seismic events using six inputs that defined the seismic events. Neuro-fuzzy coding was applied using the six extracted features as ANFIS inputs. Two types of events were defined: weak earthquakes and mining blasts. The data comprised 748 events (6289 signals) ranging from magnitude 1.1 to 4.6 recorded at 13 seismic stations between 2004 and 2009; some 223 of these events were earthquakes with M ≤ 2.2. Data sets from the south, east, and southeast of the city of Tehran were used to evaluate the best short-period seismic discriminants. Using features such as origin time of the event, source-to-station distance, latitude and longitude of the epicenter, magnitude, and spectral analysis (fc of the Pg wave) as inputs increased the rate of correct classification and decreased the confusion rate between weak earthquakes and quarry blasts. The performance of the ANFIS model was evaluated for training and classification accuracy. The results confirmed that the proposed ANFIS model has good potential for classifying seismic events.
Seismicity Increase in North China After the 2008 Mw7.9 Wenchuan Earthquake.
NASA Astrophysics Data System (ADS)
Goldhagen, G.; Li, C.; Peng, Z.; Wu, J.; Zhao, L.
2016-12-01
A large mainshock is capable of setting off an increase in seismicity in areas thousands of kilometers away. This phenomenon, known as remote triggering, is more likely to occur along active fault lines, aftershock zones, or regions with anthropogenic activities (e.g., mining, reservoirs, and fluid injections). By studying these susceptible areas, we can gain a better understanding of subsurface stress conditions and long-range earthquake interactions. In this study we conduct a systematic search for remotely triggered seismicity in North China along two linear dense arrays (net codes 1A and Z8) deployed by the Chinese Academy of Sciences (CAS) following the 2008 Mw7.9 Wenchuan earthquake. A 5 Hz high-pass filter is applied to the broadband seismograms recorded at the 1A array, which is more than 2,000 km away from the mainshock, in order to manually pick local events with double peaks. These local events have higher frequencies than earthquakes in the aftershock zone of the Wenchuan earthquake. An STA/LTA method is then employed to automatically detect microseismicity in a section of the array that showed preliminary evidence of remote triggering. We find a clear increase in small earthquakes immediately after the surface waves of the Wenchuan mainshock. These events were recorded at stations close to the north section of the Tanlu fault and the aftershock zone of the 1975 Ms7.3 Haicheng earthquake. This result suggests that remote triggering is more likely near active fault zones or other specific regions, as previous studies have proposed. Future work includes applying a waveform matching method to both arrays, automatically detecting micro-earthquakes missing from the catalog, and using them to better confirm the existence (or absence) of remote triggering following the Wenchuan mainshock. Our findings help to better classify the conditions that lead to remotely triggered earthquakes in intraplate regions.
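The STA/LTA method mentioned above compares short-term to long-term average signal energy and triggers when the ratio exceeds a threshold. A minimal, non-causal sketch on a synthetic trace (window lengths, burst amplitude, and the threshold of 3 are illustrative choices, not the study's parameters):

```python
import numpy as np

def sta_lta(trace, sta_len, lta_len):
    """STA/LTA ratio on the squared trace (characteristic function).

    sta_len / lta_len are window lengths in samples; the ratio spikes
    when short-term energy rises above the long-term background level.
    This sketch uses centered (non-causal) windows for simplicity.
    """
    cf = np.asarray(trace, dtype=float) ** 2
    sta = np.convolve(cf, np.ones(sta_len) / sta_len, mode="same")
    lta = np.convolve(cf, np.ones(lta_len) / lta_len, mode="same")
    lta[lta == 0] = 1e-12  # guard against division by zero in dead traces
    return sta / lta

# Synthetic example: low-level noise with an "event" burst in the middle
rng = np.random.default_rng(0)
trace = 0.1 * rng.standard_normal(2000)
trace[1000:1060] += 2.0 * rng.standard_normal(60)  # high-energy burst
ratio = sta_lta(trace, sta_len=20, lta_len=400)
print(ratio[1000:1060].max() > 3.0)  # burst exceeds a typical trigger threshold
```

A production detector (e.g., as used in network monitoring) would use causal windows and declare a detection while the ratio stays above the trigger level.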
How wind turbines affect the performance of seismic monitoring stations and networks
NASA Astrophysics Data System (ADS)
Neuffer, Tobias; Kremers, Simon
2017-12-01
In recent years, several minor seismic events were observed in the apparently aseismic region of the natural gas fields in Northern Germany. A seismic network was installed in the region, consisting of borehole stations with sensor depths up to 200 m and surface stations, to monitor induced seismicity. After installation of the network in 2012, an increasing number of wind turbines were erected in proximity (<5 km) to several stations, thereby influencing the local noise conditions. This study demonstrates the impact of wind turbines on the seismic noise level in a frequency range of 1-10 Hz at the monitoring sites, in correlation with wind speed, based on the calculation of power spectral density functions and I95 values of waveforms over a time period of 4 yr. Higher wind speeds increase the power spectral density amplitudes at distinct frequencies in the considered band, depending on the height as well as the number and type of influencing wind turbines. The azimuthal direction of incoming Rayleigh waves at a surface station was determined to identify the noise sources. The analysis of the perturbed wave field showed that Rayleigh waves with backazimuths pointing to wind turbines in operation dominate the wave field in a frequency band of 3-4 Hz. Additional peaks in a frequency range of 1-4 Hz could be attributed to turbine tower eigenfrequencies of various turbine manufacturers, with the hub height as the defining parameter. Moreover, the influence of varying noise levels at a station on the ability to automatically detect seismic events was investigated. The increased noise level at higher wind speeds degrades the stations' recording quality, inhibiting the automatic detection of small seismic events. As a result, the functionality and task fulfilment of the seismic monitoring network are increasingly limited by the growing number of nearby wind turbines.
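The I95 value used above is commonly defined as the amplitude level not exceeded by 95% of the samples in a time window, i.e. a robust noise-floor measure. A sketch under that assumed definition (the exact convention of the study may differ; the synthetic "calm" and "windy" records are invented for illustration):

```python
import numpy as np

def i95(trace):
    """I95 noise measure: the 95th percentile of the absolute
    ground-motion amplitudes in the analysis window (assumed definition)."""
    return np.percentile(np.abs(trace), 95)

# Synthetic comparison: a quiet record vs. a record with 3x higher noise,
# standing in for calm vs. high-wind conditions at a station
rng = np.random.default_rng(1)
calm = 1.0 * rng.standard_normal(10_000)
windy = 3.0 * rng.standard_normal(10_000)
print(i95(windy) > i95(calm))  # higher wind speed raises the noise floor
```

Tracking I95 per hour or per day against measured wind speed is one simple way to expose the correlation described in the abstract.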
Nguyen, Thanh; Bui, Vy; Lam, Van; Raub, Christopher B; Chang, Lin-Ching; Nehmetallah, George
2017-06-26
We propose a fully automatic technique to obtain aberration-free quantitative phase imaging in digital holographic microscopy (DHM) based on deep learning. Traditional DHM solves the phase aberration compensation problem by manually detecting the background for quantitative measurement. This is a drawback for real-time implementation and for dynamic processes such as cell migration phenomena. A recent automatic aberration compensation approach using principal component analysis (PCA) in DHM avoids human intervention regardless of the cells' motion. However, it corrects spherical/elliptical aberration only and disregards the higher order aberrations. Traditional image segmentation techniques can be employed to spatially detect cell locations. Ideally, automatic image segmentation techniques make real-time measurement possible. However, existing automatic unsupervised segmentation techniques perform poorly when applied to DHM phase images because of aberrations and speckle noise. In this paper, we propose a novel method that combines a supervised deep learning technique based on a convolutional neural network (CNN) with Zernike polynomial fitting (ZPF). The deep learning CNN is implemented to perform automatic background region detection, which allows ZPF to compute the self-conjugated phase to compensate for most aberrations.
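The fitting step can be pictured as a least-squares fit of a smooth surface to the phase values inside the CNN-detected background mask, with the fitted surface then conjugated and subtracted. The sketch below uses a plain 2-D polynomial basis as a stand-in for the Zernike basis (the mask, field size, and aberration coefficients are invented for illustration):

```python
import numpy as np

def fit_background_phase(phase, background_mask, order=2):
    """Least-squares fit of a 2-D polynomial surface (a simple stand-in
    for a Zernike basis) to the phase values inside the background mask,
    evaluated over the whole field."""
    ny, nx = phase.shape
    y, x = np.mgrid[0:ny, 0:nx]
    # Normalize coordinates to [-1, 1], as one would for Zernike polynomials
    x = 2 * x / (nx - 1) - 1
    y = 2 * y / (ny - 1) - 1
    terms = [x**i * y**j for i in range(order + 1)
                          for j in range(order + 1 - i)]
    A = np.stack([t[background_mask] for t in terms], axis=1)
    coeffs, *_ = np.linalg.lstsq(A, phase[background_mask], rcond=None)
    return sum(c * t for c, t in zip(coeffs, terms))

# Synthetic check: a quadratic aberration is recovered from background pixels
ny = nx = 64
y, x = np.mgrid[0:ny, 0:nx]
xn = 2 * x / (nx - 1) - 1
yn = 2 * y / (ny - 1) - 1
aberration = 0.5 + 0.3 * xn + 0.2 * yn**2
mask = np.ones((ny, nx), bool)
mask[20:40, 20:40] = False            # "cell" region excluded by the detector
fit = fit_background_phase(aberration, mask)
print(np.allclose(fit, aberration))   # exact for a polynomial of matching order
```

In the actual method, the basis would be proper Zernike polynomials on the unit disk and the mask would come from the trained CNN rather than being hand-drawn.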
Sokka, Laura; Huotilainen, Minna; Leinikka, Marianne; Korpela, Jussi; Henelius, Andreas; Alain, Claude; Müller, Kiti; Pakarinen, Satu
2014-12-01
Job burnout is a significant cause of work absenteeism. Evidence from behavioral studies and patient reports suggests that job burnout is associated with impairments of attention and decreased working capacity, and it has overlapping elements with depression, anxiety, and sleep disturbances. Here, we examined the electrophysiological correlates of automatic sound change detection and involuntary attention allocation in job burnout using scalp recordings of event-related potentials (ERP). Volunteers with job burnout symptoms but without severe depression and anxiety disorders and their non-burnout controls were presented with natural speech sound stimuli (a standard and nine deviants), as well as three rarely occurring speech sounds with strong emotional prosody. All stimuli elicited mismatch negativity (MMN) responses that were comparable in both groups. The groups differed with respect to the P3a, an ERP component reflecting involuntary shift of attention: the job burnout group showed a shorter P3a latency in response to the emotionally negative stimulus, and a longer latency in response to the positive stimulus. The results indicate that in job burnout, automatic speech sound discrimination is intact, but there is an attention capture tendency that is faster for negative, and slower for positive, information compared to that of controls.
Automated LASCO CME Catalog for Solar Cycle 23: Are CMEs Scale Invariant?
NASA Astrophysics Data System (ADS)
Robbrecht, E.; Berghmans, D.; Van der Linden, R. A. M.
2009-02-01
In this paper, we present the first automatically constructed LASCO coronal mass ejection (CME) catalog, a result of the application of the Computer Aided CME Tracking software (CACTus) to the LASCO archive during the interval 1997 September-2007 January. We have studied the CME characteristics and have compared them with similar results obtained by manual detection (the CDAW CME catalog). On average, CACTus detects fewer than two events per day during solar minimum and up to eight events during maximum, nearly half of them being narrow (<20°). Assuming a correction factor, we find that the CACTus CME rate is surprisingly consistent with CME rates found during the past 30 years. The CACTus statistics show that small-scale outflow is ubiquitously observed in the outer corona. The majority of CACTus-only events are narrow transients related to previous CME activity or to intensity variations in the slow solar wind, reflecting its turbulent nature. A significant fraction (about 15%) of CACTus-only events were identified as independent events, thus not related to other CME activity. The CACTus CME width distribution is essentially scale invariant in angular span over a range of scales from 20° to 120°, while previous catalogs present a broad maximum around 30°. The possibility that the size of coronal mass outflows follows a power-law distribution could indicate that no typical CME size exists, i.e., that the narrow transients are not different from the larger, well defined CMEs.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hart, Andrew F.; Cinquini, Luca; Khudikyan, Shakeh E.
2015-01-01
“Fast radio transients” are defined here as bright millisecond pulses of radio-frequency energy. These short-duration pulses can be produced by known objects such as pulsars or potentially by more exotic objects such as evaporating black holes. The identification and verification of such an event would be of great scientific value. This is one major goal of the Very Long Baseline Array (VLBA) Fast Transient Experiment (V-FASTR), a software-based detection system installed at the VLBA. V-FASTR uses a “commensal” (piggy-back) approach, analyzing all array data continually during routine VLBA observations and identifying candidate fast transient events. Raw data can be stored from a buffer memory, which enables a comprehensive off-line analysis. This is invaluable for validating the astrophysical origin of any detection. Candidates discovered by the automatic system must be reviewed each day by analysts to identify any promising signals that warrant a more in-depth investigation. To support the timely analysis of fast transient detection candidates by V-FASTR scientists, we have developed a metadata-driven, collaborative candidate review framework. The framework consists of a software pipeline for metadata processing composed of both open source software components and project-specific code written expressly to extract and catalog metadata from the incoming V-FASTR data products, and a web-based data portal that facilitates browsing and inspection of the available metadata for candidate events extracted from the VLBA radio data.
Van De Gucht, Tim; Saeys, Wouter; Van Meensel, Jef; Van Nuffel, Annelies; Vangeyte, Jurgen; Lauwers, Ludwig
2018-01-01
Although prototypes of automatic lameness detection systems for dairy cattle exist, information about their economic value is lacking. In this paper, a conceptual and operational framework for simulating the farm-specific economic value of automatic lameness detection systems was developed and tested on 4 system types: walkover pressure plates, walkover pressure mats, camera systems, and accelerometers. The conceptual framework maps essential factors that determine economic value (e.g., lameness prevalence, incidence and duration, lameness costs, detection performance, and their relationships). The operational simulation model links treatment costs and avoided losses with detection results and farm-specific information, such as herd size and lameness status. Results show that detection performance, herd size, discount rate, and system lifespan have a large influence on economic value. In addition, lameness prevalence influences the economic value, stressing the importance of an adequate prior estimation of the on-farm prevalence. The simulations provide first estimates of the upper limits for purchase prices of automatic detection systems. The framework also allowed for identification of knowledge gaps obstructing more accurate economic value estimation, including insights into cost reductions due to early detection and treatment, and links between specific lameness causes and their related losses. Because this model provides insight into the trade-offs between automatic detection systems' performance and investment price, it is a valuable tool to guide future research and developments.
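The core of such a simulation is a discounted cash-flow calculation: annual avoided losses net of treatment costs, discounted over the system lifespan, bound the purchase price at which the investment breaks even. A minimal sketch (all parameter names and values below are illustrative assumptions, not figures from the paper):

```python
def detection_system_value(annual_cases, loss_per_case, sensitivity,
                           avoided_fraction, treatment_cost,
                           lifespan_years, discount_rate):
    """Discounted value of an automatic lameness detection system:
    each detected case avoids a fraction of its losses but incurs a
    treatment cost. The result is an upper bound on the purchase price
    at which the investment breaks even (all inputs are illustrative)."""
    annual_benefit = annual_cases * sensitivity * (
        loss_per_case * avoided_fraction - treatment_cost)
    return sum(annual_benefit / (1 + discount_rate) ** t
               for t in range(1, lifespan_years + 1))

# Hypothetical herd: 30 lameness cases/year, 300 currency units lost per case
max_price = detection_system_value(
    annual_cases=30, loss_per_case=300.0, sensitivity=0.8,
    avoided_fraction=0.5, treatment_cost=25.0,
    lifespan_years=10, discount_rate=0.05)
print(round(max_price))  # → 23165
```

Varying sensitivity, lifespan, and discount rate in such a model reproduces the qualitative finding that these factors dominate the economic value.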
Non Conventional Seismic Events Along the Himalayan Arc Detected in the Hi-Climb Dataset
NASA Astrophysics Data System (ADS)
Vergne, J.; Nàbĕlek, J. L.; Rivera, L.; Bollinger, L.; Burtin, A.
2008-12-01
From September 2002 to August 2005, more than 200 broadband seismic stations were operated across the Himalayan arc and the southern Tibetan plateau in the framework of the Hi-Climb project. Here, we take advantage of the high density of stations along the main profile to look for coherent seismic wave arrivals that cannot be attributed to ordinary tectonic events. An automatic detection algorithm is applied to the continuous data streams filtered between 1 and 10 Hz, followed by a visual inspection of all detections. We discovered about one hundred coherent signals that cannot be attributed to local, regional, or teleseismic earthquakes and which are characterized by emergent arrivals and long durations ranging from one minute to several hours. Most of these non conventional seismic events have a low signal-to-noise ratio and are thus only observed above 1 Hz, in the frequency band where the seismic noise is lowest. However, a small subset of them is strong enough to be observed in a larger frequency band and shows an enhancement of long periods compared to standard earthquakes. Based on the analysis of the relative amplitude measured at each station or, when possible, on the correlation of the low-frequency part of the signals, most of these events appear to be located along the High Himalayan range. However, because of their emergent character and the main orientation of the seismic profile, their longitude and depth remain poorly constrained. The origin of these non conventional seismic events remains unresolved, but their seismic signature shares several characteristics with non volcanic tremors, glacial earthquakes, and/or debris avalanches. All these phenomena may occur along the Himalayan range but were not seismically detected before. Here we discuss the pros and cons of each of these postulated candidates based on the analysis of the recorded waveforms and slip models.
Automatic Processing of Changes in Facial Emotions in Dysphoria: A Magnetoencephalography Study.
Xu, Qianru; Ruohonen, Elisa M; Ye, Chaoxiong; Li, Xueqiao; Kreegipuu, Kairi; Stefanics, Gabor; Luo, Wenbo; Astikainen, Piia
2018-01-01
It is not known to what extent the automatic encoding and change detection of peripherally presented facial emotion is altered in dysphoria. The negative bias in automatic face processing in particular has rarely been studied. We used magnetoencephalography (MEG) to record automatic brain responses to happy and sad faces in dysphoric (Beck's Depression Inventory ≥ 13) and control participants. Stimuli were presented in a passive oddball condition, which allowed potential negative bias in dysphoria at different stages of face processing (M100, M170, and M300) and alterations of change detection (visual mismatch negativity, vMMN) to be investigated. The magnetic counterpart of the vMMN was elicited at all stages of face processing, indexing automatic deviance detection in facial emotions. The M170 amplitude was modulated by emotion, response amplitudes being larger for sad faces than happy faces. Group differences were found for the M300, and they were indexed by two different interaction effects. At the left occipital region of interest, the dysphoric group had larger amplitudes for sad than happy deviant faces, reflecting negative bias in deviance detection, which was not found in the control group. On the other hand, the dysphoric group showed no vMMN to changes in facial emotions, while the vMMN was observed in the control group at the right occipital region of interest. Our results indicate that there is a negative bias in automatic visual deviance detection, but also a general change detection deficit in dysphoria.
Fast and robust segmentation in the SDO-AIA era
NASA Astrophysics Data System (ADS)
Verbeeck, Cis; Delouille, Véronique; Mampaey, Benjamin; Hochedez, Jean-François; Boyes, David; Barra, Vincent
Solar images from the Atmospheric Imaging Assembly (AIA) aboard the Solar Dynamics Observatory (SDO) will flood the solar physics community with a wealth of information on solar variability, of great importance both in solar physics and in view of Space Weather applications. Obtaining this information, however, requires the ability to automatically process large amounts of data in an objective fashion. In previous work, we have proposed an unsupervised spatially-constrained multi-channel fuzzy clustering algorithm (SPoCA) that automatically segments EUV solar images into Active Regions (AR), Coronal Holes (CH), and Quiet Sun (QS). This algorithm will run in near real time on AIA data as part of the SDO Feature Finding Project, a suite of software pipeline modules for automated feature recognition and analysis for the imagery from SDO. After having corrected for the limb brightening effect, SPoCA computes an optimal clustering with respect to the regions of interest using fuzzy logic on a quality criterion to manage the various noises present in the images and the imprecision in the definition of the above regions. Next, the algorithm applies a morphological opening operation, smoothing the cluster edges while preserving their general shape. The process is fast and automatic. A lower size limit is used to distinguish AR from Bright Points. As the algorithm segments the coronal images according to their brightness, it might happen that an AR is detected as several disjoint pieces, if the brightness in between is somewhat lower. Morphological dilation is employed to reconstruct the AR themselves from their constituent pieces. Combining SPoCA's detection of AR, CH, and QS on subsequent images allows automatic tracking and naming of any region of interest. In the SDO software pipeline, SPoCA will automatically populate the Heliophysics Events Knowledgebase (HEK) with Active Region events.
Further, the algorithm has a huge potential for correct and automatic identification of AR, CH, and QS in any study that aims to address properties of those specific regions in the corona. SPoCA is now ready and waiting to tackle solar cycle 24 using SDO data. While we presently apply SPoCA to EUV data, the method is generic enough to allow the introduction of other channels or data, e.g., Differential Emission Measure (DEM) maps. Because of the unprecedented challenges brought up by the quantity of SDO data, European partners have gathered within an ISSI team on 'Mining and Exploiting the NASA Solar Dynamics Observatory data in Europe' (a.k.a. Soldyneuro). Its aim is to provide automated feature recognition algorithms for scanning the SDO archive, as well as to conduct scientific studies that combine different algorithms' outputs. Within the Soldyneuro project, we will use data from the EUV Variability Experiment (EVE) spectrometer in order to estimate the full-Sun DEM. This DEM will next be used to estimate the total flux from AIA images so as to provide a validation for the calibration of AIA.
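The morphological opening and dilation steps described for SPoCA can be illustrated on a toy binary segmentation map; the sketch below uses SciPy's generic morphology routines as stand-ins for SPoCA's actual implementation (the maps and structuring element are invented for illustration):

```python
import numpy as np
from scipy import ndimage

# Toy segmentation map: one compact "active region" plus an isolated
# bright pixel that a morphological opening should discard as noise.
seg = np.zeros((12, 12), dtype=bool)
seg[2:7, 2:7] = True          # compact region (survives opening)
seg[9, 9] = True              # isolated pixel (removed by opening)

opened = ndimage.binary_opening(seg, structure=np.ones((3, 3), bool))
print(opened[9, 9], opened[4, 4])   # isolated pixel gone, region core kept

# Dilation can then reconnect disjoint pieces of the same active region
pieces = np.zeros((12, 12), dtype=bool)
pieces[5, 2:5] = True
pieces[5, 7:10] = True               # two fragments separated by a small gap
merged = ndimage.binary_dilation(pieces, iterations=1)
print(ndimage.label(merged)[1])      # fragments now form a single region
```

Opening (erosion then dilation) smooths edges and drops features smaller than the structuring element, while dilation grows regions so that nearby fragments merge, mirroring the AR reconstruction step in the abstract.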
Safety management for polluted confined space with IT system: a running case.
Hwang, Jing-Jang; Wu, Chien-Hsing; Zhuang, Zheng-Yun; Hsu, Yi-Chang
2015-01-01
This study traced a deployed real-world IT system that enhances occupational safety in a polluted confined space. By incorporating wireless technology, it automatically monitors the status of workers on site, and when anomalous events are detected, managers are notified promptly. The system, with a redefined standard operations process, is running well at one of Formosa Petrochemical Corporation's refineries. Evidence shows that after deployment, the system enhances the safety level by monitoring workers in real time and by effectively managing and controlling anomalies. Such a technical architecture can therefore be applied to similar scenarios for safety enhancement purposes.
Rapid and Reliable Damage Proxy Map from InSAR Coherence
NASA Technical Reports Server (NTRS)
Yun, Sang-Ho; Fielding, Eric; Simons, Mark; Agram, Piyush; Rosen, Paul; Owen, Susan; Webb, Frank
2012-01-01
Future radar satellites will visit SoCal within a day after a disaster event. Data acquisition latency in 2015-2020 is 8 to approximately 15 hours, but data transfer latency, which often involves human or agency intervention, far exceeds the acquisition latency; interagency cooperation is needed to establish an automatic pipeline for data transfer. The algorithm is tested with ALOS PALSAR data of Pasadena, California. Quantitative quality assessment is being pursued: with a complete list of demolition/construction projects from Pasadena City Hall engineers, the goals are (1) to estimate the probability of detection and probability of false alarm and (2) to estimate the optimal threshold value.
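The probability of detection and probability of false alarm at a given threshold can be estimated once ground truth (here, the list of known demolition/construction sites) is available. A hedged sketch on synthetic change scores (the score distributions and the 0.5 threshold are invented for illustration, not InSAR coherence values):

```python
import numpy as np

def detection_rates(scores, truth, threshold):
    """Probability of detection and of false alarm for a score threshold."""
    detected = scores >= threshold
    pd = detected[truth].mean()        # fraction of true changes flagged
    pfa = detected[~truth].mean()      # fraction of unchanged pixels flagged
    return pd, pfa

# Synthetic damage-proxy scores: changed pixels score higher on average
rng = np.random.default_rng(2)
truth = np.zeros(5000, dtype=bool)
truth[:500] = True                     # 500 genuinely changed pixels
scores = np.where(truth, rng.normal(0.7, 0.1, truth.size),
                         rng.normal(0.3, 0.1, truth.size))
pd, pfa = detection_rates(scores, truth, threshold=0.5)
print(pd > 0.9, pfa < 0.1)
```

Sweeping the threshold and plotting (pfa, pd) pairs gives the ROC curve from which an optimal operating threshold can be chosen.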
Using Multiple Space Assets with In-Situ Measurements to Track Flooding in Thailand
NASA Technical Reports Server (NTRS)
Chien, Steve; Doubleday, Joshua; Mclaren, David; Tran, Daniel; Khunboa, Chatchai; Leelapatra, Watis; Pergamon, Vichain; Tanpipat, Veerachai; Chitradon, Royal; Boonya-aroonnet, Surajate;
2001-01-01
Increasing numbers of space assets can enable coordinated measurements of flooding phenomena to enhance tracking of extreme events. We describe the use of space and ground measurements to target further measurements as part of a flood monitoring system in Thailand. We utilize rapidly delivered MODIS data to detect major areas of flooding and then target the Earth Observing One Advanced Land Imager sensor to acquire higher spatial resolution data. Automatic surface-water extent mapping products are delivered to interested parties. We are also working to extend our network to include in-situ sensing networks and additional space assets.
Developments at Polish Seismological Network
NASA Astrophysics Data System (ADS)
Wiejacz, P.; Debski, W.; Lizurek, G.; Rudzinski, L.; Suchcicki, J.; Wiszniowski, J.
2009-04-01
The Polish Seismological Network of the Institute of Geophysics, Polish Academy of Sciences, currently consists of 9 stations, six of which are broadband. In 2008 one of the broadband stations was moved from the Warsaw city center to a quieter site at the Central Geophysical Observatory at Belsk, making its data useful for automatic processing. The broadband seismic stations are now spaced to provide information from all of the territory of Poland, and an automatic SeisComP 2.5 detection, location, and alerting system has been set up. The earthquakes of 2004, namely the Kaliningrad and Podhale events, raised concern about the effectiveness of the network and the quality of the recordings. As a result, the digitizer of seismic station NIE, near the Podhale region, was replaced in 2005, bringing the station up to the 24-bit standard, and current plans call for upgrading the station to broadband. In the north, a new seismic station has been established at Hel; however, the site has proven to be extremely noisy. A broadband station is still planned for the north, but an alternate location must be found. Further development plans call for a new 6-station short-period subnetwork in and around the Upper Silesian Coal Basin to observe and readily locate local mining-induced seismic events. The ultimate goal is to provide ready and reliable information on all recorded seismic events, particularly those from the territory of Poland. Reaching this goal, however, requires that a local seismic subnetwork be organized in and around the Lubin Copper Basin and that seismic station NIE be complemented by at least two stations in the immediate area where local seismicity takes place.
Classification of Nortes in the Gulf of Mexico derived from wave energy maps
NASA Astrophysics Data System (ADS)
Appendini, C. M.; Hernández-Lasheras, J.
2016-02-01
Extreme wave climate in the Gulf of Mexico is determined by tropical cyclones and winds from Central American Cold Surges, locally referred to as Nortes. While hurricanes can have catastrophic effects, extreme waves and storm surge from Nortes occur several times a year, and thus have greater impacts on human activities along the Mexican coast of the Gulf of Mexico. Despite the constant impacts from Nortes, there is no available classification that relates their characteristics (e.g. pressure gradients, wind speed) to the associated coastal impacts. This work presents a first approximation to characterize and classify Nortes, based on the assumption that the derived wave energy synthesizes information (i.e. wind intensity, direction, and duration) on individual Norte events as they pass through the Gulf of Mexico. First, we developed an index to identify Nortes based on the surface pressure difference between two locations. To validate the methodology we compared the events identified with other studies and available Nortes logs. Afterwards, we detected Nortes from the 1986/1987, 2008/2009, and 2009/2010 seasons and used their corresponding wind fields to derive wave energy maps using a numerical wave model. We used the energy maps to classify the events into groups using manual (visual) and automatic classification (principal component analysis and k-means). The manual classification identified 3 types of Nortes and the automatic classification identified 5, although 3 of them had a high degree of similarity. The principal component analysis indicated that all events have similar characteristics, as few components are necessary to explain almost all of the variance. The k-means classification indicated that 81% of the analyzed Nortes affect the southeastern Gulf of Mexico, while a smaller percentage affects the northern Gulf of Mexico and even fewer affect the western Caribbean.
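The k-means step groups events by the similarity of their (flattened) wave-energy maps. A minimal self-contained sketch on toy two-component "energy vectors" (the cluster count, event vectors, and region labels are invented for illustration, not the study's data):

```python
import numpy as np

def kmeans(X, k, iters=50, seed=0):
    """Minimal k-means: rows of X are events (e.g. flattened energy maps)."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        # Assign each event to its nearest center (squared Euclidean distance)
        dists = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        labels = dists.argmin(axis=1)
        # Recompute centers; keep the old one if a cluster went empty
        centers = np.array([X[labels == j].mean(0) if np.any(labels == j)
                            else centers[j] for j in range(k)])
    return labels

# Toy "energy maps": events loading mostly on a southeastern-Gulf component
# vs. events loading mostly on a northern-Gulf component
rng = np.random.default_rng(3)
se_events = np.tile([5.0, 1.0], (8, 1)) + 0.1 * rng.standard_normal((8, 2))
north_events = np.tile([1.0, 5.0], (4, 1)) + 0.1 * rng.standard_normal((4, 2))
X = np.vstack([se_events, north_events])
labels = kmeans(X, k=2)
print(len(set(labels[:8].tolist())), len(set(labels[8:].tolist())))
```

Real energy maps would be gridded fields, flattened (often after PCA reduction, as in the abstract) before clustering; the algorithm itself is unchanged.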
Automatic patient respiration failure detection system with wireless transmission
NASA Technical Reports Server (NTRS)
Dimeff, J.; Pope, J. M.
1968-01-01
Automatic respiration failure detection system detects respiration failure in patients with a surgically implanted tracheostomy tube, and actuates an audible and/or visual alarm. The system incorporates a miniature radio transmitter so that the patient is unencumbered by wires yet can be monitored from a remote location.
µADS-B Detect and Avoid Flight Tests on Phantom 4 Unmanned Aircraft System
NASA Technical Reports Server (NTRS)
Arteaga, Ricardo; Dandachy, Mike; Truong, Hong; Aruljothi, Arun; Vedantam, Mihir; Epperson, Kraettli; McCartney, Reed
2018-01-01
Researchers at the National Aeronautics and Space Administration Armstrong Flight Research Center in Edwards, California and Vigilant Aerospace Systems collaborated for the flight-test demonstration of an Automatic Dependent Surveillance-Broadcast based collision avoidance technology on a small unmanned aircraft system equipped with the uAvionix Automatic Dependent Surveillance-Broadcast transponder. The purpose of the testing was to demonstrate that National Aeronautics and Space Administration / Vigilant software and algorithms, commercialized as FlightHorizon UAS™, are compatible with uAvionix hardware systems and the DJI Phantom 4 small unmanned aircraft system. The testing and demonstrations were necessary for both parties to further develop and certify the technology in three key areas: flights beyond visual line of sight, collision avoidance, and autonomous operations. The National Aeronautics and Space Administration and Vigilant Aerospace Systems have developed and successfully flight-tested an Automatic Dependent Surveillance-Broadcast Detect and Avoid system on the Phantom 4 small unmanned aircraft system. The Automatic Dependent Surveillance-Broadcast Detect and Avoid system architecture is especially suited for small unmanned aircraft systems because it integrates: 1) miniaturized Automatic Dependent Surveillance-Broadcast hardware; 2) radio data-link communications; 3) software algorithms for real-time Automatic Dependent Surveillance-Broadcast data integration, conflict detection, and alerting; and 4) a synthetic vision display using a fully-integrated National Aeronautics and Space Administration geobrowser for three-dimensional graphical representations of ownship and air traffic situational awareness. The flight-test objectives were to evaluate the performance of Automatic Dependent Surveillance-Broadcast Detect and Avoid collision avoidance technology as installed on two small unmanned aircraft systems.
In December 2016, four flight tests were conducted at Edwards Air Force Base. Researchers monitoring displays in the ground control station were able to verify the Automatic Dependent Surveillance-Broadcast target detection and collision avoidance resolutions.
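The conflict-detection step in an ADS-B Detect and Avoid pipeline can be illustrated with a standard closest-point-of-approach (CPA) test on two aircraft state vectors. This is a generic sketch, not the FlightHorizon algorithm; the 60 s horizon and 150 m separation threshold are invented for illustration.

```python
import numpy as np

def cpa_conflict(own_pos, own_vel, tgt_pos, tgt_vel,
                 horizon_s=60.0, sep_m=150.0):
    """Closest-point-of-approach test on two ADS-B state vectors.

    Positions in metres (local ENU frame), velocities in m/s.
    Returns (conflict?, time_to_cpa, miss_distance).
    """
    dp = np.asarray(tgt_pos, float) - np.asarray(own_pos, float)
    dv = np.asarray(tgt_vel, float) - np.asarray(own_vel, float)
    v2 = dv @ dv
    t_cpa = 0.0 if v2 == 0 else max(0.0, -(dp @ dv) / v2)
    miss = np.linalg.norm(dp + dv * t_cpa)
    return (t_cpa <= horizon_s and miss <= sep_m), t_cpa, miss

# head-on traffic 2 km ahead, closing at 40 m/s -> conflict at t = 50 s
print(cpa_conflict([0, 0, 0], [20, 0, 0], [2000, 0, 0], [-20, 0, 0]))
```

A real system would run this pairwise against every decoded ADS-B track and feed the result to the alerting logic.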
Micro-Seismic Monitoring During Stimulation at Paralana-2 South Australia
NASA Astrophysics Data System (ADS)
Hasting, M. A.; Albaric, J.; Oye, V.; Reid, P.; Messeiller, M.; Llanos, E.
2011-12-01
In 2009 the Paralana JV drilled the Paralana-2 (P2) Enhanced Geothermal System (EGS) borehole east of the Flinders Ranges in South Australia. Drilling started on 30 June and reached a total depth of 4,003 m (G.L. AHD) on 9 November. A 7-inch casing was set and cemented to a depth of 3,725 m, and P2 was officially completed on 9 December 2009. On 2 January 2011 a six-meter zone was perforated between 3,679 and 3,685 mRT. A stimulation of P2 was carried out on 3 January by injecting approximately 14,668 L of fluid at pressures of up to 8.7 kpsi and various rates of up to 2 bpm. During the stimulation, 125 micro-earthquakes (MEQ) were triggered in the formation. Most MEQ events occurred in an area about 100 m wide and 220 m deep at an average depth of 3,850 m. The largest event, ML 1.4, occurred after the shut-in. Between 11 and 15 July 2011, the main fracture stimulation was carried out, with ~3 million litres injected at pressures up to 9 kpsi and rates up to 10 bpm. Over 10,000 MEQ were detected by the seismic monitoring network. This network consisted of 12 surface and 8 borehole stations with sensor depths of 40 m, 200 m, and 1,800 m. Four accelerometers were also installed to record ground motions near key facilities in the case of a larger seismic event. MEQ were automatically triggered and located in near-real-time with the software MIMO provided by NORSAR. A traffic light system was in operation, and none of the detected events came close to the threshold value. More than half of the detected events could be processed and located reliably in fully automatic mode. Selected MEQ events were manually picked on site in order to improve the location accuracy. A total of 1,875 events were located to form the final picture of the stimulation fracture. Results show that fracturing occurred in three swarms. The first swarm occurred near the well and deepened with time from 3.7 km to over 4.1 km. The second swarm occurred a few days later and appears as a circular patch extending a few hundred meters east of the first.
The third swarm occurred after shut-in and extends downwards to the NNW to a depth of 4.4 km. All events lie along an ENE-trending structure that dips steeply to the NNW, with a total length of over 900 m and a width of 400 m to 600 m. Assuming that the injected fluids went into opening new fractures, a volume change must have occurred. Using Brune's formula to estimate seismic moment and converting to ML, we estimated that a total of ML 3.10 to 3.24 is required to accommodate the fluids. Summing the ML of the 1,875 events yields ML 3.12. Thus most of the fluids must have gone into the opening of fractures and have created a new geothermal reservoir.
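The cumulative-magnitude comparison at the end of the abstract (summing 1,875 event magnitudes against the ML 3.10-3.24 needed to accommodate the fluids) can be reproduced by summing seismic moments. This sketch assumes the Hanks-Kanamori moment-magnitude relation and treats ML as equivalent to Mw, an approximation that the authors' Brune-based estimate does not rely on.

```python
import numpy as np

def summed_magnitude(mags):
    """Combine event magnitudes by summing seismic moments.

    Uses the Hanks-Kanamori relation log10(M0) = 1.5*M + 9.1 (M0 in N*m);
    treating ML ~ Mw here is an illustrative assumption.
    """
    m0 = 10.0 ** (1.5 * np.asarray(mags, float) + 9.1)   # per-event moments
    return (np.log10(m0.sum()) - 9.1) / 1.5              # back to magnitude

# e.g. 1,875 microearthquakes around ML 1.0 sum to a cumulative ~ML 3.2
print(round(summed_magnitude([1.0] * 1875), 2))  # → 3.18
```

Because moment grows by a factor of ~31.6 per magnitude unit, thousands of ML ~1 events are needed to reach a cumulative ML ~3, consistent with the abstract's budget.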
Convolution neural-network-based detection of lung structures
NASA Astrophysics Data System (ADS)
Hasegawa, Akira; Lo, Shih-Chung B.; Freedman, Matthew T.; Mun, Seong K.
1994-05-01
Chest radiography is one of the most fundamental and widely used techniques in diagnostic imaging. With the advent of digital radiology, digital image processing techniques for chest radiographs have attracted considerable attention, and several studies on computer-aided diagnosis (CADx) as well as on conventional image processing techniques for chest radiographs have been reported. In the automatic diagnostic process for chest radiographs, it is important to outline the areas of the lungs, the heart, and the diaphragm. This is because the chest radiograph is composed of important anatomic structures and, without knowing the exact positions of the organs, automatic diagnosis may produce unexpected detections. The automatic extraction of an anatomical structure from digital chest radiographs can be a useful tool for (1) evaluation of heart size, (2) automatic detection of interstitial lung diseases, (3) automatic detection of lung nodules, and (4) data compression. Based on clearly defined boundaries of the heart area, rib spaces, rib positions, and the rib cage, this information can be used to facilitate CADx tasks on chest radiographs. In this paper, we present an automatic scheme for detecting the lung fields from chest radiographs using a shift-invariant convolution neural network. A novel algorithm for smoothing the boundaries of the lungs is also presented.
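The shift-invariance that motivates the network can be illustrated with a single convolutional unit: one kernel, applied by convolution, scores every pixel neighbourhood identically wherever it lies in the image. The kernel and threshold below are hand-made stand-ins, not the trained network.

```python
import numpy as np
from scipy.signal import convolve2d

def score_map(image, kernel):
    """Per-pixel response of a single convolutional unit (shift-invariant:
    the same kernel scores every location in the radiograph)."""
    return convolve2d(image, kernel, mode='same')

img = np.zeros((16, 16))
img[4:12, 3:7] = 1.0                    # a bright 'lung field' blob
kernel = np.ones((3, 3)) / 9.0          # hand-made local-mean detector
mask = score_map(img, kernel) > 0.5     # threshold into a candidate mask
print(mask.sum(), (img > 0).sum())      # mask slightly erodes the corners
```

A trained network stacks many such units and learns the kernels from labeled radiographs; the sliding-window behaviour is the same.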
Kim, Mary S.; Tsutsui, Kenta; Stern, Michael D.; Lakatta, Edward G.; Maltsev, Victor A.
2017-01-01
Local Ca2+ Releases (LCRs) are crucial events involved in cardiac pacemaker cell function. However, specific algorithms for automatic LCR detection and analysis have not been developed for live, spontaneously beating pacemaker cells. In the present study we measured LCRs using a high-speed 2D camera in spontaneously contracting sinoatrial (SA) node cells isolated from rabbit and guinea pig and developed a new algorithm capable of detecting and analyzing the LCRs spatially in two dimensions and in time. Our algorithm tracks points along the midline of the contracting cell. It uses these points as a coordinate system for an affine transform, producing a transformed image series in which the cell does not contract. Action potential-induced Ca2+ transients and LCRs were thereafter isolated from recording noise by applying a series of spatial filters. The LCR birth and death events were detected by a differential (frame-to-frame) sensitivity algorithm applied to each pixel (cell location). An LCR is detected when its signal changes sufficiently quickly within a sufficiently large area. The LCR is considered to have died when its amplitude decays substantially, or when it merges into the rising whole-cell Ca2+ transient. Ultimately, our algorithm provides major LCR parameters such as period, signal mass, duration, and propagation path area. As the LCRs propagate within live cells, the algorithm identifies splitting and merging behaviors, indicating the importance of locally propagating Ca2+-induced-Ca2+-release for the fate of LCRs and for generating a powerful ensemble Ca2+ signal. Thus, our new computer algorithms eliminate motion artifacts and detect 2D local spatiotemporal events from recording noise and global signals.
While the algorithms were developed to detect LCRs in sinoatrial nodal cells, they have the potential to be used in other applications in biophysics and cell physiology, for example, to detect Ca2+ wavelets (abortive waves), sparks and embers in muscle cells and Ca2+ puffs and syntillas in neurons. PMID:28683095
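The differential (frame-to-frame) birth detector described above can be sketched as a per-pixel rate threshold followed by a connected-area test; the thresholds here are illustrative, not the published values.

```python
import numpy as np
from scipy import ndimage

def detect_lcr_births(stack, rate_thr=0.2, min_area=4):
    """Differential event detector: an LCR 'birth' is flagged when the
    fluorescence rises fast enough over a large enough connected area.

    stack: (frames, rows, cols) array; returns (frame, region label) pairs.
    Thresholds are illustrative, not the paper's values.
    """
    births = []
    for k in range(1, stack.shape[0]):
        rising = (stack[k] - stack[k - 1]) > rate_thr   # fast-rising pixels
        labels, n = ndimage.label(rising)               # connected regions
        for lab in range(1, n + 1):
            if (labels == lab).sum() >= min_area:
                births.append((k, lab))
    return births
```

Death detection would mirror this with an amplitude-decay test, and the affine-transform step would run first so that cell contraction does not masquerade as signal change.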
Neural network model for automatic traffic incident detection : executive summary.
DOT National Transportation Integrated Search
2001-04-01
Automatic freeway incident detection is an important component of advanced transportation management systems (ATMS) that provides information for emergency relief and traffic control and management purposes. In this research, a multi-paradigm intelli...
Automatic Detection of Whole Night Snoring Events Using Non-Contact Microphone
Dafna, Eliran; Tarasiuk, Ariel; Zigel, Yaniv
2013-01-01
Objective: Although awareness of sleep disorders is increasing, limited information is available on whole night detection of snoring. Our study aimed to develop and validate a robust, high-performance, and sensitive whole-night snore detector based on non-contact technology. Design: Sounds during polysomnography (PSG) were recorded using a directional condenser microphone placed 1 m above the bed. An AdaBoost classifier was trained and validated on manually labeled snoring and non-snoring acoustic events. Patients: Sixty-seven subjects (age 52.5±13.5 years, BMI 30.8±4.7 kg/m2, m/f 40/27) referred for PSG for obstructive sleep apnea diagnoses were prospectively and consecutively recruited. Twenty-five subjects were used for the design study; the validation study was blindly performed on the remaining forty-two subjects. Measurements and Results: To train the proposed sound detector, >76,600 acoustic episodes collected in the design study were manually classified by three scorers into snore and non-snore episodes (e.g., bedding noise, coughing, environmental). A feature selection process was applied to select the most discriminative features extracted from the time and spectral domains. The average snore/non-snore detection rate (accuracy) for the design group was 98.4% based on a ten-fold cross-validation technique. When tested on the validation group, the average detection rate was 98.2% with sensitivity of 98.0% (snore as a snore) and specificity of 98.3% (noise as noise). Conclusions: Audio-based features extracted from the time and spectral domains can accurately discriminate between snore and non-snore acoustic events. This audio analysis approach enables detection and analysis of snoring sounds from a full night in order to produce quantified measures for objective follow-up of patients. PMID:24391903
Automatic detection of whole night snoring events using non-contact microphone.
Dafna, Eliran; Tarasiuk, Ariel; Zigel, Yaniv
2013-01-01
Although awareness of sleep disorders is increasing, limited information is available on whole night detection of snoring. Our study aimed to develop and validate a robust, high-performance, and sensitive whole-night snore detector based on non-contact technology. Sounds during polysomnography (PSG) were recorded using a directional condenser microphone placed 1 m above the bed. An AdaBoost classifier was trained and validated on manually labeled snoring and non-snoring acoustic events. Sixty-seven subjects (age 52.5 ± 13.5 years, BMI 30.8 ± 4.7 kg/m2, m/f 40/27) referred for PSG for obstructive sleep apnea diagnoses were prospectively and consecutively recruited. Twenty-five subjects were used for the design study; the validation study was blindly performed on the remaining forty-two subjects. To train the proposed sound detector, >76,600 acoustic episodes collected in the design study were manually classified by three scorers into snore and non-snore episodes (e.g., bedding noise, coughing, environmental). A feature selection process was applied to select the most discriminative features extracted from the time and spectral domains. The average snore/non-snore detection rate (accuracy) for the design group was 98.4% based on a ten-fold cross-validation technique. When tested on the validation group, the average detection rate was 98.2% with sensitivity of 98.0% (snore as a snore) and specificity of 98.3% (noise as noise). Audio-based features extracted from the time and spectral domains can accurately discriminate between snore and non-snore acoustic events. This audio analysis approach enables detection and analysis of snoring sounds from a full night in order to produce quantified measures for objective follow-up of patients.
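The pipeline described above (time- and spectral-domain features feeding an AdaBoost classifier) can be sketched on toy data. The three features below are common acoustic descriptors chosen for illustration, not the paper's selected feature set, and the synthetic "snores" are simply low-frequency tones.

```python
import numpy as np
from sklearn.ensemble import AdaBoostClassifier

def episode_features(x, fs=16000):
    """Illustrative time- and spectral-domain features for one episode."""
    x = np.asarray(x, float)
    zcr = np.mean(np.abs(np.diff(np.sign(x)))) / 2       # zero-crossing rate
    energy = np.log(np.mean(x ** 2) + 1e-12)             # log energy
    spec = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(len(x), 1.0 / fs)
    centroid = np.sum(freqs * spec) / (np.sum(spec) + 1e-12)
    return [zcr, energy, centroid]

rng = np.random.default_rng(0)
# toy data: low-frequency 'snores' vs broadband 'noise' episodes
snores = [np.sin(2 * np.pi * 120 * np.arange(800) / 16000)
          + 0.1 * rng.standard_normal(800) for _ in range(40)]
noise = [rng.standard_normal(800) for _ in range(40)]
X = np.array([episode_features(e) for e in snores + noise])
y = np.array([1] * 40 + [0] * 40)
clf = AdaBoostClassifier(n_estimators=50, random_state=0).fit(X, y)
print(clf.score(X, y))   # training accuracy on the toy set
```

The real detector adds a feature-selection stage and evaluates on held-out subjects, which this toy fit does not attempt.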
NASA Astrophysics Data System (ADS)
Takagi, R.; Obara, K.; Uchida, N.
2017-12-01
Understanding slow earthquake activity improves our knowledge of slip behavior in the brittle-ductile transition zone and of the subduction process, including megathrust earthquakes. In order to understand the overall picture of slow slip activity, it is important to build a comprehensive catalog of slow slip events (SSEs). Although short-term SSEs have been detected systematically from GNSS and tiltmeter records, analysis of long-term SSEs relies on individual slip inversions. We develop an algorithm to systematically detect long-term SSEs and estimate their source parameters using GNSS data. The algorithm is similar to GRiD-MT (Tsuruoka et al., 2009), a grid-based automatic determination of moment tensor solutions. Instead of fitting moment tensors to long-period seismic records, we estimate the parameters of a single rectangular fault to fit GNSS displacement time series. First, we construct a two-dimensional grid covering the possible locations of SSEs. Second, we estimate the best-fit parameters (length, width, slip, and rake) of the rectangular fault at each grid point by an iterative damped least squares method. Depth, strike, and dip are fixed on the plate boundary. A ramp function with a duration of 300 days is used to express the time evolution of the fault slip. Third, the grid point maximizing variance reduction is selected as a candidate long-term SSE. We also search for the onset of the ramp function based on the grid search. We applied the method to GNSS data in southwest Japan to detect long-term SSEs in the Nankai subduction zone. With the current selection criteria, we found 13 events with Mw 6.2-6.9 in Hyuga-nada, the Bungo channel, and central Shikoku from 1998 to 2015, which include previously unreported events. A key finding is the along-strike migration of long-term SSEs from Hyuga-nada to the Bungo channel and from the Bungo channel to central Shikoku.
In particular, three successive events migrating northward in Hyuga-nada preceded the 2003 Bungo channel SSE, and one event in central Shikoku followed the 2003 SSE in the Bungo channel. The space-time dimensions of the possible along-strike migration are about 300 km in length and 6 years in time. Systematic detection with assumptions of various durations in the time evolution of SSEs may improve the picture of SSE activity and of possible interactions with neighboring SSEs.
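The grid search for the ramp-function onset, scored by variance reduction, can be sketched in one dimension: a single displacement component and a one-parameter slip amplitude, leaving out the full rectangular-fault inversion. The 300-day duration follows the abstract; everything else is illustrative.

```python
import numpy as np

def fit_ramp_onset(t, d, duration=300.0):
    """Grid search over ramp onset; at each candidate onset the ramp
    amplitude is fit by least squares and variance reduction is scored.
    A 1-D sketch of the detection criterion, not the fault inversion."""
    best = (None, -np.inf, 0.0)
    for t0 in t:
        ramp = np.clip((t - t0) / duration, 0.0, 1.0)
        denom = ramp @ ramp
        if denom == 0:
            continue
        amp = (ramp @ d) / denom                        # 1-parameter LSQ fit
        vr = 1.0 - np.sum((d - amp * ramp) ** 2) / np.sum(d ** 2)
        if vr > best[1]:
            best = (t0, vr, amp)
    return best  # (onset, variance reduction, slip amplitude)

t = np.arange(0.0, 1000.0)                              # days
d = np.clip((t - 400.0) / 300.0, 0, 1) * 5.0            # synthetic ramp, day 400
print(fit_ramp_onset(t, d)[0])                          # recovers the onset
```

The full method repeats this fit at every grid point with a damped least squares estimate of length, width, slip, and rake, then keeps the grid point with the highest variance reduction.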
Van De Gucht, Tim; Van Weyenberg, Stephanie; Van Nuffel, Annelies; Lauwers, Ludwig; Vangeyte, Jürgen; Saeys, Wouter
2017-10-08
Most automatic lameness detection system prototypes have not yet been commercialized and are hence not yet adopted in practice. Therefore, the objective of this study was to simulate the effect of detection performance (percentage of missed lame cows and percentage of false alarms) and system cost on the potential market share of three automatic lameness detection systems relative to visual detection: a system attached to the cow, a walkover system, and a camera system. Simulations were done using a utility model derived from survey responses obtained from dairy farmers in Flanders, Belgium. Overall, systems attached to the cow had the largest market potential, but were still not competitive with visual detection. Increasing the detection performance or lowering the system cost led to higher market shares for automatic systems at the expense of visual detection. The willingness to pay for extra performance was €2.57 per % fewer missed lame cows, €1.65 per % fewer false alerts, and €12.7 for lame leg indication. The presented results could be exploited by system designers to determine the effect of adjustments to the technology on a system's potential adoption rate.
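One simple way to turn the reported willingness-to-pay figures into market shares is a multinomial-logit utility model. The logit form, the scale factor, and all system attribute values below are illustrative assumptions, not the study's estimated model.

```python
import numpy as np

def logit_shares(cost, missed, false_alarms):
    """Cost-equivalent utilities using the reported willingness-to-pay
    (EUR 2.57 per % missed lame cows, EUR 1.65 per % false alerts),
    mapped to shares with a multinomial logit; the logit form and the
    1/100 scale factor are illustrative assumptions."""
    u = -(np.asarray(cost, float)
          + 2.57 * np.asarray(missed, float)
          + 1.65 * np.asarray(false_alarms, float)) / 100.0
    e = np.exp(u - u.max())          # subtract max for numerical stability
    return e / e.sum()

# visual detection vs. attached / walkover / camera systems (made-up values)
print(logit_shares(cost=[0, 50, 80, 120],
                   missed=[20, 10, 15, 12],
                   false_alarms=[10, 8, 12, 9]).round(3))
```

With this kind of model, lowering a system's cost or error rates raises its utility and therefore its simulated share at the expense of the alternatives, which is the sensitivity the study explores.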
ERIC Educational Resources Information Center
Rellecke, Julian; Palazova, Marina; Sommer, Werner; Schacht, Annekathrin
2011-01-01
The degree to which emotional aspects of stimuli are processed automatically is controversial. Here, we assessed the automatic elicitation of emotion-related brain potentials (ERPs) to positive, negative, and neutral words and facial expressions in an easy and superficial face-word discrimination task, for which the emotional valence was…
Automatic zebrafish heartbeat detection and analysis for zebrafish embryos.
Pylatiuk, Christian; Sanchez, Daniela; Mikut, Ralf; Alshut, Rüdiger; Reischl, Markus; Hirth, Sofia; Rottbauer, Wolfgang; Just, Steffen
2014-08-01
A fully automatic detection and analysis method of heartbeats in videos of nonfixed and nonanesthetized zebrafish embryos is presented. This method reduces the manual workload and time needed for preparation and imaging of the zebrafish embryos, as well as for evaluating heartbeat parameters such as frequency, beat-to-beat intervals, and arrhythmicity. The method is validated by a comparison of the results from automatic and manual detection of the heart rates of wild-type zebrafish embryos 36-120 h postfertilization and of embryonic hearts with bradycardia and pauses in the cardiac contraction.
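Heartbeat frequency from such videos can be estimated, in a much-reduced form, from the dominant spectral peak of the mean pixel intensity over the heart region. This is an illustrative reduction of the published method, demonstrated on a synthetic clip.

```python
import numpy as np

def heart_rate_bpm(frames, fps):
    """Estimate beat frequency from the dominant spectral peak of the
    mean pixel intensity over the heart region (illustrative reduction
    of the published method)."""
    sig = np.asarray([f.mean() for f in frames], float)
    sig -= sig.mean()                            # remove the DC component
    spec = np.abs(np.fft.rfft(sig))
    freqs = np.fft.rfftfreq(len(sig), 1.0 / fps)
    return 60.0 * freqs[np.argmax(spec)]

# synthetic clip: a 3 Hz 'beat' sampled at 30 fps for 10 s -> ~180 bpm
fps, t = 30, np.arange(300) / 30.0
frames = [np.full((8, 8), np.sin(2 * np.pi * 3.0 * ti)) for ti in t]
print(heart_rate_bpm(frames, fps))
```

Beat-to-beat intervals and arrhythmicity, which the paper also reports, would require time-domain peak detection rather than a single spectral peak.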
Total lightning characteristics of recent hazardous weather events in Japan
NASA Astrophysics Data System (ADS)
Hobara, Y.; Kono, S.; Ogawa, T.; Heckman, S.; Stock, M.; Liu, C.
2017-12-01
In recent years, total lightning (IC + CG) activity has attracted much attention as a way to improve the prediction of hazardous weather phenomena (hail, wind gusts, tornadoes, heavy precipitation). A sudden increase of the total lightning flash rate, the so-called lightning jump (LJ), preceding hazardous weather has been reported in several studies and is one of the promising precursors. Although increases in the frequency and intensity of these extreme weather events have been reported in Japan, their relationship with total lightning has not yet been studied intensively. In this paper, we demonstrate recent results from the Japanese total lightning detection network (JTLN) in relation to hazardous weather events that occurred in Japan in the period 2014-2016. Automatic thunderstorm cell tracking was carried out based on very high spatial and temporal resolution X-band MP radar echo data (1 min and 250 m) to correlate with total lightning activity. The results are promising because the total lightning flash rate tends to increase about 10-40 minutes before the onset of the extreme weather events. We also present differences in the lightning characteristics of thunderstorm cells between hazardous and non-hazardous weather events, which is vital information for improving prediction efficiency.
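A lightning-jump style precursor test can be sketched as a sigma-based threshold on the flash-rate tendency, in the spirit of published 2-sigma LJ algorithms; the window length and the example rates below are illustrative.

```python
import numpy as np

def lightning_jumps(flash_rates, k=2.0):
    """Flag a 'lightning jump' when the change in total flash rate exceeds
    k sigma of its recent history (illustrative window of 5 intervals)."""
    r = np.asarray(flash_rates, float)
    dr = np.diff(r)                       # flash-rate tendency per interval
    jumps = []
    for i in range(5, len(dr)):
        hist = dr[i - 5:i]
        if dr[i] > hist.mean() + k * hist.std():
            jumps.append(i + 1)           # index into flash_rates
    return jumps

rates = [4, 5, 4, 6, 5, 5, 6, 25, 30, 28]   # sharp jump before severe weather
print(lightning_jumps(rates))               # flags the interval of the jump
```

In an operational setting the per-cell flash rates would come from the tracked radar cells, so each jump is tied to a specific storm.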
Visual traffic jam analysis based on trajectory data.
Wang, Zuchao; Lu, Min; Yuan, Xiaoru; Zhang, Junping; van de Wetering, Huub
2013-12-01
In this work, we present an interactive system for visual analysis of urban traffic congestion based on GPS trajectories. For these trajectories we develop strategies to extract and derive traffic jam information. After cleaning the trajectories, they are matched to a road network. Subsequently, the traffic speed on each road segment is computed and traffic jam events are automatically detected. Spatially and temporally related events are concatenated into so-called traffic jam propagation graphs. These graphs form a high-level description of a traffic jam and its propagation in time and space. Our system provides multiple views for visually exploring and analyzing the traffic condition of a large city as a whole, on the level of propagation graphs, and on the road segment level. Case studies with 24 days of taxi GPS trajectories collected in Beijing demonstrate the effectiveness of our system.
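The jam-event detection and propagation-graph construction can be sketched minimally: flag (segment, time) cells whose speed falls below a threshold, then link an event to events on adjacent segments in the next time slot. The 10 km/h threshold and the segment adjacency below are invented.

```python
def jam_events(speeds, threshold=10.0):
    """speeds: {segment: [speed per time slot]} -> set of (segment, t) events."""
    return {(seg, t) for seg, s in speeds.items()
            for t, v in enumerate(s) if v < threshold}

def propagation_graph(events, adjacency):
    """Link each jam event to events on adjacent segments in the next slot."""
    edges = []
    for seg, t in events:
        for nb in adjacency.get(seg, []):
            if (nb, t + 1) in events:
                edges.append(((seg, t), (nb, t + 1)))
    return edges

speeds = {'A': [40, 8, 7], 'B': [35, 30, 6]}     # jam spreads from A to B
adjacency = {'A': ['B'], 'B': ['A']}
ev = jam_events(speeds)
print(propagation_graph(ev, adjacency))          # one propagation edge A -> B
```

Connected components of this graph then correspond to the high-level "one jam and its propagation" objects the system visualizes.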
Fletcher, Richard Ribón; Tam, Sharon; Omojola, Olufemi; Redemske, Richard; Kwan, Joyce
2011-01-01
We present a wearable sensor platform designed for monitoring and studying autonomic nervous system (ANS) activity for the purpose of mental health treatment and interventions. The mobile sensor system consists of a sensor band worn on the ankle that continuously monitors electrodermal activity (EDA), 3-axis acceleration, and temperature. A custom-designed ECG heart monitor worn on the chest is also used as an optional part of the system. The EDA signal from the ankle bands provides a measure of sympathetic nervous system activity and is used to detect arousal events. The optional ECG data can be used to improve the sensor classification algorithm and provide a measure of emotional "valence." Both types of sensor bands contain a Bluetooth radio that enables communication with the patient's mobile phone. When a specific arousal event is detected, the phone automatically presents therapeutic and empathetic messages to the patient in the tradition of Cognitive Behavioral Therapy (CBT). As an example of clinical use, we describe how the system is currently being used in an ongoing study of patients with drug addiction and post-traumatic stress disorder (PTSD).
Automatic detection of larynx cancer from contrast-enhanced magnetic resonance images
NASA Astrophysics Data System (ADS)
Doshi, Trushali; Soraghan, John; Grose, Derek; MacKenzie, Kenneth; Petropoulakis, Lykourgos
2015-03-01
Detection of larynx cancer from medical imaging is important for quantification and for the definition of target volumes in radiotherapy treatment planning (RTP). Magnetic resonance imaging (MRI) is being increasingly used in RTP due to its high resolution and excellent soft tissue contrast. Manually detecting larynx cancer from sequential MRI is time consuming and subjective. The large diversity of cancers in terms of geometry and non-distinct boundaries, combined with the presence of normal anatomical regions close to the cancer regions, necessitates the development of automatic and robust algorithms for this task. A new automatic algorithm for the detection of larynx cancer from 2D gadolinium-enhanced T1-weighted (T1+Gd) MRI to assist clinicians in RTP is presented. The algorithm employs edge detection using the spatial neighborhood information of pixels and incorporates this information into a fuzzy c-means clustering process to robustly separate different tissue types. Furthermore, it utilizes information about the expected cancer location to label cancer regions. Comparison of this automatic detection system with manual clinical detection on real T1+Gd axial MRI slices of 2 patients (24 MRI slices) with visible larynx cancer yields an average Dice similarity coefficient of 0.78±0.04 and an average root mean square error of 1.82±0.28 mm. Preliminary results show that this fully automatic system can assist clinicians in RTP by obtaining quantifiable, non-subjective, and repeatable detection results in a time-efficient and unbiased fashion.
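The clustering stage can be illustrated with plain fuzzy c-means on 1-D intensities. The paper's variant additionally folds spatial neighborhood/edge information into the membership update, which this sketch omits; centre seeding from quantiles is also our own simplification.

```python
import numpy as np

def fuzzy_cmeans(x, c=3, m=2.0, iters=50):
    """Plain fuzzy c-means on 1-D pixel intensities, centres seeded from
    data quantiles. The paper's variant adds a spatial neighbourhood term
    to the membership update, omitted here."""
    x = np.asarray(x, float).ravel()
    centers = np.quantile(x, np.linspace(0.1, 0.9, c))
    for _ in range(iters):
        d = np.abs(x[None, :] - centers[:, None]) + 1e-12
        u = d ** (-2.0 / (m - 1))          # fuzzy memberships
        u /= u.sum(axis=0)                 # each pixel's memberships sum to 1
        um = u ** m
        centers = (um @ x) / um.sum(axis=1)
    return centers, u

# three intensity populations -> three recovered cluster centres
x = np.concatenate([np.full(50, 0.1), np.full(50, 0.5), np.full(50, 0.9)])
centers, u = fuzzy_cmeans(x)
print(np.sort(centers).round(2))           # centres near 0.1, 0.5, 0.9
```

In the full algorithm each cluster corresponds to a tissue type, and the expected-location prior then picks which cluster regions to label as cancer.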
NASA Astrophysics Data System (ADS)
Applbaum, David; Dorman, Lev; Pustil'Nik, Lev; Sternlieb, Abraham; Zagnetko, Alexander; Zukerman, Igor
In Applbaum et al. (2010) it was described how the "SEP-Search" program works automatically, determining on the basis of on-line one-minute NM data the beginning of a great SEP event. "SEP-Search" next uses one-minute data in order to check whether or not the observed increase reflects the beginning of a real great SEP event. If yes, the program "SEP-Research/Spectrum" automatically starts to work on line. We consider two variants: 1) a quiet period (no change in cut-off rigidity), and 2) a disturbed period (characterized by a possible change in cut-off rigidity). We describe the method of determining the spectrum of SEP in the first variant (for this we need data for at least two components with different coupling functions). For the second variant we need data for at least three components with different coupling functions. We show that for these purposes one can use data of the total intensity and some different multiplicities, but that it is better to use data from two or three NMs with different cut-off rigidities. We describe in detail the algorithms of the program "SEP-Research/Spectrum." We show how this program worked on examples of some historical great SEP events. The work of the NM on Mt. Hermon is supported by an Israeli (Tel Aviv University and ISA) - Italian (UNIRoma-Tre and IFSI-CNR) collaboration.
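The two-component case (solving for a power-law SEP rigidity spectrum from the increases observed at two NMs with different cut-off rigidities) can be sketched numerically. The Dorman-style coupling-function parameters, the cutoffs, and the forward-modeled spectrum below are all illustrative, not values from the actual program.

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

def dorman_w(R, a=8.0, k=1.0):
    """Dorman-style coupling function of rigidity R (parameters illustrative)."""
    return a * k * R ** (-(k + 1)) * np.exp(-a * R ** (-k))

def f(gamma, rc):
    """Expected fractional NM increase per unit b for a spectrum b*R^-gamma,
    integrated above the cutoff rigidity rc (GV)."""
    val, _ = quad(lambda R: dorman_w(R) * R ** (-gamma), rc, 200.0)
    return val

def solve_spectrum(inc1, rc1, inc2, rc2):
    """Solve inc_i = b * f(gamma, rc_i) for (gamma, b) from two NMs with
    different cutoffs -- the two-component case described in the text."""
    g = brentq(lambda g: inc1 * f(g, rc2) - inc2 * f(g, rc1), 0.1, 8.0)
    return g, inc1 / f(g, rc1)

# forward-model a gamma=4, b=100 event at 1 GV and 6 GV cutoffs, then invert
gamma_true, b_true = 4.0, 100.0
inc1, inc2 = b_true * f(gamma_true, 1.0), b_true * f(gamma_true, 6.0)
g, b = solve_spectrum(inc1, 1.0, inc2, 6.0)
print(round(g, 2), round(b, 1))   # recovers the input spectrum
```

The disturbed-period variant adds the cut-off rigidity change as a third unknown, which is why it needs a third independent component.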
Recent Advances in Observations of Ground-level Auroral Kilometric Radiation
NASA Astrophysics Data System (ADS)
Labelle, J. W.; Ritter, J.; Pasternak, S.; Anderson, R. R.; Kojima, H.; Frey, H. U.
2011-12-01
Recently LaBelle and Anderson [2011] reported the first definitive observations of AKR at ground level, confirmed through simultaneous measurements on the Geotail spacecraft and at South Pole Station, Antarctica. The initial observations consisted of three examples recorded in 2004. An Antarctic observing site is critical for observing ground level AKR which is obscured by man-made broadcast signals at northern hemisphere locations. Examination of 2008 austral winter radio data from Antarctic Automatic Geophysical Observatories (AGOs) of the Polar Experiment Network for Geospace Upper-atmosphere Investigations (PENGUIn) network and South Pole Station reveals 37 ground level AKR events on 23 different days, 30 of which are confirmed by correlation with AKR observed with the Geotail spacecraft. The location of the Geotail spacecraft appears to be a significant factor enabling coincident measurements. Six of the AKR events are detected at two or three ground-level observatories separated by approximately 500 km, suggesting that the events illuminate an area comparable to a 500-km diameter. For 14 events on ten nights, photometer and all-sky imager data from South Pole and AGOs were examined; in ten cases, locations of auroral arcs could be determined at the times of the events. In eight of those cases, the AKR was detected at observatories poleward of the auroral arcs, and in the other two cases the aurora was approximately overhead at the observatory where AKR was detected. These observations suggest that the AKR signals may be ducted to ground level along magnetic field lines rather than propagating directly from the AKR source region of approximately 5000 km altitude. Correlations between structures in the AKR and intensifications of auroral arcs are occasionally observed but are rare. The ground-level AKR events have a local time distribution similar to that of AKR observed from satellites, peaking in the pre-midnight to midnight sector. 
This database of >30 events observed coincidentally at ground level and on the Geotail spacecraft, including several events detected at multiple ground stations, will provide a measurement of the ratio of the ground-level intensity to that observed on the satellite, with the latter adjusted for distance. This ratio of intensities, not well determined from the original three events [LaBelle and Anderson, 2011], places an important constraint on the generation mechanism. Reference: LaBelle, J., and R. R. Anderson (2011), Ground-level detection of Auroral Kilometric Radiation, Geophys. Res. Lett., 38, L04104, doi:10.1029/2010GL046411.
Automatically Detecting Likely Edits in Clinical Notes Created Using Automatic Speech Recognition
Lybarger, Kevin; Ostendorf, Mari; Yetisgen, Meliha
2017-01-01
The use of automatic speech recognition (ASR) to create clinical notes has the potential to reduce the costs associated with note creation for electronic medical records, but at current system accuracy levels, post-editing by practitioners is needed to ensure note quality. Aiming to reduce the time required to edit ASR transcripts, this paper investigates novel methods for automatic detection of edit regions within the transcripts, including both putative ASR errors and regions that are targets for cleanup or rephrasing. We create detection models using logistic regression and conditional random field models, exploring a variety of text-based features that consider the structure of clinical notes and exploit the medical context. Different medical text resources are used to improve feature extraction. Experimental results on a large corpus of practitioner-edited clinical notes show that 67% of sentence-level edits and 45% of word-level edits can be detected with a false detection rate of 15%. PMID:29854187
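A sentence-level edit detector in the spirit of the logistic-regression model can be sketched with two invented features (an out-of-vocabulary rate and an ASR confidence score); the real models use a much richer, structure-aware feature set and a CRF for sequence labeling.

```python
import numpy as np

def train_logreg(X, y, lr=0.5, steps=2000):
    """Logistic regression by gradient descent -- a minimal stand-in for
    the paper's sentence-level edit detector."""
    Xb = np.hstack([X, np.ones((len(X), 1))])       # append a bias column
    w = np.zeros(Xb.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-Xb @ w))           # predicted P(edit)
        w -= lr * Xb.T @ (p - y) / len(y)           # gradient step
    return w

def predict(w, X):
    Xb = np.hstack([X, np.ones((len(X), 1))])
    return (1.0 / (1.0 + np.exp(-Xb @ w))) > 0.5

rng = np.random.default_rng(2)
# invented features: edited sentences have higher OOV rate, lower confidence
edited = np.c_[rng.normal(0.3, 0.05, 50), rng.normal(0.6, 0.05, 50)]
kept = np.c_[rng.normal(0.1, 0.05, 50), rng.normal(0.9, 0.05, 50)]
X = np.vstack([edited, kept])
y = np.r_[np.ones(50), np.zeros(50)]
w = train_logreg(X, y)
print((predict(w, X) == (y == 1)).mean())   # training accuracy
```

Thresholding the predicted probability trades detection rate against false detections, which is how the 15% false detection operating point in the abstract would be chosen.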
SAR-based change detection using hypothesis testing and Markov random field modelling
NASA Astrophysics Data System (ADS)
Cao, W.; Martinis, S.
2015-04-01
The objective of this study is to automatically detect areas changed by natural disasters from bi-temporal co-registered and calibrated TerraSAR-X data. The technique in this paper consists of two steps. First, an automatic coarse detection step is applied based on a statistical hypothesis test to initialize the classification. The original analytical formula as proposed in the constant false alarm rate (CFAR) edge detector is reviewed and rewritten in a compact form of the incomplete beta function, which is a built-in routine in commercial scientific software such as MATLAB and IDL. Second, a post-classification step is introduced to optimize the noisy classification result of the previous step. Generally, an optimization problem can be formulated as a Markov random field (MRF) on which the quality of a classification is measured by an energy function. The optimal classification based on the MRF corresponds to the lowest energy value. Previous studies provide methods for the optimization problem using MRFs, such as the iterated conditional modes (ICM) algorithm. Recently, a novel algorithm was presented based on graph-cut theory. This method transforms an MRF into an equivalent graph and solves the optimization problem by a max-flow/min-cut algorithm on the graph. In this study, the graph-cut algorithm is applied iteratively to improve the coarse classification. At each iteration the parameters of the energy function for the current classification are set from the logarithmic probability density function (PDF). The relevant parameters are estimated by the method of logarithmic cumulants (MoLC). Experiments are performed using two flood events in Germany and Australia in 2011 and a forest fire on La Palma in 2009, using pre- and post-event TerraSAR-X data. The results show convincing coarse classifications and considerable improvement by the graph-cut post-classification step.
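The compact incomplete-beta form of the ratio test can be sketched directly: with n pixels of L-look intensity per window and a common scale under the no-change hypothesis, the statistic mean1/(mean1+mean2) follows a Beta(nL, nL) law, so the regularized incomplete beta function gives the p-value. The window size and number of looks below are illustrative, not the paper's settings.

```python
from scipy.special import betainc

def change_pvalue(mean1, mean2, n, L):
    """Two-sided test on the ratio of two multilooked SAR intensity means.

    Sums of n gamma(L)-distributed intensities are gamma(nL), so under
    'no change' x = mean1/(mean1+mean2) ~ Beta(nL, nL); betainc is the
    regularized incomplete beta function mentioned in the abstract."""
    x = mean1 / (mean1 + mean2)
    p = betainc(n * L, n * L, x)        # CDF of Beta(nL, nL) at x
    return 2.0 * min(p, 1.0 - p)        # two-sided p-value

# identical backscatter -> large p-value; tripled backscatter -> tiny one
print(change_pvalue(0.10, 0.10, n=64, L=3))
print(change_pvalue(0.30, 0.10, n=64, L=3))
```

Thresholding this p-value at the desired false alarm rate yields the coarse change mask that the graph-cut post-classification then cleans up.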
An Application of Reassigned Time-Frequency Representations for Seismic Noise/Signal Decomposition
NASA Astrophysics Data System (ADS)
Mousavi, S. M.; Langston, C. A.
2016-12-01
Seismic data recorded by surface arrays are often strongly contaminated by unwanted noise. This background noise makes the detection of small-magnitude events difficult. An automatic method for seismic noise/signal decomposition is presented based upon an enhanced time-frequency representation. Synchrosqueezing is a time-frequency reassignment method aimed at sharpening the time-frequency picture. Noise can be distinguished from the signal and suppressed more easily in this reassigned domain. The threshold level is estimated using a general cross-validation approach that does not rely on any prior knowledge about the noise level. The efficiency of thresholding has been improved by adding a pre-processing step based on higher-order statistics and a post-processing step based on adaptive hard thresholding. In doing so, both the accuracy and speed of the denoising have been improved compared to our previous algorithms (Mousavi and Langston, 2016a, 2016b; Mousavi et al., 2016). The proposed algorithm can either suppress the noise (white or colored) and keep the signal, or suppress the signal and keep the noise. Hence, it can be used either in normal denoising applications or in ambient noise studies. Application of the proposed method to synthetic and real seismic data shows its effectiveness for denoising/designaling of local microseismic and ocean-bottom seismic data. References: Mousavi, S. M., C. A. Langston, and S. P. Horton (2016), Automatic Microseismic Denoising and Onset Detection Using the Synchrosqueezed Continuous Wavelet Transform, Geophysics, 81, V341-V355, doi:10.1190/GEO2015-0598.1. Mousavi, S. M., and C. A. Langston (2016a), Hybrid Seismic Denoising Using Higher-Order Statistics and Improved Wavelet Block Thresholding, Bull. Seismol. Soc. Am., 106, doi:10.1785/0120150345. Mousavi, S. M., and C. A. Langston (2016b), Adaptive noise estimation and suppression for improving microseismic event detection, Journal of Applied Geophysics, doi:10.1016/j.jappgeo.2016.06.008.
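The thresholding idea can be sketched in a plain STFT domain, with a robust noise estimate standing in for both the synchrosqueezed CWT and the generalized-cross-validation threshold of the paper; all parameters below are illustrative.

```python
import numpy as np
from scipy.signal import stft, istft

def tf_denoise(x, fs, k=3.0):
    """Hard thresholding in a time-frequency domain. The paper operates
    on the synchrosqueezed CWT with a cross-validated threshold; here a
    plain STFT and a MAD-style robust noise scale stand in for both."""
    f, t, Z = stft(x, fs=fs, nperseg=128)
    sigma = np.median(np.abs(Z)) / 0.6745       # robust noise scale
    Z[np.abs(Z) < k * sigma] = 0.0              # kill sub-threshold cells
    _, y = istft(Z, fs=fs, nperseg=128)
    return y[:len(x)]

rng = np.random.default_rng(1)
fs = 500.0
t = np.arange(0, 4, 1 / fs)
clean = np.sin(2 * np.pi * 20 * t) * np.exp(-((t - 2) ** 2) * 4)
noisy = clean + 0.5 * rng.standard_normal(t.size)
den = tf_denoise(noisy, fs)
print(np.std(noisy - clean), np.std(den - clean))   # residual shrinks
```

Keeping the sub-threshold cells instead of the supra-threshold ones gives the complementary "keep the noise" mode mentioned in the abstract, which is the one useful for ambient noise studies.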
Effects of musical training on sound pattern processing in high-school students.
Wang, Wenjung; Staffaroni, Laura; Reid, Errold; Steinschneider, Mitchell; Sussman, Elyse
2009-05-01
Recognizing melody in music involves detection of both the pitch intervals and the silence between sequentially presented sounds. This study tested the hypothesis that active musical training in adolescents facilitates the ability to passively detect sequential sound patterns compared to musically non-trained age-matched peers. Twenty adolescents, aged 15-18 years, were divided into groups according to their musical training and current experience. A fixed-order tone pattern was presented at various stimulus rates while the electroencephalogram was recorded. The influence of musical training on passive auditory processing of the sound patterns was assessed using components of event-related brain potentials (ERPs). The mismatch negativity (MMN) ERP component was elicited under different stimulus onset asynchrony (SOA) conditions in musicians and non-musicians, indicating that musically active adolescents were able to detect sound patterns across longer time intervals than their age-matched peers. Musical training facilitates the detection of auditory patterns, enabling the automatic recognition of sequential sound patterns over longer time periods than in non-musical counterparts.
Neural network model for automatic traffic incident detection : final report, August 2001.
DOT National Transportation Integrated Search
2001-08-01
Automatic freeway incident detection is an important component of advanced transportation management systems (ATMS) that provides information for emergency relief and traffic control and management purposes. In this research, a multi-paradigm intelli...
Detecting cheaters without thinking: testing the automaticity of the cheater detection module.
Van Lier, Jens; Revlin, Russell; De Neys, Wim
2013-01-01
Evolutionary psychologists have suggested that our brain is composed of evolved mechanisms. One extensively studied mechanism is the cheater detection module. This module would make people very good at detecting cheaters in a social exchange. A vast amount of research has illustrated performance facilitation on social contract selection tasks. This facilitation is attributed to the alleged automatic and isolated operation of the module (i.e., independent of general cognitive capacity). This study, using the selection task, tested the critical automaticity assumption in three experiments. Experiments 1 and 2 established that performance on social contract versions did not depend on cognitive capacity or age. Experiment 3 showed that experimentally burdening cognitive resources with a secondary task had no impact on performance on the social contract version. However, in all experiments, performance on a non-social contract version did depend on available cognitive capacity. Overall, findings validate the automatic and effortless nature of social exchange reasoning.
Fast Drawing of Traffic Sign Using Mobile Mapping System
NASA Astrophysics Data System (ADS)
Yao, Q.; Tan, B.; Huang, Y.
2016-06-01
Traffic signs provide road users with specified instructions and information to enhance traffic safety. Automatic detection of traffic signs is important for navigation, autonomous driving, transportation asset management, etc. With the advance of laser and imaging sensors, the Mobile Mapping System (MMS) has become widely used in transportation agencies to map the transportation infrastructure. Although many traffic sign detection algorithms have been developed in the literature, they still involve a tradeoff between detection speed and accuracy, especially for large-scale mobile mapping of both rural and urban roads. This paper is motivated to efficiently survey traffic signs while mapping the road network and the roadside landscape. Inspired by the manual delineation of traffic signs, a drawing strategy is proposed to quickly approximate the boundary of a traffic sign. Both the shape and color priors of the traffic sign are simultaneously involved during the drawing process. The most common speed-limit sign circle and the statistical color model of traffic signs are studied in this paper. Anchor points of the traffic sign edge are located at the local maxima of color and gradient difference. Starting with the anchor points, the contour of the traffic sign is drawn along the most significant direction of color and intensity consistency. The drawing process is also constrained by the curvature feature of the traffic sign circle. A linearly growing drawing is discarded immediately if it fails to form an arc over some steps. The Kalman filter principle is adopted to predict the temporal context of the traffic sign. Based on the estimated point, we can predict and double-check the traffic sign in consecutive frames. The event probability of having a traffic sign over the consecutive observations is compared with the null hypothesis of no perceptible traffic sign.
The temporally salient traffic sign is then detected statistically and automatically as the rare event of having a traffic sign. The proposed algorithm is tested with a diverse set of images taken in Wuhan, China with the MMS of Wuhan University. Experimental results demonstrate that the proposed algorithm can detect traffic signs at a rate of over 80% in around 10 milliseconds. It is promising for large-scale traffic sign surveys and change detection using the mobile mapping system.
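The Kalman-filter prediction of a sign's position across consecutive frames can be sketched with a standard constant-velocity model. The state layout and noise covariances below are assumptions for illustration, not values from the paper:

```python
import numpy as np

# State is [x, y, vx, vy]: image position and per-frame velocity of the sign.
F = np.array([[1., 0., 1., 0.],
              [0., 1., 0., 1.],
              [0., 0., 1., 0.],
              [0., 0., 0., 1.]])   # constant-velocity transition
H = np.array([[1., 0., 0., 0.],
              [0., 1., 0., 0.]])   # only position is measured
Q = np.eye(4) * 1e-2               # process noise (assumed)
R = np.eye(2)                      # measurement noise (assumed)

def kalman_step(x, P, z):
    """One predict/update cycle given the detected sign centre z."""
    x_pred = F @ x                 # where the sign should appear next frame
    P_pred = F @ P @ F.T + Q
    K = P_pred @ H.T @ np.linalg.inv(H @ P_pred @ H.T + R)
    x_new = x_pred + K @ (z - H @ x_pred)   # correct with the detection
    P_new = (np.eye(4) - K @ H) @ P_pred
    return x_new, P_new
```

The predicted position `H @ F @ x` gives the region in the next frame where the sign is double-checked, as described in the abstract.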
Multilevel analysis of sports video sequences
NASA Astrophysics Data System (ADS)
Han, Jungong; Farin, Dirk; de With, Peter H. N.
2006-01-01
We propose a fully automatic and flexible framework for analysis and summarization of tennis broadcast video sequences, using visual features and specific game-context knowledge. Our framework can analyze a tennis video sequence at three levels, which provides a broad range of different analysis results. The proposed framework includes novel pixel-level and object-level tennis video processing algorithms, such as moving-player detection that takes both the color and the court (playing-field) information into account, and a player-position tracking algorithm based on a 3-D camera model. Additionally, we employ scene-level models for detecting events, like service, base-line rally and net-approach, based on a number of real-world visual features. The system can summarize three forms of information: (1) all court-view playing frames in a game, (2) the moving trajectory and real speed of each player, as well as the relative position between the player and the court, (3) the semantic event segments in a game. The proposed framework is flexible in choosing the level of analysis that is desired. It is effective because the framework makes use of several visual cues obtained from the real-world domain to model important events like service, thereby increasing the accuracy of the scene-level analysis. The paper presents attractive experimental results highlighting the system efficiency and analysis capabilities.
Automatic Detection of Student Mental Models during Prior Knowledge Activation in MetaTutor
ERIC Educational Resources Information Center
Rus, Vasile; Lintean, Mihai; Azevedo, Roger
2009-01-01
This paper presents several methods for automatically detecting students' mental models in MetaTutor, an intelligent tutoring system that teaches students self-regulatory processes during learning of complex science topics. In particular, we focus on detecting students' mental models based on student-generated paragraphs during prior knowledge…
40 CFR 49.4166 - Monitoring requirements.
Code of Federal Regulations, 2013 CFR
2013-07-01
... burning pilot flame, electronically controlled automatic igniters, and monitoring system failures, using a... failure, electronically controlled automatic igniter failure, or improper monitoring equipment operation... and natural gas emissions in the event that natural gas recovered for pipeline injection must be...
40 CFR 49.4166 - Monitoring requirements.
Code of Federal Regulations, 2014 CFR
2014-07-01
... burning pilot flame, electronically controlled automatic igniters, and monitoring system failures, using a... failure, electronically controlled automatic igniter failure, or improper monitoring equipment operation... and natural gas emissions in the event that natural gas recovered for pipeline injection must be...
DALMATIAN: An Algorithm for Automatic Cell Detection and Counting in 3D.
Shuvaev, Sergey A; Lazutkin, Alexander A; Kedrov, Alexander V; Anokhin, Konstantin V; Enikolopov, Grigori N; Koulakov, Alexei A
2017-01-01
Current 3D imaging methods, including optical projection tomography, light-sheet microscopy, block-face imaging, and serial two-photon tomography, enable visualization of large samples of biological tissue. Large volumes of data obtained at high resolution require the development of automatic image processing techniques, such as algorithms for automatic cell detection or, more generally, point-like object detection. Current approaches to automated cell detection suffer from difficulties in detecting particular cell types and in handling cell populations of different brightness, non-uniform staining, and overlapping cells. In this study, we present a set of algorithms for robust automatic cell detection in 3D. Our algorithms are suitable for, but not limited to, whole brain regions and individual brain sections. We used a watershed procedure to split regional maxima representing overlapping cells. We developed a bootstrap Gaussian fit procedure to evaluate the statistical significance of detected cells. We compared the cell detection quality of our algorithm and other software using 42 samples, representing 6 staining and imaging techniques. The results provided by our algorithm matched manual expert quantification with signal-to-noise-dependent confidence, including samples with cells of different brightness, non-uniform staining, and overlapping cells, for whole brain regions and individual tissue sections. Our algorithm provided the best cell detection quality among the tested free and commercial software.
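The regional-maxima stage of such a pipeline can be illustrated with a toy stand-in; the watershed splitting and bootstrap Gaussian fit of the paper are omitted, and the 3x3 neighbourhood and threshold here are illustrative assumptions:

```python
import numpy as np

def detect_cells(img, thr):
    """Return (row, col) of pixels that are 3x3 local maxima above thr,
    a crude proxy for point-like object (cell centre) candidates."""
    peaks = []
    for i in range(1, img.shape[0] - 1):
        for j in range(1, img.shape[1] - 1):
            patch = img[i - 1:i + 2, j - 1:j + 2]
            if img[i, j] >= thr and img[i, j] == patch.max():
                peaks.append((i, j))
    return peaks
```

In the real setting these candidates would then be split by watershed where two cells overlap, and each fit with a Gaussian to assess significance.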
Automatic Surveying for Hazard Prevention on Glacier de Giétro, Switzerland
NASA Astrophysics Data System (ADS)
Bauder, A.; Funk, M.; Bösch, H.
Breaking off of large ice masses from the steep tongue of Glacier de Giétro may endanger a nearby reservoir. Such a falling ice mass could cause an overtopping wave at the dam when the lake is nearly full. For this reason the glacier has been monitored intensively since the 1960s. An automatic theodolite was installed three years ago. It allows continuous displacement measurements of several targets on the glacier in order to detect short-term acceleration events. The installation includes telemetric data transmission, which provides for immediate recognition of hazardous situations and early alarming. The obtained data were analysed in terms of the precision and performance of the applied method. A high temporal resolution was gained. The comparison with traditional observations clearly shows the potential of modern instruments to improve monitoring schemes. We summarize the main results of this study and discuss the applicability of a modern motorized theodolite with target tracking and recognition ability for monitoring purposes.
OKCARS : Oklahoma Collision Analysis and Response System.
DOT National Transportation Integrated Search
2012-10-01
By continuously monitoring traffic intersections to automatically detect that a collision or near-collision has occurred, automatically call for assistance, and automatically forewarn oncoming traffic, our OKCARS has the capability to effectively ...
Ramírez, I; Pantrigo, J J; Montemayor, A S; López-Pérez, A E; Martín-Fontelles, M I; Brookes, S J H; Abalo, R
2017-08-01
When available, fluoroscopic recordings are a relatively cheap, non-invasive and technically straightforward way to study gastrointestinal motility. Spatiotemporal maps have been used to characterize the motility of intestinal preparations in vitro, or in anesthetized animals in vivo. Here, a new automated computer-based method was used to construct spatiotemporal motility maps from fluoroscopic recordings obtained in conscious rats. Conscious, non-fasted, adult, male Wistar rats (n=8) received intragastric administration of barium contrast, and 1-2 hours later, when several loops of the small intestine were well-defined, a 2-minute fluoroscopic recording was obtained. Spatiotemporal diameter maps (Dmaps) were automatically calculated from the recordings. Three recordings were also manually analyzed for comparison. Frequency analysis was performed in order to calculate relevant motility parameters. In each conscious rat, a stable recording (17-20 seconds) was analyzed. The Dmaps manually and automatically obtained from the same recording were comparable, but the automated process was faster and provided higher resolution. Two frequencies of motor activity dominated; lower-frequency contractions (15.2±0.9 cpm) had an amplitude approximately five times greater than higher-frequency events (32.8±0.7 cpm). The automated method developed here needed little investigator input, provided high-resolution results with short computing times, and automatically compensated for breathing and other small movements, allowing recordings to be made without anesthesia. Although slow and/or infrequent events could not be detected in the short recording periods analyzed to date (17-20 seconds), this novel system enhances the analysis of in vivo motility in conscious animals. © 2017 John Wiley & Sons Ltd.
Timing of repetition suppression of event-related potentials to unattended objects.
Stefanics, Gabor; Heinzle, Jakob; Czigler, István; Valentini, Elia; Stephan, Klaas Enno
2018-05-26
Current theories of object perception emphasize the automatic nature of perceptual inference. Repetition suppression (RS), the successive decrease of brain responses to repeated stimuli, is thought to reflect the optimization of perceptual inference through neural plasticity. While functional imaging studies revealed brain regions that show suppressed responses to the repeated presentation of an object, little is known about the intra-trial time course of repetition effects to everyday objects. Here we recorded event-related potentials (ERPs) to task-irrelevant line-drawn objects, while participants engaged in a distractor task. We quantified changes in ERPs over repetitions using three general linear models (GLM) that modelled RS by an exponential, linear, or categorical "change detection" function in each subject. Our aim was to select the model with highest evidence and determine the within-trial time course and scalp distribution of repetition effects using that model. Model comparison revealed the superiority of the exponential model, indicating that repetition effects are observable for trials beyond the first repetition. Model parameter estimates revealed a sequence of RS effects in three time windows (86-140 ms, 322-360 ms, and 400-446 ms) with occipital, temporo-parietal, and fronto-temporal distribution, respectively. An interval of repetition enhancement (RE) was also observed (320-340 ms) over occipito-temporal sensors. Our results show that automatic processing of task-irrelevant objects involves multiple intervals of RS with distinct scalp topographies. These sequential intervals of RS and RE might reflect the short-term plasticity required for optimization of perceptual inference and the associated changes in prediction errors (PE) and predictions, respectively, over stimulus repetitions during automatic object processing.
© 2018 The Authors European Journal of Neuroscience published by Federation of European Neuroscience Societies and John Wiley & Sons Ltd.
Testing & Evaluation of Close-Range SAR for Monitoring & Automatically Detecting Pavement Conditions
DOT National Transportation Integrated Search
2012-01-01
This report summarizes activities in support of the DOT contract on Testing & Evaluating Close-Range SAR for Monitoring & Automatically Detecting Pavement Conditions & Improve Visual Inspection Procedures. The work of this project was performed by Dr...
Kim, Young Jae; Kim, Kwang Gi
2018-01-01
Existing drusen measurement is difficult to use in the clinic because it requires much time and effort for visual inspection. To resolve this problem, we propose an automatic drusen detection method to help the clinical diagnosis of age-related macular degeneration. First, we extracted the green channel of the fundus image and extracted the ROI of the macular area based on the optic disk. Next, we detected candidate drusen using the difference between the ROI and its median-filtered image. We also segmented vessels and removed them from the image. Finally, we detected the drusen with Renyi's entropy threshold algorithm. We performed comparisons and statistical analysis between the manual detection results and automatic detection results for 30 cases in order to verify validity. As a result, the average sensitivity was 93.37% (80.95%~100%) and the average DSC was 0.73 (0.3~0.98). In addition, the value of the ICC was 0.984 (CI: 0.967~0.993, p < 0.01), showing the high reliability of the proposed automatic method. We expect that automatic drusen detection will help clinicians improve their diagnostic performance in the detection of drusen on fundus images.
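The median-filter difference step can be sketched as follows. The fixed threshold stands in for the paper's Renyi entropy thresholding, and the window size is an assumption:

```python
import numpy as np

def median_filter(img, k=3):
    """Plain k x k median filter (edge-replicated borders)."""
    p = np.pad(img, k // 2, mode='edge')
    out = np.empty_like(img, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.median(p[i:i + k, j:j + k])
    return out

def drusen_candidates(green, thr):
    """Bright residues left after subtracting the median-filtered background;
    small bright spots (drusen-like) survive, smooth background does not."""
    return (green - median_filter(green)) > thr
```

The median filter removes spots smaller than the window while preserving the smooth retinal background, so the difference image isolates candidate drusen.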
A fast automatic target detection method for detecting ships in infrared scenes
NASA Astrophysics Data System (ADS)
Özertem, Kemal Arda
2016-05-01
Automatic target detection in infrared scenes is a vital task for many application areas like defense, security and border surveillance. For anti-ship missiles, having a fast and robust ship detection algorithm is crucial for overall system performance. In this paper, a straightforward yet effective ship detection method for infrared scenes is introduced. First, morphological grayscale reconstruction is applied to the input image, followed by automatic thresholding of the suppressed image. For the segmentation step, connected component analysis is employed to obtain target candidate regions. At this point, the detection is still vulnerable to outliers such as small objects with relatively high intensity values or clouds. To deal with this drawback, a post-processing stage is introduced, consisting of two different methods. First, noisy detection results are rejected with respect to target size. Second, the waterline is detected by using the Hough transform, and detection results located above the waterline (with a small margin) are rejected. After the post-processing stage, undesired holes may still remain, which can cause one object to be detected as multiple objects or prevent an object from being detected as a whole. To improve the detection performance, another automatic thresholding is applied only to the target candidate regions. Finally, the two detection results are fused and the post-processing stage is repeated to obtain the final detection result. The performance of the overall methodology is tested with real-world infrared test data.
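The morphological grayscale reconstruction step can be sketched as an h-dome transform, which keeps only peaks taller than h above their surroundings. This is a generic implementation by iterative geodesic dilation, not the authors' code:

```python
import numpy as np

def grey_dilate3(a):
    """3x3 grayscale dilation via shifted maxima (edge-replicated borders)."""
    p = np.pad(a, 1, mode='edge')
    out = a.copy()
    for di in (-1, 0, 1):
        for dj in (-1, 0, 1):
            out = np.maximum(out, p[1 + di:1 + di + a.shape[0],
                                    1 + dj:1 + dj + a.shape[1]])
    return out

def reconstruct(marker, mask, n_iter=100):
    """Grayscale reconstruction: dilate marker repeatedly, clipped by mask."""
    r = np.minimum(marker, mask)
    for _ in range(n_iter):
        nxt = np.minimum(grey_dilate3(r), mask)
        if np.array_equal(nxt, r):
            break
        r = nxt
    return r

def h_dome(img, h):
    """Peaks of height > h: image minus reconstruction of (img - h) under img.
    Bright, compact ship-like targets survive; smooth background is removed."""
    return img - reconstruct(img - h, img)
```

Thresholding the dome image then yields the target candidate regions that the connected component analysis operates on.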
The Bayesian Approach to Association
NASA Astrophysics Data System (ADS)
Arora, N. S.
2017-12-01
The Bayesian approach to association focuses mainly on quantifying the physics of the domain. In the case of seismic association, for instance, let X be the set of all significant events (above some threshold) and their attributes, such as location, time, and magnitude; Y1 be the set of detections that are caused by significant events and their attributes such as seismic phase, arrival time, amplitude, etc.; Y2 be the set of detections that are not caused by significant events; and finally Y be the set of observed detections. We would now define the joint distribution P(X, Y1, Y2, Y) = P(X) P(Y1 | X) P(Y2) I(Y = Y1 + Y2), where the last term simply states that Y1 and Y2 are a partitioning of Y. Given the above joint distribution, the inference problem is simply to find the X, Y1, and Y2 that maximize the posterior probability P(X, Y1, Y2 | Y), which reduces to maximizing P(X) P(Y1 | X) P(Y2) I(Y = Y1 + Y2). In this expression P(X) captures our prior belief about event locations. P(Y1 | X) captures notions of travel time and residual error distributions as well as detection and mis-detection probabilities, while P(Y2) captures the false detection rate of our seismic network. The elegance of this approach is that all of the assumptions are stated clearly in the model for P(X), P(Y1 | X) and P(Y2). The implementation of the inference is merely a by-product of this model. In contrast, some of the other methods such as GA hide a number of assumptions in the implementation details of the inference - such as the so-called "driver cells." The other important aspect of this approach is that all seismic knowledge, including knowledge from other domains such as infrasound and hydroacoustics, can be included in the same model. So, we don't need to separately account for misdetections or merge seismic and infrasound events as a separate step.
Finally, it should be noted that the objective of automatic association is to simplify the job of humans who are publishing seismic bulletins based on this output. The error metric for association should accordingly count errors such as missed events much higher than spurious events because the former require more work from humans. Furthermore, the error rate needs to be weighted higher during periods of high seismicity such as an aftershock sequence when the human effort tends to increase.
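The factored objective P(X) P(Y1 | X) P(Y2) can be illustrated by scoring one association hypothesis in log space. The particular distributions below (Gaussian travel-time residuals, a Poisson-like false-detection term, a flat per-event prior) are illustrative stand-ins, not the operational model:

```python
import math

def log_score(n_events, residuals, n_false, false_rate, sigma=1.0):
    """Log-probability of one association hypothesis:
    log P(X) + log P(Y1|X) + log P(Y2), with stand-in distributions."""
    lp = -1.0 * n_events                       # crude per-event prior penalty
    for r in residuals:                        # travel-time residuals of Y1
        lp += -0.5 * (r / sigma) ** 2 - math.log(sigma * math.sqrt(2 * math.pi))
    lp += n_false * math.log(false_rate) - false_rate   # false detections Y2
    return lp
```

Inference then amounts to searching over partitions of the detections (and event hypotheses) for the assignment with the highest such score; a hypothesis whose residuals are small beats the same hypothesis with large residuals.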
Real-Time Human Detection for Aerial Captured Video Sequences via Deep Models.
AlDahoul, Nouar; Md Sabri, Aznul Qalid; Mansoor, Ali Mohammed
2018-01-01
Human detection in videos plays an important role in various real-life applications. Most traditional approaches depend on utilizing handcrafted features, which are problem-dependent and optimal only for specific tasks. Moreover, they are highly susceptible to dynamic events such as illumination changes, camera jitter, and variations in object sizes. On the other hand, feature learning approaches are cheaper and easier because highly abstract and discriminative features can be produced automatically without the need for expert knowledge. In this paper, we utilize automatic feature learning methods that combine optical flow and three different deep models (i.e., a supervised convolutional neural network (S-CNN), a pretrained CNN feature extractor, and a hierarchical extreme learning machine (H-ELM)) for human detection in videos captured using a non-static camera on an aerial platform with varying altitudes. The models are trained and tested on the publicly available and highly challenging UCF-ARG aerial dataset. The comparison between these models in terms of training, testing accuracy, and learning speed is analyzed. The performance evaluation considers five human actions (digging, waving, throwing, walking, and running). Experimental results demonstrate that the proposed methods are successful for the human detection task. The pretrained CNN produces an average accuracy of 98.09%. S-CNN produces an average accuracy of 95.6% with soft-max and 91.7% with Support Vector Machines (SVM). H-ELM has an average accuracy of 95.9%. Using a normal Central Processing Unit (CPU), H-ELM's training takes 445 seconds, while learning in S-CNN takes 770 seconds with a high-performance Graphical Processing Unit (GPU).
StochKit2: software for discrete stochastic simulation of biochemical systems with events.
Sanft, Kevin R; Wu, Sheng; Roh, Min; Fu, Jin; Lim, Rone Kwei; Petzold, Linda R
2011-09-01
StochKit2 is the first major upgrade of the popular StochKit stochastic simulation software package. StochKit2 provides highly efficient implementations of several variants of Gillespie's stochastic simulation algorithm (SSA), and tau-leaping with automatic step size selection. StochKit2 features include automatic selection of the optimal SSA method based on model properties, event handling, and automatic parallelism on multicore architectures. The underlying structure of the code has been completely updated to provide a flexible framework for extending its functionality. StochKit2 runs on Linux/Unix, Mac OS X and Windows. It is freely available under GPL version 3 and can be downloaded from http://sourceforge.net/projects/stochkit/. petzold@engineering.ucsb.edu.
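The SSA that StochKit2 implements can be sketched in its simplest "direct method" form. This is a generic textbook Gillespie implementation, not StochKit2 code (which also provides tau-leaping, method selection, event handling and parallelism):

```python
import math, random

def ssa(x0, propensities, stoich, t_end, seed=1):
    """Gillespie direct-method SSA (toy version).
    x0: initial species counts; propensities(x) -> list of reaction rates;
    stoich[j]: state change applied when reaction j fires."""
    rng = random.Random(seed)
    x, t = list(x0), 0.0
    while t < t_end:
        a = propensities(x)
        a0 = sum(a)
        if a0 == 0.0:
            break                                # nothing can fire any more
        t += -math.log(1.0 - rng.random()) / a0  # exponential waiting time
        r, acc = rng.random() * a0, 0.0
        for j, aj in enumerate(a):               # choose reaction j with prob aj/a0
            acc += aj
            if r <= acc:
                for k, dk in enumerate(stoich[j]):
                    x[k] += dk
                break
    return x
```

For example, a pure-decay system A -> 0 with propensity c*A run long enough ends with zero molecules of A.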
Automatic detection of ECG cable interchange by analyzing both morphology and interlead relations.
Han, Chengzong; Gregg, Richard E; Feild, Dirk Q; Babaeizadeh, Saeed
2014-01-01
ECG cable interchange can generate erroneous diagnoses. For algorithms detecting ECG cable interchange, high specificity is required to maintain a low total false positive rate because the prevalence of interchange is low. In this study, we propose and evaluate an improved algorithm for automatic detection and classification of ECG cable interchange. The algorithm was developed by using both ECG morphology information and redundancy information. ECG morphology features included QRS-T and P-wave amplitude, frontal axis and clockwise vector loop rotation. The redundancy features were derived based on the EASI™ lead system transformation. The classification was implemented using linear support vector machine. The development database came from multiple sources including both normal subjects and cardiac patients. An independent database was used to test the algorithm performance. Common cable interchanges were simulated by swapping either limb cables or precordial cables. For the whole validation database, the overall sensitivity and specificity for detecting precordial cable interchange were 56.5% and 99.9%, and the sensitivity and specificity for detecting limb cable interchange (excluding left arm-left leg interchange) were 93.8% and 99.9%. Defining precordial cable interchange or limb cable interchange as a single positive event, the total false positive rate was 0.7%. When the algorithm was designed for higher sensitivity, the sensitivity for detecting precordial cable interchange increased to 74.6% and the total false positive rate increased to 2.7%, while the sensitivity for detecting limb cable interchange was maintained at 93.8%. The low total false positive rate was maintained at 0.6% for the more abnormal subset of the validation database including only hypertrophy and infarction patients. 
The proposed algorithm can detect and classify ECG cable interchanges with high specificity and low total false positive rate, at the cost of decreased sensitivity for certain precordial cable interchanges. The algorithm could also be configured for higher sensitivity for different applications where a lower specificity can be tolerated. Copyright © 2014 Elsevier Inc. All rights reserved.
Automatic Detection of Storm Damages Using High-Altitude Photogrammetric Imaging
NASA Astrophysics Data System (ADS)
Litkey, P.; Nurminen, K.; Honkavaara, E.
2013-05-01
The risks of storms that cause damage in forests are increasing due to climate change. Quickly detecting fallen trees, assessing their amount and efficiently collecting them are of great importance for economic and environmental reasons. Visually detecting and delineating storm damage is a laborious and error-prone process; thus, it is important to develop cost-efficient and highly automated methods. The objective of our research project is to investigate and develop a reliable and efficient method for automatic storm damage detection based on airborne imagery that is collected after a storm. The method requires before-storm and after-storm surface models. A difference surface is calculated from the two DSMs, and the locations where significant changes have appeared are automatically detected. In our previous research we used a four-year-old airborne laser scanning surface model as the before-storm surface. The after-storm DSM was produced from the photogrammetric images using the Next Generation Automatic Terrain Extraction (NGATE) algorithm of the Socet Set software. We obtained 100% accuracy in the detection of major storm damage. In this investigation we will further evaluate the sensitivity of the storm-damage detection process. We will investigate the potential of national airborne photography, collected in the leaf-off season, to automatically produce a before-storm DSM using image matching. We will also compare the impact of the terrain extraction algorithm on the results. Our results will also promote the potential of national open source data sets in the management of natural disasters.
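The difference-surface step can be sketched directly: subtract the after-storm DSM from the before-storm DSM and flag cells whose height dropped by more than a canopy-scale threshold (the 10 m default is an assumption for illustration):

```python
import numpy as np

def storm_damage_mask(dsm_before, dsm_after, drop_threshold=10.0):
    """Boolean mask of candidate fallen-tree cells: places where the
    surface height dropped by more than drop_threshold metres."""
    diff = dsm_before - dsm_after      # positive where canopy was lost
    return diff > drop_threshold
```

In practice the two DSMs must first be co-registered on the same grid, and the raw mask would be cleaned (e.g. by minimum-area filtering) before delineating damage patches.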
NASA Astrophysics Data System (ADS)
Sa, Qila; Wang, Zhihui
2018-03-01
At present, content-based video retrieval (CBVR) is the most mainstream video retrieval method, using the video's own features to perform automatic identification and retrieval. This method involves a key technology, i.e., shot segmentation. In this paper, a method for automatic video shot boundary detection with K-means clustering and improved adaptive dual-threshold comparison is proposed. First, the visual features of every frame are extracted and divided into two categories using the K-means clustering algorithm, namely, one with significant change and one with no significant change. Then, based on the classification results, the improved adaptive dual-threshold comparison method is utilized to determine the abrupt as well as gradual shot boundaries. Finally, an automatic video shot boundary detection system is achieved.
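The clustering-plus-dual-threshold idea can be sketched on per-frame difference magnitudes. The way the two cluster centres are turned into high and low thresholds below is an illustrative heuristic, not the paper's exact rule:

```python
def two_means(vals, iters=50):
    """1-D K-means with k=2 on per-frame difference magnitudes."""
    c = [min(vals), max(vals)]
    for _ in range(iters):
        groups = ([], [])
        for v in vals:
            groups[abs(v - c[1]) < abs(v - c[0])].append(v)
        c = [sum(g) / len(g) if g else c[i] for i, g in enumerate(groups)]
    return c                     # [no-change centre, significant-change centre]

def shot_boundaries(diffs, ratio=0.5):
    """Split frame indices into abrupt cuts and gradual-transition candidates
    using thresholds derived from the two cluster centres (assumed heuristic)."""
    c0, c1 = two_means(diffs)
    t_high = c1                          # abrupt-cut threshold
    t_low = c0 + ratio * (c1 - c0)       # gradual-transition threshold
    cuts = [i for i, d in enumerate(diffs) if d >= t_high]
    graduals = [i for i, d in enumerate(diffs) if t_low <= d < t_high]
    return cuts, graduals
```

Frames whose difference magnitude reaches the high threshold are declared cuts; those between the two thresholds are candidates for gradual transitions, to be confirmed by accumulating differences over consecutive frames.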
NASA Astrophysics Data System (ADS)
Yusop, Hanafi M.; Ghazali, M. F.; Yusof, M. F. M.; Remli, M. A. Pi; Kamarulzaman, M. H.
2017-10-01
In a recent study, the analysis of pressure transient signals was shown to be an accurate and low-cost method for leak and feature detection in water distribution systems. Transient phenomena occur due to sudden changes in fluid propagation in pipeline systems, caused by rapid pressure and flow fluctuations due to events such as rapidly closing and opening valves or pump failure. In this paper, the feasibility of the Hilbert-Huang transform (HHT) technique for analysing pressure transient signals is presented and discussed. HHT is a way to decompose a signal into intrinsic mode functions (IMFs). However, a drawback of HHT is the difficulty of selecting the suitable IMF for the next data post-processing step, the Hilbert transform (HT). This paper reveals that applying the integrated kurtosis-based algorithm for z-filter technique (I-Kaz) to the kurtosis ratio (I-Kaz-kurtosis) enables automatic selection of the IMF that should be used. The technique is demonstrated on a 57.90-meter medium high-density polyethylene (MDPE) pipe installed with a single artificial leak. The analysis results using the I-Kaz-kurtosis ratio confirmed that the method can be used for automatic selection of the IMF even though the noise level ratio of the signal is low. Therefore, the I-Kaz-kurtosis ratio method is recommended as a means to implement automatic selection of the IMF for HHT analysis.
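The automatic IMF selection can be illustrated with plain kurtosis, a simplified stand-in for the I-Kaz-to-kurtosis ratio used in the paper; given IMFs as sample arrays, the most impulsive (leak-like) one is the natural pick:

```python
def kurtosis(x):
    """Fourth standardized moment: ~3 for Gaussian noise, large for spiky,
    transient-rich signals."""
    n = len(x)
    m = sum(x) / n
    s2 = sum((v - m) ** 2 for v in x) / n
    return sum((v - m) ** 4 for v in x) / (n * s2 ** 2)

def select_imf(imfs):
    """Index of the IMF with the largest kurtosis, i.e. the component most
    likely to carry the impulsive pressure transient."""
    return max(range(len(imfs)), key=lambda i: kurtosis(imfs[i]))
```

The selected IMF would then be passed to the Hilbert transform for the instantaneous-frequency analysis described above.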
New operator assistance features in the CMS Run Control System
NASA Astrophysics Data System (ADS)
Andre, J.-M.; Behrens, U.; Branson, J.; Brummer, P.; Chaze, O.; Cittolin, S.; Contescu, C.; Craigs, B. G.; Darlea, G.-L.; Deldicque, C.; Demiragli, Z.; Dobson, M.; Doualot, N.; Erhan, S.; Fulcher, J. R.; Gigi, D.; Gładki, M.; Glege, F.; Gomez-Ceballos, G.; Hegeman, J.; Holzner, A.; Janulis, M.; Jimenez-Estupiñán, R.; Masetti, L.; Meijers, F.; Meschi, E.; Mommsen, R. K.; Morovic, S.; O'Dell, V.; Orsini, L.; Paus, C.; Petrova, P.; Pieri, M.; Racz, A.; Reis, T.; Sakulin, H.; Schwick, C.; Simelevicius, D.; Vougioukas, M.; Zejdl, P.
2017-10-01
During Run-1 of the LHC, many operational procedures have been automated in the run control system of the Compact Muon Solenoid (CMS) experiment. When detector high voltages are ramped up or down or upon certain beam mode changes of the LHC, the DAQ system is automatically partially reconfigured with new parameters. Certain types of errors such as errors caused by single-event upsets may trigger an automatic recovery procedure. Furthermore, the top-level control node continuously performs cross-checks to detect sub-system actions becoming necessary because of changes in configuration keys, changes in the set of included front-end drivers or because of potential clock instabilities. The operator is guided to perform the necessary actions through graphical indicators displayed next to the relevant command buttons in the user interface. Through these indicators, consistent configuration of CMS is ensured. However, manually following the indicators can still be inefficient at times. A new assistant to the operator has therefore been developed that can automatically perform all the necessary actions in a streamlined order. If additional problems arise, the new assistant tries to automatically recover from these. With the new assistant, a run can be started from any state of the sub-systems with a single click. An ongoing run may be recovered with a single click, once the appropriate recovery action has been selected. We review the automation features of CMS Run Control and discuss the new assistant in detail including first operational experience.
New Operator Assistance Features in the CMS Run Control System
DOE Office of Scientific and Technical Information (OSTI.GOV)
Andre, J.M.; et al.
Monitoring of pipeline ruptures by means of a Robust Satellite Technique (RST)
NASA Astrophysics Data System (ADS)
Filizzola, C.; Baldassarre, G.; Corrado, R.; Mazzeo, G.; Marchese, F.; Paciello, R.; Pergola, N.; Tramutoli, V.
2009-04-01
Pipeline ruptures have severe economic and ecological consequences, so pipeline networks represent critical infrastructure to be carefully monitored, particularly in areas frequently affected by natural disasters such as earthquakes, hurricanes and landslides. To minimize damage, harmful events along pipelines should be detected as rapidly as possible while avoiding false alarms. In this work, a Robust Satellite Technique (RST), already applied to the forecasting and near-real-time (NRT) monitoring of major natural and environmental hazards (such as seismically active areas, volcanic activity, hydrological risk, forest fires and oil spills), is employed to automatically identify from satellite data the anomalous Thermal Infrared (TIR) transients associated with explosions of oil/gas pipelines. In this context, combining the RST approach with the high temporal resolution offered by geostationary satellites promises both reliable and timely detection of such events. The potential of the technique (applied to MSG-SEVIRI data) was tested over Iraq, a region sadly known for numerous (mainly man-made) pipeline accidents, as a proxy for the effects of natural disasters (such as fires or explosions near, or directly involving, a pipeline facility).
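The core of an RST-style change detector is a per-pixel standardized anomaly index: the current brightness temperature is compared against multi-temporal statistics for the same location and acquisition conditions, and pixels exceeding a few standard deviations are flagged. The sketch below is a simplified illustration of that idea (the function name and threshold are ours, not from the paper):

```python
import numpy as np

def rst_anomaly_index(current, historical, k=3.0):
    """Flag anomalous TIR pixels by comparing the current scene to
    per-pixel historical statistics (a simplified RST-style index).

    current:    2-D array of brightness temperatures (one scene)
    historical: 3-D array (n_scenes, rows, cols) of co-located past scenes
    k:          number of standard deviations used as the alarm threshold
    """
    mu = historical.mean(axis=0)            # per-pixel temporal mean
    sigma = historical.std(axis=0) + 1e-9   # per-pixel temporal spread
    index = (current - mu) / sigma          # standardized anomaly index
    return index, index > k                 # index map and boolean alarm mask

# Toy example: a 4x4 scene with one artificially heated pixel.
rng = np.random.default_rng(0)
hist = rng.normal(300.0, 1.0, size=(50, 4, 4))   # 50 past scenes, ~300 K
scene = hist.mean(axis=0).copy()
scene[2, 2] += 10.0                              # simulate a hot spot
idx, alarms = rst_anomaly_index(scene, hist)
```

The robustness of the approach comes from the fact that the reference statistics absorb site-specific and seasonal effects, so only genuinely unusual thermal transients trigger an alarm.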
Integrating alternative splicing detection into gene prediction.
Foissac, Sylvain; Schiex, Thomas
2005-02-10
Alternative splicing (AS) is now considered a major contributor to transcriptome/proteome diversity, and it cannot be neglected when annotating a new genome. Despite considerable progress in the accuracy of computational gene prediction, the ability to reliably predict AS variants when there is local experimental evidence for them remains an open challenge for gene finders. We have used a new integrative approach that allows AS detection to be incorporated into ab initio gene prediction. The method relies on the analysis of genomically aligned transcript sequences (ESTs and/or cDNAs) and has been implemented in the dynamic programming algorithm of the graph-based gene finder EuGENE. Given a genomic sequence and a set of aligned transcripts, this new version identifies the transcripts carrying evidence of alternative splicing events and provides, in addition to the classical optimal gene prediction, alternative optimal predictions (among those consistent with the detected AS events). This allows multiple annotations of a single gene such that each predicted variant is supported by transcript evidence (although not necessarily with full-length coverage). This automatic combination of experimental data analysis and ab initio gene finding offers an ideal integration of alternatively spliced gene prediction inside a single annotation pipeline.
Global Infrasound Association Based on Probabilistic Clutter Categorization
NASA Astrophysics Data System (ADS)
Arora, N. S.; Mialle, P.
2015-12-01
The IDC collects waveforms from a global network of infrasound sensors maintained by the IMS, automatically detects signal onsets, and associates them to form event hypotheses. However, a large number of signal onsets are due to local clutter sources such as microbaroms (from standing waves in the oceans), waterfalls, dams, gas flares, and surf (ocean breaking waves). These sources are either too diffuse or too local to form events. Worse still, the repetitive nature of this clutter leads to a large number of false event hypotheses due to the random matching of clutter at multiple stations. Previous studies, for example [1], have worked on the categorization of clutter using long-term trends in detection azimuth, frequency, and amplitude at each station. In this work we continue the same line of reasoning to build a probabilistic model of clutter that is used as part of NET-VISA [2], a Bayesian approach to network processing. The resulting model is a fusion of seismic, hydroacoustic and infrasound processing built on a unified probabilistic framework. Notes: The attached figure shows all the unassociated arrivals detected at IMS station I09BR for 2012, distributed by azimuth and center frequency (the title displays the bandwidth of the kernel density estimate along the azimuth and frequency dimensions). The plot shows multiple microbarom sources as well as other sources of infrasound clutter. A diverse clutter field such as this one is quite common for most IMS infrasound stations, and it highlights the dangers of forming events without due consideration of this source of noise. References: [1] J. Vergoz, P. Gaillard, A. Le Pichon, N. Brachet, and L. Ceranna, "Infrasound categorization: Towards a statistics-based approach," ITW 2011. [2] N. S. Arora, S. Russell, and E. Sudderth, "NET-VISA: Network Processing Vertically Integrated Seismic Analysis," BSSA 2013.
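The figure described above suggests how a per-station clutter model can work: a kernel density estimate over (azimuth, center frequency) of historical unassociated arrivals gives a score for how clutter-like a new detection is. The following is a minimal numpy sketch of that scoring step, not the NET-VISA model itself; azimuth wrap-around at 0/360° is ignored for simplicity, and all names and bandwidths are illustrative:

```python
import numpy as np

def kde_clutter_score(train, query, bw=(10.0, 0.2)):
    """Score how clutter-like new detections are, using a 2-D Gaussian
    kernel density estimate over (azimuth, center frequency).

    train: (n, 2) array of historical unassociated (azimuth, frequency)
    query: (m, 2) array of new detections to score
    bw:    kernel bandwidth per dimension (degrees, Hz)
    """
    bw = np.asarray(bw)
    # Pairwise standardized distances between queries and training points
    d = (query[:, None, :] - train[None, :, :]) / bw
    kern = np.exp(-0.5 * (d ** 2).sum(axis=-1))
    return kern.mean(axis=1)  # average kernel response per query

# Toy example: clutter concentrated near azimuth 240 deg, 0.2 Hz
# (a microbarom-like source).
rng = np.random.default_rng(1)
clutter = np.column_stack([rng.normal(240, 5, 500), rng.normal(0.2, 0.05, 500)])
scores = kde_clutter_score(clutter, np.array([[240.0, 0.2], [60.0, 2.0]]))
```

A detection falling inside the clutter cloud scores far higher than one from an unusual azimuth/frequency, and the score can then be folded into the association's probabilistic framework as a clutter prior.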
Using Dictionary Pair Learning for Seizure Detection.
Ma, Xin; Yu, Nana; Zhou, Weidong
2018-02-13
Automatic seizure detection is extremely important in the monitoring and diagnosis of epilepsy. This paper presents a novel method based on dictionary pair learning (DPL) for seizure detection in long-term intracranial electroencephalogram (EEG) recordings. First, wavelet filtering and differential filtering are applied to the EEG data, and a kernel function is applied to make the signal linearly separable. In DPL, the synthesis dictionary and analysis dictionary are learned jointly from the original training samples with an alternating minimization method, and sparse coefficients are obtained by linear projection instead of costly l0-norm or l1-norm optimization. Finally, the reconstruction residuals associated with the seizure and nonseizure sub-dictionary pairs are calculated as the decision values, and postprocessing is performed to improve the recognition rate and reduce the false detection rate of the system. A total of 530 h of recordings from 20 patients with 81 seizures was used to evaluate the system. Our proposed method achieved an average segment-based sensitivity of 93.39%, a specificity of 98.51%, and an event-based sensitivity of 96.36% with a false detection rate of 0.236/h.
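The decision stage described above can be illustrated compactly: for each class k, a learned synthesis dictionary D_k and analysis dictionary P_k reconstruct a sample as D_k (P_k x), and the sample is assigned to the class with the smallest reconstruction residual. This is a hedged sketch of that decision rule only; the hand-made one-column "dictionaries" below stand in for learned ones and do not reproduce the paper's training procedure:

```python
import numpy as np

def dpl_classify(x, pairs):
    """Assign a test sample to the class whose dictionary pair
    reconstructs it best (the decision stage of dictionary pair learning).

    x:     (d,) test sample
    pairs: list of (D_k, P_k) per class; the sparse code is the cheap
           linear projection P_k @ x rather than an l0/l1-regularized solve
    """
    residuals = [float(np.linalg.norm(x - D @ (P @ x))) for D, P in pairs]
    return int(np.argmin(residuals)), residuals

# Toy example with two 1-D subspaces standing in for learned pairs.
d0 = np.array([[1.0], [0.0], [0.0]])   # "seizure" synthesis dictionary
d1 = np.array([[0.0], [1.0], [0.0]])   # "nonseizure" synthesis dictionary
pairs = [(d0, d0.T), (d1, d1.T)]       # analysis dict = transpose, for the sketch
label, res = dpl_classify(np.array([0.9, 0.1, 0.0]), pairs)
```

Because coding is a single matrix-vector product per class, the test-time cost is trivial compared with sparse-coding classifiers that solve an optimization per sample.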
SU-E-J-15: Automatically Detect Patient Treatment Position and Orientation in KV Portal Images
DOE Office of Scientific and Technical Information (OSTI.GOV)
Qiu, J; Yang, D
2015-06-15
Purpose: In the course of radiation therapy, the complex information processing workflow can result in errors such as incorrect or inaccurate patient setups. With automatic image checks and patient identification, such errors could be effectively reduced. For this purpose, we developed a simple and rapid image processing method to automatically detect the patient position and orientation in 2D portal images, allowing automatic checks of positions and orientations for daily RT treatments. Methods: Based on the principle of portal image formation, a set of whole-body DRR images was reconstructed from multiple whole-body CT volume datasets and fused together to serve as the matching template. To identify the patient setup position and orientation shown in a 2D portal image, the portal image was preprocessed (contrast enhancement, down-sampling and couch table detection), then matched to the template image to identify the laterality (left or right), position, orientation and treatment site. Results: Five days of clinically qualified portal images were gathered randomly and processed by the automatic detection and matching method without any additional information. The detection results were visually checked by physicists. 182 of 200 kV portal images were correctly detected, a success rate of 91%. Conclusion: The proposed method can detect patient setup and orientation quickly and automatically. It requires only the image intensity information in kV portal images. The method can be useful in the framework of Electronic Chart Check (ECCK) to reduce potential errors in the radiation therapy workflow and so improve patient safety. In addition, the auto-detection results, such as the treatment site position and patient orientation, could be used to guide subsequent image processing procedures, e.g. verification of daily patient setup accuracy.
This work was partially supported by a research grant from Varian Medical Systems.
Real-Time Data Processing Systems and Products at the Alaska Earthquake Information Center
NASA Astrophysics Data System (ADS)
Ruppert, N. A.; Hansen, R. A.
2007-05-01
The Alaska Earthquake Information Center (AEIC) receives data from over 400 seismic sites located within the state boundaries and the surrounding regions and serves as a regional data center. In 2007, the AEIC reported ~20,000 seismic events, with the largest, of M6.6, in the Andreanof Islands. The real-time earthquake detection and data processing systems at the AEIC are based on the Antelope system from BRTT, Inc. This modular and extensible processing platform allows an integrated system complete from data acquisition to catalog production. Multiple additional modules constructed with the Antelope toolbox have been developed to fit the particular needs of the AEIC. Real-time earthquake locations and magnitudes are determined within 2-5 minutes of event occurrence. The AEIC maintains a 24/7 seismologist-on-duty schedule. Earthquake alarms are based on the real-time earthquake detections. Significant events are reviewed by the seismologist on duty within 30 minutes of occurrence, with information releases issued for significant events. This information is disseminated immediately via the AEIC website, via the ANSS website through QDDS submissions, and through e-mail, cell phone and pager notifications, fax broadcasts and recorded voice-mail messages. In addition, automatic regional moment tensors are determined for events with M>=4.0 and posted on the public website. ShakeMaps are calculated in real time, with the information currently accessible via a password-protected website. The AEIC is designing an alarm system targeted at critical lifeline operations in Alaska, and maintains an extensive computer network to provide adequate support for data processing and archival. For real-time processing, the AEIC operates two identical, interoperable computer systems in parallel.
Properties of induced seismicity at the geothermal reservoir Insheim, Germany
NASA Astrophysics Data System (ADS)
Olbert, Kai; Küperkoch, Ludger; Meier, Thomas
2017-04-01
Within the framework of the German MAGS2 project, the processing of induced events at the geothermal power plant Insheim, Germany, has been reassessed and evaluated. The power plant is located close to the western rim of the Upper Rhine Graben in a region with a strongly heterogeneous subsurface. Therefore, locating seismic events, and particularly estimating their depth, is challenging. The seismic network, consisting of up to 50 stations, has an aperture of approximately 15 km around the power plant. Consequently, manual processing is time consuming. Using a waveform similarity detection algorithm, the existing dataset from 2012 to 2016 has been reprocessed to complete the catalog of induced seismic events. Based on waveform similarity, clusters of similar events have been detected. Automated P- and S-arrival time determination using an improved multi-component autoregressive prediction algorithm yields approximately 14,000 P- and S-arrivals for 758 events. Using a dataset of manual picks as reference, the automated picking algorithm has been optimized, resulting in a standard deviation of the residuals between automated and manual picks of about 0.02 s. The automated locations show uncertainties comparable to the locations of the manual reference dataset. 90% of the automated relocations fall within the error ellipsoid of the manual locations. The remaining locations are either poorly resolved due to low numbers of picks, or so well resolved that the automatic location lies outside the error ellipsoid although close to the manual location. The developed automated processing scheme proved to be a useful tool to supplement real-time monitoring. The event clusters are located at small patches of faults known from reflection seismic studies. The clusters are observed close to both the injection and the production wells.
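Waveform-similarity detection of the kind used above typically slides a master-event template along the continuous record and declares a detection wherever the normalized cross-correlation exceeds a threshold. The following is a generic sketch of that scanning step (names, threshold and toy data are ours, not the study's implementation):

```python
import numpy as np

def similarity_detect(trace, template, threshold=0.8):
    """Scan a continuous trace with a master-event template and return
    sample offsets where the normalized cross-correlation exceeds the
    threshold (a sketch of waveform-similarity event detection)."""
    n = len(template)
    t = (template - template.mean()) / (template.std() * n)
    hits = []
    for i in range(len(trace) - n + 1):
        w = trace[i:i + n]
        s = w.std()
        if s == 0:
            continue
        cc = np.dot(t, (w - w.mean()) / s)   # Pearson correlation in [-1, 1]
        if cc > threshold:
            hits.append(i)
    return hits

# Toy example: plant two copies of a wavelet in a noisy trace.
rng = np.random.default_rng(2)
wavelet = np.sin(np.linspace(0, 4 * np.pi, 40)) * np.hanning(40)
trace = rng.normal(0, 0.05, 1000)
trace[100:140] += wavelet
trace[600:640] += wavelet
hits = similarity_detect(trace, wavelet)
```

In practice the loop is replaced by FFT-based correlation for speed, and detections from several stations and components are stacked before thresholding, which is what makes the method sensitive to events well below the network's ordinary detection threshold.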
A Study of the Initiation Process of Coronal Mass Ejections and the Tool for Their Auto-Detection
NASA Astrophysics Data System (ADS)
Olmedo, Oscar
Coronal mass ejections (CMEs) are among the most energetic and important forms of solar activity. They are often associated with other solar phenomena such as flares and filament/prominence eruptions. Despite significant progress in CME studies over the past decade, our understanding of the initiation process of CMEs remains elusive. To address this issue, an approach that combines theoretical modelling and empirical analysis is needed. This thesis is a combination of three studies, two of which investigate the initiation process of CMEs, while the third develops a tool to automatically detect CMEs. First, I investigate the stability of the well-known eruptive flux rope model in the context of the torus instability. In the flux rope model, the pre-eruptive CME structure is a helical flux rope with two footpoints anchored to the solar surface. The torus instability depends on the balance between two opposing magnetic forces: the outward Lorentz self-force (also called the curvature or hoop force) and the restoring Lorentz force of the ambient magnetic field. Previously, the stability condition derived for the torus instability assumed that the pre-eruptive structure was a semicircular loop above the photosphere without anchored footpoints. I extend these results to partial torus flux ropes of any circularity with anchored footpoints and find a dependence of the critical index on the fractional number of the partial torus, defined as the ratio between the arc length of the partial torus above the photosphere and the circumference of a circular torus of equal radius. I coin this result the partial torus instability (PTI). The result is more general than previously derived and extends to loops of any arc above the photosphere. It is demonstrated that these results can help us understand the confinement, growth, and eventual eruption of a flux-rope CME.
Second, I use observations of eruptive prominences associated with CMEs to examine the behaviour of their initiation and compare these observations to theoretical models. Since theoretical models specify the pre-existence of a flux rope, the observational challenge is the interpretation of the flux rope in solar images. Prominences are a good proxy for flux ropes because of their obvious elongated helical structure above the magnetic polarity inversion line. I compare the prominence kinematics with the associated extrapolated magnetic fields. This observational study yields two key conclusions. The first is that the ejecta's kinematics depend on how the ambient magnetic field decays. The second is that the critical decay index, theorized to mark where the flux rope transitions from a stable to an unstable configuration, depends on the geometry of the loop. This second result is in qualitative agreement with the theorized PTI. Finally, I develop a tool to automatically detect CME events in coronagraph images. Because of the large amount of data collected over the years, searching for candidate events to study can be daunting. To facilitate the search for CME event candidates, an algorithm was developed to automatically detect and characterize CMEs seen in coronagraph images. With this tool, one need not scroll through the large number of images but can focus on particular subsets. The auto-detection also reduces human bias in CME characterization. Such automated detection algorithms can have other applications, such as near-real-time space weather alerts. In summary, this thesis has improved our understanding of the initiation process of CMEs through both theoretical and observational studies. Future work includes investigating a larger number of events to give a better statistical characterization of the results found in the observational study.
Furthermore, modification of the theoretical model of the PTI, for example by including a repulsive force due to induced photospheric currents, can improve the quantitative agreement with observations. Complete knowledge of the initiation of CMEs is important because it can help us predict when such an event may occur. Such predictions can aid in mitigating severe space weather effects at the Earth.
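For context, the decay index referred to throughout this abstract has a standard definition in torus-instability studies; the thesis's PTI result modifies the critical value, not the index itself. A sketch of the standard relation (the PTI expression for n_cr as a function of the fractional number is derived in the thesis and not reproduced here):

```latex
% Decay index of the external (strapping) field with distance R:
n(R) = -\,\frac{\partial \ln B_{\mathrm{ext}}}{\partial \ln R},
\qquad \text{instability when } n > n_{\mathrm{cr}} .
% For a full, thin, semicircular torus the classical result is
% n_cr ~ 3/2; the PTI makes n_cr depend on the fractional number
% of the partial torus above the photosphere.
```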
Mölder, Anna; Drury, Sarah; Costen, Nicholas; Hartshorne, Geraldine M; Czanner, Silvester
2015-02-01
Embryo selection in in vitro fertilization (IVF) treatment has traditionally been done manually using microscopy at intermittent time points during embryo development. Novel techniques have made it possible to monitor embryos by time-lapse imaging for long periods, and together with the reduced cost of data storage this has opened the door to long-term time-lapse monitoring; large amounts of image material are now routinely gathered. However, the analysis is still largely performed manually, and images are mostly used as a qualitative reference. To make full use of the increased amount of microscopic image material, (semi)automated computer-aided tools are needed. An additional benefit of automation is the establishment of standardization tools for embryo selection and transfer, making decisions more transparent and less subjective. Another is the possibility of gathering and analyzing data in a high-throughput manner, collecting data from multiple clinics and increasing our knowledge of early human embryo development. In this study, the extraction of data to automatically select and track spatio-temporal events and features from sets of embryo images has been achieved using localized variance based on the distribution of image grey-scale levels. A retrospective cohort study was performed using time-lapse imaging data from 39 human embryos from seven couples, covering the time from fertilization up to 6.3 days. The profile of localized variance has been used to characterize syngamy, mitotic division and the stages of cleavage, compaction, and blastocoel formation. Prior to analysis, the focal plane and embryo location were automatically detected, limiting precomputational user interaction to a calibration step and enabling automatic detection of the region of interest (ROI) regardless of the method of analysis. The results were validated against the opinion of clinical experts. © 2015 International Society for Advancement of Cytometry.
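The localized-variance feature at the heart of this approach is simply the per-pixel variance of grey levels in a small neighbourhood: textured, structured regions score high while flat background scores near zero. A minimal sketch (our own implementation using summed-area tables, not the authors' code):

```python
import numpy as np

def local_variance(img, w=5):
    """Per-pixel variance of grey levels in a w-by-w neighbourhood,
    computed with summed-area tables (a sketch of a localized-variance
    feature for segmenting structure from flat background)."""
    img = img.astype(float)
    pad = w // 2
    p = np.pad(img, pad, mode="reflect")

    def box_mean(a):
        # Mean over every w-by-w window via 2-D cumulative sums.
        c = np.cumsum(np.cumsum(a, axis=0), axis=1)
        c = np.pad(c, ((1, 0), (1, 0)))
        return (c[w:, w:] - c[:-w, w:] - c[w:, :-w] + c[:-w, :-w]) / (w * w)

    m = box_mean(p)
    m2 = box_mean(p * p)
    return np.clip(m2 - m * m, 0, None)   # Var = E[x^2] - E[x]^2

# Toy example: a flat image with one textured patch.
rng = np.random.default_rng(3)
img = np.full((32, 32), 100.0)
img[10:20, 10:20] += rng.normal(0, 20, (10, 10))
v = local_variance(img)
```

Tracking the profile of this map over time is what allows stage transitions (which change the amount and location of fine structure) to be detected without explicit cell segmentation.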
NASA Astrophysics Data System (ADS)
Hibert, C.; Michéa, D.; Provost, F.; Malet, J. P.; Geertsema, M.
2017-12-01
Detection of landslide occurrences and measurement of their dynamic properties during run-out is a high research priority but a logistical and technical challenge. Seismology has started to help in several important ways. Taking advantage of the densification of global, regional and local networks of broadband seismic stations, recent advances now permit the seismic detection and location of landslides in near-real-time. Such seismic detection could greatly increase the spatio-temporal resolution at which we study landslide triggering, which is critical to better understand the influence of external forcings such as rainfall and earthquakes. However, automatically detecting seismic signals generated by landslides still represents a challenge, especially for events with small mass. The low signal-to-noise ratio classically observed for landslide-generated seismic signals and the difficulty of discriminating these signals from those generated by regional earthquakes or by anthropogenic and natural noise are some of the obstacles to be circumvented. We present a new method for automatically constructing instrumental landslide catalogues from continuous seismic data. We developed a robust and versatile solution that can be implemented in any context where seismic detection of landslides or other mass movements is relevant. The method is based on spectral detection of the seismic signals and identification of the sources with a Random Forest machine-learning algorithm. The spectral detection allows signals with low signal-to-noise ratio to be detected, while the Random Forest algorithm achieves a high rate of positive identification of the seismic signals generated by landslides and other seismic sources. The processing chain is implemented on a high-performance computing centre, which permits years of continuous seismic data to be explored rapidly.
We present here the preliminary results of applying this processing chain to years of continuous seismic records from the Alaskan permanent seismic network and the Hi-CLIMB trans-Himalayan seismic network. The processing chain we developed also opens the possibility of near-real-time seismic detection of landslides, in association with automated remote-sensing detection from Sentinel-2 images, for example.
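The classification stage pairs spectral features extracted from each detected signal with a Random Forest. The sketch below illustrates the pattern on synthetic data; the band definitions, the synthetic "landslide-like" versus "earthquake-like" waveforms, and all names are purely illustrative stand-ins, not the study's feature set:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(4)

def band_features(sig, fs=100.0, bands=((0.1, 5), (5, 15), (15, 30), (30, 50))):
    """Relative spectral energy per frequency band: a minimal stand-in
    for the spectrogram attributes fed to the classifier."""
    spec = np.abs(np.fft.rfft(sig)) ** 2
    freqs = np.fft.rfftfreq(len(sig), 1 / fs)
    e = np.array([spec[(freqs >= lo) & (freqs < hi)].sum() for lo, hi in bands])
    return e / e.sum()

def make_event(kind, n=512, fs=100.0):
    """Synthetic signals: class 0 is low-frequency and emergent
    (landslide-like), class 1 is impulsive and higher-frequency
    (earthquake-like). Purely illustrative."""
    t = np.arange(n) / fs
    noise = rng.normal(0, 0.1, n)
    if kind == 0:
        return noise + np.sin(2 * np.pi * 2 * t) * np.hanning(n)
    return noise + np.sin(2 * np.pi * 20 * t) * np.exp(-5 * t)

X = np.array([band_features(make_event(k)) for k in [0, 1] * 100])
y = np.array([0, 1] * 100)
clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
pred = clf.predict(np.array([band_features(make_event(0))]))
```

The appeal of the forest here is that it handles heterogeneous features and class overlap gracefully, and its per-tree votes give a natural confidence score for each candidate detection.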
Automatic-repeat-request error control schemes
NASA Technical Reports Server (NTRS)
Lin, S.; Costello, D. J., Jr.; Miller, M. J.
1983-01-01
Error detection incorporated with automatic-repeat-request (ARQ) is widely used for error control in data communication systems. This method of error control is simple and provides high system reliability. If a properly chosen code is used for error detection, virtually error-free data transmission can be attained. Various types of ARQ and hybrid ARQ schemes, and error detection using linear block codes are surveyed.
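The basic ARQ idea surveyed above can be shown in a few lines: each frame carries an error-detecting checksum, the receiver acknowledges only frames that verify, and the sender retransmits otherwise. The toy simulation below uses CRC-32 as the detecting code and a channel that flips one byte with some probability; it is an illustrative sketch of stop-and-wait ARQ, not any specific scheme from the survey:

```python
import random
import zlib

def channel(frame, p_corrupt, rng):
    """Unreliable channel that randomly flips one byte of the payload."""
    payload, crc = frame
    if rng.random() < p_corrupt:
        payload = bytearray(payload)
        payload[rng.randrange(len(payload))] ^= 0xFF
        payload = bytes(payload)
    return payload, crc

def stop_and_wait(message, p_corrupt=0.3, seed=5):
    """Deliver chunks over the unreliable channel with stop-and-wait ARQ:
    the receiver ACKs only frames whose CRC-32 checks; the sender
    retransmits on NAK (a minimal sketch)."""
    rng = random.Random(seed)
    delivered, retransmissions = [], 0
    for chunk in message:
        while True:
            payload, crc = channel((chunk, zlib.crc32(chunk)), p_corrupt, rng)
            if zlib.crc32(payload) == crc:      # error detection passed: ACK
                delivered.append(payload)
                break
            retransmissions += 1                 # NAK: send the frame again
    return b"".join(delivered), retransmissions

out, retries = stop_and_wait([b"error", b"-free", b" data"])
```

As the abstract notes, with a well-chosen detecting code the residual undetected-error rate is negligible, so throughput (retransmissions per delivered frame), not reliability, becomes the quantity the various ARQ and hybrid-ARQ schemes trade off.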
Automatic Detection of Acromegaly From Facial Photographs Using Machine Learning Methods.
Kong, Xiangyi; Gong, Shun; Su, Lijuan; Howard, Newton; Kong, Yanguo
2018-01-01
Automatic early detection of acromegaly from facial photographs is theoretically possible and could lessen disease burden and increase the probability of cure. In this study, several popular machine learning algorithms were trained on a retrospective development dataset consisting of 527 acromegaly patients and 596 normal subjects. We first used OpenCV to detect the face bounding rectangle, then cropped and resized it to the same pixel dimensions. From the detected faces, the locations of facial landmarks, which are potential clinical indicators, were extracted. Frontalization was then adopted to synthesize frontal-facing views to improve performance. Several popular machine learning methods, including LM, KNN, SVM, RT, CNN, and EM, were used to automatically identify acromegaly from the detected facial photographs, extracted facial landmarks, and synthesized frontal faces. The trained models were evaluated using a separate dataset, of which half were diagnosed as acromegaly by the growth hormone suppression test. The best of our proposed methods showed a PPV of 96%, an NPV of 95%, a sensitivity of 96% and a specificity of 96%. Artificial intelligence can automatically detect acromegaly early, with high sensitivity and specificity. Copyright © 2017 The Authors. Published by Elsevier B.V. All rights reserved.
Investigation of an automatic trim algorithm for restructurable aircraft control
NASA Technical Reports Server (NTRS)
Weiss, J.; Eterno, J.; Grunberg, D.; Looze, D.; Ostroff, A.
1986-01-01
This paper develops and solves an automatic trim problem for restructurable aircraft control. The trim solution is applied as a feed-forward control to reject measurable disturbances following control element failures. Disturbance rejection and command-following performance are recovered through the automatic feedback control redesign procedure described by Looze et al. (1985). For this project the existence of a failure detection mechanism is assumed, and methods to cope with potential detection and identification inaccuracies are addressed.
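One generic way to pose such a feed-forward trim problem (not necessarily the paper's formulation) is: given linearized dynamics x' = Ax + Bu + d with a measured disturbance d and the post-failure effector matrix B, choose the trim control u that best cancels d in a least-squares sense. The numbers below are made up for illustration:

```python
import numpy as np

# Illustrative post-failure model: 3 states, 2 surviving effectors.
# The trim problem: choose u so that B @ u cancels the measured
# disturbance d as well as possible.
B = np.array([[1.0, 0.2],
              [0.0, 1.0],
              [0.3, 0.1]])
d = np.array([0.5, -1.0, 0.1])

u_trim, *_ = np.linalg.lstsq(B, -d, rcond=None)   # least-squares feed-forward
residual = B @ u_trim + d                          # what the trim cannot cancel
```

Whatever the trim cannot cancel (the residual) is exactly what the redesigned feedback loop must then reject, which is why trim and feedback redesign are treated as complementary steps.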
Is there pre-attentive memory-based comparison of pitch?
Jacobsen, T; Schröger, E
2001-07-01
The brain's responsiveness to changes in sound frequency has been demonstrated by an overwhelming number of studies. Change detection occurs unintentionally and automatically. It is generally assumed that this brain response, the so-called mismatch negativity (MMN) of the event-related brain potential or evoked magnetic field, is based on the outcome of a memory-comparison mechanism rather than being due to a differential state of refractoriness of tonotopically organized cortical neurons. To the authors' knowledge, however, there is no entirely compelling evidence for this belief. An experimental protocol controlling for refractoriness effects was developed and a true memory-comparison-based brain response to pitch change was demonstrated.
He, Jian; Bai, Shuang; Wang, Xiaoyi
2017-06-16
Falls are one of the main health risks among the elderly. A fall detection system based on inertial sensors can automatically detect a fall event and alert a caregiver for immediate assistance, reducing injuries caused by falls. Nevertheless, most inertial sensor-based fall detection technologies have focused on detection accuracy while neglecting the quantization noise introduced by the inertial sensor. In this paper, an activity model based on tri-axial acceleration and gyroscope data is proposed, and the difference between activities of daily living (ADLs) and falls is analyzed. Meanwhile, a Kalman filter is proposed to preprocess the raw data so as to reduce noise. A sliding window and a Bayes network classifier are introduced to develop a wearable fall detection system, which is composed of a wearable motion sensor and a smartphone. Experiments show that the proposed system distinguishes simulated falls from ADLs with a high accuracy of 95.67%, while sensitivity and specificity are 99.0% and 95.0%, respectively. Furthermore, the smartphone can issue an alarm to caregivers so as to provide timely and accurate help to the elderly as soon as the system detects a fall.
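The Kalman preprocessing step can be illustrated with a scalar filter on the acceleration-magnitude stream: a random-walk state model smooths out sensor noise while still passing the large, fast excursion of a fall. This is a generic sketch with illustrative noise parameters, not the paper's filter design:

```python
import numpy as np

def kalman_1d(z, q=0.01, r=0.5):
    """Scalar Kalman filter for denoising an acceleration-magnitude
    stream (random-walk state model; q = process variance,
    r = measurement variance, both illustrative)."""
    x, p = z[0], 1.0
    out = np.empty_like(z, dtype=float)
    for i, zi in enumerate(z):
        p = p + q                    # predict: state is a random walk
        k = p / (p + r)              # Kalman gain
        x = x + k * (zi - x)         # update with the new measurement
        p = (1 - k) * p
        out[i] = x
    return out

# Toy example: constant 1 g magnitude with sensor noise, then a brief
# high-acceleration event standing in for a fall impact.
rng = np.random.default_rng(6)
z = np.full(200, 9.81) + rng.normal(0, 0.8, 200)
z[150:155] += 20.0
smooth = kalman_1d(z)
```

After filtering, a sliding window over the smoothed stream feeds the classifier; because the noise floor is lower, the window features separate falls from ADLs more cleanly.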
NASA Astrophysics Data System (ADS)
Shuxin, Li; Zhilong, Zhang; Biao, Li
2018-01-01
Planes are an important target category in remote sensing, and detecting plane targets automatically is of great value. As remote imaging technology continues to develop, the resolution of remote sensing images has become very high, providing more detailed information for automatic target detection. Deep learning networks are the most advanced technology in image target detection and recognition, having delivered great performance improvements for detection and recognition in everyday scenes. We applied this technology to remote sensing target detection and propose an algorithm with an end-to-end deep network that learns from remote sensing images to detect targets in new images automatically and robustly. Our experiments show that the algorithm can capture the feature information of plane targets and performs better in target detection than older methods.
Automatic mine detection based on multiple features
NASA Astrophysics Data System (ADS)
Yu, Ssu-Hsin; Gandhe, Avinash; Witten, Thomas R.; Mehra, Raman K.
2000-08-01
Recent research sponsored by the Army, Navy and DARPA has significantly advanced sensor technologies for mine detection. Several innovative sensor systems have been developed, and prototypes were built to investigate their performance in practice. Most of the research has focused on hardware design. However, for the systems to be widely used, rather than limited to a small group of well-trained experts, an automatic mine detection process is needed to make the final mine-versus-no-mine decision easier and more straightforward. In this paper, we describe an automatic mine detection process consisting of three stages: (1) signal enhancement, (2) pixel-level mine detection, and (3) object-level mine detection. The final output of the system is a confidence measure that quantifies the presence of a mine. The resulting system was applied to real data collected using radar and acoustic technologies.
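The pixel-level/object-level split can be sketched concretely: threshold per-pixel confidences, group the surviving pixels into connected components, and score each component. This is a generic illustration of the staged structure, not the paper's actual detectors:

```python
import numpy as np
from scipy import ndimage

def detect_objects(confidence_map, pixel_thresh=0.6, min_pixels=4):
    """Two-stage detection sketch: threshold per-pixel confidences
    (pixel-level stage), then group surviving pixels into objects and
    score each object by its mean confidence (object-level stage)."""
    mask = confidence_map > pixel_thresh
    labels, n = ndimage.label(mask)           # connected components
    objects = []
    for lab in range(1, n + 1):
        region = labels == lab
        if region.sum() >= min_pixels:        # reject tiny isolated blobs
            objects.append(float(confidence_map[region].mean()))
    return objects   # one confidence measure per candidate mine

# Toy confidence map: one strong 3x3 target, one single-pixel noise spike.
cmap = np.zeros((16, 16))
cmap[5:8, 5:8] = 0.9
cmap[12, 2] = 0.95
scores = detect_objects(cmap)
```

The object-level stage is what suppresses isolated pixel-level false alarms: a single bright pixel, however confident, is rejected for lacking the spatial support expected of a real mine signature.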
Liu, Ruiming; Liu, Erqi; Yang, Jie; Zeng, Yong; Wang, Fanglin; Cao, Yuan
2007-11-01
The Fukunaga-Koontz transform (FKT), which stems from principal component analysis (PCA), is used in many pattern recognition and image processing fields. It cannot capture the higher-order statistical properties of natural images, so its detection performance is unsatisfactory. PCA has been extended to kernel PCA in order to capture higher-order statistics; however, a kernel FKT (KFKT) had not previously been explicitly proposed, nor had its detection performance been studied. To accurately detect potential small targets in infrared images, we first extend FKT to KFKT to capture the higher-order statistical properties of images. We then develop a framework based on Kalman prediction and KFKT that can automatically detect and track small targets. Experimental results show that KFKT outperforms FKT and that the proposed framework is capable of automatically detecting and tracking infrared point targets.
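The linear FKT on which the kernel extension builds has a compact form: whiten the sum of the two class covariance matrices, then eigendecompose one whitened class covariance. Both classes share the resulting eigenvectors, with eigenvalues that sum to one, so directions dominant for one class are weakest for the other. A minimal numpy sketch (classical FKT only, not the paper's kernelized version):

```python
import numpy as np

def fkt(X1, X2):
    """Fukunaga-Koontz transform sketch: whiten the summed class
    covariances, then eigendecompose one whitened class covariance.
    Returns the transform and the class-1 eigenvalues; the class-2
    eigenvalues in the same basis are 1 - lam."""
    R1 = X1.T @ X1 / len(X1)
    R2 = X2.T @ X2 / len(X2)
    vals, vecs = np.linalg.eigh(R1 + R2)
    P = vecs @ np.diag(vals ** -0.5)      # whitening operator: P.T(R1+R2)P = I
    S1 = P.T @ R1 @ P
    lam, V = np.linalg.eigh(S1)           # shared eigenbasis
    return P @ V, lam

rng = np.random.default_rng(7)
# Two zero-mean classes with energy along different axes.
X1 = rng.normal(0, 1, (500, 2)) * np.array([3.0, 0.3])
X2 = rng.normal(0, 1, (500, 2)) * np.array([0.3, 3.0])
W, lam = fkt(X1, X2)
```

Because the transform operates only on second-order statistics, it misses the higher-order structure of natural imagery, which is precisely the limitation the kernelized KFKT is designed to overcome.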
Yu, Yong-Jie; Xia, Qiao-Ling; Wang, Sheng; Wang, Bing; Xie, Fu-Wei; Zhang, Xiao-Bing; Ma, Yun-Ming; Wu, Hai-Long
2014-09-12
Peak detection and background drift correction (BDC) are key stages in the chemometric analysis of chromatographic fingerprints of complex samples. This study developed a novel chemometric strategy for simultaneous automatic chromatographic peak detection and BDC. A robust statistical method was used for intelligent estimation of the instrumental noise level, coupled with the first-order derivative of the chromatographic signal, to automatically extract chromatographic peaks from the data. A local curve-fitting strategy was then employed for BDC. Simulated and real liquid chromatographic data sets were designed with various kinds of background drift and degrees of peak overlap to verify the performance of the proposed strategy. The underlying chromatographic peaks can be automatically detected and reasonably integrated by this strategy, and background-corrected chromatograms can be obtained precisely. The proposed method was used to analyze a complex gas chromatography dataset that monitored quality changes in plant extracts during storage. Copyright © 2014 Elsevier B.V. All rights reserved.
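The noise-then-derivative idea described above can be sketched as follows. This is a simplified illustration, not the authors' implementation: noise is estimated here with a median-absolute-deviation rule on the differenced signal, and peaks are taken at sign changes of the first derivative that rise sufficiently above a (naively flat) baseline; all names are hypothetical.

```python
import numpy as np

def estimate_noise(signal):
    """Robust noise estimate: scaled median absolute deviation (MAD) of the
    first difference of the signal (an assumed, illustrative choice)."""
    diff = np.diff(signal)
    return 1.4826 * np.median(np.abs(diff - np.median(diff)))

def detect_peaks(signal, k=5.0):
    """Flag local maxima whose height above the baseline exceeds k times
    the estimated noise level. A real method would also correct drifting
    baselines (the paper's local curve-fitting step)."""
    noise = estimate_noise(signal)
    d = np.diff(signal)
    # +/- sign changes of the first derivative mark candidate apexes
    apex = np.where((d[:-1] > 0) & (d[1:] <= 0))[0] + 1
    baseline = np.median(signal)        # flat-baseline assumption
    return [i for i in apex if signal[i] - baseline > k * noise]
```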
Detecting Cheaters without Thinking: Testing the Automaticity of the Cheater Detection Module
Van Lier, Jens; Revlin, Russell; De Neys, Wim
2013-01-01
Evolutionary psychologists have suggested that our brain is composed of evolved mechanisms. One extensively studied mechanism is the cheater detection module. This module would make people very good at detecting cheaters in a social exchange. A vast amount of research has illustrated performance facilitation on social contract selection tasks. This facilitation is attributed to the alleged automatic and isolated operation of the module (i.e., independent of general cognitive capacity). This study, using the selection task, tested the critical automaticity assumption in three experiments. Experiments 1 and 2 established that performance on social contract versions did not depend on cognitive capacity or age. Experiment 3 showed that experimentally burdening cognitive resources with a secondary task had no impact on performance on the social contract version. However, in all experiments, performance on a non-social contract version did depend on available cognitive capacity. Overall, findings validate the automatic and effortless nature of social exchange reasoning. PMID:23342012
[Advances in automatic detection technology for images of thin blood film of malaria parasite].
Juan-Sheng, Zhang; Di-Qiang, Zhang; Wei, Wang; Xiao-Guang, Wei; Zeng-Guo, Wang
2017-05-05
This paper reviews computer vision and image analysis studies aiming at automated diagnosis or screening of malaria in microscope images of thin blood film smears. After introducing the background and significance of automatic detection technology, the existing detection technologies are summarized and divided into several steps, including image acquisition, pre-processing, morphological analysis, segmentation, counting, and pattern classification. The principles and implementation methods of each step are then given in detail. In addition, extending automatic detection technology to thick blood film smears is put forward as a question worthy of study, and a perspective on future work toward automated microscopy diagnosis of malaria is provided.
Corner detection and sorting method based on improved Harris algorithm in camera calibration
NASA Astrophysics Data System (ADS)
Xiao, Ying; Wang, Yonghong; Dan, Xizuo; Huang, Anqi; Hu, Yue; Yang, Lianxiang
2016-11-01
In the traditional Harris corner detection algorithm, the threshold used to eliminate false corners is selected manually. To detect corners automatically, this paper proposes an improved algorithm that combines the Harris detector with the circular boundary theory of corners. After accurate corner coordinates are detected using the Harris and Forstner algorithms, false corners within the chessboard pattern of the calibration plate are eliminated automatically using circular boundary theory. Moreover, a corner sorting method based on an improved calibration plate is proposed to eliminate false background corners and sort the remaining corners in order. Experimental results show that the proposed algorithms eliminate all false corners and sort the remaining corners correctly and automatically.
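The Harris response that underlies both the traditional and the improved detector can be sketched in a few lines. This is an illustrative implementation of the standard corner measure R = det(M) - k·trace(M)², not the authors' code; the smoothing scale and k are conventional defaults.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def harris_response(img, sigma=1.0, k=0.04):
    """Harris corner response: build the Gaussian-smoothed structure tensor
    M from image gradients, then return R = det(M) - k * trace(M)^2.
    R > 0 at corners, R < 0 on edges, R ~ 0 in flat regions."""
    iy, ix = np.gradient(img.astype(float))   # gradients along rows, cols
    ixx = gaussian_filter(ix * ix, sigma)
    iyy = gaussian_filter(iy * iy, sigma)
    ixy = gaussian_filter(ix * iy, sigma)
    det = ixx * iyy - ixy ** 2
    trace = ixx + iyy
    return det - k * trace ** 2
```

Candidate corners are usually taken as local maxima of R above a threshold; the paper's contribution is removing the manual choice of that threshold.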
Jung, Jaehoon; Yoon, Inhye; Paik, Joonki
2016-01-01
This paper presents an object occlusion detection algorithm that uses object depth information estimated by automatic camera calibration. Object occlusion is a major factor degrading the performance of object tracking and recognition. To detect occlusions, the proposed algorithm consists of three steps: (i) automatic camera calibration using both moving objects and a background structure; (ii) object depth estimation; and (iii) detection of occluded regions. The proposed algorithm estimates the depth of an object without extra sensors, using only a generic red, green and blue (RGB) camera. As a result, it can be applied to improve the performance of object tracking and recognition algorithms for video surveillance systems. PMID:27347978
NASA Astrophysics Data System (ADS)
Qin, Rufu; Lin, Liangzhao
2017-06-01
Coastal seiches have become an increasingly important issue in coastal science and present many challenges, particularly when attempting to provide warning services. This paper presents the methodologies, techniques and integrated services adopted for the design and implementation of a Seiches Monitoring and Forecasting Integration Framework (SMAF-IF). The SMAF-IF is an integrated system of different types of sensors and numerical models that incorporates Geographic Information System (GIS) and web techniques, and it focuses on coastal seiche event detection and early warning in the North Jiangsu shoal, China. The in situ sensors perform automatic and continuous monitoring of the marine environment, and the numerical models provide meteorological and physical oceanographic parameter estimates. Model-output processing software was developed in C# using ArcGIS Engine functions, which automatically generates visualization maps and warning information. Leveraging the ArcGIS Flex API and ASP.NET web services, a web-based GIS framework was designed to facilitate quasi-real-time data access, interactive visualization and analysis, and the provision of early warning services for end users. The integrated framework proposed in this study enables decision-makers and the public to respond quickly to emergency coastal seiche events and can easily be adapted to other regions and scientific domains involving real-time monitoring and forecasting.
SeisComP 3 - Where are we now?
NASA Astrophysics Data System (ADS)
Saul, Joachim; Becker, Jan; Hanka, Winfried; Heinloo, Andres; Weber, Bernd
2010-05-01
The seismological software SeisComP has evolved over approximately the last 10 years from a pure data-acquisition module into a fully featured real-time earthquake monitoring package. The now very popular SeedLink protocol for seismic data transmission has been the core of SeisComP from the very beginning. Later additions included simple, purely automatic event detection, location, and magnitude determination capabilities. Especially with the development of the third-generation SeisComP, also known as "SeisComP 3", the automatic processing capabilities have been augmented by graphical user interfaces for visualization, rapid event review, and quality control. Communication between the modules is achieved using a TCP/IP infrastructure that allows distributed computing and remote review. For seismological metadata exchange, export to and import from QuakeML is available, which also provides a convenient interface with third-party software. SeisComP is the primary seismological processing software at the GFZ Potsdam. It has also been in use for years in numerous seismic networks in Europe and, more recently, has been adopted as the primary monitoring software by several tsunami warning centers around the Indian Ocean. In our presentation we describe the current status of development as well as future plans. We illustrate its possibilities by discussing different use cases for global and regional real-time earthquake monitoring and tsunami warning.
Timm, Jana; Weise, Annekathrin; Grimm, Sabine; Schröger, Erich
2011-01-01
The infrequent occurrence of a transient feature (deviance; e.g., frequency modulation, FM) in one of the regularly occurring sinusoidal tones (standards) elicits the deviance-related mismatch negativity (MMN) component of the event-related brain potential. Based on a memory-based comparison, MMN reflects the mismatch between the representations of incoming and standard sounds. The present study investigated to what extent the infrequent exclusion of an FM is detected by the MMN system. For that purpose we measured MMN to deviances that consisted of either the exclusion or the inclusion of an FM, at an early or late position within the sound, that was present or absent, respectively, in the standard. According to the information-content hypothesis, deviance detection relies on the difference in informational content of the deviant relative to that of the standard. As this difference between deviants with FM and standards without FM is the same as in the reversed case, comparable MMNs should be elicited by FM inclusions and exclusions. According to the feature-detector hypothesis, however, deviance detection depends on the increased activation of feature detectors by additional sound features. Thus, rare exclusions of the FM should elicit no MMN, or a smaller MMN than FM inclusions. In the passive listening condition, MMN was obtained only for the early inclusion, but not for the exclusions nor for the late inclusion of an FM. This asymmetry in automatic deviance detection seems to partly reflect the contribution of feature detectors, even though it cannot fully account for the missing MMN to late FM inclusions. Importantly, the behavioral deviance detection performance in the active listening condition did not reveal such an asymmetry, suggesting that the intentional detection of the deviants is based on the difference in informational content. On a more general level, the results partly support the "fresh-afferent" account or an extended memory-comparison-based account of MMN.
PMID:21852979
Oosterwijk, J C; Knepflé, C F; Mesker, W E; Vrolijk, H; Sloos, W C; Pattenier, H; Ravkin, I; van Ommen, G J; Kanhai, H H; Tanke, H J
1998-01-01
This article explores the feasibility of the use of automated microscopy and image analysis to detect the presence of rare fetal nucleated red blood cells (NRBCs) circulating in maternal blood. The rationales for enrichment and for automated image analysis for "rare-event" detection are reviewed. We also describe the application of automated image analysis to 42 maternal blood samples, using a protocol consisting of one-step enrichment followed by immunocytochemical staining for fetal hemoglobin (HbF) and FISH for X- and Y-chromosomal sequences. Automated image analysis consisted of multimode microscopy and subsequent visual evaluation of image memories containing the selected objects. The FISH results were compared with the results of conventional karyotyping of the chorionic villi. By use of manual screening, 43% of the slides were found to be positive (>=1 NRBC), with a mean number of 11 NRBCs (range 1-40). By automated microscopy, 52% were positive, with on average 17 NRBCs (range 1-111). There was a good correlation between both manual and automated screening, but the NRBC yield from automated image analysis was found to be superior to that from manual screening (P=.0443), particularly when the NRBC count was >15. Seven (64%) of 11 XY fetuses were correctly diagnosed by FISH analysis of automatically detected cells, and all discrepancies were restricted to the lower cell-count range. We believe that automated microscopy and image analysis reduce the screening workload, are more sensitive than manual evaluation, and can be used to detect rare HbF-containing NRBCs in maternal blood. PMID:9837832
Gabbett, Tim J
2013-08-01
The physical demands of rugby league, rugby union, and American football are significantly increased by the large number of collisions players are required to perform during match play. Because of the labor-intensive nature of coding collisions from video recordings, manufacturers of wearable microsensor (e.g., global positioning system [GPS]) units have refined the technology to automatically detect collisions, and several sport scientists have attempted to use these microsensors to quantify the physical demands of collision sports. However, a question remains over the validity of these microtechnology units for quantifying the contact demands of collision sports. Indeed, recent evidence has shown significant differences between the number of "impacts" recorded by microtechnology units (GPSports) and the actual number of collisions coded from video. However, a separate study investigated the validity of a different microtechnology unit (minimaxX; Catapult Sports), which included GPS and triaxial accelerometers as well as a gyroscope and magnetometer, for quantifying collisions. Collisions detected by the minimaxX unit were compared with video-based coding of the actual events. No significant differences were detected between the number of mild, moderate, and heavy collisions detected by the minimaxX units and those coded from video recordings of the actual event. Furthermore, a strong correlation (r = 0.96, p < 0.01) was observed between collisions recorded by the minimaxX units and those coded from video recordings of the event. These findings demonstrate that, to date, only one commercially available wearable microtechnology unit (minimaxX) can be considered capable of offering a valid method of quantifying the contact loads that typically occur in collision sports. Until such validation research is completed, sport scientists should be circumspect about the ability of other units to perform similar functions.
Double ErrP Detection for Automatic Error Correction in an ERP-Based BCI Speller.
Cruz, Aniana; Pires, Gabriel; Nunes, Urbano J
2018-01-01
Brain-computer interfaces (BCIs) are useful devices for people with severe motor disabilities. However, due to their low speed and low reliability, BCIs still have very limited application in daily real-world tasks. This paper proposes a P300-based BCI speller combined with double error-related potential (ErrP) detection to automatically correct erroneous decisions. This novel approach introduces a second error detection step to infer whether a wrong automatic correction also elicits a second ErrP. Thus, two single-trial responses, instead of one, contribute to the final selection, improving the reliability of error detection. Moreover, to increase error detection, the evoked potential detected as the target by the P300 classifier is combined with the evoked error potential at the feature level. Discriminable error and positive potentials (responses to correct feedback) were clearly identified. The proposed approach was tested on nine healthy participants and one tetraplegic participant. The online average accuracies for the first and second ErrPs were 88.4% and 84.8%, respectively. With automatic correction, we achieved an improvement of around 5%, reaching 89.9% spelling accuracy at an effective rate of 2.92 symbols/min. The proposed approach revealed that double ErrP detection can improve the reliability and speed of BCI systems.
Earthquake Risk Reduction to Istanbul Natural Gas Distribution Network
NASA Astrophysics Data System (ADS)
Zulfikar, Can; Kariptas, Cagatay; Biyikoglu, Hikmet; Ozarpa, Cevat
2017-04-01
Istanbul Natural Gas Distribution Corporation (IGDAS) is one of the end users of the Istanbul Earthquake Early Warning (EEW) signal. IGDAS, the primary natural gas provider in Istanbul, operates an extensive system of 9,867 km of gas lines with 750 district regulators and 474,000 service boxes. The natural gas reaches the Istanbul city border at 70 bar in a 30-inch-diameter steel pipeline. The gas pressure is reduced to 20 bar at RMS stations and distributed to district regulators inside the city. 110 of the 750 district regulators are instrumented with strong-motion accelerometers in order to cut gas flow during an earthquake if ground motion parameters exceed certain threshold levels. In addition, state-of-the-art protection systems automatically cut natural gas flow when breaks in the gas pipelines are detected. IGDAS uses a sophisticated SCADA (supervisory control and data acquisition) system to monitor the state of health of its pipeline network. This system provides real-time information about quantities related to pipeline monitoring, including input-output pressure, drawing information, positions of station and RTU (remote terminal unit) gates, and slam-shut mechanism status at the 750 district regulator sites. The IGDAS real-time earthquake risk reduction algorithm follows four stages: 1) Real-time ground motion data are transmitted from 110 IGDAS and 110 KOERI (Kandilli Observatory and Earthquake Research Institute) acceleration stations to the IGDAS SCADA center and the KOERI data center. 2) During an earthquake, EEW information is sent from the IGDAS SCADA center to the IGDAS stations. 3) Automatic shut-off is applied at IGDAS district regulators, and calculated parameters are sent from the stations to the IGDAS SCADA center and KOERI. 4) Integrated building and gas pipeline damage maps are prepared immediately after the earthquake.
Today's technology allows the expected level of shaking to be estimated rapidly once an earthquake begins. In the Istanbul case, however, for a potential Marmara Sea earthquake the available time is very limited even for estimating the level of shaking. A robust threshold-based EEW system is the only viable algorithm for such a near-source event to activate automatic shut-off mechanisms in critical infrastructure before the damaging waves arrive. This safety measure, even with only a few seconds of warning time, will help to mitigate potential damage and secondary hazards.
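The threshold-based shut-off logic described above reduces, in its simplest form, to comparing a measured ground-motion parameter against a preset level. A toy sketch (the threshold value and function names are illustrative only, not IGDAS's actual settings):

```python
def pga(accel_trace):
    """Peak ground acceleration: maximum absolute amplitude of the record
    (units assumed to be g for this sketch)."""
    return max(abs(a) for a in accel_trace)

def should_shut_off(accel_trace, threshold_g=0.2):
    """Threshold-based trigger: command the district regulator's slam-shut
    valve when PGA exceeds the threshold. The 0.2 g value is a hypothetical
    placeholder, not the operational setting."""
    return pga(accel_trace) >= threshold_g
```

In practice such triggers typically also require exceedance on multiple parameters or stations to avoid false alarms from local disturbances.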
Auto-detection of Halo CME Parameters as the Initial Condition of Solar Wind Propagation
NASA Astrophysics Data System (ADS)
Choi, Kyu-Cheol; Park, Mi-Young; Kim, Jae-Hun
2017-12-01
Halo coronal mass ejections (CMEs) originating from solar activity give rise to geomagnetic storms when they reach the Earth. Variations in the geomagnetic field during a geomagnetic storm can damage satellites, communication systems, and electrical power grids, and can induce currents in power systems. Therefore, automated techniques for detecting and analyzing halo CMEs have been eliciting increasing attention for the monitoring and prediction of the space weather environment. In this study, we developed an algorithm to sense and detect halo CMEs using large angle and spectrometric coronagraph (LASCO) C3 coronagraph images from the solar and heliospheric observatory (SOHO) satellite. In addition, we developed an image processing technique to derive the morphological and dynamical characteristics of halo CMEs, namely, the source location, width, actual CME speed, and arrival time at 21.5 solar radii. The proposed automatic halo CME analysis model was validated against three past halo CME events. As a result, a solar event that occurred at 03:38 UT on Mar. 23, 2014 was predicted to arrive at Earth at 23:00 UT on Mar. 25, whereas the actual arrival time was 04:30 UT on Mar. 26, a difference of 5 hr and 30 min. In addition, a solar event that occurred at 12:55 UT on Apr. 18, 2014 was estimated to arrive at Earth at 16:00 UT on Apr. 20, which is 4 hr ahead of the actual arrival time of 20:00 UT on the same day. Nevertheless, the estimation error was reduced significantly compared to the ENLIL model. As a further study, the model will be applied to many more events for validation and testing, and once such tests are completed, an on-line service will be provided at the Korean Space Weather Center to detect halo CMEs and derive the model parameters.
NASA Astrophysics Data System (ADS)
Labak, P.; Arndt, R.; Villagran, M.
2009-04-01
One of the sub-goals of the Integrated Field Experiment in 2008 (IFE08) in Kazakhstan was testing the prototype elements of the Seismic Aftershock Monitoring System (SAMS) for on-site inspection purposes. The task of the SAMS is to collect facts that help to clarify the nature of the triggering event. The SAMS therefore has to be capable of detecting and identifying events as small as magnitude -2 in an inspection area of up to 1000 km2. The SAMS field equipment comprised 30 mini-arrays and 10 3-component stations. Each mini-array consisted of a central 3-component seismometer and 3 vertical seismometers at a distance of about 100 m from the central seismometer. The mini-arrays covered approximately 80% of the surrogate inspection area (IA) on the territory of the former Semipalatinsk test site. Most of the stations were installed during the first four days of field operations by the seismic sub-team, which consisted of 10 seismologists. The SAMS data center comprised 2 IBM blade centers and 8 working places for data archiving, detection-list production, and event analysis. A prototype of the SAMS software was tested. The average daily amount of collected raw data was 15-30 GB and increased as more stations entered operation. Routine manual data screening and data analyses were performed by 2-6 sub-team members. Automatic screening was used for selected time intervals. Screening was performed in the frequency domain using the Sonoview program and in the time domain using the Geotool and Hypolines programs. The screening results were merged into a master event list, which served as the basis for detailed analysis of unclear events and of events identified as potentially lying in the IA. Detailed analysis of events potentially in the IA was performed with the Hypoline and Geotool programs. In addition, the Hyposimplex and Hypocenter programs were used for event localization.
The results of the analysis were integrated in visual form using the Seistrain/Geosearch program. Data were fully screened for the period 5-13 September 2008, and 360 teleseismic, regional, and local events were identified. Results of the detection and analysis will be presented, and consequences for further SAMS development will be discussed.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 13 Business Credit and Assistance 1 2010-01-01 2010-01-01 false Events of default and SBA's... Noncompliance With Terms of Leverage § 107.1810 Events of default and SBA's remedies for Licensee's... time of their issuance. (b) Automatic events of default. The occurrence of one or more of the events in...
Automatic enforcement and highway safety.
DOT National Transportation Integrated Search
2011-05-01
The objectives of this research are to: 1. Identify aspects of the automatic detection of red light running that the public finds offensive or problematical, and quantify the level of opposition on each aspect. 2. Identify aspects of the automatic de...
Advanced Geospatial Hydrodynamic Signals Analysis for Tsunami Event Detection and Warning
NASA Astrophysics Data System (ADS)
Arbab-Zavar, Banafshe; Sabeur, Zoheir
2013-04-01
Current early tsunami warnings can be issued upon the detection of a seismic event at a given location offshore. This also provides an opportunity to predict the tsunami wave propagation and run-ups at potentially affected coastal zones by selecting the best-matching seismic event from a database of pre-computed tsunami scenarios. Nevertheless, it remains challenging to obtain the rupture parameters of tsunamigenic earthquakes in real time and to simulate the tsunami propagation with high accuracy. In this study, we propose a supporting approach in which the hydrodynamic signal is systematically analysed for traces of a tsunamigenic signal. The combination of the relatively low amplitude of a tsunami signal in deep water and the frequent occurrence of background signals and noise results in a generally low signal-to-noise ratio for the tsunami signal, which in turn makes its detection difficult. In order to improve the accuracy and confidence of detection, we use a re-identification framework in which a tsunamigenic signal is detected by scanning a network of hydrodynamic stations with water-level sensing. The aim is to re-identify the same signatures as the tsunami wave spatially propagates through the network of hydrodynamic stations. Re-identification of the tsunamigenic signal is technically possible because the tsunami signal in the open ocean conserves birthmarks relating it to the source event. As well as supporting the initial detection and improving the confidence of detection, a re-identified signal is indicative of the spatial range of the signal, and can thereby help to discriminate certain background signals, such as wind waves, which do not have as large a spatial reach as tsunamis.
In this paper, the proposed methodology for the automatic detection of tsunamigenic signals is demonstrated using open data from NOAA for a recorded tsunami event in the Pacific Ocean. The new approach will be tested in the future on other oceanic regions, including the Mediterranean Sea and North-East Atlantic Ocean zones. Both authors acknowledge that this research is conducted under the TRIDEC IP FP7 project [1], which involves the development of a system of systems for collaborative, complex, and critical decision support in evolving crises. [1] TRIDEC IP ICT-2009.4.3 Intelligent Information Management, Project Reference: 258723. http://www.tridec-online.eu/home
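The re-identification step described above amounts to scanning a second station's record for a high normalized cross-correlation with the candidate tsunami signature. A simplified sketch (hypothetical function and brute-force scan; a real system would work on filtered, de-tided water-level data):

```python
import numpy as np

def reidentify(signature, record):
    """Scan a station record for the candidate tsunami signature using
    normalized cross-correlation; return (best lag, peak coefficient).
    A coefficient near 1 suggests the same signal re-observed downstream."""
    n = len(signature)
    sig = (signature - signature.mean()) / signature.std()
    best_lag, best_cc = 0, -1.0
    for lag in range(len(record) - n + 1):
        w = record[lag:lag + n]
        s = w.std()
        if s == 0:
            continue  # flat window: correlation undefined
        cc = float(np.dot(sig, (w - w.mean()) / s)) / n
        if cc > best_cc:
            best_lag, best_cc = lag, cc
    return best_lag, best_cc
```

The lag between stations, combined with their separation, also constrains the propagation speed, which helps reject slow, local disturbances.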
Automatic detection of electric power troubles (AI application)
NASA Technical Reports Server (NTRS)
Wang, Caroline; Zeanah, Hugh; Anderson, Audie; Patrick, Clint
1987-01-01
The design goal of Automatic Detection of Electric Power Troubles (ADEPT) was to enhance fault diagnosis techniques in a highly efficient way. The ADEPT system was designed with two modes of operation: (1) real-time fault isolation, and (2) a local simulator that simulates the models theoretically.
Automatic food detection in egocentric images using artificial intelligence technology
USDA-ARS?s Scientific Manuscript database
Our objective was to develop an artificial intelligence (AI)-based algorithm which can automatically detect food items from images acquired by an egocentric wearable camera for dietary assessment. To study human diet and lifestyle, large sets of egocentric images were acquired using a wearable devic...
The Automatic and Controlled Processing of Temporal and Spatial Patterns.
1980-02-01
Schneider, 1977; LaBerge, 1973, 1975). There is a need to expand automatic processing to inputs where an event is defined by a sequence of stimuli. In this...that found by LaBerge (1973). In the LaBerge experiment, subjects were tested on both familiar and unfamiliar characters. For the familiar characters... LaBerge argued that the separate features were unitized into letters automatically. For unfamiliar characters, subjects could not initially
Comprehensive eye evaluation algorithm
NASA Astrophysics Data System (ADS)
Agurto, C.; Nemeth, S.; Zamora, G.; Vahtel, M.; Soliz, P.; Barriga, S.
2016-03-01
In recent years, several research groups have developed automatic algorithms to detect diabetic retinopathy (DR) in individuals with diabetes (DM) using digital retinal images. Studies have indicated that diabetics have 1.5 times the annual risk of developing primary open-angle glaucoma (POAG) as do people without DM. Moreover, DM patients have 1.8 times the risk of age-related macular degeneration (AMD). Although numerous investigators are developing automatic DR detection algorithms, there have been few successful efforts to create an automatic algorithm that can detect other ocular diseases, such as POAG and AMD. Consequently, our aim in the current study was to develop a comprehensive eye evaluation algorithm that not only detects DR in retinal images, but also automatically identifies glaucoma suspects and AMD by integrating other personal medical information with the retinal features. The proposed system is fully automatic and provides the likelihood of each of the three eye diseases. The system was evaluated on two datasets of 104 and 88 diabetic cases. For each eye, we used two non-mydriatic digital color fundus photographs (macula- and optic-disc-centered) and, when available, information about age, duration of diabetes, cataracts, hypertension, gender, and laboratory data. Our results show that the combination of multimodal features can increase the AUC by up to 5%, 7%, and 8% in the detection of AMD, DR, and glaucoma, respectively. Marked improvement was achieved when laboratory results were combined with retinal image features.
HIPAA-compliant automatic monitoring system for RIS-integrated PACS operation
NASA Astrophysics Data System (ADS)
Jin, Jin; Zhang, Jianguo; Chen, Xiaomeng; Sun, Jianyong; Yang, Yuanyuan; Liang, Chenwen; Feng, Jie; Sheng, Liwei; Huang, H. K.
2006-03-01
As a governmental regulation, the Health Insurance Portability and Accountability Act (HIPAA) was issued to protect the privacy of health information that identifies individuals, whether living or deceased. HIPAA requires security services supporting the following implementation features: access control, audit controls, authorization control, data authentication, and entity authentication. The controls proposed in the HIPAA Security Standards are supported here by audit trails. Audit trails can be used for surveillance, to detect when interesting events might be happening that warrant further investigation, or they can be used forensically, after the detection of a security breach, to determine what went wrong and who or what was at fault. In order to provide security control services and to achieve high and continuous availability, we designed a HIPAA-compliant automatic monitoring system for RIS-integrated PACS operation. The system consists of two parts: monitoring agents running on each PACS component computer, and a monitor server running on a remote computer. Monitoring agents are deployed on all computer nodes in the RIS-integrated PACS system to collect the audit trail messages defined by Supplement 95 of the DICOM standard (Audit Trail Messages). The monitor server then gathers all audit messages and processes them to provide security information at three levels: system resources, PACS/RIS applications, and user/patient data access. RIS-integrated PACS managers can thus monitor and control the entire RIS-integrated PACS operation through a web service provided by the monitor server. This paper presents the design of a HIPAA-compliant automatic monitoring system for RIS-integrated PACS operation and gives preliminary results obtained with this monitoring system on a clinical RIS-integrated PACS.
Precision Seismic Monitoring of Volcanic Eruptions at Axial Seamount
NASA Astrophysics Data System (ADS)
Waldhauser, F.; Wilcock, W. S. D.; Tolstoy, M.; Baillard, C.; Tan, Y. J.; Schaff, D. P.
2017-12-01
Seven permanent ocean-bottom seismometers of the Ocean Observatories Initiative's real-time cabled observatory at Axial Seamount, off the coast of the western United States, have recorded seismic activity since 2014. The array captured the April 2015 eruption, shedding light on the detailed structure and dynamics of the volcano and the Juan de Fuca mid-ocean ridge system (Wilcock et al., 2016). After a period of continuously increasing seismic activity, primarily associated with the reactivation of caldera ring faults, and the subsequent seismic crisis on April 24, 2015, with 7000 events recorded that day, seismicity rates steadily declined, and the array currently records an average of 5 events per day. Here we present results from ongoing efforts to automatically detect and precisely locate seismic events at Axial in real time, providing the computational framework and fundamental data that will allow rapid characterization and analysis of spatio-temporal changes in seismogenic properties. We combine a kurtosis-based P- and S-phase onset picker and time-domain cross-correlation detection and phase delay timing algorithms with single-event and double-difference location methods to rapidly and precisely (to within tens of meters) compute the locations and magnitudes of new events with respect to a 2-year-long, high-resolution background catalog that includes nearly 100,000 events within a 5×5 km region. We extend the real-time double-difference location software DD-RT to efficiently handle the anticipated high-rate and high-density earthquake activity during future eruptions. The modular monitoring framework will allow real-time tracking of other seismic events, such as tremors and sea-floor lava explosions, that enable the timing and location of lava flows and thus guide response research cruises to the most interesting sites.
Finally, rapid detection of eruption precursors and initiation will allow for adaptive sampling by the OOI instruments for optimal recording of future eruptions. With a higher eruption recurrence rate than that of land-based volcanoes, the Axial OOI observatory offers the opportunity to monitor and study volcanic eruptions throughout multiple cycles.
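The kurtosis-based onset picking mentioned above can be sketched in a few lines. This is a generic illustration of the principle only, not the picker used at Axial: a sliding-window kurtosis is computed over the waveform, and the pick is placed where the kurtosis rises fastest (the window length `win` is an arbitrary choice here).

```python
import numpy as np

def kurtosis_onset(trace, win=100):
    """Pick a phase onset as the steepest rise of running kurtosis.

    Impulsive P/S arrivals entering the sliding window produce a sharp
    kurtosis increase; the sample with the largest positive kurtosis
    gradient is returned as the pick index.
    """
    n = len(trace)
    k = np.zeros(n)
    for i in range(win, n):
        w = trace[i - win:i]
        m, s = w.mean(), w.std()
        if s > 0:
            k[i] = np.mean(((w - m) / s) ** 4)
    return int(np.argmax(np.diff(k)))
```

On a synthetic trace of Gaussian noise followed by a decaying impulsive signal, the pick lands at the first samples of the arrival.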
The Applicability of Incoherent Array Processing to IMS Seismic Array Stations
NASA Astrophysics Data System (ADS)
Gibbons, S. J.
2012-04-01
The seismic arrays of the International Monitoring System for the CTBT differ greatly in size and geometry, with apertures ranging from below 1 km to over 60 km. Large and medium aperture arrays with large inter-site spacings complicate the detection and estimation of high frequency phases since signals are often incoherent between sensors. Many such phases, typically from events at regional distances, remain undetected since pipeline algorithms often consider only frequencies low enough to allow coherent array processing. High frequency phases that are detected are frequently attributed qualitatively incorrect backazimuth and slowness estimates and are consequently not associated with the correct event hypotheses. This can lead to missed events both due to a lack of contributing phase detections and by corruption of event hypotheses by spurious detections. Continuous spectral estimation can be used for phase detection and parameter estimation on the largest aperture arrays, with phase arrivals identified as local maxima on beams of transformed spectrograms. The estimation procedure in effect measures group velocity rather than phase velocity and the ability to estimate backazimuth and slowness requires that the spatial extent of the array is large enough to resolve time-delays between envelopes with a period of approximately 4 or 5 seconds. The NOA, AKASG, YKA, WRA, and KURK arrays have apertures in excess of 20 km and spectrogram beamforming on these stations provides high quality slowness estimates for regional phases without additional post-processing. Seven arrays with aperture between 10 and 20 km (MJAR, ESDC, ILAR, KSRS, CMAR, ASAR, and EKA) can provide robust parameter estimates subject to a smoothing of the resulting slowness grids, most effectively achieved by convolving the measured slowness grids with the array response function for a 4 or 5 second period signal. 
The MJAR array in Japan recorded high SNR Pn signals for both the 2006 and 2009 North Korea nuclear tests but, due to signal incoherence, failed to contribute to the automatic event detections. It is demonstrated that the smoothed incoherent slowness estimates for the MJAR Pn phases for both tests indicate unambiguously the correct type of phase and a backazimuth estimate within 5 degrees of the great-circle backazimuth. The detection part of the algorithm is applicable to all IMS arrays, and spectrogram-based processing may offer a reduction in the false alarm rate for high frequency signals. Significantly, the local maxima of the scalar functions derived from the transformed spectrogram beams provide good estimates of the signal onset time. High frequency energy is of greater significance for lower event magnitudes and in, for example, the cavity decoupling detection evasion scenario. There is a need to characterize propagation paths with low attenuation of high frequency energy and situations in which parameter estimation on array stations fails.
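Incoherent (envelope-based) beamforming of the kind described can be illustrated with a minimal slowness grid search. In this sketch, rectified and smoothed envelopes stand in for the transformed spectrogram beams, and the function name, smoothing length, and grid are illustrative assumptions, not the operational IMS pipeline:

```python
import numpy as np

def incoherent_beam(traces, coords, dt, s_grid):
    """Grid search over slowness vectors using envelope stacks.

    traces: (nsta, nsamp) array; coords: (nsta, 2) station offsets in km;
    s_grid: iterable of (sx, sy) slowness vectors in s/km.  For each
    candidate slowness the smoothed envelopes are delay-and-summed; the
    vector maximizing stack power is the (group-velocity) estimate.
    """
    env = np.abs(traces)
    # crude moving-average smoothing stands in for spectrogram-band envelopes
    kernel = np.ones(25) / 25.0
    env = np.array([np.convolve(e, kernel, mode="same") for e in env])
    best, best_pow = None, -1.0
    for sx, sy in s_grid:
        stack = np.zeros(env.shape[1])
        for e, (x, y) in zip(env, coords):
            shift = int(round((sx * x + sy * y) / dt))
            stack += np.roll(e, -shift)
        p = np.max(stack)
        if p > best_pow:
            best_pow, best = p, (sx, sy)
    return best
```

Because only envelopes are aligned, the method tolerates waveform incoherence between sensors, which is exactly why it suits large-aperture arrays at high frequencies.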
iAnn: an event sharing platform for the life sciences.
Jimenez, Rafael C; Albar, Juan P; Bhak, Jong; Blatter, Marie-Claude; Blicher, Thomas; Brazas, Michelle D; Brooksbank, Cath; Budd, Aidan; De Las Rivas, Javier; Dreyer, Jacqueline; van Driel, Marc A; Dunn, Michael J; Fernandes, Pedro L; van Gelder, Celia W G; Hermjakob, Henning; Ioannidis, Vassilios; Judge, David P; Kahlem, Pascal; Korpelainen, Eija; Kraus, Hans-Joachim; Loveland, Jane; Mayer, Christine; McDowall, Jennifer; Moran, Federico; Mulder, Nicola; Nyronen, Tommi; Rother, Kristian; Salazar, Gustavo A; Schneider, Reinhard; Via, Allegra; Villaveces, Jose M; Yu, Ping; Schneider, Maria V; Attwood, Teresa K; Corpas, Manuel
2013-08-01
We present iAnn, an open source community-driven platform for dissemination of life science events, such as courses, conferences and workshops. iAnn allows automatic visualisation and integration of customised event reports. A central repository lies at the core of the platform: curators add submitted events, and these are subsequently accessed via web services. Thus, once an iAnn widget is incorporated into a website, it permanently shows timely relevant information as if it were native to the remote site. At the same time, announcements submitted to the repository are automatically disseminated to all portals that query the system. To facilitate the visualisation of announcements, iAnn provides powerful filtering options and views, integrated in Google Maps and Google Calendar. All iAnn widgets are freely available. Availability: http://iann.pro/iannviewer. Contact: manuel.corpas@tgac.ac.uk.
Seismic monitoring of small alpine rockfalls - validity, precision and limitations
NASA Astrophysics Data System (ADS)
Dietze, Michael; Mohadjer, Solmaz; Turowski, Jens M.; Ehlers, Todd A.; Hovius, Niels
2017-10-01
Rockfall in deglaciated mountain valleys is perhaps the most important post-glacial geomorphic process for determining the rates and patterns of valley wall erosion. Furthermore, rockfall poses a significant hazard to inhabitants and motivates monitoring efforts in populated areas. Traditional rockfall detection methods, such as aerial photography and terrestrial laser scanning (TLS) data evaluation, provide constraints on the location and released volume of rock but have limitations due to significant time lags or integration times between surveys, and deliver limited information on rockfall triggering mechanisms and the dynamics of individual events. Environmental seismology, the study of seismic signals emitted by processes at the Earth's surface, provides a complementary solution to these shortcomings. However, this approach is predominantly limited by the strength of the signals emitted by a source and their transformation and attenuation towards receivers. To test the ability of seismic methods to identify and locate small rockfalls, and to characterise their dynamics, we surveyed a 2.16 km² near-vertical cliff section of the Lauterbrunnen Valley in the Swiss Alps with a TLS device and six broadband seismometers. During 37 days in autumn 2014, 10 TLS-detected rockfalls with volumes ranging from 0.053 ± 0.004 to 2.338 ± 0.085 m³ were independently detected and located by the seismic approach, with a deviation of 81 (+59/−29) m (about 7 % of the average inter-station distance of the seismometer network). Further potential rockfalls were detected outside the TLS-surveyed cliff area. The onset of individual events can be determined within a few milliseconds, and their dynamics can be resolved into distinct phases, such as detachment, free fall, intermittent impact, fragmentation, arrival at the talus slope and subsequent slope activity.
The small rockfall volumes in this area require significant supervision during data processing: 2175 initially picked potential events were reduced to 511 by automatic rejection criteria, and these 511 events then had to be inspected manually, revealing 19 short earthquakes and 37 potential rockfalls, including the 10 TLS-detected events. Rockfall volume shows no relationship with released seismic energy or peak amplitude at this spatial scale, due to the dominance of other, process-inherent factors such as fall height, degree of fragmentation, and subsequent talus slope activity. Despite the significant amount of manual data processing, the combination of TLS and environmental seismology provides a detailed validation of the seismic detection of small-volume rockfalls, and reveals unprecedented temporal, spatial and geometric details about rockfalls in steep mountainous terrain.
46 CFR 76.33-20 - Operation and installation.
Code of Federal Regulations, 2011 CFR
2011-10-01
... EQUIPMENT Smoke Detecting System, Details § 76.33-20 Operation and installation. (a) The system shall be so arranged and installed that the presence of smoke in any of the protected spaces will automatically be... automatically indicate the zone in which the smoke originated. The detecting cabinet shall normally be located...
46 CFR 76.33-20 - Operation and installation.
Code of Federal Regulations, 2014 CFR
2014-10-01
... EQUIPMENT Smoke Detecting System, Details § 76.33-20 Operation and installation. (a) The system shall be so arranged and installed that the presence of smoke in any of the protected spaces will automatically be... automatically indicate the zone in which the smoke originated. The detecting cabinet shall normally be located...
46 CFR 76.33-20 - Operation and installation.
Code of Federal Regulations, 2012 CFR
2012-10-01
... EQUIPMENT Smoke Detecting System, Details § 76.33-20 Operation and installation. (a) The system shall be so arranged and installed that the presence of smoke in any of the protected spaces will automatically be... automatically indicate the zone in which the smoke originated. The detecting cabinet shall normally be located...
46 CFR 76.33-20 - Operation and installation.
Code of Federal Regulations, 2013 CFR
2013-10-01
... EQUIPMENT Smoke Detecting System, Details § 76.33-20 Operation and installation. (a) The system shall be so arranged and installed that the presence of smoke in any of the protected spaces will automatically be... automatically indicate the zone in which the smoke originated. The detecting cabinet shall normally be located...
Automatic Conceptual Encoding of Printed Verbal Material: Assessment of Population Differences.
ERIC Educational Resources Information Center
Kee, Daniel W.; And Others
1984-01-01
The release from proactive interference task was used to investigate categorical encoding of items. Low socioeconomic status Black and middle socioeconomic status White children were compared. Conceptual encoding differences between these populations were not detected in automatic conceptual encoding but were detected when the free recall method…
Deep learning based beat event detection in action movie franchises
NASA Astrophysics Data System (ADS)
Ejaz, N.; Khan, U. A.; Martínez-del-Amor, M. A.; Sparenberg, H.
2018-04-01
Automatic understanding and interpretation of movies can be used in a variety of ways to semantically manage massive volumes of movie data. The "Action Movie Franchises" dataset is a collection of twenty Hollywood action movies from five famous franchises, with ground truth annotations at the shot and beat level of each movie. In this dataset, annotations are provided for eleven semantic beat categories. In this work, we propose a deep learning based method to classify shots and beat-events on this dataset. A training dataset for each of the eleven beat categories is developed and a Convolutional Neural Network is trained. After finding the shot boundaries, key frames are extracted for each shot, and three classification labels are assigned to each key frame. The classification labels for the key frames in a particular shot are then used to assign a unique label to the shot. A simple sliding window based method is then used to group adjacent shots having the same label in order to find a particular beat event. The results of beat event classification are presented based on the criteria of precision, recall, and F-measure, and compared with an existing technique, recording significant improvements.
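The grouping step, merging adjacent shots that share a label into a single beat event, is simple enough to sketch. The function name and the `min_len` run-length filter are assumptions for illustration, not details from the paper:

```python
def group_beats(shot_labels, min_len=2):
    """Group runs of adjacent shots sharing a label into beat events.

    shot_labels: list of per-shot category labels.  Adjacent shots with
    the same label are merged; runs shorter than min_len are ignored.
    Returns (label, start_shot, end_shot) tuples (end inclusive).
    """
    events, start = [], 0
    for i in range(1, len(shot_labels) + 1):
        if i == len(shot_labels) or shot_labels[i] != shot_labels[start]:
            if i - start >= min_len:
                events.append((shot_labels[start], start, i - 1))
            start = i
    return events
```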
NASA Astrophysics Data System (ADS)
Podhoranyi, M.; Kuchar, S.; Portero, A.
2016-08-01
The primary objective of this study is to present techniques that cover usage of a hydrodynamic model as the main tool for monitoring and assessment of flood events while focusing on modelling of inundation areas. We analyzed the 2010 flood event (14th May - 20th May) that occurred in the Moravian-Silesian region (Czech Republic). Under investigation were four main catchments: Opava, Odra, Olše and Ostravice. Four hydrodynamic models were created and implemented into the Floreon+ platform in order to map inundation areas that arose during the flood event. In order to study the dynamics of the water, we applied an unsteady flow simulation for the entire area (HEC-RAS 4.1). The inundation areas were monitored, evaluated and recorded semi-automatically by means of the Floreon+ platform. We focused on information about the extent and presence of the flood areas. The modeled flooded areas were verified by comparing them with real data from different sources (official reports, aerial photos and hydrological networks). The study confirmed that hydrodynamic modeling is a very useful tool for mapping and monitoring of inundation areas. Overall, our models detected 48 inundation areas during the 2010 flood event.
Yang Li; Wei Liang; Yinlong Zhang; Haibo An; Jindong Tan
2016-08-01
Automatic and accurate lumbar vertebrae detection is an essential step of image-guided minimally invasive spine surgery (IG-MISS). However, traditional methods still require human intervention due to the similarity of vertebrae, abnormal pathological conditions and uncertain imaging angles. In this paper, we present a novel convolutional neural network (CNN) model to automatically detect lumbar vertebrae in C-arm X-ray images. Training data are augmented by digitally reconstructed radiographs (DRRs), and automatic segmentation of the region of interest (ROI) reduces the computational complexity. Furthermore, a feature fusion deep learning (FFDL) model is introduced to combine two types of features of lumbar vertebrae X-ray images, using Sobel and Gabor kernels to obtain the contour and texture of the lumbar vertebrae, respectively. Comprehensive qualitative and quantitative experiments demonstrate that our proposed model performs more accurately in abnormal cases with pathologies and surgical implants and in multi-angle views.
Automatic textual annotation of video news based on semantic visual object extraction
NASA Astrophysics Data System (ADS)
Boujemaa, Nozha; Fleuret, Francois; Gouet, Valerie; Sahbi, Hichem
2003-12-01
In this paper, we present our work on automatic generation of textual metadata based on visual content analysis of video news. We present two methods for semantic object detection and recognition from a cross-modal image-text thesaurus. These thesauri represent a supervised association between models and semantic labels. This paper is concerned with two semantic objects: faces and TV logos. In the first part, we present our work on efficient face detection and recognition with automatic name generation. This method also allows us to suggest textual annotation of shots via close-up estimation. In the second part, we automatically detect and recognize the different TV logos present on incoming news from different TV channels. This work was done jointly with the French TV channel TF1 within the "MediaWorks" project, which consists of a hybrid text-image indexing and retrieval platform for video news.
NASA Astrophysics Data System (ADS)
Kromer, Ryan A.; Abellán, Antonio; Hutchinson, D. Jean; Lato, Matt; Chanut, Marie-Aurelie; Dubois, Laurent; Jaboyedoff, Michel
2017-05-01
We present an automated terrestrial laser scanning (ATLS) system with automatic near-real-time change detection processing. The ATLS system was tested on the Séchilienne landslide in France for a 6-week period with data collected at 30 min intervals. The purpose of developing the system was to fill the gap of high-temporal-resolution TLS monitoring studies of earth surface processes and to offer a cost-effective, light, portable alternative to ground-based interferometric synthetic aperture radar (GB-InSAR) deformation monitoring. During the study, we detected the flux of talus, displacement of the landslide and pre-failure deformation of discrete rockfall events. Additionally, we found the ATLS system to be an effective tool in monitoring landslide and rockfall processes despite missing points due to poor atmospheric conditions or rainfall. Furthermore, such a system has the potential to help us better understand a wide variety of slope processes at high levels of temporal detail.
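The core of near-real-time TLS change detection is differencing successive scans. Below is a minimal brute-force cloud-to-cloud sketch; the ATLS pipeline itself is more sophisticated (registration, filtering, surface-based distances), so this shows only the underlying idea, and the names and threshold are assumptions:

```python
import numpy as np

def changed_points(ref, new, thresh=0.05):
    """Flag points in `new` farther than `thresh` (m) from any `ref` point.

    ref, new: (m, 3) and (n, 3) point clouds from successive scans.
    Exceedances mark surface change (rockfall scars, deposits,
    displacement).  A real pipeline would use a KD-tree for speed.
    """
    d = np.linalg.norm(new[:, None, :] - ref[None, :, :], axis=2)
    return d.min(axis=1) > thresh
```

Running this at each 30 min epoch against the previous scan yields per-point change masks from which events can be segmented.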
Anomaly detection using temporal data mining in a smart home environment.
Jakkula, V; Cook, D J
2008-01-01
To many people, home is a sanctuary. With the maturing of smart home technologies, many people with cognitive and physical disabilities can lead independent lives in their own homes for extended periods of time. In this paper, we investigate the design of machine learning algorithms that support this goal. We hypothesize that machine learning algorithms can be designed to automatically learn models of resident behavior in a smart home, and that the results can be used to perform automated health monitoring and to detect anomalies. Specifically, our algorithms draw upon the temporal nature of sensor data collected in a smart home to build a model of expected activities and to detect unexpected, and possibly health-critical, events in the home. We validate our algorithms using synthetic data and real activity data collected from volunteers in an automated smart environment. The results from our experiments support our hypothesis that a model can be learned from observed smart home data and used to report anomalies, as they occur, in a smart home.
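One simple form of the temporal reasoning described, learning when an activity usually occurs and flagging events at unusual times, can be sketched as follows. This is a hypothetical illustration of the idea, not the authors' algorithm; names and the `min_p` threshold are assumptions:

```python
from collections import Counter

def hourly_model(history):
    """history: (activity, hour) observations -> P(hour | activity)."""
    counts = {}
    for act, hour in history:
        counts.setdefault(act, Counter())[hour] += 1
    return {a: {h: n / sum(c.values()) for h, n in c.items()}
            for a, c in counts.items()}

def is_anomalous(model, activity, hour, min_p=0.05):
    """Flag an event whose hour has probability below min_p for that activity."""
    return model.get(activity, {}).get(hour, 0.0) < min_p
```

A cooking event at 3 a.m. in a home where cooking always happens at dinner time would be reported; richer models would also use durations and event sequences.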
NASA Technical Reports Server (NTRS)
Yildiz, Yidiray; Kolmanovsky, Ilya V.; Acosta, Diana
2011-01-01
This paper proposes a control allocation system that can detect and compensate for the phase shift between the desired and the actual total control effort due to rate limiting of the actuators. Phase shifting is an important problem in control system applications since it effectively introduces a time delay which may destabilize the closed loop dynamics. A relevant example comes from flight control, where aggressive pilot commands, high gain of the flight control system or some anomaly in the system may cause actuator rate limiting and introduce an effective time delay. This time delay can instigate Pilot Induced Oscillations (PIO), an abnormal coupling between the pilot and the aircraft resulting in unintentional and undesired oscillations. The proposed control allocation system reduces the effective time delay by first detecting the phase shift and then minimizing it using constrained optimization techniques. Flight control simulation results for an unstable aircraft with inertial cross coupling are reported, which demonstrate phase shift minimization and recovery from a PIO event.
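The detection half of the problem, estimating the effective delay between commanded and achieved control effort, can be illustrated by finding the lag that maximizes their cross-correlation. This is a generic sketch of that step only (the constrained-optimization compensation is omitted), and the function name is an assumption:

```python
import numpy as np

def estimate_delay(desired, actual, dt):
    """Estimate the effective time delay between commanded and achieved
    control effort as the lag maximizing their cross-correlation.

    Rate-limited actuators lag the command; the recovered delay is what
    a phase-compensating allocator would then try to minimize.
    """
    desired = desired - desired.mean()
    actual = actual - actual.mean()
    xc = np.correlate(actual, desired, mode="full")
    lag = np.argmax(xc) - (len(desired) - 1)
    return lag * dt
```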
MRI-alone radiation therapy planning for prostate cancer: Automatic fiducial marker detection
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ghose, Soumya, E-mail: soumya.ghose@case.edu; Mitra, Jhimli; Rivest-Hénault, David
Purpose: The feasibility of radiation therapy treatment planning using substitute computed tomography (sCT) generated from magnetic resonance images (MRIs) has been demonstrated by a number of research groups. One challenge with an MRI-alone workflow is the accurate identification of intraprostatic gold fiducial markers, which are frequently used for prostate localization prior to each dose delivery fraction. This paper investigates a template-matching approach for the detection of these seeds in MRI. Methods: Two different gradient echo T1 and T2* weighted MRI sequences were acquired from fifteen prostate cancer patients and evaluated for seed detection. For training, seed templates from manual contours were selected in a spectral clustering manifold learning framework. This aids in clustering "similar" gold fiducial markers together. The marker with the minimum distance to a cluster centroid was selected as the representative template of that cluster during training. During testing, Gaussian mixture modeling followed by a Markovian model was used in automatic detection of the probable candidates. The probable candidates were rigidly registered to the templates identified from spectral clustering, and a similarity metric was computed for ranking and detection. Results: A fiducial detection accuracy of 95% was obtained compared to manual observations. Expert radiation therapist observers were able to correctly identify all three implanted seeds on 11 of the 15 scans (the proposed method correctly identified all seeds on 10 of the 15). Conclusions: A novel automatic framework for gold fiducial marker detection in MRI is proposed and evaluated, with detection accuracies comparable to manual detection.
When radiation therapists are unable to determine the seed location in MRI, they refer back to the planning CT (only available in the existing clinical framework); similarly, an automatic quality control is built into the automatic software to ensure that all gold seeds are either correctly detected or a warning is raised for further manual intervention.
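The final ranking step, scoring candidate regions against cluster templates with a similarity metric, can be sketched with normalized cross-correlation. The upstream GMM/Markov candidate generation and spectral-clustering template selection are assumed to have run already; the names and the choice of NCC as the metric are illustrative assumptions:

```python
import numpy as np

def ncc(patch, template):
    """Normalized cross-correlation between a candidate patch and a template."""
    p = patch - patch.mean()
    t = template - template.mean()
    denom = np.linalg.norm(p) * np.linalg.norm(t)
    return float((p * t).sum() / denom) if denom > 0 else 0.0

def rank_candidates(patches, templates, top_k=3):
    """Score each candidate patch by its best similarity to any cluster
    template and return the indices of the top_k candidates,
    e.g. the three expected intraprostatic seeds.
    """
    scores = [max(ncc(p, t) for t in templates) for p in patches]
    return sorted(range(len(patches)), key=lambda i: -scores[i])[:top_k]
```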
MRI-alone radiation therapy planning for prostate cancer: Automatic fiducial marker detection.
Ghose, Soumya; Mitra, Jhimli; Rivest-Hénault, David; Fazlollahi, Amir; Stanwell, Peter; Pichler, Peter; Sun, Jidi; Fripp, Jurgen; Greer, Peter B; Dowling, Jason A
2016-05-01
The feasibility of radiation therapy treatment planning using substitute computed tomography (sCT) generated from magnetic resonance images (MRIs) has been demonstrated by a number of research groups. One challenge with an MRI-alone workflow is the accurate identification of intraprostatic gold fiducial markers, which are frequently used for prostate localization prior to each dose delivery fraction. This paper investigates a template-matching approach for the detection of these seeds in MRI. Two different gradient echo T1 and T2* weighted MRI sequences were acquired from fifteen prostate cancer patients and evaluated for seed detection. For training, seed templates from manual contours were selected in a spectral clustering manifold learning framework. This aids in clustering "similar" gold fiducial markers together. The marker with the minimum distance to a cluster centroid was selected as the representative template of that cluster during training. During testing, Gaussian mixture modeling followed by a Markovian model was used in automatic detection of the probable candidates. The probable candidates were rigidly registered to the templates identified from spectral clustering, and a similarity metric was computed for ranking and detection. A fiducial detection accuracy of 95% was obtained compared to manual observations. Expert radiation therapist observers were able to correctly identify all three implanted seeds on 11 of the 15 scans (the proposed method correctly identified all seeds on 10 of the 15). A novel automatic framework for gold fiducial marker detection in MRI is proposed and evaluated, with detection accuracies comparable to manual detection.
When radiation therapists are unable to determine the seed location in MRI, they refer back to the planning CT (only available in the existing clinical framework); similarly, an automatic quality control is built into the automatic software to ensure that all gold seeds are either correctly detected or a warning is raised for further manual intervention.
Automatic Association of News Items.
ERIC Educational Resources Information Center
Carrick, Christina; Watters, Carolyn
1997-01-01
Discussion of electronic news delivery systems and the automatic generation of electronic editions focuses on the association of related items of different media type, specifically photos and stories. The goal is to be able to determine to what degree any two news items refer to the same news event. (Author/LRW)
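A minimal version of scoring the degree to which two news items refer to the same event is a bag-of-words cosine similarity. The paper's actual association measure handles different media types and is richer than this, so the following is only a toy stand-in under that assumption:

```python
import math
from collections import Counter

def cosine_sim(text_a, text_b):
    """Degree to which two news items refer to the same event, scored as
    the cosine similarity of their bag-of-words vectors (0..1).

    A real system would weight terms (tf-idf) and exploit named
    entities, dates, and image captions for photo-story association.
    """
    a, b = Counter(text_a.lower().split()), Counter(text_b.lower().split())
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0
```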
Electrophysiological Evidence of Automatic Early Semantic Processing
ERIC Educational Resources Information Center
Hinojosa, Jose A.; Martin-Loeches, Manuel; Munoz, Francisco; Casado, Pilar; Pozo, Miguel A.
2004-01-01
This study investigates the automatic-controlled nature of early semantic processing by means of the Recognition Potential (RP), an event-related potential response that reflects lexical selection processes. For this purpose tasks differing in their processing requirements were used. Half of the participants performed a physical task involving a…
@INGVterremoti: Tweeting the Automatic Detection of Earthquakes
NASA Astrophysics Data System (ADS)
Casarotti, E.; Amato, A.; Comunello, F.; Lauciani, V.; Nostro, C.; Polidoro, P.
2014-12-01
The use of social media is emerging as a powerful tool for disseminating trusted information about earthquakes. Since 2009, the Twitter account @INGVterremoti provides constant and timely details about M2+ seismic events detected by the Italian National Seismic Network, directly connected with the seismologists on duty at Istituto Nazionale di Geofisica e Vulcanologia (INGV). After the 2012 seismic sequence, the account was awarded a national prize as the "most useful Twitter account". Currently, it updates more than 110,000 followers (one of the first 50 Italian Twitter accounts by number of followers). Nevertheless, since it provides only the manual revision of seismic parameters, the timing (approximately between 10 and 20 minutes after an event) has come under evaluation. Undeniably, mobile internet, social network sites and Twitter in particular require a more rapid and "real-time" reaction. During the last 18 months, INGV tested the tweeting of the automatic detection of M3+ earthquakes, obtaining results reliable enough to be released openly 1 or 2 minutes after a seismic event. During the summer of 2014, INGV, with the collaboration of CORIS (Department of Communication and Social Research, Sapienza University of Rome), involved the followers of @INGVterremoti and citizens, carrying out a quali-quantitative study (through in-depth interviews and a web survey) in order to evaluate the best format to deliver such information. In this presentation we will illustrate the results of the reliability test and the analysis of the survey.
NASA Astrophysics Data System (ADS)
Khodaverdi zahraee, N.; Rastiveis, H.
2017-09-01
Earthquake is one of the most devastating natural events that has threatened human life throughout history. After an earthquake, information about the damaged area and the amount and type of damage can be a great help in relief and reconstruction for disaster managers. It is very important that these measures be taken immediately after the earthquake, because any delay can increase the losses. The purpose of this paper is to propose and implement an automatic approach for mapping destructed buildings after an earthquake using pre- and post-event high resolution satellite images. In the proposed method, after preprocessing, segmentation of both images is performed using a multi-resolution segmentation technique. Then, the segmentation results are intersected in ArcGIS to obtain equal image objects in both images. After that, appropriate textural features, which better discriminate changed from unchanged areas, are calculated for all the image objects. Finally, the textural features extracted from the pre- and post-event images are differenced, and the obtained values are applied as an input feature vector to an artificial neural network that classifies the area into the two classes of changed and unchanged. The proposed method was evaluated using WorldView-2 satellite images acquired before and after the 2010 Haiti earthquake. The reported overall accuracy of 93% proved the ability of the proposed method for post-earthquake building change detection.
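The texture-difference idea can be sketched with simple per-object descriptors. Here variance and gradient energy stand in for the paper's textural features, and a difference-magnitude score replaces the trained neural network, so this shows only the shape of the pipeline under those stated substitutions:

```python
import numpy as np

def texture_features(patch):
    """Simple per-object texture descriptors: variance and gradient energy."""
    gy, gx = np.gradient(patch.astype(float))
    return np.array([patch.var(), np.mean(gx ** 2 + gy ** 2)])

def change_scores(pre_objs, post_objs):
    """Feature-difference magnitude per image object; large values suggest
    damage-induced texture change.  In the paper this difference vector
    feeds an artificial neural network; a norm threshold stands in here.
    """
    return [np.linalg.norm(texture_features(a) - texture_features(b))
            for a, b in zip(pre_objs, post_objs)]
```

An intact roof looks similar in both images (score near zero), while a collapsed one becomes rubble-textured and scores high.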
The Origins of Belief Representation: Monkeys Fail to Automatically Represent Others’ Beliefs
Martin, Alia; Santos, Laurie R.
2014-01-01
Young infants’ successful performance on false belief tasks has led several researchers to argue that there may be a core knowledge system for representing the beliefs of other agents, emerging early in human development and constraining automatic belief processing into adulthood. One way to investigate this purported core belief representation system is to examine whether non-human primates share such a system. Although non-human primates have historically performed poorly on false belief tasks that require executive function capacities, little work has explored how primates perform on more automatic measures of belief processing. To get at this issue, we modified Kovács et al. (2010)’s test of automatic belief representation to examine whether one non-human primate species—the rhesus macaque (Macaca mulatta)—is automatically influenced by another agent’s beliefs when tracking an object’s location. Monkeys saw an event in which a human agent watched an apple move back and forth between two boxes and an outcome in which one box was revealed to be empty. By occluding segments of the apple’s movement from either the monkey or the agent, we manipulated both the monkeys’ belief (true or false) and agent’s belief (true or false) about the final location of the apple. We found that monkeys looked longer at events that violated their own beliefs than at events that were consistent with their beliefs. In contrast to human infants, however, monkeys’ expectations were not influenced by another agent’s beliefs, suggesting that belief representation may be an aspect of core knowledge unique to humans. PMID:24374209
The origins of belief representation: monkeys fail to automatically represent others' beliefs.
Martin, Alia; Santos, Laurie R
2014-03-01
Young infants' successful performance on false belief tasks has led several researchers to argue that there may be a core knowledge system for representing the beliefs of other agents, emerging early in human development and constraining automatic belief processing into adulthood. One way to investigate this purported core belief representation system is to examine whether non-human primates share such a system. Although non-human primates have historically performed poorly on false belief tasks that require executive function capacities, little work has explored how primates perform on more automatic measures of belief processing. To get at this issue, we modified Kovács et al. (2010)'s test of automatic belief representation to examine whether one non-human primate species--the rhesus macaque (Macaca mulatta)--is automatically influenced by another agent's beliefs when tracking an object's location. Monkeys saw an event in which a human agent watched an apple move back and forth between two boxes and an outcome in which one box was revealed to be empty. By occluding segments of the apple's movement from either the monkey or the agent, we manipulated both the monkeys' belief (true or false) and agent's belief (true or false) about the final location of the apple. We found that monkeys looked longer at events that violated their own beliefs than at events that were consistent with their beliefs. In contrast to human infants, however, monkeys' expectations were not influenced by another agent's beliefs, suggesting that belief representation may be an aspect of core knowledge unique to humans. Copyright © 2013 Elsevier B.V. All rights reserved.
46 CFR 76.05-1 - Fire detecting systems.
Code of Federal Regulations, 2010 CFR
2010-10-01
... fitted with an automatic sprinkling system, except in relatively incombustible spaces. 2 Sprinkler heads....1 Offices, lockers, and isolated storerooms Electric, pneumatic, or automatic sprinkling1 Do.1 Public spaces None required with 20-minute patrol. Electric, pneumatic, or automatic sprinkling with 1...
46 CFR 76.05-1 - Fire detecting systems.
Code of Federal Regulations, 2011 CFR
2011-10-01
... fitted with an automatic sprinkling system, except in relatively incombustible spaces. 2 Sprinkler heads....1 Offices, lockers, and isolated storerooms Electric, pneumatic, or automatic sprinkling1 Do.1 Public spaces None required with 20-minute patrol. Electric, pneumatic, or automatic sprinkling with 1...
Automatic detection of small surface targets with electro-optical sensors in a harbor environment
NASA Astrophysics Data System (ADS)
Bouma, Henri; de Lange, Dirk-Jan J.; van den Broek, Sebastiaan P.; Kemp, Rob A. W.; Schwering, Piet B. W.
2008-10-01
In modern warfare scenarios, naval ships must operate in coastal environments. These complex environments, in bays and narrow straits with cluttered littoral backgrounds and many civilian vessels, may contain asymmetric threats from fast targets such as rigid-hull inflatable boats (RHIBs), cabin boats, and jet-skis. Optical sensors, in combination with image enhancement and automatic detection, help an operator reduce response time, which is crucial for the protection of naval and land-based supporting forces. In this paper, we present our work on automatic detection of small surface targets, which includes multi-scale horizon detection and robust estimation of the background intensity. To evaluate the performance of our detection technology, data were recorded with both infrared and visible-light cameras in a coastal zone and in a harbor environment. Multiple small targets were used during these trials. Results of this evaluation are presented in this paper.
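The robust background-intensity estimation mentioned in the abstract can be illustrated with a minimal sketch. This is not the authors' algorithm; the function name, the per-row median/MAD background model, and the threshold `k` are all illustrative assumptions. The idea is simply that robust statistics keep isolated bright targets from biasing the background estimate.

```python
import numpy as np

def detect_targets(frame, k=5.0):
    """Flag pixels that stand well above a robust background estimate.

    Each image row's background level is estimated with the median and
    its spread with the median absolute deviation (MAD), so a few bright
    target pixels do not pull the estimate upward.
    """
    bg = np.median(frame, axis=1, keepdims=True)            # per-row background
    mad = np.median(np.abs(frame - bg), axis=1, keepdims=True)
    sigma = 1.4826 * mad + 1e-9                             # MAD -> std (Gaussian)
    return (frame - bg) / sigma > k                         # boolean detection mask

# Toy example: a flat sea background with one bright target pixel
frame = np.full((4, 8), 10.0)
frame[2, 3] = 100.0
mask = detect_targets(frame)
```

On this toy frame, only the single bright pixel is flagged; a real system would of course add horizon masking and temporal filtering on top of such a per-frame test.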
Use of an automatic resistivity system for detecting abandoned mine workings
DOE Office of Scientific and Technical Information (OSTI.GOV)
Peters, W.R.; Burdick, R.G.
1983-01-01
A high-resolution earth resistivity system has been designed and constructed for use as a means of detecting abandoned coal mine workings. The automatic pole-dipole earth resistivity technique has already been applied to the detection of subsurface voids for military applications. The hardware and software of the system are described, together with applications for surveying and mapping abandoned coal mine workings. Field tests are presented to illustrate the detection of both air-filled and water-filled mine workings.
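For the pole-dipole array named in the abstract, apparent resistivity follows from the standard geometric factor for that electrode geometry. The following sketch uses the textbook formula K = 2πn(n+1)a; the function name and the example numbers are illustrative, not taken from the paper.

```python
import math

def pole_dipole_apparent_resistivity(voltage, current, a, n):
    """Apparent resistivity (ohm-m) for a pole-dipole array.

    a : potential-dipole electrode spacing in metres
    n : separation factor (current electrode sits n*a from the dipole)
    """
    k = 2.0 * math.pi * n * (n + 1) * a   # geometric factor for pole-dipole
    return k * voltage / current

# Along a survey profile, an air-filled working shows as a local
# resistivity high, a water-filled one as a relative low.
rho = pole_dipole_apparent_resistivity(voltage=0.05, current=0.1, a=10.0, n=2)
```

Mapping a void then amounts to profiling such apparent-resistivity values along the survey line and looking for anomalies against the host-rock background.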
An image-based approach for automatically detecting the five-true-leaves stage of cotton
NASA Astrophysics Data System (ADS)
Li, Yanan; Cao, Zhiguo; Wu, Xi; Yu, Zhenghong; Wang, Yu; Bai, Xiaodong
2013-10-01
Cotton, as one of the four major economic crops, is of great significance to the development of the national economy. Monitoring cotton growth status by automatic image-based detection makes sense because of its low cost, low labor demand, and capability for continuous observation. However, little research has been done on close observation of the different growth stages of field crops using digital cameras. We therefore developed algorithms to detect growth information and automatically predict the starting dates of cotton growth stages. In this paper, we introduce an approach for automatically detecting the five-true-leaves stage, a critical growth stage of cotton. Because illumination changes and the complex background distort coverage measurements, global coverage cannot serve as the sole criterion. Consequently, we propose a new method that determines the five-true-leaves stage by detecting the number of nodes between the main stem and the side stems, based on the agricultural meteorological observation specification. The difference between the starting date predicted by the proposed algorithm and that recorded by manual observation is no more than one day.
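The decision rule described above, declaring the stage once enough main-stem nodes are detected, can be sketched as follows. The image-processing step that actually counts nodes is not reproduced here; the function name, the threshold of five, and the daily counts are illustrative assumptions.

```python
def reached_five_true_leaves(node_counts, required=5):
    """Return the index of the first observation (e.g. day) at which the
    detected number of main-stem nodes (junctions with side stems)
    reaches the threshold, or None if it never does."""
    for day, count in enumerate(node_counts):
        if count >= required:
            return day
    return None

# Hypothetical daily node counts produced by the image analysis
onset = reached_five_true_leaves([2, 3, 4, 4, 5, 6])  # -> 4 (fifth day)
```

Comparing `onset` against the manually observed date gives the one-day error figure the abstract reports.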
Precision Targeting With a Tracking Adaptive Optics Scanning Laser Ophthalmoscope
2006-01-01
... automatic high-resolution mosaic generation, and automatic blink detection and tracking re-lock were also tested. The system has the potential to become an ... structures can lead to earlier detection of retinal diseases such as age-related macular degeneration (AMD) and diabetic retinopathy (DR). Combined ... optics systems sense perturbations in the detected wave-front and apply corrections to an optical element that flatten the wave-front and allow near ...
Director, Operational Test and Evaluation FY 2004 Annual Report
2004-01-01
... HIGH) Space Based Radar (SBR), Sensor Fuzed Weapon (SFW) P3I (CBU-97/B), Small Diameter Bomb (SDB), Secure Mobile Anti-Jam Reliable Tactical Terminal ... detection, identification, and sampling capability for both fixed-site and mobile operations. The system must automatically detect and identify up to ten ... staffing within the Services. SYSTEM DESCRIPTION AND MISSION: The Services envision JCAD as a hand-held device that automatically detects, identifies, and ...
NASA Astrophysics Data System (ADS)
Agurto, C.; Barriga, S.; Murray, V.; Murillo, S.; Zamora, G.; Bauman, W.; Pattichis, M.; Soliz, P.
2011-03-01
In the United States and most of the western world, the leading causes of vision impairment and blindness are age-related macular degeneration (AMD), diabetic retinopathy (DR), and glaucoma. In the last decade, research in automatic detection of retinal lesions associated with eye diseases has produced several automatic systems for detection and screening of AMD, DR, and glaucoma. However, advanced, sight-threatening stages of DR and AMD can present with lesions not commonly addressed by current approaches to automatic screening. In this paper we present an automatic eye-screening system based on multiscale Amplitude Modulation-Frequency Modulation (AM-FM) decompositions that addresses not only the early stages but also advanced stages of retinal and optic nerve disease. Ten experiments were performed in which abnormal features such as neovascularization, drusen, exudates, pigmentation abnormalities, geographic atrophy (GA), and glaucoma were classified. The algorithm achieved a detection accuracy ranging from 0.77 to 0.98 in area under the ROC curve on a set of 810 images. When set to a specificity of 0.60, the sensitivity of the algorithm for the detection of abnormal features ranged between 0.88 and 1.00. Our system demonstrates that, given an appropriate training set, a single algorithm can detect a broad range of eye diseases.
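The paper uses multiscale 2-D AM-FM decompositions; as a simplified illustration of the underlying idea, a 1-D AM-FM decomposition can be computed from the analytic signal, where the amplitude envelope (AM) is the magnitude and the instantaneous frequency (FM) is the derivative of the unwrapped phase. This sketch is a one-dimensional stand-in under that assumption, not the authors' filter-bank method.

```python
import numpy as np

def am_fm_1d(signal):
    """1-D AM-FM decomposition via the analytic signal.

    Returns (amplitude envelope, instantaneous frequency in cycles/sample).
    """
    n = len(signal)
    spec = np.fft.fft(signal)
    h = np.zeros(n)                     # weights building the analytic signal
    h[0] = 1.0
    if n % 2 == 0:
        h[n // 2] = 1.0
        h[1:n // 2] = 2.0
    else:
        h[1:(n + 1) // 2] = 2.0
    analytic = np.fft.ifft(spec * h)    # negative frequencies zeroed
    am = np.abs(analytic)               # amplitude modulation
    phase = np.unwrap(np.angle(analytic))
    fm = np.diff(phase) / (2.0 * np.pi) # frequency modulation
    return am, fm

# A pure cosine at exactly 0.125 cycles/sample: AM = 1, FM = 0.125
t = np.arange(256)
am, fm = am_fm_1d(np.cos(2 * np.pi * 0.125 * t))
```

In the paper's 2-D setting the same AM and FM quantities, extracted per scale, serve as texture features fed to the lesion classifiers.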
Robust Spacecraft Component Detection in Point Clouds.
Wei, Quanmao; Jiang, Zhiguo; Zhang, Haopeng
2018-03-21
Automatic component detection of spacecraft can assist in on-orbit operation and space situational awareness. Spacecraft are generally composed of solar panels and cuboidal or cylindrical modules, components that can be represented by simple geometric primitives such as planes, cuboids, and cylinders. Based on this prior, we propose a robust scheme to automatically detect such basic components of spacecraft in three-dimensional (3D) point clouds. In the proposed scheme, cylinders are first detected by iterating energy-based geometric model fitting and cylinder parameter estimation. Planes are then detected by the Hough transform and further described as bounded patches via their minimum bounding rectangles. Finally, cuboids are detected from pair-wise geometric relations among the detected patches. After successive detection of cylinders, planar patches, and cuboids, a mid-level geometric representation of the spacecraft is delivered. We tested the proposed component detection scheme on spacecraft 3D point clouds synthesized from computer-aided design (CAD) models and on point clouds recovered by image-based reconstruction. Experimental results show that the proposed scheme detects the basic geometric components effectively and is robust against noise and variations in point density.
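The paper detects planes with a Hough transform; a common alternative that illustrates the same primitive-extraction step is RANSAC plane fitting. The sketch below is that alternative under stated assumptions (function name, iteration count, and inlier threshold are all illustrative), not the authors' implementation.

```python
import numpy as np

def ransac_plane(points, n_iter=200, threshold=0.05, rng=None):
    """Fit one dominant plane to a 3-D point cloud with RANSAC.

    Returns (unit normal, offset d, inlier mask) for the plane n.x + d = 0.
    """
    rng = np.random.default_rng(rng)
    best_inliers = np.zeros(len(points), dtype=bool)
    best = (None, None)
    for _ in range(n_iter):
        p = points[rng.choice(len(points), 3, replace=False)]
        n = np.cross(p[1] - p[0], p[2] - p[0])
        norm = np.linalg.norm(n)
        if norm < 1e-12:
            continue                          # degenerate (collinear) sample
        n = n / norm
        d = -n.dot(p[0])
        inliers = np.abs(points @ n + d) < threshold
        if inliers.sum() > best_inliers.sum():
            best_inliers, best = inliers, (n, d)
    return best[0], best[1], best_inliers

# Synthetic "solar panel": 200 points on the z = 0 plane plus 10 outliers
gen = np.random.default_rng(0)
pts = np.column_stack([gen.uniform(-1, 1, 200),
                       gen.uniform(-1, 1, 200),
                       np.zeros(200)])
pts = np.vstack([pts, gen.uniform(-1, 1, (10, 3)) + [0, 0, 2]])
normal, d, mask = ransac_plane(pts, rng=0)
```

In the paper's pipeline, a patch like the one recovered here would then be clipped to its minimum bounding rectangle and tested for pair-wise relations against other patches to assemble cuboids.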