DOE Office of Scientific and Technical Information (OSTI.GOV)
Erickson, Jason P.; Carlson, Deborah K.; Ortiz, Anne
Accurate location of seismic events is crucial for nuclear explosion monitoring. There are several sources of error in seismic location that must be taken into account to obtain high confidence results. Most location techniques account for uncertainties in the phase arrival times (measurement error) and the bias of the velocity model (model error), but they do not account for the uncertainty of the velocity model bias. By determining and incorporating this uncertainty in the location algorithm we seek to improve the accuracy of the calculated locations and uncertainty ellipses. In order to correct for deficiencies in the velocity model, it is necessary to apply station-specific corrections to the predicted arrival times. Both master event and multiple event location techniques assume that the station corrections are known perfectly, when in reality there is an uncertainty associated with these corrections. For multiple event location algorithms that calculate station corrections as part of the inversion, it is possible to determine the variance of the corrections. The variance can then be used to weight the arrivals associated with each station, thereby giving more influence to stations with consistent corrections. We have modified an existing multiple event location program (based on PMEL, Pavlis and Booker, 1983). We are exploring weighting arrivals with the inverse of the station correction standard deviation as well as using the conditional probability of the calculated station corrections. This is in addition to the weighting already given to the measurement and modeling error terms. We re-locate a group of mining explosions that occurred at Black Thunder, Wyoming, and compare the results to those generated without accounting for station correction uncertainty.
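The following is a minimal, self-contained sketch of the weighting idea described above, not the authors' modified PMEL code: station corrections estimated across several events are summarized by their standard deviation, and each station's arrivals are weighted by the inverse of that standard deviation in a simple weighted least-squares epicenter update. The station geometry, velocity, and correction table are invented for illustration.

```python
# Sketch only: inverse-standard-deviation weighting of arrivals in a location step.
import numpy as np

v = 6.0  # assumed constant P velocity, km/s
stations = np.array([[0.0, 50.0], [40.0, -10.0], [-30.0, -30.0], [60.0, 40.0]])  # x, y in km

# Station corrections (s) estimated independently from several events; columns = stations.
corrections = np.array([
    [0.30, -0.10, 0.05, 0.40],
    [0.35, -0.12, 0.55, 0.42],
    [0.28, -0.08, -0.45, 0.38],
])
mean_corr = corrections.mean(axis=0)
std_corr = corrections.std(axis=0, ddof=1)
weights = 1.0 / std_corr            # stations with consistent corrections get more influence
weights /= weights.sum()

def predicted_tt(epicenter):
    d = np.linalg.norm(stations - epicenter, axis=1)
    return d / v + mean_corr

def locate(obs_times, guess, n_iter=10):
    """Weighted Gauss-Newton epicenter update (origin time absorbed by demeaning)."""
    x = np.array(guess, dtype=float)
    for _ in range(n_iter):
        res = obs_times - predicted_tt(x)
        res -= np.average(res, weights=weights)        # remove the common origin-time shift
        d = np.linalg.norm(stations - x, axis=1)
        G = -(stations - x) / (v * d[:, None])         # d(travel time)/d(x, y)
        W = np.diag(weights)
        dx, *_ = np.linalg.lstsq(W @ G, W @ res, rcond=None)
        x += dx
    return x

true_epi = np.array([10.0, 5.0])
obs = np.linalg.norm(stations - true_epi, axis=1) / v + mean_corr
print(locate(obs, guess=[0.0, 0.0]))   # should recover roughly (10, 5)
```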
Trust index based fault tolerant multiple event localization algorithm for WSNs.
Xu, Xianghua; Gao, Xueyong; Wan, Jian; Xiong, Naixue
2011-01-01
This paper investigates the use of wireless sensor networks for multiple event source localization using binary information from the sensor nodes. The events continually emit signals whose strength attenuates in inverse proportion to the distance from the source. In this context, faults occur for various reasons and are manifested when a node reports a wrong decision. In order to reduce the impact of node faults on the accuracy of multiple event localization, we introduce a trust index model to evaluate the fidelity of the information that the nodes report and use it in the event detection process, and propose the Trust Index based Subtract on Negative Add on Positive (TISNAP) localization algorithm, which reduces the impact of faulty nodes on event localization by decreasing their trust index, to improve the accuracy of event localization and the performance of fault tolerance for multiple event source localization. The algorithm includes three phases: first, the sink identifies the cluster nodes to determine the number of events that occurred in the entire region by analyzing the binary data reported by all nodes; then, it constructs the likelihood matrix related to the cluster nodes and estimates the location of all events according to the alarmed status and trust index of the nodes around the cluster nodes. Finally, the sink updates the trust index of all nodes according to the fidelity of their information in the previous reporting cycle. The algorithm improves the accuracy of localization and the performance of fault tolerance in multiple event source localization. The experimental results show that when the probability of node fault is close to 50%, the algorithm can still accurately determine the number of events and achieves better localization accuracy than other algorithms.
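As a rough illustration of the "subtract on negative, add on positive" trust update, and not the paper's full likelihood-matrix procedure, the sketch below raises or lowers each node's trust index according to whether its binary report agrees with the fused decision, and weights the fused decision by trust. The step sizes, fault model, and fusion rule are assumptions.

```python
# Toy trust-index update in the spirit of TISNAP; all parameters are illustrative.
import numpy as np

def update_trust(trust, reports, reference, step_up=0.05, step_down=0.10):
    """Raise trust for nodes whose binary report matches the fused reference decision,
    lower it otherwise, and clip to [0, 1]."""
    agree = reports == reference
    trust = np.where(agree, trust + step_up, trust - step_down)
    return np.clip(trust, 0.0, 1.0)

def fused_decision(reports, trust, threshold=0.5):
    """Trust-weighted vote: the fraction of trust mass that alarmed."""
    return float(np.sum(trust * reports) / np.sum(trust)) >= threshold

rng = np.random.default_rng(0)
n_nodes, n_cycles = 20, 50
trust = np.full(n_nodes, 0.5)
faulty = rng.random(n_nodes) < 0.3           # ~30% of nodes report wrong decisions

for _ in range(n_cycles):
    truth = rng.integers(0, 2)               # whether an event is actually present
    reports = np.full(n_nodes, truth)
    reports[faulty] = rng.integers(0, 2, faulty.sum())   # faulty nodes answer randomly
    decision = fused_decision(reports, trust)
    trust = update_trust(trust, reports, int(decision))

print("mean trust, healthy nodes:", trust[~faulty].mean())
print("mean trust, faulty nodes :", trust[faulty].mean())
```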
Multiple-Event Seismic Location Using the Markov-Chain Monte Carlo Technique
NASA Astrophysics Data System (ADS)
Myers, S. C.; Johannesson, G.; Hanley, W.
2005-12-01
We develop a new multiple-event location algorithm (MCMCloc) that utilizes the Markov-Chain Monte Carlo (MCMC) method. Unlike most inverse methods, the MCMC approach produces a suite of solutions, each of which is consistent with observations and prior estimates of data and model uncertainties. Model parameters in MCMCloc consist of event hypocenters and travel-time predictions. Data are arrival time measurements and phase assignments. Posterior estimates of event locations, path corrections, pick errors, and phase assignments are made through analysis of the posterior suite of acceptable solutions. Prior uncertainty estimates include correlations between travel-time predictions, correlations between measurement errors, the probability of misidentifying one phase as another, and the probability of spurious data. Inclusion of prior constraints on location accuracy allows direct utilization of ground-truth locations or well-constrained location parameters (e.g. from InSAR) that aid in the accuracy of the solution. Implementation of a correlation structure for travel-time predictions allows MCMCloc to operate over arbitrarily large geographic areas. Transition in behavior between a multiple-event locator for tightly clustered events and a single-event locator for solitary events is controlled by the spatial correlation of travel-time predictions. We test the MCMC locator on a regional data set of Nevada Test Site nuclear explosions. Event locations and origin times are known for these events, allowing us to test the features of MCMCloc using a high-quality ground truth data set. Preliminary tests suggest that MCMCloc provides excellent relative locations, often outperforming traditional multiple-event location algorithms, and excellent absolute locations are attained when constraints from one or more ground-truth events are included. When phase assignments are switched, we find that MCMCloc properly corrects the error when predicted arrival times are separated by several seconds. In cases where the predicted arrival times are within the combined uncertainty of prediction and measurement errors, MCMCloc determines the probability of one or the other phase assignment and propagates this uncertainty into all model parameters. We find that MCMCloc is a promising method for simultaneously locating large, geographically distributed data sets. Because we incorporate prior knowledge on many parameters, MCMCloc is ideal for combining trusted data with data of unknown reliability. This work was performed under the auspices of the U.S. Department of Energy by the University of California Lawrence Livermore National Laboratory under contract No. W-7405-Eng-48, Contribution UCRL-ABS-215048.
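To make the "suite of solutions" idea concrete, here is a stripped-down Metropolis sampler over a single epicenter. It is not MCMCloc itself (which also samples path corrections, pick errors, and phase assignments for many events jointly); the velocity model, station geometry, and pick noise are invented.

```python
# Minimal Metropolis sampling of an epicenter: the output is a posterior cloud, not a point.
import numpy as np

rng = np.random.default_rng(1)
v = 6.0
stations = np.array([[0, 60], [50, -20], [-40, -40], [70, 30], [-60, 20]], dtype=float)
true_epi = np.array([12.0, -5.0])
sigma = 0.15                                     # assumed pick uncertainty, s
obs = np.linalg.norm(stations - true_epi, axis=1) / v + rng.normal(0, sigma, len(stations))

def log_likelihood(epi):
    pred = np.linalg.norm(stations - epi, axis=1) / v
    r = obs - pred
    r -= r.mean()                                # crude removal of the unknown origin time
    return -0.5 * np.sum((r / sigma) ** 2)

samples, x = [], np.array([0.0, 0.0])
logp = log_likelihood(x)
for _ in range(20000):
    cand = x + rng.normal(0, 2.0, 2)             # random-walk proposal, km
    logp_cand = log_likelihood(cand)
    if np.log(rng.random()) < logp_cand - logp:  # Metropolis accept/reject
        x, logp = cand, logp_cand
    samples.append(x.copy())

post = np.array(samples[5000:])                  # discard burn-in
print("posterior mean:", post.mean(axis=0), " posterior std:", post.std(axis=0))
```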
Strategies for automatic processing of large aftershock sequences
NASA Astrophysics Data System (ADS)
Kvaerna, T.; Gibbons, S. J.
2017-12-01
Aftershock sequences following major earthquakes present great challenges to seismic bulletin generation. The analyst resources needed to locate events increase with increased event numbers as the quality of underlying, fully automatic, event lists deteriorates. While current pipelines, designed a generation ago, are usually limited to single passes over the raw data, modern systems also allow multiple passes. Processing the raw data from each station currently generates parametric data streams that are later subject to phase-association algorithms which form event hypotheses. We consider a major earthquake scenario and propose to define a region of likely aftershock activity in which we will detect and accurately locate events using a separate, specially targeted, semi-automatic process. This effort may use either pattern detectors or more general algorithms that cover wider source regions without requiring waveform similarity. An iterative procedure to generate automatic bulletins would incorporate all the aftershock event hypotheses generated by the auxiliary process, and filter all phases from these events from the original detection lists prior to a new iteration of the global phase-association algorithm.
A General Event Location Algorithm with Applications to Eclipse and Station Line-of-Sight
NASA Technical Reports Server (NTRS)
Parker, Joel J. K.; Hughes, Steven P.
2011-01-01
A general-purpose algorithm for the detection and location of orbital events is developed. The proposed algorithm reduces the problem to a global root-finding problem by mapping events of interest (such as eclipses, station access events, etc.) to continuous, differentiable event functions. A stepping algorithm and a bracketing algorithm are used to detect and locate the roots. Examples of event functions and the stepping/bracketing algorithms are discussed, along with results indicating performance and accuracy in comparison to commercial tools across a variety of trajectories.
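The step-then-bracket pattern described above can be illustrated with a few lines of code: march along the trajectory with a fixed step, watch the event function for sign changes, then refine each bracketed root. The event function here is a toy "visibility" curve, not a real orbit propagation.

```python
# Sketch of stepping + bracketing over a continuous, differentiable event function.
import numpy as np
from scipy.optimize import brentq

def event_function(t):
    """Positive while the event condition holds (e.g., target above the horizon)."""
    return np.sin(2 * np.pi * t / 90.0) - 0.3     # t in minutes; crosses zero several times

def locate_events(t0, t1, step=1.0):
    roots = []
    t_prev, f_prev = t0, event_function(t0)
    t = t0 + step
    while t <= t1:
        f = event_function(t)
        if f_prev == 0.0:
            roots.append(t_prev)
        elif f_prev * f < 0.0:                    # sign change: a root lies in [t_prev, t]
            roots.append(brentq(event_function, t_prev, t))
        t_prev, f_prev = t, f
        t += step
    return roots

for r in locate_events(0.0, 360.0):
    kind = "start" if event_function(r + 1e-3) > 0 else "end"
    print(f"{kind:5s} crossing at t = {r:8.3f} min")
```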
A probabilistic framework for single-station location of seismicity on Earth and Mars
NASA Astrophysics Data System (ADS)
Böse, M.; Clinton, J. F.; Ceylan, S.; Euchner, F.; van Driel, M.; Khan, A.; Giardini, D.; Lognonné, P.; Banerdt, W. B.
2017-01-01
Locating the source of seismic energy from a single three-component seismic station is associated with large uncertainties, originating from challenges in identifying seismic phases, as well as inevitable pick and model uncertainties. The challenge is even higher for planets such as Mars, where interior structure is a priori largely unknown. In this study, we address the single-station location problem by developing a probabilistic framework that combines location estimates from multiple algorithms to estimate the probability density function (PDF) for epicentral distance, back azimuth, and origin time. Each algorithm uses independent and complementary information in the seismic signals. Together, the algorithms allow locating seismicity ranging from local to teleseismic quakes. Distances and origin times of large regional and teleseismic events (M > 5.5) are estimated from observed and theoretical body- and multi-orbit surface-wave travel times. The latter are picked from the maxima in the waveform envelopes in various frequency bands. For smaller events at local and regional distances, only first arrival picks of body waves are used, possibly in combination with fundamental Rayleigh R1 waveform maxima where detectable; depth phases, such as pP or PmP, help constrain source depth and improve distance estimates. Back azimuth is determined from the polarization of the Rayleigh- and/or P-wave phases. When seismic signals are good enough for multiple approaches to be used, estimates from the various methods are combined through the product of their PDFs, resulting in an improved event location and reduced uncertainty range estimate compared to the results obtained from each algorithm independently. To verify our approach, we use both earthquake recordings from existing Earth stations and synthetic Martian seismograms. The Mars synthetics are generated with a full-waveform scheme (AxiSEM) using spherically-symmetric seismic velocity, density and attenuation models of Mars that incorporate existing knowledge of Mars internal structure, and include expected ambient and instrumental noise. While our probabilistic framework is developed mainly for application to Mars in the context of the upcoming InSight mission, it is also relevant for locating seismic events on Earth in regions with sparse instrumentation.
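A toy illustration of the PDF-combination step follows: two independent epicentral-distance estimates (for example, one from body-wave picks and one from surface-wave envelope maxima) are multiplied and renormalized. The Gaussian shapes and numbers are invented; the framework itself combines several algorithms and also treats back azimuth and origin time.

```python
# Combining independent distance PDFs by taking their product and renormalizing.
import numpy as np

distance = np.linspace(0, 180, 1801)              # epicentral distance, degrees

def gaussian_pdf(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

pdf_body = gaussian_pdf(distance, mu=62.0, sigma=8.0)     # broad body-wave estimate
pdf_surface = gaussian_pdf(distance, mu=55.0, sigma=5.0)  # surface-wave estimate

combined = pdf_body * pdf_surface
combined /= np.trapz(combined, distance)                  # renormalize the product

best = distance[np.argmax(combined)]
cdf = np.cumsum(combined); cdf /= cdf[-1]                 # 68% credible interval
lo, hi = np.interp([0.16, 0.84], cdf, distance)
print(f"combined estimate: {best:.1f} deg, 68% interval [{lo:.1f}, {hi:.1f}] deg")
```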
Calibrated Multiple Event Relocations of the Central and Eastern United States
NASA Astrophysics Data System (ADS)
Yeck, W. L.; Benz, H.; McNamara, D. E.; Bergman, E.; Herrmann, R. B.; Myers, S. C.
2015-12-01
Earthquake locations are first-order observables that form the basis of a wide range of seismic analyses. Currently, the ANSS catalog primarily contains published single-event earthquake locations that rely on assumed 1D velocity models. Increasing the accuracy of cataloged earthquake hypocenter locations and origin times and constraining their associated errors can improve our understanding of Earth structure and have a fundamental impact on subsequent seismic studies. Multiple-event relocation algorithms often increase the precision of relative earthquake hypocenters but are hindered by their limited ability to provide realistic location uncertainties for individual earthquakes. Recently, a Bayesian approach to the multiple event relocation problem has proven to have many benefits including the ability to: (1) handle large data sets; (2) easily incorporate a priori hypocenter information; (3) model phase assignment errors; and, (4) correct for errors in the assumed travel time model. In this study we employ Bayesloc [Myers et al., 2007, 2009] to relocate earthquakes in the Central and Eastern United States from 1964 to the present. We relocate ~11,000 earthquakes with a dataset of ~439,000 arrival time observations. Our dataset includes arrival-time observations from the ANSS catalog supplemented with arrival-time data from the Reviewed ISC Bulletin (prior to 1981), targeted local studies, and arrival-time data from the TA Array. One significant benefit of the Bayesloc algorithm is its ability to incorporate a priori constraints on the probability distributions of specific earthquake location parameters. To constrain the inversion, we use high-quality calibrated earthquake locations from local studies, including studies from: Raton Basin, Colorado; Mineral, Virginia; Guy, Arkansas; Cheneville, Quebec; Oklahoma; and Mt. Carmel, Illinois. We also add depth constraints to 232 earthquakes from regional moment tensors. Finally, we add constraints from four historic (1964-1973) ground truth events from a verification database. We (1) evaluate our ability to improve our location estimations, (2) use improved locations to evaluate Earth structure in seismically active regions, and (3) examine improvements to the estimated locations of historic large magnitude earthquakes.
LLNL Location and Detection Research
DOE Office of Scientific and Technical Information (OSTI.GOV)
Myers, S C; Harris, D B; Anderson, M L
2003-07-16
We present two LLNL research projects in the topical areas of location and detection. The first project assesses epicenter accuracy using a multiple-event location algorithm, and the second project employs waveform subspace correlation to detect and identify events at Fennoscandian mines. Accurately located seismic events are the bases of location calibration. A well-characterized set of calibration events enables new Earth model development, empirical calibration, and validation of models. In a recent study, Bondar et al. (2003) develop network coverage criteria for assessing the accuracy of event locations that are determined using single-event, linearized inversion methods. These criteria are conservative and are meant for application to large bulletins where emphasis is on catalog completeness and any given event location may be improved through detailed analysis or application of advanced algorithms. Relative event location techniques are touted as advancements that may improve absolute location accuracy by (1) ensuring an internally consistent dataset, (2) constraining a subset of events to known locations, and (3) taking advantage of station and event correlation structure. Here we present the preliminary phase of this work in which we use Nevada Test Site (NTS) nuclear explosions, with known locations, to test the effect of travel-time model accuracy on relative location accuracy. Like previous studies, we find that the reference velocity-model and relative-location accuracy are highly correlated. We also find that metrics based on the travel-time residuals of relocated events are not reliable for assessing either velocity-model or relative-location accuracy. In the topical area of detection, we develop specialized correlation (subspace) detectors for the principal mines surrounding the ARCES station located in the European Arctic. Our objective is to provide efficient screens for explosions occurring in the mines of the Kola Peninsula (Kovdor, Zapolyarny, Olenogorsk, Khibiny) and the major iron mines of northern Sweden (Malmberget, Kiruna). In excess of 90% of the events detected by the ARCES station are mining explosions, and a significant fraction are from these northern mining groups. The primary challenge in developing waveform correlation detectors is the degree of variation in the source time histories of the shots, which can result in poor correlation among events even in close proximity. Our approach to solving this problem is to use lagged subspace correlation detectors, which offer some prospect of compensating for variation and uncertainty in source time functions.
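A schematic subspace detector is sketched below: an orthonormal basis is built from aligned template waveforms of previous shots (via the SVD), and the detection statistic is the fraction of energy in each sliding data window captured by projection onto that basis. Templates and data here are synthetic stand-ins; the LLNL detectors operate on recorded mining explosions at ARCES and additionally allow for lags.

```python
# Energy-capture subspace detection statistic on synthetic data (illustration only).
import numpy as np

rng = np.random.default_rng(2)
nt = 200                                          # template length, samples

def synth_shot(scale, jitter):
    t = np.arange(nt)
    w = np.sin(2 * np.pi * t / 20.0) * np.exp(-t / 60.0)
    return scale * np.roll(w, jitter) + 0.05 * rng.standard_normal(nt)

templates = np.column_stack([synth_shot(1.0, j) for j in (-2, 0, 1, 3)])
U, s, _ = np.linalg.svd(templates, full_matrices=False)
basis = U[:, :2]                                  # keep the dominant 2 singular vectors

data = 0.1 * rng.standard_normal(5000)
data[3000:3000 + nt] += synth_shot(0.8, 2)        # bury one new shot in the noise

stat = np.zeros(len(data) - nt)
for i in range(len(stat)):
    w = data[i:i + nt]
    stat[i] = np.sum((basis.T @ w) ** 2) / np.sum(w ** 2)   # energy fraction in [0, 1]

print("peak statistic %.2f at sample %d" % (stat.max(), stat.argmax()))
```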
NASA Astrophysics Data System (ADS)
Burman, Jerry; Hespanha, Joao; Madhow, Upamanyu; Pham, Tien
2011-06-01
A team consisting of Teledyne Scientific Company, the University of California at Santa Barbara and the Army Research Laboratory* is developing technologies in support of automated data exfiltration from heterogeneous battlefield sensor networks to enhance situational awareness for dismounts and command echelons. Unmanned aerial vehicles (UAV) provide an effective means to autonomously collect data from a sparse network of unattended ground sensors (UGSs) that cannot communicate with each other. UAVs are used to reduce the system reaction time by generating autonomous collection routes that are data-driven. Bio-inspired techniques for search provide a novel strategy to detect, capture and fuse data. A fast and accurate method has been developed to localize an event by fusing data from a sparse number of UGSs. This technique uses a bio-inspired algorithm based on chemotaxis or the motion of bacteria seeking nutrients in their environment. A unique acoustic event classification algorithm was also developed based on using swarm optimization. Additional studies addressed the problem of routing multiple UAVs, optimally placing sensors in the field and locating the source of gunfire at helicopters. A field test was conducted in November of 2009 at Camp Roberts, CA. The field test results showed that a system controlled by bio-inspired software algorithms can autonomously detect and locate the source of an acoustic event with very high accuracy and visually verify the event. In nine independent test runs of a UAV, the system autonomously located the position of an explosion nine times with an average accuracy of 3 meters. The time required to perform source localization using the UAV was on the order of a few minutes based on UAV flight times. In June 2011, additional field tests of the system will be performed and will include multiple acoustic events, optimal sensor placement based on acoustic phenomenology and the use of the International Technology Alliance (ITA) Sensor Network Fabric (IBM).
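For intuition only, a run-and-tumble ("chemotaxis") search can be sketched in a few lines: the searcher keeps moving along its current heading while a fused objective built from the ground-sensor readings keeps improving, and tumbles to a random new heading when it stops improving. The sensor model, geometry, and all numbers below are invented; the fielded system's fusion, classification, and flight-planning steps are not reproduced here.

```python
# Toy chemotaxis-style search for an acoustic event location from sparse UGS readings.
import numpy as np

rng = np.random.default_rng(12)
source = np.array([250.0, 180.0])                      # true acoustic event location, m
ugs = rng.uniform(0.0, 500.0, (8, 2))                  # sparse unattended ground sensors, m
measured = 1.0 / (1.0 + np.linalg.norm(ugs - source, axis=1) / 50.0)   # toy amplitudes

def fused_objective(p):
    """Agreement between modeled and measured UGS amplitudes if the event were at p."""
    modeled = 1.0 / (1.0 + np.linalg.norm(ugs - p, axis=1) / 50.0)
    return -np.sum((measured - modeled) ** 2)

pos = np.array([20.0, 20.0])                           # initial guess
heading = rng.uniform(0.0, 2.0 * np.pi)
best, step = fused_objective(pos), 10.0
for _ in range(3000):
    trial = pos + step * np.array([np.cos(heading), np.sin(heading)])
    score = fused_objective(trial)
    if score > best:                                   # "run": keep heading while improving
        pos, best = trial, score
    else:                                              # "tumble": new heading, smaller step
        heading = rng.uniform(0.0, 2.0 * np.pi)
        step = max(0.5, step * 0.98)

print("estimated event location:", pos.round(1), "  true location:", source)
```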
NASA Astrophysics Data System (ADS)
Fischer, M.; Caprio, M.; Cua, G. B.; Heaton, T. H.; Clinton, J. F.; Wiemer, S.
2009-12-01
The Virtual Seismologist (VS) algorithm is a Bayesian approach to earthquake early warning (EEW) being implemented by the Swiss Seismological Service at ETH Zurich. The application of Bayes’ theorem in earthquake early warning states that the most probable source estimate at any given time is a combination of contributions from a likelihood function that evolves in response to incoming data from the on-going earthquake, and selected prior information, which can include factors such as network topology, the Gutenberg-Richter relationship or previously observed seismicity. The VS algorithm was one of three EEW algorithms involved in the California Integrated Seismic Network (CISN) real-time EEW testing and performance evaluation effort. Its compelling real-time performance in California over the last three years has led to its inclusion in the new USGS-funded effort to develop key components of CISN ShakeAlert, a prototype EEW system that could potentially be implemented in California. A significant portion of VS code development was supported by the SAFER EEW project in Europe. We discuss recent enhancements to the VS EEW algorithm. We developed and continue to test a multiple-threshold event detection scheme, which uses different association / location approaches depending on the peak amplitudes associated with an incoming P pick. With this scheme, an event with sufficiently high initial amplitudes can be declared on the basis of a single station, maximizing warning times for damaging events for which EEW is most relevant. Smaller, non-damaging events, which will have lower initial amplitudes, will require more picks to be declared an event to reduce false alarms. This transforms the VS codes from a regional EEW approach reliant on traditional location estimation (and its requirement of at least 4 picks as implemented by the Binder Earthworm phase associator) to a hybrid on-site/regional approach capable of providing a continuously evolving stream of EEW information starting from the first P-detection. Offline analysis on Swiss and California waveform datasets indicates that the multiple-threshold approach is faster and more reliable for larger events than the earlier version of the VS codes. This multiple-threshold approach is well-suited for implementation on a wide range of devices, from embedded processor systems installed at seismic stations, to small autonomous networks for local warnings, to large-scale regional networks such as the CISN. In addition, we quantify the influence of systematic use of prior information and Vs30-based corrections for site amplification on VS magnitude estimation performance, and describe how components of the VS algorithm will be integrated into non-EEW standard network processing procedures at CHNet, the national broadband / strong motion network in Switzerland. These enhancements to the VS codes will be transitioned from off-line to real-time testing at CHNet in Europe in the coming months, and will be incorporated into the development of key components of the CISN ShakeAlert prototype system in California.
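The multiple-threshold idea can be illustrated with a small decision rule, sketched below with placeholder thresholds and units that are not the values used in the Virtual Seismologist codes: how many P picks are required before an event is declared depends on the peak amplitude accompanying the first pick.

```python
# Hedged sketch of amplitude-dependent pick requirements for event declaration.
from dataclasses import dataclass

@dataclass
class Pick:
    station: str
    time: float            # seconds
    peak_amplitude: float   # proxy for peak ground motion (units are placeholders)

def picks_required(first_pick: Pick, high_amp=5.0, mid_amp=1.0) -> int:
    """Large initial amplitude -> declare on one station; smaller -> demand more picks."""
    if first_pick.peak_amplitude >= high_amp:
        return 1
    if first_pick.peak_amplitude >= mid_amp:
        return 2
    return 4                                     # traditional associator-like behaviour

def declare_event(picks, window=15.0):
    """Return True once enough picks arrive within `window` seconds of the first pick."""
    picks = sorted(picks, key=lambda p: p.time)
    needed = picks_required(picks[0])
    in_window = [p for p in picks if p.time - picks[0].time <= window]
    return len(in_window) >= needed

strong = [Pick("STA1", 0.0, 7.2)]
weak = [Pick("STA1", 0.0, 0.4), Pick("STA2", 3.1, 0.5)]
print(declare_event(strong))   # True: single-station declaration, maximum warning time
print(declare_event(weak))     # False: small event, wait for more picks
```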
Seismic Characterization of EGS Reservoirs
NASA Astrophysics Data System (ADS)
Templeton, D. C.; Pyle, M. L.; Matzel, E.; Myers, S.; Johannesson, G.
2014-12-01
To aid in the seismic characterization of Engineered Geothermal Systems (EGS), we enhance the traditional microearthquake detection and location methodologies at two EGS systems. We apply the Matched Field Processing (MFP) seismic imaging technique to detect new seismic events using known discrete microearthquake sources. Events identified using MFP are typically smaller magnitude events or events that occur within the coda of a larger event. Additionally, we apply a Bayesian multiple-event seismic location algorithm, called MicroBayesLoc, to estimate the 95% probability ellipsoids for events with high signal-to-noise ratios (SNR). Such probability ellipsoid information can provide evidence for determining if a seismic lineation could be real or simply within the anticipated error range. We apply this methodology to the Basel EGS data set and compare it to another EGS dataset. This work performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344.
An Improved Source-Scanning Algorithm for Locating Earthquake Clusters or Aftershock Sequences
NASA Astrophysics Data System (ADS)
Liao, Y.; Kao, H.; Hsu, S.
2010-12-01
The Source-scanning Algorithm (SSA) was originally introduced in 2004 to locate non-volcanic tremors. Its application was later expanded to the identification of earthquake rupture planes and the near-real-time detection and monitoring of landslides and mud/debris flows. In this study, we further improve SSA for the purpose of locating earthquake clusters or aftershock sequences when only a limited number of waveform observations are available. The main improvements include the application of a ground motion analyzer to separate P and S waves, the automatic determination of resolution based on the grid size and time step of the scanning process, and a modified brightness function to utilize constraints from multiple phases. Specifically, the improved SSA (named ISSA) addresses two major issues related to locating earthquake clusters/aftershocks. The first is the massive amount of time and labour required to locate a large number of seismic events manually. The second is to efficiently and correctly identify the same phase across the entire recording array when multiple events occur closely in time and space. To test the robustness of ISSA, we generate synthetic waveforms consisting of three separate events such that individual P and S phases arrive at different stations in different order, thus making correct phase picking nearly impossible. Using these very complicated waveforms as the input, ISSA scans the entire model space for possible combinations of time and location of seismic sources. The scanning results successfully associate the various phases from each event at all stations and correctly recover the input. To further demonstrate the advantage of ISSA, we apply it to the waveform data collected by a temporary OBS array for the aftershock sequence of an offshore earthquake southwest of Taiwan. The overall signal-to-noise ratio is inadequate for locating small events, and the precise arrival times of P and S phases are difficult to determine. We use one of the largest aftershocks that can be located by conventional methods as our reference event to calibrate the controlling parameters of ISSA. These parameters include the overall Vp/Vs ratio (because a precise S velocity model was unavailable), the length of the scanning time window, and the weighting factor for each station. Our results show that ISSA is not only more efficient in locating earthquake clusters/aftershocks, but also capable of identifying many events missed by conventional phase-picking methods.
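A much-simplified source-scanning sketch follows: for each trial origin time and epicenter, normalized waveform envelopes are sampled at the predicted P arrival times and stacked into a "brightness" value, and the brightest grid point is taken as the source. Only P is used and the velocity is constant, whereas ISSA combines P and S constraints and a ground-motion analyzer; all geometry and numbers are invented.

```python
# Grid search over (x, y, t0) maximizing a stacked-envelope brightness function.
import numpy as np

rng = np.random.default_rng(3)
v, dt = 6.0, 0.05
stations = np.array([[0, 40], [35, -15], [-30, -25], [45, 30]], dtype=float)
true_epi, true_t0 = np.array([8.0, 4.0]), 5.0

t = np.arange(0, 40, dt)
traces = 0.05 * np.abs(rng.standard_normal((len(stations), len(t))))   # envelope-like noise
for k, s in enumerate(stations):
    arr = true_t0 + np.linalg.norm(s - true_epi) / v
    traces[k] += np.exp(-0.5 * ((t - arr) / 0.3) ** 2)                  # arrival envelope
traces /= traces.max(axis=1, keepdims=True)                             # normalize per station

xs = ys = np.arange(-20, 31, 1.0)
t0s = np.arange(0, 15, 0.25)
best = (-1.0, None)
for t0 in t0s:
    for x in xs:
        for y in ys:
            pred = t0 + np.linalg.norm(stations - [x, y], axis=1) / v
            idx = np.clip((pred / dt).astype(int), 0, len(t) - 1)
            brightness = traces[np.arange(len(stations)), idx].mean()
            if brightness > best[0]:
                best = (brightness, (x, y, t0))

print("brightest grid point:", best[1], "brightness %.2f" % best[0])
```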
Iterative Strategies for Aftershock Classification in Automatic Seismic Processing Pipelines
NASA Astrophysics Data System (ADS)
Gibbons, Steven J.; Kværna, Tormod; Harris, David B.; Dodge, Douglas A.
2016-04-01
Aftershock sequences following very large earthquakes present enormous challenges to near-realtime generation of seismic bulletins. The increase in analyst resources needed to relocate an inflated number of events is compounded by failures of phase association algorithms and a significant deterioration in the quality of underlying fully automatic event bulletins. Current processing pipelines were designed a generation ago and, due to computational limitations of the time, are usually limited to single passes over the raw data. With current processing capability, multiple passes over the data are feasible. Processing the raw data at each station currently generates parametric data streams which are then scanned by a phase association algorithm to form event hypotheses. We consider the scenario where a large earthquake has occurred and propose to define a region of likely aftershock activity in which events are detected and accurately located using a separate specially targeted semi-automatic process. This effort may focus on so-called pattern detectors, but here we demonstrate a more general grid search algorithm which may cover wider source regions without requiring waveform similarity. Given many well-located aftershocks within our source region, we may remove all associated phases from the original detection lists prior to a new iteration of the phase association algorithm. We provide a proof-of-concept example for the 2015 Gorkha sequence, Nepal, recorded on seismic arrays of the International Monitoring System. Even with very conservative conditions for defining event hypotheses within the aftershock source region, we can automatically remove over half of the original detections which could have been generated by Nepal earthquakes and reduce the likelihood of false associations and spurious event hypotheses. Further reductions in the number of detections in the parametric data streams are likely using correlation and subspace detectors and/or empirical matched field processing.
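A conceptual sketch of the filtering step is given below: any detection whose arrival time at a station is consistent, within a tolerance, with a predicted phase from one of the well-located aftershocks is removed before the global phase associator runs again. The flat-Earth P travel-time approximation and all numbers are purely illustrative.

```python
# Remove detections explained by aftershock hypotheses before re-running association.
import numpy as np

V_P = 8.0          # km/s, crude regional P speed (assumed)
TOL = 3.0          # association tolerance, seconds (assumed)

def predicted_arrival(event, station):
    """event = (x, y, origin_time); station = (x, y). Toy Cartesian geometry."""
    dist = np.hypot(event[0] - station[0], event[1] - station[1])
    return event[2] + dist / V_P

def filter_detections(detections, aftershocks, stations):
    """detections: list of (station_id, time). Return detections NOT explained by any
    aftershock hypothesis."""
    kept = []
    for sta_id, t_det in detections:
        explained = any(
            abs(t_det - predicted_arrival(evt, stations[sta_id])) < TOL
            for evt in aftershocks
        )
        if not explained:
            kept.append((sta_id, t_det))
    return kept

stations = {"ARR1": (0.0, 0.0), "ARR2": (300.0, 100.0)}
aftershocks = [(120.0, 80.0, 10.0), (125.0, 70.0, 400.0)]        # x, y, origin time
detections = [("ARR1", 28.1), ("ARR2", 32.5), ("ARR1", 700.0)]   # last one is unrelated
print(filter_detections(detections, aftershocks, stations))      # keeps only ("ARR1", 700.0)
```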
NASA Astrophysics Data System (ADS)
Cua, G. B.; Fischer, M.; Caprio, M.; Heaton, T. H.; Cisn Earthquake Early Warning Project Team
2010-12-01
The Virtual Seismologist (VS) earthquake early warning (EEW) algorithm is one of 3 EEW approaches being incorporated into the California Integrated Seismic Network (CISN) ShakeAlert system, a prototype EEW system that could potentially be implemented in California. The VS algorithm, implemented by the Swiss Seismological Service at ETH Zurich, is a Bayesian approach to EEW, wherein the most probable source estimate at any given time is a combination of contributions from a likelihood function that evolves in response to incoming data from the on-going earthquake, and selected prior information, which can include factors such as network topology, the Gutenberg-Richter relationship or previously observed seismicity. The VS codes have been running in real-time at the Southern California Seismic Network since July 2008, and at the Northern California Seismic Network since February 2009. We discuss recent enhancements to the VS EEW algorithm that are being integrated into CISN ShakeAlert. We developed and continue to test a multiple-threshold event detection scheme, which uses different association / location approaches depending on the peak amplitudes associated with an incoming P pick. With this scheme, an event with sufficiently high initial amplitudes can be declared on the basis of a single station, maximizing warning times for damaging events for which EEW is most relevant. Smaller, non-damaging events, which will have lower initial amplitudes, will require more picks to initiate an event declaration, with the goal of reducing false alarms. This transforms the VS codes from a regional EEW approach reliant on traditional location estimation (and the requirement of at least 4 picks as implemented by the Binder Earthworm phase associator) into an on-site/regional approach capable of providing a continuously evolving stream of EEW information starting from the first P-detection. Real-time and offline analysis on Swiss and California waveform datasets indicates that the multiple-threshold approach is faster and more reliable for larger events than the earlier version of the VS codes. In addition, we provide evolutionary estimates of the probability of false alarms (PFA), which is an envisioned output stream of the CISN ShakeAlert system. The real-time decision-making approach envisioned for CISN ShakeAlert users, where users specify a threshold PFA in addition to thresholds on peak ground motion estimates, has the potential to increase the available warning time for users with high tolerance to false alarms without compromising the needs of users with lower tolerances to false alarms.
Towards a global flood detection system using social media
NASA Astrophysics Data System (ADS)
de Bruijn, Jens; de Moel, Hans; Jongman, Brenden; Aerts, Jeroen
2017-04-01
It is widely recognized that an early warning is critical in improving international disaster response. Analysis of social media in real time can provide valuable information about an event or help to detect unexpected events. For successful and reliable detection systems that work globally, it is important that sufficient data are available and that the algorithm works both in data-rich and data-poor environments. In this study, both a new geotagging system and a multi-level event detection system for flood hazards were developed using Twitter data. Geotagging algorithms that regard one tweet as a single document are well studied. However, no algorithms exist that combine several sequential tweets mentioning keywords regarding a specific event type. Within the time frame of an event, multiple users use event-related keywords that refer to the same place name. This notion allows us to treat several sequential tweets posted in the last 24 hours as one document. For all these tweets, we collect a series of spatial indicators given in the tweet metadata and extract additional topological indicators from the text. Using these indicators, we can reduce ambiguity and thus better estimate what locations are tweeted about. Using these localized tweets, Bayesian change-point analysis is used to find significant increases of tweets mentioning countries, provinces or towns. In data-poor environments detection of events on a country level is possible, while in other, data-rich, environments detection on a city level is achieved. Additionally, on a city level we analyse the spatial dependence of mentioned places. If multiple places within a limited spatial extent are mentioned, detection confidence increases. We run the algorithm using 2 years of Twitter data with flood-related keywords in 13 major languages and validate against a flood event database. We find that the geotagging algorithm yields significantly more data than previously developed algorithms and successfully deals with ambiguous place names. In addition, we show that our detection system can both quickly and reliably detect floods, even in countries where data is scarce, while achieving high detail in countries where more data is available.
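The detection step can be caricatured with a much simpler rule than the study's Bayesian change-point analysis: compare the recent hourly count of flood-related tweets localized to one place against a Poisson baseline estimated from the preceding days, and flag an event when the recent count is very unlikely under that baseline. All counts below are synthetic and the thresholds are assumptions.

```python
# Simplified stand-in for change-point detection on localized tweet counts.
import numpy as np
from scipy.stats import poisson

rng = np.random.default_rng(4)
baseline_hours = 24 * 7
counts = rng.poisson(2.0, baseline_hours + 3).astype(float)   # quiet week for one town
counts[-3:] += np.array([15.0, 40.0, 55.0])                   # flood chatter ramps up

def flag_increase(counts, window=3, p_threshold=1e-4):
    history, recent = counts[:-window], counts[-window:]
    lam = max(history.mean(), 0.1)                             # baseline tweets per hour
    p_value = poisson.sf(int(recent.sum()) - 1, lam * window)  # P(count >= observed)
    return p_value < p_threshold, p_value

detected, p = flag_increase(counts)
print(f"sudden increase detected: {detected} (p = {p:.2e})")
```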
NASA Astrophysics Data System (ADS)
Loftus, K.; Saar, S. H.
2017-12-01
NOAA's Space Weather Prediction Center publishes the current definitive public soft X-ray flare catalog, derived using data from the X-ray Sensor (XRS) on the Geostationary Operational Environmental Satellites (GOES) series. However, this flare list has shortcomings for use in scientific analysis. Its detection algorithm has drawbacks (missing smaller flux events and poorly characterizing complex ones), and its event timing is imprecise (peak and end times are frequently marked incorrectly, and hence peak fluxes are underestimated). It also lacks explicit and regular spatial location data. We present a new database, "The Where of the Flare" catalog, which improves upon the precision of NOAA's current version, with more consistent and accurate spatial locations, timings, and peak fluxes. Our catalog also offers several new parameters per flare (e.g. background flux, integrated flux). We use data from the GOES Solar X-ray Imager (SXI) for spatial flare locating. Our detection algorithm is more sensitive to smaller flux events close to the background level and more precisely marks flare start/peak/end times so that integrated flux can be accurately calculated. It also decomposes complex events (with multiple overlapping flares) by constituent peaks. The catalog dates from the operation of the first SXI instrument in 2003 until the present. We give an overview of the detection algorithm's design, review the catalog's features, and discuss preliminary statistical analyses of light curve morphology, complex event decomposition, and integrated flux distribution. The Where of the Flare catalog will be useful in studying X-ray flare statistics and correlating X-ray flare properties with other observations. This work was supported by Contract #8100002705 from Lockheed-Martin to SAO in support of the science of NASA's IRIS mission.
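For orientation, a simple threshold-based flare segmentation on a synthetic GOES-like light curve is sketched below, marking start/peak/end and integrating flux above background. The real catalog algorithm is considerably more elaborate (background tracking, decomposition of overlapping flares, SXI-based locations); the background level, threshold, and flare shape here are invented.

```python
# Toy flare start/peak/end marking and integrated-flux calculation.
import numpy as np

dt = 60.0                                              # one sample per minute, seconds
t = np.arange(0, 6 * 3600, dt)
background = 1e-7                                      # W/m^2, assumed quiet level
flux = background * np.ones_like(t)
onset = 7200.0
rise = np.clip((t - onset) / 600.0, 0, None)
flux += 4e-6 * rise * np.exp(-rise)                    # one simple flare profile
flux += 2e-9 * np.random.default_rng(5).standard_normal(len(t))

above = flux > 1.5 * background                        # start/end defined by this threshold
edges = np.flatnonzero(np.diff(above.astype(int)))
for i0, i1 in zip(edges[::2], edges[1::2]):
    seg = slice(i0 + 1, i1 + 1)
    peak = i0 + 1 + np.argmax(flux[seg])
    integrated = np.trapz(flux[seg] - background, dx=dt)   # J/m^2 above background
    print(f"start {t[i0+1]/3600:.2f} h  peak {t[peak]/3600:.2f} h  "
          f"end {t[i1]/3600:.2f} h  peak flux {flux[peak]:.2e} W/m^2  "
          f"integrated {integrated:.2e} J/m^2")
```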
NASA Astrophysics Data System (ADS)
Gibbons, S. J.; Harris, D. B.; Dahl-Jensen, T.; Kværna, T.; Larsen, T. B.; Paulsen, B.; Voss, P. H.
2017-12-01
The oceanic boundary separating the Eurasian and North American plates between 70° and 84° north hosts large earthquakes which are well recorded teleseismically, and many more seismic events at far lower magnitudes that are well recorded only at regional distances. Existing seismic bulletins have considerable spread and bias resulting from limited station coverage and deficiencies in the velocity models applied. This is particularly acute for the lower magnitude events which may only be constrained by a small number of Pn and Sn arrivals. Over the past two decades there has been a significant improvement in the seismic network in the Arctic: a difficult region to instrument due to the harsh climate, a sparsity of accessible sites (particularly at significant distances from the sea), and the expense and difficult logistics of deploying and maintaining stations. New deployments and upgrades to stations on Greenland, Svalbard, Jan Mayen, Hopen, and Bjørnøya have resulted in a sparse but stable regional seismic network which results in events down to magnitudes below 3 generating high-quality Pn and Sn signals on multiple stations. A catalogue of several hundred events in the region since 1998 has been generated using many new phase readings on stations on both sides of the spreading ridge in addition to teleseismic P phases. A Bayesian multiple event relocation has resulted in a significant reduction in the spread of hypocentre estimates for both large and small events. Whereas single event location algorithms minimize vectors of time residuals on an event-by-event basis, the Bayesloc program finds a joint probability distribution of origins, hypocentres, and corrections to traveltime predictions for large numbers of events. The solutions obtained favour those event hypotheses resulting in time residuals which are most consistent over a given source region. The relocations have been performed with different 1-D velocity models applicable to the Arctic region and hypocentres obtained using Bayesloc have been shown to be relatively insensitive to the specified velocity structure in the crust and upper mantle, even for events only constrained by regional phases. The patterns of time residuals resulting from the multiple-event location procedure provide well-constrained time correction surfaces for single-event location estimates and are sufficiently stable to identify a number of picking errors and instrumental timing anomalies. This allows for subsequent quality control of the input data and further improvement in the location estimates. We use the relocated events to form narrowband empirical steering vectors for wave fronts arriving at the SPITS array on Svalbard for azimuth and apparent velocity estimation. We demonstrate that empirical matched field parameter estimation determined by source region is a viable supplement to planewave f-k analysis, mitigating bias and obviating the need for Slowness and Azimuth Station Corrections. A database of reference events and phase arrivals is provided to facilitate further refinement of event locations and the construction of empirical signal detectors.
Iterative Strategies for Aftershock Classification in Automatic Seismic Processing Pipelines
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gibbons, Steven J.; Kvaerna, Tormod; Harris, David B.
We report that aftershock sequences following very large earthquakes present enormous challenges to near-real-time generation of seismic bulletins. The increase in analyst resources needed to relocate an inflated number of events is compounded by failures of phase-association algorithms and a significant deterioration in the quality of underlying, fully automatic event bulletins. Current processing pipelines were designed a generation ago, and, due to computational limitations of the time, are usually limited to single passes over the raw data. With current processing capability, multiple passes over the data are feasible. Processing the raw data at each station currently generates parametric data streams that are then scanned by a phase-association algorithm to form event hypotheses. We consider the scenario in which a large earthquake has occurred and propose to define a region of likely aftershock activity in which events are detected and accurately located, using a separate specially targeted semiautomatic process. This effort may focus on so-called pattern detectors, but here we demonstrate a more general grid-search algorithm that may cover wider source regions without requiring waveform similarity. Given many well-located aftershocks within our source region, we may remove all associated phases from the original detection lists prior to a new iteration of the phase-association algorithm. We provide a proof-of-concept example for the 2015 Gorkha sequence, Nepal, recorded on seismic arrays of the International Monitoring System. Even with very conservative conditions for defining event hypotheses within the aftershock source region, we can automatically remove about half of the original detections that could have been generated by Nepal earthquakes and reduce the likelihood of false associations and spurious event hypotheses. Lastly, further reductions in the number of detections in the parametric data streams are likely, using correlation and subspace detectors and/or empirical matched field processing.
Event-by-event PET image reconstruction using list-mode origin ensembles algorithm
NASA Astrophysics Data System (ADS)
Andreyev, Andriy
2016-03-01
There is a great demand for real-time or event-by-event (EBE) image reconstruction in emission tomography. Ideally, as soon as an event has been detected by the acquisition electronics, it needs to be used in the image reconstruction software. This would greatly speed up the image reconstruction since most of the data would be processed and reconstructed while the patient is still undergoing the scan. Unfortunately, the current industry standard is that the reconstruction of the image does not start until all the data for the current image frame have been acquired. Implementing an EBE reconstruction for the MLEM family of algorithms is possible, but not straightforward as multiple (computationally expensive) updates to the image estimate are required. In this work an alternative Origin Ensembles (OE) image reconstruction algorithm for PET imaging is converted to EBE mode and it is investigated whether it is a viable alternative for real-time image reconstruction. In the OE algorithm all acquired events are seen as points that are located somewhere along the corresponding lines of response (LORs), together forming a point cloud. Iteratively, with a multitude of quasi-random shifts following the likelihood function, the point cloud converges to a reflection of the actual radiotracer distribution with a degree of accuracy that is similar to MLEM. New data can be naturally added into the point cloud. Preliminary results with simulated data show little difference between regular reconstruction and EBE mode, proving the feasibility of the proposed approach.
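A toy 2D origin-ensembles-style sketch is given below: each detected event is a point somewhere on its line of response, points are repeatedly given quasi-random shifts along their LOR, and moves are accepted by a simple density-seeking rule (here the simplified uniform-sensitivity form (n_new + 1)/n_old, which is not the published OE likelihood). New events can be appended to the cloud at any time, which is what makes the event-by-event mode natural. The geometry and phantom are invented.

```python
# Toy event-by-event "origin ensembles"-style point-cloud update (illustration only).
import numpy as np

rng = np.random.default_rng(6)
grid = 64
phantom = np.zeros((grid, grid))
phantom[24:40, 24:40] = 1.0                            # a square "hot" region

def sample_lor():
    ys, xs = np.nonzero(phantom)
    i = rng.integers(len(xs))
    emission = np.array([xs[i], ys[i]], float) + rng.random(2)
    theta = rng.uniform(0, np.pi)
    direction = np.array([np.cos(theta), np.sin(theta)])
    return emission, direction                         # true emission point, LOR direction

events = []                                            # [current point, direction] per event
counts = np.zeros((grid, grid), int)                   # events per voxel

def voxel(p):
    return tuple(np.clip(p, 0, grid - 1).astype(int)[::-1])   # (row, col)

def add_event(p, d):
    events.append([p.copy(), d])
    counts[voxel(p)] += 1

def oe_pass():
    for ev in events:
        p, d = ev
        q = p + d * rng.normal(0, 3.0)                 # quasi-random shift along the LOR
        if not (0 <= q[0] < grid and 0 <= q[1] < grid):
            continue
        va, vb = voxel(p), voxel(q)
        if rng.random() < (counts[vb] + 1) / counts[va]:    # density-seeking acceptance
            counts[va] -= 1
            counts[vb] += 1
            ev[0] = q

for n in range(10000):                                 # events arrive one by one
    emission, d = sample_lor()
    start = np.clip(emission + d * rng.uniform(-20, 20), 0, grid - 1e-3)
    add_event(start, d)
    if (n + 1) % 500 == 0:
        oe_pass()

inside = counts[24:40, 24:40].sum() / counts.sum()
print(f"fraction of the point cloud inside the hot square: {inside:.2f}")
```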
Multiple Autonomous Discrete Event Controllers for Constellations
NASA Technical Reports Server (NTRS)
Esposito, Timothy C.
2003-01-01
The Multiple Autonomous Discrete Event Controllers for Constellations (MADECC) project is an effort within the National Aeronautics and Space Administration Goddard Space Flight Center's (NASA/GSFC) Information Systems Division to develop autonomous positioning and attitude control for constellation satellites. It will be accomplished using traditional control theory and advanced coordination algorithms developed by the Johns Hopkins University Applied Physics Laboratory (JHU/APL). This capability will be demonstrated in the discrete event control test-bed located at JHU/APL. This project will be modeled for the Leonardo constellation mission, but is intended to be adaptable to any constellation mission. To develop a common software architecture, the controllers will only model very high-level responses. For instance, after determining that a maneuver must be made, the MADECC system will output a ΔV (velocity change) value. Lower-level systems must then decide which thrusters to fire and for how long to achieve that ΔV.
NASA Astrophysics Data System (ADS)
Wang, Chun-yu; He, Lin; Li, Yan; Shuai, Chang-geng
2018-01-01
In engineering applications, ship machinery vibration may be induced by multiple rotational machines sharing a common vibration isolation platform and operating at the same time, and multiple sinusoidal components may be excited. These components may be located at frequencies with large differences or at very close frequencies. A multi-reference filtered-x Newton narrowband (MRFx-Newton) algorithm is proposed to control these multiple sinusoidal components in an MIMO (multiple input and multiple output) system, especially for those located at very close frequencies. The proposed MRFx-Newton algorithm can decouple and suppress multiple sinusoidal components located in the same narrow frequency band even though such components cannot be separated from each other by a narrowband-pass filter. As with the Fx-Newton algorithm, good real-time performance is achieved through the faster convergence speed brought by the 2nd-order inverse secondary-path filter in the time domain. Experiments are also conducted to verify the feasibility and test the performance of the proposed algorithm installed in an active-passive vibration isolation system in suppressing the vibration excited by an artificial source and air compressor(s). The results show that the proposed algorithm not only has a convergence rate comparable to that of the Fx-Newton algorithm but also has better real-time performance and robustness than the Fx-Newton algorithm in active control of the vibration induced by multiple sound sources/rotational machines working on a shared platform.
Seismic Characterization of the Newberry and Cooper Basin EGS Sites
NASA Astrophysics Data System (ADS)
Templeton, D. C.; Wang, J.; Goebel, M.; Johannesson, G.; Myers, S. C.; Harris, D.; Cladouhos, T. T.
2015-12-01
To aid in the seismic characterization of Engineered Geothermal Systems (EGS), we enhance traditional microearthquake detection and location methodologies at two EGS systems: the Newberry EGS site and the Habanero EGS site in the Cooper Basin of South Australia. We apply the Matched Field Processing (MFP) seismic imaging technique to detect new seismic events using known discrete microearthquake sources. Events identified using MFP typically have smaller magnitudes or occur within the coda of a larger event. Additionally, we apply a Bayesian multiple-event location algorithm, called MicroBayesLoc, to estimate the 95% probability ellipsoids for events with high signal-to-noise ratios (SNR). Such probability ellipsoid information can provide evidence for determining if a seismic lineation is real, or simply within the anticipated error range. At the Newberry EGS site, 235 events were reported in the original catalog. MFP identified 164 additional events (an increase of over 70% more events). For the relocated events in the Newberry catalog, we can distinguish two distinct seismic swarms that fall outside of one another's 95% probability error ellipsoids. This work performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344.
NASA Technical Reports Server (NTRS)
Shah, Ankoor S.; Knuth, Kevin H.; Truccolo, Wilson A.; Ding, Ming-Zhou; Bressler, Steven L.; Schroeder, Charles E.; Clancy, Daniel (Technical Monitor)
2002-01-01
Accurate measurement of single-trial responses is key to a definitive use of complex electromagnetic and hemodynamic measurements in the investigation of brain dynamics. We developed the multiple component, Event-Related Potential (mcERP) approach to single-trial response estimation to improve our resolution of dynamic interactions between neuronal ensembles located in different layers within a cortical region and/or in different cortical regions. The mcERP model asserts that multiple components defined as stereotypic waveforms comprise the stimulus-evoked response and that these components may vary in amplitude and latency from trial to trial. Maximum a posteriori (MAP) solutions for the model are obtained by iterating a set of equations derived from the posterior probability. Our first goal was to use the mcERP algorithm to analyze interactions (specifically latency and amplitude correlation) between responses in different layers within a cortical region. Thus, we evaluated the model by applying the algorithm to synthetic data containing two correlated local components and one independent far-field component. Three cases were considered: the local components were correlated by an interaction in their single-trial amplitudes, by an interaction in their single-trial latencies, or by an interaction in both amplitude and latency. We then analyzed the accuracy with which the algorithm estimated the component waveshapes and the single-trial parameters as a function of the linearity of each of these relationships. Extensions of these analyses to real data are discussed as well as ongoing work to incorporate more detailed prior information.
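The quantities the model estimates can be illustrated without the MAP iteration itself: in the sketch below each simulated trial is matched against a component waveform, the lag of the peak cross-correlation gives a single-trial latency, and the least-squares scaling at that lag gives a single-trial amplitude. The component shape, jitter, and noise level are synthetic.

```python
# Per-trial latency/amplitude estimation by template matching (not the mcERP MAP solver).
import numpy as np

rng = np.random.default_rng(7)
n_samples, n_trials = 300, 40
t = np.arange(n_samples)
component = np.exp(-0.5 * ((t - 150) / 12.0) ** 2)            # stereotypic waveform

true_lat = rng.integers(-15, 16, n_trials)
true_amp = rng.uniform(0.5, 1.5, n_trials)
trials = np.array([a * np.roll(component, l) for a, l in zip(true_amp, true_lat)])
trials += 0.2 * rng.standard_normal(trials.shape)

max_shift = 30
est_lat, est_amp = np.zeros(n_trials, int), np.zeros(n_trials)
for i, trial in enumerate(trials):
    xc = [np.dot(trial, np.roll(component, s)) for s in range(-max_shift, max_shift + 1)]
    s_best = int(np.argmax(xc)) - max_shift
    shifted = np.roll(component, s_best)
    est_lat[i] = s_best
    est_amp[i] = np.dot(trial, shifted) / np.dot(shifted, shifted)   # least-squares scale

print("latency error (samples), mean abs:", np.abs(est_lat - true_lat).mean())
print("amplitude correlation:", np.corrcoef(est_amp, true_amp)[0, 1].round(3))
```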
Graph-based sensor fusion for classification of transient acoustic signals.
Srinivas, Umamahesh; Nasrabadi, Nasser M; Monga, Vishal
2015-03-01
Advances in acoustic sensing have enabled the simultaneous acquisition of multiple measurements of the same physical event via co-located acoustic sensors. We exploit the inherent correlation among such multiple measurements for acoustic signal classification, to identify the launch/impact of munitions (i.e., rockets, mortars). Specifically, we propose a probabilistic graphical model framework that can explicitly learn the class-conditional correlations between the cepstral features extracted from these different measurements. Additionally, we employ symbolic dynamic filtering-based features, which offer improvements over the traditional cepstral features in terms of robustness to signal distortions. Experiments on real acoustic data sets show that our proposed algorithm outperforms conventional classifiers as well as the recently proposed joint sparsity models for multisensor acoustic classification. Additionally, our proposed algorithm is less sensitive to insufficiency in training samples compared to competing approaches.
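The cepstral feature extraction step alone (not the graphical-model fusion) is easy to show: the real cepstrum of a windowed transient is computed and its first coefficients are kept as the feature vector for one sensor channel. The transient below is synthetic and the coefficient count is an assumption.

```python
# Real-cepstrum feature vector for a single acoustic transient.
import numpy as np

fs = 4096.0
t = np.arange(0, 1.0, 1 / fs)
rng = np.random.default_rng(11)
# crude impulsive transient: two decaying resonances plus noise
transient = (np.exp(-8 * t) * np.sin(2 * np.pi * 180 * t)
             + 0.5 * np.exp(-12 * t) * np.sin(2 * np.pi * 330 * t)
             + 0.05 * rng.standard_normal(len(t)))

def cepstral_features(x, n_coeff=20):
    window = np.hanning(len(x))
    spectrum = np.fft.rfft(x * window)
    log_mag = np.log(np.abs(spectrum) + 1e-12)
    cepstrum = np.fft.irfft(log_mag)          # real cepstrum
    return cepstrum[:n_coeff]

features = cepstral_features(transient)
print(features.round(3))
```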
NASA Astrophysics Data System (ADS)
Hu, Chongqing; Li, Aihua; Zhao, Xingyang
2011-02-01
This paper proposes a multivariate statistical analysis approach to processing the instantaneous engine speed signal for the purpose of locating multiple misfire events in internal combustion engines. The state of each cylinder is described with a characteristic vector extracted from the instantaneous engine speed signal following a three-step procedure. These characteristic vectors are considered as the values of various procedure parameters of an engine cycle. Therefore, determination of the occurrence of misfire events and identification of misfiring cylinders can be accomplished by a principal component analysis (PCA) based pattern recognition methodology. The proposed algorithm can be implemented easily in practice because the threshold can be defined adaptively without information about the operating conditions. Besides, the effect of torsional vibration on the engine speed waveform is interpreted as the presence of a super-powerful cylinder, which is also isolated by the algorithm. The misfiring cylinder and the super-powerful cylinder are often adjacent in the firing sequence, thus missed detections and false alarms can be avoided effectively by checking the relationship between the cylinders.
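A minimal PCA-based sketch of the idea follows: per-cylinder characteristic vectors from many engine cycles are projected onto principal components, and cylinders whose cycles fall far from the bulk are flagged with an adaptively defined threshold. The feature construction from the speed waveform (the paper's three-step procedure) is replaced here by synthetic vectors, and the threshold rule is an assumption.

```python
# PCA-based outlier flagging of per-cylinder characteristic vectors (illustration only).
import numpy as np

rng = np.random.default_rng(8)
n_cyl, n_cycles, n_feat = 6, 120, 4
X = rng.normal(0, 1, (n_cyl * n_cycles, n_feat))              # normal combustion
labels = np.repeat(np.arange(n_cyl), n_cycles)
misfire_rows = (labels == 2) & (rng.random(len(labels)) < 0.3)
X[misfire_rows] += np.array([4.0, -3.0, 2.5, 0.0])            # cylinder 3 misfires sometimes

Xc = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
scores = Xc @ Vt[:2].T                                        # first two principal components

# Adaptive threshold: distance from the PC-space median, scaled by a robust spread estimate.
d = np.linalg.norm(scores - np.median(scores, axis=0), axis=1)
threshold = np.median(d) + 4.0 * np.median(np.abs(d - np.median(d)))
flagged = d > threshold

for c in range(n_cyl):
    rate = flagged[labels == c].mean()
    print(f"cylinder {c + 1}: {100 * rate:5.1f}% of cycles flagged")
```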
Cross-beam coherence of infrasonic signals at local and regional ranges.
Alberts, W C Kirkpatrick; Tenney, Stephen M
2017-11-01
Signals collected by infrasound arrays require continuous analysis by skilled personnel or by automatic algorithms in order to extract usable information. Typical pieces of information gained by analysis of infrasonic signals collected by multiple sensor arrays are arrival time, line of bearing, amplitude, and duration. These can all be used, often with significant accuracy, to locate sources. A very important part of this chain is associating collected signals across multiple arrays. Here, a pairwise, cross-beam coherence method of signal association is described that allows rapid signal association for high signal-to-noise ratio events captured by multiple infrasound arrays at ranges exceeding 150 km. Methods, test cases, and results are described.
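A sketch of the pairwise check is given below: the magnitude-squared coherence between beams from two arrays is averaged over a low-frequency band, and a high value suggests the beams carry the same event. The signals are synthetic, the alignment delay and band limits are assumptions, and real use would apply the measure to array beams steered toward the candidate back azimuths.

```python
# Band-averaged cross-beam coherence between two synthetic "array beams".
import numpy as np
from scipy.signal import coherence

rng = np.random.default_rng(9)
fs = 20.0                                          # Hz
t = np.arange(0, 600, 1 / fs)
source = np.convolve(rng.standard_normal(len(t)), np.ones(10) / 10, mode="same")  # low-freq

beam_a = source + 0.2 * rng.standard_normal(len(t))
beam_b = np.roll(source, int(120 * fs)) + 0.2 * rng.standard_normal(len(t))  # 120 s later
beam_b_aligned = np.roll(beam_b, -int(120 * fs))   # align on the candidate travel-time delay
unrelated = np.convolve(rng.standard_normal(len(t)), np.ones(10) / 10, mode="same")

def band_coherence(x, y, fmin=0.2, fmax=1.5):
    f, cxy = coherence(x, y, fs=fs, nperseg=1024)
    band = (f >= fmin) & (f <= fmax)
    return cxy[band].mean()

print("same event       :", round(band_coherence(beam_a, beam_b_aligned), 2))
print("different sources:", round(band_coherence(beam_a, unrelated), 2))
```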
Compound Event Barrier Coverage in Wireless Sensor Networks under Multi-Constraint Conditions.
Zhuang, Yaoming; Wu, Chengdong; Zhang, Yunzhou; Jia, Zixi
2016-12-24
It is important to monitor compound events via barrier coverage in wireless sensor networks (WSNs). Compound event barrier coverage (CEBC) is a novel coverage problem: unlike traditional barrier coverage, its data come from different types of sensors, and it is subject to multiple constraints under the complex conditions of real-world applications. The main objective of this paper is to design an efficient algorithm for complex conditions that can combine the compound event confidence. Moreover, a multiplier method based on an active-set strategy (ASMP) is proposed to optimize the multiple constraints in compound event barrier coverage. The algorithm can calculate the coverage ratio efficiently and allocate sensor resources reasonably in compound event barrier coverage. The proposed algorithm simplifies complex problems to reduce the computational load of the network and improve network efficiency. The simulation results demonstrate that the proposed algorithm is more effective and efficient than existing methods, especially in the allocation of sensor resources.
Yan, Gang; Zhou, Li
2018-02-21
This paper proposes an innovative method for identifying the locations of multiple simultaneous acoustic emission (AE) events in plate-like structures from the perspective of image processing. By using a linear lead zirconate titanate (PZT) sensor array to record the AE wave signals, a reverse-time frequency-wavenumber (f-k) migration is employed to produce images displaying the locations of AE sources by back-propagating the AE waves. Lamb wave theory is included in the f-k migration to account for the dispersive nature of the AE waves. Since the exact occurrence time of the AE events is usually unknown when recording the AE wave signals, a heuristic artificial bee colony (ABC) algorithm combined with an optimality criterion using minimum Shannon entropy is used to find the image with the identified AE source locations and occurrence time that most closely approximate the actual ones. Experimental studies on an aluminum plate with AE events simulated by PZT actuators are performed to validate the applicability and effectiveness of the proposed optimal image-based AE source identification method.
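To make the minimum-entropy selection step concrete, here is a small NumPy sketch that scores candidate migration images by Shannon entropy and keeps the most focused one; the candidate images are hypothetical stand-ins for the f-k migration output, and the image sizes are arbitrary.

    import numpy as np

    def image_entropy(image, eps=1e-12):
        """Shannon entropy of a non-negative migration image: the image is
        normalised to sum to one and treated as a probability distribution,
        so a sharply focused image (few bright AE spots) scores low."""
        p = np.abs(image).ravel()
        p = p / (p.sum() + eps)
        p = p[p > 0]
        return float(-np.sum(p * np.log(p)))

    def select_best_candidate(candidate_images):
        """Pick the trial occurrence time whose image has minimum entropy."""
        entropies = [image_entropy(img) for img in candidate_images]
        return int(np.argmin(entropies)), entropies

    # Toy usage: a focused two-source image versus a completely diffuse one.
    focused = np.zeros((64, 64))
    focused[20, 30] = 1.0
    focused[40, 10] = 0.8
    diffuse = np.full((64, 64), 1.0 / (64 * 64))
    best, H = select_best_candidate([diffuse, focused])
    print("selected candidate:", best, "entropies:", [round(h, 2) for h in H])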
He, Jianjun; Gu, Hong; Liu, Wenqi
2012-01-01
It is well known that an important step toward understanding the functions of a protein is to determine its subcellular location. Although numerous prediction algorithms have been developed, most of them have typically focused on proteins with only one location. In recent years, researchers have begun to pay attention to subcellular localization prediction for proteins with multiple sites. However, almost all the existing approaches have failed to take into account the correlations among the locations caused by proteins with multiple sites, which may be important information for improving the prediction accuracy for such proteins. In this paper, a new algorithm which can effectively exploit the correlations among the locations is proposed by using a Gaussian process model. Besides, the algorithm can also realize an optimal linear combination of various feature extraction technologies and is robust to imbalanced data sets. Experimental results on a human protein data set show that the proposed algorithm is valid and can achieve better performance than the existing approaches.
Properties of induced seismicity at the geothermal reservoir Insheim, Germany
NASA Astrophysics Data System (ADS)
Olbert, Kai; Küperkoch, Ludger; Thomas, Meier
2017-04-01
Within the framework of the German MAGS2 project, the processing of induced events at the geothermal power plant Insheim, Germany, has been reassessed and evaluated. The power plant is located close to the western rim of the Upper Rhine Graben in a region with a strongly heterogeneous subsurface. Therefore, the location of seismic events, particularly the depth estimation, is challenging. The seismic network, consisting of up to 50 stations, has an aperture of approximately 15 km around the power plant. Consequently, manual processing is time consuming. Using a waveform similarity detection algorithm, the existing dataset from 2012 to 2016 has been reprocessed to complete the catalog of induced seismic events. Based on the waveform similarity, clusters of similar events have been detected. Automated P- and S-arrival time determination using an improved multi-component autoregressive prediction algorithm yields approximately 14,000 P- and S-arrivals for 758 events. Applying a dataset of manual picks as reference, the automated picking algorithm has been optimized, resulting in a standard deviation of the residuals between automated and manual picks of about 0.02 s. The automated locations show uncertainties comparable to locations of the manual reference dataset. 90% of the automated relocations fall within the error ellipsoid of the manual locations. The remaining locations are either poorly resolved due to low numbers of picks or so well resolved that the automatic location is outside the error ellipsoid although located close to the manual location. The developed automated processing scheme proved to be a useful tool to supplement real-time monitoring. The event clusters are located at small patches of faults known from reflection seismic studies. The clusters are observed close to both the injection and the production wells.
Use of Archived Information by the United States National Data Center
NASA Astrophysics Data System (ADS)
Junek, W. N.; Pope, B. M.; Roman-Nieves, J. I.; VanDeMark, T. F.; Ichinose, G. A.; Poffenberger, A.; Woods, M. T.
2012-12-01
The United States National Data Center (US NDC) is responsible for monitoring international compliance to nuclear test ban treaties, acquiring data and data products from the International Data Center (IDC), and distributing data according to established policy. The archive of automated and reviewed event solutions residing at the US NDC is a valuable resource for assessing and improving the performance of signal detection, event formation, location, and discrimination algorithms. Numerous research initiatives are currently underway that are focused on optimizing these processes using historic waveform data and alphanumeric information. Identification of optimum station processing parameters is routinely performed through the analysis of archived waveform data. Station specific detector tuning studies produce and compare receiver operating characteristics for multiple detector configurations (e.g., detector type, filter passband) to identify an optimum set of processing parameters with an acceptable false alarm rate. Large aftershock sequences can inundate automated phase association algorithms with numerous detections that are closely spaced in time, which increases the number of false and/or mixed associations in automated event solutions and increases analyst burden. Archived waveform data and alphanumeric information are being exploited to develop an aftershock processor that will construct association templates to assist the Global Association (GA) application, reduce the number of false and merged phase associations, and lessen analyst burden. Statistical models are being developed and evaluated for potential use by the GA application for identifying and rejecting unlikely preliminary event solutions. Other uses of archived data at the US NDC include: improved event locations using empirical travel time corrections and discrimination via a statistical framework known as the event classification matrix (ECM).
Detecting and Locating Seismic Events Without Phase Picks or Velocity Models
NASA Astrophysics Data System (ADS)
Arrowsmith, S.; Young, C. J.; Ballard, S.; Slinkard, M.
2015-12-01
The standard paradigm for seismic event monitoring is to scan waveforms from a network of stations and identify the arrival time of various seismic phases. A signal association algorithm then groups the picks to form events, which are subsequently located by minimizing residuals between measured travel times and travel times predicted by an Earth model. Many of these steps are prone to significant errors which can lead to erroneous arrival associations and event locations. Here, we revisit a concept for event detection that does not require phase picks or travel time curves and fuses detection, association and location into a single algorithm. Our pickless event detector exploits existing catalog and waveform data to build an empirical stack of the full regional seismic wavefield, which is subsequently used to detect and locate events at a network level using correlation techniques. Because the technique uses more of the information content of the original waveforms, the concept is particularly powerful for detecting weak events that would be missed by conventional methods. We apply our detector to seismic data from the University of Utah Seismograph Stations network and compare our results with the earthquake catalog published by the University of Utah. We demonstrate that the pickless detector can detect and locate significant numbers of events previously missed by standard data processing techniques.
SYNAISTHISI: an IoT-powered smart visitor management and cognitive recommendations system
NASA Astrophysics Data System (ADS)
Thanos, Giorgos Konstandinos; Karafylli, Christina; Karafylli, Maria; Zacharakis, Dimitris; Papadimitriou, Apostolis; Dimitros, Kostantinos; Kanellopoulou, Konstantina; Kyriazanos, Dimitris M.; Thomopoulos, Stelios C. A.
2016-05-01
Location-based and navigation services are needed to help visitors and audiences of big events, complex buildings, shopping malls, airports and large companies. However, the lack of GPS and proper mapping indoors usually renders location-based applications and services useless or simply not applicable in such environments. SYNAISTHISI introduces a mobile application for smartphones which offers navigation capabilities outside and inside buildings and across multiple floor levels. The application comes with a suite of helpful services, including personalized recommendations, visit/event management and a search functionality for navigating to a specific location, event or person. As users find their way towards their destination, NFC-enabled checkpoints and Bluetooth beacons assist them, while offering re-routing, check-in/out capabilities and useful information about ongoing meetings and nearby events. The application is supported by a back-end GIS system which can provide a broad and clear view to event organizers, campus managers and field personnel for purposes of event logistics, safety and security. The SYNAISTHISI system offers several competitive advantages, including (a) seamless navigation as users move between outdoor and indoor areas and different floor levels, using innovative routing algorithms; (b) connection to and operation on an IoT platform for localization and real-time information feedback; (c) dynamic personalized recommendations based on user profile, location and real-time information provided by the IoT platform; and (d) indoor localization without the need for expensive infrastructure and installations.
eqMAXEL: A new automatic earthquake location algorithm implementation for Earthworm
NASA Astrophysics Data System (ADS)
Lisowski, S.; Friberg, P. A.; Sheen, D. H.
2017-12-01
A common problem with automated earthquake location systems for local- to regional-scale seismic networks is false triggering and false locations inside the network caused by earthquakes at regional to teleseismic distances. This false-location issue also presents a problem for earthquake early warning systems, where the societal impacts of false alarms can be very expensive. Towards solving this issue, Sheen et al. (2016) implemented a robust maximum-likelihood earthquake location algorithm known as MAXEL. It was shown, with both synthetics and real data for a small number of arrivals, that large regional events were easily identifiable through metrics in the MAXEL algorithm. In the summer of 2017, we collaboratively implemented the MAXEL algorithm as a fully functional Earthworm module and tested it in regions of the USA where false detections and alarms are observed. We show robust improvement in the ability of the Earthworm system to filter out regional and teleseismic events that would have falsely located inside the network using the traditional Earthworm hypoinverse solution. We also explore using different grid sizes in the implementation of the MAXEL algorithm, which was originally designed with South Korea as the target network size.
Analysis of the Dryden Wet Bulb Globe Temperature Algorithm for White Sands Missile Range
NASA Technical Reports Server (NTRS)
LaQuay, Ryan Matthew
2011-01-01
In locations where the workforce is exposed to high relative humidity and light winds, heat stress is a significant concern. Such is the case at the White Sands Missile Range in New Mexico. Heat stress is depicted by the wet bulb globe temperature, which is the official measurement used by the American Conference of Governmental Industrial Hygienists. The wet bulb globe temperature is measured by an instrument that was designed to be portable and that needs routine maintenance. As an alternative to measuring the wet bulb globe temperature directly, algorithms have been created to calculate it from basic meteorological observations. The algorithms are location dependent; therefore a specific algorithm is usually not suitable for multiple locations. Due to climatological similarities, the algorithm developed for use at the Dryden Flight Research Center was applied to data from the White Sands Missile Range. A study was performed that compared a wet bulb globe instrument to data from two Surface Atmospheric Measurement Systems applied to the Dryden wet bulb globe temperature algorithm. The period of study was June to September of 2009, with focus on 0900 to 1800 local time. Analysis showed that the algorithm worked well, with a few exceptions. The algorithm becomes less accurate when the dew point temperature is over 10 °C. Cloud cover also has a significant effect on the measured wet bulb globe temperature. The algorithm does not capture red and black heat stress flags well due to the shorter time scales of such events. The results of this study show that it is plausible that the Dryden Flight Research Center wet bulb globe temperature algorithm is compatible with the White Sands Missile Range, except when there are increased dew point temperatures, cloud cover or precipitation. During such occasions, the wet bulb globe temperature instrument would be the preferred method of measurement. Of the 30 dates examined, 23 fell under the category of having good accuracy.
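The Dryden regression itself is not given in this abstract, so the sketch below only illustrates the general idea of estimating WBGT from routine observations: it combines the standard outdoor weighting (0.7 Tnwb + 0.2 Tg + 0.1 Tdb) with the Stull (2011) empirical wet-bulb approximation; the globe temperature input and all example numbers are assumptions.

    import numpy as np

    def wet_bulb_stull(t_db_c, rh_pct):
        """Approximate wet-bulb temperature (deg C) from dry-bulb temperature and
        relative humidity at standard pressure (Stull, 2011 empirical fit)."""
        return (t_db_c * np.arctan(0.151977 * np.sqrt(rh_pct + 8.313659))
                + np.arctan(t_db_c + rh_pct)
                - np.arctan(rh_pct - 1.676331)
                + 0.00391838 * rh_pct ** 1.5 * np.arctan(0.023101 * rh_pct)
                - 4.686035)

    def wbgt_outdoor(t_db_c, rh_pct, t_globe_c):
        """Standard outdoor WBGT weighting (0.7 Tnwb + 0.2 Tg + 0.1 Tdb); the globe
        temperature is assumed to be measured or estimated separately."""
        return 0.7 * wet_bulb_stull(t_db_c, rh_pct) + 0.2 * t_globe_c + 0.1 * t_db_c

    # Toy usage: a warm, moderately humid afternoon observation.
    print(round(wbgt_outdoor(t_db_c=35.0, rh_pct=30.0, t_globe_c=45.0), 1))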
Characterize kinematic rupture history of large earthquakes with Multiple Haskell sources
NASA Astrophysics Data System (ADS)
Jia, Z.; Zhan, Z.
2017-12-01
Earthquakes are often regarded as continuous ruptures along a single fault, but the occurrence of complex large events involving multiple faults and dynamic triggering challenges this view. Such rupture complexities cause difficulties for existing finite-fault inversion algorithms, because they rely on specific parameterizations and regularizations to obtain physically meaningful solutions. Furthermore, it is difficult to assess the reliability and uncertainty of the obtained rupture models. Here we develop a Multi-Haskell Source (MHS) method to estimate the rupture process of large earthquakes as a series of sub-events of varying location, timing and directivity. Each sub-event is characterized by a Haskell rupture model with uniform dislocation and constant unilateral rupture velocity. This flexible yet simple source parameterization allows us to constrain the first-order rupture complexity of large earthquakes robustly. Additionally, the relatively small number of parameters in the inverse problem yields improved uncertainty analysis based on Markov chain Monte Carlo sampling in a Bayesian framework. Synthetic tests and application of the MHS method to real earthquakes show that our method can capture the major features of large earthquake rupture processes and provide information for more detailed rupture history analysis.
Revision of an automated microseismic location algorithm for DAS - 3C geophone hybrid array
NASA Astrophysics Data System (ADS)
Mizuno, T.; LeCalvez, J.; Raymer, D.
2017-12-01
Application of distributed acoustic sensing (DAS) has been studied in several areas of seismology. One of these areas is microseismic reservoir monitoring (e.g., Molteni et al., 2017, First Break). Considering the present limitations of DAS, which include a relatively low signal-to-noise ratio (SNR) and no 3C polarization measurements, a DAS - 3C geophone hybrid array is a practical option when using a single monitoring well. Considering the large volume of data from distributed sensing, microseismic event detection and location using a source-scanning type algorithm is a reasonable choice, especially for real-time monitoring. The algorithm must handle both strain rate along the borehole axis for DAS and particle velocity for 3C geophones. Only a small number of high-SNR events will be detected throughout a large aperture encompassing the hybrid array; therefore, the aperture is to be optimized dynamically to eliminate noisy channels for the majority of events. For such a hybrid array, coalescence microseismic mapping (CMM) (Drew et al., 2005, SPE) was revised. CMM forms a likelihood function of event location and origin time. At each receiver, a time function of event arrival likelihood is inferred using an SNR function, and it is migrated in time and space to determine the hypocenter and origin time likelihood. This algorithm was revised to dynamically optimize such a hybrid array by identifying receivers where a microseismic signal is possibly detected and using only those receivers to compute the likelihood function. Currently, peak SNR is used to select receivers. To prevent false results due to a small aperture, a minimum aperture threshold is employed. The algorithm refines the location likelihood using 3C geophone polarization. We tested this algorithm using a ray-based synthetic dataset. Leaney (2014, PhD thesis, UBC) is used to compute particle velocity at the receivers. Strain rate along the borehole axis is computed from particle velocity as DAS microseismic synthetic data. The likelihood function formed by both DAS and geophones behaves as expected, with the aperture dynamically selected depending on the SNR of the event. We conclude that this algorithm can be successfully applied to such hybrid arrays to monitor microseismic activity. A study using a recently acquired dataset is planned.
A Probabilistic Model of Global-Scale Seismology with Veith-Clawson Amplitude Corrections
NASA Astrophysics Data System (ADS)
Arora, N. S.; Russell, S.
2013-12-01
We present a probabilistic generative model of global-scale seismology, NET-VISA, that is designed to address the event detection and location problem of seismic monitoring. The model is based on a standard Bayesian framework with prior probabilities for event generation and propagation as well as likelihoods of detection and arrival (or onset) parameters. The model is supplemented with a greedy search algorithm that iteratively improves the predicted bulletin with respect to the posterior probability. Our prior model incorporates both seismic theory and empirical observations as appropriate. For instance, we use empirical observations for the expected rates of earthquakes at each point on the Earth, while we use the Gutenberg-Richter law for the expected magnitude distribution of these earthquakes. In this work, we describe an extension of our model in which we include the Veith-Clawson (1972) amplitude decline curves in our empirically calibrated arrival amplitude model. While this change does not alter the overall event-detection results, we have chosen to keep the Veith-Clawson curves since they are more seismically accurate. We also describe a recent change to our search algorithm, whereby we now consider multiple hypotheses when we encounter a series of closely spaced arrivals that could be explained by either a single event or multiple co-located events. This change has led to a sharp improvement in our results on large aftershock sequences. We use the analyst-curated LEB bulletin or the REB bulletin, which is the published product of the IDC, as a reference and measure the overlap (percentage of reference events that are matched) and inconsistency (percentage of test bulletin events that don't match anything in the reference) of a one-to-one matching between the test and the reference bulletins. In the table below we show results for NET-VISA and SEL3, which is produced by the existing GA software, for the whole of 2009. These results show that NET-VISA, which is restricted to use arrivals with a 6 hour lag (in order to be comparable to SEL3), reduces the number of missed events by a factor of 2.5 while simultaneously reducing the rate of spurious events. Further, these "spurious" NET-VISA events in fact include many real events which are missed by the human analysts. When we compare the NET-VISA events with arrivals from at least 3 stations (to be comparable to LEB) against NEIC events (in the ISC catalog) over the continental United States, as well as NNC events over Central Asia, we find that NET-VISA identifies 1.5 to 2 times the number of events that the IDC analysts find. Most of these additional events are in the 2-4 mb or ML range. Our experiments also confirm that NET-VISA accurately located each of the recent nuclear explosions to within 5 km of the LEB location. For large aftershock sequences, NET-VISA has been shown to be very efficient as well as accurate. For example, on the Tohoku sequence (March 10-14, 2011), NET-VISA (running time 2.57 days) had an overlap of 82.7% with LEB and an inconsistency of 26.8%, versus SEL3's overlap of 71.9% and inconsistency of 40%.
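To make the Gutenberg-Richter magnitude prior concrete, here is a small NumPy sketch that draws magnitudes from a truncated exponential (G-R) distribution by inverse-transform sampling; the b-value and magnitude bounds are illustrative, not NET-VISA's calibrated values.

    import numpy as np

    def sample_gr_magnitudes(n, b=1.0, m_min=2.0, m_max=8.0, rng=None):
        """Draw magnitudes from a Gutenberg-Richter prior: an exponential
        distribution in magnitude with rate beta = b*ln(10), truncated to
        [m_min, m_max], via inverse-transform sampling."""
        rng = rng or np.random.default_rng()
        beta = b * np.log(10.0)
        c = 1.0 - np.exp(-beta * (m_max - m_min))   # truncation normaliser
        u = rng.uniform(size=n)
        return m_min - np.log(1.0 - u * c) / beta

    # Toy usage: counts drop by roughly a factor of ten per magnitude unit for b = 1.
    mags = sample_gr_magnitudes(100000, b=1.0, rng=np.random.default_rng(2))
    print(np.histogram(mags, bins=np.arange(2, 9))[0])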
Detection and analysis of microseismic events using a Matched Filtering Algorithm (MFA)
NASA Astrophysics Data System (ADS)
Caffagni, Enrico; Eaton, David W.; Jones, Joshua P.; van der Baan, Mirko
2016-07-01
A new Matched Filtering Algorithm (MFA) is proposed for detecting and analysing microseismic events recorded by downhole monitoring of hydraulic fracturing. This method requires a set of well-located template (`parent') events, which are obtained using conventional microseismic processing and selected on the basis of high signal-to-noise (S/N) ratio and representative spatial distribution of the recorded microseismicity. Detection and extraction of `child' events are based on stacked, multichannel cross-correlation of the continuous waveform data, using the parent events as reference signals. The location of a child event relative to its parent is determined using an automated process, by rotation of the multicomponent waveforms into the ray-centred co-ordinates of the parent and maximizing the energy of the stacked amplitude envelope within a search volume around the parent's hypocentre. After correction for geometrical spreading and attenuation, the relative magnitude of the child event is obtained automatically using the ratio of the stacked envelope peak with respect to its parent. Since only a small number of parent events require interactive analysis such as picking P- and S-wave arrivals, the MFA approach offers the potential for a significant reduction in effort for downhole microseismic processing. Our algorithm also facilitates the analysis of single-phase child events, that is, microseismic events for which only one of the S- or P-wave arrivals is evident due to unfavourable S/N conditions. A real-data example using microseismic monitoring data from four stages of an open-hole slickwater hydraulic fracture treatment in western Canada demonstrates that a sparse set of parents (in this case, 4.6 per cent of the originally located events) yields a significant (more than fourfold) increase in the number of located events compared with the original catalogue. Moreover, analysis of the new MFA catalogue suggests that this approach leads to more robust interpretation of the induced microseismicity and novel insights into dynamic rupture processes based on the average temporal (foreshock-aftershock) relationship of child events to parents.
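A stripped-down NumPy sketch of the detection step only: per-channel normalised cross-correlation against a parent template, stacked across channels, with a MAD-based threshold; the single-component traces, threshold factor and toy data are assumptions, and the relocation and relative-magnitude steps are omitted.

    import numpy as np

    def normalized_xcorr(trace, template):
        """Sliding normalised cross-correlation of one channel with a template."""
        n = template.size
        t = (template - template.mean()) / (template.std() + 1e-12)
        out = np.empty(trace.size - n + 1)
        for i in range(out.size):
            w = trace[i:i + n]
            out[i] = np.dot((w - w.mean()) / (w.std() + 1e-12), t) / n
        return out

    def detect_children(traces, templates, k=8.0):
        """Stack per-channel correlograms for one parent event and return
        candidate child-event samples (threshold = k * MAD of the stack)."""
        stack = np.mean([normalized_xcorr(tr, tp) for tr, tp in zip(traces, templates)], axis=0)
        mad = np.median(np.abs(stack - np.median(stack))) + 1e-12
        return np.where(stack > k * mad)[0], stack

    # Toy usage: a scaled copy of the parent waveform hidden in noise on two channels.
    rng = np.random.default_rng(3)
    parent = np.sin(2 * np.pi * np.arange(50) / 10.0) * np.hanning(50)
    traces = []
    for _ in range(2):
        tr = 0.1 * rng.normal(size=2000)
        tr[700:750] += 0.5 * parent          # hidden child event at sample 700
        traces.append(tr)
    picks, stack = detect_children(traces, [parent, parent])
    print("detections near sample:", picks)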
DOE Office of Scientific and Technical Information (OSTI.GOV)
Buzatu, Adrian; /McGill U.
2006-08-01
Improving our ability to identify the top quark pair (t{bar t}) primary vertex (PV) on an event-by-event basis is essential for many analyses in the lepton-plus-jets channel performed by the Collider Detector at Fermilab (CDF) Collaboration. We compare the algorithm currently used by CDF (A1) with another algorithm (A2) using Monte Carlo simulation at high instantaneous luminosities. We confirm that A1 is more efficient than A2 at selecting the t{bar t} PV at all PV multiplicities, both with efficiencies larger than 99%. Event selection rejects events with a distance larger than 5 cm along the proton beam between the t{bar t} PV and the charged lepton. We find flat distributions for the signal over background significance of this cut for all cut values larger than 1 cm, for all PV multiplicities and for both algorithms. We conclude that any cut value larger than 1 cm is acceptable for both algorithms under the Tevatron's expected instantaneous luminosity improvements.
Multiple-Array Detection, Association and Location of Infrasound and Seismo-Acoustic Events – Utilization of Ground Truth Information
Stephen J. ...
2010-09-01
... and infrasound data from seismo-acoustic arrays and apply the methodology to regional networks for validation with ground truth information. In the initial year of the project, automated techniques for detecting, associating and locating infrasound signals were developed. Recently, the location ...
Comparison of Event Detection Methods for Centralized Sensor Networks
NASA Technical Reports Server (NTRS)
Sauvageon, Julien; Agogiono, Alice M.; Farhang, Ali; Tumer, Irem Y.
2006-01-01
The development of Integrated Vehicle Health Management (IVHM) for space vehicles has become a great concern. Smart sensor networks are one of the promising technologies attracting considerable attention. In this paper, we propose a qualitative comparison of several local event (hot spot) detection algorithms in centralized redundant sensor networks. The algorithms are compared regarding their ability to locate and evaluate an event under noise and sensor failures. The purpose of this study is to check whether the performance/computational-power ratio of the Mote Fuzzy Validation and Fusion algorithm is relevant compared to simpler methods.
Zhong, Cheng; Liu, Lei; Zhao, Jing
2014-01-01
An efficient location-based query algorithm for protecting user privacy in distributed networks is presented. The algorithm utilizes the location indexes of the users and multiple parallel threads to quickly search for and select all the candidate anonymous sets with more users and more uniformly distributed location information, in order to accelerate the execution of the temporal-spatial anonymous operations, and it allows users to configure their own custom privacy-preserving location query requests. The simulation results show that the proposed algorithm can simultaneously offer location query services to more users, improve the performance of the anonymous server, and satisfy the anonymous location requests of the users.
Chin, Wei-Chien-Benny; Wen, Tzai-Hung; Sabel, Clive E; Wang, I-Hsiang
2017-10-03
A diffusion process can be considered as the movement of linked events through space and time. Therefore, the space-time locations of events are key to identifying any diffusion process. However, previous clustering analysis methods have focused only on space-time proximity characteristics, neglecting the temporal lag of the movement of events. We argue that the temporal lag between events is key to understanding the process of diffusion movement. Using the temporal lag could help to clarify the types of close relationships. This study aims to develop a data exploration algorithm, namely the TrAcking Progression In Time And Space (TaPiTaS) algorithm, for understanding diffusion processes. Based on the spatial distance and temporal interval between cases, TaPiTaS detects sub-clusters (groups of events that have a high probability of having common sources), identifies progression links (the relationships between sub-clusters), and tracks progression chains (the connected components of sub-clusters). Dengue fever case data were used as an illustrative case study. The locations and temporal ranges of sub-clusters are presented, along with the progression links. The TaPiTaS algorithm contributes a more detailed and in-depth understanding of the development of progression chains, namely the geographic diffusion process.
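A much-simplified Python sketch of the space-time grouping idea: cases closer than a distance threshold and within a short time window are merged into sub-clusters, and pairs within the distance threshold but separated by a longer lag become progression links between sub-clusters; the thresholds and coordinates are illustrative, and this is not the TaPiTaS implementation.

    import math
    from collections import defaultdict

    def space_time_clusters(cases, d_max=500.0, t_short=7, t_long=21):
        """cases : list of (x, y, day) tuples.

        Pairs closer than d_max metres and t_short days are merged into the
        same sub-cluster; pairs closer than d_max but separated by
        t_short..t_long days become progression links between sub-clusters.
        """
        n = len(cases)
        parent = list(range(n))

        def find(i):
            while parent[i] != i:
                parent[i] = parent[parent[i]]
                i = parent[i]
            return i

        def union(i, j):
            parent[find(i)] = find(j)

        def dist(a, b):
            return math.hypot(a[0] - b[0], a[1] - b[1])

        # Pass 1: merge near-simultaneous, nearby cases into sub-clusters.
        for i in range(n):
            for j in range(i + 1, n):
                if dist(cases[i], cases[j]) <= d_max and abs(cases[i][2] - cases[j][2]) <= t_short:
                    union(i, j)

        # Pass 2: progression links between sub-clusters with a temporal lag.
        links = set()
        for i in range(n):
            for j in range(i + 1, n):
                lag = abs(cases[i][2] - cases[j][2])
                if dist(cases[i], cases[j]) <= d_max and t_short < lag <= t_long:
                    a, b = find(i), find(j)
                    if a != b:
                        links.add(tuple(sorted((a, b))))

        clusters = defaultdict(list)
        for i in range(n):
            clusters[find(i)].append(i)
        return dict(clusters), links

    # Toy usage: two nearby case bursts roughly one incubation period apart.
    cases = [(0, 0, 1), (100, 50, 3), (120, 0, 2), (80, 40, 15), (60, 90, 16)]
    print(space_time_clusters(cases))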
NASA Astrophysics Data System (ADS)
Wen, Fang-Qing; Zhang, Gong; Ben, De
2015-11-01
This paper addresses the direction of arrival (DOA) estimation problem for co-located multiple-input multiple-output (MIMO) radar with random arrays. The spatially distributed sparsity of the targets in the background makes compressive sensing (CS) desirable for DOA estimation. A spatial CS framework is presented, which links the DOA estimation problem to support recovery from a known over-complete dictionary. A modified statistical model is developed to accurately represent the intra-block correlation of the received signal. A structural sparsity Bayesian learning algorithm is proposed for the sparse recovery problem. The proposed algorithm, which exploits intra-signal correlation, can be applied with limited data support and in low signal-to-noise ratio (SNR) scenarios. Furthermore, the proposed algorithm has a lower computational load than the classical Bayesian algorithm. Simulation results show that the proposed algorithm provides more accurate DOA estimation than the traditional multiple signal classification (MUSIC) algorithm and other CS recovery algorithms.
Improved Regional Seismic Event Locations Using 3-D Velocity Models
1999-12-15
... regional velocity model to estimate event hypocenters. Travel times for the regional phases are calculated using a sophisticated eikonal finite ... can greatly improve estimates of event locations. Our algorithm calculates travel times using a finite difference approximation of the eikonal ... such as IASP91 or J-B. 3-D velocity models require more sophisticated travel time modeling routines; thus, we use a 3-D eikonal equation solver.
Andersson, Richard; Larsson, Linnea; Holmqvist, Kenneth; Stridh, Martin; Nyström, Marcus
2017-04-01
Almost all eye-movement researchers use algorithms to parse raw data and detect distinct types of eye movement events, such as fixations, saccades, and pursuit, and then base their results on these. Surprisingly, these algorithms are rarely evaluated. We evaluated the classifications of ten eye-movement event detection algorithms, on data from an SMI HiSpeed 1250 system, and compared them to the manual ratings of two human experts. The evaluation focused on fixations, saccades, and post-saccadic oscillations. The evaluation used both event duration parameters and sample-by-sample comparisons to rank the algorithms. The resulting event durations varied substantially as a function of which algorithm was used. This evaluation differed from previous evaluations by considering a relatively large set of algorithms, multiple event types, and data from both static and dynamic stimuli. The main conclusion is that current detectors of only fixations and saccades work reasonably well for static stimuli, but barely better than chance for dynamic stimuli. Differing results across evaluation methods make it difficult to select a single winner for fixation detection. For saccade detection, however, the algorithm by Larsson, Nyström and Stridh (IEEE Transactions on Biomedical Engineering, 60(9), 2484-2493, 2013) outperforms all algorithms on data from both static and dynamic stimuli. The data also show how improperly selected algorithms applied to dynamic data misestimate fixation and saccade properties.
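As a sketch of the sample-by-sample comparison idea, the snippet below computes Cohen's kappa between two per-sample event labelings with NumPy; the label coding and toy sequences are assumptions, not the paper's evaluation protocol.

    import numpy as np

    def cohens_kappa(labels_a, labels_b):
        """Sample-by-sample chance-corrected agreement between two event
        classifications (e.g., 0 = fixation, 1 = saccade, 2 = PSO per sample)."""
        a = np.asarray(labels_a)
        b = np.asarray(labels_b)
        classes = np.union1d(a, b)
        po = np.mean(a == b)                                               # observed agreement
        pe = sum(np.mean(a == c) * np.mean(b == c) for c in classes)       # chance agreement
        return (po - pe) / (1.0 - pe + 1e-12)

    # Toy usage: an algorithm that mislabels part of a saccade as fixation.
    human = np.array([0] * 80 + [1] * 10 + [2] * 5 + [0] * 5)
    algo  = np.array([0] * 83 + [1] * 7 + [2] * 5 + [0] * 5)
    print(round(cohens_kappa(human, algo), 3))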
Optimization methods for locating lightning flashes using magnetic direction finding networks
NASA Technical Reports Server (NTRS)
Goodman, Steven J.
1989-01-01
Techniques for producing best point estimates of target position using direction finder bearing information are reviewed. The use of an algorithm that calculates the cloud-to-ground flash location given multiple bearings is illustrated and the position errors are described. This algorithm can be used to analyze direction finder network performance.
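A minimal NumPy sketch of the underlying idea, combining bearings from several direction finders into a least-squares flash position on a flat-earth toy geometry; the station layout and bearings are illustrative, and real networks include weighting and error modeling not shown here.

    import numpy as np

    def locate_from_bearings(stations, bearings_deg):
        """Least-squares flash location from direction-finder bearings.

        stations : (n, 2) array of (east, north) sensor positions (km).
        bearings_deg : bearings measured clockwise from north.
        Minimises the perpendicular distances from the solution to the
        bearing lines (flat-earth toy geometry).
        """
        stations = np.asarray(stations, dtype=float)
        theta = np.deg2rad(np.asarray(bearings_deg, dtype=float))
        # Unit normals to each bearing line; the bearing direction itself is (sin, cos).
        normals = np.column_stack([np.cos(theta), -np.sin(theta)])
        b = np.sum(normals * stations, axis=1)
        x, *_ = np.linalg.lstsq(normals, b, rcond=None)
        return x

    # Toy usage: three DF stations observing a flash near (30, 40) km.
    true = np.array([30.0, 40.0])
    sta = np.array([[0.0, 0.0], [100.0, 0.0], [0.0, 100.0]])
    brg = [np.degrees(np.arctan2(true[0] - s[0], true[1] - s[1])) % 360 for s in sta]
    print(locate_from_bearings(sta, brg))   # approximately [30, 40]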
Joint Inversion of Source Location and Source Mechanism of Induced Microseismics
NASA Astrophysics Data System (ADS)
Liang, C.
2014-12-01
Seismic source mechanism is a useful property to indicate the source physics and the stress and strain distribution at regional, local and micro scales. In this study we jointly invert source mechanisms and locations for microseismic events induced by fluid-fracturing treatments in the oil and gas industry. For events that are big enough to see waveforms, there are quite a few techniques that can be applied to invert the source mechanism, including waveform inversion, first-polarity inversion and many other methods and variants based on these methods. However, for events that are too small to identify in seismic traces, such as the microseismic events induced by fluid fracturing in the oil and gas industry, a source scanning algorithm (SSA) with waveform stacking is usually applied. At the same time, a joint inversion of location and source mechanism is possible, but at the cost of a high computational budget. The algorithm is thereby called the Source Location and Mechanism Scanning Algorithm (SLMSA). In this case, for a given velocity structure, all possible combinations of source locations (X, Y and Z) and source mechanisms (strike, dip and rake) are used to compute travel times and polarities of waveforms. Correcting normal-moveout times and polarities, and stacking all waveforms, the (X, Y, Z, strike, dip, rake) combination that gives the strongest stacked waveform is identified as the solution. To solve the high-computation problem, CPU-GPU programming is applied. Numerical datasets are used to test the algorithm. The SLMSA has also been applied to a fluid-fracturing dataset and reveals several advantages over the location-only method: (1) for shear sources, the location-only program can hardly locate events because positive and negative polarized traces cancel out, but the SLMSA method can successfully pick up those events; (2) microseismic locations alone may not be enough to indicate the directionality of micro-fractures, and the statistics of source mechanisms can certainly provide more knowledge of the orientation of fractures; (3) in our practice, the joint inversion method almost always yields more events than the location-only method, and for those events that are also picked by the SSA method, the stacking power of SLMSA is always higher than that obtained with SSA.
NASA Astrophysics Data System (ADS)
Kawzenuk, B.; Sellars, S. L.; Nguyen, P.; Ralph, F. M.; Sorooshian, S.
2017-12-01
The CONNected objECT (CONNECT) algorithm is applied to Integrated Water Vapor Transport (IVT) data from NASA's Modern-Era Retrospective Analysis for Research and Applications - Version 2 reanalysis product for the period 1980 to 2016 to study water vapor transport globally. The algorithm generates life-cycle records as statistical objects for the time and space location of evolving strong vapor transport events. Global statistics are presented and used to investigate how climate variability impacts the events' location and frequency. Results show distinct water vapor object frequency and seasonal peaks during NH and SH winter. Moreover, a positive linear trend in the annual number of objects is reported, increasing by 3.58 objects year-over-year (+/- 1.39 at 95% confidence). In addition, we show five distinct regions where these events typically exist (southeastern United States, eastern China, the South Pacific south of 25°S, eastern South America and off the southern tip of South Africa), and regions where they rarely exist (the eastern South Pacific Ocean and the central southern Atlantic Ocean between 5°N-25°S). Finally, the event frequency and geographical location are also shown to be related to the Arctic Oscillation, the Pacific North American Pattern, and the Quasi-Biennial Oscillation.
A new approach to identify, classify and count drug-related events
Bürkle, Thomas; Müller, Fabian; Patapovas, Andrius; Sonst, Anja; Pfistermeister, Barbara; Plank-Kiegele, Bettina; Dormann, Harald; Maas, Renke
2013-01-01
Aims: The incidence of clinical events related to medication errors and/or adverse drug reactions reported in the literature varies by a degree that cannot solely be explained by the clinical setting, the varying scrutiny of investigators or varying definitions of drug-related events. Our hypothesis was that the individual complexity of many clinical cases may pose relevant limitations for current definitions and algorithms used to identify, classify and count adverse drug-related events. Methods: Based on clinical cases derived from an observational study we identified and classified common clinical problems that cannot be adequately characterized by the currently used definitions and algorithms. Results: It appears that some key models currently used to describe the relation of medication errors (MEs), adverse drug reactions (ADRs) and adverse drug events (ADEs) can easily be misinterpreted or contain logical inconsistencies that limit their accurate use to all but the simplest clinical cases. A key limitation of current models is the inability to deal with complex interactions such as one drug causing two clinically distinct side effects or multiple drugs contributing to a single clinical event. Using a large set of clinical cases we developed a revised model of the interdependence between MEs, ADEs and ADRs and extended current event definitions when multiple medications cause multiple types of problems. We propose algorithms that may help to improve the identification, classification and counting of drug-related events. Conclusions: The new model may help to overcome some of the limitations that complex clinical cases pose to current paper- or software-based drug therapy safety. PMID: 24007453
Waldhauser, F.; Ellsworth, W.L.
2000-01-01
We have developed an efficient method to determine high-resolution hypocenter locations over large distances. The location method incorporates ordinary absolute travel-time measurements and/or cross-correlation P- and S-wave differential travel-time measurements. Residuals between observed and theoretical travel-time differences (or double-differences) are minimized for pairs of earthquakes at each station while linking together all observed event-station pairs. A least-squares solution is found by iteratively adjusting the vector difference between hypocentral pairs. The double-difference algorithm minimizes errors due to unmodeled velocity structure without the use of station corrections. Because catalog and cross-correlation data are combined into one system of equations, interevent distances within multiplets are determined to the accuracy of the cross-correlation data, while the relative locations between multiplets and uncorrelated events are simultaneously determined to the accuracy of the absolute travel-time data. Statistical resampling methods are used to estimate data accuracy and location errors. Uncertainties in double-difference locations are improved by more than an order of magnitude compared to catalog locations. The algorithm is tested, and its performance is demonstrated, on two clusters of earthquakes located on the northern Hayward fault, California. There it collapses the diffuse catalog locations into sharp images of seismicity and reveals horizontal lineations of hypocenters that define the narrow regions on the fault where stress is released by brittle failure.
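A toy NumPy sketch of the double-difference idea for a single event pair in a constant-velocity 2-D medium: residuals between observed and predicted travel-time differences are minimized by iteratively perturbing one event's location and origin time relative to a fixed partner; the velocity, station geometry and noise levels are assumptions, and the full algorithm's weighting, damping and multi-pair linking are omitted.

    import numpy as np

    def travel_time(src, stations, v=3.5):
        return np.linalg.norm(stations - src, axis=-1) / v

    def dd_relocate(stations, obs_t1, obs_t2, x1_init, x2_fixed, v=3.5, n_iter=5):
        """Relocate event 1 relative to a fixed event 2 by minimising
        dr_k = (t_k^1 - t_k^2)_obs - (t_k^1 - t_k^2)_cal over (dx, dy, dtau)."""
        x1 = np.array(x1_init, dtype=float)
        tau1 = 0.0
        for _ in range(n_iter):
            cal1 = travel_time(x1, stations, v) + tau1
            cal2 = travel_time(x2_fixed, stations, v)
            dr = (obs_t1 - obs_t2) - (cal1 - cal2)          # double differences
            d = stations - x1
            r = np.linalg.norm(d, axis=1)
            G = np.column_stack([-d[:, 0] / (v * r), -d[:, 1] / (v * r), np.ones(len(stations))])
            dm, *_ = np.linalg.lstsq(G, dr, rcond=None)      # linearised update
            x1 += dm[:2]
            tau1 += dm[2]
        return x1, tau1

    # Toy usage: recover the true offset of event 1 from slightly noisy picks.
    rng = np.random.default_rng(4)
    stations = np.array([[0, 40], [40, 0], [-30, 20], [10, -35], [-25, -25]], dtype=float)
    true1, true2 = np.array([2.0, 1.0]), np.array([0.0, 0.0])
    obs1 = travel_time(true1, stations) + 0.3 + 0.01 * rng.normal(size=5)   # 0.3 s origin-time shift
    obs2 = travel_time(true2, stations) + 0.01 * rng.normal(size=5)
    print(dd_relocate(stations, obs1, obs2, x1_init=[0.0, 0.0], x2_fixed=true2))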
Multiple-Point Temperature Gradient Algorithm for Ring Laser Gyroscope Bias Compensation
Li, Geng; Zhang, Pengfei; Wei, Guo; Xie, Yuanping; Yu, Xudong; Long, Xingwu
2015-01-01
To further improve ring laser gyroscope (RLG) bias stability, a multiple-point temperature gradient algorithm is proposed for RLG bias compensation in this paper. Based on the multiple-point temperature measurement system, a complete thermo-image of the RLG block is developed. Combined with the multiple-point temperature gradients between different points of the RLG block, the particle swarm optimization algorithm is used to tune the support vector machine (SVM) parameters, and an optimized design for selecting the thermometer locations is also discussed. The experimental results validate the superiority of the introduced method and enhance the precision and generalizability in the RLG bias compensation model. PMID:26633401
Quantum partial search for uneven distribution of multiple target items
NASA Astrophysics Data System (ADS)
Zhang, Kun; Korepin, Vladimir
2018-06-01
The quantum partial search algorithm is an approximate search. It aims to find a target block (one that contains target items). It runs slightly faster than the full Grover search. In this paper, we consider the quantum partial search algorithm for multiple target items unevenly distributed in a database (target blocks have different numbers of target items). The algorithm we describe can locate one of the target blocks. The efficiency of the algorithm is measured by the number of queries to the oracle. We optimize the algorithm in order to improve efficiency. Using a perturbation method, we find that the algorithm runs fastest when the target items are evenly distributed in the database.
Arrowsmith, Stephen John; Young, Christopher J.; Ballard, Sanford; ...
2016-01-01
The standard paradigm for seismic event monitoring breaks the event detection problem down into a series of processing stages that can be categorized at the highest level into station-level processing and network-level processing algorithms (e.g., Le Bras and Wuster (2002)). At the station-level, waveforms are typically processed to detect signals and identify phases, which may subsequently be updated based on network processing. At the network-level, phase picks are associated to form events, which are subsequently located. Furthermore, waveforms are typically directly exploited only at the station-level, while network-level operations rely on earth models to associate and locate the events that generated the phase picks.
Real-time Automatic Detectors of P and S Waves Using Singular Values Decomposition
NASA Astrophysics Data System (ADS)
Kurzon, I.; Vernon, F.; Rosenberger, A.; Ben-Zion, Y.
2013-12-01
We implement a new method for the automatic detection of primary P and S phases using singular value decomposition (SVD) analysis. The method is based on the real-time iteration algorithm of Rosenberger (2010) for the SVD of three-component seismograms. Rosenberger's algorithm identifies the incidence angle by applying SVD and separates the waveforms into their P and S components. We have been using the same algorithm with the modification that we filter the waveforms prior to the SVD and then apply SNR (signal-to-noise ratio) detectors for picking the P and S arrivals on the new filtered, SVD-separated channels. A recent deployment in the San Jacinto Fault Zone (SJFZ) area provides a very dense seismic network that allows us to test the detection algorithm in diverse settings, such as: events with different source mechanisms, stations with different site characteristics, and ray paths that diverge from the SVD approximation used in the algorithm (e.g., rays propagating within the fault and recorded on linear arrays crossing the fault). We have found that a Butterworth band-pass filter of 2-30 Hz, with four poles at each corner frequency, shows the best performance for a large variety of events and stations within the SJFZ. Using the SVD detectors we obtain a similar number of P and S picks, which is rare for ordinary SNR detectors. For the actual real-time operation of the ANZA and SJFZ real-time seismic networks, the above filter (2-30 Hz) also shows very impressive performance, tested on many events and several aftershock sequences in the region, from the MW 5.2 of June 2005, through the MW 5.4 of July 2010, to the MW 4.7 of March 2013. Here we show the results of testing the detectors on the most complex and intense aftershock sequence, the MW 5.2 of June 2005, in which there were ~4 events a minute in the very first hour. This aftershock sequence was thoroughly reviewed by several analysts, who identified 294 events in the first hour, located in a condensed cluster around the main shock. We used this hour of events to fine-tune the automatic SVD detection, association and location of the real-time system, reaching 37% automatic identification and location of events with a minimum of 10 stations per event; all events fall within the same condensed cluster and there are no false events or large offsets in their locations. An ordinary SNR detector did not exceed 11% success, with a minimum of 8 stations per event, 2 false events and a wider spread of events (not within the reviewed cluster). One of the main advantages of the SVD detectors for real-time operations is the actual separation of the P and S components, which significantly reduces the noise of picks detected by ordinary SNR detectors. The new method has been applied to a significant number of events within the SJFZ over the past 8 years and is now in the final stage of real-time implementation at UCSD for the ANZA and SJFZ networks, tuned for automatic detection and location of local events.
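A batch NumPy/SciPy sketch of the processing chain described above: a four-pole, zero-phase 2-30 Hz Butterworth band-pass, an SVD of the three-component window to project onto the dominant polarization directions, and a simple STA/LTA trigger standing in for the SNR detector; this is not the Rosenberger (2010) real-time iteration, and the window lengths, thresholds and synthetic data are assumptions.

    import numpy as np
    from scipy.signal import butter, sosfiltfilt

    def bandpass(data, fs, fmin=2.0, fmax=30.0, order=4):
        """Zero-phase Butterworth band-pass applied to each component."""
        sos = butter(order, [fmin, fmax], btype="bandpass", fs=fs, output="sos")
        return sosfiltfilt(sos, data, axis=-1)

    def svd_separate(three_comp):
        """Batch stand-in for the real-time SVD separation: project the 3-C window
        onto its first and second singular vectors (crude P- and S-type channels)."""
        X = three_comp - three_comp.mean(axis=1, keepdims=True)   # shape (3, n)
        U, _, _ = np.linalg.svd(X, full_matrices=False)
        return U[:, 0] @ X, U[:, 1] @ X

    def snr_pick(trace, fs, sta_s=0.5, lta_s=5.0, threshold=5.0):
        """Return the first sample where STA/LTA exceeds the threshold, or None."""
        e = trace ** 2
        sta_n, lta_n = int(sta_s * fs), int(lta_s * fs)
        csum = np.concatenate(([0.0], np.cumsum(e)))
        for i in range(lta_n, e.size - sta_n):
            lta = (csum[i] - csum[i - lta_n]) / lta_n
            sta = (csum[i + sta_n] - csum[i]) / sta_n
            if lta > 0 and sta / lta > threshold:
                return i
        return None

    # Toy usage: a rectilinear 10 Hz pulse arriving around t = 8 s on 3 components.
    fs = 100.0
    t = np.arange(0, 20, 1 / fs)
    rng = np.random.default_rng(5)
    pulse = np.where((t > 8) & (t < 8.5), np.sin(2 * np.pi * 10 * (t - 8)), 0.0)
    polar = np.array([0.8, 0.5, 0.3])[:, None]       # fixed polarization direction
    data = polar * pulse + 0.02 * rng.normal(size=(3, t.size))
    p_chan, s_chan = svd_separate(bandpass(data, fs))
    print("P-channel trigger near sample:", snr_pick(p_chan, fs))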
NASA Astrophysics Data System (ADS)
Lagos, Soledad R.; Velis, Danilo R.
2018-02-01
We perform the location of microseismic events generated in hydraulic fracturing monitoring scenarios using two global optimization techniques, Very Fast Simulated Annealing (VFSA) and Particle Swarm Optimization (PSO), and compare them against the classical grid search (GS). To this end, we present an integrated and optimized workflow that concatenates into an automated bash script the different steps that lead from raw 3C data to the microseismic event locations. First, we carry out the automatic detection, denoising and identification of the P- and S-waves. Secondly, we estimate their corresponding backazimuths using polarization information, and propose a simple energy-based criterion to automatically decide which is the most reliable estimate. Finally, after taking proper care of the size of the search space using the backazimuth information, we perform the location using the aforementioned algorithms for typical 2D and 3D scenarios of hydraulic fracturing processes. We assess the impact of restricting the search space and show the advantages of using either VFSA or PSO over GS to attain significant speed-ups.
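For illustration, a small NumPy sketch contrasting a grid search with a basic particle swarm on the same arrival-time misfit in a homogeneous 2-D medium; the velocity, receiver layout and swarm parameters are assumptions, and this is not the authors' VFSA/PSO implementation.

    import numpy as np

    V = 3000.0   # homogeneous P velocity (m/s), illustrative

    def misfit(src, stations, obs_t):
        """L2 misfit of P arrival times for a trial source (x, y, t0)."""
        pred = np.linalg.norm(stations - src[:2], axis=1) / V + src[2]
        return np.sum((pred - obs_t) ** 2)

    def grid_search(stations, obs_t, extent=2000.0, step=50.0):
        best, best_m = None, np.inf
        for x in np.arange(-extent, extent + step, step):
            for y in np.arange(-extent, extent + step, step):
                # The best origin time for a fixed (x, y) is the mean arrival residual.
                t0 = np.mean(obs_t - np.linalg.norm(stations - [x, y], axis=1) / V)
                m = misfit(np.array([x, y, t0]), stations, obs_t)
                if m < best_m:
                    best, best_m = np.array([x, y, t0]), m
        return best

    def pso(stations, obs_t, n_particles=30, n_iter=80, extent=2000.0, seed=6):
        rng = np.random.default_rng(seed)
        pos = rng.uniform([-extent, -extent, -1.0], [extent, extent, 1.0], (n_particles, 3))
        vel = np.zeros_like(pos)
        pbest, pbest_m = pos.copy(), np.array([misfit(p, stations, obs_t) for p in pos])
        gbest = pbest[np.argmin(pbest_m)].copy()
        for _ in range(n_iter):
            r1, r2 = rng.uniform(size=(2, n_particles, 1))
            vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
            pos = pos + vel
            m = np.array([misfit(p, stations, obs_t) for p in pos])
            better = m < pbest_m
            pbest[better], pbest_m[better] = pos[better], m[better]
            gbest = pbest[np.argmin(pbest_m)].copy()
        return gbest

    # Toy usage: five receivers, true source at (300, -450) m with t0 = 0.2 s.
    stations = np.array([[0, 1000], [1000, 0], [-800, 600], [500, -900], [-700, -700]], float)
    true = np.array([300.0, -450.0, 0.2])
    obs_t = np.linalg.norm(stations - true[:2], axis=1) / V + true[2]
    print("grid search estimate:", grid_search(stations, obs_t))
    print("PSO estimate        :", pso(stations, obs_t))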
Multiple-Array Detection, Association and Location of Infrasound and Seismo-Acoustic Events – Utilization of Ground-Truth Information (AFRL-RV-PS-TP-2012-0017; contract FA8718-08-C-0008)
2012-05-07
... infrasound signals from both correlated and uncorrelated noise. Approaches to this problem are implementation of the F-detector, which employs the F ...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Simmons, N. A.; Myers, S. C.; Johannesson, G.
We develop a global-scale P wave velocity model (LLNL-G3Dv3) designed to accurately predict seismic travel times at regional and teleseismic distances simultaneously. The model provides a new image of Earth's interior, but the underlying practical purpose of the model is to provide enhanced seismic event location capabilities. The LLNL-G3Dv3 model is based on ∼2.8 million P and Pn arrivals that are re-processed using our global multiple-event locator called Bayesloc. We construct LLNL-G3Dv3 within a spherical tessellation based framework, allowing for explicit representation of undulating and discontinuous layers including the crust and transition zone layers. Using a multiscale inversion technique, regional trends as well as fine details are captured where the data allow. LLNL-G3Dv3 exhibits large-scale structures including cratons and superplumes as well as numerous complex details in the upper mantle, including within the transition zone. In particular, the model reveals new details of a vast network of subducted slabs trapped within the transition zone beneath much of Eurasia, including beneath the Tibetan Plateau. We demonstrate the impact of Bayesloc multiple-event location on the resulting tomographic images through comparison with images produced without the benefit of multiple-event constraints (single-event locations). We find that the multiple-event locations allow for better reconciliation of the large set of direct P phases recorded at 0-97° distance and yield a smoother and more continuous image relative to the single-event locations. Travel times predicted from a 3-D model are also found to be strongly influenced by the initial locations of the input data, even when an iterative inversion/relocation technique is employed.
Abnormal global and local event detection in compressive sensing domain
NASA Astrophysics Data System (ADS)
Wang, Tian; Qiao, Meina; Chen, Jie; Wang, Chuanyun; Zhang, Wenjia; Snoussi, Hichem
2018-05-01
Abnormal event detection, also known as anomaly detection, is a challenging task in security video surveillance. It is important to develop effective and robust movement representation models for global and local abnormal event detection to cope with factors such as occlusion and illumination change. In this paper, a new algorithm is proposed. It can locate abnormal events within a frame and detect globally abnormal frames. The proposed algorithm employs a sparse measurement matrix designed to represent the movement feature based on optical flow efficiently. The abnormal detection task is then formulated as a one-class classification problem, learning only from normal training samples. Experiments demonstrate that our algorithm performs well on benchmark abnormal detection datasets against state-of-the-art methods.
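A rough scikit-learn/NumPy sketch of the overall recipe under stated assumptions: per-frame motion features (here placeholder optical-flow magnitude histograms) are compressed with a random sparse measurement matrix and a one-class SVM is trained on normal frames only; the feature design, matrix density and classifier settings are illustrative, not the paper's.

    import numpy as np
    from sklearn.svm import OneClassSVM

    rng = np.random.default_rng(7)

    def sparse_measurement_matrix(m, n, density=0.1):
        """Random sparse {0, +1, -1} matrix used to compress per-frame motion
        features (dimensions and density are illustrative)."""
        mask = rng.random((m, n)) < density
        signs = rng.choice([-1.0, 1.0], size=(m, n))
        return mask * signs / np.sqrt(m * density)

    # Placeholder motion features: 256-bin optical-flow magnitude histograms.
    n_features, n_compressed = 256, 32
    Phi = sparse_measurement_matrix(n_compressed, n_features)

    normal_frames = rng.gamma(shape=2.0, scale=1.0, size=(500, n_features))   # training: normal only
    test_normal = rng.gamma(shape=2.0, scale=1.0, size=(50, n_features))
    test_abnormal = rng.gamma(shape=2.0, scale=4.0, size=(50, n_features))    # much faster motion

    clf = OneClassSVM(kernel="rbf", nu=0.05, gamma="scale")
    clf.fit(normal_frames @ Phi.T)

    pred_normal = clf.predict(test_normal @ Phi.T)       # +1 = normal, -1 = abnormal
    pred_abnormal = clf.predict(test_abnormal @ Phi.T)
    print("flagged in normal set  :", int(np.sum(pred_normal == -1)))
    print("flagged in abnormal set:", int(np.sum(pred_abnormal == -1)))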
NASA Astrophysics Data System (ADS)
Qu, Hongquan; Yuan, Shijiao; Wang, Yanping; Yang, Dan
2018-04-01
To improve the recognition performance of optical fiber prewarning systems (OFPS), this study proposes a hierarchical recognition algorithm (HRA). Traditional methods employ a single complex algorithm that combines multiple extracted features and complex classifiers to increase the recognition rate, at the cost of a considerable decrease in recognition speed. In contrast, HRA takes advantage of the continuity of intrusion events, creating a staged recognition flow inspired by the stress reaction mechanism, and is expected to achieve high recognition accuracy with less time consumption. First, this work analyzes the continuity of intrusion events and then presents the algorithm based on the stress reaction mechanism. Finally, it verifies the time consumption through theoretical analysis and experiments, and the recognition accuracy is obtained through experiments. Experimental results show that the processing speed of HRA is 3.3 times faster than that of a traditional complicated algorithm, with a similar recognition rate of 98%. The study is of great significance for fast intrusion event recognition in OFPS.
Precision Seismic Monitoring of Volcanic Eruptions at Axial Seamount
NASA Astrophysics Data System (ADS)
Waldhauser, F.; Wilcock, W. S. D.; Tolstoy, M.; Baillard, C.; Tan, Y. J.; Schaff, D. P.
2017-12-01
Seven permanent ocean-bottom seismometers of the Ocean Observatories Initiative's real-time cabled observatory at Axial Seamount, off the coast of the western United States, have recorded seismic activity since 2014. The array captured the April 2015 eruption, shedding light on the detailed structure and dynamics of the volcano and the Juan de Fuca mid-ocean ridge system (Wilcock et al., 2016). After a period of continuously increasing seismic activity, primarily associated with the reactivation of caldera ring faults, and the subsequent seismic crisis on April 24, 2015, with 7000 recorded events that day, seismicity rates steadily declined; the array currently records an average of 5 events per day. Here we present results from ongoing efforts to automatically detect and precisely locate seismic events at Axial in real time, providing the computational framework and fundamental data that will allow rapid characterization and analysis of spatio-temporal changes in seismogenic properties. We combine a kurtosis-based P- and S-phase onset picker and time-domain cross-correlation detection and phase delay timing algorithms together with single-event and double-difference location methods to rapidly and precisely (tens of meters) compute the locations and magnitudes of new events with respect to a 2-year long, high-resolution background catalog that includes nearly 100,000 events within a 5×5 km region. We extend the real-time double-difference location software DD-RT to efficiently handle the anticipated high rate and density of earthquake activity during future eruptions. The modular monitoring framework will allow real-time tracking of other seismic events such as tremors and sea-floor lava explosions, enabling the timing and location of lava flows and thus guiding response research cruises to the most interesting sites. Finally, rapid detection of eruption precursors and initiation will allow adaptive sampling by the OOI instruments for optimal recording of future eruptions. With a higher eruption recurrence rate than land-based volcanoes, the Axial OOI observatory offers the opportunity to monitor and study volcanic eruptions throughout multiple cycles.
[Algorithms of multiband remote sensing for coastal red tide waters].
Mao, Xianmou; Huang, Weigen
2003-07-01
The spectral characteristics of coastal waters in the East China Sea were studied using in situ measurements, and multiband remote sensing algorithms for bloom waters were discussed and developed. Examples of red tide detection using the algorithms in the East China Sea are presented. The results showed that the algorithms can provide information about the location and area coverage of red tide events.
Han, Guangjie; Liu, Li; Jiang, Jinfang; Shu, Lei; Rodrigues, Joel J.P.C.
2016-01-01
Localization is one of the hottest research topics in Underwater Wireless Sensor Networks (UWSNs), since many important applications of UWSNs, e.g., event sensing, target tracking and monitoring, require location information of sensor nodes. Nowadays, a large number of localization algorithms have been proposed for UWSNs, and how to improve location accuracy is well studied. However, few of them take location reliability or security into consideration. In this paper, we propose a Collaborative Secure Localization algorithm based on a Trust model (CSLT) for UWSNs to ensure location security. Based on the trust model, the secure localization process can be divided into the following five sub-processes: trust evaluation of anchor nodes, initial localization of unknown nodes, trust evaluation of reference nodes, selection of reference nodes, and secondary localization of unknown nodes. Simulation results demonstrate that the proposed CSLT algorithm performs better than the compared related works in terms of location security, average localization accuracy and localization ratio. PMID:26891300
Explosion localization and characterization via infrasound using numerical modeling
NASA Astrophysics Data System (ADS)
Fee, D.; Kim, K.; Iezzi, A. M.; Matoza, R. S.; Jolly, A. D.; De Angelis, S.; Diaz Moreno, A.; Szuberla, C.
2017-12-01
Numerous methods have been applied to locate, detect, and characterize volcanic and anthropogenic explosions using infrasound. Far-field localization techniques typically use back-azimuths from multiple arrays (triangulation) or Reverse Time Migration (RTM, or back-projection). At closer ranges, networks surrounding a source may use Time Difference of Arrival (TDOA), semblance, station-pair double difference, etc. However, at volcanoes and regions with topography or obstructions that block the direct path of sound, recent studies have shown that numerical modeling is necessary to provide an accurate source location. A heterogeneous and moving atmosphere (winds) may also affect the location. The time reversal mirror (TRM) application of Kim et al. (2015) back-propagates the wavefield using a Finite Difference Time Domain (FDTD) algorithm, with the source corresponding to the location of peak convergence. Although it provides high-resolution source localization and can account for complex wave propagation, TRM is computationally expensive and limited to individual events. Here we present a new technique, termed RTM-FDTD, which integrates TRM and FDTD. Travel time and transmission loss information is computed from each station to the entire potential source grid from 3-D Green's functions derived via FDTD. The wave energy is then back-projected and stacked at each grid point, with the maximum corresponding to the likely source. We apply our method to detect and characterize thousands of explosions from Yasur Volcano, Vanuatu, and Etna Volcano, Italy, both of which involve complex wave propagation and multiple source locations. We compare our results with those from more traditional methods (e.g. semblance), and suggest our method is preferable as it is computationally less expensive than TRM but still integrates numerical modeling. RTM-FDTD could be applied to volcanic and other anthropogenic sources at a wide variety of ranges and scenarios. Kim, K., Lees, J.M., 2015. Imaging volcanic infrasound sources using time reversal mirror algorithm. Geophysical Journal International 202, 1663-1676.
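The back-project-and-stack step described here can be illustrated with a simple grid search: for each candidate grid node, station waveform envelopes are shifted by precomputed travel times, corrected for transmission loss, and summed, and the node with the largest stack is taken as the likely source. This is a minimal sketch under assumed inputs (precomputed travel-time and transmission-loss tables), not the RTM-FDTD implementation itself.

```python
import numpy as np

def rtm_stack(envelopes, travel_times, gains, dt):
    """Back-project station envelopes onto a candidate source grid and stack.

    envelopes    : (n_sta, n_samp) waveform envelopes
    travel_times : (n_sta, n_grid) travel times [s] precomputed for every
                   station/grid-node pair (e.g. from 3-D FDTD Green's functions)
    gains        : (n_sta, n_grid) transmission-loss corrections
    dt           : sample interval [s]
    Returns the peak stacked amplitude per grid node; the argmax over nodes
    marks the most likely source location.
    """
    n_sta, n_samp = envelopes.shape
    n_grid = travel_times.shape[1]
    peak = np.zeros(n_grid)
    for g in range(n_grid):
        stack = np.zeros(n_samp)
        for s in range(n_sta):
            k = int(round(travel_times[s, g] / dt))   # shift back to origin time
            if k < n_samp:
                stack[:n_samp - k] += gains[s, g] * envelopes[s, k:]
        peak[g] = stack.max()
    return peak
```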
Liu, Chen-Yi; Goertzen, Andrew L
2013-07-21
An iterative position-weighted centre-of-gravity algorithm was developed and tested for positioning events in a silicon photomultiplier (SiPM)-based scintillation detector for positron emission tomography. The algorithm used a Gaussian-based weighting function centred at the current estimate of the event location. The algorithm was applied to the signals from a 4 × 4 array of SiPM detectors that used individual channel readout and a LYSO:Ce scintillator array. Three scintillator array configurations were tested: single layer with 3.17 mm crystal pitch, matched to the SiPM size; single layer with 1.5 mm crystal pitch; and dual layer with 1.67 mm crystal pitch and a ½ crystal offset in the X and Y directions between the two layers. The flood histograms generated by this algorithm were shown to be superior to those generated by the standard centre of gravity. The width of the Gaussian weighting function of the algorithm was optimized for different scintillator array setups. The optimal width of the Gaussian curve was found to depend on the amount of light spread. The algorithm required less than 20 iterations to calculate the position of an event. The rapid convergence of this algorithm will readily allow for implementation on a front-end detector processing field programmable gate array for use in improved real-time event positioning and identification.
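The iterative weighting scheme described above can be sketched as follows: start from the plain centre-of-gravity estimate, then repeatedly recompute a centroid in which each SiPM channel is weighted by its signal times a Gaussian centred on the current position estimate. This is an illustrative sketch; the array geometry, Gaussian width, and convergence tolerance are assumptions rather than the authors' values.

```python
import numpy as np

def iterative_weighted_cog(signals, xy, sigma=3.0, n_iter=20, tol=1e-3):
    """Iterative position-weighted centre-of-gravity event positioning.

    signals : (16,) channel amplitudes from a 4x4 SiPM array
    xy      : (16, 2) channel centre coordinates [mm]
    sigma   : width of the Gaussian weighting function [mm] (assumed value)
    """
    # Initial estimate: the standard centre of gravity
    pos = np.average(xy, axis=0, weights=signals)
    for _ in range(n_iter):
        d2 = np.sum((xy - pos) ** 2, axis=1)
        w = signals * np.exp(-d2 / (2.0 * sigma ** 2))
        new_pos = np.average(xy, axis=0, weights=w)
        if np.linalg.norm(new_pos - pos) < tol:   # converged
            return new_pos
        pos = new_pos
    return pos
```

Narrowing `sigma` sharpens the weighting toward channels near the current estimate, which is the knob the abstract reports optimizing against the amount of light spread.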
NASA Astrophysics Data System (ADS)
Ziegler, A.; Balch, R. S.; van Wijk, J.
2015-12-01
Farnsworth Oil Field in North Texas hosts an ongoing carbon capture, utilization, and storage project. This study is focused on passive seismic monitoring at the carbon injection site to measure, locate, and catalog any induced seismic events. A Geometrics Geode system is being used for continuous recording from the passive downhole borehole array in a monitoring well. The array consists of 3-component, dual Geospace OMNI-2400 15 Hz geophones with a vertical spacing of 30.5 m. Downhole temperature and pressure are also monitored. Seismic data are recorded continuously at a rate of over 900 GB per month, which must be archived and reviewed. A Short Term Average/Long Term Average (STA/LTA) algorithm was evaluated for its ability to search for events, including identification and quantification of any false positive events. It was determined that the algorithm was not appropriate for event detection with the background level of noise at the field site and for the recording equipment as configured, and alternatives are being investigated. The final intended outcome of the passive seismic monitoring is to mine the continuous database, develop a catalog of microseismic events and locations, and determine whether there is any relationship to CO2 injection in the field. Identifying the location of any microseismic events will allow for correlation with carbon injection locations and previously characterized geological and structural features such as faults and paleoslopes. Additionally, the borehole array has recorded over 1200 active sources, with three sweeps at each source location, acquired during a nearby 3D VSP. These data were evaluated for usability and for location within an effective radius of the array, stacked to improve the signal-to-noise ratio, and used to calibrate a full-field velocity model to enhance event location accuracy. Funding for this project is provided by the U.S. Department of Energy under Award No. DE-FC26-05NT42591.
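For reference, a basic STA/LTA trigger of the kind evaluated here compares a short-term average of signal energy with a long-term average and declares a detection when the ratio exceeds a threshold. The window lengths and threshold below are illustrative assumptions, not the values tested at the field site.

```python
import numpy as np

def sta_lta_detect(trace, fs, sta_win=1.0, lta_win=30.0, threshold=3.0):
    """Return sample indices where the STA/LTA ratio exceeds the threshold.

    trace : 1-D array of seismic samples
    fs    : sampling rate [Hz]
    """
    nsta, nlta = int(sta_win * fs), int(lta_win * fs)
    energy = trace.astype(float) ** 2
    csum = np.concatenate(([0.0], np.cumsum(energy)))
    sta = (csum[nsta:] - csum[:-nsta]) / nsta   # short-term average energy
    lta = (csum[nlta:] - csum[:-nlta]) / nlta   # long-term average energy
    # Align both series by the sample at which each window ends
    n = min(len(sta), len(lta))
    ratio = sta[-n:] / np.maximum(lta[-n:], 1e-12)
    triggers = np.where(ratio > threshold)[0] + (len(trace) - n)
    return triggers
```

In practice a high false-positive count, as reported in this study, is usually addressed by retuning the windows and threshold or by switching to correlation- or kurtosis-based detectors.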
Serious injury prediction algorithm based on large-scale data and under-triage control.
Nishimoto, Tetsuya; Mukaigawa, Kosuke; Tominaga, Shigeru; Lubbe, Nils; Kiuchi, Toru; Motomura, Tomokazu; Matsumoto, Hisashi
2017-01-01
The present study was undertaken to construct an algorithm for an advanced automatic collision notification system based on national traffic accident data compiled by the Japanese police. While US research into the development of serious-injury prediction algorithms is based on a logistic regression algorithm using the National Automotive Sampling System/Crashworthiness Data System, the present injury prediction algorithm was based on comprehensive police data covering all accidents that occurred across Japan. The particular focus of this research is to improve the rescue of injured vehicle occupants in traffic accidents, and the present algorithm assumes the use of onboard event data recorder (EDR) data, from which risk factors such as pseudo delta-V, vehicle impact location, seatbelt use, involvement in a single-impact or multiple-impact crash, and the occupant's age can be derived. As a result, a simple and practical algorithm suited for onboard vehicle installation was constructed from a sample of half of the available police data. The other half of the police data was used for validation testing of this new algorithm using receiver operating characteristic analysis. An additional validation was conducted using in-depth investigation of accident injuries in collaboration with prospective host emergency care institutes. The validated algorithm, named the TOYOTA-Nihon University algorithm, proved to be as useful as the US URGENCY and other existing algorithms. Furthermore, an under-triage control analysis found that the present algorithm could achieve an under-triage rate of less than 10% by setting a threshold of 8.3%. Copyright © 2016 Elsevier Ltd. All rights reserved.
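A prediction algorithm of this general shape can be sketched as a probabilistic classifier over EDR-derived risk factors, with the alert threshold chosen so that the under-triage rate (serious injuries missed) stays below a target such as 10%. The use of logistic regression, the feature list, and the threshold search below are illustrative assumptions, not the TOYOTA-Nihon University algorithm itself.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def fit_and_choose_threshold(X_train, y_train, X_val, y_val, max_under_triage=0.10):
    """Fit a serious-injury predictor and pick the highest alert threshold
    whose under-triage rate (missed serious injuries) stays within the target.

    X columns might be: pseudo delta-V, impact location code, belt use,
    multiple-impact flag, occupant age (hypothetical feature set).
    """
    model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
    p = model.predict_proba(X_val)[:, 1]
    serious = (np.asarray(y_val) == 1)
    best_threshold = 0.0
    for t in np.linspace(0.0, 1.0, 1001):
        missed = np.sum(serious & (p < t)) / max(np.sum(serious), 1)
        if missed <= max_under_triage:
            best_threshold = t        # still meets the under-triage target
        else:
            break                     # threshold too strict; stop searching
    return model, best_threshold
```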
Identifying Patterns in the Weather of Europe for Source Term Estimation
NASA Astrophysics Data System (ADS)
Klampanos, Iraklis; Pappas, Charalambos; Andronopoulos, Spyros; Davvetas, Athanasios; Ikonomopoulos, Andreas; Karkaletsis, Vangelis
2017-04-01
During emergencies that involve the release of hazardous substances into the atmosphere, the potential health effects on the human population and the environment are of primary concern. Such events have occurred in the past, most notably involving radioactive and toxic substances; examples of radioactive release events include the Chernobyl accident in 1986 and the more recent Fukushima Daiichi accident in 2011. Often, the release of dangerous substances into the atmosphere is detected at locations different from the release origin. The objective of this work is the rapid estimation of such unknown sources shortly after the detection of dangerous substances in the atmosphere, with an initial focus on nuclear or radiological releases. Typically, after the detection of a radioactive substance in the atmosphere indicating the occurrence of an unknown release, the source location is estimated via inverse modelling. However, depending on factors such as the desired spatial resolution, traditional inverse modelling can be computationally time-consuming. This is especially true for cases involving complex topography and weather conditions, and can therefore be problematic when timing is critical. Making use of machine learning techniques and the Big Data Europe platform, our approach moves the bulk of the computation to before any such event takes place, therefore allowing rapid initial, albeit rougher, estimations of the source location. Our proposed approach is based on the automatic identification of weather patterns within the European continent. Identifying weather patterns has long been an active research field; our case is differentiated by its focus on plume dispersion patterns and on those meteorological variables that affect dispersion the most. For a small set of recurrent weather patterns, we simulate hypothetical radioactive releases from a pre-known set of nuclear reactor locations and for different substance and temporal parameters, using the Java version of the Euratom-funded RODOS (Real-time On-line DecisiOn Support) system for off-site emergency management after nuclear accidents. Once dispersions have been pre-computed, and immediately after a detected release, the currently observed weather can be matched to the derived weather classes. Since each weather class corresponds to a different plume dispersion pattern, the classes closest to an unseen weather sample, say the current weather, are the most likely to lead us to the release origin. In addressing the above problem, we make use of multiple years of weather reanalysis data from NCAR's version of ECMWF's ERA-Interim. To derive useful weather classes, we evaluate several algorithms, ranging from straightforward unsupervised clustering to more complex methods, including relevant neural-network algorithms, on multiple variables. Variables and feature sets, clustering algorithms and evaluation approaches are all dealt with and presented experimentally. The Big Data Europe platform allows for the implementation and execution of the above tasks in the cloud, in a scalable, robust and efficient way.
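The weather-pattern identification step can be illustrated with a straightforward unsupervised clustering of reanalysis fields: flatten each time slice of the dispersion-relevant variables into a feature vector, reduce dimensionality, and cluster. The variable choice, number of classes, and use of PCA plus k-means below are assumptions for illustration, not the configuration evaluated in this work.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

def derive_weather_classes(u_wind, v_wind, n_classes=8, n_components=20):
    """Cluster reanalysis time slices into recurrent weather patterns.

    u_wind, v_wind : (n_times, n_lat, n_lon) gridded wind components
    Returns the fitted PCA and KMeans objects plus a class label per time slice.
    """
    n_times = u_wind.shape[0]
    features = np.hstack([u_wind.reshape(n_times, -1),
                          v_wind.reshape(n_times, -1)])
    pca = PCA(n_components=n_components).fit(features)
    km = KMeans(n_clusters=n_classes, n_init=10, random_state=0).fit(pca.transform(features))
    return pca, km, km.labels_

def match_current_weather(pca, km, u_now, v_now):
    """Assign the currently observed weather to the closest pre-computed class."""
    x = np.hstack([u_now.ravel(), v_now.ravel()]).reshape(1, -1)
    return int(km.predict(pca.transform(x))[0])
```

Because each class is tied to a pre-computed dispersion simulation, matching the current weather to a class immediately suggests candidate release origins without running the inverse model on the spot.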
MECH: Algorithms and Tools for Automated Assessment of Potential Attack Locations
2015-10-06
conscious and subconscious processing of the geometric structure of the local terrain, sight lines to prominent or useful terrain features, proximity... This intuition or instinct is the outcome of an unconscious or subconscious integration of available facts and impressions. Thus, in the search... adjacency. Even so, we inevitably introduce a bias between events and non-event road locations when calculating the route visibility features.
NASA Astrophysics Data System (ADS)
Reynen, Andrew; Audet, Pascal
2017-09-01
A new method using a machine learning technique is applied to event classification and detection at seismic networks. This method is applicable to a variety of network sizes and settings. The algorithm makes use of a small catalogue of known observations across the entire network. Two attributes, the polarization and frequency content, are used as input to regression. These attributes are extracted at predicted arrival times for P and S waves using only an approximate velocity model, as the attributes are calculated over large time spans. This method of waveform characterization is shown to distinguish between blasts and earthquakes with 99 per cent accuracy using a network of 13 stations located in Southern California. The combination of machine learning with generalized waveform features is further applied to event detection in Oklahoma, United States. The event detection algorithm makes use of a pair of unique seismic phases to locate events, with a precision directly related to the sampling rate of the generalized waveform features. Over a week of data from 30 stations in Oklahoma, United States is used to automatically detect 25 times more events than the catalogue of the local geological survey, with a false detection rate of less than 2 per cent. This method provides a highly confident way of detecting and locating events. Furthermore, a large number of seismic events can be automatically detected with a low false-alarm rate, allowing for a larger automatic event catalogue with a high degree of trust.
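The classification step, as described, amounts to feeding two generalized waveform attributes per predicted arrival into a regression-type classifier. The sketch below uses a rectilinearity (polarization) feature and a dominant-frequency feature with logistic regression; the feature definitions, component ordering, and classifier are assumptions meant to show the shape of the approach rather than reproduce the paper's method.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def rectilinearity(window_3c):
    """Polarization attribute: eigenvalue ratio of the 3-component covariance.

    window_3c : (3, n_samples) array of the three components in one time window.
    """
    cov = np.cov(window_3c)
    eig = np.sort(np.linalg.eigvalsh(cov))[::-1]
    return 1.0 - (eig[1] + eig[2]) / (2.0 * eig[0] + 1e-12)

def dominant_frequency(window, fs):
    """Frequency-content attribute: spectral peak of a single component."""
    spec = np.abs(np.fft.rfft(window))
    freqs = np.fft.rfftfreq(len(window), d=1.0 / fs)
    return freqs[np.argmax(spec)]

def train_event_classifier(windows_3c, fs, labels):
    """labels: 1 for earthquake, 0 for blast (example encoding).
    Assumes the third row of each window is the vertical component."""
    X = np.array([[rectilinearity(w), dominant_frequency(w[2], fs)]
                  for w in windows_3c])
    return LogisticRegression().fit(X, labels)
```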
Multiple Component Event-Related Potential (mcERP) Estimation
NASA Technical Reports Server (NTRS)
Knuth, K. H.; Clanton, S. T.; Shah, A. S.; Truccolo, W. A.; Ding, M.; Bressler, S. L.; Trejo, L. J.; Schroeder, C. E.; Clancy, Daniel (Technical Monitor)
2002-01-01
We show how model-based estimation of the neural sources responsible for transient neuroelectric signals can be improved by the analysis of single-trial data. Previously, we showed that a multiple component event-related potential (mcERP) algorithm can extract the responses of individual sources from recordings of a mixture of multiple, possibly interacting, neural ensembles. The mcERP algorithm also estimated single-trial amplitudes and onset latencies, thus allowing more accurate estimation of ongoing neural activity during an experimental trial. The mcERP algorithm is related to infomax independent component analysis (ICA); however, the underlying signal model is more physiologically realistic in that a component is modeled as a stereotypic waveshape varying both in amplitude and onset latency from trial to trial. The result is a model that reflects quantities of interest to the neuroscientist. Here we demonstrate that the mcERP algorithm provides more accurate results than more traditional methods such as factor analysis and the more recent ICA. Whereas factor analysis assumes the sources are orthogonal and ICA assumes the sources are statistically independent, the mcERP algorithm makes no such assumptions, thus allowing investigators to examine interactions among components by estimating the properties of single-trial responses.
NASA Astrophysics Data System (ADS)
Wang, Xun; Quost, Benjamin; Chazot, Jean-Daniel; Antoni, Jérôme
2016-01-01
This paper considers the problem of identifying multiple sound sources from acoustical measurements obtained by an array of microphones. The problem is solved via maximum likelihood. In particular, an expectation-maximization (EM) approach is used to estimate the sound source locations and strengths, the pressure measured by a microphone being interpreted as a mixture of latent signals emitted by the sources. This work also considers two kinds of uncertainties pervading the sound propagation and measurement process: uncertain microphone locations and an uncertain wavenumber. These uncertainties are transposed to the data in the belief functions framework. The source locations and strengths can then be estimated using a variant of the EM algorithm, known as the Evidential EM (E2M) algorithm. Finally, both simulations and real experiments illustrate the advantage of using EM in the case without uncertainty and E2M in the case of uncertain measurements.
Biologically inspired binaural hearing aid algorithms: Design principles and effectiveness
NASA Astrophysics Data System (ADS)
Feng, Albert
2002-05-01
Despite rapid advances in the sophistication of hearing aid technology and microelectronics, listening in noise remains problematic for people with hearing impairment. To solve this problem two algorithms were designed for use in binaural hearing aid systems. The signal processing strategies are based on principles in auditory physiology and psychophysics: (a) the location/extraction (L/E) binaural computational scheme determines the directions of source locations and cancels noise by applying a simple subtraction method over every frequency band; and (b) the frequency-domain minimum-variance (FMV) scheme extracts a target sound from a known direction amidst multiple interfering sound sources. Both algorithms were evaluated using standard metrics such as signal-to-noise-ratio gain and articulation index. Results were compared with those from conventional adaptive beam-forming algorithms. In free-field tests with multiple interfering sound sources our algorithms performed better than conventional algorithms. Preliminary intelligibility and speech reception results in multitalker environments showed gains for every listener with normal or impaired hearing when the signals were processed in real time with the FMV binaural hearing aid algorithm. [Work supported by NIH-NIDCD Grant No. R21DC04840 and the Beckman Institute.]
Research on fully distributed optical fiber sensing security system localization algorithm
NASA Astrophysics Data System (ADS)
Wu, Xu; Hou, Jiacheng; Liu, Kun; Liu, Tiegen
2013-12-01
A new fully distributed optical fiber sensing and location technology based on Mach-Zehnder interferometers is studied. In this security system, a new climbing-point locating algorithm based on the short-time average zero-crossing rate is presented. By calculating the zero-crossing rates of multiple data groups separately, the algorithm not only exploits the advantages of frequency analysis to determine the most effective data group more accurately, but also meets the requirements of a real-time monitoring system. Supplemented with a short-term energy calculation of the grouped signal, the most effective data group can be picked out quickly. Finally, accurate location of the climbing point can be achieved through the cross-correlation localization algorithm. The experimental results show that the proposed algorithm can accurately locate the climbing point while effectively filtering out interference noise from non-climbing behavior.
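The processing chain described here can be sketched in three steps: compute a short-time zero-crossing rate and energy per data group, pick the most effective group, then estimate the disturbance position from the cross-correlation lag between the two interferometer outputs. The windowing, selection rule, and conversion from delay to distance below are illustrative assumptions, not the study's implementation.

```python
import numpy as np

def short_time_zcr(x, win):
    """Zero-crossing rate per non-overlapping window."""
    n = len(x) // win
    frames = x[:n * win].reshape(n, win)
    zc = np.abs(np.diff(np.sign(frames), axis=1)).sum(axis=1) / 2.0
    return zc / win

def pick_effective_group(groups, win=256):
    """Select the data group with the strongest disturbance signature,
    combining zero-crossing activity with short-term energy."""
    scores = []
    for g in groups:
        zcr = short_time_zcr(np.asarray(g, dtype=float), win).max()
        energy = np.mean(np.asarray(g, dtype=float) ** 2)
        scores.append(zcr * energy)
    return int(np.argmax(scores))

def locate_by_crosscorrelation(sig_a, sig_b, fs, v_fiber=2.0e8):
    """Estimate the position offset from the delay between the two
    interferometer signals; v_fiber is the assumed light speed in the fiber."""
    a = sig_a - np.mean(sig_a)
    b = sig_b - np.mean(sig_b)
    corr = np.correlate(a, b, mode="full")
    lag = np.argmax(corr) - (len(b) - 1)
    delay = lag / fs
    return 0.5 * v_fiber * delay
```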
NASA Astrophysics Data System (ADS)
Levchuk, Georgiy; Shabarekh, Charlotte; Furjanic, Caitlin
2011-06-01
In this paper, we present results of adversarial activity recognition using data collected in the Empire Challenge (EC 09) exercise. The EC09 experiment provided an opportunity to evaluate our probabilistic spatiotemporal mission recognition algorithms using data from live airborne and ground sensors. Using ambiguous and noisy data about the locations of entities and motion events on the ground, the algorithms inferred the types and locations of OPFOR activities, including reconnaissance, cache runs, IED emplacements, logistics, and planning meetings. We present a detailed summary of the validation study and recognition accuracy results. Our algorithms were able to detect the locations and types of over 75% of hostile activities in EC09 while producing 25% false alarms.
Time Series Reconstruction of Surface Flow Velocity on Marine-terminating Outlet Glaciers
NASA Astrophysics Data System (ADS)
Jeong, Seongsu
The flow velocity of a glacier and its fluctuations are valuable data for studying the ice-sheet contribution to sea level rise and for understanding glacier dynamics. Repeat-image feature tracking (RIFT) is a platform-independent, feature-tracking-based velocity measurement methodology effective for building a time series of velocity maps from optical images. However, the limited availability of perfectly conditioned images motivated us to improve the robustness of the algorithm. With this background, we developed an improved RIFT algorithm based on the multiple-image multiple-chip algorithm presented in Ahn and Howat (2011). The test results confirm that the new RIFT algorithm is better at avoiding outliers, and analysis of the multiple matching results clarified that the individual matching results work in a complementary manner to deduce the correct displacements. LANDSAT 8 is a new satellite in the LANDSAT program that has been in operation since 2013. The improved radiometric performance of the OLI sensor aboard the satellite is expected to enable better velocity mapping than ETM+ aboard LANDSAT 7. However, it was not yet well studied in which cases the new sensor will be beneficial and how large the improvement will be. We carried out a simulation-based comparison between ETM+ and OLI and confirmed that OLI outperforms ETM+ in low-contrast conditions, especially in polar night, under translucent cloud cover, and in bright, low-texture upglacier areas. We have identified a rift on the ice shelf of Pine Island Glacier in the West Antarctic Ice Sheet. Unlike previous events, the evolution of the current rift started from the center of the ice shelf. In order to analyze this unique event, we applied the improved RIFT algorithm to OLI images to retrieve a time series of velocity maps. The analyses show that the part of the ice shelf below the rift is changing its speed, and that splashing crevasses on the shear margin are migrating toward the center of the shelf. Given the concurrent disintegration of the ice melange on the western part of the terminus, we postulate that the change in flow regime is attributable to the loss of the resistance force exerted by the melange. Several topics need to be addressed to further improve the RIFT algorithm. As coregistration error is a significant contributor to velocity measurement error, a method to mitigate that error needs to be devised. Also, considering that the domain of the RIFT product spans not only space but also time, its regridding and gap-filling will benefit from extending the domain to both space and time.
Near-real time 3D probabilistic earthquakes locations at Mt. Etna volcano
NASA Astrophysics Data System (ADS)
Barberi, G.; D'Agostino, M.; Mostaccio, A.; Patane', D.; Tuve', T.
2012-04-01
Automatic procedures for locating earthquakes in quasi-real time must provide a good estimate of the earthquake location within a few seconds after the event is first detected, and are strongly needed for seismic warning systems. The reliability of an automatic location algorithm is influenced by several factors such as errors in picking seismic phases, network geometry, and velocity model uncertainties. On Mt. Etna, the seismic network is managed by INGV and the quasi-real-time earthquake locations are performed using an automatic picking algorithm based on short-term-average to long-term-average ratios (STA/LTA), calculated from an approximate squared envelope function of the seismogram, which furnishes a list of P-wave arrival times, and the location algorithm Hypoellipse with a 1D velocity model. The main purpose of this work is to investigate the performance of a different automatic procedure to improve the quasi-real-time earthquake locations. In fact, since the automatic data processing may be affected by outliers (wrong picks), the use of traditional earthquake location techniques based on a least-squares misfit function (L2-norm) often yields unstable and unreliable solutions. Moreover, on Mt. Etna, the 1D model is often unable to represent the complex structure of the volcano (in particular the strong lateral heterogeneities), whereas the increasing accuracy of 3D velocity models at Mt. Etna during recent years allows their use today in routine earthquake locations. Therefore, we selected as reference locations all the events that occurred on Mt. Etna in the last year (2011) and were automatically detected and located by means of the Hypoellipse code. Using this dataset (more than 300 events), we applied a nonlinear probabilistic earthquake location algorithm using the Equal Differential Time (EDT) likelihood function (Font et al., 2004; Lomax, 2005), which is much more robust in the presence of outliers in the data. Subsequently, by using a probabilistic nonlinear method (NonLinLoc, Lomax, 2001) and the 3D velocity model derived from the one developed by Patanè et al. (2006), integrated with that obtained by Chiarabba et al. (2004), we obtained the best possible constraint on the hypocenter location, expressed as a probability density function (PDF) in 3D space. As expected, the results, compared with the reference ones, show that the NonLinLoc software (applied to a 3D velocity model) is more reliable than the Hypoellipse code (applied to layered 1D velocity models), leading to more reliable automatic locations even when outliers are present.
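For illustration, the Equal Differential Time (EDT) approach evaluates, at each trial hypocenter, how well observed arrival-time differences between station pairs match predicted travel-time differences; this removes the origin time from the problem and limits the influence of any single wrong pick, because each pair contributes independently. The Gaussian kernel width and the grid search below are assumptions for illustration; the actual implementation used in the study is NonLinLoc.

```python
import numpy as np
from itertools import combinations

def edt_likelihood(trial_xyz, stations, obs_times, travel_time, sigma=0.2):
    """EDT-style likelihood of a trial hypocenter.

    stations    : list of station coordinate tuples
    obs_times   : observed P arrival times [s], same order as stations
    travel_time : function(station_xyz, source_xyz) -> predicted travel time [s]
    sigma       : assumed pick uncertainty [s]
    """
    score = 0.0
    for i, j in combinations(range(len(stations)), 2):
        obs_diff = obs_times[i] - obs_times[j]
        pred_diff = travel_time(stations[i], trial_xyz) - travel_time(stations[j], trial_xyz)
        residual = obs_diff - pred_diff
        # One outlier pick only degrades the pairs it participates in
        score += np.exp(-(residual ** 2) / (2.0 * sigma ** 2))
    return score

def grid_search_location(grid_nodes, stations, obs_times, travel_time):
    """Return the grid node maximizing the EDT likelihood (a crude PDF mode)."""
    scores = [edt_likelihood(x, stations, obs_times, travel_time) for x in grid_nodes]
    return grid_nodes[int(np.argmax(scores))], np.array(scores)
```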
Field-aligned currents associated with multiple arc systems
NASA Astrophysics Data System (ADS)
Wu, J.; Knudsen, D. J.; Gillies, D. M.; Donovan, E.; Burchill, J. K.
2016-12-01
It is often thought that auroral arcs are a direct consequence of upward field-aligned currents. In fact, the relation between currents and brightness is more complicated. Multiple auroral arc systems provide an opportunity to study this relation in detail; this information can be used as a test of models for quasi-static arc formation. In this study, we have identified two types of FAC configurations in multiple parallel arc systems using ground-based optical data from the THEMIS all-sky imagers (ASIs) together with magnetometers and electric field instruments onboard the Swarm satellites during the period from December 2013 to March 2015. In type 1 events, each arc is an intensification within a broad, unipolar current sheet and downward currents exist only outside the upward current sheet. In type 2 events, multiple arc systems represent a collection of multiple up/down current pairs. By collecting 12 events of type 1 and 17 events of type 2, we find that (1) type 1 events are mainly located between 22-23 MLT, while type 2 events are mainly located around midnight; (2) the typical sizes of the upward and downward FACs in type 2 events are comparable, while the upward FACs in type 1 events are larger than the downward FACs; (3) upward currents with more arcs embedded have larger intensities and widths; and (4) there is no significant difference between the characteristic widths of multiple arcs and single arcs.
Symbolic discrete event system specification
NASA Technical Reports Server (NTRS)
Zeigler, Bernard P.; Chi, Sungdo
1992-01-01
Extending discrete event modeling formalisms to facilitate greater symbol manipulation capabilities is important to further their use in intelligent control and design of high autonomy systems. An extension to the DEVS formalism that facilitates symbolic expression of event times by extending the time base from the real numbers to the field of linear polynomials over the reals is defined. A simulation algorithm is developed to generate the branching trajectories resulting from the underlying nondeterminism. To efficiently manage symbolic constraints, a consistency checking algorithm for linear polynomial constraints based on feasibility checking algorithms borrowed from linear programming has been developed. The extended formalism offers a convenient means to conduct multiple, simultaneous explorations of model behaviors. Examples of application are given with concentration on fault model analysis.
Real-Time Gait Event Detection Based on Kinematic Data Coupled to a Biomechanical Model.
Lambrecht, Stefan; Harutyunyan, Anna; Tanghe, Kevin; Afschrift, Maarten; De Schutter, Joris; Jonkers, Ilse
2017-03-24
Real-time detection of multiple stance events, more specifically initial contact (IC), foot flat (FF), heel off (HO), and toe off (TO), could greatly benefit neurorobotic (NR) and neuroprosthetic (NP) control. Three real-time threshold-based algorithms have been developed, detecting the aforementioned events based on kinematic data in combination with a biomechanical model. Data from seven subjects walking at three speeds on an instrumented treadmill were used to validate the presented algorithms, accumulating to a total of 558 steps. The reference for the gait events was obtained using marker and force plate data. All algorithms had excellent precision and no false positives were observed. Timing delays of the presented algorithms were similar to current state-of-the-art algorithms for the detection of IC and TO, whereas smaller delays were achieved for the detection of FF. Our results indicate that, based on their high precision and low delays, these algorithms can be used for the control of an NR/NP, with the exception of the HO event. Kinematic data is used in most NR/NP control schemes and is thus available at no additional cost, resulting in a minimal computational burden. The presented methods can also be applied for screening pathological gait or gait analysis in general in/outside of the laboratory.
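As an illustration of a threshold-based gait event detector of this general kind, the sketch below flags initial contact and toe off from a foot marker's vertical position and velocity derived from kinematic data. The thresholds and the use of simple foot kinematics are assumptions for illustration; the paper's algorithms additionally incorporate a biomechanical model and detect four events.

```python
import numpy as np

def detect_gait_events(foot_height, fs, height_thresh=0.02, vel_thresh=0.05):
    """Detect initial contact (IC) and toe off (TO) from foot vertical position.

    foot_height : vertical position of a foot marker [m] for one trial
    fs          : sampling rate [Hz]
    Events are declared on threshold crossings, keeping detection delays small
    enough for real-time control loops.
    """
    vel = np.gradient(foot_height) * fs          # vertical velocity [m/s]
    on_ground = foot_height < height_thresh
    ic_idx, to_idx = [], []
    for k in range(1, len(foot_height)):
        if on_ground[k] and not on_ground[k - 1] and vel[k] < 0:
            ic_idx.append(k)                     # foot descends below threshold: IC
        if not on_ground[k] and on_ground[k - 1] and vel[k] > vel_thresh:
            to_idx.append(k)                     # foot leaves ground moving up: TO
    return ic_idx, to_idx
```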
Solar Occultation Retrieval Algorithm Development
NASA Technical Reports Server (NTRS)
Lumpe, Jerry D.
2004-01-01
This effort addresses the comparison and validation of currently operational solar occultation retrieval algorithms, and the development of generalized algorithms for future application to multiple platforms. Initial work includes development of generalized forward model algorithms capable of simulating transmission data from the POAM II/III and SAGE II/III instruments. Work in the 2nd quarter will focus on completion of the forward model algorithms, including accurate spectral characteristics for all instruments, and comparison of simulated transmission data with actual level 1 instrument data for specific occultation events.
van den Broek, Evert; van Lieshout, Stef; Rausch, Christian; Ylstra, Bauke; van de Wiel, Mark A; Meijer, Gerrit A; Fijneman, Remond J A; Abeln, Sanne
2016-01-01
Development of cancer is driven by somatic alterations, including numerical and structural chromosomal aberrations. Currently, several computational methods are available and widely applied to detect numerical copy number aberrations (CNAs) of chromosomal segments in tumor genomes. However, there is a lack of computational methods that systematically detect structural chromosomal aberrations by virtue of the genomic location of CNA-associated chromosomal breaks and identify genes that appear non-randomly affected by chromosomal breakpoints across (large) series of tumor samples. 'GeneBreak' was developed to systematically identify genes recurrently affected by the genomic location of chromosomal CNA-associated breaks using a genome-wide approach, which can be applied to DNA copy number data obtained by array Comparative Genomic Hybridization (CGH) or by (low-pass) whole genome sequencing (WGS). First, 'GeneBreak' collects the genomic locations of chromosomal CNA-associated breaks that were previously pinpointed by the segmentation algorithm applied to obtain the CNA profiles. Next, a tailored annotation approach for breakpoint-to-gene mapping is implemented. Finally, dedicated cohort-based statistics are incorporated, with correction for covariates that influence the probability of being a breakpoint gene. In addition, multiple testing correction is integrated to reveal recurrent breakpoint events. This easy-to-use algorithm, 'GeneBreak', is implemented in R (www.cran.r-project.org) and is available from Bioconductor (www.bioconductor.org/packages/release/bioc/html/GeneBreak.html).
Using Bluetooth proximity sensing to determine where office workers spend time at work.
Clark, Bronwyn K; Winkler, Elisabeth A; Brakenridge, Charlotte L; Trost, Stewart G; Healy, Genevieve N
2018-01-01
Most wearable devices that measure movement in workplaces cannot determine the context in which people spend time. This study examined the accuracy of Bluetooth sensing (10-second intervals) via the ActiGraph GT9X Link monitor to determine location in an office setting, using two simple, bespoke algorithms. For one work day (mean±SD 6.2±1.1 hours), 30 office workers (30% men, aged 38±11 years) simultaneously wore chest-mounted cameras (video recording) and Bluetooth-enabled monitors (initialised as receivers) on the wrist and thigh. Additional monitors (initialised as beacons) were placed in the entry, kitchen, photocopy room, corridors, and the wearer's office. Firstly, participant presence/absence at each location was predicted from the presence/absence of signals at that location (ignoring all other signals). Secondly, using the information gathered at multiple locations simultaneously, a simple heuristic model was used to predict at which location the participant was present. The Bluetooth-determined location for each algorithm was tested against the camera in terms of F-scores. When considering locations individually, the accuracy obtained was excellent in the office (F-score = 0.98 and 0.97 for thigh and wrist positions) but poor in other locations (F-score = 0.04 to 0.36), stemming primarily from a high false positive rate. The multi-location algorithm exhibited high accuracy for the office location (F-score = 0.97 for both wear positions). It also improved the F-scores obtained in the remaining locations, but not always to levels indicating good accuracy (e.g., F-score for photocopy room ≈0.1 in both wear positions). The Bluetooth signalling function shows promise for determining where workers spend most of their time (i.e., their office). Placing beacons in multiple locations and using a rule-based decision model improved classification accuracy; however, for workplace locations visited infrequently or with considerable movement, accuracy was below desirable levels. Further development of algorithms is warranted.
SPEEDES - A multiple-synchronization environment for parallel discrete-event simulation
NASA Technical Reports Server (NTRS)
Steinman, Jeff S.
1992-01-01
Synchronous Parallel Environment for Emulation and Discrete-Event Simulation (SPEEDES) is a unified parallel simulation environment. It supports multiple-synchronization protocols without requiring users to recompile their code. When a SPEEDES simulation runs on one node, all the extra parallel overhead is removed automatically at run time. When the same executable runs in parallel, the user preselects the synchronization algorithm from a list of options. SPEEDES currently runs on UNIX networks and on the California Institute of Technology/Jet Propulsion Laboratory Mark III Hypercube. SPEEDES also supports interactive simulations. Featured in the SPEEDES environment is a new parallel synchronization approach called Breathing Time Buckets. This algorithm uses some of the conservative techniques found in Time Bucket synchronization, along with the optimism that characterizes the Time Warp approach. A mathematical model derived from first principles predicts the performance of Breathing Time Buckets. Along with the Breathing Time Buckets algorithm, this paper discusses the rules for processing events in SPEEDES, describes the implementation of various other synchronization protocols supported by SPEEDES, describes some new ones for the future, discusses interactive simulations, and then gives some performance results.
NASA Astrophysics Data System (ADS)
Brogan, R.; Young, C. J.; Ballard, S.
2017-12-01
A major problem in developing new data processing algorithms for seismic event monitoring is the lack of standard, high-quality "ground-truth" data sets to test against. The unfortunate effect of this is that new algorithms are often developed and tested with new data sets, making comparison of algorithms difficult and subjective. In an effort to resolve this problem, we have developed the Unconstrained Event Bulletin (UEB), a ground-truth data set from the International Monitoring System (IMS) primary and auxiliary seismic networks for a two-week period in May 2010. All UEB analysis was performed by the same expert, who has more than 30 years of experience analyzing seismic data for nuclear explosion monitoring. We used the most complete International Data Centre (IDC) analyst-reviewed event bulletin (the Late Event Bulletin, or LEB) as a starting point for this analysis. To make the UEB more complete, we relaxed the minimum event definition criteria to a pair of P-type and S-type phases at a single station, with azimuth/slowness used as defining. To add even more events that our analyst recognized and did not want to omit, in rare cases events were constructed using only one P-phase. Perhaps most importantly, our analyst spent on average more than 60 hours analyzing each day of data, far more than was possible in the production of the LEB. As a result, while the LEB contained 2,101 events for the two-week time period, the UEB contains 11,435 events, an increase of over 400%. The new events are located all over the world and include both earthquakes and man-made events such as mining explosions. Our intent is to make the UEB data set openly available for all researchers to use for testing detection, correlation, and location algorithms, thus making it much easier to objectively compare different research efforts. Acknowledgement: Sandia National Laboratories is a multi-mission laboratory managed and operated by National Technology and Engineering Solutions of Sandia, LLC, a wholly owned subsidiary of Honeywell International, Inc., for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-NA-0003525.
Analysis of the geophysical data using a posteriori algorithms
NASA Astrophysics Data System (ADS)
Voskoboynikova, Gyulnara; Khairetdinov, Marat
2016-04-01
The monitoring, prediction, and prevention of extraordinary natural and technogenic events are priority problems of today. These events include earthquakes, volcanic eruptions, lunar-solar tides, landslides, falling celestial bodies, explosions of stockpiled ammunition, and numerous quarry explosions in open coal mines that provoke technogenic earthquakes. Monitoring is based on a number of successive stages, which include remote registration of the event responses and measurement of the main parameters, such as the arrival times of seismic waves or the original waveforms. At the final stage, the inverse problems associated with determining the geographic location and time of the registered event are solved. Therefore, improving the accuracy of the parameter estimates obtained from the original records under high noise is an important problem. As is known, the main measurement errors arise due to the influence of external noise, the difference between the real and model structures of the medium, imprecision in the definition of the event origin time at the epicenter, and instrumental errors. Therefore, a posteriori algorithms that are more accurate than known algorithms are proposed and investigated. They are based on a combination of a discrete optimization method and a fractal approach for the joint detection and estimation of arrival times in quasi-periodic waveform sequences in geophysical monitoring problems, with improved accuracy. Alternative approaches existing today do not provide the required accuracy. The proposed algorithms are considered for the tasks of vibration sounding of the Earth during lunar and solar tides, and for the problem of monitoring the location of a borehole seismic source in trade drilling.
Automatic processing of induced events in the geothermal reservoirs Landau and Insheim, Germany
NASA Astrophysics Data System (ADS)
Olbert, Kai; Küperkoch, Ludger; Meier, Thomas
2016-04-01
Induced events can pose a risk to local infrastructure that needs to be understood and evaluated. They also represent a chance to learn more about reservoir behavior and characteristics. Prior to the analysis, the waveform data must be processed consistently and accurately to avoid erroneous interpretations. In the framework of the MAGS2 project, an automatic off-line event detection and phase onset time determination algorithm is applied to induced seismic events in the geothermal systems in Landau and Insheim, Germany. The off-line detection algorithm is based on cross-correlation of continuous data from the local seismic network with master events. It distinguishes between events from different reservoirs and within the individual reservoirs, and it provides location and magnitude estimates. Data from 2007 to 2014 are processed and compared with detections obtained using the SeisComp3 cross-correlation detector and an STA/LTA detector. The detected events are analyzed for spatial and temporal clustering, and the number of events is compared to the existing detection lists. The automatic phase-picking algorithm combines an AR-AIC approach with a cost function to find precise P1- and S1-phase onset times that can be used for localization and tomography studies. 800 induced events are processed, yielding 5000 P1- and 6000 S1-picks. The phase onset times show high precision, with mean residuals relative to manual picks of 0 s (P1) to 0.04 s (S1) and standard deviations below ±0.05 s. The resulting automatic picks are used to relocate a selected number of events to evaluate the influence on location precision.
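The AIC part of an AR-AIC picker can be illustrated with the classic formulation in which the onset is taken at the minimum of an Akaike Information Criterion computed directly on a waveform window around the preliminary detection. This is a simplified sketch (no autoregressive modelling, assumed window handling), not the picker used in the study.

```python
import numpy as np

def aic_onset(window):
    """Return the sample index of the phase onset within a waveform window.

    Uses the direct-waveform AIC:
      AIC(k) = k*log(var(x[0:k])) + (N-k-1)*log(var(x[k:N]))
    The onset is picked at the AIC minimum, where the window splits best into
    a quiet (noise) part and an energetic (signal) part.
    """
    x = np.asarray(window, dtype=float)
    n = len(x)
    aic = np.full(n, np.inf)
    for k in range(2, n - 2):                 # avoid zero-length segments
        v1, v2 = np.var(x[:k]), np.var(x[k:])
        if v1 > 0 and v2 > 0:
            aic[k] = k * np.log(v1) + (n - k - 1) * np.log(v2)
    return int(np.argmin(aic))
```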
Active sensing in the categorization of visual patterns
Yang, Scott Cheng-Hsin; Lengyel, Máté; Wolpert, Daniel M
2016-01-01
Interpreting visual scenes typically requires us to accumulate information from multiple locations in a scene. Using a novel gaze-contingent paradigm in a visual categorization task, we show that participants' scan paths follow an active sensing strategy that incorporates information already acquired about the scene and knowledge of the statistical structure of patterns. Intriguingly, categorization performance was markedly improved when locations were revealed to participants by an optimal Bayesian active sensor algorithm. By using a combination of a Bayesian ideal observer and the active sensor algorithm, we estimate that a major portion of this apparent suboptimality of fixation locations arises from prior biases, perceptual noise and inaccuracies in eye movements, and the central process of selecting fixation locations is around 70% efficient in our task. Our results suggest that participants select eye movements with the goal of maximizing information about abstract categories that require the integration of information from multiple locations. DOI: http://dx.doi.org/10.7554/eLife.12215.001 PMID:26880546
DOE Office of Scientific and Technical Information (OSTI.GOV)
Arrowsmith, Stephen John; Young, Christopher J.; Ballard, Sanford
The standard paradigm for seismic event monitoring breaks the event detection problem down into a series of processing stages that can be categorized at the highest level into station-level processing and network-level processing algorithms (e.g., Le Bras and Wuster (2002)). At the station level, waveforms are typically processed to detect signals and identify phases, which may subsequently be updated based on network processing. At the network level, phase picks are associated to form events, which are subsequently located. Furthermore, waveforms are typically directly exploited only at the station level, while network-level operations rely on earth models to associate and locate the events that generated the phase picks.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Khachatryan, Vardan
The performance of missing transverse energy reconstruction algorithms is presented by our team using √s = 8 TeV proton-proton (pp) data collected with the CMS detector. Events with anomalous missing transverse energy are studied, and the performance of algorithms used to identify and remove these events is presented. The scale and resolution for missing transverse energy, including the effects of multiple pp interactions (pileup), are measured using events with an identified Z boson or isolated photon, and are found to be well described by the simulation. Novel missing transverse energy reconstruction algorithms developed specifically to mitigate the effects of large numbers of pileup interactions on the missing transverse energy resolution are presented. These algorithms significantly reduce the dependence of the missing transverse energy resolution on pileup interactions. Furthermore, an algorithm that provides an estimate of the significance of the missing transverse energy is presented, which is used to estimate the compatibility of the reconstructed missing transverse energy with a zero nominal value.
3D Buried Utility Location Using A Marching-Cross-Section Algorithm for Multi-Sensor Data Fusion
Dou, Qingxu; Wei, Lijun; Magee, Derek R.; Atkins, Phil R.; Chapman, David N.; Curioni, Giulio; Goddard, Kevin F.; Hayati, Farzad; Jenks, Hugo; Metje, Nicole; Muggleton, Jennifer; Pennock, Steve R.; Rustighi, Emiliano; Swingler, Steven G.; Rogers, Christopher D. F.; Cohn, Anthony G.
2016-01-01
We address the problem of accurately locating buried utility segments by fusing data from multiple sensors using a novel Marching-Cross-Section (MCS) algorithm. Five types of sensors are used in this work: Ground Penetrating Radar (GPR), Passive Magnetic Fields (PMF), Magnetic Gradiometer (MG), Low Frequency Electromagnetic Fields (LFEM) and Vibro-Acoustics (VA). As part of the MCS algorithm, a novel formulation of the extended Kalman Filter (EKF) is proposed for marching existing utility tracks from a scan cross-section (scs) to the next one; novel rules for initializing utilities based on hypothesized detections on the first scs and for associating predicted utility tracks with hypothesized detections in the following scss are introduced. Algorithms are proposed for generating virtual scan lines based on given hypothesized detections when different sensors do not share common scan lines, or when only the coordinates of the hypothesized detections are provided without any information of the actual survey scan lines. The performance of the proposed system is evaluated with both synthetic data and real data. The experimental results in this work demonstrate that the proposed MCS algorithm can locate multiple buried utility segments simultaneously, including both straight and curved utilities, and can separate intersecting segments. By using the probabilities of a hypothesized detection being a pipe or a cable together with its 3D coordinates, the MCS algorithm is able to discriminate a pipe and a cable close to each other. The MCS algorithm can be used for both post- and on-site processing. When it is used on site, the detected tracks on the current scs can help to determine the location and direction of the next scan line. The proposed “multi-utility multi-sensor” system has no limit to the number of buried utilities or the number of sensors, and the more sensor data used, the more buried utility segments can be detected with more accurate location and orientation. PMID:27827836
Event Discrimination Using Seismoacoustic Catalog Probabilities
NASA Astrophysics Data System (ADS)
Albert, S.; Arrowsmith, S.; Bowman, D.; Downey, N.; Koch, C.
2017-12-01
Presented here are three seismoacoustic catalogs from various years and locations throughout Utah and New Mexico. To create these catalogs, we combine seismic and acoustic events detected and located using different algorithms. Seismoacoustic events are formed based on similarity of origin time and location. Following seismoacoustic fusion, the data is compared against ground truth events. Each catalog contains events originating from both natural and anthropogenic sources. By creating these seismoacoustic catalogs, we show that the fusion of seismic and acoustic data leads to a better understanding of the nature of individual events. The probability of an event being a surface blast given its presence in each seismoacoustic catalog is quantified. We use these probabilities to discriminate between events from natural and anthropogenic sources. Sandia National Laboratories is a multimission laboratory managed and operated by National Technology and Engineering Solutions of Sandia, LLC., a wholly owned subsidiary of Honeywell International, Inc., for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-NA-0003525.
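The fusion step described, associating independently detected seismic and acoustic events by similarity of origin time and location, can be sketched as a simple pairing rule with time and distance tolerances. The tolerance values and the flat-Earth distance approximation below are assumptions for illustration.

```python
import math

def fuse_catalogs(seismic, acoustic, max_dt=30.0, max_dist_km=15.0):
    """Pair seismic and acoustic events into seismoacoustic events.

    Each event is a dict with keys 'time' (s), 'lat', 'lon'.
    Two events are fused if their origin times and epicenters agree
    within the given tolerances.
    """
    def dist_km(a, b):
        # Small-separation flat-Earth approximation
        dlat = (a['lat'] - b['lat']) * 111.19
        dlon = (a['lon'] - b['lon']) * 111.19 * math.cos(math.radians(a['lat']))
        return math.hypot(dlat, dlon)

    fused = []
    for s in seismic:
        for a in acoustic:
            if abs(s['time'] - a['time']) <= max_dt and dist_km(s, a) <= max_dist_km:
                fused.append({'seismic': s, 'acoustic': a})
    return fused
```

The fraction of fused events that match ground-truth surface blasts then yields the catalog-conditional probabilities used for discrimination.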
Cloud Effects in Hyperspectral Imagery from First-Principles Scene Simulations
2009-01-01
scattering and absorption, scattering events, surface scattering with material-dependent bidirectional reflectances, multiple surface adjacency... aerosols or clouds, they may be absorbed, or they may reflect off the ground or an object. A given photon may undergo multiple scattering events
Computing all hybridization networks for multiple binary phylogenetic input trees.
Albrecht, Benjamin
2015-07-30
The computation of phylogenetic trees on the same set of species based on different orthologous genes can lead to incongruent trees. One possible explanation for this behavior is interspecific hybridization events recombining genes of different species. An important approach to analyzing such events is the computation of hybridization networks. This work presents the first algorithm computing the hybridization number as well as a set of representative hybridization networks for multiple binary phylogenetic input trees on the same set of taxa. To improve its practical runtime, we show how this algorithm can be parallelized. Moreover, we demonstrate the efficiency of the software Hybroscale, containing an implementation of our algorithm, by comparing it to PIRNv2.0, which is so far the best available software computing the exact hybridization number for multiple binary phylogenetic trees on the same set of taxa. The algorithm is part of the software Hybroscale, which was developed specifically for the investigation of hybridization networks, including their computation and visualization. Hybroscale is freely available and runs on all three major operating systems. Our simulation study indicates that our approach is on average 100 times faster than PIRNv2.0. Moreover, we show how Hybroscale improves the interpretation of the reported hybridization networks by adding certain features to its graphical representation.
Characterising private and shared signatures of positive selection in 37 Asian populations.
Liu, Xuanyao; Lu, Dongsheng; Saw, Woei-Yuh; Shaw, Philip J; Wangkumhang, Pongsakorn; Ngamphiw, Chumpol; Fucharoen, Suthat; Lert-Itthiporn, Worachart; Chin-Inmanu, Kwanrutai; Chau, Tran Nguyen Bich; Anders, Katie; Kasturiratne, Anuradhani; de Silva, H Janaka; Katsuya, Tomohiro; Kimura, Ryosuke; Nabika, Toru; Ohkubo, Takayoshi; Tabara, Yasuharu; Takeuchi, Fumihiko; Yamamoto, Ken; Yokota, Mitsuhiro; Mamatyusupu, Dolikun; Yang, Wenjun; Chung, Yeun-Jun; Jin, Li; Hoh, Boon-Peng; Wickremasinghe, Ananda R; Ong, RickTwee-Hee; Khor, Chiea-Chuen; Dunstan, Sarah J; Simmons, Cameron; Tongsima, Sissades; Suriyaphol, Prapat; Kato, Norihiro; Xu, Shuhua; Teo, Yik-Ying
2017-04-01
The Asian Diversity Project (ADP) assembled 37 cosmopolitan and ethnic minority populations in Asia that have been densely genotyped across over half a million markers to study patterns of genetic diversity and positive natural selection. We performed population structure analyses of the ADP populations and divided these populations into four major groups based on their genographic information. By applying a highly sensitive algorithm haploPS to locate genomic signatures of positive selection, 140 distinct genomic regions exhibiting evidence of positive selection in at least one population were identified. We examined the extent of signal sharing for regions that were selected in multiple populations and observed that populations clustered in a similar fashion to that of how the ancestry clades were phylogenetically defined. In particular, populations predominantly located in South Asia underwent considerably different adaptation as compared with populations from the other geographical regions. Signatures of positive selection present in multiple geographical regions were predicted to be older and have emerged prior to the separation of the populations in the different regions. In contrast, selection signals present in a single population group tended to be of lower frequencies and thus can be attributed to recent evolutionary events.
NASA Astrophysics Data System (ADS)
Diniakos, R. S.; Bilek, S. L.; Rowe, C. A.; Draganov, D.
2015-12-01
The subduction of the Nazca Plate beneath the South American Plate along Chile has led to some of the largest earthquakes recorded on modern seismic instrumentation. These include the 1960 M 9.5 Valdivia, 2010 M 8.8 Maule, and 2014 M 8.1 Iquique earthquakes. Slip heterogeneity for both the 2010 and 2014 earthquakes has been noted in various studies. In order to explore spatial variations in the continued aftershocks of the 2010 event, as well as seismicity to the north near Iquique prior to the 2014 earthquake relative to the high-slip regions, we are expanding the catalog of small earthquakes, using template matching algorithms to find other small earthquakes in the region. We start with an earthquake catalog developed from regional and local array data; these events provide the templates used to search through waveform data from a temporary seismic array in Malargue, Argentina, located ~300 km west of the Maule region, which operated in 2012. Our template events are first identified on the array stations, and we use a 10-s window around the P-wave arrival as the template. We then use a waveform cross-correlation algorithm to compare the template with day-long seismograms from Malargue stations. The newly detected events are then located using the HYPOINVERSE2000 program. Initial results for 103 templates on 19 of the array stations show that we find 275 new events, with an average of three new events for each template correlated. For these preliminary results, events from the Maule region appear to provide the most new detections, with an average of ten new events. We will present our locations for the detected events and we will compare them to patterns of high slip along the 2010 rupture zone of the M 8.8 Maule earthquake and the 2014 M 8.1 Iquique event.
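To illustrate the kind of waveform cross-correlation used for template matching in studies like this one, the sketch below slides a short template along a continuous trace and flags samples whose normalized correlation exceeds a threshold. The 0.7 threshold, window lengths, and synthetic data are illustrative assumptions, not values from the study.

```python
import numpy as np

def sliding_normalized_cc(template, trace):
    """Normalized cross-correlation of a short template against a long trace.
    Returns one Pearson correlation coefficient per candidate start sample."""
    n = len(template)
    t = (template - template.mean()) / (template.std() * n)
    cc = np.empty(len(trace) - n + 1)
    for i in range(len(cc)):
        w = trace[i:i + n]
        s = w.std()
        cc[i] = 0.0 if s == 0 else np.dot(t, (w - w.mean()) / s)
    return cc

def detect(template, trace, threshold=0.7):
    """Return sample indices where the template matches above the threshold."""
    cc = sliding_normalized_cc(template, trace)
    return np.flatnonzero(cc >= threshold), cc

# Illustrative use on synthetic data: a noisy trace containing two scaled copies of the template.
rng = np.random.default_rng(0)
template = np.sin(2 * np.pi * 5 * np.linspace(0, 1, 100))
trace = 0.1 * rng.standard_normal(5000)
trace[1000:1100] += template
trace[3500:3600] += 0.8 * template
picks, cc = detect(template, trace)
print(picks[:5], round(float(cc.max()), 3))
```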
Federal Register 2010, 2011, 2012, 2013, 2014
2010-06-08
.... Sponsor: Maine Chapter, Multiple Sclerosis Society. Date: August 21, 2010. Time: 11 am to 2 pm. Location... Sailboat Race. Sponsor: Maine Chapter, Multiple Sclerosis Society. Date: August 21, 2010. Time: 10 am to 4... Tugboat Muster. Event Type: Power Boat Race. Sponsor: Maine Chapter, National Multiple Sclerosis Society...
Repeating Earthquakes on the Queen Charlotte Plate Boundary
NASA Astrophysics Data System (ADS)
Hayward, T. W.; Bostock, M. G.
2015-12-01
The Queen Charlotte Fault (QCF) is a major plate boundary located off the northwest coast of North America that has produced large earthquakes in 1949 (M8.1) and more recently in October 2012 (M7.8). The 2012 event was dominated by thrusting despite the fact that plate motions at the boundary are nearly transcurrent. It is now widely believed that the plate boundary comprises the QCF (i.e., a dextral strike-slip fault) as well as an element of subduction of the Pacific Plate beneath the North American Plate. Repeating earthquakes and seismic tremor have been observed in the vicinity of the QCF; providing insight into the spatial and temporal characteristics of repeating earthquakes is the goal of this research. Due to poor station coverage and data quality, traditional methods of locating earthquakes are not applicable to these events. Instead, we have implemented an algorithm to locate local (i.e., < 100 km distance to epicenter) earthquakes using a single, three-component seismogram. This algorithm relies on the P-wave polarization and, through comparison with larger local events in the Geological Survey of Canada catalogue, is shown to yield epicentral locations accurate to within 5-10 km. A total of 24 unique families of repeating earthquakes have been identified, and 4 of these families have been located with high confidence. Their epicenters locate directly on the trace of the QCF and their depths are shallow (i.e., 5-15 km), consistent with the proposed depth of the QCF. Analysis of temporal recurrence leading up to the 2012 M7.8 event reveals a non-random pattern, with an approximately 15-day periodicity. Further analysis is planned to study whether this behaviour persists after the 2012 event and to gain insight into the effects of the 2012 event on the stress field and frictional properties of the plate boundary.
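A minimal sketch of single-station P-wave polarization analysis of the kind the abstract describes: the dominant eigenvector of the three-component covariance matrix gives the particle-motion direction, from which a back azimuth follows. The polarity convention used to resolve the 180-degree ambiguity and the synthetic waveform are assumptions for illustration only.

```python
import numpy as np

def p_wave_back_azimuth(z, n, e):
    """Estimate back azimuth (degrees clockwise from north) from a 3-component P window.
    The principal eigenvector of the covariance matrix gives the polarization; the sign
    of its vertical component resolves the 180-degree ambiguity (convention assumed here)."""
    data = np.vstack([z, n, e])
    data = data - data.mean(axis=1, keepdims=True)
    w, v = np.linalg.eigh(data @ data.T)
    u = v[:, np.argmax(w)]            # principal polarization vector (z, n, e)
    if u[0] > 0:                      # assumed polarity convention for this example
        u = -u
    return np.degrees(np.arctan2(u[2], u[1])) % 360.0

# Synthetic check: a rectilinear P arrival from a back azimuth of ~60 degrees.
rng = np.random.default_rng(1)
t = np.linspace(0, 1, 200)
p = np.sin(2 * np.pi * 8 * t) * np.exp(-5 * t)
baz_true = 60.0
n_comp = p * np.cos(np.radians(baz_true)) + 0.02 * rng.standard_normal(t.size)
e_comp = p * np.sin(np.radians(baz_true)) + 0.02 * rng.standard_normal(t.size)
z_comp = -0.8 * p + 0.02 * rng.standard_normal(t.size)
print(round(p_wave_back_azimuth(z_comp, n_comp, e_comp), 1))
```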
Gossip-based solutions for discrete rendezvous in populations of communicating agents.
Hollander, Christopher D; Wu, Annie S
2014-01-01
The objective of the rendezvous problem is to construct a method that enables a population of agents to agree on a spatial (and possibly temporal) meeting location. We introduce the buffered gossip algorithm as a general solution to the rendezvous problem in a discrete domain with direct communication between decentralized agents. We compare the performance of the buffered gossip algorithm against the well known uniform gossip algorithm. We believe that a buffered solution is preferable to an unbuffered solution, such as the uniform gossip algorithm, because the use of a buffer allows an agent to use multiple information sources when determining its desired rendezvous point, and that access to multiple information sources may improve agent decision making by reinforcing or contradicting an initial choice. To show that the buffered gossip algorithm is an actual solution for the rendezvous problem, we construct a theoretical proof of convergence and derive the conditions under which the buffered gossip algorithm is guaranteed to produce a consensus on rendezvous location. We use these results to verify that the uniform gossip algorithm also solves the rendezvous problem. We then use a multi-agent simulation to conduct a series of simulation experiments to compare the performance between the buffered and uniform gossip algorithms. Our results suggest that the buffered gossip algorithm can solve the rendezvous problem faster than the uniform gossip algorithm; however, the relative performance between these two solutions depends on the specific constraints of the problem and the parameters of the buffered gossip algorithm.
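A loosely interpreted toy simulation of buffered gossip toward a rendezvous consensus: each round every agent pushes its current choice to one random peer's buffer, then adopts the most common value among its own choice and its buffer. The push and decision rules here are simplified assumptions, not the authors' exact specification.

```python
import random
from collections import Counter

def buffered_gossip(num_agents=50, num_locations=5, max_rounds=500, seed=0):
    """Toy buffered-gossip dynamics; returns (rounds to consensus, chosen location)."""
    rng = random.Random(seed)
    choices = [rng.randrange(num_locations) for _ in range(num_agents)]
    buffers = [[] for _ in range(num_agents)]
    for round_no in range(1, max_rounds + 1):
        for i in range(num_agents):                    # push phase
            j = rng.randrange(num_agents)
            if j != i:
                buffers[j].append(choices[i])
        for i in range(num_agents):                    # decision phase
            votes = Counter(buffers[i] + [choices[i]])
            choices[i] = votes.most_common(1)[0][0]
            buffers[i].clear()
        if len(set(choices)) == 1:                     # consensus reached
            return round_no, choices[0]
    return None, None

rounds, location = buffered_gossip()
print(f"consensus on location {location} after {rounds} rounds")
```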
Automatic Multi-sensor Data Quality Checking and Event Detection for Environmental Sensing
NASA Astrophysics Data System (ADS)
LIU, Q.; Zhang, Y.; Zhao, Y.; Gao, D.; Gallaher, D. W.; Lv, Q.; Shang, L.
2017-12-01
With the advances in sensing technologies, large-scale environmental sensing infrastructures are pervasively deployed to continuously collect data for various research and application fields, such as air quality study and weather condition monitoring. In such infrastructures, many sensor nodes are distributed in a specific area and each individual sensor node is capable of measuring several parameters (e.g., humidity, temperature, and pressure), providing massive data for natural event detection and analysis. However, due to the dynamics of the ambient environment, sensor data can be contaminated by errors or noise. Thus, data quality is still a primary concern for scientists before drawing any reliable scientific conclusions. To help researchers identify potential data quality issues and detect meaningful natural events, this work proposes a novel algorithm to automatically identify and rank anomalous time windows from multiple sensor data streams. More specifically, (1) the algorithm adaptively learns the characteristics of normal evolving time series and (2) models the spatial-temporal relationship among multiple sensor nodes to infer the anomaly likelihood of a time series window for a particular parameter in a sensor node. Case studies using different data sets are presented and the experimental results demonstrate that the proposed algorithm can effectively identify anomalous time windows, which may result from data quality issues and natural events.
Genetic Algorithm Phase Retrieval for the Systematic Image-Based Optical Alignment Testbed
NASA Technical Reports Server (NTRS)
Rakoczy, John; Steincamp, James; Taylor, Jaime
2003-01-01
A reduced surrogate, one point crossover genetic algorithm with random rank-based selection was used successfully to estimate the multiple phases of a segmented optical system modeled on the seven-mirror Systematic Image-Based Optical Alignment testbed located at NASA's Marshall Space Flight Center.
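A hedged sketch of a genetic algorithm of the general kind described above: one-point crossover with random rank-based selection applied to candidate segment phases. The merit function, real-valued encoding, and all parameters are stand-in assumptions; the reduced-surrogate bit-string crossover and the testbed's image-based metric are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(2)
N_SEG, POP, GENS = 7, 60, 200
true_phase = rng.uniform(-np.pi, np.pi, N_SEG)      # hidden target (stand-in for the optics)

def fitness(phases):
    """Stand-in merit: closeness to the hidden phases, wrapped to (-pi, pi].
    In a real testbed this would be an image-sharpness metric, not a known target."""
    d = np.angle(np.exp(1j * (phases - true_phase)))
    return -np.sum(d ** 2)

def rank_select(pop, fits):
    """Random rank-based selection: pick two parents with probability proportional to rank."""
    order = np.argsort(fits)                         # worst ... best
    ranks = np.empty(len(pop))
    ranks[order] = np.arange(1, len(pop) + 1)
    idx = rng.choice(len(pop), size=2, p=ranks / ranks.sum())
    return pop[idx[0]], pop[idx[1]]

pop = rng.uniform(-np.pi, np.pi, (POP, N_SEG))
for _ in range(GENS):
    fits = np.array([fitness(ind) for ind in pop])
    new_pop = [pop[np.argmax(fits)].copy()]          # elitism
    while len(new_pop) < POP:
        a, b = rank_select(pop, fits)
        cut = rng.integers(1, N_SEG)                 # one-point crossover
        child = np.concatenate([a[:cut], b[cut:]])
        if rng.random() < 0.2:                       # small mutation
            child[rng.integers(N_SEG)] += rng.normal(0, 0.1)
        new_pop.append(child)
    pop = np.array(new_pop)

best = pop[np.argmax([fitness(ind) for ind in pop])]
print("residual rms phase error:",
      np.sqrt(np.mean(np.angle(np.exp(1j * (best - true_phase))) ** 2)))
```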
NASA Astrophysics Data System (ADS)
Chalmers, Alex
2007-10-01
A simple model is presented of a possible inspection regimen applied to each leg of a cargo container's journey between its point of origin and destination. Several candidate modalities are proposed to be used at multiple remote locations to act as a pre-screen inspection as the target approaches a perimeter and as the primary inspection modality at the portal. Information from multiple data sets is fused to optimize the costs and performance of a network of such inspection systems. A series of image processing algorithms is presented that automatically processes X-ray images of containerized cargo. The goal of this processing is to locate the container in a real-time stream of traffic traversing a portal without impeding the flow of commerce. Such processing may facilitate the inclusion of unmanned/unattended inspection systems in such a network. Several samples of the processing applied to data collected from deployed systems are included. Simulated data from a notional cargo inspection system with multiple sensor modalities and advanced data fusion algorithms are also included to show the potential increased detection and throughput performance of such a configuration.
Multi-Agent Patrolling under Uncertainty and Threats.
Chen, Shaofei; Wu, Feng; Shen, Lincheng; Chen, Jing; Ramchurn, Sarvapali D
2015-01-01
We investigate a multi-agent patrolling problem where information is distributed alongside threats in environments with uncertainties. Specifically, the information and threat at each location are independently modelled as multi-state Markov chains, whose states are not observed until the location is visited by an agent. While agents will obtain information at a location, they may also suffer damage from the threat at that location. Therefore, the goal of the agents is to gather as much information as possible while mitigating the damage incurred. To address this challenge, we formulate the single-agent patrolling problem as a Partially Observable Markov Decision Process (POMDP) and propose a computationally efficient algorithm to solve this model. Building upon this, to compute patrols for multiple agents, the single-agent algorithm is extended for each agent with the aim of maximising its marginal contribution to the team. We empirically evaluate our algorithm on problems of multi-agent patrolling and show that it outperforms a baseline algorithm by up to 44% for 10 agents and by 21% for 15 agents in large domains.
Efficient RNA structure comparison algorithms.
Arslan, Abdullah N; Anandan, Jithendar; Fry, Eric; Monschke, Keith; Ganneboina, Nitin; Bowerman, Jason
2017-12-01
The recently proposed relative addressing-based ([Formula: see text]) RNA secondary structure representation has important features by which an RNA structure database can be stored in a suffix array. A fast substructure search algorithm has been proposed based on binary search on this suffix array. Using this substructure search algorithm, we present a fast algorithm that finds the largest common substructure of given multiple RNA structures in [Formula: see text] format. The multiple RNA structure comparison problem is NP-hard in its general formulation. We introduce a new problem for comparing multiple RNA structures. This problem has a stricter similarity definition and objective, and we propose an algorithm that solves it efficiently. We also develop another comparison algorithm that iteratively calls this algorithm to locate nonoverlapping large common substructures in compared RNAs. With the new resulting tools, we improved the RNASSAC website (linked from http://faculty.tamuc.edu/aarslan). This website now also includes two drawing tools: one specialized for preparing RNA substructures that can be used as input by the search tool, and another one for automatically drawing the entire RNA structure from a given structure sequence.
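As a generic illustration of binary search on a suffix array for substructure lookup, the sketch below builds a naive suffix array over a structure string and finds every occurrence of a query substring. A plain dot-bracket-like string stands in for the paper's relative-addressing encoding, which is not reproduced here.

```python
def build_suffix_array(s):
    """Naive suffix-array construction: suffix start indices in lexicographic order."""
    return sorted(range(len(s)), key=lambda i: s[i:])

def find_occurrences(s, sa, pattern):
    """Binary search on the suffix array for every suffix starting with `pattern`."""
    def lower_bound(check):
        lo, hi = 0, len(sa)
        while lo < hi:
            mid = (lo + hi) // 2
            if check(s[sa[mid]:sa[mid] + len(pattern)]):
                hi = mid
            else:
                lo = mid + 1
        return lo

    left = lower_bound(lambda prefix: prefix >= pattern)    # first suffix >= pattern
    right = lower_bound(lambda prefix: prefix > pattern)    # first suffix strictly greater
    return sorted(sa[left:right])

# Toy example with a dot-bracket-style string (illustrative only).
structure = "((..))..((..))..((..))"
sa = build_suffix_array(structure)
print(find_occurrences(structure, sa, "((..))"))   # -> [0, 8, 16]
```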
NASA Astrophysics Data System (ADS)
Baziw, Erick; Verbeek, Gerald
2012-12-01
Among engineers there is considerable interest in the real-time identification of "events" within time series data with a low signal-to-noise ratio. This is especially true for acoustic emission analysis, which is utilized to assess the integrity and safety of many structures and is also applied in the field of passive seismic monitoring (PSM). Here an array of seismic receivers is used to acquire acoustic signals to monitor locations where seismic activity is expected: underground excavations, deep open pits and quarries, reservoirs into which fluids are injected or from which fluids are produced, permeable subsurface formations, or sites of large underground explosions. The most important element of PSM is event detection: the monitoring of seismic acoustic emissions is a continuous, real-time process which typically runs 24 h a day, 7 days a week, and therefore a PSM system with poor event detection can easily acquire terabytes of useless data as it does not identify crucial acoustic events. This paper outlines a new algorithm developed for this application, the so-called SEED™ (Signal Enhancement and Event Detection) algorithm. The SEED™ algorithm uses real-time Bayesian recursive estimation digital filtering techniques for PSM signal enhancement and event detection.
A generalized algorithm to design finite field normal basis multipliers
NASA Technical Reports Server (NTRS)
Wang, C. C.
1986-01-01
Finite field arithmetic logic is central in the implementation of some error-correcting coders and some cryptographic devices. There is a need for good multiplication algorithms which can be easily realized. Massey and Omura recently developed a new multiplication algorithm for finite fields based on a normal basis representation. Using the normal basis representation, the design of the finite field multiplier is simple and regular. The fundamental design of the Massey-Omura multiplier is based on a design of a product function. In this article, a generalized algorithm to locate a normal basis in a field is first presented. Using this normal basis, an algorithm to construct the product function is then developed. This design does not depend on particular characteristics of the generator polynomial of the field.
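A small, hedged illustration of what "locating a normal basis" means in practice: an element α of GF(2^m) generates a normal basis if its Frobenius conjugates {α, α², α⁴, ..., α^(2^(m-1))} are linearly independent over GF(2). The brute-force search below, with an example field and irreducible polynomial chosen for illustration, simply checks that condition; it is not the paper's generalized construction.

```python
def gf2m_mul(a, b, m, poly):
    """Multiply two GF(2^m) elements (ints as polynomial-basis bit vectors) modulo `poly`."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if a >> m:
            a ^= poly
    return r

def conjugates(alpha, m, poly):
    """Frobenius conjugates alpha, alpha^2, alpha^4, ..., alpha^(2^(m-1))."""
    out, x = [], alpha
    for _ in range(m):
        out.append(x)
        x = gf2m_mul(x, x, m, poly)
    return out

def is_normal_element(alpha, m, poly):
    """True if the conjugates of alpha are linearly independent over GF(2),
    i.e. alpha generates a normal basis of GF(2^m)."""
    rows = conjugates(alpha, m, poly)
    rank = 0
    for bit in reversed(range(m)):            # Gaussian elimination on bit-vector rows
        pivot_idx = next((i for i, r in enumerate(rows) if (r >> bit) & 1), None)
        if pivot_idx is None:
            continue
        pivot = rows.pop(pivot_idx)
        rows = [r ^ pivot if (r >> bit) & 1 else r for r in rows]
        rank += 1
    return rank == m

# Example: search GF(2^4), built with the irreducible polynomial x^4 + x + 1 (0b10011),
# for elements that generate a normal basis. The field parameters are illustrative.
m, poly = 4, 0b10011
print([a for a in range(1, 1 << m) if is_normal_element(a, m, poly)])
```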
NASA Astrophysics Data System (ADS)
Matoza, Robin S.; Green, David N.; Le Pichon, Alexis; Shearer, Peter M.; Fee, David; Mialle, Pierrick; Ceranna, Lars
2017-04-01
We experiment with a new method to search systematically through multiyear data from the International Monitoring System (IMS) infrasound network to identify explosive volcanic eruption signals originating anywhere on Earth. Detecting, quantifying, and cataloging the global occurrence of explosive volcanism helps toward several goals in Earth sciences and has direct applications in volcanic hazard mitigation. We combine infrasound signal association across multiple stations with source location using a brute-force, grid-search, cross-bearings approach. The algorithm corrects for a background prior rate of coherent unwanted infrasound signals (clutter) in a global grid, without needing to screen array processing detection lists from individual stations prior to association. We develop the algorithm using case studies of explosive eruptions: 2008 Kasatochi, Alaska; 2009 Sarychev Peak, Kurile Islands; and 2010 Eyjafjallajökull, Iceland. We apply the method to global IMS infrasound data from 2005-2010 to construct a preliminary acoustic catalog that emphasizes sustained explosive volcanic activity (long-duration signals or sequences of impulsive transients lasting hours to days). This work represents a step toward the goal of integrating IMS infrasound data products into global volcanic eruption early warning and notification systems. Additionally, a better understanding of volcanic signal detection and location with the IMS helps improve operational event detection, discrimination, and association capabilities.
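A minimal sketch of a brute-force, cross-bearings grid search of the flavor described above: back azimuths from several arrays are combined on a grid of candidate source positions and the best node is returned. Planar geometry, a von Mises-style azimuth misfit, and all numbers are assumptions; the paper's spherical grid and clutter-rate correction are omitted.

```python
import numpy as np

def cross_bearing_locate(stations, back_azimuths, grid_x, grid_y, kappa=50.0):
    """Grid search combining back azimuths (degrees clockwise from +y) from several arrays."""
    X, Y = np.meshgrid(grid_x, grid_y)
    log_like = np.zeros_like(X)
    for (sx, sy), baz in zip(stations, back_azimuths):
        pred = np.degrees(np.arctan2(X - sx, Y - sy))     # predicted azimuth to each node
        log_like += kappa * np.cos(np.radians(pred - baz))  # von Mises log-likelihood (up to const)
    i, j = np.unravel_index(np.argmax(log_like), log_like.shape)
    return X[i, j], Y[i, j]

# Illustrative example: three arrays observing a source near (30, 40).
stations = np.array([[0.0, 0.0], [100.0, 0.0], [0.0, 100.0]])
src = np.array([30.0, 40.0])
baz = [np.degrees(np.arctan2(src[0] - sx, src[1] - sy)) for sx, sy in stations]
gx = gy = np.linspace(-10, 110, 241)
print(cross_bearing_locate(stations, baz, gx, gy))
```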
EEG and MEG source localization using recursively applied (RAP) MUSIC
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mosher, J.C.; Leahy, R.M.
1996-12-31
The multiple signal characterization (MUSIC) algorithm locates multiple asynchronous dipolar sources from electroencephalography (EEG) and magnetoencephalography (MEG) data. A signal subspace is estimated from the data, then the algorithm scans a single dipole model through a three-dimensional head volume and computes projections onto this subspace. To locate the sources, the user must search the head volume for local peaks in the projection metric. Here we describe a novel extension of this approach which we refer to as RAP (Recursively APplied) MUSIC. This new procedure automatically extracts the locations of the sources through a recursive use of subspace projections, which uses the metric of principal correlations as a multidimensional form of correlation analysis between the model subspace and the data subspace. The dipolar orientations, a form of "diverse polarization," are easily extracted using the associated principal vectors.
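A toy sketch of the recursive subspace-projection idea: estimate a signal subspace from the data, scan candidate gain vectors for the highest subspace correlation, project the found topography out, and repeat. The synthetic lead field, subspace dimension, and all sizes are assumptions standing in for a real EEG/MEG forward model.

```python
import numpy as np

def subspace_correlation(g, Us):
    """Cosine of the principal angle between gain vector g and the signal subspace Us."""
    return np.linalg.norm(Us.T @ (g / np.linalg.norm(g)))

def rap_music(data, leadfield, n_sources):
    """Toy RAP-MUSIC-style scan; `leadfield` is (n_channels x n_candidates) and synthetic here."""
    found = []
    B = np.eye(data.shape[0])                        # out-projection operator
    for _ in range(n_sources):
        U, _, _ = np.linalg.svd(B @ data, full_matrices=False)
        Us = U[:, :n_sources]                        # estimated signal subspace (dimension assumed)
        corrs = np.array([subspace_correlation(B @ leadfield[:, k], Us)
                          for k in range(leadfield.shape[1])])
        corrs[found] = -1.0                          # do not re-select found sources
        found.append(int(np.argmax(corrs)))
        A = leadfield[:, found]                      # project found topographies out
        B = np.eye(data.shape[0]) - A @ np.linalg.pinv(A)
    return found

# Synthetic demo: 32 channels, 500 candidate source locations, 2 active sources.
rng = np.random.default_rng(3)
L = rng.standard_normal((32, 500))
true_idx = [40, 300]
data = L[:, true_idx] @ rng.standard_normal((2, 200)) + 0.05 * rng.standard_normal((32, 200))
print("recovered:", rap_music(data, L, 2), "true:", true_idx)
```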
NASA Astrophysics Data System (ADS)
Given, J. W.; Guendel, F.
2013-05-01
The International Data Centre is a vital element of the Comprehensive Test Ban Treaty (CTBT) verification mechanism. The fundamental mission of the International Data Centre (IDC) is to collect, process, and analyze monitoring data and to present results as event bulletins to Member States. For the IDC, and in particular for the waveform technologies, a key measure of the quality of its products is the accuracy with which every detected event is located. Accurate event location is crucial for purposes of an On Site Inspection (OSI), which would confirm the conduct of a nuclear test. It is therefore important for IDC monitoring and data analysis to adopt new processing algorithms that improve the accuracy of event location. Among these, new algorithms that compute regional seismic travel times through 3-dimensional models have greatly increased the IDC's location precision and reduced computational time, allowing forward and inverse modeling of large data sets. One of these algorithms is the Regional Seismic Travel Time (RSTT) model of Myers et al. (2011). The RSTT model is nominally a global model; however, it currently covers only North America and Eurasia in sufficient detail. It is the intention of the CTBTO's Provisional Technical Secretariat and the IDC to extend the RSTT model to other regions of the earth, e.g. Latin America-Caribbean, Africa and Asia. This is particularly important for the IDC location procedure, as there are regions of the earth for which crustal models are not well constrained. For this purpose the IDC has launched an RSTT initiative. In May 2012, a technical meeting was held in Vienna under the auspices of the CTBTO. The purpose of this meeting was to invite National Data Centre experts as well as network operators from Africa, Europe, the Middle East, Asia, Australia, Latin and North America to discuss the context under which a project to extend the RSTT model would be implemented. A total of 41 participants from 32 Member States were present. The Latin America and Caribbean region, with a rapidly expanding group of advanced seismic networks, was selected as a pilot project for the implementation of a regional RSTT initiative. This poster will assess the actions taken by the IDC in order to advance the RSTT project in the Latin America and Caribbean Region.
NASA Astrophysics Data System (ADS)
Sellars, S. L.; Kawzenuk, B.; Nguyen, P.; Ralph, F. M.; Sorooshian, S.
2017-12-01
The CONNected objECT (CONNECT) algorithm is applied to global Integrated Water Vapor Transport data from NASA's Modern-Era Retrospective Analysis for Research and Applications - Version 2 reanalysis product for the period of 1980 to 2016. The algorithm generates life-cycle records, in time and space, of evolving strong vapor transport events. We show five regions, located in the midlatitudes, where events typically exist (off the coast of the southeast United States, eastern China, eastern South America, off the southern tip of South Africa, and in the southeastern Pacific Ocean). Global statistics show distinct genesis and termination regions and global seasonal peak frequency during Northern Hemisphere late fall/winter and Southern Hemisphere winter. In addition, the event frequency and geographical location are shown to be modulated by the Arctic Oscillation, Pacific North American Pattern, and the quasi-biennial oscillation. Moreover, a positive linear trend in the annual number of objects is reported, increasing by 3.58 objects year over year.
Event-related potentials to structural familiar face incongruity processing.
Jemel, B; George, N; Olivares, E; Fiori, N; Renault, B
1999-07-01
Thirty scalp sites were used to investigate the specific topography of the event-related potentials (ERPs) related to face associative priming when masked eyes of familiar faces were completed with either the proper features or incongruent ones. The enhanced negativity of N210 and N350, due to structural incongruity of faces, has a "category specific" inferotemporal localization on the scalp. Additional analyses support the existence of multiple ERP features within the temporal interval typically associated with N400 (N350 and N380), involving occipitotemporal and centroparietal areas. Seven reliable dipole locations have been identified using the brain electrical source analysis algorithm. Some of these localizations (fusiform, parahippocampal) are already known to be involved in face recognition, while the others are related to general cognitive processes associated with the task's demands. Because of their specific topography, the observed effects suggest that the face structural congruency process might involve early specialized neocortical areas in parallel with cortical memory circuits in the integration of perceptual and cognitive face processing.
Classification and Weakly Supervised Pain Localization using Multiple Segment Representation.
Sikka, Karan; Dhall, Abhinav; Bartlett, Marian Stewart
2014-10-01
Automatic pain recognition from videos is a vital clinical application and, owing to its spontaneous nature, poses interesting challenges to automatic facial expression recognition (AFER) research. Previous pain vs no-pain systems have highlighted two major challenges: (1) ground truth is provided for the sequence, but the presence or absence of the target expression for a given frame is unknown, and (2) the time point and the duration of the pain expression event(s) in each video are unknown. To address these issues we propose a novel framework (referred to as MS-MIL) where each sequence is represented as a bag containing multiple segments, and multiple instance learning (MIL) is employed to handle this weakly labeled data in the form of sequence level ground-truth. These segments are generated via multiple clustering of a sequence or running a multi-scale temporal scanning window, and are represented using a state-of-the-art Bag of Words (BoW) representation. This work extends the idea of detecting facial expressions through 'concept frames' to 'concept segments' and argues through extensive experiments that algorithms such as MIL are needed to reap the benefits of such representation. The key advantages of our approach are: (1) joint detection and localization of painful frames using only sequence-level ground-truth, (2) incorporation of temporal dynamics by representing the data not as individual frames but as segments, and (3) extraction of multiple segments, which is well suited to signals with uncertain temporal location and duration in the video. Extensive experiments on UNBC-McMaster Shoulder Pain dataset highlight the effectiveness of the approach by achieving competitive results on both tasks of pain classification and localization in videos. We also empirically evaluate the contributions of different components of MS-MIL. The paper also includes the visualization of discriminative facial patches, important for pain detection, as discovered by our algorithm and relates them to Action Units that have been associated with pain expression. We conclude the paper by demonstrating that MS-MIL yields a significant improvement on another spontaneous facial expression dataset, the FEEDTUM dataset.
Three-dimensional Probabilistic Earthquake Location Applied to 2002-2003 Mt. Etna Eruption
NASA Astrophysics Data System (ADS)
Mostaccio, A.; Tuve', T.; Zuccarello, L.; Patane', D.; Saccorotti, G.; D'Agostino, M.
2005-12-01
Seismicity recorded at Mt. Etna volcano during the 2002-2003 eruption has been relocated using a probabilistic, non-linear earthquake location approach. We used the software package NonLinLoc (Lomax et al., 2000), adopting the 3D velocity model obtained by Cocina et al., 2005. We processed our data with three different algorithms: (1) a grid search; (2) a Metropolis-Gibbs sampler; and (3) an Oct-tree search. The Oct-tree algorithm gives efficient, fast and accurate mapping of the PDF (Probability Density Function) of the earthquake location problem. More than 300 seismic events were analyzed in order to compare the non-linear location results with the ones obtained using traditional linearized earthquake location algorithms such as Hypoellipse, and a 3D linearized inversion (Thurber, 1983). Moreover, we compare 38 focal mechanisms, chosen following strict selection criteria, with the ones obtained from the 3D and 1D results. Although the presented approach is more of a traditional relocation application, probabilistic earthquake location could also be used in routine monitoring.
A Bayesian framework for infrasound location
NASA Astrophysics Data System (ADS)
Modrak, Ryan T.; Arrowsmith, Stephen J.; Anderson, Dale N.
2010-04-01
We develop a framework for location of infrasound events using backazimuth and infrasonic arrival times from multiple arrays. Bayesian infrasonic source location (BISL) developed here estimates event location and associated credibility regions. BISL accounts for unknown source-to-array path or phase by formulating infrasonic group velocity as random. Differences between observed and predicted source-to-array traveltimes are partitioned into two additive Gaussian sources, measurement error and model error, the second of which accounts for the unknown influence of wind and temperature on path. By applying the technique to both synthetic tests and ground-truth events, we highlight the complementary nature of back azimuths and arrival times for estimating well-constrained event locations. BISL is an extension to methods developed earlier by Arrowsmith et al. that provided simple bounds on location using a grid-search technique.
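The sketch below conveys the flavor of combining back azimuths and arrival times on a grid of candidate infrasound source locations. It is not the full BISL formulation: a single nominal celerity with Gaussian residuals stands in for the random group-velocity model, origin time is profiled out analytically, and no credibility regions are computed. All parameters are illustrative assumptions.

```python
import numpy as np

def locate(arrays, t_obs, baz_obs, grid_x, grid_y,
           celerity=0.30, sigma_t=5.0, sigma_baz=3.0):
    """Grid-based log-posterior from arrival times (s) and back azimuths (deg from +y).
    arrays: (N, 2) positions in km; celerity in km/s."""
    X, Y = np.meshgrid(grid_x, grid_y)
    logp = np.zeros_like(X)
    dists = [np.hypot(X - ax, Y - ay) for ax, ay in arrays]
    azs = [np.degrees(np.arctan2(X - ax, Y - ay)) for ax, ay in arrays]
    tt = np.stack([d / celerity for d in dists])              # predicted travel times
    t0 = np.mean(np.array(t_obs)[:, None, None] - tt, axis=0)  # best-fit origin time per node
    for k in range(len(arrays)):
        res_t = t_obs[k] - (t0 + tt[k])
        res_b = (baz_obs[k] - azs[k] + 180.0) % 360.0 - 180.0   # wrapped azimuth residual
        logp += -0.5 * (res_t / sigma_t) ** 2 - 0.5 * (res_b / sigma_baz) ** 2
    i, j = np.unravel_index(np.argmax(logp), logp.shape)
    return (X[i, j], Y[i, j]), logp

# Synthetic example: three arrays, source at (120, 80) km, origin time 10 s.
arrays = np.array([[0.0, 0.0], [300.0, 0.0], [0.0, 300.0]])
src, t0_true, c = np.array([120.0, 80.0]), 10.0, 0.30
t_obs = [t0_true + np.hypot(*(src - a)) / c for a in arrays]
baz_obs = [np.degrees(np.arctan2(src[0] - a[0], src[1] - a[1])) for a in arrays]
gx = gy = np.linspace(0, 300, 301)
best, _ = locate(arrays, t_obs, baz_obs, gx, gy)
print(best)
```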
NASA Astrophysics Data System (ADS)
Cannata, A.; Montalto, P.; Aliotta, M.; Cassisi, C.; Pulvirenti, A.; Privitera, E.; Patanè, D.
2011-04-01
Active volcanoes generate sonic and infrasonic signals, whose investigation provides useful information for both monitoring purposes and the study of the dynamics of explosive phenomena. At Mt. Etna volcano (Italy), a pattern recognition system based on infrasonic waveform features has been developed. First, by a parametric power spectrum method, the features describing and characterizing the infrasound events were extracted: peak frequency and quality factor. Then, together with the peak-to-peak amplitude, these features constituted a 3-D 'feature space'; using the Density-Based Spatial Clustering of Applications with Noise (DBSCAN) algorithm, three clusters were recognized within it. After the clustering process, by using a common location method (semblance method) and additional volcanological information concerning the intensity of the explosive activity, we were able to associate each cluster with a particular source vent and/or kind of volcanic activity. Finally, for automatic event location, clusters were used to train a model based on Support Vector Machine, calculating optimal hyperplanes able to maximize the margins of separation among the clusters. After the training phase, this system automatically recognizes the active vent with no location algorithm and using only a single station.
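A hedged sketch of that two-step pipeline, clustering a 3-D feature space with DBSCAN and then training an SVM so new events can be assigned a cluster from a single station. The feature values, DBSCAN parameters, and scikit-learn implementation are assumptions for illustration; they are not the Etna system's actual settings.

```python
import numpy as np
from sklearn.cluster import DBSCAN
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Synthetic stand-ins for the three features of each infrasound event:
# peak frequency (Hz), quality factor, and peak-to-peak amplitude (Pa).
rng = np.random.default_rng(4)
centers = np.array([[1.0, 4.0, 0.5], [2.5, 10.0, 2.0], [0.5, 2.0, 5.0]])
features = np.vstack([c + rng.normal(0, [0.1, 0.5, 0.2], (200, 3)) for c in centers])

scaler = StandardScaler().fit(features)
X = scaler.transform(features)

# Step 1: unsupervised clustering of the 3-D feature space with DBSCAN.
labels = DBSCAN(eps=0.3, min_samples=10).fit_predict(X)
print("clusters:", sorted(set(labels) - {-1}), "noise points:", int(np.sum(labels == -1)))

# Step 2: train an SVM on the clustered (non-noise) events so new events can be
# assigned to a vent / activity type with no location step.
mask = labels != -1
clf = SVC(kernel="rbf", gamma="scale").fit(X[mask], labels[mask])
print("predicted cluster:", clf.predict(scaler.transform([[2.4, 9.5, 1.8]]))[0])
```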
NASA Astrophysics Data System (ADS)
Beucler, E.; Haugmard, M.; Mocquet, A.
2016-12-01
The most widely used inversion schemes for locating earthquakes are based on iterative linearized least-squares algorithms and use a priori knowledge of the propagation medium. When only a small number of observations is available, for moderate events for instance, these methods may lead to large trade-offs between the outputs and both the velocity model and the initial set of hypocentral parameters. We present a joint structure-source determination approach using Bayesian inference. Continuous Monte-Carlo sampling, using Markov chains, generates models within a broad range of parameters, distributed according to the unknown posterior distributions. The non-linear exploration of both the seismic structure (velocity and thickness) and the source parameters relies on a fast forward problem using 1-D travel time computations. The a posteriori covariances between parameters (hypocentre depth, origin time and seismic structure among others) are computed and explicitly documented. This method reduces the influence of the surrounding seismic network geometry (sparse and/or azimuthally inhomogeneous) and of an overly constrained velocity structure by inferring realistic distributions on the hypocentral parameters. Our algorithm is successfully used to accurately locate events in the Armorican Massif (western France), which is characterized by moderate and apparently diffuse local seismicity.
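A minimal sketch of Markov-chain Monte-Carlo sampling of hypocentral parameters with a fast 1-D travel-time forward problem. It fixes a homogeneous velocity rather than jointly sampling the seismic structure as the abstract describes; station geometry, noise levels, and step sizes are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(5)

def travel_time(src, station, v=6.0):
    """Homogeneous half-space P travel time in s; src = (x, y, z, t0) in km and s."""
    d = np.sqrt((src[0] - station[0]) ** 2 + (src[1] - station[1]) ** 2 + src[2] ** 2)
    return src[3] + d / v

def log_posterior(src, stations, t_obs, sigma=0.2):
    """Gaussian arrival-time misfit (flat priors assumed)."""
    pred = np.array([travel_time(src, s) for s in stations])
    return -0.5 * np.sum(((t_obs - pred) / sigma) ** 2)

def metropolis(stations, t_obs, n_iter=20000, step=(1.0, 1.0, 1.0, 0.2)):
    """Random-walk Metropolis over (x, y, z, t0); returns all posterior samples."""
    x = np.array([0.0, 0.0, 10.0, 0.0])          # arbitrary starting model
    lp = log_posterior(x, stations, t_obs)
    samples = []
    for _ in range(n_iter):
        prop = x + rng.normal(0, step)
        lp_prop = log_posterior(prop, stations, t_obs)
        if np.log(rng.random()) < lp_prop - lp:
            x, lp = prop, lp_prop
        samples.append(x.copy())
    return np.array(samples)

# Synthetic test: five stations, true hypocentre at (12, -8, 15) km, origin time 2 s.
stations = np.array([[0, 0], [30, 5], [-20, 15], [10, -25], [-15, -10]], float)
true = np.array([12.0, -8.0, 15.0, 2.0])
t_obs = np.array([travel_time(true, s) for s in stations]) + rng.normal(0, 0.1, 5)
post = metropolis(stations, t_obs)[5000:]         # discard burn-in
print("posterior mean (x, y, z, t0):", post.mean(axis=0).round(2))
```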
NASA Astrophysics Data System (ADS)
Takiguchi, Yu; Toyoda, Haruyoshi
2017-11-01
We report here an algorithm for calculating a hologram to be employed in a high-access speed microscope for observing sensory-driven synaptic activity across all inputs to single living neurons in an intact cerebral cortex. The system is based on holographic multi-beam generation using a two-dimensional phase-only spatial light modulator to excite multiple locations in three dimensions with a single hologram. The hologram was calculated with a three-dimensional weighted iterative Fourier transform method using the Ewald sphere restriction to increase the calculation speed. Our algorithm achieved good uniformity of three dimensionally generated excitation spots; the standard deviation of the spot intensities was reduced by a factor of two compared with a conventional algorithm.
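A 2-D, hedged sketch of a weighted iterative Fourier-transform (Gerchberg-Saxton-style) calculation of a phase-only hologram producing several spots with improved uniformity. The paper's method is 3-D and uses an Ewald-sphere restriction, which is omitted here; resolution, spot positions, and iteration count are illustrative assumptions.

```python
import numpy as np

def weighted_gs_hologram(shape, spots, n_iter=30):
    """Weighted iterative FT: boost the weights of weak spots each iteration so the
    final phase-only hologram yields nearly uniform focal-spot amplitudes."""
    rng = np.random.default_rng(6)
    phase = rng.uniform(0, 2 * np.pi, shape)           # random initial SLM phase
    rows, cols = zip(*spots)
    weights = np.ones(len(spots))
    for _ in range(n_iter):
        far = np.fft.fft2(np.exp(1j * phase))          # field in the focal plane
        amps = np.abs(far[rows, cols])
        weights *= amps.mean() / np.maximum(amps, 1e-12)   # strengthen weak spots
        constrained = np.zeros(shape, complex)
        constrained[rows, cols] = weights * np.exp(1j * np.angle(far[rows, cols]))
        phase = np.angle(np.fft.ifft2(constrained))    # keep phase only at the SLM plane
    amps = np.abs(np.fft.fft2(np.exp(1j * phase))[rows, cols])
    return phase, amps / amps.mean()

spots = [(30, 40), (100, 200), (180, 70), (220, 220)]
hologram, rel_amplitude = weighted_gs_hologram((256, 256), spots)
print("relative spot amplitudes:", rel_amplitude.round(3))
print("uniformity (std/mean):", rel_amplitude.std().round(4))
```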
Parallel Fault Strands at 9-km Depth Resolved on the Imperial Fault, Southern California
NASA Astrophysics Data System (ADS)
Shearer, P. M.
2001-12-01
The Imperial Fault is one of the most active faults in California with several M>6 events during the 20th century and geodetic results suggesting that it currently carries almost 80% of the total plate motion between the Pacific and North American plates. We apply waveform cross-correlation to a group of ~1500 microearthquakes along the Imperial Fault and find that about 25% of the events form similar event clusters. Event relocation based on precise differential times among events in these clusters reveals multiple streaks of seismicity up to 5 km in length that are at a nearly constant depth of ~9 km but are spaced about 0.5 km apart in map view. These multiples are unlikely to be a location artifact because they are spaced more widely than the computed location errors and different streaks can be resolved within individual similar event clusters. The streaks are parallel to the mapped surface rupture of the 1979 Mw=6.5 Imperial Valley earthquake. No obvious temporal migration of the event locations is observed. Limited focal mechanism data for the events within the streaks are consistent with right-lateral slip on vertical fault planes. The seismicity not contained in similar event clusters cannot be located as precisely; our locations for these events scatter between 7 and 11 km depth, but it is possible that their true locations could be much more tightly clustered. The observed streaks have some similarities to those previously observed in northern California along the San Andreas and Hayward faults (e.g., Rubin et al., 1999; Waldhauser et al., 1999); however those streaks were imaged within a single fault plane rather than the multiple faults resolved on the Imperial Fault. The apparent constant depth of the Imperial streaks is similar to that seen in Hawaii at much shallower depth by Gillard et al. (1996). Geodetic results (e.g., Lyons et al., 2001) suggest that the Imperial Fault is currently slipping at 45 mm/yr below a locked portion that extends to ~10 km depth. We interpret our observed seismicity streaks as representing activity on multiple fault strands at transition depths between the locked shallow part of the Imperial Fault and the slipping portion at greater depths. It is likely that these strands extend into the aseismic region below, suggesting that the lower crustal shear zone is at least 2 km wide.
NASA Technical Reports Server (NTRS)
Turso, James; Lawrence, Charles; Litt, Jonathan
2004-01-01
The development of a wavelet-based feature extraction technique specifically targeting FOD-event induced vibration signal changes in gas turbine engines is described. The technique performs wavelet analysis of accelerometer signals from specified locations on the engine and is shown to be robust in the presence of significant process and sensor noise. It is envisioned that the technique will be combined with Kalman filter thermal/health parameter estimation for FOD-event detection via information fusion from these (and perhaps other) sources. Due to the lack of high-frequency FOD-event test data in the open literature, a reduced-order turbofan structural model (ROM) was synthesized from a finite element model modal analysis to support the investigation. In addition to providing test data for algorithm development, the ROM is used to determine the optimal sensor location for FOD-event detection. In the presence of significant noise, precise location of the FOD event in time was obtained using the developed wavelet-based feature.
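A simple illustration of wavelet-based feature extraction for locating a transient in a vibration record: the fine-scale wavelet detail energy of overlapping windows is used as the feature. It assumes PyWavelets is available; the wavelet choice, window lengths, and synthetic "impact" are stand-in assumptions, not the authors' exact feature.

```python
import numpy as np
import pywt

def wavelet_detail_energy(signal, wavelet="db4", level=4):
    """Energy in each wavelet detail band of a window (approximation band skipped)."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    return np.array([np.sum(c ** 2) for c in coeffs[1:]])

def sliding_feature(signal, win=256, step=64):
    """Finest-detail energy of overlapping windows, used to locate a transient in time."""
    feats = []
    for start in range(0, len(signal) - win + 1, step):
        feats.append(wavelet_detail_energy(signal[start:start + win])[-1])
    return np.array(feats)

# Synthetic vibration record: broadband noise plus a short impulsive transient.
rng = np.random.default_rng(7)
fs, dur = 5000, 2.0
x = 0.5 * rng.standard_normal(int(fs * dur))
x[int(0.8 * fs):int(0.8 * fs) + 50] += 5.0 * rng.standard_normal(50)   # impact-like event

feat = sliding_feature(x)
event_window = int(np.argmax(feat))
print("transient detected near t = %.2f s" % (event_window * 64 / fs))
```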
Parameterization of synoptic weather systems in the South Atlantic Bight for modeling applications
NASA Astrophysics Data System (ADS)
Wu, Xiaodong; Voulgaris, George; Kumar, Nirnimesh
2017-10-01
An event-based, long-term climatological analysis is presented that allows the creation of atmospheric forcing on the coastal ocean that preserves both frequency of occurrence and event time history. An algorithm is developed that identifies individual storm events (cold fronts, warm fronts, and tropical storms) from meteorological records. The algorithm has been applied to a location along the South Atlantic Bight, off South Carolina, an area prone to cyclogenesis occurrence and passages of atmospheric fronts. Comparison against daily weather maps confirms that the algorithm is efficient in identifying cold fronts and warm fronts, while the identification of tropical storms is less successful. The average state of the storm events and their variability are represented by the temporal evolution of atmospheric pressure, air temperature, wind velocity, and wave directional spectral energy. The use of uncorrected algorithm-detected events provides climatologies that show little deviation from those derived using corrected events. The effectiveness of this analysis method is further verified by numerically simulating the wave conditions driven by the characteristic wind forcing and comparing the results with the wave climatology that corresponds to each storm type. A high level of consistency found in the comparison indicates that this analysis method can be used for accurately characterizing event-based oceanic processes and long-term storm-induced morphodynamic processes on wind-dominated coasts.
Cost-effective solutions to maintaining smart grid reliability
NASA Astrophysics Data System (ADS)
Qin, Qiu
As the aging power systems are increasingly working closer to their capacity and thermal limits, maintaining sufficient reliability has been of great concern to government agencies, utility companies and users. This dissertation focuses on improving the reliability of transmission and distribution systems. Based on wide area measurements, multiple model algorithms are developed to diagnose transmission line three-phase short-to-ground faults in the presence of protection misoperations. The multiple model algorithms utilize the electric network dynamics to provide prompt and reliable diagnosis outcomes. Computational complexity of the diagnosis algorithm is reduced by using a two-step heuristic. The multiple model algorithm is incorporated into a hybrid simulation framework, which consists of both continuous-state simulation and discrete-event simulation, to study the operation of transmission systems. With hybrid simulation, a line switching strategy for enhancing the tolerance to protection misoperations is studied based on the concept of a security index, which involves the faulted mode probability and stability coverage. Local measurements are used to track the generator state, and faulty mode probabilities are calculated in the multiple model algorithms. FACTS devices are considered as controllers for the transmission system. The placement of FACTS devices into power systems is investigated with a criterion of maintaining a prescribed level of control reconfigurability. Control reconfigurability measures the small-signal combined controllability and observability of a power system with an additional requirement on fault tolerance. For the distribution systems, a hierarchical framework, including a high-level recloser allocation scheme and a low-level recloser placement scheme, is presented. The impacts of recloser placement on the reliability indices are analyzed. Evaluation of reliability indices in the placement process is carried out via discrete-event simulation. The reliability requirements are described with probabilities and evaluated from the empirical distributions of reliability indices.
Adaptive Waveform Correlation Detectors for Arrays: Algorithms for Autonomous Calibration
2007-09-01
March 17, 2005. The seismic signals from both master and detected events are followed by infrasound arrivals. Note the long duration of the...correlation coefficient traces with a significant array -gain. A detected event that is co-located with the master event will record the same time-difference...estimating the detection threshold reduction for a range of highly repeating seismic sources using arrays of different configurations and at different
Seismic envelope-based detection and location of ground-coupled airwaves from volcanoes in Alaska
Fee, David; Haney, Matt; Matoza, Robin S.; Szuberla, Curt A.L.; Lyons, John; Waythomas, Christopher F.
2016-01-01
Volcanic explosions and other infrasonic sources frequently produce acoustic waves that are recorded by seismometers. Here we explore multiple techniques to detect, locate, and characterize ground‐coupled airwaves (GCA) on volcano seismic networks in Alaska. GCA waveforms are typically incoherent between stations, thus we use envelope‐based techniques in our analyses. For distant sources and planar waves, we use f‐k beamforming to estimate back azimuth and trace velocity parameters. For spherical waves originating within the network, we use two related time difference of arrival (TDOA) methods to detect and localize the source. We investigate a modified envelope function to enhance the signal‐to‐noise ratio and emphasize both high energies and energy contrasts within a spectrogram. We apply these methods to recent eruptions from Cleveland, Veniaminof, and Pavlof Volcanoes, Alaska. Array processing of GCA from Cleveland Volcano on 4 May 2013 produces robust detection and wave characterization. Our modified envelopes substantially improve the short‐term average/long‐term average ratios, enhancing explosion detection. We detect GCA within both the Veniaminof and Pavlof networks from the 2007 and 2013–2014 activity, indicating repeated volcanic explosions. Event clustering and forward modeling suggests that high‐resolution localization is possible for GCA on typical volcano seismic networks. These results indicate that GCA can be used to help detect, locate, characterize, and monitor volcanic eruptions, particularly in difficult‐to‐monitor regions. We have implemented these GCA detection algorithms into our operational volcano‐monitoring algorithms at the Alaska Volcano Observatory.
Khachatryan, Vardan
2015-02-12
The performance of missing transverse energy reconstruction algorithms is presented by our team using √s = 8 TeV proton-proton (pp) data collected with the CMS detector. Events with anomalous missing transverse energy are studied, and the performance of algorithms used to identify and remove these events is presented. The scale and resolution for missing transverse energy, including the effects of multiple pp interactions (pileup), are measured using events with an identified Z boson or isolated photon, and are found to be well described by the simulation. Novel missing transverse energy reconstruction algorithms developed specifically to mitigate the effects of large numbers of pileup interactions on the missing transverse energy resolution are presented. These algorithms significantly reduce the dependence of the missing transverse energy resolution on pileup interactions. Furthermore, an algorithm that provides an estimate of the significance of the missing transverse energy is presented, which is used to estimate the compatibility of the reconstructed missing transverse energy with a zero nominal value.
LensFlow: A Convolutional Neural Network in Search of Strong Gravitational Lenses
NASA Astrophysics Data System (ADS)
Pourrahmani, Milad; Nayyeri, Hooshang; Cooray, Asantha
2018-03-01
In this work, we present our machine learning classification algorithm for identifying strong gravitational lenses from wide-area surveys using convolutional neural networks, LENSFLOW. We train and test the algorithm using a wide variety of strong gravitational lens configurations from simulations of lensing events. Images are processed through multiple convolutional layers that extract feature maps necessary to assign a lens probability to each image. LENSFLOW provides a ranking scheme for all sources that could be used to identify potential gravitational lens candidates by significantly reducing the number of images that have to be visually inspected. We apply our algorithm to the HST/ACS i-band observations of the COSMOS field and present our sample of identified lensing candidates. The developed machine learning algorithm is more computationally efficient and complementary to classical lens identification algorithms and is ideal for discovering such events across wide areas from current and future surveys such as LSST and WFIRST.
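A tiny, hedged sketch of the general idea of a convolutional classifier that maps image cutouts to a lens probability and ranks them: the layer sizes, input dimensions, and PyTorch implementation are illustrative assumptions and not the published LensFlow architecture.

```python
import torch
import torch.nn as nn

class SmallLensNet(nn.Module):
    """Tiny CNN: stacked convolutions produce feature maps, a fully connected head
    outputs a lens probability. Sizes are illustrative, not the published network."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(4),
        )
        self.head = nn.Sequential(
            nn.Flatten(), nn.Linear(64 * 4 * 4, 64), nn.ReLU(), nn.Linear(64, 1)
        )

    def forward(self, x):
        return torch.sigmoid(self.head(self.features(x)))   # lens probability in [0, 1]

# Ranking a batch of cutouts by lens probability (random tensors stand in for images).
model = SmallLensNet().eval()
with torch.no_grad():
    cutouts = torch.randn(8, 1, 64, 64)          # 8 single-band 64x64 postage stamps
    probs = model(cutouts).squeeze(1)
    ranking = torch.argsort(probs, descending=True)
print("ranked candidate indices:", ranking.tolist())
```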
Maximum-Likelihood Estimation With a Contracting-Grid Search Algorithm
Hesterman, Jacob Y.; Caucci, Luca; Kupinski, Matthew A.; Barrett, Harrison H.; Furenlid, Lars R.
2010-01-01
A fast search algorithm capable of operating in multi-dimensional spaces is introduced. As a sample application, we demonstrate its utility in the 2D and 3D maximum-likelihood position-estimation problem that arises in the processing of PMT signals to derive interaction locations in compact gamma cameras. We demonstrate that the algorithm can be parallelized in pipelines, and thereby efficiently implemented in specialized hardware, such as field-programmable gate arrays (FPGAs). A 2D implementation of the algorithm is achieved in Cell/BE processors, resulting in processing speeds above one million events per second, which is a 20× increase in speed over a conventional desktop machine. Graphics processing units (GPUs) are used for a 3D application of the algorithm, resulting in processing speeds of nearly 250,000 events per second which is a 250× increase in speed over a conventional desktop machine. These implementations indicate the viability of the algorithm for use in real-time imaging applications. PMID:20824155
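A small sketch of the contracting-grid idea described above: evaluate the objective on a coarse grid, re-centre on the best node, shrink the grid, and repeat. The smooth toy likelihood stands in for a detector response model; grid size, shrink factor, and iteration count are assumptions.

```python
import numpy as np

def contracting_grid_search(log_like, center, span, n=5, iterations=8, shrink=0.5):
    """Maximum-likelihood search over a 2-D parameter space by repeatedly evaluating
    an n x n grid, moving to the best node, and contracting the grid extent."""
    cx, cy = center
    for _ in range(iterations):
        xs = np.linspace(cx - span, cx + span, n)
        ys = np.linspace(cy - span, cy + span, n)
        vals = np.array([[log_like(x, y) for x in xs] for y in ys])
        j, i = np.unravel_index(np.argmax(vals), vals.shape)
        cx, cy = xs[i], ys[j]
        span *= shrink
    return cx, cy

# Toy example: recover a 2-D interaction position from a smooth Gaussian-like
# likelihood surface (a stand-in for a PMT mean-response model).
true = (1.37, -2.61)
def ll(x, y):
    return -((x - true[0]) ** 2 + (y - true[1]) ** 2) / 0.1

print(contracting_grid_search(ll, center=(0.0, 0.0), span=5.0))
```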
Nam, Haewon
2017-01-01
We propose a novel metal artifact reduction (MAR) algorithm for CT images that completes a corrupted sinogram along the metal trace region. When metal implants are located inside a field of view, they create a barrier to the transmitted X-ray beam due to the high attenuation of metals, which significantly degrades the image quality. To fill in the metal trace region efficiently, the proposed algorithm uses multiple prior images with residual error compensation in sinogram space. Multiple prior images are generated by applying a recursive active contour (RAC) segmentation algorithm to the pre-corrected image acquired by MAR with linear interpolation, where the number of prior images is controlled by RAC depending on the object complexity. A sinogram basis is then acquired by forward projection of the prior images. The metal trace region of the original sinogram is replaced by the linearly combined sinogram of the prior images. Then, an additional correction in the metal trace region is performed to compensate for the residual errors caused by non-ideal data acquisition conditions. The performance of the proposed MAR algorithm is compared with MAR with linear interpolation and the normalized MAR algorithm using simulated and experimental data. The results show that the proposed algorithm outperforms other MAR algorithms, especially when the object is complex with multiple bone objects. PMID:28604794
Evaluation of Electroencephalography Source Localization Algorithms with Multiple Cortical Sources.
Bradley, Allison; Yao, Jun; Dewald, Jules; Richter, Claus-Peter
2016-01-01
Source localization algorithms often show multiple active cortical areas as the source of electroencephalography (EEG). Yet, there is little data quantifying the accuracy of these results. In this paper, the performance of current source density source localization algorithms for the detection of multiple cortical sources of EEG data has been characterized. EEG data were generated by simulating multiple cortical sources (2-4) with the same strength or two sources with relative strength ratios of 1:1 to 4:1, and adding noise. These data were used to reconstruct the cortical sources using current source density (CSD) algorithms: sLORETA, MNLS, and LORETA using a p-norm with p equal to 1, 1.5 and 2. Precision (percentage of the reconstructed activity corresponding to simulated activity) and Recall (percentage of the simulated sources reconstructed) of each of the CSD algorithms were calculated. While sLORETA has the best performance when only one source is present, when two or more sources are present LORETA with p equal to 1.5 performs better. When the relative strength of one of the sources is decreased, all algorithms have more difficulty reconstructing that source. However, LORETA 1.5 continues to outperform other algorithms. If only the strongest source is of interest sLORETA is recommended, while LORETA with p equal to 1.5 is recommended if two or more of the cortical sources are of interest. These results provide guidance for choosing a CSD algorithm to locate multiple cortical sources of EEG and for interpreting the results of these algorithms.
Design of an Acoustic Target Intrusion Detection System Based on Small-Aperture Microphone Array.
Zu, Xingshui; Guo, Feng; Huang, Jingchang; Zhao, Qin; Liu, Huawei; Li, Baoqing; Yuan, Xiaobing
2017-03-04
Automated surveillance of remote locations in a wireless sensor network is dominated by the detection algorithm because actual intrusions in such locations are a rare event. Therefore, a detection method with low power consumption is crucial for persistent surveillance to ensure longevity of the sensor networks. A simple and effective two-stage algorithm, composed of an energy detector (ED) and a delay detector (DD) with all operations in the time domain using a small-aperture microphone array (SAMA), is proposed. The algorithm exploits the quite different propagation velocities of wind noise and sound waves to improve the detection capability of the ED in the surveillance area. Experiments in four different fields with three types of vehicles show that the algorithm is robust to wind noise and that the probabilities of detection and false alarm are 96.67% and 2.857%, respectively.
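A loose sketch of a two-stage, time-domain detector in the spirit of the ED/DD design: the first stage thresholds frame energy, and the second cross-correlates two microphones and accepts the frame only if the inter-channel delay is small enough to be acoustic rather than wind-driven. The acceptance criterion, thresholds, and synthetic signals are assumptions, not the published algorithm.

```python
import numpy as np

def energy_detector(frame, threshold):
    """Stage 1: flag a frame whose mean power exceeds a (pre-calibrated) threshold."""
    return np.mean(frame ** 2) > threshold

def delay_detector(ch_a, ch_b, fs, aperture_m, max_sound_delay_factor=1.5):
    """Stage 2: accept the frame only if the inter-channel delay is consistent with a
    sound wave crossing the small aperture (wind noise drifts across it far more slowly)."""
    a = ch_a - ch_a.mean()
    b = ch_b - ch_b.mean()
    cc = np.correlate(a, b, mode="full")
    lag = np.argmax(cc) - (len(a) - 1)                     # lag in samples
    max_acoustic_lag = max_sound_delay_factor * aperture_m / 343.0 * fs
    return abs(lag) <= max_acoustic_lag

# Illustrative frame-by-frame use (parameters and thresholds are assumptions).
fs, aperture = 4000, 0.5
rng = np.random.default_rng(8)
t = np.arange(0, 0.25, 1 / fs)
vehicle = np.sin(2 * np.pi * 80 * t)                       # target-like tone on both channels
ch_a = vehicle + 0.2 * rng.standard_normal(t.size)
ch_b = np.roll(vehicle, 2) + 0.2 * rng.standard_normal(t.size)   # ~0.5 ms acoustic delay
if energy_detector(ch_a, threshold=0.1) and delay_detector(ch_a, ch_b, fs, aperture):
    print("intrusion detected")
```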
Wearable Sensor Localization Considering Mixed Distributed Sources in Health Monitoring Systems
Wan, Liangtian; Han, Guangjie; Wang, Hao; Shu, Lei; Feng, Nanxing; Peng, Bao
2016-01-01
In health monitoring systems, the base station (BS) and the wearable sensors communicate with each other to construct a virtual multiple input and multiple output (VMIMO) system. In real applications, the signal that the BS received is a distributed source because of the scattering, reflection, diffraction and refraction in the propagation path. In this paper, a 2D direction-of-arrival (DOA) estimation algorithm for incoherently-distributed (ID) and coherently-distributed (CD) sources is proposed based on multiple VMIMO systems. ID and CD sources are separated through the second-order blind identification (SOBI) algorithm. The traditional estimating signal parameters via the rotational invariance technique (ESPRIT)-based algorithm is valid only for one-dimensional (1D) DOA estimation for the ID source. By constructing the signal subspace, two rotational invariance relationships are constructed. Then, we extend the ESPRIT to estimate 2D DOAs for ID sources. For DOA estimation of CD sources, two rotational invariance relationships are constructed based on the application of generalized steering vectors (GSVs). Then, the ESPRIT-based algorithm is used for estimating the eigenvalues of two rotational invariance matrices, which contain the angular parameters. The expressions of azimuth and elevation for ID and CD sources have closed forms, which means that the spectrum peak searching is avoided. Therefore, compared to the traditional 2D DOA estimation algorithms, the proposed algorithm imposes significantly low computational complexity. The intersecting point of two rays, which come from two different directions measured by two uniform rectangle arrays (URA), can be regarded as the location of the biosensor (wearable sensor). Three BSs adopting the smart antenna (SA) technique cooperate with each other to locate the wearable sensors using the angulation positioning method. Simulation results demonstrate the effectiveness of the proposed algorithm. PMID:26985896
Bernhardt, Paul W.; Zhang, Daowen; Wang, Huixia Judy
2014-01-01
Joint modeling techniques have become a popular strategy for studying the association between a response and one or more longitudinal covariates. Motivated by the GenIMS study, where it is of interest to model the event of survival using censored longitudinal biomarkers, a joint model is proposed for describing the relationship between a binary outcome and multiple longitudinal covariates subject to detection limits. A fast, approximate EM algorithm is developed that reduces the dimension of integration in the E-step of the algorithm to one, regardless of the number of random effects in the joint model. Numerical studies demonstrate that the proposed approximate EM algorithm leads to satisfactory parameter and variance estimates in situations with and without censoring on the longitudinal covariates. The approximate EM algorithm is applied to analyze the GenIMS data set. PMID:25598564
Improvements on the seismic catalog previous to the 2011 El Hierro eruption.
NASA Astrophysics Data System (ADS)
Domínguez Cerdeña, Itahiza; del Fresno, Carmen
2017-04-01
Precursors to the submarine eruption of El Hierro (Canary Islands) in 2011 included 10,000 low-magnitude earthquakes and 5 cm of crustal deformation in the 81 days preceding the eruption onset on 10 October. Seismicity revealed a 20 km horizontal migration from the north to the south of the island, at depths ranging from 10 to 17 km, with deeper events occurring further south. The earthquakes of the seismic catalog were manually picked by the IGN almost in real time, but the catalog has not yet been revised to search for unlocated events, and its completeness magnitude varies strongly over the swarm owing to the variable number of events per day. In this work we used different techniques to improve the quality of the seismic catalog. First, we applied different automatic algorithms, including the STA/LTA method, to detect new events. Then, we used a semiautomatic procedure to correlate the new P and S detections with known phases from the original catalog. The newly detected earthquakes were located with the Hypoellipse algorithm. The resulting new catalog includes 15,000 new events, mainly concentrated in the last weeks of the swarm, and we ensure a completeness magnitude of 1.2 for the whole series. As the seismicity from the original catalog had already been relocated with the hypoDD algorithm, we improved the locations of the new events using a master-cluster relocation. This method relocates earthquakes towards a cluster of well-located events rather than towards a single event, as in the master-event method. In our case the cluster corresponds to the relocated earthquakes from the original catalog. Finally, we obtained a new equation for local magnitude estimation which allows us to include corrections for each seismic station in order to avoid local effects. The resulting magnitude catalog has a better fit with the moment magnitude catalog obtained for the strong earthquakes of this series in previous studies. Moreover, we also computed the spatial and temporal evolution of the b value from the Gutenberg-Richter relation of the improved catalog. The b value map and the evolution of the relocated seismicity suggest the presence of an expanding sill of magma in the north of the island at the beginning of the unrest. During the last month of the series, seismicity tracked the migration of magma towards the south, where the final vent of the submarine eruption was located.
NASA Astrophysics Data System (ADS)
Gica, E.
2016-12-01
The Short-term Inundation Forecasting for Tsunamis (SIFT) tool, developed by NOAA Center for Tsunami Research (NCTR) at the Pacific Marine Environmental Laboratory (PMEL), is used in forecast operations at the Tsunami Warning Centers in Alaska and Hawaii. The SIFT tool relies on a pre-computed tsunami propagation database, real-time DART buoy data, and an inversion algorithm to define the tsunami source. The tsunami propagation database is composed of 50×100km unit sources, simulated basin-wide for at least 24 hours. Different combinations of unit sources, DART buoys, and length of real-time DART buoy data can generate a wide range of results within the defined tsunami source. For an inexperienced SIFT user, the primary challenge is to determine which solution, among multiple solutions for a single tsunami event, would provide the best forecast in real time. This study investigates how the use of different tsunami sources affects simulated tsunamis at tide gauge locations. Using the tide gauge at Hilo, Hawaii, a total of 50 possible solutions for the 2011 Tohoku tsunami are considered. Maximum tsunami wave amplitude and root mean square error results are used to compare tide gauge data and the simulated tsunami time series. Results of this study will facilitate SIFT users' efforts to determine if the simulated tide gauge tsunami time series from a specific tsunami source solution would be within the range of possible solutions. This study will serve as the basis for investigating more historical tsunami events and tide gauge locations.
Higher moments of net-proton multiplicity distributions in a heavy-ion event pile-up scenario
NASA Astrophysics Data System (ADS)
Garg, P.; Mishra, D. K.
2017-10-01
High-luminosity modern accelerators, like the Relativistic Heavy Ion Collider (RHIC) at Brookhaven National Laboratory (BNL) and the Large Hadron Collider (LHC) at the European Organization for Nuclear Research (CERN), inherently have event pile-up scenarios which contribute significantly to physics events as a background. While state-of-the-art tracking algorithms and detector concepts take care of these event pile-up scenarios, several offline analytical techniques are used to remove such events from the physics analysis. It is still difficult to identify the remaining pile-up events in an event sample for physics analysis. Since the fraction of these events is small, it may not be as serious an issue for other analyses as it is for event-by-event analyses; particular care is needed, however, when the observables are characteristics of the multiplicity distribution itself. In the present work, we demonstrate how a small fraction of residual pile-up events can change the moments, and their ratios, of an event-by-event net-proton multiplicity distribution, which are sensitive to the dynamical fluctuations due to the QCD critical point. For this study, we assume that the individual event-by-event proton and antiproton multiplicity distributions follow Poisson, negative binomial, or binomial distributions. We observe a significant effect on the cumulants and their ratios of the net-proton multiplicity distribution due to pile-up events, particularly at lower energies. It might be crucial to estimate the fraction of pile-up events in the data sample while interpreting the experimental observables for the critical point.
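A small numerical sketch of the effect described above: net-proton cumulant ratios computed for a clean Poisson sample and for a sample contaminated with a small fraction of pile-up events (two collisions merged into one). The Poisson means and the pile-up fraction are illustrative assumptions, not the values used in the paper.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
n_events, pileup_fraction = 1_000_000, 0.005   # assumed values
mu_p, mu_pbar = 20.0, 5.0                      # assumed mean (anti)proton yields

def net_proton_sample(n, pileup=0.0):
    """Event-by-event net-proton number; a fraction of events are pile-up,
    i.e. the sum of two independent collisions recorded as one event."""
    p = rng.poisson(mu_p, n)
    pbar = rng.poisson(mu_pbar, n)
    n_pile = int(pileup * n)
    if n_pile:
        p[:n_pile] += rng.poisson(mu_p, n_pile)
        pbar[:n_pile] += rng.poisson(mu_pbar, n_pile)
    return p - pbar

for label, frac in [("clean", 0.0), ("with pile-up", pileup_fraction)]:
    x = net_proton_sample(n_events, frac)
    c1, c2 = x.mean(), x.var()
    c3 = stats.moment(x, 3)                 # 3rd central moment = 3rd cumulant
    c4 = stats.moment(x, 4) - 3 * c2 ** 2   # 4th cumulant
    print(f"{label:12s}  C2/C1={c2/c1:.3f}  C3/C2={c3/c2:.3f}  C4/C2={c4/c2:.3f}")
```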
NASA Astrophysics Data System (ADS)
Meier, M.; Cua, G. B.; Wiemer, S.; Fischer, M.
2011-12-01
The Virtual Seismologist (VS) method is a Bayesian approach to regional network-based earthquake early warning (EEW) that uses observed phase arrivals, ground motion amplitudes and selected prior information to estimate earthquake magnitude, location and origin time, and to predict the distribution of peak ground motion throughout a region using envelope attenuation relationships. Implementation of the VS algorithm in California is an on-going effort of the Swiss Seismological Service (SED) at ETH Zürich. VS is one of three EEW algorithms - the other two being ElarmS (Allen and Kanamori, 2003) and On-Site (Wu and Kanamori, 2005; Boese et al., 2008) - that form the basis of the California Integrated Seismic Network ShakeAlert system, a prototype end-to-end EEW system that could potentially be implemented in California. The current prototype version of VS in California requires picks at 4 stations to initiate an event declaration. On average, taking into account data latency, variable station distribution, and processing time, this initial estimate is available about 20 seconds after the earthquake origin time, corresponding to a blind zone of about 70 km around the epicenter that would receive no warning, even though warning would be most useful there. To increase the available warning time, we want to produce EEW estimates faster (with fewer than 4 stations). However, working with fewer than 4 stations in our current approach would increase the number of false alerts, for which there is very little tolerance in a useful EEW system. We explore the use of back-azimuth estimations and the Voronoi-based concept of not-yet-arrived data for reducing false alerts of the earliest VS estimates. The concept of not-yet-arrived data was originally used to provide evolutionary location estimates in EEW (Horiuchi, 2005; Cua and Heaton, 2007; Satriano et al. 2008). However, it can also be applied in discriminating between earthquake and non-earthquake signals. For real earthquakes, the constraints on earthquake location from the not-yet-arrived data and the back-azimuth estimations are consistent with location constraints from the available picks. For non-earthquake signals, these different location constraints are in most cases inconsistent. We use archived event data from the Northern and Southern California Seismic Networks as well as archived continuous waveform data from periods when the current VS codes erroneously declared events to quantify how using a combination of pick-based and not-yet-arrived data constraints can reduce VS false alert rates while providing faster warning information. The consistency of the pick-based and not-yet-arrived data constraints is mapped into the VS likelihood parameter, which reflects the degree of belief that the signals come from a real earthquake. This approach contributes towards improving the robustness of the Virtual Seismologist Multiple Threshold Event Detection (VS-MTED), which allows for single-station event declarations when signal amplitudes are large enough.
Coastal Zone Color Scanner atmospheric correction algorithm - Multiple scattering effects
NASA Technical Reports Server (NTRS)
Gordon, Howard R.; Castano, Diego J.
1987-01-01
Errors due to multiple scattering which are expected to be encountered in application of the current Coastal Zone Color Scanner (CZCS) atmospheric correction algorithm are analyzed. The analysis is based on radiative transfer computations in model atmospheres, in which the aerosols and molecules are distributed vertically in an exponential manner, with most of the aerosol scattering located below the molecular scattering. A unique feature of the analysis is that it is carried out in scan coordinates rather than typical earth-sun coordinates, making it possible to determine the errors along typical CZCS scan lines. Information provided by the analysis makes it possible to judge the efficacy of the current algorithm with the current sensor and to estimate the impact of the algorithm-induced errors on a variety of applications.
Merging NLO multi-jet calculations with improved unitarization
NASA Astrophysics Data System (ADS)
Bellm, Johannes; Gieseke, Stefan; Plätzer, Simon
2018-03-01
We present an algorithm to combine multiple matrix elements at LO and NLO with a parton shower. We build on the unitarized merging paradigm. The inclusion of higher orders and multiplicities reduces the scale uncertainties for observables sensitive to hard emissions, while preserving the features of inclusive quantities. The combination allows further soft and collinear emissions to be predicted by the all-order parton-shower approximation. We inspect the impact of terms that are formally but not parametrically negligible. We present results for a number of collider observables where multiple jets are observed, either on their own or in the presence of additional uncoloured particles. The algorithm is implemented in the event generator Herwig.
Heist, E Kevin; Herre, John M; Binkley, Philip F; Van Bakel, Adrian B; Porterfield, James G; Porterfield, Linda M; Qu, Fujian; Turkel, Melanie; Pavri, Behzad B
2014-10-15
Detect Fluid Early from Intrathoracic Impedance Monitoring (DEFEAT-PE) is a prospective, multicenter study of multiple intrathoracic impedance vectors to detect pulmonary congestion (PC) events. Changes in intrathoracic impedance between the right ventricular (RV) coil and device can (RVcoil→Can) of implantable cardioverter-defibrillators (ICDs) and cardiac resynchronization therapy ICDs (CRT-Ds) are used clinically for the detection of PC events, but other impedance vectors and algorithms have not been studied prospectively. An initial 75-patient study was used to derive optimal impedance vectors to detect PC events, with 2 vector combinations selected for prospective analysis in DEFEAT-PE (ICD vectors: RVring→Can + RVcoil→Can, detection threshold 13 days; CRT-D vectors: left ventricular ring→Can + RVcoil→Can, detection threshold 14 days). Impedance changes were considered true positive if detected <30 days before an adjudicated PC event. One hundred sixty-two patients were enrolled (80 with ICDs and 82 with CRT-Ds), all with ≥1 previous PC event. One hundred forty-four patients provided study data, with 214 patient-years of follow-up and 139 PC events. Sensitivity for PC events of the prespecified algorithms was as follows: ICD: sensitivity 32.3%, false-positive rate 1.28 per patient-year; CRT-D: sensitivity 32.4%, false-positive rate 1.66 per patient-year. An alternative algorithm, ultimately approved by the US Food and Drug Administration (RVring→Can + RVcoil→Can, detection threshold 14 days), resulted in (for all patients) sensitivity of 21.6% and a false-positive rate of 0.9 per patient-year. The CRT-D thoracic impedance vector algorithm selected in the derivation study was not superior to the ICD algorithm RVring→Can + RVcoil→Can when studied prospectively. In conclusion, to achieve an acceptably low false-positive rate, the intrathoracic impedance algorithms studied in DEFEAT-PE resulted in low sensitivity for the prediction of heart failure events. Copyright © 2014 Elsevier Inc. All rights reserved.
Long-term changes of the glacial seismicity: case study from Spitsbergen
NASA Astrophysics Data System (ADS)
Gajek, Wojciech; Trojanowski, Jacek; Malinowski, Michał
2016-04-01
Changes in the global temperature balance have been shown to have a major impact on the cryosphere, and retreating glaciers have become a symbol of the warming climate. Our study focuses on year-to-year changes in glacier-generated seismicity. We processed 7 years of continuous seismological data recorded by the HSP broadband station located near the Hansbreen glacier (Hornsund, southern Spitsbergen), obtaining the distribution of seismic activity between 2008 and 2014. We developed a new fuzzy-logic algorithm to distinguish between glacier-origin and non-glacier-origin events. The algorithm takes into account the frequency content of the seismic signal and the energy flow in a given time interval. Our analysis reveals that the number of detected glacier-origin events has doubled over the last two years. The annual event distribution correlates well with the temperature and precipitation curves, illustrating the characteristic year-long behaviour of glacier seismic activity. To further support our observations, we analysed a 5-year distribution of glacier-origin tremors detected in the vicinity of the Kronebreen glacier using the KBS broadband station located in Ny-Ålesund (western Spitsbergen). We observe a steady increase in the number of events detected each year, although not as pronounced as for the Hornsund dataset.
Jun, James Jaeyoon; Longtin, André; Maler, Leonard
2013-01-01
In order to survive, animals must quickly and accurately locate prey, predators, and conspecifics using the signals they generate. The signal source location can be estimated using multiple detectors and the inverse relationship between the received signal intensity (RSI) and the distance, but difficulty of the source localization increases if there is an additional dependence on the orientation of a signal source. In such cases, the signal source could be approximated as an ideal dipole for simplification. Based on a theoretical model, the RSI can be directly predicted from a known dipole location; but estimating a dipole location from RSIs has no direct analytical solution. Here, we propose an efficient solution to the dipole localization problem by using a lookup table (LUT) to store RSIs predicted by our theoretically derived dipole model at many possible dipole positions and orientations. For a given set of RSIs measured at multiple detectors, our algorithm found a dipole location having the closest matching normalized RSIs from the LUT, and further refined the location at higher resolution. Studying the natural behavior of weakly electric fish (WEF) requires efficiently computing their location and the temporal pattern of their electric signals over extended periods. Our dipole localization method was successfully applied to track single or multiple freely swimming WEF in shallow water in real-time, as each fish could be closely approximated by an ideal current dipole in two dimensions. Our optimized search algorithm found the animal’s positions, orientations, and tail-bending angles quickly and accurately under various conditions, without the need for calibrating individual-specific parameters. Our dipole localization method is directly applicable to studying the role of active sensing during spatial navigation, or social interactions between multiple WEF. Furthermore, our method could be extended to other application areas involving dipole source localization. PMID:23805244
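A compact sketch of the lookup-table idea in two dimensions: normalised detector intensities are tabulated over a coarse grid of candidate dipole positions and orientations, and the entry whose pattern best matches the measured RSIs is returned. The detector layout and the simplified 2-D dipole field model are assumptions, not the authors' exact formulation, and the higher-resolution refinement step is omitted.

```python
import numpy as np

detectors = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])  # assumed layout

def dipole_rsi(pos, angle):
    """Simplified 2-D dipole model: signal at each detector ~ p.d / |d|^2."""
    p = np.array([np.cos(angle), np.sin(angle)])
    d = detectors - pos
    return d @ p / np.sum(d ** 2, axis=1)

def build_lut(nx=40, ny=40, nang=36):
    """Tabulate normalised RSI patterns over a coarse position/orientation grid."""
    entries, patterns = [], []
    for x in np.linspace(0.1, 0.9, nx):
        for y in np.linspace(0.1, 0.9, ny):
            for a in np.linspace(0, np.pi, nang, endpoint=False):
                rsi = dipole_rsi(np.array([x, y]), a)
                patterns.append(rsi / np.linalg.norm(rsi))
                entries.append((x, y, a))
    return np.array(entries), np.array(patterns)

entries, patterns = build_lut()

def locate(measured_rsi):
    """Return the LUT entry whose normalised pattern is closest to the measurement."""
    m = measured_rsi / np.linalg.norm(measured_rsi)
    best = np.argmin(np.linalg.norm(patterns - m, axis=1))
    return entries[best]

true_pos, true_ang = np.array([0.37, 0.62]), 1.0
print(locate(dipole_rsi(true_pos, true_ang)))   # should be near (0.37, 0.62, 1.0)
```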
Statistical reconstruction for cosmic ray muon tomography.
Schultz, Larry J; Blanpied, Gary S; Borozdin, Konstantin N; Fraser, Andrew M; Hengartner, Nicolas W; Klimenko, Alexei V; Morris, Christopher L; Orum, Chris; Sossong, Michael J
2007-08-01
Highly penetrating cosmic ray muons constantly shower the earth at a rate of about 1 muon per cm2 per minute. We have developed a technique which exploits the multiple Coulomb scattering of these particles to perform nondestructive inspection without the use of artificial radiation. In prior work [1]-[3], we have described heuristic methods for processing muon data to create reconstructed images. In this paper, we present a maximum likelihood/expectation maximization tomographic reconstruction algorithm designed for the technique. This algorithm borrows much from techniques used in medical imaging, particularly emission tomography, but the statistics of muon scattering dictates differences. We describe the statistical model for multiple scattering, derive the reconstruction algorithm, and present simulated examples. We also propose methods to improve the robustness of the algorithm to experimental errors and events departing from the statistical model.
Minimizing Security Forces Response Times Through the Use of Facility Location Methodologies
2005-03-01
etc. In his book Network and Discrete Location: Models, Algorithms, and Applications, author Mark S. Daskin provides a comprehensive introduction to ... the art and science of locating facilities (Daskin, 1995). The book espouses to be a hands-on guide to using and developing facility location models ... method used in this research is from Daskin (1995). The formulation presented by Daskin includes a multiplicative weighting factor for the amount
NASA Technical Reports Server (NTRS)
Mehr, Ali Farhang; Sauvageon, Julien; Agogino, Alice M.; Tumer, Irem Y.
2006-01-01
Recent advances in micro electromechanical systems technology, digital electronics, and wireless communications have enabled development of low-cost, low-power, multifunctional miniature smart sensors. These sensors can be deployed throughout a region in an aerospace vehicle to build a network for measurement, detection and surveillance applications. Event detection using such centralized sensor networks is often regarded as one of the most promising health management technologies in aerospace applications where timely detection of local anomalies has a great impact on the safety of the mission. In this paper, we propose to conduct a qualitative comparison of several local event detection algorithms for centralized redundant sensor networks. The algorithms are compared with respect to their ability to locate and evaluate an event in the presence of noise and sensor failures for various node geometries and densities.
hypoDD-A Program to Compute Double-Difference Hypocenter Locations
Waldhauser, Felix
2001-01-01
HypoDD is a Fortran computer program package for relocating earthquakes with the double-difference algorithm of Waldhauser and Ellsworth (2000). This document provides a brief introduction into how to run and use the programs ph2dt and hypoDD to compute double-difference (DD) hypocenter locations. It gives a short overview of the DD technique, discusses the data preprocessing using ph2dt, and leads through the earthquake relocation process using hypoDD. The appendices include the reference manuals for the two programs and a short description of auxiliary programs and example data. Some minor subroutines are presently in the c language, and future releases will be in c. Earthquake location algorithms are usually based on some form of Geiger's method, the linearization of the travel time equation in a first order Taylor series that relates the difference between the observed and predicted travel time to unknown adjustments in the hypocentral coordinates through the partial derivatives of travel time with respect to the unknowns. Earthquakes can be located individually with this algorithm, or jointly when other unknowns link together the solutions to individual earthquakes, such as station corrections in the joint hypocenter determination (JHD) method, or the earth model in seismic tomography. The DD technique (described in detail in Waldhauser and Ellsworth, 2000) takes advantage of the fact that if the hypocentral separation between two earthquakes is small compared to the event-station distance and the scale length of velocity heterogeneity, then the ray paths between the source region and a common station are similar along almost the entire ray path (Fréchet, 1985; Got et al., 1994). In this case, the difference in travel times for two events observed at one station can be attributed to the spatial offset between the events with high accuracy. DD equations are built by differencing Geiger's equation for earthquake location. In this way, the residual between the observed and calculated travel-time difference (or double-difference) for two events at a common station is related to adjustments in the relative position of the hypocenters and origin times through the partial derivatives of the travel times for each event with respect to the unknowns. HypoDD calculates travel times in a layered velocity model (where velocity depends only on depth) for the current hypocenters at the station where the phase was recorded. The double-difference residuals for pairs of earthquakes at each station are minimized by weighted least squares using the method of singular value decomposition (SVD) or the conjugate gradients method (LSQR, Paige and Saunders, 1982). Solutions are found by iteratively adjusting the vector difference between nearby hypocentral pairs, with the locations and partial derivatives being updated after each iteration. Details about the algorithm can be found in Waldhauser and Ellsworth (2000). When the earthquake location problem is linearized using the double-difference equations, the common mode errors cancel, principally those related to the receiver-side structure. Thus we avoid the need for station corrections or high accuracy of predicted travel times for the portion of the raypath that lies outside the focal volume. This approach is especially useful in regions with a dense distribution of seismicity, i.e. where distances between neighboring events are only a few hundred meters.
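For reference, the double-difference residual described above can be written in its standard form (Waldhauser and Ellsworth, 2000), where $t_k^i$ is the travel time of event $i$ at station $k$ and $\Delta\boldsymbol{m}^i=(\Delta x^i,\Delta y^i,\Delta z^i,\Delta\tau^i)$ is the perturbation to its hypocentral coordinates and origin time:

$$ dr_k^{ij} = \left(t_k^i - t_k^j\right)^{\mathrm{obs}} - \left(t_k^i - t_k^j\right)^{\mathrm{cal}} = \frac{\partial t_k^i}{\partial \boldsymbol{m}^i}\,\Delta\boldsymbol{m}^i - \frac{\partial t_k^j}{\partial \boldsymbol{m}^j}\,\Delta\boldsymbol{m}^j . $$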
The improvement of double-difference locations over ordinary JHD locations is shown in Figure 1 for about 10,000 earthquakes that occurred during the 1997 seismic crisis in the Long Valley caldera, California. While the JHD locations (left panel) show a diffuse picture of the seismicity, double-difference locations (right panel) bring structural details such as the location of active fault planes into sharp focus.
Connecting Shock Parameters to the Radiation Hazard from Energetic Particles
NASA Technical Reports Server (NTRS)
Berdichevsky, Daniel B.; Reames, Donald V.; Lepping, Ronald P.; Schwenn, Rainer W.
2004-01-01
We use data from Helios, IMP-8, and other spacecraft (e.g. ISEE) to systematically investigate solar energetic particle (SEP) events from different longitudes and distances in the heliosphere. The purpose of the project is to assess empirically the connection between the morphology and strength of the travelling shock and the observed enhancements in the flux of energized particles in shock-accelerated particle (SEP) events (also often identified as "gradual" solar energetic particle events). Activities during this first year centered on the organization of the SEP events and their correlation with solar wind observations at multiple spacecraft locations. From an identified list of more than 30 SEP events at multiple spacecraft locations, four cases have been selected for detailed study and are in an advanced phase of preparation for publication. Preliminary results for these four cases were presented at the AGU Spring and Fall 2003 meetings, and at other meetings on SEPs.
GPS-Free Localization Algorithm for Wireless Sensor Networks
Wang, Lei; Xu, Qingzheng
2010-01-01
Localization is one of the most fundamental problems in wireless sensor networks, since the locations of the sensor nodes are critical to both network operations and most application level tasks. A GPS-free localization scheme for wireless sensor networks is presented in this paper. First, we develop a standardized clustering-based approach for the local coordinate system formation wherein a multiplication factor is introduced to regulate the number of master and slave nodes and the degree of connectivity among master nodes. Second, using homogeneous coordinates, we derive a transformation matrix between two Cartesian coordinate systems to efficiently merge them into a global coordinate system and effectively overcome the flip ambiguity problem. The algorithm operates asynchronously without a centralized controller and does not require that the locations of the sensors be known a priori. A set of parameter-setting guidelines for the proposed algorithm is derived based on a probability model, and the energy requirements are also investigated. A simulation analysis on a specific numerical example is conducted to validate the mathematical analytical results. We also compare the performance of the proposed algorithm under a variety of multiplication factor, node density and node communication radius scenarios. Experiments show that our algorithm outperforms existing mechanisms in terms of accuracy and convergence time. PMID:22219694
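A small sketch of the coordinate-merging step under simplifying assumptions: given a few anchor nodes whose positions are known in both a local and a global 2-D Cartesian frame, a 3×3 homogeneous transform (rotation or reflection plus translation) is estimated with a standard Procrustes-style fit and used to map local coordinates into the global frame; allowing a reflection is one way to resolve the flip ambiguity mentioned above. This is not the paper's derivation, only an illustration of the homogeneous-coordinate idea.

```python
import numpy as np

def estimate_transform(local_pts, global_pts):
    """Estimate a 3x3 homogeneous transform (rotation/reflection + translation)
    mapping 2-D local coordinates onto global coordinates, from matched anchor
    nodes known in both frames (least-squares / Procrustes fit)."""
    lc, gc = local_pts.mean(axis=0), global_pts.mean(axis=0)
    H = (local_pts - lc).T @ (global_pts - gc)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T            # may include a reflection, which resolves flips
    t = gc - R @ lc
    T = np.eye(3)
    T[:2, :2], T[:2, 2] = R, t
    return T

def apply_transform(T, pts):
    """Map Nx2 points through the homogeneous transform."""
    homog = np.hstack([pts, np.ones((len(pts), 1))])
    return (T @ homog.T).T[:, :2]

# toy usage: a local frame rotated by 30 degrees and shifted
rng = np.random.default_rng(3)
global_anchor = rng.uniform(0, 10, (4, 2))
theta = np.deg2rad(30)
R_true = np.array([[np.cos(theta), -np.sin(theta)], [np.sin(theta), np.cos(theta)]])
local_anchor = (global_anchor - [5, 2]) @ R_true    # local view of the anchors
T = estimate_transform(local_anchor, global_anchor)
print(np.allclose(apply_transform(T, local_anchor), global_anchor, atol=1e-6))
```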
Propagation of the velocity model uncertainties to the seismic event location
NASA Astrophysics Data System (ADS)
Gesret, A.; Desassis, N.; Noble, M.; Romary, T.; Maisons, C.
2015-01-01
Earthquake hypocentre locations are crucial in many domains of application (academic and industrial), as seismic event location maps are commonly used to delineate faults or fractures. The interpretation of these maps depends on location accuracy and on the reliability of the associated uncertainties. The largest contribution to location and uncertainty errors is due to the fact that velocity model errors are usually not correctly taken into account. We propose a new Bayesian formulation that properly integrates knowledge of the velocity model into the probabilistic earthquake location. In this work, the velocity model uncertainties are first estimated with a Bayesian tomography of active shot data. We implement a Monte Carlo sampling algorithm to generate velocity models distributed according to the posterior distribution. In a second step, we propagate the velocity model uncertainties to the seismic event location in a probabilistic framework. This enables us to obtain more reliable hypocentre locations, as well as associated uncertainties that account for both picking and velocity model uncertainties. We illustrate the tomography results and the gain in accuracy of earthquake location for two synthetic examples and one real data case study in the context of induced microseismicity.
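A toy illustration of the propagation idea under strong simplifications (homogeneous velocity, 2-D geometry, known origin time): velocity models are drawn from an assumed posterior, the event is relocated for each draw by grid search, and the scatter of the resulting hypocentres reflects the combined picking and velocity uncertainty. Station positions, noise level, and the Gaussian velocity posterior are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)
stations = np.array([[0, 0], [10, 0], [0, 10], [10, 10], [5, 12]])  # km, assumed
true_src, true_v = np.array([4.0, 6.0]), 3.5                        # km, km/s

# synthetic picks with 0.02 s measurement noise (origin time assumed known)
t_obs = np.linalg.norm(stations - true_src, axis=1) / true_v + 0.02 * rng.standard_normal(5)

def locate(v):
    """Grid-search hypocentre for one velocity model (least-squares misfit)."""
    xs = ys = np.linspace(0, 12, 121)
    X, Y = np.meshgrid(xs, ys)
    d = np.sqrt((X[..., None] - stations[:, 0]) ** 2 + (Y[..., None] - stations[:, 1]) ** 2)
    misfit = np.sum((d / v - t_obs) ** 2, axis=-1)
    iy, ix = np.unravel_index(np.argmin(misfit), misfit.shape)
    return xs[ix], ys[iy]

# velocity "posterior" approximated as a Gaussian around 3.5 km/s (assumption)
locations = np.array([locate(v) for v in rng.normal(3.5, 0.15, 200)])
print("mean location:", locations.mean(axis=0))
print("1-sigma spread (km):", locations.std(axis=0))
```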
Location-based prospective memory.
O'Rear, Andrea E; Radvansky, Gabriel A
2018-02-01
This study explores location-based prospective memory. People often have to remember to do things when in a particular location, such as buying tissues the next time they are in the supermarket. For event cognition theory, location is important for structuring events. However, because event cognition has not been used to examine prospective memory, the question remains of how multiple events will influence prospective memory performance. In our experiments, people delivered messages from store to store in a virtual shopping mall as an ongoing task. The prospective tasks were to do certain activities in certain stores. For Experiment 1, each trial involved one prospective memory task to be done in a single location at one of three delays. The virtual environment and location cues were effective for prospective memory, and performance was unaffected by delay. For Experiment 2, each trial involved two prospective memory tasks, given in either one or two instruction locations, and to be done in either one or two store locations. There was improved performance when people received instructions from two locations and did both tasks in one location relative to other combinations. This demonstrates that location-based event structure influences how well people perform on prospective memory tasks.
Multi-Robot, Multi-Target Particle Swarm Optimization Search in Noisy Wireless Environments
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kurt Derr; Milos Manic
Multiple small robots (swarms) can work together using Particle Swarm Optimization (PSO) to perform tasks that are difficult or impossible for a single robot to accomplish. The problem considered in this paper is exploration of an unknown environment with the goal of finding a target(s) at an unknown location(s) using multiple small mobile robots. This work demonstrates the use of a distributed PSO algorithm with a novel adaptive RSS weighting factor to guide robots for locating target(s) in high risk environments. The approach was developed and analyzed on multiple-robot single- and multiple-target search, and was further enhanced for multi-robot, multi-target search in noisy environments. The experimental results demonstrated how the availability of radio frequency signal can significantly affect robot search time to reach a target.
Nandola, Naresh N.; Rivera, Daniel E.
2011-01-01
This paper presents a data-centric modeling and predictive control approach for nonlinear hybrid systems. System identification of hybrid systems represents a challenging problem because model parameters depend on the mode or operating point of the system. The proposed algorithm applies Model-on-Demand (MoD) estimation to generate a local linear approximation of the nonlinear hybrid system at each time step, using a small subset of data selected by an adaptive bandwidth selector. The appeal of the MoD approach lies in the fact that model parameters are estimated based on a current operating point; hence estimation of locations or modes governed by autonomous discrete events is achieved automatically. The local MoD model is then converted into a mixed logical dynamical (MLD) system representation which can be used directly in a model predictive control (MPC) law for hybrid systems using multiple-degree-of-freedom tuning. The effectiveness of the proposed MoD predictive control algorithm for nonlinear hybrid systems is demonstrated on a hypothetical adaptive behavioral intervention problem inspired by Fast Track, a real-life preventive intervention for improving parental function and reducing conduct disorder in at-risk children. Simulation results demonstrate that the proposed algorithm can be useful for adaptive intervention problems exhibiting both nonlinear and hybrid character. PMID:21874087
NASA Astrophysics Data System (ADS)
Zhang, Zheng
2017-10-01
Radio direction finding systems are based on digital signal processing algorithms, which make the system capable of locating and tracking signals. The performance of radio direction finding therefore depends significantly on the effectiveness of these digital signal processing algorithms. Direction of Arrival (DOA) algorithms are used to estimate the number of plane waves incident on the antenna array and their angles of incidence. This manuscript investigates the implementation of the MUSIC DOA algorithm on a uniform linear array in the presence of white noise. The experimental results show that the MUSIC algorithm resolves the radio directions well.
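A minimal MUSIC pseudospectrum for a uniform linear array in white noise, for readers who want to reproduce the basic experiment; the array size, half-wavelength spacing, snapshot count, and source directions are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(5)
M, N, d = 8, 200, 0.5                     # sensors, snapshots, spacing (wavelengths)
true_doas = np.deg2rad([-20.0, 35.0])     # assumed source directions

def steering(theta):
    """ULA steering vectors, one column per angle."""
    return np.exp(-2j * np.pi * d * np.arange(M)[:, None] * np.sin(np.atleast_1d(theta)))

# simulate two uncorrelated narrowband sources plus white noise
S = (rng.standard_normal((2, N)) + 1j * rng.standard_normal((2, N))) / np.sqrt(2)
noise = 0.1 * (rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N)))
X = steering(true_doas) @ S + noise

# MUSIC: noise subspace of the sample covariance, then scan the pseudospectrum
R = X @ X.conj().T / N
_, eigvec = np.linalg.eigh(R)             # eigenvalues in ascending order
En = eigvec[:, : M - 2]                   # noise subspace (two sources assumed known)
scan = np.deg2rad(np.linspace(-90, 90, 1801))
P = 1.0 / np.sum(np.abs(En.conj().T @ steering(scan)) ** 2, axis=0)

# pick the two strongest local maxima of the pseudospectrum
peaks = np.flatnonzero((P[1:-1] > P[:-2]) & (P[1:-1] > P[2:])) + 1
top2 = peaks[np.argsort(P[peaks])[-2:]]
print("estimated DOAs (deg):", np.sort(np.rad2deg(scan[top2])))
```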
Fusing Image Data for Calculating Position of an Object
NASA Technical Reports Server (NTRS)
Huntsberger, Terrance; Cheng, Yang; Liebersbach, Robert; Trebi-Ollenu, Ashitey
2007-01-01
A computer program has been written for use in maintaining the calibration, with respect to the positions of imaged objects, of a stereoscopic pair of cameras on each of the Mars Explorer Rovers Spirit and Opportunity. The program identifies and locates a known object in the images. The object in question is part of a Moessbauer spectrometer located at the tip of a robot arm, the kinematics of which are known. In the program, the images are processed through a module that extracts edges, combines the edges into line segments, and then derives ellipse centroids from the line segments. The images are also processed by a feature-extraction algorithm that performs a wavelet analysis, then performs a pattern-recognition operation in the wavelet-coefficient space to determine matches to a texture feature measure derived from the horizontal, vertical, and diagonal coefficients. The centroids from the ellipse finder and the wavelet feature matcher are then fused to determine co-location. In the event that a match is found, the centroid (or centroids if multiple matches are present) is reported. If no match is found, the process reports the results of the analyses for further examination by human experts.
A Study on Double Event Detection for PHENIX at RHIC
NASA Astrophysics Data System (ADS)
Vazquez-Carson, Sebastian; Phenix Collaboration
2016-09-01
Many measurements made in Heavy Ion experiments such as PHENIX at RHIC focus on geometrical properties because phenomena such as collective flow give insight into quark-gluon plasma and the strong nuclear force. As part of this investigation, PHENIX has taken data in 2016 for deuteron on gold collisions at several energies. An acceptable luminosity is achieved by injecting up to 120 separate bunches each with billions of ions into the storage ring, from which two, separate beams are made to collide. This method has a drawback as there is a chance for multiple pairs of nuclei to collide in a single bunch crossing. Data taken in a double event cannot be separated into two independent events and has no clear interpretation. This effect's magnitude is estimated and incorporated in published results as a systematic uncertainty and studies on this topic have already been conducted within PHENIX. I develop several additional algorithms to flag multiple interaction events by examining the time dependence of data from the two Beam-Beam Counters - detectors surrounding the beam pipe on opposite ends of the interaction region. The algorithms are tested with data, in which events with double interactions are artificially produced using low luminosity data. I am working at the University of Colorado at Boulder on behalf of the PHENIX collaboration.
Lonini, Luca; Reissman, Timothy; Ochoa, Jose M; Mummidisetty, Chaithanya K; Kording, Konrad; Jayaraman, Arun
2017-10-01
The objective of rehabilitation after spinal cord injury is to enable successful function in everyday life and independence at home. Clinical tests can assess whether patients are able to execute functional movements but are limited in assessing such information at home. A prototype system is developed that detects stand-to-reach activities, a movement with important functional implications, at multiple locations within a mock kitchen. Ten individuals with incomplete spinal cord injuries performed a sequence of standing and reaching tasks. The system monitored their movements by combining two sources of information: a triaxial accelerometer, placed on the subject's thigh, detected sitting or standing, and a network of radio frequency tags, wirelessly connected to a wrist-worn device, detected reaching at three locations. A threshold-based algorithm detected execution of the combined tasks and accuracy was measured by the number of correctly identified events. The system was shown to have an average accuracy of 98% for inferring when individuals performed stand-to-reach activities at each tag location within the same room. The combination of accelerometry and tags yielded accurate assessments of functional stand-to-reach activities within a home environment. Optimization of this technology could simplify patient compliance and allow clinicians to assess functional home activities.
Chen, Ming-Hui; Zeng, Donglin; Hu, Kuolung; Jia, Catherine
2014-01-01
Summary In many biomedical studies, patients may experience the same type of recurrent event repeatedly over time, such as bleeding, multiple infections and disease. In this article, we propose a Bayesian design to a pivotal clinical trial in which lower risk myelodysplastic syndromes (MDS) patients are treated with MDS disease modifying therapies. One of the key study objectives is to demonstrate the investigational product (treatment) effect on reduction of platelet transfusion and bleeding events while receiving MDS therapies. In this context, we propose a new Bayesian approach for the design of superiority clinical trials using recurrent events frailty regression models. Historical recurrent events data from an already completed phase 2 trial are incorporated into the Bayesian design via the partial borrowing power prior of Ibrahim et al. (2012, Biometrics 68, 578–586). An efficient Gibbs sampling algorithm, a predictive data generation algorithm, and a simulation-based algorithm are developed for sampling from the fitting posterior distribution, generating the predictive recurrent events data, and computing various design quantities such as the type I error rate and power, respectively. An extensive simulation study is conducted to compare the proposed method to the existing frequentist methods and to investigate various operating characteristics of the proposed design. PMID:25041037
Greedy Sparse Approaches for Homological Coverage in Location Unaware Sensor Networks
2017-12-08
ARL-TR-8235, US Army Research Laboratory, December 2017; by Terrence J Moore. Cited work includes: Farah C, Schwaner F, Abedi A, Worboys M. Distributed homology algorithm to detect topological events. GlobalSIP; 2013 Dec; Austin, TX. p. 595-598.
NASA Astrophysics Data System (ADS)
Rahman, Md. Habibur; Matin, M. A.; Salma, Umma
2017-12-01
The precipitation patterns of seventeen locations in Bangladesh from 1961 to 2014 were studied using cluster analysis and metric multidimensional scaling. In doing so, the current research applies four major hierarchical clustering methods to precipitation in conjunction with different dissimilarity measures and metric multidimensional scaling. A variety of clustering algorithms were used to provide multiple clustering dendrograms for a mixture of distance measures. The dendrogram of pre-monsoon rainfall for the seventeen locations formed five clusters. The pre-monsoon precipitation data for the areas of Srimangal and Sylhet were located in two clusters across the combination of five dissimilarity measures and four hierarchical clustering algorithms. The single linkage algorithm with Euclidean and Manhattan distances, the average linkage algorithm with the Minkowski distance, and Ward's linkage algorithm provided similar results with regard to monsoon precipitation. The results for the post-monsoon and winter precipitation data are shown in different types of dendrograms with disparate combinations of sub-clusters. The schematic geometrical representations of the precipitation data using metric multidimensional scaling showed that the post-monsoon rainfall of Cox's Bazar was located far from those of the other locations. The results of a box-and-whisker plot, different clustering techniques, and metric multidimensional scaling indicated that the precipitation behaviour of Srimangal and Sylhet during the pre-monsoon season, Cox's Bazar and Sylhet during the monsoon season, Maijdi Court and Cox's Bazar during the post-monsoon season, and Cox's Bazar and Khulna during the winter differed from that at other locations in Bangladesh.
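A short sketch, in Python rather than the statistical environment the authors may have used, of producing the same kind of linkage/distance combinations on a station-by-year rainfall matrix; the synthetic data and the five-cluster cut are assumptions.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist

rng = np.random.default_rng(6)
# assumed data: 17 stations x 54 years of pre-monsoon rainfall totals
rain = rng.gamma(shape=2.0, scale=50.0, size=(17, 54))

combos = [("single", "euclidean"), ("single", "cityblock"),
          ("average", "minkowski"), ("ward", "euclidean")]

for method, metric in combos:
    D = pdist(rain, metric=metric)                    # station-to-station dissimilarities
    Z = linkage(D, method=method)                     # hierarchical clustering tree
    labels = fcluster(Z, t=5, criterion="maxclust")   # cut the dendrogram into 5 clusters
    print(f"{method:8s} + {metric:10s} -> cluster sizes:", np.bincount(labels)[1:])
```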
NASA Astrophysics Data System (ADS)
Taghavi, F.; Owlad, E.; Ackerman, S. A.
2017-03-01
South-west Asia, including the Middle East, is one of the regions most prone to dust storm events, and in recent years the occurrence of these environmental and meteorological phenomena has increased. Remote sensing is a practical method for detecting and also characterising these events. In this study, two dust enhancement algorithms were used to investigate the behaviour of dust events using satellite data, compare with numerical model output and other satellite products, and finally validate against in-situ measurements. The results show that the thermal infrared algorithm enhances dust more accurately. The aerosol optical depth from MODIS and the output of the Dust Regional Atmospheric Model (DREAM8b) are used for comparing the results. Ground-based observations from synoptic stations and sun photometers are used for validating the satellite products. Model outputs (HYSPLIT and NCEP/NCAR) are presented to determine the transport direction, the locations of the dust sources, and the synoptic situations during these events. Comparison with the synoptic maps and the model outputs showed that the enhancement algorithms are a more reliable way to enhance dust than other MODIS products or model outputs.
Testing the Rapid Detection Capabilities of the Quake-Catcher Network
NASA Astrophysics Data System (ADS)
Chung, A. I.; Cochran, E.; Yildirim, B.; Christensen, C. M.; Kaiser, A. E.; Lawrence, J. F.
2013-12-01
The Quake-Catcher Network (QCN) is a versatile network of MEMS accelerometers that are used in combination with distributed volunteer computing to detect earthquakes around the world. Using a dense network of QCN stations installed in Christchurch, New Zealand after the 2010 M7.1 Darfield earthquake, hundreds of events in the Christchurch area were detected and rapidly characterized. When the M6.3 Christchurch event occurred on 21 February 2011, QCN sensors recorded the event and calculated its magnitude, location, and created a map of estimated shaking intensity within 7 seconds of the earthquake origin time. Successive iterations improved the calculations and, within 24 seconds of the earthquake, magnitude and location values were calculated that were comparable to those provided by GeoNet. We have rigorously tested numerous methods to create a working magnitude scaling relationship. In this presentation, we show a drastic improvement in the magnitude estimates using the maximum acceleration at the time of the first trigger and updated ground accelerations from one to three seconds after the initial trigger. 75% of the events rapidly detected and characterized by QCN are within 0.5 magnitude units of the official GeoNet reported magnitude values, with 95% of the events within 1 magnitude unit. We also test the QCN detection algorithms using higher quality data from the SCSN network in Southern California. We examine a dataset of M5 and larger earthquakes that occurred since 1995. We present the performance of the QCN algorithms for this dataset, including time to detection as well as location and magnitude accuracy.
Modeling a Single SEP Event from Multiple Vantage Points Using the iPATH Model
NASA Astrophysics Data System (ADS)
Hu, Junxiang; Li, Gang; Fu, Shuai; Zank, Gary; Ao, Xianzhi
2018-02-01
Using the recently extended 2D improved Particle Acceleration and Transport in the Heliosphere (iPATH) model, we model an example gradual solar energetic particle event as observed at multiple locations. Protons and ions that are energized via the diffusive shock acceleration mechanism are followed at a 2D coronal mass ejection-driven shock where the shock geometry varies across the shock front. The subsequent transport of energetic particles, including cross-field diffusion, is modeled by a Monte Carlo code that is based on a stochastic differential equation method. Time intensity profiles and particle spectra at multiple locations and different radial distances, separated in longitudes, are presented. The results shown here are relevant to the upcoming Parker Solar Probe mission.
NASA Astrophysics Data System (ADS)
Ikelle, Luc T.
2006-02-01
We here describe one way of constructing internal multiples from surface seismic data only. The key feature of our construct of internal multiples is the introduction of the concept of virtual seismic events. Virtual events here are events, which are not directly recorded in standard seismic data acquisition, but their existence allows us to construct internal multiples with scattering points at the sea surface; the standard construct of internal multiples does not include any scattering points at the sea surface. The mathematical and computational operations invoked in our construction of virtual events and internal multiples are similar to those encountered in the construction of free-surface multiples based on the Kirchhoff or Born scattering theory. For instance, our construct operates on one temporal frequency at a time, just like free-surface demultiple algorithms; other internal multiple constructs tend to require all frequencies for the computation of an internal multiple at a given frequency. It does not require any knowledge of the subsurface nor an explicit knowledge of specific interfaces that are responsible for the generation of internal multiples in seismic data. However, our construct requires that the data be divided into two, three or four windows to avoid generating primaries. This segmentation of the data also allows us to select a range of periods of internal multiples that one wishes to construct because, in the context of the attenuation of internal multiples, it is important to avoid generating short-period internal multiples that may constructively average to form primaries at the seismic scale.
On dealing with multiple correlation peaks in PIV
NASA Astrophysics Data System (ADS)
Masullo, A.; Theunissen, R.
2018-05-01
A novel algorithm to analyse PIV images in the presence of strong in-plane displacement gradients and reduce sub-grid filtering is proposed in this paper. Interrogation windows subjected to strong in-plane displacement gradients often produce correlation maps presenting multiple peaks. Standard multi-grid procedures discard such ambiguous correlation windows using a signal-to-noise ratio (SNR) filter. The proposed algorithm improves the standard multi-grid algorithm by allowing the detection of splintered peaks in a correlation map through an automatic threshold, producing multiple displacement vectors for each correlation area. Vector locations are chosen by translating images according to the peak displacements and by selecting the areas with the strongest match. The method is assessed on synthetic images of a boundary layer of varying intensity and a sinusoidal displacement field of changing wavelength. An experimental case of a flow exhibiting strong velocity gradients is also provided to show the improvements brought by this technique.
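A compact sketch of detecting multiple (splintered) peaks in a correlation map: every local maximum above an automatic threshold yields a candidate displacement vector. The threshold rule (a fraction of the global maximum) and the toy correlation map are assumptions standing in for the paper's automatic threshold.

```python
import numpy as np
from scipy.ndimage import maximum_filter

def find_correlation_peaks(corr, rel_threshold=0.6, neighborhood=3):
    """Return (row, col, value) for every local maximum of the correlation map
    exceeding rel_threshold * global maximum.  Each peak yields one candidate
    displacement vector, as in the splintered-peak case discussed above."""
    local_max = corr == maximum_filter(corr, size=neighborhood)
    strong = corr > rel_threshold * corr.max()
    rows, cols = np.nonzero(local_max & strong)
    order = np.argsort(corr[rows, cols])[::-1]
    return [(r, c, corr[r, c]) for r, c in zip(rows[order], cols[order])]

# toy correlation map with two displaced peaks of similar height
corr = np.zeros((64, 64))
y, x = np.mgrid[:64, :64]
corr += np.exp(-((x - 20) ** 2 + (y - 30) ** 2) / 8.0)
corr += 0.8 * np.exp(-((x - 45) ** 2 + (y - 18) ** 2) / 8.0)
for r, c, v in find_correlation_peaks(corr):
    print(f"peak at ({r}, {c}), value {v:.2f}")
```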
Fine-Scale Event Location and Error Analysis in NET-VISA
NASA Astrophysics Data System (ADS)
Arora, N. S.; Russell, S.
2016-12-01
NET-VISA is a generative probabilistic model for the occurrence of seismic, hydro, and atmospheric events, and the propagation of energy from these events through various media and phases before being detected, or misdetected, by IMS stations. It is built on top of the basic station and arrival detection processing at the IDC, and is currently being tested in the IDC network processing pipelines. A key distinguishing feature of NET-VISA is that it is easy to incorporate prior scientific knowledge and historical data into the probabilistic model. The model accounts for both detections and mis-detections when forming events, and this allows it to make more accurate event hypotheses. It has been continuously evaluated since 2012, and each year it has achieved a roughly 60% reduction in the number of missed events without increasing the false event rate as compared with the existing GA algorithm. More importantly, the model finds large numbers of events that have been confirmed by regional seismic bulletins but missed by the IDC analysts using the same data. In this work we focus on enhancements to the model to improve the location accuracy and error ellipses. We will present a new version of the model that focuses on the fine scale around the event location, and present error ellipses and analysis of recent important events.
NASA Astrophysics Data System (ADS)
Nooshiri, Nima; Heimann, Sebastian; Saul, Joachim; Tilmann, Frederik; Dahm, Torsten
2015-04-01
Automatic earthquake locations are sometimes associated with very large residuals, up to 10 s even for clear arrivals, especially for regional stations in subduction zones because of the strongly heterogeneous velocity structure. Although these residuals are most likely caused not by measurement errors but by unmodelled velocity heterogeneity, these stations are usually removed from, or down-weighted in, the location procedure. While this is possible for large events, it may not be useful if the earthquake is weak. In this case, implementation of travel-time station corrections may significantly improve the automatic locations. Here, the shrinking-box source-specific station term (SSST) method [Lin and Shearer, 2005] has been applied to improve the relative location accuracy of 1678 events that occurred in the Tonga subduction zone between 2010 and mid-2014. Picks were obtained from the GEOFON earthquake bulletin for all available station networks. We calculated a set of timing corrections for each station which vary as a function of source position. A separate time correction was computed for each source-receiver path at the given station by smoothing the residual field over nearby events. We begin with a very large smoothing radius essentially encompassing the whole event set and iterate by progressively shrinking the smoothing radius. In this way, we attempt to correct for the systematic errors introduced into the locations by inaccuracies in the assumed velocity structure, without solving for a new velocity model itself. One of the advantages of the SSST technique is that the event location part of the calculation is separate from the station term calculation and can be performed using any single-event location method. In this study, we applied a non-linear, probabilistic, global-search earthquake location method using the software package NonLinLoc [Lomax et al., 2000]. The non-linear location algorithm implemented in NonLinLoc is less sensitive to the problem of local misfit minima in the model space. Moreover, the spatial errors estimated by NonLinLoc are much more reliable than those derived by linearized algorithms. According to the obtained results, the root-mean-square (RMS) residual decreased from 1.37 s for the original GEOFON catalog (using a global 1-D velocity model without station-specific corrections) to 0.90 s for our SSST catalog. Our results show a 45-70% reduction of the median absolute deviation (MAD) of the travel-time residuals at regional stations. Additionally, our locations exhibit less scatter in depth and a sharper image of the seismicity associated with the subducting slab compared to the initial locations.
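A highly simplified sketch of the shrinking-box SSST bookkeeping: for each event-station pair, the residuals of nearby events are averaged into a source-specific station term, and the smoothing radius is reduced at each iteration. The relocation step between iterations (performed in the study with NonLinLoc) is deliberately left out, and the synthetic residuals and radii are assumptions.

```python
import numpy as np

rng = np.random.default_rng(7)
n_events, n_stations = 500, 12
event_xy = rng.uniform(0, 100, (n_events, 2))            # assumed epicentres (km)
residuals = rng.standard_normal((n_events, n_stations))  # travel-time residuals (s)

def ssst_corrections(event_xy, residuals, radii=(100.0, 50.0, 25.0, 12.0)):
    """Source-specific station terms: for each event/station pair, average the
    residuals of events within a smoothing radius of that event, shrinking the
    radius each iteration.  Relocation between iterations (with any single-event
    locator) is omitted from this sketch."""
    corr = np.zeros_like(residuals)
    for radius in radii:                      # shrinking-box iterations
        adjusted = residuals - corr
        for i, xy in enumerate(event_xy):
            near = np.linalg.norm(event_xy - xy, axis=1) <= radius
            corr[i] = corr[i] + adjusted[near].mean(axis=0)
    return corr

corr = ssst_corrections(event_xy, residuals)
print("RMS residual before:", np.sqrt((residuals ** 2).mean()))
print("RMS residual after :", np.sqrt(((residuals - corr) ** 2).mean()))
```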
NASA Astrophysics Data System (ADS)
Athaudage, Chandranath R. N.; Bradley, Alan B.; Lech, Margaret
2003-12-01
A dynamic programming-based optimization strategy for a temporal decomposition (TD) model of speech and its application to low-rate speech coding in storage and broadcasting is presented. In previous work with the spectral stability-based event localizing (SBEL) TD algorithm, the event localization was performed based on a spectral stability criterion. Although this approach gave reasonably good results, there was no assurance on the optimality of the event locations. In the present work, we have optimized the event localizing task using a dynamic programming-based optimization strategy. Simulation results show that an improved TD model accuracy can be achieved. A methodology of incorporating the optimized TD algorithm within the standard MELP speech coder for the efficient compression of speech spectral information is also presented. The performance evaluation results revealed that the proposed speech coding scheme achieves 50%-60% compression of speech spectral information with negligible degradation in the decoded speech quality.
NASA Astrophysics Data System (ADS)
Tartakovsky, A.; Tong, M.; Brown, A. P.; Agh, C.
2013-09-01
We develop efficient spatiotemporal image processing algorithms for rejection of non-stationary clutter and tracking of multiple dim objects using non-linear track-before-detect methods. For clutter suppression, we include an innovative image alignment (registration) algorithm. The images are assumed to contain elements of the same scene, but taken at different angles, from different locations, and at different times, with substantial clutter non-stationarity. These challenges are typical for space-based and surface-based IR/EO moving sensors, e.g., highly elliptical orbit or low earth orbit scenarios. The algorithm assumes that the images are related via a planar homography, also known as the projective transformation. The parameters are estimated in an iterative manner, at each step adjusting the parameter vector so as to achieve improved alignment of the images. Operating in the parameter space rather than in the coordinate space is a new idea, which makes the algorithm more robust with respect to noise as well as to large inter-frame disturbances, while operating at real-time rates. For dim object tracking, we include new advancements to a particle non-linear filtering-based track-before-detect (TrbD) algorithm. The new TrbD algorithm includes both real-time full image search for resolved objects not yet in track and joint super-resolution and tracking of individual objects in closely spaced object (CSO) clusters. The real-time full image search provides near-optimal detection and tracking of multiple extremely dim, maneuvering objects/clusters. The super-resolution and tracking CSO TrbD algorithm provides efficient near-optimal estimation of the number of unresolved objects in a CSO cluster, as well as the locations, velocities, accelerations, and intensities of the individual objects. We demonstrate that the algorithm is able to accurately estimate the number of CSO objects and their locations when the initial uncertainty on the number of objects is large. We demonstrate performance of the TrbD algorithm both for satellite-based and surface-based EO/IR surveillance scenarios.
Sound source tracking device for telematic spatial sound field reproduction
NASA Astrophysics Data System (ADS)
Cardenas, Bruno
This research describes an algorithm that localizes sound sources for use in telematic applications. The localization algorithm is based on amplitude differences between the channels of an array of directional shotgun microphones. The amplitude differences are used to locate multiple performers; their voices, recorded at close distance with lavalier microphones, are then reproduced with spatial correction using a loudspeaker rendering system. To track multiple sound sources in parallel, the information gained from the lavalier microphones is used to estimate the signal-to-noise ratio between each performer and the concurrent performers.
2013-07-01
structure of the data and Gower's similarity coefficient as the algorithm for calculating the proximity matrices. The following section provides a ... representative set of terrorist event data:

Attribute:  Day      Location  Time      Prim/Attack  Sec/Attack
Weight:     1        1         1         1            1
Scale:      Nominal  Nominal   Interval  Nominal      ...

... to calculate the similarity it uses Gower's similarity and multidimensional scaling algorithms contained in an R statistical computing environment
GMTI Direction of Arrival Measurements from Multiple Phase Centers.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Doerry, Armin W.; Bickel, Douglas L.
2015-03-01
Ground Moving Target Indicator (GMTI) radar attempts to detect and locate targets with unknown motion. Very slow-moving targets are difficult to locate in the presence of surrounding clutter. This necessitates multiple antenna phase centers (or equivalent) to offer independent Direction of Arrival (DOA) measurements. DOA accuracy and precision generally remain dependent on target Signal-to-Noise Ratio (SNR), Clutter-to-Noise Ratio (CNR), scene topography, interfering signals, and a number of antenna parameters. This is true even for adaptive techniques like Space-Time Adaptive Processing (STAP) algorithms.
Yang, Jing; Xu, Mai; Zhao, Wei; Xu, Baoguo
2010-01-01
For monitoring burst events in a kind of reactive wireless sensor networks (WSNs), a multipath routing protocol (MRP) based on dynamic clustering and ant colony optimization (ACO) is proposed. Such an approach can maximize the network lifetime and reduce the energy consumption. An important attribute of WSNs is their limited power supply, and therefore some metrics (such as energy consumption of communication among nodes, residual energy, and path length) were considered as very important criteria while designing routing in the MRP. Firstly, a cluster head (CH) is selected among nodes located in the event area according to some parameters, such as residual energy. Secondly, an improved ACO algorithm is applied in the search for multiple paths between the CH and sink node. Finally, the CH dynamically chooses a route to transmit data with a probability that depends on many path metrics, such as energy consumption. The simulation results show that MRP can prolong the network lifetime, balance energy consumption among nodes, and effectively reduce the average energy consumption.
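The final routing step, in which the cluster head picks one of the discovered paths with a probability driven by path metrics, might look roughly like the following sketch; the metric names ('energy_cost', 'min_residual_energy', 'hops') and the scoring formula are illustrative assumptions, not the MRP specification.

```python
import random

def choose_route(paths):
    """paths: list of dicts with hypothetical keys 'energy_cost',
    'min_residual_energy', 'hops'.  Returns the index of the chosen path.

    Each path gets a desirability that rewards high residual energy and
    penalises energy cost and length; the CH then picks a route with
    probability proportional to that desirability, spreading traffic (and
    energy drain) across the multiple ACO-discovered paths."""
    scores = [p["min_residual_energy"] / (p["energy_cost"] * p["hops"])
              for p in paths]
    total = sum(scores)
    r, acc = random.uniform(0, total), 0.0
    for i, s in enumerate(scores):
        acc += s
        if r <= acc:
            return i
    return len(paths) - 1

# Example: three candidate paths from the cluster head to the sink.
paths = [{"energy_cost": 2.0, "min_residual_energy": 0.9, "hops": 4},
         {"energy_cost": 1.5, "min_residual_energy": 0.5, "hops": 3},
         {"energy_cost": 3.0, "min_residual_energy": 0.8, "hops": 6}]
print(choose_route(paths))
```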
Automatic arrival time detection for earthquakes based on Modified Laplacian of Gaussian filter
NASA Astrophysics Data System (ADS)
Saad, Omar M.; Shalaby, Ahmed; Samy, Lotfy; Sayed, Mohammed S.
2018-04-01
Precise identification of an earthquake's onset time is imperative for correctly determining the earthquake's location and the other parameters used in building seismic catalogues. The P-wave arrival of weak events or micro-earthquakes cannot be precisely determined due to background noise. In this paper, we propose a novel approach based on a Modified Laplacian of Gaussian (MLoG) filter to detect the onset time even at very low signal-to-noise ratios (SNRs). The proposed algorithm utilizes a denoising-filter algorithm to smooth the background noise. In the proposed algorithm, we employ the MLoG mask to filter the seismic data. Afterward, we apply a dual-threshold comparator to detect the onset time of the event. The results show that the proposed algorithm can detect the onset time for micro-earthquakes accurately, even at an SNR of -12 dB. The proposed algorithm achieves an onset time picking accuracy of 93% with a standard deviation error of 0.10 s for 407 field seismic waveforms. We also compare the results with the short-term/long-term average algorithm (STA/LTA) and the Akaike Information Criterion (AIC), and the proposed algorithm outperforms both.
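A minimal sketch of the filtering-plus-dual-threshold idea is shown below; it uses a plain 1-D Laplacian-of-Gaussian mask rather than the paper's modified version, and the threshold values and noise-floor estimate are assumptions.

```python
import numpy as np

def log_kernel(sigma, width):
    """1-D Laplacian-of-Gaussian kernel (the paper's MLoG mask is a
    modified variant; this is a plain LoG used for illustration)."""
    t = np.arange(-width, width + 1, dtype=float)
    g = np.exp(-t**2 / (2 * sigma**2))
    return (t**2 / sigma**4 - 1 / sigma**2) * g

def pick_onset(trace, fs, sigma=0.05, hi=5.0, lo=2.0):
    """Dual-threshold picker on a LoG-filtered characteristic function.

    The trace is filtered with the LoG mask, its absolute value is
    normalised by a crude noise level taken from the first 10% of the
    record, the first sample exceeding `hi` confirms an event, and the
    pick is moved back to the last sample below `lo` before the trigger.
    Both thresholds and sigma (in seconds) are assumptions."""
    k = log_kernel(sigma * fs, int(4 * sigma * fs))
    cf = np.abs(np.convolve(trace, k, mode="same"))
    cf /= np.median(cf[: int(len(cf) * 0.1)]) + 1e-12
    above = np.flatnonzero(cf > hi)
    if above.size == 0:
        return None
    trigger = above[0]
    below = np.flatnonzero(cf[:trigger] < lo)
    onset = below[-1] if below.size else trigger
    return onset / fs  # onset time in seconds
```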
Determination of the Earth's Plasmapause Location from the CE-3 EUVC Images
NASA Technical Reports Server (NTRS)
He, Fei; Zhang, Xiao-Xin; Chen, Bo; Fok, Mei-Ching; Nakano, Shinya
2016-01-01
The Moon-based Extreme Ultraviolet Camera (EUVC) aboard China's Chang'e-3 (CE-3) mission has successfully imaged the entire Earth's plasmasphere for the first time from side views on the lunar surface. An EUVC image on 21 April 2014 is used in this study to demonstrate the characteristics and configurations of the Moon-based EUV imaging and to illustrate the determination algorithm of the plasmapause locations on the magnetic equator. The plasmapause locations determined from all the available EUVC images with the Minimum L Algorithm are quantitatively compared with those extracted from in situ observations (Defense Meteorological Satellite Program, Time History of Events and Macroscale Interactions during Substorms, and Radiation Belt Storm Probes). Excellent agreement between the plasmapauses determined from EUVC and those extracted from the other satellites indicates the reliability of the Moon-based EUVC images as well as the determination algorithm. This preliminary study provides an important basis for future investigation of the dynamics of the plasmasphere with Moon-based EUVC imaging.
NASA Astrophysics Data System (ADS)
Horstmann, T.; Harrington, R. M.; Cochran, E. S.
2012-12-01
Frequently, the lack of distinctive phase arrivals makes locating tectonic tremor more challenging than locating earthquakes. Classic location algorithms based on travel times cannot be directly applied because impulsive phase arrivals are often difficult to recognize. Traditional location algorithms are often modified to use phase arrivals identified from stacks of recurring low-frequency events (LFEs) observed within tremor episodes, rather than single events. Stacking the LFE waveforms improves the signal-to-noise ratio for the otherwise non-distinct phase arrivals. In this study, we apply a different method to locate tectonic tremor: a modified time-reversal imaging approach that potentially exploits the information from the entire tremor waveform instead of phase arrivals from individual LFEs. Time-reversal imaging uses the waveforms of a given seismic source recorded by multiple seismometers at discrete points on the surface and a 3D velocity model to rebroadcast the waveforms back into the medium to identify the seismic source location. In practice, the method works by reversing the seismograms recorded at each of the stations in time and back-propagating them individually from the receiver locations into the sub-surface as new source time functions. We use a staggered-grid, finite-difference code with 2.5 ms time steps and a grid node spacing of 50 m to compute the rebroadcast wavefield. We calculate the time-dependent curl field at each grid point of the model volume for each back-propagated seismogram. To locate the tremor, we assume that the source time function back-propagated from each individual station produces a similar curl field at the source position. We then cross-correlate the time-dependent curl field functions and calculate a median cross-correlation coefficient at each grid point. The highest median cross-correlation coefficient in the model volume is expected to represent the source location. For our analysis, we use the velocity model of Thurber et al. (2006) interpolated to a grid spacing of 50 m. Such grid spacing corresponds to frequencies of up to 8 Hz, which is suitable to calculate the wave propagation of tremor. Our dataset contains continuous broadband data from 13 STS-2 seismometers deployed from May 2010 to July 2011 along the Cholame segment of the San Andreas Fault, as well as data from the HRSN and PBO networks. Initial synthetic results from tests on a 2D plane using a line of 15 receivers suggest that we are able to recover accurate event locations to within 100 m horizontally and 300 m in depth. We conduct additional synthetic tests to determine the influence of the signal-to-noise ratio, the number of stations used, and the uncertainty in the velocity model on the location result by adding noise to the seismograms and perturbations to the velocity model. Preliminary results show location accuracy to within 400 m with a median signal-to-noise ratio of 3.5 and 5% perturbations in the velocity model. The next steps will entail performing the synthetic tests on the 3D velocity model and applying the method to tremor waveforms. Furthermore, we will determine the spatial and temporal distribution of the source locations and compare our results to those of Sumy and others.
Location Privacy Protection on Social Networks
NASA Astrophysics Data System (ADS)
Zhan, Justin; Fang, Xing
Location information is considered as private in many scenarios. Protecting location information on mobile ad-hoc networks has attracted much research in past years. However, location information protection on social networks has not been paid much attention. In this paper, we present a novel location privacy protection approach on the basis of user messages in social networks. Our approach grants flexibility to users by offering them multiple protecting options. To the best of our knowledge, this is the first attempt to protect social network users' location information via text messages. We propose five algorithms for location privacy protection on social networks.
Cui, Yong; Wang, Qiusheng; Yuan, Haiwen; Song, Xiao; Hu, Xuemin; Zhao, Luxing
2015-02-04
In the wireless sensor networks (WSNs) for electric field measurement system under the High-Voltage Direct Current (HVDC) transmission lines, it is necessary to obtain the electric field distribution with multiple sensors. The location information of each sensor is essential to the correct analysis of measurement results. Compared with the existing approach which gathers the location information by manually labelling sensors during deployment, the automatic localization can reduce the workload and improve the measurement efficiency. A novel and practical range-free localization algorithm for the localization of one-dimensional linear topology wireless networks in the electric field measurement system is presented. The algorithm utilizes unknown nodes' neighbor lists based on the Received Signal Strength Indicator (RSSI) values to determine the relative locations of nodes. The algorithm is able to handle the exceptional situation of the output permutation which can effectively improve the accuracy of localization. The performance of this algorithm under real circumstances has been evaluated through several experiments with different numbers of nodes and different node deployments in the China State Grid HVDC test base. Results show that the proposed algorithm achieves an accuracy of over 96% under different conditions.
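A toy sketch of the neighbour-list idea for a one-dimensional topology is given below; the end-node heuristic and greedy chaining are assumptions, and the paper's handling of exceptional output permutations is not reproduced.

```python
import numpy as np

def order_linear_nodes(rssi):
    """rssi: (n, n) symmetric matrix of pairwise RSSI values (e.g. dBm),
    where a larger value means the two nodes are closer.  Returns a
    relative ordering of node indices along the line.

    Greedy sketch: pick one end of the line as the node with the weakest
    total RSSI, then repeatedly step to the strongest not-yet-visited
    neighbour.  Assumes RSSI decreases monotonically with distance."""
    n = rssi.shape[0]
    totals = np.where(np.eye(n, dtype=bool), 0.0, rssi).sum(axis=1)
    start = int(np.argmin(totals))          # presumed end of the line
    sel = rssi.astype(float).copy()
    np.fill_diagonal(sel, -np.inf)
    order, visited = [start], {start}
    while len(order) < n:
        cand = sel[order[-1]].copy()
        cand[list(visited)] = -np.inf       # never revisit a node
        nxt = int(np.argmax(cand))
        order.append(nxt)
        visited.add(nxt)
    return order
```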
Statistical hadronization and microcanonical ensemble
Becattini, F.; Ferroni, L.
2004-01-01
We present a Monte Carlo calculation of the microcanonical ensemble of the ideal hadron-resonance gas including all known states up to a mass of 1.8 GeV, taking into account quantum statistics. The computing method is a development of a previous one based on a Metropolis Monte Carlo algorithm, with the grand-canonical limit of the multi-species multiplicity distribution as proposal matrix. The microcanonical average multiplicities of the various hadron species are found to converge to the canonical ones for moderately low values of the total energy. This algorithm opens the way for event generators based on the statistical hadronization model.
Time difference of arrival to blast localization of potential chemical/biological event on the move
NASA Astrophysics Data System (ADS)
Morcos, Amir; Desai, Sachi; Peltzer, Brian; Hohil, Myron E.
2007-10-01
By integrating a sensor suite able to discriminate potential Chemical/Biological (CB) events from high-explosive (HE) events, employing a standalone acoustic sensor with a Time Difference of Arrival (TDOA) algorithm, we developed a cueing mechanism for more power-intensive and range-limited sensing techniques. The event detection algorithm locates a blast event using TDOA and then provides further information on whether the event is a Launch or Impact and whether it is CB or HE. This added information is passed to a range-limited chemical sensing system that exploits spectroscopy to determine the contents of the chemical event. The main innovation of this sensor suite is that it provides this information on the move, while the chemical sensor has adequate time to determine the contents of the event from a safe stand-off distance. The CB/HE discrimination algorithm exploits acoustic sensors to provide early detection and identification of CB attacks. Distinct characteristics arise within the different airburst signatures because HE warheads emphasize concussive and shrapnel effects, while CB warheads are designed to disperse their contents over large areas, therefore employing a slower-burning, less intense explosive to mix and spread their contents. The differences are characterized by variations in the corresponding peak pressure and rise time of the blast, differences in the ratio of positive pressure amplitude to negative amplitude, and variations in the overall duration of the resulting waveform. The discrete wavelet transform (DWT) is used to extract the predominant components of these characteristics from airburst signatures at ranges exceeding 3 km. Highly reliable discrimination is achieved with a feed-forward neural network classifier trained on a feature space derived from the distribution of wavelet coefficients and higher-frequency details found within different levels of the multiresolution decomposition. The development of an adaptive noise floor to provide early event detection assists in minimizing the false alarm rate and increases confidence in whether the event is a blast event or background noise. The integration of these algorithms with the TDOA algorithm provides a suite of algorithms that can give early warning detection and a highly reliable look direction from a great stand-off distance for a moving vehicle, to determine whether a candidate blast event is CB and, if CB, the composition of the resulting cloud.
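For reference, a generic least-squares solution of the TDOA equations (not the authors' implementation) can be sketched as follows, assuming a 2-D geometry and the speed of sound in air.

```python
import numpy as np
from scipy.optimize import least_squares

C = 343.0  # speed of sound in air, m/s

def tdoa_locate(sensors, tdoas, x0=None):
    """Estimate a 2-D source position from time differences of arrival.

    sensors : (n, 2) sensor coordinates in metres
    tdoas   : (n-1,) arrival-time differences (s) of sensors 1..n-1
              relative to sensor 0
    Gauss-Newton solution of the hyperbolic TDOA equations; geometry and
    noise handling are assumptions made for illustration."""
    sensors = np.asarray(sensors, float)
    if x0 is None:
        x0 = sensors.mean(axis=0)

    def resid(x):
        d = np.linalg.norm(sensors - x, axis=1)
        return (d[1:] - d[0]) / C - tdoas

    return least_squares(resid, x0).x

# Example: four microphones at the corners of a 20 m square.
mics = [(0, 0), (20, 0), (20, 20), (0, 20)]
src = np.array([14.0, 6.0])
d = np.linalg.norm(np.asarray(mics, float) - src, axis=1)
print(tdoa_locate(mics, (d[1:] - d[0]) / C))   # ~[14, 6]
```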
Identification of P/S-wave successions for application in microseismicity
NASA Astrophysics Data System (ADS)
Deflandre, J.-P.; Dubesset, M.
1992-09-01
Interpretation of P/S-wave successions is used in induced or passive microseismicity. It makes the location of microseismic events possible when the triangulation technique cannot be used. To improve the reliability of the method, we propose a technique that identifies the P/S-wave successions among recorded wave successions. A polarization software is used to verify the orthogonality between the P and S polarization axes. The polarization parameters are computed all along the 3-component acoustic signal. The algorithm then detects time windows within which the signal polarization axis is perpendicular to the polarization axis of the wave in the reference time window (representative of the P wave). The technique is demonstrated for a synthetic event, and three application cases are presented. The first one corresponds to a calibration shot, within which the arrivals of perpendicularly polarized waves are correctly detected in spite of their moderate amplitude. The second example presents a microseismic event recorded during gas withdrawal from an underground gas storage reservoir. The last example is chosen as a counter-example, concerning a microseismic event recorded during a hydraulic fracturing job. The detection algorithm reveals that, in this case, the wave succession does not correspond to a P/S one. This implies that such an event must not be located by the method based on the interpretation of a P/S-wave succession, since no such succession is confirmed.
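A simplified version of the orthogonality test could be sketched as follows: the dominant polarization axis is estimated in a reference (P-wave) window and in sliding windows, and windows whose axis is nearly perpendicular to the reference are flagged. The window length, step, and angular tolerance are assumptions.

```python
import numpy as np

def principal_axis(window):
    """Dominant polarization direction of a 3-component window
    (eigenvector of the covariance matrix with the largest eigenvalue)."""
    c = np.cov(window)                    # window shape: (3, n_samples)
    w, v = np.linalg.eigh(c)
    return v[:, -1]

def find_orthogonal_windows(data, fs, p_window, step=0.05, length=0.2, tol_deg=15.0):
    """Return start times (s) of sliding windows whose polarization axis is
    within `tol_deg` of perpendicular to the reference (P-wave) window.
    data: (3, n_samples); p_window: (start_sample, end_sample)."""
    p_axis = principal_axis(data[:, p_window[0]:p_window[1]])
    n, hop = int(length * fs), int(step * fs)
    hits = []
    for start in range(0, data.shape[1] - n, hop):
        axis = principal_axis(data[:, start:start + n])
        angle = np.degrees(np.arccos(abs(np.dot(p_axis, axis))))
        if abs(angle - 90.0) <= tol_deg:
            hits.append(start / fs)
    return hits
```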
Connecting a cognitive architecture to robotic perception
NASA Astrophysics Data System (ADS)
Kurup, Unmesh; Lebiere, Christian; Stentz, Anthony; Hebert, Martial
2012-06-01
We present an integrated architecture in which perception and cognition interact and provide information to each other, leading to improved performance in real-world situations. Our system integrates the Felzenszwalb et al. object-detection algorithm with the ACT-R cognitive architecture. The targeted task is to predict and classify pedestrian behavior in a checkpoint scenario, most specifically to discriminate between normal and checkpoint-avoiding behavior. The Felzenszwalb algorithm is a learning-based algorithm for detecting and localizing objects in images. ACT-R is a cognitive architecture that has been successfully used to model human cognition with a high degree of fidelity on tasks ranging from basic decision-making to the control of complex systems such as driving or air traffic control. The Felzenszwalb algorithm detects pedestrians in the image and provides ACT-R a set of features based primarily on their locations. ACT-R uses its pattern-matching capabilities, specifically its partial-matching and blending mechanisms, to track objects across multiple images and classify their behavior based on the sequence of observed features. ACT-R also provides feedback to the Felzenszwalb algorithm in the form of expected object locations that allow the algorithm to eliminate false positives and improve its overall performance. This capability is an instance of the benefits pursued in developing a richer interaction between bottom-up perceptual processes and top-down goal-directed cognition. We trained the system on individual behaviors (only one person in the scene) and evaluated its performance across single and multiple behavior sets.
NASA Astrophysics Data System (ADS)
Gohatre, Umakant Bhaskar; Patil, Venkat P.
2018-04-01
In computer vision, real-time detection and tracking of multiple objects is an important research field that has gained much attention in recent years for finding non-stationary entities in image sequences. Object detection is a step toward following moving objects in video, and object representation is a further step toward tracking. Recognizing multiple objects from a video sequence is a challenging task. Image registration has long been used as a basis for detecting moving objects: registration finds correspondences between consecutive frame pairs based on image appearance under rigid and affine transformation. However, image registration is not well suited to handling events that can result in missed objects. To address such problems, this paper proposes a novel approach: video frames are segmented using a region adjacency graph of visual appearance and geometric properties; matching is then performed between graph sequences using multi-graph matching; and matched regions are labeled by a proposed graph coloring algorithm that assigns a foreground label to each respective region. The proposed design is robust to unknown transformations and shows a significant improvement over existing work on real-time detection of multiple moving objects.
Pant, Jeevan K; Krishnan, Sridhar
2018-03-15
To present a new compressive sensing (CS)-based method for the acquisition of ECG signals and for robust estimation of heart-rate variability (HRV) parameters from compressively sensed measurements with high compression ratio. CS is used in the biosensor to compress the ECG signal. Estimation of the locations of QRS segments is carried out by applying two algorithms on the compressed measurements. The first algorithm reconstructs the ECG signal by enforcing a block-sparse structure on the first-order difference of the signal, so the transient QRS segments are significantly emphasized on the first-order difference of the signal. Multiple block-divisions of the signals are carried out with various block lengths, and multiple reconstructed signals are combined to enhance the robustness of the localization of the QRS segments. The second algorithm removes errors in the locations of QRS segments by applying low-pass filtering and morphological operations. The proposed CS-based method is found to be effective for the reconstruction of ECG signals by enforcing transient QRS structures on the first-order difference of the signal. It is demonstrated to be robust not only to high compression ratio but also to various artefacts present in ECG signals acquired by using on-body wireless sensors. HRV parameters computed by using the QRS locations estimated from the signals reconstructed with a compression ratio as high as 90% are comparable with that computed by using QRS locations estimated by using the Pan-Tompkins algorithm. The proposed method is useful for the realization of long-term HRV monitoring systems by using CS-based low-power wireless on-body biosensors.
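The QRS localization step (applied here to an already reconstructed signal, i.e. without the block-sparse CS reconstruction or the morphological clean-up described above) might be sketched as follows; the smoothing window, threshold, and refractory period are assumptions.

```python
import numpy as np

def qrs_locations(ecg, fs, refractory=0.25):
    """Locate QRS complexes from the first-order difference of an ECG trace.

    The squared first difference emphasises the transient QRS segments; it
    is smoothed, thresholded, and local maxima separated by at least the
    refractory period are kept.  This is only an illustrative localization
    step, not the paper's CS-based pipeline."""
    d = np.diff(ecg)
    energy = np.convolve(d**2, np.ones(int(0.08 * fs)), mode="same")
    thr = 0.3 * energy.max()
    peaks, last = [], -np.inf
    for i in range(1, len(energy) - 1):
        if energy[i] > thr and energy[i] >= energy[i - 1] and energy[i] > energy[i + 1]:
            if i - last > refractory * fs:
                peaks.append(i)
                last = i
    return np.array(peaks) / fs  # QRS times in seconds
```

HRV parameters such as SDNN or RMSSD then follow from the successive differences of the returned QRS times.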
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mosher, J.C.; Leahy, R.M.
A new method for source localization is described that is based on a modification of the well-known multiple signal classification (MUSIC) algorithm. In classical MUSIC, the array manifold vector is projected onto an estimate of the signal subspace, but errors in the estimate can make location of multiple sources difficult. Recursively applied and projected (RAP) MUSIC uses each successively located source to form an intermediate array gain matrix, and projects both the array manifold and the signal subspace estimate into its orthogonal complement. The MUSIC projection is then performed in this reduced subspace. Using the metric of principal angles, the authors describe a general form of the RAP-MUSIC algorithm for the case of diversely polarized sources. Through a uniform linear array simulation, the authors demonstrate the improved Monte Carlo performance of RAP-MUSIC relative to MUSIC and two other sequential subspace methods, S and IES-MUSIC.
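A bare-bones sketch of the recursion for simple (unpolarized) manifold vectors is given below; the diversely polarized formulation and the principal-angle machinery of the paper are only approximated here by a subspace-correlation metric.

```python
import numpy as np

def subspace_corr(a, Us):
    """Cosine of the principal angle between vector a and span(Us)."""
    a = a / (np.linalg.norm(a) + 1e-12)
    q, _ = np.linalg.qr(Us)
    return np.linalg.norm(q.conj().T @ a)

def rap_music(A, Us, n_sources):
    """A  : (n_sensors, n_grid) array-manifold vectors over a search grid
    Us : (n_sensors, r) estimated signal-subspace basis
    Returns grid indices of the located sources.

    Each located source is projected out of both the manifold and the
    subspace estimate before the next search (illustrative sketch only)."""
    found, P = [], np.eye(A.shape[0])
    for _ in range(n_sources):
        Ap = P @ A
        Up = np.linalg.qr(P @ Us)[0]
        corr = [subspace_corr(Ap[:, g], Up) for g in range(A.shape[1])]
        found.append(int(np.argmax(corr)))
        # Project onto the orthogonal complement of all found source gains.
        Q = np.linalg.qr(A[:, found])[0]
        P = np.eye(A.shape[0]) - Q @ Q.conj().T
    return found
```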
Implementation of the Iterative Proportion Fitting Algorithm for Geostatistical Facies Modeling
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li Yupeng, E-mail: yupeng@ualberta.ca; Deutsch, Clayton V.
2012-06-15
In geostatistics, most stochastic algorithms for the simulation of categorical variables such as facies or rock types require a conditional probability distribution. The multivariate probability distribution of all the grouped locations, including the unsampled location, permits calculation of the conditional probability directly based on its definition. In this article, the iterative proportion fitting (IPF) algorithm is implemented to infer this multivariate probability. Using the IPF algorithm, the multivariate probability is obtained by iterative modification of an initial estimated multivariate probability using lower-order bivariate probabilities as constraints. The imposed bivariate marginal probabilities are inferred from profiles along drill holes or wells. In the IPF process, a sparse matrix is used to calculate the marginal probabilities from the multivariate probability, which makes the iterative fitting more tractable and practical. This algorithm can be extended to higher-order marginal probability constraints as used in multiple point statistics. The theoretical framework is developed and illustrated with an estimation and simulation example.
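The core iterative proportional fitting step, shown here for a single bivariate table with imposed marginals, is simple; the sparse-matrix bookkeeping and higher-order extensions described in the article are not reproduced.

```python
import numpy as np

def ipf(p0, row_marginal, col_marginal, n_iter=100, tol=1e-10):
    """Iterative proportional fitting of a bivariate probability table.

    p0 is an initial estimate of the joint probability of categories (e.g.
    facies) at two locations; the table is alternately rescaled so its row
    and column sums match the imposed (e.g. well-derived) marginals."""
    p = np.asarray(p0, float).copy()
    for _ in range(n_iter):
        p *= (row_marginal / p.sum(axis=1))[:, None]
        p *= (col_marginal / p.sum(axis=0))[None, :]
        if (np.abs(p.sum(axis=1) - row_marginal).max() < tol and
                np.abs(p.sum(axis=0) - col_marginal).max() < tol):
            break
    return p

# Example with three facies categories at two locations.
p0 = np.full((3, 3), 1 / 9)
print(ipf(p0, np.array([0.5, 0.3, 0.2]), np.array([0.4, 0.4, 0.2])))
```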
Walking through doorways causes forgetting: Event structure or updating disruption?
Pettijohn, Kyle A; Radvansky, Gabriel A
2016-11-01
According to event cognition theory, people segment experience into separate event models. One consequence of this segmentation is that when people transport objects from one location to another, memory is worse than if people move across a large location. In two experiments participants navigated through a virtual environment, and recognition memory was tested in either the presence or the absence of a location shift for objects that were recently interacted with (i.e., just picked up or set down). Of particular concern here is whether this location updating effect is due to (a) differences in retention intervals as a result of the navigation process, (b) a temporary disruption in cognitive processing that may occur as a result of the updating processes, or (c) a need to manage multiple event models, as has been suggested in prior research. Experiment 1 explored whether retention interval is driving this effect by recording travel times from the acquisition of an object and the probe time. The results revealed that travel times were similar, thereby rejecting a retention interval explanation. Experiment 2 explored whether a temporary disruption in processing is producing the effect by introducing a 3-second delay prior to the presentation of a memory probe. The pattern of results was not affected by adding a delay, thereby rejecting a temporary disruption account. These results are interpreted in the context of the event horizon model, which suggests that when there are multiple event models that contain common elements there is interference at retrieval, which compromises performance.
Scalable Probabilistic Inference for Global Seismic Monitoring
NASA Astrophysics Data System (ADS)
Arora, N. S.; Dear, T.; Russell, S.
2011-12-01
We describe a probabilistic generative model for seismic events, their transmission through the earth, and their detection (or mis-detection) at seismic stations. We also describe an inference algorithm that constructs the most probable event bulletin explaining the observed set of detections. The model and inference are called NET-VISA (network processing vertically integrated seismic analysis) and are designed to replace the current automated network processing at the IDC, the SEL3 bulletin. Our results (attached table) demonstrate that NET-VISA significantly outperforms SEL3 by reducing the missed events from 30.3% down to 12.5%. The difference is even more dramatic for smaller magnitude events. NET-VISA has no difficulty in locating nuclear explosions as well. The attached figure demonstrates the location predicted by NET-VISA versus other bulletins for the second DPRK event. Further evaluation on dense regional networks demonstrates that NET-VISA finds many events missed in the LEB bulletin, which is produced by the human analysts. Large aftershock sequences, as produced by the 2004 December Sumatra earthquake and the 2011 March Tohoku earthquake, can pose a significant load for automated processing, often delaying the IDC bulletins by weeks or months. Indeed, these sequences can overload the serial NET-VISA inference as well. We describe an enhancement to NET-VISA to make it multi-threaded, and hence take full advantage of the processing power of multi-core and multi-cpu machines. Our experiments show that the new inference algorithm is able to achieve 80% efficiency in parallel speedup.
NASA Astrophysics Data System (ADS)
Park, Won-Kwang; Kim, Hwa Pyung; Lee, Kwang-Jae; Son, Seong-Ho
2017-11-01
Motivated by the biomedical engineering used in early-stage breast cancer detection, we investigated the use of the MUltiple SIgnal Classification (MUSIC) algorithm for location searching of small anomalies using S-parameters. We considered the application of MUSIC to functional imaging where a small number of dipole antennas are used. Our approach is based on the application of the Born approximation or physical factorization. We analyzed cases in which the anomaly is small or large in relation to the wavelength, and the structure of the left-singular vectors linked to the nonzero singular values of a Multi-Static Response (MSR) matrix whose elements are the S-parameters. Using simulations, we demonstrated the strengths and weaknesses of the MUSIC algorithm in detecting both small and extended anomalies.
Using multiplets to track volcanic processes at Kilauea Volcano, Hawaii
NASA Astrophysics Data System (ADS)
Thelen, W. A.
2011-12-01
Multiplets, or repeating earthquakes, are commonly observed at volcanoes, particularly those exhibiting unrest. At Kilauea, multiplets have been observed as part of long period (LP) earthquake swarms [Battaglia et al., 2003] and as volcano-tectonic (VT) earthquakes associated with dike intrusion [Rubin et al., 1998]. The focus of most previous studies has been on the precise location of the multiplets based on reviewed absolute locations, a process that can require extensive human intervention and post-processing. Conversely, the detection of multiplets and measurement of multiplet parameters can be done in real-time without human interaction with locations approximated by the stations that best record the multiplet. The Hawaiian Volcano Observatory (HVO) is in the process of implementing and testing an algorithm to detect multiplets in near-real time and to analyze certain metrics to provide enhanced interpretive insights into ongoing volcanic processes. Metrics such as multiplet percent of total seismicity, multiplet event recurrence interval, multiplet lifespan, average event amplitude, and multiplet event amplitude variability have been shown to be valuable in understanding volcanic processes at Bezymianny Volcano, Russia and Mount St. Helens, Washington and thus are tracked as part of the algorithm. The near real-time implementation of the algorithm can be triggered from an earthworm subnet trigger or other triggering algorithm and employs a MySQL database to store results, similar to an algorithm implemented by Got et al. [2002]. Initial results using this algorithm to analyze VT earthquakes along Kilauea's Upper East Rift Zone between September 2010 and August 2011 show that periods of summit pressurization coincide with ample multiplet development. Summit pressurization is loosely defined by high rates of seismicity within the summit and Upper East Rift areas, coincident with lava high stands in the Halema`uma`u lava lake. High percentages, up to 100%, of earthquakes occurring during summit pressurization were part of a multiplet. Percentages were particularly high immediately prior to the March 5 Kamoamoa eruption. Interestingly, many multiplets that were present prior to the Kamoamoa eruption were reactivated during summit pressurization occurring in late July 2011. At a correlation coefficient of 0.7, 90% of the multiplets during the study period had populations of 10 or fewer earthquakes. Between periods of summit pressurization, earthquakes that belong to multiplets rarely occur, even though magma is flowing through the Upper East Rift Zone. Battaglia, J., Got, J. L. and Okubo, P., 2003. Location of long-period events below Kilauea Volcano using seismic amplitudes and accurate relative relocation. Journal of Geophysical Research-Solid Earth, v.108 (B12) 2553. Got, J. L., P. Okubo, R. Machenbaum, and W. Tanigawa (2002), A real-time procedure for progressive multiplet relative relocation at the Hawaiian Volcano Observatory, Bulletin of the Seismological Society of America, 92(5), 2019. Rubin, A. M., D. Gillard, and J. L. Got (1998), A reinterpretation of seismicity associated with the January 1983 dike intrusion at Kilauea Volcano, Hawaii, Journal of Geophysical Research-Solid Earth, 103(B5), 10003.
Delineating Concealed Faults within Cogdell Oil Field via Earthquake Detection
NASA Astrophysics Data System (ADS)
Aiken, C.; Walter, J. I.; Brudzinski, M.; Skoumal, R.; Savvaidis, A.; Frohlich, C.; Borgfeldt, T.; Dotray, P.
2016-12-01
Cogdell oil field, located within the Permian Basin of western Texas, has experienced several earthquakes ranging from magnitude 1.7 to 4.6, most of which were recorded since 2006. Using the Earthscope USArray, Gan and Frohlich [2013] relocated some of these events and found a positive correlation in the timing of increased earthquake activity and increased CO2 injection volume. However, focal depths of these earthquakes are unknown due to 70 km station spacing of the USArray. Accurate focal depths as well as new detections can delineate subsurface faults and establish whether earthquakes are occurring in the shallow sediments or in the deeper basement. To delineate subsurface fault(s) in this region, we first detect earthquakes not currently listed in the USGS catalog by applying continuous waveform-template matching algorithms to multiple seismic data sets. We utilize seismic data spanning the time frame of 2006 to 2016 - which includes data from the U.S. Geological Survey Global Seismographic Network, the USArray, and the Sweetwater, TX broadband and nodal array located 20-40 km away. The catalog of earthquakes enhanced by template matching reveals events that were well recorded by the large-N Sweetwater array, so we are experimenting with strategies for optimizing template matching using different configurations of many stations. Since earthquake activity in the Cogdell oil field is on-going (a magnitude 2.6 occurred on May 29, 2016), a temporary deployment of TexNet seismometers has been planned for the immediate vicinity of Cogdell oil field in August 2016. Results on focal depths and detection of small magnitude events are pending this small local network deployment.
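A single-channel sketch of the continuous waveform-template matching used for detection might look like the following; a production detector would stack correlation traces over stations and channels, and the MAD-based threshold is an assumption.

```python
import numpy as np

def template_detect(cont, template, threshold=8.0):
    """Slide a waveform template along continuous data and return sample
    offsets where the normalized cross-correlation exceeds `threshold`
    times its median absolute deviation."""
    t = (template - template.mean()) / template.std()
    n = len(t)
    cc = np.empty(len(cont) - n + 1)
    for i in range(len(cc)):
        w = cont[i:i + n]
        s = w.std()
        cc[i] = 0.0 if s == 0 else np.dot(w - w.mean(), t) / (n * s)
    mad = np.median(np.abs(cc - np.median(cc)))
    return np.flatnonzero(cc > threshold * mad), cc
```

New detections found this way inherit the approximate location of the template event, which is what allows the enhanced catalog to delineate the fault without relocating every small event individually.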
Event detection in an assisted living environment.
Stroiescu, Florin; Daly, Kieran; Kuris, Benjamin
2011-01-01
This paper presents the design of a wireless event detection and in-building location awareness system. The system's architecture is based on using a body-worn sensor to detect events such as falls where they occur in an assisted living environment. This process involves developing event detection algorithms and transmitting such events wirelessly to an in-house network based on the 802.15.4 protocol. The network then generates alerts both in the assisted living facility and remotely at an offsite monitoring facility. The focus of this paper is on the design of the system architecture and the compliance challenges in applying this technology.
Guo, Lili; Qi, Junwei; Xue, Wei
2018-01-01
This article proposes a novel active localization method based on the mixed polarization multiple signal classification (MP-MUSIC) algorithm for positioning a metal target or an insulator target in the underwater environment by using a uniform circular antenna (UCA). The boundary element method (BEM) is introduced to analyze the boundary of the target by use of a matrix equation. In this method, an electric dipole source, as part of the locating system, is set perpendicular to the plane of the UCA. As a result, the UCA can only receive the induction field of the target. The potential of each electrode of the UCA is used as spatial-temporal localization data, and the field component in each direction does not need to be obtained, in contrast with the conventional fields-based localization method; this can be easily implemented in practical engineering applications. A simulation model and a physical experiment are constructed. The simulation and experiment results show accurate positioning performance, verifying the effectiveness of the proposed localization method for underwater target location. PMID:29439495
Rare event simulation in radiation transport
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kollman, Craig
1993-10-01
This dissertation studies methods for estimating extremely small probabilities by Monte Carlo simulation. Problems in radiation transport typically involve estimating very rare events or the expected value of a random variable which is with overwhelming probability equal to zero. These problems often have high-dimensional state spaces and irregular geometries so that analytic solutions are not possible. Monte Carlo simulation must be used to estimate the radiation dosage being transported to a particular location. If the area is well shielded, the probability of any one particular particle getting through is very small. Because of the large number of particles involved, even a tiny fraction penetrating the shield may represent an unacceptable level of radiation. It therefore becomes critical to be able to accurately estimate this extremely small probability. Importance sampling is a well known technique for improving the efficiency of rare event calculations. Here, a new set of probabilities is used in the simulation runs. The results are multiplied by the likelihood ratio between the true and simulated probabilities so as to keep the estimator unbiased. The variance of the resulting estimator is very sensitive to which new set of transition probabilities is chosen. It is shown that a zero-variance estimator does exist, but that its computation requires exact knowledge of the solution. A simple random walk with an associated killing model for the scatter of neutrons is introduced. Large deviation results for optimal importance sampling in random walks are extended to the case where killing is present. An adaptive "learning" algorithm for implementing importance sampling is given for more general Markov chain models of neutron scatter. For finite state spaces this algorithm is shown to give, with probability one, a sequence of estimates converging exponentially fast to the true solution.
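The likelihood-ratio reweighting at the heart of importance sampling can be illustrated with a toy one-dimensional example (estimating a small Gaussian tail probability), standing in for the particle-transport setting of the dissertation.

```python
import numpy as np

rng = np.random.default_rng(0)

def rare_prob_is(a=6.0, n=100_000):
    """Estimate P(Z > a) for a standard normal Z by importance sampling:
    draw from a normal shifted into the rare region and reweight each draw
    by the likelihood ratio phi(z)/phi(z - a) = exp(-a*z + a**2/2) so the
    estimator stays unbiased."""
    z = rng.normal(loc=a, scale=1.0, size=n)      # proposal centred at a
    lr = np.exp(-a * z + a**2 / 2.0)              # N(0,1)/N(a,1) density ratio
    return np.mean((z > a) * lr)

print(rare_prob_is())   # ~1e-9; naive Monte Carlo would see essentially no hits
```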
Algorithms for Determining Physical Responses of Structures Under Load
NASA Technical Reports Server (NTRS)
Richards, W. Lance; Ko, William L.
2012-01-01
Ultra-efficient real-time structural monitoring algorithms have been developed to provide extensive information about the physical response of structures under load. These algorithms are driven by actual strain data to measure accurately local strains at multiple locations on the surface of a structure. Through a single point load calibration test, these structural strains are then used to calculate key physical properties of the structure at each measurement location. Such properties include the structure's flexural rigidity (the product of the structure's modulus of elasticity and its moment of inertia) and the section modulus (the moment of inertia divided by the structure's half-depth). The resulting structural properties at each location can be used to determine the structure's bending moment, shear, and structural loads in real time while the structure is in service. The amount of structural information can be maximized through the use of highly multiplexed fiber Bragg grating technology using optical time domain reflectometry and optical frequency domain reflectometry, which can provide a local strain measurement every 10 mm on a single hair-sized optical fiber. Since local strain is used as input to the algorithms, this system serves multiple purposes of measuring strains and displacements, as well as determining structural bending moment, shear, and loads for assessing real-time structural health. The first step is to install a series of strain sensors on the structure's surface in such a way as to measure bending strains at desired locations. The next step is to perform a simple ground test calibration. For a beam of length l (see example), discretized into n sections and subjected to a tip load of P that places the beam in bending, the flexural rigidity of the beam can be experimentally determined at each measurement location x. The bending moment at each station can then be determined for any general set of loads applied during operation.
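Under the simplifying assumption of a cantilever with a tip load and a known half-depth, the calibration and in-service steps described above reduce to a few lines; this is an illustration of the idea, not the authors' algorithms.

```python
import numpy as np

def calibrate_flexural_rigidity(x, strain_cal, P, length, half_depth):
    """Single-point-load calibration along a cantilever-like beam.

    For a tip load P, the bending moment at station x is M(x) = P*(length - x),
    and the measured surface strain eps(x) gives EI(x) = M(x)*c / eps(x),
    where c is the structural half-depth (assumed known)."""
    moment_cal = P * (length - np.asarray(x, float))
    return moment_cal * half_depth / np.asarray(strain_cal, float)

def operational_moment(strain_op, EI, half_depth):
    """In service, the bending moment at each station follows from the
    calibrated rigidity and the measured strain: M(x) = EI(x)*eps(x)/c."""
    return EI * np.asarray(strain_op, float) / half_depth
```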
Lahr, J.C.; Chouet, B.A.; Stephens, C.D.; Power, J.A.; Page, R.A.
1994-01-01
Determination of the precise locations of seismic events associated with the 1989-1990 eruptions of Redoubt Volcano posed a number of problems, including poorly known crustal velocities, a sparse station distribution, and an abundance of events with emergent phase onsets. In addition, the high relief of the volcano could not be incorporated into the HYPOELLIPSE earthquake location algorithm. This algorithm was modified to allow hypocenters to be located above the elevation of the seismic stations. The velocity model was calibrated on the basis of a posteruptive seismic survey, in which four chemical explosions were recorded by eight stations of the permanent network supplemented with 20 temporary seismographs deployed on and around the volcanic edifice. The model consists of a stack of homogeneous horizontal layers; setting the top of the model at the summit allows events to be located anywhere within the volcanic edifice. Detailed analysis of hypocentral errors shows that the long-period (LP) events constituting the vigorous 23-hour swarm that preceded the initial eruption on December 14 could have originated from a point 1.4 km below the crater floor. A similar analysis of LP events in the swarm preceding the major eruption on January 2 shows they also could have originated from a point, the location of which is shifted 0.8 km northwest and 0.7 km deeper than the source of the initial swarm. We suggest this shift in LP activity reflects a northward jump in the pathway for magmatic gases caused by the sealing of the initial pathway by magma extrusion during the last half of December. Volcano-tectonic (VT) earthquakes did not occur until after the initial 23-hour-long swarm. They began slowly just below the LP source and their rate of occurrence increased after the eruption of 01:52 AST on December 15, when they shifted to depths of 6 to 10 km. After January 2 the VT activity migrated gradually northward; this migration suggests northward propagating withdrawal of magma from a plexus of dikes and/or sills located in the 6 to 10 km depth range. Precise relocations of selected events prior to January 2 clearly resolve a narrow, steeply dipping, pencil-shaped concentration of activity in the depth range of 1-7 km, which illuminates the conduit along which magma was transported to the surface. A third event type, named hybrid, which blends the characteristics of both VT and LP events, originates just below the LP source, and may reflect brittle failure along a zone intersecting a fluid-filled crack. The distribution of hybrid events is elongated 0.2-0.4 km in an east-west direction. This distribution may offer constraints on the orientation and size of the fluid-filled crack inferred to be the source of the LP events. © 1994.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Thompson, David R.; Wagstaff, Kiri L.; Majid, Walid A.
2011-07-10
Recent investigations reveal an important new class of transient radio phenomena that occur on submillisecond timescales. Often, transient surveys' data volumes are too large to archive exhaustively. Instead, an online automatic system must excise impulsive interference and detect candidate events in real time. This work presents a case study using data from multiple geographically distributed stations to perform simultaneous interference excision and transient detection. We present several algorithms that incorporate dedispersed data from multiple sites, and report experiments with a commensal real-time transient detection system on the Very Long Baseline Array. We test the system using observations of pulsar B0329+54. The multiple-station algorithms enhanced sensitivity for detection of individual pulses. These strategies could improve detection performance for a future generation of geographically distributed arrays such as the Australian Square Kilometre Array Pathfinder and the Square Kilometre Array.
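As background, the incoherent dedispersion step that such algorithms apply per station can be sketched as follows for a single dynamic spectrum; the dispersion constant is standard, while the data layout is an assumption made for illustration.

```python
import numpy as np

K_DM = 4.149  # dispersion constant in ms GHz^2 pc^-1 cm^3

def dedisperse(dynspec, freqs_ghz, dm, dt_ms):
    """Incoherent dedispersion of a dynamic spectrum.

    dynspec   : (n_chan, n_time) power vs. frequency and time
    freqs_ghz : (n_chan,) channel centre frequencies in GHz
    dm        : trial dispersion measure in pc cm^-3
    dt_ms     : sample interval in ms

    Each channel is rolled back by the cold-plasma delay relative to the
    highest channel, so a dispersed pulse lines up before summing over
    frequency.  Station alignment and RFI excision are not part of this
    sketch."""
    delays = K_DM * dm * (freqs_ghz**-2 - freqs_ghz.max()**-2)   # ms
    shifts = np.round(delays / dt_ms).astype(int)
    out = np.empty_like(dynspec)
    for i, s in enumerate(shifts):
        out[i] = np.roll(dynspec[i], -s)
    return out.sum(axis=0)   # dedispersed time series
```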
2009-09-30
MULTIPLE-ARRAY DETECTION, ASSOCIATION AND LOCATION OF INFRASOUND AND SEISMO-ACOUSTIC ...
[Report fragment] ... signals detected by infrasound arrays were discriminated as surface explosions, not earthquakes, and are marked by yellow ... velocity, and amplitude of detected signals at each array. Horizontal propagation velocity of infrasound signals, also called celerity, is used not only ...
Network hydraulics inclusion in water quality event detection using multiple sensor stations data.
Oliker, Nurit; Ostfeld, Avi
2015-09-01
Event detection is one of the most challenging current topics in water distribution systems analysis: how regular on-line hydraulic (e.g., pressure, flow) and water quality (e.g., pH, residual chlorine, turbidity) measurements at different network locations can be efficiently utilized to detect water quality contamination events. This study describes an integrated event detection model which combines data from multiple sensor stations with network hydraulics. To date, event detection modelling has largely been limited to a single sensor station location and dataset. Single sensor station models are detached from network hydraulics insights and as a result might be significantly exposed to false positive alarms. This work aims to reduce this limitation by integrating local and spatial hydraulic data understanding into an event detection model. The spatial analysis complements the local event detection effort by discovering events with lower signatures through exploring the sensors' mutual hydraulic influences. The unique contribution of this study is in incorporating hydraulic simulation information into the overall event detection process of spatially distributed sensors. The methodology is demonstrated on two example applications using base runs and sensitivity analyses. Results show a clear advantage of the suggested model over single-sensor event detection schemes.
Reality Check Algorithm for Complex Sources in Early Warning
NASA Astrophysics Data System (ADS)
Karakus, G.; Heaton, T. H.
2013-12-01
In almost all currently operating earthquake early warning (EEW) systems, presently available seismic data are used to predict future shaking. In most cases, location and magnitude are estimated. We are developing an algorithm to test the goodness of that prediction in real time. We monitor envelopes of acceleration, velocity, and displacement; if they deviate significantly from the envelopes predicted by Cua's envelope GMPEs, then we declare an overfit (perhaps a false alarm) or an underfit (possibly a larger event has just occurred). This algorithm is designed to provide a robust measure and to work as quickly as possible in real time. We monitor the logarithm of the ratio between the envelopes of the ongoing observed event and the envelopes of the channels of ground motion predicted by the Virtual Seismologist (VS) (Cua, G. and Heaton, T.). We then recursively filter this result with a simple running median (de-spiking operator) to minimize the effect of a single high value. Depending on the filtered value we make a decision: if the value is large enough (e.g., >1), we declare that a larger event is in progress; similarly, if the value is small enough (e.g., <-1), we declare a false alarm. We design the algorithm to work over a wide range of amplitude scales; that is, it should work for both small and large events.
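A minimal sketch of the monitoring loop, assuming precomputed observed and predicted envelope arrays and the +/-1 decision thresholds quoted above, could look like this; the envelope prediction itself (Cua's GMPEs) is outside the sketch.

```python
import numpy as np

def reality_check(obs_env, pred_env, window=31, hi=1.0, lo=-1.0):
    """Running check of an early-warning prediction against live data.

    Computes log10(observed envelope / predicted envelope), de-spikes it
    with a running median, and flags 'underfit' (possibly a larger event)
    when the filtered ratio exceeds `hi`, or 'overfit' (possible false
    alarm) when it drops below `lo`.  Window length is an assumption."""
    ratio = np.log10(np.maximum(obs_env, 1e-12) / np.maximum(pred_env, 1e-12))
    half = window // 2
    med = np.array([np.median(ratio[max(0, i - half):i + half + 1])
                    for i in range(len(ratio))])
    flags = np.full(len(ratio), "ok", dtype=object)
    flags[med > hi] = "underfit"
    flags[med < lo] = "overfit"
    return med, flags
```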
NASA Astrophysics Data System (ADS)
Ceré, Raphaël; Kaiser, Christian
2015-04-01
Currently, three quarters of the Swiss population is living in urban areas. The total population is still increasing, and urbanized space is increasing even faster. Consequently, the intensity of use has decreased, but the exposure of the urban space to natural events has grown along with the cost related to the impact of hazards. In line with this fact, during the 20th century there has been a noticeable increase of natural disasters accompanied by the rapid increase of the world population, leading to higher costs. In addition to the fact that more people are exposed to natural hazards, the value of goods globally has increased more than proportionally. Consequently, the vulnerability of urban space is, more than ever before, a major issue for socio-economic development. Here, vulnerability is defined as the potential human loss or loss of infrastructure caused by a hazardous event. It encompasses factors of urban infrastructure, population and the environment, which increase the susceptibility of a location to the impact of hazards. This paper describes a novel method for improving the interactive use of exploratory data analysis in the context of minimizing vulnerability and disaster risk by prevention or mitigation. This method is used to assess the similarity between different locations with respect to several characteristics relevant to vulnerability at different scales, allowing for automatic display of multiple locations similar to the one under investigation by an expert. Visualizing vulnerability simultaneously for several locations allows for analyzing and comparing metric characteristics between multiple places and at different scales. The interactivity aspect is also useful for understanding vulnerability patterns and it facilitates disaster risk management and decisions on global preventive measures in urban spaces. Metrics for vulnerability assessment can be extracted from extensive geospatial datasets such as high-resolution digital elevation models (DEM) or individual building vector layers. Morphological properties can be calculated for different scales using different moving window sizes. Multi-scale measures such as fractal dimension or lacunarity can be integrated into the analysis. Other properties such as different densities and ratios are also easy to calculate and include. Based on a rather extensive set of properties or features, a feature selection or extraction method such as Principal Component Analysis can be used to obtain a subset of relevant properties. In a second step, an unsupervised classification algorithm such as Self-Organizing Maps can be used to group similar locations together, and criteria such as the intra-group distance and geographic distribution can be used for selecting relevant locations to be displayed in an interactive data exploration interface along with a given main location. A case study for a part of Switzerland illustrates the presented approach within a working interactive tool, showing its feasibility and allowing for an investigation of the usefulness of our method.
Remote listening and passive acoustic detection in a 3-D environment
NASA Astrophysics Data System (ADS)
Barnhill, Colin
Teleconferencing environments are a necessity in business, education and personal communication. They allow for the communication of information to remote locations without the need for travel and the necessary time and expense required for that travel. Visual information can be communicated using cameras and monitors. The advantage of visual communication is that an image can capture multiple objects and convey them, using a monitor, to a large group of people regardless of the receiver's location. This is not the case for audio. Currently, most experimental teleconferencing systems' audio is based on stereo recording and reproduction techniques. The problem with this solution is that it is only effective for one or two receivers. To accurately capture a sound environment consisting of multiple sources and to recreate that for a group of people is an unsolved problem. This work will focus on new methods of multiple source 3-D environment sound capture and applications using these captured environments. Using spherical microphone arrays, it is now possible to capture a true 3-D environment. A spherical harmonic transform on the array's surface allows us to determine the basis functions (spherical harmonics) for all spherical wave solutions (up to a fixed order). This spherical harmonic decomposition (SHD) allows us to not only look at the time and frequency characteristics of an audio signal but also the spatial characteristics of an audio signal. In this way, a spherical harmonic transform is analogous to a Fourier transform in that a Fourier transform transforms a signal into the frequency domain and a spherical harmonic transform transforms a signal into the spatial domain. The SHD also decouples the input signals from the microphone locations. Using the SHD of a soundfield, new algorithms are available for remote listening, acoustic detection, and signal enhancement. The new algorithms presented in this paper show distinct advantages over previous detection and listening algorithms, especially for multiple speech sources and room environments. The algorithms use high-order (spherical harmonic) beamforming and power signal characteristics for source localization and signal enhancement. These methods are applied to remote listening, surveillance, and teleconferencing.
Semiautomated tremor detection using a combined cross-correlation and neural network approach
NASA Astrophysics Data System (ADS)
Horstmann, T.; Harrington, R. M.; Cochran, E. S.
2013-09-01
Despite observations of tectonic tremor in many locations around the globe, the emergent phase arrivals, low-amplitude waveforms, and variable event durations make automatic detection a nontrivial task. In this study, we employ a new method to identify tremor in large data sets using a semiautomated technique. The method first reduces the data volume with an envelope cross-correlation technique, followed by a Self-Organizing Map (SOM) algorithm to identify and classify event types. The method detects tremor in an automated fashion after calibrating for a specific data set, hence we refer to it as being "semiautomated". We apply the semiautomated detection algorithm to a newly acquired data set of waveforms from a temporary deployment of 13 seismometers near Cholame, California, from May 2010 to July 2011. We manually identify tremor events in a 3 week long test data set and compare to the SOM output and find a detection accuracy of 79.5%. Detection accuracy improves with increasing signal-to-noise ratios and number of available stations. We find detection completeness of 96% for tremor events with signal-to-noise ratios above 3 and optimal results when data from at least 10 stations are available. We compare the SOM algorithm to the envelope correlation method of Wech and Creager and find the SOM performs significantly better, at least for the data set examined here. Using the SOM algorithm, we detect 2606 tremor events with a cumulative signal duration of nearly 55 h during the 13 month deployment. Overall, the SOM algorithm is shown to be a flexible new method that utilizes characteristics of the waveforms to identify tremor from noise or other seismic signals.
Data fusion for a vision-aided radiological detection system: Calibration algorithm performance
NASA Astrophysics Data System (ADS)
Stadnikia, Kelsey; Henderson, Kristofer; Martin, Allan; Riley, Phillip; Koppal, Sanjeev; Enqvist, Andreas
2018-05-01
In order to improve the ability to detect, locate, track and identify nuclear/radiological threats, the University of Florida nuclear detection community has teamed up with the 3D vision community to collaborate on a low-cost data fusion system. The key is to develop an algorithm to fuse the data from multiple radiological and 3D vision sensors as one system. The system under development at the University of Florida is being assessed with various types of radiological detectors and widely available visual sensors. A series of experiments was devised utilizing two EJ-309 liquid organic scintillation detectors (one primary and one secondary), a Microsoft Kinect for Windows v2 sensor, and a Velodyne HDL-32E High Definition LiDAR sensor, a highly sensitive vision sensor primarily used to generate data for self-driving cars. Each experiment consisted of 27 static measurements of a source arranged in a cube with three different distances in each dimension. The source used was Cf-252. The calibration algorithm developed is used to calibrate the relative 3D location of the two different types of sensors without the need to measure it by hand, thus preventing operator manipulation and human error. The algorithm can also account for the facility-dependent deviation from ideal data fusion correlation. Using the vision sensor alone to determine the location of a detector would also limit the possible locations, and it would not account for room dependence (the facility-dependent deviation) when generating a detector pseudo-location to be used in later data analysis. Using manually measured source location data, our algorithm predicted the offset detector location within an average calibration-difference of 20 cm from its actual location. Calibration-difference is the Euclidean distance from the algorithm-predicted detector location to the measured detector location. The Kinect vision sensor data produced an average calibration-difference of 35 cm and the HDL-32E produced an average calibration-difference of 22 cm. Using NaI and He-3 detectors in place of the EJ-309, the calibration-difference was 52 cm for NaI and 75 cm for He-3. The algorithm is not detector dependent; however, from these results it was determined that detector-dependent adjustments are required.
Parente, Daniel J; Ray, J Christian J; Swint-Kruse, Liskin
2015-12-01
As proteins evolve, amino acid positions key to protein structure or function are subject to mutational constraints. These positions can be detected by analyzing sequence families for amino acid conservation or for coevolution between pairs of positions. Coevolutionary scores are usually rank-ordered and thresholded to reveal the top pairwise scores, but they also can be treated as weighted networks. Here, we used network analyses to bypass a major complication of coevolution studies: For a given sequence alignment, alternative algorithms usually identify different top pairwise scores. We reconciled results from five commonly used, mathematically divergent algorithms (ELSC, McBASC, OMES, SCA, and ZNMI), using the LacI/GalR and 1,6-bisphosphate aldolase protein families as models. Calculations used unthresholded coevolution scores from which column-specific properties such as sequence entropy and random noise were subtracted; "central" positions were identified by calculating various network centrality scores. When compared among algorithms, network centrality methods, particularly eigenvector centrality, showed markedly better agreement than comparisons of the top pairwise scores. Positions with large centrality scores occurred at key structural locations and/or were functionally sensitive to mutations. Further, the top central positions often differed from those with top pairwise coevolution scores: instead of a few strong scores, central positions often had multiple, moderate scores. We conclude that eigenvector centrality calculations reveal a robust evolutionary pattern of constraints, detectable by divergent algorithms, that occurs at key protein locations. Finally, we discuss the fact that multiple patterns coexist in evolutionary data that, together, give rise to emergent protein functions. © 2015 Wiley Periodicals, Inc.
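The eigenvector-centrality step can be sketched generically with networkx, treating the unthresholded, background-corrected coevolution scores as edge weights of a position network. The score matrix and position labels below are synthetic placeholders, not data from the LacI/GalR analysis.

```python
import networkx as nx
import numpy as np

def central_positions(score_matrix, labels, top_k=10):
    """Rank alignment positions by eigenvector centrality of the coevolution network.

    score_matrix : symmetric (n x n) array of pairwise coevolution scores
    labels       : list of n position labels
    """
    G = nx.Graph()
    n = score_matrix.shape[0]
    for i in range(n):
        for j in range(i + 1, n):
            w = float(score_matrix[i, j])
            if w > 0:                       # keep only positive weights
                G.add_edge(labels[i], labels[j], weight=w)
    cent = nx.eigenvector_centrality(G, weight="weight", max_iter=1000)
    return sorted(cent.items(), key=lambda kv: kv[1], reverse=True)[:top_k]

# toy example with a random symmetric score matrix
rng = np.random.default_rng(0)
m = rng.random((20, 20))
m = (m + m.T) / 2
print(central_positions(m, [f"pos{i}" for i in range(20)], top_k=5))
```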
Subspace-based analysis of the ERT inverse problem
NASA Astrophysics Data System (ADS)
Ben Hadj Miled, Mohamed Khames; Miller, Eric L.
2004-05-01
In a previous work, we proposed a source-type formulation to the electrical resistance tomography (ERT) problem. Specifically, we showed that inhomogeneities in the medium can be viewed as secondary sources embedded in the homogeneous background medium and located at positions associated with variation in electrical conductivity. Assuming a piecewise constant conductivity distribution, the support of equivalent sources is equal to the boundary of the inhomogeneity. The estimation of the anomaly shape takes the form of an inverse source-type problem. In this paper, we explore the use of subspace methods to localize the secondary equivalent sources associated with discontinuities in the conductivity distribution. Our first alternative is the multiple signal classification (MUSIC) algorithm, which is commonly used in the localization of multiple sources. The idea is to project a finite collection of plausible pole (or dipole) sources onto an estimated signal subspace and select those with the largest correlations. In ERT, secondary sources are excited simultaneously but in different ways, i.e. with distinct amplitude patterns, depending on the locations and amplitudes of primary sources. If the number of receivers is "large enough", different source configurations can lead to a set of observation vectors that span the data subspace. However, since sources that are spatially close to each other have highly correlated signatures, separation of such signals becomes very difficult in the presence of noise. To overcome this problem we consider iterative MUSIC algorithms like R-MUSIC and RAP-MUSIC. These recursive algorithms pose a computational burden as they require multiple large combinatorial searches. Results obtained with these algorithms using simulated data of different conductivity patterns are presented.
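A generic numpy sketch of the MUSIC projection idea referenced above (not the ERT-specific implementation): the noise subspace is estimated from the sample covariance of the observation vectors, and each candidate source signature is scored by how nearly orthogonal it is to that subspace. The candidate-signature dictionary stands in for the pole/dipole responses mentioned in the abstract.

```python
import numpy as np

def music_spectrum(data, signatures, n_sources):
    """Score candidate source signatures with a MUSIC-type pseudospectrum.

    data       : (n_obs, n_snapshots) array of observation vectors
    signatures : (n_obs, n_candidates) array, one column per candidate source
    n_sources  : assumed dimension of the signal subspace
    """
    R = data @ data.conj().T / data.shape[1]          # sample covariance
    eigval, eigvec = np.linalg.eigh(R)                # eigenvalues ascending
    En = eigvec[:, : data.shape[0] - n_sources]       # noise-subspace basis
    proj = En.conj().T @ signatures                   # projection onto noise subspace
    denom = np.sum(np.abs(proj) ** 2, axis=0)
    norm = np.sum(np.abs(signatures) ** 2, axis=0)
    return norm / (denom + 1e-15)                     # large value -> likely source
```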
Courses of action for effects based operations using evolutionary algorithms
NASA Astrophysics Data System (ADS)
Haider, Sajjad; Levis, Alexander H.
2006-05-01
This paper presents an Evolutionary Algorithms (EAs) based approach to identify effective courses of action (COAs) in Effects Based Operations. The approach uses Timed Influence Nets (TINs) as the underlying mathematical model to capture a dynamic uncertain situation. TINs provide a concise graph-theoretic probabilistic approach to specify the cause and effect relationships that exist among the variables of interest (actions, desired effects, and other uncertain events) in a problem domain. The purpose of building these TIN models is to identify and analyze several alternative courses of action. The current practice is to use trial-and-error-based techniques, which are not only labor intensive but also produce sub-optimal results and are not capable of modeling constraints among actionable events. The EA-based approach presented in this paper aims to overcome these limitations. The approach generates multiple COAs that are nearly equivalent in terms of achieving the desired effect. The purpose of generating multiple COAs is to give several alternatives to a decision maker. Moreover, the alternative COAs can be generalized based on the relationships that exist among the actions and their execution timings. The approach also allows a system analyst to capture certain types of constraints among actionable events.
Online track detection in triggerless mode for INO
NASA Astrophysics Data System (ADS)
Jain, A.; Padmini, S.; Joseph, A. N.; Mahesh, P.; Preetha, N.; Behere, A.; Sikder, S. S.; Majumder, G.; Behera, S. P.
2018-03-01
The India-based Neutrino Observatory (INO) is a proposed particle physics research project to study the atmospheric neutrinos. The INO Iron Calorimeter (ICAL) will consist of 28,800 detectors with 3.6 million electronic channels, expected to activate at a 100 Hz singles rate and produce data at a rate of 3 GBps. The collected data contain a few real hits generated by muon tracks, with the remainder being noise-induced spurious hits. The estimated reduction factor after filtering the data of interest out of the generated data is of the order of 10^3. This makes trigger generation critical for efficient data collection and storage. A trigger is generated by detecting coincidences across multiple channels satisfying the trigger criteria within a small window of 200 ns in the trigger region. As the probability of neutrino interaction is very low, the track detection algorithm has to be efficient and fast enough to process 5 × 10^6 event candidates/s without introducing significant dead time, so that not a single neutrino event is missed. A hardware-based trigger system is presently proposed for online track detection, given the stringent timing requirements. Though the trigger system can be designed to be scalable, the large number of hardware devices and interconnections makes it a complex and expensive solution with limited flexibility. A software-based track detection approach working on the hit information offers an elegant solution, with the possibility of varying the trigger criteria to select various potentially interesting physics events. An event selection approach for an alternative triggerless readout scheme has been developed. The algorithm is mathematically simple, robust and parallelizable. It has been validated by detecting simulated muon events for energies in the range of 1-10 GeV with 100% efficiency at a processing rate of 60 μs/event on a 16-core machine. The algorithm and the results of a proof of concept for its faster implementation over multiple cores are presented. The paper also discusses harnessing the computing capabilities of a multi-core computing farm, thereby optimizing the number of nodes required for the proposed system.
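The coincidence criterion described above can be illustrated with a short sketch: hit times are scanned with a sliding window and a candidate event is flagged whenever the hit multiplicity within the window reaches a threshold. The 200 ns width comes from the abstract; the multiplicity threshold and the toy data are placeholders, not INO parameters.

```python
import numpy as np

def find_coincidences(hit_times_ns, window_ns=200.0, min_hits=10):
    """Return start times of windows containing at least min_hits hits.

    hit_times_ns : 1-D array of hit timestamps in nanoseconds (any order)
    """
    t = np.sort(np.asarray(hit_times_ns, dtype=float))
    # for each hit i, count hits falling in [t[i], t[i] + window_ns)
    right = np.searchsorted(t, t + window_ns, side="left")
    multiplicity = right - np.arange(t.size)
    return t[multiplicity >= min_hits]

# toy example: uniform noise hits plus a burst of 15 hits near 5e5 ns
rng = np.random.default_rng(1)
noise = rng.uniform(0, 1e6, 200)
burst = 5e5 + rng.uniform(0, 150, 15)
print(find_coincidences(np.concatenate([noise, burst])))
```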
A Novel Control Strategy for Autonomous Operation of Isolated Microgrid with Prioritized Loads
NASA Astrophysics Data System (ADS)
Kumar, R. Hari; Ushakumari, S.
2018-05-01
Maintenance of power balance between generation and demand is one of the most critical requirements for the stable operation of a power system network. To mitigate the power imbalance during the occurrence of any disturbance in the system, fast acting algorithms are inevitable. This paper proposes a novel algorithm for load shedding and network reconfiguration in an isolated microgrid with prioritized loads and multiple islands, which will help to quickly restore the system in the event of a fault. The performance of the proposed algorithm is enhanced using genetic algorithm and its effectiveness is illustrated with simulation results on modified Consortium for Electric Reliability Technology Solutions (CERTS) microgrid.
Warrick, P A; Precup, D; Hamilton, E F; Kearney, R E
2007-01-01
To develop a singular-spectrum analysis (SSA) based change-point detection algorithm applicable to fetal heart rate (FHR) monitoring to improve the detection of deceleration events. We present a method for decomposing a signal into near-orthogonal components via the discrete cosine transform (DCT) and apply this in a novel online manner to change-point detection based on SSA. The SSA technique forms models of the underlying signal that can be compared over time; models that are sufficiently different indicate signal change points. To adapt the algorithm to deceleration detection, where many successive similar change events can occur, we modify the standard SSA algorithm to hold the reference model constant under such conditions, an approach that we term "base-hold SSA". The algorithm is applied to a database of 15 FHR tracings that have been preprocessed to locate candidate decelerations and is compared to the markings of an expert obstetrician. Of the 528 true and 1285 false decelerations presented to the algorithm, the base-hold approach improved on standard SSA, reducing the number of missed decelerations from 64 to 49 (21.9%) while maintaining the same reduction in false positives (278). The standard SSA assumption that changes are infrequent does not apply to FHR analysis, where decelerations can occur successively and in close proximity; our base-hold SSA modification improves detection of these types of event series.
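The base-hold idea can be sketched independently of the full SSA machinery: a reference model (here, leading DCT coefficients, echoing the DCT decomposition mentioned above) is compared with a model of the current window, and the reference is advanced only when no change is declared, so a run of successive decelerations is judged against the same baseline. Window lengths, the number of coefficients, and the threshold below are illustrative, not the article's settings.

```python
import numpy as np
from scipy.fft import dct

def dct_model(segment, n_coeff=8):
    """Represent a signal segment by its leading DCT coefficients (unit norm)."""
    c = dct(segment, norm="ortho")[:n_coeff]
    return c / (np.linalg.norm(c) + 1e-12)

def base_hold_detect(signal, win=128, step=32, n_coeff=8, threshold=0.5):
    """Return window start indices flagged as change points under the base-hold rule."""
    ref = dct_model(signal[:win], n_coeff)
    changes = []
    for start in range(step, len(signal) - win, step):
        cur = dct_model(signal[start:start + win], n_coeff)
        distance = 1.0 - abs(np.dot(ref, cur))   # dissimilarity of the two models
        if distance > threshold:
            changes.append(start)                # change detected: hold the reference
        else:
            ref = cur                            # no change: update the reference model
    return changes
```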
NASA Astrophysics Data System (ADS)
Coopersmith, Evan Joseph
The techniques and information employed for decision-making vary with the spatial and temporal scope of the assessment required. In modern agriculture, the farm owner or manager makes decisions on a day-to-day or even hour-to-hour basis for dozens of fields scattered over as much as a fifty-mile radius from some central location. Following precipitation events, land begins to dry. Land-owners and managers often trace serpentine paths of 150+ miles every morning to inspect the conditions of their various parcels. His or her objective lies in appropriate resource usage -- is a given tract of land dry enough to be workable at this moment or would he or she be better served waiting patiently? Longer-term, these owners and managers decide upon which seeds will grow most effectively and which crops will make their operations profitable. At even longer temporal scales, decisions are made regarding which fields must be acquired and sold and what types of equipment will be necessary in future operations. This work develops and validates algorithms for these shorter-term decisions, along with models of national climate patterns and climate changes to enable longer-term operational planning. A test site at the University of Illinois South Farms (Urbana, IL, USA) served as the primary location to validate machine learning algorithms, employing public sources of precipitation and potential evapotranspiration to model the wetting/drying process. In expanding such local decision support tools to locations on a national scale, one must recognize the heterogeneity of hydroclimatic and soil characteristics throughout the United States. Machine learning algorithms modeling the wetting/drying process must address this variability, and yet it is wholly impractical to construct a separate algorithm for every conceivable location. For this reason, a national hydrological classification system is presented, allowing clusters of hydroclimatic similarity to emerge naturally from annual regime curve data and facilitate the development of cluster-specific algorithms. Given the desire to enable intelligent decision-making at any location, this classification system is developed in a manner that will allow for classification anywhere in the U.S., even in an ungauged basin. Daily time series data from 428 catchments in the MOPEX database are analyzed to produce an empirical classification tree, partitioning the United States into regions of hydroclimatic similarity. In constructing a classification tree based upon 55 years of data, it is important to recognize the non-stationary nature of climate data. The shifts in climatic regimes will cause certain locations to shift their ultimate position within the classification tree, requiring decision-makers to alter land usage, farming practices, and equipment needs, and algorithms to adjust accordingly. This work adapts the classification model to address the issue of regime shifts over larger temporal scales and suggests how land-usage and farming protocol may vary from hydroclimatic shifts in decades to come. Finally, the generalizability of the hydroclimatic classification system is tested with a physically-based soil moisture model calibrated at several locations throughout the continental United States. The soil moisture model is calibrated at a given site and then applied with the same parameters at other sites within and outside the same hydroclimatic class. 
The model's performance deteriorates minimally if the calibration and validation locations are within the same hydroclimatic class, but deteriorates significantly if the calibration and validation sites are located in different hydroclimatic classes. These soil moisture estimates at the field scale are then further refined by the introduction of LiDAR elevation data, distinguishing faster-drying peaks and ridges from slower-drying valleys. The inclusion of LiDAR enabled multiple locations within the same field to be predicted accurately despite non-identical topography. This cross-application of parametric calibrations and LiDAR-driven disaggregation facilitates decision support at locations without proximally located soil moisture sensors.
Federated learning of predictive models from federated Electronic Health Records.
Brisimi, Theodora S; Chen, Ruidi; Mela, Theofanie; Olshevsky, Alex; Paschalidis, Ioannis Ch; Shi, Wei
2018-04-01
In an era of "big data," computationally efficient and privacy-aware solutions for large-scale machine learning problems become crucial, especially in the healthcare domain, where large amounts of data are stored in different locations and owned by different entities. Past research has been focused on centralized algorithms, which assume the existence of a central data repository (database) which stores and can process the data from all participants. Such an architecture, however, can be impractical when data are not centrally located, it does not scale well to very large datasets, and introduces single-point of failure risks which could compromise the integrity and privacy of the data. Given scores of data widely spread across hospitals/individuals, a decentralized computationally scalable methodology is very much in need. We aim at solving a binary supervised classification problem to predict hospitalizations for cardiac events using a distributed algorithm. We seek to develop a general decentralized optimization framework enabling multiple data holders to collaborate and converge to a common predictive model, without explicitly exchanging raw data. We focus on the soft-margin l 1 -regularized sparse Support Vector Machine (sSVM) classifier. We develop an iterative cluster Primal Dual Splitting (cPDS) algorithm for solving the large-scale sSVM problem in a decentralized fashion. Such a distributed learning scheme is relevant for multi-institutional collaborations or peer-to-peer applications, allowing the data holders to collaborate, while keeping every participant's data private. We test cPDS on the problem of predicting hospitalizations due to heart diseases within a calendar year based on information in the patients Electronic Health Records prior to that year. cPDS converges faster than centralized methods at the cost of some communication between agents. It also converges faster and with less communication overhead compared to an alternative distributed algorithm. In both cases, it achieves similar prediction accuracy measured by the Area Under the Receiver Operating Characteristic Curve (AUC) of the classifier. We extract important features discovered by the algorithm that are predictive of future hospitalizations, thus providing a way to interpret the classification results and inform prevention efforts. Copyright © 2018 Elsevier B.V. All rights reserved.
A Compact VLSI System for Bio-Inspired Visual Motion Estimation.
Shi, Cong; Luo, Gang
2018-04-01
This paper proposes a bio-inspired visual motion estimation algorithm based on motion energy, along with its compact very-large-scale integration (VLSI) architecture using low-cost embedded systems. The algorithm mimics motion perception functions of retina, V1, and MT neurons in a primate visual system. It involves operations of ternary edge extraction, spatiotemporal filtering, motion energy extraction, and velocity integration. Moreover, we propose the concept of confidence map to indicate the reliability of estimation results on each probing location. Our algorithm involves only additions and multiplications during runtime, which is suitable for low-cost hardware implementation. The proposed VLSI architecture employs multiple (frame, pixel, and operation) levels of pipeline and massively parallel processing arrays to boost the system performance. The array unit circuits are optimized to minimize hardware resource consumption. We have prototyped the proposed architecture on a low-cost field-programmable gate array platform (Zynq 7020) running at 53-MHz clock frequency. It achieved 30-frame/s real-time performance for velocity estimation on 160 × 120 probing locations. A comprehensive evaluation experiment showed that the estimated velocity by our prototype has relatively small errors (average endpoint error < 0.5 pixel and angular error < 10°) for most motion cases.
Estimation of anomaly location and size using electrical impedance tomography.
Kwon, Ohin; Yoon, Jeong Rock; Seo, Jin Keun; Woo, Eung Je; Cho, Young Gu
2003-01-01
We developed a new algorithm that estimates the locations and sizes of anomalies in an electrically conducting medium based on the electrical impedance tomography (EIT) technique. When only the boundary current and voltage measurements are available, it is not practically feasible to reconstruct accurate high-resolution cross-sectional conductivity or resistivity images of a subject. In this paper, we focus our attention on the estimation of locations and sizes of anomalies with different conductivity values compared with the background tissues. We showed the performance of the algorithm with experimental results using a 32-channel EIT system and a saline phantom. With about 1.73% measurement error in the boundary current-voltage data, we found that the minimal size (area) of the detectable anomaly is about 0.72% of the size (area) of the phantom. Potential applications include the monitoring of impedance-related physiological events and bubble detection in two-phase flow. Since this new algorithm requires neither a forward solver nor a time-consuming minimization process, it is fast enough for various real-time applications in medicine and nondestructive testing.
He, Xinhua; Hu, Wenfa
2014-01-01
This paper presents a multiple-rescue model for an emergency supply chain system under uncertainty in the large-scale area affected by a disaster. The proposed methodology takes into consideration that the rescue demands caused by a large-scale disaster are scattered across several locations; the servers are arranged in multiple echelons (resource depots, distribution centers, and rescue center sites) located in different places but coordinated within one emergency supply chain system; and, depending on the types of rescue demands, one or more distinct servers dispatch emergency resources along different vehicle routes, while emergency rescue services queue at multiple rescue-demand locations. This emergency system is modeled as a location and allocation model that minimizes queuing response time. A solution to this complex mathematical problem is developed based on a genetic algorithm. Finally, a case study of an emergency supply chain system operating in Shanghai is discussed. The results demonstrate the robustness and applicability of the proposed model.
Tool for Automated Retrieval of Generic Event Tracks (TARGET)
NASA Technical Reports Server (NTRS)
Clune, Thomas; Freeman, Shawn; Cruz, Carlos; Burns, Robert; Kuo, Kwo-Sen; Kouatchou, Jules
2013-01-01
Methods have been developed to identify and track tornado-producing mesoscale convective systems (MCSs) automatically over the continental United States, in order to facilitate systematic studies of these powerful and often destructive events. Several data sources were combined to ensure event identification accuracy. Records of watches and warnings issued by the National Weather Service (NWS), together with tornado locations and tracks from the Tornado History Project (THP), were used to locate MCSs in high-resolution precipitation observations and GOES infrared (11-micron) Rapid Scan Operation (RSO) imagery. Thresholds are then applied to the latter two data sets to define MCS events and track their development. MCSs produce a broad range of severe convective weather events that significantly affect the living conditions of the populations exposed to them. Understanding how MCSs grow and develop could help scientists improve their weather prediction models, and also provide tools to decision-makers whose goals are to protect populations and their property. Associating storm cells across frames of remotely sensed images poses a difficult problem because storms evolve, split, and merge. Any storm-tracking method should include the following processes: storm identification, storm tracking, and quantification of storm intensity and activity. The spatiotemporal coordinates of the tracks will enable researchers to obtain other coincident observations to conduct more thorough studies of these events. In addition to the tracked locations, the areal extents, precipitation intensities, and accumulations of these events, all as functions of time, were also obtained and recorded. All parameters so derived can be catalogued into a moving object database (MODB) for custom queries. The purpose of this software is to provide a generalized, cross-platform, pluggable tool for identifying events within a set of scientific data based upon specified criteria, with the possibility of storing identified events into a searchable database. The core of the application uses an implementation of the connected component labeling (CCL) algorithm to identify areas of interest, then uses a set of criteria to establish spatial and temporal relationships between identified components. The CCL algorithm is commonly used to identify objects within images in computer vision; this application applies it to scientific data sets using arbitrary criteria. The most novel concept was applying a generalized CCL implementation to scientific data sets for establishing events both spatially and temporally. The combination of several existing concepts (pluggable components, generalized CCL algorithm, etc.) into one application is also novel. In addition, how the system is designed, i.e., its extensibility with pluggable components and its configurability with a simple configuration file, is innovative. This allows the system to be applied to new scenarios with ease.
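A minimal sketch of the connected-component step the tool is built around, using scipy.ndimage on a thresholded 2-D field. The threshold and the toy field are placeholders for the precipitation and infrared criteria described above, not the tool's configuration.

```python
import numpy as np
from scipy import ndimage

def label_events(field, threshold):
    """Label connected regions exceeding a threshold and return centroids and sizes.

    field     : 2-D array (e.g., one time step of a gridded precipitation field)
    threshold : value defining the event mask
    """
    mask = field > threshold
    labels, n_components = ndimage.label(mask)
    indices = range(1, n_components + 1)
    centroids = ndimage.center_of_mass(mask, labels, indices)
    sizes = ndimage.sum(mask, labels, indices)          # pixel count per component
    return labels, list(zip(centroids, sizes))

# toy example: two blobs embedded in a random field
rng = np.random.default_rng(2)
f = rng.random((50, 50))
f[10:15, 10:15] += 2.0
f[30:40, 35:45] += 2.0
labels, comps = label_events(f, threshold=1.5)
print(len(comps), "components:", comps)
```

Tracking in time then reduces to associating components in consecutive frames, for example by overlap or centroid distance, which is where the tool's spatial and temporal criteria come in.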
Detecting the red tide based on remote sensing data in optically complex East China Sea
NASA Astrophysics Data System (ADS)
Xu, Xiaohui; Pan, Delu; Mao, Zhihua; Tao, Bangyi; Liu, Qiong
2012-09-01
Red tides not only damage marine fishery production, degrade the marine environment, and affect the coastal tourism industry, but can also poison, and even kill, people who eat toxic seafood contaminated by red tide organisms. Remote sensing offers large-scale, synchronized, and rapid monitoring, so it is one of the most important and effective means of red tide monitoring. This paper selects the high-frequency red tide areas of the East China Sea as the study area and MODIS/Aqua L2 data as the data source, and analyzes and compares the spectral differences between red tide and non-red tide water bodies for many historical events. Based on these spectral differences, the paper develops the criterion Rrs555/Rrs488 > 1.5 to extract red tide information. Applying the algorithm to a red tide event that occurred in the East China Sea on May 28, 2009, we found that the method can effectively determine the location of the red tide occurrence; the extracted red tide area corresponds well with the chlorophyll-a concentration retrieved by remote sensing, showing that the algorithm can effectively determine the location and extract the red tide information.
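The band-ratio criterion above is simple enough to state as a short mask. The sketch assumes remote-sensing reflectance arrays for the 555 nm and 488 nm bands have already been read from a MODIS/Aqua L2 granule; the variable names are illustrative.

```python
import numpy as np

def red_tide_mask(rrs_555, rrs_488, ratio=1.5):
    """Flag pixels where Rrs(555)/Rrs(488) exceeds the red tide ratio threshold."""
    with np.errstate(divide="ignore", invalid="ignore"):
        band_ratio = np.where(rrs_488 > 0, rrs_555 / rrs_488, np.nan)
    return band_ratio > ratio     # boolean mask of suspected red tide pixels
```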
Linearized inversion of multiple scattering seismic energy
NASA Astrophysics Data System (ADS)
Aldawood, Ali; Hoteit, Ibrahim; Zuberi, Mohammad
2014-05-01
Internal multiples deteriorate the quality of the migrated image obtained conventionally by imaging single scattering energy. Thus, imaging seismic data under the single-scattering assumption does not locate multiple-bounce events in their actual subsurface positions. However, imaging internal multiples properly has the potential to enhance the migrated image because they illuminate zones in the subsurface that are poorly illuminated by single-scattering energy, such as nearly vertical faults. Standard migration of these multiples provides subsurface reflectivity distributions with low spatial resolution and migration artifacts due to the limited recording aperture, coarse source and receiver sampling, and the band-limited nature of the source wavelet. The resultant image obtained by the adjoint operator is a smoothed depiction of the true subsurface reflectivity model and is heavily masked by migration artifacts and the source wavelet fingerprint that needs to be properly deconvolved. Hence, we proposed a linearized least-squares inversion scheme to mitigate the effect of the migration artifacts, enhance the spatial resolution, and provide more accurate amplitude information when imaging internal multiples. The proposed algorithm uses the least-squares image based on the single-scattering assumption as a constraint to invert for the part of the image that is illuminated by internal scattering energy. We then posed the problem of imaging double-scattering energy as a least-squares minimization problem that requires solving normal equations of the form $G^{T}G\,v = G^{T}d$, (1) where $G$ is a linearized forward modeling operator that predicts double-scattered seismic data and $G^{T}$ is the corresponding adjoint operator that images double-scattered seismic data. Gradient-based optimization algorithms solve this linear system. Hence, we used a quasi-Newton optimization technique to find the least-squares minimizer. In this approach, an estimate of the Hessian matrix that contains curvature information is modified at every iteration by a low-rank update based on gradient changes at every step. At each iteration, the data residual is imaged using $G^{T}$ to determine the model update. Application of the linearized inversion to synthetic data to image a vertical fault plane demonstrates the effectiveness of this methodology in properly delineating the vertical fault plane and giving better amplitude information than the standard migrated image obtained with the adjoint operator that takes internal multiples into account. Thus, least-squares imaging of multiple scattering enhances the spatial resolution of the events illuminated by internal scattering energy. It also deconvolves the source signature and helps remove the fingerprint of the acquisition geometry. The final image is obtained by the superposition of the least-squares solution based on the single-scattering assumption and the least-squares solution based on the double-scattering assumption.
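A generic sketch of solving the linearized system in Eq. (1) when G is available only through forward and adjoint routines, using SciPy's LSQR on a matrix-free LinearOperator. The operators, sizes, and iteration count are placeholders, not the authors' modelling code or quasi-Newton scheme.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, lsqr

def least_squares_image(forward, adjoint, data, n_model, n_data, iters=50):
    """Invert d = G v for the reflectivity v given matrix-free operators.

    forward : function v -> G v      (predicts double-scattered data)
    adjoint : function r -> G^T r    (migrates double-scattered data)
    """
    G = LinearOperator((n_data, n_model), matvec=forward, rmatvec=adjoint)
    result = lsqr(G, data, iter_lim=iters)
    return result[0]                       # least-squares model estimate

# toy example with an explicit matrix standing in for G
rng = np.random.default_rng(3)
A = rng.standard_normal((200, 80))
v_true = rng.standard_normal(80)
d = A @ v_true
v_est = least_squares_image(lambda v: A @ v, lambda r: A.T @ r, d, 80, 200)
print(np.allclose(v_est, v_true, atol=1e-6))
```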
Monitoring microearthquakes with the San Andreas fault observatory at depth
Oye, V.; Ellsworth, W.L.
2007-01-01
In 2005, the San Andreas Fault Observatory at Depth (SAFOD) was drilled through the San Andreas Fault zone at a depth of about 3.1 km. The borehole has subsequently been instrumented with high-frequency geophones in order to better constrain locations and source processes of nearby microearthquakes that will be targeted in the upcoming phase of SAFOD. The microseismic monitoring software MIMO, developed by NORSAR, has been installed at SAFOD to provide near-real-time locations and magnitude estimates using the high-sampling-rate (4000 Hz) waveform data. To improve the detection and location accuracy, we incorporate data from the nearby, shallow borehole (approximately 250 m) seismometers of the High Resolution Seismic Network (HRSN). The event association algorithm of the MIMO software incorporates HRSN detections provided by the USGS real-time Earthworm software. The concept of the new event association is based on generalized beamforming, primarily used in array seismology. The method requires the pre-computation of theoretical travel times in a 3D grid of potential microearthquake locations to the seismometers of the current station network. By minimizing the differences between theoretical and observed detection times, an event is associated and the location accuracy is significantly improved.
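The grid-based association can be illustrated with a small sketch: theoretical travel times are pre-computed for a grid of trial source locations, and a set of observed detection times is assigned to the grid node that minimizes the residual spread after removing the best-fitting origin time. Array shapes and values are placeholders, not the MIMO implementation.

```python
import numpy as np

def associate(observed_times, tt_grid):
    """Find the trial source location best matching a set of detection times.

    observed_times : (n_stations,) detection times, NaN where a station has no pick
    tt_grid        : (n_nodes, n_stations) precomputed travel times to each station
    Returns (best_node_index, origin_time, rms_residual).
    """
    mask = ~np.isnan(observed_times)
    obs = observed_times[mask]
    tt = tt_grid[:, mask]
    # the best origin time at each node is the mean of (observed - predicted)
    origin = np.mean(obs[None, :] - tt, axis=1)
    resid = obs[None, :] - tt - origin[:, None]
    rms = np.sqrt(np.mean(resid ** 2, axis=1))
    best = int(np.argmin(rms))
    return best, float(origin[best]), float(rms[best])
```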
NASA Astrophysics Data System (ADS)
Brakensiek, Joshua; Ragozzine, D.
2012-10-01
The transit method for discovering extra-solar planets relies on detecting regular diminutions of light from stars due to the shadows of planets passing in between the star and the observer. NASA's Kepler Mission has successfully discovered thousands of exoplanet candidates using this technique, including hundreds of stars with multiple transiting planets. In order to estimate the frequency of these valuable systems, our research concerns the efficient calculation of geometric probabilities for detecting multiple transiting extrasolar planets around the same parent star. In order to improve on previous studies that used numerical methods (e.g., Ragozzine & Holman 2010, Tremaine & Dong 2011), we have constructed an efficient, analytical algorithm which, given a collection of conjectured exoplanets orbiting a star, computes the probability that any particular group of exoplanets is transiting. The algorithm applies theorems of elementary differential geometry to compute the areas bounded by circular curves on the surface of a sphere (see Ragozzine & Holman 2010). The implemented algorithm is more accurate and orders of magnitude faster than previous algorithms, based on comparison with Monte Carlo simulations. Expanding this work, we have also developed semi-analytical methods for determining the frequency of exoplanet mutual events, i.e., the geometric probability that two planets will transit each other (Planet-Planet Occultation) and the probability that this occurs while they simultaneously transit their star (Overlapping Double Transits; see Ragozzine & Holman 2010). The latter algorithm can also be applied to calculating the probability of observing transiting circumbinary planets (Doyle et al. 2011, Welsh et al. 2012). All of these algorithms have been coded in C and will be made publicly available. We will present and advertise these codes and illustrate their value for studying exoplanetary systems.
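For background, the single-planet geometric result that these multi-planet calculations generalize is the familiar circular-orbit transit probability (quoted here for context, not as the paper's algorithm):

$$p_{\mathrm{transit}} \approx \frac{R_\star + R_p}{a},$$

where $R_\star$ and $R_p$ are the stellar and planetary radii and $a$ is the orbital semimajor axis. The multi-planet probabilities described above amount to intersecting the corresponding bands of allowed observer directions on the celestial sphere, which is where the spherical-area computation enters.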
NASA Technical Reports Server (NTRS)
Kong, Edmund M.; Saenz-Otero, Alvar; Nolet, Simon; Berkovitz, Dustin S.; Miller, David W.; Sell, Steve W.
2004-01-01
The MIT-SSL SPHERES testbed provides a facility for the development of algorithms necessary for the success of Distributed Satellite Systems (DSS). The initial development contemplated formation flight and docking control algorithms; SPHERES now supports the study of metrology, control, autonomy, artificial intelligence, and communications algorithms and their effects on DSS projects. To support this wide range of topics, the SPHERES design contemplated the need to support multiple researchers, as reflected in both the hardware and software designs. The SPHERES operational plan further facilitates the development of algorithms by multiple researchers, while the operational locations incrementally increase the ability of the tests to operate in a representative environment. In this paper, an overview of the SPHERES testbed is first presented. The SPHERES testbed serves as a model of the design philosophies that allow for the various research efforts being carried out on such a facility. The implementation of these philosophies is further highlighted in the three different programs that are currently scheduled for testing onboard the International Space Station (ISS) and three that are proposed for a re-flight mission: Mass Property Identification, Autonomous Rendezvous and Docking, and TPF Multiple Spacecraft Formation Flight in the first flight, and Precision Optical Pointing, Tethered Formation Flight, and Mars Orbit Sample Retrieval for the re-flight mission.
2014-01-01
This study evaluates a spatial-filtering algorithm as a method to improve speech reception for cochlear-implant (CI) users in reverberant environments with multiple noise sources. The algorithm was designed to filter sounds using phase differences between two microphones situated 1 cm apart in a behind-the-ear hearing-aid capsule. Speech reception thresholds (SRTs) were measured using a Coordinate Response Measure for six CI users in 27 listening conditions including each combination of reverberation level (T60 = 0, 270, and 540 ms), number of noise sources (1, 4, and 11), and signal-processing algorithm (omnidirectional response, dipole-directional response, and spatial-filtering algorithm). Noise sources were time-reversed speech segments randomly drawn from the Institute of Electrical and Electronics Engineers sentence recordings. Target speech and noise sources were processed using a room simulation method allowing precise control over reverberation times and sound-source locations. The spatial-filtering algorithm was found to provide improvements in SRTs on the order of 6.5 to 11.0 dB across listening conditions compared with the omnidirectional response. This result indicates that such phase-based spatial filtering can improve speech reception for CI users even in highly reverberant conditions with multiple noise sources. PMID:25330772
Horstmann, Tobias; Harrington, Rebecca M.; Cochran, Elizabeth S.
2015-01-01
We present a new method to locate low-frequency earthquakes (LFEs) within tectonic tremor episodes based on time-reverse imaging techniques. The modified time-reverse imaging technique presented here is the first method that locates individual LFEs within tremor episodes to within 5 km uncertainty without relying on high-amplitude P-wave arrivals, and that produces hypocentral locations similar to methods that locate events by stacking hundreds of LFEs without having to assume event co-location. In contrast to classic time-reverse imaging algorithms, we implement a modification to the method that searches for phase coherence over a short time period rather than identifying the maximum amplitude of a superpositioned wavefield. The method is independent of amplitude and can help constrain event origin time. The method uses individual LFE origin times, but does not rely on a priori information on LFE templates and families. We apply the method to locate 34 individual LFEs within tremor episodes that occur between 2010 and 2011 on the San Andreas Fault, near Cholame, California. Individual LFE location accuracies range from 2.6 to 5 km horizontally and 4.8 km vertically. Other methods that have been able to locate individual LFEs with accuracy of less than 5 km have mainly used large-amplitude events where a P-phase arrival can be identified. The method described here has the potential to locate a larger number of individual low-amplitude events with only the S-phase arrival. Location accuracy is controlled by the velocity model resolution and the wavelength of the dominant energy of the signal. Location results are also dependent on the number of stations used and are negligibly correlated with other factors such as the maximum gap in azimuthal coverage, source–station distance and signal-to-noise ratio.
Goldsworthy, Raymond L.; Delhorne, Lorraine A.; Desloge, Joseph G.; Braida, Louis D.
2014-01-01
This article introduces and provides an assessment of a spatial-filtering algorithm based on two closely-spaced (∼1 cm) microphones in a behind-the-ear shell. The evaluated spatial-filtering algorithm used fast (∼10 ms) temporal-spectral analysis to determine the location of incoming sounds and to enhance sounds arriving from straight ahead of the listener. Speech reception thresholds (SRTs) were measured for eight cochlear implant (CI) users using consonant and vowel materials under three processing conditions: An omni-directional response, a dipole-directional response, and the spatial-filtering algorithm. The background noise condition used three simultaneous time-reversed speech signals as interferers located at 90°, 180°, and 270°. Results indicated that the spatial-filtering algorithm can provide speech reception benefits of 5.8 to 10.7 dB SRT compared to an omni-directional response in a reverberant room with multiple noise sources. Given the observed SRT benefits, coupled with an efficient design, the proposed algorithm is promising as a CI noise-reduction solution. PMID:25096120
NASA Astrophysics Data System (ADS)
Guo, H.; Zhang, H.
2016-12-01
Relocating earthquakes with high precision is a central task for earthquake monitoring and for studying the structure of the Earth's interior. The most popular location method is the event-pair double-difference (DD) relative location method, which uses the catalog and/or more accurate waveform cross-correlation (WCC) differential times from event pairs with small inter-event separations to common stations to reduce the effect of the velocity uncertainties outside the source region. Similarly, Zhang et al. [2010] developed a station-pair DD location method, which uses the differential times from common events to pairs of stations to reduce the effect of the velocity uncertainties near the source region, to relocate the non-volcanic tremors (NVT) beneath the San Andreas Fault (SAF). To utilize the advantages of both DD location methods, we have proposed and developed a new double-pair DD location method that uses the differential times from pairs of events to pairs of stations. The new method can remove the event origin time and station correction terms from the inversion system and cancel out the effects of the velocity uncertainties near and outside the source region simultaneously. We tested and applied the new method to regular earthquakes in northern California to validate its performance. In comparison, among the three DD location methods, the new double-pair DD method determines more accurate relative locations, and the station-pair DD method better improves the absolute locations. Thus, we further proposed a new location strategy combining station-pair and double-pair differential times to determine accurate absolute and relative locations at the same time. For NVTs, it is difficult to pick the first arrivals and derive the WCC event-pair differential times; thus, the general practice is to measure station-pair envelope WCC differential times. However, station-pair tremor locations are scattered due to the low-precision relative locations. The fact that double-pair data can be constructed directly from station-pair data means that the double-pair DD method can be used to improve NVT locations. We have applied the new method to the NVTs beneath the SAF near Cholame, California. Compared to the previous results, the new double-pair DD tremor locations are more concentrated and show more detailed structures.
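The three differencing schemes contrasted in this abstract can be summarized schematically (notation introduced here for illustration; it is not taken verbatim from the paper). Writing the arrival time of event $i$ at station $k$ as $t_i^k = \tau_i + T_i^k + s_k$ (origin time, travel time, station correction),

$$t_i^k - t_j^k = (\tau_i - \tau_j) + (T_i^k - T_j^k) \qquad \text{(event pair: the station term } s_k \text{ cancels)},$$
$$t_i^k - t_i^l = (T_i^k - T_i^l) + (s_k - s_l) \qquad \text{(station pair: the origin time } \tau_i \text{ cancels)},$$
$$(t_i^k - t_j^k) - (t_i^l - t_j^l) = (T_i^k - T_j^k) - (T_i^l - T_j^l) \qquad \text{(double pair: both cancel)},$$

which is why double-pair differences are insensitive to both origin-time and station-correction errors, at the cost of carrying purely relative information.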
Constrained Surface-Level Gateway Placement for Underwater Acoustic Wireless Sensor Networks
NASA Astrophysics Data System (ADS)
Li, Deying; Li, Zheng; Ma, Wenkai; Chen, Hong
One approach to guaranteeing the performance of underwater acoustic sensor networks is to deploy multiple Surface-level Gateways (SGs) at the surface. This paper addresses the connected (or survivable) Constrained Surface-level Gateway Placement (C-SGP) problem for 3-D underwater acoustic sensor networks. Given a set of candidate locations where SGs can be placed, our objective is to place the minimum number of SGs at a subset of candidate locations such that the network is connected (or 2-connected) from any underwater sensor node (USN) to the base station. We propose a polynomial-time approximation algorithm for the connected C-SGP problem and for the survivable C-SGP problem, respectively. Simulations are conducted to verify our algorithms' efficiency.
Non-stationary least-squares complex decomposition for microseismic noise attenuation
NASA Astrophysics Data System (ADS)
Chen, Yangkang
2018-06-01
Microseismic data processing and imaging are crucial for subsurface real-time monitoring during the hydraulic fracturing process. Unlike active-source seismic events or large-scale earthquake events, a microseismic event is usually of very small magnitude, which makes its detection challenging. The biggest problem with microseismic data is the low signal-to-noise ratio. Because of the small energy difference between the effective microseismic signal and the ambient noise, the effective signals are usually buried in strong random noise. I propose a useful microseismic denoising algorithm that is based on decomposing a microseismic trace into an ensemble of components using least-squares inversion. Based on the predictive property of the useful microseismic event along the time direction, the random noise can be filtered out via least-squares fitting of multiple damped exponential components. The method is flexible and almost automated, since the only parameter that needs to be defined is the decomposition number. I use some synthetic and real data examples to demonstrate the potential of the algorithm in processing complicated microseismic data sets.
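A generic sketch of the underlying idea: a trace is fitted by a small dictionary of damped sinusoidal components via least squares, the fitted part is taken as the denoised signal, and the residual is treated as random noise. The damping rate, frequency set, and component count below are illustrative, not the paper's regularized non-stationary scheme.

```python
import numpy as np

def damped_exponential_fit(trace, dt, n_freqs=10, damping=1.0):
    """Least-squares fit of a trace by damped sinusoidal components.

    Builds a dictionary of exp(-damping*t)*cos/sin(2*pi*f*t) columns for a small
    set of frequencies, solves for their amplitudes, and returns the fitted
    signal (denoised trace) and the residual (estimated noise).
    """
    n = trace.size
    t = np.arange(n) * dt
    freqs = np.linspace(0.0, 0.4 / dt, n_freqs)          # frequencies below Nyquist
    cols = [np.exp(-damping * t) * np.cos(2 * np.pi * f * t) for f in freqs]
    cols += [np.exp(-damping * t) * np.sin(2 * np.pi * f * t) for f in freqs[1:]]
    A = np.column_stack(cols)
    coeff, *_ = np.linalg.lstsq(A, trace, rcond=None)
    fitted = A @ coeff
    return fitted, trace - fitted
```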
Park, Sang Cheol; Leader, Joseph Ken; Tan, Jun; Lee, Guee Sang; Kim, Soo Hyung; Na, In Seop; Zheng, Bin
2011-01-01
Objective: This article presents a new computerized scheme that aims to accurately and robustly separate left and right lungs on CT examinations. Methods: We developed and tested a method to separate the left and right lungs using sequential CT information and a guided dynamic programming algorithm using adaptively and automatically selected start point and end point with especially severe and multiple connections. Results: The scheme successfully identified and separated all 827 connections on the total 4034 CT images in an independent testing dataset of CT examinations. The proposed scheme separated multiple connections regardless of their locations, and the guided dynamic programming algorithm reduced the computation time to approximately 4.6% in comparison with the traditional dynamic programming and avoided the permeation of the separation boundary into normal lung tissue. Conclusions: The proposed method is able to robustly and accurately disconnect all connections between left and right lungs and the guided dynamic programming algorithm is able to remove redundant processing. PMID:21412104
Park, Sang Cheol; Leader, Joseph Ken; Tan, Jun; Lee, Guee Sang; Kim, Soo Hyung; Na, In Seop; Zheng, Bin
2011-01-01
This article presents a new computerized scheme that aims to accurately and robustly separate left and right lungs on computed tomography (CT) examinations. We developed and tested a method to separate the left and right lungs using sequential CT information and a guided dynamic programming algorithm using adaptively and automatically selected start point and end point with especially severe and multiple connections. The scheme successfully identified and separated all 827 connections on the total 4034 CT images in an independent testing data set of CT examinations. The proposed scheme separated multiple connections regardless of their locations, and the guided dynamic programming algorithm reduced the computation time to approximately 4.6% in comparison with the traditional dynamic programming and avoided the permeation of the separation boundary into normal lung tissue. The proposed method is able to robustly and accurately disconnect all connections between left and right lungs, and the guided dynamic programming algorithm is able to remove redundant processing.
Planning Paths Through Singularities in the Center of Mass Space
NASA Technical Reports Server (NTRS)
Doggett, William R.; Messner, William C.; Juang, Jer-Nan
1998-01-01
The center of mass space is a convenient space for planning motions that minimize reaction forces at the robot's base or optimize the stability of a mechanism. A unique problem associated with path planning in the center of mass space is the potential existence of multiple center of mass images for a single Cartesian obstacle, since a single center of mass location can correspond to multiple robot joint configurations. The existence of multiple images results in a need to either maintain multiple center of mass obstacle maps or to update obstacle locations when the robot passes through a singularity, such as when it moves from an elbow-up to an elbow-down configuration. To illustrate the concepts presented in this paper, a path is planned for an example task requiring motion through multiple center of mass space maps. The object of the path planning algorithm is to locate the bang-bang acceleration profile that minimizes the robot's base reactions in the presence of a single Cartesian obstacle. To simplify the presentation, only non-redundant robots are considered and joint non-linearities are neglected.
The use of propagation path corrections to improve regional seismic event location in western China
DOE Office of Scientific and Technical Information (OSTI.GOV)
Steck, L.K.; Cogbill, A.H.; Velasco, A.A.
1999-03-01
In an effort to improve the ability to locate seismic events in western China using only regional data, the authors have developed empirical propagation path corrections (PPCs) and applied such corrections using both traditional location routines as well as a nonlinear grid search method. Thus far, the authors have concentrated on corrections to observed P arrival times for shallow events using travel-time observations available from the USGS EDRs, the ISC catalogs, their own travel-time picks from regional data, and data from other catalogs. They relocate events with the algorithm of Bratt and Bache (1988) from a region encompassing China. For individual stations having sufficient data, they produce a map of the regional travel-time residuals from all well-located teleseismic events. From these maps, interpolated PPC surfaces have been constructed using both surface fitting under tension and modified Bayesian kriging. The latter method offers the advantage of providing well-behaved interpolants, but requires that the authors have adequate error estimates associated with the travel-time residuals. To improve error estimates for kriging and event location, they separate measurement error from modeling error. The modeling error is defined as the travel-time variance of a particular model as a function of distance, while the measurement error is defined as the picking error associated with each phase. They estimate measurement errors for arrivals from the EDRs based on roundoff or truncation, and use signal-to-noise ratios for the travel-time picks from the waveform data set.
A feasibility study of using event-related potential as a biometrics.
Yih-Choung Yu; Sicheng Wang; Gabel, Lisa A
2016-08-01
The use of an individual's neural response to stimuli (the event-related potential or ERP) has potential as a biometric because it is highly resistant to fraud relative to other conventional authentication systems. P300 is an ERP in human electroencephalography (EEG) that occurs in response to an oddball stimulus when an individual is actively engaged in a target detection task. Because P300 is consistently detectable from almost every subject, it is considered a potential signal for biometric applications. This paper presents a feasibility study of using topological plots of P300 as a biometric in subject authentication. The variation in latency and location of the P300 responses of 24 participants performing the P300Speller task was studied. Data sets from four participants were used for algorithm training; data from the other 20 participants were used as imposters for algorithm validation. The results showed that the algorithm was able to correctly identify three of the four participants. The validation test also showed that the algorithm was able to reject 95% of the imposters for those three authenticated participants.
Contamination Event Detection with Multivariate Time-Series Data in Agricultural Water Monitoring †
Mao, Yingchi; Qi, Hai; Ping, Ping; Li, Xiaofang
2017-01-01
Time series data of multiple water quality parameters are obtained from the water sensor networks deployed in the agricultural water supply network. The accurate and efficient detection of, and warning about, contamination events to prevent pollution from spreading is one of the most important issues when pollution occurs. In order to comprehensively reduce the event detection deviation, a spatial–temporal-based event detection approach with multivariate time-series data for water quality monitoring (M-STED) was proposed. The M-STED approach includes three parts. The first part is that M-STED adopts a Rule K algorithm to select backbone nodes as the nodes in the connected dominating set (CDS) and forward the sensed data of multiple water parameters. The second part is to determine the state of each backbone node with back-propagation neural network models and sequential Bayesian analysis in the current timestamp. The third part is to establish a spatial model with Bayesian networks to estimate the state of the backbones in the next timestamp and trace the "outlier" node to its neighborhoods to detect a contamination event. The experimental results indicate that the average detection rate is more than 80% with M-STED and the false detection rate is lower than 9%. The M-STED approach can improve the rate of detection by about 40% and reduce the false alarm rate by about 45%, compared with S-STED, the event detection algorithm that uses a single water parameter. Moreover, the proposed M-STED can exhibit better performance in terms of detection delay and scalability. PMID:29207535
AthenaMT: upgrading the ATLAS software framework for the many-core world with multi-threading
NASA Astrophysics Data System (ADS)
Leggett, Charles; Baines, John; Bold, Tomasz; Calafiura, Paolo; Farrell, Steven; van Gemmeren, Peter; Malon, David; Ritsch, Elmar; Stewart, Graeme; Snyder, Scott; Tsulaia, Vakhtang; Wynne, Benjamin; ATLAS Collaboration
2017-10-01
ATLAS’s current software framework, Gaudi/Athena, has been very successful for the experiment in LHC Runs 1 and 2. However, its single threaded design has been recognized for some time to be increasingly problematic as CPUs have increased core counts and decreased available memory per core. Even the multi-process version of Athena, AthenaMP, will not scale to the range of architectures we expect to use beyond Run2. After concluding a rigorous requirements phase, where many design components were examined in detail, ATLAS has begun the migration to a new data-flow driven, multi-threaded framework, which enables the simultaneous processing of singleton, thread unsafe legacy Algorithms, cloned Algorithms that execute concurrently in their own threads with different Event contexts, and fully re-entrant, thread safe Algorithms. In this paper we report on the process of modifying the framework to safely process multiple concurrent events in different threads, which entails significant changes in the underlying handling of features such as event and time dependent data, asynchronous callbacks, metadata, integration with the online High Level Trigger for partial processing in certain regions of interest, concurrent I/O, as well as ensuring thread safety of core services. We also report on upgrading the framework to handle Algorithms that are fully re-entrant.
NASA Astrophysics Data System (ADS)
Kappler, Karl N.; Schneider, Daniel D.; MacLean, Laura S.; Bleier, Thomas E.
2017-08-01
A method is described for identifying pulsations in magnetic field time series that are simultaneously present in multiple channels of data at one or more sensor locations. Candidate pulsations of interest are first identified in geomagnetic time series by inspection. Time series of these "training events" are represented in matrix form and transpose-multiplied to generate time-domain covariance matrices. The ranked eigenvectors of this matrix are stored as a feature of the pulsation. In the second stage of the algorithm, a sliding window (approximately the width of the training event) is moved across the vector-valued time-series comprising the channels on which the training event was observed. At each window position, the data covariance matrix and associated eigenvectors are calculated. We compare the orientation of the dominant eigenvectors of the training data to those from the windowed data and flag windows where the dominant eigenvector directions are similar. This was successful in automatically identifying pulses which share polarization and appear to be from the same source process. We apply the method to a case study of continuously sampled (50 Hz) data from six observatories, each equipped with three-component induction coil magnetometers. We examine a 90-day interval of data associated with a cluster of four observatories located within 50 km of Napa, California, together with two remote reference stations: one 100 km to the north of the cluster and the other 350 km south. When the training data contain signals present in the remote reference observatories, we are reliably able to identify and extract global geomagnetic signals such as solar-generated noise. When the training data contain pulsations observed only in the cluster of local observatories, we identify several types of non-plane wave signals having similar polarization.
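A minimal sketch of the sliding-window eigenvector comparison described above: the dominant eigenvector of the training event's time-domain covariance is compared, via the absolute cosine of the angle between unit eigenvectors, with that of each window of the continuous record. The window step and similarity threshold are illustrative assumptions, not the study's settings.

```python
import numpy as np

def dominant_eigenvector(window):
    """Dominant eigenvector of the time-domain covariance of an (n_samples, n_channels) window."""
    x = window - window.mean(axis=0)
    cov = x.T @ x / x.shape[0]
    eigval, eigvec = np.linalg.eigh(cov)
    return eigvec[:, -1]                      # eigenvector of the largest eigenvalue

def detect_similar_pulses(data, template, step=None, similarity=0.9):
    """Flag windows whose dominant polarization matches a training pulse.

    data     : (n_samples, n_channels) continuous multichannel record
    template : (m_samples, n_channels) training event, with m < n_samples
    Returns start indices of windows whose dominant eigenvector is aligned
    (absolute cosine above the similarity threshold) with that of the template.
    """
    ref = dominant_eigenvector(template)
    m = template.shape[0]
    step = step or max(1, m // 4)
    hits = []
    for start in range(0, data.shape[0] - m, step):
        v = dominant_eigenvector(data[start:start + m])
        if abs(np.dot(ref, v)) >= similarity:   # both eigenvectors have unit norm
            hits.append(start)
    return hits
```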
NASA Astrophysics Data System (ADS)
Van Kha, Tran; Van Vuong, Hoang; Thanh, Do Duc; Hung, Duong Quoc; Anh, Le Duc
2018-05-01
The maximum horizontal gradient method was first proposed by Blakely and Simpson (1986) for determining the boundaries between geological bodies with different densities. The method involves the comparison of a center point with its eight nearest neighbors in four directions within each 3 × 3 calculation grid. The horizontal location and magnitude of the maximum values are found by interpolating a second-order polynomial through the trio of points provided that the magnitude of the middle point is greater than its two nearest neighbors in one direction. In theoretical models of multiple sources, however, the above condition does not allow the maximum horizontal locations to be fully located, and it could be difficult to correlate the edges of complicated sources. In this paper, the authors propose an additional condition to identify more maximum horizontal locations within the calculation grid. This additional condition will improve the method algorithm for interpreting the boundaries of magnetic and/or gravity sources. The improved algorithm was tested on gravity models and applied to gravity data for the Phu Khanh basin on the continental shelf of the East Vietnam Sea. The results show that the additional locations of the maximum horizontal gradient could be helpful for connecting the edges of complicated source bodies.
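As an illustration of the directional-maximum search with second-order polynomial interpolation described above, here is a minimal Python sketch. The grid spacing, array names, and the exact set of directions are assumptions for the example, not a transcription of the authors' improved algorithm (in particular, their additional condition for extra maxima is not included).

import numpy as np

def parabola_vertex(gm, g0, gp, dx):
    # Vertex of the parabola through values (gm, g0, gp) at spacing dx;
    # returns (offset from the centre point, interpolated maximum value).
    denom = gm - 2.0 * g0 + gp
    if denom >= 0.0:                 # not a local maximum
        return None
    offset = 0.5 * (gm - gp) / denom * dx
    value = g0 - 0.125 * (gm - gp) ** 2 / denom
    return offset, value

def directional_maxima(g, dx=1.0):
    # g: 2-D grid of horizontal-gradient magnitudes.
    hits = []
    directions = [(0, 1), (1, 0), (1, 1), (1, -1)]   # E-W, N-S, two diagonals
    for i in range(1, g.shape[0] - 1):
        for j in range(1, g.shape[1] - 1):
            for di, dj in directions:
                gm, g0, gp = g[i - di, j - dj], g[i, j], g[i + di, j + dj]
                if g0 > gm and g0 > gp:
                    fit = parabola_vertex(gm, g0, gp, dx)
                    if fit is not None:
                        hits.append((i, j, fit[0] * di, fit[0] * dj, fit[1]))
    return hits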
Visualization of Traffic Accidents
NASA Technical Reports Server (NTRS)
Wang, Jie; Shen, Yuzhong; Khattak, Asad
2010-01-01
Traffic accidents have tremendous impact on society. Annually approximately 6.4 million vehicle accidents are reported by police in the US and nearly half of them result in catastrophic injuries. Visualizations of traffic accidents using geographic information systems (GIS) greatly facilitate handling and analysis of traffic accidents in many aspects. Environmental Systems Research Institute (ESRI), Inc. is the world leader in GIS research and development. ArcGIS, a software package developed by ESRI, has the capabilities to display events associated with a road network, such as accident locations, and pavement quality. But when event locations related to a road network are processed, the existing algorithm used by ArcGIS does not utilize all the information related to the routes of the road network and produces erroneous visualization results of event locations. This software bug causes serious problems for applications in which accurate location information is critical for emergency responses, such as traffic accidents. This paper aims to address this problem and proposes an improved method that utilizes all relevant information of traffic accidents, namely, route number, direction, and mile post, and extracts correct event locations for accurate traffic accident visualization and analysis. The proposed method generates a new shape file for traffic accidents and displays them on top of the existing road network in ArcGIS. Visualization of traffic accidents along Hampton Roads Bridge Tunnel is included to demonstrate the effectiveness of the proposed method.
NASA Astrophysics Data System (ADS)
Behzad, Mehdi; Ghadami, Amin; Maghsoodi, Ameneh; Michael Hale, Jack
2013-11-01
In this paper, a simple method for detection of multiple edge cracks in Euler-Bernoulli beams having two different types of cracks is presented based on energy equations. Each crack is modeled as a massless rotational spring using Linear Elastic Fracture Mechanics (LEFM) theory, and a relationship among natural frequencies, crack locations and stiffness of equivalent springs is demonstrated. In the procedure, for detection of m cracks in a beam, 3m equations and natural frequencies of healthy and cracked beam in two different directions are needed as input to the algorithm. The main accomplishment of the presented algorithm is the capability to detect the location, severity and type of each crack in a multi-cracked beam. Concise and simple calculations along with accuracy are other advantages of this method. A number of numerical examples for cantilever beams including one and two cracks are presented to validate the method.
Utilizing Weather RADAR for Rapid Location of Meteorite Falls and Space Debris Re-Entry
NASA Technical Reports Server (NTRS)
Fries, Marc D.
2016-01-01
This activity utilizes existing NOAA weather RADAR imagery to locate meteorite falls and space debris falls. The near-real-time availability and spatial accuracy of these data allow rapid recovery of material from both meteorite falls and space debris re-entry events. To date, at least 22 meteorite fall recoveries have benefitted from RADAR detection and fall modeling, and multiple debris re-entry events over the United States have been observed in unprecedented detail.
Virtualized Traffic: reconstructing traffic flows from discrete spatiotemporal data.
Sewall, Jason; van den Berg, Jur; Lin, Ming C; Manocha, Dinesh
2011-01-01
We present a novel concept, Virtualized Traffic, to reconstruct and visualize continuous traffic flows from discrete spatiotemporal data provided by traffic sensors or generated artificially to enhance a sense of immersion in a dynamic virtual world. Given the positions of each car at two recorded locations on a highway and the corresponding time instances, our approach can reconstruct the traffic flows (i.e., the dynamic motions of multiple cars over time) between the two locations along the highway for immersive visualization of virtual cities or other environments. Our algorithm is applicable to high-density traffic on highways with an arbitrary number of lanes and takes into account the geometric, kinematic, and dynamic constraints on the cars. Our method reconstructs the car motion that automatically minimizes the number of lane changes, respects safety distance to other cars, and computes the acceleration necessary to obtain a smooth traffic flow subject to the given constraints. Furthermore, our framework can process a continuous stream of input data in real time, enabling the users to view virtualized traffic events in a virtual world as they occur. We demonstrate our reconstruction technique with both synthetic and real-world input. © 2011 IEEE Published by the IEEE Computer Society
Model-Based Fault Tolerant Control
NASA Technical Reports Server (NTRS)
Kumar, Aditya; Viassolo, Daniel
2008-01-01
The Model Based Fault Tolerant Control (MBFTC) task was conducted under the NASA Aviation Safety and Security Program. The goal of MBFTC is to develop and demonstrate real-time strategies to diagnose and accommodate anomalous aircraft engine events such as sensor faults, actuator faults, or turbine gas-path component damage that can lead to in-flight shutdowns, aborted takeoffs, asymmetric thrust/loss of thrust control, or engine surge/stall events. A suite of model-based fault detection algorithms was developed and evaluated. Based on the performance and maturity of the developed algorithms, two approaches were selected for further analysis: (i) multiple-hypothesis testing, and (ii) neural networks; both used residuals from an Extended Kalman Filter to detect the occurrence of the selected faults. A simple fusion algorithm was implemented to combine the results from each algorithm to obtain an overall estimate of the identified fault type and magnitude. The identification of the fault type and magnitude enabled the use of an online fault accommodation strategy to correct for the adverse impact of these faults on engine operability, thereby enabling continued engine operation in the presence of these faults. The performance of the fault detection and accommodation algorithm was extensively tested in a simulation environment.
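As a rough illustration of the residual-based multiple-hypothesis idea mentioned in the abstract, the sketch below scores a bank of hypothesized fault signatures against a Kalman-filter innovation; the Gaussian scoring, the dictionary of signatures, and all names are assumptions for the example, not the NASA implementation.

import numpy as np

def score_fault_hypotheses(innovation, innovation_cov, fault_signatures):
    # innovation: filter residual vector; fault_signatures: dict mapping a
    # hypothesis name to the residual bias it would produce (include a
    # "no fault" entry of zeros). Higher score = better supported hypothesis.
    inv_S = np.linalg.inv(innovation_cov)
    scores = {}
    for name, bias in fault_signatures.items():
        r = innovation - bias
        scores[name] = -0.5 * float(r @ inv_S @ r)
    return scores

# In practice the hypothesis with the highest score, accumulated over a
# window of samples, would be reported as the detected condition.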
Global Seismic Event Detection Using Surface Waves: 15 Possible Antarctic Glacial Sliding Events
NASA Astrophysics Data System (ADS)
Chen, X.; Shearer, P. M.; Walker, K. T.; Fricker, H. A.
2008-12-01
To identify overlooked or anomalous seismic events not listed in standard catalogs, we have developed an algorithm to detect and locate global seismic events using intermediate-period (35-70s) surface waves. We apply our method to continuous vertical-component seismograms from the global seismic networks as archived in the IRIS UV FARM database from 1997 to 2007. We first bandpass filter the seismograms, apply automatic gain control, and compute envelope functions. We then examine 1654 target event locations defined at 5 degree intervals and stack the seismogram envelopes along the predicted Rayleigh-wave travel times. The resulting function has spatial and temporal peaks that indicate possible seismic events. We visually check these peaks using a graphical user interface to eliminate artifacts and assign an overall reliability grade (A, B or C) to the new events. We detect 78% of events in the Global Centroid Moment Tensor (CMT) catalog. However, we also find 840 new events not listed in the PDE, ISC and REB catalogs. Many of these new events were previously identified by Ekstrom (2006) using a different Rayleigh-wave detection scheme. Most of these new events are located along oceanic ridges and transform faults. Some new events can be associated with volcanic eruptions such as the 2000 Miyakejima sequence near Japan and others with apparent glacial sliding events in Greenland (Ekstrom et al., 2003). We focus our attention on 15 events detected from near the Antarctic coastline and relocate them using a cross-correlation approach. The events occur in 3 groups which are well-separated from areas of cataloged earthquake activity. We speculate that these are iceberg calving and/or glacial sliding events, and hope to test this by inverting for their source mechanisms and examining remote sensing data from their source regions.
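The detection scheme summarized above (bandpass, envelope, stack along predicted Rayleigh-wave travel times) can be sketched in a few lines of Python; the filter order, the nominal group velocity, and the omission of the automatic gain control step are simplifications assumed here for illustration.

import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def envelope_35_70s(trace, fs):
    # Bandpass to 35-70 s period and return the analytic-signal envelope.
    b, a = butter(4, [(1.0 / 70.0) / (fs / 2.0), (1.0 / 35.0) / (fs / 2.0)],
                  btype="band")
    return np.abs(hilbert(filtfilt(b, a, trace)))

def stack_for_trial_source(envelopes, distances_km, fs, group_vel=3.7):
    # Shift each station envelope back by its predicted Rayleigh travel time
    # to a trial source location and sum; peaks mark candidate origin times.
    n = min(len(e) for e in envelopes)
    stack = np.zeros(n)
    for env, d in zip(envelopes, distances_km):
        shift = int(round(d / group_vel * fs))
        shifted = np.roll(env[:n], -shift)
        if shift > 0:
            shifted[-shift:] = 0.0         # discard wrapped-around samples
        stack += shifted
    return stack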
Microseismic Velocity Imaging of the Fracturing Zone
NASA Astrophysics Data System (ADS)
Zhang, H.; Chen, Y.
2015-12-01
Hydraulic fracturing of low permeability reservoirs can induce microseismic events during fracture development. For this reason, microseismic monitoring using sensors on the surface or in boreholes has been widely used to delineate the spatial distribution of fractures and to understand fracturing mechanisms. It is often the case that the stimulated reservoir volume (SRV) is determined solely from microseismic locations. However, it is known that some stages of fracture development are associated with long-period long-duration events rather than microseismic events. In addition, because microseismic events are inherently weak and there are various sources of noise during monitoring, some microseismic events cannot be detected and therefore cannot be located. The estimate of the SRV is therefore biased if it is determined solely from microseismic locations. Where fluids and fractures are present, the seismic velocity of the reservoir layers decreases. Based on this fact, we have developed a near real-time seismic velocity tomography method to characterize velocity changes associated with the fracturing process. The method is based on a double-difference seismic tomography algorithm that images the fracturing zone where microseismic events occur by using differential arrival times from microseismic event pairs. To take into account the varying data distribution across different fracking stages, the method solves for the velocity model in the wavelet domain so that different scales of model features can be obtained according to the data distribution. We have applied this real-time tomography method to both acoustic emission data from a laboratory experiment and microseismic data from a downhole microseismic monitoring project for a shale gas hydraulic fracturing treatment. The tomography results from the laboratory data clearly show the velocity changes associated with different rock fracturing stages. For the field data application, the results show that microseismic events are located in low velocity anomalies. By combining low velocity anomalies with microseismic events, we can better estimate the SRV.
Acoustic 3D modeling by the method of integral equations
NASA Astrophysics Data System (ADS)
Malovichko, M.; Khokhlov, N.; Yavich, N.; Zhdanov, M.
2018-02-01
This paper presents a parallel algorithm for frequency-domain acoustic modeling by the method of integral equations (IE). The algorithm is applied to seismic simulation. The IE method reduces the size of the problem but leads to a dense system matrix. Tolerable memory consumption and numerical complexity were achieved by applying an iterative solver accompanied by an effective matrix-vector multiplication operation based on the fast Fourier transform (FFT). We demonstrate that the IE system matrix is better conditioned than that of the finite-difference (FD) method, and discuss its relation to a specially preconditioned FD matrix. We considered several methods of matrix-vector multiplication for the free-space and layered host models. The developed algorithm and computer code were benchmarked against an FD time-domain solution. It was demonstrated that the method can accurately calculate the seismic field for models with sharp material boundaries and a point source and receiver located close to the free surface. We used OpenMP to speed up the matrix-vector multiplication, while MPI was used to speed up the solution of the system equations and to parallelize across multiple sources. Practical examples and efficiency tests are presented as well.
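To make the FFT-accelerated matrix-vector product concrete, here is a minimal Python sketch of how a dense IE operator can be applied through circular convolution with the background Green's function inside a Krylov solver; the variable names are placeholders, and details such as grid padding and the layered-host kernel are deliberately omitted, so this is an assumption-laden illustration rather than the authors' code.

import numpy as np
from scipy.sparse.linalg import LinearOperator, gmres

def make_ie_matvec(green_fft, contrast):
    # Applies x -> x - IFFT(green_fft * FFT(contrast * x)) on a regular grid.
    shape = contrast.shape
    def matvec(x):
        u = x.reshape(shape)
        scattered = np.fft.ifftn(green_fft * np.fft.fftn(contrast * u))
        return (u - scattered).ravel()
    return matvec

def solve_ie(green_fft, contrast, incident_field):
    # Iterative (GMRES) solve of the IE system without ever forming the
    # dense matrix; each matvec costs O(N log N) via the FFT.
    n = contrast.size
    A = LinearOperator((n, n), matvec=make_ie_matvec(green_fft, contrast),
                       dtype=complex)
    total_field, info = gmres(A, incident_field.ravel())
    return total_field.reshape(contrast.shape), info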
Multi-model data fusion to improve an early warning system for hypo-/hyperglycemic events.
Botwey, Ransford Henry; Daskalaki, Elena; Diem, Peter; Mougiakakou, Stavroula G
2014-01-01
Correct predictions of future blood glucose levels in individuals with Type 1 Diabetes (T1D) can be used to provide early warning of upcoming hypo-/hyperglycemic events and thus to improve the patient's safety. To increase prediction accuracy and efficiency, various approaches have been proposed which combine multiple predictors to produce superior results compared to single predictors. Three methods for model fusion are presented and comparatively assessed. Data from 23 T1D subjects under sensor-augmented pump (SAP) therapy were used in two adaptive data-driven models (an autoregressive model with output correction - cARX, and a recurrent neural network - RNN). Data fusion techniques based on i) Dempster-Shafer Evidential Theory (DST), ii) Genetic Algorithms (GA), and iii) Genetic Programming (GP) were used to merge the complementary performance of the prediction models. The fused output is used in a warning algorithm to issue alarms of upcoming hypo-/hyperglycemic events. The fusion schemes showed improved performance with lower root mean square errors, lower time lags, and higher correlation. In the warning algorithm, median daily false alarms (DFA) of 0.25% and 100% correct alarms (CA) were obtained for both event types. The detection times (DT) before the occurrence of events were 13.0 and 12.1 min for hypoglycemic and hyperglycemic events, respectively. Compared to the cARX and RNN models, and a linear fusion of the two, the proposed fusion schemes represent a significant improvement.
Integrated G and C Implementation within IDOS: A Simulink Based Reusable Launch Vehicle Simulation
NASA Technical Reports Server (NTRS)
Fisher, Joseph E.; Bevacqua, Tim; Lawrence, Douglas A.; Zhu, J. Jim; Mahoney, Michael
2003-01-01
The implementation of multiple Integrated Guidance and Control (IG&C) algorithms per flight phase within a vehicle simulation poses a daunting task to coordinate algorithm interactions with the other G&C components and with vehicle subsystems. Currently being developed by Universal Space Lines LLC (USL) under contract from NASA, the Integrated Development and Operations System (IDOS) contains a high fidelity Simulink vehicle simulation, which provides a means to test cutting edge G&C technologies. Combining the modularity of this vehicle simulation with Simulink's built-in primitive blocks provides a quick way to implement algorithms. To add discrete-event functionality to the unfinished IDOS simulation, Vehicle Event Manager (VEM) and Integrated Vehicle Health Monitoring (IVHM) subsystems were created to provide discrete-event and pseudo-health monitoring processing capabilities. Matlab's Stateflow is used to create the IVHM and Event Manager subsystems and to implement a supervisory logic controller, referred to as the Auto-commander, as part of the IG&C to coordinate control system adaptation and reconfiguration and to select the control and guidance algorithms for a given flight phase. Manual creation of the Stateflow charts for all of these subsystems is a tedious and time-consuming process. The Stateflow Auto-builder was developed as a Matlab-based software tool for the automatic generation of a Stateflow chart from information contained in a database. This paper describes the IG&C, VEM and IVHM implementations in IDOS. In addition, this paper describes the Stateflow Auto-builder.
Alday, Erick A Perez; Colman, Michael A; Langley, Philip; Zhang, Henggui
2017-03-01
Atrial tachy-arrhythmias, such as atrial fibrillation (AF), are characterised by irregular electrical activity in the atria, generally associated with erratic excitation underlain by re-entrant scroll waves, fibrillatory conduction of multiple wavelets or rapid focal activity. Epidemiological studies have shown an increase in AF prevalence in the developed world associated with an ageing society, highlighting the need for effective treatment options. Catheter ablation therapy, commonly used in the treatment of AF, requires spatial information on atrial electrical excitation. The standard 12-lead electrocardiogram (ECG) provides a method for non-invasive identification of the presence of arrhythmia, due to irregularity in the ECG signal associated with atrial activation compared to sinus rhythm, but has limitations in providing specific spatial information. There is therefore a pressing need to develop novel methods to identify and locate the origin of arrhythmic excitation. Invasive methods provide direct information on atrial activity, but may induce clinical complications. Non-invasive methods avoid such complications, but their development presents a greater challenge due to the non-direct nature of monitoring. Algorithms based on the ECG signals in multiple leads (e.g. a 64-lead vest) may provide a viable approach. In this study, we used a biophysically detailed model of the human atria and torso to investigate the correlation between the morphology of the ECG signals from a 64-lead vest and the location of the origin of rapid atrial excitation arising from rapid focal activity and/or re-entrant scroll waves. A focus-location algorithm was then constructed from this correlation. The algorithm had success rates of 93% and 76% for correctly identifying the origin of focal and re-entrant excitation, respectively, with a spatial resolution of 40 mm. The general approach allows its application to any multi-lead ECG system. This represents a significant extension to our previously developed algorithms to predict the AF origins in association with focal activities.
Multi-Array Detection, Association and Location of Infrasound and Seismo-Acoustic Events in Utah
2008-09-30
Excerpt: techniques for detecting, associating, and locating infrasound signals at single and multiple arrays and then combining the processed results with … One of the explosions was detected and located by both infrasound and seismic instruments (Figure 3), with infrasound signals from that explosion recorded at all three arrays.
Developing a system for blind acoustic source localization and separation
NASA Astrophysics Data System (ADS)
Kulkarni, Raghavendra
This dissertation presents innovative methodologies for locating, extracting, and separating multiple incoherent sound sources in three-dimensional (3D) space, and applications of the time reversal (TR) algorithm to pinpoint the hyperactive neural activities inside the brain auditory structure that are correlated to the tinnitus pathology. Specifically, an acoustic modeling based method is developed for locating arbitrary and incoherent sound sources in 3D space in real time by using a minimal number of microphones, and the Point Source Separation (PSS) method is developed for extracting target signals from directly measured mixed signals. Combining these two approaches leads to a novel technology known as Blind Sources Localization and Separation (BSLS) that enables one to locate multiple incoherent sound signals in 3D space and separate original individual sources simultaneously, based on the directly measured mixed signals. These technologies have been validated through numerical simulations and experiments conducted in various non-ideal environments where there are non-negligible, unspecified sound reflections and reverberation as well as interference from random background noise. Another innovation presented in this dissertation concerns applications of the TR algorithm to pinpoint the exact locations of hyperactive neurons in the brain auditory structure that are directly correlated to the tinnitus perception. Benchmark tests conducted on normal rats have confirmed the localization results provided by the TR algorithm. Results demonstrate that the spatial resolution of this source localization can be as high as the micrometer level. This high precision localization may lead to a paradigm shift in tinnitus diagnosis, which may in turn produce a more cost-effective treatment for tinnitus than any of the existing ones.
Analysis and synthesis of intonation using the Tilt model.
Taylor, P
2000-03-01
This paper introduces the Tilt intonational model and describes how this model can be used to automatically analyze and synthesize intonation. In the model, intonation is represented as a linear sequence of events, which can be pitch accents or boundary tones. Each event is characterized by continuous parameters representing amplitude, duration, and tilt (a measure of the shape of the event). The paper describes an event detector, in effect an intonational recognition system, which produces a transcription of an utterance's intonation. The features and parameters of the event detector are discussed and performance figures are shown on a variety of read and spontaneous speaker independent conversational speech databases. Given the event locations, algorithms are described which produce an automatic analysis of each event in terms of the Tilt parameters. Synthesis algorithms are also presented which generate F0 contours from Tilt representations. The accuracy of these is shown by comparing synthetic F0 contours to real F0 contours. The paper concludes with an extensive discussion on linguistic representations of intonation and gives evidence that the Tilt model goes a long way to satisfying the desired goals of such a representation in that it has the right number of degrees of freedom to be able to describe and synthesize intonation accurately.
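For readers unfamiliar with the Tilt parameterization, the short Python sketch below computes amplitude, duration, and tilt for a single event from its rise and fall excursions, following the commonly cited formulation of the model; the exact equations and names here are an assumption for illustration rather than a quotation from the paper.

def tilt_parameters(a_rise, a_fall, d_rise, d_fall):
    # a_rise, a_fall: F0 excursions of the rise and fall (Hz);
    # d_rise, d_fall: their durations (s).
    amplitude = abs(a_rise) + abs(a_fall)
    duration = d_rise + d_fall
    tilt_amp = (abs(a_rise) - abs(a_fall)) / amplitude if amplitude else 0.0
    tilt_dur = (d_rise - d_fall) / duration if duration else 0.0
    return {"amplitude": amplitude,
            "duration": duration,
            "tilt": 0.5 * (tilt_amp + tilt_dur)}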
The Power of Flexibility: Autonomous Agents That Conserve Energy in Commercial Buildings
NASA Astrophysics Data System (ADS)
Kwak, Jun-young
Agent-based systems for energy conservation are now a growing area of research in multiagent systems, with applications ranging from energy management and control on the smart grid, to energy conservation in residential buildings, to energy generation and dynamic negotiations in distributed rural communities. Contributing to this area, my thesis presents new agent-based models and algorithms aiming to conserve energy in commercial buildings. More specifically, my thesis provides three sets of algorithmic contributions. First, I provide online predictive scheduling algorithms to handle massive numbers of meeting/event scheduling requests considering flexibility , which is a novel concept for capturing generic user constraints while optimizing the desired objective. Second, I present a novel BM-MDP ( Bounded-parameter Multi-objective Markov Decision Problem) model and robust algorithms for multi-objective optimization under uncertainty both at the planning and execution time. The BM-MDP model and its robust algorithms are useful in (re)scheduling events to achieve energy efficiency in the presence of uncertainty over user's preferences. Third, when multiple users contribute to energy savings, fair division of credit for such savings to incentivize users for their energy saving activities arises as an important question. I appeal to cooperative game theory and specifically to the concept of Shapley value for this fair division. Unfortunately, scaling up this Shapley value computation is a major hindrance in practice. Therefore, I present novel approximation algorithms to efficiently compute the Shapley value based on sampling and partitions and to speed up the characteristic function computation. These new models have not only advanced the state of the art in multiagent algorithms, but have actually been successfully integrated within agents dedicated to energy efficiency: SAVES, TESLA and THINC. SAVES focuses on the day-to-day energy consumption of individuals and groups in commercial buildings by reactively suggesting energy conserving alternatives. TESLA takes a long-range planning perspective and optimizes overall energy consumption of a large number of group events or meetings together. THINC provides an end-to-end integration within a single agent of energy efficient scheduling, rescheduling and credit allocation. While SAVES, TESLA and THINC thus differ in their scope and applicability, they demonstrate the utility of agent-based systems in actually reducing energy consumption in commercial buildings. I evaluate my algorithms and agents using extensive analysis on data from over 110,000 real meetings/events at multiple educational buildings including the main libraries at the University of Southern California. I also provide results on simulations and real-world experiments, clearly demonstrating the power of agent technology to assist human users in saving energy in commercial buildings.
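As a sketch of the sampling-based Shapley value approximation mentioned above, the Python function below averages marginal contributions over random orderings of the users; the characteristic function is a caller-supplied black box, and the sample count and names are illustrative assumptions rather than details of the THINC implementation.

import random

def shapley_by_sampling(players, value, n_samples=1000, seed=0):
    # value: characteristic function mapping a frozenset of players to the
    # total energy savings attributed to that coalition.
    rng = random.Random(seed)
    phi = {p: 0.0 for p in players}
    for _ in range(n_samples):
        order = list(players)
        rng.shuffle(order)
        coalition, prev = [], value(frozenset())
        for p in order:
            coalition.append(p)
            cur = value(frozenset(coalition))
            phi[p] += cur - prev          # marginal contribution of p
            prev = cur
    return {p: total / n_samples for p, total in phi.items()}

# e.g. shapley_by_sampling(["alice", "bob"], lambda s: 10.0 * len(s))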
Status of the Desert Fireball Network
NASA Astrophysics Data System (ADS)
Devillepoix, H. A. R.; Bland, P. A.; Towner, M. C.; Cupák, M.; Sansom, E. K.; Jansen-Sturgeon, T.; Howie, R. M.; Paxman, J.; Hartig, B. A. D.
2016-01-01
A meteorite fall precisely observed from multiple locations allows us to track the object back to the region of the Solar System it came from, and sometimes link it with a parent body, providing context information that helps trace the history of the Solar System. The Desert Fireball Network (DFN) is built in arid areas of Australia: its observatories get favorable observing conditions, and meteorite recovery is eased thanks to the mostly featureless terrain. After the successful recovery of two meteorites with 4 film cameras, the DFN has now switched to a digital network, operating 51 cameras covering 2.5 million km2 of double-station triangulable area. Mostly made of off-the-shelf components, the new observatories are cost effective while maintaining high imaging performance. To process the data (~70 TB/month), a significant effort has been put into writing an automated reduction pipeline so that all events are reduced with little human intervention. Innovative techniques have been implemented for this purpose: machine learning algorithms for event detection, blind astrometric calibration, and particle filter simulations to estimate both the physical properties and the state vector of the meteoroid. On 31 December 2015, the first meteorite from the digital systems was recovered: Murrili (the 1.68 kg H5 ordinary chondrite was observed to fall on 27 November 2015). Another 11 events have been flagged as potential meteorite droppers and are to be searched in the coming months.
Kim, Mary S.; Tsutsui, Kenta; Stern, Michael D.; Lakatta, Edward G.; Maltsev, Victor A.
2017-01-01
Local Ca2+ Releases (LCRs) are crucial events involved in cardiac pacemaker cell function. However, specific algorithms for automatic LCR detection and analysis have not been developed in live, spontaneously beating pacemaker cells. In the present study we measured LCRs using a high-speed 2D-camera in spontaneously contracting sinoatrial (SA) node cells isolated from rabbit and guinea pig and developed a new algorithm capable of detecting and analyzing the LCRs spatially in two-dimensions, and in time. Our algorithm tracks points along the midline of the contracting cell. It uses these points as a coordinate system for affine transform, producing a transformed image series where the cell does not contract. Action potential-induced Ca2+ transients and LCRs were thereafter isolated from recording noise by applying a series of spatial filters. The LCR birth and death events were detected by a differential (frame-to-frame) sensitivity algorithm applied to each pixel (cell location). An LCR was detected when its signal changes sufficiently quickly within a sufficiently large area. The LCR is considered to have died when its amplitude decays substantially, or when it merges into the rising whole cell Ca2+ transient. Ultimately, our algorithm provides major LCR parameters such as period, signal mass, duration, and propagation path area. As the LCRs propagate within live cells, the algorithm identifies splitting and merging behaviors, indicating the importance of locally propagating Ca2+-induced-Ca2+-release for the fate of LCRs and for generating a powerful ensemble Ca2+ signal. Thus, our new computer algorithms eliminate motion artifacts and detect 2D local spatiotemporal events from recording noise and global signals. While the algorithms were developed to detect LCRs in sinoatrial nodal cells, they have the potential to be used in other applications in biophysics and cell physiology, for example, to detect Ca2+ wavelets (abortive waves), sparks and embers in muscle cells and Ca2+ puffs and syntillas in neurons. PMID:28683095
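A stripped-down version of the frame-differential detection step described above can be written as follows in Python; the rate threshold, minimum area, and the assumption that motion correction has already been applied are illustrative choices, not the authors' published settings.

import numpy as np
from scipy import ndimage

def detect_lcr_births(stack, rate_thresh, min_area):
    # stack: (n_frames, ny, nx) motion-corrected fluorescence images.
    # A birth is flagged where the frame-to-frame increase exceeds
    # rate_thresh over a connected region of at least min_area pixels.
    births = []
    diff = np.diff(stack.astype(float), axis=0)
    for k, frame in enumerate(diff):
        mask = frame > rate_thresh
        labels, n = ndimage.label(mask)
        if n == 0:
            continue
        sizes = ndimage.sum(mask, labels, range(1, n + 1))
        keep_ids = np.nonzero(sizes >= min_area)[0] + 1
        births.append((k + 1, labels * np.isin(labels, keep_ids)))
    return births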
SLAMM: Visual monocular SLAM with continuous mapping using multiple maps
Md. Sabri, Aznul Qalid; Loo, Chu Kiong; Mansoor, Ali Mohammed
2018-01-01
This paper presents the concept of Simultaneous Localization and Multi-Mapping (SLAMM). It is a system that ensures continuous mapping and information preservation despite failures in tracking due to corrupted frames or sensor malfunction, making it suitable for real-world applications. It works with single or multiple robots. In a single-robot scenario the algorithm generates a new map at the time of tracking failure, and later it merges maps in the event of loop closure. Similarly, maps generated from multiple robots are merged without prior knowledge of their relative poses, which makes this algorithm flexible. The system works in real time at frame-rate speed. The proposed approach was tested on the KITTI and TUM RGB-D public datasets and it showed superior results compared to the state of the art in calibrated visual monocular keyframe-based SLAM. The mean tracking time is around 22 milliseconds. The initialization is twice as fast as it is in ORB-SLAM, and the retrieved map can reach up to 90 percent more in terms of information preservation depending on tracking loss and loop closure events. For the benefit of the community, the source code along with a framework to be run with the Bebop drone are made available at https://github.com/hdaoud/ORBSLAMM. PMID:29702697
Sparse reconstruction localization of multiple acoustic emissions in large diameter pipelines
NASA Astrophysics Data System (ADS)
Dubuc, Brennan; Ebrahimkhanlou, Arvin; Salamone, Salvatore
2017-04-01
A sparse reconstruction localization method is proposed, which is capable of localizing multiple acoustic emission events occurring closely in time. The events may be due to a number of sources, such as the growth of corrosion patches or cracks. Such acoustic emissions may yield localization failure if a triangulation method is used. The proposed method is implemented both theoretically and experimentally on large diameter thin-walled pipes. Experimental examples are presented, which demonstrate the failure of a triangulation method when multiple sources are present in this structure, while highlighting the capabilities of the proposed method. The examples are generated from experimental data of simulated acoustic emission events. The data correspond to helical guided ultrasonic waves generated in a 3 m long large diameter pipe by pencil lead breaks on its outer surface. Acoustic emission waveforms are recorded by six sparsely distributed low-profile piezoelectric transducers instrumented on the outer surface of the pipe. The same array of transducers is used for both the proposed and the triangulation method. It is demonstrated that the proposed method is able to localize multiple events occurring closely in time. Furthermore, the matching pursuit algorithm and the basis pursuit denoising approach are each evaluated as potential numerical tools in the proposed sparse reconstruction method.
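To illustrate the sparse-reconstruction idea, the Python sketch below runs a plain matching-pursuit loop over a dictionary whose columns hold the predicted sensor responses for candidate source locations; building that dictionary from helical guided-wave travel times, and all names and tolerances here, are assumptions made for the example.

import numpy as np

def matching_pursuit(D, y, max_events, tol=1e-6):
    # D: (n_measurements, n_candidate_locations) dictionary, unit-norm columns.
    # y: measurement vector. Non-zero entries of x mark localized events.
    residual = np.array(y, dtype=float)
    x = np.zeros(D.shape[1])
    for _ in range(max_events):
        correlations = D.T @ residual
        k = int(np.argmax(np.abs(correlations)))
        x[k] += correlations[k]
        residual = residual - correlations[k] * D[:, k]
        if np.linalg.norm(residual) < tol:
            break
    return x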
Compressed sensing based missing nodes prediction in temporal communication network
NASA Astrophysics Data System (ADS)
Cheng, Guangquan; Ma, Yang; Liu, Zhong; Xie, Fuli
2018-02-01
The reconstruction of complex network topology is of great theoretical and practical significance. Most research so far focuses on the prediction of missing links. There are many mature algorithms for link prediction which have achieved good results, but research on the prediction of missing nodes has just begun. In this paper, we propose an algorithm for missing node prediction in complex networks. We detect the position of missing nodes based on their neighbor nodes under the theory of compressed sensing, and extend the algorithm to the case of multiple missing nodes using spectral clustering. Experiments on real public network datasets and simulated datasets show that our algorithm can detect the locations of hidden nodes effectively with high precision.
2013-03-01
… on the globe. In its mission to achieve information superiority, AFTAC has historically combined data garnered from seismic and infrasound networks … to improve location estimates for nuclear events. For instance, underground explosions produce seismic waves that can couple into the atmosphere.
NASA Astrophysics Data System (ADS)
Che, Il-Young; Jeon, Jeong-Soo
2010-05-01
The Korea Institute of Geoscience and Mineral Resources (KIGAM) operates an infrasound network consisting of seven seismo-acoustic arrays in South Korea. Development of the arrays began in 1999, partially in collaboration with Southern Methodist University, with the goal of detecting distant infrasound signals from natural and anthropogenic phenomena in and around the Korean Peninsula. The main operational purpose of this network is to discriminate man-made seismic events from the regional seismicity, which includes thousands of seismic events per year. Man-made seismic events are a major cause of error in estimating natural seismicity, especially where seismic activity is weak or moderate, as in the Korean Peninsula. To discriminate man-made explosions from earthquakes, we have applied a seismo-acoustic analysis that associates seismic and infrasonic signals generated by surface explosions. Observations of infrasound at multiple arrays make it possible to identify surface explosions, because small or moderate earthquakes do not generate appreciable infrasound. To date, we have discriminated hundreds of seismic events per year in the seismological catalog as surface explosions using this seismo-acoustic analysis. Besides surface explosions, the network has also detected infrasound signals from other sources, such as bolides, typhoons, rocket launches, and an underground nuclear test that occurred in and around the Korean Peninsula. In this study, ten years of seismo-acoustic data are reviewed with a recent infrasonic detection algorithm and an association method that is now linked to the seismic monitoring system of KIGAM to increase the detection rate of surface explosions. We present the long-term results of the seismo-acoustic analysis, the detection capability of the multiple arrays, and implications for seismic source location. Since seismo-acoustic analysis has proven to be a reliable method for discriminating surface explosions, it will continue to be used for estimating natural seismicity and understanding infrasonic sources.
The First Fermi-GBM Terrestrial Gamma Ray Flash Catalog
NASA Astrophysics Data System (ADS)
Roberts, O. J.; Fitzpatrick, G.; Stanbro, M.; McBreen, S.; Briggs, M. S.; Holzworth, R. H.; Grove, J. E.; Chekhtman, A.; Cramer, E. S.; Mailyan, B. G.
2018-05-01
We present the first Fermi Space Telescope Gamma Ray Burst Monitor (GBM) catalog of 4,144 terrestrial gamma ray flashes (TGFs), detected from launch on 11 July 2008 through 31 July 2016. We discuss the updates and improvements to the triggered data and off-line search algorithms, comparing this improved detection rate of ˜800 TGFs per year with event rates from previously published TGF catalogs from other missions. A Bayesian block algorithm calculated the temporal and spectral properties of the TGFs, revealing a delay between the hard (>300 keV) and soft (≤300 keV) photons of around 27 μs. Detector count rates of "low-fluence" events were found to exceed 150 kHz on average. Searching the World-Wide Lightning Location Network data for radio sferics within ±5 min of each TGF revealed a clean sample of 1,314 World-Wide Lightning Location Network locations, which were used to accurately locate TGF-producing storms. It also revealed lightning and storm activity for specific regions, as well as seasonal and daily variations of global lightning patterns. Correcting for the orbit of Fermi, we quantitatively find a marginal excess of TGFs being produced from storms over land near oceans (i.e., narrow isthmuses and small islands). No difference was observed between the duration of TGFs over the ocean and land. The distributions of TGFs at a given local solar time for predefined American, Asian, and African regions were confirmed to correlate well with known regional lightning rates.
NASA Technical Reports Server (NTRS)
Starr, Stanley O.
1998-01-01
NASA, at the John F. Kennedy Space Center (KSC), developed and operates a unique high-precision lightning location system to provide lightning-related weather warnings. These warnings are used to stop lightning- sensitive operations such as space vehicle launches and ground operations where equipment and personnel are at risk. The data is provided to the Range Weather Operations (45th Weather Squadron, U.S. Air Force) where it is used with other meteorological data to issue weather advisories and warnings for Cape Canaveral Air Station and KSC operations. This system, called Lightning Detection and Ranging (LDAR), provides users with a graphical display in three dimensions of 66 megahertz radio frequency events generated by lightning processes. The locations of these events provide a sound basis for the prediction of lightning hazards. This document provides the basis for the design approach and data analysis for a system of radio frequency receivers to provide azimuth and elevation data for lightning pulses detected simultaneously by the LDAR system. The intent is for this direction-finding system to correct and augment the data provided by LDAR and, thereby, increase the rate of valid data and to correct or discard any invalid data. This document develops the necessary equations and algorithms, identifies sources of systematic errors and means to correct them, and analyzes the algorithms for random error. This data analysis approach is not found in the existing literature and was developed to facilitate the operation of this Short Baseline LDAR (SBLDAR). These algorithms may also be useful for other direction-finding systems using radio pulses or ultrasonic pulse data.
NASA Astrophysics Data System (ADS)
Vecherin, Sergey N.; Wilson, D. Keith; Pettit, Chris L.
2010-04-01
Determination of an optimal configuration (numbers, types, and locations) of a sensor network is an important practical problem. In most applications, complex signal propagation effects and inhomogeneous coverage preferences lead to an optimal solution that is highly irregular and nonintuitive. The general optimization problem can be strictly formulated as a binary linear programming problem. Due to the combinatorial nature of this problem, however, its strict solution requires significant computational resources (NP-complete class of complexity) and is unobtainable for large spatial grids of candidate sensor locations. For this reason, a greedy algorithm for approximate solution was recently introduced [S. N. Vecherin, D. K. Wilson, and C. L. Pettit, "Optimal sensor placement with terrain-based constraints and signal propagation effects," Unattended Ground, Sea, and Air Sensor Technologies and Applications XI, SPIE Proc. Vol. 7333, paper 73330S (2009)]. Here further extensions to the developed algorithm are presented to include such practical needs and constraints as sensor availability, coverage by multiple sensors, and wireless communication of the sensor information. Both communication and detection are considered in a probabilistic framework. Communication signal and signature propagation effects are taken into account when calculating probabilities of communication and detection. Comparison of approximate and strict solutions on reduced-size problems suggests that the approximate algorithm yields quick and good solutions, which thus justifies using that algorithm for full-size problems. Examples of three-dimensional outdoor sensor placement are provided using a terrain-based software analysis tool.
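The greedy placement loop referred to above can be sketched as follows in Python: at each step the candidate sensor that most increases the weighted probabilistic coverage is added. The independence assumption for missed detections, and all array names, are simplifications introduced for this illustration.

import numpy as np

def greedy_placement(p_detect, weights, n_sensors):
    # p_detect: (n_candidates, n_cells) detection probability of candidate
    # sensor i for grid cell j; weights: (n_cells,) coverage preferences.
    n_candidates, n_cells = p_detect.shape
    miss = np.ones(n_cells)            # P(no chosen sensor detects the cell)
    chosen = []
    for _ in range(n_sensors):
        # expected gain in weighted coverage from adding each candidate
        gains = (p_detect * (miss * weights)).sum(axis=1)
        if chosen:
            gains[chosen] = -np.inf    # never re-select a placed sensor
        best = int(np.argmax(gains))
        chosen.append(best)
        miss *= 1.0 - p_detect[best]
    return chosen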
BiPACE 2D--graph-based multiple alignment for comprehensive 2D gas chromatography-mass spectrometry.
Hoffmann, Nils; Wilhelm, Mathias; Doebbe, Anja; Niehaus, Karsten; Stoye, Jens
2014-04-01
Comprehensive 2D gas chromatography-mass spectrometry is an established method for the analysis of complex mixtures in analytical chemistry and metabolomics. It produces large amounts of data that require semiautomatic, but preferably automatic handling. This involves the location of significant signals (peaks) and their matching and alignment across different measurements. To date, there exist only a few openly available algorithms for the retention time alignment of peaks originating from such experiments that scale well with increasing sample and peak numbers, while providing reliable alignment results. We describe BiPACE 2D, an automated algorithm for retention time alignment of peaks from 2D gas chromatography-mass spectrometry experiments and evaluate it on three previously published datasets against the mSPA, SWPA and Guineu algorithms. We also provide a fourth dataset from an experiment studying the H2 production of two different strains of Chlamydomonas reinhardtii that is available from the MetaboLights database together with the experimental protocol, peak-detection results and manually curated multiple peak alignment for future comparability with newly developed algorithms. BiPACE 2D is contained in the freely available Maltcms framework, version 1.3, hosted at http://maltcms.sf.net, under the terms of the L-GPL v3 or Eclipse Open Source licenses. The software used for the evaluation along with the underlying datasets is available at the same location. The C.reinhardtii dataset is freely available at http://www.ebi.ac.uk/metabolights/MTBLS37.
NASA Astrophysics Data System (ADS)
Gibbons, S. J.; Pabian, F.; Näsholm, S. P.; Kværna, T.; Mykkeltveit, S.
2017-01-01
Declared North Korean nuclear tests in 2006, 2009, 2013 and 2016 were observed seismically at regional and teleseismic distances. Waveform similarity allows the events to be located relatively with far greater accuracy than the absolute locations can be determined from seismic data alone. There is now significant redundancy in the data given the large number of regional and teleseismic stations that have recorded multiple events, and relative location estimates can be confirmed independently by performing calculations on many mutually exclusive sets of measurements. Using a 1-D global velocity model, the distances between the events estimated using teleseismic P phases are found to be approximately 25 per cent shorter than the distances between events estimated using regional Pn phases. The 2009, 2013 and 2016 events all take place within 1 km of each other and the discrepancy between the regional and teleseismic relative location estimates is no more than about 150 m. The discrepancy is much more significant when estimating the location of the more distant 2006 event relative to the later explosions with regional and teleseismic estimates varying by many hundreds of metres. The relative location of the 2006 event is challenging given the smaller number of observing stations, the lower signal-to-noise ratio and significant waveform dissimilarity at some regional stations. The 2006 event is however highly significant in constraining the absolute locations in the terrain at the Punggye-ri test-site in relation to observed surface infrastructure. For each seismic arrival used to estimate the relative locations, we define a slowness scaling factor which multiplies the gradient of seismic traveltime versus distance, evaluated at the source, relative to the applied 1-D velocity model. A procedure for estimating correction terms which reduce the double-difference time residual vector norms is presented together with a discussion of the associated uncertainty. The modified velocity gradients reduce the residuals, the relative location uncertainties and the sensitivity to the combination of stations used. The traveltime gradients appear to be overestimated for the regional phases, and teleseismic relative location estimates are likely to be more accurate despite an apparent lower precision. Calibrations for regional phases are essential given that smaller magnitude events are likely not to be recorded teleseismically. We discuss the implications for the absolute event locations. Placing the 2006 event under a local maximum of overburden at 41.293°N, 129.105°E would imply a location of 41.299°N, 129.075°E for the January 2016 event, providing almost optimal overburden for the later four events.
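A bare-bones version of the linearized relative-location step, with the slowness (travel-time gradient) scaling factor applied as described above, might look like the Python sketch below; the two-component horizontal parameterization and the variable names are assumptions for illustration, not the authors' full double-difference formulation.

import numpy as np

def relative_offset(tt_gradients, dt_observed, slowness_scale=1.0):
    # tt_gradients: (n_obs, 2) gradient of travel time with respect to the
    # source position (s/km, east and north components) from a 1-D model.
    # dt_observed: (n_obs,) differential arrival times of event B minus A (s).
    # slowness_scale multiplies the model travel-time gradients.
    G = slowness_scale * np.asarray(tt_gradients, dtype=float)
    d = np.asarray(dt_observed, dtype=float)
    offset, *_ = np.linalg.lstsq(G, d, rcond=None)   # (east, north) offset, km
    residuals = d - G @ offset
    return offset, residuals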
NASA Astrophysics Data System (ADS)
Schellenberg, Graham; Stortz, Greg; Goertzen, Andrew L.
2016-02-01
A typical positron emission tomography detector is comprised of a scintillator crystal array coupled to a photodetector array or other position sensitive detector. Such detectors using light sharing to read out crystal elements require the creation of a crystal lookup table (CLUT) that maps the detector response to the crystal of interaction based on the x-y position of the event calculated through Anger-type logic. It is vital for system performance that these CLUTs be accurate so that the location of events can be accurately identified and so that crystal-specific corrections, such as energy windowing or time alignment, can be applied. While using manual segmentation of the flood image to create the CLUT is a simple and reliable approach, it is both tedious and time consuming for systems with large numbers of crystal elements. In this work we describe the development of an automated algorithm for CLUT generation that uses a Gaussian mixture model paired with thin plate splines (TPS) to iteratively fit a crystal layout template that includes the crystal numbering pattern. Starting from a region of stability, Gaussians are individually fit to data corresponding to crystal locations while simultaneously updating a TPS for predicting future Gaussian locations at the edge of a region of interest that grows as individual Gaussians converge to crystal locations. The algorithm was tested with flood image data collected from 16 detector modules, each consisting of a 409-crystal dual-layer offset LYSO crystal array read out by a 32-pixel SiPM array. For these detector flood images, depending on user defined input parameters, the algorithm runtime ranged from 17.5 to 82.5 s per detector on a single core of an Intel i7 processor. The method maintained an accuracy above 99.8% across all tests, with the majority of errors being localized to error prone corner regions. This method can be easily extended for use with other detector types through adjustment of the initial template model used.
A consistent and uniform research earthquake catalog for the AlpArray region: preliminary results.
NASA Astrophysics Data System (ADS)
Molinari, I.; Bagagli, M.; Kissling, E. H.; Diehl, T.; Clinton, J. F.; Giardini, D.; Wiemer, S.
2017-12-01
The AlpArray initiative (www.alparray.ethz.ch) is a large-scale European collaboration (~50 institutes involved) to study the entire Alpine orogen at high resolution with a variety of geoscientific methods. AlpArray provides unprecedentedly uniform station coverage for the region with more than 650 broadband seismic stations, 300 of which are temporary. The AlpArray Seismic Network (AASN) is a joint effort of 25 institutes from 10 nations; it has been operating since January 2016 and is expected to continue until the end of 2018. In this study, we establish a uniform earthquake catalog for the Greater Alpine region during the operation period of the AASN with a target completeness of M2.5. The catalog has two main goals: 1) to calculate consistent and precise hypocenter locations, and 2) to provide preliminary but uniform magnitude estimates across the region. The procedure is based on automatic high-quality P- and S-wave pickers, providing consistent phase arrival times in combination with a picking quality assessment. First, we detect all events in the region in 2016/2017 using an STA/LTA based detector. Among the detected events, we select 50 geographically homogeneously distributed events with magnitudes ≥2.5 that are representative of the entire catalog. We manually pick the selected events to establish a consistent P- and S-phase reference data set, including arrival-time uncertainties. The reference data are used to adjust the automatic pickers and to assess their performance. In a first iteration, a simple P-picker algorithm is applied to the entire dataset, providing initial picks for the advanced MannekenPix (MPX) algorithm. In a second iteration, the MPX picker provides consistent and reliable automatic first-arrival P picks together with a pick-quality estimate. The derived automatic P picks are then used as initial values for a multi-component S-phase picking algorithm. Subsequently, automatic picks of all well-locatable earthquakes will be considered to calculate final minimum 1D P and S velocity models for the region with appropriate station corrections. Finally, all the events are relocated with the NonLinLoc algorithm in combination with the updated 1D models. The proposed procedure represents the first step towards a uniform earthquake catalog for the entire Greater Alpine region using the AASN.
Adam, J.
2016-01-19
ALICE is one of four large experiments at the CERN Large Hadron Collider near Geneva, specially designed to study particle production in ultra-relativistic heavy-ion collisions. Located 52 meters underground with 28 meters of overburden rock, it has also been used to detect muons produced by cosmic ray interactions in the upper atmosphere. Here, we present the multiplicity distribution of these atmospheric muons and its comparison with Monte Carlo simulations. Our analysis exploits the large size and excellent tracking capability of the ALICE Time Projection Chamber. A special emphasis is given to the study of high multiplicity events containing more than 100 reconstructed muons and corresponding to a muon areal density ρμ > 5.9 m⁻². Similar events have been studied in previous underground experiments such as ALEPH and DELPHI at LEP. While these experiments were able to reproduce the measured muon multiplicity distribution with Monte Carlo simulations at low and intermediate multiplicities, their simulations failed to describe the frequency of the highest multiplicity events. In this work we show that the high multiplicity events observed in ALICE stem from primary cosmic rays with energies above 10^16 eV and that the frequency of these events can be successfully described by assuming a heavy mass composition of primary cosmic rays in this energy range. Furthermore, the development of the resulting air showers was simulated using the latest version of QGSJET to model hadronic interactions. This observation places significant constraints on alternative, more exotic, production mechanisms for these events.
A Bayesian additive model for understanding public transport usage in special events.
Rodrigues, Filipe; Borysov, Stanislav; Ribeiro, Bernardete; Pereira, Francisco
2016-12-02
Public special events, like sports games, concerts and festivals, are well known to create disruptions in transportation systems, often catching the operators by surprise. Although these are usually planned well in advance, their impact is difficult to predict, even when organisers and transportation operators coordinate. The problem is compounded when several events happen concurrently. To solve these problems, costly processes, heavily reliant on manual search and personal experience, are usual practice in large cities like Singapore, London or Tokyo. This paper presents a Bayesian additive model with Gaussian process components that combines smart card records from public transport with context information about events that is continuously mined from the Web. We develop an efficient approximate inference algorithm using expectation propagation, which allows us to predict the total number of public transportation trips to the special event areas, thereby contributing to a more adaptive transportation system. Furthermore, for multiple concurrent event scenarios, the proposed algorithm is able to disaggregate gross trip counts into their most likely components related to specific events and routine behavior. Using real data from Singapore, we show that the presented model outperforms the best baseline model by up to 26% in R2 and also has explanatory power for its individual components.
Development of an Inverse Algorithm for Resonance Inspection
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lai, Canhai; Xu, Wei; Sun, Xin
2012-10-01
Resonance inspection (RI), which employs the shift in natural frequency spectra between the good and the anomalous part populations to detect defects, is a non-destructive evaluation (NDE) technique with many advantages, such as low inspection cost, high testing speed, and broad applicability to structures with complex geometry, compared to other contemporary NDE methods. It has already been widely used in the automobile industry for quality inspections of safety critical parts. Unlike some conventionally used NDE methods, the current RI technology is unable to provide details, i.e., the location, dimensions, or type of the flaws in discrepant parts. Such a limitation severely hinders its widespread application and further development. In this study, an inverse RI algorithm based on a maximum correlation function is proposed to quantify the location and size of flaws in a discrepant part. Dog-bone shaped stainless steel samples with and without controlled flaws are used for algorithm development and validation. The results show that multiple flaws can be accurately pinpointed using the developed algorithms, and that the prediction accuracy decreases with increasing flaw numbers and decreasing distance between flaws.
NASA Astrophysics Data System (ADS)
Weber, R. C.; Dimech, J. L.; Phillips, D.; Molaro, J.; Schmerr, N. C.
2017-12-01
Apollo 17's Lunar Seismic Profiling Experiment's (LSPE) primary objective was to constrain the near-surface velocity structure at the landing site using active sources detected by a 100 m-wide triangular geophone array. The experiment was later operated in "listening mode," and early studies of these data revealed the presence of thermal moonquakes - short-duration seismic events associated with terminator crossings. However, the full data set has never been systematically analyzed for natural seismic signal content. In this study, we analyze 8 months of continuous LSPE data using an automated event detection technique that has previously successfully been applied to the Apollo 16 Passive Seismic Experiment data. We detected 50,000 thermal moonquakes from three distinct event templates, representing impulsive, intermediate, and emergent onset of seismic energy, which we interpret as reflecting their relative distance from the array. Impulsive events occur largely at sunrise, possibly representing the thermal "pinging" of the nearby lunar lander, while emergent events occur at sunset, possibly representing cracking or slumping in more distant surface rocks and regolith. Preliminary application of an iterative event location algorithm to a subset of the impulsive waveforms supports this interpretation. We also perform 3D modeling of the lunar surface to explore the relative contribution of the lander, known rocks and surrounding topography to the thermal state of the regolith in the vicinity of the Apollo 17 landing site over the course of the lunar diurnal cycle. Further development of both this model and the event location algorithm may permit definitive discrimination between different types of local diurnal events e.g. lander noise, thermally-induced rock breakdown, or fault creep on the nearby Lee-Lincoln scarp. These results could place important constraints on both the contribution of seismicity to regolith production, and the age of young lobate scarps.
Hydro-fractured reservoirs: A study using double-difference location techniques
NASA Astrophysics Data System (ADS)
Kahn, Dan Scott
The mapping of induced seismicity in enhanced geothermal systems presents the best tool available for understanding the resulting hydro-fractured reservoir. In this thesis, two geothermal systems are studied: one in Krafla, Iceland, and the other in Basel, Switzerland. The purpose of the Krafla survey was to determine the relation between water injection into the fault system and the resulting earthquakes and fluid pressure in the subsurface crack system. The epicenters obtained from analyzing the seismic data gave a set of locations that are aligned along the border of a high resistivity zone ˜2500 meters below the injection well. Further magneto-telluric/seismic-data correlation was seen in the polarity of the cracks through shear wave splitting. The purpose of the Basel project was to examine the creation of a reservoir by the initial stimulation, using an injection well bored to 5000 meters. This stimulation triggered an M3.4 event, extending the normal range of event sizes commonly incurred in hydro-fractured reservoirs. To monitor the seismic activity, 6 seismometer sondes were deployed at depths from 317 to 2740 meters below the ground surface. During the seven-day period, over 13,000 events were recorded and approximately 3,300 were located. These events were first located by single-difference techniques. Subsequently, after calculating their cross-correlation coefficients, clusters of events were relocated using a double-difference algorithm. The event locations support the existence of a narrow reservoir spreading from the injection well. Analysis of the seismic data indicates that the reservoir grew at a uniform rate punctuated by fluctuations at the times of larger events, perhaps caused by sudden changes in pressure. The orientation and size of the main fracture plane were found by determining focal mechanisms and locating events that were similar to the M3.4 event. To address the question of whether smaller quakes are simply larger quakes scaled down, the data set was analyzed to determine whether scaling relations held for the source parameters, including seismic moment, source dimension, stress drop, radiated energy and apparent stress. It was found that there was a breakdown in scaling for smaller quakes.
Sensor Data Quality and Angular Rate Down-Selection Algorithms on SLS EM-1
NASA Technical Reports Server (NTRS)
Park, Thomas; Oliver, Emerson; Smith, Austin
2018-01-01
The NASA Space Launch System Block 1 launch vehicle is equipped with an Inertial Navigation System (INS) and multiple Rate Gyro Assemblies (RGA) that are used in the Guidance, Navigation, and Control (GN&C) algorithms. The INS provides the inertial position, velocity, and attitude of the vehicle along with both angular rate and specific force measurements. Additionally, multiple sets of co-located rate gyros supply angular rate data. The collection of angular rate data, taken along the launch vehicle, is used to separate out vehicle motion from flexible body dynamics. Since the system architecture uses redundant sensors, the capability was developed to evaluate the health (or validity) of the independent measurements. A suite of Sensor Data Quality (SDQ) algorithms is responsible for assessing the angular rate data from the redundant sensors. When failures are detected, SDQ will take the appropriate action and disqualify or remove faulted sensors from forward processing. Additionally, the SDQ algorithms contain logic for down-selecting the angular rate data used by the GN&C software from the set of healthy measurements. This paper provides an overview of the algorithms used for both fault-detection and measurement down selection.
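One common pattern for this kind of redundant-sensor screening is sketched below: flag a channel whose reading deviates from the median of the redundant set by more than a tolerance, then down-select the mid value of the remaining healthy channels. The tolerance and the mid-value rule are assumptions for illustration; this is not the flight SDQ logic.

```python
import numpy as np

def screen_and_select(rates, healthy, tolerance=0.5):
    """rates: array of redundant angular-rate readings; healthy: boolean mask (updated in place)."""
    med = np.median(rates[healthy])
    healthy &= np.abs(rates - med) <= tolerance   # disqualify channels that disagree with the set
    selected = np.median(rates[healthy])          # mid-value down-selection from healthy channels
    return selected, healthy
```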
Data fusion for target tracking and classification with wireless sensor network
NASA Astrophysics Data System (ADS)
Pannetier, Benjamin; Doumerc, Robin; Moras, Julien; Dezert, Jean; Canevet, Loic
2016-10-01
In this paper, we address the problem of multiple ground target tracking and classification with information obtained from an unattended wireless sensor network. A multiple target tracking (MTT) algorithm, taking into account road and vegetation information, is proposed based on a centralized architecture. One of the key issues is how to adapt the classical MTT approach to satisfy embedded processing constraints. Based on track statistics, the classification algorithm uses estimated location, velocity and acceleration to help classify targets. The algorithm enables tracking humans and vehicles driving both on and off road. We integrate road or trail width and vegetation cover as constraints in the target motion models to improve constrained tracking performance with classification fusion. Our algorithm also employs different dynamic models to accommodate target maneuvers. The tracking and classification algorithms are integrated into an operational platform (the fusion node). In order to handle realistic ground target tracking scenarios, we use an autonomous smart computer deployed in the surveillance area. After the calibration step of the heterogeneous sensor network, our system is able to handle real data from a wireless ground sensor network. The performance of the system is evaluated in a real exercise for an intelligence operation ("hunter hunt" scenario).
Enhancing Breast Cancer Recurrence Algorithms Through Selective Use of Medical Record Data.
Kroenke, Candyce H; Chubak, Jessica; Johnson, Lisa; Castillo, Adrienne; Weltzien, Erin; Caan, Bette J
2016-03-01
The utility of data-based algorithms in research has been questioned because of errors in identification of cancer recurrences. We adapted previously published breast cancer recurrence algorithms, selectively using medical record (MR) data to improve classification. We evaluated second breast cancer event (SBCE) and recurrence-specific algorithms previously published by Chubak and colleagues in 1535 women from the Life After Cancer Epidemiology (LACE) and 225 women from the Women's Health Initiative cohorts and compared classification statistics to published values. We also sought to improve classification with minimal MR examination. We selected pairs of algorithms-one with high sensitivity/high positive predictive value (PPV) and another with high specificity/high PPV-using MR information to resolve discrepancies between algorithms, properly classifying events based on review; we called this "triangulation." Finally, in LACE, we compared associations between breast cancer survival risk factors and recurrence using MR data, single Chubak algorithms, and triangulation. The SBCE algorithms performed well in identifying SBCE and recurrences. Recurrence-specific algorithms performed more poorly than published except for the high-specificity/high-PPV algorithm, which performed well. The triangulation method (sensitivity = 81.3%, specificity = 99.7%, PPV = 98.1%, NPV = 96.5%) improved recurrence classification over two single algorithms (sensitivity = 57.1%, specificity = 95.5%, PPV = 71.3%, NPV = 91.9%; and sensitivity = 74.6%, specificity = 97.3%, PPV = 84.7%, NPV = 95.1%), with 10.6% MR review. Triangulation performed well in survival risk factor analyses vs analyses using MR-identified recurrences. Use of multiple recurrence algorithms in administrative data, in combination with selective examination of MR data, may improve recurrence data quality and reduce research costs.
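The "triangulation" rule can be sketched as follows: run a high-sensitivity algorithm and a high-specificity algorithm, accept their agreements automatically, and send only the disagreements to medical-record review. The function names and the review callback are hypothetical, not part of the published algorithms.

```python
def triangulate(cases, high_sens_alg, high_spec_alg, mr_review):
    """Classify each case; chart review is triggered only where the two algorithms disagree."""
    labels = {}
    for case in cases:
        a, b = high_sens_alg(case), high_spec_alg(case)
        labels[case.id] = a if a == b else mr_review(case)
    return labels
```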
Dating the Vostok ice core record by importing the Devils Hole chronology
Landwehr, J.M.; Winograd, I.J.
2001-01-01
The development of an accurate chronology for the Vostok record continues to be an open research question because these invaluable ice cores cannot be dated directly. Depth-to-age relationships have been developed using many different approaches, but published age estimates are inconsistent, even for major paleoclimatic events. We have developed a chronology for the Vostok deuterium paleotemperature record using a simple and objective algorithm to transfer ages of major paleoclimatic events from the radiometrically dated 500,000-year δ18O-paleotemperature record from Devils Hole, Nevada. The method is based only on a strong inference that major shifts in paleotemperature recorded at both locations occurred synchronously, consistent with an atmospheric teleconnection. The derived depth-to-age relationship conforms with the physics of ice compaction, and internally produces ages for climatic events 5.4 and 11.24 which are consistent with the externally assigned ages that the Vostok team needed to assume in order to derive their most recent chronology, GT4. Indeed, the resulting V-DH chronology is highly correlated with GT4 because of the unexpected correspondence even in the timing of second-order climatic events that were not constrained by the algorithm. Furthermore, the algorithm developed herein is not specific to this problem; rather, the procedure can be used whenever two paleoclimate records are proxies for the same physical phenomenon, and paleoclimatic conditions forcing the two records can be considered to have occurred contemporaneously. The ability of the algorithm to date the East Antarctic Dome Fuji core is also demonstrated.
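The basic transfer step can be sketched as tying the depths of major paleoclimatic events in one record to their radiometric ages in the other, then interpolating ages for intermediate depths. The tie-point depths and ages below are illustrative placeholders, not values from either record.

```python
import numpy as np

# Hypothetical tie points: event depths in the ice core vs. radiometric ages of the same events.
tie_depths = np.array([300.0, 1200.0, 2100.0, 3100.0])            # depths (m)
tie_ages = np.array([20_000.0, 110_000.0, 240_000.0, 420_000.0])  # ages (yr)

def depth_to_age(depths):
    """Monotone piecewise-linear depth-to-age transfer between the tie points."""
    return np.interp(depths, tie_depths, tie_ages)

ages = depth_to_age(np.array([800.0, 1500.0, 2500.0]))
```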
Charged-particle multiplicities in proton-proton collisions at √s = 0.9 to 8 TeV
NASA Astrophysics Data System (ADS)
Adam, J.; Adamová, D.; Aggarwal, M. M.; Rinella, G. Aglieri; Agnello, M.; Agrawal, N.; Ahammed, Z.; Ahmed, I.; Ahn, S. U.; Aiola, S.; Akindinov, A.; Alam, S. N.; Aleksandrov, D.; Alessandro, B.; Alexandre, D.; Molina, R. Alfaro; Alici, A.; Alkin, A.; Almaraz, J. R. M.; Alme, J.; Alt, T.; Altinpinar, S.; Altsybeev, I.; Prado, C. Alves Garcia; Andrei, C.; Andronic, A.; Anguelov, V.; Anielski, J.; Antičić, T.; Antinori, F.; Antonioli, P.; Aphecetche, L.; Appelshäuser, H.; Arcelli, S.; Arnaldi, R.; Arnold, O. W.; Arsene, I. C.; Arslandok, M.; Audurier, B.; Augustinus, A.; Averbeck, R.; Azmi, M. D.; Badalà, A.; Baek, Y. W.; Bagnasco, S.; Bailhache, R.; Bala, R.; Baldisseri, A.; Baral, R. C.; Barbano, A. M.; Barbera, R.; Barile, F.; Barnaföldi, G. G.; Barnby, L. S.; Barret, V.; Bartalini, P.; Barth, K.; Bartke, J.; Bartsch, E.; Basile, M.; Bastid, N.; Basu, S.; Bathen, B.; Batigne, G.; Camejo, A. Batista; Batyunya, B.; Batzing, P. C.; Bearden, I. G.; Beck, H.; Bedda, C.; Behera, N. K.; Belikov, I.; Bellini, F.; Martinez, H. Bello; Bellwied, R.; Belmont, R.; Belmont-Moreno, E.; Belyaev, V.; Bencedi, G.; Beole, S.; Berceanu, I.; Bercuci, A.; Berdnikov, Y.; Berenyi, D.; Bertens, R. A.; Berzano, D.; Betev, L.; Bhasin, A.; Bhat, I. R.; Bhati, A. K.; Bhattacharjee, B.; Bhom, J.; Bianchi, L.; Bianchi, N.; Bianchin, C.; Bielčík, J.; Bielčíková, J.; Bilandzic, A.; Biswas, R.; Biswas, S.; Bjelogrlic, S.; Blair, J. T.; Blau, D.; Blume, C.; Bock, F.; Bogdanov, A.; Bøggild, H.; Boldizsár, L.; Bombara, M.; Book, J.; Borel, H.; Borissov, A.; Borri, M.; Bossú, F.; Botta, E.; Böttger, S.; Bourjau, C.; Braun-Munzinger, P.; Bregant, M.; Breitner, T.; Broker, T. A.; Browning, T. A.; Broz, M.; Brucken, E. J.; Bruna, E.; Bruno, G. E.; Budnikov, D.; Buesching, H.; Bufalino, S.; Buncic, P.; Busch, O.; Buthelezi, Z.; Butt, J. B.; Buxton, J. T.; Caffarri, D.; Cai, X.; Caines, H.; Diaz, L. Calero; Caliva, A.; Villar, E. Calvo; Camerini, P.; Carena, F.; Carena, W.; Carnesecchi, F.; Castellanos, J. Castillo; Castro, A. J.; Casula, E. A. R.; Sanchez, C. Ceballos; Cepila, J.; Cerello, P.; Cerkala, J.; Chang, B.; Chapeland, S.; Chartier, M.; Charvet, J. L.; Chattopadhyay, S.; Chattopadhyay, S.; Chelnokov, V.; Cherney, M.; Cheshkov, C.; Cheynis, B.; Barroso, V. Chibante; Chinellato, D. D.; Cho, S.; Chochula, P.; Choi, K.; Chojnacki, M.; Choudhury, S.; Christakoglou, P.; Christensen, C. H.; Christiansen, P.; Chujo, T.; Chung, S. U.; Cicalo, C.; Cifarelli, L.; Cindolo, F.; Cleymans, J.; Colamaria, F.; Colella, D.; Collu, A.; Colocci, M.; Balbastre, G. Conesa; Valle, Z. Conesa del; Connors, M. E.; Contreras, J. G.; Cormier, T. M.; Morales, Y. Corrales; Maldonado, I. Cortés; Cortese, P.; Cosentino, M. R.; Costa, F.; Crochet, P.; Albino, R. Cruz; Cuautle, E.; Cunqueiro, L.; Dahms, T.; Dainese, A.; Danu, A.; Das, D.; Das, I.; Das, S.; Dash, A.; Dash, S.; De, S.; De Caro, A.; de Cataldo, G.; de Conti, C.; de Cuveland, J.; De Falco, A.; De Gruttola, D.; De Marco, N.; De Pasquale, S.; Deisting, A.; Deloff, A.; Dénes, E.; Deplano, C.; Dhankher, P.; Di Bari, D.; Di Mauro, A.; Di Nezza, P.; Corchero, M. A. Diaz; Dietel, T.; Dillenseger, P.; Divià, R.; Djuvsland, Ø.; Dobrin, A.; Gimenez, D. Domenicis; Dönigus, B.; Dordic, O.; Drozhzhova, T.; Dubey, A. K.; Dubla, A.; Ducroux, L.; Dupieux, P.; Ehlers, R. 
J.; Elia, D.; Engel, H.; Epple, E.; Erazmus, B.; Erdemir, I.; Erhardt, F.; Espagnon, B.; Estienne, M.; Esumi, S.; Eum, J.; Evans, D.; Evdokimov, S.; Eyyubova, G.; Fabbietti, L.; Fabris, D.; Faivre, J.; Fantoni, A.; Fasel, M.; Feldkamp, L.; Feliciello, A.; Feofilov, G.; Ferencei, J.; Téllez, A. Fernández; Ferreiro, E. G.; Ferretti, A.; Festanti, A.; Feuillard, V. J. G.; Figiel, J.; Figueredo, M. A. S.; Filchagin, S.; Finogeev, D.; Fionda, F. M.; Fiore, E. M.; Fleck, M. G.; Floris, M.; Foertsch, S.; Foka, P.; Fokin, S.; Fragiacomo, E.; Francescon, A.; Frankenfeld, U.; Fuchs, U.; Furget, C.; Furs, A.; Girard, M. Fusco; Gaardhøje, J. J.; Gagliardi, M.; Gago, A. M.; Gallio, M.; Gangadharan, D. R.; Ganoti, P.; Gao, C.; Garabatos, C.; Garcia-Solis, E.; Gargiulo, C.; Gasik, P.; Gauger, E. F.; Germain, M.; Gheata, A.; Gheata, M.; Ghosh, P.; Ghosh, S. K.; Gianotti, P.; Giubellino, P.; Giubilato, P.; Gladysz-Dziadus, E.; Glässel, P.; Coral, D. M. Goméz; Ramirez, A. Gomez; Gonzalez, V.; González-Zamora, P.; Gorbunov, S.; Görlich, L.; Gotovac, S.; Grabski, V.; Grachov, O. A.; Graczykowski, L. K.; Graham, K. L.; Grelli, A.; Grigoras, A.; Grigoras, C.; Grigoriev, V.; Grigoryan, A.; Grigoryan, S.; Grinyov, B.; Grion, N.; Gronefeld, J. M.; Grosse-Oetringhaus, J. F.; Grossiord, J.-Y.; Grosso, R.; Guber, F.; Guernane, R.; Guerzoni, B.; Gulbrandsen, K.; Gunji, T.; Gupta, A.; Gupta, R.; Haake, R.; Haaland, Ø.; Hadjidakis, C.; Haiduc, M.; Hamagaki, H.; Hamar, G.; Harris, J. W.; Harton, A.; Hatzifotiadou, D.; Hayashi, S.; Heckel, S. T.; Heide, M.; Helstrup, H.; Herghelegiu, A.; Corral, G. Herrera; Hess, B. A.; Hetland, K. F.; Hillemanns, H.; Hippolyte, B.; Hosokawa, R.; Hristov, P.; Huang, M.; Humanic, T. J.; Hussain, N.; Hussain, T.; Hutter, D.; Hwang, D. S.; Ilkaev, R.; Inaba, M.; Ippolitov, M.; Irfan, M.; Ivanov, M.; Ivanov, V.; Izucheev, V.; Jachołkowski, A.; Jacobs, P. M.; Jadhav, M. B.; Jadlovska, S.; Jadlovsky, J.; Jahnke, C.; Jakubowska, M. J.; Jang, H. J.; Janik, M. A.; Jayarathna, P. H. S. Y.; Jena, C.; Jena, S.; Bustamante, R. T. Jimenez; Jones, P. G.; Jung, H.; Jusko, A.; Kalinak, P.; Kalweit, A.; Kamin, J.; Kang, J. H.; Kaplin, V.; Kar, S.; Uysal, A. Karasu; Karavichev, O.; Karavicheva, T.; Karayan, L.; Karpechev, E.; Kebschull, U.; Keidel, R.; Keijdener, D. L. D.; Keil, M.; Khan, M. Mohisin; Khan, P.; Khan, S. A.; Khanzadeev, A.; Kharlov, Y.; Kileng, B.; Kim, B.; Kim, D. W.; Kim, D. J.; Kim, D.; Kim, H.; Kim, J. S.; Kim, M.; Kim, M.; Kim, S.; Kim, T.; Kirsch, S.; Kisel, I.; Kiselev, S.; Kisiel, A.; Kiss, G.; Klay, J. L.; Klein, C.; Klein, J.; Klein-Bösing, C.; Klewin, S.; Kluge, A.; Knichel, M. L.; Knospe, A. G.; Kobayashi, T.; Kobdaj, C.; Kofarago, M.; Kollegger, T.; Kolojvari, A.; Kondratiev, V.; Kondratyeva, N.; Kondratyuk, E.; Konevskikh, A.; Kopcik, M.; Kour, M.; Kouzinopoulos, C.; Kovalenko, O.; Kovalenko, V.; Kowalski, M.; Meethaleveedu, G. Koyithatta; Králik, I.; Kravčáková, A.; Kretz, M.; Krivda, M.; Krizek, F.; Kryshen, E.; Krzewicki, M.; Kubera, A. M.; Kučera, V.; Kuhn, C.; Kuijer, P. G.; Kumar, A.; Kumar, J.; Kumar, L.; Kumar, S.; Kurashvili, P.; Kurepin, A.; Kurepin, A. B.; Kuryakin, A.; Kweon, M. J.; Kwon, Y.; Pointe, S. L. La; Rocca, P. La; de Guevara, P. Ladron; Fernandes, C. Lagana; Lakomov, I.; Langoy, R.; Lara, C.; Lardeux, A.; Lattuca, A.; Laudi, E.; Lea, R.; Leardini, L.; Lee, G. R.; Lee, S.; Lehas, F.; Lemmon, R. C.; Lenti, V.; Leogrande, E.; Monzón, I. León; Vargas, H. 
León; Leoncino, M.; Lévai, P.; Li, S.; Li, X.; Lien, J.; Lietava, R.; Lindal, S.; Lindenstruth, V.; Lippmann, C.; Lisa, M. A.; Ljunggren, H. M.; Lodato, D. F.; Loenne, P. I.; Loginov, V.; Loizides, C.; Lopez, X.; Torres, E. López; Lowe, A.; Luettig, P.; Lunardon, M.; Luparello, G.; Maevskaya, A.; Mager, M.; Mahajan, S.; Mahmood, S. M.; Maire, A.; Majka, R. D.; Malaev, M.; Cervantes, I. Maldonado; Malinina, L.; Mal'Kevich, D.; Malzacher, P.; Mamonov, A.; Manko, V.; Manso, F.; Manzari, V.; Marchisone, M.; Mareš, J.; Margagliotti, G. V.; Margotti, A.; Margutti, J.; Marín, A.; Markert, C.; Marquard, M.; Martin, N. A.; Blanco, J. Martin; Martinengo, P.; Martínez, M. I.; García, G. Martínez; Pedreira, M. Martinez; Mas, A.; Masciocchi, S.; Masera, M.; Masoni, A.; Massacrier, L.; Mastroserio, A.; Matyja, A.; Mayer, C.; Mazer, J.; Mazzoni, M. A.; Mcdonald, D.; Meddi, F.; Melikyan, Y.; Menchaca-Rocha, A.; Meninno, E.; Pérez, J. Mercado; Meres, M.; Miake, Y.; Mieskolainen, M. M.; Mikhaylov, K.; Milano, L.; Milosevic, J.; Minervini, L. M.; Mischke, A.; Mishra, A. N.; Miśkowiec, D.; Mitra, J.; Mitu, C. M.; Mohammadi, N.; Mohanty, B.; Molnar, L.; Zetina, L. Montaño; Montes, E.; De Godoy, D. A. Moreira; Moreno, L. A. P.; Moretto, S.; Morreale, A.; Morsch, A.; Muccifora, V.; Mudnic, E.; Mühlheim, D.; Muhuri, S.; Mukherjee, M.; Mulligan, J. D.; Munhoz, M. G.; Munzer, R. H.; Murray, S.; Musa, L.; Musinsky, J.; Naik, B.; Nair, R.; Nandi, B. K.; Nania, R.; Nappi, E.; Naru, M. U.; da Luz, H. Natal; Nattrass, C.; Nayak, K.; Nayak, T. K.; Nazarenko, S.; Nedosekin, A.; Nellen, L.; Ng, F.; Nicassio, M.; Niculescu, M.; Niedziela, J.; Nielsen, B. S.; Nikolaev, S.; Nikulin, S.; Nikulin, V.; Noferini, F.; Nomokonov, P.; Nooren, G.; Noris, J. C. C.; Norman, J.; Nyanin, A.; Nystrand, J.; Oeschler, H.; Oh, S.; Oh, S. K.; Ohlson, A.; Okatan, A.; Okubo, T.; Olah, L.; Oleniacz, J.; Silva, A. C. Oliveira Da; Oliver, M. H.; Onderwaater, J.; Oppedisano, C.; Orava, R.; Velasquez, A. Ortiz; Oskarsson, A.; Otwinowski, J.; Oyama, K.; Ozdemir, M.; Pachmayer, Y.; Pagano, P.; Paić, G.; Pal, S. K.; Pan, J.; Pandey, A. K.; Papcun, P.; Papikyan, V.; Pappalardo, G. S.; Pareek, P.; Park, W. J.; Parmar, S.; Passfeld, A.; Paticchio, V.; Patra, R. N.; Paul, B.; Peitzmann, T.; Costa, H. Pereira Da; Filho, E. Pereira De Oliveira; Peresunko, D.; Lara, C. E. Pérez; Lezama, E. Perez; Peskov, V.; Pestov, Y.; Petráček, V.; Petrov, V.; Petrovici, M.; Petta, C.; Piano, S.; Pikna, M.; Pillot, P.; Pinazza, O.; Pinsky, L.; Piyarathna, D. B.; oskoń, M. Pł; Planinic, M.; Pluta, J.; Pochybova, S.; Podesta-Lerma, P. L. M.; Poghosyan, M. G.; Polichtchouk, B.; Poljak, N.; Poonsawat, W.; Pop, A.; Porteboeuf-Houssais, S.; Porter, J.; Pospisil, J.; Prasad, S. K.; Preghenella, R.; Prino, F.; Pruneau, C. A.; Pshenichnov, I.; Puccio, M.; Puddu, G.; Pujahari, P.; Punin, V.; Putschke, J.; Qvigstad, H.; Rachevski, A.; Raha, S.; Rajput, S.; Rak, J.; Rakotozafindrabe, A.; Ramello, L.; Rami, F.; Raniwala, R.; Raniwala, S.; Räsänen, S. S.; Rascanu, B. T.; Rathee, D.; Read, K. F.; Redlich, K.; Reed, R. J.; Rehman, A.; Reichelt, P.; Reidt, F.; Ren, X.; Renfordt, R.; Reolon, A. R.; Reshetin, A.; Revol, J.-P.; Reygers, K.; Riabov, V.; Ricci, R. A.; Richert, T.; Richter, M.; Riedler, P.; Riegler, W.; Riggi, F.; Ristea, C.; Rocco, E.; Cahuantzi, M. Rodríguez; Manso, A. Rodriguez; Røed, K.; Rogochaya, E.; Rohr, D.; Röhrich, D.; Romita, R.; Ronchetti, F.; Ronflette, L.; Rosnet, P.; Rossi, A.; Roukoutakis, F.; Roy, A.; Roy, C.; Roy, P.; Montero, A. J. 
Rubio; Rui, R.; Russo, R.; Ryabinkin, E.; Ryabov, Y.; Rybicki, A.; Sadovsky, S.; Šafařík, K.; Sahlmuller, B.; Sahoo, P.; Sahoo, R.; Sahoo, S.; Sahu, P. K.; Saini, J.; Sakai, S.; Saleh, M. A.; Salzwedel, J.; Sambyal, S.; Samsonov, V.; Šándor, L.; Sandoval, A.; Sano, M.; Sarkar, D.; Scapparone, E.; Scarlassara, F.; Schiaua, C.; Schicker, R.; Schmidt, C.; Schmidt, H. R.; Schuchmann, S.; Schukraft, J.; Schulc, M.; Schuster, T.; Schutz, Y.; Schwarz, K.; Schweda, K.; Scioli, G.; Scomparin, E.; Scott, R.; Šefčík, M.; Seger, J. E.; Sekiguchi, Y.; Sekihata, D.; Selyuzhenkov, I.; Senosi, K.; Senyukov, S.; Serradilla, E.; Sevcenco, A.; Shabanov, A.; Shabetai, A.; Shadura, O.; Shahoyan, R.; Shangaraev, A.; Sharma, A.; Sharma, M.; Sharma, M.; Sharma, N.; Shigaki, K.; Shtejer, K.; Sibiriak, Y.; Siddhanta, S.; Sielewicz, K. M.; Siemiarczuk, T.; Silvermyr, D.; Silvestre, C.; Simatovic, G.; Simonetti, G.; Singaraju, R.; Singh, R.; Singha, S.; Singhal, V.; Sinha, B. C.; Sinha, T.; Sitar, B.; Sitta, M.; Skaali, T. B.; Slupecki, M.; Smirnov, N.; Snellings, R. J. M.; Snellman, T. W.; Søgaard, C.; Song, J.; Song, M.; Song, Z.; Soramel, F.; Sorensen, S.; Sozzi, F.; Spacek, M.; Spiriti, E.; Sputowska, I.; Spyropoulou-Stassinaki, M.; Stachel, J.; Stan, I.; Stefanek, G.; Stenlund, E.; Steyn, G.; Stiller, J. H.; Stocco, D.; Strmen, P.; Suaide, A. A. P.; Sugitate, T.; Suire, C.; Suleymanov, M.; Suljic, M.; Sultanov, R.; Šumbera, M.; Szabo, A.; de Toledo, A. Szanto; Szarka, I.; Szczepankiewicz, A.; Szymanski, M.; Tabassam, U.; Takahashi, J.; Tambave, G. J.; Tanaka, N.; Tangaro, M. A.; Tarhini, M.; Tariq, M.; Tarzila, M. G.; Tauro, A.; Muñoz, G. Tejeda; Telesca, A.; Terasaki, K.; Terrevoli, C.; Teyssier, B.; Thäder, J.; Thomas, D.; Tieulent, R.; Timmins, A. R.; Toia, A.; Trogolo, S.; Trombetta, G.; Trubnikov, V.; Trzaska, W. H.; Tsuji, T.; Tumkin, A.; Turrisi, R.; Tveter, T. S.; Ullaland, K.; Uras, A.; Usai, G. L.; Utrobicic, A.; Vajzer, M.; Vala, M.; Palomo, L. Valencia; Vallero, S.; Van Der Maarel, J.; Van Hoorne, J. W.; van Leeuwen, M.; Vanat, T.; Vyvre, P. Vande; Varga, D.; Vargas, A.; Vargyas, M.; Varma, R.; Vasileiou, M.; Vasiliev, A.; Vauthier, A.; Vechernin, V.; Veen, A. M.; Veldhoen, M.; Velure, A.; Venaruzzo, M.; Vercellin, E.; Limón, S. Vergara; Vernet, R.; Verweij, M.; Vickovic, L.; Viesti, G.; Viinikainen, J.; Vilakazi, Z.; Baillie, O. Villalobos; Tello, A. Villatoro; Vinogradov, A.; Vinogradov, L.; Vinogradov, Y.; Virgili, T.; Vislavicius, V.; Viyogi, Y. P.; Vodopyanov, A.; Völkl, M. A.; Voloshin, K.; Voloshin, S. A.; Volpe, G.; von Haller, B.; Vorobyev, I.; Vranic, D.; Vrláková, J.; Vulpescu, B.; Vyushin, A.; Wagner, B.; Wagner, J.; Wang, H.; Wang, M.; Watanabe, D.; Watanabe, Y.; Weber, M.; Weber, S. G.; Weiser, D. F.; Wessels, J. P.; Westerhoff, U.; Whitehead, A. M.; Wiechula, J.; Wikne, J.; Wilde, M.; Wilk, G.; Wilkinson, J.; Williams, M. C. S.; Windelband, B.; Winn, M.; Yaldo, C. G.; Yang, H.; Yang, P.; Yano, S.; Yasar, C.; Yin, Z.; Yokoyama, H.; Yoo, I.-K.; Yoon, J. H.; Yurchenko, V.; Yushmanov, I.; Zaborowska, A.; Zaccolo, V.; Zaman, A.; Zampolli, C.; Zanoli, H. J. C.; Zaporozhets, S.; Zardoshti, N.; Zarochentsev, A.; Závada, P.; Zaviyalov, N.; Zbroszczyk, H.; Zgura, I. S.; Zhalov, M.; Zhang, H.; Zhang, X.; Zhang, Y.; Zhang, C.; Zhang, Z.; Zhao, C.; Zhigareva, N.; Zhou, D.; Zhou, Y.; Zhou, Z.; Zhu, H.; Zhu, J.; Zichichi, A.; Zimmermann, A.; Zimmermann, M. B.; Zinovjev, G.; Zyzak, M.
2017-01-01
A detailed study of pseudorapidity densities and multiplicity distributions of primary charged particles produced in proton-proton collisions, at √s = 0.9, 2.36, 2.76, 7 and 8 TeV, in the pseudorapidity range |η| < 2, was carried out using the ALICE detector. Measurements were obtained for three event classes: inelastic, non-single diffractive and events with at least one charged particle in the pseudorapidity interval |η| < 1. The use of an improved track-counting algorithm combined with ALICE's measurements of diffractive processes allows a higher precision compared to our previous publications. A KNO scaling study was performed in the pseudorapidity intervals |η| < 0.5, 1.0 and 1.5. The data are compared to other experimental results and to models as implemented in Monte Carlo event generators PHOJET and recent tunes of PYTHIA6, PYTHIA8 and EPOS.
Charged-particle multiplicities in proton–proton collisions at √s = 0.9 to 8 TeV
Adam, J.; Adamová, D.; Aggarwal, M. M.; ...
2017-01-17
A detailed study of pseudorapidity densities and multiplicity distributions of primary charged particles produced in proton–proton collisions, at √s = 0.9, 2.36, 2.76, 7 and 8 TeV, in the pseudorapidity range |η| < 2, was carried out using the ALICE detector. Measurements were obtained for three event classes: inelastic, non-single diffractive and events with at least one charged particle in the pseudorapidity interval |η| < 1. The use of an improved track-counting algorithm combined with ALICE's measurements of diffractive processes allows a higher precision compared to our previous publications. A KNO scaling study was performed in the pseudorapidity intervals |η| < 0.5, 1.0 and 1.5. The data are compared to other experimental results and to models as implemented in Monte Carlo event generators PHOJET and recent tunes of PYTHIA6, PYTHIA8 and EPOS.
NASA Astrophysics Data System (ADS)
Carlton, A.; Cahoy, K.
2015-12-01
Reliability of geostationary communication satellites (GEO ComSats) is critical to many industries worldwide. The space radiation environment poses a significant threat and manufacturers and operators expend considerable effort to maintain reliability for users. Knowledge of the space radiation environment at the orbital location of a satellite is of critical importance for diagnosing and resolving issues resulting from space weather, for optimizing cost and reliability, and for space situational awareness. For decades, operators and manufacturers have collected large amounts of telemetry from geostationary (GEO) communications satellites to monitor system health and performance, yet this data is rarely mined for scientific purposes. The goal of this work is to acquire and analyze archived data from commercial operators using new algorithms that can detect when a space weather (or non-space weather) event of interest has occurred or is in progress. We have developed algorithms, collectively called SEER (System Event Evaluation Routine), to statistically analyze power amplifier current and temperature telemetry by identifying deviations from nominal operations or other events and trends of interest. This paper focuses on our work in progress, which currently includes methods for detection of jumps ("spikes", outliers) and step changes (changes in the local mean) in the telemetry. We then examine available space weather data from the NOAA GOES and the NOAA-computed Kp index and sunspot numbers to see what role, if any, they might have played. By combining the results of the algorithm for many components, the spacecraft can be used as a "sensor" for the space radiation environment. Similar events occurring at one time across many component telemetry streams may be indicative of a space radiation event or system-wide health and safety concern. Using SEER on representative datasets of telemetry from Inmarsat and Intelsat, we find events that occur across all or many of the telemetry files on certain dates. We compare these system-wide events to known space weather storms, such as the 2003 Halloween storms, and to spacecraft operational events, such as maneuvers. We also present future applications and expansions of SEER for robust space environment sensing and system health and safety monitoring.
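The two detector types described can be sketched as a jump (outlier) test against a rolling median with a MAD-scaled threshold, and a step-change test comparing the means of adjacent windows. The window sizes and thresholds are assumptions for illustration, not SEER's parameters.

```python
import numpy as np

def detect_jumps(x, window=25, k=6.0):
    """Flag samples deviating from the local median by more than k robust standard deviations."""
    idx = []
    for i in range(window, len(x) - window):
        local = x[i - window:i + window]
        med = np.median(local)
        mad = np.median(np.abs(local - med)) + 1e-9
        if abs(x[i] - med) > k * 1.4826 * mad:
            idx.append(i)
    return idx

def detect_steps(x, window=50, threshold=2.0):
    """Flag points where the mean of the following window differs from the preceding one."""
    idx = []
    for i in range(window, len(x) - window):
        before, after = x[i - window:i], x[i:i + window]
        pooled = np.sqrt(0.5 * (before.var() + after.var())) + 1e-9
        if abs(after.mean() - before.mean()) > threshold * pooled:
            idx.append(i)
    return idx
```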
DOE Office of Scientific and Technical Information (OSTI.GOV)
Engdahl, Eric, R.; Bergman, Eric, A.; Myers, Stephen, C.
A new catalog of seismicity at magnitudes above 2.5 for the period 1923-2008 in the Iran region is assembled from arrival times reported by global, regional, and local seismic networks. Using in-country data we have formed new events, mostly at lower magnitudes that were not previously included in standard global earthquake catalogs. The magnitude completeness of the catalog varies strongly through time, complete to about magnitude 4.2 prior to 1998 and reaching a minimum of about 3.6 during the period 1998-2005. Of the 25,722 events in the catalog, most of the larger events have been carefully reviewed for proper phase association, especially for depth phases and to eliminate outlier readings, and relocated. To better understand the quality of the data set of arrival times reported by Iranian networks that are central to this study, many waveforms for events in Iran have been re-picked by an experienced seismic analyst. Waveforms at regional distances in this region are often complex. For many events this makes arrival time picks difficult to make, especially for smaller magnitude events, resulting in reported times that can be substantially improved by an experienced analyst. Even when the signal/noise ratio is large, re-picking can lead to significant differences. Picks made by our analyst are compared with original picks made by the regional networks. In spite of the obvious outliers, the median (-0.06 s) and spread (0.51 s) are small, suggesting that reasonable confidence can be placed in the picks reported by regional networks in Iran. This new catalog has been used to assess focal depth distributions throughout Iran. A principal result of this study is that the geographic pattern of depth distributions revealed by the relatively small number of earthquakes (~167) with depths constrained by waveform modeling (+/- 4 km) are now in agreement with the much larger number of depths (~1229) determined using reanalysis of ISC arrival-times (+/-10 km), within their respective errors. This is a significant advance, as outliers and future events with apparently anomalous depths can be readily identified and, if necessary, further investigated. The patterns of reliable focal depth distributions have been interpreted in the context of Middle Eastern active tectonics. Most earthquakes in the Iranian continental lithosphere occur in the upper crust, less than about 25-30 km in depth, with the crustal shortening produced by continental collision apparently accommodated entirely by thickening and distributed deformation rather than by subduction of crust into the mantle. However, intermediate-depth earthquakes associated with subducted slab do occur across the central Caspian Sea and beneath the Makran coast. A multiple-event relocation technique, specialized to use different kinds of near-source data, is used to calibrate the locations of 24 clusters containing 901 events drawn from the seismicity catalog. The absolute locations of these clusters are fixed either by comparing the pattern of relocated earthquakes with mapped fault geometry, by using one or more cluster events that have been accurately located independently by a local seismic network or aftershock deployment, by using InSAR data to determine the rupture zone of shallow earthquakes, or by some combination of these near-source data.
This technique removes most of the systematic bias in single-event locations done with regional and teleseismic data, resulting in 624 calibrated events with location uncertainties of 5 km or better at the 90% confidence level (GT5₉₀). For 21 clusters (847 events) that are calibrated in both location and origin time we calculate empirical travel times, relative to a standard 1-D travel time model (ak135), and investigate event to station travel-time anomalies as functions of epicentral distance and azimuth. Substantial travel-time anomalies are seen in the Iran region which make accurate locations impossible unless observing stations are at very short distances (less than about 200 km) or travel-time models are improved to account for lateral heterogeneity in the region. Earthquake locations in the Iran region by international agencies, based on regional and teleseismic arrival time data, are systematically biased to the southwest and have a 90% location accuracy of 18-23 km, with the lower value achievable by applying limits on secondary azimuth gap. The data set of calibrated locations reported here provides an important constraint on travel-time models that would begin to account for the lateral heterogeneity in Earth structure in the Iran region, and permit seismic networks, especially the regional ones, to obtain more accurate locations of earthquakes in the region in the future.
Multiple disturbances classifier for electric signals using adaptive structuring neural networks
NASA Astrophysics Data System (ADS)
Lu, Yen-Ling; Chuang, Cheng-Long; Fahn, Chin-Shyurng; Jiang, Joe-Air
2008-07-01
This work proposes a novel classifier to recognize multiple disturbances for electric signals of power systems. The proposed classifier consists of a series of pipeline-based processing components, including amplitude estimator, transient disturbance detector, transient impulsive detector, wavelet transform and a brand-new neural network for recognizing multiple disturbances in a power quality (PQ) event. Most of the previously proposed methods usually treated a PQ event as a single disturbance at a time. In practice, however, a PQ event often consists of various types of disturbances at the same time. Therefore, the performances of those methods might be limited in real power systems. This work considers the PQ event as a combination of several disturbances, including steady-state and transient disturbances, which is more analogous to the real status of a power system. Six types of commonly encountered power quality disturbances are considered for training and testing the proposed classifier. The proposed classifier has been tested on electric signals that contain single disturbance or several disturbances at a time. Experimental results indicate that the proposed PQ disturbance classification algorithm can achieve a high accuracy of more than 97% in various complex testing cases.
Grigoryan, Artyom M; Dougherty, Edward R; Kononen, Juha; Bubendorf, Lukas; Hostetter, Galen; Kallioniemi, Olli
2002-01-01
Fluorescence in situ hybridization (FISH) is a molecular diagnostic technique in which a fluorescently labeled probe hybridizes to a target nucleotide sequence of deoxyribonucleic acid (DNA). Upon excitation, each chromosome containing the target sequence produces a fluorescent signal (spot). Because fluorescent spot counting is tedious and often subjective, automated digital algorithms to count spots are desirable. New technology provides a stack of images on multiple focal planes throughout a tissue sample. Multiple-focal-plane imaging helps overcome the biases and imprecision inherent in single-focal-plane methods. This paper proposes an algorithm for global spot counting in stacked three-dimensional slice FISH images without the necessity of nuclei segmentation. It is designed to work in complex backgrounds, when there are agglomerated nuclei, and in the presence of illumination gradients. It is based on the morphological top-hat transform, which locates intensity spikes on irregular backgrounds. After finding signals in the slice images, the algorithm groups these together to form three-dimensional spots. Filters are employed to separate legitimate spots from fluorescent noise. The algorithm is set in a comprehensive toolbox that provides visualization and analytic facilities. It includes simulation software that allows examination of algorithm performance for various image and algorithm parameter settings, including signal size, signal density, and the number of slices.
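A minimal sketch of the top-hat step on a single focal-plane slice follows: the white top-hat keeps bright structures smaller than the structuring element, which are then thresholded and labeled. The structuring-element size and threshold are assumptions, and the grouping of slice detections into 3D spots is not shown.

```python
import numpy as np
from scipy import ndimage

def detect_spots(slice_img, struct_size=7, threshold=30.0):
    """Find bright spot candidates in one image slice using the white top-hat transform."""
    tophat = ndimage.white_tophat(slice_img, size=struct_size)  # intensity spikes over uneven background
    labeled, n_spots = ndimage.label(tophat > threshold)        # connected bright regions
    centers = ndimage.center_of_mass(tophat, labeled, range(1, n_spots + 1))
    return centers, n_spots
```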
NASA Technical Reports Server (NTRS)
Das, Santanu; Srivastava, Ashok N.; Matthews, Bryan L.; Oza, Nikunj C.
2010-01-01
The world-wide aviation system is one of the most complex dynamical systems ever developed and is generating data at an extremely rapid rate. Most modern commercial aircraft record several hundred flight parameters including information from the guidance, navigation, and control systems, the avionics and propulsion systems, and the pilot inputs into the aircraft. These parameters may be continuous measurements or binary or categorical measurements recorded at one-second intervals for the duration of the flight. Currently, most approaches to aviation safety are reactive, meaning that they are designed to react to an aviation safety incident or accident. In this paper, we discuss a novel approach based on the theory of multiple kernel learning to detect potential safety anomalies in very large databases of discrete and continuous data from world-wide operations of commercial fleets. We pose a general anomaly detection problem which includes both discrete and continuous data streams, where we assume that the discrete streams have a causal influence on the continuous streams. We also assume that atypical sequences of events in the discrete streams can lead to off-nominal system performance. We discuss the application domain, novel algorithms, and also discuss results on real-world data sets. Our algorithm uncovers operationally significant events in high dimensional data streams in the aviation industry which are not detectable using state-of-the-art methods.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brunton, Steven
Optical systems provide valuable information for evaluating interactions and associations between organisms and MHK energy converters and for capturing potentially rare encounters between marine organisms and MHK devices. The deluge of optical data from cabled monitoring packages makes expert review time-consuming and expensive. We propose algorithms and a processing framework to automatically extract events of interest from underwater video. The open-source software framework consists of background subtraction, filtering, feature extraction and hierarchical classification algorithms. This classification pipeline was validated on real-world data collected with an experimental underwater monitoring package. An event detection rate of 100% was achieved using robust principal components analysis (RPCA), Fourier feature extraction and a support vector machine (SVM) binary classifier. The detected events were then further classified into more complex classes – algae | invertebrate | vertebrate, one species | multiple species of fish, and interest rank. Greater than 80% accuracy was achieved using a combination of machine learning techniques.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Allen, A.; Santoso, S.; Muljadi, E.
2013-08-01
A network of multiple phasor measurement units (PMU) was created, set up, and maintained at the University of Texas at Austin to obtain actual power system measurements for power system analysis. Power system analysis in this report covers a variety of time ranges, such as short-term analysis for power system disturbances and their effects on power system behavior and long-term power system behavior using modal analysis. The first objective of this report is to screen the PMU data for events. The second objective of the report is to identify and describe common characteristics extracted from power system events as measured by PMUs. The numerical characteristics for each category and how these characteristics are used to create selection rules for the algorithm are also described. Trends in PMU data related to different levels and fluctuations in wind power output are also examined.
You, Kaiming; Yang, Wei; Han, Ruisong
2015-09-29
Based on wireless multimedia sensor networks (WMSNs) deployed in an underground coal mine, a miner's lamp video collaborative localization algorithm was proposed to locate miners under insufficient illumination and in the bifurcated structures of underground tunnels. In bifurcation areas, several camera nodes are deployed along the longitudinal direction of the tunnels, forming a wireless collaborative cluster to monitor and locate miners in underground tunnels. Cap-lamps are regarded as the identifying feature of miners under the insufficient illumination of underground tunnels, which means that miners can be identified by detecting their cap-lamps. A miner's lamp projects mapping points onto the imaging planes of the collaborative cameras, and the coordinates of these mapping points are calculated by the collaborative cameras. Then, multiple straight lines between the positions of the collaborative cameras and their corresponding mapping points are established. To find the three-dimensional (3D) coordinates of the miner's lamp, a least squares method is proposed to obtain the optimal intersection of the multiple straight lines. Tests were carried out both in a corridor and in a realistic underground tunnel scenario, and show that the proposed miner's lamp video collaborative localization algorithm has good effectiveness, robustness and localization accuracy under real-world underground tunnel conditions.
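The least-squares step can be sketched as follows: each camera contributes a 3D ray from its position through the lamp's back-projected image point, and the lamp location is taken as the point minimizing the summed squared distance to all rays. Camera positions and ray directions here are placeholders; the paper's calibration and back-projection steps are not shown.

```python
import numpy as np

def intersect_rays(origins, directions):
    """origins, directions: (N, 3) arrays; return the least-squares closest point to all rays."""
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for p, d in zip(origins, directions):
        d = d / np.linalg.norm(d)
        P = np.eye(3) - np.outer(d, d)   # projector onto the plane normal to the ray
        A += P
        b += P @ p
    return np.linalg.solve(A, b)
```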
Walthall, Jennifer D H; Burgess, Aaron; Weinstein, Elizabeth; Miramonti, Charles; Arkins, Thomas; Wiehe, Sarah
2018-02-01
This study aimed to describe spatiotemporal correlates of pediatric violent injury in an urban community. We performed a retrospective cohort study using patient-level data (2009-2011) from a novel emergency medical service computerized entry system for violent injury resulting in an ambulance dispatch among children aged 0 to 16 years. Assault location and patient residence location were cleaned and geocoded at a success rate of 98%. Distances from the assault location to both home and nearest school were calculated. Time and day of injury were used to evaluate temporal trends. Data from the event points were analyzed to locate injury "hotspots." Seventy-six percent of events occurred within 2 blocks of the patient's home. Clusters of violent injury correlated with areas with high adult crime and areas with multiple schools. More than half of the events occurred between 3:00 PM and 11:00 PM. During these peak hours, Sundays had significantly fewer events. Pediatric violent injuries occurred in identifiable geographic and temporal patterns. This has implications for injury prevention programming to prioritize highest-risk areas.
NASA Astrophysics Data System (ADS)
Panning, M. P.; Banerdt, W. B.; Beucler, E.; Blanchette-Guertin, J. F.; Boese, M.; Clinton, J. F.; Drilleau, M.; James, S. R.; Kawamura, T.; Khan, A.; Lognonne, P. H.; Mocquet, A.; van Driel, M.
2015-12-01
An important challenge for the upcoming InSight mission to Mars, which will deliver a broadband seismic station to Mars along with other geophysical instruments in 2016, is to accurately determine event locations with the use of a single station. Locations are critical for the primary objective of the mission, determining the internal structure of Mars, as well as for a secondary objective of measuring the activity and distribution of seismic events. As part of the mission planning process, a variety of techniques have been explored for location of marsquakes and inversion of structure, and preliminary procedures and software are already under development as part of the InSight Mars Quake and Mars Structure Services. One proposed method, involving the use of recordings of multiple-orbit surface waves, has already been tested with synthetic data and Earth recordings. This method has the strength of not requiring an a priori velocity model of Mars for quake location, but will only be practical for larger events. For smaller events where only first orbit surface waves and body waves are observable, other methods are required. In this study, we implement a transdimensional Bayesian inversion approach to simultaneously invert for basic velocity structure and location parameters (epicentral distance and origin time) using only measurements of body wave arrival times and dispersion of first orbit surface waves. The method is tested with synthetic data with expected Mars noise and Earth data for single events and groups of events and evaluated for errors in both location and structural determination, as well as tradeoffs between resolvable parameters and the effect of 3D crustal variations.
Ma, Xiang; Schonfeld, Dan; Khokhar, Ashfaq A
2009-06-01
In this paper, we propose a novel solution to an arbitrary noncausal, multidimensional hidden Markov model (HMM) for image and video classification. First, we show that the noncausal model can be solved by splitting it into multiple causal HMMs and simultaneously solving each causal HMM using a fully synchronous distributed computing framework, therefore referred to as distributed HMMs. Next we present an approximate solution to the multiple causal HMMs that is based on an alternating updating scheme and assumes a realistic sequential computing framework. The parameters of the distributed causal HMMs are estimated by extending the classical 1-D training and classification algorithms to multiple dimensions. The proposed extension to arbitrary causal, multidimensional HMMs allows state transitions that are dependent on all causal neighbors. We, thus, extend three fundamental algorithms to multidimensional causal systems, i.e., 1) expectation-maximization (EM), 2) general forward-backward (GFB), and 3) Viterbi algorithms. In the simulations, we choose to limit ourselves to a noncausal 2-D model whose noncausality is along a single dimension, in order to significantly reduce the computational complexity. Simulation results demonstrate the superior performance, higher accuracy rate, and applicability of the proposed noncausal HMM framework to image and video classification.
Particle swarm optimization based space debris surveillance network scheduling
NASA Astrophysics Data System (ADS)
Jiang, Hai; Liu, Jing; Cheng, Hao-Wen; Zhang, Yao
2017-02-01
The increasing number of space debris has created an orbital debris environment that poses increasing impact risks to existing space systems and human space flights. For the safety of in-orbit spacecrafts, we should optimally schedule surveillance tasks for the existing facilities to allocate resources in a manner that most significantly improves the ability to predict and detect events involving affected spacecrafts. This paper analyzes two criteria that mainly affect the performance of a scheduling scheme and introduces an artificial intelligence algorithm into the scheduling of tasks of the space debris surveillance network. A new scheduling algorithm based on the particle swarm optimization algorithm is proposed, which can be implemented in two different ways: individual optimization and joint optimization. Numerical experiments with multiple facilities and objects are conducted based on the proposed algorithm, and simulation results have demonstrated the effectiveness of the proposed algorithm.
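A minimal particle swarm optimization sketch for a generic scheduling cost function follows; the cost function, bounds, and PSO coefficients are assumptions for illustration, not the paper's formulation of the surveillance scheduling problem.

```python
import numpy as np

def pso_minimize(cost, dim, n_particles=30, iters=200, bounds=(0.0, 1.0),
                 w=0.7, c1=1.5, c2=1.5, seed=0):
    """Standard global-best PSO: velocities pulled toward personal and global bests."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    x = rng.uniform(lo, hi, (n_particles, dim))
    v = np.zeros_like(x)
    pbest, pbest_val = x.copy(), np.array([cost(p) for p in x])
    gbest = pbest[np.argmin(pbest_val)].copy()
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        vals = np.array([cost(p) for p in x])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
        gbest = pbest[np.argmin(pbest_val)].copy()
    return gbest, pbest_val.min()
```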
Evaluation of Algorithms for Photon Depth of Interaction Estimation for the TRIMAGE PET Component
NASA Astrophysics Data System (ADS)
Camarlinghi, Niccolò; Belcari, Nicola; Cerello, Piergiorgio; Pennazio, Francesco; Sportelli, Giancarlo; Zaccaro, Emanuele; Del Guerra, Alberto
2016-02-01
The TRIMAGE consortium aims to develop a multimodal PET/MR/EEG brain scanner dedicated to the early diagnosis of schizophrenia and other mental health disorders. The TRIMAGE PET component features a full ring made of 18 detectors, each one consisting of twelve 8×8 Silicon Photomultiplier (SiPM) tiles coupled to two segmented LYSO crystal matrices with staggered layers. The identification of the pixel where a photon interacted is performed on-line at the front-end level, thus allowing the FPGA board to emit fully digital event packets. This increases the effective bandwidth, but imposes restrictions on the complexity of the algorithms to be implemented. In this work, two algorithms, whose implementation is feasible directly on an FPGA, are presented and evaluated. The first algorithm is driven by physical considerations, while the other consists of a two-class linear Support Vector Machine (SVM). The validation of the algorithm performance is carried out by using simulated data generated with the GAMOS Monte Carlo. The obtained results show that the achieved accuracy in layer identification is above 90% for both proposed approaches. The feasibility of tagging and rejecting events that underwent multiple interactions within the detector is also discussed.
Enhancing Breast Cancer Recurrence Algorithms Through Selective Use of Medical Record Data
Chubak, Jessica; Johnson, Lisa; Castillo, Adrienne; Weltzien, Erin; Caan, Bette J.
2016-01-01
Abstract Background: The utility of data-based algorithms in research has been questioned because of errors in identification of cancer recurrences. We adapted previously published breast cancer recurrence algorithms, selectively using medical record (MR) data to improve classification. Methods: We evaluated second breast cancer event (SBCE) and recurrence-specific algorithms previously published by Chubak and colleagues in 1535 women from the Life After Cancer Epidemiology (LACE) and 225 women from the Women’s Health Initiative cohorts and compared classification statistics to published values. We also sought to improve classification with minimal MR examination. We selected pairs of algorithms—one with high sensitivity/high positive predictive value (PPV) and another with high specificity/high PPV—using MR information to resolve discrepancies between algorithms, properly classifying events based on review; we called this “triangulation.” Finally, in LACE, we compared associations between breast cancer survival risk factors and recurrence using MR data, single Chubak algorithms, and triangulation. Results: The SBCE algorithms performed well in identifying SBCE and recurrences. Recurrence-specific algorithms performed more poorly than published except for the high-specificity/high-PPV algorithm, which performed well. The triangulation method (sensitivity = 81.3%, specificity = 99.7%, PPV = 98.1%, NPV = 96.5%) improved recurrence classification over two single algorithms (sensitivity = 57.1%, specificity = 95.5%, PPV = 71.3%, NPV = 91.9%; and sensitivity = 74.6%, specificity = 97.3%, PPV = 84.7%, NPV = 95.1%), with 10.6% MR review. Triangulation performed well in survival risk factor analyses vs analyses using MR-identified recurrences. Conclusions: Use of multiple recurrence algorithms in administrative data, in combination with selective examination of MR data, may improve recurrence data quality and reduce research costs. PMID:26582243
Measurement of signal use and vehicle turns as indication of driver cognition.
Wallace, Bruce; Goubran, Rafik; Knoefel, Frank
2014-01-01
This paper uses data analytics to provide a method for measuring a key driving task, turn signal usage, as a measure of an automatic, over-learned cognitive function in drivers. The paper augments previously reported, more complex executive-function cognition measures by proposing an algorithm that analyzes dashboard video to detect turn indicator use with 100% accuracy and no false positives. The paper proposes two algorithms that determine the actual turns made on a trip. The first works through analysis of GPS location traces for the vehicle, locating 73% of the turns made with a very low false positive rate of 3%. A second algorithm uses GIS tools to retroactively create turn-by-turn directions. Fusion of GIS and GPS information raises performance to 77%. The paper presents the algorithm required to measure signal use for actual turns by realigning the 0.2 Hz GPS data, 30 fps video and GIS turn events. The result is a measure that can be tracked over time, and changes in the driver's performance can result in alerts to the driver, caregivers or clinicians as an indication of cognitive change. A lack of decline can also be shared as reassurance.
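A GPS-based turn detector of the kind described can be sketched as computing headings between consecutive low-rate fixes and flagging points where the accumulated heading change over a short window exceeds a threshold. The window and threshold are assumptions, not the paper's tuned values.

```python
import numpy as np

def detect_turns(lat, lon, window=3, threshold_deg=60.0):
    """Flag candidate turns from arrays of latitude/longitude fixes (degrees)."""
    # Approximate heading between consecutive fixes (east component scaled by cos(latitude)).
    heading = np.degrees(np.arctan2(np.diff(lon) * np.cos(np.radians(lat[:-1])),
                                    np.diff(lat)))
    dh = (np.diff(heading) + 180.0) % 360.0 - 180.0   # wrap heading changes to [-180, 180)
    turns = []
    for i in range(len(dh) - window):
        total = dh[i:i + window].sum()
        if abs(total) > threshold_deg:
            turns.append((i, total))
    return turns
```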
NASA Astrophysics Data System (ADS)
Eidietis, N. W.; Choi, W.; Hahn, S. H.; Humphreys, D. A.; Sammuli, B. S.; Walker, M. L.
2018-05-01
A finite-state off-normal and fault response (ONFR) system is presented that provides the supervisory logic for comprehensive disruption avoidance and machine protection in tokamaks. Robust event handling is critical for ITER and future large tokamaks, where plasma parameters will necessarily approach stability limits and many systems will operate near their engineering limits. Events can be classified as off-normal plasma events, e.g. neoclassical tearing modes or vertical displacement events, or faults, e.g. coil power supply failures. The ONFR system presented provides four critical features of a robust event handling system: sequential responses to cascading events, event recovery, simultaneous handling of multiple events and actuator prioritization. The finite-state logic is implemented in Matlab®/Stateflow® to allow rapid development and testing in an easily understood graphical format before automated export to the real-time plasma control system code. Experimental demonstrations of the ONFR algorithm on the DIII-D and KSTAR tokamaks are presented. In the most complex demonstration, the ONFR algorithm asynchronously applies a ‘catch and subdue’ electron cyclotron current drive (ECCD) injection scheme to suppress a virulent 2/1 neoclassical tearing mode, subsequently shuts down ECCD for machine protection when the plasma becomes over-dense, and enables rotating 3D field entrainment of the ensuing locked mode to allow a safe rampdown, all in the same discharge without user intervention. When multiple ONFR states are active simultaneously and requesting the same actuator (e.g. neutral beam injection or gyrotrons), actuator prioritization is accomplished by sorting the pre-assigned priority values of each active ONFR state and giving complete control of the actuator to the state with the highest priority. This early experience makes it evident that additional research is required to develop an improved actuator sharing protocol, as well as a methodology to minimize the number and topological complexity of states as the finite-state ONFR system is scaled to a large, highly constrained device like ITER.
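The actuator-arbitration rule described can be sketched as giving full control of a contested actuator to the active state with the highest pre-assigned priority. The state names, priority values, and dictionary layout below are illustrative only, not the DIII-D/KSTAR implementation.

```python
def arbitrate(active_states, actuator):
    """active_states: list of dicts with 'name', 'priority', and 'requests' (set of actuators)."""
    contenders = [s for s in active_states if actuator in s["requests"]]
    if not contenders:
        return None
    return max(contenders, key=lambda s: s["priority"])["name"]

states = [
    {"name": "NTM_suppression", "priority": 3, "requests": {"gyrotrons", "NBI"}},
    {"name": "density_limit",   "priority": 5, "requests": {"gyrotrons"}},
]
owner = arbitrate(states, "gyrotrons")   # -> "density_limit"
```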
A 20-year catalog comparing smooth and sharp estimates of slow slip events in Cascadia
NASA Astrophysics Data System (ADS)
Molitors Bergman, E. G.; Evans, E. L.; Loveless, J. P.
2017-12-01
Slow slip events (SSEs) are a form of aseismic strain release at subduction zones resulting in a temporary reversal in interseismic upper plate motion over a period of weeks, frequently accompanied in time and space by seismic tremor at the Cascadia subduction zone. Locating SSEs spatially along the subduction zone interface is essential to understanding the relationship between SSEs, earthquakes, and tremor and assessing megathrust earthquake hazard. We apply an automated slope comparison-based detection algorithm to single continuously recording GPS stations to determine dates and surface displacement vectors of SSEs, then apply network-based filters to eliminate false detections. The main benefits of this algorithm are its ability to detect SSEs while they are occurring and track the spatial migration of each event. We invert geodetic displacement fields for slip distributions on the subduction zone interface for SSEs between 1997 and 2017 using two estimation techniques: spatial smoothing and total variation regularization (TVR). Smoothing has been frequently used in determining the location of interseismic coupling, earthquake rupture, and SSE slip and yields spatially coherent but inherently blurred solutions. TVR yields compact, sharply bordered slip estimates of similar magnitude and along-strike extent to previously studied events, while fitting the constraining geodetic data as well as corresponding smoothing-based solutions. Slip distributions estimated using TVR have up-dip limits that align well with down-dip limits of interseismic coupling on the plate interface and spatial extents that approximately correspond to the distribution of tremor concurrent with each event. TVR gives a unique view of slow slip distributions that can contribute to understanding of the physical properties that govern megathrust slip processes.
Earthquake Relocation in the Middle East with Geodetically-Calibrated Events
NASA Astrophysics Data System (ADS)
Brengman, C.; Barnhart, W. D.
2017-12-01
Regional and global earthquake catalogs in tectonically active regions commonly contain mislocated earthquakes that impede efforts to address first order characteristics of seismogenic strain release and to monitor anthropogenic seismic events through the Comprehensive Nuclear-Test-Ban Treaty. Earthquake mislocations are particularly limiting in the plate boundary zone between the Arabia and Eurasia plates of Iran, Pakistan, and Turkey where earthquakes are commonly mislocated by 20+ kilometers and hypocentral depths are virtually unconstrained. Here, we present preliminary efforts to incorporate calibrated earthquake locations derived from Interferometric Synthetic Aperture Radar (InSAR) observations into a relocated catalog of seismicity in the Middle East. We use InSAR observations of co-seismic deformation to determine the locations, geometries, and slip distributions of small to moderate magnitude (M4.8+) crustal earthquakes. We incorporate this catalog of calibrated event locations, along with other seismologically-calibrated earthquake locations, as "priors" into a fully Bayesian multi-event relocation algorithm that relocates all teleseismically and regionally recorded earthquakes over the time span 1970-2017, including calibrated and uncalibrated events. Our relocations are conducted using cataloged phase picks and BayesLoc. We present a suite of sensitivity tests for the time span of 2003-2014 to explore the impacts of our input parameters (i.e., how a point source is defined from a finite fault inversion) on the behavior of the event relocations, potential improvements to depth estimates, the ability of the relocation to recover locations outside of the time span in which there are InSAR observations, and the degree to which our relocations can recover "known" calibrated earthquake locations that are not explicitly included as a-priori constraints. Additionally, we present a systematic comparison of earthquake relocations derived from phase picks of two different earthquake catalogs: The USGS Comprehensive Earthquake Catalog (ComCat) and the Reviewed ISC Bulletin (ISCB).
NASA Astrophysics Data System (ADS)
Jiang, Y.; Xing, H. L.
2016-12-01
Micro-seismic events induced by water injection, mining activity or oil/gas extraction are quite informative; their interpretation can be used to reconstruct the underground stress field and to monitor hydraulic fracturing progress in oil/gas reservoirs. The source characteristics and locations are crucial parameters required for these purposes, and they can be obtained through the waveform matching inversion (WMI) method. It is therefore imperative to develop a WMI algorithm with high accuracy and convergence speed. Heuristic algorithms, a class of nonlinear methods, converge quickly, are good at escaping local minima, and have been applied successfully in many areas (e.g. image processing and artificial intelligence). However, their effectiveness for micro-seismic WMI is still poorly investigated, and very little literature addresses this subject. In this research an advanced heuristic algorithm, the gravitational search algorithm (GSA), is proposed to estimate the focal mechanism (strike, dip and rake angles) and the source locations in three dimensions. Unlike traditional inversion methods, the heuristic inversion does not require an approximation of the Green's function. The method interacts directly with a CPU-parallelized finite-difference forward modelling engine and updates the model parameters according to GSA criteria. The effectiveness of this method is tested with synthetic data from a multi-layered elastic model; the results indicate that GSA can be applied successfully to WMI and has unique advantages. Keywords: Micro-seismicity, waveform matching inversion, gravitational search algorithm, parallel computation
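For illustration only, the sketch below shows how a gravitational search algorithm can drive such a search over source parameters (strike, dip, rake and a 3-D location): agents are assigned masses from their misfit values and attract one another, so the population drifts toward low-misfit regions. The misfit callable, parameter bounds and all GSA settings are assumptions made for the sketch, not the authors' workflow, which couples the search to a finite-difference forward modelling engine.

    import numpy as np

    def gsa_minimize(misfit, bounds, n_agents=30, n_iter=100, g0=100.0, alpha=20.0, seed=0):
        """Minimize misfit(params) over a box; bounds is an array of (low, high) rows."""
        rng = np.random.default_rng(seed)
        bounds = np.asarray(bounds, dtype=float)
        lo, hi = bounds[:, 0], bounds[:, 1]
        pos = rng.uniform(lo, hi, size=(n_agents, len(lo)))
        vel = np.zeros_like(pos)
        for t in range(n_iter):
            fit = np.array([misfit(p) for p in pos])
            mass = (fit.max() - fit) / (fit.max() - fit.min() + 1e-12)   # better fit -> larger mass
            mass /= mass.sum() + 1e-12
            g = g0 * np.exp(-alpha * t / n_iter)                         # decaying gravitational constant
            acc = np.zeros_like(pos)
            for i in range(n_agents):
                for j in range(n_agents):
                    if i != j:
                        diff = pos[j] - pos[i]
                        acc[i] += rng.random() * g * mass[j] * diff / (np.linalg.norm(diff) + 1e-12)
            vel = rng.random(pos.shape) * vel + acc
            pos = np.clip(pos + vel, lo, hi)
        return pos[np.argmin([misfit(p) for p in pos])]

    # Toy usage: recover (strike, dip, rake, x, y, z) for a quadratic stand-in misfit
    truth = np.array([40.0, 60.0, -90.0, 1.0, 2.0, 3.0])
    toy_misfit = lambda p: np.sum((p - truth) ** 2)
    box = [(0, 360), (0, 90), (-180, 180), (-5, 5), (-5, 5), (0, 10)]
    print(gsa_minimize(toy_misfit, box, n_iter=200))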
Multi-objective community detection based on memetic algorithm.
Wu, Peng; Pan, Li
2015-01-01
Community detection has drawn a lot of attention as it can provide invaluable help in understanding the function and visualizing the structure of networks. Since single objective optimization methods have intrinsic drawbacks to identifying multiple significant community structures, some methods formulate the community detection as multi-objective problems and adopt population-based evolutionary algorithms to obtain multiple community structures. Evolutionary algorithms have strong global search ability, but have difficulty in locating local optima efficiently. In this study, in order to identify multiple significant community structures more effectively, a multi-objective memetic algorithm for community detection is proposed by combining multi-objective evolutionary algorithm with a local search procedure. The local search procedure is designed by addressing three issues. Firstly, nondominated solutions generated by evolutionary operations and solutions in dominant population are set as initial individuals for local search procedure. Then, a new direction vector named as pseudonormal vector is proposed to integrate two objective functions together to form a fitness function. Finally, a network specific local search strategy based on label propagation rule is expanded to search the local optimal solutions efficiently. The extensive experiments on both artificial and real-world networks evaluate the proposed method from three aspects. Firstly, experiments on influence of local search procedure demonstrate that the local search procedure can speed up the convergence to better partitions and make the algorithm more stable. Secondly, comparisons with a set of classic community detection methods illustrate the proposed method can find single partitions effectively. Finally, the method is applied to identify hierarchical structures of networks which are beneficial for analyzing networks in multi-resolution levels. PMID:25932646
Estimation of the uncertainty of elastic image registration with the demons algorithm.
Hub, M; Karger, C P
2013-05-07
The accuracy of elastic image registration is limited. We propose an approach to detect voxels where registration based on the demons algorithm is likely to perform inaccurately, compared to other locations of the same image. The approach is based on the assumption that the local reproducibility of the registration can be regarded as a measure of uncertainty of the image registration. The reproducibility is determined as the standard deviation of the displacement vector components obtained from multiple registrations. These registrations differ in predefined initial deformations. The proposed approach was tested with artificially deformed lung images, where the ground truth on the deformation is known. In voxels where the result of the registration was less reproducible, the registration turned out to have larger average registration errors as compared to locations of the same image, where the registration was more reproducible. The proposed method can show a clinician in which area of the image the elastic registration with the demons algorithm cannot be expected to be accurate.
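A minimal sketch of the reproducibility measure described above is given below, assuming each demons registration run returns a dense displacement field of shape (nx, ny, nz, 3); run_demons is a hypothetical callable standing in for an actual demons implementation, and averaging the per-component standard deviations into one map is an illustrative choice rather than the authors' exact definition.

    import numpy as np

    def registration_uncertainty(fixed, moving, initial_deformations, run_demons):
        """Per-voxel reproducibility: standard deviation of the displacement
        components over registrations started from different initial deformations."""
        fields = np.stack([run_demons(fixed, moving, d0) for d0 in initial_deformations])
        std_per_component = fields.std(axis=0)        # shape (nx, ny, nz, 3)
        return std_per_component.mean(axis=-1)        # one uncertainty value per voxel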
Combining Multiple Rupture Models in Real-Time for Earthquake Early Warning
NASA Astrophysics Data System (ADS)
Minson, S. E.; Wu, S.; Beck, J. L.; Heaton, T. H.
2015-12-01
The ShakeAlert earthquake early warning system for the west coast of the United States is designed to combine information from multiple independent earthquake analysis algorithms in order to provide the public with robust predictions of shaking intensity at each user's location before they are affected by strong shaking. The current contributing analyses come from algorithms that determine the origin time, epicenter, and magnitude of an earthquake (On-site, ElarmS, and Virtual Seismologist). A second generation of algorithms will provide seismic line source information (FinDer), as well as geodetically-constrained slip models (BEFORES, GPSlip, G-larmS, G-FAST). These new algorithms will provide more information about the spatial extent of the earthquake rupture and thus improve the quality of the resulting shaking forecasts. Each of the contributing algorithms exploits different features of the observed seismic and geodetic data, and thus each algorithm may perform differently for different data availability and earthquake source characteristics. Thus the ShakeAlert system requires a central mediator, called the Central Decision Module (CDM). The CDM acts to combine disparate earthquake source information into one unified shaking forecast. Here we will present a new design for the CDM that uses a Bayesian framework to combine earthquake reports from multiple analysis algorithms and compares them to observed shaking information in order both to assess the relative plausibility of each earthquake report and to create an improved unified shaking forecast complete with appropriate uncertainties. We will describe how these probabilistic shaking forecasts can be used to provide each user with a personalized decision-making tool that can help decide whether or not to take a protective action (such as opening fire house doors or stopping trains) based on that user's distance to the earthquake, vulnerability to shaking, false alarm tolerance, and time required to act.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Blom, Philip Stephen; Marcillo, Omar Eduardo; Euler, Garrett Gene
InfraPy is a Python-based analysis toolkit being developed at LANL. The algorithms are intended for ground-based nuclear detonation detection applications to detect, locate, and characterize explosive sources using infrasonic observations. The implementation is usable as a stand-alone Python library or as a command-line-driven tool operating directly on a database. With multiple scientists working on the project, we've begun using a LANL git repository for collaborative development and version control. Current and planned work on InfraPy focuses on the development of new algorithms and propagation models. Collaboration with Southern Methodist University (SMU) has helped identify bugs and limitations of the algorithms. Current usage development focuses on library imports and the CLI.
A generalized method for multiple robotic manipulator programming applied to vertical-up welding
NASA Technical Reports Server (NTRS)
Fernandez, Kenneth R.; Cook, George E.; Andersen, Kristinn; Barnett, Robert Joel; Zein-Sabattou, Saleh
1991-01-01
The application of a weld programming algorithm to vertical-up welding, which is frequently desired for variable polarity plasma arc welding (VPPAW), is described. The basic algorithm performs three tasks simultaneously: control of the robotic mechanism so that proper torch motion is achieved while minimizing the sum of squares of joint displacements; control of the torch while the part is maintained in a desirable orientation; and control of the wire feed mechanism location with respect to the moving welding torch. Also presented is a modification of this algorithm that permits it to be used for vertical-up welding. The details of this modification are discussed, and simulation examples are provided for illustration and verification.
A greedy, graph-based algorithm for the alignment of multiple homologous gene lists.
Fostier, Jan; Proost, Sebastian; Dhoedt, Bart; Saeys, Yvan; Demeester, Piet; Van de Peer, Yves; Vandepoele, Klaas
2011-03-15
Many comparative genomics studies rely on the correct identification of homologous genomic regions using accurate alignment tools. In such cases, the alphabet of the input sequences consists of complete genes, rather than nucleotides or amino acids. As optimal multiple sequence alignment is computationally impractical, a progressive alignment strategy is often employed. However, such an approach is susceptible to the propagation of alignment errors in early pairwise alignment steps, especially when dealing with strongly diverged genomic regions. In this article, we present a novel, accurate and efficient greedy graph-based algorithm for the alignment of multiple homologous genomic segments, represented as ordered gene lists. Based on provable properties of the graph structure, several heuristics are developed to resolve local alignment conflicts that occur due to gene duplication and/or rearrangement events on the different genomic segments. The performance of the algorithm is assessed by comparing the alignment results of homologous genomic segments in Arabidopsis thaliana to those obtained by using both a progressive alignment method and an earlier graph-based implementation. Especially for datasets that contain strongly diverged segments, the proposed method achieves a substantially higher alignment accuracy, and proves to be sufficiently fast for large datasets including a few dozen eukaryotic genomes. http://bioinformatics.psb.ugent.be/software. The algorithm is implemented as a part of the i-ADHoRe 3.0 package.
Sheldon, Signy; Chu, Sonja
2017-09-01
Autobiographical memory research has investigated how cueing distinct aspects of a past event can trigger different recollective experiences. This research has stimulated theories about how autobiographical knowledge is accessed and organized. Here, we test the idea that thematic information organizes multiple autobiographical events whereas spatial information organizes individual past episodes by investigating how retrieval guided by these two forms of information differs. We used a novel autobiographical fluency task in which participants accessed multiple memory exemplars to event theme and spatial (location) cues followed by a narrative description task in which they described the memories generated to these cues. Participants recalled significantly more memory exemplars to event theme than to spatial cues; however, spatial cues prompted faster access to past memories. Results from the narrative description task revealed that memories retrieved via event theme cues compared to spatial cues had a higher number of overall details, but those recalled to the spatial cues were recollected with a greater concentration on episodic details than those retrieved via event theme cues. These results provide evidence that thematic information organizes and integrates multiple memories whereas spatial information prompts the retrieval of specific episodic content from a past event.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Collaboration: ALICE Collaboration
2016-01-01
ALICE is one of four large experiments at the CERN Large Hadron Collider near Geneva, specially designed to study particle production in ultra-relativistic heavy-ion collisions. Located 52 meters underground with 28 meters of overburden rock, it has also been used to detect muons produced by cosmic ray interactions in the upper atmosphere. In this paper, we present the multiplicity distribution of these atmospheric muons and its comparison with Monte Carlo simulations. This analysis exploits the large size and excellent tracking capability of the ALICE Time Projection Chamber. A special emphasis is given to the study of high multiplicity events containing more than 100 reconstructed muons and corresponding to a muon areal density ρ_μ > 5.9 m⁻². Similar events have been studied in previous underground experiments such as ALEPH and DELPHI at LEP. While these experiments were able to reproduce the measured muon multiplicity distribution with Monte Carlo simulations at low and intermediate multiplicities, their simulations failed to describe the frequency of the highest multiplicity events. In this work we show that the high multiplicity events observed in ALICE stem from primary cosmic rays with energies above 10¹⁶ eV and that the frequency of these events can be successfully described by assuming a heavy mass composition of primary cosmic rays in this energy range. The development of the resulting air showers was simulated using the latest version of QGSJET to model hadronic interactions. This observation places significant constraints on alternative, more exotic, production mechanisms for these events.
MPL-Net data products available at co-located AERONET sites and field experiment locations
NASA Astrophysics Data System (ADS)
Welton, E. J.; Campbell, J. R.; Berkoff, T. A.
2002-05-01
Micro-pulse lidar (MPL) systems are small, eye-safe lidars capable of profiling the vertical distribution of aerosol and cloud layers. There are now over 20 MPL systems around the world, and they have been used in numerous field experiments. A new project was started at NASA Goddard Space Flight Center in 2000. The new project, MPL-Net, is a coordinated network of long-time MPL sites. The network also supports a limited number of field experiments each year. Most MPL-Net sites and field locations are co-located with AERONET sunphotometers. At these locations, the AERONET and MPL-Net data are combined together to provide both column and vertically resolved aerosol and cloud measurements. The MPL-Net project coordinates the maintenance and repair for all instruments in the network. In addition, data is archived and processed by the project using common, standardized algorithms that have been developed and utilized over the past 10 years. These procedures ensure that stable, calibrated MPL systems are operating at sites and that the data quality remains high. Rigorous uncertainty calculations are performed on all MPL-Net data products. Automated, real-time level 1.0 data processing algorithms have been developed and are operational. Level 1.0 algorithms are used to process the raw MPL data into the form of range corrected, uncalibrated lidar signals. Automated, real-time level 1.5 algorithms have also been developed and are now operational. Level 1.5 algorithms are used to calibrate the MPL systems, determine cloud and aerosol layer heights, and calculate the optical depth and extinction profile of the aerosol boundary layer. The co-located AERONET sunphotometer provides the aerosol optical depth, which is used as a constraint to solve for the extinction-to-backscatter ratio and the aerosol extinction profile. Browse images and data files are available on the MPL-Net web-site. An overview of the processing algorithms and initial results from selected sites and field experiments will be presented. The capability of the MPL-Net project to produce automated real-time (next day) profiles of aerosol extinction will be shown. Finally, early results from Level 2.0 and Level 3.0 algorithms currently under development will be presented. The level 3.0 data provide continuous (day/night) retrievals of multiple aerosol and cloud heights, and optical properties of each layer detected.
Lin, Yunyue; Wu, Qishi; Cai, Xiaoshan; ...
2010-01-01
Data transmission from sensor nodes to a base station or a sink node often incurs significant energy consumption, which critically affects network lifetime. We generalize and solve the problem of deploying multiple base stations to maximize network lifetime in terms of two different metrics under one-hop and multihop communication models. In the one-hop communication model, the sensors far away from base stations always deplete their energy much faster than others. We propose an optimal solution and a heuristic approach based on the minimal enclosing circle algorithm to deploy a base station at the geometric center of each cluster. In the multihop communication model, both base station location and data routing mechanism need to be considered in maximizing network lifetime. We propose an iterative algorithm based on rigorous mathematical derivations and use linear programming to compute the optimal routing paths for data transmission. Simulation results show the distinguished performance of the proposed deployment algorithms in maximizing network lifetime.
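As a rough illustration of the one-hop placement idea, the snippet below finds, for one cluster, the point that minimizes the maximum distance to its sensors (the centre of the minimal enclosing circle, here approximated numerically rather than with an exact geometric algorithm); the sensor coordinates are made-up example values, not data from the paper.

    import numpy as np
    from scipy.optimize import minimize

    def enclosing_circle_center(sensor_xy):
        """Approximate centre of the minimal enclosing circle of a sensor cluster."""
        pts = np.asarray(sensor_xy, dtype=float)
        farthest = lambda c: np.max(np.linalg.norm(pts - c, axis=1))
        return minimize(farthest, pts.mean(axis=0), method="Nelder-Mead").x

    print(enclosing_circle_center([(0, 0), (100, 0), (0, 80), (20, 30)]))  # candidate base station site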
Multi-output decision trees for lesion segmentation in multiple sclerosis
NASA Astrophysics Data System (ADS)
Jog, Amod; Carass, Aaron; Pham, Dzung L.; Prince, Jerry L.
2015-03-01
Multiple Sclerosis (MS) is a disease of the central nervous system in which the protective myelin sheath of the neurons is damaged. MS leads to the formation of lesions, predominantly in the white matter of the brain and the spinal cord. The number and volume of lesions visible in magnetic resonance (MR) imaging (MRI) are important criteria for diagnosing and tracking the progression of MS. Locating and delineating lesions manually requires the tedious and expensive efforts of highly trained raters. In this paper, we propose an automated algorithm to segment lesions in MR images using multi-output decision trees. We evaluated our algorithm on the publicly available MICCAI 2008 MS Lesion Segmentation Challenge training dataset of 20 subjects, and showed improved results in comparison to state-of-the-art methods. We also evaluated our algorithm on an in-house dataset of 49 subjects with a true positive rate of 0.41 and a positive predictive value of 0.36.
NASA Astrophysics Data System (ADS)
Zheng, Jing; Lu, Jiren; Peng, Suping; Jiang, Tianqi
2018-02-01
Conventional arrival-picking algorithms cannot avoid manual parameter adjustment when identifying multiple events simultaneously under different signal-to-noise ratios (SNRs). Therefore, to obtain the arrivals of multiple events automatically and with high precision under different SNRs, this study proposes an algorithm that picks the arrivals of microseismic or acoustic emission events using deep recurrent neural networks. Arrival identification comprises two steps: a training phase and a testing phase. The training process is modelled by deep recurrent neural networks with a Long Short-Term Memory (LSTM) architecture. In the testing phase, the learned weights are used to identify arrivals in microseismic/acoustic emission data sets obtained from acoustic emission rock physics experiments. To generate data sets with different SNRs, random noise was added to the raw experimental records. The results show that the proposed method attains a hit rate above 80 per cent at an SNR of 0 dB and approximately 70 per cent at an SNR of -5 dB, with absolute errors within 10 sampling points. These results indicate that the proposed method has high picking precision and robustness.
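The sketch below is a minimal Keras stand-in for the kind of recurrent picker described above, not the authors' network or data: an LSTM reads the waveform sample by sample and a per-sample sigmoid output marks whether the arrival has already occurred, so the picked arrival is the first sample whose probability crosses a threshold. The synthetic trace, onset position, layer sizes and threshold are all assumptions for illustration.

    import numpy as np
    import tensorflow as tf

    def build_picker(seq_len):
        model = tf.keras.Sequential([
            tf.keras.Input(shape=(seq_len, 1)),
            tf.keras.layers.LSTM(64, return_sequences=True),
            tf.keras.layers.TimeDistributed(tf.keras.layers.Dense(1, activation="sigmoid")),
        ])
        model.compile(optimizer="adam", loss="binary_crossentropy")
        return model

    # One synthetic noisy trace with an onset at sample 500; label = 1 after onset
    x = (0.1 * np.random.randn(1, 1000, 1)).astype("float32")
    x[0, 500:, 0] += np.sin(np.linspace(0.0, 60.0, 500)).astype("float32")
    y = np.zeros((1, 1000, 1), dtype="float32")
    y[0, 500:, 0] = 1.0

    model = build_picker(1000)
    model.fit(x, y, epochs=2, verbose=0)
    prob = model.predict(x, verbose=0)[0, :, 0]
    pick = int(np.argmax(prob > 0.5))   # first sample above threshold (0 if none crosses it)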
Comparison of four moderate-size earthquakes in southern California using seismology and InSAR
Mellors, R.J.; Magistrale, H.; Earle, P.; Cogbill, A.H.
2004-01-01
Source parameters determined from interferometric synthetic aperture radar (InSAR) measurements and from seismic data are compared for four moderate-size (less than M 6) earthquakes in southern California. The goal is to verify approximate detection capabilities of InSAR, assess differences in the results, and test how the two results can be reconciled. First, we calculated the expected surface deformation from all earthquakes greater than magnitude 4 in areas with available InSAR data (347 events). A search for deformation from the events in the interferograms yielded four possible events with magnitudes less than 6. The search for deformation was based on a visual inspection as well as cross-correlation in two dimensions between the measured signal and the expected signal. A grid-search algorithm was then used to estimate focal mechanism and depth from the InSAR data. The results were compared with locations and focal mechanisms from published catalogs. An independent relocation using seismic data was also performed. The seismic locations fell within the area of the expected rupture zone for the three events that show clear surface deformation. Therefore, the technique shows the capability to resolve locations with high accuracy and is applicable worldwide. The depths determined by InSAR agree with well-constrained seismic locations determined in a 3D velocity model. Depth control for well-imaged shallow events using InSAR data is good, and better than the seismic constraints in some cases. A major difficulty for InSAR analysis is the poor temporal coverage of InSAR data, which may make it impossible to distinguish deformation due to different earthquakes at the same location.
Structural health monitoring of inflatable structures for MMOD impacts
NASA Astrophysics Data System (ADS)
Anees, Muhammad; Gbaguidi, Audrey; Kim, Daewon; Namilae, Sirish
2017-04-01
Inflatable structures for space habitat are highly prone to damage caused by micrometeoroid and orbital debris impacts. Although the structures are effectively shielded against these impacts through multiple layers of impact resistant materials, there is a necessity for a health monitoring system to monitor the structural integrity and damage state within the structures. Assessment of damage is critical for the safety of personnel in the space habitat, as well as predicting the repair needs and the remaining useful life of the habitat. In this paper, we propose a unique impact detection and health monitoring system based on hybrid nanocomposite sensors. The sensors are composed of two fillers, carbon nanotubes and coarse graphene platelets, with an epoxy matrix material. The electrical conductivity of these flexible nanocomposite sensors is highly sensitive to strains as well as the presence of any holes and damage in the structure. The sensitivity of the sensors to the presence of 3 mm holes due to an event of impact is evaluated using four point probe electrical resistivity measurements. An array of these sensors, when sandwiched between soft good layers in a space habitat, can act as a damage detection layer for inflatable structures. An algorithm is developed to determine the event of impact, its severity and location on the sensing layer for active health monitoring.
NASA Astrophysics Data System (ADS)
Merenda, K. D.
2016-12-01
Since 2013, the Pierre Auger Cosmic Ray Observatory in Mendoza, Argentina, has extended its trigger algorithm to detect emissions of light consistent with the signature of very low frequency perturbations due to electromagnetic pulse sources (ELVES). Correlations with the World Wide Lightning Location Network (WWLLN), the Lightning Imaging Sensor (LIS) and simulated events were used to assess the quality of the reconstructed data. The fluorescence detector (FD) is a pixel array telescope sensitive to the deep UV emissions of ELVES. The detector provides the finest time resolution, 100 nanoseconds, ever applied to the study of ELVES. Four eyes, separated by approximately 40 kilometers, each consist of six telescopes and together span a total of 360 degrees of azimuth angle. The detector operates at night when storms are not in the field of view. An existing 3D EMP model solves Maxwell's equations using a three-dimensional finite-difference time-domain scheme to describe the propagation of electromagnetic pulses from lightning sources to the ionosphere. The simulation also provides a projection of the resulting ELVES onto the pixel array of the FD. A full reconstruction of simulated events is under development. We introduce the analog signal time evolution comparison between Auger reconstructed data and simulated events on individual FD pixels. In conjunction, we will present a study of the angular distribution of light emission around the vertical and above the causative lightning source. We will also contrast, with Monte Carlo, Auger double ELVES events separated by at most 5 microseconds. These events are too short to be explained by multiple return strokes, ground reflections, or compact intra-cloud lightning sources. Reconstructed ELVES data are 40% correlated to WWLLN data, and an analysis with the LIS database is underway.
Defining and Enabling Resiliency of Electric Distribution Systems With Multiple Microgrids
Chanda, Sayonsom; Srivastava, Anurag K.
2016-05-02
This paper presents a method for quantifying and enabling the resiliency of a power distribution system (PDS) using the analytical hierarchical process and percolation theory. Using this metric, quantitative analysis can be done to analyze the impact of possible control decisions to pro-actively enable the resilient operation of a distribution system with multiple microgrids and other resources. The developed resiliency metric can also be used in short-term distribution system planning. The ability to quantify resiliency can help distribution system planning engineers and operators justify control actions, compare different reconfiguration algorithms, and develop proactive control actions to avert power system outages due to impending catastrophic weather situations or other adverse events. Validation of the proposed method is done using modified CERTS microgrids and a modified industrial distribution system. Furthermore, simulation results show topological and composite metrics considering power system characteristics to quantify the resiliency of a distribution system with the proposed methodology, and improvements in resiliency using a two-stage reconfiguration algorithm and multiple microgrids.
Calibration of a rainfall-runoff hydrological model and flood simulation using data assimilation
NASA Astrophysics Data System (ADS)
Piacentini, A.; Ricci, S. M.; Thual, O.; Coustau, M.; Marchandise, A.
2010-12-01
Rainfall-runoff models are crucial tools for long-term assessment of flash floods or real-time forecasting. This work focuses on the calibration of a distributed parsimonious event-based rainfall-runoff model using data assimilation. The model combines an SCS-derived runoff model and a Lag and Route routing model for each cell of a regular grid mesh. The SCS-derived runoff model is parametrized by the initial water deficit, the discharge coefficient for the soil reservoir and a lagged discharge coefficient. The Lag and Route routing model is parametrized by the travel velocity and the lag parameter. These parameters are assumed to be constant for a given catchment, except for the initial water deficit and the travel velocity, which are event-dependent (land use, soil type and initial moisture conditions). In the present work, a BLUE filtering technique was used to calibrate the initial water deficit and the travel velocity for each flood event by assimilating the first available discharge measurements at the catchment outlet. The advantages of the BLUE algorithm are its low computational cost and its convenient implementation, especially in the context of the calibration of a reduced number of parameters. The assimilation algorithm was applied to two Mediterranean catchment areas of different size and dynamics: Gardon d'Anduze and Lez. The Lez catchment, with a drainage area of 114 km², is located upstream of Montpellier. It is a karstic catchment mainly affected by floods in autumn during intense rainstorms with short lag times and high discharge peaks (up to 480 m³/s in September 2005). The Gardon d'Anduze catchment, mostly granitic and schistose, with a drainage area of 545 km², lies over the départements of Lozère and Gard. It is often affected by flash and devastating floods (up to 3000 m³/s in September 2002). The discharge observations at the beginning of the flood event are assimilated so that the BLUE algorithm provides optimal values for the initial water deficit and the travel velocity before the flood peak. These optimal values are used for a new simulation of the event in forecast mode (under the assumption of perfect rainfall). On both catchments, it was shown over a significant number of flood events that the data assimilation procedure improves the flood peak forecast. The improvement is globally more important for the Gardon d'Anduze catchment, where the flood events are stronger. The peak can be forecast up to 36 hours ahead of time by assimilating very few observations (up to four) during the rise of the water level. For multiple-peak events, the assimilation of the observations from the first peak leads to a significant improvement of the second peak simulation. It was also shown that the flood rise is often faster in reality than it is represented by the model. In this case, and when the flood peak is underestimated in the simulation, the use of the first observations can be misleading for the data assimilation algorithm. Careful estimation of the observation and background error variances enabled satisfactory use of the data assimilation in these complex cases, even though it does not allow for the correction of model error.
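For readers unfamiliar with the BLUE analysis step, a generic form is sketched below; the background parameter vector (initial water deficit, travel velocity), the error covariances B and R, and a linearized observation operator H mapping parameters to simulated discharge are illustrative placeholders rather than values from the study.

    import numpy as np

    def blue_update(xb, B, y, R, H):
        """One BLUE analysis: xb = background parameters, y = assimilated discharge
        observations, H = Jacobian of simulated discharge with respect to the
        parameters (a linear observation operator is assumed in this sketch)."""
        K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)    # gain matrix
        xa = xb + K @ (y - H @ xb)                      # calibrated (analysis) parameters
        A = (np.eye(len(xb)) - K @ H) @ B               # analysis error covariance
        return xa, A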
Bennetts, Victor Hernandez; Schaffernicht, Erik; Pomareda, Victor; Lilienthal, Achim J; Marco, Santiago; Trincavelli, Marco
2014-09-17
In this paper, we address the task of gas distribution modeling in scenarios where multiple heterogeneous compounds are present. Gas distribution modeling is particularly useful in emission monitoring applications, where spatial representations of the gaseous patches can be used to identify emission hot spots. In realistic environments, the presence of multiple chemicals is expected and, therefore, gas discrimination has to be incorporated in the modeling process. The approach presented in this work addresses the task of gas distribution modeling by combining different non-selective gas sensors. Gas discrimination is addressed with an open sampling system, composed of an array of metal oxide sensors and a probabilistic algorithm tailored to uncontrolled environments. For each of the identified compounds, the mapping algorithm generates a calibrated gas distribution model using the classification uncertainty and the concentration readings acquired with a photoionization detector. The meta-parameters of the proposed modeling algorithm are automatically learned from the data. The approach was validated with a gas-sensitive robot patrolling outdoor and indoor scenarios, where two different chemicals were released simultaneously. The experimental results show that the generated multi-compound maps can be used to accurately predict the location of emitting gas sources.
NASA Astrophysics Data System (ADS)
Forouzanfar, F.; Tavakkoli-Moghaddam, R.; Bashiri, M.; Baboli, A.; Hadji Molana, S. M.
2017-11-01
This paper studies a location-routing-inventory problem in a multi-period closed-loop supply chain with multiple suppliers, producers, distribution centers, customers, collection centers, recovery centers, and recycling centers. In this supply chain, the centers operate at multiple levels; a price increase factor is considered for operational costs at the centers; inventory and shortages (including lost sales and backlogs) are allowed at production centers; and the arrival times of each plant's vehicles at its dedicated distribution centers, as well as their departures, are considered. The objective is to minimize both the sum of system costs and the sum of the maximum times at each level. The problem is formulated as a bi-objective nonlinear integer programming model. Due to the NP-hard nature of the problem, two meta-heuristics, namely the non-dominated sorting genetic algorithm (NSGA-II) and multi-objective particle swarm optimization (MOPSO), are used for large problem instances. In addition, a Taguchi method is used to set the parameters of these algorithms to enhance their performance. To evaluate the efficiency of the proposed algorithms, the results for small-sized problems are compared with the results of the ɛ-constraint method. Finally, four measuring metrics, namely the number of Pareto solutions, mean ideal distance, spacing metric, and quality metric, are used to compare NSGA-II and MOPSO.
NASA Astrophysics Data System (ADS)
Lawry, B. J.; Encarnacao, A.; Hipp, J. R.; Chang, M.; Young, C. J.
2011-12-01
With the rapid growth of multi-core computing hardware, it is now possible for scientific researchers to run complex, computationally intensive software on affordable, in-house commodity hardware. Multi-core CPUs (Central Processing Unit) and GPUs (Graphics Processing Unit) are now commonplace in desktops and servers. Developers today have access to extremely powerful hardware that enables the execution of software that could previously only be run on expensive, massively-parallel systems. It is no longer cost-prohibitive for an institution to build a parallel computing cluster consisting of commodity multi-core servers. In recent years, our research team has developed a distributed, multi-core computing system and used it to construct global 3D earth models using seismic tomography. Traditionally, computational limitations forced certain assumptions and shortcuts in the calculation of tomographic models; however, with the recent rapid growth in computational hardware including faster CPU's, increased RAM, and the development of multi-core computers, we are now able to perform seismic tomography, 3D ray tracing and seismic event location using distributed parallel algorithms running on commodity hardware, thereby eliminating the need for many of these shortcuts. We describe Node Resource Manager (NRM), a system we developed that leverages the capabilities of a parallel computing cluster. NRM is a software-based parallel computing management framework that works in tandem with the Java Parallel Processing Framework (JPPF, http://www.jppf.org/), a third party library that provides a flexible and innovative way to take advantage of modern multi-core hardware. NRM enables multiple applications to use and share a common set of networked computers, regardless of their hardware platform or operating system. Using NRM, algorithms can be parallelized to run on multiple processing cores of a distributed computing cluster of servers and desktops, which results in a dramatic speedup in execution time. NRM is sufficiently generic to support applications in any domain, as long as the application is parallelizable (i.e., can be subdivided into multiple individual processing tasks). At present, NRM has been effective in decreasing the overall runtime of several algorithms: 1) the generation of a global 3D model of the compressional velocity distribution in the Earth using tomographic inversion, 2) the calculation of the model resolution matrix, model covariance matrix, and travel time uncertainty for the aforementioned velocity model, and 3) the correlation of waveforms with archival data on a massive scale for seismic event detection. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.
Architecture for multi-technology real-time location systems.
Rodas, Javier; Barral, Valentín; Escudero, Carlos J
2013-02-07
The rising popularity of location-based services has prompted considerable research in the field of indoor location systems. Since there is no single technology to support these systems, it is necessary to consider the fusion of the information coming from heterogeneous sensors. This paper presents a software architecture designed for a hybrid location system where we can merge information from multiple sensor technologies. The architecture was designed to be used by different kinds of actors independently and with mutual transparency: hardware administrators, algorithm developers and user applications. The paper presents the architecture design, work-flow, case study examples and some results to show how different technologies can be exploited to obtain a good estimation of a target position.
NASA Astrophysics Data System (ADS)
Cobin, P. F.; Oommen, T.; Gierke, J. S.
2013-12-01
The Lake Atitlán watershed is home to approximately 200,000 people and is located in the western highlands of Guatemala. Steep slopes, highly susceptible to landslides during the rainy season, characterize the region. Typically these landslides occur during high-intensity precipitation events. Hurricane Stan hit Guatemala in October 2005; the resulting flooding and landslides devastated the region. Locations of landslide and non-landslide points were obtained from field observations and orthophotos taken following Hurricane Stan. Different datasets of landslide and non-landslide points across the watershed were used to compare model success at a small scale and regional scale. This study used data from multiple attributes: geology, geomorphology, distance to faults and streams, land use, slope, aspect, curvature, plan curvature, profile curvature and topographic wetness index. The open source software Weka was used for the data mining. Several attribute selection methods were applied to the data to predetermine the potential landslide causative influence. Different multivariate algorithms were then evaluated for their ability to predict landslide occurrence. The following statistical parameters were used to evaluate model accuracy: precision, recall, F measure and area under the receiver operating characteristic (ROC) curve. The attribute combinations of the most successful models were compared to the attribute evaluator results. The algorithm BayesNet yielded the most accurate model and was used to build a probability map of landslide initiation points for the regions selected in the watershed. The ultimate aim of this study is to share the methodology and results with municipal contacts from the author's time as a U.S. Peace Corps volunteer, to facilitate more effective future landslide hazard planning and mitigation.
Locator-Checker-Scaler Object Tracking Using Spatially Ordered and Weighted Patch Descriptor.
Kim, Han-Ul; Kim, Chang-Su
2017-08-01
In this paper, we propose a simple yet effective object descriptor and a novel tracking algorithm to track a target object accurately. For the object description, we divide the bounding box of a target object into multiple patches and describe them with color and gradient histograms. Then, we determine the foreground weight of each patch to alleviate the impacts of background information in the bounding box. To this end, we perform random walk with restart (RWR) simulation. We then concatenate the weighted patch descriptors to yield the spatially ordered and weighted patch (SOWP) descriptor. For the object tracking, we incorporate the proposed SOWP descriptor into a novel tracking algorithm, which has three components: locator, checker, and scaler (LCS). The locator and the scaler estimate the center location and the size of a target, respectively. The checker determines whether it is safe to adjust the target scale in a current frame. These three components cooperate with one another to achieve robust tracking. Experimental results demonstrate that the proposed LCS tracker achieves excellent performance on recent benchmarks.
Energy to the Edge (E2E) U.S. Army Rapid Equipping Force
2014-03-21
generators, parallel multiple sources, prioritize loads, and balance loads. Smart grids are based on complex algorithms and controls. 3. Reduce...stations are not able to be serviced by prime power because of their location in the middle of a very active airfield and fueling a system that consists
Explosive hazard detection using MIMO forward-looking ground penetrating radar
NASA Astrophysics Data System (ADS)
Shaw, Darren; Ho, K. C.; Stone, Kevin; Keller, James M.; Popescu, Mihail; Anderson, Derek T.; Luke, Robert H.; Burns, Brian
2015-05-01
This paper proposes a machine learning algorithm for subsurface object detection on multiple-input-multiple-output (MIMO) forward-looking ground-penetrating radar (FLGPR). By detecting hazards using FLGPR, standoff distances of up to tens of meters can be achieved, but this comes at the cost of degraded performance due to high false alarm rates. The proposed system utilizes an anomaly detection prescreener to identify potential object locations. Alarm locations have multiple one-dimensional (1D) spectral features, two-dimensional (2D) spectral features, and log-Gabor statistic features extracted. The ability of these features to reduce the number of false alarms and increase the probability of detection is evaluated for both co-polarizations present in the Akela MIMO array. Classification is performed by a Support Vector Machine (SVM) with lane-based cross-validation for training and testing. Class imbalance and optimized SVM kernel parameters are considered during classifier training.
A Hybrid DV-Hop Algorithm Using RSSI for Localization in Large-Scale Wireless Sensor Networks.
Cheikhrouhou, Omar; M Bhatti, Ghulam; Alroobaea, Roobaea
2018-05-08
With the increasing realization of the Internet-of-Things (IoT) and rapid proliferation of wireless sensor networks (WSN), estimating the location of wireless sensor nodes is emerging as an important issue. Traditional ranging based localization algorithms use triangulation for estimating the physical location of only those wireless nodes that are within one-hop distance from the anchor nodes. Multi-hop localization algorithms, on the other hand, aim at localizing the wireless nodes that can physically be residing at multiple hops away from anchor nodes. These latter algorithms have attracted a growing interest from research community due to the smaller number of required anchor nodes. One such algorithm, known as DV-Hop (Distance Vector Hop), has gained popularity due to its simplicity and lower cost. However, DV-Hop suffers from reduced accuracy due to the fact that it exploits only the network topology (i.e., number of hops to anchors) rather than the distances between pairs of nodes. In this paper, we propose an enhanced DV-Hop localization algorithm that also uses the RSSI values associated with links between one-hop neighbors. Moreover, we exploit already localized nodes by promoting them to become additional anchor nodes. Our simulations have shown that the proposed algorithm significantly outperforms the original DV-Hop localization algorithm and two of its recently published variants, namely RSSI Auxiliary Ranging and the Selective 3-Anchor DV-hop algorithm. More precisely, in some scenarios, the proposed algorithm improves the localization accuracy by almost 95%, 90% and 70% as compared to the basic DV-Hop, Selective 3-Anchor, and RSSI DV-Hop algorithms, respectively.
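For context, the sketch below implements the textbook DV-Hop baseline that the paper improves on (it ignores RSSI): hop counts to each anchor are obtained from the connectivity graph, every anchor converts its known inter-anchor distances into an average hop size, and each unknown node multilaterates from the resulting distance estimates. A connected graph, at least three non-collinear anchors and 2-D coordinates are assumed.

    import numpy as np
    import networkx as nx

    def dv_hop(graph, anchor_pos):
        """graph: networkx connectivity graph; anchor_pos: {anchor_node: (x, y)}."""
        hops = {a: nx.single_source_shortest_path_length(graph, a) for a in anchor_pos}
        hop_size = {}
        for a, pa in anchor_pos.items():                         # average distance per hop
            d = sum(np.linalg.norm(np.subtract(pa, pb)) for b, pb in anchor_pos.items() if b != a)
            h = sum(hops[a][b] for b in anchor_pos if b != a)
            hop_size[a] = d / h
        anchors = list(anchor_pos)
        P = np.array([anchor_pos[a] for a in anchors], dtype=float)
        estimates = {}
        for node in graph.nodes:
            if node in anchor_pos:
                continue
            dist = np.array([hops[a][node] * hop_size[a] for a in anchors])
            A = 2.0 * (P[1:] - P[0])                             # linearized multilateration
            b = (P[1:] ** 2).sum(1) - (P[0] ** 2).sum() - dist[1:] ** 2 + dist[0] ** 2
            estimates[node] = np.linalg.lstsq(A, b, rcond=None)[0]
        return estimates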
NASA Astrophysics Data System (ADS)
Cao, Y.; Cervone, G.; Barkley, Z.; Lauvaux, T.; Deng, A.; Miles, N.; Richardson, S.
2016-12-01
Fugitive methane emission rates for the Marcellus shale area are estimated using a genetic algorithm that finds optimal weights to minimize the error between simulated and observed concentrations. The overall goal is to understand the relative contribution of methane due to shale gas extraction. Methane sensors were installed on four towers located in northeastern Pennsylvania and have measured atmospheric concentrations since May 2015. Inverse Lagrangian dispersion model runs are performed from each of these tower locations for each hour of 2015. Simulated methane concentrations at each of the four towers are computed by multiplying the resulting footprints from the atmospheric simulations by thousands of emission sources grouped into 11 classes. The emission sources were identified using GIS techniques, and include conventional and unconventional wells, different types of compressor stations, pipelines, landfills, farming and wetlands. Initial estimates for each source are calculated based on emission factors from the EPA and a few regional studies. A genetic algorithm is then used to identify optimal emission rates for the 11 classes of methane emissions and to explore extreme events and spatial and temporal structures in the emissions associated with natural gas activities.
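A conceptual sketch of the optimization step is shown below: a small genetic algorithm searches for multiplicative scaling factors on the 11 emission classes so that footprint-weighted emissions best match the tower observations. The footprint matrix F, observation vector obs and prior emission vector are stand-ins for the Lagrangian model output, tower data and EPA-based priors, and the GA operators are generic choices rather than the study's configuration.

    import numpy as np

    def ga_emission_rates(F, obs, prior, pop=60, gens=200, sigma=0.1, seed=1):
        """F: (n_obs, 11) footprint-by-class matrix; obs: (n_obs,) concentrations;
        prior: (11,) initial emission rates. Returns optimized class emission rates."""
        rng = np.random.default_rng(seed)
        cost = lambda w: np.mean((F @ (w * prior) - obs) ** 2)
        weights = rng.uniform(0.2, 5.0, size=(pop, len(prior)))
        for _ in range(gens):
            order = np.argsort([cost(w) for w in weights])
            parents = weights[order[: pop // 2]]                           # selection
            children = []
            for _ in range(pop - len(parents)):
                a, b = parents[rng.integers(len(parents), size=2)]
                cut = rng.integers(1, len(prior))
                child = np.concatenate([a[:cut], b[cut:]])                 # crossover
                child *= np.exp(sigma * rng.standard_normal(len(prior)))   # mutation
                children.append(np.clip(child, 0.01, 20.0))
            weights = np.vstack([parents, children])
        return weights[np.argmin([cost(w) for w in weights])] * prior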
NASA Astrophysics Data System (ADS)
Camilloni, Carlo; Broglia, Ricardo A.; Tiana, Guido
2011-01-01
The study of the mechanism which is at the basis of the phenomenon of protein folding requires the knowledge of multiple folding trajectories under biological conditions. Using a biasing molecular-dynamics algorithm based on the physics of the ratchet-and-pawl system, we carry out all-atom, explicit solvent simulations of the sequence of folding events which proteins G, CI2, and ACBP undergo in evolving from the denatured to the folded state. Starting from highly disordered conformations, the algorithm allows the proteins to reach, at the price of a modest computational effort, nativelike conformations, within a root mean square deviation (RMSD) of approximately 1 Å. A scheme is developed to extract, from the myriad of events, information concerning the sequence of native contact formation and of their eventual correlation. Such an analysis indicates that all the studied proteins fold hierarchically, through pathways which, although not deterministic, are well-defined with respect to the order of contact formation. The algorithm also allows one to study unfolding, a process which looks, to a large extent, like the reverse of the major folding pathway. This is also true in situations in which many pathways contribute to the folding process, like in the case of protein G.
Sinha, Rituparna; Samaddar, Sandip; De, Rajat K
2015-01-01
Copy number variation (CNV) is a form of structural alteration in the mammalian DNA sequence, which are associated with many complex neurological diseases as well as cancer. The development of next generation sequencing (NGS) technology provides us a new dimension towards detection of genomic locations with copy number variations. Here we develop an algorithm for detecting CNVs, which is based on depth of coverage data generated by NGS technology. In this work, we have used a novel way to represent the read count data as a two dimensional geometrical point. A key aspect of detecting the regions with CNVs, is to devise a proper segmentation algorithm that will distinguish the genomic locations having a significant difference in read count data. We have designed a new segmentation approach in this context, using convex hull algorithm on the geometrical representation of read count data. To our knowledge, most algorithms have used a single distribution model of read count data, but here in our approach, we have considered the read count data to follow two different distribution models independently, which adds to the robustness of detection of CNVs. In addition, our algorithm calls CNVs based on the multiple sample analysis approach resulting in a low false discovery rate with high precision. PMID:26291322
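The snippet below is only a loose illustration of the geometric idea of treating each genomic bin's (test, control) read-count pair as a 2-D point and using a convex hull to separate the bulk of the data from candidate CNV bins; it is not the segmentation algorithm of the paper, and the core fraction and hull-based outlier test are assumptions made for the example.

    import numpy as np
    from scipy.spatial import Delaunay

    def flag_cnv_candidates(test_counts, control_counts, core_fraction=0.95):
        pts = np.column_stack([test_counts, control_counts]).astype(float)
        center = np.median(pts, axis=0)
        radii = np.linalg.norm(pts - center, axis=1)
        core = pts[radii <= np.quantile(radii, core_fraction)]   # bulk of the bins
        hull = Delaunay(core)                                    # enables point-in-hull tests
        return hull.find_simplex(pts) < 0                        # True = outside hull = candidate bin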
Machine learning for the automatic detection of anomalous events
NASA Astrophysics Data System (ADS)
Fisher, Wendy D.
In this dissertation, we describe our research contributions for a novel approach to the application of machine learning for the automatic detection of anomalous events. We work in two different domains to ensure a robust data-driven workflow that could be generalized for monitoring other systems. Specifically, in our first domain, we begin with the identification of internal erosion events in earth dams and levees (EDLs) using geophysical data collected from sensors located on the surface of the levee. As EDLs across the globe reach the end of their design lives, effectively monitoring their structural integrity is of critical importance. The second domain of interest is related to mobile telecommunications, where we investigate a system for automatically detecting non-commercial base station routers (BSRs) operating in protected frequency space. The presence of non-commercial BSRs can disrupt the connectivity of end users, cause service issues for the commercial providers, and introduce significant security concerns. We provide our motivation, experimentation, and results from investigating a generalized novel data-driven workflow using several machine learning techniques. In Chapter 2, we present results from our performance study that uses popular unsupervised clustering algorithms to gain insights to our real-world problems, and evaluate our results using internal and external validation techniques. Using EDL passive seismic data from an experimental laboratory earth embankment, results consistently show a clear separation of events from non-events in four of the five clustering algorithms applied. Chapter 3 uses a multivariate Gaussian machine learning model to identify anomalies in our experimental data sets. For the EDL work, we used experimental data from two different laboratory earth embankments. Additionally, we explore five wavelet transform methods for signal denoising. The best performance is achieved with the Haar wavelets. We achieve up to 97.3% overall accuracy and less than 1.4% false negatives in anomaly detection. In Chapter 4, we research using two-class and one-class support vector machines (SVMs) for an effective anomaly detection system. We again use the two different EDL data sets from experimental laboratory earth embankments (each having approximately 80% normal and 20% anomalies) to ensure our workflow is robust enough to work with multiple data sets and different types of anomalous events (e.g., cracks and piping). We apply Haar wavelet-denoising techniques and extract nine spectral features from decomposed segments of the time series data. The two-class SVM with 10-fold cross validation achieved over 94% overall accuracy and 96% F1-score. Our approach provides a means for automatically identifying anomalous events using various machine learning techniques. Detecting internal erosion events in aging EDLs, earlier than is currently possible, can allow more time to prevent or mitigate catastrophic failures. Results show that we can successfully separate normal from anomalous data observations in passive seismic data, and provide a step towards techniques for continuous real-time monitoring of EDL health. Our lightweight non-commercial BSR detection system also has promise in separating commercial from non-commercial BSR scans without the need for prior geographic location information, extensive time-lapse surveys, or a database of known commercial carriers. (Abstract shortened by ProQuest.).
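The multivariate-Gaussian detector mentioned above can be summarized, in spirit, by the sketch below: fit a Gaussian to features extracted from normal segments and flag low-density points as anomalies. The feature matrices and the epsilon threshold are placeholders, and the dissertation's wavelet denoising and spectral feature extraction steps are omitted here.

    import numpy as np
    from scipy.stats import multivariate_normal

    def fit_gaussian_detector(normal_features):
        """normal_features: (n_segments, n_features) from non-anomalous data."""
        mu = normal_features.mean(axis=0)
        cov = np.cov(normal_features, rowvar=False)
        return multivariate_normal(mean=mu, cov=cov, allow_singular=True)

    def detect_anomalies(model, features, epsilon=1e-6):
        return model.pdf(features) < epsilon        # True = anomalous segment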
Novel maximum likelihood approach for passive detection and localisation of multiple emitters
NASA Astrophysics Data System (ADS)
Hernandez, Marcel
2017-12-01
In this paper, a novel target acquisition and localisation algorithm (TALA) is introduced that offers a capability for detecting and localising multiple targets using the intermittent "signals-of-opportunity" (e.g. acoustic impulses or radio frequency transmissions) they generate. The TALA is a batch estimator that addresses the complex multi-sensor/multi-target data association problem in order to estimate the locations of an unknown number of targets. The TALA is unique in that it does not require measurements to be of a specific type, and can be implemented for systems composed of either homogeneous or heterogeneous sensors. The performance of the TALA is demonstrated in simulated scenarios with a network of 20 sensors and up to 10 targets. The sensors generate angle-of-arrival (AOA), time-of-arrival (TOA), or hybrid AOA/TOA measurements. It is shown that the TALA is able to successfully detect 83-99% of the targets, with a negligible number of false targets declared. Furthermore, the localisation errors of the TALA are typically within 10% of the errors generated by a "genie" algorithm that is given the correct measurement-to-target associations. The TALA also performs well in comparison with an optimistic Cramér-Rao lower bound, with typical differences in performance of 10-20%, and differences in performance of 40-50% in the most difficult scenarios considered. The computational expense of the TALA is also controllable, which allows the TALA to maintain computational feasibility even in the most challenging scenarios considered. This allows the approach to be implemented in time-critical scenarios, such as in the localisation of artillery firing events. It is concluded that the TALA provides a powerful situational awareness aid for passive surveillance operations.
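To make the measurement models concrete, the sketch below solves the much simpler single-emitter case from TOA data by nonlinear least squares; it is not the TALA's batch multi-target estimator, and the sensor layout, propagation speed and noise-free measurements are assumed values.

    import numpy as np
    from scipy.optimize import least_squares

    def locate_toa(sensor_xy, toas, c=343.0):
        """Estimate (x, y, emission time) of one source from times of arrival."""
        def residual(p):
            x, y, t0 = p
            return t0 + np.linalg.norm(sensor_xy - [x, y], axis=1) / c - toas
        x0 = np.append(sensor_xy.mean(axis=0), toas.min())
        return least_squares(residual, x0).x

    sensors = np.array([[0.0, 0.0], [500.0, 0.0], [0.0, 500.0], [500.0, 500.0]])
    true_xy, t0 = np.array([120.0, 340.0]), 0.2
    toas = t0 + np.linalg.norm(sensors - true_xy, axis=1) / 343.0
    print(locate_toa(sensors, toas))   # approximately [120, 340, 0.2]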
Analysis of lightning outliers in the EUCLID network
NASA Astrophysics Data System (ADS)
Poelman, Dieter R.; Schulz, Wolfgang; Kaltenboeck, Rudolf; Delobbe, Laurent
2017-11-01
Lightning data as observed by the European Cooperation for Lightning Detection (EUCLID) network are used in combination with radar data to retrieve the temporal and spatial behavior of lightning outliers, i.e., discharges located in the wrong place, over a 5-year period from 2011 to 2016. Cloud-to-ground (CG) stroke and intracloud (IC) pulse data are superimposed on corresponding 5 min radar precipitation fields in two topographically different areas, Belgium and Austria, in order to extract lightning outliers based on the distance between each lightning event and the nearest precipitation. It is shown that the percentage of outliers is sensitive to changes in the network and to the location algorithm itself. The total percentage of outliers for both regions varies over the years between 0.8 and 1.7 % for a distance to the nearest precipitation of 2 km, with an average of approximately 1.2 % in Belgium and Austria. Outside the European summer thunderstorm season, the percentage of outliers tends to increase somewhat. The majority of the outliers are low peak-current events with absolute values falling between 0 and 10 kA. More specifically, positive cloud-to-ground strokes are more likely to be classified as outliers than all other types of discharges. Furthermore, the number of sensors participating in locating a lightning discharge differs between outliers and correctly located events, with outliers being located by the smallest number of sensors. In addition, it is shown that in most cases the semi-major axis (SMA) assigned to a lightning discharge as a confidence indicator of the location accuracy (LA) is smaller for correctly located events than for outliers.
Global characterization of copy number variants in epilepsy patients from whole genome sequencing
Meloche, Caroline; Andrade, Danielle M.; Lafreniere, Ron G.; Gravel, Micheline; Spiegelman, Dan; Dionne-Laporte, Alexandre; Boelman, Cyrus; Hamdan, Fadi F.; Michaud, Jacques L.; Rouleau, Guy; Minassian, Berge A.; Bourque, Guillaume; Cossette, Patrick
2018-01-01
Epilepsy will affect nearly 3% of people at some point during their lifetime. Previous copy number variant (CNV) studies of epilepsy have used array-based technology and were restricted to the detection of large or exonic events. In contrast, whole-genome sequencing (WGS) has the potential to more comprehensively profile CNVs, but existing analytic methods suffer from limited accuracy. We show that this is in part due to the non-uniformity of read coverage, even after intra-sample normalization. To improve on this, we developed PopSV, an algorithm that uses multiple samples to control for technical variation and enables the robust detection of CNVs. Using WGS and PopSV, we performed a comprehensive characterization of CNVs in 198 individuals affected with epilepsy and 301 controls. For both large and small variants, we found an enrichment of rare exonic events in epilepsy patients, especially in genes with predicted loss-of-function intolerance. Notably, this genome-wide survey also revealed an enrichment of rare non-coding CNVs near previously known epilepsy genes. This enrichment was strongest for non-coding CNVs located within 100 kbp of an epilepsy gene and in regions associated with changes in gene expression, such as expression QTLs or DNase I hypersensitive sites. Finally, we report on 21 potentially damaging events that could be associated with known or new candidate epilepsy genes. Our results suggest that comprehensive sequence-based profiling of CNVs could help explain a larger fraction of epilepsy cases. PMID:29649218
Moving multiple sinks through wireless sensor networks for lifetime maximization.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Petrioli, Chiara; Carosi, Alessio; Basagni, Stefano
2008-01-01
Unattended sensor networks typically watch for some phenomena such as volcanic events, forest fires, pollution, or movements in animal populations. Sensors report to a collection point periodically or when they observe reportable events. When sensors are too far from the collection point to communicate directly, other sensors relay messages for them. If the collection point location is static, sensor nodes that are closer to the collection point relay far more messages than those on the periphery. Assuming all sensor nodes have roughly the same capabilities, those with high relay burden experience battery failure much faster than the rest of the network. However, since their death disconnects the live nodes from the collection point, the whole network is then dead. We consider the problem of moving a set of collectors (sinks) through a wireless sensor network to balance the energy used for relaying messages, maximizing the lifetime of the network. We show how to compute an upper bound on the lifetime for any instance using linear and integer programming. We present a centralized heuristic that produces sink movement schedules with network lifetimes within 1.4% of the upper bound for realistic settings. We also present a distributed heuristic that produces lifetimes at most 25.3% below the upper bound. More specifically, we formulate a linear program (LP) that is a relaxation of the scheduling problem. The variables are naturally continuous, but the LP relaxes some constraints. The LP has an exponential number of constraints, but we can satisfy them all by enforcing only a polynomial number using a separation algorithm. This separation algorithm is a p-median facility location problem, which we can solve efficiently in practice for huge instances using integer programming technology. This LP selects a set of good sensor configurations. Given the solution to the LP, we can find a feasible schedule by selecting a subset of these configurations, ordering them via a traveling salesman heuristic, and computing feasible transitions using matching algorithms. This algorithm assumes sinks can get a schedule from a central server or a leader sink. If the network owner prefers that the sinks make independent decisions, they can use our distributed heuristic. In this heuristic, sinks maintain estimates of the energy distribution in the network and move greedily (with some coordination) based on local search. This application uses the new SUCASA (Solver Utility for Customization with Automatic Symbol Access) facility within the PICO (Parallel Integer and Combinatorial Optimizer) integer programming solver system. SUCASA allows rapid development of customized math programming (search-based) solvers using a problem's natural multidimensional representation. In this case, SUCASA also significantly improves runtime compared to implementations in the AMPL math programming language or in Perl.
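The separation step above is a p-median facility-location problem. The sketch below solves a tiny p-median instance with PuLP as a stand-in solver; PuLP, the random distances, and the choice of p are assumptions for illustration and do not reproduce the PICO/SUCASA machinery used in the paper.

```python
# Minimal p-median sketch with PuLP (an assumed stand-in for the paper's
# PICO/SUCASA solver): choose p sink sites minimizing total assignment distance.
import numpy as np
import pulp

rng = np.random.default_rng(2)
n_sensors, n_sites, p = 30, 10, 3
sensors = rng.uniform(0, 100, size=(n_sensors, 2))
sites = rng.uniform(0, 100, size=(n_sites, 2))
d = np.linalg.norm(sensors[:, None, :] - sites[None, :, :], axis=2)

prob = pulp.LpProblem("p_median", pulp.LpMinimize)
y = pulp.LpVariable.dicts("open", range(n_sites), cat="Binary")
x = pulp.LpVariable.dicts(
    "assign", [(i, j) for i in range(n_sensors) for j in range(n_sites)], cat="Binary")

prob += pulp.lpSum(d[i, j] * x[(i, j)] for i in range(n_sensors) for j in range(n_sites))
for i in range(n_sensors):
    prob += pulp.lpSum(x[(i, j)] for j in range(n_sites)) == 1   # each sensor assigned once
    for j in range(n_sites):
        prob += x[(i, j)] <= y[j]                                # only to open sites
prob += pulp.lpSum(y[j] for j in range(n_sites)) == p            # exactly p sinks

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print("open sites:", [j for j in range(n_sites) if y[j].value() == 1])
```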
Study of multi-muon bundles in cosmic ray showers detected with the DELPHI detector at LEP
NASA Astrophysics Data System (ADS)
Delphi Collaboration; Abreu, P.; Adam, W.; Adzic, P.; Albrecht, T.; Alemany-Fernandez, R.; Allmendinger, T.; Allport, P. P.; Amaldi, U.; Amapane, N.; Amato, S.; Anashkin, E.; Andreazza, A.; Andringa, S.; Anjos, N.; Antilogus, P.; Apel, W.-D.; Arnoud, Y.; Ask, S.; Asman, B.; Augustinus, A.; Baillon, P.; Ballestrero, A.; Bambade, P.; Barbier, R.; Bardin, D.; Barker, G. J.; Baroncelli, A.; Battaglia, M.; Baubillier, M.; Becks, K.-H.; Begalli, M.; Behrmann, A.; Ben-Haim, E.; Benekos, N.; Benvenuti, A.; Berat, C.; Berggren, M.; Bertrand, D.; Besancon, M.; Besson, N.; Bloch, D.; Blom, M.; Bluj, M.; Bonesini, M.; Boonekamp, M.; Booth, P. S. L.; Borisov, G.; Botner, O.; Bouquet, B.; Bowcock, T. J. V.; Boyko, I.; Bracko, M.; Brenner, R.; Brodet, E.; Bruckman, P.; Brunet, J. M.; Buschbeck, B.; Buschmann, P.; Calvi, M.; Camporesi, T.; Canale, V.; Carena, F.; Castro, N.; Cavallo, F.; Chapkin, M.; Charpentier, Ph.; Checchia, P.; Chierici, R.; Chliapnikov, P.; Chudoba, J.; Chung, S. U.; Cieslik, K.; Collins, P.; Contri, R.; Cosme, G.; Cossutti, F.; Costa, M. J.; Crennell, D.; Cuevas, J.; D'Hondt, J.; da Silva, T.; da Silva, W.; Della Ricca, G.; de Angelis, A.; de Boer, W.; de Clercq, C.; de Lotto, B.; de Maria, N.; de Min, A.; de Paula, L.; di Ciaccio, L.; di Simone, A.; Doroba, K.; Drees, J.; Eigen, G.; Ekelof, T.; Ellert, M.; Elsing, M.; Espirito Santo, M. C.; Fanourakis, G.; Fassouliotis, D.; Feindt, M.; Fernandez, J.; Ferrer, A.; Ferro, F.; Flagmeyer, U.; Foeth, H.; Fokitis, E.; Fulda-Quenzer, F.; Fuster, J.; Gandelman, M.; Garcia, C.; Gavillet, Ph.; Gazis, E.; Gokieli, R.; Golob, B.; Gomez-Ceballos, G.; Goncalves, P.; Graziani, E.; Grosdidier, G.; Grzelak, K.; Guy, J.; Haag, C.; Hallgren, A.; Hamacher, K.; Hamilton, K.; Haug, S.; Hauler, F.; Hedberg, V.; Hennecke, M.; Herr, H.; Hoffman, J.; Holmgren, S.-O.; Holt, P. J.; Houlden, M. A.; Jackson, J. N.; Jarlskog, G.; Jarry, P.; Jeans, D.; Johansson, E. K.; Jonsson, P.; Joram, C.; Jungermann, L.; Kapusta, F.; Katsanevas, S.; Katsoufis, E.; Kernel, G.; Kersevan, B. P.; Kerzel, U.; King, B. T.; Kjaer, N. J.; Kluit, P.; Kokkinias, P.; Kourkoumelis, C.; Kouznetsov, O.; Krumstein, Z.; Kucharczyk, M.; Lamsa, J.; Leder, G.; Ledroit, F.; Leinonen, L.; Leitner, R.; Lemonne, J.; Lepeltier, V.; Lesiak, T.; Liebig, W.; Liko, D.; Lipniacka, A.; Lopes, J. H.; Lopez, J. M.; Loukas, D.; Lutz, P.; Lyons, L.; MacNaughton, J.; Malek, A.; Maltezos, S.; Mandl, F.; Marco, J.; Marco, R.; Marechal, B.; Margoni, M.; Marin, J.-C.; Mariotti, C.; Markou, A.; Martinez-Rivero, C.; Masik, J.; Mastroyiannopoulos, N.; Matorras, F.; Matteuzzi, C.; Mazzucato, F.; Mazzucato, M.; McNulty, R.; Meroni, C.; Migliore, E.; Mitaroff, W.; Mjoernmark, U.; Moa, T.; Moch, M.; Moenig, K.; Monge, R.; Montenegro, J.; Moraes, D.; Moreno, S.; Morettini, P.; Mueller, U.; Muenich, K.; Mulders, M.; Mundim, L.; Murray, W.; Muryn, B.; Myatt, G.; Myklebust, T.; Nassiakou, M.; Navarria, F.; Nawrocki, K.; Nicolaidou, R.; Nikolenko, M.; Oblakowska-Mucha, A.; Obraztsov, V.; Olshevski, A.; Onofre, A.; Orava, R.; Osterberg, K.; Ouraou, A.; Oyanguren, A.; Paganoni, M.; Paiano, S.; Palacios, J. P.; Palka, H.; Papadopoulou, Th. D.; Pape, L.; Parkes, C.; Parodi, F.; Parzefall, U.; Passeri, A.; Passon, O.; Peralta, L.; Perepelitsa, V.; Perrotta, A.; Petrolini, A.; Piedra, J.; Pieri, L.; Pierre, F.; Pimenta, M.; Piotto, E.; Podobnik, T.; Poireau, V.; Pol, M. 
E.; Polok, G.; Pozdniakov, V.; Pukhaeva, N.; Pullia, A.; Rames, J.; Read, A.; Rebecchi, P.; Rehn, J.; Reid, D.; Reinhardt, R.; Renton, P.; Richard, F.; Ridky, J.; Rivero, M.; Rodriguez, D.; Romero, A.; Ronchese, P.; Roudeau, P.; Rovelli, T.; Ruhlmann-Kleider, V.; Ryabtchikov, D.; Sadovsky, A.; Salmi, L.; Salt, J.; Sander, C.; Savoy-Navarro, A.; Schwickerath, U.; Sekulin, R.; Shellard, R. C.; Siebel, M.; Sisakian, A.; Smadja, G.; Smirnova, O.; Sokolov, A.; Sopczak, A.; Sosnowski, R.; Spassov, T.; Stanitzki, M.; Stocchi, A.; Strauss, J.; Stugu, B.; Szczekowski, M.; Szeptycka, M.; Szumlak, T.; Tabarelli, T.; Taffard, A. C.; Tegenfeldt, F.; Timmermans, J.; Tkatchev, L.; Tobin, M.; Todorovova, S.; Tome, B.; Tonazzo, A.; Tortosa, P.; Travnicek, P.; Treille, D.; Tristram, G.; Trochimczuk, M.; Troncon, C.; Turluer, M.-L.; Tyapkin, I. A.; Tyapkin, P.; Tzamarias, S.; Uvarov, V.; Valenti, G.; van Dam, P.; van Eldik, J.; van Remortel, N.; van Vulpen, I.; Vegni, G.; Veloso, F.; Venus, W.; Verdier, P.; Verzi, V.; Vilanova, D.; Vitale, L.; Vrba, V.; Wahlen, H.; Washbrook, A. J.; Weiser, C.; Wicke, D.; Wickens, J.; Wilkinson, G.; Winter, M.; Witek, M.; Yushchenko, O.; Zalewska, A.; Zalewski, P.; Zavrtanik, D.; Zhuravlov, V.; Zimin, N. I.; Zintchenko, A.; Zupan, M.
2007-11-01
The DELPHI detector at LEP has been used to measure multi-muon bundles originating from cosmic ray interactions with air. The cosmic events were recorded in “parasitic mode” between individual e+e- interactions, and the total live time of this data taking is equivalent to 1.6 × 10⁶ s. The DELPHI apparatus is located about 100 m underground, and the 84 m rock overburden imposes a cutoff of about 52 GeV/c on muon momenta. The data from the large-volume Hadron Calorimeter allowed the muon multiplicity of 54,201 events to be reconstructed. The resulting muon multiplicity distribution is compared with the prediction of a Monte Carlo simulation based on CORSIKA/QGSJET01. The model fails to describe the abundance of high-multiplicity events. The impact of QGSJET internal parameters on the results is also studied.
NASA Astrophysics Data System (ADS)
Valoroso, Luisa; Chiaraluce, Lauro; Di Stefano, Raffaele; Latorre, Diana; Piccinini, Davide
2014-05-01
The characterization of the geometry, kinematics and rheology of fault zones by seismological data depends on our capability to accurately locate the largest number of low-magnitude seismic events. To this aim, we have been working for the past three years to develop an advanced modular earthquake location procedure able to automatically retrieve high-resolution earthquake catalogues directly from continuous waveform data. We use seismograms recorded at about 60 seismic stations located both at the surface and at depth. The network covers an area of about 80 × 60 km with a mean inter-station distance of 6 km. These stations are part of a Near Fault Observatory (TABOO; http://taboo.rm.ingv.it/), consisting of multi-sensor stations (seismic, geodetic, geochemical and electromagnetic). This permanent scientific infrastructure managed by the INGV is devoted to studying the earthquake preparatory phase and the fast/slow (i.e., seismic/aseismic) deformation processes active along the Alto Tiberina fault (ATF) located in the northern Apennines (Italy). The ATF is potentially one of the rare worldwide examples of an active low-angle (< 15°) normal fault accommodating crustal extension and characterized by a regular occurrence of micro-earthquakes. The modular procedure combines: i) a sensitive detection algorithm optimized to declare low-magnitude events; ii) an accurate picking procedure that provides consistently weighted P- and S-wave arrival times, P-wave first-motion polarities and the maximum waveform amplitude for local magnitude calculation; iii) both linearized iterative and non-linear global-search earthquake location algorithms to compute accurate absolute locations of single events in a 3D geological model (see Latorre et al., same session); iv) cross-correlation and double-difference location methods to compute high-resolution relative event locations. This procedure is now running off-line with a delay of one week relative to real time. We are now implementing this procedure to obtain high-resolution double-difference earthquake locations in real time (DDRT). We show locations of ~30k low-magnitude earthquakes recorded during the past 4 years (2010-2013) of network operation, reaching a catalogue completeness magnitude of 0.2. The spatiotemporal seismicity distribution has a high, almost constant rate of r ≈ 2.4 × 10⁻³ earthquakes/(day·km²), interrupted by low-to-moderate magnitude seismic sequences such as the 2010 Pietralunga sequence (ML 3.8) and the still ongoing 2013 Gubbio sequence (ML 4.0 on 22 December 2013). Low-magnitude seismicity images the fine-scale geometry of the ATF: an E-dipping plane at low angle (15°) extending from 4 km down to ~15 km depth. In the ATF hanging wall, we observe the activation of minor high-angle synthetic and antithetic normal faults (4-5 km long) confined at depth by the detachment. Up to now, both seismic sequences have activated only these high-angle fault segments.
Eigensolution of finite element problems in a completely connected parallel architecture
NASA Technical Reports Server (NTRS)
Akl, Fred A.; Morel, Michael R.
1989-01-01
A parallel algorithm for the solution of the generalized eigenproblem in linear elastic finite element analysis, KΦ = MΦΩ, where K and M are of order N and Ω is of order q, is presented. The parallel algorithm is based on a completely connected parallel architecture in which each processor is allowed to communicate with all other processors. The algorithm has been successfully implemented on a tightly coupled multiple-instruction-multiple-data (MIMD) parallel processing computer, Cray X-MP. A finite element model is divided into m domains each of which is assumed to process n elements. Each domain is then assigned to a processor, or to a logical processor (task) if the number of domains exceeds the number of physical processors. The macro-tasking library routines are used in mapping each domain to a user task. Computational speed-up and efficiency are used to determine the effectiveness of the algorithm. The effect of the number of domains, the number of degrees-of-freedom located along the global fronts and the dimension of the subspace on the performance of the algorithm are investigated. For a 64-element rectangular plate, speed-ups of 1.86, 3.13, 3.18 and 3.61 are achieved on two, four, six and eight processors, respectively.
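For reference, the generalized eigenproblem KΦ = MΦΩ can be solved serially in a few lines; the sketch below uses SciPy with small random symmetric-positive-definite stand-ins for K and M, and is not the paper's parallel subspace scheme.

```python
# Minimal serial sketch of the generalized eigenproblem K·Φ = M·Φ·Ω (stand-in
# matrices; the Cray X-MP domain-decomposed algorithm is not reproduced here).
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(3)
N, q = 50, 4                                 # model size and subspace dimension
A = rng.normal(size=(N, N))
K = A @ A.T + N * np.eye(N)                  # stiffness-like SPD matrix
B = rng.normal(size=(N, N))
M = B @ B.T + N * np.eye(N)                  # mass-like SPD matrix

omega, phi = eigh(K, M)                      # eigenvalues ascending, M-orthonormal vectors
print("lowest q eigenvalues:", omega[:q])
print("M-orthonormality check:",
      np.allclose(phi[:, :q].T @ M @ phi[:, :q], np.eye(q)))
```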
NASA Astrophysics Data System (ADS)
Nissen, Katrin; Ulbrich, Uwe
2016-04-01
An event-based detection algorithm for extreme precipitation is applied to a multi-model ensemble of regional climate model simulations. The algorithm determines extent, location, duration and severity of extreme precipitation events. We assume that precipitation in excess of the local present-day 10-year return value will potentially exceed the capacity of the drainage systems that protect critical infrastructure elements. This assumption is based on legislation for the design of drainage systems which is in place in many European countries. Thus, events exceeding the local 10-year return value are detected. In this study we distinguish between sub-daily events (3 hourly) with high precipitation intensities and long-duration events (1-3 days) with high precipitation amounts. The climate change simulations investigated here were conducted within the EURO-CORDEX framework and exhibit a horizontal resolution of approximately 12.5 km. The period 1971-2100, forced with observed and scenario (RCP 4.5 and RCP 8.5) greenhouse gas concentrations, was analysed. Changes in event frequency, event duration and size are examined. The simulations show an increase in the number of extreme precipitation events for the future climate period over most of the area, which is strongest in Northern Europe. Strength and statistical significance of the signal increase with increasing greenhouse gas concentrations. This work has been conducted within the EU project RAIN (Risk Analysis of Infrastructure Networks in response to extreme weather).
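The event threshold in this kind of analysis is the local 10-year return value. One common way to estimate such a return level, sketched below with synthetic annual maxima, is to fit a GEV distribution and evaluate its 1 − 1/10 quantile; the numbers and the GEV choice are illustrative assumptions, not the study's exact procedure.

```python
# Minimal 10-year return-level sketch: fit a GEV to annual precipitation maxima
# (synthetic here) and take the 90th percentile as the local threshold.
import numpy as np
from scipy.stats import genextreme

annual_max = genextreme.rvs(c=-0.1, loc=30.0, scale=8.0, size=40,
                            random_state=4)          # synthetic mm-per-3h maxima

c, loc, scale = genextreme.fit(annual_max)
return_10yr = genextreme.ppf(1.0 - 1.0 / 10.0, c, loc=loc, scale=scale)
print(f"estimated 10-year return value: {return_10yr:.1f} mm per 3 h")

# An "event" would then be declared wherever 3-hourly precipitation exceeds this
# local threshold; extent and duration follow from grouping exceedances in space-time.
```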
NASA Astrophysics Data System (ADS)
Yin, Gang; Zhang, Yingtang; Fan, Hongbo; Ren, Guoquan; Li, Zhining
2017-12-01
We have developed a method for automatically detecting UXO-like targets based on magnetic anomaly inversion and self-adaptive fuzzy c-means clustering. Magnetic anomaly inversion methods are used to estimate the initial locations of multiple UXO-like sources. Although these initial locations have some errors with respect to the real positions, they form dense clouds around the actual positions of the magnetic sources. We then use the self-adaptive fuzzy c-means clustering algorithm to cluster these initial locations. The estimated number of cluster centroids represents the number of targets, and the cluster centroids are regarded as the locations of the magnetic targets. The effectiveness of the method has been demonstrated using synthetic datasets. Computational results show that the proposed method can be applied to the case of several UXO-like targets that are randomly scattered within a confined, shallow subsurface volume. A field test was carried out to assess the validity of the proposed method, and the experimental results show that the prearranged magnets can be detected unambiguously and located precisely.
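The clustering stage can be illustrated with a plain fuzzy c-means implementation. The sketch below is a minimal NumPy version under stated assumptions: the number of clusters c and fuzzifier m are fixed, and the "initial location" clouds are synthetic; the paper's self-adaptive variant, which also selects c, is not reproduced.

```python
# Minimal fuzzy c-means sketch in NumPy applied to synthetic clouds of
# initial source-location estimates around two buried targets.
import numpy as np

def fuzzy_cmeans(X, c, m=2.0, n_iter=100, seed=0):
    rng = np.random.default_rng(seed)
    U = rng.random((c, len(X)))
    U /= U.sum(axis=0)                                   # memberships sum to 1 per point
    for _ in range(n_iter):
        Um = U ** m
        centers = (Um @ X) / Um.sum(axis=1, keepdims=True)
        dist = np.linalg.norm(X[None, :, :] - centers[:, None, :], axis=2) + 1e-12
        U = 1.0 / (dist ** (2.0 / (m - 1.0)))
        U /= U.sum(axis=0)                               # re-normalise memberships
    return centers, U

rng = np.random.default_rng(5)
cloud1 = rng.normal([10.0, 20.0, -1.5], 0.5, size=(40, 3))   # noisy estimates, target 1
cloud2 = rng.normal([35.0, 12.0, -2.0], 0.5, size=(40, 3))   # noisy estimates, target 2
centers, U = fuzzy_cmeans(np.vstack([cloud1, cloud2]), c=2)
print("estimated target locations:\n", centers)
```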
NASA Astrophysics Data System (ADS)
Neagoe, Cristian; Grecu, Bogdan; Manea, Liviu
2016-04-01
National Institute for Earth Physics (NIEP) operates a real-time seismic network designed to monitor the seismic activity on Romanian territory, which is dominated by intermediate-depth earthquakes (60-200 km) from the Vrancea area. The ability to reduce the impact of earthquakes on society depends on the existence of a large number of high-quality observational data. The development of the network in recent years and an advanced seismic acquisition system are crucial to achieving this objective. The software package used to perform the automatic real-time locations is SeisComP3. An accurate choice of the SeisComP3 setting parameters is necessary to ensure the best performance of the real-time system, i.e., the most accurate locations for the earthquakes while avoiding false events. The aim of this study is to optimize the algorithms of the real-time system that detect and locate the earthquakes in the monitored area. This goal is pursued by testing different parameters (e.g., STA/LTA, filters applied to the waveforms) on a data set of representative earthquakes of the local seismicity. The results are compared with the locations from the Romanian catalogue ROMPLUS.
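The kind of parameter testing described above can be prototyped offline before touching the real-time configuration. The sketch below runs ObsPy's classic STA/LTA trigger on ObsPy's bundled example waveform; the filter band, window lengths, and on/off thresholds are illustrative values to be tuned against the reference catalogue, not the network's operational settings.

```python
# Minimal STA/LTA tuning sketch with ObsPy (example waveform; illustrative parameters).
import obspy
from obspy.signal.trigger import classic_sta_lta, trigger_onset

st = obspy.read()                        # ObsPy's bundled example stream
tr = st[0].copy()
tr.filter("bandpass", freqmin=2.0, freqmax=10.0)

df = tr.stats.sampling_rate
cft = classic_sta_lta(tr.data, int(1.0 * df), int(10.0 * df))   # 1 s STA, 10 s LTA
triggers = trigger_onset(cft, 3.5, 0.5)                          # on/off thresholds

for on, off in triggers:
    print("trigger on at", tr.stats.starttime + on / df,
          "off at", tr.stats.starttime + off / df)
```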
Toward Optimal Target Placement for Neural Prosthetic Devices
Cunningham, John P.; Yu, Byron M.; Gilja, Vikash; Ryu, Stephen I.; Shenoy, Krishna V.
2008-01-01
Neural prosthetic systems have been designed to estimate continuous reach trajectories (motor prostheses) and to predict discrete reach targets (communication prostheses). In the latter case, reach targets are typically decoded from neural spiking activity during an instructed delay period before the reach begins. Such systems use targets placed in radially symmetric geometries independent of the tuning properties of the neurons available. Here we seek to automate the target placement process and increase decode accuracy in communication prostheses by selecting target locations based on the neural population at hand. Motor prostheses that incorporate intended target information could also benefit from this consideration. We present an optimal target placement algorithm that approximately maximizes decode accuracy with respect to target locations. In simulated neural spiking data fit from two monkeys, the optimal target placement algorithm yielded statistically significant improvements up to 8 and 9% for two and sixteen targets, respectively. For four and eight targets, gains were more modest, as the target layouts found by the algorithm closely resembled the canonical layouts. We trained a monkey in this paradigm and tested the algorithm with experimental neural data to confirm some of the results found in simulation. In all, the algorithm can serve not only to create new target layouts that outperform canonical layouts, but it can also confirm or help select among multiple canonical layouts. The optimal target placement algorithm developed here is the first algorithm of its kind, and it should both improve decode accuracy and help automate target placement for neural prostheses. PMID:18829845
Discerning Trends in Performance Across Multiple Events
NASA Technical Reports Server (NTRS)
Slater, Simon; Hiltz, Mike; Rice, Craig
2006-01-01
Mass Data is a computer program that enables rapid, easy discernment of trends in performance data across multiple flights and ground tests. The program can perform Fourier analysis and other functions for the purposes of frequency analysis and trending of all variables. These functions facilitate identification of past use of diagnosed systems and of anomalies in such systems, and enable rapid assessment of related current problems. Many variables, for computation of which it is usually necessary to perform extensive manual manipulation of raw downlist data, are automatically computed and made available to all users, regularly eliminating the need for what would otherwise be an extensive amount of engineering analysis. Data from flight, ground test, and simulation are preprocessed and stored in one central location for instantaneous access and comparison for diagnostic and trending purposes. Rules are defined so that an event log is generated for every flight, making it easy to locate information on similar maneuvers across many flights. The same rules can be created for test sets and simulations, and are searchable, so that information on like events is easily accessible.
Ding, Fangyu; Ge, Quansheng; Fu, Jingying; Hao, Mengmeng
2017-01-01
Terror events can have profound consequences for society as a whole. Identifying regularities in terrorist attacks is important for global counter-terrorism strategy. In the present study, we demonstrate a novel method using relatively popular and robust machine learning methods to simulate the risk of terrorist attacks at a global scale based on multiple resources, long time series and globally distributed datasets. Historical data from 1970 to 2015 were used to train and evaluate the machine learning models. The model performed fairly well in predicting the places where terror events might occur in 2015, with a success rate of 96.6%. Moreover, it is noteworthy that the model with optimized tuning parameter values successfully predicted 2,037 terrorism event locations where a terrorist attack had never happened before. PMID:28591138
CoffeeShop Astrophysics: An Adventure in Public Outreach
NASA Astrophysics Data System (ADS)
Chamberlin, Sydney; Decesar, Megan; Caudill, Sarah; Sadeghian, Laleh; Nuttall, Laura; Urban, Alex; McGrath, Casey
2016-03-01
Engaging non-scientists in scientific discussions is inarguably important, both for researchers and society. Public lectures have long been utilized as a method for performing such outreach, but due to their format and location often reach a limited audience. More recently, events such as science cafés (events pairing a scientist with the public in a casual venue) have emerged as a potential tool for connecting with general audiences. The success of these events depends on multiple variables. In this talk, we describe an example of such an event entitled CoffeeShop Astrophysics, that uses multiple speakers, demonstrations and humor to successfully engage members of the public. We discuss the key elements that make CoffeeShop Astrophysics effective, and the viability of grassroots, coffeeshop-style outreach. The authors gratefully acknowledge support from the American Physical Society for this work.
Sasaki, Satoshi; Comber, Alexis J; Suzuki, Hiroshi; Brunsdon, Chris
2010-01-28
Ambulance response time is a crucial factor in patient survival. The number of emergency cases (EMS cases) requiring an ambulance is increasing due to changes in population demographics. This is lengthening ambulance response times to the emergency scene. This paper predicts EMS cases for 5-year intervals from 2020 to 2050 by correlating current EMS cases with demographic factors at the level of the census area and predicted population changes. It then applies a modified grouping genetic algorithm to compare current and future optimal locations and numbers of ambulances. Sets of potential locations were evaluated in terms of the (current and predicted) EMS case distances to those locations. Future EMS demand was predicted to increase by 2030 using the model (R² = 0.71). The optimal locations of ambulances based on future EMS cases were compared with current locations and with optimal locations modelled on current EMS case data. Optimising the locations of ambulance stations reduced the average response times by 57 seconds. Current and predicted future EMS demand at modelled locations were calculated and compared. The reallocation of ambulances to optimal locations improved response times and could contribute to higher survival rates from life-threatening medical events. Modelling EMS case 'demand' over census areas allows the data to be correlated to population characteristics and optimal 'supply' locations to be identified. Comparing current and future optimal scenarios allows more nuanced planning decisions to be made. This is a generic methodology that could be used to provide evidence in support of public health planning and decision making.
Exact simulation of max-stable processes.
Dombry, Clément; Engelke, Sebastian; Oesting, Marco
2016-06-01
Max-stable processes play an important role as models for spatial extreme events. Their complex structure as the pointwise maximum over an infinite number of random functions makes their simulation difficult. Algorithms based on finite approximations are often inexact and computationally inefficient. We present a new algorithm for exact simulation of a max-stable process at a finite number of locations. It relies on the idea of simulating only the extremal functions, that is, those functions in the construction of a max-stable process that effectively contribute to the pointwise maximum. We further generalize the algorithm by Dieker & Mikosch (2015) for Brown-Resnick processes and use it for exact simulation via the spectral measure. We study the complexity of both algorithms, prove that our new approach via extremal functions is always more efficient, and provide closed-form expressions for their implementation that cover most popular models for max-stable processes and multivariate extreme value distributions. For simulation on dense grids, an adaptive design of the extremal function algorithm is proposed.
Adapting an Ant Colony Metaphor for Multi-Robot Chemical Plume Tracing
Meng, Qing-Hao; Yang, Wei-Xing; Wang, Yang; Li, Fei; Zeng, Ming
2012-01-01
We consider chemical plume tracing (CPT) in time-varying airflow environments using multiple mobile robots. The purpose of CPT is to approach a gas source with a previously unknown location in a given area. Therefore, the CPT could be considered as a dynamic optimization problem in continuous domains. The traditional ant colony optimization (ACO) algorithm has been successfully used for combinatorial optimization problems in discrete domains. To adapt the ant colony metaphor to the multi-robot CPT problem, the two-dimensional continuous search area is discretized into grids and the virtual pheromone is updated according to both the gas concentration and wind information. To prevent the adapted ACO algorithm from being prematurely trapped in a local optimum, the upwind surge behavior is adopted by the robots with relatively higher gas concentration in order to explore more areas. The spiral surge (SS) algorithm is also examined for comparison. Experimental results using multiple real robots in two indoor naturally ventilated airflow environments show that the proposed CPT method performs better than the SS algorithm. The simulation results for large-scale advection-diffusion plume environments show that the proposed method could also work in outdoor meandering plume environments. PMID:22666056
Detection of Coronal Mass Ejections Using Multiple Features and Space-Time Continuity
NASA Astrophysics Data System (ADS)
Zhang, Ling; Yin, Jian-qin; Lin, Jia-ben; Feng, Zhi-quan; Zhou, Jin
2017-07-01
Coronal Mass Ejections (CMEs) release tremendous amounts of energy in the solar system, which can impact satellites, power facilities and wireless transmission. To effectively detect a CME in Large Angle Spectrometric Coronagraph (LASCO) C2 images, we propose a novel algorithm to locate the suspected CME regions, using the Extreme Learning Machine (ELM) method and taking into account grayscale and texture features. Furthermore, space-time continuity is used in the detection algorithm to exclude false CME regions. The algorithm includes three steps: i) define the feature vector, which contains textural and grayscale features of a running difference image; ii) design the detection algorithm based on the ELM method according to the feature vector; iii) improve the detection accuracy rate by using a decision rule based on space-time continuity. Experimental results show the efficiency and superiority of the proposed algorithm in the detection of CMEs compared with other traditional methods. In addition, our algorithm is insensitive to most noise.
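An extreme learning machine is simply a single hidden layer with fixed random weights and output weights fit by least squares. The sketch below shows that core idea on synthetic feature vectors; the data, network size, and 0.5 decision threshold are assumptions standing in for the grayscale/texture features and tuning used in the paper.

```python
# Minimal extreme learning machine (ELM) sketch: random hidden layer,
# output weights by least squares, on synthetic two-class feature data.
import numpy as np

rng = np.random.default_rng(6)
n, d, h = 500, 12, 100                           # samples, feature dim, hidden nodes
X = rng.normal(size=(n, d))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(float)  # synthetic "CME region" labels

W = rng.normal(size=(d, h))                      # random input weights (never trained)
b = rng.normal(size=h)
H = np.tanh(X @ W + b)                           # hidden-layer activations
beta, *_ = np.linalg.lstsq(H, y, rcond=None)     # output weights via least squares

pred = (np.tanh(X @ W + b) @ beta > 0.5).astype(float)
print("training accuracy:", (pred == y).mean())
```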
Enhanced Automated Guidance System for Horizontal Auger Boring Based on Image Processing.
Wu, Lingling; Wen, Guojun; Wang, Yudan; Huang, Lei; Zhou, Jiang
2018-02-15
Horizontal auger boring (HAB) is a widely used trenchless technology for the high-accuracy installation of gravity or pressure pipelines on line and grade. Differing from other pipeline installations, HAB requires a more precise and automated guidance system for use in a practical project. This paper proposes an economical, enhanced automated optical guidance system, based on optimization of a light-emitting diode (LED) light target and five automated image-processing bore-path deviation algorithms. The LED light target was optimized for many qualities, including light color, filter plate color, luminous intensity, and LED layout. The image preprocessing algorithm, direction location algorithm, angle measurement algorithm, deflection detection algorithm, and auto-focus algorithm, compiled in MATLAB, are used to automate image processing for deflection computation and judgment. After multiple indoor experiments, this guidance system was applied in a hot-water pipeline installation project, with accuracy controlled to within 2 mm over a 48 m distance, providing accurate line and grade control and verifying the feasibility and reliability of the guidance system.
NASA Astrophysics Data System (ADS)
Dey, Sudip; Karmakar, Amit
2014-02-01
This paper presents the time dependent response of multiple delaminated angle-ply composite pretwisted conical shells subjected to low velocity normal impact. The finite element formulation is based on Mindlin's theory incorporating rotary inertia and effects of transverse shear deformation. An eight-noded isoparametric plate bending element is employed to satisfy the compatibility of deformation and equilibrium of resultant forces and moments at the delamination crack front. A multipoint constraint algorithm is incorporated which leads to asymmetric stiffness matrices. The modified Hertzian contact law which accounts for permanent indentation is utilized to compute the contact force, and the time dependent equations are solved by Newmark's time integration algorithm. Parametric studies are conducted with respect to triggering parameters like laminate configuration, location of delamination, angle of twist, velocity of impactor, and impactor's displacement for centrally impacted shells.
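The time integration step mentioned above uses Newmark's algorithm. The sketch below is a minimal average-acceleration Newmark integrator for a linear single-degree-of-freedom system driven by a short contact-like pulse; the properties, the half-sine force, and the single-DOF reduction are illustrative assumptions, not the paper's shell model or Hertzian contact law.

```python
# Minimal Newmark time-integration sketch (average acceleration, beta=1/4, gamma=1/2)
# for a linear SDOF system m*u'' + c*u' + k*u = f(t) with a short impact pulse.
import numpy as np

m, c, k = 1.0, 0.05, 200.0                       # mass, damping, stiffness (illustrative)
beta, gamma = 0.25, 0.5
dt, n_steps = 1e-3, 2000
t = np.arange(n_steps) * dt
f = np.where(t < 0.02, 50.0 * np.sin(np.pi * t / 0.02), 0.0)   # half-sine "contact" force

u = np.zeros(n_steps); v = np.zeros(n_steps); a = np.zeros(n_steps)
a[0] = (f[0] - c * v[0] - k * u[0]) / m
k_eff = k + gamma * c / (beta * dt) + m / (beta * dt**2)
for i in range(n_steps - 1):
    rhs = (f[i + 1]
           + m * (u[i] / (beta * dt**2) + v[i] / (beta * dt) + (1 / (2 * beta) - 1) * a[i])
           + c * (gamma * u[i] / (beta * dt) + (gamma / beta - 1) * v[i]
                  + dt * (gamma / (2 * beta) - 1) * a[i]))
    u[i + 1] = rhs / k_eff
    a[i + 1] = ((u[i + 1] - u[i]) / (beta * dt**2) - v[i] / (beta * dt)
                - (1 / (2 * beta) - 1) * a[i])
    v[i + 1] = v[i] + dt * ((1 - gamma) * a[i] + gamma * a[i + 1])
print("peak displacement:", u.max())
```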
Fault detection and isolation for complex system
NASA Astrophysics Data System (ADS)
Jing, Chan Shi; Bayuaji, Luhur; Samad, R.; Mustafa, M.; Abdullah, N. R. H.; Zain, Z. M.; Pebrianti, Dwi
2017-07-01
Fault Detection and Isolation (FDI) is a method to monitor, identify, and pinpoint the type and location of system faults in a complex multiple-input multiple-output (MIMO) non-linear system. A two-wheeled robot is used as the complex system in this study. The aim of the research is to design and construct a Fault Detection and Isolation algorithm. The proposed method for fault identification is a hybrid technique that combines a Kalman filter and an Artificial Neural Network (ANN). The Kalman filter tracks the data from the system's sensors and indicates faults in the sensor readings. Error prediction is based on the fault magnitude and the time of fault occurrence. An Artificial Neural Network (ANN) is then used to determine the type of fault and isolate the fault in the system.
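A common way a Kalman filter flags a sensor fault is through its innovation (the difference between a measurement and the filter's prediction). The sketch below injects a bias fault into a scalar sensor tracked with a constant-velocity model and raises a flag when the normalised innovation exceeds a threshold; the robot model, the ANN isolation stage, and the 4-sigma threshold are not from the paper and are assumptions for illustration.

```python
# Minimal Kalman-filter residual (innovation) fault-detection sketch for one sensor.
import numpy as np

rng = np.random.default_rng(7)
dt, n = 0.01, 1000
F = np.array([[1, dt], [0, 1]])               # constant-velocity state transition
H = np.array([[1.0, 0.0]])                    # we measure position only
Q = 1e-4 * np.eye(2)
R = np.array([[0.01]])

true_pos = np.cumsum(np.full(n, 0.5 * dt))    # constant-velocity truth
z = true_pos + rng.normal(0, 0.1, n)
z[n // 2:] += 0.8                             # injected sensor bias fault

x = np.zeros(2); P = np.eye(2)
for k in range(n):
    x = F @ x                                 # predict
    P = F @ P @ F.T + Q
    y = z[k] - (H @ x)[0]                     # innovation
    S = (H @ P @ H.T + R)[0, 0]               # innovation variance
    if abs(y) / np.sqrt(S) > 4.0:             # normalised-innovation fault test
        print(f"fault flagged at step {k}, innovation = {y:.2f}")
        break
    K = (P @ H.T) / S                         # update
    x = x + K[:, 0] * y
    P = (np.eye(2) - K @ H) @ P
```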
Detection of High-impedance Arcing Faults in Radial Distribution DC Systems
NASA Technical Reports Server (NTRS)
Gonzalez, Marcelo C.; Button, Robert M.
2003-01-01
High voltage, low current arcing faults in DC power systems have been researched at the NASA Glenn Research Center in order to develop a method for detecting these 'hidden faults', in-situ, before damage to cables and components from localized heating can occur. A simple arc generator was built, and high-speed and low-speed monitoring of the voltage and current waveforms, respectively, has shown that these high-impedance faults produce a significant increase in high-frequency content in the DC bus voltage and low-frequency content in the DC system current. Based on these observations, an algorithm was developed using a high-speed data acquisition system that was able to accurately detect high-impedance arcing events induced in a single-line system based on the frequency content of the DC bus voltage or the system current. Next, a multi-line, radial distribution system was researched to see if the arc location could be determined through the voltage information when multiple 'detectors' are present in the system. It was shown that a small, passive LC filter was sufficient to reliably isolate the fault to a single line in a multi-line distribution system. Of course, no modification is necessary if only the current information is used to locate the arc. However, data show that it might be necessary to monitor both the system current and bus voltage to improve the chances of detecting and locating high-impedance arcing faults.
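The frequency-content test at the heart of such a detector can be illustrated with a simple FFT comparison of a quiet bus-voltage window against an arcing one. Everything numeric in the sketch below (sampling rate, band edge, the crude broadband noise model for arcing, the 5x alarm ratio) is an assumption for illustration rather than the values used in the NASA work.

```python
# Minimal high-frequency-energy sketch for arc detection on a DC bus voltage window.
import numpy as np

rng = np.random.default_rng(8)
fs, n = 100_000, 4096                          # 100 kHz sampling, one analysis window
quiet = 120.0 + rng.normal(0, 0.01, n)         # nominal 120 V bus with sensor noise
arcing = quiet + 0.2 * rng.normal(size=n) * (rng.random(n) < 0.3)  # crude broadband arc noise

def hf_energy(x, fs, f_lo=5_000.0):
    """Energy above f_lo after removing the DC level."""
    spec = np.abs(np.fft.rfft(x - x.mean())) ** 2
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    return spec[freqs >= f_lo].sum()

baseline = hf_energy(quiet, fs)
print("quiet ratio:", hf_energy(quiet, fs) / baseline)
print("arcing ratio:", hf_energy(arcing, fs) / baseline, "-> alarm if ratio > 5")
```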
Hybrid optimization and Bayesian inference techniques for a non-smooth radiation detection problem
Stefanescu, Razvan; Schmidt, Kathleen; Hite, Jason; ...
2016-12-12
In this paper, we propose several algorithms to recover the location and intensity of a radiation source located in a simulated 250 × 180 m block of an urban center based on synthetic measurements. Radioactive decay and detection are Poisson random processes, so we employ likelihood functions based on this distribution. Owing to the domain geometry and the proposed response model, the negative logarithm of the likelihood is only piecewise continuously differentiable, and it has multiple local minima. To address these difficulties, we investigate three hybrid algorithms composed of mixed optimization techniques. For global optimization, we consider simulated annealing, particle swarm, and genetic algorithms, which rely solely on objective function evaluations; that is, they do not evaluate the gradient of the objective function. By employing early stopping criteria for the global optimization methods, a pseudo-optimum point is obtained. This is subsequently utilized as the initial value by the deterministic implicit filtering method, which is able to find local extrema in non-smooth functions, to finish the search in a narrow domain. These new hybrid techniques, combining global optimization and implicit filtering, address difficulties associated with the non-smooth response, and their performance is shown to significantly decrease the computational time relative to the global optimization methods alone. To quantify uncertainties associated with the source location and intensity, we employ the delayed rejection adaptive Metropolis and DiffeRential Evolution Adaptive Metropolis algorithms. Finally, marginal densities of the source properties are obtained, and the means of the chains compare accurately with the estimates produced by the hybrid algorithms.
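The hybrid structure, a budget-limited global search followed by a derivative-free local polish, can be sketched with off-the-shelf SciPy pieces. In the example below, dual_annealing and Nelder-Mead stand in for the paper's simulated annealing / particle swarm / genetic algorithm and implicit filtering stages, and the inverse-square Poisson source model, detector layout, and bounds are made-up assumptions.

```python
# Minimal hybrid sketch: early-stopped global search, then a local derivative-free polish,
# on a synthetic Poisson likelihood for a 2-D source position and intensity.
import numpy as np
from scipy.optimize import dual_annealing, minimize

rng = np.random.default_rng(9)
detectors = rng.uniform(0, 250, size=(8, 2))
true_src, true_I = np.array([130.0, 90.0]), 5e4
counts = rng.poisson(true_I / (np.linalg.norm(detectors - true_src, axis=1) ** 2 + 1.0))

def neg_log_like(p):
    """Poisson negative log-likelihood for source position (x, y) and intensity I."""
    x, y, I = p
    lam = I / (np.linalg.norm(detectors - np.array([x, y]), axis=1) ** 2 + 1.0)
    lam = np.maximum(lam, 1e-9)
    return np.sum(lam - counts * np.log(lam))

bounds = [(0, 250), (0, 180), (1e3, 1e6)]
coarse = dual_annealing(neg_log_like, bounds, maxiter=200, seed=1)   # early-stopped global stage
polish = minimize(neg_log_like, coarse.x, method="Nelder-Mead")      # local derivative-free stage
print("estimated (x, y, I):", polish.x)
```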
Genetic Algorithms for Multiple-Choice Problems
NASA Astrophysics Data System (ADS)
Aickelin, Uwe
2010-04-01
This thesis investigates the use of problem-specific knowledge to enhance a genetic algorithm approach to multiple-choice optimisation problems. It shows that such information can significantly enhance performance, but that the choice of information and the way it is included are important factors for success. Two multiple-choice problems are considered. The first is constructing a feasible nurse roster that considers as many requests as possible. In the second problem, shops are allocated to locations in a mall subject to constraints and maximising the overall income. Genetic algorithms are chosen for their well-known robustness and ability to solve large and complex discrete optimisation problems. However, a survey of the literature reveals room for further research into generic ways to include constraints into a genetic algorithm framework. Hence, the main theme of this work is to balance feasibility and cost of solutions. In particular, co-operative co-evolution with hierarchical sub-populations, problem structure exploiting repair schemes and indirect genetic algorithms with self-adjusting decoder functions are identified as promising approaches. The research starts by applying standard genetic algorithms to the problems and explaining the failure of such approaches due to epistasis. To overcome this, problem-specific information is added in a variety of ways, some of which are designed to increase the number of feasible solutions found whilst others are intended to improve the quality of such solutions. As well as a theoretical discussion as to the underlying reasons for using each operator, extensive computational experiments are carried out on a variety of data. These show that the indirect approach relies less on problem structure and hence is easier to implement and superior in solution quality.
Using ADOPT Algorithm and Operational Data to Discover Precursors to Aviation Adverse Events
NASA Technical Reports Server (NTRS)
Janakiraman, Vijay; Matthews, Bryan; Oza, Nikunj
2018-01-01
The US National Airspace System (NAS) is making its transition to the NextGen system, and assuring safety is one of the top priorities in NextGen. At present, safety is managed reactively (corrected after occurrence of an unsafe event). While this strategy works for current operations, it may soon become ineffective for future airspace designs and high density operations. There is a need for proactive management of safety risks by identifying hidden and "unknown" risks and evaluating the impacts on future operations. To this end, NASA Ames has developed data mining algorithms that find anomalies and precursors (high-risk states) to safety issues in the NAS. In this paper, we describe a recently developed algorithm called ADOPT that analyzes large volumes of data and automatically identifies precursors from real-world data. Precursors help in detecting safety risks early so that the operator can mitigate the risk in time. In addition, precursors also help identify causal factors and help predict the safety incident. The ADOPT algorithm scales well to large data sets and to multidimensional time series, significantly reduces analyst time, and quantifies multiple safety risks, giving a holistic view of safety, among other benefits. This paper details the algorithm and includes several case studies to demonstrate its application to discovering "known" and "unknown" safety precursors in aviation operations.
NASA Astrophysics Data System (ADS)
Gruber, Thomas; Grim, Larry; Fauth, Ryan; Tercha, Brian; Powell, Chris; Steinhardt, Kristin
2011-05-01
Large networks of disparate chemical/biological (C/B) sensors, MET sensors, and intelligence, surveillance, and reconnaissance (ISR) sensors reporting to various command/display locations can lead to conflicting threat information, questions of alarm confidence, and a confused situational awareness. Sensor netting algorithms (SNA) are being developed to resolve these conflicts and to report high confidence consensus threat map data products on a common operating picture (COP) display. A data fusion algorithm design was completed in a Phase I SBIR effort and development continues in the Phase II SBIR effort. The initial implementation and testing of the algorithm has produced some performance results. The algorithm accepts point and/or standoff sensor data, and event detection data (e.g., the location of an explosion) from various ISR sensors (e.g., acoustic, infrared cameras, etc.). These input data are preprocessed to assign estimated uncertainty to each incoming piece of data. The data are then sent to a weighted tomography process to obtain a consensus threat map, including estimated threat concentration level uncertainty. The threat map is then tested for consistency and the overall confidence for the map result is estimated. The map and confidence results are displayed on a COP. The benefits of a modular implementation of the algorithm and comparisons of fused / un-fused data results will be presented. The metrics for judging the sensor-netting algorithm performance are warning time, threat map accuracy (as compared to ground truth), false alarm rate, and false alarm rate v. reported threat confidence level.
Examining seismicity patterns in the 2010 M 8.8 Maule rupture zone.
NASA Astrophysics Data System (ADS)
Diniakos, R. S.; Bilek, S. L.; Rowe, C. A.; Draganov, D.
2016-12-01
The subduction of the Nazca Plate beneath the South American Plate along Chile has produced some of the largest earthquakes recorded on modern seismic instrumentation. These include the 1960 M 9.5 Valdivia, 2010 M 8.8 Maule, 2014 M 8.1 Iquique, and more recently the 2015 M 8.3 Illapel earthquakes. Slip heterogeneity in the 2010 Maule earthquake has been noted in various studies, with bilateral slip and peak slip of 15 m north of the epicenter. For other great subduction zone earthquakes, such as the 2004 M 9.1 Sumatra, 2010 M 8.8 Maule, and 2011 M 9.0 Tohoku, there was an increase in normal-faulting earthquakes in regions of high slip. In order to understand aftershock behavior of the 2010 Maule event, we are expanding the catalog of small magnitude earthquakes using a template-matching algorithm to find other small earthquakes in the rupture area. We use a starting earthquake catalog (magnitudes between 2.5-4.0) developed from regional and local array seismic data; these comprise our template catalog from Jan. - Dec. 2012 that we use to search through seismic waveforms recorded by a 2012 temporary seismic array in Malargüe, Argentina located 300 km east of the Maule rupture area. We use waveform cross correlation techniques in order to detect new events, and then we use HYPOINVERSE2000 (Klein, 2002) and a velocity model designed for the south-central Chilean region (Haberland et al., 2006) to locate new detections. We also determine focal mechanisms to further analyze aftershock behavior for the region. To date, over 2400 unique detections have been found, of which we have located 133 events with an RMS <1. Many of these events are located in the region of greatest coseismic slip, north of the 2010 epicenter, whereas catalog events are located north and south of the epicenter, along the regions of bilateral slip. Focal mechanisms for the new locations will also be presented.
A method for detecting and locating geophysical events using groups of arrays
NASA Astrophysics Data System (ADS)
de Groot-Hedlin, Catherine D.; Hedlin, Michael A. H.
2015-11-01
We have developed a novel method to detect and locate geophysical events that makes use of any sufficiently dense sensor network. This method is demonstrated using acoustic sensor data collected in 2013 at the USArray Transportable Array (TA). The algorithm applies Delaunay triangulation to divide the sensor network into a mesh of three-element arrays, called triads. Because infrasound waveforms are incoherent between the sensors within each triad, the data are transformed into envelopes, which are cross-correlated to find signals that satisfy a consistency criterion. The propagation azimuth, phase velocity and signal arrival time are computed for each signal. Triads with signals that are consistent with a single source are bundled as an event group. The ensemble of arrival times and azimuths of detected signals within each group are used to locate a common source in space and time. A total of 513 infrasonic stations that were active for part or all of 2013 were divided into over 2000 triads. Low (0.5-2 Hz) and high (2-8 Hz) catalogues of infrasonic events were created for the eastern USA. The low-frequency catalogue includes over 900 events and reveals several highly active source areas on land that correspond with coal mining regions. The high-frequency catalogue includes over 2000 events, with most occurring offshore. Although their cause is not certain, most events are clearly anthropogenic as almost all occur during regular working hours each week. The regions to which the TA is most sensitive vary seasonally, with the direction of reception dependent on the direction of zonal winds. The catalogue has also revealed large acoustic events that may provide useful insight into the nature of long-range infrasound propagation in the atmosphere.
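The triad construction step is a straightforward Delaunay triangulation of the station coordinates, with each triangle treated as a three-element array. The sketch below shows that step with SciPy on random stand-in station coordinates; the downstream envelope cross-correlation and event bundling are not reproduced.

```python
# Minimal triad-construction sketch: Delaunay triangulation of station locations.
import numpy as np
from scipy.spatial import Delaunay

rng = np.random.default_rng(10)
stations = rng.uniform([-105, 30], [-75, 48], size=(60, 2))   # (lon, lat) stand-ins

tri = Delaunay(stations)
triads = tri.simplices                      # each row: indices of one three-station triad
print("number of triads:", len(triads))
print("first triad station indices:", triads[0])

# Each triad would then cross-correlate envelope waveforms among its three members
# to estimate a back azimuth, phase velocity, and arrival time for candidate signals.
```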
You, Kaiming; Yang, Wei; Han, Ruisong
2015-01-01
Based on wireless multimedia sensor networks (WMSNs) deployed in an underground coal mine, a miner’s lamp video collaborative localization algorithm is proposed to locate miners under the insufficient illumination and bifurcated structures of underground tunnels. In a bifurcation area, several camera nodes are deployed along the longitudinal direction of the tunnels, forming a wireless collaborative cluster to monitor and locate miners. Cap-lamps are used as the identifying feature of miners under the insufficient illumination of underground tunnels, which means that miners can be identified by detecting their cap-lamps. A miner’s lamp projects mapping points on the imaging planes of the collaborative cameras, and the coordinates of these mapping points are calculated by the cameras. Multiple straight lines between the positions of the collaborative cameras and their corresponding mapping points are then established. To find the three-dimensional (3D) location of the miner’s lamp, a least-squares method is proposed to obtain the optimal intersection of the multiple straight lines. Tests were carried out both in a corridor and in a realistic underground tunnel scenario, which show that the proposed miner’s lamp video collaborative localization algorithm has good effectiveness, robustness and localization accuracy in real-world conditions of underground tunnels. PMID:26426023
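The final least-squares step, finding the point closest to several 3-D rays from camera positions toward their mapping points, has a closed-form normal-equation solution. The sketch below implements that step with synthetic camera positions and ray directions; it is a minimal illustration, not the paper's calibration or imaging pipeline.

```python
# Minimal least-squares intersection of 3-D lines x = p_i + t*d_i (camera -> mapping point).
import numpy as np

def nearest_point_to_lines(points, directions):
    """Point minimizing the summed squared distance to all lines."""
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for p, d in zip(points, directions):
        d = d / np.linalg.norm(d)
        P = np.eye(3) - np.outer(d, d)        # projector onto the plane normal to d
        A += P
        b += P @ p
    return np.linalg.solve(A, b)

lamp = np.array([5.0, 2.0, 1.8])                                # "true" cap-lamp position
cams = np.array([[0, 0, 2.5], [8, 0, 2.5], [4, 3, 2.5]], float)  # camera node positions
dirs = lamp - cams + 0.01 * np.random.default_rng(11).normal(size=(3, 3))  # noisy rays
print("estimated lamp position:", nearest_point_to_lines(cams, dirs))
```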
Danielson, Thomas; Sutton, Jonathan E.; Hin, Céline; ...
2017-06-09
Lattice based Kinetic Monte Carlo (KMC) simulations offer a powerful simulation technique for investigating large reaction networks while retaining spatial configuration information, unlike ordinary differential equations. However, large chemical reaction networks can contain reaction processes with rates spanning multiple orders of magnitude. This can lead to the problem of “KMC stiffness” (similar to stiffness in differential equations), where the computational expense has the potential to be overwhelmed by very short time-steps during KMC simulations, with the simulation spending an inordinate amount of KMC steps / cpu-time simulating fast frivolous processes (FFPs) without progressing the system (reaction network). In order to achieve simulation times that are experimentally relevant or desired for predictions, a dynamic throttling algorithm involving separation of the processes into speed-ranks based on event frequencies has been designed and implemented with the intent of decreasing the probability of FFP events, and increasing the probability of slow process events -- allowing rate limiting events to become more likely to be observed in KMC simulations. This Staggered Quasi-Equilibrium Rank-based Throttling for Steady-state (SQERTSS) algorithm is designed for use in achieving and simulating steady-state conditions in KMC simulations. Lastly, as shown in this work, the SQERTSS algorithm also works for transient conditions: the correct configuration space and final state will still be achieved if the required assumptions are not violated, with the caveat that the sizes of the time-steps may be distorted during the transient period.
NASA Astrophysics Data System (ADS)
Danielson, Thomas; Sutton, Jonathan E.; Hin, Céline; Savara, Aditya
2017-10-01
Lattice based Kinetic Monte Carlo (KMC) simulations offer a powerful simulation technique for investigating large reaction networks while retaining spatial configuration information, unlike ordinary differential equations. However, large chemical reaction networks can contain reaction processes with rates spanning multiple orders of magnitude. This can lead to the problem of "KMC stiffness" (similar to stiffness in differential equations), where the computational expense has the potential to be overwhelmed by very short time-steps during KMC simulations, with the simulation spending an inordinate amount of KMC steps/CPU time simulating fast frivolous processes (FFPs) without progressing the system (reaction network). In order to achieve simulation times that are experimentally relevant or desired for predictions, a dynamic throttling algorithm involving separation of the processes into speed-ranks based on event frequencies has been designed and implemented with the intent of decreasing the probability of FFP events, and increasing the probability of slow process events-allowing rate limiting events to become more likely to be observed in KMC simulations. This Staggered Quasi-Equilibrium Rank-based Throttling for Steady-state (SQERTSS) algorithm is designed for use in achieving and simulating steady-state conditions in KMC simulations. As shown in this work, the SQERTSS algorithm also works for transient conditions: the correct configuration space and final state will still be achieved if the required assumptions are not violated, with the caveat that the sizes of the time-steps may be distorted during the transient period.
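The stiffness problem that SQERTSS targets is easy to see in a bare direct-method KMC loop: when one rate dwarfs the others, essentially every step executes the fast frivolous process. The sketch below is a minimal two-process illustration of that selection loop; the rates and process names are invented and no throttling is applied.

```python
# Minimal direct-method KMC sketch illustrating stiffness from disparate rates.
import numpy as np

rng = np.random.default_rng(12)
rates = {"fast_hop": 1e6, "slow_reaction": 1e0}     # events per unit time (illustrative)

counts = {k: 0 for k in rates}
t = 0.0
names = list(rates)
r = np.array([rates[k] for k in names])
total = r.sum()
cum = np.cumsum(r)
for _ in range(100_000):
    t += rng.exponential(1.0 / total)                # stochastic time advance
    chosen = names[np.searchsorted(cum, rng.random() * total)]
    counts[chosen] += 1

print(counts, "simulated time:", t)
# A rank-based throttling scheme such as SQERTSS would down-weight the fast
# process in a controlled way so that slow, rate-limiting events are actually observed.
```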
Multiple-Beam Detection of Fast Transient Radio Sources
NASA Technical Reports Server (NTRS)
Thompson, David R.; Wagstaff, Kiri L.; Majid, Walid A.
2011-01-01
A method has been designed for using multiple independent stations to discriminate fast transient radio sources from local anomalies, such as antenna noise or radio frequency interference (RFI). This can improve the sensitivity of incoherent detection for geographically separated stations such as the very long baseline array (VLBA), the future square kilometer array (SKA), or any other coincident observations by multiple separated receivers. The transients are short, broadband pulses of radio energy, often just a few milliseconds long, emitted by a variety of exotic astronomical phenomena. They generally represent rare, high-energy events making them of great scientific value. For RFI-robust adaptive detection of transients, using multiple stations, a family of algorithms has been developed. The technique exploits the fact that the separated stations constitute statistically independent samples of the target. This can be used to adaptively ignore RFI events for superior sensitivity. If the antenna signals are independent and identically distributed (IID), then RFI events are simply outlier data points that can be removed through robust estimation such as a trimmed or Winsorized estimator. The alternative "trimmed" estimator is considered, which excises the strongest n signals from the list of short-beamed intensities. Because local RFI is independent at each antenna, this interference is unlikely to occur at many antennas on the same step. Trimming the strongest signals provides robustness to RFI that can theoretically outperform even the detection performance of the same number of antennas at a single site. This algorithm requires sorting the signals at each time step and dispersion measure, an operation that is computationally tractable for existing array sizes. An alternative uses the various stations to form an ensemble estimate of the conditional density function (CDF) evaluated at each time step. Both methods outperform standard detection strategies on a test sequence of VLBA data, and both are efficient enough for deployment in real-time, online transient detection applications.
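The trimmed-estimator idea can be shown in a few lines: sum the per-station intensities at each time step after excising the n strongest, so that RFI confined to one or two stations cannot trigger a detection while a genuine broadband pulse seen by all stations still can. The station count, trim depth, injected signals, and 6-sigma threshold below are illustrative assumptions, not the VLBA test configuration.

```python
# Minimal trimmed multi-station incoherent detection sketch.
import numpy as np

rng = np.random.default_rng(13)
n_stations, n_steps = 10, 5000
intensity = rng.normal(0, 1, size=(n_stations, n_steps))
intensity[3, 1200] += 50.0                     # strong local RFI at a single station
intensity[:, 3000] += 3.0                      # genuine pulse seen by all stations

def trimmed_stat(x, n_trim=2):
    """Per-time-step sum of station intensities with the n_trim largest removed."""
    s = np.sort(x, axis=0)
    return s[:-n_trim].sum(axis=0)

stat = trimmed_stat(intensity)
threshold = stat.mean() + 6.0 * stat.std()
print("candidate time steps:", np.flatnonzero(stat > threshold))   # RFI step is suppressed
```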
Hunting for shallow slow-slip events at Cascadia
NASA Astrophysics Data System (ADS)
Tan, Y. J.; Bletery, Q.; Fan, W.; Janiszewski, H. A.; Lynch, E.; McCormack, K. A.; Phillips, N. J.; Rousset, B.; Seyler, C.; French, M. E.; Gaherty, J. B.; Regalla, C.
2017-12-01
The discovery of slow earthquakes at subduction zones is one of the major breakthroughs of Earth science in the last two decades. Slow earthquakes involve a wide spectrum of fault slip behaviors and seismic radiation patterns, such as tremor, low-frequency earthquakes, and slow-slip events. The last of these are particularly interesting due to their large moment releases accompanied by minimal ground shaking. Slow-slip events have been reported at various subduction zones; most of these slow-slip events are located down-dip of the megathrust seismogenic zone, while a few up-dip cases have recently been observed at Nankai and New Zealand. Up-dip slow-slip events illuminate the structure of faulting environments and rupture mechanisms of tsunami earthquakes. Their possible presence and location at a particular subduction zone can help assess earthquake and tsunami hazard for that region. However, their typical location distant from the coast requires the development of techniques using offshore instrumentation. Here, we investigate the absolute pressure gauges (APG) of the Cascadia Initiative, a four-year amphibious seismic experiment, to search for possible shallow up-dip slow-slip events in the Cascadia subduction zone. These instruments are collocated with ocean bottom seismometers (OBS) and located close to buoys and onshore GPS stations, offering the opportunity to investigate the utility of multiple datasets. Ultimately, we aim to develop a protocol to analyze APG data for offshore shallow slow-slip event detections and quantify uncertainties, with direct applications to understanding the up-dip subduction interface system in Cascadia.
NASA Astrophysics Data System (ADS)
Nealy, J. L.; Benz, H.; Hayes, G. P.; Bergman, E.; Barnhart, W. D.
2016-12-01
On February 21, 2008 at 14:16:02 (UTC), Wells, Nevada experienced a Mw 6.0 earthquake, the largest earthquake in the state within the past 50 years. Here, we re-analyze in detail the spatiotemporal variations of the foreshock and aftershock sequence and compare the distribution of seismicity to a recent slip model based on inversion of InSAR observations. A catalog of earthquakes for the time period of February 1, 2008 through August 31, 2008 was derived from a combination of arrival time picks using a kurtosis detector (primarily P arrival times), subspace detector (primarily S arrival times), associating the combined pick dataset, and applying multiple event relocation techniques using the 19 closest USArray Transportable Array stations, permanent regional seismic monitoring stations in Nevada and Utah, and temporary stations deployed for an aftershock study. We were able to detect several thousand earthquakes in the months following the mainshock as well as several foreshocks in the days leading up to the event. We reviewed the picks for the largest 986 earthquakes and relocated them using the Hypocentroidal Decomposition (HD) method. The HD technique provides both relative locations for the individual earthquakes and an absolute location for the earthquake cluster, resulting in absolute locations of the events in the cluster having minimal bias from unknown Earth structure. A subset of these "calibrated" earthquake locations that spanned the duration of the sequence and had small uncertainties in location were used as prior constraints within a second relocation effort using the entire dataset and the Bayesloc approach. Accurate locations (to within 2 km) were obtained using Bayesloc for 1,952 of the 2,157 events associated over the seven-month period of the study. The final catalog of earthquake hypocenters indicates that the aftershocks extend for about 20 km along the strike of the ruptured fault. The aftershocks occur primarily updip and along the southwestern edge of the zone of maximum slip as modeled by seismic waveform inversion (Dreger et al., 2011) and by InSAR. The aftershock locations illuminate areas of post-mainshock strain increase and their depths are consistent with InSAR imaging, which showed that the Wells earthquake was a buried source with no observable near-surface offset.
Ahmed, Afaz Uddin; Arablouei, Reza; Hoog, Frank de; Kusy, Branislav; Jurdak, Raja; Bergmann, Neil
2018-05-29
Channel state information (CSI) collected during WiFi packet transmissions can be used for localization of commodity WiFi devices in indoor environments with multipath propagation. To this end, the angle of arrival (AoA) and time of flight (ToF) for all dominant multipath components need to be estimated. A two-dimensional (2D) version of the multiple signal classification (MUSIC) algorithm has been shown to solve this problem using a 2D grid search, which is computationally expensive and is therefore not suited for real-time localization. In this paper, we propose using a modified matrix pencil (MMP) algorithm instead. Specifically, we show that the AoA and ToF estimates can be found independently of each other using the one-dimensional (1D) MMP algorithm and that the results can be accurately paired to obtain the AoA-ToF pairs for all multipath components. Thus, the 2D estimation problem reduces to running 1D estimation multiple times, substantially reducing the computational complexity. We identify and resolve the problem of degenerate performance when two or more multipath components have the same AoA. In addition, we propose a packet aggregation model that uses the CSI data from multiple packets to improve the performance under noisy conditions. Simulation results show that our algorithm achieves two orders of magnitude reduction in the computational time over the 2D MUSIC algorithm while achieving similar accuracy. The high accuracy and low computational complexity of our approach make it suitable for applications that require location estimation to run on resource-constrained embedded devices in real time.
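The 1D building block can be sketched with a standard matrix pencil estimator for complex exponentials; the CSI model and parameter values below are illustrative and this is not the authors' MMP code.

```python
import numpy as np

def matrix_pencil(y, num_components, pencil_param=None):
    """Estimate complex exponentials z_k from y[n] = sum_k c_k * z_k**n.

    One-dimensional matrix pencil: build a Hankel matrix, drop its last and
    first columns to form the pencil (Y1, Y2), and take the dominant
    eigenvalues of pinv(Y1) @ Y2 as the exponentials.
    """
    n = len(y)
    L = pencil_param or n // 2
    Y = np.array([[y[i + j] for j in range(L + 1)] for i in range(n - L)])
    Y1, Y2 = Y[:, :-1], Y[:, 1:]
    eigvals = np.linalg.eigvals(np.linalg.pinv(Y1) @ Y2)
    # Keep the num_components eigenvalues of largest magnitude
    return eigvals[np.argsort(-np.abs(eigvals))][:num_components]

# Illustrative use: two multipath components encoded as phase ramps across
# subcarriers; the recovered phase slopes map to ToF (and analogously to AoA).
n_sub = 32
true_phases = np.array([0.20, 0.45])                  # radians per subcarrier
csi = sum(np.exp(1j * p * np.arange(n_sub)) for p in true_phases)
z = matrix_pencil(csi, num_components=2)
print(np.sort(np.angle(z)))                           # ~ [0.20, 0.45]
```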
Algorithms for System Identification and Source Location.
NASA Astrophysics Data System (ADS)
Nehorai, Arye
This thesis deals with several topics in least squares estimation and applications to source location. It begins with a derivation of a mapping between Wiener theory and Kalman filtering for nonstationary autoregressive moving average (ARMA) processes. Applying time domain analysis, connections are found between time-varying state space realizations and input-output impulse response by matrix fraction description (MFD). Using these connections, the whitening filters are derived by the two approaches, and the Kalman gain is expressed in terms of Wiener theory. Next, fast estimation algorithms are derived in a unified way as special cases of the Conjugate Direction Method. The fast algorithms included are the block Levinson, fast recursive least squares, ladder (or lattice) and fast Cholesky algorithms. The results give a novel derivation and interpretation for all these methods, which are efficient alternatives to available recursive system identification algorithms. Multivariable identification algorithms are usually designed only for left MFD models. In this work, recursive multivariable identification algorithms are derived for right MFD models with diagonal denominator matrices. The algorithms are of prediction error and model reference type. Convergence analysis results obtained by the Ordinary Differential Equation (ODE) method are presented along with simulations. Sources of energy can be located by estimating time differences of arrival (TDOAs) of waves between the receivers. A new method for TDOA estimation is proposed for multiple unknown ARMA sources and additive correlated receiver noise. The method is based on a formula that uses only the receiver cross-spectra and the source poles. Two algorithms are suggested that allow tradeoffs between computational complexity and accuracy. A new time delay model is derived and used to show the applicability of the methods for non-integer TDOAs. Results from simulations illustrate the performance of the algorithms. The last chapter analyzes the response of exact least squares predictors for enhancement of sinusoids with additive colored noise. Using the matrix inversion lemma and the Christoffel-Darboux formula, the frequency response and amplitude gain of the sinusoids are expressed as functions of the signal and noise characteristics. The results generalize the available white noise case.
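The thesis method works from receiver cross-spectra and source poles; as a simpler, generic illustration of TDOA estimation (not the thesis algorithm), a cross-correlation peak estimate looks like the following sketch.

```python
import numpy as np

def tdoa_crosscorr(x1, x2, fs):
    """Estimate the time difference of arrival between two receivers from
    the peak of their cross-correlation (generic illustration, not the
    cross-spectral/pole-based method of the thesis).

    Returns a positive value when x2 receives the signal after x1.
    """
    corr = np.correlate(x2, x1, mode="full")
    lag = np.argmax(corr) - (len(x1) - 1)
    return lag / fs

# Illustrative example: the same noise burst arrives 5 samples later at x2
fs = 1000.0
rng = np.random.default_rng(2)
src = rng.normal(size=200)
x1 = np.concatenate([src, np.zeros(5)])
x2 = np.concatenate([np.zeros(5), src])
print(tdoa_crosscorr(x1, x2, fs))  # ~ +0.005 s
```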
Nguyen, Phong Ha; Arsalan, Muhammad; Koo, Ja Hyung; Naqvi, Rizwan Ali; Truong, Noi Quang; Park, Kang Ryoung
2018-05-24
Autonomous landing of an unmanned aerial vehicle or a drone is a challenging problem for the robotics research community. Previous researchers have attempted to solve this problem by combining multiple sensors such as global positioning system (GPS) receivers, inertial measurement units, and multiple camera systems. Although these approaches successfully estimate an unmanned aerial vehicle's location during landing, many calibration processes are required to achieve good detection accuracy. In addition, cases where drones operate in heterogeneous areas with no GPS signal should be considered. To overcome these problems, we determined how to safely land a drone in a GPS-denied environment using our remote-marker-based tracking algorithm based on a single visible-light-camera sensor. Instead of using hand-crafted features, our algorithm includes a convolutional neural network named lightDenseYOLO that extracts trained features from an image captured by the visible-light camera on the drone to predict the marker's location. Experimental results show that our method significantly outperforms state-of-the-art object trackers, both with and without convolutional neural networks, in terms of both accuracy and processing time.
Multiple-camera/motion stereoscopy for range estimation in helicopter flight
NASA Technical Reports Server (NTRS)
Smith, Phillip N.; Sridhar, Banavar; Suorsa, Raymond E.
1993-01-01
Aiding the pilot to improve safety and reduce pilot workload by detecting obstacles and planning obstacle-free flight paths during low-altitude helicopter flight is desirable. Computer vision techniques provide an attractive method of obstacle detection and range estimation for objects within a large field of view ahead of the helicopter. Previous research has had considerable success solving this problem by using an image sequence from a single moving camera. The major limitations of single-camera approaches are that no range information can be obtained near the instantaneous direction of motion or in the absence of motion. These limitations can be overcome through the use of multiple cameras. This paper presents a hybrid motion/stereo algorithm which allows range refinement through recursive range estimation while avoiding loss of range information in the direction of travel. A feature-based approach is used to track objects between image frames. An extended Kalman filter combines knowledge of the camera motion and measurements of a feature's image location to recursively estimate the feature's range and to predict its location in future images. Performance of the algorithm will be illustrated using an image sequence, motion information, and independent range measurements from a low-altitude helicopter flight experiment.
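A minimal sketch of the recursive idea, assuming a toy planar geometry (known lateral camera motion, a pinhole projection measurement, and hypothetical parameter values) rather than the paper's full motion/stereo filter.

```python
import numpy as np

f_px = 500.0          # hypothetical focal length in pixels

def ekf_step(x, P, u_meas, cam_shift, R=1.0, Q=np.diag([1e-4, 1e-4])):
    """One EKF predict/update for a single tracked feature.

    State x = [X, Z]: lateral offset and range of the feature in the camera
    frame. The camera translates laterally by cam_shift between frames
    (known from motion sensors); the measurement is the feature's image
    column u = f_px * X / Z.
    """
    # Predict: relative lateral offset decreases by the known camera shift
    x_pred = np.array([x[0] - cam_shift, x[1]])
    P_pred = P + Q
    # Update with the nonlinear projection measurement
    X, Z = x_pred
    h = f_px * X / Z
    H = np.array([[f_px / Z, -f_px * X / Z**2]])
    S = H @ P_pred @ H.T + R
    K = (P_pred @ H.T) / S
    x_new = x_pred + (K * (u_meas - h)).ravel()
    P_new = (np.eye(2) - K @ H) @ P_pred
    return x_new, P_new

# Illustrative run: true feature at X = 2 m, Z = 30 m; camera shifts 0.5 m/frame
rng = np.random.default_rng(3)
true_X, true_Z = 2.0, 30.0
def measure(X, Z):
    return f_px * X / Z + rng.normal(0.0, 1.0)   # 1-pixel measurement noise

Z0 = 15.0                                 # deliberately poor range guess
x = np.array([measure(true_X, true_Z) * Z0 / f_px, Z0])
P = np.diag([1.0, 100.0])
for _ in range(20):
    shift = 0.5                           # known camera motion per frame
    true_X -= shift
    x, P = ekf_step(x, P, measure(true_X, true_Z), shift)
print(x)   # the range estimate x[1] should move toward the true 30 m
```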
Association rule mining in the US Vaccine Adverse Event Reporting System (VAERS).
Wei, Lai; Scott, John
2015-09-01
Spontaneous adverse event reporting systems are critical tools for monitoring the safety of licensed medical products. Commonly used signal detection algorithms identify disproportionate product-adverse event pairs and may not be sensitive to more complex potential signals. We sought to develop a computationally tractable multivariate data-mining approach to identify product-multiple adverse event associations. We describe an application of stepwise association rule mining (Step-ARM) to detect potential vaccine-symptom group associations in the US Vaccine Adverse Event Reporting System. Step-ARM identifies strong associations between one vaccine and one or more adverse events. To reduce the number of redundant association rules found by Step-ARM, we also propose a clustering method for the post-processing of association rules. In sample applications to a trivalent intradermal inactivated influenza virus vaccine and to measles, mumps, rubella, and varicella (MMRV) vaccine and in simulation studies, we find that Step-ARM can detect a variety of medically coherent potential vaccine-symptom group signals efficiently. In the MMRV example, Step-ARM appears to outperform univariate methods in detecting a known safety signal. Our approach is sensitive to potentially complex signals, which may be particularly important when monitoring novel medical countermeasure products such as pandemic influenza vaccines. The post-processing clustering algorithm improves the applicability of the approach as a screening method to identify patterns that may merit further investigation. Copyright © 2015 John Wiley & Sons, Ltd.
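A minimal sketch of the support/confidence screening that underlies such rule mining, with hypothetical vaccine and symptom codes; this is not the Step-ARM implementation or its clustering post-processing.

```python
from itertools import combinations
from collections import Counter

# Hypothetical spontaneous reports: each lists one vaccine and its reported symptoms
reports = [
    {"MMRV", "fever", "rash"},
    {"MMRV", "fever", "seizure"},
    {"FLU3", "injection-site pain"},
    {"MMRV", "fever", "rash"},
    {"FLU3", "fever"},
]

def rules(reports, vaccine, min_support=0.2, min_confidence=0.5):
    """Score vaccine -> symptom-group rules by support and confidence."""
    n = len(reports)
    vax_reports = [r - {vaccine} for r in reports if vaccine in r]
    counts = Counter()
    for symptoms in vax_reports:
        for size in (1, 2):                       # symptom groups up to size 2
            counts.update(frozenset(c) for c in combinations(sorted(symptoms), size))
    out = []
    for group, c in counts.items():
        support, confidence = c / n, c / len(vax_reports)
        if support >= min_support and confidence >= min_confidence:
            out.append((vaccine, set(group), support, confidence))
    return sorted(out, key=lambda t: -t[3])

for rule in rules(reports, "MMRV"):
    print(rule)
```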
Wang, Xuezhi; Huang, Xiaotao; Suvorova, Sofia; Moran, Bill
2018-01-01
Golay complementary waveforms can, in theory, yield radar returns of high range resolution with essentially zero sidelobes. In practice, when deployed conventionally, while high signal-to-noise ratios can be achieved for static target detection, significant range sidelobes are generated by target returns of nonzero Doppler, causing unreliable detection. We consider signal processing techniques using Golay complementary waveforms to improve radar detection performance in scenarios involving multiple nonzero Doppler targets. A signal processing procedure based on an existing, so-called Binomial Design algorithm, which alters the transmission order of Golay complementary waveforms and weights the returns, is proposed in an attempt to achieve enhanced illumination performance. The procedure applies one of three proposed waveform transmission ordering algorithms, followed by a pointwise nonlinear processor combining the outputs of the Binomial Design algorithm and one of the ordering algorithms. The computational complexities of the Binomial Design algorithm and the three ordering algorithms are compared, and a statistical analysis of the performance of the pointwise nonlinear processing is given. Estimation of the areas in the Delay–Doppler map occupied by significant range sidelobes for given targets is also discussed. Numerical simulations for the comparison of the performances of the Binomial Design algorithm and the three ordering algorithms are presented for both fixed and randomized target locations. The simulation results demonstrate that the proposed signal processing procedure has better detection performance, in terms of lower sidelobes and higher Doppler resolution, in the presence of multiple nonzero Doppler targets compared to existing methods. PMID:29324708
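The zero-sidelobe property for static targets can be checked directly: for a Golay complementary pair, the two autocorrelations sum to a delta function. A minimal sketch using the standard recursive pair construction (not the paper's Binomial Design or ordering algorithms):

```python
import numpy as np

def golay_pair(m):
    """Return a Golay complementary pair of length 2**m via the standard
    recursive construction a -> [a, b], b -> [a, -b]."""
    a, b = np.array([1.0]), np.array([1.0])
    for _ in range(m):
        a, b = np.concatenate([a, b]), np.concatenate([a, -b])
    return a, b

a, b = golay_pair(6)                                   # length-64 pair
acf = np.correlate(a, a, "full") + np.correlate(b, b, "full")
print(acf[len(a) - 1])                                 # 2 * 64 at zero lag
print(np.max(np.abs(np.delete(acf, len(a) - 1))))      # 0.0: no range sidelobes
```

Nonzero Doppler breaks this cancellation, which is why the transmission ordering and weighting studied in the paper matter.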
NASA Astrophysics Data System (ADS)
Li, Xinlu; Lu, Hui; Lyu, Haobo
2017-04-01
Drought is one of the typical natural disasters around the world, and it has become an increasingly important climatic event under climate change. Assessing and monitoring drought accurately is crucial for addressing climate change and formulating corresponding policies. Several drought indices have been developed and widely used at regional and global scales to represent and monitor drought; they integrate datasets such as precipitation, soil moisture, snowpack, streamflow, and evapotranspiration derived from land surface models or remotely sensed datasets. Vegetation is a prominent component of the ecosystem that modulates the water and energy flux between the land surface and the atmosphere, and thus can be regarded as a drought indicator, especially for agricultural drought. Leaf area index (LAI), an important parameter quantifying terrestrial vegetation conditions, can provide a new way for drought monitoring. Drought characteristics can be described by severity, area and duration. Andreadis et al. constructed a severity-area-duration (SAD) algorithm to reflect the spatial patterns of droughts and their dynamics over time, which represents progress in drought analysis. In our study, a new drought index product was developed using the LAI percentile (LAIpct) SAD algorithm. The remotely sensed global GLASS (Global LAnd Surface Satellite) LAI from 2001-2011 was used as the basic data. Data were normalized for each time phase to eliminate the phenology effect, and then the percentile of the normalized data was calculated as the SAD input. A 20% threshold was set as the drought threshold, and a clustering algorithm was used to identify individual drought events for each time step. Actual drought events were identified by considering cases where multiple clusters merge to form a larger drought or a drought event breaks up into multiple small droughts, according to the distance between drought centers and the overlapping drought area. Severity, duration and area were recorded for each actual drought event. Finally, we utilized the existing DSI drought index product for comparison. The LAIpct drought index can detect both short-term and long-term drought events. In the last decade, most droughts at the global scale are short-term, lasting less than 1 year, and the longest drought event lasts for 3 years. The LAIpct drought area percentage is consistent with DSI, and according to the drought severity classification of the United States Drought Monitor system, we found that 20% LAIpct corresponds to moderate drought, 15% LAIpct corresponds to severe drought, and 10% LAIpct corresponds to extreme drought. For some typical drought events, we found the LAIpct drought spatial patterns agree well with DSI, and in terms of temporal consistency, LAIpct appears smoother and closer to reality than the DSI product. Although the short record length of the LAIpct drought index product hinders the analysis of global climate change to some extent, it provides a new way to better monitor agricultural drought.
State estimation of spatio-temporal phenomena
NASA Astrophysics Data System (ADS)
Yu, Dan
This dissertation addresses the state estimation problem of spatio-temporal phenomena which can be modeled by partial differential equations (PDEs), such as pollutant dispersion in the atmosphere. After discretizing the PDE, the dynamical system has a large number of degrees of freedom (DOF). State estimation using Kalman Filter (KF) is computationally intractable, and hence, a reduced order model (ROM) needs to be constructed first. Moreover, the nonlinear terms, external disturbances or unknown boundary conditions can be modeled as unknown inputs, which leads to an unknown input filtering problem. Furthermore, the performance of KF could be improved by placing sensors at feasible locations. Therefore, the sensor scheduling problem to place multiple mobile sensors is of interest. The first part of the dissertation focuses on model reduction for large scale systems with a large number of inputs/outputs. A commonly used model reduction algorithm, the balanced proper orthogonal decomposition (BPOD) algorithm, is not computationally tractable for large systems with a large number of inputs/outputs. Inspired by the BPOD and randomized algorithms, we propose a randomized proper orthogonal decomposition (RPOD) algorithm and a computationally optimal RPOD (RPOD*) algorithm, which construct an ROM to capture the input-output behaviour of the full order model, while reducing the computational cost of BPOD by orders of magnitude. It is demonstrated that the proposed RPOD* algorithm can construct the ROM in real time, and the performance of the proposed algorithms is demonstrated on different advection-diffusion equations. Next, we consider the state estimation problem of linear discrete-time systems with unknown inputs which can be treated as a wide-sense stationary process with rational power spectral density, while no other prior information needs to be known. We propose an autoregressive (AR) model based unknown input realization technique which allows us to recover the input statistics from the output data by solving an appropriate least squares problem, then fit an AR model to the recovered input statistics and construct an innovations model of the unknown inputs using the eigensystem realization algorithm. The proposed algorithm is shown to outperform the augmented two-stage Kalman Filter (ASKF) and the unbiased minimum-variance (UMV) algorithm in several examples. Finally, we propose a framework to place multiple mobile sensors to optimize the long-term performance of KF in the estimation of the state of a PDE. The major challenges are that placing multiple sensors is an NP-hard problem, and the optimization problem is non-convex in general. In this dissertation, first, we construct an ROM using the RPOD* algorithm, and then reduce the feasible sensor locations into a subset using the ROM. The Information Space Receding Horizon Control (I-RHC) approach and a modified Monte Carlo Tree Search (MCTS) approach are applied to solve the sensor scheduling problem using the subset. Various applications have been provided to demonstrate the performance of the proposed approach.
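The flavor of the randomized projection idea can be seen in a generic randomized low-rank factorization used to build a small ROM basis; this is a sketch of a standard randomized SVD under illustrative data, not the RPOD*/BPOD algorithms themselves.

```python
import numpy as np

def randomized_basis(snapshots, rank, oversample=10, seed=0):
    """Low-rank basis via random sketching: project the snapshot matrix onto
    a few random directions, orthonormalize, then do a small SVD.
    Generic randomized-SVD sketch, not the RPOD*/BPOD algorithm."""
    rng = np.random.default_rng(seed)
    omega = rng.normal(size=(snapshots.shape[1], rank + oversample))
    Q, _ = np.linalg.qr(snapshots @ omega)       # sketch and orthonormalize
    U_small, s, _ = np.linalg.svd(Q.T @ snapshots, full_matrices=False)
    return (Q @ U_small)[:, :rank], s[:rank]

# Illustrative use: snapshots of a diffusing field on a 1-D grid
x = np.linspace(0.0, 1.0, 400)
times = np.linspace(0.01, 0.5, 60)
snaps = np.column_stack([np.exp(-(x - 0.5) ** 2 / (4 * 0.1 * t)) / np.sqrt(t)
                         for t in times])
basis, sing_vals = randomized_basis(snaps, rank=5)
print(basis.shape, sing_vals)   # 400x5 reduced basis; fast singular-value decay
```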
Architecture for Multi-Technology Real-Time Location Systems
Rodas, Javier; Barral, Valentín; Escudero, Carlos J.
2013-01-01
The rising popularity of location-based services has prompted considerable research in the field of indoor location systems. Since there is no single technology to support these systems, it is necessary to consider the fusion of the information coming from heterogeneous sensors. This paper presents a software architecture designed for a hybrid location system where we can merge information from multiple sensor technologies. The architecture was designed to be used by different kinds of actors independently and with mutual transparency: hardware administrators, algorithm developers and user applications. The paper presents the architecture design, work-flow, case study examples and some results to show how different technologies can be exploited to obtain a good estimation of a target position. PMID:23435050
Detecting Structural Failures Via Acoustic Impulse Responses
NASA Technical Reports Server (NTRS)
Bayard, David S.; Joshi, Sanjay S.
1995-01-01
Advanced method of acoustic pulse reflectivity testing developed for use in determining sizes and locations of failures within structures. Used to detect breaks in electrical transmission lines, detect faults in optical fibers, and determine mechanical properties of materials. In method, structure vibrationally excited with acoustic pulse (a "ping") at one location and acoustic response measured at same or different location. Measured acoustic response digitized, then processed by finite-impulse-response (FIR) filtering algorithm unique to method and based on acoustic-wave-propagation and -reflection properties of structure. Offers several advantages: does not require training, does not require prior knowledge of mathematical model of acoustic response of structure, enables detection and localization of multiple failures, and yields data on extent of damage at each location.
A combined joint diagonalization-MUSIC algorithm for subsurface targets localization
NASA Astrophysics Data System (ADS)
Wang, Yinlin; Sigman, John B.; Barrowes, Benjamin E.; O'Neill, Kevin; Shubitidze, Fridon
2014-06-01
This paper presents a combined joint diagonalization (JD) and multiple signal classification (MUSIC) algorithm for estimating subsurface objects' locations from electromagnetic induction (EMI) sensor data, without solving ill-posed inverse-scattering problems. JD is a numerical technique that finds the common eigenvectors that diagonalize a set of multistatic response (MSR) matrices measured by a time-domain EMI sensor. Eigenvalues from targets of interest (TOI) can then be distinguished automatically from noise-related eigenvalues. Filtering is also carried out in JD to improve the signal-to-noise ratio (SNR) of the data. The MUSIC algorithm utilizes the orthogonality between the signal and noise subspaces in the MSR matrix, which can be separated with information provided by JD. An array of theoretically calculated Green's functions is then projected onto the noise subspace, and the location of the target is estimated by the minimum of the projection owing to the orthogonality. This combined method is applied to data from the Time-Domain Electromagnetic Multisensor Towed Array Detection System (TEMTADS). Examples of TEMTADS test stand data and field data collected at Spencer Range, Tennessee are analyzed and presented. Results indicate that due to its noniterative mechanism, the method can be executed fast enough to provide real-time estimation of objects' locations in the field.
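A generic sketch of the MUSIC step, using a plain eigendecomposition in place of JD and a hypothetical dictionary of Green's-function (steering) vectors over candidate locations.

```python
import numpy as np

def music_spectrum(msr, steering, n_signal):
    """MUSIC pseudospectrum: project candidate steering/Green's-function
    vectors onto the noise subspace of the measured (Hermitian) MSR or
    covariance matrix; true target locations give near-zero projections,
    hence large pseudospectrum values.

    msr      : (M, M) Hermitian data matrix
    steering : (M, K) candidate vectors, one column per trial location
    n_signal : assumed number of targets of interest
    """
    eigvals, eigvecs = np.linalg.eigh(msr)          # ascending eigenvalues
    noise_sub = eigvecs[:, : msr.shape[0] - n_signal]
    proj = np.linalg.norm(noise_sub.conj().T @ steering, axis=0) ** 2
    return 1.0 / proj

# Illustrative use: 8 receivers, a hypothetical dictionary over 200 candidate
# locations, and one buried target at candidate index 120.
rng = np.random.default_rng(4)
steering = np.exp(1j * rng.uniform(0, 2 * np.pi, size=(8, 200)))
a = steering[:, 120:121]
msr = a @ a.conj().T + 1e-3 * np.eye(8)             # rank-1 signal + noise floor
spectrum = music_spectrum(msr, steering, n_signal=1)
print(int(np.argmax(spectrum)))                     # -> 120
```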
Event Reconstruction Techniques in NOvA
NASA Astrophysics Data System (ADS)
Baird, M.; Bian, J.; Messier, M.; Niner, E.; Rocco, D.; Sachdev, K.
2015-12-01
The NOvA experiment is a long-baseline neutrino oscillation experiment utilizing the NuMI beam generated at Fermilab. The experiment will measure the oscillations within a muon neutrino beam in a 300 ton Near Detector located underground at Fermilab and a functionally-identical 14 kiloton Far Detector placed 810 km away. The detectors are liquid scintillator tracking calorimeters with a fine-grained cellular structure that provides a wealth of information for separating the different particle track and shower topologies. Each detector has its own challenges with the Near Detector seeing multiple overlapping neutrino interactions in each event and the Far Detector having a large background of cosmic rays due to being located on the surface. A series of pattern recognition techniques have been developed to go from event records, to spatially and temporally separating individual interactions, to vertexing and tracking, and particle identification. This combination of methods to achieve the full event reconstruction will be discussed.
NASA Astrophysics Data System (ADS)
Solanki, K.; Hauksson, E.; Kanamori, H.; Wu, Y.; Heaton, T.; Boese, M.
2007-12-01
We have implemented an on-site early warning algorithm using the infrastructure of the Caltech/USGS Southern California Seismic Network (SCSN). We are evaluating the real-time performance of the software system and the algorithm for rapid assessment of earthquakes. In addition, we are interested in understanding what parts of the SCSN need to be improved to make early warning practical. Our EEW processing system is composed of many independent programs that process waveforms in real-time. The codes were generated by using a software framework. The Pd (maximum displacement amplitude of the P wave during the first 3 s) and Tau-c (a period parameter determined over the first 3 s) values determined during the EEW processing are being forwarded to the California Integrated Seismic Network (CISN) web page for independent evaluation of the results. The on-site algorithm measures the amplitude of the P-wave (Pd) and the frequency content of the P-wave during the first three seconds (Tau-c). The Pd and Tau-c values make it possible to discriminate between a variety of events such as large distant events, nearby small events, and potentially damaging nearby events. The Pd can be used to infer the expected maximum ground shaking. The method relies on data from a single station, although it becomes more reliable if readings from several stations are associated. To eliminate false triggers from stations with high background noise levels, we have created a per-station Pd threshold configuration for the Pd/Tau-c algorithm. To determine appropriate values for the Pd threshold we calculate Pd thresholds for stations based on the information from the EEW logs. We have operated our EEW test system for about a year and recorded numerous earthquakes in the magnitude range from M3 to M5. Two recent examples are a M4.5 earthquake near Chatsworth and a M4.7 earthquake near Elsinore. In both cases, the Pd and Tau-c parameters were determined successfully within 10 to 20 s of the arrival of the P-wave at the station. The Tau-c values predicted the magnitude within 0.1 and the predicted average peak ground motion was 0.7 cm/s and 0.6 cm/s. The delays in the system are caused mostly by the packetizing delay because our software system is based on processing miniseed packets. Most recently we have begun reducing the data latency using new qmaserv2 software for the Q330 Quanterra datalogger. We implemented qmaserv2-based multicast receiver software to receive the native 1 s packets from the dataloggers. The receiver reads multicast packets from the network and writes them into a shared memory area. This new software will take full advantage of the capabilities of the Q330 datalogger and significantly reduce data latency for the EEW system. We have also implemented a new EEW sub-system that complements the currently running EEW system by associating Pd and Tau-c values from multiple stations. So far, we have implemented a new trigger generation algorithm for real-time processing for the sub-system, and are able to routinely locate events and determine magnitudes using the Pd and Tau-c values.
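The two on-site parameters have simple time-domain forms. The sketch below uses the commonly published definitions (Pd as the peak absolute displacement and Tau-c = 2*pi*sqrt(integral of u^2 / integral of u-dot^2) over the first 3 s after the P arrival), with a synthetic record standing in for real data.

```python
import numpy as np

def pd_tauc(displacement, dt, window_s=3.0):
    """Compute Pd and Tau-c from a vertical displacement record starting at
    the P arrival, using the commonly published definitions as a sketch:
    Pd is the peak |u| and Tau-c = 2*pi*sqrt(sum(u^2) / sum(u_dot^2)) over
    the first window_s seconds."""
    n = int(window_s / dt)
    u = displacement[:n]
    u_dot = np.gradient(u, dt)
    pd = np.max(np.abs(u))
    tau_c = 2.0 * np.pi * np.sqrt(np.sum(u**2) / np.sum(u_dot**2))
    return pd, tau_c

# Synthetic check: a 1 Hz sinusoid should give Tau-c close to its 1 s period
dt = 0.01
t = np.arange(0.0, 3.0, dt)
u = 0.002 * np.sin(2 * np.pi * 1.0 * t)          # metres
print(pd_tauc(u, dt))                            # ~ (0.002, ~1.0)
```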
North Alabama Lightning Mapping Array (LMA): VHF Source Retrieval Algorithm and Error Analyses
NASA Technical Reports Server (NTRS)
Koshak, W. J.; Solakiewicz, R. J.; Blakeslee, R. J.; Goodman, S. J.; Christian, H. J.; Hall, J.; Bailey, J.; Krider, E. P.; Bateman, M. G.; Boccippio, D.
2003-01-01
Two approaches are used to characterize how accurately the North Alabama Lightning Mapping Array (LMA) is able to locate lightning VHF sources in space and in time. The first method uses a Monte Carlo computer simulation to estimate source retrieval errors. The simulation applies a VHF source retrieval algorithm that was recently developed at the NASA Marshall Space Flight Center (MSFC) and that is similar, but not identical to, the standard New Mexico Tech retrieval algorithm. The second method uses a purely theoretical technique (i.e., chi-squared Curvature Matrix Theory) to estimate retrieval errors. Both methods assume that the LMA system has an overall rms timing error of 50 ns, but all other possible errors (e.g., multiple sources per retrieval attempt) are neglected. The detailed spatial distributions of retrieval errors are provided. Given that the two methods are completely independent of one another, it is shown that they provide remarkably similar results. However, for many source locations, the Curvature Matrix Theory produces larger altitude error estimates than the (more realistic) Monte Carlo simulation.
NASA Technical Reports Server (NTRS)
Ellsworth, Joel C.
2017-01-01
During flight-testing of the National Aeronautics and Space Administration (NASA) Gulfstream III (G-III) airplane (Gulfstream Aerospace Corporation, Savannah, Georgia) SubsoniC Research Aircraft Testbed (SCRAT) between March 2013 and April 2015 it became evident that the sensor array used for stagnation point detection was not functioning as expected. The stagnation point detection system is a self-calibrating hot-film array; the calibration was unknown and varied between flights. However, the channel with the lowest power consumption was expected to correspond to the point of least surface shear. While individual channels showed the expected behavior for the hot-film sensors, more often than not the lowest power consumption occurred at a single sensor in the array (despite in-flight maneuvering) located far from the expected stagnation point. An algorithm was developed to process the available system output and determine the stagnation point location. After multiple updates and refinements, the final algorithm was not sensitive to the failure of a single sensor in the array, but adjacent failures beneath the stagnation point crippled the algorithm.
Yock, Adam D; Kim, Gwe-Ya
2017-09-01
To present the k-means clustering algorithm as a tool to address treatment planning considerations characteristic of stereotactic radiosurgery using a single isocenter for multiple targets. For 30 patients treated with stereotactic radiosurgery for multiple brain metastases, the geometric centroids and radii of each metastasis were determined from the treatment planning system. In-house software used this information, together with weighted and unweighted versions of the k-means clustering algorithm, to group the targets to be treated with a single isocenter and to position each isocenter. The algorithm results were evaluated using the within-cluster sum of squares as well as a minimum target coverage metric that considered the effect of target size. Both versions of the algorithm were applied to an example patient to demonstrate the prospective determination of the appropriate number and location of isocenters. Both weighted and unweighted versions of the k-means algorithm were applied successfully to determine the number and position of isocenters. Comparing the two, both the within-cluster sum of squares metric and the minimum target coverage metric resulting from the unweighted version were less than those from the weighted version. The average magnitudes of the differences were small (-0.2 cm² and 0.1% for the within-cluster sum of squares and minimum target coverage, respectively) but statistically significant (Wilcoxon signed-rank test, P < 0.01). The differences between the versions of the k-means clustering algorithm represented an advantage of the unweighted version for the within-cluster sum of squares metric, and an advantage of the weighted version for the minimum target coverage metric. While additional treatment planning considerations have a large influence on the final treatment plan quality, both versions of the k-means algorithm provide automatic, consistent, quantitative, and objective solutions to the tasks associated with SRS treatment planning using a single isocenter for multiple targets. © 2017 The Authors. Journal of Applied Clinical Medical Physics published by Wiley Periodicals, Inc. on behalf of American Association of Physicists in Medicine.
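A minimal weighted k-means sketch for grouping target centroids and placing isocenters; the coordinates, radii-as-weights choice, and cluster count are hypothetical, and this is not the authors' in-house software.

```python
import numpy as np

def weighted_kmeans(points, weights, k, n_iter=50, seed=0):
    """Group target centroids into k clusters and place each isocenter at the
    weighted centroid of its cluster (plain Lloyd iterations)."""
    rng = np.random.default_rng(seed)
    centers = points[rng.choice(len(points), k, replace=False)]
    for _ in range(n_iter):
        # Assign each target to its nearest isocenter
        labels = np.argmin(((points[:, None, :] - centers) ** 2).sum(-1), axis=1)
        # Move each isocenter to the weighted centroid of its targets
        for j in range(k):
            m = labels == j
            if m.any():
                centers[j] = np.average(points[m], axis=0, weights=weights[m])
    return centers, labels

# Hypothetical metastasis centroids (cm) and radii used as weights
targets = np.array([[1.0, 2.0, 0.5], [1.4, 2.2, 0.6], [6.0, 5.0, 2.0],
                    [6.3, 4.7, 2.2], [6.1, 5.3, 1.8]])
radii = np.array([0.4, 0.7, 0.5, 0.3, 0.6])
isocenters, assignment = weighted_kmeans(targets, radii, k=2)
print(isocenters)
print(assignment)
```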
Application of Subspace Detection to the 6 November 2011 M5.6 Prague, Oklahoma Aftershock Sequence
NASA Astrophysics Data System (ADS)
McMahon, N. D.; Benz, H.; Johnson, C. E.; Aster, R. C.; McNamara, D. E.
2015-12-01
Subspace detection is a powerful tool for the identification of small seismic events. Subspace detectors improve upon single-event matched filtering techniques by using multiple orthogonal waveform templates whose linear combinations characterize a range of observed signals from previously identified earthquakes. Subspace detectors running on multiple stations can significantly increase the number of locatable events, lowering the catalog's magnitude of completeness and thus providing extraordinary detail on the kinematics of the aftershock process. The 6 November 2011 M5.6 earthquake near Prague, Oklahoma is the largest earthquake instrumentally recorded in Oklahoma history and the largest earthquake resulting from deep wastewater injection. A M4.8 foreshock on 5 November 2011 and the M5.6 mainshock triggered tens of thousands of detectable aftershocks along a 20 km splay of the Wilzetta Fault Zone known as the Meeker-Prague fault. In response to this unprecedented earthquake, 21 temporary seismic stations were deployed surrounding the seismic activity. We utilized a catalog of 767 previously located aftershocks to construct subspace detectors for the 21 temporary and 10 closest permanent seismic stations. Subspace detection identified more than 500,000 new arrival-time observations, which associated into more than 20,000 locatable earthquakes. The associated earthquakes were relocated using the Bayesloc multiple-event locator, resulting in ~7,000 earthquakes with hypocentral uncertainties of less than 500 m. The relocated seismicity provides unique insight into the spatio-temporal evolution of the aftershock sequence along the Wilzetta Fault Zone and its associated structures. We find that the crystalline basement and overlying sedimentary Arbuckle formation accommodate the majority of aftershocks. While we observe aftershocks along the entire 20 km length of the Meeker-Prague fault, the vast majority of earthquakes were confined to a 9 km wide by 9 km deep surface striking N54°E and dipping 83° to the northwest near the junction of the splay with the main Wilzetta fault structure. Relocated seismicity shows off-fault stress-related interaction to distances of 10 km or more from the mainshock, including clustered seismicity to the northwest and southeast of the mainshock.
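A minimal single-channel subspace detector conveys the idea: build an orthonormal basis from aligned template waveforms via the SVD, slide it over continuous data, and flag windows where the basis captures a large fraction of the energy. All data and parameters below are synthetic.

```python
import numpy as np

def build_subspace(templates, dim):
    """Orthonormal basis spanning the dominant variations of aligned,
    normalized template waveforms (one waveform per row)."""
    T = templates / np.linalg.norm(templates, axis=1, keepdims=True)
    U, s, Vt = np.linalg.svd(T, full_matrices=False)
    return Vt[:dim]                       # (dim, n_samples) basis

def detection_statistic(data, basis):
    """Sliding fraction of window energy captured by the subspace (0 to 1)."""
    n = basis.shape[1]
    stats = np.empty(len(data) - n + 1)
    for i in range(len(stats)):
        w = data[i:i + n]
        energy = w @ w
        proj = basis @ w
        stats[i] = (proj @ proj) / energy if energy > 0 else 0.0
    return stats

# Synthetic illustration: templates are noisy copies of a common wavelet
rng = np.random.default_rng(5)
t = np.linspace(0, 1, 200)
wavelet = np.sin(2 * np.pi * 8 * t) * np.exp(-4 * t)
templates = wavelet + 0.1 * rng.normal(size=(10, 200))
basis = build_subspace(templates, dim=3)

data = 0.2 * rng.normal(size=2000)
data[1200:1400] += 0.5 * wavelet          # buried event
stat = detection_statistic(data, basis)
print(int(np.argmax(stat)))               # near sample 1200
```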
A High-Resolution View of Global Seismicity
NASA Astrophysics Data System (ADS)
Waldhauser, F.; Schaff, D. P.
2014-12-01
We present high-precision earthquake relocation results from our global-scale re-analysis of the combined seismic archives of parametric data for the years 1964 to present from the International Seismological Centre (ISC), the USGS's Earthquake Data Report (EDR), and selected waveform data from IRIS. We employed iterative, multistep relocation procedures that initially correct for large location errors present in standard global earthquake catalogs, followed by a simultaneous inversion of delay times formed from regional and teleseismic arrival times of first and later arriving phases. An efficient multi-scale double-difference (DD) algorithm is used to solve for relative event locations to the precision of a few km or less, while incorporating information on absolute hypocenter locations from catalogs such as EHB and GEM. We run the computations on both a 40-core cluster geared towards HTC problems (data processing) and a 500-core HPC cluster for data inversion. Currently, we are incorporating waveform correlation delay time measurements available for events in selected regions, but are continuously building up a comprehensive, global correlation database for densely distributed events recorded at stations with a long history of high-quality waveforms. The current global DD catalog includes nearly one million earthquakes, equivalent to approximately 70% of the number of events in the ISC/EDR catalogs initially selected for relocation. The relocations sharpen the view of seismicity in most active regions around the world, in particular along subduction zones where event density is high, but also along mid-ocean ridges where existing hypocenters are especially poorly located. The new data offers the opportunity to investigate earthquake processes and fault structures along entire plate boundaries at the ~km scale, and provides a common framework that facilitates analysis and comparisons of findings across different plate boundary systems.
NASA Astrophysics Data System (ADS)
Kodali, Anuradha
In this thesis, we develop dynamic multiple fault diagnosis (DMFD) algorithms to diagnose faults that are sporadic and coupled. Firstly, we formulate a coupled factorial hidden Markov model-based (CFHMM) framework to diagnose dependent faults occurring over time (dynamic case). Here, we implement a mixed memory Markov coupling model to determine the most likely sequence of (dependent) fault states, the one that best explains the observed test outcomes over time. An iterative Gauss-Seidel coordinate ascent optimization method is proposed for solving the problem. A soft Viterbi algorithm is also implemented within the framework for decoding dependent fault states over time. We demonstrate the algorithm on simulated and real-world systems with coupled faults; the results show that this approach improves the correct isolation rate as compared to the formulation where independent fault states are assumed. Secondly, we formulate a generalization of set-covering, termed dynamic set-covering (DSC), which involves a series of coupled set-covering problems over time. The objective of the DSC problem is to infer the most probable time sequence of a parsimonious set of failure sources that explains the observed test outcomes over time. The DSC problem is NP-hard and intractable due to the fault-test dependency matrix that couples the failed tests and faults via the constraint matrix, and the temporal dependence of failure sources over time. Here, the DSC problem is motivated from the viewpoint of a dynamic multiple fault diagnosis problem, but it has wide applications in operations research, e.g., the facility location problem. Thus, we also formulated the DSC problem in the context of a dynamically evolving facility location problem. Here, a facility can be opened, closed, or can be temporarily unavailable at any time for a given requirement of demand points. These activities are associated with costs or penalties, viz., phase-in or phase-out for the opening or closing of a facility, respectively. The set-covering matrix encapsulates the relationship among the rows (tests or demand points) and columns (faults or locations) of the system at each time. By relaxing the coupling constraints using Lagrange multipliers, the DSC problem can be decoupled into independent subproblems, one for each column. Each subproblem is solved using the Viterbi decoding algorithm, and a primal feasible solution is constructed by modifying the Viterbi solutions via a heuristic. The proposed Viterbi-Lagrangian relaxation algorithm (VLRA) provides a measure of suboptimality via an approximate duality gap. As a major practical extension of the above problem, we also consider the problem of diagnosing faults with delayed test outcomes, termed delay-dynamic set-covering (DDSC), and experiment with real-world problems that exhibit masking faults. Also, we present simulation results on OR-library datasets (set-covering formulations are predominantly validated on these matrices in the literature), posed as facility location problems. Finally, we implement these algorithms to solve problems in aerospace and automotive applications. Firstly, we address the diagnostic ambiguity problem in aerospace and automotive applications by developing a dynamic fusion framework that includes dynamic multiple fault diagnosis algorithms.
This improves the correct fault isolation rate, while minimizing the false alarm rates, by considering multiple faults instead of the traditional data-driven techniques based on a single-fault (class), single-epoch (static) assumption. The dynamic fusion problem is formulated as a maximum a posteriori decision problem of inferring the fault sequence based on uncertain outcomes of multiple binary classifiers over time. The fusion process involves three steps: the first step transforms the multi-class problem into dichotomies using error correcting output codes (ECOC), thereby defining the concomitant binary classification problems; the second step fuses the outcomes of multiple binary classifiers over time using a sliding window or block dynamic fusion method that exploits temporal data correlations over time. We solve this NP-hard optimization problem via a Lagrangian relaxation (variational) technique. The third step optimizes the classifier parameters, viz., probabilities of detection and false alarm, using a genetic algorithm. The proposed algorithm is demonstrated by computing the diagnostic performance metrics on a twin-spool commercial jet engine, an automotive engine, and UCI datasets (problems with high classification error are specifically chosen for experimentation). We show that the primal-dual optimization framework performed consistently better than any traditional fusion technique, even when it is forced to give a single fault decision across a range of classification problems. Secondly, we implement the inference algorithms to diagnose faults in vehicle systems that are controlled by a network of electronic control units (ECUs). The faults, originating from various interactions, especially between hardware and software, are particularly challenging to address. Our basic strategy is to divide the fault universe of such cyber-physical systems in a hierarchical manner, and monitor the critical variables/signals that have impact at different levels of interactions. The proposed diagnostic strategy is validated on an electrical power generation and storage system (EPGS) controlled by two ECUs in a CANoe/MATLAB co-simulation environment. Eleven faults are injected, with the failures originating in actuator hardware, sensor, controller hardware and software components. A diagnostic matrix is established to represent the relationship between the faults and the test outcomes (also known as fault signatures) via simulations. The results show that the proposed diagnostic strategy is effective in addressing the interaction-caused faults.
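A generic Viterbi decoder for a single fault's binary state sequence, given per-epoch observation log-likelihoods, illustrates the decoding step used in these formulations; the transition and likelihood values are hypothetical and this is not the coupled CFHMM/VLRA code.

```python
import numpy as np

def viterbi_binary(log_lik, log_trans, log_prior):
    """Most likely sequence of a binary fault state (0 = healthy, 1 = faulty).

    log_lik   : (T, 2) log-likelihood of the test outcomes at each epoch
                given each state
    log_trans : (2, 2) log transition matrix, log_trans[i, j] = log P(j | i)
    log_prior : (2,) log initial state probabilities
    """
    T = log_lik.shape[0]
    delta = log_prior + log_lik[0]
    back = np.zeros((T, 2), dtype=int)
    for t in range(1, T):
        scores = delta[:, None] + log_trans          # (from, to)
        back[t] = np.argmax(scores, axis=0)
        delta = scores[back[t], [0, 1]] + log_lik[t]
    path = np.zeros(T, dtype=int)
    path[-1] = int(np.argmax(delta))
    for t in range(T - 1, 0, -1):
        path[t - 1] = back[t, path[t]]
    return path

# Hypothetical example: a fault appears around epoch 4 and persists
log_lik = np.log(np.array([[0.9, 0.1], [0.8, 0.2], [0.7, 0.3], [0.3, 0.7],
                           [0.2, 0.8], [0.1, 0.9], [0.2, 0.8]]))
log_trans = np.log(np.array([[0.95, 0.05], [0.10, 0.90]]))
log_prior = np.log(np.array([0.99, 0.01]))
print(viterbi_binary(log_lik, log_trans, log_prior))  # [0 0 0 1 1 1 1]
```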
A Fuzzy-Decision Based Approach for Composite Event Detection in Wireless Sensor Networks
Zhang, Shukui; Chen, Hao; Zhu, Qiaoming
2014-01-01
Event detection is one of the fundamental research topics in wireless sensor networks (WSNs). Because it considers various properties that reflect an event's status, the composite event is more consistent with the objective world, making composite-event research more realistic. In this paper, we analyze the characteristics of composite events; then we propose a criterion to determine the area of a composite event and put forward a dominating-set based network topology construction algorithm under random deployment. To address the unreliability of part of the data in the detection process and the inherent fuzziness of event definitions, we propose a cluster-based two-dimensional τ-GAS algorithm and a fuzzy-decision based composite event decision mechanism. In the case that the sensory data of most nodes are normal, the two-dimensional τ-GAS algorithm can filter fault node data effectively and reduce the influence of erroneous data on the event determination. The composite event judgment mechanism, which is based on fuzzy decisions, retains the advantages of fuzzy-logic based algorithms; moreover, it does not need the support of a huge rule base and its computational complexity is small. Compared to the CollECT algorithm and the CDS algorithm, this algorithm improves the detection accuracy and reduces the traffic. PMID:25136690
Earthquake Monitoring with the MyShake Global Smartphone Seismic Network
NASA Astrophysics Data System (ADS)
Inbal, A.; Kong, Q.; Allen, R. M.; Savran, W. H.
2017-12-01
Smartphone arrays have the potential for significantly improving seismic monitoring in sparsely instrumented urban areas. This approach benefits from the dense spatial coverage of users, as well as from communication and computational capabilities built into smartphones, which facilitate big seismic data transfer and analysis. Advantages in data acquisition with smartphones trade off with factors such as the low-quality sensors installed in phones, high noise levels, and strong network heterogeneity, all of which limit effective seismic monitoring. Here we utilize network and array-processing schemes to assess event detectability with the MyShake global smartphone network. We examine the benefits of using this network in either triggered or continuous modes of operation. A global database of ground motions measured on stationary phones triggered by M2-6 events is used to establish detection probabilities. We find that the probability of detecting an M=3 event with a single phone located <10 km from the epicenter exceeds 70%. Due to the sensor's self-noise, smaller magnitude events at short epicentral distances are very difficult to detect. To increase the signal-to-noise ratio, we employ array back-projection techniques on continuous data recorded by thousands of phones. In this class of methods, the array is used as a spatial filter that suppresses signals emitted from shallow noise sources. Filtered traces are stacked to further enhance seismic signals from deep sources. We benchmark our technique against traditional location algorithms using recordings from California, a region with a large MyShake user database. We find that locations derived from back-projection images of M 3 events recorded by >20 nearby phones closely match the regional catalog locations. We use simulated broadband seismic data to examine how location uncertainties vary with user distribution and noise levels. To this end, we have developed an empirical noise model for the metropolitan Los Angeles (LA) area. We find that densities larger than 100 stationary phones/km² are required to accurately locate M 2 events in the LA basin. Given the projected MyShake user distribution, that condition may be met within the next few years.
Investigating the Use of the Intel Xeon Phi for Event Reconstruction
NASA Astrophysics Data System (ADS)
Sherman, Keegan; Gilfoyle, Gerard
2014-09-01
The physics goal of Jefferson Lab is to understand how quarks and gluons form nuclei and it is being upgraded to a higher, 12-GeV beam energy. The new CLAS12 detector in Hall B will collect 5-10 terabytes of data per day and will require considerable computing resources. We are investigating tools, such as the Intel Xeon Phi, to speed up the event reconstruction. The Kalman Filter is one of the methods being studied. It is a linear algebra algorithm that estimates the state of a system by combining existing data and predictions of those measurements. The tools required to apply this technique (i.e. matrix multiplication, matrix inversion) are being written using C++ intrinsics for Intel's Xeon Phi Coprocessor, which uses the Many Integrated Cores (MIC) architecture. The Intel MIC is a new high-performance chip that connects to a host machine through the PCIe bus and is built to run highly vectorized and parallelized code making it a well-suited device for applications such as the Kalman Filter. Our tests of the MIC optimized algorithms needed for the filter show significant increases in speed. For example, matrix multiplication of 5x5 matrices on the MIC was able to run up to 69 times faster than the host core. Work supported by the University of Richmond and the US Department of Energy.
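Independent of the Xeon Phi specifics, the filter step being vectorized reduces to a handful of small-matrix operations; a plain sketch of one predict/update cycle for a generic linear model (not the CLAS12 track model) is shown below.

```python
import numpy as np

def kalman_step(x, P, z, F, H, Q, R):
    """One Kalman predict/update: combine the model prediction with a new
    measurement. Every operation is a small matrix multiply or inversion,
    which is what MIC-optimized 5x5 routines would accelerate."""
    # Predict
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    # Update
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)
    x_new = x_pred + K @ (z - H @ x_pred)
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new

# Illustrative 1-D constant-velocity track measured in position only
dt = 1.0
F = np.array([[1.0, dt], [0.0, 1.0]])
H = np.array([[1.0, 0.0]])
Q = 1e-3 * np.eye(2)
R = np.array([[0.25]])
x, P = np.zeros(2), np.eye(2)
for z in [1.1, 2.0, 2.9, 4.2, 5.0]:
    x, P = kalman_step(x, P, np.array([z]), F, H, Q, R)
print(x)   # position and velocity estimates approaching ~5 and ~1
```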
Angelis, G I; Reader, A J; Markiewicz, P J; Kotasidis, F A; Lionheart, W R; Matthews, J C
2013-08-07
Recent studies have demonstrated the benefits of a resolution model within iterative reconstruction algorithms in an attempt to account for effects that degrade the spatial resolution of the reconstructed images. However, these algorithms suffer from slower convergence rates, compared to algorithms where no resolution model is used, due to the additional need to solve an image deconvolution problem. In this paper, a recently proposed algorithm, which decouples the tomographic and image deconvolution problems within an image-based expectation maximization (EM) framework, was evaluated. This separation is convenient, because more computational effort can be placed on the image deconvolution problem and therefore accelerate convergence. Since the computational cost of solving the image deconvolution problem is relatively small, multiple image-based EM iterations do not significantly increase the overall reconstruction time. The proposed algorithm was evaluated using 2D simulations, as well as measured 3D data acquired on the high-resolution research tomograph. Results showed that bias reduction can be accelerated by interleaving multiple iterations of the image-based EM algorithm, solving the resolution model problem, with a single EM iteration solving the tomographic problem. Significant improvements were observed particularly for voxels located on the boundaries between regions of high contrast within the object being imaged and for small regions of interest, where resolution recovery is usually more challenging. Minor differences were observed using the proposed nested algorithm, compared to the single iteration normally performed, when an optimal number of iterations is performed for each algorithm. However, using the proposed nested approach convergence is significantly accelerated, enabling reconstruction using far fewer tomographic iterations (up to 70% fewer iterations for small regions). Nevertheless, the optimal number of nested image-based EM iterations is hard to define and should be selected according to the given application.
Evolving Non-Dominated Parameter Sets for Computational Models from Multiple Experiments
NASA Astrophysics Data System (ADS)
Lane, Peter C. R.; Gobet, Fernand
2013-03-01
Creating robust, reproducible and optimal computational models is a key challenge for theorists in many sciences. Psychology and cognitive science face particular challenges as large amounts of data are collected and many models are not amenable to analytical techniques for calculating parameter sets. Particular problems are to locate the full range of acceptable model parameters for a given dataset, and to confirm the consistency of model parameters across different datasets. Resolving these problems will provide a better understanding of the behaviour of computational models, and so support the development of general and robust models. In this article, we address these problems using evolutionary algorithms to develop parameters for computational models against multiple sets of experimental data; in particular, we propose the 'speciated non-dominated sorting genetic algorithm' for evolving models in several theories. We discuss the problem of developing a model of categorisation using twenty-nine sets of data and models drawn from four different theories. We find that the evolutionary algorithms generate high quality models, adapted to provide a good fit to all available data.
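The non-dominated filtering at the heart of such multi-objective evolution is easy to state; a minimal sketch treating fit errors on multiple datasets as objectives to minimize (not the speciated NSGA variant itself):

```python
import numpy as np

def non_dominated(objectives):
    """Return indices of the non-dominated parameter sets.

    objectives: (n_models, n_datasets) array of fit errors to minimize,
    one column per experimental dataset."""
    n = len(objectives)
    keep = []
    for i in range(n):
        dominated = any(
            np.all(objectives[j] <= objectives[i]) and
            np.any(objectives[j] < objectives[i])
            for j in range(n) if j != i
        )
        if not dominated:
            keep.append(i)
    return keep

# Hypothetical fit errors of five candidate parameter sets on three datasets
errs = np.array([[0.10, 0.30, 0.20],
                 [0.12, 0.25, 0.22],
                 [0.20, 0.40, 0.30],   # dominated by the first set
                 [0.08, 0.35, 0.25],
                 [0.15, 0.20, 0.40]])
print(non_dominated(errs))             # [0, 1, 3, 4]
```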
A Fusion Model of Seismic and Hydro-Acoustic Propagation for Treaty Monitoring
NASA Astrophysics Data System (ADS)
Arora, Nimar; Prior, Mark
2014-05-01
We present an extension to NET-VISA (Network Processing Vertically Integrated Seismic Analysis), which is a probabilistic generative model of the propagation of seismic waves and their detection on a global scale, to incorporate hydro-acoustic data from the IMS (International Monitoring System) network. The new model includes the coupling of seismic waves into the ocean's SOFAR channel, as well as the propagation of hydro-acoustic waves from underwater explosions. The generative model is described in terms of multiple possible hypotheses -- seismic-to-hydro-acoustic, under-water explosion, other noise sources such as whales singing or icebergs breaking up -- that could lead to signal detections. We decompose each hypothesis into conditional probability distributions that are carefully analyzed and calibrated. These distributions include ones for detection probabilities, blockage in the SOFAR channel (including diffraction, refraction, and reflection around obstacles), energy attenuation, and other features of the resulting waveforms. We present a study of the various features that are extracted from the hydro-acoustic waveforms, and their correlations with each other as well as with the source of the energy. Additionally, an inference algorithm is presented that concurrently infers the seismic and under-water events, and associates all arrivals (aka triggers), both from seismic and hydro-acoustic stations, to the appropriate event, and labels the path taken by the wave. Finally, our results demonstrate that this fusion of seismic and hydro-acoustic data leads to very good performance. A majority of the under-water events that IDC (International Data Center) analysts built in 2010 are correctly located, and the arrivals that correspond to seismic-to-hydroacoustic coupling, the T phases, are mostly correctly identified. There is no loss in the accuracy of seismic events; in fact, there is a slight overall improvement.
NASA Astrophysics Data System (ADS)
Lindquist, Kent Gordon
We constructed a near-real-time system, called Iceworm, to automate seismic data collection, processing, storage, and distribution at the Alaska Earthquake Information Center (AEIC). Phase-picking, phase association, and interprocess communication components come from Earthworm (U.S. Geological Survey). A new generic, internal format for digital data supports unified handling of data from diverse sources. A new infrastructure for applying processing algorithms to near-real-time data streams supports automated information extraction from seismic wavefields. Integration of Datascope (U. of Colorado) provides relational database management of all automated measurements, parametric information for located hypocenters, and waveform data from Iceworm. Data from 1997 yield 329 earthquakes located by both Iceworm and the AEIC. Of these, 203 have location residuals under 22 km, sufficient for hazard response. Regionalized inversions for local magnitude in Alaska yield M_L calibration curves (log A_0) that differ from the Californian Richter magnitude. The new curve is 0.2 M_L units more attenuative than the Californian curve at 400 km for earthquakes north of the Denali fault. South of the fault, and for a region north of Cook Inlet, the difference is 0.4 M_L. A curve for deep events differs by 0.6 M_L at 650 km. We expand geographic coverage of Alaskan regional seismic monitoring to the Aleutians, the Bering Sea, and the entire Arctic by initiating the processing of four short-period, Alaskan seismic arrays. To show the array stations' sensitivity, we detect and locate two microearthquakes that were missed by the AEIC. An empirical study of the location sensitivity of the arrays predicts improvements over the Alaskan regional network that are shown as map-view contour plots. We verify these predictions by detecting an M_L 3.2 event near Unimak Island with one array. The detection and location of four representative earthquakes illustrates the expansion of geographic coverage from array processing. Measurements at the arrays of systematic azimuth residuals, between 5° and 50° from 203 Aleutian events, reveal significant effects of heterogeneous structure on wavefields. Finally, algorithms to automatically detect earthquakes in continuous array data are demonstrated with the detection of an Aleutian earthquake.
Optimal Rate Schedules with Data Sharing in Energy Harvesting Communication Systems.
Wu, Weiwei; Li, Huafan; Shan, Feng; Zhao, Yingchao
2017-12-20
Despite the abundant research on energy-efficient rate scheduling policies in energy harvesting communication systems, few works have exploited data sharing among multiple applications to further enhance the energy utilization efficiency, considering that the harvested energy from environments is limited and unstable. In this paper, to overcome the energy shortage of wireless devices in transmitting data to a platform running multiple applications/requesters, we design rate scheduling policies to respond to data requests as soon as possible by encouraging data sharing among data requests and reducing the redundancy. We formulate the problem as a transmission completion time minimization problem under constraints of dynamic data requests and energy arrivals. We develop offline and online algorithms to solve this problem. For the offline setting, we discover the relationship between two problems: the completion time minimization problem and the energy consumption minimization problem with a given completion time. We first derive the optimal algorithm for the min-energy problem and then adopt it as a building block to compute the optimal solution for the min-completion-time problem. For the online setting without future information, we develop an event-driven online algorithm to complete the transmission as soon as possible. Simulation results validate the efficiency of the proposed algorithm.
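A minimal sketch of the offline decomposition described above, under simplifying assumptions (a single energy arrival at time zero and a convex rate-power curve p(r) = 2^r - 1, for which the energy-optimal schedule over a fixed horizon is a constant rate): the min-energy routine is used as a feasibility oracle inside a binary search for the earliest completion time. The paper's general multi-arrival, data-sharing setting is more involved than this illustration.

```python
import math

def min_energy(bits, T, power=lambda r: 2.0 ** r - 1.0):
    """Minimum energy to deliver `bits` by deadline T when all energy is
    available up front: with a convex rate-power curve, the optimum is a
    constant rate bits/T."""
    if T <= 0:
        return math.inf
    rate = bits / T
    return T * power(rate)

def min_completion_time(bits, energy_budget, hi=1e6, tol=1e-6):
    """Binary search for the earliest deadline whose min-energy schedule
    fits the harvested energy; min_energy is decreasing in T, so feasible
    deadlines form an interval [T*, infinity)."""
    if min_energy(bits, hi) > energy_budget:
        return math.inf              # not enough energy at any practical deadline
    lo = 0.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if min_energy(bits, mid) <= energy_budget:
            hi = mid
        else:
            lo = mid
    return hi

# Example: 4 units of (shared) request data, 5 units of harvested energy.
print(min_completion_time(4.0, 5.0))
```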
Optimal Rate Schedules with Data Sharing in Energy Harvesting Communication Systems
Wu, Weiwei; Li, Huafan; Shan, Feng; Zhao, Yingchao
2017-01-01
Despite the abundant research on energy-efficient rate scheduling policies in energy harvesting communication systems, few works have exploited data sharing among multiple applications to further enhance the energy utilization efficiency, considering that the harvested energy from environments is limited and unstable. In this paper, to overcome the energy shortage of wireless devices in transmitting data to a platform running multiple applications/requesters, we design rate scheduling policies to respond to data requests as soon as possible by encouraging data sharing among data requests and reducing the redundancy. We formulate the problem as a transmission completion time minimization problem under constraints of dynamic data requests and energy arrivals. We develop offline and online algorithms to solve this problem. For the offline setting, we discover the relationship between two problems: the completion time minimization problem and the energy consumption minimization problem with a given completion time. We first derive the optimal algorithm for the min-energy problem and then adopt it as a building block to compute the optimal solution for the min-completion-time problem. For the online setting without future information, we develop an event-driven online algorithm to complete the transmission as soon as possible. Simulation results validate the efficiency of the proposed algorithm. PMID:29261135
Map based navigation for autonomous underwater vehicles
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tuohy, S.T.; Leonard, J.J.; Bellingham, J.G.
1995-12-31
In this work, a map based navigation algorithm is developed wherein measured geophysical properties are matched to a priori maps. The objective is a complete algorithm applicable to a small, power-limited AUV which performs in real time to a required resolution with bounded position error. Interval B-Splines are introduced for the non-linear representation of two-dimensional geophysical parameters that have measurement uncertainty. Fine-scale position determination involves the solution of a system of nonlinear polynomial equations with interval coefficients. This system represents the complete set of possible vehicle locations and is formulated as the intersection of contours established on each map from the simultaneous measurement of associated geophysical parameters. A standard filter mechanism, based on a bounded interval error model, predicts the position of the vehicle and, therefore, screens extraneous solutions. When multiple solutions are found, a tracking mechanism is applied until a unique vehicle location is determined.
Reed Solomon codes for error control in byte organized computer memory systems
NASA Technical Reports Server (NTRS)
Lin, S.; Costello, D. J., Jr.
1984-01-01
A problem in designing semiconductor memories is to provide some measure of error control without requiring excessive coding overhead or decoding time. In LSI and VLSI technology, memories are often organized on a multiple bit (or byte) per chip basis. For example, some 256K-bit DRAM's are organized in 32Kx8 bit-bytes. Byte oriented codes such as Reed Solomon (RS) codes can provide efficient low overhead error control for such memories. However, the standard iterative algorithm for decoding RS codes is too slow for these applications. Some special decoding techniques for extended single- and double-error-correcting RS codes which are capable of high speed operation are presented. These techniques are designed to find the error locations and the error values directly from the syndrome without having to use the iterative algorithm to find the error locator polynomial.
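The syndrome-direct idea can be illustrated for the simplest case. The sketch below is not the extended-RS technique of the paper; it only shows how, for a single byte error in a code over GF(2^8) (primitive polynomial 0x11d and generator roots starting at alpha^0 are assumptions), the error value and location fall straight out of two syndromes with no iterative error-locator step.

```python
# GF(2^8) log/antilog tables (primitive polynomial x^8+x^4+x^3+x^2+1 = 0x11d).
EXP = [0] * 512
LOG = [0] * 256
x = 1
for i in range(255):
    EXP[i] = x
    LOG[x] = i
    x <<= 1
    if x & 0x100:
        x ^= 0x11d
for i in range(255, 512):
    EXP[i] = EXP[i - 255]

def syndromes(received, n_syn=2):
    """S_j = sum_i r_i * alpha^(i*j). A valid codeword has all S_j = 0, so the
    syndromes of a corrupted word depend only on the error pattern."""
    out = []
    for j in range(n_syn):
        s = 0
        for i, r in enumerate(received):
            if r:
                s ^= EXP[(LOG[r] + i * j) % 255]
        out.append(s)
    return out

def correct_single_byte(received):
    """Assuming at most one byte error: the error value is S0 and its position
    is log(S1) - log(S0) (mod 255), read directly from the syndromes."""
    s0, s1 = syndromes(received)
    if s0 == 0 and s1 == 0:
        return received                      # no error detected
    pos = (LOG[s1] - LOG[s0]) % 255
    fixed = list(received)
    fixed[pos] ^= s0                         # subtract (XOR) the error value
    return fixed

# Demo: start from the all-zero codeword (always valid), inject one byte error.
word = [0] * 16
word[5] ^= 0x37
print(correct_single_byte(word)[5] == 0)     # True -> error removed
```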
Decoding of DBEC-TBED Reed-Solomon codes. [Double-Byte-Error-Correcting, Triple-Byte-Error-Detecting
NASA Technical Reports Server (NTRS)
Deng, Robert H.; Costello, Daniel J., Jr.
1987-01-01
A problem in designing semiconductor memories is to provide some measure of error control without requiring excessive coding overhead or decoding time. In LSI and VLSI technology, memories are often organized on a multiple bit (or byte) per chip basis. For example, some 256 K bit DRAM's are organized in 32 K x 8 bit-bytes. Byte-oriented codes such as Reed-Solomon (RS) codes can provide efficient low overhead error control for such memories. However, the standard iterative algorithm for decoding RS codes is too slow for these applications. The paper presents a special decoding technique for double-byte-error-correcting, triple-byte-error-detecting RS codes which is capable of high-speed operation. This technique is designed to find the error locations and the error values directly from the syndrome without having to use the iterative algorithm to find the error locator polynomial.
Visualization of Spatio-Temporal Relations in Movement Event Using Multi-View
NASA Astrophysics Data System (ADS)
Zheng, K.; Gu, D.; Fang, F.; Wang, Y.; Liu, H.; Zhao, W.; Zhang, M.; Li, Q.
2017-09-01
Spatio-temporal relations among movement events extracted from temporally varying trajectory data can provide useful information about the evolution of individual or collective movers, as well as their interactions with their spatial and temporal contexts. However, the pure statistical tools commonly used by analysts pose many difficulties, due to the large number of attributes embedded in multi-scale and multi-semantic trajectory data. The need for models that operate at multiple scales to search for relations at different locations within time and space, as well as intuitively interpret what these relations mean, also presents challenges. Since analysts do not know where or when these relevant spatio-temporal relations might emerge, these models must compute statistical summaries of multiple attributes at different granularities. In this paper, we propose a multi-view approach to visualize the spatio-temporal relations among movement events. We describe a method for visualizing movement events and spatio-temporal relations that uses multiple displays. A visual interface is presented, and the user can interactively select or filter spatial and temporal extents to guide the knowledge discovery process. We also demonstrate how this approach can help analysts to derive and explain the spatio-temporal relations of movement events from taxi trajectory data.
Prediction of monthly rainfall in Victoria, Australia: Clusterwise linear regression approach
NASA Astrophysics Data System (ADS)
Bagirov, Adil M.; Mahmood, Arshad; Barton, Andrew
2017-05-01
This paper develops the Clusterwise Linear Regression (CLR) technique for prediction of monthly rainfall. The CLR is a combination of clustering and regression techniques. It is formulated as an optimization problem and an incremental algorithm is designed to solve it. The algorithm is applied to predict monthly rainfall in Victoria, Australia using rainfall data with five input meteorological variables over the period of 1889-2014 from eight geographically diverse weather stations. The prediction performance of the CLR method is evaluated by comparing observed and predicted rainfall values using four measures of forecast accuracy. The proposed method is also compared with the CLR using the maximum likelihood framework by the expectation-maximization algorithm, multiple linear regression, artificial neural networks and the support vector machines for regression models using computational results. The results demonstrate that the proposed algorithm outperforms other methods in most locations.
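As a rough illustration of clusterwise linear regression (not the incremental nonsmooth-optimization algorithm of the paper), a hard-assignment alternating scheme already conveys the idea: each sample is assigned to the regression model that fits it best, and each model is then refit to its members.

```python
import numpy as np

def clusterwise_linear_regression(X, y, k=3, n_iter=50, seed=0):
    """Generic alternating CLR: assign each sample to the linear model with
    the smallest squared residual, then refit each model by least squares."""
    rng = np.random.default_rng(seed)
    n = len(y)
    Xb = np.column_stack([X, np.ones(n)])          # add intercept column
    labels = rng.integers(0, k, size=n)            # random initial partition
    coefs = np.zeros((k, Xb.shape[1]))
    for _ in range(n_iter):
        for j in range(k):                         # refit each cluster's model
            mask = labels == j
            if mask.sum() >= Xb.shape[1]:
                coefs[j], *_ = np.linalg.lstsq(Xb[mask], y[mask], rcond=None)
        resid = (Xb @ coefs.T - y[:, None]) ** 2   # n x k squared residuals
        new_labels = resid.argmin(axis=1)
        if np.array_equal(new_labels, labels):
            break
        labels = new_labels
    return coefs, labels

# Toy usage: two regimes generated from different linear models.
X = np.random.default_rng(1).uniform(0, 10, size=(200, 1))
y = np.where(X[:, 0] < 5, 2 * X[:, 0] + 1, -X[:, 0] + 20)
coefs, labels = clusterwise_linear_regression(X, y, k=2)
```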
Evaluation of Automatically Assigned Job-Specific Interview Modules
Friesen, Melissa C.; Lan, Qing; Ge, Calvin; Locke, Sarah J.; Hosgood, Dean; Fritschi, Lin; Sadkowsky, Troy; Chen, Yu-Cheng; Wei, Hu; Xu, Jun; Lam, Tai Hing; Kwong, Yok Lam; Chen, Kexin; Xu, Caigang; Su, Yu-Chieh; Chiu, Brian C. H.; Ip, Kai Ming Dennis; Purdue, Mark P.; Bassig, Bryan A.; Rothman, Nat; Vermeulen, Roel
2016-01-01
Objective: In community-based epidemiological studies, job- and industry-specific ‘modules’ are often used to systematically obtain details about the subject’s work tasks. The module assignment is often made by the interviewer, who may have insufficient occupational hygiene knowledge to assign the correct module. We evaluated, in the context of a case–control study of lymphoid neoplasms in Asia (‘AsiaLymph’), the performance of an algorithm that provided automatic, real-time module assignment during a computer-assisted personal interview. Methods: AsiaLymph’s occupational component began with a lifetime occupational history questionnaire with free-text responses and three solvent exposure screening questions. To assign each job to one of 23 study-specific modules, an algorithm automatically searched the free-text responses to the questions ‘job title’ and ‘product made or services provided by employer’ using a list of module-specific keywords, comprising over 5800 keywords in English, Traditional and Simplified Chinese. Hierarchical decision rules were used when the keyword match triggered multiple modules. If no keyword match was identified, a generic solvent module was assigned if the subject responded ‘yes’ to any of the three solvent screening questions. If these question responses were all ‘no’, a work location module was assigned, which redirected the subject to the farming, teaching, health professional, solvent, or industry solvent modules or ended the questions for that job, depending on the location response. We conducted a reliability assessment that compared the algorithm-assigned modules to consensus module assignments made by two industrial hygienists for a subset of 1251 (of 11409) jobs selected using a stratified random selection procedure using module-specific strata. Discordant assignments between the algorithm and consensus assignments (483 jobs) were qualitatively reviewed by the hygienists to evaluate the potential information lost from missed questions with using the algorithm-assigned module (none, low, medium, high). Results: The most frequently assigned modules were the work location (33%), solvent (20%), farming and food industry (19%), and dry cleaning and textile industry (6.4%) modules. In the reliability subset, the algorithm assignment had an exact match to the expert consensus-assigned module for 722 (57.7%) of the 1251 jobs. Overall, adjusted for the proportion of jobs in each stratum, we estimated that 86% of the algorithm-assigned modules would result in no information loss, 2% would have low information loss, and 12% would have medium to high information loss. Medium to high information loss occurred for <10% of the jobs assigned the generic solvent module and for 21, 32, and 31% of the jobs assigned the work location module with location responses of ‘someplace else’, ‘factory’, and ‘don’t know’, respectively. Other work location responses had ≤8% with medium to high information loss because of redirections to other modules. Medium to high information loss occurred more frequently when a job description matched with multiple keywords pointing to different modules (29–69%, depending on the triggered assignment rule). Conclusions: These evaluations demonstrated that automatically assigned modules can reliably reproduce an expert’s module assignment without the direct involvement of an industrial hygienist or interviewer. The feasibility of adapting this framework to other studies will be language- and exposure-specific. PMID:27250109
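A minimal sketch of the assignment logic described above, with hypothetical keyword lists and a made-up priority order standing in for the study's more than 5800 keywords and hierarchical decision rules: keyword match first, tie-break by priority, then the solvent-screening fallback and the work-location fallback.

```python
# Hypothetical, heavily truncated keyword lists (the study used >5800
# module-specific keywords in English, Traditional and Simplified Chinese).
MODULE_KEYWORDS = {
    "farming_food": ["farm", "farmer", "orchard", "food processing"],
    "dry_cleaning_textile": ["dry clean", "textile", "dye"],
    "health_professional": ["nurse", "physician", "dentist"],
    "teaching": ["teacher", "professor", "school"],
}
# Stand-in for the hierarchical decision rules used when several modules match.
MODULE_PRIORITY = ["health_professional", "dry_cleaning_textile",
                   "farming_food", "teaching"]

def assign_module(job_title, employer_products, solvent_screen_answers):
    text = f"{job_title} {employer_products}".lower()
    hits = {m for m, words in MODULE_KEYWORDS.items()
            if any(w in text for w in words)}
    if hits:
        # hierarchical rule: first matching module in the priority list
        return next(m for m in MODULE_PRIORITY if m in hits)
    if any(solvent_screen_answers):
        return "generic_solvent"      # any 'yes' on the 3 screening questions
    return "work_location"            # fallback; redirects based on location

print(assign_module("registered nurse", "hospital services", [False, False, False]))
print(assign_module("machine operator", "plastics", [True, False, False]))
```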
Evaluation of Automatically Assigned Job-Specific Interview Modules.
Friesen, Melissa C; Lan, Qing; Ge, Calvin; Locke, Sarah J; Hosgood, Dean; Fritschi, Lin; Sadkowsky, Troy; Chen, Yu-Cheng; Wei, Hu; Xu, Jun; Lam, Tai Hing; Kwong, Yok Lam; Chen, Kexin; Xu, Caigang; Su, Yu-Chieh; Chiu, Brian C H; Ip, Kai Ming Dennis; Purdue, Mark P; Bassig, Bryan A; Rothman, Nat; Vermeulen, Roel
2016-08-01
In community-based epidemiological studies, job- and industry-specific 'modules' are often used to systematically obtain details about the subject's work tasks. The module assignment is often made by the interviewer, who may have insufficient occupational hygiene knowledge to assign the correct module. We evaluated, in the context of a case-control study of lymphoid neoplasms in Asia ('AsiaLymph'), the performance of an algorithm that provided automatic, real-time module assignment during a computer-assisted personal interview. AsiaLymph's occupational component began with a lifetime occupational history questionnaire with free-text responses and three solvent exposure screening questions. To assign each job to one of 23 study-specific modules, an algorithm automatically searched the free-text responses to the questions 'job title' and 'product made or services provided by employer' using a list of module-specific keywords, comprising over 5800 keywords in English, Traditional and Simplified Chinese. Hierarchical decision rules were used when the keyword match triggered multiple modules. If no keyword match was identified, a generic solvent module was assigned if the subject responded 'yes' to any of the three solvent screening questions. If these question responses were all 'no', a work location module was assigned, which redirected the subject to the farming, teaching, health professional, solvent, or industry solvent modules or ended the questions for that job, depending on the location response. We conducted a reliability assessment that compared the algorithm-assigned modules to consensus module assignments made by two industrial hygienists for a subset of 1251 (of 11409) jobs selected using a stratified random selection procedure using module-specific strata. Discordant assignments between the algorithm and consensus assignments (483 jobs) were qualitatively reviewed by the hygienists to evaluate the potential information lost from missed questions with using the algorithm-assigned module (none, low, medium, high). The most frequently assigned modules were the work location (33%), solvent (20%), farming and food industry (19%), and dry cleaning and textile industry (6.4%) modules. In the reliability subset, the algorithm assignment had an exact match to the expert consensus-assigned module for 722 (57.7%) of the 1251 jobs. Overall, adjusted for the proportion of jobs in each stratum, we estimated that 86% of the algorithm-assigned modules would result in no information loss, 2% would have low information loss, and 12% would have medium to high information loss. Medium to high information loss occurred for <10% of the jobs assigned the generic solvent module and for 21, 32, and 31% of the jobs assigned the work location module with location responses of 'someplace else', 'factory', and 'don't know', respectively. Other work location responses had ≤8% with medium to high information loss because of redirections to other modules. Medium to high information loss occurred more frequently when a job description matched with multiple keywords pointing to different modules (29-69%, depending on the triggered assignment rule). These evaluations demonstrated that automatically assigned modules can reliably reproduce an expert's module assignment without the direct involvement of an industrial hygienist or interviewer. The feasibility of adapting this framework to other studies will be language- and exposure-specific. 
An efficient algorithm for global periodic orbits generation near irregular-shaped asteroids
NASA Astrophysics Data System (ADS)
Shang, Haibin; Wu, Xiaoyu; Ren, Yuan; Shan, Jinjun
2017-07-01
Periodic orbits (POs) play an important role in understanding dynamical behaviors around natural celestial bodies. In this study, an efficient algorithm was presented to generate the global POs around irregular-shaped uniformly rotating asteroids. The algorithm was performed in three steps, namely global search, local refinement, and model continuation. First, a mascon model with a low number of particles and optimized mass distribution was constructed to remodel the exterior gravitational potential of the asteroid. Using this model, a multi-start differential evolution enhanced with a deflection strategy with strong global exploration and bypassing abilities was adopted. This algorithm can be regarded as a search engine to find multiple globally optimal regions in which potential POs were located. This was followed by applying a differential correction to locally refine global search solutions and generate the accurate POs in the mascon model in which an analytical Jacobian matrix was derived to improve convergence. Finally, the concept of numerical model continuation was introduced and used to convert the POs from the mascon model into a high-fidelity polyhedron model by sequentially correcting the initial states. The efficiency of the proposed algorithm was substantiated by computing the global POs around an elongated shoe-shaped asteroid 433 Eros. Various global POs with different topological structures in the configuration space were successfully located. Specifically, the proposed algorithm was generic and could be conveniently extended to explore periodic motions in other gravitational systems.
Frömke, Cornelia; Hothorn, Ludwig A; Kropf, Siegfried
2008-01-27
In many research areas it is necessary to find differences between treatment groups with several variables. For example, studies of microarray data seek to find a significant difference in location parameters from zero, or from one for ratios thereof, for each variable. However, in some studies a significant deviation of the difference in locations from zero (or 1 in terms of the ratio) is biologically meaningless. A relevant difference or ratio is sought in such cases. This article addresses the use of relevance-shifted tests on ratios for a multivariate parallel two-sample group design. Two empirical procedures are proposed which embed the relevance-shifted test on ratios. As both procedures test a hypothesis for each variable, the resulting multiple testing problem has to be considered. Hence, the procedures include a multiplicity correction. Both procedures are extensions of available procedures for point null hypotheses achieving exact control of the familywise error rate. Whereas shifting the null hypothesis alone would give straightforward solutions, the problems that motivate the empirical considerations discussed here arise from the fact that the shift is considered in both directions and the whole parameter space between these two limits has to be accepted as the null hypothesis. The first procedure uses a permutation algorithm and is appropriate for designs with a moderately large number of observations. However, many experiments have limited sample sizes. Then the second procedure might be more appropriate, where multiplicity is corrected according to a concept of data-driven order of hypotheses.
Making adjustments to event annotations for improved biological event extraction.
Baek, Seung-Cheol; Park, Jong C
2016-09-16
Current state-of-the-art approaches to biological event extraction train statistical models in a supervised manner on corpora annotated with event triggers and event-argument relations. Inspecting such corpora, we observe that there is ambiguity in the span of event triggers (e.g., "transcriptional activity" vs. 'transcriptional'), leading to inconsistencies across event trigger annotations. Such inconsistencies make it quite likely that similar phrases are annotated with different spans of event triggers, suggesting the possibility that a statistical learning algorithm misses an opportunity for generalizing from such event triggers. We anticipate that adjustments to the span of event triggers to reduce these inconsistencies would meaningfully improve the present performance of event extraction systems. In this study, we look into this possibility with the corpora provided by the 2009 BioNLP shared task as a proof of concept. We propose an Informed Expectation-Maximization (EM) algorithm, which trains models using the EM algorithm with a posterior regularization technique, which consults the gold-standard event trigger annotations in a form of constraints. We further propose four constraints on the possible event trigger annotations to be explored by the EM algorithm. The algorithm is shown to outperform the state-of-the-art algorithm on the development corpus in a statistically significant manner and on the test corpus by a narrow margin. The analysis of the annotations generated by the algorithm shows that there are various types of ambiguity in event annotations, even though they could be small in number.
NASA Astrophysics Data System (ADS)
Booth, A.; Carless, D.; Kulessa, B.
2014-12-01
Ground penetrating radar (GPR) is widely applied to qualitative and quantitative interpretation of near-surface targets. Surface deployments of GPR most widely characterise physical properties in terms of some measure of GPR wavelet velocity. Wavelet amplitude is less-often considered, potentially due to difficulties in measuring this quantity: amplitudes are distorted by the anisotropic radiation pattern of antennas, and the ringy GPR wavelet can make successive events difficult to isolate. However, amplitude loss attributes could provide a useful means of estimating the physical properties of a target. GPR energy loss is described by the bandwidth-limited quality factor Q* which, for low-loss media, is proportional to the ratio of dielectric permittivity, ɛ, and electrical conductivity, σ. Comparing the frequency content of two arrivals yields an estimate of interval Q*, but only if they are sufficiently distinct. There may be sufficient separation between a primary reflection and its long-path multiple (i.e. a 'repeat path' of the primary reflection) therefore a dataset that is rich in multiples may be suitable for robust Q* analysis. The Q* between a primary and multiple arrival describes all frequency-dependent loss mechanisms in the interval between the free-surface and the multiple-generating horizon: assuming that all reflectivity is frequency-independent, Q* can be used to estimate ɛ and/or σ. We measure Q* according to the spectral ratio method, for synthetic and real GPR datasets. Our simulations are performed using the finite-difference algorithm GprMax, and represent our example data of GPR acquisitions over peat bogs. These data are a series of 100 MHz GPR acquisitions over sites in the Brecon Beacons National Park of South Wales. The base of the bogs (the basal peat/mineral soil contact) is often a strong multiple-generating horizon. As an example, data from Waun Ddu bog show these events lagging by ~75 ns: GPR velocity is measured here at 0.034 m/ns (relative ɛ of 77.9) and spectral ratios suggest Q* of 19.9 [-6.6 +19.4]. This Q* implies that the bulk σ of the bog is 21.7 [-10.7 +10.8] mS/m. Our measurements require in situ verification (e.g. comparison with co-located electrical resistivity profiles) but our method provides a promising addition to the suite of GPR analysis tools.
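A minimal numpy sketch of the spectral-ratio estimate between a windowed primary and its long-path multiple: the log spectral ratio ln|A_mult(f)/A_prim(f)| is fit with a line over a usable band, and Q* = -pi * lag / slope. The window length, band limits, and example times below are placeholders, not the survey's processing parameters.

```python
import numpy as np

def spectral_ratio_qstar(trace, dt, t_primary, t_multiple, win, f_band):
    """Estimate interval Q* between a primary reflection and its long-path
    multiple via the spectral-ratio method:
        ln|A_m(f)| - ln|A_p(f)| = const - pi * f * lag / Q*
    so Q* = -pi * lag / (slope of the log spectral ratio vs frequency)."""
    n_win = int(win / dt)
    def amp_spectrum(t0):
        i0 = int(t0 / dt)
        seg = trace[i0:i0 + n_win] * np.hanning(n_win)
        return np.abs(np.fft.rfft(seg))
    freqs = np.fft.rfftfreq(n_win, dt)
    a_p, a_m = amp_spectrum(t_primary), amp_spectrum(t_multiple)
    band = (freqs >= f_band[0]) & (freqs <= f_band[1]) & (a_p > 0) & (a_m > 0)
    log_ratio = np.log(a_m[band] / a_p[band])
    slope, _ = np.polyfit(freqs[band], log_ratio, 1)
    lag = t_multiple - t_primary
    return -np.pi * lag / slope

# Placeholder usage: 100 MHz GPR trace sampled at 0.8 ns, primary at 60 ns,
# multiple lagging by ~75 ns as in the Waun Ddu example.
# qstar = spectral_ratio_qstar(trace, dt=0.8e-9, t_primary=60e-9,
#                              t_multiple=135e-9, win=40e-9,
#                              f_band=(50e6, 150e6))
```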
Using Cross-Correlation Methods to Characterize Earthquakes Associated with the Socorro Magma Body
NASA Astrophysics Data System (ADS)
Vieceli, R.; Bilek, S. L.; Worthington, L. L.; Schmandt, B.; Aster, R. C.; Dodge, D. A.; Pyle, M. L.; Walter, W. R.
2017-12-01
The Socorro Magma Body (SMB), a thin, sill-like body with a top surface-depth of 19 km situated within the Rio Grande Rift in central New Mexico, is one of the largest recognized continental mid-crustal magma bodies in the world by area. SMB-associated inflation leads to slow regional uplift of a few mm/yr and has been linked to longstanding concentrated shallow seismicity (< 10 km depth) with variable spatial and temporal occurrence, including early 20th century events of magnitude 5.5 - 6. Recent small earthquakes (magnitudes 3 to -1) have been monitored with a variety of broadband and short-term local seismic networks over the past several decades, but these routine catalogs have not been relocated or fully interpreted in terms of newer models of the structure, or its emplacement and history. In February 2015 seismic data were collected above the northern and most rapidly uplifting region of the SMB with a densely spaced temporary array, consisting of seven broadband and 804 short period Fairfield nodal vertical component seismographs. The total array area was 50 x 25 km with typical node spacing of 300 m along a road network. In this study, we exploit all available seismic network data in a cross-correlation framework developed at Lawrence Livermore National Laboratory to detect events and characterize earthquake swarms, clusters, and patterns occurring over the last 15 years. We use a power detector to build an initial catalog of small magnitude earthquakes, including 33 events (M <= 2.5) recorded during the February 2015 deployment, as templates to detect additional events. We also develop an updated shallow velocity model for the region and refine event hypocenters using Bayesloc, a Bayesian multiple-event location algorithm. This enhanced seismicity catalog will be utilized in interpreting the recent seismicity of the SMB. LLNL-ABS-735529
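The matched-filter step at the heart of such cross-correlation frameworks can be sketched as follows (an illustration only, not the Lawrence Livermore implementation): a template event waveform is slid along the continuous trace, and samples where the normalized correlation exceeds a MAD-based threshold are flagged as candidate detections.

```python
import numpy as np

def normalized_xcorr(data, template):
    """Sliding normalized cross-correlation of a template against a
    continuous trace (both 1-D arrays); O(N * len(template)), fine for a sketch."""
    n = len(template)
    t = (template - template.mean()) / (template.std() * n)
    cc = np.empty(len(data) - n + 1)
    for i in range(len(cc)):
        seg = data[i:i + n]
        sd = seg.std()
        cc[i] = 0.0 if sd == 0 else np.dot(seg - seg.mean(), t) / sd
    return cc

def detect(data, template, n_mad=9.0):
    """Flag candidate detections where |CC| exceeds n_mad times the median
    absolute deviation of the CC trace (a common matched-filter threshold)."""
    cc = normalized_xcorr(data, template)
    mad = np.median(np.abs(cc - np.median(cc)))
    return np.flatnonzero(np.abs(cc) >= n_mad * mad), cc
```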
Location of Microearthquakes in Various Noisy Environments Using Envelope Stacking
NASA Astrophysics Data System (ADS)
Oye, V.; Gharti, H.
2009-12-01
Monitoring of microearthquakes is routinely conducted in various environments such as hydrocarbon and geothermal reservoirs, mines, dams, seismically active faults, volcanoes, nuclear power plants and CO2 storages. In many of these cases the handled data is sensitive and the interpretation of the data may be vital. In some cases, such as during mining or hydraulic fracturing activities, the number of microearthquakes is very large with tens to thousands of events per hour. In others, almost no events occur during a week and furthermore, it might not be anticipated that many events occur at all. However, the general setup of seismic networks, including surface and downhole stations, is usually optimized to record as many microearthquakes as possible, thereby trying to lower the detection threshold of the network. This process is obviously limited to some extent. Most microearthquake location techniques take advantage of a combination of P- and S-wave onset times that often can be picked reliably in an automatic mode. Moreover, when using seismic wave onset times, sometimes in combination with seismic wave polarization, these methods are more accurate compared to migration-based location routines. However, many events cannot be located because their magnitude is too small, i.e. the P- and/or S-wave onset times cannot be picked accurately on a sufficient number of receivers. Nevertheless, these small events are important for the interpretation of the processes that are monitored and even an inferior estimate of event locations and strengths is valuable information. Moreover, the smaller the event the more often such events statistically occur and the more important such additional information becomes. In this study we try to enhance the performance of any microseismic network, providing additional estimates of event locations below the actual detection threshold. We present a migration-based event location method, where we project the recorded seismograms onto the ray coordinate system, which corresponds to a configuration of trial sources and the real receiver network. A time window of predefined length is centered on the arrival time of the related phase that is calculated for the same grid of trial locations. The area spanned by the time window below the computed envelope is stacked for each component (L, T, Q) individually. Subsequently, the objective function is formulated as the squared sum of the stacked values. To obtain the final location, we apply a robust global optimization routine called differential evolution, which provides the maximum value of the objective function. This method provides a complete algorithm with a minimum of control parameters making it suitable for automated processing. The method can be applied to both single and multi-component data, and either P or S or both phases can be used. As a result, this method allows for a flexible application to a wide range of data. Synthetic data were computed for a complex and heterogeneous model of an ore mine and we applied this method to real, observed microearthquake data.
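A schematic grid-search version of the envelope-stacking idea (constant velocity, no ray-coordinate rotation, and no differential-evolution optimizer, so not the authors' full method): for each trial source, envelope amplitudes are stacked in windows centred on the predicted arrival times, and the trial point maximizing the squared stack is returned.

```python
import numpy as np
from scipy.signal import hilbert

def locate_by_envelope_stacking(traces, dt, receivers, grid, velocity, win=0.2):
    """traces: (n_rec, n_samp) array; receivers and grid: arrays of 3-D
    coordinates. Returns the trial grid point maximizing the squared
    envelope stack and the corresponding objective value."""
    env = np.abs(hilbert(traces, axis=1))        # waveform envelopes
    half = int(0.5 * win / dt)
    best_val, best_xyz = -np.inf, None
    for xyz in np.asarray(grid):
        tt = np.linalg.norm(receivers - xyz, axis=1) / velocity
        stack = 0.0
        for r, t in enumerate(tt):               # sum envelope energy in the
            i = int(t / dt)                      # predicted-arrival window
            seg = env[r, max(i - half, 0):i + half]
            stack += seg.sum()
        if stack ** 2 > best_val:
            best_val, best_xyz = stack ** 2, xyz
    return best_xyz, best_val
```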
NASA Astrophysics Data System (ADS)
Gao, Shibo; Cheng, Yongmei; Song, Chunhua
2013-09-01
Vision-based probe-and-drogue autonomous aerial refueling is a demanding task in modern aviation for both manned and unmanned aircraft. A key issue is to determine the relative orientation and position of the drogue and the probe accurately for the relative navigation system during the approach phase, which requires locating the drogue precisely. Drogue detection is a challenging task due to the disorderly motion of the drogue caused by both the tanker wake vortex and atmospheric turbulence. In this paper, the problem of drogue detection is considered as a problem of moving object detection. A drogue detection algorithm based on low rank and sparse decomposition with local multiple features is proposed. The global and local information of the drogue is introduced into the detection model in a unified way. The experimental results on real autonomous aerial refueling videos show that the proposed drogue detection algorithm is effective.
Using Natural Language to Enable Mission Managers to Control Multiple Heterogeneous UAVs
NASA Technical Reports Server (NTRS)
Trujillo, Anna C.; Puig-Navarro, Javier; Mehdi, S. Bilal; Mcquarry, A. Kyle
2016-01-01
The availability of highly capable, yet relatively cheap, unmanned aerial vehicles (UAVs) is opening up new areas of use for hobbyists and for commercial activities. This research is developing methods beyond classical control-stick pilot inputs, to allow operators to manage complex missions without in-depth vehicle expertise. These missions may entail several heterogeneous UAVs flying coordinated patterns or flying multiple trajectories deconflicted in time or space to predefined locations. This paper describes the functionality and preliminary usability measures of an interface that allows an operator to define a mission using speech inputs. With a defined and simple vocabulary, operators can input the vast majority of mission parameters using simple, intuitive voice commands. Although the operator interface is simple, it is based upon autonomous algorithms that allow the mission to proceed with minimal input from the operator. This paper also describes these underlying algorithms that allow an operator to manage several UAVs.
A space-based classification system for RF transients
NASA Astrophysics Data System (ADS)
Moore, K. R.; Call, D.; Johnson, S.; Payne, T.; Ford, W.; Spencer, K.; Wilkerson, J. F.; Baumgart, C.
The FORTE (Fast On-Orbit Recording of Transient Events) small satellite is scheduled for launch in mid 1995. The mission is to measure and classify VHF (30-300 MHz) electromagnetic pulses, primarily due to lightning, within a high noise environment dominated by continuous wave carriers such as TV and FM stations. The FORTE Event Classifier will use specialized hardware to implement signal processing and neural network algorithms that perform onboard classification of RF transients and carriers. Lightning events will also be characterized with optical data telemetered to the ground. A primary mission science goal is to develop a comprehensive understanding of the correlation between the optical flash and the VHF emissions from lightning. By combining FORTE measurements with ground measurements and/or active transmitters, other science issues can be addressed. Examples include the correlation of global precipitation rates with lightning flash rates and location, the effects of large scale structures within the ionosphere (such as traveling ionospheric disturbances and horizontal gradients in the total electron content) on the propagation of broad bandwidth RF signals, and various areas of lightning physics. Event classification is a key feature of the FORTE mission. Neural networks are promising candidates for this application. The authors describe the proposed FORTE Event Classifier flight system, which consists of a commercially available digital signal processing board and a custom board, and discuss work on signal processing and neural network algorithms.
Rapidly locating and characterizing pollutant releases in buildings.
Sohn, Michael D; Reynolds, Pamela; Singh, Navtej; Gadgil, Ashok J
2002-12-01
Releases of airborne contaminants in or near a building can lead to significant human exposures unless prompt response measures are taken. However, possible responses can include conflicting strategies, such as shutting the ventilation system off versus running it in a purge mode or having occupants evacuate versus sheltering in place. The proper choice depends in part on knowing the source locations, the amounts released, and the likely future dispersion routes of the pollutants. We present an approach that estimates this information in real time. It applies Bayesian statistics to interpret measurements of airborne pollutant concentrations from multiple sensors placed in the building and computes best estimates and uncertainties of the release conditions. The algorithm is fast, capable of continuously updating the estimates as measurements stream in from sensors. We demonstrate the approach using a hypothetical pollutant release in a five-room building. Unknowns to the interpretation algorithm include location, duration, and strength of the source, and some building and weather conditions. Two sensor sampling plans and three levels of data quality are examined. Data interpretation in all examples is rapid; however, locating and characterizing the source with high probability depends on the amount and quality of data and the sampling plan.
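The Bayesian interpretation step can be sketched with a discrete set of release hypotheses and a Gaussian measurement-error likelihood; the real system uses multizone airflow-and-transport simulations as the forward model, so the scenario predictions and numbers below are purely illustrative.

```python
import numpy as np

def bayesian_update(prior, predictions, measurement, sigma):
    """prior: (n_hyp,) probabilities over release hypotheses.
    predictions: (n_hyp,) modeled concentration at this sensor/time for
    each hypothesis. Returns the normalized posterior."""
    likelihood = np.exp(-0.5 * ((measurement - predictions) / sigma) ** 2)
    post = prior * likelihood
    return post / post.sum()

# Toy example: 3 hypothetical release scenarios, 2 incoming sensor readings.
prior = np.full(3, 1.0 / 3.0)
forward_model = np.array([[0.1, 2.0, 0.5],     # sensor A prediction per scenario
                          [1.5, 0.2, 0.6]])    # sensor B prediction per scenario
for pred, meas in zip(forward_model, [1.8, 0.3]):
    prior = bayesian_update(prior, pred, meas, sigma=0.3)
print(prior)   # posterior concentrates on the second scenario
```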
Control strategy of grid-connected photovoltaic generation system based on GMPPT method
NASA Astrophysics Data System (ADS)
Wang, Zhongfeng; Zhang, Xuyang; Hu, Bo; Liu, Jun; Li, Ligang; Gu, Yongqiang; Zhou, Bowen
2018-02-01
There are multiple local maximum power points when a photovoltaic (PV) array runs under partial shading conditions (PSC). However, the traditional maximum power point tracking (MPPT) algorithm can easily be trapped at a local maximum power point (MPP) and fail to find the global maximum power point (GMPP). To solve this problem, an improved global maximum power point tracking (GMPPT) method is proposed, combining the traditional MPPT method with a particle swarm optimization (PSO) algorithm. Under different operating conditions of the PV cells, different tracking algorithms are used: when the environment changes, the improved PSO algorithm is adopted to perform the global search, and the variable-step incremental conductance (INC) method is adopted to achieve MPPT around the identified optimum. Based on a simulation model of the grid-connected PV system built in Matlab/Simulink, a comparative analysis of the MPPT performance of the proposed control algorithm and the traditional MPPT method, under both uniform irradiance and PSC, validates the correctness, feasibility and effectiveness of the proposed control strategy.
Modeling the Volcanic Source at Long Valley, CA, Using a Genetic Algorithm Technique
NASA Technical Reports Server (NTRS)
Tiampo, Kristy F.
1999-01-01
In this project, we attempted to model the deformation pattern due to the magmatic source at Long Valley caldera using a real-value coded genetic algorithm (GA) inversion similar to that found in Michalewicz, 1992. The project has been both successful and rewarding. The genetic algorithm, coded in the C programming language, performs stable inversions over repeated trials, with varying initial and boundary conditions. The original model used a GA in which the geophysical information was coded into the fitness function through the computation of surface displacements for a Mogi point source in an elastic half-space. The program was designed to invert for a spherical magmatic source - its depth, horizontal location and volume - using the known surface deformations. It also included the capability of inverting for multiple sources.
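A hedged sketch of the forward model and inversion: the standard Mogi point-source approximation for vertical surface displacement in an elastic half-space (a Poisson's ratio of 0.25 is assumed) and a small generic real-coded GA minimizing RMS misfit to observed displacements. The GA operators here are generic, not the project's exact implementation.

```python
import numpy as np

def mogi_uz(x, y, xs, ys, depth, dvol, nu=0.25):
    """Vertical surface displacement of a Mogi point source:
    uz = (1 - nu) * dV * d / (pi * R^3), with R^2 = r^2 + d^2."""
    r2 = (x - xs) ** 2 + (y - ys) ** 2
    R3 = (r2 + depth ** 2) ** 1.5
    return (1.0 - nu) * dvol * depth / (np.pi * R3)

def ga_invert(x, y, uz_obs, bounds, pop=60, gens=200, seed=0):
    """Minimal real-coded GA: tournament selection, blend crossover,
    Gaussian mutation, elitism; fitness is negative RMS misfit.
    bounds: list of (lo, hi) for [xs, ys, depth, dV]."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds).T
    P = rng.uniform(lo, hi, size=(pop, len(lo)))
    def fitness(p):
        return -np.sqrt(np.mean((mogi_uz(x, y, *p) - uz_obs) ** 2))
    for _ in range(gens):
        f = np.array([fitness(p) for p in P])
        idx = rng.integers(0, pop, size=(pop, 2))            # binary tournaments
        parents = P[np.where(f[idx[:, 0]] > f[idx[:, 1]], idx[:, 0], idx[:, 1])]
        alpha = rng.uniform(size=(pop, 1))                   # blend crossover
        children = alpha * parents + (1 - alpha) * parents[rng.permutation(pop)]
        children += rng.normal(0, 0.01 * (hi - lo), size=children.shape)
        children = np.clip(children, lo, hi)
        children[0] = P[f.argmax()]                          # keep the best (elitism)
        P = children
    f = np.array([fitness(p) for p in P])
    return P[f.argmax()]                                     # [xs, ys, depth, dV]
```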
TU-H-206-01: An Automated Approach for Identifying Geometric Distortions in Gamma Cameras
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mann, S; Nelson, J; Samei, E
2016-06-15
Purpose: To develop a clinically-deployable, automated process for detecting artifacts in routine nuclear medicine (NM) quality assurance (QA) bar phantom images. Methods: An artifact detection algorithm was created to analyze bar phantom images as part of an ongoing QA program. A low noise, high resolution reference image was acquired from an x-ray of the bar phantom with a Philips Digital Diagnost system utilizing image stitching. NM bar images, acquired for 5 million counts over a 512×512 matrix, were registered to the template image by maximizing mutual information (MI). The MI index was used as an initial test for artifacts; low values indicate an overall presence of distortions regardless of their spatial location. Images with low MI scores were further analyzed for bar linearity, periodicity, alignment, and compression to locate differences with respect to the template. Findings from each test were spatially correlated and locations failing multiple tests were flagged as potential artifacts requiring additional visual analysis. The algorithm was initially deployed for GE Discovery 670 and Infinia Hawkeye gamma cameras. Results: The algorithm successfully identified clinically relevant artifacts from both systems previously unnoticed by technologists performing the QA. Average MI indices for artifact-free images are 0.55. Images with MI indices < 0.50 have shown 100% sensitivity and specificity for artifact detection when compared with a thorough visual analysis. Correlation of geometric tests confirms the ability to spatially locate the most likely image regions containing an artifact regardless of initial phantom orientation. Conclusion: The algorithm shows the potential to detect gamma camera artifacts that may be missed by routine technologist inspections. Detection and subsequent correction of artifacts ensures maximum image quality and may help to identify failing hardware before it impacts clinical workflow. Going forward, the algorithm is being deployed to monitor data from all gamma cameras within our health system.
Assessing the severity of sleep apnea syndrome based on ballistocardiogram
Zhou, Xingshe; Zhao, Weichao; Liu, Fan; Ni, Hongbo; Yu, Zhiwen
2017-01-01
Background Sleep Apnea Syndrome (SAS) is a common sleep-related breathing disorder, which affects about 4-7% males and 2-4% females all around the world. Different approaches have been adopted to diagnose SAS and measure its severity, including the gold standard Polysomnography (PSG) in sleep study field as well as several alternative techniques such as single-channel ECG, pulse oximeter and so on. However, many shortcomings still limit their generalization in home environment. In this study, we aim to propose an efficient approach to automatically assess the severity of sleep apnea syndrome based on the ballistocardiogram (BCG) signal, which is non-intrusive and suitable for in home environment. Methods We develop an unobtrusive sleep monitoring system to capture the BCG signals, based on which we put forward a three-stage sleep apnea syndrome severity assessment framework, i.e., data preprocessing, sleep-related breathing events (SBEs) detection, and sleep apnea syndrome severity evaluation. First, in the data preprocessing stage, to overcome the limits of BCG signals (e.g., low precision and reliability), we utilize wavelet decomposition to obtain the outline information of heartbeats, and apply a RR correction algorithm to handle missing or spurious RR intervals. Afterwards, in the event detection stage, we propose an automatic sleep-related breathing event detection algorithm named Physio_ICSS based on the iterative cumulative sums of squares (i.e., the ICSS algorithm), which is originally used to detect structural breakpoints in a time series. In particular, to efficiently detect sleep-related breathing events in the obtained time series of RR intervals, the proposed algorithm not only explores the practical factors of sleep-related breathing events (e.g., the limit of lasting duration and possible occurrence sleep stages) but also overcomes the event segmentation issue (e.g., equal-length segmentation method might divide one sleep-related breathing event into different fragments and lead to incorrect results) of existing approaches. Finally, by fusing features extracted from multiple domains, we can identify sleep-related breathing events and assess the severity level of sleep apnea syndrome effectively. Conclusions Experimental results on 136 individuals of different sleep apnea syndrome severities validate the effectiveness of the proposed framework, with the accuracy of 94.12% (128/136). PMID:28445548
The LHCb Grid Simulation: Proof of Concept
NASA Astrophysics Data System (ADS)
Hushchyn, M.; Ustyuzhanin, A.; Arzymatov, K.; Roiser, S.; Baranov, A.
2017-10-01
The Worldwide LHC Computing Grid provides researchers in different geographical locations with access to data and to the computational resources needed to analyze it. The grid has a hierarchical topology with multiple sites distributed over the world with varying numbers of CPUs, amounts of disk storage and connection bandwidth. Job scheduling and data distribution strategy are key elements of grid performance. Optimization of algorithms for those tasks requires testing them on the real grid, which is hard to achieve. Having a grid simulator might simplify this task and therefore lead to more optimal scheduling and data placement algorithms. In this paper we demonstrate a grid simulator for the LHCb distributed computing software.
Multiscale 3D Shape Analysis using Spherical Wavelets
Nain, Delphine; Haker, Steven; Bobick, Aaron; Tannenbaum, Allen
2013-01-01
Shape priors attempt to represent biological variations within a population. When variations are global, Principal Component Analysis (PCA) can be used to learn major modes of variation, even from a limited training set. However, when significant local variations exist, PCA typically cannot represent such variations from a small training set. To address this issue, we present a novel algorithm that learns shape variations from data at multiple scales and locations using spherical wavelets and spectral graph partitioning. Our results show that when the training set is small, our algorithm significantly improves the approximation of shapes in a testing set over PCA, which tends to oversmooth data. PMID:16685992
Multiscale 3D shape analysis using spherical wavelets.
Nain, Delphine; Haker, Steven; Bobick, Aaron; Tannenbaum, Allen R
2005-01-01
Shape priors attempt to represent biological variations within a population. When variations are global, Principal Component Analysis (PCA) can be used to learn major modes of variation, even from a limited training set. However, when significant local variations exist, PCA typically cannot represent such variations from a small training set. To address this issue, we present a novel algorithm that learns shape variations from data at multiple scales and locations using spherical wavelets and spectral graph partitioning. Our results show that when the training set is small, our algorithm significantly improves the approximation of shapes in a testing set over PCA, which tends to oversmooth data.
Wang, Rui-Rong; Yu, Xiao-Qing; Zheng, Shu-Wang; Ye, Yang
2016-01-01
Location based services (LBS) provided by wireless sensor networks have garnered a great deal of attention from researchers and developers in recent years. Chirp spread spectrum (CSS) signaling combined with time difference of arrival (TDOA) ranging technology is an effective LBS technique with regard to positioning accuracy, cost, and power consumption. The design and implementation of the location engine and location management based on TDOA location algorithms were the focus of this study; as the core of the system, the location engine was designed as a series of location algorithms and smoothing algorithms. To enhance the location accuracy, a Kalman filter algorithm and a moving weighted average technique were respectively applied to smooth the TDOA range measurements and the location results, which are calculated by the cooperation of a Kalman TDOA algorithm and a Taylor TDOA algorithm. The location management server, the information center of the system, was designed with Data Server and Mclient. To evaluate the performance of the location algorithms and the stability of the system software, we used a Nanotron nanoLOC Development Kit 3.0 to conduct indoor and outdoor location experiments. The results indicated that the location system runs stably with high accuracy, with absolute error below 0.6 m.
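A minimal sketch of the two smoothing stages mentioned above: a constant-velocity Kalman filter over a noisy TDOA range series, and a moving weighted average over the resulting position fixes. The process/measurement noise parameters and the weights are stand-in values, not the engine's actual tuning, and the Taylor-series TDOA solver itself is not reproduced.

```python
import numpy as np

def kalman_smooth_ranges(ranges, dt=0.1, q=0.05, r=0.3):
    """Constant-velocity Kalman filter over a 1-D TDOA range series.
    q: process-noise scale, r: measurement-noise std (stand-in values)."""
    F = np.array([[1.0, dt], [0.0, 1.0]])
    H = np.array([[1.0, 0.0]])
    Q = q * np.array([[dt ** 3 / 3, dt ** 2 / 2], [dt ** 2 / 2, dt]])
    R = np.array([[r ** 2]])
    x = np.array([ranges[0], 0.0])
    P = np.eye(2)
    out = []
    for z in ranges:
        x = F @ x                              # predict
        P = F @ P @ F.T + Q
        y = z - H @ x                          # innovation
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)         # Kalman gain
        x = x + K @ y
        P = (np.eye(2) - K @ H) @ P
        out.append(x[0])
    return np.array(out)

def moving_weighted_average(positions, weights=(0.5, 0.3, 0.2)):
    """Weight recent location fixes more heavily (most recent first);
    positions: (n, 2) array of x, y fixes."""
    w = np.array(weights)
    out = []
    for i in range(len(positions)):
        window = positions[max(0, i - len(w) + 1):i + 1][::-1]
        ww = w[:len(window)] / w[:len(window)].sum()
        out.append((ww[:, None] * window).sum(axis=0))
    return np.array(out)
```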
NASA Astrophysics Data System (ADS)
Tunc, Suleyman; Tunc, Berna; Caka, Deniz; Baris, Serif
2016-04-01
Locating seismic events and calculating their size quickly is one of the most important and challenging issues, especially in real-time seismology. In this study, we developed a Matlab application, called SSL_Calc, to locate seismic events and calculate their magnitudes (local magnitude and empirical moment magnitude) using a single station. This newly developed software has been tested on all stations of the Marsite project "New Directions in Seismic Hazard Assessment through Focused Earth Observation in the Marmara Supersite-MARsite". The SSL_Calc algorithm is suitable for both velocity and acceleration sensors. Data have to be in GCF (Güralp Compressed Format). Online or offline data can be selected in the SCREAM software (belongs to Guralp Systems Limited) and transferred to SSL_Calc. To locate an event, P- and S-wave picks have to be marked manually in the SSL_Calc window. For magnitude calculation, the instrument response is removed and the record is converted to true displacement in millimetres. The displacement data are then converted to Wood-Anderson seismometer output using the parameters Z=[0;0]; P=[-6.28+4.71j; -6.28-4.71j]; A0=[2080]. For the local magnitude calculation, the maximum displacement amplitude (A) and distance (dist) are used in formula (1) for distances up to 200 km and formula (2) for distances greater than 200 km. ML = log10(A) - (-1.118 - 0.0647*dist + 0.00071*dist^2 - 3.39E-6*dist^3 + 5.71E-9*dist^4) (1) ML = log10(A) + (2.1173 + 0.0082*dist - 0.0000059628*dist^2) (2) Following the local magnitude calculation, the program calculates two empirical moment magnitudes using formulas (3), from Akkar et al. (2010), and (4), from Ulusay et al. (2004). Mw = 0.953*ML + 0.422 (3) Mw = 0.7768*ML + 1.5921 (4) SSL_Calc is easy to use and offers individual users a practical solution for event location and for ML and Mw calculation.
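Collected into a small helper (Python here rather than SSL_Calc's MATLAB), the distance corrections (1)-(2) and the empirical conversions (3)-(4) quoted above read as follows; A is the peak Wood-Anderson displacement in mm and dist the distance in km, as in the abstract.

```python
import math

def local_magnitude(amp_mm, dist_km):
    """ML from peak Wood-Anderson displacement (mm) and distance (km),
    using the two distance-dependent corrections quoted in the abstract."""
    if dist_km <= 200.0:                                   # formula (1)
        corr = (-1.118 - 0.0647 * dist_km + 0.00071 * dist_km ** 2
                - 3.39e-6 * dist_km ** 3 + 5.71e-9 * dist_km ** 4)
        return math.log10(amp_mm) - corr
    corr = 2.1173 + 0.0082 * dist_km - 0.0000059628 * dist_km ** 2
    return math.log10(amp_mm) + corr                       # formula (2)

def moment_magnitudes(ml):
    """Empirical Mw conversions: (3) Akkar et al. (2010), (4) Ulusay et al. (2004)."""
    return {"Mw_Akkar2010": 0.953 * ml + 0.422,
            "Mw_Ulusay2004": 0.7768 * ml + 1.5921}

ml = local_magnitude(amp_mm=0.8, dist_km=85.0)
print(ml, moment_magnitudes(ml))
```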
NASA Astrophysics Data System (ADS)
Edel, S.; Bilek, S. L.; Garcia, K.
2014-12-01
Induced seismicity is a class of crustal earthquakes resulting from human activities such as surface and underground mining, impoundment of reservoirs, withdrawal of fluids and gas from the subsurface, and injection of fluids into underground cavities. Within the Permian basin in southeastern New Mexico lies an active area of oil and gas production, as well as the Waste Isolation Pilot Plant (WIPP), a geologic nuclear waste repository located just east of Carlsbad, NM. Small magnitude earthquakes have been recognized in the area for many years, recorded by a network of short period vertical component seismometers operated by New Mexico Tech. However, for robust comparisons between the seismicity patterns and the injection well locations and rates, improved locations and a more complete catalog over time are necessary. We present results of earthquake relocations for this area by using data from the 3-component broadband EarthScope Flexible Array SIEDCAR experiment that operated in the area between 2008-2011. Relocated event locations tighten into a small cluster of ~38 km2, approximately 10 km from the nearest injection wells. The majority of events occurred at 10-12 km depth, given depth residuals of 1.7-3.6 km. We also present a newly developed more complete catalog of events from this area by using a waveform cross-correlation algorithm and the relocated events as templates. This allows us to detect smaller magnitude events that were previously undetected with the short period network data. The updated earthquake catalog is compared with geologic maps and cross sections to identify possible fault locations. The catalog is also compared with available well data on fluid injection and production. Our preliminary results suggest no obvious connection between seismic moment release, fluid injection, or production given the available monthly industry data. We do see evidence in the geologic and well data of previously unidentified faults in the area.
Chronodes: Interactive Multifocus Exploration of Event Sequences
POLACK, PETER J.; CHEN, SHANG-TSE; KAHNG, MINSUK; DE BARBARO, KAYA; BASOLE, RAHUL; SHARMIN, MOUSHUMI; CHAU, DUEN HORNG
2018-01-01
The advent of mobile health (mHealth) technologies challenges the capabilities of current visualizations, interactive tools, and algorithms. We present Chronodes, an interactive system that unifies data mining and human-centric visualization techniques to support explorative analysis of longitudinal mHealth data. Chronodes extracts and visualizes frequent event sequences that reveal chronological patterns across multiple participant timelines of mHealth data. It then combines novel interaction and visualization techniques to enable multifocus event sequence analysis, which allows health researchers to interactively define, explore, and compare groups of participant behaviors using event sequence combinations. Through summarizing insights gained from a pilot study with 20 behavioral and biomedical health experts, we discuss Chronodes’s efficacy and potential impact in the mHealth domain. Ultimately, we outline important open challenges in mHealth, and offer recommendations and design guidelines for future research. PMID:29515937
NASA Astrophysics Data System (ADS)
Li, J.; Abers, G. A.; Christensen, D. H.; Kim, Y.; Calkins, J. A.
2011-12-01
Earthquakes in subduction zones are mostly generated at the interface between the subducting and overlying plates. In 2006-2009, the MOOS (Multidisciplinary Observations Of Subduction) seismic array was deployed around the Kenai Peninsula, Alaska, consisting of 34 broadband seismometers recording for 1-3 years. This region spans the eastern end of the Aleutian megathrust that ruptured in the 1964 Mw 9.2 great earthquake, the second largest recorded earthquake, and ongoing seismicity is abundant. Here, we report an initial analysis of seismicity recorded by MOOS, in the context of preliminary imaging. There were 16,462 events detected in one year from initial STA/LTA signal detections and subsequent event associations from the MOOS array. We manually reviewed them to eliminate distant earthquakes and noise, leaving 11,879 local earthquakes. To refine this catalog, an adaptive auto-regressive onset estimation algorithm was applied, doubling the original dataset and producing 20,659 P picks and 22,999 S picks for one month (September 2007). Inspection shows that this approach led to almost negligible false alarms and many more events than hand picking. Within the well-sampled part of the array, roughly 200 km by 300 km, we locate 250% more earthquakes for one month than the permanent network catalog, or 10 earthquakes per day on this patch of the megathrust. Although the preliminary locations of earthquakes still show some scatter, we can see a concentration of events in a ~20-km-wide belt, part of which can be interpreted as the seismogenic thrust zone. In conjunction with the seismicity study, we are imaging the plate interface with receiver functions. The main seismicity zone corresponds to the top of a low-velocity layer imaged in receiver functions, nominally attributed to the top of the downgoing plate. As we refine velocity models and apply relative relocation algorithms, we expect to improve the precision of the locations substantially. When combined with images of velocity structure from scattered wave migration, we can test whether the thrust zone is above the Yakutat terrane or between the Yakutat terrane and the subducting Pacific plate. Our refined relocations will also improve our understanding of other active faults (e.g., splay faults) and their relationship to the plate boundary.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Díaz, Mario C.; Beroiz, Martín; Peñuela, Tania
We present the results of the optical follow-up conducted by the TOROS collaboration of the first gravitational-wave event GW150914. We conducted unfiltered CCD observations (0.35–1 μm) with the 1.5 m telescope at Bosque Alegre starting ∼2.5 days after the alarm. Given our limited field of view (∼100 arcmin²), we targeted 14 nearby galaxies that were observable from the site and were located within the area of higher localization probability. We analyzed the observations using two independent implementations of difference-imaging algorithms, followed by a Random-Forest-based algorithm to discriminate between real and bogus transients. We did not find any bona fide transient event in the surveyed area down to a 5σ limiting magnitude of r = 21.7 mag (AB). Our result is consistent with the LIGO detection of a binary black hole merger, for which no electromagnetic counterparts are expected, and with the expected rates of other astrophysical transients.
Spatial pattern recognition of seismic events in South West Colombia
NASA Astrophysics Data System (ADS)
Benítez, Hernán D.; Flórez, Juan F.; Duque, Diana P.; Benavides, Alberto; Lucía Baquero, Olga; Quintero, Jiber
2013-09-01
Recognition of seismogenic zones in geographical regions supports seismic hazard studies. This recognition is usually based on visual, qualitative and subjective analysis of data. Spatial pattern recognition provides a well-founded means to obtain relevant information from large amounts of data. The purpose of this work is to identify and classify spatial patterns in instrumental data of the South West Colombian seismic database. In this research, clustering tendency analysis validates whether the seismic database possesses a clustering structure. A non-supervised fuzzy clustering algorithm creates groups of seismic events. Given the sensitivity of fuzzy clustering algorithms to the initial positions of the centroids, we propose a methodology to initialize centroids that generates stable partitions with respect to centroid initialization. As a result of this work, a public software tool provides the user with the routines developed for the clustering methodology. The analysis of the seismogenic zones obtained reveals meaningful spatial patterns in South-West Colombia. The clustering analysis provides a quantitative location and dispersion of seismogenic zones that facilitates seismological interpretations of seismic activity in South West Colombia.
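Fuzzy c-means assigns each event a degree of membership in every cluster rather than a hard label, alternating between centroid and membership updates; the paper's contribution is a centroid-initialisation scheme, which is not reproduced here. A minimal sketch of the standard update equations (the fuzzifier m, iteration count, and random initialisation are illustrative assumptions):

    import numpy as np

    def fuzzy_c_means(points, c, m=2.0, n_iter=100, seed=0):
        """Standard FCM on an (n, 2) array of event coordinates; returns centroids and memberships."""
        rng = np.random.default_rng(seed)
        n = len(points)
        u = rng.random((n, c))
        u /= u.sum(axis=1, keepdims=True)              # memberships sum to 1 per event
        for _ in range(n_iter):
            w = u ** m
            centroids = (w.T @ points) / w.sum(axis=0)[:, None]
            d = np.linalg.norm(points[:, None, :] - centroids[None, :, :], axis=2) + 1e-12
            # u_ik = 1 / sum_j (d_ik / d_ij)^(2/(m-1))
            u = 1.0 / np.sum((d[:, :, None] / d[:, None, :]) ** (2.0 / (m - 1.0)), axis=2)
        return centroids, u

Partitions can then be compared across initialisations to judge stability, which is the property the proposed initialisation procedure targets.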
Enhanced Handover Decision Algorithm in Heterogeneous Wireless Network
Abdullah, Radhwan Mohamed; Zukarnain, Zuriati Ahmad
2017-01-01
Transferring a huge amount of data between different network locations over the network links depends on the network’s traffic capacity and data rate. Traditionally, a mobile device may be moved to achieve the operations of vertical handover, considering only one criterion, that is the Received Signal Strength (RSS). The use of a single criterion may cause service interruption, an unbalanced network load and an inefficient vertical handover. In this paper, we propose an enhanced vertical handover decision algorithm based on multiple criteria in the heterogeneous wireless network. The algorithm consists of three technology interfaces: Long-Term Evolution (LTE), Worldwide interoperability for Microwave Access (WiMAX) and Wireless Local Area Network (WLAN). It also employs three types of vertical handover decision algorithms: equal priority, mobile priority and network priority. The simulation results illustrate that the three types of decision algorithms outperform the traditional network decision algorithm in terms of handover number probability and the handover failure probability. In addition, it is noticed that the network priority handover decision algorithm produces better results compared to the equal priority and the mobile priority handover decision algorithm. Finally, the simulation results are validated by the analytical model. PMID:28708067
Ryberg, T.; Haberland, C.H.; Fuis, G.S.; Ellsworth, W.L.; Shelly, D.R.
2010-01-01
Non-volcanic tremor (NVT) has been observed at several subduction zones and at the San Andreas Fault (SAF). Tremor locations are commonly derived by cross-correlating envelope-transformed seismic traces in combination with source-scanning techniques. Recently, they have also been located by using relative relocations with master events, that is, low-frequency earthquakes that are part of the tremor; locations are derived by conventional traveltime-based methods. Here we present a method to locate the sources of NVT using an imaging approach for multiple array data. The performance of the method is checked with synthetic tests and the relocation of earthquakes. We also applied the method to tremor occurring near Cholame, California. A set of small-aperture arrays (i.e. an array consisting of arrays) installed around Cholame provided the data set for this study. We observed several tremor episodes and located tremor sources in the vicinity of the SAF. During individual tremor episodes, we observed a systematic change of source location, indicating rapid migration of the tremor source along the SAF. © 2010 The Authors, Geophysical Journal International © 2010 RAS.
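One way to realise the imaging approach sketched above is to back-project smoothed envelopes over a grid of trial epicentres, delaying each trace by the predicted travel time and keeping the grid point with the most coherent stack. A minimal single-array sketch under a homogeneous-velocity assumption (grid, velocity, and the use of the stack peak are illustrative choices, not the authors' configuration):

    import numpy as np

    def backproject(envelopes, station_xy, grid_xy, fs, v=3.5):
        """Stack envelopes over trial sources; envelopes is (n_sta, n_samp), coordinates in km, v in km/s."""
        n_sta, n_samp = envelopes.shape
        best = np.zeros(len(grid_xy))
        for g, src in enumerate(grid_xy):
            dist = np.linalg.norm(station_xy - src, axis=1)   # epicentral distances
            shifts = np.round((dist / v) * fs).astype(int)    # predicted delays in samples
            shifts -= shifts.min()                            # only relative move-out matters
            usable = n_samp - shifts.max()
            stack = np.zeros(usable)
            for k in range(n_sta):
                stack += envelopes[k, shifts[k]:shifts[k] + usable]
            best[g] = stack.max() / n_sta                     # peak coherent amplitude for this trial source
        return grid_xy[np.argmax(best)], best

The envelopes would typically be computed as smoothed Hilbert-transform magnitudes of band-passed traces before stacking.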
NASA Technical Reports Server (NTRS)
Kvernadze, George; Hagstrom, Thomas; Shapiro, Henry
1997-01-01
A key step for some methods dealing with the reconstruction of a function with jump discontinuities is the accurate approximation of the jumps and their locations. Various methods have been suggested in the literature to obtain this valuable information. In the present paper, we develop an algorithm based on identities which determine the jumps of a 2(pi)-periodic bounded not-too-highly oscillating function by the partial sums of its differentiated Fourier series. The algorithm enables one to approximate the locations of discontinuities and the magnitudes of jumps of a bounded function. We study the accuracy of approximation and establish asymptotic expansions for the approximations of a 2(pi)-periodic piecewise smooth function with one discontinuity. By an appropriate linear combination, obtained via derivatives of different order, we significantly improve the accuracy. Next, we use Richardson's extrapolation method to enhance the accuracy even more. For a function with multiple discontinuities we establish simple formulae which "eliminate" all discontinuities of the function but one. Then we treat the function as if it had one singularity following the method described above.
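An identity of the type discussed here: for a 2(pi)-periodic piecewise smooth f, (pi/n) times the n-th partial sum of the differentiated Fourier series tends to the jump f(x+) - f(x-) at a discontinuity and to zero elsewhere, so its peaks both locate and size the jumps. A minimal sketch of that basic indicator, without the higher-order combinations or Richardson extrapolation the paper develops (sign convention and indexing are stated assumptions):

    import numpy as np

    def jump_indicator(a, b, x, n):
        """(pi/n) * partial sum of the differentiated Fourier series of f at points x.

        a[k-1], b[k-1] are the cosine/sine Fourier coefficients of f for k = 1..n.
        Near a jump the indicator approaches f(x+) - f(x-); away from jumps it
        tends to zero as n grows."""
        k = np.arange(1, n + 1)
        x = np.atleast_1d(x)
        sprime = np.sum(k[None, :] * (-a[None, :n] * np.sin(np.outer(x, k))
                                      + b[None, :n] * np.cos(np.outer(x, k))), axis=1)
        return np.pi * sprime / n

Peaks of the returned indicator give candidate discontinuity locations; their heights approximate the jump magnitudes.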
Stewart, C M; Newlands, S D; Perachio, A A
2004-12-01
Rapid and accurate discrimination of single units from extracellular recordings is a fundamental process for the analysis and interpretation of electrophysiological recordings. We present an algorithm that performs detection, characterization, discrimination, and analysis of action potentials from extracellular recording sessions. The program was entirely written in LabVIEW (National Instruments), and requires no external hardware devices or a priori information about action potential shapes. Waveform events are detected by scanning the digital record for voltages that exceed a user-adjustable trigger. Detected events are characterized to determine nine different time and voltage levels for each event. Various algebraic combinations of these waveform features are used as axis choices for 2-D Cartesian plots of events. The user selects axis choices that generate distinct clusters. Multiple clusters may be defined as action potentials by manually generating boundaries of arbitrary shape. Events defined as action potentials are validated by visual inspection of overlain waveforms. Stimulus-response relationships may be identified by selecting any recorded channel for comparison to continuous and average cycle histograms of binned unit data. The algorithm includes novel aspects of feature analysis and acquisition, including higher acquisition rates for electrophysiological data compared to other channels. The program confirms that electrophysiological data may be discriminated with high speed and efficiency using algebraic combinations of waveform features derived from high-speed digital records.
Parallel eigenanalysis of finite element models in a completely connected architecture
NASA Technical Reports Server (NTRS)
Akl, F. A.; Morel, M. R.
1989-01-01
A parallel algorithm is presented for the solution of the generalized eigenproblem in linear elastic finite element analysis, (K)(phi) = (M)(phi)(omega), where (K) and (M) are of order N, and (omega) is of order q. The concurrent solution of the eigenproblem is based on the multifrontal/modified subspace method and is achieved in a completely connected parallel architecture in which each processor is allowed to communicate with all other processors. The algorithm was successfully implemented on a tightly coupled multiple-instruction multiple-data parallel processing machine, Cray X-MP. A finite element model is divided into m domains each of which is assumed to process n elements. Each domain is then assigned to a processor or to a logical processor (task) if the number of domains exceeds the number of physical processors. The macrotasking library routines are used in mapping each domain to a user task. Computational speed-up and efficiency are used to determine the effectiveness of the algorithm. The effect of the number of domains, the number of degrees-of-freedom located along the global fronts and the dimension of the subspace on the performance of the algorithm are investigated. A parallel finite element dynamic analysis program, p-feda, is documented and the performance of its subroutines in a parallel environment is analyzed.
Khan, Naveed; McClean, Sally; Zhang, Shuai; Nugent, Chris
2016-01-01
In recent years, smart phones with inbuilt sensors have become popular devices to facilitate activity recognition. The sensors capture a large amount of data, containing meaningful events, in a short period of time. The change points in this data are used to specify transitions to distinct events and can be used in various scenarios such as identifying change in a patient’s vital signs in the medical domain or requesting activity labels for generating real-world labeled activity datasets. Our work focuses on change-point detection to identify a transition from one activity to another. Within this paper, we extend our previous work on multivariate exponentially weighted moving average (MEWMA) algorithm by using a genetic algorithm (GA) to identify the optimal set of parameters for online change-point detection. The proposed technique finds the maximum accuracy and F_measure by optimizing the different parameters of the MEWMA, which subsequently identifies the exact location of the change point from an existing activity to a new one. Optimal parameter selection facilitates an algorithm to detect accurate change points and minimize false alarms. Results have been evaluated based on two real datasets of accelerometer data collected from a set of different activities from two users, with a high degree of accuracy from 99.4% to 99.8% and F_measure of up to 66.7%. PMID:27792177
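The MEWMA chart smooths each new multivariate observation into a vector z_t = lambda*x_t + (1 - lambda)*z_{t-1} and signals a change point when a Hotelling-type statistic of z_t exceeds a control limit; the genetic algorithm in the paper searches over these parameters. A minimal sketch with illustrative (not GA-optimized) parameter values:

    import numpy as np

    def mewma_alarm_times(X, lam=0.2, h=12.0):
        """Multivariate EWMA control chart; X is (t, p) standardized accelerometer features.

        Returns indices where the T^2 statistic exceeds the control limit h.
        lam and h are the quantities the paper tunes with a GA; the defaults
        here are illustrative only."""
        t_len, p = X.shape
        sigma = np.cov(X, rowvar=False)                # in-control covariance estimate
        z = np.zeros(p)
        alarms = []
        for t in range(t_len):
            z = lam * X[t] + (1.0 - lam) * z
            scale = (lam / (2.0 - lam)) * (1.0 - (1.0 - lam) ** (2 * (t + 1)))
            cov_z = scale * sigma
            t2 = z @ np.linalg.solve(cov_z, z)         # Hotelling-type T^2 of the EWMA vector
            if t2 > h:
                alarms.append(t)
        return alarms

In the paper, the smoothing and limit parameters are exactly what the GA optimizes to maximize accuracy and F_measure while keeping false alarms low.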
Comparison of recent S-wave indicating methods
NASA Astrophysics Data System (ADS)
Hubicka, Katarzyna; Sokolowski, Jakub
2018-01-01
A seismic event consists of surface waves and body waves. Because the body waves are faster (P-waves) and more energetic (S-waves), their analysis is addressed more often in the literature. The most universal piece of information obtained from a recorded wave is its moment of arrival. When this information is obtained from at least four seismometers in different locations, the epicentre of the particular event can be estimated [1]. Since the recorded body waves may overlap in the signal, the problem of estimating the wave onset moment is considered more often for the faster P-wave than for the S-wave. This, however, does not mean that the issue of S-wave arrival time is not addressed at all. As the process of manual picking is time-consuming, methods of automatic detection are recommended (these, however, may be less accurate). In this paper, four recently developed methods for estimating the S-wave arrival are compared: the method operating on empirical mode decomposition and the Teager-Kaiser operator [2], the modification of the STA/LTA algorithm [3], the method using a nearest-neighbour-based approach [4] and the algorithm operating on a characteristic of the signals' second moments. The methods will also be compared to the well-known algorithm based on the autoregressive model [5]. The algorithms will be tested in terms of their S-wave arrival identification accuracy on real data originating from the Incorporated Research Institutions for Seismology (IRIS) database.
Virtual Network Embedding via Monte Carlo Tree Search.
Haeri, Soroush; Trajkovic, Ljiljana
2018-02-01
Network virtualization helps overcome shortcomings of the current Internet architecture. The virtualized network architecture enables coexistence of multiple virtual networks (VNs) on an existing physical infrastructure. The VN embedding (VNE) problem, which deals with the embedding of VN components onto a physical network, is known to be NP-hard. In this paper, we propose two VNE algorithms: MaVEn-M and MaVEn-S. MaVEn-M employs the multicommodity flow algorithm for virtual link mapping while MaVEn-S uses the shortest-path algorithm. They formalize the virtual node mapping problem by using the Markov decision process (MDP) framework and devise action policies (node mappings) for the proposed MDP using the Monte Carlo tree search algorithm. Service providers may adjust the execution time of the MaVEn algorithms based on the traffic load of VN requests. The objective of the algorithms is to maximize the profit of infrastructure providers. We develop a discrete event VNE simulator to implement and evaluate performance of MaVEn-M, MaVEn-S, and several recently proposed VNE algorithms. We introduce profitability as a new performance metric that captures both acceptance and revenue to cost ratios. Simulation results show that the proposed algorithms find more profitable solutions than the existing algorithms. Given additional computation time, they further improve embedding solutions.
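Within such an MDP formulation, Monte Carlo tree search repeatedly picks the next candidate virtual-to-physical node mapping to expand by balancing the estimated profit of a mapping against how rarely it has been tried. A minimal sketch of the UCT-style selection rule this kind of search typically relies on (the exploration constant and the dictionary bookkeeping are illustrative assumptions, not the MaVEn implementation):

    import math

    def uct_select(children, c=1.4):
        """Pick the child node (candidate node mapping) maximizing the UCT score."""
        parent_visits = sum(ch["visits"] for ch in children) + 1
        def score(ch):
            if ch["visits"] == 0:
                return float("inf")                      # always try unexplored mappings first
            exploit = ch["total_reward"] / ch["visits"]  # mean profit observed so far
            explore = c * math.sqrt(math.log(parent_visits) / ch["visits"])
            return exploit + explore
        return max(children, key=score)

In an MDP-based VNE search, the reward backed up along the visited nodes after each simulated embedding would be the embedding profit that the algorithms aim to maximize.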
Analysis of Acoustic Emission Parameters from Corrosion of AST Bottom Plate in Field Testing
NASA Astrophysics Data System (ADS)
Jomdecha, C.; Jirarungsatian, C.; Suwansin, W.
Field testing of aboveground storage tanks (AST) to monitor corrosion of the bottom plate is presented in this chapter. AE testing data from ten ASTs of different sizes, materials, and products were employed to monitor the bottom-plate condition. AE sensors of 30 and 150 kHz were used to monitor the corrosion activity with up to 24 channels, including guard sensors. Acoustic emission (AE) parameters were analyzed to explore the AE parameter patterns of occurring corrosion compared to the laboratory results. Amplitude, count, duration, and energy were the main parameters of the analysis. A pattern recognition technique with statistical methods was implemented to eliminate the electrical and environmental noise. The results showed specific AE patterns of corrosion activities related to the empirical results. In addition, a planar location algorithm was utilized to locate the significant AE events from corrosion. Both the parameter patterns and the AE event locations can be used to interpret and locate the corrosion activities. Finally, a basic statistical grading technique was used to evaluate the bottom-plate condition of the AST.
Application of genetic algorithms to focal mechanism determination
NASA Astrophysics Data System (ADS)
Kobayashi, Reiji; Nakanishi, Ichiro
1994-04-01
Genetic algorithms are a new class of methods for global optimization. They resemble Monte Carlo techniques, but search for solutions more efficiently than uniform Monte Carlo sampling. In the field of geophysics, genetic algorithms have recently been used to solve some non-linear inverse problems (e.g., earthquake location, waveform inversion, migration velocity estimation). We present an application of genetic algorithms to focal mechanism determination from first-motion polarities of P-waves and apply our method to two recent large events, the Kushiro-oki earthquake of January 15, 1993 and the SW Hokkaido (Japan Sea) earthquake of July 12, 1993. The initial solution and the curvature information of the objective function that gradient methods need are not required in our approach. Moreover, globally optimal solutions can be efficiently obtained. Calculation of polarities based on double-couple models is the most time-consuming part of the source mechanism determination. The amount of calculations required by the method designed in this study is much less than that of previous grid search methods.
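A sketch of the kind of GA loop such a first-motion polarity search might use: candidate (strike, dip, rake) triples are scored by how many observed polarities they reproduce, and the fitter half of the population is recombined and mutated. The radiation-pattern sign computation is left to a caller-supplied `predict` function (a hypothetical helper standing in for the double-couple polarity calculation), and the population size, generation count, and mutation rate are illustrative assumptions:

    import numpy as np

    def ga_focal_mechanism(obs, predict, n_pop=200, n_gen=100, p_mut=0.1, seed=0):
        """Search (strike, dip, rake) maximizing agreement with observed polarities.

        obs is a list of (takeoff_angle, azimuth, polarity); predict(mech, takeoff, azimuth)
        must return the model polarity (+1/-1) for a candidate mechanism at one station."""
        rng = np.random.default_rng(seed)
        pop = rng.uniform([0.0, 0.0, -180.0], [360.0, 90.0, 180.0], size=(n_pop, 3))
        def fitness(mech):
            return sum(predict(mech, toa, az) == pol for toa, az, pol in obs)
        for _ in range(n_gen):
            scores = np.array([fitness(m) for m in pop])
            parents = pop[np.argsort(scores)[-(n_pop // 2):]]            # keep the fitter half
            pairs = parents[rng.integers(len(parents), size=(n_pop - len(parents), 2))]
            children = pairs.mean(axis=1)                                # blend crossover of two parents
            jitter = rng.normal(0.0, 10.0, children.shape)               # Gaussian mutation, in degrees
            children = children + (rng.random(children.shape) < p_mut) * jitter
            pop = np.vstack([parents, children])
        scores = np.array([fitness(m) for m in pop])
        return pop[np.argmax(scores)]

With a real radiation-pattern predictor, the returned triple is the double-couple model that matches the largest number of observed polarities.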
Foreshocks and aftershocks of Pisagua 2014 earthquake: time and space evolution of megathrust event.
NASA Astrophysics Data System (ADS)
Fuenzalida Velasco, Amaya; Rietbrock, Andreas; Wollam, Jack; Thomas, Reece; de Lima Neto, Oscar; Tavera, Hernando; Garth, Thomas; Ruiz, Sergio
2016-04-01
The 2014 Pisagua earthquake of magnitude 8.2 is the first case in Chile where a foreshock sequence was clearly recorded by a local network, as well as the complete sequence including the mainshock and its aftershocks. The seismicity of the last year before the mainshock includes numerous clusters close to the epicentral zone (Ruiz et al., 2014), but it was on 16 March that this activity became stronger, with the Mw 6.7 precursory event taking place off the coast of Iquique at 12 km depth. The Pisagua earthquake occurred on 1 April 2014, rupturing almost 120 km N-S, and two days later a Mw 7.6 aftershock occurred in the south of the rupture, enlarging the zone affected by this sequence. In this work, we analyse the foreshock and aftershock sequence of the Pisagua earthquake, through the spatial and temporal evolution of a total of 15,764 events recorded from 1 March to 31 May 2014. This event catalogue was obtained from the automatic analysis of raw seismic data from more than 50 stations installed in the north of Chile and the south of Peru. We used the STA/LTA algorithm for the detection of P and S arrival times on the vertical components and then a method of back propagation in a 1D velocity model for event association and preliminary location of the hypocenters, following the algorithm outlined by Rietbrock et al. (2012). These results were then improved by locating with the NonLinLoc software using a regional velocity model. We selected the larger events to analyse their moment tensor solutions by full waveform inversion using the ISOLA software. In order to understand the process of nucleation and propagation of the Pisagua earthquake, we also analysed the evolution in time of the seismicity over the three months of data. The zone where the precursory events took place was strongly activated two weeks before the mainshock and remained very active until the end of the analysed period, with an important part of the seismicity located in the upper plate and showing variations in focal mechanisms. The evolution of the Pisagua sequence points to a stepwise rupture, which we suggest is related to the properties of the upper plate as well as of the subduction interface. The spatial distribution of seismicity was compared to the inter-seismic coupling from previous studies, the regional bathymetry and the slip distribution of both the mainshock and the magnitude 7.6 event. The results show an important relation between the low-coupling zones and the areas lacking large-magnitude events.
NASA Astrophysics Data System (ADS)
Murray, Jon E.; Brindley, Helen E.; Bryant, Robert G.; Russell, Jacqui E.; Jenkins, Katherine F.
2013-04-01
Understanding the processes governing the availability and entrainment of mineral dust into the atmosphere requires dust sources to be identified and the evolution of dust events to be monitored. To achieve this aim, a wide range of approaches has been developed utilising observations from a variety of different satellite sensors. Global maps of source regions and their relative strengths have been derived from instruments in low Earth orbit (e.g. Total Ozone Mapping Spectrometer (TOMS) (Prospero et al., 2002), MODerate resolution Imaging Spectrometer (MODIS) (Ginoux et al., 2012)). Instruments such as MODIS can also be used to improve precise source location (Baddock et al., 2009) but the information available is restricted to the satellite overpass times which may not be coincident with active dust emission from the source. Hence, at a regional scale, some of the more successful approaches used to characterise the activity of different sources use high temporal resolution data available from instruments in geostationary orbit. For example, the widely used red-green-blue (RGB) dust scheme developed by Lensky and Rosenfeld (2008) (hereafter LR2008) makes use of observations from selected thermal channels of the Spinning Enhanced Visible and InfraRed Imager (SEVIRI) in a false colour rendering scheme in which dust appears pink. This scheme has provided the basis for numerous studies of north African dust sources and factors governing their activation (e.g. Schepanski et al., 2007, 2009, 2012). However, the LR2008 imagery can fail to identify dust events due to the effects of atmospheric moisture, variations in dust layer height and optical properties, and surface conditions (Brindley et al., 2012). Here we introduce a new method designed to circumvent some of these issues and enhance the signature of dust events using observations from SEVIRI. The approach involves the derivation of a composite clear-sky signal for selected channels on an individual time-step and pixel basis. These composite signals are subtracted from each observation in the relevant channels to enhance weak transient signals associated with low levels of dust emission. Different channel combinations are then rendered in false colour imagery to better identify dust source locations and activity. We have applied this new clear-sky difference (CSD) algorithm over three key source regions in southern Africa: the Makgadikgadi Basin, Etosha Pan, and the Namibian and western South African coast. Case studies indicate that advantages associated with the CSD approach include an improved ability to detect dust and distinguish multiple sources, the observation of source activation earlier in the diurnal cycle, and an improved ability to pinpoint dust source locations. These advantages are confirmed by a survey of four years of data, comparing the results obtained using the CSD technique with those derived from LR2008 dust imagery. On average the new algorithm more than doubles the number of dust events identified, with the greatest improvement for the Makgadikgadi Basin and coastal regions. We anticipate exploiting this new activation record derived using the CSD approach to better understand the surface and meteorological conditions controlling dust uplift and subsequent atmospheric transport.
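As described above, the CSD enhancement subtracts a per-pixel, per-time-slot composite clear-sky signal from each observation. A minimal single-channel sketch; the percentile-over-a-trailing-window compositing rule and the window length are illustrative assumptions rather than the authors' exact procedure:

    import numpy as np

    def clear_sky_difference(bt, window_days=14, pct=50.0):
        """Clear-sky-difference enhancement for one SEVIRI channel.

        bt has shape (days, slots_per_day, ny, nx) of brightness temperatures.
        For each day, time slot and pixel, a composite 'clear-sky' value is taken
        as a percentile over the preceding window_days days at the same slot and
        subtracted from the observation."""
        days = bt.shape[0]
        csd = np.full(bt.shape, np.nan, dtype=float)
        for d in range(window_days, days):
            composite = np.nanpercentile(bt[d - window_days:d], pct, axis=0)   # (slots, ny, nx)
            csd[d] = bt[d] - composite                                         # weak dust signals stand out
        return csd

Differences from two or three channels can then be mapped to red, green and blue to produce the enhanced false-colour imagery.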
Jayaraman, Chandrasekaran; Mummidisetty, Chaithanya Krishna; Mannix-Slobig, Alannah; McGee Koch, Lori; Jayaraman, Arun
2018-03-13
Monitoring physical activity and leveraging wearable sensor technologies to facilitate active living in individuals with neurological impairment has been shown to yield benefits in terms of health and quality of living. In this context, accurate measurement of physical activity estimates from these sensors is vital. However, wearable sensor manufacturers generally only provide standard proprietary algorithms based on healthy individuals to estimate physical activity metrics, which may lead to inaccurate estimates in populations with neurological impairment such as stroke and incomplete spinal cord injury (iSCI). The main objective of this cross-sectional investigation was to evaluate the validity of physical activity estimates provided by standard proprietary algorithms for individuals with stroke and iSCI. Two research-grade wearable sensors used in clinical settings were chosen, and the outcome metrics estimated using standard proprietary algorithms were validated against designated gold standard measures (Cosmed K4B2 for energy expenditure and metabolic equivalent and manual tallying for step counts). The influence of sensor location, sensor type and activity characteristics was also studied. 28 participants (Healthy (n = 10); incomplete SCI (n = 8); stroke (n = 10)) performed a spectrum of activities in a laboratory setting using two wearable sensors (ActiGraph and Metria-IH1) at different body locations. Manufacturer-provided standard proprietary algorithms estimated the step count, energy expenditure (EE) and metabolic equivalent (MET). These estimates were compared with the estimates from gold standard measures. For verifying validity, a series of Kruskal-Wallis ANOVA tests (Games-Howell multiple comparison for post-hoc analyses) were conducted to compare the mean rank and absolute agreement of outcome metrics estimated by each of the devices in comparison with the designated gold standard measurements. The sensor type, sensor location, activity characteristics and the population-specific condition influence the validity of estimation of physical activity metrics using standard proprietary algorithms. Implementing population-specific customized algorithms accounting for the influences of sensor location, type and activity characteristics for estimating physical activity metrics in individuals with stroke and iSCI could be beneficial.
RBT-GA: a novel metaheuristic for solving the Multiple Sequence Alignment problem.
Taheri, Javid; Zomaya, Albert Y
2009-07-07
Multiple Sequence Alignment (MSA) has always been an active area of research in Bioinformatics. MSA is mainly focused on discovering biologically meaningful relationships among different sequences or proteins in order to investigate the underlying main characteristics/functions. This information is also used to generate phylogenetic trees. This paper presents a novel approach, namely RBT-GA, to solve the MSA problem using a hybrid solution methodology combining the Rubber Band Technique (RBT) and the Genetic Algorithm (GA) metaheuristic. RBT is inspired by the behavior of an elastic Rubber Band (RB) on a plate with several poles, which is analogous to locations in the input sequences that could potentially be biologically related. A GA attempts to mimic the evolutionary processes of life in order to locate optimal solutions in an often very complex landscape. RBT-GA is a population-based optimization algorithm designed to find the optimal alignment for a set of input protein sequences. In this novel technique, each alignment answer is modeled as a chromosome consisting of several poles in the RBT framework. These poles resemble locations in the input sequences that are most likely to be correlated and/or biologically related. A GA-based optimization process improves these chromosomes gradually, yielding a set of mostly optimal answers for the MSA problem. RBT-GA is tested with one of the well-known benchmark suites (BAliBASE 2.0) in this area. The obtained results show the superiority of the proposed technique even in the case of formidable sequences.
Passive imaging of hydrofractures in the South Belridge diatomite
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ilderton, D.C.; Patzek, T.W.; Rector, J.W.
1996-03-01
The authors present the results of a seismic analysis of two hydrofractures spanning the entire diatomite column (1,110-1,910 ft or 338-582 m) in Shell's Phase 2 steam drive pilot in South Belridge, California. These hydrofractures were induced at two depths (1,110-1,460 and 1,560-1,910 ft) and imaged passively using the seismic energy released during fracturing. The arrivals of shear waves from the cracking rock (microseismic events) were recorded at a 1 ms sampling rate by 56 geophones in three remote observation wells, resulting in 10 GB of raw data. These arrival times were then inverted for the event locations, from which the hydrofracture geometry was inferred. A five-dimensional conjugate-gradient algorithm with a depth-dependent, but otherwise constant shear wave velocity model (CVM) was developed for the inversions. To validate CVM, they created a layered shear wave velocity model of the formation and used it to calculate synthetic arrival times from known locations chosen at various depths along the estimated fracture plane. These arrival times were then inverted with CVM and the calculated locations compared with the known ones, quantifying the systematic error associated with the assumption of constant shear wave velocity. They also performed Monte Carlo sensitivity analyses on the synthetic arrival times to account for all other random errors that exist in field data. After determining the limitations of the inversion algorithm, they hand-picked the shear wave arrival times for both hydrofractures and inverted them with CVM.
NASA Astrophysics Data System (ADS)
Kumar, Rakesh; Chandrawat, Rajesh Kumar; Garg, B. P.; Joshi, Varun
2017-07-01
Opening a new firm or branch with the desired performance is closely related to the facility location problem. When locating new ambulances and firehouses, for example, the government seeks to minimize the average response time for emergencies across all residents of a city. Finding the best location is therefore a major practical challenge, and problems of this type are known as facility location problems. Many algorithms have been developed to handle them. In this paper, we review five algorithms that have been applied to facility location problems. The significance of clustering in facility location problems is also presented. First we compare the Fuzzy c-means clustering (FCM) algorithm with the alternating heuristic (AH) algorithm, and then with Particle Swarm Optimization (PSO) algorithms using different types of distance functions. The data were clustered with the help of FCM, and then we applied the median model and the min-max problem model to those data. After finding optimized locations using these algorithms, we compute the distance from the optimized location to each demand point with different distance techniques and compare the results. Finally, we design a general example to validate the feasibility of the five algorithms for facility location optimization and to assess their advantages and drawbacks.
DL-ADR: a novel deep learning model for classifying genomic variants into adverse drug reactions.
Liang, Zhaohui; Huang, Jimmy Xiangji; Zeng, Xing; Zhang, Gang
2016-08-10
Genomic variations are associated with the metabolism and the occurrence of adverse reactions of many therapeutic agents. The polymorphisms at over 2000 locations of the cytochrome P450 enzymes (CYP), due to many factors such as ethnicity, mutations, and inheritance, contribute to the diversity of response and side effects of various drugs. The associations among single nucleotide polymorphisms (SNPs), internal pharmacokinetic patterns and the vulnerability to specific adverse reactions have become one of the research interests of pharmacogenomics. Conventional genome-wide association studies (GWAS) mainly focus on the relation of single or multiple SNPs to a specific risk factor, which is a one-to-many relation. However, there are no robust methods to establish a many-to-many network which can combine the direct and indirect associations between multiple SNPs and a series of events (e.g. adverse reactions, metabolic patterns, prognostic factors etc.). In this paper, we present a novel deep learning model based on generative stochastic networks and a hidden Markov chain to classify the observed samples, with SNPs on five loci of two genes (CYP2D6 and CYP1A2), into the vulnerable populations of 14 types of adverse reactions. A supervised deep learning model is proposed in this study. The revised generative stochastic networks (GSN) model, with transitions driven by the hidden Markov chain, is used. The data of the training set are collected from clinical observation. The training set is composed of 83 observations of blood samples with the genotypes on CYP2D6*2, *10, *14 and CYP1A2*1C, *1F, respectively. The samples are genotyped by the polymerase chain reaction (PCR) method. A hidden Markov chain is used as the transition operator to simulate the probabilistic distribution. The model can perform learning at lower cost compared to the conventional maximum likelihood method because the transition distribution is conditional on the previous state of the hidden Markov chain. A least absolute shrinkage and selection operator (LASSO) algorithm and a k-Nearest Neighbors (kNN) algorithm are used as baselines for comparison and to evaluate the performance of our proposed deep learning model. There were 53 adverse reactions reported during the observation. They are assigned to 14 categories. In the comparison of classification accuracy, the deep learning model shows superiority over the LASSO and kNN models with a rate over 80%. In the comparison of reliability, the deep learning model shows the best stability among the three models. Machine learning provides a new method to explore the complex associations among genomic variations and multiple events in pharmacogenomics studies. The new deep learning algorithm is capable of classifying various SNPs to the corresponding adverse reactions. We expect that as more genomic variations are added as features and more observations are made, the deep learning model can improve its performance and can act as a black-box but reliable verifier for other GWAS studies.
Linear feasibility algorithms for treatment planning in interstitial photodynamic therapy
NASA Astrophysics Data System (ADS)
Rendon, A.; Beck, J. C.; Lilge, Lothar
2008-02-01
Interstitial photodynamic therapy (IPDT) has been under intense investigation in recent years, with multiple clinical trials underway. This effort has demanded the development of optimization strategies that determine the best locations and output powers for light sources (cylindrical or point diffusers) to achieve optimal light delivery. Furthermore, we have recently introduced cylindrical diffusers with customizable emission profiles, placing additional requirements on the optimization algorithms, particularly in terms of the stability of the inverse problem. Here, we present a general class of linear feasibility algorithms and their properties. Moreover, we compare two particular instances of these algorithms, which have been used in the context of IPDT: the Cimmino algorithm and a weighted gradient descent (WGD) algorithm. The algorithms were compared in terms of their convergence properties, the cost function they minimize in the infeasible case, their ability to regularize the inverse problem, and the resulting optimal light dose distributions. Our results show that the WGD algorithm overall performs slightly better than the Cimmino algorithm and that it converges to a minimizer of a clinically relevant cost function in the infeasible case. Interestingly, however, treatment plans resulting from either algorithm were very similar in terms of the resulting fluence maps and dose volume histograms, once the diffuser powers were adjusted to achieve equal prostate coverage.
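The Cimmino algorithm treats each dose constraint as a half-space and moves the current source-power vector by the average of its projections onto the violated half-spaces, which is what makes it attractive for large, possibly infeasible systems. A minimal sketch for box-type dose constraints; the uniform weights, fixed relaxation parameter, and non-negativity projection are illustrative choices, not the paper's configuration:

    import numpy as np

    def cimmino(A, lower, upper, n_iter=500, relax=1.0):
        """Cimmino-type simultaneous projection for lower <= A @ x <= upper, x >= 0.

        A maps diffuser output powers x to light dose at tissue sample points; rows
        that violate a bound contribute a projection onto the corresponding
        half-space, and the corrections are averaged."""
        m, n = A.shape
        row_norm2 = np.sum(A * A, axis=1) + 1e-12
        x = np.zeros(n)
        for _ in range(n_iter):
            dose = A @ x
            resid = np.where(dose < lower, lower - dose,
                             np.where(dose > upper, upper - dose, 0.0))   # signed violation per row
            x = x + relax * (A.T @ (resid / row_norm2)) / m               # averaged half-space projections
            x = np.maximum(x, 0.0)                                        # keep powers non-negative
        return x

Because every constraint contributes a correction at each sweep, the iteration degrades gracefully when the bounds cannot all be met, which is the infeasible case discussed above.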
A Bayesian Approach for Sensor Optimisation in Impact Identification
Mallardo, Vincenzo; Sharif Khodaei, Zahra; Aliabadi, Ferri M. H.
2016-01-01
This paper presents a Bayesian approach for optimizing the position of sensors aimed at impact identification in composite structures under operational conditions. The uncertainty in the sensor data has been represented by statistical distributions of the recorded signals. An optimisation strategy based on the genetic algorithm is proposed to find the best sensor combination aimed at locating impacts on composite structures. A Bayesian-based objective function is adopted in the optimisation procedure as an indicator of the performance of meta-models developed for different sensor combinations to locate various impact events. To represent a real structure under operational load and to increase the reliability of the Structural Health Monitoring (SHM) system, the probability of malfunctioning sensors is included in the optimisation. The reliability and the robustness of the procedure are tested with experimental and numerical examples. Finally, the proposed optimisation algorithm is applied to a composite stiffened panel for both uniform and non-uniform probabilities of impact occurrence. PMID:28774064
Seismicity of the Bering Glacier Region: Inferences from Relocations Using Data from STEEP
NASA Astrophysics Data System (ADS)
Panessa, A. L.; Pavlis, G. L.; Hansen, R. A.; Ruppert, N.
2008-12-01
We relocated earthquakes recorded from 1990 to 2007 in the area of the Bering Glacier in southeastern Alaska to test a hypothesis that faults in this area are linked to glaciers. We used waveform correlation to improve arrival time measurements for data from all broadband channels including all the data from the STEEP experiment. We used a novel form of correlation based on interactive array processing of common receiver gathers linked to a three-dimensional grid of control points. This procedure produced 8556 gathers that we processed interactively to produce improved arrival time estimates. The interactive procedure allowed us to select which events in each gather were sufficiently similar to warrant correlation. Redundancy in the result was resolved in a secondary correlation that aligned event stacks of the same station-event pair associated with multiple control points. This procedure yielded only 2240 waveforms that correlated and modified only a total of 524 arrivals in a total database of 12263 arrivals. The correlation procedure changed arrival times on 145 of 509 events in this database. Events with arrivals constrained by correlation were not clustered but were randomly distributed throughout the study area. We used a version of the Progressive Multiple Event Location (PMEL) that analyzed data at each control point to invert for relative locations and a set of path anomalies for each control point. We applied the PMEL procedure with different velocity models and constraints and compared the results to a HypoDD solution produced from the original arrival time data. The relocations are all significant improvements from the standard single-event, catalog locations. The relocations suggest the seismicity in this region is mostly linked to fold and thrust deformation in the Yakutat block. There is a suggestion of a north-dipping trend to much of the seismicity, but the dominant trend is a fairly diffuse cloud of events largely confined to the Yakutat block south of the Bagley Icefield. This is consistent with the recently published tectonic model by Berger et al. (2008).
Thubagere, Anupama J; Li, Wei; Johnson, Robert F; Chen, Zibo; Doroudi, Shayan; Lee, Yae Lim; Izatt, Gregory; Wittman, Sarah; Srinivas, Niranjan; Woods, Damien; Winfree, Erik; Qian, Lulu
2017-09-15
Two critical challenges in the design and synthesis of molecular robots are modularity and algorithm simplicity. We demonstrate three modular building blocks for a DNA robot that performs cargo sorting at the molecular level. A simple algorithm encoding recognition between cargos and their destinations allows for a simple robot design: a single-stranded DNA with one leg and two foot domains for walking, and one arm and one hand domain for picking up and dropping off cargos. The robot explores a two-dimensional testing ground on the surface of DNA origami, picks up multiple cargos of two types that are initially at unordered locations, and delivers them to specified destinations until all molecules are sorted into two distinct piles. The robot is designed to perform a random walk without any energy supply. Exploiting this feature, a single robot can repeatedly sort multiple cargos. Localization on DNA origami allows for distinct cargo-sorting tasks to take place simultaneously in one test tube or for multiple robots to collectively perform the same task. Copyright © 2017, American Association for the Advancement of Science.
NASA Astrophysics Data System (ADS)
LIU, Q.; Lv, Q.; Klucik, R.; Chen, C.; Gallaher, D. W.; Grant, G.; Shang, L.
2016-12-01
Due to the high volume and complexity of satellite data, computer-aided tools for fast quality assessments and scientific discovery are indispensable for scientists in the era of Big Data. In this work, we have developed a framework for automated anomalous event detection in massive satellite data. The framework consists of a clustering-based anomaly detection algorithm and a cloud-based tool for interactive analysis of detected anomalies. The algorithm is unsupervised and requires no prior knowledge of the data (e.g., expected normal pattern or known anomalies). As such, it works for diverse data sets, and performs well even in the presence of missing and noisy data. The cloud-based tool provides an intuitive mapping interface that allows users to interactively analyze anomalies using multiple features. As a whole, our framework can (1) identify outliers in a spatio-temporal context, (2) recognize and distinguish meaningful anomalous events from individual outliers, (3) rank those events based on "interestingness" (e.g., rareness or total number of outliers) defined by users, and (4) enable interactive querying, exploration, and analysis of those anomalous events. In this presentation, we will demonstrate the effectiveness and efficiency of our framework in the application of detecting data quality issues and unusual natural events using two satellite datasets. The techniques and tools developed in this project are applicable for a diverse set of satellite data and will be made publicly available for scientists in early 2017.
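A clustering-based, unsupervised outlier detector of the general kind described here can be sketched as a plain k-means fit followed by per-cluster distance screening; the cluster count and threshold below are illustrative assumptions, not the framework's settings:

    import numpy as np

    def cluster_outliers(X, k=8, n_iter=50, z_thresh=3.0, seed=0):
        """Flag points unusually far from their nearest cluster centre as outliers.

        X is an (n, p) array of per-pixel feature vectors. Grouping flagged points
        into spatio-temporal events and ranking them by 'interestingness', as the
        framework does, is omitted here."""
        rng = np.random.default_rng(seed)
        centres = X[rng.choice(len(X), size=k, replace=False)].astype(float)
        for _ in range(n_iter):
            d = np.linalg.norm(X[:, None, :] - centres[None, :, :], axis=2)
            labels = d.argmin(axis=1)
            for j in range(k):
                members = X[labels == j]
                if len(members):
                    centres[j] = members.mean(axis=0)
        d = np.linalg.norm(X[:, None, :] - centres[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        dist = d[np.arange(len(X)), labels]
        flags = np.zeros(len(X), dtype=bool)
        for j in range(k):
            sel = labels == j
            if sel.sum() > 1:
                z = (dist[sel] - dist[sel].mean()) / (dist[sel].std() + 1e-12)   # per-cluster z-score
                flags[sel] = z > z_thresh
        return flags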
NASA Astrophysics Data System (ADS)
Floriane, Provost; Jean-Philippe, Malet; Cécile, Doubre; Julien, Gance; Alessia, Maggi; Agnès, Helmstetter
2015-04-01
Characterizing the micro-seismic activity of landslides is important for a better understanding of the physical processes controlling landslide behaviour. However, the location of the seismic sources on landslides is a challenging task, mostly because of (a) the recording system geometry, (b) the lack of clear P-wave arrivals and clear wave differentiation, and (c) the heterogeneous velocities of the ground. The objective of this work is therefore to test whether the integration of a 3D velocity model in probabilistic seismic source location codes improves the quality of the determination, especially in depth. We studied the clay-rich landslide of Super-Sauze (French Alps). Most of the seismic events (rockfalls, slidequakes, tremors...) are generated in the upper part of the landslide near the main scarp. The seismic recording system is composed of two antennas with four vertical seismometers each, located on the east and west sides of the seismically active part of the landslide. A refraction seismic campaign was conducted in August 2014 and a 3D P-wave model was estimated using the Quasi-Newton tomography inversion algorithm. The shots of the seismic campaign are used as calibration shots to test the performance of the different location methods and to further update the 3D velocity model. Natural seismic events are detected with a semi-automatic technique using a frequency threshold. The first arrivals are picked using a kurtosis-based method and compared to manual picking. Several location methods were finally tested. We compared a non-linear probabilistic method coupled with the 3D P-wave model and a beam-forming method inverted for an apparent velocity. We found that the Quasi-Newton tomography inversion algorithm provides results coherent with the original underlying topography. The velocity ranges from 500 m s^-1 at the surface to 3000 m s^-1 in the bedrock. For the majority of the calibration shots, the use of a 3D velocity model significantly improves the results of the location procedure using P-wave arrivals. All the shots were made 50 centimeters below the surface and hence the vertical error could not be determined with the seismic campaign. We further discriminate the rockfalls and the slidequakes occurring on the landslide using the depth computed with the 3D velocity model. This could be an additional criterion to automatically classify the events.
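A kurtosis-based picker exploits the jump in the fourth-order statistic of a sliding window when an impulsive arrival enters it: the onset is placed where the kurtosis characteristic function rises most steeply. A minimal sketch (the window length is an illustrative assumption, and no pre-filtering is applied here):

    import numpy as np

    def kurtosis_pick(trace, fs, win=1.0):
        """Return the sample index where the sliding-window kurtosis increases most sharply."""
        n = int(win * fs)
        x = np.asarray(trace, dtype=float)
        kur = np.full(len(x), np.nan)
        for i in range(n, len(x)):
            w = x[i - n:i]
            w = w - w.mean()
            s2 = (w ** 2).mean() + 1e-20
            kur[i] = (w ** 4).mean() / s2 ** 2       # sample kurtosis of the trailing window
        dk = np.diff(kur)
        dk[np.isnan(dk)] = 0.0
        return int(np.argmax(dk))                     # steepest kurtosis increase taken as the onset

Several published variants refine the pick by searching backwards from the kurtosis maximum to where the function starts to rise; that refinement is omitted here.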
Peinemann, Frank; Kleijnen, Jos
2015-01-01
Objectives To develop an algorithm that aims to provide guidance and awareness for choosing multiple study designs in systematic reviews of healthcare interventions. Design Method study: (1) To summarise the literature base on the topic. (2) To apply the integration of various study types in systematic reviews. (3) To devise decision points and outline a pragmatic decision tree. (4) To check the plausibility of the algorithm by backtracking its pathways in four systematic reviews. Results (1) The results of our systematic review of the published literature have already been published. (2) We recaptured the experience from our four previously conducted systematic reviews that required the integration of various study types. (3) We chose length of follow-up (long, short), frequency of events (rare, frequent) and types of outcome as decision points (death, disease, discomfort, disability, dissatisfaction) and aligned the study design labels according to the Cochrane Handbook. We also considered practical or ethical concerns, and the problem of unavailable high-quality evidence. While applying the algorithm, disease-specific circumstances and aims of interventions should be considered. (4) We confirmed the plausibility of the pathways of the algorithm. Conclusions We propose that the algorithm can assist to bring seminal features of a systematic review with multiple study designs to the attention of anyone who is planning to conduct a systematic review. It aims to increase awareness and we think that it may reduce the time burden on review authors and may contribute to the production of a higher quality review. PMID:26289450
Inter-satellite links for satellite autonomous integrity monitoring
NASA Astrophysics Data System (ADS)
Rodríguez-Pérez, Irma; García-Serrano, Cristina; Catalán Catalán, Carlos; García, Alvaro Mozo; Tavella, Patrizia; Galleani, Lorenzo; Amarillo, Francisco
2011-01-01
A new integrity monitoring mechanism to be implemented on board a GNSS, taking advantage of inter-satellite links, is introduced. It is based on accurate range and Doppler measurements affected neither by atmospheric delays nor by local ground degradation (multipath and interference). By a linear combination of the Inter-Satellite Link Observables, appropriate observables for both satellite orbit and clock monitoring are obtained, and with the proposed algorithms it is possible to reduce the time-to-alarm and the probability of undetected satellite anomalies. Several test cases have been run to assess the performance of the new orbit and clock monitoring algorithms against a complete scenario (satellite-to-satellite and satellite-to-ground links) and a satellite-only scenario. The results of this experimentation campaign demonstrate that the Orbit Monitoring Algorithm is able to detect orbital feared events while the position error at the worst user location is still within acceptable limits. For instance, an unplanned manoeuvre in the along-track direction is detected (with a probability of false alarm equal to 5 × 10^-9) when the position error at the worst user location is 18 cm. The experimentation also reveals that the clock monitoring algorithm is able to detect phase jumps, frequency jumps and instability degradation of the clocks, but the latency of detection as well as the detection performance strongly depends on the noise added by the clock measurement system.
NASA Astrophysics Data System (ADS)
Che, Il-Young; Stump, Brian W.; Lee, Hee-Il
2011-04-01
The dependence of infrasound propagation on the season and path environment was quantified by the analysis of more than 1000 repetitive infrasonic ground-truth events at an active, open-pit mine over two years. Blast-associated infrasonic signals were analysed from two infrasound arrays (CHNAR and ULDAR) located at similar distances of 181 and 169 km, respectively, from the source but in different azimuthal directions and with different path environments. The CHNAR array is located to the NW of the source area with primarily a continental path, whereas the ULDAR is located East of the source with a path dominated by open ocean. As a result, CHNAR observations were dominated by stratospheric phases with characteristic celerities of 260-289 m s^-1 and large seasonal variations in the traveltime, whereas data from ULDAR consisted primarily of tropospheric phases with larger celerities from 322 to 361 m s^-1 and larger daily than seasonal variation in the traveltime. The interpretation of these observations is verified by ray tracing using atmospheric models incorporating daily weather balloon data that characterizes the shallow atmosphere for the two years of the study. Finally, experimental celerity models that included seasonal path effects were constructed from the long-term data set. These experimental celerity models were used to constrain traveltime variations in infrasonic location algorithms providing improved location estimates as illustrated with the empirical data set.
Vehicle coordinated transportation dispatching model base on multiple crisis locations
NASA Astrophysics Data System (ADS)
Tian, Ran; Li, Shanwei; Yang, Guoying
2018-05-01
Many disastrous events are triggered after unconventional emergencies occur, and the resource requirements of the different disaster sites often differ. It is difficult for a single emergency resource center to satisfy such requirements at the same time. Therefore, coordinating the emergency resources stored at multiple emergency resource centers and delivering them to the various disaster sites requires the coordinated transportation of emergency vehicles. In this paper, addressing the problem of emergency logistics coordination scheduling and based on the relevant constraints of emergency logistics transportation, an emergency resource scheduling model for multiple disaster sites is established.
Detecting and Characterizing Genomic Signatures of Positive Selection in Global Populations
Liu, Xuanyao; Ong, Rick Twee-Hee; Pillai, Esakimuthu Nisha; Elzein, Abier M.; Small, Kerrin S.; Clark, Taane G.; Kwiatkowski, Dominic P.; Teo, Yik-Ying
2013-01-01
Natural selection is a significant force that shapes the architecture of the human genome and introduces diversity across global populations. The question of whether advantageous mutations have arisen in the human genome as a result of single or multiple mutation events remains unanswered except for the fact that there exist a handful of genes such as those that confer lactase persistence, affect skin pigmentation, or cause sickle cell anemia. We have developed a long-range-haplotype method for identifying genomic signatures of positive selection to complement existing methods, such as the integrated haplotype score (iHS) or cross-population extended haplotype homozygosity (XP-EHH), for locating signals across the entire allele frequency spectrum. Our method also locates the founder haplotypes that carry the advantageous variants and infers their corresponding population frequencies. This presents an opportunity to systematically interrogate the whole human genome whether a selection signal shared across different populations is the consequence of a single mutation process followed subsequently by gene flow between populations or of convergent evolution due to the occurrence of multiple independent mutation events either at the same variant or within the same gene. The application of our method to data from 14 populations across the world revealed that positive-selection events tend to cluster in populations of the same ancestry. Comparing the founder haplotypes for events that are present across different populations revealed that convergent evolution is a rare occurrence and that the majority of shared signals stem from the same evolutionary event. PMID:23731540
Princic, Nicole; Gregory, Chris; Willson, Tina; Mahue, Maya; Felici, Diana; Werther, Winifred; Lenhart, Gregory; Foley, Kathleen A
2016-01-01
The objective was to expand on prior work by developing and validating a new algorithm to identify multiple myeloma (MM) patients in administrative claims. Two files were constructed to select MM cases from MarketScan Oncology Electronic Medical Records (EMR) and controls from the MarketScan Primary Care EMR during January 1, 2000-March 31, 2014. Patients were linked to MarketScan claims databases, and files were merged. Eligible cases were age ≥18, had a diagnosis and visit for MM in the Oncology EMR, and were continuously enrolled in claims for ≥90 days preceding and ≥30 days after diagnosis. Controls were age ≥18, had ≥12 months of overlap in claims enrollment (observation period) in the Primary Care EMR and ≥1 claim with an ICD-9-CM diagnosis code of MM (203.0×) during that time. Controls were excluded if they had chemotherapy; stem cell transplant; or text documentation of MM in the EMR during the observation period. A split sample was used to develop and validate algorithms. A maximum of 180 days prior to and following each MM diagnosis was used to identify events in the diagnostic process. Of 20 algorithms explored, the baseline algorithm of 2 MM diagnoses and the 3 best performing were validated. Values for sensitivity, specificity, and positive predictive value (PPV) were calculated. Three claims-based algorithms were validated with ~10% improvement in PPV (87-94%) over prior work (81%) and the baseline algorithm (76%) and can be considered for future research. Consistent with prior work, it was found that MM diagnoses before and after tests were needed.
Locating Local Earthquakes Using Single 3-Component Broadband Seismological Data
NASA Astrophysics Data System (ADS)
Das, S. B.; Mitra, S.
2015-12-01
We devised a technique to locate local earthquakes using a single 3-component broadband seismograph and analyze the factors governing the accuracy of our results. The need for devising such a technique arises in regions with a sparse seismic network. In state-of-the-art location algorithms, a minimum of three station recordings is required for obtaining well-resolved locations. However, the problem arises when an event is recorded by fewer than three stations. This may be because of the following reasons: (a) down time of stations in a sparse network; (b) geographically isolated regions with limited logistic support to set up a large network; (c) regions with insufficient economy for financing a multi-station network; and (d) poor signal-to-noise ratio for smaller events at most stations, except the one in its closest vicinity. Our technique provides a workable solution to the above problematic scenarios. However, our methodology is strongly dependent on the velocity model of the region. Our method uses a three-step procedure: (a) ascertain the back-azimuth of the event from the P-wave particle motion recorded on the horizontal components; (b) estimate the hypocentral distance using the S-P time; and (c) ascertain the emergent angle from the vertical and radial components. Once this is obtained, one can ray-trace through the 1-D velocity model to estimate the hypocentral location. We tested our method on synthetic data, which produces results with 99% precision. With observed data, the accuracy of our results is very encouraging. The precision of our results depends on the signal-to-noise ratio (SNR) and the choice of the right band-pass filter to isolate the P-wave signal. We used our method on minor aftershocks (3 < mb < 4) of the 2011 Sikkim earthquake using data from the Sikkim Himalayan network. The locations of these events highlight the transverse strike-slip structure within the Indian plate, which was observed from source mechanism studies of the mainshock and larger aftershocks.
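The first two steps of the procedure can be illustrated compactly: the back-azimuth from the principal axis of the horizontal P-wave particle motion, and the hypocentral distance from the S-P time in a homogeneous medium. A minimal sketch with assumed crustal velocities; the 180-degree ambiguity and the ray tracing through the regional 1-D model of step (c) are left out:

    import numpy as np

    def single_station_estimate(north, east, s_minus_p, vp=6.0, vs=3.5):
        """Back-azimuth (deg) and hypocentral distance (km) from one 3-component record.

        north, east: windowed P-wave particle-motion samples on the horizontal components;
        s_minus_p: S minus P arrival-time difference in seconds. The velocities are
        illustrative crustal values, not a calibrated regional model."""
        # back-azimuth from the dominant polarization direction of the P wave
        cov = np.cov(np.vstack([north, east]))
        w, v = np.linalg.eigh(cov)
        n_dir, e_dir = v[:, np.argmax(w)]                 # eigenvector of the largest eigenvalue
        baz = np.degrees(np.arctan2(e_dir, n_dir)) % 180  # 180-degree ambiguity remains
        # hypocentral distance from the S-P time: d = dt / (1/vs - 1/vp)
        dist = s_minus_p / (1.0 / vs - 1.0 / vp)
        return baz, dist

In practice the 180-degree ambiguity is resolved with the vertical-component polarity, and the straight-ray distance is replaced by ray tracing through the regional 1-D velocity model, as in step (c) above.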
Direction-Sensitive Hand-Held Gamma-Ray Spectrometer
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mukhopadhyay, S.
2012-10-04
A novel, light-weight, hand-held gamma-ray detector with directional sensitivity is being designed. The detector uses a set of multiple rings around two cylindrical surfaces, which provides precise location of two interaction points on two concentric cylindrical planes, wherefrom the source location can be traced back by back projection and/or Compton imaging technique. The detectors are 2.0 × 2.0 mm europium-doped strontium iodide (SrI2:Eu2+) crystals, whose light output has been measured to exceed 120,000 photons/MeV, making it one of the brightest scintillators in existence. The crystal’s energy resolution, less than 3% at 662 keV, is also excellent, and the response is highly linear over a wide range of gamma-ray energies. The emission of SrI2:Eu2+ is well matched to both photo-multiplier tubes and blue-enhanced silicon photodiodes. The solid-state photomultipliers used in this design (each 2.0 × 2.0 mm) are arrays of active pixel sensors (avalanche photodiodes driven beyond their breakdown voltage in reverse bias); each pixel acts as a binary photon detector, and their summed output is an analog representation of the total photon energy, while the individual pixel accurately defines the point of interaction. A simple back-projection algorithm involving cone-surface mapping is being modeled. The back projection for an event cone is a conical surface defining the possible location of the source. The cone axis is the straight line passing through the first and second interaction points.
Operational EEW Networks in Turkey
NASA Astrophysics Data System (ADS)
Zulfikar, Can; Pinar, Ali
2016-04-01
There are several EEW networks and algorithms under operation in Turkey. The first EEW system was deployed in Istanbul in 2002 after the 1999 Mw7.4 Kocaeli and Mw7.1 Duzce earthquakes. The system consisted of 10 strong-motion stations located as close as possible to the main Marmara Fault line, and it was upgraded in 2012 with 5 ocean-bottom seismometers (OBS) located in the Marmara Sea. The system runs a threshold-based algorithm: an alert is issued when the ground-motion acceleration amplitude exceeds predefined threshold levels within a given time interval at a minimum of 3 stations. Currently, there are two end-users of the EEW system in Istanbul. The critical facilities of the Istanbul Gas Distribution Company (IGDAS) and the Marmaray tube tunnel receive the EEW information in order to activate their automatic shut-off mechanisms. IGDAS has its own strong-motion network located at its district regulators. After the EEW signal is received, if the threshold values of the ground-motion parameters are exceeded, the gas flow is cut automatically at the district regulators. IGDAS has 750 district regulators distributed across Istanbul; at the moment, 110 of them are instrumented with strong-motion accelerometers. In the second stage of the ongoing project, IGDAS proposes to install strong-motion accelerometers at all remaining district regulators. The Marmaray railway tube tunnel is the world's deepest immersed tube tunnel, at 60 m below sea level; it is 1.4 km long and consists of 13 segments. The tunnel is monitored with 2 strong-motion accelerometers in each segment, 26 in total. Once the EEW signal is received, the monitoring system is activated and the recorded ground-motion parameters are calculated in real time. Depending on the exceedance of threshold levels, further actions are taken, such as reducing the train speed or stopping the train before it enters the tunnel. In Istanbul, on-site EEW systems are also applied in several high-rise buildings. Similar to the threshold-based algorithm, once the threshold level is exceeded at several strong-motion accelerometers installed in a high-rise building, the automated shut-off mechanism is activated to prevent secondary earthquake damage. In addition to the threshold-based EEW system, the regional EEW algorithms Virtual Seismologist (VS), as implemented in SeisComP3 VS(SC3), and PRESTo have also been implemented in the Marmara region of Turkey. These applications use the regional seismic networks. The purpose of the regional EEW systems is to determine the magnitude and location of the event from the P-wave information of the closest 3-4 stations and forward this information to interested sites. Regional EEW systems are also important for Istanbul in order to detect distant earthquakes and provide alerts, especially for high-rise buildings subject to long-duration shaking.
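The station-level threshold logic described above can be illustrated with a short sketch; the PGA threshold, window length, and minimum station count used here are illustrative placeholders, not the operational Istanbul settings.

```python
from collections import deque

class ThresholdEEW:
    """Raise an alert when at least `min_stations` stations exceed `pga_threshold`
    (in g) within a sliding time window of `window_s` seconds."""

    def __init__(self, pga_threshold=0.05, min_stations=3, window_s=5.0):
        self.pga_threshold = pga_threshold
        self.min_stations = min_stations
        self.window_s = window_s
        self.exceedances = deque()          # (time, station_id)

    def ingest(self, t, station_id, pga):
        if pga >= self.pga_threshold:
            self.exceedances.append((t, station_id))
        # drop exceedances older than the window
        while self.exceedances and t - self.exceedances[0][0] > self.window_s:
            self.exceedances.popleft()
        stations = {sid for _, sid in self.exceedances}
        return len(stations) >= self.min_stations   # True -> issue alert (e.g. gas shut-off)
```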
Integration of a Self-Coherence Algorithm into DISAT for Forced Oscillation Detection
DOE Office of Scientific and Technical Information (OSTI.GOV)
Follum, James D.; Tuffner, Francis K.; Amidan, Brett G.
2015-03-03
With the increasing number of phasor measurement units on the power system, behaviors typically not observable on the power system are becoming more apparent. Oscillatory behavior, notably forced oscillations, is one such behavior. However, the large amount of data coming from the PMUs makes manually detecting and locating these oscillations difficult. To automate portions of the process, an oscillation detection routine was coded into the Data Integrity and Situational Awareness Tool (DISAT) framework. Integration into the DISAT framework allows forced oscillations to be detected and information about the event to be provided to operational engineers. The oscillation detection algorithm integrates with the data handling and atypical data detection capabilities of DISAT, building off of a standard library of functions. This report details that integration, with information on the algorithm, some implementation issues, and some sample results from the western United States' power grid.
NASA Astrophysics Data System (ADS)
Guex, Guillaume
2016-05-01
In recent articles about graphs, different models proposed a formalism to find a type of path between two nodes, the source and the target, at a crossroads between the shortest path and the random-walk path. These models include a freely adjustable parameter, allowing one to tune the behavior of the path toward randomized movements or direct routes. This article presents a natural generalization of these models, namely a model with multiple sources and targets. In this context, source nodes can be viewed as locations with a supply of a certain good (e.g. people, money, information) and target nodes as locations with a demand for the same good. An algorithm is constructed to display the flow of goods in the network between sources and targets. Again with a freely adjustable parameter, this flow can be tuned to follow routes of minimum cost, thus displaying the flow in the context of the optimal transportation problem, or, by contrast, a random flow, known to be similar to the electrical current flow if the random walk is reversible. Moreover, a source-target coupling can be retrieved from this flow, offering an optimal assignment for the transportation problem. This algorithm is described in the first part of this article and then illustrated with case studies.
Spatio-Temporal Pattern Mining on Trajectory Data Using Arm
NASA Astrophysics Data System (ADS)
Khoshahval, S.; Farnaghi, M.; Taleai, M.
2017-09-01
Initially, the mobile phone was considered a device for making human connection easier. Today, however, its use has evolved into a platform for gaming, web surfing, and GPS-enabled applications. Embedding GPS in handheld devices has turned them into significant trajectory-data-gathering facilities. Raw GPS trajectory data is a series of points containing hidden information. Revealing that hidden information requires trajectory data analysis. One of the most valuable pieces of concealed information in trajectory data is the user activity pattern. In each pattern there are multiple stops and moves, which identify the places users visited and the tasks they performed. This paper proposes an approach to discover user daily activity patterns from GPS trajectories using association rules. Finding user patterns requires extracting the user's visited places from the stops and moves of the GPS trajectories. To locate stops and moves, we implemented a place recognition algorithm. After extraction of the visited points, an association rule mining algorithm, Apriori, was used to extract user activity patterns. This study shows that there are useful patterns in each trajectory that can be extracted from raw GPS data using association rule mining techniques in order to learn about multiple users' behaviour in a system, and that these patterns can be utilized in various location-based applications.
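As a toy illustration of the rule-mining step, the sketch below assumes stops have already been mapped to labelled places for each daily trajectory; it counts frequent place pairs and derives simple rules with support and confidence, standing in for the full Apriori procedure used in the paper.

```python
from itertools import combinations
from collections import Counter

def mine_place_rules(daily_visits, min_support=0.3, min_confidence=0.6):
    """daily_visits: list of sets of place labels visited in one day."""
    n = len(daily_visits)
    item_counts = Counter(p for day in daily_visits for p in day)
    pair_counts = Counter(frozenset(c)
                          for day in daily_visits
                          for c in combinations(sorted(day), 2))
    rules = []
    for pair, count in pair_counts.items():
        if count / n < min_support:
            continue
        a, b = tuple(pair)
        for lhs, rhs in ((a, b), (b, a)):
            confidence = count / item_counts[lhs]
            if confidence >= min_confidence:
                rules.append((lhs, rhs, count / n, confidence))
    return rules   # (antecedent, consequent, support, confidence)

# example: three days of visited places
print(mine_place_rules([{'home', 'office', 'gym'},
                        {'home', 'office'},
                        {'home', 'mall'}]))
```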
Fast Ss-Ilm a Computationally Efficient Algorithm to Discover Socially Important Locations
NASA Astrophysics Data System (ADS)
Dokuz, A. S.; Celik, M.
2017-11-01
Socially important locations are places which are frequently visited by social media users in their social media lifetime. Discovering socially important locations provides valuable information about user behaviour on social media networking sites. However, discovering socially important locations is challenging due to data volume and dimensions, spatial and temporal calculations, location sparseness in social media datasets, and the inefficiency of current algorithms. In the literature, several studies have been conducted to discover important locations; however, the proposed approaches do not work in a computationally efficient manner. In this study, we propose the Fast SS-ILM algorithm, a modification of SS-ILM, to mine socially important locations efficiently. Experimental results show that the proposed Fast SS-ILM algorithm decreases the execution time of the socially important location discovery process by up to 20%.
Sudden Event Recognition: A Survey
Suriani, Nor Surayahani; Hussain, Aini; Zulkifley, Mohd Asyraf
2013-01-01
Event recognition is one of the most active research areas in video surveillance fields. Advancement in event recognition systems mainly aims to provide convenience, safety and an efficient lifestyle for humanity. A precise, accurate and robust approach is necessary to enable event recognition systems to respond to sudden changes in various uncontrolled environments, such as the case of an emergency, physical threat and a fire or bomb alert. The performance of sudden event recognition systems depends heavily on the accuracy of low level processing, like detection, recognition, tracking and machine learning algorithms. This survey aims to detect and characterize a sudden event, which is a subset of an abnormal event in several video surveillance applications. This paper discusses the following in detail: (1) the importance of a sudden event over a general anomalous event; (2) frameworks used in sudden event recognition; (3) the requirements and comparative studies of a sudden event recognition system and (4) various decision-making approaches for sudden event recognition. The advantages and drawbacks of using 3D images from multiple cameras for real-time application are also discussed. The paper concludes with suggestions for future research directions in sudden event recognition. PMID:23921828
Using Alternative Multiplication Algorithms to "Offload" Cognition
ERIC Educational Resources Information Center
Jazby, Dan; Pearn, Cath
2015-01-01
When viewed through a lens of embedded cognition, algorithms may enable aspects of the cognitive work of multi-digit multiplication to be "offloaded" to the environmental structure created by an algorithm. This study analyses four multiplication algorithms by viewing different algorithms as enabling cognitive work to be distributed…
Underlying-event sensitive observables in Drell–Yan production using GENEVA
Alioli, Simone; Bauer, Christian W.; Guns, Sam; ...
2016-11-09
We present an extension of the Geneva Monte Carlo framework to include multiple parton interactions (MPI) provided by Pythia8. This allows us to obtain predictions for underlying-event sensitive measurements in Drell–Yan production, in conjunction with Geneva's fully differential NNLO calculation, NNLL' resummation for the 0-jet resolution variable (beam thrust), and NLL resummation for the 1-jet resolution variable. We describe the interface with the parton-shower algorithm and MPI model of Pythia8, which preserves both the precision of the partonic N-jet cross sections in Geneva as well as the shower accuracy and good description of soft hadronic physics of Pythia8. We present results for several underlying-event sensitive observables and compare to data from ATLAS and CMS as well as to standalone Pythia8 predictions. This includes a comparison with the recent ATLAS measurement of the beam thrust spectrum, which provides a potential avenue to fully disentangle the physical effects from the primary hard interaction, primary soft radiation, multiple parton interactions, and nonperturbative hadronization.
Adaptive track scheduling to optimize concurrency and vectorization in GeantV
Apostolakis, J.; Bandieramonte, M.; Bitzes, G.; ...
2015-05-22
The GeantV project is focused on the R&D of new particle transport techniques to maximize parallelism on multiple levels, profiting from the use of both SIMD instructions and co-processors for the CPU-intensive calculations specific to this type of application. In our approach, vectors of tracks belonging to multiple events and matching different locality criteria must be gathered and dispatched to algorithms having vector signatures. While the transport propagates tracks and changes their individual states, data locality becomes harder to maintain. The scheduling policy has to be changed to maintain efficient vectors while keeping an optimal level of concurrency. The model has complex dynamics requiring tuning of the thresholds to switch between the normal regime and special modes, i.e. prioritizing events to allow flushing memory, adding new events in the transport pipeline to boost locality, dynamically adjusting the particle vector size, or switching from vector to single-track mode when vectorization causes only overhead. This work requires a comprehensive study to optimize these parameters and make the behaviour of the scheduler self-adapting; its initial results are presented here.
Emission computerized axial tomography from multiple gamma-camera views using frequency filtering.
Pelletier, J L; Milan, C; Touzery, C; Coitoux, P; Gailliard, P; Budinger, T F
1980-01-01
Emission computerized axial tomography is achievable in any nuclear medicine department from multiple gamma camera views. Data are collected by rotating the patient in front of the camera. A simple fast algorithm is implemented, known as the convolution technique: first the projection data are Fourier transformed and then an original filter designed for optimizing resolution and noise suppression is applied; finally the inverse transform of the latter operation is back-projected. This program, which can also take into account the attenuation for single photon events, was executed with good results on phantoms and patients. We think that it can be easily implemented for specific diagnostic problems.
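A compact sketch of the convolution (filtered back-projection) reconstruction described above, assuming parallel-beam geometry and a plain ramp filter in the Fourier domain; attenuation correction for single-photon events and the paper's custom noise-suppression filter are omitted.

```python
import numpy as np

def filtered_back_projection(sinogram, angles_deg):
    """sinogram: array of shape (n_angles, n_detectors); returns a square image."""
    n_angles, n_det = sinogram.shape
    # frequency-domain ramp filter applied to each projection
    freqs = np.fft.fftfreq(n_det)
    ramp = np.abs(freqs)
    filtered = np.real(np.fft.ifft(np.fft.fft(sinogram, axis=1) * ramp, axis=1))

    # back-project each filtered projection across the image grid
    image = np.zeros((n_det, n_det))
    centre = (n_det - 1) / 2.0
    y, x = np.mgrid[0:n_det, 0:n_det] - centre
    det_axis = np.arange(n_det) - centre
    for proj, theta in zip(filtered, np.radians(angles_deg)):
        # detector coordinate of every pixel for this view
        t = x * np.cos(theta) + y * np.sin(theta)
        image += np.interp(t, det_axis, proj, left=0.0, right=0.0)
    return image * np.pi / n_angles
```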
Long Period (LP) volcanic earthquake source location at Merapi volcano by using dense array techniques
NASA Astrophysics Data System (ADS)
Metaxian, Jean Philippe; Budi Santoso, Agus; Laurin, Antoine; Subandriyo, Subandriyo; Widyoyudo, Wiku; Arshab, Ghofar
2015-04-01
Since 2010, Merapi has shown unusual activity compared with previous decades. Powerful phreatic explosions are observed, some of them preceded by LP signals. In the literature, LP seismicity is thought to originate within the fluid, and therefore to be representative of the pressurization state of the volcano plumbing system. Another model suggests that LP events are caused by slow, quasi-brittle, low stress-drop failure driven by transient upper-edifice deformations. Knowledge of the spatial distribution of LP events is fundamental for better understanding the physical processes occurring in the conduit, as well as for monitoring and improving eruption forecasting. LP events recorded at Merapi have a spectral content dominated by frequencies between 0.8 and 3 Hz. To locate the source of these events, we installed a seismic antenna composed of 4 broadband CMG-6TD Güralp stations. This network has an aperture of 300 m. It is located on the site of Pasarbubar, between 500 and 800 m from the crater rim. Two multi-parameter stations (seismic, tiltmeter, S-P) located in the same area, equipped with broadband CMG-40T Güralp sensors, may also be used to complement the antenna data. The source of LP events is located using different approaches. In the first, we use a method based on measuring the time delays between the early onsets of LP events at each array receiver. The observed time-delay differences for each pair of receivers are compared with theoretical values calculated from travel times computed between grid nodes positioned in the structure and each receiver. In a second approach, we estimate the slowness vector using the MUSIC algorithm applied to 3-component data. From the slowness vector, we deduce the back-azimuth and the incidence angle, which give an estimate of the LP source depth in the conduit. This work is part of the Domerapi project funded by the French Agence Nationale de la Recherche (https://sites.google.com/site/domerapi2).
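The first approach, a grid search over candidate source nodes that minimizes the misfit between observed and predicted pairwise time delays, can be sketched as follows; the homogeneous velocity stands in for the travel times that would be computed in the real 3-D structure, and all names and values are illustrative.

```python
import numpy as np

def locate_by_delays(receivers, observed_delays, grid_nodes, velocity=2.0):
    """Grid search over candidate source nodes minimising the pairwise-delay misfit.

    receivers       : (n_rec, 3) receiver coordinates (km)
    observed_delays : dict {(i, j): t_i - t_j} measured arrival-time differences (s)
    grid_nodes      : (n_nodes, 3) candidate source positions (km)
    velocity        : assumed homogeneous velocity (km/s) replacing a 3-D travel-time grid
    """
    best_node, best_misfit = None, np.inf
    for node in grid_nodes:
        travel = np.linalg.norm(receivers - node, axis=1) / velocity
        misfit = sum((travel[i] - travel[j] - dt) ** 2
                     for (i, j), dt in observed_delays.items())
        if misfit < best_misfit:
            best_node, best_misfit = node, misfit
    return best_node, best_misfit
```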
Guyot, Patricia; Ades, A E; Ouwens, Mario J N M; Welton, Nicky J
2012-02-01
The results usually reported for Randomized Controlled Trials (RCTs) with time-to-event outcomes are the median time to event and the Cox hazard ratio. These do not constitute the sufficient statistics required for meta-analysis or cost-effectiveness analysis, and their use in secondary analyses requires strong assumptions that may not have been adequately tested. In order to enhance the quality of secondary data analyses, we propose a method which derives from the published Kaplan-Meier survival curves a close approximation to the original individual patient time-to-event data from which they were generated. We develop an algorithm that maps from digitised curves back to KM data by finding numerical solutions to the inverted KM equations, using, where available, information on the number of events and numbers at risk. The reproducibility and accuracy of survival probabilities, median survival times and hazard ratios based on reconstructed KM data were assessed by comparing published statistics (survival probabilities, medians and hazard ratios) with statistics based on repeated reconstructions by multiple observers. The validation exercise established that there was no material systematic error and that there was a high degree of reproducibility for all statistics. Accuracy was excellent for survival probabilities and medians; for hazard ratios, reasonable accuracy can only be obtained if at least the numbers at risk or the total number of events are reported. The algorithm is a reliable tool for meta-analysis and cost-effectiveness analyses of RCTs reporting time-to-event data. It is recommended that all RCTs report information on numbers at risk and total number of events alongside KM curves.
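As a rough illustration of the underlying Kaplan-Meier inversion, the sketch below estimates the number of events at each digitised step from the survival probabilities and an initial number at risk, ignoring censoring within intervals; this is a strong simplification of the published algorithm, which also exploits reported numbers at risk and total events.

```python
def reconstruct_km_events(times, survival, n_at_risk_start):
    """Approximate the number of events at each digitised KM step, assuming no
    censoring within the intervals (a strong simplification of the full algorithm)."""
    n_at_risk = n_at_risk_start
    prev_s = 1.0
    events = []
    for t, s in zip(times, survival):
        # KM step: S_k = S_{k-1} * (1 - d_k / n_k)  =>  d_k = n_k * (1 - S_k / S_{k-1})
        d = round(n_at_risk * (1.0 - s / prev_s))
        events.append((t, d))
        n_at_risk -= d
        prev_s = s
    return events

# example: a digitised curve dropping to 0.8 and then 0.6 with 10 patients at risk
print(reconstruct_km_events([3.0, 7.5], [0.8, 0.6], 10))
```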
Revealing the source of the radial flow patterns in proton-proton collisions using hard probes
NASA Astrophysics Data System (ADS)
Ortiz, Antonio; Bencédi, Gyula; Bello, Héctor
2017-06-01
In this work, we propose a tool to reveal the origin of the collective-like phenomena observed in proton-proton collisions. We exploit the fundamental difference between the underlying mechanisms, color reconnection and hydrodynamics, which produce radial flow patterns in Pythia 8 and Epos 3, respectively. Specifically, we proceed by examining the strength of the coupling between the soft and hard components which, by construction, is larger in Pythia 8 than in Epos 3. We study the transverse momentum (p_T) distributions of charged pions, kaons and (anti)protons in inelastic pp collisions at √s = 7 TeV produced at mid-rapidity. Specific selections are made on an event-by-event basis as a function of the charged particle multiplicity and the transverse momentum of the leading jet (p_T^jet) reconstructed using the FastJet algorithm at mid-pseudorapidity (|η| < 1). From our studies, quantitative and qualitative differences between Pythia 8 and Epos 3 are found in the p_T spectra when (for a given multiplicity class) the leading jet p_T is increased. In addition, we show that for low-multiplicity events the presence of jets can produce radial flow-like behavior. Motivated by our findings, we propose to perform a similar analysis using experimental data from RHIC and LHC.
An Improved Elastic and Nonelastic Neutron Transport Algorithm for Space Radiation
NASA Technical Reports Server (NTRS)
Clowdsley, Martha S.; Wilson, John W.; Heinbockel, John H.; Tripathi, R. K.; Singleterry, Robert C., Jr.; Shinn, Judy L.
2000-01-01
A neutron transport algorithm including both elastic and nonelastic particle interaction processes for use in space radiation protection for arbitrary shield material is developed. The algorithm is based upon a multiple energy grouping and analysis of the straight-ahead Boltzmann equation by using a mean value theorem for integrals. The algorithm is then coupled to the Langley HZETRN code through a bidirectional neutron evaporation source term. Evaluation of the neutron fluence generated by the solar particle event of February 23, 1956, for an aluminum water shield-target configuration is then compared with MCNPX and LAHET Monte Carlo calculations for the same shield-target configuration. With the Monte Carlo calculation as a benchmark, the algorithm developed in this paper showed a great improvement in results over the unmodified HZETRN solution. In addition, a high-energy bidirectional neutron source based on a formula by Ranft showed even further improvement of the fluence results over previous results near the front of the water target where diffusion out the front surface is important. Effects of improved interaction cross sections are modest compared with the addition of the high-energy bidirectional source terms.
Automatic partitioning of head CTA for enabling segmentation
NASA Astrophysics Data System (ADS)
Suryanarayanan, Srikanth; Mullick, Rakesh; Mallya, Yogish; Kamath, Vidya; Nagaraj, Nithin
2004-05-01
Radiologists perform a CT Angiography procedure to examine vascular structures and associated pathologies such as aneurysms. Volume rendering is used to exploit volumetric capabilities of CT that provides complete interactive 3-D visualization. However, bone forms an occluding structure and must be segmented out. The anatomical complexity of the head creates a major challenge in the segmentation of bone and vessel. An analysis of the head volume reveals varying spatial relationships between vessel and bone that can be separated into three sub-volumes: "proximal", "middle", and "distal". The "proximal" and "distal" sub-volumes contain good spatial separation between bone and vessel (carotid referenced here). Bone and vessel appear contiguous in the "middle" partition that remains the most challenging region for segmentation. The partition algorithm is used to automatically identify these partition locations so that different segmentation methods can be developed for each sub-volume. The partition locations are computed using bone, image entropy, and sinus profiles along with a rule-based method. The algorithm is validated on 21 cases (varying volume sizes, resolution, clinical sites, pathologies) using ground truth identified visually. The algorithm is also computationally efficient, processing a 500+ slice volume in 6 seconds (an impressive 0.01 seconds / slice) that makes it an attractive algorithm for pre-processing large volumes. The partition algorithm is integrated into the segmentation workflow. Fast and simple algorithms are implemented for processing the "proximal" and "distal" partitions. Complex methods are restricted to only the "middle" partition. The partition-enabled segmentation has been successfully tested and results are shown from multiple cases.
Color object detection using spatial-color joint probability functions.
Luo, Jiebo; Crandall, David
2006-06-01
Object detection in unconstrained images is an important image understanding problem with many potential applications. There has been little success in creating a single algorithm that can detect arbitrary objects in unconstrained images; instead, algorithms typically must be customized for each specific object. Consequently, it typically requires a large number of exemplars (for rigid objects) or a large amount of human intuition (for nonrigid objects) to develop a robust algorithm. We present a robust algorithm designed to detect a class of compound color objects given a single model image. A compound color object is defined as having a set of multiple, particular colors arranged spatially in a particular way, including flags, logos, cartoon characters, people in uniforms, etc. Our approach is based on a particular type of spatial-color joint probability function called the color edge co-occurrence histogram. In addition, our algorithm employs perceptual color naming to handle color variation, and prescreening to limit the search scope (i.e., size and location) for the object. Experimental results demonstrated that the proposed algorithm is insensitive to object rotation, scaling, partial occlusion, and folding, outperforming a closely related algorithm based on color co-occurrence histograms by a decisive margin.
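A sketch of how a colour edge co-occurrence histogram might be accumulated, assuming the image has already been quantized into colour-bin indices and an edge map is available from any detector; the offsets and bin count are illustrative, and the perceptual colour naming and prescreening steps described above are not shown.

```python
import numpy as np

def color_edge_cooccurrence(quantized, edges,
                            offsets=((0, 1), (1, 0), (1, 1), (1, -1)), n_colors=16):
    """Build a colour-pair co-occurrence histogram restricted to edge pixels.

    quantized : (H, W) int array of colour-bin indices in [0, n_colors)
    edges     : (H, W) boolean edge map
    offsets   : pixel displacements over which co-occurrences are counted
    """
    hist = np.zeros((n_colors, n_colors), dtype=np.int64)
    h, w = quantized.shape
    ys, xs = np.nonzero(edges)
    for dy, dx in offsets:
        y2, x2 = ys + dy, xs + dx
        ok = (y2 >= 0) & (y2 < h) & (x2 >= 0) & (x2 < w)
        np.add.at(hist, (quantized[ys[ok], xs[ok]], quantized[y2[ok], x2[ok]]), 1)
    return hist / max(hist.sum(), 1)        # normalise so histograms are comparable
```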
NASA Satellite Monitoring of Water Clarity in Mobile Bay for Nutrient Criteria Development
NASA Technical Reports Server (NTRS)
Blonski, Slawomir; Holekamp, Kara; Spiering, Bruce A.
2009-01-01
This project has demonstrated the feasibility of deriving, from daily MODIS measurements, time series of water clarity parameters that cover a specific location or area of interest on 30-50% of days. Time series derived for estuarine and coastal waters display much higher variability than time series of ecological parameters (such as vegetation indices) derived for land areas; the temporal filtering often applied in terrestrial studies cannot be used effectively in ocean color processing. IOP-based algorithms for retrieval of the diffuse light attenuation coefficient and TSS concentration perform well for the Mobile Bay environment: only a minor adjustment was needed in the TSS algorithm, despite the generally recognized dependence of such algorithms on local conditions. The current IOP-based algorithm for retrieval of chlorophyll a concentration has not performed as well: a more reliable algorithm is needed, which may be based on IOPs at additional wavelengths or on remote sensing reflectance from multiple spectral bands. The CDOM algorithm also needs improvement to provide better separation between the effects of gilvin (gelbstoff) and detritus. Identification or development of such an algorithm requires more data from in situ measurements of CDOM concentration in Gulf of Mexico coastal waters (an ongoing collaboration with the EPA Gulf Ecology Division).
A Robust Sound Source Localization Approach for Microphone Array with Model Errors
NASA Astrophysics Data System (ADS)
Xiao, Hua; Shao, Huai-Zong; Peng, Qi-Cong
In this paper, a robust sound source localization approach is proposed. The approach retains good performance even when model errors exist. Compared with previous work in this field, the contributions of this paper are as follows. First, an improved broad-band and near-field array model is proposed. It takes array gain and phase perturbations into account and is based on the actual positions of the elements. It can be used with arbitrary planar geometry arrays. Second, a subspace model-error estimation algorithm and a Weighted 2-Dimension Multiple Signal Classification (W2D-MUSIC) algorithm are proposed. The subspace model-error estimation algorithm estimates the unknown parameters of the array model, i.e., gain, phase perturbations, and positions of the elements, with high accuracy. The performance of this algorithm improves with increasing SNR or number of snapshots. The W2D-MUSIC algorithm based on the improved array model is implemented to locate sound sources. Together, these two algorithms form the robust sound source localization approach. More accurate steering vectors can then be provided for further processing, such as adaptive beamforming. Numerical examples confirm the effectiveness of the proposed approach.
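For orientation, the sketch below computes a standard narrowband, far-field MUSIC pseudo-spectrum over candidate azimuths; the paper's W2D-MUSIC is broadband and near-field and additionally uses the estimated gain, phase, and position perturbations, so this is only a simplified illustration with arbitrary sign conventions.

```python
import numpy as np

def music_spectrum(snapshots, element_xy, wavelength, n_sources, azimuths_deg):
    """Narrowband, far-field MUSIC pseudo-spectrum over candidate azimuths.

    snapshots  : (n_elements, n_snapshots) complex array of sensor data
    element_xy : (n_elements, 2) microphone positions in metres
    """
    r = snapshots @ snapshots.conj().T / snapshots.shape[1]     # sample covariance
    eigvals, eigvecs = np.linalg.eigh(r)                        # ascending eigenvalues
    noise = eigvecs[:, :-n_sources]                             # noise subspace
    spectrum = []
    for az in np.radians(azimuths_deg):
        direction = np.array([np.cos(az), np.sin(az)])
        delays = element_xy @ direction / wavelength
        steering = np.exp(-2j * np.pi * delays)
        denom = np.linalg.norm(noise.conj().T @ steering) ** 2
        spectrum.append(1.0 / denom)
    return np.array(spectrum)      # peaks indicate candidate source azimuths
```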
NASA Astrophysics Data System (ADS)
Acciarri, R.; Adams, C.; An, R.; Anthony, J.; Asaadi, J.; Auger, M.; Bagby, L.; Balasubramanian, S.; Baller, B.; Barnes, C.; Barr, G.; Bass, M.; Bay, F.; Bishai, M.; Blake, A.; Bolton, T.; Camilleri, L.; Caratelli, D.; Carls, B.; Castillo Fernandez, R.; Cavanna, F.; Chen, H.; Church, E.; Cianci, D.; Cohen, E.; Collin, G. H.; Conrad, J. M.; Convery, M.; Crespo-Anadón, J. I.; Del Tutto, M.; Devitt, D.; Dytman, S.; Eberly, B.; Ereditato, A.; Escudero Sanchez, L.; Esquivel, J.; Fadeeva, A. A.; Fleming, B. T.; Foreman, W.; Furmanski, A. P.; Garcia-Gamez, D.; Garvey, G. T.; Genty, V.; Goeldi, D.; Gollapinni, S.; Graf, N.; Gramellini, E.; Greenlee, H.; Grosso, R.; Guenette, R.; Hackenburg, A.; Hamilton, P.; Hen, O.; Hewes, J.; Hill, C.; Ho, J.; Horton-Smith, G.; Hourlier, A.; Huang, E.-C.; James, C.; Jan de Vries, J.; Jen, C.-M.; Jiang, L.; Johnson, R. A.; Joshi, J.; Jostlein, H.; Kaleko, D.; Karagiorgi, G.; Ketchum, W.; Kirby, B.; Kirby, M.; Kobilarcik, T.; Kreslo, I.; Laube, A.; Li, Y.; Lister, A.; Littlejohn, B. R.; Lockwitz, S.; Lorca, D.; Louis, W. C.; Luethi, M.; Lundberg, B.; Luo, X.; Marchionni, A.; Mariani, C.; Marshall, J.; Martinez Caicedo, D. A.; Meddage, V.; Miceli, T.; Mills, G. B.; Moon, J.; Mooney, M.; Moore, C. D.; Mousseau, J.; Murrells, R.; Naples, D.; Nienaber, P.; Nowak, J.; Palamara, O.; Paolone, V.; Papavassiliou, V.; Pate, S. F.; Pavlovic, Z.; Piasetzky, E.; Porzio, D.; Pulliam, G.; Qian, X.; Raaf, J. L.; Rafique, A.; Rochester, L.; Rudolf von Rohr, C.; Russell, B.; Schmitz, D. W.; Schukraft, A.; Seligman, W.; Shaevitz, M. H.; Sinclair, J.; Smith, A.; Snider, E. L.; Soderberg, M.; Söldner-Rembold, S.; Soleti, S. R.; Spentzouris, P.; Spitz, J.; St. John, J.; Strauss, T.; Szelc, A. M.; Tagg, N.; Terao, K.; Thomson, M.; Toups, M.; Tsai, Y.-T.; Tufanli, S.; Usher, T.; Van De Pontseele, W.; Van de Water, R. G.; Viren, B.; Weber, M.; Wickremasinghe, D. A.; Wolbers, S.; Wongjirad, T.; Woodruff, K.; Yang, T.; Yates, L.; Zeller, G. P.; Zennamo, J.; Zhang, C.
2018-01-01
The development and operation of liquid-argon time-projection chambers for neutrino physics has created a need for new approaches to pattern recognition in order to fully exploit the imaging capabilities offered by this technology. Whereas the human brain can excel at identifying features in the recorded events, it is a significant challenge to develop an automated, algorithmic solution. The Pandora Software Development Kit provides functionality to aid the design and implementation of pattern-recognition algorithms. It promotes the use of a multi-algorithm approach to pattern recognition, in which individual algorithms each address a specific task in a particular topology. Many tens of algorithms then carefully build up a picture of the event and, together, provide a robust automated pattern-recognition solution. This paper describes details of the chain of over one hundred Pandora algorithms and tools used to reconstruct cosmic-ray muon and neutrino events in the MicroBooNE detector. Metrics that assess the current pattern-recognition performance are presented for simulated MicroBooNE events, using a selection of final-state event topologies.
NASA Astrophysics Data System (ADS)
Bhattacharjee, Sudipta; Deb, Debasis
2016-07-01
Digital image correlation (DIC) is a technique developed for monitoring surface deformation/displacement of an object under loading conditions. This method is further refined here to make it capable of handling discontinuities on the surface of the sample. A damage zone refers to a surface area that fractures and opens in the course of loading. In this study, an algorithm is presented to automatically detect multiple damage zones in a deformed image. The algorithm identifies the pixels located inside these zones and eliminates them from the FEM-DIC process. The proposed algorithm was successfully implemented on several damaged samples to estimate the displacement fields of an object under loading conditions. This study shows that the resulting displacement fields represent the damage conditions reasonably well compared with the regular FEM-DIC technique that does not consider the damage zones.
Event-Based Control Strategy for Mobile Robots in Wireless Environments.
Socas, Rafael; Dormido, Sebastián; Dormido, Raquel; Fabregas, Ernesto
2015-12-02
In this paper, a new event-based control strategy for mobile robots is presented. It has been designed to work in wireless environments where a centralized controller has to interchange information with the robots over an RF (radio frequency) interface. The event-based architectures have been developed for differential wheeled robots, although they can be applied to other kinds of robots in a simple way. The solution has been checked with classical navigation algorithms, like wall following and obstacle avoidance, using scenarios with a single robot or multiple robots. A comparison between the proposed architectures and the classical discrete-time strategy is also carried out. The experimental results show that the proposed solution achieves higher efficiency in communication resource usage than the classical discrete-time strategy with the same accuracy.
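A minimal send-on-delta event trigger illustrates the core idea of transmitting over the RF link only when the measurement changes appreciably, rather than at every sampling period; the threshold and signal values are illustrative.

```python
class EventTrigger:
    """Send a new control update only when the measurement has moved more than
    `delta` since the last transmitted value (send-on-delta event condition)."""

    def __init__(self, delta):
        self.delta = delta
        self.last_sent = None

    def should_send(self, measurement):
        if self.last_sent is None or abs(measurement - self.last_sent) > self.delta:
            self.last_sent = measurement
            return True
        return False

# a periodic controller would transmit every sample; the event trigger only
# transmits when the wall-following error actually changes appreciably
trigger = EventTrigger(delta=0.05)
errors = [0.00, 0.01, 0.02, 0.10, 0.11, 0.30]
print([e for e in errors if trigger.should_send(e)])   # -> [0.0, 0.1, 0.3]
```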
NASA Astrophysics Data System (ADS)
Harmon, Stephanie A.; Tuite, Michael J.; Jeraj, Robert
2016-10-01
Site selection for image-guided biopsies in patients with multiple lesions is typically based on clinical feasibility and physician preference. This study outlines the development of a selection algorithm that, in addition to clinical requirements, incorporates quantitative imaging data for automatic identification of candidate lesions for biopsy. The algorithm is designed to rank potential targets by maximizing a lesion-specific score incorporating various criteria separated into two categories: (1) a physician-feasibility category, including physician-preferred lesion location and absolute volume scores, and (2) an imaging-based category, including various modality- and application-specific metrics. This platform was benchmarked in two clinical scenarios, a pre-treatment setting and a response-based setting, using imaging from metastatic prostate cancer patients with high disease burden (multiple lesions) undergoing conventional treatment and receiving whole-body [18F]NaF PET/CT scans pre- and mid-treatment. Targeting of metastatic lesions was robust to different weighting ratios, and candidacy for biopsy was physician-confirmed. Lesions ranked as top biopsy targets remained so for all patients in both pre-treatment and post-treatment selection after sensitivity testing for physician-biased and imaging-biased scenarios. After identifying candidates, biopsy feasibility was evaluated by a physician and confirmed for 90% (32/36) of high-ranking lesions, including all top choices. The remaining cases represented lesions with high anatomical difficulty for targeting, such as proximity to the sciatic nerve. This newly developed selection method was successfully used to quantitatively identify candidate lesions for biopsy in patients with multiple lesions. In a prospective study, we were able to successfully plan, develop, and implement this technique for the selection of a pre-treatment biopsy location.
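A minimal sketch of the weighted lesion-scoring idea: each lesion receives a physician-feasibility score and an imaging score, which are combined with adjustable weights and ranked. The criterion names and weights are hypothetical placeholders, not the study's actual scoring function.

```python
def rank_biopsy_targets(lesions, w_physician=0.5, w_imaging=0.5):
    """lesions: list of dicts with normalised scores in [0, 1] for each criterion.
    Returns lesions sorted by a weighted physician-feasibility / imaging score."""
    def score(lesion):
        physician = 0.5 * lesion['location_pref'] + 0.5 * lesion['volume']
        imaging = lesion['uptake']            # e.g. a normalised NaF PET uptake metric
        return w_physician * physician + w_imaging * imaging

    return sorted(lesions, key=score, reverse=True)

candidates = [
    {'id': 'L1', 'location_pref': 0.9, 'volume': 0.6, 'uptake': 0.7},
    {'id': 'L2', 'location_pref': 0.4, 'volume': 0.9, 'uptake': 0.9},
]
print([l['id'] for l in rank_biopsy_targets(candidates)])
```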
Fault Identification by Unsupervised Learning Algorithm
NASA Astrophysics Data System (ADS)
Nandan, S.; Mannu, U.
2012-12-01
Contemporary fault identification techniques predominantly rely on the surface expression of the fault. This biased observation is inadequate to yield detailed fault structures in areas with surface cover, such as cities, deserts, and vegetation, or to capture changes in fault patterns with depth. Furthermore, it is difficult to estimate the structure of faults that do not generate any surface rupture; many disastrous events have been attributed to such blind faults. Faults and earthquakes are very closely related, as earthquakes occur on faults and faults grow by accumulation of coseismic rupture. For a better seismic risk evaluation it is imperative to recognize and map these faults. We implement a novel approach to identify seismically active fault planes from the three-dimensional hypocenter distribution by making use of unsupervised learning algorithms. We employ the K-means clustering algorithm and the Expectation Maximization (EM) algorithm, modified to identify planar structures in the spatial distribution of hypocenters after filtering out isolated events. We examine the difference between the faults reconstructed by deterministic assignment in K-means and probabilistic assignment in the EM algorithm. The method is conceptually identical to methodologies developed by Ouillon et al. (2008, 2010) and has been extensively tested on synthetic data. We determined the sensitivity of the methodology to uncertainties in hypocenter location, density of clustering, and cross-cutting fault structures. The method has been applied to datasets from two contrasting regions: while the Kumaon Himalaya is a convergent plate boundary, Koyna-Warna lies in the middle of the Indian Plate but has a history of triggered seismicity. The reconstructed faults were validated by examining the orientations of mapped faults and the focal mechanisms of these events determined through waveform inversion. The reconstructed faults could be used to resolve the fault plane ambiguity in focal mechanism determination and to constrain fault orientations for finite source inversions. The faults produced by the method exhibited good correlation with the fault planes obtained from focal mechanism solutions and with previously mapped faults.
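A sketch of the deterministic (K-means) variant of this workflow, assuming scikit-learn is available: hypocentres are clustered and a plane is fitted to each cluster by SVD, from which a normal, strike, and dip can be read off. The geometric conventions are illustrative, and the isolated-event filtering step is omitted.

```python
import numpy as np
from sklearn.cluster import KMeans

def fit_fault_planes(hypocenters, n_faults):
    """Cluster 3-D hypocentres with K-means and fit a plane to each cluster.

    hypocenters : (n_events, 3) array of x (east), y (north), z (up) coordinates
    Returns a list of (centroid, unit normal, strike_deg, dip_deg) per cluster.
    """
    labels = KMeans(n_clusters=n_faults, n_init=10).fit_predict(hypocenters)
    planes = []
    for k in range(n_faults):
        pts = hypocenters[labels == k]
        centroid = pts.mean(axis=0)
        # the plane normal is the singular vector with the smallest singular value
        _, _, vt = np.linalg.svd(pts - centroid)
        normal = vt[-1]
        if normal[2] < 0:                       # keep the normal pointing upward
            normal = -normal
        dip = np.degrees(np.arccos(normal[2]))
        strike = (np.degrees(np.arctan2(normal[0], normal[1])) - 90.0) % 360.0
        planes.append((centroid, normal, strike, dip))
    return planes
```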
NASA Technical Reports Server (NTRS)
Shvartzvald, Y.; Li, Z.; Udalski, A.; Gould, A.; Sumi, T.; Street, R. A.; Calchi Novati, S.; Hundertmark, M.; Bozza, V.; Beichman, C.;
2016-01-01
Simultaneous observations of microlensing events from multiple locations allow for the breaking of degeneracies between the physical properties of the lensing system, specifically by exploring different regions of the lens plane and by directly measuring the "microlens parallax". We report the discovery of a 30-65 M_J brown dwarf orbiting a K dwarf in the microlensing event OGLE-2015-BLG-1319. The system is located at a distance of approximately 5 kpc toward the Galactic Bulge. The event was observed by several ground-based groups as well as by Spitzer and Swift, allowing a measurement of the physical properties. However, the event is still subject to an eight-fold degeneracy, in particular the well-known close-wide degeneracy, and thus the projected separation between the two lens components is either approximately 0.25 au or approximately 45 au. This is the first microlensing event observed by Swift, with the UVOT camera. We study the region of microlensing parameter space to which Swift is sensitive, finding that though Swift could not measure the microlens parallax with respect to ground-based observations for this event, it can be important for other events. Specifically, it is important for detecting nearby brown dwarfs and free-floating planets in high magnification events.
Characteristics of the Central Costa Rican Seismogenic Zone Determined from Microseismicity
NASA Astrophysics Data System (ADS)
DeShon, H. R.; Schwartz, S. Y.; Bilek, S. L.; Dorman, L. M.; Protti, M.; Gonzalez, V.
2001-12-01
Large or great subduction zone thrust earthquakes commonly nucleate within the seismogenic zone, a region of unstable slip on or near the converging plate interface. A better understanding of the mechanical, thermal and hydrothermal processes controlling seismic behavior in these regions requires accurate earthquake locations. Using arrival time data from an onland and offshore local seismic array and advanced 3D absolute and relative earthquake location techniques, we locate interplate seismic activity northwest of the Osa Peninsula, Costa Rica. We present high resolution locations of ~600 aftershocks of the 8/20/1999 Mw=6.9 underthrusting earthquake recorded by our local network between September and December 1999. We have developed a 3D velocity model based on published refraction lines and located events within a subducting slab geometry using QUAKE3D, a finite-differences based grid-searching algorithm (Nelson & Vidale, 1990). These absolute locations are input into HYPODD, a location program that uses P and S wave arrival time differences from nearby events and solves for the best relative locations (Waldhauser & Ellsworth, 2000). The pattern of relative earthquake locations is tied to an absolute reference using the absolute positions of the best-located earthquakes in the entire population. By using these programs in parallel, we minimize location errors, retain the aftershock pattern and provide the best absolute locations within a complex subduction geometry. We use the resulting seismicity pattern to determine characteristics of the seismogenic zone including geometry and up- and down-dip limits. These are compared with thermal models of the Middle America subduction zone, structures of the upper and lower plates, and characteristics of the Nankai seismogenic zone.
Classification of Multiple Seizure-Like States in Three Different Rodent Models of Epileptogenesis.
Guirgis, Mirna; Serletis, Demitre; Zhang, Jane; Florez, Carlos; Dian, Joshua A; Carlen, Peter L; Bardakjian, Berj L
2014-01-01
Epilepsy is a dynamical disease and its effects are evident in over fifty million people worldwide. This study focused on objective classification of the multiple states involved in the brain's epileptiform activity. Four datasets from three different rodent hippocampal preparations were explored, wherein seizure-like events (SLE) were induced by the perfusion of a low-Mg(2+)/high-K(+) solution or 4-Aminopyridine. Local field potentials were recorded from CA3 pyramidal neurons and interneurons and modeled as Markov processes. Specifically, hidden Markov models (HMM) were used to determine the nature of the states present. Properties of the Hilbert transform were used to construct the feature spaces for HMM training. By sequentially applying the HMM training algorithm, multiple states were identified both in episodes of SLE and non-SLE activity. Specifically, preSLE and postSLE states were differentiated and multiple inner SLE states were identified. This was accomplished using features extracted from the lower frequencies (1-4 Hz, 4-8 Hz) alongside those of both the low-gamma (40-100 Hz) and high-gamma (100-200 Hz) bands of the recorded electrical activity. The learning paradigm of this HMM-based system eliminates the inherent bias associated with other learning algorithms that depend on predetermined state segmentation, and renders it an appropriate candidate for SLE classification.
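A sketch of the feature-extraction and state-decoding steps, assuming the hmmlearn and SciPy packages and a sampling rate high enough to cover the high-gamma band; the band edges, state count, and log-envelope features are illustrative choices rather than the study's exact configuration.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert
from hmmlearn import hmm

def hilbert_band_features(lfp, fs, bands=((1, 4), (4, 8), (40, 100), (100, 200))):
    """Instantaneous amplitude (Hilbert envelope) of the LFP in each band."""
    feats = []
    for lo, hi in bands:
        b, a = butter(4, [lo / (fs / 2), hi / (fs / 2)], btype="band")
        env = np.abs(hilbert(filtfilt(b, a, lfp)))
        feats.append(np.log(env + 1e-12))       # log-amplitude is better behaved
    return np.column_stack(feats)               # shape (n_samples, n_bands)

def segment_states(lfp, fs, n_states=4):
    """Fit a Gaussian HMM to the band features and return the decoded state sequence,
    e.g. to separate preSLE, SLE and postSLE epochs without manual segmentation."""
    features = hilbert_band_features(lfp, fs)
    model = hmm.GaussianHMM(n_components=n_states, covariance_type="diag", n_iter=100)
    model.fit(features)
    return model.predict(features)
```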
Data Discovery and Access via the Heliophysics Events Knowledgebase (HEK)
NASA Astrophysics Data System (ADS)
Somani, A.; Hurlburt, N. E.; Schrijver, C. J.; Cheung, M.; Freeland, S.; Slater, G. L.; Seguin, R.; Timmons, R.; Green, S.; Chang, L.; Kobashi, A.; Jaffey, A.
2011-12-01
The HEK is an integrated system which helps direct scientists to solar events and data from a variety of providers. The system is fully operational and adoption of HEK has been growing since the launch of NASA's SDO mission. In this presentation we describe the different components that comprise HEK. The Heliophysics Events Registry (HER) and the Heliophysics Coverage Registry (HCR) form the two major databases behind the system. The HCR allows the user to search on coverage event metadata for a variety of instruments, while the HER allows the user to search on annotated event metadata for a variety of instruments. Both the HCR and HER are accessible via a web API which can return search results in machine-readable formats (e.g. XML and JSON). A variety of SolarSoft services are also provided to allow users to search the HEK as well as obtain and manipulate data. Other components include: the Event Detection System (EDS), which continually runs feature-finding algorithms on SDO data to populate the HER with relevant events; a web form for users to request SDO data cutouts for multiple AIA channels as well as HMI line-of-sight magnetograms; iSolSearch, which allows a user to browse events in the HER and search for specific events over a specific time interval, all within a graphical web page; Panorama, the software tool used for rapid visualization of large volumes of solar image data in multiple channels/wavelengths, from which the user can easily create WYSIWYG movies and launch the Annotator tool to describe events and features; and EVACS, a JOGL-powered client for the HER and HCR that displays the searched-for events on a full-disk magnetogram of the Sun while showing more detailed information for each event.
Desert Dust Satellite Retrieval Intercomparison
NASA Technical Reports Server (NTRS)
Carboni, E.; Thomas, G. E.; Sayer, A. M.; Siddans, R.; Poulsen, C. A.; Grainger, R. G.; Ahn, C.; Antoine, D.; Bevan, S.; Braak, R.;
2012-01-01
This work provides a comparison of satellite retrievals of Saharan desert dust aerosol optical depth (AOD) during a strong dust event through March 2006. In this event, a large dust plume was transported over desert, vegetated, and ocean surfaces. The aim is to identify and understand the differences between current algorithms, and hence improve future retrieval algorithms. The satellite instruments considered are AATSR, AIRS, MERIS, MISR, MODIS, OMI, POLDER, and SEVIRI. An interesting aspect is that the different algorithms make use of different instrument characteristics to obtain retrievals over bright surfaces. These include multi-angle approaches (MISR, AATSR), polarisation measurements (POLDER), single-view approaches using solar wavelengths (OMI, MODIS), and the thermal infrared spectral region (SEVIRI, AIRS). Differences between instruments, together with the comparison of different retrieval algorithms applied to measurements from the same instrument, provide a unique insight into the performance and characteristics of the various techniques employed. As well as the intercomparison between different satellite products, the AODs have also been compared to co-located AERONET data. Despite the fact that the agreement between satellite and AERONET AODs is reasonably good for all of the datasets, there are significant differences between them when compared to each other, especially over land. These differences are partially due to differences in the algorithms, such as assumptions about aerosol model and surface properties. However, in this comparison of spatially and temporally averaged data, at least as significant as these differences are sampling issues related to the actual footprint of each instrument on the heterogeneous aerosol field, cloud identification and the quality control flags of each dataset.
GOES-R Geostationary Lightning Mapper Performance Specifications and Algorithms
NASA Technical Reports Server (NTRS)
Mach, Douglas M.; Goodman, Steven J.; Blakeslee, Richard J.; Koshak, William J.; Petersen, William A.; Boldi, Robert A.; Carey, Lawrence D.; Bateman, Monte G.; Buchler, Dennis E.; McCaul, E. William, Jr.
2008-01-01
The Geostationary Lightning Mapper (GLM) is a single channel, near-IR imager/optical transient event detector, used to detect, locate and measure total lightning activity over the full-disk. The next generation NOAA Geostationary Operational Environmental Satellite (GOES-R) series will carry a GLM that will provide continuous day and night observations of lightning. The mission objectives for the GLM are to: (1) provide continuous, full-disk lightning measurements for storm warning and nowcasting, (2) provide early warning of tornadic activity, and (3) accumulate a long-term database to track decadal changes in lightning. The GLM owes its heritage to the NASA Lightning Imaging Sensor (1997-present) and the Optical Transient Detector (1995-2000), which were developed for the Earth Observing System and have produced a combined 13-year data record of global lightning activity. The GOES-R Risk Reduction Team and the Algorithm Working Group Lightning Applications Team have begun to develop the Level 2 algorithms and applications. The science data will consist of lightning "events", "groups", and "flashes". The algorithm is being designed to be an efficient user of the computational resources. This may include parallelization of the code and the concept of sub-dividing the GLM FOV into regions to be processed in parallel. Proxy total lightning data from the NASA Lightning Imaging Sensor on the Tropical Rainfall Measuring Mission (TRMM) satellite and regional test beds (e.g., Lightning Mapping Arrays in North Alabama, Oklahoma, Central Florida, and the Washington DC Metropolitan area) are being used to develop the prelaunch algorithms and applications, and also improve our knowledge of thunderstorm initiation and evolution.
76 FR 21422 - Reports, Forms, and Record Keeping Requirements
Federal Register 2010, 2011, 2012, 2013, 2014
2011-04-15
...-occurring event), the interviews will follow a test- control design where they are administered during the... multiple locations designed to reduce impaired motorcycle riding. NHTSA anticipates that the programs will... in up to 4 program sites, and in up to 2 control sites not carrying out an intervention. Motorcycle...
Sensor Data Quality and Angular Rate Down-Selection Algorithms on SLS EM-1
NASA Technical Reports Server (NTRS)
Park, Thomas; Smith, Austin; Oliver, T. Emerson
2018-01-01
The NASA Space Launch System Block 1 launch vehicle is equipped with an Inertial Navigation System (INS) and multiple Rate Gyro Assemblies (RGA) that are used in the Guidance, Navigation, and Control (GN&C) algorithms. The INS provides the inertial position, velocity, and attitude of the vehicle along with both angular rate and specific force measurements. Additionally, multiple sets of co-located rate gyros supply angular rate data. The collection of angular rate data, taken along the launch vehicle, is used to separate vehicle motion from flexible-body dynamics. Since the system architecture uses redundant sensors, the capability was developed to evaluate the health (or validity) of the independent measurements. A suite of Sensor Data Quality (SDQ) algorithms is responsible for assessing the angular rate data from the redundant sensors. When failures are detected, SDQ takes the appropriate action and disqualifies or removes faulted sensors from forward processing. Additionally, the SDQ algorithms contain logic for down-selecting the angular rate data used by the GN&C software from the set of healthy measurements. This paper explores the trades and analyses that were performed in selecting a set of robust fault-detection algorithms included in the GN&C flight software. These trades included an assessment of hardware-provided health and status data as well as an evaluation of different algorithms based on time to detection, types of failures detected, and probability of false positives. We then provide an overview of the algorithms used for both fault detection and measurement down-selection. We next discuss the role of trajectory design, flexible-body models, and vehicle response to off-nominal conditions in setting the detection thresholds. Lastly, we present lessons learned from software integration and hardware-in-the-loop testing.
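A toy sketch of a median-based fault check with mid-value down-selection across redundant rate-gyro channels; the threshold, latching behaviour, and channel count are illustrative and bear no relation to the actual flight software values.

```python
import numpy as np

def select_angular_rate(measurements, healthy, threshold):
    """Flag channels that disagree with the median of the healthy set and return
    the mid-value of the remaining channels as the rate passed to GN&C.

    measurements : array of angular-rate samples, one per redundant sensor
    healthy      : boolean array tracking previously disqualified sensors
    threshold    : maximum allowed deviation from the healthy-set median
    """
    measurements = np.asarray(measurements, dtype=float)
    ref = np.median(measurements[healthy])
    # disqualify any channel whose deviation exceeds the threshold (latched)
    healthy = healthy & (np.abs(measurements - ref) <= threshold)
    selected = np.median(measurements[healthy])       # mid-value selection
    return selected, healthy

healthy = np.array([True, True, True])
rate, healthy = select_angular_rate([0.51, 0.50, 2.00], healthy, threshold=0.2)
print(rate, healthy)     # the faulted third channel is removed from forward processing
```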